Tech for good isn't a thing

Because 'tech for bad' isn't a thing either

One of the frustrating things about technology built and deployed in capitalist systems is that any technology built in the public interest gets called #TechforGood. It's shorthand for technology focused on good stuff, not on bad stuff (you're not off the hook, #DataforGood). And that shorthand is a real problem.

There are 'Tech for Good' debates and conversations happening all the time, but these are often wholly separate from policy conversations: things like governance and questions of rights, as well as debates over who should have power to build for whom, and how we should manage all of our digital systems in pro-social, anti-racist, and net beneficial ways.

But ‘tech for good’ implies that positive intent drives good technology. That’s not true. There is technology built in accountable organisations, and technology built in unaccountable organisations. There is technology built and governed with legitimacy, technology not governed at all, technology governed poorly, etc.

Just because you are trying to do something good doesn't mean you are forgiven for being sloppy and causing harm. And just because companies make money doesn't mean we should let them off the hook when they play fast and loose.

By focusing too much on the 'what', we miss the challenging and hugely consequential details of the 'how'. An app for youth civic engagement, for example, is a perfectly good idea, but how is it actually going to work? How do you manage the power dynamics and politics of building a technology system? And why should you be the one building it?

We should raise the bar

We should expect, and demand, that all tech (is governed to) be good. Consider this sample of different organisational orientations and structures:

  • Non-profit = will never make more than it spends. Its work will never be market-based.

  • Not-for-profit = can make more than it spends but will reinvest excess in the underlying mission.

  • For-profit = aims to make more than it spends so it can pay excess to shareholders.

  • State-run = designed and deployed by civil servants, policymakers, and procurement processes. Funded by the state.

None of these organisations should be permitted (legally, politically, or socially) to deploy poorly governed technology. But all of them can. And do!

By separating out 1) the mission, 2) the revenue structure, and 3) the legitimacy of governance, we can have way more nuanced conversations about different implementations of technology.

Mission

It is very possible to set up a company with a mission designed to benefit society. There are plenty of governance structures out there that don't optimise purely for shareholder profit, as per capitalist tradition. Interested? Check out the Zebra community, which is made up of businesses and investors growing companies of all stripes to benefit workers and society. And if you find yourself inventing solutions that fill gaps in a failed system, you should probably carry out advocacy to fix that broken system while you manufacture band-aids.

Revenue Structure

Prepare to be shocked: some companies take grants (!), some charitable organisations make a profit through service contracts or selling products (!), and some non-profits make investments (!). Thinking more dynamically about income is liberating, and revenue is often independent of both mission and technology governance. In my view, the revenue structure itself is value-neutral.

Technology Governance

Technology can be built, managed, and governed in many ways. It can:

  • be developed in close (ideally remunerated) consultation with future — particularly marginalised — users

  • intentionally reject dark design patterns and extractive data practices

  • be explicit and straightforward about motivations and practices

  • develop and enforce acceptable use policies based on transparent values and consistent process

  • design and effectively manage incident response mechanisms

  • set responsible growth targets and methods

  • adapt its governance over time to be even stronger as the organisation learns and grows

Or it can be a hot mess of exploitation and extraction stumbling in the vague direction of an unlikely unicorn status.

So, if you want to make tech that does good in the world, there is a pretty clear recipe:

  1. Have a clear mission, with operational structures that serve it

  2. Hard code your values into your governance so it doesn't matter if you get hit by a bus or get acquired

  3. Be intentional and incorporate what you learn into the very fabric of the rules of your organisation

  4. Don’t cut yourself slack on the ethics of the ‘how’ because you are trying to do something good

Instead of sitting at a 'tech for good' kids' table, let's join the grown-up table and start a food fight. We want power and resources invested in systems that will make society stronger. That's not possible if we sequester ideas for social benefit semantically, financially, and politically.

I recently came across PIT Lab, an organisation that takes a more systems-led approach to developing technology; it's a really good example of how we should advance this work. Anything you're seeing that matches the mission with the method? What challenges do you see in building meaningful, pro-social technology?


If you want to get an issue of The Relay in your inbox every other Wednesday, subscribe. And, if you liked this one, check out the last issue here.


Follow-up from the last issue

Several people asked for more about how consequentialism and deontology can be meaningfully combined. My thought: deontology first, then consequentialism. I find the human rights framework useful for a first deontological pass. If an idea or project passes the basic bar of being rights-compatible (e.g. it isn't bespoke tech to manage data about kids in cages, or education technology built on the promise of monetising the surveillance of children), then we can begin to use consequentialist methods to weigh the effects of various choices against one another. For large systems of choices, deontological analysis should be carried out frequently, and especially at moments when you are considering new directions, products, or processes.