One of the frustrating things about technology built and deployed in capitalist systems is that any technology built in the public interest gets called #TechforGood. It's shorthand for technology focused on good stuff rather than bad stuff (you're not off the hook, #DataforGood). Using that shorthand is deeply problematic.
There are 'Tech for Good' debates and conversations happening all the time, but they are often wholly separate from policy conversations: governance and questions of rights, debates over who should have the power to build for whom, and how we should manage all of our digital systems in pro-social, anti-racist, and net-beneficial ways.
But ‘tech for good’ implies that positive intent drives good technology. That’s not true. There is technology built in accountable organisations, and technology built in unaccountable organisations. There is technology built and governed with legitimacy, technology not governed at all, technology governed poorly, etc.
Just because you are trying to do something good doesn't mean you are forgiven for being sloppy and causing harm. And just because companies make money doesn't mean we should let them off the hook when they play fast and loose.
By focusing too much on the 'what', we miss the challenging and hugely consequential details of the 'how'. An app for youth civic engagement, for example, is a good enough idea, but how is it going to work? How do you manage the power dynamics and politics of building a technology system? Why should you be the one making it?
We should raise the bar
We should expect, and demand, that all tech (is governed to) be good. Consider this sample of different organisational orientations and structures:
Non-profit = will never make more than it spends. Its work will never be market-based.
Not-for-profit = can make more than it spends but will reinvest excess in the underlying mission.
For-profit = aims to make more than it spends so it can pay excess to shareholders.
State-run = designed and deployed by civil servants, policymakers, and procurement processes. Funded by the state.
None of these organisations should be permitted (legally, politically, or socially) to deploy poorly governed technology. But all of them can. And do!
By separating out 1) the mission, 2) the revenue structure, and 3) the legitimacy of governance, we can have way more nuanced conversations about various implementations of technology.
Mission
It is very possible to set up a company with a mission designed to benefit society. There are plenty of governance structures out there that don't optimise purely for shareholder profit, as per capitalist tradition. Interested? Check out the Zebra community, which is made up of businesses and investors growing companies of all stripes to benefit workers and society. And if you find yourself inventing solutions that fill gaps in a failed system, you should probably carry out advocacy to fix that broken system while you manufacture band-aids.
Revenue Structure
Prepare to be shocked: some companies take grants (!), some charitable organisations make a profit through service contracts or selling products (!), and some non-profits make investments (!). Thinking more dynamically about income is liberating, and income is often independent of both mission and technology governance. In my view, the revenue structure itself is value-neutral.
Technology Governance
Technology can be built, managed, and governed in many ways. It can:
be developed in close (ideally remunerated) consultation with future — particularly marginalised — users
intentionally reject dark design patterns and extractive data practices
be explicit and straightforward about motivations and practices
develop and enforce acceptable use policies based on transparent values and consistent process
design and effectively manage incident response mechanisms
set responsible growth targets and methods
adapt its governance over time to become even stronger as the organisation learns and grows
Or it can be a hot mess of exploitation and extraction, stumbling in the vague direction of unlikely unicorn status.
So, if you want to make tech that does good in the world there is a pretty clear recipe:
Have a clear mission, with operational structures that serve it
Hard code your values into your governance so it doesn't matter if you get hit by a bus or get acquired
Be intentional and incorporate what you learn into the very fabric of the rules of your organisation
Don’t cut yourself slack on the ethics of the ‘how’ because you are trying to do something good
Instead of sitting at a 'tech for good' kids' table, let's join the grown-up table and start a food fight. We want power and resources invested in systems that will make society stronger. That's not possible if we sequester ideas for social benefit semantically, financially, and politically.
I recently came across PIT Lab, an organisation that takes a more systems-led approach to developing technology; it's a really good example of the way we should advance this work. Anything you're seeing that matches the mission with the method? What challenges do you see in building meaningful, good technology that is pro-social?
If you want to get an issue of The Relay in your inbox every other Wednesday, subscribe. And, if you liked this one, check out the last issue here.
Follow-up from the last issue
Several people asked for more about how consequentialism and deontology can be meaningfully combined. My view is deontology first, then consequentialism. I find the human rights framework useful for a first deontological pass. If an idea or project passes the basic bar of being rights-compatible (e.g. it isn't bespoke tech to manage data about kids in cages, or education technology built on the promise of monetising the surveillance of children), then we can use consequentialist methods to weigh the effects of various choices against one another. For large systems of choices, deontological analysis should be carried out frequently, especially at moments when you are considering new directions, products, or processes.
Great piece. Another point in support of this case is how many of the very worst tech companies could comfortably have defined themselves publicly as 'for good' up until very recently, and indeed actively did so. The widespread public acceptance of this narrative in which motivation is sufficient justification of virtue certainly has some blame to bear for the mess they played a big part in creating.
One thing I feel is slightly missing, though, is in the call at the end for 'growing up'. Ultimately this isn't only a matter of choice, especially on the part of the organisations involved; so much of the funding and support in the 'tech for good' world seems to go towards initial ideas and startups, rather than taking risks on growing more established businesses. That's partly a lack of overall capital available for these ends, which is a hard problem to solve, but it's also because, in the absence of the VC drive to back 100 startups for a decent period in the hope that one becomes a unicorn, the sector defaults to a culture that prefers novelty and dislikes risk. We need some way of balancing against that if there's going to be major growth of the sort of tech orgs we need.
I really love this! I think (3) "Be intentional and incorporate what you learn into the very fabric of the rules of your organisation" often gets lost, and in particular gets erased by marketing that makes the cycles of learning longer. When companies/orgs create something, there is a lot to learn from it, but the how-amazing-are-we marketing machine starts turning, orgs start really believing their own marketing uplift, and it cycles back to their detriment: they never really learn from what went wrong. I think there has to be a shift in our culture to demand that case studies, reports, etc. contain a clear section on things that went wrong and things that could have been done better, not just "everything was rainbows and butterflies" (I am looking at you, tech conferences). Transparency helps you continue to be intentional and to incorporate what you have learnt back into your practice.