Don't be evil or... be the lesser of two evils?
Consequentialist and deontological ethics: why they matter for shaping views on appropriate technology
What's old is new! In the vortex of conversation concerning what is right and what is wrong about the use of technology and data, I have found practically ancient concepts — deontology and consequentialism — enormously helpful. Helpful in navigating my own thoughts, and helpful in understanding why debates about these issues are so lacklustre.
Think Don't Be Evil versus Be the Lesser of Two Evils...
Someone using deontology to determine whether something is right or wrong refers to a guiding set of values or principles and holds fast to them. Consequences do not come into it: if something is inconsistent with their guiding values, they just say no. Terms like abolition, rights, liberation, and red lines are the language of deontological thinkers.
When you hear terms like ‘unintended consequences’, you can be pretty sure the person speaking is taking a consequentialist view. In consequentialism, the ethical aim is to minimise negative consequences and maximise positive ones. So, as long as you are trying your best to maximise benefit and mitigate harm, you are behaving ‘ethically’.
And a quick note on the term 'unintended consequences': I find it insidious. 1) It centres the intent of the maker rather than the harm caused by a technology; 2) it preemptively lets incredibly powerful actors off the hook for negligence; and because of that, 3) it is politically naive. As Deb Chachra says: "Any sufficiently advanced negligence is indistinguishable from malice."
How do these two modes of thinking apply to debates about technology? Let's say we are debating the deployment of facial recognition.
The pure consequentialist would entertain ideas about how facial recognition could be regulated and controlled in ways that lead to positive outcomes and fewer negative ones. E.g. "it could help us spot missing children and make identity authentication cheaper and more efficient!" But are the downsides worth it? And how do we manage those downsides to lower the risk for various groups?
The pure deontological thinker would reject facial recognition entirely (even to identify people in the Capitol mob). Why? Because in this frame, we don't think about a balance sheet of consequences, but about rights: it doesn't matter how facial recognition systems are governed, because surveillance that is automated, invisible, discriminatory, and pervasive is fundamentally incompatible with rights.
But wait. Are these modes of thinking in conflict? Not necessarily. In fact, most of us are comfortable with both when considering right and wrong. The challenge comes when two people talking about the same issue, like facial recognition, bring different moral mental models and don't notice. I see this a lot in the 'AI Ethics' debate. Human rights groups and others think deontologically, while a growing group of actors (within industry and without) starts from a premise of consequentialism, even if they engage with bigger questions about principles.
It is no surprise that this latter group finds wider support. After all, industry and other powerful actors will fight like hell to have the freedom to assert they are acting 'ethically' while retaining the power to calculate a moral balance sheet of upsides and downsides. Shareholders and leaders can distract from the harms of their innovations by pointing to positive use cases for their technology, e.g.: oh, you think facial recognition is ALL bad? Well, I guess you don't care about the missing children and the murderers on the loose. (Yes, I have heard someone make that case.) Any upside of a technology can be framed as an act of kindness by technology companies, one that offsets the harm.
The human rights activist will be tempted to use purely deontological thinking: consolidated power is wrong; no matter what value a behemoth technology company contributes, it cannot be good. This kind of thinking can end up undermining the very causes it serves. I've seen activist communities waste precious time trying to find platforms other than Google to organise on. Avoiding Big Tech may not always be the answer.
Both modes of thinking have their place, and both have their failure modes. Where a deontologist may reject powerful technologies to their own detriment, a consequentialist may turn harm into 'unintended consequences'.
Really, it's the power of combining these two modes of thought that leads to better outcomes. But maybe that's just me being too consequentialist in my thinking. In honour of my deontological side: not all technology is 'progress' that requires us to find the least harmful way to deploy it. Sometimes we really should just say no.
Great thoughts, Alix!
I had never had a word for deontology, but I have thought about the concept before. I have internally debated whether it is itself an act of survival when I consider whether to take the deontological approach or the consequentialist approach. It almost feels like, sometimes, I default to consequentialism when I can't navigate the possibility of not having something at all; it's slightly like a cop-out, but I convince myself it's "for the best". I have also thought about whether deontology makes me uncomfortable when my creativity does not allow me to imagine a world where we do not use a particular piece of technology at all.
I have been reading Surveillance Capitalism by Zuboff and feel constantly aware of Big Tech. I have to say, I have debated many times whether I should avoid it, and I come back exactly to your point: avoiding it would cost me _precious_ time, so I have opted out only selectively, almost like a cost-benefit analysis (how capitalist of me!).
A really illuminating read. Thank you.