A tweet by Austen Allred recently caught my eye. It's about the robust and well-defined methodologies used for building software...
"One of the reasons software is so powerful is it forces operationally complex business to agree definitively on processes, and there's an established and agreed upon protocol for changing them."
He went on to talk about the Git process (branch, commit, test, review, merge), and how nearly all software companies use some approximation of this process. This is true, and it makes something that would otherwise be extremely cumbersome, rather elegant.
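The cycle Allred describes can be sketched in a handful of Git commands. This is a minimal illustration only; the branch name, file name, and commit message are made up for the example:

```shell
# Sketch of the branch -> commit -> test -> review -> merge loop.
# Branch and file names here are illustrative.
git checkout -b fix-typo              # branch: isolate the change
echo "fix" >> README.md               # make the change
git add README.md
git commit -m "Fix typo in README"    # commit: record it, with a rationale
# ...automated tests run, a colleague reviews the diff...
git checkout main                     # review passed, so:
git merge fix-typo                    # merge: the agreed-upon change lands
```

In practice the "test" and "review" steps happen on a shared platform (continuous integration plus pull requests), but the protocol is the same: no change lands without passing through this agreed sequence.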
But the power of Git, or of any other great tool we use to streamline organisation-wide workflows, actually has very little to do with the tool's underlying technology. While the entry point for (and enthusiasm around) most digital transformation in organisations is innovation, the actual value, and the reason it is so challenging to get right, is that it forces organisations to codify their practices, roles, decision-making frameworks, and governance.
That is, if the digital transformation is done well. When it's successful, it is truly transformative: it creates surface area for renegotiating power relationships, for more transparency, and for more intentionality, because what was once implicit must now be hard-coded. This creates new, exciting opportunities for challenging hidden power and aligning values with actual practice. Exciting! Daunting! And mostly: hardly about technology at all...
However, these same demands are the reason why, most of the time, digital transformation is not truly transformative: most organisations are not prepared to do the work. That work requires understanding and addressing the microclimates of power you find when engaging in these transformations. If an organisation digitises its financial systems without even considering the political complexity of salary transparency, people will just start freaking out. But what an amazing opportunity to transform the way your organisation talks about money, power, and values!
This is also why automation and inference are so scary. The people, organisations, and states who want to leapfrog to automation and inference (rather than slowly and steadily overhauling old ways of thinking and doing) are dangerous because a shortcut to magical efficiency usually signals no interest in the governance and power challenges that lurk therein. In fact, if your primary focus is efficiency, rather than effectiveness and equity, you are probably focused on the wrong problem entirely.
Let's take an example. I recently learned that the UK is 23 times more likely to go after someone for benefits fraud than for tax fraud, even though the latter costs the state 9 times more than the former. When a country like the UK attempts to use automation to identify 'fraud' in its benefits systems, it is compounding all of these problems. Rather than using digital possibility to positively reshape systems, it is:
using resources to police the poor rather than helping those in poverty (akin to Rachel Thomas' description of 'artificial scarcity');
automating a high-stakes process for the vulnerable people subjected to it, knowing that it will get things wrong at least some of the time;
putting the onus on individuals already suffering to correct a marauding system;
and reinforcing the political narrative that seeking state support is an illegitimate act committed by nefarious actors who are just one algorithmic hint away from being caught.
This isn't digital transformation. It is just more of the same (but harsher, faster, and more invasive).
I get excited about the opportunity of digital transformation: it can be experimental and radical. It opens up systems that have long been closed to change, and it aligns people with the resources, energy, and patience necessary to do something hard. But when leaders use the power of digital transformation as a cheap endorphin hit (or yet another opportunity to punch down) instead of summoning the political will to overhaul the status quo, that's just Shangy.
If you want to get an issue of The Relay in your inbox every other Wednesday, subscribe. And, if you liked this one, check out the last issue here.
Follow-up from the last issue
Since the last issue, the Google AI Ethics debacle has continued to unravel. This has made it clear(er) that labour protections in technology are absolutely necessary if we want hard-hitting sociotechnical research to take place within companies.
Katherine Schwabe wrote a great piece last week about the state of the so-called ‘AI Ethics’ space, and the amazing leaders working within it. I also found this piece from Jake Metcalf and Emanuel Moss on unpacking the (many meanings of the) word ‘ethics’ really helpful when thinking about technology production.
And two other great examples of workers organising to do something good for the world, even when it worked against their own direct interests:
And another from Adrian McEwan on Lucas Aerospace, whose workers’ union developed “the Lucas Plan”, which aimed to “avoid plant closures and job losses while changing what they were producing away from defence work and into what they termed ‘socially productive goods’, such as wind turbines, hybrid bus/train vehicles, etc”. More from Adrian’s blog here & a feature-length documentary on the Lucas Plan here.
Thank you for reading. As always I’m curious about what you’re noodling on, and would love to hear your thoughts!