Kind Environments for technology organisations? [part two]
You can't manage what you can't measure, and what really matters can't be measured...
This piece is part two of The Slow Violence of Emerging Technologies — reading that first is a good idea, but not a must.
In part one, I compared Rob Nixon's concept of 'slow violence' with the feedback loop of harm we perpetuate with new technologies: creators ship products that are of value to some but of harm to others. The products are only revisited if the chorus of feedback about the harm is loud enough. This cycle moves at glacial speed and is hard to identify. It's tough to break the cycle, and kind environments are part of the reason why.
A kind environment is one with consistent dynamics and parameters, and clear indications of success and failure. It is, essentially, a good space for 'learning'. Machines are trained in kind environments like video games or board games: they learn what winning looks like, and so take more 'good' actions that lead to winning, and fewer that lead to losing.
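To make the idea concrete, here is a minimal sketch of that learning loop, using tabular Q-learning in a toy 'corridor' game. Everything here (the game, the parameter values, the function names) is illustrative, not from the article; the point is only that when the rules never change and success is unambiguous, simple trial and error reliably converges on 'winning' behaviour.

```python
import random

GOAL = 5           # winning state at the end of a short corridor
ACTIONS = [-1, 1]  # step left or step right
random.seed(0)

# Q-table: estimated value of each (state, action) pair
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

def step(state, action):
    """Consistent dynamics: the same move always has the same result."""
    new_state = max(0, min(GOAL, state + action))
    reward = 1.0 if new_state == GOAL else 0.0  # unambiguous success signal
    return new_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            # Explore occasionally; otherwise exploit what looks 'good' so far
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            new_state, reward = step(state, action)
            best_next = max(Q[(new_state, a)] for a in ACTIONS)
            # Reinforce actions that lead toward winning
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = new_state

train()
# After training, the greedy policy should step right (+1) in every state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The same machinery falls apart in unkind environments: if the reward signal omits a harm, the learner doesn't merely undervalue that harm, it never sees it at all, which is exactly the dynamic the rest of this piece is about.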
But the world is complicated. When we seek the 'good', or want to mitigate complex things like 'slow violence', how do we know if we are making the right decisions? And how do we orient our systems (technical and human) to 'learn' to work towards good, rather than optimise for things that we know will lead to externalised harm?
Under capitalism, we are ruled by markets, and are encouraged to think of markets as kind environments. Like this:
Start up a business/build a product
Test it in a marketplace, where the metrics for success are users, monthly recurring revenue, market share, valuation, further investment by those with power, and ultimately acquisition (and maybe, eventually, profit).
If you can grow, sell to consumers, and get acquired, your product is good — transformative even. If you haven't done these things, your product is probably bad or too early or too late and you should 'pivot'.
These metrics are a very clean-cut measure of success, which makes it easier to build systems oriented towards them: systems that learn from experiments and optimise choices and structures to hit those numbers. But this kind environment precludes examination of complex, compounded harms. Not only does it fail to count them as signals of failure or incorporate them into formulas for success, it encourages systems to ignore them entirely. With growth and success being so visible, and harm being so invisible (to those incentivised to ignore it), you can see why breaking out of cycles of slow violence is such a challenge.
I ran a non-profit for close to a decade, and I was always struck by the proxies we used to know if we were having an impact.
Easy metrics: are you able to fundraise, grow, be known?
Hard metrics: do people like working with you, are you helping shift power, are you actually making a difference at all? Is your work 'worth' it?
I thought about it all the time, and in the end I don't have answers, just more questions. And a longing for a kind environment for learning, experimentation, and change that is oriented to optimise for something I know will make the world better.
This has me noodling on: how can we ask organisations to pursue different success criteria if we don't know what they are yet? How can we establish strategies for long-term social benefit that create a kind environment we can orient technology creators towards? How can we end slow violence and reduce our dependence on inequitable, long-term harm to fuel change?
I don't think ESG goals cut it, but I'm curious if any ESG thinkers out there are considering how to build pro-technosocial metrics that we can use to condition moral thinking for technology makers and stop letting them off the hook for 'unintended consequences' on the path to profit.
If you want to get an issue of The Relay in your inbox every other Wednesday, subscribe. And, if you liked this one, check out the last issue here.
Oh, Basecamp. I have thoughts. Listen to me rant for 3 minutes here. And if you want me to do this more often let me know. It’s new and weird to me.
Something for people at foundations: if you've been enjoying The Relay, you may also find my handbook on How To Fund Tech useful. It brings together a decade of my experiences advising foundations and non-profits to use technology strategically and responsibly. You can get it here.
Perhaps we have already talked about this, but the idea of reorienting the fundamental goals and metrics of success for technology makers was core to my dissertation. I proposed that this be based around citizen empowerment when it comes to civic technology. I even proposed a measure for it but it's tricky. New metrics are hard to implement, especially if you want to acknowledge the nuance of subjective experiences like empowerment—you end up with metrics that are hard to measure and interpret compared to commonly used KPIs. I still think this will be important infrastructure for making tech truly responsible through accountability architectures reliant on serving the public interest and redistributing power to a broad set of stakeholders when it comes to design and decision-making. https://scholars.org/contribution/how-silicon-valley-can-support-citizen-empowerment