And a fantastic talk about tech leadership, principles and ethics from Bryan Cantrill at Monktoberfest...
I've been reading Eric Rodenbeck's "Fake-but-good-enough-for-robots satellite imagery, drawn by artificial intelligences". It's a good and interesting read, but there's something in its language that needles me.
We shouldn't surrender agency to algorithms. As software engineers, system designers and technologists we should be wary of explanations that imply that "the AI did it". It's a convenient, and understandable, defence, because the alternative is to admit that we built a system that doesn't work as we intended, that has bugs - even if those bugs are subtle and depend on the datasets used for training, or on combinations of sensors that are hard to predict.
However, it's all a bit "a big boy did it and ran away".
To me it feels similar to the "code isn't political" myth that I hope we can all agree was a lie.
Eric's examples (and I'm as guilty as he is of reaching for the same metaphors when trying to explain what I do) aren't really "how robots see us" or "how robots talk with us".
The green circles overlaid on the video imagery aren't something a robot has created; they're drawn by code that people wrote to help those same people get a better understanding of how it works. When I build things like that, they're little meta-tools to help me work out why my code isn't doing what I thought it would.
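To make that concrete, here's a rough sketch of the kind of overlay I mean (Python with OpenCV, with a made-up `detect_objects()` standing in for whatever detection code is actually being debugged). The green circles exist purely so a human watching can see what the code thinks it has found.

```python
# A minimal sketch of a human-facing debugging overlay.
# `detect_objects` is hypothetical - a stand-in for whatever model or
# heuristic you're actually trying to understand. The circles are for us,
# not for the "robot".
import cv2


def detect_objects(frame):
    """Hypothetical detector: returns a list of (x, y, radius) guesses."""
    # ... whatever detection logic you're actually debugging ...
    return []


def debug_overlay(video_path):
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Draw a green circle around each detection so *we* can see
        # where the code thinks the interesting things are.
        for (x, y, r) in detect_objects(frame):
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)
        cv2.imshow("what the code thinks it sees", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

The point isn't the specifics; it's that every line of that overlay was written by a person, for people, to answer the question "why isn't my code doing what I thought it would?"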
I think that if we frame it that way - tools and techniques to help humans understand algorithms - it leads us down a different but more useful rabbit hole. Chasing that one down leads people to ask better questions of the technologists: what they were trying to achieve, why that has produced this unintended consequence, and how we might fix it or build better tools to explain it further.