Philosophy

A better way to A.I.

Mars Rover Spirit on Mars
Mars Rovers have on-board AI, but they aren't busy mining our data, faking our essays, and sowing social chaos. NASA/JPL-Caltech/Cornell

Boy is there a lot of hype about AI today. You'd think it was absolutely novel, completely new, and destined to help us all progress. Of course, none of this is true.

On the one hand, this is at least the fourth wave of A.I., for those who are counting. There was the 1960s wave, also called Cybernetic Systems, as it was centered at Carnegie Mellon and MIT. Then there was the 1980s wave influenced by my colleague Lucy Suchman, featuring people like Rodney Brooks and bequeathing us tools like the Roomba.

Incidentally, it was this crossover between sociologists and computer scientists that took us away from intelligence-as-rule-following and toward intelligence-as-interpretation-in-the-moment: like your Roomba bashing into walls until it vacuums the whole room.

There was a brief sprint in the late aughts that inspired the study of AI and ethics, culminating in a terrific book about the humans-all-the-way-down phenomenon, Ghost Work, by my colleagues Mary Gray and Siddharth Suri at Microsoft Research.

And now there is this boomtown wave, featuring large language models that are like the worst of autocomplete on steroids. Sometimes I think we should call AI Auto-Incomplete: it's a better description of what it fails to do.

Or Augment Investments, as that's its actual job in the wake of the COVID-19 recession and the collapse of cryptocurrencies.

But I digress. Because on the other hand, there is an entirely different way to do artificial intelligence. And that version is already here. It has been going on for years. I have seen it in action. And that's the form of co-robotics and AI-assisted science that I've witnessed at NASA.

I wrote about this recently in a piece for The Conversation. Head on over there to learn more about this way of doing AI-and-society: a model of healthy collaboration that underlines, not undermines, our humanity.

But it's also worth mentioning here for its Opt Out sensibilities. After all, the Mars rover isn't busy stealing our data. These systems run on data about the physical world, using learning models to inform responses to that world: climate modeling, Martian seasonality, particle detections, and so on. But they are not engaged in social engineering.

They do not take data about humans, nor do they run algorithmic interventions to torque and tweak what we see.

A more humane AI is possible if we build on these lessons in co-robotics to create collaborative human-machine systems. But here is what we need:

1. Human autonomy is sacrosanct. Humans are good at certain tasks. We are not perfect, but we are pretty good at those tasks. Allow us to do them, to commit to them, and to perform them as best we can. AI should not undermine this autonomy in decision-making, meaning-making, or agency in the world.

2. Make room for ambiguity. Most AI systems want to cut ambiguity down and go for ever more perfect optimization. But ambiguity can be a resource for sense-making, for pattern-finding, for synthesis--even just for conversation. Making room for it also recognizes that the world is a messy place. This is something humans can help AI navigate, not the other way around.

3. Algorithms must not tweak us or data about us to do their jobs. Sure, a Mars rover could learn more about the humans who command it by processing those commands and the mechanisms for generating them; but it should not. It can instead learn more about Mars and how best to help humans understand Mars. Humans are not mineable model-generating material, and we aren't optimizable machinery either.

4. Human interactions should be traceless. Not anonymized, not differentially private--not collected at all. When technologies support human activity, traces of those human interactions must not be gathered for machine learning purposes. Optimizing the human, or learning from human interactions by storing and processing data, matters less than supplementing and supporting actions as they come.

5. Support work in teams. Because models of individual cognition only go so far: most of what we achieve, we achieve in groups. Look to models of small- and large-group behavior to figure out how best to integrate artificial intelligences into a sociotechnical system, with organizational and interactive elements. No human is an island, and AI shouldn't treat any of us as one.

6. Do not extract value from unpaid or low-paid human labor. So many systems today rely on stealing people's work--writing, painting, and more--to feed a machine. So many systems also depend on paying pennies to someone far away to clean up after the machine's mess. Humane AI should do neither. A collaborative AI would work interactively with humans, under a consent framework, to produce novel forms of art, music, literature, and more.

You can read more at The Conversation about this way of thinking. But as we choose which tools to use to opt out of AI systems, we should think about how to opt in, instead, to better, more ethical and responsible forms of "A.I." More on this as we move forward: for now, I am keeping a lookout for humane-AI examples and will post them as I find them.

 
