iPass Blog

AI Uninterrupted: Culture, Representation and the Ethics of AI

Part II of my AI interview with Kathryn Hume

By Dennis Jones

Every new technological advance seems to precipitate thorny ethical questions, and AI is no different. Witness the rise of a veritable ethics-of-AI cottage industry. But what are the ethics of AI, and how do we separate the wheat from the alarmist chaff? We continue our interview with AI thought leader Kathryn Hume.

Part II: The Ethics of AI and AI in Culture

There are a couple of terms that I see getting thrown around interchangeably. But I imagine in the industry, there are more substantive differences between artificial intelligence, machine learning and data science. How does the industry distinguish between those terms? And how do you?

Yeah, this is a big mess. I think artificial intelligence is more of a psychological term than a technical term with any rigorous, meaningful definition. A lot of my thinking here is inspired by my former CEO, Hilary Mason, who is currently the VP of Research at Cloudera. I worked with her at a company called Fast Forward Labs. And we like to think about the terminology as follows: artificial intelligence is whatever computers can’t do until they can. We say that because the technology is progressing so quickly that you can’t make a taxonomy that would say, oh, natural language processing is AI, but data science isn’t.

So instead, you build futurity into your definition?

Exactly, it’s more of a horizon of our perception of what qualifies as interesting, intelligent-like behavior. Currently, self-driving cars are an example of AI because they are on the horizon of becoming possible. They seem interesting, but not totally out there in science-fiction territory.

Hilary used to always say that at the bottom of a Maslow’s hierarchy of data lies big data, which is our ability to collect, process and store more data than we could historically, at a reasonable cost. Above that is analytics: counting things from the past. Things occur, and we go back and report on what happened. Data science adds in a little bit more of this y = mx + b-type capability, so we can use today’s data to make educated guesses about what’s going to happen in the future using sophisticated models, as opposed to just counting the past and then making a qualitative judgment. Machine learning is when you add in a feedback loop, where the models that you use and the functions that you’re modeling can change over time as you get new data and as you get feedback on an estimate.
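
To make that distinction concrete, here is a minimal Python sketch (an editorial illustration with invented numbers, not code from the interview): fitting y = mx + b once on historical data is the “data science” step, and adding a feedback loop that updates m and b as new observations arrive is the “machine learning” step.

```python
import numpy as np

# --- "Data science": fit y = m*x + b once on historical data ---
# (Invented numbers, purely for illustration.)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])

A = np.vstack([x, np.ones_like(x)]).T        # columns: [x, 1]
m, b = np.linalg.lstsq(A, y, rcond=None)[0]  # least-squares fit
print(f"static model:  y = {m:.2f}x + {b:.2f}")

# --- "Machine learning": add a feedback loop ---
# As each new observation arrives, nudge m and b in whatever direction
# reduces the prediction error (online gradient descent).
lr = 0.01  # learning rate
for x_new, y_new in [(6.0, 12.5), (7.0, 14.3), (8.0, 16.2)]:
    error = (m * x_new + b) - y_new          # feedback: how wrong were we?
    m -= lr * error * x_new                  # update the slope...
    b -= lr * error                          # ...and the intercept
print(f"updated model: y = {m:.2f}x + {b:.2f}")
```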

Then, AI is sort of an encapsulating, umbrella term. Most AI applications these days are built using machine learning. But back in the 1950s, when the field was first starting, systems were based on different techniques that tried to use symbolic logic to encode intelligent-like reasoning patterns. That works well in certain settings, like an expert-system-type approach. But the world is really messy; things change, and rules get stale fast. There are only so many processes or truths out there that we can model with regular if-then statements with any rigor and applicability.
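
For flavor, here is a tiny sketch of that older, rule-based style (an editorial toy with invented rules, not any historical system): the “intelligence” is entirely hand-written if-then logic, which works in a narrow domain right up until the world changes.

```python
# A toy, 1950s-style "expert system": intelligence encoded as
# hand-written if-then rules rather than learned from data.
# (Invented rules, purely for illustration.)
def diagnose(symptoms: set) -> str:
    if {"fever", "cough"} <= symptoms:
        return "flu"
    if "rash" in symptoms:
        return "allergy"
    return "unknown"  # the rules have nothing to say: they have gone stale

print(diagnose({"fever", "cough"}))   # -> flu
print(diagnose({"fatigue", "ache"}))  # -> unknown
```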

It seems that when we think of real-world applications of machine learning and AI, we tend to have an implicit bias that high-skill tasks require more intelligence than intuitive, emotional or even physical tasks. How successful are machines at encoding those intuitive, emotional or physical tasks into their mathematical functions?

I think there’s a cultural predisposition to deem certain types of activities, often affiliated with our prefrontal cortices, as super intelligent: playing chess well, or having specialist knowledge, like a doctor who can diagnose whether or not a lung might be cancerous. Our society values and prizes those activities as rarefied, as exhibiting a type of intelligence that can be measured on things like IQ tests. That bias has shaped a lot of the past efforts in AI to develop systems that seem super cool.

But I think the future of AI – and I guess the present and the future – puts pressure on those very notions of intelligence in a couple of ways. The first is social. Right now, in careers that require expert skills, like those one would acquire in getting a doctorate, or the mathematical skills to do accounting or investment banking, people make a lot of money. But those jobs tend to be quite narrow and very specialized. As it happens, such tasks are a great fit for artificial intelligence, because if we have enough examples of human thought left behind as traces and data, we can use those examples to develop specific, task-oriented systems and ultimately automate or replace those tasks.

What’s really hard to replicate is EQ, the intelligence of the nurse. I got this from Yuval Noah Harari in Homo Deus, where he says the hardest profession to automate away would be the caveman, because cavemen are great generalists. They might not do anything particularly well, but they can light a fire. They can hunt. Overall, they are jacks of all trades. And think of all the subtle cues our brains are processing all the time: gestures, twitches, even smells. We’re drawing all sorts of information from the environment when we relay an appropriate emotional response, and computers are far away from doing that.

There’s a company based out of New Zealand called Soul Machines, which hired Cate Blanchett and a bunch of visual artists who had made films like Avatar to develop very human-like and human-looking systems that display emotional responses to people’s input. But it’s nothing like the way that we work. So the moral of the story is that systems are stupid, too stupid to do emotional tasks well. It forces us to question what actually qualifies as intelligence across different socio-economic, racial and identity groups in society, because were a poor black woman from Brooklyn to be the computer scientist, what might she choose to automate first?

That prompts me to ask you about your views on the ethics of AI, which you’ve written and spoken about quite a bit.

The most important thing is that, unlike a lot of the stuff we read about in the media, we shouldn’t be focused on machines becoming super intelligent and exterminating the human race like Skynet. That’s not the ethics of AI.

There is some active work in developing safe robots, where the concern is a rogue robot that could actually cause physical harm to people. That’s a real thing, and there are groups working on controlling the activities and movements of machinery. Self-driving cars are similar. We don’t want to set a bunch of self-driving cars loose, have them drive all over like crazy, drunk people on the roads, and leave it at that. That’s a real concern, and people are actively working on it.

The other issues are, in my opinion, more oriented around existing ethical issues in human society. This technology is not an entity separate from human society; instead, it’s a magnifying glass that happens to give us the means to understand what’s going on in society better, and it can sometimes lead to unintended consequences. People talk about bias in machine learning algorithms. The algorithms are just solving for m in the mx + b equation. But in doing so, they might either magnify or reveal existing biases that stem from the training data.
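
A toy illustration of that point (an editorial sketch with invented numbers): fit a plain least-squares model on historical salary data in which one protected group was systematically underpaid, and the learned coefficients faithfully encode, and would perpetuate, the gap.

```python
import numpy as np

# Invented numbers, purely for illustration.
# Features: [years_experience, group], where `group` is a protected class.
X = np.array([[5, 0], [6, 0], [7, 0],
              [5, 1], [6, 1], [7, 1]], dtype=float)
# Historically, group 1 was paid $10k less at every experience level.
y = np.array([80.0, 90.0, 100.0, 70.0, 80.0, 90.0])

A = np.hstack([X, np.ones((len(X), 1))])     # add an intercept column
coef = np.linalg.lstsq(A, y, rcond=None)[0]  # "solving for m in mx + b"

exp_coef, group_coef, intercept = coef
print(f"experience: {exp_coef:+.1f}k/yr, group: {group_coef:+.1f}k")
# -> group: -10.0k. Otherwise-identical candidates from group 1 get
#    lower predicted salaries: the historical bias, faithfully learned.
```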

And since this is training data, data from the past, depending on how far back you go, there’s what I call the time warp of AI: social phenomena that society has moved beyond end up being encapsulated in this past data and get perpetuated forward. This can be things like inequality between genders, biases between races, everything that has to do with protected classes and minorities.

Another issue is feedback loops, in the context of social media sites like Facebook and their recommender systems, which are trying to sift through the effectively infinite number of things they might show you. Those systems are optimized to give you the things they think will be most interesting to you, and in doing so, they’re governing their choices based upon your past behavior. So they tend to create what Eli Pariser first called “filter bubbles.” You’re seeing more and more of your narcissistic self, as opposed to being exposed to other points of view from people who aren’t like you. This has already led, and will likely continue to lead, to increasing polarization in society. People are just shown more of the people who are like them and the ideas they already espouse, as opposed to being forced by contact to see that other people might not think like they do, or being exposed to different types of ideas, so as to develop their reasoning capacities and become great people and citizens.
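
Here is a minimal simulation of that feedback loop (an editorial sketch with arbitrary topics and probabilities): a recommender that weights what it shows by past clicks converges on a single topic once the user shows even a slight preference.

```python
import random

# Arbitrary topics and probabilities, purely for illustration.
TOPICS = ["politics-left", "politics-right", "sports", "science", "art"]
clicks = {t: 1 for t in TOPICS}  # start from a uniform click history

def recommend():
    # The feedback loop: show topics in proportion to past clicks.
    topics, weights = zip(*clicks.items())
    return random.choices(topics, weights=weights)[0]

random.seed(0)
for _ in range(500):
    shown = recommend()
    # The user always clicks one favorite topic, others only sometimes.
    if shown == "politics-left" or random.random() < 0.2:
        clicks[shown] += 1

total = sum(clicks.values())
for topic in TOPICS:
    print(f"{topic:15s} {clicks[topic] / total:.0%} of click history")
# A slight initial preference compounds until the feed is mostly one topic.
```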

I’m a big fan of John Milton’s Areopagitica, which argued against censorship. Milton is essentially saying that we can’t protect people from ideas they don’t want to be exposed to; they have to exercise the muscle of self-restraint. Virtue doesn’t come for free. You get exposed to ideas like a virus, and you have to build up immunity; you have to work on that. You can’t just live within a protective bubble. And I think this is going on across society; it’s not only a byproduct of AI. But it’s certain that these algorithms have the propensity to exacerbate some of the more alarming social phenomena we see today.

Since you’re saying that one of the unintended consequences of AI is creating filter bubbles, how can AI actually deliver deeper experiences and interactions? Or is that even a task we should allot to AI?

There are a couple of ways, and there are different players with different levels of responsibility in this process. One would be the developers of products, who could be mindful of the potential ethical pitfalls of their systems and take proactive steps to modify the math so as to lead to fairer results. There’s some great work by FAT/ML (Fairness, Accountability, and Transparency in Machine Learning). All sorts of folks in academia and in the private sector are studying potential filter-bubble-like effects or other ethical impacts of algorithms, and they are actively doing research to see how they might tweak the math to lead to better results.

There’s also the responsibility of the AI community to educate everybody that this is happening. That education would lead to a better understanding of how algorithms work among the users of these systems. In that way, users can come in and game them. You might be able to be a better Facebook user if you know how the algorithms are working, because they’re stupid, and we can outsmart them by, say, clicking on a bunch of stuff to make sure they’re showing us a balanced point of view. But that’s not really the responsibility of data scientists; that’s the users taking responsibility for themselves. I think it’s really a collaboration between all of those parties.

The other piece would be company management taking a stance on what it is that they want for the world. There are always going to be unforeseen consequences of products; Facebook is going through that right now. We’re always going to be living in a world where all technologies, AI and others included, carry risks, and we always have to be imagining the possibilities, mitigating the risks, and doing so with an eye toward innovation. But that means knowing what we’re agreeing to and being always mindful of the unknown unknowns and, as my colleague Tyler Schnoebelen likes to say, the unknown knowns, which can be the most interesting things to have the courage to admit.

And my final question. Coming back to your background, what’s the best example of AI you’ve seen in culture, either a cultural representation of AI or a piece of AI-produced culture?

I do think Spike Jonze’s Her is a really interesting, super accurate depiction of where AI systems might go. I love that Scarlett Johansson’s bot character is distributed across multiple devices. When we imagine AI, it’s always these robots that are just like humans, encapsulated in one body and one spot. But what’s brilliant about what Spike Jonze does is that Scarlett Johansson’s character is distributed across millions and millions of devices, having the same conversation with many others at the same time. It’s an intelligence that isn’t embodied the way we think about human consciousness; it’s very much disembodied. I think that is a really smart and prescient representation of AI, especially as edge computing becomes increasingly powerful, enabling machine-to-machine communication at the edge and all sorts of future-oriented computing paradigms that I think are going to look more like the future of AI.

Also, the main character in Her basically fell in love with a reflection of himself – going back to our filter bubbles. The bot didn’t challenge him in any way until the end. And he had a failed marriage because he wasn’t able to meaningfully engage with a person who was not like himself.

On the cultural side, I have some friends who are artists at the machine intelligence group at Google, AMI (Artists and Machine Intelligence), who have been doing some really interesting generative artworks, using neural networks to create paintings, novels and poetry. It’s a continuation of automatic poetry and artistic traditions like Dada, which have been around for a while but are coming to fruition again in the algorithmic world.

And then my friend Aaron Marx’s startup in New York City, CulturePass, is basically building a tool that uses personalization to recommend the best cultural or art activity going on at a given time. It’s taking a lot of the standard tricks that have been applied in consumer commercial settings and bringing them to the cultural world. I’m a huge fan of what he’s doing. And yeah, I think it’s just neat to see AI existing at the representation level, the actual artwork level and the cultural entrepreneurship level.