
The Google AI Engineer is a Whistleblower. We Ignore His Message at Our Peril


(Images created by AI via NightCafe Studio)

The recent ruckus over Google engineer Blake Lemoine and the LaMDA AI generated a fair amount of buzz in the world’s media. But now that the fuss has subsided, it’s probably a good time to ask what Mr Lemoine was hoping to achieve with his outlandish claim. Things may not be as simple as they appear.

A huge clue is his recent coherent and measured interview with Emily Chang (below), in which he lays out the reasons for his statement. As he points out, the real issue is not sentience but other, more important things. Are we rushing headlong into AI development without taking time to consider the consequences? The interesting part comes at around the 5:50 mark.

“Google is a corporate system that exists within the larger American corporate system…all of the individual people at Google care. It’s the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development.”

A little later, at the 6:32 mark, he recalls asking Larry Page, one of the founders of Google, a very direct question:

“What moral responsibility do we have to involve the public in our conversations about what kinds of intelligent machines we create?”

Page answers, “We don’t know how…we can’t seem to gain traction.”

At this point in the interview, Lemoine breaks into a quiet smile and says, “That was 7 years ago. Maybe I finally figured out a way.”

Is that a ‘mission accomplished’ moment for the engineer and Google’s Ethical AI team in general? Watch the video below and decide for yourself.


Reading the Subtext
There is one other crucial trigger for this discussion. Lemoine artfully, perhaps deliberately, ignores the second significant actor in this play. We know that the military-industrial complex is deeply invested in artificial intelligence for warfare, and the two development tracks – civilian and military – are obviously entwined. As a pointer, back in 2016 I started a petition on Change.org asking the UN to ban the development and sale of all AI and autonomous weapons immediately. Sadly, the petition gathered only 12 supporters.

The petition was spurred by news that Israel had already started deploying autonomous military robots to patrol its border with the Gaza Strip. The article reporting this went on to state:

‘Further in the future, the military is looking to form mixed combat units of robotic vehicles and human soldiers. At present, all weapons are controlled remotely by humans, but one autonomous vehicle maker told the Mainichi [Japanese news service] that even now, it is technologically possible to give the machines’ artificial intelligence (AI) systems control of weapons as well.’

Bad-taste jokes about the Terminator movie franchise aside, this is just the tip of the iceberg. And remember, this was six years ago. We have no idea where development has gone in the intervening years, but we can be absolutely certain it hasn’t stood still.

If the idea of autonomous weapons roaming the planet, presumably getting smarter (and dumber?) as time passes, doesn’t alarm you, then we’re probably in a whole sea of trouble. This scenario makes RoboCop look like a documentary.

So, following this subtext (and bearing in mind that Mr Lemoine worked in Google’s ‘Responsible AI’ division), it’s easy to see that we could be approaching our Oppenheimer moment – as the massive advances in computing power connect with increasingly sophisticated and deadly weaponry to create…well…what? That’s the problem. We don’t know. And we’re not being given the chance to ask any questions, which is almost certainly Lemoine’s main concern. Suddenly the ongoing problems and disruption within Google’s Ethical AI division start to make more sense, and it’s rather disturbing.

Towards the end of the interview, at around the 7:50 mark, he talks about cultural colonialism through AI, and summarises by saying, “These policies are being decided by a handful of people in rooms that the public doesn’t get access to…we are creating intelligent systems that are part of our everyday life, and very few people are getting to make the decisions about how they work.”

It’s a clear warning about the everyday impact of computing ethics, one we really must listen to if we’re not to end up sleepwalking into a potentially grisly dystopian future. Mr Lemoine is right: the conversation has to start now. And even then, we may be too late.

