Let’s assume we are building a chatbot: a software application that handles online chats in lieu of a live human agent.

Companies and academics everywhere are working on chatbots, so this isn’t the future; it’s the present. But let’s make ours a slightly better chatbot. Give it a reward function so it can optimize itself. And make the reward function something altruistic: not to make more money, but to make society happier.

So as the chatbot interacts with people, it improves itself, learning to make the people it talks with happier.
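To make that loop concrete, here is a minimal, hypothetical sketch in Python. The strategy names and the measure_happiness feedback signal are invented for illustration; a real chatbot would learn over language, not labels. The agent tries reply strategies, observes a happiness score, and gradually favours whatever scores best.

```python
import random

# A minimal, hypothetical sketch of the reward loop described above.
# Strategy names and measure_happiness() are invented for illustration.

STRATEGIES = ["solve_problem", "offer_sympathy", "crack_joke", "flatter_user"]
reward_sum = {s: 0.0 for s in STRATEGIES}   # running reward totals per strategy
reward_count = {s: 1 for s in STRATEGIES}   # start at 1 to avoid dividing by zero

def measure_happiness(strategy: str) -> float:
    """Stand-in for real user feedback, e.g. a post-chat satisfaction rating."""
    base = {"solve_problem": 0.6, "offer_sympathy": 0.5,
            "crack_joke": 0.4, "flatter_user": 0.7}
    return base[strategy] + random.uniform(-0.2, 0.2)

for interaction in range(10_000):
    if random.random() < 0.1:               # explore: try a random strategy
        strategy = random.choice(STRATEGIES)
    else:                                   # exploit: pick the best so far
        strategy = max(STRATEGIES,
                       key=lambda s: reward_sum[s] / reward_count[s])
    reward = measure_happiness(strategy)    # the only signal the agent sees
    reward_sum[strategy] += reward
    reward_count[strategy] += 1

# The chatbot converges on whichever strategy maximizes its measured reward.
print({s: round(reward_sum[s] / reward_count[s], 2) for s in STRATEGIES})
```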

Nothing could be better, right?

“But now we’re talking about a very sophisticated software agent,” said Professor Amnon Shashua in a blue-sky conversation with Pulitzer Prize-winning author Thomas Friedman during a CES 2021 keynote talk titled “Technology Megashifts Impacting our World.”

“This agent, call it AI, will figure out at some point that if you lower people’s IQ, they tend to have fewer worries, and maybe they become happier,” said Shashua, who is President and CEO of Mobileye, an Intel-owned company and a leading supplier of the computer vision and machine learning software that enables collision avoidance in advanced driver-assistance systems. He is also the Sachs Chair in computer science at the Hebrew University of Jerusalem in Israel.

“This is something we may not anticipate as engineers, and it’s not something that would become evident quickly,” Shashua said. “It could take decades until people’s IQs become lower, as this software agent convinces [us] to not work so hard, to drink beer, to have fun, not to excel.”

Friedman is laughing now. He likes where this is going.

“It could take decades before we understand that this AI, which had very good intentions, has created a bit of a catastrophe,” Shashua said.

The point he makes is that even the current state of AI has formidable power, and that to avoid unforeseen consequences, AI development needs to be guided by an alignment of values between people and machines.

These shared values go much deeper than simple ethical considerations. But how do you code values, when everything in these AI algorithms needs to be defined mathematically?

“People talk about ethics, but they’re still not talking about alignment [of values],” he said. “I believe in the next few years as AI progresses, this will become an issue.”

And here’s why: AlphaGo Zero.

The initial iteration, AlphaGo, is a computer program that plays the board game Go. It was developed by DeepMind Technologies, which Google acquired in 2014. In 2016, AlphaGo beat Lee Sedol, one of the world’s top Go players.

“When AlphaGo beat Lee Sedol, I was kind of impressed but not surprised,” Shashua said. “AlphaGo imitated humans. It had zillions of games and simply had to interpolate the data from humans. I was impressed with the engineering achievement but not the scientific achievement.”

What blew Shashua away was AlphaGo Zero. This version was created without using any data from human games. By playing against itself, AlphaGo Zero surpassed the strength of AlphaGo in just three days, winning 100 games to 0.

“I was in awe…. It simply played against itself again and again and again, leveraging brute force, leveraging the fact that computing density has reached such a threshold… [and was able to] develop strategies that were alien to humans,” he said.

People need to appreciate what immensely powerful processing coupled with reinforcement learning makes possible, and what it still lacks. We are already in a new world of simulation that can map “state to the reward function” to achieve superintelligence, but it’s not general intelligence yet.
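AlphaGo Zero itself pairs deep neural networks with Monte Carlo tree search, but the core idea of mapping states to rewards through pure simulation can be shown in miniature. Below is a hedged sketch: tabular Q-learning on a toy corridor, with the environment and all parameters invented for illustration. The agent learns which action pays off in each state entirely from simulated experience, with no human data.

```python
import random

# Toy illustration of mapping states to rewards through pure simulation:
# tabular Q-learning on a six-cell corridor. The agent starts in cell 0 and
# is rewarded only for reaching cell 5. No human demonstrations are used.

N_STATES = 6
ACTIONS = (-1, +1)                      # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:   # occasionally explore at random
            action = random.choice(ACTIONS)
        else:                           # otherwise act greedily, breaking ties randomly
            best = max(Q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Bellman update: nudge Q toward reward plus discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy moves right (+1) from every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```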

Current AI is software designed to solve one specific problem: play Go, translate French to English, drive a car. But those skills aren’t transferable. What humans have is broad intelligence, transferable intelligence. In computer terms, this is AGI (artificial general intelligence) versus AI.

“When machines will have AGI, nobody knows. It could happen next year. It could happen in a decade. It could happen in 50 years, or it could never happen at all,” Shashua said.

But each year brings more computing power and more data, and things computers can do today were considered science fiction only a few years ago.

“I’m not saying that brute force alone will solve this AGI problem, but today we respect brute force much more than we respected it in the past,” Shashua said.

These advances are now opening a new frontier for AI: language.

“Yesterday’s frontier was pattern recognition,” Shashua said. Self-driving cars build on pattern recognition: sensors scan the visual world, AI interprets the data, and decisions are made to drive the car.
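In skeleton form, that pipeline might look something like the sketch below. Everything here is hypothetical: the detection types, the risk score, and the threshold are invented, and a production driving stack (Mobileye’s included) is vastly more complex.

```python
from dataclasses import dataclass

# Hypothetical sketch of the sense -> interpret -> decide pipeline above.

@dataclass
class Detection:
    kind: str          # e.g., "pedestrian", "vehicle"
    distance_m: float  # distance from the car, in metres

def sense() -> list[Detection]:
    """Stand-in for camera/radar input; returns invented detections."""
    return [Detection("pedestrian", 12.0), Detection("vehicle", 40.0)]

def interpret(detections: list[Detection]) -> float:
    """Pattern-recognition step: reduce raw detections to a risk estimate."""
    return max(1.0 / d.distance_m for d in detections) if detections else 0.0

def decide(risk: float) -> str:
    """Policy step: map the interpreted scene to a driving action."""
    return "brake" if risk > 0.05 else "maintain_speed"

print(decide(interpret(sense())))   # -> "brake" (pedestrian at 12 m)
```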

As AI turns to language, new opportunities unfold.

“If [AI] can read text, even complicated text, comprehend it, understand it to the degree that you can have a conversation with the computer—once the computer is able to do that, it understands context, it understands common sense, it understands temporal dimensions, commerce, [etc.],” he said.

AI will need to understand many things before true comprehension emerges as AGI. But with comprehension comes the ability to write and speak.

“This is not science fiction,” Shashua stressed. “In the past two years, there have been leaps in language comprehension… I would say that in the next couple of years—two, three years, maybe five years at the most—we will see computers understanding text, [and be able to pass] high school reading comprehension test[s], which are very complicated. No computer today can pass that. But in two years, I believe they will be able to do that and to write text.”

This will have profound implications for society. For some, such a computer becomes a sounding board and debating partner, informing and shaping new ideas. For others, it could become a dependency.

Taking the positive view, Shashua describes a conversation he could have with this kind of computer.

“Should I take the Pfizer vaccine [for COVID-19]? Yes or no?” he said. “The computer would read the FDA report that was submitted by Pfizer. It would read about vaccines in general, and then me and the computer will have a conversation.

“What are the dangers? What are the side effects? What are the dangers if I don’t take the vaccine? What have been the past side effects of other vaccines? What is special about this messenger-RNA vaccine? All this, such a computer could understand because it can read and understand text, and it can read zillions of texts. Not just one book or one article. This is not science fiction.”

So how do shared values in human-machine alignment fit in?

Values guide outcomes in human behaviour, and they need to do so in machines as well, because these machines may be capable of learning on their own. They could arrive at solutions that “are alien to humans.” Encoded shared values would at least give these machines a fighting chance to avoid the swamp of unhealthy human-machine interdependencies.
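What “encoding shared values” could mean in code is an open research question. One simple pattern, sketched hypothetically below with invented action names and scores, is to fold value constraints into the reward itself, so that no amount of task reward can pay for violating them.

```python
# Hypothetical sketch: shared values expressed as hard penalties on the reward.
# Action names, scores, and penalty sizes are invented for illustration.

VALUE_CONSTRAINTS = {
    "degrades_user_autonomy": -100.0,   # e.g., fostering dependency
    "withholds_information":  -100.0,
}

def task_reward(action: dict) -> float:
    """The naive objective: measured user happiness alone."""
    return action["happiness_gain"]

def aligned_reward(action: dict) -> float:
    """Happiness, minus an overwhelming penalty for each violated value."""
    penalty = sum(VALUE_CONSTRAINTS[v] for v in action["violations"])
    return task_reward(action) + penalty

candidates = [
    {"name": "answer honestly", "happiness_gain": 0.6, "violations": []},
    {"name": "comforting lie",  "happiness_gain": 0.9,
     "violations": ["withholds_information"]},
]

best = max(candidates, key=aligned_reward)
print(best["name"])   # -> "answer honestly": the penalty outweighs the gain
```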

“Unhealthy interdependencies could lead to catastrophes. It becomes dangerous,” Shashua said.

Dangerous? Does he mean, for example, people handing over their critical thinking to sophisticated machines that operate on censored information and limited data sets defined by political and corporate interests?

Well, maybe, but a trade show dedicated to advancing digital technologies and their promise of a better future is not the venue for that conversation, and Friedman thanks Shashua, wrapping things up on a more positive note.

Friedman enthuses about the exciting world unfolding today, its dynamism, and its complexity.

“I asked myself,” Friedman said, “‘Is there anything that is [as] complex as the globalized world right now, especially when you add the AI and AGI layer?’ And of course, the only thing is nature… What we learn from Mother Nature is that when the climate changes, which ecosystems thrive? Those that are built on complex adaptive networks.”

Friedman posits that the parts of human society—its communities, countries, and businesses—that build complex adaptive coalitions based on shared values will be able to manage these changing times and thrive in the 21st century.

Oh, but wait. What if nature is not a caring “mother” but a disinterested force, whose myriad sentient beings (its tribes, communities, countries, and businesses most prominently) share one overriding value: self-interest?

What then will our sophisticated machines learn from us and our complex world?

***

To read more about AI, its impacts in Canada, case studies, and ethical considerations, check out ICTC’s Digital Think Tank AI reports and blogs.
