Can neural networks actually simulate human minds?
Not yet, but current applications and future potential are still remarkably advanced
From "Iron Man’s" Just a Rather Very Intelligent System (J.A.R.V.I.S). to "Portal's" GLaDOS to "2001: A Space Odyssey's" HAL 9000, the idea of having a fully sentient digital assistant is one that people hope to see jump from science fiction into reality sooner rather than later.
Google, Apple and Microsoft have all begun this process with their own digital assistants, which allow users to control their phones or computers with voice commands. Facebook founder and CEO Mark Zuckerberg spent a large part of 2016 trying to push the idea of digital assistants a step further by creating an artificial intelligence that can control his house.
While these digital assistants are examples of smart systems, they do not think independently and they cannot pass the Turing test, meaning they cannot fool humans into thinking they are sentient. The advance of neural networks may change this by producing artificial intelligences that can convincingly simulate a human being.
Late last year, a team of programmers came one step closer to simulating a full intelligence when they built an artificial intelligence capable of responding to messages as a person.
Eugenia Kuyda, founder and CEO of Luka, a company that builds artificially intelligent messenger bots, directed her team to develop a program that could simulate Roman Mazurenko, her former colleague, boyfriend and best friend who died in a car accident early in 2016.
In late 2016, using thousands of stored text messages and pictures, the Luka team completed a neural network that could respond to people as Mazurenko through a chat interface. The network was realistic enough that Kuyda, as well as Mazurenko’s friends and family, said that speaking to it was similar, if not identical, to speaking to him.
What is a neural network?
Neural networks are a type of computer program that can respond to queries or carry out instructions more flexibly than traditional programs. They are used to identify faces in photos, analyze medical data, crunch through quantum mechanics, steer self-driving cars and improve automation, among other tasks.
They do this with artificial neurons, simplified models of the brain cells that allow people to think and reason, said Ahmed Elgammal, a professor in the Department of Computer Science. Perceptrons, as these artificial neurons are called, have existed as a concept since the 1950s and form the basis of a neural network.
The neurons in a network are designed to cooperate, producing an output from a given input that none of them could produce on its own.
In other words, neural networks try to simulate being a human brain.
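A single perceptron can be sketched in a few lines of Python. The weights and bias below are illustrative, hand-picked so the artificial neuron behaves like a logical AND gate; real networks learn such values automatically.

```python
# A minimal perceptron: a weighted sum of inputs passed through a
# step function. The neuron "fires" (returns 1) only when the sum
# crosses its threshold.

def perceptron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total + bias > 0 else 0

# Hand-picked values: with these weights and bias, the neuron only
# fires when both inputs are 1, behaving like a logical AND gate.
and_weights = [1.0, 1.0]
and_bias = -1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], and_weights, and_bias))
```

A network wires many of these neurons together in layers, so that the output of one layer becomes the input of the next.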
“Other algorithms have a lot of tweaking and are designed,” Elgammal said. “So to recognize an image, for example, you would have to start by designing certain features or elements that you need in order to recognize it. A neural network doesn’t do that, a neural network takes the image and the task you’re trying to (teach it) and learns.”
Researchers approach neural networks differently than they do traditional algorithms, he said.
“You don’t design the algorithm, you design the architecture of the network,” he said. “You don’t design what exactly (it) should look for in the image — is it the colors, is it the line, is it the corners, you don’t do that anymore. You look at the data that is available and let the machine learn for itself.”
These networks are trained to identify patterns on their own using training data provided by the program’s creators. Unlike traditional programs, the networks are then expected to find similar patterns in data they have never seen, instead of simply returning a predetermined result for a given input.
There are two ways to train a network, Elgammal said. In supervised learning, a researcher provides input data along with the matching outputs, and the network learns to reproduce that mapping. In unsupervised learning, the researcher provides only the inputs and lets the network discover the structure in the data on its own.
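Supervised learning can be sketched with the classic perceptron training rule: show the network labeled examples and nudge its weights whenever it answers wrong. The task here (learning a logical OR) and the learning rate are illustrative choices, not any particular research system.

```python
# A sketch of supervised learning: the network is shown inputs paired
# with the desired outputs, and adjusts its weights after each mistake.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from labeled (inputs, target) pairs."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            output = 1 if total > 0 else 0
            error = target - output
            # Nudge each weight toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Labeled examples: each input comes with the output the trainer
# expects (here, a logical OR of the two inputs).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_perceptron(data)

for inputs, target in data:
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    print(inputs, "->", 1 if total > 0 else 0)
```

In unsupervised learning, by contrast, the targets in `data` would be absent, and the network would have to group or reconstruct the inputs without being told what the right answer is.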
In the process of turning an input into an output (for example, producing the name of an image from the image itself), the network "learns" how to complete tasks like identifying images.
One of the disadvantages of neural networks is that they are black boxes, Elgammal said. Often, the people who create or work on these networks are not entirely sure how they work — they build the framework of the network and let it learn tasks on its own.
In neural networks’ early stages, scientists were unable to have a network explain its reasoning, according to All About Circuits. While a network might produce the desired output from a given input, it could not provide a reason, which limits its usefulness in fields like medicine or self-driving cars, where the reasoning matters as much as the answer.
According to an article published by the Massachusetts Institute of Technology (MIT), “After training, a network may be very good at classifying data, but even its creators will have no idea why.”
In the October 2016 article, researchers said that because they cannot figure out why a neural network might reach any particular conclusion, they cannot trust the results.
This inability to trace a neural network’s reasoning process is a disadvantage for researchers who want to understand how a network relates to a human brain, Elgammal said.
“It’s really hard to predict its behavior on some data that it has never seen before,” he said. “Its performance can be easily fooled if you give it well-designed inputs. This is a major limitation of the networks.”
What did the Luka team do?
Roman Mazurenko was hit by a speeding vehicle while crossing the street and died shortly after the accident. Three months later, his friends and family could once again text him and receive responses.
The Verge notes that the idea of resurrecting deceased loved ones through the use of technology is not new — an episode of "Black Mirror" actually features robots created with the messages left behind by the deceased.
In "Black Mirror," these robots can simulate everything about a person except for their emotions. Kuyda said in The Verge that she saw the episode and her project was inspired at least in part by it.
The Mazurenko bot’s responses closely match what he might have sent, the product of feeding the neural network some 8,000 messages he sent over his lifetime. While it originally responded only with archived copies of messages he actually sent, it can now choose its own words in each message.
While it primarily responds with text, the bot can also respond with images.
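The bot’s earliest version, which replied with archived messages, can be loosely pictured as a retrieval system: score each stored message against the incoming text and reply with the best match. Everything below — the word-overlap scoring and the sample messages — is an illustration of that idea, not Luka’s actual method, which uses a neural network trained on the real messages.

```python
import re

# A loose sketch of a retrieval-style chatbot: reply with whichever
# archived message shares the most words with the incoming text.

def words(text):
    """Extract a set of lowercase words, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def word_overlap(a, b):
    """Count the words the two texts have in common."""
    return len(words(a) & words(b))

def reply(incoming, archive):
    """Return the archived message that best matches the incoming text."""
    return max(archive, key=lambda msg: word_overlap(incoming, msg))

# Illustrative stand-ins for a person's archived messages.
archive = [
    "I am walking to the station now",
    "That movie was great, we should see it again",
    "Let's get coffee this weekend",
]

print(reply("let's get coffee tomorrow", archive))
```

The later, generative version goes further: instead of picking a stored message, the network composes new sentences in the style it learned from the archive.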
Though the bot can seem real, it cannot feel emotions — the same problem facing the androids in "Black Mirror." The bot is also not the person it was modeled after, despite how effective the simulation may be.
After interacting with the bot, one of Mazurenko’s friends said, “What really struck me is that the phrases (the bot) speaks are really his. You can tell that’s the way he would say it.”
The same friend said he asked the bot for advice, and the response also matched what Mazurenko might have said, suggesting the bot can not only respond to queries but also determine what a person might need to know.
Can a neural network simulate a person?
Elgammal said it is unlikely for an artificial intelligence to fully simulate a human, even with the use of neural networks.
The present technology is simply not sufficiently advanced to do so. But by simulating specific aspects of a reasoning process, whether that is for responding to people’s messages, analyzing traffic patterns or even just agreeing with a doctor’s diagnosis, neural networks have proven their potential for future applications, he said.
Neural networks can dramatically change how researchers analyze different problems facing society, but it is clear they can also be used to advance artificial intelligence research by leaps and bounds. Though the Mazurenko bot is based on a specific person and carries the traits of that person, it is possible that in the future, unique personalities could be simulated without needing a source person as a model.
Nikhilesh De is a correspondent for The Daily Targum. He is a School of Arts and Sciences senior. Follow him on Twitter @nikhileshde for more.