I wake up in the middle of the night. It’s cold.
“Hey, Google, what’s the temperature in Zone 2?” I say into the darkness. A disembodied voice responds: “The temperature in Zone 2 is 52 degrees.” “Set the heat to 68,” I say, and then I ask the gods of artificial intelligence to turn on the light.
Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you’ve even stepped out of the house on a frigid morning.
But, while we’ve seen the A.I. sun, we have yet to see it truly shine.
Researchers liken the current state of the technology to cellphones of the 1990s: useful, but crude and cumbersome. They are working on distilling the largest, most powerful machine-learning models into lightweight software that can run on “the edge,” meaning small devices such as kitchen appliances or wearables. Our lives will gradually be interwoven with brilliant threads of A.I.
Our interactions with the technology will become increasingly personalized. Chatbots, for example, can be clumsy and frustrating today, but they will eventually become truly conversational, learning our habits and even developing personalities of their own. But don’t worry: the fever dreams of superintelligent machines taking over, like HAL in “2001: A Space Odyssey,” will remain science fiction for a long time to come; consciousness, self-awareness and free will in machines are far beyond the capabilities of science today.
Privacy remains an issue, because artificial intelligence requires data to learn patterns and make decisions. But researchers are developing methods to use our data without actually seeing it — so-called federated learning, for example — or encrypt it in ways that currently can’t be hacked.
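The core idea of federated learning can be sketched in a few lines: each device fits a model to its own data and shares only the fitted parameters, never the raw data, with a server that averages them. This toy sketch (the function names are illustrative, not from any particular library) uses a one-number “model” for clarity; real systems average the weights of neural networks.

```python
# Minimal sketch of federated averaging: each client fits a
# one-parameter model to its private data; the server only ever
# sees the fitted parameters, never the raw readings.

def local_update(private_data):
    # Each device computes a model parameter (here, simply the
    # mean of its own sensor readings) without sharing the data.
    return sum(private_data) / len(private_data)

def federated_average(client_params):
    # The server combines the clients' parameters into one
    # global model without ever touching the underlying data.
    return sum(client_params) / len(client_params)

# Three devices, each holding data that never leaves the device.
clients = [[66.0, 70.0], [68.0, 68.0], [71.0, 65.0]]
updates = [local_update(data) for data in clients]
global_model = federated_average(updates)
print(global_model)  # 68.0
```

The privacy gain is structural: the server learns a shared pattern without any single reading ever leaving the device that recorded it.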
Our homes and our cars will increasingly be watched over with A.I.-integrated sensors. Some security cameras today use A.I.-enabled facial recognition software to identify frequent visitors and detect strangers. But soon, networks of overlapping cameras and sensors will create a mesh of “ambient intelligence” that will be available to monitor us all the time, if we want it. Ambient intelligence could recognize changes in behavior and prove a boon to older adults and their families.
“Intelligent systems will be able to understand the daily activity patterns of seniors living alone, and catch early patterns of medically relevant information,” said Fei-Fei Li, a Stanford University computer science professor and a co-director of the Stanford Institute for Human-Centered Artificial Intelligence who was instrumental in sparking the current A.I. revolution. While she says much work remains to be done to address privacy concerns, such systems could detect signs of dementia, sleep disorders, social isolation, falls and poor nutrition, and notify caretakers.
Streaming services such as Netflix or Spotify already use A.I. to learn your preferences and feed you a steady diet of enticing entertainment. Google Play uses A.I. to recommend mood music that matches the time and weather. A.I. is being used to sharpen old films, colorize black-and-white footage and even add sound to silent movies. It’s also improving streaming speed and consistency. Those spinning animations that indicate a computer is stuck on something may soon be a relic of the past that people will recall with fondness, the way many of us do with TV “snow” today.
Increasingly, more of the media we consume will actually be generated by A.I. Google’s open-source Magenta project has created an array of applications that make music indistinguishable from that of human composers and performers.
The research institute OpenAI has created MuseNet, which uses artificial intelligence to blend different styles of music into new compositions. The institute also has Jukebox, which creates new songs when given a genre, an artist and lyrics, some of which are themselves co-written by A.I.
These are early efforts, achieved by feeding millions of songs into networks of artificial neurons, made from strings of computer code, until they internalize patterns of melody and harmony, and can recreate the sound of instruments and voices.
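The “internalize patterns, then generate” principle can be illustrated with a toy model far simpler than the neural networks described above: a first-order Markov chain that counts which note tends to follow which in a training melody, then samples a new melody from those learned transitions. (Systems like MuseNet and Jukebox use vastly larger networks, but the underlying idea is the same; everything here is illustrative.)

```python
import random
from collections import defaultdict

def learn_transitions(melody):
    # Count which notes follow which in the training melody --
    # a crude stand-in for "internalizing patterns of melody."
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length, seed=0):
    # Sample a new melody from the learned transitions.
    random.seed(seed)  # fixed seed so the sketch is reproducible
    note, output = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note])
        output.append(note)
    return output

training = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = learn_transitions(training)
print(generate(model, "C", 8))
```

Every note the model emits follows a transition it actually observed, so the output sounds like the training data without copying it outright, which is the generative trick writ small.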
Musicians are experimenting with these tools today, and a few start-ups are already offering A.I.-generated background music for podcasts and video games.
Artificial intelligence is as abstract as thought, written in computer code, but people imagine A.I. embodied in humanoid form. Robotic hardware has a lot of catching up to do, however; for now, the more convincing embodiments live on screens. Realistic, A.I.-generated avatars will have A.I.-generated conversations and sing A.I.-generated songs, and even teach our children. Deepfakes also exist, in which the face and voice of one person, for example, are transposed onto a video of another. We’ve also seen realistic A.I.-generated faces of people who don’t exist.
Researchers are working on combining the technologies to create realistic 2D avatars of people who can interact in real time, showing emotion and making context-relevant gestures. A Samsung-associated company called Neon has introduced an early version of such avatars, though the technology has a long way to go before it is practical to use.
Such avatars could help revolutionize education. Artificial intelligence researchers are already developing A.I. tutoring systems that can track student behavior, predict their performance and deliver content and strategies to both improve that performance and prevent students from losing interest. A.I. tutors hold the promise of truly personalized education available to anyone in the world with an Internet-connected device — provided they are willing to surrender some privacy.
“Having a visual interaction with a face that expresses emotions, that expresses support, is very important for teachers,” said Yoshua Bengio, a professor at the University of Montreal and the founder of Mila, an artificial intelligence research institute. Korbit, a company founded by one of his students, Iulian Serban, and Riiid, based in South Korea, are already using this technology in education, though Mr. Bengio says it may be a decade or more before such tutors have natural language fluidity and semantic understanding.
There are seemingly endless ways in which artificial intelligence is beginning to touch our lives, from discovering new materials to new drugs — A.I. has already played a role in the development of Covid-19 vaccines by narrowing the field of possibilities for scientists to search — to picking the fruit we eat and sorting the garbage we throw away. Self-driving cars work; they’re just waiting for laws and regulations to catch up with them.
Artificial intelligence is even starting to write software and may eventually write more complex A.I. Diffblue, a start-up out of Oxford University, has an A.I. system that automates the writing of software tests, a task that takes up as much as a third of expensive developers’ time. Justin Gottschlich, who runs the machine programming research group at Intel Labs, envisions a day when anyone can create software simply by telling an A.I. system clearly what they want the software to do.
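One way to get an intuition for automated test writing is a toy that runs a function on sample inputs, records what comes back, and emits regression assertions. To be clear, this is a hypothetical sketch, not how Diffblue’s system works; its product analyzes code far more deeply, and every name below is invented for illustration.

```python
# Toy sketch of automated test generation: observe a function's
# behavior on sample inputs and write assertions that pin it down.

def generate_tests(func, sample_inputs):
    # For each input tuple, call the function and record the result
    # as a ready-to-run assertion string.
    lines = []
    for args in sample_inputs:
        result = func(*args)
        arg_text = ", ".join(repr(a) for a in args)
        lines.append(f"assert {func.__name__}({arg_text}) == {result!r}")
    return lines

def clamp(x, low, high):
    # An example function under test.
    return max(low, min(x, high))

for line in generate_tests(clamp, [(5, 0, 10), (-3, 0, 10), (99, 0, 10)]):
    print(line)
# assert clamp(5, 0, 10) == 5
# assert clamp(-3, 0, 10) == 0
# assert clamp(99, 0, 10) == 10
```

Even this crude version captures the appeal: the tedious part of testing, writing down what the code already does, is exactly the part a machine can do tirelessly.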
“I can imagine people like my mom creating software,” he said, “even though she can’t write a line of code.”