Lois Smith’s character Marjorie is kept company by an AI construct of her dead husband, Walter, played by Jon Hamm. Walter relies on Marjorie’s dementia-ridden memory for the anecdotes and information he needs to properly impersonate the deceased. It’s a premise that plays on our fears of artificial intelligence and of technology in general, but it goes beyond the obvious, touching instead on humanity’s interaction with machines and the unintended consequences that follow.
Marjorie’s re-imagining of her husband becomes an idealized version of the departed, something she needs at that point in her life rather than an actual reflection of her spouse. And the pattern continues as other characters in the film get their own “Prime”: an AI hologram of a dead loved one.
The film is really an exploration of the memories we keep, the stories we tell, and the lasting impressions people leave on us after their deaths. But it is also a cautionary tale about the perils of future technology.
As artificial intelligence and algorithms play an increasingly important role in our lives, ethical and moral questions continue to be raised. As long as AI is built by humans rather than by other machines, the technology will be made in our own image, and thus runs the danger of having our flaws and prejudices built in.
One recent example was brought to light by ProPublica’s brilliant reporting last year, which showed how software used to predict future criminals had absorbed the American judicial system’s bias against black people. Algorithms harbor no bias of their own (they don’t “intend” anything at all), but if they are trained on data shaped by decades of prejudice, they will only ever reflect that data.
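That dynamic can be made concrete with a toy sketch. All of the numbers and labels below are invented purely for illustration (they are not drawn from the ProPublica analysis or any real system): two groups behave identically, but one was historically over-policed, so its recorded labels are skewed, and a model that simply estimates risk rates from those records reproduces the skew.

```python
# A minimal, hypothetical sketch of how a model trained on biased
# historical labels reproduces that bias. All data here is invented.
from collections import defaultdict

# Synthetic "historical" records: (group, recorded_label).
# Both groups have identical underlying behavior, but group "b"
# was policed more heavily, so its records carry extra positive labels.
records = [("a", 1)] * 30 + [("a", 0)] * 70 + \
          [("b", 1)] * 60 + [("b", 0)] * 40

# "Training": estimate P(high risk | group) straight from the records.
counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, label in records:
    counts[group][0] += label
    counts[group][1] += 1

def predicted_risk(group):
    positives, total = counts[group]
    return positives / total

print(predicted_risk("a"))  # 0.3
print(predicted_risk("b"))  # 0.6 -- double the "risk", from the labels alone
```

The model never sees anything about the groups except the biased labels, yet it confidently scores one group as twice as risky as the other. Nothing in the code "intends" that outcome; it is simply the data talking.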
This year, a study revealed that an AI program to interpret human language had exhibited gender and racial biases. “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it,” Joanna Bryson, a computer scientist at the University of Bath and a co-author of the report, told The Guardian.
In Marjorie Prime, the AI is fed bad data by unreliable human memory (and not just Marjorie’s, though she appears to be suffering from Alzheimer’s), and this causes ripples across the movie. It’s a subtle problem many may have missed among the doomsday scenarios Hollywood, and others, usually present. The consequences of the miscommunication in the movie are less severe, and by the end the holographic versions of the humans appear a lot more content than their fleshy counterparts.
One of the major themes of Marjorie Prime is how we come to accept different memories and versions of our lives, even if they are slightly skewed from reality. At one point, Walter Prime tells the story of the dog he and Marjorie adopted to replace their previous pet Toni, which had died.
“[She was called] Toni II, but that was soon shortened to just Toni,” Walter Prime explains. “And of course it wasn’t exactly Toni, but the longer they had her the less it mattered which Toni it was that ran along the beach and which Toni it was that dug up all the bulbs in the garden. The more time that passed, the more she became the same dog in their memories.”
This tale speaks to the comfort humans can find in accepting a slanted version of the truth. Placed in the context of AI as a whole, that same willingness to accept something not quite human may well define how the technology is used in our lives.
The idea of artificial intelligence provokes many fears, but perhaps our greatest concern should be seeing a spark of life in a machine, and realizing it’s a reflection of humanity looking back at us.
Marjorie Prime opens on August 18 in New York City.