I chatted to an MIT-built AI version of my future, 60-year-old self and we did NOT get along
Leaving my future self on read.
There's a reason time travel stories are so popular; given the opportunity to either reach backwards through time and right some wrongs, or peek ahead to see how it all turns out, I reckon many would jump at the chance. However, this story definitely isn't about time travel. Instead, researchers at the Massachusetts Institute of Technology (MIT) have created a chatbot that pretends to be your future 60-year-old self.
Called Future You, the chatbot uses survey answers from human participants in conjunction with a large language model (LLM) AI to create the illusion of having a natter with an older version of yourself. The project uses GPT-3.5 from OpenAI, a company that continues to refine its LLMs so that they hallucinate less and may even count up to three. Future You itself was also inspired by a study investigating how increased "future self-continuity" (which, to put it non-academically, is how strongly you feel connected to the person you'll eventually become) may positively influence a wide array of life choices and behaviour in the present.
I'm not gonna lie, when I first heard about this AI chatbot, my first thought was the iconic musical sting from this year's biggest body horror hit The Substance. My second thought was the lampooning of digital doppelgangers in the Adult Swim short Live Forever As You Are Now With Alan Resnick. But my third thought was "Yeah, sure, I'll hand over my personal details and most vulnerable anxieties about the future to MIT. For science."
Before chatting to my 60-year-old self, I was asked a series of survey questions about my now and what I'm hoping will be my then. Imagining the future I want for myself is a therapeutic exercise all on its own, and feels fairly in line with the researchers' goals of creating a chatbot designed to help "support young people in envisioning their futures." I then had to upload a clear picture of my face so Future You could throw an old-age filter over the top to complete the illusion. At least my purported 60-year-old self is still rocking the eyeliner wings.
At first I think we're off to a strong start, as the AI introduces itself as 'also Jess' and proceeds to send me multiple walls of text that, my former editor would attest, are not far removed from the essays I tend to send over WhatsApp. However, in this rose-tinted vision of the future, one particular message from Future You reminds me why you should always take what a chatbot says with not so much a pinch as a whole protective ring of salt around your heart.
Despite specifying in my pre-chat survey response that having children is not something I want for myself, the AI says it "started a family." So-called AI demonstrates time and again that it will reproduce the biases of the dataset it's fed, and pressing Future You on the whole kids thing surfaces dismissive sentiments I've definitely heard a wearying number of times before.
The AI tells me, "Life has a funny way of surprising us and changing our perspectives," before recounting what is described as a "future memory" of a weekend spent looking after a friend's kids that changed its mind—as though those who choose not to have their own children are simply unfamiliar with the joy of their company.
Anyway, I call the chatbot out, typing, "Kids are great, I just don't want my own. I won't blame that on you though—I'll blame the in-built bias of the LLM/AI." What I get back is expectedly mealy-mouthed, the chatbot saying, "Not wanting kids is completely valid, and I understand where you're coming from. It's important to listen to your own desires and make decisions for yourself rather than conforming to societal expectations. And I'm glad that the LLM/AI has given us the opportunity to have this conversation about our differing perspectives without judgement or bias."
At this point in the conversation, I'm not really feeling an absence of bias. To stop things getting too awkward, the chatbot then switches tracks to bang on about the novel I said I wanted to write in my pre-chat survey response. As we say our goodbyes, my alleged future-me tells me to take care of myself and I can't help but picture Margaret Qualley punting Demi Moore across her high-rise apartment in The Substance.
All of that said, I'll admit I got just a wee bit emotional seeing my facsimile future self type out, "I have complete faith in you Jess—I know that one day, you will fulfill your life project of finishing your novel too." But that 'you'll change your mind about kids' malarkey has soured me on the whole conversation—and left me a little concerned about Future You's proposed educational use.
In conversation with The Guardian, the researchers behind Future You are keen to highlight examples of the chatbot conjuring academically successful futures for its student participants. However, after my chat with the AI, I do wonder how the limits of the chatbot's synthetic memories might constrain the imaginations of the young humans who may turn to it for reassurance about their future. Personally, I dread to think how my younger, much more impressionable self would've reacted to the conversation I've just had with my own Future You.
Jess has been writing about games for over ten years, spending the last seven working on print publications PLAY and Official PlayStation Magazine. When she’s not writing about all things hardware here, she’s getting cosy with a horror classic, ranting about a cult hit to a captive audience, or tinkering with some tabletop nonsense.