AI researchers build ‘future self’ chatbot to inspire wise life choices

If your carefully crafted life plan has been scuppered by sofa time, bingeing on fast food, drinking too much and failing to contribute to the company pension, it may be time for a chat with your future self.

Without ready access to a time machine, researchers at the Massachusetts Institute of Technology (MIT) have built an AI-powered chatbot that simulates a user’s older self and dishes out observations and pearls of wisdom. The aim is to encourage people to give more thought today to the person they want to be tomorrow.

With a profile picture that is digitally aged to show youthful users as wrinkly, white-haired seniors, the chatbot generates plausible synthetic memories and draws on a user’s present aspirations to spin tales about its successful life.

“The goal is to promote long-term thinking and behaviour change,” said Pat Pataranutaporn, who works on the Future You project at MIT’s Media Lab. “This could motivate people to make wiser choices in the present that optimise for their long-term wellbeing and life outcomes.”

In one conversation, a student who hoped to be a biology teacher asked the chatbot, a simulated 60-year-old version of herself, about the most rewarding moment in her career. The chatbot said it was a retired biology teacher in Boston and recalled a special moment when it helped a struggling student turn their grades around. “It was so gratifying to see the student’s face light up with pride and accomplishment,” the chatbot said.

To interact with the chatbot, users are first prompted to answer a series of questions about themselves, their friends and family, the past experiences that shaped them, and the ideal life they envisage for the future. They then upload a portrait image, which the program digitally ages to produce a likeness of the user at 60.

Next, the program feeds information from the user’s answers into a large language model that generates rich synthetic memories for the simulated older self. This ensures that when the chatbot responds to questions, it draws on a coherent backstory.

The final part of the system is the chatbot itself, powered by OpenAI’s GPT-3.5, which introduces itself as a potential older version of the user that is able to talk about its life experiences.
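In outline, the pipeline described above amounts to turning questionnaire answers into a backstory that conditions the language model’s role-play. The sketch below illustrates how such a persona prompt might be assembled; the function name, field names and wording are illustrative assumptions, not the Future You project’s actual code.

```python
# Illustrative sketch only: assembles a "future self" persona prompt
# from questionnaire answers. Field names and phrasing are assumptions,
# not the actual Future You implementation.

def build_persona_prompt(answers: dict, target_age: int = 60) -> str:
    """Combine questionnaire answers into a system prompt for an LLM
    asked to role-play the user's older self."""
    backstory = "; ".join(f"{key}: {value}" for key, value in answers.items())
    return (
        f"You are a {target_age}-year-old version of the user. "
        "You are not a prediction, only one possible future self. "
        f"Draw on this backstory when you answer: {backstory}"
    )

# Example questionnaire answers (invented for illustration)
answers = {
    "current goal": "become a biology teacher",
    "formative experience": "a mentor who encouraged me",
    "ideal future": "teaching in Boston",
}
prompt = build_persona_prompt(answers)
print(prompt)
```

In the real system this prompt would be sent, along with LLM-generated synthetic memories, to the chat model that answers the user’s questions in character.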

Pataranutaporn has had several conversations with his “future self”, but said the most profound was when the chatbot reminded him that his parents would not be around for ever, so he should spend time with them while he could. “The session gave me a perspective that is still impactful to me to this day,” he said.

Users are told the “future self” is not a prediction but rather a potential future self based on the information they provided. They are encouraged to explore different futures by changing their answers to the questionnaire.

According to a preprint scientific paper on the project, which has not been peer-reviewed, trials involving 344 volunteers found that conversations with the chatbot left people feeling less anxious and more connected to their future selves. This stronger connection should encourage better life decisions, Pataranutaporn said, from focusing on specific goals and exercising regularly to eating healthily and saving for the future.

Ivo Vlaev, a professor of behavioural science at the University of Warwick, said people often struggled to imagine their future self, but doing so could drive greater persistence in education, healthier lifestyles and more prudent financial planning.

He called the MIT project a “fascinating application” of behavioural science principles. “It embodies the idea of a nudge – subtle interventions designed to guide behaviour in beneficial ways – by making the future self more salient and relevant to the present,” he said. “If implemented effectively, it has the potential to significantly impact how people make decisions today with their future wellbeing in mind.”

“From a practical standpoint, the effectiveness will likely depend on how well it can simulate meaningful and relevant conversations,” he added. “If users perceive the chatbot as authentic and insightful, it could significantly influence their behaviour. However, if the interactions feel superficial or gimmicky, the impact might be limited.”
