MIT Researchers Enable Large Language Models to Learn Like Humans

A groundbreaking approach developed by researchers at the Massachusetts Institute of Technology (MIT) allows large language models (LLMs) to update themselves with new information, simulating a human-like learning process. Traditionally, once an LLM is trained and deployed, its ability to adapt is limited; it cannot incorporate new knowledge permanently. This new method enables LLMs to generate their own study materials based on user input, enhancing their capacity to retain and utilize information over time.

In a typical classroom setting, students take notes to reinforce learning, but LLMs do not have a similar mechanism. According to Jyothish Pari, an MIT graduate student and co-lead author of the study, “Just like humans, complex AI systems can’t remain static for their entire lifetimes.” The research showcases how LLMs could evolve to better respond to user interactions and adapt to diverse tasks in dynamic environments.

Introducing SEAL: A New Framework for Self-Adaptation

The innovative framework, named SEAL (Self-Adapting LLMs), empowers LLMs to create synthetic data from user inputs. This data serves as a means for the model to internalize new knowledge. The model engages in a trial-and-error process, generating multiple self-edits to determine which adaptations lead to the most significant performance improvements.
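The outer loop described above — generate several candidate self-edits, adapt on each, and keep whichever yields the best downstream performance — can be sketched in miniature. This is an illustrative toy, not the MIT team's implementation: the function names are hypothetical, and the "model" and "fine-tuning" here are stand-in placeholders for a real LLM and its update step.

```python
def generate_self_edits(passage, n=4):
    """Stand-in for the LLM writing n candidate 'study notes'
    (self-edits) about a new passage of information."""
    return [f"note-{i}: {passage[:20]}" for i in range(n)]

def adapt_and_score(model_state, edit, eval_fn):
    """Stand-in for fine-tuning a copy of the model on one
    self-edit, then scoring it on downstream questions."""
    adapted = model_state + [edit]          # toy 'weight update'
    return adapted, eval_fn(adapted)

def seal_step(model_state, passage, eval_fn, n_candidates=4):
    """One trial-and-error step: try each candidate self-edit
    and keep the adaptation with the best evaluation score."""
    best_state, best_score = model_state, float("-inf")
    for edit in generate_self_edits(passage, n_candidates):
        adapted, score = adapt_and_score(model_state, edit, eval_fn)
        if score > best_score:
            best_state, best_score = adapted, score
    return best_state, best_score
```

In the real system the evaluation signal would come from question-answering accuracy after fine-tuning; here any scoring function can be plugged in as `eval_fn` to exercise the selection loop.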

During experiments, the researchers observed accuracy gains of nearly 15 percent on question-answering tasks and success-rate improvements of more than 50 percent on skill-acquisition tasks. These results illustrate SEAL's potential to let smaller models outperform larger LLMs.

Pari emphasizes the importance of giving LLMs a mechanism similar to human learning capabilities. “By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” he explains.

Challenges and Future Directions

Despite these promising results, the researchers acknowledge limitations, particularly the phenomenon known as catastrophic forgetting. As LLMs adapt to new information, there is a risk that their performance on previously learned tasks may decline. Mitigating this issue will be a focus of future research.

Additionally, the team envisions applying SEAL in multi-agent environments where several LLMs can learn from each other. Adam Zweiger, another co-lead author and MIT undergraduate, notes, “One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information.”

The work is set to be presented at the Conference on Neural Information Processing Systems and is supported by organizations including the U.S. Army Research Office, the U.S. Air Force AI Accelerator, and the MIT-IBM Watson AI Lab. As researchers continue to refine these models, the hope is that they will enable AI systems to learn and adapt much like humans, ultimately contributing to advancements in various fields, including science and technology.

