MIT Pushes AI Toward Self-Learning With SEAL Framework

Until now, large language models have relied on human-led retraining to adjust their reasoning and update the parameters that shape their understanding. Once deployed, their weights remain static: the models can process new information but cannot internalize it, a limitation that can leave them reactive when business decisions demand real-time adaptation.

Researchers at the Massachusetts Institute of Technology have developed the Self-Adapting Language Models (SEAL) framework to address that limitation. SEAL lets artificial intelligence (AI) systems update and adjust themselves automatically, reducing reliance on manual retraining and improving how models learn from new information.

The Problem With Fixed Knowledge

Large language models have transformed how quickly organizations can find and interpret information. Systems such as GPT-5, Claude 3.5, and Gemini 2.0 can retrieve the Federal Reserve’s latest policy statement or a company’s earnings report in seconds and summarize the key points with impressive accuracy.

That capability depends on retrieval, a process that allows a model to look up relevant data without changing how it reasons. Retrieval tells a system where to find information but not how to update its understanding based on what it learns. Once the task ends, the model’s internal logic, stored in billions of parameters or weights, remains unchanged.

In contrast, updating weights is more like being told, “Here is new information, or a new way to think; update your understanding so you can answer slightly different, or even completely different, but structurally similar questions.” Updating weights lets a model connect new information to what it already knows, helping it understand implications rather than just isolated facts.

Imagine a model used for loan approvals. Retrieval lets it pull the latest credit reports or policy updates before making a decision. But if new guidelines redefine what counts as a high-risk borrower, the model can read the update yet still evaluate applications using outdated thresholds. A model that updates its weights continuously would be more likely to infer such a change and adjust its reasoning automatically for future applications.
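For readers who want that distinction in more concrete terms, the toy PyTorch sketch below, an illustration rather than anything from the MIT work, contrasts retrieval, where new information merely passes through a frozen model, with a weight update, where a gradient step changes the parameters themselves.

```python
# Toy illustration (assumes PyTorch): retrieval vs. a weight update.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)                    # tiny stand-in for billions of parameters
before = model.weight.detach().clone()

# "Retrieval": new information arrives as input to a frozen model.
new_info = torch.randn(1, 4)
with torch.no_grad():
    _ = model(new_info)                    # the model reads the new information...
print(torch.equal(before, model.weight))   # True: ...but its weights are unchanged

# "Weight update": the same information drives a gradient step,
# so the parameters shift and future answers change.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
target = torch.tensor([[1.0]])
loss = nn.functional.mse_loss(model(new_info), target)
loss.backward()
optimizer.step()
print(torch.equal(before, model.weight))   # False: the model has internalized something
```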

Retrieval keeps a model informed, while weight updates make it adaptive. That is the gap the SEAL framework aims to close by exploring whether models can refine their understanding automatically when they encounter new information.

How SEAL Works

SEAL introduces a training loop that allows a model to generate its own learning instructions. The model writes what MIT calls self-edits, or short written explanations of what new material it wants to learn and how it should adjust its reasoning. It then generates example data to test those changes and keeps only the updates that improve its performance.
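As a rough illustration of that loop, the Python sketch below mirrors the steps described above: propose a self-edit, apply a trial update, and keep it only if measured performance improves. It is not MIT’s code; the function names are hypothetical stand-ins, and the “model” here is a plain dictionary rather than a neural network.

```python
# Illustrative sketch of the SEAL-style outer loop described in the article.
# All names (propose_self_edit, finetune_on, evaluate) are hypothetical stand-ins.
import copy

def propose_self_edit(model, new_document):
    """Stand-in: the real model would rewrite new_document into training examples."""
    return [(new_document, "implication drawn from: " + new_document)]

def finetune_on(model, examples):
    """Stand-in for a lightweight weight update applied to a copy of the model."""
    candidate = copy.deepcopy(model)
    candidate["knowledge"].extend(examples)
    return candidate

def evaluate(model, eval_questions):
    """Stand-in: fraction of evaluation questions the model can now answer."""
    known = {source for source, _ in model["knowledge"]}
    return sum(q in known for q in eval_questions) / len(eval_questions)

def seal_step(model, new_document, eval_questions):
    """One outer-loop iteration: propose a self-edit, trial-update, keep it if better."""
    baseline = evaluate(model, eval_questions)
    self_edit = propose_self_edit(model, new_document)   # the model writes its own lesson
    candidate = finetune_on(model, self_edit)            # trial weight update
    if evaluate(candidate, eval_questions) > baseline:   # keep only edits that help
        return candidate
    return model                                         # otherwise discard the update

model = {"knowledge": []}
model = seal_step(model, "revised high-risk borrower guideline",
                  ["revised high-risk borrower guideline"])
print(evaluate(model, ["revised high-risk borrower guideline"]))  # 1.0 if the edit was kept
```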

MIT tested SEAL on Meta’s Llama model, an open-weight system that lets researchers observe how parameter updates affect results. Open models like Llama make it possible to measure how self-directed learning changes a model’s behavior, something that is not yet possible with closed commercial systems such as GPT-5 or Gemini.

In experiments, SEAL helped Llama adapt to new tasks using only a handful of examples, achieving about 72% accuracy compared with 20% for standard fine-tuning. It also incorporated factual updates more efficiently than models trained on data generated by GPT-4. The findings suggest that future AI systems could continuously update their reasoning without requiring full retraining cycles.

Implications for Financial Institutions

For financial institutions, SEAL represents an early look at how AI systems might evolve from reactive to adaptive. In today’s environment, models that power credit underwriting, portfolio analysis, or compliance monitoring are retrained periodically when regulations or market data change. A self-adapting framework could shorten that cycle by allowing systems to learn from new information as it appears, reducing the lag between discovery and response.

This evolution arrives as regulators and central banks pay closer attention to AI’s growing role in financial infrastructure. Recent PYMNTS coverage of the Financial Stability Board and the Bank for International Settlements warned that financial authorities should monitor how generative AI alters risk models and governance frameworks. Policymakers are also weighing these dynamics at the national level. A House hearing on AI in banking earlier this year highlighted both the promise of automation and the risk of bias and opacity, with lawmakers urging stronger oversight as financial institutions expand their AI budgets.

“It can be very difficult to gain a customer’s trust, but then, once they’ve given you the privilege of holding their money or lending credit to them, you have to keep that trust,” Melissa Douros, chief product officer at Green Dot, told PYMNTS. She emphasized that financial services firms cannot afford to treat AI as a “black box” or obscure how models operate. “We should be able to expose how we’re using [AI], what’s the data that’s being ingested and what’s being spit out at any time anyone asks, especially a regulator,” she added.

Source: https://www.pymnts.com/