Friday, September 15, 2023

Mesa-optimization algorithms in Transformers [pdf]

[Submitted on 11 Sep 2023]

Title: Uncovering mesa-optimization algorithms in Transformers

Authors: Johannes von Oswald and 11 other authors

Abstract: Transformers have become the dominant model in deep learning, but the reason for their superior performance is poorly understood. Here, we hypothesize that the strong performance of Transformers stems from an architectural bias towards mesa-optimization, a learned process running within the forward pass of a model consisting of the following two steps: (i) the construction of an internal learning objective, and (ii) its corresponding solution found through optimization. To test this hypothesis, we reverse-engineer a series of autoregressive Transformers trained on simple sequence modeling tasks, uncovering underlying gradient-based mesa-optimization algorithms driving the generation of predictions. Moreover, we show that the learned forward-pass optimization algorithm can be immediately repurposed to solve supervised few-shot tasks, suggesting that mesa-optimization might underlie the in-context learning capabilities of large language models. Finally, we propose a novel self-attention layer, the mesa-layer, that explicitly and efficiently solves optimization problems specified in context. We find that this layer can lead to improved performance in synthetic and preliminary language modeling experiments, adding weight to our hypothesis that mesa-optimization is an important operation hidden within the weights of trained Transformers.
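
To make the hypothesis concrete, below is a minimal sketch (plain NumPy, not the authors' code) of the two-step process the abstract describes: form an internal least-squares objective from the context, then solve it by gradient descent within a single forward pass. Every concrete choice here (ridge strength, step size, number of steps) is an illustrative assumption rather than a detail from the paper; the proposed mesa-layer goes further and solves such in-context optimization problems exactly inside an attention layer.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic in-context regression: context pairs (x_j, y_j) generated by an
# unknown linear map W_true, plus a query input x_q whose target we predict.
d_in, d_out, n_ctx = 4, 2, 32
W_true = rng.normal(size=(d_out, d_in))
X = rng.normal(size=(n_ctx, d_in))          # context inputs, one per row
Y = X @ W_true.T                            # context targets
x_q = rng.normal(size=d_in)                 # query input

def mesa_optimize(X, Y, lr=0.05, steps=100, lam=1e-3):
    # Step (i): internal objective -- ridge-regularized squared error of a
    # linear model W on the context tokens.
    # Step (ii): solve it with plain gradient descent; the resulting "fast
    # weights" W exist only for the duration of this forward pass.
    W = np.zeros((Y.shape[1], X.shape[1]))
    for _ in range(steps):
        grad = (W @ X.T - Y.T) @ X / len(X) + lam * W
        W -= lr * grad
    return W

W_mesa = mesa_optimize(X, Y)
print("mesa prediction for x_q:", W_mesa @ x_q)
print("ground-truth target:    ", W_true @ x_q)

The same fitted fast weights can be reused on held-out query points, which mirrors the abstract's observation that the learned forward-pass optimizer can be repurposed for supervised few-shot tasks.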

Submission history

From: Johannes von Oswald

[v1] Mon, 11 Sep 2023 22:42:50 UTC (4,051 KB)



from Hacker News https://ift.tt/qL2waHX
