
Are we approaching the foothills of GenAI recursive self-improvement, and what might that mean for regulating AI?

In today's episode of exciting but slightly scary GenAI advances, we have a new paper (released earlier today) that describes a GenAI model independently discovering new algorithms that measurably improve GenAI performance. In this post we step through why this matters, and what it might mean for future regulation.

Why it matters

Advances in AI model capabilities are (broadly speaking) currently limited by compute, data and human creativity in algorithm discovery. Models that can automatically discover new or optimised algorithms that improve GenAI performance (as the new paper appears to demonstrate) reduce the need for human creativity, and could accordingly advance AI capabilities both significantly and rapidly over the next few years.

That is particularly important because of the increasing expense associated with obtaining compute for frontier GenAI models and because (incredible as it might seem) leading AI developers may be running out of useful training data for models.

“All you need is scale” BUT the models are running out of (useful) data

Frontier GenAI models are trained on vast volumes of publicly available data (i.e. the internet), and feeding more data to those models has previously led to significant performance increases, a relationship captured by the so-called “scaling laws”.

However, that data (particularly the useful, information-rich portions of it) is a finite resource that is running out — the point at which the (useful) data runs out is known as the “data wall”. 

The data wall is a potential problem for AI advances because the capabilities of current frontier GenAI models have generally scaled with the volume of their training data, so a cap on useful training data could lead to a “plateau” in advances in GenAI model capabilities.
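To make the scaling-law intuition (and the data wall) concrete, here is a minimal sketch using a Chinchilla-style loss formula. The constants are loosely based on published scaling-law fits, and the 20-tokens-per-parameter data budget and 10-trillion-token “wall” are illustrative assumptions rather than figures from the paper discussed above.

```python
# Illustrative sketch only: a Chinchilla-style scaling law,
# loss = E + A / N**alpha + B / D**beta, showing why a cap on useful
# training data (the "data wall") can flatten capability gains even as
# model size keeps growing. Constants are loosely based on published
# scaling-law fits and are used purely for illustration.

def predicted_loss(params: float, tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted training loss for a model with `params` parameters
    trained on `tokens` tokens (illustrative constants)."""
    return E + A / params**alpha + B / tokens**beta

DATA_WALL = 10e12  # assume ~10 trillion useful tokens exist (illustrative)

for params in (7e9, 70e9, 700e9):
    ideal_tokens = 20 * params                 # rule-of-thumb data budget
    capped_tokens = min(ideal_tokens, DATA_WALL)
    print(f"{params:.0e} params: "
          f"loss with ideal data {predicted_loss(params, ideal_tokens):.3f}, "
          f"loss at the data wall {predicted_loss(params, capped_tokens):.3f}")
```

On these illustrative numbers, the largest model gains noticeably less once its data budget is capped at the wall, which is the plateau effect described above.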

By contrast, compute (the ability to throw processing power at these increasingly vast hoards of data) keeps increasing to handle those datasets, but at commensurate cost (witness the 200%+ increase in Nvidia's stock price over the last year).

So, what next?

In order to navigate the data wall, leading model developers are exploring training models on “synthetic” data, and/or looking to access other caches of less public but still useful data. However, it remains to be seen how productive such approaches will be.

If the data wall does impose a hard (or even soft) cap on AI advancement, and given the ever-spiralling cost of compute for the largest models, it may well be algorithm discovery and optimisation that account for the majority of AI advances in the near future: in particular, models that generalise better from the same amount of data or can search for relevant information more efficiently.

Models that could independently discover ways to improve themselves in measurable ways, building on the advances exemplified in this new paper, represent a path towards AI recursive self-improvement. 

The pace of AI improvement is already fast, but models that can identify how to improve themselves would accelerate that progression to a blistering pace. Unlike the work of individuals with machine learning PhDs, the progress such models could make would be limited only by compute, rather than by human constraints such as the need to sleep.

Such models would represent significant alignment risks (i.e. they might improve themselves in ways humans do not or cannot understand or control), but the speed of technological advancement they could unleash is likely to make them difficult to resist. 

The intersection of model capabilities with regulation

Algorithmic improvements that lead to better model capabilities (particularly in a recursive fashion) are also likely to prove more difficult to evaluate (and therefore regulate) than simpler proxies for model capability, such as the number of parameters in a model or the amount of compute used to train it.

One example of where these developments will interact with regulation is the EU AI Act, which currently requires model providers to notify the Commission within two weeks where their model qualifies as a “general-purpose AI model with systemic risk” by reaching a training-compute threshold, currently set at 10^25 floating-point operations (Articles 51 and 52).
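For a sense of how that compute threshold might be assessed in practice, the sketch below estimates cumulative training compute using the common “6 × parameters × training tokens” rule of thumb for dense transformers; the model size and token count are hypothetical, and the approximation is not a methodology prescribed by the Act.

```python
# Rough, illustrative estimate of cumulative training compute against the
# EU AI Act presumption threshold of 1e25 FLOPs (Article 51(2)).
# "6 * parameters * training_tokens" is a common rule of thumb for dense
# transformer training, not a methodology prescribed by the Act.

AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

# Hypothetical frontier model: 400 billion parameters, 10 trillion tokens.
flops = estimated_training_flops(parameters=400e9, training_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk (Art. 51(2))?", flops > AI_ACT_THRESHOLD_FLOPS)
```

On those hypothetical figures the estimate comes out at roughly 2.4 × 10^25 FLOPs, comfortably above the presumption threshold, which is precisely the kind of calculation the notification obligation turns on.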

However, consistent with the points made above, the Act anticipates that the threshold may need to be amended over time to reflect technological developments such as “algorithmic improvements”, so that it continues to track the state of the art.

Providers of frontier models will accordingly need to keep close tabs on models (recursively self-improving or otherwise) that might be said to give rise to “systemic risk” and incur associated obligations under the Act — particularly given that the legislative goalposts may move. See here for a broader summary of the EU AI Act.

The take-home

Consistent with the ongoing trend, leading AI developers and researchers will need to make finely judged decisions as to the risks and rewards of AI innovation, while governments, regulators and the rest of the world will need to run (or perhaps sprint) to keep up.
