LADDER: Self-improving LLMs through recursive problem decomposition

by fofoz on 3/7/25, 6:45 AM with 110 comments
by EMIRELADERO on 3/7/25, 8:07 AM

What the hell is going on this week?!?!? (asking positively, with a smile on my face)

I have seen at least 3 interesting/mildly promising breakthroughs in ML just these past two days! I mean, a Google research team just discovered that you can combine NNs with CAs (cellular automata) using digital logic gates as a medium, so you could potentially reduce many kinds of non-linear problems to a simple, efficient digital circuit! And it was on the HN front page, TODAY![1]
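
(For intuition, here is a toy version of the trick as I understand it; this sketch is my own illustration, not code from the linked paper. "Differentiable logic" relaxes Boolean gates to real-valued operations on [0,1], so a circuit can be trained by gradient descent and then hardened back into digital gates.)

   # Product relaxation of Boolean gates (illustrative only, not the paper's code).
   def soft_and(a, b): return a * b
   def soft_or(a, b):  return a + b - a * b
   def soft_not(a):    return 1.0 - a

   # At the Boolean corners these reproduce the exact truth tables:
   for a in (0.0, 1.0):
       for b in (0.0, 1.0):
           print(int(a), int(b), soft_and(a, b), soft_or(a, b))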

I keep seeing more mind-bending stuff related to neural nets and logic/intelligence in general, and my mind has been running wild with speculation about the future and just how close we could (or could not) be to truly understanding how intelligence works from first principles.

[1] https://news.ycombinator.com/item?id=43286161

by isaacfrond on 3/7/25, 9:47 AM

Reminds me of a quote by the famous number theorist Hendrik Lenstra:

For every problem you can't solve, there's a simpler problem that you also can't solve.

by barteloniu on 3/7/25, 1:50 PM

Their test-time RL approach seems a bit fishy. From what I understand, TTRL works by asking a language model to generate simpler versions of the test case. Once we have the simpler problems, we run RL on them, hoping that an improvement on the simplified cases will also strengthen the model's performance on the original problem.

The issue is, they use a numerical integrator to verify the answers to the simpler problems. One could imagine a scenario where a barely simpler problem is generated, and the model effectively gets to train on pretty much the test case with the ground truth known. Seems like training on the test set.
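
(For concreteness, here is roughly what such a numerical check could look like; this is my own sketch, under the assumption that the checker compares the derivative of a proposed antiderivative against the integrand, and all names are illustrative.)

   import numpy as np

   def verify_antiderivative(f, F, a=-1.0, b=1.0, n=64, tol=1e-4):
       """Accept F as an antiderivative of f iff F' ~= f at sampled points."""
       xs = np.linspace(a, b, n)
       h = 1e-5
       dF = (F(xs + h) - F(xs - h)) / (2 * h)  # central finite differences
       return float(np.max(np.abs(dF - f(xs)))) < tol

   # e.g. the model claims integral of 2x dx = x^2 (+ C):
   print(verify_antiderivative(lambda x: 2 * x, lambda x: x ** 2))  # True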

The rest of the paper is nice though.

by mentalgear on 3/7/25, 7:59 AM

> We demonstrate LADDER's effectiveness in the subject of mathematical integration, improving Llama 3.2 3B's accuracy from 1% to 82% on undergraduate-level problems

by niemandhier on 3/7/25, 9:27 AM

Frank Herbert knew it: this is basically an implementation of the Mentats' recursive self-inspection described in Dune.

by Davidzheng on 3/7/25, 8:23 AM

Test-time training/RL is definitely the right approach for math AI going forward. It is probably one of only a few ways to spend obscene amounts of compute on a given problem (think 10^5 GPUs for a few days) that has hopes of making progress where test-time inference scaling may not at first (think of trying MCTS on a Go position with a bad value/policy net). AlphaProof already did this, but it's nice to see it done again. Good results!

by neoneye2 on 3/7/25, 8:42 AM

Sidenote: `Tufa Labs` team includes the `MindsAI` team of ARC-AGI fame. https://tufalabs.ai/team.html

by pyryt on 3/7/25, 10:25 AM

Some names are just too tempting: https://arxiv.org/abs/1507.02672

by thomasahle on 3/7/25, 2:51 PM

At the end of the paper they mention "two problems from the 2025 MIT Integration Bee qualifying exam which the system continued to answer incorrectly".

They say the questions were among the most complex questions on the exam, but the first one is just

   ∫ (x·(x·(x·(x·(x·⋯)^(1/7))^(1/6))^(1/5))^(1/4))^(1/3) dx
which just requires you to compute the exponent of x in the integrand:

   1/3 + 1/(3*4) + 1/(3*4*5) + ...
So hardly very advanced math.
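
(A quick numeric sanity check of that claim, my own snippet using only the standard library: since 1/(3·4·…·k) = 2/k!, the series telescopes to 2e − 5.)

   import math

   total, prod = 0.0, 1.0
   for k in range(3, 20):
       prod *= k               # 3, 3*4, 3*4*5, ...
       total += 1.0 / prod     # 1/3 + 1/(3*4) + 1/(3*4*5) + ...
   print(total)                # ~0.436564
   print(2 * math.e - 5)       # ~0.436564, since 1/(3*4*...*k) = 2/k!

So the integrand is x^(2e−5), and the integral is x^(2e−4)/(2e−4) + C.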

by vessenes on 3/7/25, 8:50 PM

That this works at all is pretty interesting. That it seems to work very well with math is quite interesting.

That said, this paper is part of the broader move right now toward blurring the line between training and inference: part of their method involves doing some reinforcement learning on questions they don't know the answer to, but can decompose into simpler questions, and using GRPO on those with a numerical 'checker'. The reinforced model can then answer more questions.
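
(For readers unfamiliar with GRPO, its core is a group-relative advantage: sample several answers per problem, score each with the checker, and normalize rewards within the group. A minimal sketch of that computation, my own code with illustrative names, not the paper's:)

   import numpy as np

   def grpo_advantages(rewards):
       """rewards: checker scores for G sampled answers to one problem."""
       r = np.asarray(rewards, dtype=float)
       return (r - r.mean()) / (r.std() + 1e-8)  # group-relative baseline

   # e.g. 4 sampled solutions; the numerical checker accepted two of them:
   print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # ~[ 1, -1,  1, -1]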

I like this. I think humans do this a lot: mulling on something, turning it over in their heads, analogizing, etc. Adding test-time training is a way to do a lot more thinking than adding tokens to the context under fixed inference.

Just as DeepSeek and o1/o3 show that we can increase capacity with inference-time token generation and assessment, it looks like we can increase capacity with inference-time automated fine-tuning as well.

I'd hope that as these techniques solidify we'll have a new way to talk and think about this -- they are all part of the same fundamental process at some level.

Either way, super cool.

by mentalgear on 3/7/25, 7:57 AM

It's exciting to see approaches like RL and curriculum learning, which I always felt were the way to go for real self-improvement back when I was training robotics models ~7 years ago (OpenAI Gym days), finally being applied successfully to NLP/LLMs to give small models a big performance boost.

(LADDER is a sort of self-generated RL curriculum learning approach.)

by flakiness on 3/7/25, 7:28 PM

Off topic, but their site is lovely: https://tufalabs.ai/index.html It feels like a gold rush for sure.

by cratermoon on 3/7/25, 5:39 PM

How many rungs of a ladder would you be willing to climb if you knew that each rung was made from half the previous rung?

by daxfohl on 3/7/25, 7:35 PM

How much GPU time would RL like this need for tuning? Is the approach something someone could experiment with themselves, or is it thousands of dollars in cloud costs and/or years of compute on a laptop GPU?

by nis0s on 3/7/25, 2:52 PM

What’s the difference between this and what Wolfram Alpha has been doing?

https://www.wolfram.com/artificial-intelligence/

by explosion-s on 3/7/25, 1:04 PM

I would love to be able to use the actual model! If I'm understanding correctly, this makes small models as intelligent as much larger models like GPT-4o.

by evjan on 3/7/25, 4:29 PM

I had NotebookLM make a 15 min podcast about it and listened to it while walking the dogs. It was a very interesting way of trying to understand a research paper!

You need a Google account to access it, unfortunately. https://notebooklm.google.com/notebook/fbaba495-d4f2-48a3-a3...

by goyel on 3/7/25, 2:45 PM

I wonder why nobody has made a NN to find the weights faster and better than gradient descent.

by majordroid on 3/7/25, 7:50 AM

> We also introduce TTRL (Test-Time Reinforcement Learning), where we perform reinforcement learning on variants of test problems at inference time. TTRL enables Qwen2.5 7B Deepseek-R1 Distilled to achieve a state-of-the-art score of 90% on the MIT Integration Bee qualifying examination, surpassing OpenAI o1's performance.

That's incredible!

by revskill on 3/7/25, 11:28 AM

The LLM keeps deleting my file content, which proves we still have plenty of work to do.

by bloomingkales on 3/7/25, 8:20 AM

I’m kinda getting the sense this is still just prompt engineering in a loop.

> Persona-based prompting: We prompted the model to adopt different mathematical perspectives (e.g., "think like Euler focusing on series", "approach like Gauss looking for patterns").

I mean … I guess that’s scientific?

Besides that, how can the model learn at test time (at inference)? It's stateless; it doesn't incorporate the last prompt into the model.
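
(As I read the paper, the answer is that TTRL is not stateless: it clones the weights and fine-tunes the clone on simpler variants of the current test problem before answering. A stub-only outline of that flow; every function here is a stand-in of mine, not an API from the paper:)

   def generate_variants(weights, problem, n=3):
       # Ask the model for n simpler variants of `problem` (stubbed).
       return [f"simpler({problem}, level={i})" for i in range(1, n + 1)]

   def verify(answer, variant):
       # Numerical checker for a variant's answer (stubbed as "correct").
       return 1.0

   def rl_update(weights, variant, reward):
       # One policy-gradient step on the cloned weights (stubbed no-op).
       return weights

   def ttrl_answer(base_weights, problem):
       weights = dict(base_weights)         # clone; base model is untouched
       for v in generate_variants(weights, problem):
           weights = rl_update(weights, v, verify(f"ans({v})", v))
       return f"ans({problem})"             # answer with the tuned clone

   print(ttrl_answer({"w": 0.0}, "integrate x*sin(x)"))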

by ma9o on 3/7/25, 6:50 PM

divide and conquer :)