Discussion about this post

Neville Clemens:

Nice post! LLMs are also being trained on increasingly niche content as part of post-training. For example, PhD-level experts are writing advanced math reasoning problems specifically to train models.

The question is whether LLMs will always be limited by their training data. With the emergence of reinforcement learning, it appears that models are learning to figure things out themselves (e.g., DeepSeek's "aha" moments captured in its chain of thought while working through a problem).

So between that and the fact that most coding is not revolutionary (just as most house-building projects are not revolutionary), this approach could plausibly cover most use cases soon!
