LLMs are Beta
Using LLMs is like index investing. When you are functioning outside your area of expertise, you had better use them. The real skill now is in identifying when to use LLMs, and how!
The more perceptive of you might know that I started off my career in capital markets and derivatives. And so I overindex on that when I’m writing (I was reminded of this the other day when I got Claude to build itself a skill to write like me; hence I’m mentioning it).
Beta is a measure of how the stock moves relative to the market. So if the stock exactly tracks the broader market (an index, like Nifty or S&P500, for example), then it has a beta of 1. The whole field of index investing is built on just getting beta. The theory there is that searching for “alpha” (outperformance relative to the market) is expensive and risky, so if you are a vanilla retail investor, you should just focus on getting market returns (or beta). In exchange, you end up paying lower fees (most of my money is in index funds, FWIW).
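Concretely, beta is usually estimated as the covariance of the stock’s returns with the market’s returns, divided by the variance of the market’s returns. A minimal sketch in Python (the return series are made-up numbers for illustration, not real market data):

```python
# Beta = covariance(stock returns, market returns) / variance(market returns).

def beta(stock_returns, market_returns):
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

# Illustrative (made-up) daily returns for the index.
index = [0.01, -0.02, 0.015, 0.005, -0.01]
tracker = list(index)               # a fund that exactly tracks the market
amplified = [2 * r for r in index]  # a stock that moves twice as hard, both ways

print(beta(tracker, index))    # prints 1.0
print(beta(amplified, index))  # prints 2.0
```

A perfect tracker has a beta of exactly 1 by construction, which is all an index fund is trying to deliver.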
Indexing and LLMs
There are some interesting theories on what this phenomenon of indexing means for the larger financial markets - the more retail investors index, the fewer the “suckers” against whom you can get outperformance. (Relative to the index, investing is zero sum: if the index represents average performance, then outperformance and underperformance need to balance out.)
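The zero-sum point can be made precise: if the index return is the capital-weighted average of all investors’ returns, then the weighted active returns (each investor’s return minus the index) must sum to zero. A toy illustration with made-up weights and returns:

```python
# Toy market of three investors who hold the entire market between them.
# Weights and returns are made-up numbers purely for illustration.
weights = [0.5, 0.3, 0.2]       # share of total market capital each holds
returns = [0.06, 0.10, 0.025]   # each investor's return

# The cap-weighted average of all investors' returns IS the index return...
market_return = sum(w * r for w, r in zip(weights, returns))

# ...so weighted outperformance and underperformance cancel exactly.
active = [r - market_return for r in returns]
print(sum(w * a for w, a in zip(weights, active)))  # 0.0 (up to float error)
```

Whoever beats the index does so only because someone else, weighted by capital, is underperforming it.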
I’d written about indexing in an old blogpost:
Indexing has become so popular over the years that researchers at Sanford Bernstein, an asset management firm, have likened it to being “worse than Marxism”. People have written dystopian fiction about “the last active manager”. And so on.
In a sense, LLMs are like an index with a lag. Assuming that these models are constantly trained, the “intelligence” of LLMs basically represents the average intelligence of the internet from a (slightly) earlier point in time. And this lag is super useful in not making it a zero sum game.
As long as people use LLMs cleverly, the average quality of “intelligence” or “writing quality” can go up over time. Basically, if you consider yourself to be a bad writer, or are doing something you don’t know how to do, just use an LLM. If you know what you are doing, add your own intelligence.
In other words, LLMs are Beta. Or index investing. For any given task they will give you the average output. If you decide to vibe code an app or algorithm, what you get is the average.
In a lot of cases, especially where this part of the app or algorithm is not core to what you are doing, then using averages is the ideal thing to do. And you just vibe code it. However, there are certain parts where your individual skill and opinion is superior to that of the overall internet.
In the vibe coding era, this is why you are going to get paid. Rather, you get paid for two things - figuring out where you need to use your own intelligence (rather than that of the average “internet”), and then the intelligence that you apply in those cases.
What this means is that there is going to be a bunch of places where you will choose to vibe code entirely - and if you and I are given the same set of tasks, the ones that you will vibe code entirely will be different from the ones that I choose to vibe code fully (this is the difference in the relative self-perceived expertise of you and me).
Dystopian Software Engineering
You constantly see tweets from “AI leaders” about how they hardly write code by hand any more, and everything they do is vibe code. These get quoted by other tweeters who use this to make doomsday predictions that software engineering as a profession is dying (“if such luminaries vibe code everything, so can everyone. We are all doomed”).
It is true that the proportion of code written by LLMs will only go up over time. It is not funny how much “boilerplate” code most of us write, and there is absolutely no reason to write it by hand any more - an LLM is adequate. And as the LLMs’ training data improves, they will be able to write more such “boilerplate” code competently.
Even when that happens, the two skills that I mentioned above still remain:
figuring out when to use one’s own intelligence rather than using an LLM
figuring out how to solve the problems where you think an LLM is inadequate
It’s just that the proportion of the latter will monotonically go down over time.
Will it go down to zero? Not quite. For one, everyone will need to make a decision on WHAT to build. And since people will be building software for different purposes, this decision will remain.
And I can’t pull out the exact source here, but I’m pretty sure there exists a result in theoretical computer science saying that a system that can automatically write code for everyone’s requirements (including requirements not yet stated at the time the system is built) leads to a contradiction, and so cannot exist. Basically, a “universal AI” cannot possibly contain the solution for every single person’s every single coding requirement.
So the proportion of code written by LLMs will asymptotically approach 100%, without ever actually getting there. Reaching it is theoretically impossible.
I was thinking of a conversation from last week (more like a job interview - remember that I’m still in the market) where someone asked me how LLMs will change product management, and whether the whole process can be outsourced to LLMs.
This post is yet another answer to that question. In any job, there will be large parts that are boilerplate - “purely mechanical” things (of varying complexity, depending on the job itself) that just need to be done. You can use LLMs for that. There, “average is good enough”. Where average is not good enough, though, you will need to use your own intelligence!
PS: I wrote this blogpost on a train with patchy internet. That disturbed my train (!!) of thought a fair bit, so it may not be fully coherent.

Love this beta analogy. In effect, if one imagines one’s work output as a sequence of technical and judgement capability blocks, one can happily use LLMs for the blocks where one doesn’t have mastery (or doesn’t want to get into the details) and keep the other, more interesting blocks for oneself. The other use case for this approach is resource constraints: if you don’t have the resources for a block of work, just use an LLM there and satisfice.
I doubt any programmer can code better than an LLM. They definitely can’t code faster. Whether they can code better, I am also not sure, since that is the whole objective of AI.