Makes sense from a business perspective. I was just thinking are we trying to play cricket with a baseball bat?
To the extent that the Anthropic paper says that LLMs are able to identify "features" that help in meta word vector associations, would you classify that as "reasoning"?
is data analysis even a good use case for encoder-only transformers, which are trained to identify word vector closeness?
man you’re asking some really fundamental questions!
i’m working off the hypothesis that there exists a business in taming only the DOWNSIDE volatility of LLMs’ output, and cashing in on the upside.