5-year Predictions for Enterprise AI

Corey Keyser
2 min read · Nov 6, 2024


Here are some predictions for AI based on 2 years of building in the space.

1. LLM Concentration — Large model development will continue to be concentrated among OpenAI, Anthropic, Mistral, and the hyperscalers. There is unlikely to be any large and lasting development outside of those. Most companies trying to build their own LLMs for specific use cases will fall behind companies that move quickly while building on established LLMs.

2. AI Adoption by Workers — Only 1 in 9 US workers uses AI in their job. Adoption will be gradual (10–20% increased adoption) over the next 2 years until Gen Z starts entering the workforce. Why? The killer use case for LLMs is doing your schoolwork: an entire generation is using them for that now and will continue to use them after leaving school.

3. AI Adoption by Enterprises — Just as there were no internet-sized holes in 1997, there are no AI-sized holes in 2024. AI adoption means ripping out and changing internal processes. This is hard and takes time, but there are already hundreds of cases of leading companies adopting AI with massive impact. So rather than taking 10+ years, as Enterprises did to fully adopt the internet and computing, AI adoption is going to rapidly transform critical business functions within 3–5 years.

4. Small language models won’t win — LLMs are too good, even compared with SLMs custom-built or tuned for specific use cases. Economies of scale will fix the cost issues with LLMs (GPU cost and scale), and I think we will solve the compliance hurdles to deployment that some have argued make SLMs interesting (bias, hallucinations, data exposure, lack of on-prem access). In short, why would I train, deploy, and maintain 10 models that are worse than just using 1 model?

5. Niche LLMs in Highly Regulated Industries — The exceptions to the LLM monopolies will be found in highly regulated industries, where RLHF has hurt much of the popular models’ usefulness on those topics. As some have found, ChatGPT and Claude have gotten worse over time at answering questions on touchy topics like health, politics, and law. This is mainly because the major AI companies have (probably rightly) nerfed the models to limit liability. We should expect that highly regulated industries like Law, Finance, Healthcare, and Defense will have their own models with their own evolving guardrails. Everyone else will largely rely on RAG or prompt-tuned implementations of the existing major models.

6. Open source won’t win — I believe in open source, and I think open-source LLMs can and will survive. But most Enterprises don’t have the expertise to make proper use of these models. The existing model providers are too good, and people don’t seem especially motivated to take advantage of what are supposed to be the benefits of open source.
