GenAI ROI: Is 2024 the year that Enterprises will start to see meaningful productivity gains?

Economists project that AI could contribute a 1.5-percentage-point increase in annual US productivity growth if widespread adoption occurs over the next decade. The deployment of Generative AI is anticipated to accelerate in 2024, driving associated productivity benefits. As this unfolds, the gap between high-quality and low-quality growth companies is expected to widen, impacting enterprises that lag in AI adoption.

In 2023, Enterprise AI spend was about 18% of the $400B Cloud spend, but GenAI spend was only about 1% of that $400B. Of this GenAI spend, 45% went to Foundational Deep Tech, including LLMs, Infrastructure, and Data layers; another 45% went to Co-pilots and Vertical applications; and the remaining 10% went to Horizontal AI and middleware.*

We foresee AI adoption leading to comprehensive business reorganization, integrating AI into both decision-making processes and in-house data. Firms embracing AI are expected to outpace competitors in terms of productivity and Return on Invested Capital (ROIC). McKinsey’s assessment of American firms in 2018 revealed a 25-percentage-point higher return for those in the 75th percentile compared to the median, a threefold increase over the gap in 2000. This divergence was primarily driven by Internet, Cloud, and Mobile adoption.

Unlike the gradual adoption timelines of Internet, Mobile, and Cloud technologies, AI adoption appears to be accelerating. Context-aware workflows utilizing proprietary datasets are already driving GenAI Enterprise adoption. We anticipate that, in five years, AI will be an integral component of most Consumer and Enterprise Services, embedded in every decision, interaction, and workflow. Many large enterprises possess untapped high-value data, particularly in industries like healthcare, financial services, retail, and manufacturing. However, the challenge often posed is how AI startups can compete with large incumbents possessing vast datasets and capital resources to drive AI adoption.

As outlined in my April 2023 Blog, building high-value use cases not covered by incumbents, and utilizing proprietary domain-specific data, will be crucial for startups seeking to establish themselves in enterprises that leverage AI as a competitive advantage. Two startups we invested in, Artera and Ex-Human, exemplify this approach.

Artera launched ArteraAI, a suite of AI-enabled predictive and prognostic cancer tests, such as the ArteraAI Prostate Test, which received Medicare payment approval in record time. These tests analyze images of a patient’s biopsy and accurately predict how likely the patient is to benefit from specific therapies. Artera’s multimodal AI model learns from both pathology images and clinical data from clinical trials with long-term (10-15 year) outcomes. The AI learns which features in images and clinical data can predict individual therapeutic response and patient prognosis. Ex-Human builds engaging multi-modal Bots that create human-like interactions and contexts. This consumer offering provides relevant learning that improves, upon re-training, the models Ex-Human offers to Enterprises. Both Artera and Ex-Human offer products or services that are not only valuable to end-users, but also leverage learning from users to develop insights and meta-data for broader use cases and platform development.

The choice between open and closed LLMs is becoming less binary, with developers increasingly using both. In addition to LLMs, Enterprises are also experimenting with specialized Small Language Models (SLMs), especially for information retrieval workflows, customer support, and code generation applications, interacting with their custom code.

Hugging Face, a major provider of open-source LLM infrastructure, reports widespread adoption of open-source LLMs and other tools, including frameworks like LangChain and LlamaIndex. Specialized SLMs will play a crucial role in automating business tasks, making connections, and synthesizing insights, because they can be trained on an enterprise’s high-value unstructured and structured data. Startups can assist enterprises in building such specialized SLMs and providing tools for creating proprietary datasets, which can later be leveraged for broader industry use cases and to establish Data Moats.
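The retrieval workflows mentioned above typically follow a common pattern: select the enterprise documents most relevant to a query, then pack them into a prompt for the language model. A minimal conceptual sketch of that pattern, in plain Python, is below; real deployments would use vector embeddings via frameworks like LangChain or LlamaIndex, whereas the keyword-overlap scoring here is only a stand-in to keep the example self-contained.

```python
def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (toy relevance score)."""
    q = set(query.lower().split())
    return sum(1 for w in set(doc.lower().split()) if w in q)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by keyword overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved context plus the question into a prompt for the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The key design point is that the model only ever sees a handful of retrieved passages, which is what lets a small, cheaply-hosted model answer questions grounded in a large proprietary corpus.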

One of our portfolio companies’ customers uses a closed LLM for its internal chatbot, but uses Llama alongside it for the same use case to do things like flagging messages containing personally identifiable information, as an open-source model gives the company more control over the data. Early GenAI adopters like Intuit and Perplexity employ multiple models in a single application, leveraging generative AI “orchestration layers” to autonomously choose the best model for specific sub-tasks, whether open or closed. Intuit, provider of TurboTax and QuickBooks software, was early to build its own GenAI platform, which includes open-source models in its mix of models trained on Intuit’s proprietary data, to help users with analysis, financial task completion, and tax filing. Perplexity, when posed with a user question, uses multiple LLMs to generate contextual responses, building its models on top of Mistral and Llama models, and using AWS Bedrock for fine-tuning.
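At its core, an orchestration layer like the ones described above is a routing table from sub-task to model. The sketch below illustrates the idea; the model names and the routing policy are hypothetical placeholders, not the actual systems Intuit or Perplexity run.

```python
from dataclasses import dataclass

@dataclass
class ModelRoute:
    name: str     # model identifier (hypothetical placeholder)
    hosted: bool  # True = closed/hosted API; False = self-hosted open model

# Routing policy: an enterprise might keep PII-sensitive work on a
# self-hosted open model for data control, while general chat goes to
# a closed hosted model.
ROUTES = {
    "chat":         ModelRoute("closed-llm-v1", hosted=True),
    "pii_flagging": ModelRoute("open-llama-local", hosted=False),
    "code_gen":     ModelRoute("open-slm-code", hosted=False),
}

def route_task(task_type: str) -> ModelRoute:
    """Pick a model for the given sub-task, defaulting to the chat model."""
    return ROUTES.get(task_type, ROUTES["chat"])
```

In production, the router itself is often a small classifier or an LLM call, and the table is extended with cost, latency, and compliance constraints per route.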

As AI’s value becomes evident, companies will recognize that thoughtful implementation outweighs perceived risks. Instead of imposing strict restrictions, defining the boundaries of appropriate use is crucial to mitigate concerns like runaway costs, security, and compliance. The appetite for enterprise AI solutions is genuine, holding the potential to disrupt the entire IT stack. We look forward to engaging entrepreneurs with deep-tech or domain-specific knowledge, building the infrastructure or applications essential for driving Enterprise AI adoption.

*Source: McKinsey, MenloVC, VentureBeat

By Ravi Sundararajan, Partner, AIspace Ventures