Article | WTW Research Network Newsletter

Reshaping the GenAI Landscape: Part 1 - Small Models, Big Disruptions

Generative AI’s New Competitive Landscape

By Anas Alfarra, Swetha Garimalla, Carlos Loarte, Crystal McKinney, Sonal Madhok and Omar Samhan | June 20, 2025

The GenAI landscape is undergoing rapid shifts as startups, open-source tools, and Small Language Models disrupt LLM giants by offering faster, cheaper, and localized AI solutions across many industries.

The Generative AI (GenAI) space has advanced significantly on the back of investments and product releases from major technology companies, strengthening those companies’ competitive edge. However, startups, open-source communities, and small language models (SLMs) have responded quickly and are shifting the competitive landscape.

SLMs’ competitive advantages stem primarily from their ability to provide localized solutions that reduce reliance on cloud infrastructure. They are smaller, can be integrated much more quickly, and allow for focused expertise rather than broad, general-purpose solutions. SLMs typically range from 1 million to 10 billion parameters,[1] whereas large language models (LLMs) can range from several hundred billion to trillions of parameters. This shift is just one of many forces shaping the future of Generative AI and LLMs. For risk managers in all industries looking to utilize AI, understanding this dynamic is key to navigating both emerging threats and opportunities in the AI-driven market.
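To give a rough sense of why this size gap matters for local deployment, the back-of-the-envelope sketch below (an illustrative example, not a figure from this article) estimates the memory needed simply to hold model weights at common numeric precisions.

```python
# Back-of-the-envelope estimate of the memory needed to hold model weights.
# Model sizes and precisions are illustrative assumptions, not benchmarks.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(num_params: float, precision: str = "fp16") -> float:
    """Approximate gigabytes required to store the weights alone."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

models = {
    "3B-parameter SLM": 3e9,        # small model, laptop/edge scale
    "70B-parameter LLM": 70e9,      # large open model, multi-GPU scale
    "500B-parameter frontier LLM": 500e9,
}

for name, params in models.items():
    print(f"{name}: ~{weight_memory_gb(params, 'fp16'):.0f} GB at fp16, "
          f"~{weight_memory_gb(params, 'int4'):.0f} GB at 4-bit")
```

Even before accounting for activations and serving overhead, a few-billion-parameter model fits on a single consumer device, while frontier-scale models do not.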

To help WTW better understand the forces influencing the AI market, the WTW Research Network (WRN) partnered with the University of Pennsylvania’s Wharton School and its Mack Institute’s Collaborative Innovation Program (CIP) to examine this evolving landscape. Building on our previous work with the CIP and its Executive MBA students, Green Algorithms – AI and Sustainability, the WRN has sought to further examine the LLM competitive landscape, including new disruptions and opportunities for optimization and efficiency. This piece looks at GenAI’s impact on risk management frameworks; Part 2 will explore LLM Effectiveness at Scale; and Part 3 rounds out the series with a look at The Future of Hardware Computing and the implications for the market as the industry moves from the training phase to the inference phase.

The Rise of Startups and Open-Source AI Models

A leaked Google memo in 2023 sounded the alarm: “we aren’t positioned to win this arms race… open-source [is] lapping us. Things we consider ‘major open problems’ are solved and in people’s hands today.”[2] In other words, smaller open-source projects and agile startups have dismantled many of the long-standing advantages that large AI labs once held.

These advantages include access to high-end computing infrastructure, proprietary data, and top-tier research talent. Innovation that used to take months now happens in a matter of days. One recent industry analysis found that the time gap for open-source models to catch up with frontier models has shrunk to less than 24 hours,[3] signaling to well-funded incumbents that open-source models can now rival or surpass their own breakthroughs.

Meta’s release of Llama (an open-source LLM, most recently Llama 4[4]), which spurred a flood of community-driven offshoots, compressed the innovation cycle dramatically.[5] Meanwhile, startups such as Mistral AI, Anthropic (the maker of Claude), and Hugging Face provide models or platforms that others can adapt and customize. Even industry leaders are exploring smaller models: Microsoft’s Phi-2 SLM (just a few billion parameters) claims to “outperform… larger models on math-related reasoning”,[6] and IBM’s new Granite SLMs are 3x–23x cheaper to run than “frontier” LLMs while matching their performance on key tasks.[7] This trend signals a shift toward efficiency and targeted performance rather than brute-force scaling. It also supports more sustainable AI, as targeted models reduce energy needs and carbon emissions (see Solving the AI energy dilemma for the WRN’s research on how efficient AI models affect carbon footprint).
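As an illustration of how lightweight running a small open model can be, the sketch below shows one common way to do so locally with the Hugging Face transformers library. The model name, prompt, and generation settings are examples only; check the relevant model card for licensing and recommended usage.

```python
# Minimal sketch: run a small open-weight language model locally with
# Hugging Face transformers. Model choice and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # a few-billion-parameter SLM; any small open model works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Explain, in one sentence, why smaller models can be cheaper to run:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion entirely on local hardware, with no external API call.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```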

Generative AI Across Industries: A Rising Tide of Disruption

Across industries, startups, open-source consortia, and in-house model development are offering tailored, industry-specific, lower-cost alternatives to the offerings of larger incumbent firms.

Media

Open-source voice and video models are enabling startups to bypass expensive content pipelines. AI-powered dubbing tools use generative speech technology to translate speech across languages while preserving tone and emotional nuance. When combined with synthetic avatars and voice cloning, these tools let smaller studios and independent creators reach global audiences more quickly without the overhead of full-scale dubbing operations. This has reduced dependence on large studio-grade tools.[8]

Enterprise Tech & Software

Open-source copilots fine-tuned on public code repositories are allowing smaller software vendors to integrate generative assistants without licensing costlier proprietary APIs. These lighter solutions are emerging as viable alternatives to Microsoft's and Salesforce's AI offerings. In addition, these enterprise deployments work with private data that stays in-house, limiting the training data available to large LLM providers.
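One way smaller vendors approach this kind of adaptation is parameter-efficient fine-tuning, sketched below using LoRA via the peft library. The base model, adapter settings, and target modules are placeholders rather than a tested recipe; the point is how little of the model needs to be trained.

```python
# Sketch of parameter-efficient (LoRA) fine-tuning of an open code model.
# Model, hyperparameters, and target modules are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_id = "bigcode/starcoder2-3b"  # example of a small open code model

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which keeps fine-tuning affordable for smaller vendors.
lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# The training loop itself (e.g., transformers.Trainer on an in-house code corpus)
# is omitted; only the adapter weights would be updated and shipped.
```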

Cloud & Telecom

Open foundation models deployed on-prem are giving telecoms and cloud-native startups a way to avoid vendor lock-in. Organizations are increasingly opting for model-agnostic orchestration platforms over tying themselves to OpenAI or Google.[9]
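The sketch below illustrates the model-agnostic idea in its simplest form: application code depends on a narrow interface, and the concrete backend (an on-prem open model, OpenAI, Google, or anything else) is a swappable adapter. The class and method names here are hypothetical, not from any particular orchestration product.

```python
# Hypothetical sketch of a model-agnostic layer: application code talks to a
# narrow interface, and concrete providers are interchangeable adapters.
from typing import Protocol


class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str: ...


class LocalOpenModel:
    """Adapter for an on-prem open-weight model (implementation omitted)."""
    def generate(self, prompt: str) -> str:
        return f"[local model reply to: {prompt!r}]"


class HostedAPIModel:
    """Adapter for a hosted provider's API (implementation omitted)."""
    def generate(self, prompt: str) -> str:
        return f"[hosted API reply to: {prompt!r}]"


def summarize_ticket(ticket_text: str, llm: TextGenerator) -> str:
    # Application logic depends only on the TextGenerator interface,
    # so the backend can change without touching this code.
    return llm.generate(f"Summarize this support ticket: {ticket_text}")


if __name__ == "__main__":
    backend: TextGenerator = LocalOpenModel()  # swap for HostedAPIModel() without lock-in
    print(summarize_ticket("Customer cannot log in after password reset.", backend))
```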

Financial Services & Fintech

Small models trained on regulatory documentation or transaction data are being used by fintechs for KYC (Know Your Client), anti-fraud, and customer service. These narrow models are cheaper and easier to audit — giving them a leg up over general-purpose models in compliance-heavy environments.[10]

Healthcare & Life Sciences

Clinical startups are bypassing generalist LLMs in favor of compact models tuned on biomedical texts. These models are embedded in radiology tools, documentation systems, and electronic health record (EHR) platforms, enabling more efficient deployment under HIPAA constraints.[11]

Retail and Distribution

AI-native startups in industrial IoT (Internet of Things) are deploying SLMs at the edge for predictive maintenance, defect detection, and workflow instructions. They do all of this without needing to call back to a centralized cloud model, sidestepping traditional infrastructure costs.[12]

Legal & Professional Services

Firms are increasingly choosing open models for sensitive contract work and internal research. Custom fine-tuning allows legal departments to train models on jurisdiction-specific case law, which large providers are less likely to support directly.[13]

Together, these examples illustrate that LLM incumbents cannot rely on scale alone to defend their position: the advantage is shifting to those who can deploy fast, localize easily, and fine-tune cheaply.

Strategic Risks and Opportunities

Many incumbent firms are built on business models that assume scale, proprietary data, or exclusive access to infrastructure as lasting advantages. The rise of SLMs and open-source innovation is changing that environment. Incumbents that fail to invest in experimentation, model diversification, or open collaboration may face competitive displacement in key market segments.

In contrast, smaller firms and new entrants adopting open or portable models face a different risk profile. While their agility offers an edge, they often lack the governance, cybersecurity maturity, or insurance coverage of established firms.

Key risks to both groups include:

  • Strategic Obsolescence (Incumbents): A failure to respond quickly to new models can result in loss of customer relevance, pricing power, or control over distribution. Risk managers should monitor horizon signals, such as open benchmarks or adoption spikes, and consider product-level scenario analysis to flag inflection points.
  • Model Safety & Explainability (All Firms): Poor explainability or safety failures can expose firms to regulatory and reputational fallout. While larger firms may build robust controls, smaller firms must balance speed with formalizing testing and documentation practices.
  • Liability Gaps & Insurance Response (All Firms): Generative AI creates new liability categories, including misattribution, false outputs, and infringing responses. Incumbents may need tailored policy endorsements, while startups may lack E&O, D&O, or cyber coverages that reflect new model-driven exposures.
  • Third-Party Dependence (All Firms): Whether relying on proprietary LLM APIs or open repositories, AI supply chain risk is rising. Risk teams must conduct due diligence on model lineage, uptime SLAs, versioning controls, and legal terms. For insurers, this increases demand for clear disclosures about model dependencies.
  • Data Privacy and Regulatory Alignment (Startups): Agile adopters of open-source AI must still comply with increasingly complex privacy laws (e.g., GDPR, HIPAA, CPRA). Compliance gaps can create downstream insurability issues or block partnership opportunities with regulated incumbents.
  • Operational Resilience (All Firms): The centrality of generative AI to operations makes failure scenarios more critical. Model outages, API throttling, or compliance investigations can now stall product pipelines. Resilience planning must include AI-specific recovery scenarios.

Generative AI’s competitive landscape is evolving. For risk managers across sectors, this calls for dual awareness: incumbents should be mindful of the speed of disruption, while challengers should consider building risk maturity as they scale. Companies that weigh the potential risks and challenges to their AI strategy create an opportunity to turn uncertainty into advantage.

References

  1. Small Language Models (SLM): A Comprehensive Overview.
  2. Google “We Have No Moat, And Neither Does OpenAI”.
  3. Predictability and Surprise in Large Generative Models.
  4. The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation.
  5. LLaMA: Open and Efficient Foundation Language Models.
  6. Phi-2: The surprising power of small language models.
  7. Granite Code Models: A Family of Open Foundation Models for Code Intelligence.
  8. OpenVoice: Versatile Instant Voice Cloning.
  9. Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives.
  10. Hidden power of small language models in banking.
  11. Proof-of-concept study of a small language model chatbot for breast cancer decision support – a transparent, source-controlled, explainable and data-secure approach.
  12. Empowering Edge AI with Small Language Models: Architectures, Challenges, and Transformative Enterprise Applications.
  13. Small models as paralegals: LexisNexis distills models to build AI assistant.
