Cost Effective & Sustainable AI
Why "good enough" is often the smarter choice today.

Fujitsu / October 23, 2025

For much of the last decade, the narrative around artificial intelligence has been driven by the pursuit of scale. Each new generation of frontier large language models (LLMs) boasts more parameters, deeper reasoning abilities, and ever-expanding benchmarks. The implicit assumption is that bigger is better, that organizations will always gain advantages by adopting the most advanced, state-of-the-art systems. Yet, as the true costs of training and operating these models become more visible, an alternative strategy is emerging: the deliberate use of smaller, more efficient “good enough” language models that balance capability, cost, and sustainability.

For managers and executives, the question is no longer simply “What can AI do?” but rather “What level of AI is worth deploying for which business case?” As AI adoption accelerates across industries, choosing between frontier models and leaner alternatives may become one of the most consequential cost and sustainability decisions leaders face.

The Cost Curve of Frontier AI

The economics of training and operating frontier AI models reflect a stark reality. Building the largest models demands access to tens of thousands of high-end GPUs or TPUs, operating continuously for weeks or even months. Current estimates suggest that training a single state-of-the-art model can cost hundreds of millions of dollars. Moreover, the demands continue beyond training. Running inference, or generating responses, requires substantial computational resources for each interaction, thereby inflating operational costs.

Enterprises face these expenses indirectly through pricing structures. Frontier models are often available only via APIs that charge usage-based fees reflecting their massive infrastructure needs. Managing millions of customer queries through such systems can rapidly result in millions of dollars in recurrent costs. For the majority of companies, particularly those outside the technology sector, this cost curve is daunting.

The environmental costs mirror the financial burden. The energy required to train frontier AI models has been likened to the lifetime consumption of a small city, and the carbon footprint of these models increasingly attracts regulatory attention, especially in regions like the European Union, where sustainability reporting requirements are becoming stricter.

For executives tasked with balancing budgets and corporate social responsibility, the allure of frontier AI must be carefully weighed against these significant trade-offs.

The Rise of “Good Enough” AI

Against this backdrop, small and midsize LLMs are gaining traction. These models may not match the abstract reasoning of frontier systems, but for the vast majority of tasks they are more than capable. They can summarize documents, draft emails, generate code snippets, and answer customer queries with sufficient accuracy to transform workflows.

The appeal is twofold: cost efficiency and sustainability. Smaller models require far less compute power both to train and to run. This translates into lower API costs for those accessing them through providers, and in many cases, the ability to host them on-premises or in private clouds. Good enough models can deliver 80–90% of the value at 10–20% of the cost. For organizations handling sensitive data, local deployment also offers security and compliance advantages alongside reduced expenses.

Environmental benefits are equally compelling. Running a smaller model consumes less energy per query, reducing emissions at scale. For enterprises striving to meet net-zero commitments, this alignment between cost reduction and sustainability is particularly powerful. “Good enough” models allow organizations to embrace AI without undermining their climate strategies.

AI Computing Brokers

Today, AI Computing Brokers such as the Fujitsu AI Computing Broker are becoming increasingly important in the effective use of “good enough” AI models. These brokers help organizations optimize the use of costly, power-intensive resources such as GPUs. By orchestrating workloads across various models, they enable enterprises to automate the deployment of smaller language models for routine tasks, while reserving advanced systems for complex reasoning. This approach facilitates a more cost-effective and sustainable use of AI without compromising functionality or capability. You can find out more at: https://en-documents.research.global.fujitsu.com/ai-computing-broker/
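The routing idea behind such brokers can be illustrated with a minimal sketch. The model names, task categories, complexity threshold, and prices below are invented for illustration and do not reflect the Fujitsu AI Computing Broker's actual interfaces or any provider's pricing.

```python
from dataclasses import dataclass

# Hypothetical tiers; the per-token costs are illustrative placeholders.
@dataclass
class ModelTier:
    name: str
    cost_per_1k_tokens: float  # illustrative USD

SMALL = ModelTier("midsize-llm", 0.004)
FRONTIER = ModelTier("frontier-llm", 0.030)

# Task categories a broker might treat as routine (assumed for this sketch).
ROUTINE_TASKS = {"summarize", "draft_email", "classify", "faq"}

def route(task_type: str, complexity_score: float) -> ModelTier:
    """Pick the cheapest tier expected to handle the task adequately.

    Routine, low-complexity work goes to the midsize model; everything
    else is escalated to the frontier tier.
    """
    if task_type in ROUTINE_TASKS and complexity_score < 0.7:
        return SMALL
    return FRONTIER

# Routine summarization lands on the midsize model...
assert route("summarize", 0.2) is SMALL
# ...while open-ended complex reasoning is escalated.
assert route("research_analysis", 0.9) is FRONTIER
```

In a production broker the routing signal would come from richer telemetry (GPU utilization, queue depth, per-task quality metrics) rather than a single hand-set threshold, but the cost logic is the same: reserve the expensive tier for the work that actually needs it.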

Bigger Isn’t Always Better

The fixation on scale can obscure a critical truth: larger models are not always more effective for enterprise tasks. In fact, over-capability can introduce new risks. Highly creative frontier systems are more prone to “hallucinations”: outputs that are fluent but incorrect. In domains like healthcare, law, or finance, these errors carry significant risk.

Smaller, fine-tuned models are often more predictable and easier to align with specific needs. A midsize model trained on legal vocabulary, for example, may outperform a frontier system in contract analysis by being more focused and less prone to spurious creativity. Similarly, in customer service, predictability and speed often matter more than advanced reasoning ability.

By choosing the right-sized model, organizations not only reduce costs but also enhance reliability, an outcome that strengthens both operational efficiency and customer trust.

Deployment Economics at Scale

The strongest case for “good enough” models emerges in deployment at scale. Consider a retailer processing tens of millions of customer interactions per month. Running those queries through a frontier model could add millions in annual costs. By contrast, a smaller, fine-tuned model might deliver nearly the same performance at a fraction of the price.

The environmental calculus compounds the argument. Each interaction on a smaller model consumes less energy. Across millions of interactions, the difference in emissions is substantial. For organizations now required to disclose Scope 3 emissions and energy use in digital operations, this is not just a moral issue but a regulatory one.

The Cost and Sustainability Trade-Offs of AI Models

Per 1M Queries   | Frontier LLM (Trillion-Parameter Scale)          | Midsize / “Good Enough” LLM                                        | Relative Difference
Energy Use       | 40–50 MWh                                        | 5 MWh                                                              | ~8–10x higher for frontier
Carbon Emissions | 15–20 metric tons CO₂ (depending on grid)        | 2–3 metric tons CO₂                                                | ~7x higher for frontier
Financial Cost   | $25,000–$30,000                                  | $3,000–$5,000                                                      | ~6–8x higher for frontier
Latency          | Higher (due to compute demand)                   | Lower, optimized for speed                                         | Better for midsize
Suitability      | Complex reasoning, research, specialized domains | Customer service, summarization, document drafting, coding support | Depends on task context

The ROI equation is decisive. Doubling model performance does not double business value, but doubling deployment costs can halve margins. In most real-world cases, “good enough” models provide the optimal balance between cost and benefit.
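The arithmetic behind this can be made concrete. The sketch below uses the midpoints of the illustrative ranges in the table above; real figures vary widely by model, hardware, and grid mix, and the default parameters are assumptions, not measurements.

```python
def annual_delta(queries_per_month: int,
                 frontier_cost_per_1m: float = 27_500.0,  # USD, midpoint of $25k-$30k
                 midsize_cost_per_1m: float = 4_000.0,    # USD, midpoint of $3k-$5k
                 frontier_tco2_per_1m: float = 17.5,      # midpoint of 15-20 t CO2
                 midsize_tco2_per_1m: float = 2.5):       # midpoint of 2-3 t CO2
    """Annual cost and emissions avoided by serving traffic on a midsize
    model instead of a frontier model, per the illustrative table figures."""
    millions_per_year = queries_per_month * 12 / 1_000_000
    cost_saving = (frontier_cost_per_1m - midsize_cost_per_1m) * millions_per_year
    co2_saving = (frontier_tco2_per_1m - midsize_tco2_per_1m) * millions_per_year
    return cost_saving, co2_saving

# The retailer from the section above, at 20M interactions per month:
cost, co2 = annual_delta(20_000_000)
print(cost, co2)  # 5640000.0 3600.0 -> roughly $5.6M and 3,600 t CO2 avoided per year
```

Even if the true per-query gap is half the table's estimate, the saving at this volume remains in the millions of dollars and thousands of tons of CO₂, which is why the decision compounds so strongly at scale.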

Strategic Balance

The divergence between frontier models and smaller systems is creating a dual-tier market. At the top, a handful of companies will continue investing in the largest models, serving clients in high-stakes domains such as defense, advanced research, and financial engineering.

For the broader enterprise market, however, small and midsize models are emerging as the default. Open-source ecosystems like LLaMA are empowering organizations to build custom solutions without prohibitive expense. Providers like Anthropic and Mistral are explicitly tailoring offerings for efficiency. Companies like Fujitsu are ensuring that orchestration tools make it easy to balance capabilities across a portfolio of models.

This dual-tier structure mirrors earlier technological transitions. The computing industry was not transformed by supercomputers alone but by the diffusion of personal computers and later cloud services. In each case, the most advanced systems remained important, but the broad market scaled around solutions that were affordable, practical, and widely deployable. AI is following the same trajectory.

Establishing a Sustainable AI Strategy

For leaders, the implications are clear. The question is not whether to use AI, but how to use it in ways that are both cost-effective and sustainable. While frontier models may dominate headlines, their practical role in enterprise adoption is limited. Meeting this challenge means moving beyond the fascination with frontier performance and focusing instead on operational fit. Effective strategies view AI not as a competition for maximum capability but as a portfolio of tools optimized for specific contexts.

Leaders must ask: What tasks are we automating? What level of accuracy is truly required? What is the cost of error versus the cost of over-capability? Where does predictability matter more than creativity? And critically, how do our AI choices align with our sustainability commitments? By framing adoption through this lens, organizations can align their AI investments with strategic goals rather than succumbing to industry hype, integrating the technology into their strategies without being overwhelmed by spiraling costs or environmental liabilities.

The broader lesson is that sustainable AI is not only about reducing carbon footprint but about embedding efficiency into the heart of deployment strategies. By embracing “good enough” models, companies can ensure that their AI investments deliver real business value while advancing their environmental and social goals.

In practice, “good enough” models may be more than sufficient for 80–90% of enterprise needs, while frontier models remain a specialized tool for the final 10–20%. The organizations that recognize this division early will avoid unnecessary costs and build more resilient AI strategies.

Just as past technological shifts saw businesses equipping employees with desktops and, later, laptops that were "good enough" for their needs rather than supercomputers, the AI era will follow similar principles.

Today, Fujitsu is a longstanding leader in applied computing, prioritizing AI architectures that focus on sustainability and domain specificity. By tailoring smaller models to suit particular industries, such as manufacturing optimization, logistics planning, or city-scale sustainability initiatives, Fujitsu empowers clients to achieve their AI objectives without inflating energy and infrastructure demands. This approach embodies the philosophy that success should be defined by effectiveness rather than scale.

You can find out more at: https://www.fujitsu.com/global/themes/data-driven/data-intelligence-paas/generative-ai/

Future Insight

As the AI ecosystem continues to evolve, the middle ground will expand even further. While frontier systems will push boundaries and advance capabilities, their high costs will limit them to specialized applications. Conversely, smaller models will become increasingly appealing due to continuous innovations in efficiency-enhancing techniques like quantization and distillation.

Regulatory trends could accelerate this transition. Governments prioritizing sustainability and data sovereignty are likely to promote the use of smaller, locally deployed models. Moreover, customers are demanding greater transparency, a goal that can be more readily achieved with systems that offer the possibility of fine-tuning and auditing.

History has shown that scalable technologies are not those at the extreme frontier, but rather those that are affordable, reliable, and widely adoptable. The future of AI will follow this established trend, favoring solutions that are accessible and impactful on a broad scale.

Conclusion

Every organization faces a choice. On one path lies escalating infrastructure spend, increasing dependence on constrained supply chains, and rising energy costs. On the other lies a smarter approach: extracting full value from existing assets, scaling more intelligently, and turning efficiency into competitive advantage.

Fujitsu’s AI Computing Broker offers a way forward. It does not ask enterprises to choose between ambition and efficiency but enables them to pursue both. In the race to develop increasingly intelligent AI, the focus often leans towards maximizing capability. In enterprise adoption, however, striving for maximum intelligence can be neither cost-effective nor environmentally sustainable.

Small and midsize language models present a compelling alternative. These models reduce costs, facilitate local customization, and significantly decrease energy usage. They offer consistency where excessive creativity might be unwelcome and allow companies to align their AI initiatives with the rising demand for sustainability.

For the majority of businesses, the trajectory of AI adoption will hinge not on possessing the most advanced model, but rather on choosing the right one: a model that is sufficiently robust for their needs. This approach aids in balancing sustainability concerns with the tangible benefits offered by AI, ensuring that the investment is both pragmatic and responsible.

Nick Cowell
Principal Consultant & Fujitsu Distinguished Engineer within the Fujitsu Global Technology Marketing Division
Nick is a technologist and futurist with extensive experience leading award-winning hardware, software, and service development for major technology providers across the USA, Europe, and Oceania.
