Generative AI

Fujitsu / August 4, 2023

Generative AI is increasingly seen as a game changer for business. Over the last few months, the arrival of more powerful and capable Generative AI, such as OpenAI’s GPT, has driven an explosion of interest in the potential of this exciting technology to make organizations more competitive and enhance the services they deliver.

Indeed, the recent Salesforce Generative AI in IT Survey reveals that nearly 70% of IT leaders are making Generative AI a priority for their business over the next 18 months.

It’s clear that business leaders everywhere are now considering how AI might affect their competitive environment, how they can derive benefits from AI, and how they can minimize the risks inherent in AI-driven disruption.
To steer them through these uncertainties, defining a proactive AI strategy has become a critical priority for C-level executives around the world.

In this short paper, we share some of the key points to consider in developing your AI strategy.

Do you have the strategy you need?

Adoption

Generative AI has become the focus of both business and consumer interest. When ChatGPT became the fastest-growing online platform in history, with over a million users less than one week after launch, it was clear that AI adoption had become both demand-driven and technology-driven.

Importantly, this demand is coming both from businesses looking to maintain a competitive edge and from individual consumers interested in how Generative AI can help them in their everyday lives.

In this innovation-driven market, growing awareness of Generative AI is raising public expectations that businesses will harness its power to deliver new, more innovative and capable products, services and solutions.

Significantly, this means that business adoption will be driven both by competitive demand from within the organization and by increased customer demands and expectations. This will accelerate the rate of adoption by reducing the friction traditionally associated with introducing new technology.

This combination of drivers has given industry leaders such as Microsoft the confidence to embed Generative AI into their most important services, knowing it will enhance their enterprise value proposition. Indeed, Microsoft’s adoption of Generative AI across its leading products is rapidly making Generative AI a mainstream feature of enterprise IT.

So, how can Generative AI help my business?

At a very basic level, Generative AI is a force multiplier for business, enabling organizations to do more, faster, with less, and making them more competitive in the process.

Generative AI can influence every part and every level of business operations, from design, development and supply-chain optimization through to sales, marketing, HR and legal. It has the potential to help all working environments become more energy-efficient and more sustainable.

Generative AI is also a force multiplier for the individuals in your organization, acting as both a consultant and an assistant to help employees with their daily tasks, enabling them to complete a wider range of tasks more proficiently and faster. This means not just increased personal productivity but also increased productivity at a team and enterprise-wide scale.

Generative AI is rapidly becoming a catalyst for enterprise innovation and increased competitiveness. The potential uses of Generative AI to benefit organizations are limited only by people’s insight and imagination.

The adoption of Generative AI will also have a significant impact on a wide range of tasks and roles, encompassing not only blue-collar positions but also a broad spectrum of white-collar creative and decision-making roles. For instance, Generative AI can already generate software code. To fully harness the benefits of Generative AI adoption, enterprises will need to proactively invest in reskilling and upskilling their workforces.

You need a Generative AI strategy now…

In light of the widespread interest in Generative AI, it is no surprise that many organizations are already introducing Generative AI services, either as full deployments or as local pilots and trials.

In the absence of a coherent, high-level Generative AI strategy, this can lead to a piecemeal approach, without the necessary policies, safeguards and education in place to ensure the business can meet its objectives.

It may also mean that the organization lacks the necessary training to ensure individuals can maximize the benefits of Generative AI in their work.

For example, providing basic training on how to effectively describe to GPT the task you wish it to perform so as to get the best results (‘Prompt Engineering’) can significantly boost the individual and organizational effectiveness of using GPT.
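
As a minimal, illustrative sketch (not a recommendation of any specific tool), the example below contrasts a vague request with an engineered prompt using OpenAI’s Python client; the model name, prompt wording and product described are assumptions for illustration only.

```python
# Illustrative sketch only: assumes the `openai` Python package (v1+) and an
# OPENAI_API_KEY set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# A vague prompt: the model must guess the audience, length and format.
vague_prompt = "Write something about our new product."

# An engineered prompt: role, audience, constraints and output format are explicit.
engineered_prompt = (
    "You are a marketing copywriter. Write a three-sentence product announcement "
    "for busy IT managers, in plain English, highlighting one concrete benefit and "
    "ending with a call to action. Product: a cloud cost-monitoring dashboard."
)

for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model your organization has approved
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    print(response.choices[0].message.content, "\n---")
```

The second prompt typically returns output that is usable with little or no editing, which is where the productivity gain from Prompt Engineering training comes from.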

Factors to consider in your Generative AI strategy

There are a range of important factors to consider in developing an effective Generative AI strategy.

Data Security

Today, Large Language Models (LLMs), such as GPT, have become highly effective at assimilating vast amounts of training data. Over time, LLMs will continue to improve by learning from the specific requests users make, a reality that can pose challenges for organizations.

One such challenge is that the requests employees make to external LLMs can potentially result in the leakage of confidential information. The information an employee uploads as part of their request adds to the model’s knowledge, potentially helping it to answer similar questions from employees in competitor organizations.

This could present both competitive and legal compliance issues for organizations, for example if legally restricted information is released to the market.

In recognition of this issue, OpenAI is planning to provide an enterprise version of GPT, with enhanced data security features that can prevent unwanted data leakage.

Risks
・Regulatory violations
・Legal and financial penalties
・Data breaches
・Privacy violations
・Competitive damage
・Reputational damage

Mitigations
1. Policies: Develop policies describing and defining the use of Generative AI. Note that different departments, teams and individuals may need different policies, depending on the work they do and the information they access.

2. Controls: Controls are needed to define who can access Generative AI and how (an illustrative technical control is sketched after this list). This needs to be done with careful consideration to minimize the risk of losing the significant benefits in increased productivity and enhanced personal capability that Generative AI can provide.

3. Education: Ongoing education is required to ensure employees understand and follow the Generative AI use policies.

4. Selection: Select the right Generative AI solutions, ensuring they support the organization’s specific compliance needs.

5. Compliance Monitoring: Audit your compliance risks on an ongoing basis to maintain compliance with changing regulations.

6. Keep relevant people regularly informed to provide oversight and identify/resolve issues as required.
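
By way of illustration only, the sketch below shows one way the Controls point above might be partially implemented in practice: a simple pre-submission filter that redacts obvious confidential patterns before a prompt is sent to an external LLM. The patterns, labels and function names are hypothetical assumptions; a real control would be far more comprehensive (dedicated DLP tooling, allow-lists, logging and human review).

```python
# Minimal illustrative sketch of a pre-submission redaction filter.
# Patterns, labels and function names are hypothetical assumptions, not a
# recommended or complete solution.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PROJECT_CODE": re.compile(r"\bPRJ-\d{4,}\b"),       # hypothetical internal code format
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of known confidential patterns with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise PRJ-20231 status for jane.doe@example.com before the board meeting."
    print(redact(raw))
    # -> "Summarise [PROJECT_CODE REDACTED] status for [EMAIL REDACTED] before the board meeting."
```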

Data integrity – fact, fiction and bias…

Generative AI models can only be as good as the data they have been trained on. Unfortunately, most Generative AI models are trained primarily on data sourced from the internet.

A range of basic techniques can be used to help validate and improve data integrity. However, it is easy for factually incorrect information, subjective opinions and bias to find their way into the training data sets, impacting the accuracy of Generative AI outputs.

The challenge for organizations is to identify false or misleading information in the output of Generative AI, ensuring employees can avoid acting on or propagating false or misleading information.

Risks
・Reputational damage from propagating false or misleading information.
・Enterprise damage from acting on false or misleading information provided by Generative AI.

Mitigations
1. Independently validate the output from Generative AI systems.

Note: as traceability is variable in Generative AI systems, a variety of traditional search engines may be used to help validate information.

2. Keep relevant people regularly informed to provide oversight and identify/resolve issues as required.

3. Use a private Generative AI system trained exclusively on your own, or other independently validated, data sets to help improve accuracy.

AI Hallucinations

Generative AI models can get confused, leading to ‘AI hallucination’. Technically, AI hallucination refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context.

These outputs often emerge from the AI model's inherent biases, lack of real-world understanding or training data limitations.

In 2022, users reported that ChatGPT often seemed to “psychopathically and pointlessly” embed plausible-sounding random falsehoods within its generated content.

AI hallucinations present a range of challenges. Clearly, when these systems produce incorrect or misleading information, this can erode overall trust and confidence in AI systems.

A bigger potential issue, however, is the increasing use of AI systems to support critical decision-making, for example in healthcare, finance, transport and manufacturing, where acting on incorrect information can have very serious consequences.

Organizations need to be aware that AI systems can unexpectedly hallucinate, and must ensure that their systems have the transparency and human intervention required for possible hallucinations to be identified and overridden.

Risks
・Reputational damage from providing harmful or misleading advice and information.
・Reputational damage from making inappropriate statements.

Mitigations
1. Training to understand potential issues and risks.

2. Keep relevant people in the loop to provide oversight and identify potential issues.

3. Some hallucinations can be prevented by introducing prompt management solutions.

4. Using the right prompts, via good Prompt Engineering practices, can also minimize the risk of AI hallucinations (see the illustrative sketch after this list).

5. Ensure the appropriate legal disclaimers are in place.

6. Develop a rapid-action damage mitigation plan.
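
As a minimal sketch of the Prompt Engineering point above, the example below grounds the model in supplied context and instructs it to say when it does not know. The policy text, variable names and model are assumptions; grounded prompts reduce, but do not eliminate, the risk of hallucination.

```python
# Illustrative sketch of a grounding prompt intended to reduce hallucinations.
# The context, wording and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()

CONTEXT = """Travel policy (hypothetical excerpt): economy class is required for
flights under 6 hours; hotel bookings must use approved suppliers."""

GROUNDED_SYSTEM_PROMPT = (
    "Answer ONLY using the context provided. "
    "If the answer is not contained in the context, reply exactly: "
    "'I don't know based on the information provided.' Do not speculate."
)

question = "Can I book business class for a 3-hour flight?"

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": GROUNDED_SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{CONTEXT}\n\nQuestion: {question}"},
    ],
    temperature=0,  # low temperature further reduces creative (and fabricated) output
)
print(response.choices[0].message.content)
```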

Political Incorrectness

Generative AI models have no concept of political correctness. Generative AI may extrapolate from its training data to derive conclusions that could be seen as politically incorrect, with the risk of reputational damage if those conclusions are used.

Risks
・Reputational damage from making inappropriate statements.

Mitigations
1. Training to understand potential issues and risks.

2. Keep relevant people in the loop to provide oversight and identify potential issues.

3. Ensure the appropriate legal disclaimers are in place.

4. Develop a rapid-action damage mitigation plan.

Regulatory Compliance

Current Generative AI models do not usually understand compliance or regulatory frameworks. The use of Generative AI by some individuals and teams has the potential to compromise regulatory compliance.

Risks
・Regulatory violations
・Legal and financial penalties
・Reputational damage

Mitigations
1. Understand the particular regulatory risks that the use of Generative AI may pose to your organization.

2. Select the appropriate Generative AI solutions to support your compliance needs.

3. Develop and implement a strong internal role-based use policy for Generative AI that supports compliance.

4. Provide ongoing compliance training to ensure individuals understand how to remain compliant in their role when using Generative AI.

5. Keep auditing your compliance risks to ensure you maintain compliance with changing regulations.

6. Keep relevant people in the loop to provide oversight and identify any potential issues.

General Purpose vs Task-Optimized Generative AI

As part of developing a Generative AI strategy, organizations will need to consider the kind of Generative AI or AIs that will best serve their needs.

Generative AI systems can be trained using specialized data sets, enabling them to perform specialized tasks more effectively, including end-user support, medical, legal, financial and marketing applications, software development and design.

Bloomberg recently developed BloombergGPT, its own Generative AI model built specifically for the financial markets and trained on a corpus of over 700 billion tokens, including a large body of financial-market data. This strengthens the organization’s ability to respond to the specialized financial information requests it receives.

So, a key consideration in developing a Generative AI strategy is determining the kind or kinds of Generative AI that will best support your organization and customers. For most mid to large size enterprises, generic Generative AI solutions are unlikely to provide the optimal benefits or return on investment.

Training

Generative AI is generally easy to use, but if organizations want to maximize the benefits of Generative AI, for example to increase the capability and productivity of their employees, then training is essential.

Prompt Engineering training can help users phrase their requests to Generative AI in the best way to get the best answers quickly and efficiently. Investment in Prompt Engineering training therefore has a high direct return on investment and is strongly recommended for all users. It can also be combined with other essential Generative AI risk mitigation training.

Sustainability

From an enterprise sustainability perspective, it is worth noting that training the Large Language Models (LLMs) behind Generative AI requires significant amounts of energy and other resources.

For example, Bloomberg estimated that training GPT-3 required 1.287 gigawatt-hours of electricity, the equivalent of the annual consumption of around 120 US homes, and 700,000 liters of clean freshwater.

Of course, this resource consumption is diluted across the entire GPT-3 user base, so it has a minimal impact on any individual enterprise. However, the resources consumed in training a private Generative AI model would need to be factored into an organization’s sustainability goals.
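
As a rough sanity check of the household comparison above, assuming an average US household consumes roughly 10,700 kWh of electricity per year (the approximate EIA figure), the arithmetic works out as follows:

```python
# Rough sanity check of the '120 US homes for a year' comparison.
# Assumes ~10,700 kWh average annual electricity use per US household (approx. EIA figure).
training_energy_kwh = 1.287e6        # 1.287 GWh expressed in kWh
household_kwh_per_year = 10_700

homes_equivalent = training_energy_kwh / household_kwh_per_year
print(f"{homes_equivalent:.0f} US homes for a year")   # prints approximately 120
```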

How Fujitsu can help

Fujitsu understands how Generative AI can be used to support transformation, having already delivered over 6,000 AI projects worldwide.

Fujitsu offers a comprehensive range of end-to-end services, from consultancy and co-creation to help you develop your AI strategy through to Kozuchi, our AI innovation platform that can help you develop practical AI solutions for your organization. Fujitsu also has the capability to deploy your AI solution globally, at scale, and to provide the ongoing support you need to help you maximize your ongoing return on investment.

Conclusion

Generative AI is quickly becoming a business necessity to help organizations maintain competitiveness and deliver the innovative AI-powered services that their customers increasingly expect. Organizations need to develop strong, effective Generative AI strategies as a priority, to help them maximize the benefits of this exciting technology.

As innovation continues to bring digital disruption to many markets, Generative AI is expected to become a key driver for business transformation, creating significant opportunities for agile organizations.

At Fujitsu, we are ready to help you use AI and Generative AI to transform your organization and deliver the innovative services your customers need.

Key recommendations

・Selection: Determine the specific Generative AI solutions you need to transform your business.

・Policies: Develop a set of Generative AI use policies. Different departments, teams and individuals may need different policies, depending on the work they do and the information they access.

・Training: Invest in Prompt Engineering training for Generative AI users to maximize the benefit the organization gets from Generative AI. This training will quickly pay for itself in terms of increased personal productivity and capability.

・Controls: Ensure you have the necessary IT controls in place to prevent the unauthorized use of Generative AI. However, this should be done with care to ensure the organization can still maximize the business benefits of Generative AI, in terms of increased productivity and enhanced personal capability.

・Keep innovating: Generative AI is only one of the ways that organizations can increase their competitiveness and offer new AI-powered customer services.

Why not talk to Fujitsu and see how we can help?

Nick Cowell
Principal Consultant / Technology Strategy Unit / Fujitsu
Nick is a Principal Consultant within Fujitsu’s Technology Strategy Unit. Nick is a technologist with extensive experience in hardware, software and service development, having previously worked for leading technology providers across the USA, Europe and Oceania.
