Succeeding with generative AI requires the expertise to design new use cases and the ability to develop and operationalize genAI models. These are major challenges.

Executives who doubt the potential of generative AI are becoming an increasingly rare breed. In a recent survey of Fortune 500 CEOs conducted in collaboration with Deloitte, 75% expected generative AI to improve operational efficiency, while more than half believed it would increase growth. In our own survey of data science leaders and their teams, 90% believed the hype was more than justified. Indeed, there has been a string of reports calculating that genAI will have an enormous impact on the world's economy. McKinsey, for example, estimated it would add $2.6 trillion to $4.4 trillion to the global economy annually.

The question is no longer whether generative AI will be transformative but how we will bring about that transformation. In other words, how do we make money with genAI? To answer this question, we must look at the challenges that make it hard to "make money" with genAI and how companies overcome them. The answer lies in identifying the right generative AI use cases and building the capabilities to both develop and operationalize genAI applications.

Why is it hard to make money with generative AI?

There are two key challenges that make it hard to "make money," i.e., increase operational efficiency or revenue growth, with generative AI.

The larger and more difficult challenge is that it requires going after new use cases and developing new business models that differ from the ones we've seen work for traditional machine learning. This is because genAI is about unlocking new, unstructured data (analyzing and generating text, voice, images, video, and so on) that enterprises have largely ignored. How do you build a chatbot that helps employees discover and summarize documents in your enterprise content management system (a use case that many companies are pursuing with genAI)? No one knows the best way to do this yet because, prior to genAI, it wasn't possible to do anything similar without extraordinarily large investments of time and money.

The second, temporary challenge is that genAI models are much more costly and difficult to operationalize than traditional machine learning models. They are orders of magnitude larger than traditional AI models, and they have grown dramatically in size in the last few years. For example, GPT-4 is believed to have more than one trillion parameters, making it on the order of nine thousand times larger than BERT, an early large language model based on the same transformer architecture. Released in 2018, BERT was in turn dramatically larger than most models used at the time. Because these models are much larger and are trained on vastly more data, they are far more expensive both to train and to use in production.

This second challenge is temporary for three reasons. First, infrastructure is constantly getting cheaper. Second, there is constant innovation in optimization techniques, such as quantization and distillation, that reduce the infrastructure footprint of these models. Third, and most important, companies are becoming more familiar with how to use genAI, and they are moving away from the ultra-large genAI models and toward smaller, more specialized models that are fine-tuned for particular tasks and domains.
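To make the shift toward smaller, specialized models concrete, here is a minimal sketch of parameter-efficient fine-tuning with LoRA using the open-source Hugging Face transformers, peft, and datasets libraries. It is an illustration only; the base model, dataset path, and hyperparameters are placeholders, and your stack will differ.

```python
# A minimal sketch of fine-tuning a small, open model for a specific task
# using parameter-efficient fine-tuning (LoRA). The model name, dataset
# path, and hyperparameters are illustrative placeholders, not recommendations.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "EleutherAI/pythia-410m"   # hypothetical choice of small base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with low-rank adapters so only a small fraction
# of the parameters are trained, keeping compute and memory modest.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# A domain-specific corpus, e.g. internal documents exported to JSONL
# with a "text" field (placeholder path).
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("ft-out/adapter")  # the adapter weights are only a few MB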
In practice, teams typically evaluate such a task-specific model against the larger general-purpose model it replaces; the point is that a fine-tuned model a fraction of the size can often be good enough at a fraction of the cost.

Every company can make money with generative AI

Every company can make money with generative AI, and several are already doing so. However, most companies lack the specialized leadership and expertise to identify the right use cases to go after with genAI, and they lack the capabilities to develop and deploy the corresponding genAI models and applications.

The first critical component of making money with genAI is identifying use cases that deliver substantial business value and sit in the sweet spot of the technology's strengths while avoiding its weaknesses. Identifying and prioritizing these use cases requires skilled data scientists and data science leaders who understand the business context, the organization's data, and, above all, the strengths and weaknesses of generative AI models. Without a history of building data science teams and delivering traditional AI and machine learning projects, a company will lack the talent and experience needed to identify and pursue the most promising use cases.

The second component is the ability to develop and operationalize genAI models and pipelines in a scalable, cost-effective, and governed fashion, so-called LLMOps, for large language model operations. Most companies will not be able to operationalize their most important genAI applications using the giant, generic genAI models offered by the tech giants. These models underperform because they are too large, too slow, and too costly, because they often cannot be fine-tuned, and because they frequently do not meet enterprise needs for security and control. There is no alternative for enterprises but to implement their own in-house LLMOps capabilities that allow them to ingest foundation models, fine-tune them, and deploy them with comprehensive governance.
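What "deploy them with comprehensive governance" looks like will vary by organization, but a minimal sketch helps. Below, a small internal service pins a model version, serves a locally stored fine-tuned model, and writes an append-only audit log of every request. The framework, paths, version label, and logging scheme are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of an in-house, governed inference endpoint: it pins a
# model version and records an audit trail of every request. All names and
# paths are illustrative assumptions.
import json, time, uuid
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

MODEL_DIR = "ft-out/merged-model"        # hypothetical merged fine-tuned model
MODEL_VERSION = "doc-summarizer-v1"      # hypothetical version label

generator = pipeline("text-generation", model=MODEL_DIR)
app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    user_id: str

@app.post("/generate")
def generate(req: GenerateRequest):
    request_id = str(uuid.uuid4())
    output = generator(req.prompt, max_new_tokens=256)[0]["generated_text"]
    # Append-only audit log: who asked what, and which model version answered.
    with open("audit.log", "a") as f:
        f.write(json.dumps({
            "id": request_id,
            "ts": time.time(),
            "user": req.user_id,
            "model": MODEL_VERSION,
            "prompt": req.prompt,
            "response": output,
        }) + "\n")
    return {"id": request_id, "model": MODEL_VERSION, "response": output}
```

A real LLMOps platform layers far more on top of this, including access control, a model registry, automated evaluation, monitoring, and cost tracking, whether built in-house or bought.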
When will companies start making money with generative AI?

Tech vendors are falling over themselves to augment their product features with genAI, and several are likely to grow their business and gain market share with the technology over the next couple of years. Similarly, there has been an explosion of genAI startups, and a small number will likely be extraordinarily successful over the same period. However, most mainstream enterprises are still early in their genAI maturity, and in their AI maturity in general. While there are success stories where companies have already made money with genAI, usually by dramatically improving the productivity of high-value employees, it will take time before all but the most advanced mainstream companies see a sizable impact on their bottom line. After all, it has been less than a year since ChatGPT was released, which is when most executives first heard of genAI.

Most companies still need to implement LLMOps capabilities and grow their in-house genAI expertise among business leadership and their data science teams. Companies with large, well-established data science teams, an AI center of excellence, and a track record of success with traditional machine learning have a head start. Such companies could have a small portfolio of successful genAI projects in production over the next year. It will likely take companies that lack these capabilities several years before they can meaningfully take advantage of the advancements in genAI.

What can we do to make money with genAI faster?

Generative AI, like traditional AI and machine learning technologies, does not make money auto-magically. Few of these use cases, or the genAI models that underpin them, can be outsourced, because your most differentiated and valuable use cases depend on your own data and requirements, and because of the challenges of operationalizing genAI. Instead, it will require in-house work and investment to identify and design the right use cases and to build the teams, processes, and platforms necessary to develop and operationalize the generative AI applications that will transform your business.

Organizations that have already invested in their AI capabilities have the advantage, and they are driving impact with genAI as we speak. If that doesn't sound like your organization, then it is time to play catch-up.

Kjell Carlsson is the head of AI strategy at Domino Data Lab, where he advises organizations on scaling impact with AI. Previously, he covered AI as a principal analyst at Forrester Research, where he advised leaders on topics ranging from computer vision, MLOps, AutoML, and conversation intelligence to next-generation AI technologies. Carlsson is also the host of the Data Science Leaders podcast. He received his Ph.D. from Harvard University.

Generative AI Insights provides a venue for technology leaders, including vendors and other outside contributors, to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld's technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.