Engaging Consumers in a Generative AI World – BCG

Integrating a third-party LLM-powered virtual assistant with a plug-in or other API is the quickest and easiest option for reaching new customers in a generative AI world. The use of platforms to offer services is a proven way for companies to engage with a large and established customer base, one that appreciates having a wide variety of services accessible from a single location. Although conversational AI (such as chatbots) still has significant ground to make up compared with established platforms like WeChat and Amazon, the novelty of the experience is driving customer engagement. And that engagement is accelerating, at record pace, the three powerful flywheels that drive platform success: scale, learning, and network effects. The success of the platform is also likely to drive the success of companies on the platform. (See Exhibit 2.)

Scale Effect. The cost of large, generalized models (the models most likely to be used for virtual assistants, given their broad functionality and superior conversational ability) is notoriously high. (See Building a Best-in-Class Large Language Model.) But we expect that LLM providers will be able to spread their substantial R&D and running costs over what will be a large user base, giving them valuable economies of scale. As a result, companies that want to engage customers through virtual assistants can do so without building the models themselves.

The total cost to build an LLM depends on the size, complexity, and capability required. Training a large, general-purpose LLM (like GPT-4) can range from $30 million to $100 million and up. Building an industry-specific LLM (like BloombergGPT) can cost between $10 million and $50 million and up, depending on the level of complexity.

Building a small, single-task model is often more cost-effective, ranging from $100,000 to $5 million and up, depending on the complexity of prepping the data and the functional requirements of the desired task. For example, a well-known regional bank trained a small, task-specific language model for internal loan adjudication purposes and spent roughly $150,000 to $200,000 end-to-end for its foundation model implementation.

In contrast to building a model, modifying (for example, fine-tuning) an existing model is the most affordable option, ranging from $10,000 to $100,000 and up.

The key ingredient for training or fine-tuning these models is access to high-quality proprietary data. The data also needs to be cleaned, sometimes labeled (for particular use cases), and ideally anonymized before it is used to fine-tune or train an LLM. This is no small ask: BloombergGPT was trained on a massive 363-billion-token dataset drawing on Bloomberg's extensive, pre-existing financial data (which includes proprietary Bloomberg data), the FinPile dataset (a compilation of financial documents from the Bloomberg archives), and external sources such as press reports.
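To make the cleaning and anonymization step concrete, here is a minimal sketch of what preparing a single raw record might look like. The function name and the regex patterns are illustrative assumptions; a production pipeline would rely on a vetted PII-detection tool rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- a real anonymization pipeline would use a
# vetted PII-detection library, not a couple of hand-written regexes.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def prepare_record(text: str) -> str:
    """Clean and anonymize one raw document before fine-tuning."""
    text = " ".join(text.split())      # normalize whitespace
    text = EMAIL.sub("[EMAIL]", text)  # mask email addresses
    text = PHONE.sub("[PHONE]", text)  # mask phone-number-like strings
    return text

raw = "Contact  John at john.doe@example.com   or 555-123-4567."
print(prepare_record(raw))
# Contact John at [EMAIL] or [PHONE].
```

Labeling, deduplication, and tokenization would follow the same pattern: each record passes through a pipeline of small, auditable transformations before it ever reaches the training run.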

Learning Effect. The excitement surrounding generative AI is encouraging users to experiment with applications such as ChatGPT and Bard. Both chatbots have benefited from the learning effect (also known as the direct network effect) generated by this surge in experimentation: they improve as more people use them. For companies that decide to offer services through an established platform, this learning effect provides a significant advantage: they'll have access to a superior user experience and best-in-class conversational interfaces.

Same-Side and Cross-Side Network Effect. As more companies join LLM platforms, consumers will find greater value there and more of them will join (a cross-side network effect); a growing user base then makes the platform more attractive to other users (the same-side network effect), which in turn drives still more companies to integrate their services with the platform. These network effects present a significant opportunity for companies to engage a wide user base and attract high volumes of customers.
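The compounding described above can be illustrated with a toy model: same-side value grows with the square of the number of users (Metcalfe-style), while cross-side value grows with the product of users and participating companies. The coefficients and functional form here are illustrative assumptions, not estimates.

```python
def platform_value(consumers: int, companies: int,
                   same_side: float = 0.5, cross_side: float = 1.0) -> float:
    """Toy model of platform value (illustrative coefficients only).

    Same-side term: consumers attract consumers, scaling with consumers**2.
    Cross-side term: each consumer-company pairing adds value, scaling
    with consumers * companies.
    """
    return same_side * consumers ** 2 + cross_side * consumers * companies

# Doubling the number of participating companies raises the platform's
# value for every consumer, which in turn draws in more consumers.
v_before = platform_value(consumers=1000, companies=50)
v_after = platform_value(consumers=1000, companies=100)
print(v_after > v_before)  # True
```

The point of the sketch is only the shape of the curve: each side's growth raises the value of joining for the other side, which is why early scale on these platforms tends to compound.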

Many companies today are concerned about the operational risks of using an LLM's interface. For example, providing services through an LLM-powered virtual assistant could potentially expose a company's proprietary data to the LLM vendor. However, many of these risks can be mitigated with technology implementations and vendor contracting.

But companies also face strategic risks that may not currently be on their radar. One key risk, commoditization from intermediation, emerges when an intermediary between a company and its customers reduces emphasis on the company's unique selling points. Much like search engines, virtual assistants will have to prioritize which services are displayed to the customer and can take commissions on sales. The result is often lower margins and standardization of services, making brand recognition and promotion of premium offerings more difficult. This risk grows as more companies join the platform. How an LLM-powered virtual assistant will select (or help the customer select) one company's service or product from a list of similar offerings remains an open question, putting companies at higher risk of commoditization.

There is also an inherent risk in relying too heavily on a third-party sales channel. This risk is illustrated by the vacation planning example above. When a customer books through a third-party virtual assistant rather than with the airline or hotel chain that provides the actual service, the virtual assistant provider controls the engagement logs and how services are selected, and heavily influences customer buying behavior. As a result, companies could lose direct connections with customers and the critical engagement data that enables them to build brand loyalty and cultivate ongoing customer relationships.

Companies that have access to valuable, domain-specific, proprietary data may choose to double down on their competitive advantage: creating their own LLM-driven customer experiences with generative AI. The tradeoff is typically the homegrown user experience, which must compete with LLM-powered virtual assistants whose providers are pouring resources into optimizing human engagement. Specialized models designed in-house need to be user-friendly enough to support their customer offerings and encourage customers to return.

The good news is that many small models, such as Alpaca (a 7-billion-parameter language model created at Stanford University) and Dolly (a 12-billion-parameter language model created by Databricks), are not as cumbersome and costly to customize as the larger models required for the more expansive virtual assistants. And creating specialized models with proprietary data, for example through fine-tuning or retraining, can provide superior performance on a specialized task. The better the data, the better the model performs the task the data relates to, though possibly at the cost of broader language capabilities.

It is also possible to add functionality and value to raw data by adding a layer of analysis. BloombergGPT (a 50-billion-parameter language model), for example, outperformed general-purpose models on highly specific financial tasks, such as financial risk assessment.

Companies that choose to create their own customized experiences can maintain exclusive access to their valuable, proprietary data and ensure it remains secure. In-house control allows companies greater flexibility to create unique functionalities and user experiences without depending on another company's technical roadmap. In the case of BloombergGPT, the user gets more refined and accurate financial data, and in return, Bloomberg gets more tailored user-interaction data that can be used to continuously update its LLMs.

When companies keep direct access to their customer base, they can benefit from the rich data gleaned from customer engagement. This allows companies to better understand their customers and cultivate stronger, mutually beneficial relationships. It also strengthens companies' ability to build customer trust by providing a sense of security and confidentiality, while promoting their brand name. For more sensitive interactions, such as viewing a bank statement, this is particularly valuable; consumers typically prefer to use a service offered directly by the bank itself.

The obvious operational risk of this option is the simple fact that investing in in-house capabilities can be cost prohibitive. But companies don't need to take the most expensive approach and build from scratch: they can fine-tune free, open-source models or bring in someone else's model and incorporate it into their own website.
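One reason the "bring in someone else's model" option is cheap to start with is architectural: if the website's code depends only on a narrow chat interface, a fine-tuned open-source model and a vendor API client are interchangeable behind it, and the engagement data stays in-house. The sketch below illustrates that pattern; every name in it (ChatModel, CannedModel, handle_chat_request) is hypothetical, and the stand-in model just echoes the prompt where a real implementation would invoke a model.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Narrow interface the site depends on; any backing model will do."""
    def reply(self, prompt: str) -> str: ...

class CannedModel:
    """Stand-in for a fine-tuned open-source model or a vendor API client.
    A real implementation would call the actual model here."""
    def reply(self, prompt: str) -> str:
        return f"[assistant] You asked: {prompt}"

# Because the channel is in-house, the engagement log stays with the company.
engagement_log: list[dict] = []

def handle_chat_request(model: ChatModel, user_message: str) -> str:
    response = model.reply(user_message)
    engagement_log.append({"user": user_message, "assistant": response})
    return response

print(handle_chat_request(CannedModel(), "What are your opening hours?"))
```

Swapping the backing model then touches one class, not the site, which keeps the company's options open as the model landscape shifts.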

Leaders also need to consider the less-obvious strategic risks. For one, they'll need to keep up with the requirements to build and maintain best-in-class capabilities in-house. (See Building a Best-in-Class Large Language Model.) Specialized models need good enough functionality and usability to attract and retain customers. But the definition of "good enough" will evolve alongside the experiences offered by best-in-class models and platforms. And the data science and engineering talent needed to manage these models is currently a scarce resource.

In addition, the R&D necessary to maintain a best-in-class model likely won't be feasible for most companies, as LLM research becomes more proprietary. Making that task more difficult is the fact that some best-in-class model providers don't allow companies to customize the model for their own purposes.

Companies that choose this option also risk missing out on a critical customer engagement channel. If companies don't put any of their services on a popular LLM-powered virtual assistant, they could become alienated from their customer base, many of whom may have grown accustomed to using that assistant instead of visiting the company's website.

The generative AI world is one of constant motion, making it challenging to track how the market dynamics are evolving. It may be tempting to integrate an LLM's plug-in today, no questions asked. And for some companies (for instance, those with a small market share, a small customer base, low-quality data or no access to strong proprietary data, or no standout user experience) this will be a smart strategic move.

But with every benefit comes a risk. And companies with a strong customer base and unique offering may be better served by maintaining control of their user experience and providing a virtual assistant service in-house.
