Generative AI in Enterprise

Combining Traditional Models for Enhanced Outcomes

Dorian Smiley
7 min read · Oct 6, 2023

Disclosures: The opinions and ideas expressed in this article are solely my own. This article was written in my personal capacity. ~5% of this article was AI-generated. Unsourced claims are anecdotal based on my 20+ years as a software engineer and technology executive. I am one of two certified global Foundry ambassadors.

The Limits of Generative AI

The perceived limitations of Generative AI, such as occasional mathematical inaccuracies and high computational costs, often raise concerns about its scalability. Consider the following question:

“If we have to build tools for every type of computational analysis we need to do, or even queries against structured data sources, aren’t Generative AI solutions like agents and chat experiences just redundant?”

Unpacking this question, we find several assumptions:

  1. GenAI can’t do computational work as the results must be exact and repeatable. For example, you can’t get inconsistent or incorrect results when generating an income statement.
  2. Data must be secure, structured, and reliable. When answering questions, we can’t ask the model to pull from training data or untrusted sources.
  3. There must be an audit trail of the data used to support reconciliation. This is a requirement for most organizations today regarding things like financial reporting. GenAI-driven reports and insights can’t obfuscate the data trail.

After unpacking these requirements, we discover that we need traditional models and algorithms to meet the demands of the enterprise. Further, there is nuance to the questions we ask, since input data can look slightly different even within the same domain. Hence, in many cases, we adapt existing models and algorithms to fit the data rather than the other way around. For example, imagine you were tasked with decomposing revenue for carwash chains A and B and performing an impact analysis to determine the root causes of declining revenue. But Carwash A services consumers, and Carwash B services government agencies. You likely won't transform their data into a universal model because different features drive these businesses (universal data models are generally impossible for this reason). Instead, you will extract different features and build unique models for each business.
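To make that concrete, here is a minimal sketch of the "adapt the model to the data" approach. The feature names, the tiny synthetic datasets, and the choice of a simple linear model are all illustrative assumptions, not details from any real engagement:

# Hypothetical sketch: separate feature sets and models per business,
# rather than forcing both chains into one universal data model.
import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed (illustrative) feature sets -- they differ by business.
features = {
    "carwash_a": ["foot_traffic", "local_weather_index", "promo_spend"],  # consumer-facing
    "carwash_b": ["contract_count", "fleet_size", "renewal_rate"],        # government contracts
}

# Tiny synthetic datasets standing in for each chain's historical data.
rng = np.random.default_rng(0)
data = {name: (rng.random((24, len(cols))), rng.random(24)) for name, cols in features.items()}

# Fit a separate model per business instead of one shared model.
models = {}
for name, (X, y) in data.items():
    models[name] = LinearRegression().fit(X, y)
    print(name, "R^2 on training data:", round(models[name].score(X, y), 3))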

The conclusion seems to be that Generative AI is a last-mile add-on that adds little value at a lot more cost. However, this conclusion is false for several reasons. To help understand why, consider the following analogy:

“Think of it this way: in the old way, when we wanted a computer program to do something, we had to give it very detailed instructions, almost like a recipe. We had to specify exactly how to solve a problem in a particular situation. But with GenAI, we’re doing something different. Instead of giving specific instructions for every situation, we tell it the goal we want to achieve, and it figures out the best way to do it on its own.

This is like asking a chef to make a delicious meal without telling them exactly which ingredients to use or how long to cook each item. The chef knows the goal is to create a tasty dish and figures out the details themselves. This is very useful if you don’t know exactly what you want to eat, or what you want to eat is influenced by the day of the week, the weather, or what you had to eat last night. This is less useful if you know what you want to eat and simply want to vary the ingredients in a very precise way.”

Benefits of Generative AI

Generative AI helps us solve problems that require adaptive solutions. Like the chef who can make a delicious meal for a customer who doesn't know what they want to eat, taking environmental factors into account, Generative AI can produce valuable solutions to problems like fraud detection, threat and intrusion detection, clinical research, and even chip design, where adaptive problem-solving is required to find a solution. In the context of financial due diligence, this might include figuring out the likely financial impact on a business, based on forecast models and impact analysis, given changes in economic conditions, or the impact on a brand given ongoing litigation or consumer sentiment.

The need for adaptive problem-solving in complex problems can be further explained by considering traditional program logic. Software engineers are generally limited to branching on boolean values: false and true (0, 1). This is fine when the number of possible decisions is small. But as that number grows, the application becomes brittle: too rigid and inflexible to handle the actual number of pathways that need to be explored.

For example, imagine a software application with 16 different feature flags (true/false) that determine which experience a user receives when registering for a website (an actual use case I've encountered). Feature flags are ubiquitous in software engineering and serve here only as an illustrative example of scaled complexity. There are 2^16 = 65,536 unique combinations of those flags; a quick script (see the sketch below) makes the growth in complexity easy to see. That is well beyond the capacity of any person to reason about, much less to program a use case for every possible combination.
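Here is a minimal sketch of that combinatorial growth; the flag counts are the only inputs, and nothing here is specific to any real system:

# Every boolean feature flag doubles the number of possible configurations: 2**n.
from itertools import product

for n in (4, 8, 16):
    print(f"{n} boolean flags -> {2 ** n:,} unique combinations")

# Enumerating them explicitly (feasible only for small n) -- each tuple is
# one configuration a traditional program would have to handle.
small_space = list(product([False, True], repeat=4))
print("example configuration:", small_space[5])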

But consider the following prompt:

# OpenAI-style chat messages plus function (tool) definitions the model may call.
messages = [
    {"role": "system", "content": "You are a helpful AI assistant tasked with registering users"},
    {"role": "user", "content": "Using the provided functions, construct a registration experience for the following user: New user over 18 in San Francisco, CA, USA."},
]
functions = [
    {
        "name": "get_special_offers",
        "description": "Get the special offers available to the user before registration.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "age": {
                    "type": "integer",
                    "description": "The age of the user",
                },
            },
            "required": ["location", "age"],
        },
    },
    {
        "name": "register_user",
        "description": "Registers the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "firstName": {
                    "type": "string",
                    "description": "The first name of the user",
                },
                # ...more required properties
                "specialOffer": {
                    "type": "string",
                    "description": "The selected special offer",
                },
            },
            "required": ["firstName", ...],  # other required properties elided
        },
    },
]

This is an example of function calling, where the model reasons about which functions to invoke and the parameters to pass. The process can be driven via chat instead of rigidly coding an algorithm to register the user. When the model is missing a required parameter, such as the user's name, it will prompt the user to supply it. It can also reason about the order in which to call these functions, prompting the user to first select a special offer before completing registration. In essence, the model can explore the space of our 16 feature flags (65,536 combinations) using its neural net, which is adaptive to new information rather than rigid like if/then/else logic. While I'm not advocating you do this in production (it is likely cost prohibitive, which we'll discuss more below), it illustrates just how big a paradigm shift this is.
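For completeness, here is a hedged sketch of how the messages and functions defined above might be sent to a model and how the returned function call could be dispatched. It assumes the pre-1.0 openai Python client (current when this was written); the local get_special_offers and register_user implementations are stand-ins, not real services, and a valid API key is assumed to be configured:

# Sketch only: send the messages/functions above and dispatch whichever
# function the model decides to call.
import json
import openai  # pre-1.0 client, e.g. openai==0.28

def get_special_offers(location, age):  # stand-in implementation
    return ["free_wax"] if age >= 18 else []

def register_user(firstName, specialOffer=None, **kwargs):  # stand-in implementation
    return {"status": "registered", "firstName": firstName, "offer": specialOffer}

available = {"get_special_offers": get_special_offers, "register_user": register_user}

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=messages,
    functions=functions,
    function_call="auto",  # let the model decide which function to call, if any
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    result = available[name](**args)
    # In a real loop you would append `message` and the function result to
    # `messages` and call the model again until registration completes.
    print(name, "->", result)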

Adaptive problem-solving has applications across many domains, some of which are well suited to the economics of Generative AI. However, many use cases will fail to yield a positive return, so it’s important to do the math. For example, let’s compare the cost of a query of an inverted search index with that of a semantic search.

Complexities and Costs

An inverted search index is a data structure that maps each term to the documents (for example, web pages) that contain it, which makes searching for specific terms fast. When a user enters a search query, the search engine looks up the query terms in the inverted index to find the pages that contain them. These systems are extremely cheap to run.
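As a rough illustration (not how any particular engine implements it), an inverted index is essentially a map from each term to the documents containing it, so answering a query is a handful of dictionary lookups:

# Minimal inverted index sketch: term -> set of document ids.
from collections import defaultdict

docs = {
    1: "generative ai in the enterprise",
    2: "traditional models and algorithms",
    3: "enterprise search with traditional indexes",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    # Documents containing every query term (simple AND semantics).
    results = [index[t] for t in query.split()]
    return set.intersection(*results) if results else set()

print(search("enterprise traditional"))  # -> {3}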

Semantic search instead embeds documents and the user's query as vectors and ranks entries in the embeddings database by their distance from (or similarity to) the query embedding. This is how most Generative AI search systems work, and they are substantially more expensive to run. How much more expensive?
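A minimal sketch of the semantic-search side: embed the documents and the query as vectors and rank by cosine similarity. Real systems use a learned embedding model and a vector database; random vectors are used here only to keep the example self-contained:

# Semantic search sketch: rank documents by cosine similarity to the query embedding.
import numpy as np

rng = np.random.default_rng(42)
doc_embeddings = rng.normal(size=(1000, 384))  # 1,000 docs, 384-dim embeddings (stand-ins)
query_embedding = rng.normal(size=384)

def cosine_similarity(matrix, vector):
    return (matrix @ vector) / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vector))

scores = cosine_similarity(doc_embeddings, query_embedding)
top_k = np.argsort(scores)[::-1][:5]  # indices of the 5 closest documents
print("top documents:", top_k, "scores:", np.round(scores[top_k], 3))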

Morgan Stanley estimated that Google’s 3.3 trillion search queries last year cost roughly a fifth of a cent each, a number that would increase depending on how much text AI must generate. Google, for instance, could face a $6-billion hike in expenses by 2024 if ChatGPT-like AI were to handle half the queries it receives with 50-word answers, analysts projected.

Source here.

That’s roughly 10x more expensive and probably 60x slower than a traditional inverted search index. Until costs come down (which they are expected to do), it’s important not to rush into Generative AI use cases unless there are compelling, perhaps existential, reasons to do so (much like the ones Google was facing).
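Using only the figures above (roughly a fifth of a cent per traditional query and the ~10x multiplier), a quick back-of-the-envelope calculation shows how the economics scale with volume; the query volumes are arbitrary illustrations:

# Back-of-the-envelope cost comparison using the multipliers quoted above.
TRADITIONAL_COST_PER_QUERY = 0.002  # dollars, roughly a fifth of a cent
GENAI_MULTIPLIER = 10               # rough estimate from the text

for queries_per_day in (100_000, 10_000_000):
    traditional = queries_per_day * TRADITIONAL_COST_PER_QUERY
    genai = traditional * GENAI_MULTIPLIER
    print(f"{queries_per_day:,} queries/day: "
          f"traditional ~${traditional:,.0f}/day vs GenAI ~${genai:,.0f}/day "
          f"(delta ~${genai - traditional:,.0f}/day)")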

Conclusions

So when and how should an enterprise leverage Generative AI? While there’s no hard and fast rule to answer those questions, one possible framework is as follows:

  1. Assess the Complexity: Determine if the problem truly requires the nuances of Generative AI or if a traditional algorithmic approach can adequately handle it. Always ensure traditional algorithmic approaches have an interface for Generative AI (i.e., tools). The value of those tools will compound over time.
  2. Calculate the ROI: Understand the costs involved, both in terms of computing expenses and potential benefits. Will the added value from Generative AI offset its costs?
  3. Ensure Data Integrity: Generative AI models can produce outputs only as good as their inputs. Ensure that data feeding into these models is clean, accurate, and representative.
  4. Implement Safeguards: Since Generative AI can sometimes produce unpredictable results, it’s vital to have human oversight, especially in high-stakes situations. The ability to audit the inference trail is a must!
  5. Stay Updated: The field of AI is rapidly evolving. Regularly re-evaluate the viability and efficiency of your Generative AI solution as newer, more efficient models emerge.

To implement this framework, you will need a platform to secure and model your data, build algorithms and traditional models, and expose them to Generative AI models for use in adaptive problem-solving. Palantir’s Artificial Intelligence Platform (AIP) was purpose-built to make leveraging Generative AI in profitable enterprise use cases a reality within days of deployment. If you want to learn more about applying this framework and AIP, please contact me on LinkedIn. Thank you!


Dorian Smiley

I’m an early- to mid-stage startup warrior with a passion for scaling great ideas. The great loves of my life are my wife, my daughter, and surfing!