Neurosymbolic Programming for AI Agents
Embracing the Closed World Problem to Propel Specialization and the Broader Economy
The closed-world assumption in symbolic AI refers to the presupposition that any statement that is not known to be true in the system is considered false. This contrasts with the open-world assumption, where the absence of knowledge does not imply falsity. The closed-world assumption simplifies reasoning by limiting the scope of what the system needs to consider as possible truths, effectively treating the system's knowledge base as complete regarding the domain of interest.
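As a minimal sketch of the idea (the facts and names here are invented for illustration), a closed-world system treats its knowledge base as exhaustive, so a membership test doubles as a truth test:
\`\`\`
// Minimal sketch of the closed-world assumption; the facts are invented
// for illustration. The knowledge base is treated as complete, so any
// fact not present is considered false rather than unknown.
const knowledgeBase = new Set<string>([
  'supplier(acme, polymerX)',
  'supplier(globex, solventY)',
]);

function holds(fact: string): boolean {
  // Open world: absence would mean "unknown".
  // Closed world: absence means "false".
  return knowledgeBase.has(fact);
}

holds('supplier(acme, polymerX)'); // true
holds('supplier(acme, solventY)'); // false under the closed world, not "unknown"
\`\`\`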
The closed-world assumption can be highly beneficial in applied AI, particularly within specialized domains like manufacturing. It allows for creating focused, domain-specific models that operate under controlled assumptions, reducing the complexity of problem-solving and decision-making processes. This specialization and the ability to operate under predefined conditions can lead to more efficient and effective solutions for businesses facing supply constraints and disruptions.
Why it Matters
Consider a supply-constrained manufacturing business that must optimize its operations to minimize fines and maximize the fulfillment of orders. A neurosymbolic AI system, which combines the strengths of neural networks (for pattern recognition and prediction) with symbolic AI (for reasoning and rule-based decision-making), could be ideally suited for this challenge. The neural net can process real-time data from the manufacturing process (including unstructured data), predicting potential disruptions, supply issues, or changes in demand. The symbolic model can encode expert knowledge about the manufacturing process, supply chain logistics, constraints (such as those imposed by limited supply or regulatory requirements), and optimization goals directly in a form the system can reason about. This could include ready-made linear programming models and other algorithms that are represented symbolically and are used to solve for these types of constraints.
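To make "encoding constraints symbolically" concrete, here is a hedged sketch of what such a representation might look like; the type names and fields are illustrative assumptions, not part of any particular framework:
\`\`\`
// Illustrative sketch: constraints and objectives expressed as data the
// symbolic layer can reason over. Type names and fields are hypothetical.
type Constraint =
  | { kind: 'supplyLimit'; material: string; maxUnits: number }
  | { kind: 'regulatory'; rule: string }
  | { kind: 'dueDate'; orderId: string; by: string };

type Objective = { kind: 'minimizeFines' } | { kind: 'maximizeFulfillment' };

const constraints: Constraint[] = [
  { kind: 'supplyLimit', material: 'polymerX', maxUnits: 500 },
  { kind: 'dueDate', orderId: 'ORD-17', by: '2024-06-01' },
];

const objectives: Objective[] = [{ kind: 'maximizeFulfillment' }];

// A ready-made tool (e.g., a linear programming solver from the tools
// catalog) would consume these structures and return an optimized plan.
\`\`\`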
By combining these capabilities, neurosymbolic AI agents can dynamically adjust production schedules, prioritize orders, and allocate resources to optimize outcomes within the given constraints.
The benefits for businesses, especially in the manufacturing space dealing with supply constraints and disruptions, could be substantial:
- Improved Efficiency: By optimizing resource allocation and production schedules based on current constraints and objectives, businesses can make better use of available resources, reduce waste, and improve overall efficiency.
- Increased Agility: Neurosymbolic AI agents can quickly adapt to supply chain or demand changes, helping businesses respond more effectively to disruptions or opportunities.
- Enhanced Decision-Making: The symbolic component provides a transparent framework for encoding and reasoning about business rules, regulations, and objectives, supporting more consistent and explainable decision-making.
- Cost Reduction: Optimizing operations to minimize fines and maximize order fulfillment can lead to significant cost savings, especially in environments with high variability and risk of supply chain disruptions.
Building AI agents with this approach yields the following additional benefits for businesses:
- Reduced model training costs — An expert neurosymbolic model needs far less training data. Most of the system's complexity is encapsulated in the symbols, making these models much cheaper to train.
- Reduced error rates — Neurosymbolic programs have much lower error rates because the model reasons only about how to solve the problem, while well-tested symbolic tools carry out the actual computation. The high error rates associated with using language models for programming tasks essentially fall to near zero.
- Provide explainability — Traditional black-box models are not explainable, meaning a human operator cannot realistically check whether an answer is correct. That is not the case with symbolic AI and neurosymbolic programs: human operators can effectively "check the math" to ensure proposals generated by the agent are correct.
- Get to production fast — This approach allows companies to get to production quickly because it circumvents the challenges traditionally associated with assembling large amounts of training data and chasing the long tail of edge cases, which can take years to solve. It also provides explainability, allowing the SMEs who train the models to approve solutions and build reinforcement learning into the system (Augmented Intelligence).
Adopting neurosymbolic AI agents in production will enable businesses to offer goods and services at lower prices, propelling economic growth. Every day a workload isn't running in production is a day our economy continues to run on outdated modes of production. Companies must adopt a scalable framework for getting workloads to production. Doing so will secure your place in the future market and help contribute to a thriving, robust economy.
Solution Architecture
One solution for creating a closed world of neurosymbolic AI agents is to combine specialized neural networks (optimized for a GPU) with algorithms that expand the symbolic solution into a program that can be executed on a traditional CPU. By splitting the workload between GPU and CPU, we reduce cost and rate limits and benefit from both nondeterministic and deterministic systems.
The primary components of the solution architecture are:
- The Solver — The solver is an expert model for a particular domain of problems, such as order intake optimization, truck roll optimization, or inventory rebalancing. The model is trained on Q&A pairs maintained by the Subject Matter Experts (SMEs) who currently perform these functions.
- The Programmer — The programmer is an expert model designed to turn solutions the solver generates into a symbolic program the interpreter can run. This model is trained by software engineers who maintain the tools catalog. The tools catalog is also part of the model's training data; it describes the available tools, their purpose, how they can be invoked, and how to handle their outputs. Tools are traditional, well-tested algorithms maintained by the software engineering team and may include linear programming models, decision trees, systems orchestration, computation, or other software-related tasks.
- The Evaluator — The evaluator (currently) is an algorithm that attempts to compile the symbolic program and verifies that every function referenced in the program exists in the tools catalog. It issues a score between 0 and 1, which the surrounding application uses to determine whether a new program needs to be generated (a minimal sketch follows this list).
- The Interpreter — The interpreter comes in two flavors: headed and headless. The headless version powers agents, while the headed version can power testing and development or UI workflows like user registration, where neural nets replace traditional control flow (e.g., deciding whether to present a special offer or whether a user is allowed to register for the site).
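Here is the minimal evaluator sketch promised above. The function shape and scoring rule are illustrative assumptions, not the actual X-Reason implementation:
\`\`\`
// Illustrative evaluator sketch, not the actual X-Reason implementation.
// It flattens the symbolic program and scores the fraction of referenced
// functions found in the tools catalog.
type StateConfig = {
  id: string;
  type?: 'parallel' | 'final';
  states?: StateConfig[];
};

function evaluate(program: StateConfig[], toolsCatalog: Set<string>): number {
  const special = new Set(['success', 'failure']);
  const flatten = (states: StateConfig[]): StateConfig[] =>
    states.flatMap((s) => [s, ...(s.states ? flatten(s.states) : [])]);

  // States that must resolve to a tool (skip structural and final states).
  const checkable = flatten(program).filter(
    (s) => !special.has(s.id) && s.type !== 'parallel'
  );
  if (checkable.length === 0) return 0; // nothing to compile

  const found = checkable.filter((s) => toolsCatalog.has(s.id)).length;
  return found / checkable.length;
}

// The surrounding application regenerates the program when the score is
// below a threshold, e.g. if (evaluate(program, catalog) < 1) { ... }
\`\`\`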
The Solver and Programmer can use in-context learning (recommended for experimentation) or fine-tuning (recommended for production). One model architecture, the Mixture of Experts (MoE), can offer additional benefits because such models can be large yet still run efficiently. For example, the Mixtral model leverages a sparse MoE architecture to efficiently process large amounts of data, optimizing parameter usage for language translation and code generation tasks. This approach allows for 6x faster inference than comparable dense models while maintaining cost-effectiveness. By fine-tuning a MoE model, we can realize the same performance gains with experts in specific domains like supply chain optimization or inventory rebalancing. Check out this video if you'd like to see an example of fine-tuning Mixtral MoE.
Implementation
IMPORTANT: X-Reason is under development and is not for production use! A public alpha will begin in May 2024.
I have created a simple implementation of the above architecture called X-Reason. It is written in TypeScript and supports both headed and headless environments. The current work in progress contains a sample application written in Next.js. X-Reason includes no fine-tuned models; it uses in-context learning to train expert models. Below is a sample of the training data used for in-context learning in a chemical product engineering agent:
Solver
# Product Development
Q: What are the steps I have to take to create a new cosmetic product like lip balm, soap, eye shadow, shower gel, etc?
A: First, recall any existing solutions for a similar product. If an existing solution can be used, return it; otherwise, generate the ingredients list. Then perform an ingredients database search for relevant ingredients. After that, run regulatory checks and concentration estimation in parallel for the retrieved ingredients. Once those steps are complete, generate the product formula. Then have an expert review the generated formula. After that, perform lab testing. Then evaluate the complete tested formula. Next, generate the manufacturing instructions. Then have an expert review the generated manufacturing instructions. After that, conduct market research, then generate marketing claims. Finally, generate a product image.
...more Q&A pairs
Let's take this step by step.
1. To find the steps to take, find the questions in the knowledge base that best match the user query.
2. Create the task list based on the associated answer.
3. Output the steps to take, using only the provided knowledge base, as an ordered list. If the output of one step is used as input for another step, use the phrase "using the output of step x..."
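For reference, training data like this might be assembled into the solver's prompt roughly as follows; this is a hypothetical sketch, not X-Reason's actual prompt plumbing:
\`\`\`
// Hypothetical sketch of assembling SME-maintained Q&A pairs into the
// solver's in-context learning prompt; X-Reason's actual plumbing may differ.
type QAPair = { question: string; answer: string };

function buildSolverPrompt(knowledgeBase: QAPair[], userQuery: string): string {
  const pairs = knowledgeBase
    .map((p) => `Q: ${p.question}\nA: ${p.answer}`)
    .join('\n\n');
  return [
    '# Product Development',
    pairs,
    "Let's take this step by step.",
    '1. To find the steps to take, find the questions in the knowledge base that best match the user query.',
    '2. Create the task list based on the associated answer.',
    '3. Output the steps to take, using only the provided knowledge base, as an ordered list.',
    `User query: ${userQuery}`,
  ].join('\n\n');
}
\`\`\`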
Programmer
...full prompt omitted
Let's take this step by step:
1. For each step in the user query, identify the corresponding function in the functionCatalog.
1.1 Construct the StateConfig instance, setting the id attribute equal to the retrieved function id.
1.2 Add the transitions for the CONTINUE and ERROR events, with the target equal to the corresponding function in the functionCatalog.
2. If states are to be executed in parallel, be sure to use the type: 'parallel' state node (examples below).
Note: parallel state nodes are required to target their own success and failure nodes, and there must be an onDone attribute!
\`\`\`
{
  id: 'parallelChecks',
  type: 'parallel',
  states: [
    {
      id: 'RegulatoryCheck',
      transitions: [
        { on: 'CONTINUE', target: 'success' },
        { on: 'ERROR', target: 'failure' }
      ]
    },
    {
      id: 'ConcentrationEstimation',
      transitions: [
        { on: 'CONTINUE', target: 'success' },
        { on: 'ERROR', target: 'failure' }
      ]
    },
    {
      id: 'success',
      type: 'final'
    },
    {
      id: 'failure',
      type: 'final'
    }
  ],
  onDone: [
    { target: 'FormulationGeneration' }
  ]
},
\`\`\`
3. Ignore conditions like "if then else" statements
Please note the success and failure states do not appear in the function catalog as they are "special" final states required for all state machines.
Only respond in JSON using the Chemli DSL:
### Start DSL TypeScript definition ###
\`\`\`
export type StateConfig = {
  id: string;
  transitions?: Array<{
    on: string;
    target: string;
    actions?: string;
  }>;
  type?: 'parallel' | 'final';
  onDone?: Array<{
    target: string;
    actions?: string;
  }>;
  states?: StateConfig[];
};
\`\`\`
### End DSL TypeScript definition ###
All responses are sent to JSON.parse.
### Start Example ###
User Query:
1. Recall any existing solutions for a similar product.
2. Generate the ingredients list for a new product.
3. Perform an ingredients database search.
4. Run regulatory checks in parallel with concentration estimation.
5. Generate the product formula.
Answer:
[
  {
    id: 'RecallSolutions',
    transitions: [
      { on: 'CONTINUE', target: 'GenerateIngredientsList' },
      { on: 'ERROR', target: 'failure' }
    ]
  },
  {
    id: 'GenerateIngredientsList',
    transitions: [
      { on: 'CONTINUE', target: 'IngredientDatabase' },
      { on: 'ERROR', target: 'failure' }
    ]
  },
  {
    id: 'IngredientDatabase',
    transitions: [
      { on: 'CONTINUE', target: 'parallelChecks' },
      { on: 'ERROR', target: 'failure' }
    ]
  },
  {
    id: 'parallelChecks',
    type: 'parallel',
    states: [
      {
        id: 'RegulatoryCheck',
        transitions: [
          { on: 'CONTINUE', target: 'success' },
          { on: 'ERROR', target: 'failure' }
        ]
      },
      {
        id: 'ConcentrationEstimation',
        transitions: [
          { on: 'CONTINUE', target: 'success' },
          { on: 'ERROR', target: 'failure' }
        ]
      },
      {
        id: 'success',
        type: 'final'
      },
      {
        id: 'failure',
        type: 'final'
      }
    ],
    onDone: [
      { target: 'FormulationGeneration' }
    ]
  },
  {
    id: 'FormulationGeneration',
    transitions: [
      { on: 'CONTINUE', target: 'success' },
      { on: 'ERROR', target: 'failure' }
    ]
  },
  {
    id: 'success',
    type: 'final'
  },
  {
    id: 'failure',
    type: 'final'
  }
];
My symbolic language in this example isn't very "symbolic." It's an abbreviated DSL for XState. But a DSL in neurosymbolic programming can be just about anything. The v1 release of X-Reason will incorporate a different DSL loosely based on Bash.
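To make the XState connection concrete, here is a hedged sketch of how the sequential portion of a generated StateConfig array could be mapped onto an XState machine (assumes XState v4; parallel nodes would additionally need to be mapped onto XState regions with onDone, omitted here for brevity):
\`\`\`
import { createMachine } from 'xstate';

// Sketch only: maps sequential StateConfig entries onto an XState machine
// definition. Parallel nodes are omitted for brevity.
type StateConfig = {
  id: string;
  transitions?: Array<{ on: string; target: string }>;
  type?: 'parallel' | 'final';
};

function toMachine(program: StateConfig[]) {
  const states: Record<string, any> = {};
  for (const s of program) {
    states[s.id] =
      s.type === 'final'
        ? { type: 'final' }
        : {
            on: Object.fromEntries(
              (s.transitions ?? []).map((t) => [t.on, t.target])
            ),
          };
  }
  return createMachine({
    id: 'xreason',
    initial: program[0].id, // execution starts at the first generated state
    states,
  });
}
\`\`\`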
I've used GPT-4 for all testing and evaluation of the framework, but you are free to plug in whatever model you like, including your own!
To create domain-specific neurosymbolic AI agents using X-Reason, you will need to:
- Assemble your training data
- Build your tools catalog
- Create your agent
- Build your evals and tests (a minimal harness sketch follows this list)
- Deploy and monitor your agent
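As an illustration of the evals step, a minimal harness might replay a set of golden queries through program generation and score each result; the evaluate function and tools catalog here are the assumptions carried over from the earlier evaluator sketch, not X-Reason APIs:
\`\`\`
// Illustrative eval harness; evaluate(), StateConfig, and toolsCatalog are
// the hypothetical pieces from the earlier evaluator sketch.
const goldenQueries = [
  'Create a new lip balm product',
  'Reformulate an existing shower gel',
];

async function runEvals(
  generateProgram: (query: string) => Promise<StateConfig[]>,
  toolsCatalog: Set<string>
): Promise<void> {
  for (const query of goldenQueries) {
    const program = await generateProgram(query);
    const score = evaluate(program, toolsCatalog);
    if (score < 1) {
      // Flag for regeneration or SME review before deployment.
      console.warn(`"${query}" produced a program referencing unknown tools`);
    }
  }
}
\`\`\`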
If you like the project, please star it and subscribe to updates. The GA release is expected before the end of 2024.
Operating Platform: Palantir AIP
Palantir's AIP is the platform I use to deploy and manage my agents and to perform model fine-tuning and deployment (Model Ops). AIP includes an application called AIP Logic, which allows you to construct agents visually, manage their inputs and outputs, provide security and access controls, test them, and assign triggers to invoke your agent. AIP also sits on top of Palantir Foundry, which provides a rich data fabric known as the Ontology that enhances the ability of agents to reason, simulate, and perform actions. Having a ready-made platform explicitly designed for leveraging AI in production has enabled me to develop solutions much faster.
AIP Logic can use custom reasoning engines developed using X-Reason (or an alternate framework) in agent construction. AIP also provides the Model Ops (including feedback loops) to produce fine-tuned models such as Mixtral MoE, which are used in the custom reasoning engine.
If you'd like to learn more about Palantir AIP and Foundry, see my previous articles.
Further Reading
Conclusion
Neurosymbolic programs represent a major breakthrough in AI agent construction. Effectively leveraging this approach can unlock massive value for the enterprise and the broader economy. As AI agents gain widespread adoption, we will undoubtedly grapple with societal questions such as Universal Basic Income (UBI) and how young people can gain the skills needed to become SMEs. In future articles, we'll discuss some of those issues and how organizations can tackle them.
If you want to learn more about the topics discussed in this article, please reach out on LinkedIn. Thank you for reading!