AI is in trouble. Both of its two main flavors, generative AI and predictive AI, face crippling limitations that compromise their ability to realize value.
The solution? GenAI helps predictive AI and vice versa.
GenAI’s problem is reliability. For example, while almost three-quarters of lawyers plan to use genAI for their work, their AI tools hallucinate at least one-sixth of the time.
Predictive AI’s problem is that it’s hard to use. While it has enjoyed decades of success improving large-scale business operations, it still realizes only a fraction of its potential because its deployment demands a semi-technical understanding from stakeholders.
These two flavors of AI – strictly speaking, two categories of use cases of machine learning – are positioned to solve one another’s problems. Here are five ways they can work together.
1. Predictive Intervention For GenAI
Predictive AI has the potential to do what might otherwise be impossible: Realize genAI’s bold, ambitious promise of autonomy – or at least a great deal of that often overzealous promise. By predicting which cases require a human in the loop, predictive AI can give an otherwise unusable genAI system the trust needed to unleash it broadly.
For example, consider a question-answering system based on genAI. Such systems can be quite reliable when they only need to answer questions drawn from several pages’ worth of knowledge, but performance comes into question for more ambitious, wider-scoped systems. Let’s assume the system is 95% reliable, meaning users receive false or otherwise problematic information 5% of the time. Often, that’s a deal-killer; it’s not viable for deployment.
The solution is predictive intervention. If predictive AI flags for human review the, say, 15% of cases most likely to be problematic, this might decrease the rate of problematic content reaching customers to an acceptable 1%.
(Figure: a generative AI system with predictive intervention that is 99% reliable.)
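To make the mechanics concrete, here is a minimal Python sketch of predictive intervention; the gradient-boosted risk model, the synthetic data and the 15% review budget are all illustrative assumptions, not a prescribed implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative stand-in data: features describing each genAI response
# (e.g., prompt length, retrieval overlap) and labels from past human review.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + rng.normal(size=1000) > 1).astype(int)  # 1 = problematic
X_new = rng.normal(size=(200, 5))

# Train a risk model that predicts which genAI responses are likely problematic.
risk_model = GradientBoostingClassifier().fit(X_train, y_train)
risk_scores = risk_model.predict_proba(X_new)[:, 1]

# Route the riskiest 15% of responses to a human; the rest go out automatically.
threshold = np.quantile(risk_scores, 0.85)
needs_human_review = risk_scores >= threshold
print(f"{needs_human_review.mean():.0%} of responses flagged for review")
```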
For more information, see this Forbes article, where I cover this approach in greater detail.
The remaining four ways to hybridize predictive and generative AI each help in the opposite direction: genAI making predictive AI easier and more accessible.
2. Chatbot Assistant For Predictive AI
Anyone can use genAI, since it’s trained to respond to human-language prompts, but predictive AI isn’t readily accessible to business users in general. To use it, a business professional needs the assistance of data scientists as well as a semi-technical understanding of how ML models improve operations. Since this understanding is generally lacking, most predictive AI projects fail to deploy – even when there are data scientists on hand.
An AI chatbot does the trick. With the right configuration, it puts into the hands of the business user a virtual, plain-spoken data scientist that helps guide the project and answers any question about predictive AI in general. It serves as an assistant and thought partner that elucidates, clarifies and suggests, answering endless questions (without the user ever fearing they’re pestering, overtaxing or asking “stupid questions”).
For example, for a project targeting marketing with predictive AI, I asked a well-prompted chatbot (powered by Anthropic’s Claude 3 Sonnet large language model) to explain the profit curve “for a 10-year-old using a story.” It responded with a charming, easily understood description of the diminishing returns you face when marketing your lemonade stand.
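For readers who want to see what “the right configuration” can look like in practice, here is a minimal sketch using Anthropic’s Python SDK; the system prompt wording and the model identifier are my own illustrative assumptions, not the exact setup described above.

```python
import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY in the environment

client = anthropic.Anthropic()

# An illustrative system prompt that casts the LLM as a plain-spoken data-science
# assistant for predictive AI projects (not the author's exact configuration).
SYSTEM_PROMPT = (
    "You are a friendly data scientist helping a business professional plan a "
    "predictive AI project. Explain concepts such as lift, profit curves and "
    "deployment in plain language, and never assume a technical background."
)

response = client.messages.create(
    model="claude-3-sonnet-20240229",  # assumed model ID; substitute the current one
    max_tokens=500,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user",
               "content": "Explain the profit curve for a 10-year-old using a story."}],
)
print(response.content[0].text)
```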
For more information, see this Forbes article, where I cover this use of a chatbot in greater detail.
3. Coding For Predictive AI
Crazy story. Although I’ve been a data scientist for more than 30 years, the thought leadership side of my career “distracted” me from hands-on practice for so long that, until recently, I had never used scikit-learn, which has become the leading open source solution for machine learning.
But now that we’re in the genAI age, I found getting started extremely easy. I simply asked an LLM, “Write Python code to use scikit-learn to split the data into a training set and test set, train a random forest model and then evaluate the model on the test set. For the training data, load a (local) file “XYZ.csv”. The dependent variable is called “isFraud”. Include clear comments on every line. Make sure your code can be used within Jupyter notebooks and be sure to include any necessary “import” lines.”
It worked. Moreover, the code it generated served as a tutorial for various uses, without my needing to pore over any documentation about scikit-learn (boring!).
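For reference, code produced by such a prompt typically looks something like the sketch below; this is a reconstruction under the prompt’s own assumptions (a local file “XYZ.csv” with an “isFraud” column), not the LLM’s verbatim output.

```python
# Import the libraries needed for data handling, modeling and evaluation.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Load the local training data file.
data = pd.read_csv("XYZ.csv")

# Separate the predictors from the dependent variable, "isFraud".
X = data.drop(columns=["isFraud"])
y = data["isFraud"]

# Split the data into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Train a random forest model on the training set.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate the model on the held-out test set.
predictions = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```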
For more information, this approach will be covered by a Machine Learning Week training workshop, “Automating Building of Predictive Models: Predictive AI + Generative AI,” to be held on June 5, 2025.
4. Generating Predictive Features
Since LLMs are well suited to processing human language – the domain of natural language processing, often described as handling unstructured data – they may outperform standard machine learning methods for certain language-heavy tasks, such as detecting misinformation or gauging the sentiment of online reviews.
To create a proof-of-concept, we tapped a Stanford project that tested various LLMs on various benchmarks, including one that gauges how often a model can establish whether a given statement is true or false. Under certain business assumptions, the resulting detection capabilities proved valuable, as I detailed in this Forbes article.
More generally, rather than serving as a complete predictive model, an LLM may better serve as a way to perform feature engineering – turning unstructured data fields into features that can serve as input to a predictive model. For example, Dataiku does this, allowing the user (typically, a data scientist) to select which LLM to use, and what kind of task to perform, such as sentiment analysis. As another example, Clay derives new model inputs from across the web with an LLM. For decades, NLP has been applied to turn unstructured data into structured data that can then be used by standard machine learning methods. LLMs serve as a more advanced type of NLP for this purpose.
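As a minimal sketch of this feature-engineering pattern: the hypothetical llm_sentiment_score helper below stands in for whatever LLM call you would actually make (it returns a placeholder value so the example runs offline), and its output becomes one numeric feature among the inputs to a standard predictive model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def llm_sentiment_score(text: str) -> float:
    """Hypothetical helper: ask an LLM to rate sentiment from 0 (negative) to 1 (positive).
    In practice this would wrap a provider API call; a constant is returned here as a
    placeholder so the sketch runs without an API key."""
    return 0.5

# Illustrative data: each row mixes structured fields with an unstructured review.
reviews = pd.DataFrame({
    "purchase_amount": [40.0, 12.5, 99.0],
    "review_text": ["Loved it", "Arrived broken", "Okay, but slow shipping"],
    "churned": [0, 1, 1],
})

# Feature engineering: turn the unstructured text into a numeric feature via the LLM.
reviews["sentiment"] = reviews["review_text"].apply(llm_sentiment_score)

# Feed the engineered feature, alongside structured fields, into a predictive model.
X = reviews[["purchase_amount", "sentiment"]]
y = reviews["churned"]
model = RandomForestClassifier(random_state=0).fit(X, y)
```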
5. Large Database Models
Even as LLMs have been making a splash, another incoming AI wave has been quietly emerging: large database models.
LDMs complement LLMs by capitalizing on the world’s other main data source: enterprise databases. Rather than tapping the great wealth of human writing such as books, documents and the web itself—as LLMs do—LDMs tap a company’s tabular data.
Swiss Mobiliar, Switzerland’s oldest private insurance company, put LDMs to use to drive a predictive AI project. Their system tells sales staff the odds of closing a new client so that they can adjust their proposed insurance quotes accordingly. The deployed system delivered a substantial increase in sales.
Swiss Mobiliar will present these results at Machine Learning Week 2025. For further detail, see also my Forbes article on large database models.
Hybrid AI As An Antidote To AI Hype
Predictive AI and genAI need one another. Marrying the two will solve their respective problems, expand the ecosystem of tools and approaches available to AI practitioners and reunite what is now a siloed field into a more cohesive whole.
But perhaps most important of all, these hybrid approaches will place AI value above AI hype by turning the focus to project outcome rather than treating any one technical approach as a panacea.
In a few weeks, I’ll deliver a keynote address on this topic, “Five Ways to Hybridize Predictive and Generative AI,” at Machine Learning Week, June 2-5, 2025 in Phoenix, AZ. Beyond my keynote, the conference will also feature an entire track of sessions covering how organizations are applying such hybrid approaches. You can also view the archive of a presentation on this topic that I gave at this online event.
Source: https://www.forbes.com/sites/ericsiegel/2025/05/15/5-ways-to-hybridize-predictive-ai-and-generative-ai/