A recent HBR article by Oguz Acar of King’s College London rejects the idea that prompt engineering is the hottest job of the future, arguing instead that problem formulation is a higher-order, more durable skill for the age of AI. I agree, but I think there is an even more immediate use for people who are great at framing a problem: data strategy.
Mastering Master Data
I recently hosted a series of Zero100 roundtable discussions with supply chain and IT leaders on the topic of working with tech partners for AI-enabled transformation. The main takeaway, beyond a broad rejection of “big bang” projects like those in the early days of ERP, was that good data is a prerequisite for exploiting AI. The ask seems reasonable, and yet the feeling was unanimous that tackling the data problem up front often ends up like an addition to your kitchen: it takes twice as long and costs three times as much as originally planned.
Maybe “master data” is simply too ambitious a concept for operations leaders looking to use AI to improve anything from forecasting and risk assessment to route optimization and network planning. A better approach might be to create data strategies that serve only the scope of a narrow problem statement, instead of trying to unify everything across the “end-to-end” supply chain process. This logic treats data as a servant to the problem you’re trying to solve, rather than the master your whole project follows.
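To make that concrete, here is a minimal sketch in Python of what a problem-scoped data contract might look like. The record fields and validation rule are hypothetical, chosen only to illustrate serving a single narrow problem (weekly demand forecasting) rather than an entire master data program.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, problem-scoped data contract: only the fields needed to
# forecast weekly demand for one product family. Anything not listed here
# (supplier records, freight invoices, CRM history) is deliberately ignored.
@dataclass
class DemandRecord:
    sku: str            # item identifier
    week_start: date    # start of the demand week
    units_sold: int     # observed demand
    promo_active: bool  # was the item on promotion?

def validate(record: DemandRecord) -> bool:
    """Cheap quality gate scoped to this one problem statement."""
    return record.units_sold >= 0 and bool(record.sku)

# Usage: reject bad rows at ingestion instead of cleansing the whole
# enterprise data estate up front.
rows = [DemandRecord("SKU-42", date(2023, 5, 1), 120, True),
        DemandRecord("", date(2023, 5, 8), -5, False)]
clean = [r for r in rows if validate(r)]
print(f"{len(clean)} of {len(rows)} rows usable")
```

The point of the contract is what it leaves out: every field you decline to govern is a field you don’t have to cleanse, reconcile, or wait for.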
Good data is so vital to automating processes, analytics, and even content generation that the highest skill is deciding what data to ignore. It is a paradox, but maybe also an unlock: in this time of exploding data, less is more.
What’s the Problem?
From 2018 to 2021 I worked for Amazon, where problem formulation underpins tech-enabled solutions like developing deep neural nets to forecast demand for new and slow-moving items, or optimizing local delivery routes to safely turn left across oncoming traffic rather than turn right to drive around the block. These examples are typical of the very specific problems we were trying to solve, which defined the “good” data needed to feed the solvers.
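As a toy illustration of that kind of framing (nothing like Amazon’s production systems), the sketch below casts intermittent demand forecasting as supervised learning on synthetic data, with a small feed-forward network standing in for the deep models mentioned above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic weekly demand for a slow-moving item: mostly zeros with
# occasional small orders (intermittent demand).
demand = rng.poisson(0.4, size=200).astype(float)

# Frame the forecasting problem narrowly: predict next week's demand
# from the previous 8 weeks, for this one item.
LAGS = 8
X = np.array([demand[i:i + LAGS] for i in range(len(demand) - LAGS)])
y = demand[LAGS:]

# A small feed-forward net stands in for the deep models mentioned above.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-20], y[:-20])  # train on all but the last 20 weeks
print("held-out MAE:", np.abs(model.predict(X[-20:]) - y[-20:]).mean())
```

Notice how narrow the data requirement is: one item’s weekly history. The problem statement, not a master data program, decides what to collect.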
Acar’s article helpfully boils it down to four essentials:
- Problem diagnosis: Identifying what’s wrong. At Amazon, as in Acar’s examples, this often involves asking the “five whys.” For many ops people, this is little different from root-cause analysis applied to failures in production or logistics.
- Problem decomposition: Breaking the problem into even simpler chunks. At Amazon, the phrase I heard a lot was “shrink the problem.” If, for instance, the big problem is supplier quality, the chunks might be supplier order confirmation, inbound receipt, and QA acceptance (see the sketch after this list).
- Problem reframing: A.k.a. thinking outside the box. Amazonians worked backwards from the customer experience, but for other businesses the reframe may be a matter of thinking about how commercial teams experience the problem or how workers might respond to a new process.
- Problem constraint design: Bounding the solution universe. In operations this includes safety and regulatory compliance, but the bounds ideally should not extend to budget constraints or even customer expectations. Amazon Prime, for example, is the outcome of a solution universe that respected physical constraints in logistics but left everything else open.
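Here is the decomposition sketch promised above: a hypothetical pass-rate scorecard for the three supplier-quality chunks. The shipment fields are invented for illustration; the point is that each chunk defines its own small, answerable data requirement.

```python
# Hypothetical decomposition of a "supplier quality" problem into the three
# chunks named above, each scored against its own narrow slice of data.
shipments = [
    {"id": 1, "confirmed_on_time": True,  "received_complete": True,  "qa_passed": True},
    {"id": 2, "confirmed_on_time": False, "received_complete": True,  "qa_passed": True},
    {"id": 3, "confirmed_on_time": True,  "received_complete": False, "qa_passed": False},
]

CHUNKS = {
    "order confirmation": "confirmed_on_time",
    "inbound receipt":    "received_complete",
    "QA acceptance":      "qa_passed",
}

# Score each chunk separately: a low score tells you which sub-problem
# (and therefore which data set) deserves attention first.
for name, field in CHUNKS.items():
    rate = sum(s[field] for s in shipments) / len(shipments)
    print(f"{name}: {rate:.0%} pass rate")
```

Shrinking the problem this way also shrinks the data strategy: three small, well-defined tables beat one sprawling supplier master.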
Data Is Still King
Each of these four steps helps identify and prioritize the data needed to use analytics to solve a problem. This approach is fundamental to the way Amazon built its SCOT (supply chain optimization technology) organization and incrementally scaled AI-enabled systems from sourcing to doorstep. It is also how Unilever got a grip on deforestation in its palm oil supply chain, and how companies like AGCO and Walmart have tackled supply chain problems of their own.
Frame the problem, then ask what data you need to solve it.
Source: https://www.forbes.com/sites/kevinomarah/2023/06/08/data-strategies-start-with-defining-what-problem-you-want-to-solve/