A few months ago, I was standing inside a spice factory when a client paused mid-conversation and asked something I didn’t expect.
“Be honest… AI can’t smell spices, right?”
It wasn’t a joke. And it wasn’t casual.
Around us were sacks of raw material, forklifts moving between aisles, and that unmistakable smell that only comes from herbs and essential oils handled at scale. This wasn’t a boardroom discussion about technology trends. This was the place where quality decisions turn into revenue—or losses.
That single question summed up a much deeper concern. Because in this business, smell isn’t poetic. It’s commercial. It’s the difference between acceptance and rejection, between smooth dispatch and an uncomfortable call from a major FMCG client.
And suddenly, the conversation about AI became very real.
Why the Herbs, Spices, and Essential Oils Industry Is Harder Than It Looks
From the outside, the process looks simple. Raw material comes in. It’s stored, processed, tested, and shipped.
From the inside, nothing behaves the same twice.
The same supplier can deliver very different outcomes across batches. Moisture levels shift. Oil yield varies. Seasonal factors quietly interfere. Storage conditions that look “fine” on paper still influence quality in ways that only show up later.
FMCG clients, of course, don’t care about the reasons. They care about consistency.
What makes this industry especially demanding is that variability is built into it. You’re dealing with natural materials, environmental sensitivity, and strict downstream expectations—all at the same time.
This is why experience matters so much here.
And also why experience alone starts to strain as scale increases.
The Hidden Cost of Seeing Problems Too Late
Most quality issues don’t announce themselves early.
They surface after processing has started. Sometimes after value has already been added. Often when options are limited.
By then, teams are no longer deciding calmly. They’re reacting. Phone calls begin. Escalations follow. Everyone is trying to explain why something happened instead of asking what can still be done.
From what I’ve seen, this is where operational stress quietly builds up. Not because teams aren’t capable, but because they’re always arriving a little late to the problem.
When data lives across ERP systems, Excel files, inspection reports, and people’s heads, early visibility becomes very hard to maintain consistently.
The cost isn’t just wastage or rework.
It’s decision fatigue.
It’s audit anxiety.
It’s the feeling of always catching up.
When Experience Starts Working Against You
This part often gets misunderstood.
Experience is invaluable in factories like these. It keeps operations running when things go wrong. But experience, by nature, is reactive.
People recognize patterns after they’ve repeated enough times. They sense issues once something feels “off.” At smaller scales, that works. At larger scales—with multiple suppliers, warehouses, and batches—it becomes risky.
What often goes unsaid is how much pressure falls on a few experienced individuals. They become the unofficial early-warning system. Everything runs through them. That’s not scalable.
Over time, this creates bottlenecks and blind spots. Not because anyone is failing, but because humans can’t continuously track hundreds of subtle signals at once.
That’s when the conversation needs to shift—not away from experience, but toward supporting it better.
The Real Shift: From Reacting Late to Seeing Early
At some point during that factory conversation, the topic shifted quietly.
Not to tools.
Not to dashboards.
But to timing.
Most businesses don’t struggle because they make bad decisions. They struggle because they make late decisions.
By the time a quality issue becomes obvious, the range of possible actions has already narrowed. Options disappear quickly in operations. What’s left is damage control.
Accuracy wasn’t the real problem.
Timing was.
Seeing something slightly early—even imperfectly—is often more valuable than knowing something perfectly when it’s already too late. That insight changed the direction of the solution entirely.
The goal wasn’t prediction for the sake of prediction.
It was early awareness.
What We Actually Built (Without Turning This Into a Tech Lecture)
We didn’t set out to build “an AI system.”
What we built was a foundational intelligence layer that sits across operations and quietly watches what humans can’t track consistently at scale.
At a high level, it did three things:
- Brought together signals that were already present but scattered
- Looked for patterns across time, suppliers, and conditions
- Flagged risk early enough for people to act calmly
Supplier data, batch characteristics, warehouse conditions, quality outcomes—all of it lived in one place, connected in context.
This wasn’t about replacing existing systems. It was about connecting the dots between them.
Once that foundation was in place, conversations changed. They stopped starting with “What went wrong?” and started with “This looks risky—what should we do next?”
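To make the idea concrete without turning it into a lecture: the heart of that layer is simply joining signals that already exist and turning them into an early, explainable flag. The sketch below is illustrative only — the field names, thresholds, and `risk_flag` logic are hypothetical stand-ins, not the actual system.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical batch record combining signals that previously lived
# in separate places: ERP, inspection reports, warehouse logs.
@dataclass
class Batch:
    batch_id: str
    supplier: str
    moisture_pct: float          # from the inspection report
    warehouse_humidity: float    # from the warehouse log
    supplier_reject_rate: float  # from ERP history

def risk_flag(b: Batch) -> Optional[str]:
    """Combine scattered signals into one early, explainable flag.
    Thresholds here are made up for illustration."""
    reasons = []
    if b.moisture_pct > 12.0:
        reasons.append(f"moisture {b.moisture_pct}% above 12% threshold")
    if b.warehouse_humidity > 65.0:
        reasons.append(f"warehouse humidity {b.warehouse_humidity}% is high")
    if b.supplier_reject_rate > 0.10:
        reasons.append(f"supplier reject rate {b.supplier_reject_rate:.0%}")
    if reasons:
        return f"Batch {b.batch_id} looks risky: " + "; ".join(reasons)
    return None

batch = Batch("B-1042", "SupplierA", 13.1, 58.0, 0.14)
print(risk_flag(batch))
```

Nothing about this is clever on its own; the value comes from the signals living in one place, in context, early enough to matter.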
Why AI Agents Worked Better Than One Big Model
This is where many AI projects quietly fail.
They try to build one large, all-knowing model that’s supposed to answer everything. In operations, that usually creates confusion rather than clarity.
Instead, we used specialized AI agents, each focused on a very specific operational question.
- One agent observed supplier behavior over time
- Another focused on quality confidence before processing
- A third correlated warehouse conditions with batch sensitivity
- A fourth helped prioritize production sequencing
Each agent had a narrow job. Each explained why something was flagged. And none of them acted without human input.
That design choice mattered more than any algorithm. It made the system understandable. And because it was understandable, teams trusted it.
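The shape of that design is easier to see in miniature. In the sketch below — again illustrative, with invented names and thresholds — each "agent" answers one narrow question and returns a flag with its reason; none of them acts, because humans decide what happens next.

```python
from typing import Callable, List, Optional, Tuple

# A finding is (what was flagged, why it was flagged).
Finding = Tuple[str, str]
Observation = dict  # signals pulled from the shared data layer

def supplier_agent(obs: Observation) -> Optional[Finding]:
    # Narrow job: watch supplier behavior over time.
    if obs.get("supplier_late_batches", 0) >= 3:
        return ("supplier risk", "3+ late batches in recent deliveries")
    return None

def warehouse_agent(obs: Observation) -> Optional[Finding]:
    # Narrow job: correlate storage conditions with batch sensitivity.
    if obs.get("humidity", 0) > 65 and obs.get("batch_sensitivity") == "high":
        return ("storage risk", "high humidity with a humidity-sensitive batch")
    return None

def run_agents(obs: Observation,
               agents: List[Callable[[Observation], Optional[Finding]]]
               ) -> List[Finding]:
    """Collect flags with explanations; nothing here takes action."""
    return [f for agent in agents if (f := agent(obs)) is not None]

obs = {"supplier_late_batches": 4, "humidity": 70, "batch_sensitivity": "high"}
for what, why in run_agents(obs, [supplier_agent, warehouse_agent]):
    print(f"Flagged {what}: {why}")
```

Because every flag arrives with a plain-language reason, experienced teams can agree or disagree with it — which is exactly what made the system trustworthy.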
What Changed on the Ground
The biggest change wasn’t visible on a screen.
It showed up in how the day unfolded.
Fewer surprise calls.
Fewer last-minute scrambles.
Fewer “how did this happen?” conversations.
Quality teams started prioritizing inspections differently. Warehouse teams acted before conditions became problems. Leadership got alerts instead of escalations.
Over time, the numbers reflected what people already felt:
- Around 28% fewer batch rejections
- Nearly 35% reduction in wastage
- Calmer audits
- More predictable operations
But the most important change was subtle.
Decisions slowed down—in a good way. There was space to think again.
What This Taught Us About AI in Operations
Most AI initiatives in factories don’t fail because the models are bad. They fail because they’re built too far away from reality.
Factories don’t need clever predictions no one understands. They need signals they can trust, early enough to matter.
Accuracy helps. Adoption matters more.
Explainability matters more.
Respecting how experienced teams think matters most of all.
When AI is introduced as a silent partner—not an authority—it gets used. When it tries to replace judgment, it gets ignored.
That difference decides whether a system survives beyond a pilot.
This Isn’t Just About Spices
While this story comes from the herbs, spices, and essential oils industry, the pattern is broader.
Any business dealing with variability, environmental sensitivity, multi-step processing, and strict downstream expectations will recognize the same pressures.
Food processing.
Chemicals.
Pharma raw materials.
Manufacturing with tight tolerances.
Logistics-heavy operations.
The details change. The stress feels familiar.
Coming Back to That Question
“AI can’t smell spices, right?”
That question wasn’t really about AI’s limits. It was about trust.
People wanted to know whether technology could respect the nuance they’d spent years developing. Whether it could support experience instead of dismissing it.
The answer, it turns out, isn’t about sensing smell. It’s about sensing patterns early enough to be useful.
When AI is built quietly, carefully, and in context, it earns its place on the factory floor.
A Quiet Invitation
If parts of this story felt familiar, you’re not alone.
Many operations reach a point where teams are capable, experience is strong, and yet everything feels more reactive than it should. That’s usually not a people problem. It’s a visibility problem.
I’m always open to practical conversations about how AI agents and early-warning systems can support real operations—without demos, buzzwords, or theory-heavy promises.
Sometimes it starts with simply comparing notes.

