There is a lot of hype in some sectors of the food industry around data-driven cooking (aka computational gastronomy): the practice of using structured food data (recipes, ingredient chemistries, sensory and consumer responses) and computational tools (statistics, machine learning, knowledge graphs) to guide culinary decisions.
How far has “data-driven cooking” actually traveled? Who’s using it, and for what? And where does it go against the grain of real taste?
The most visible adopters aren’t chef-philosophers with tweezers; they’re the big flavor makers and packaged-food labs. McCormick’s widely reported collaboration with IBM didn’t aim to replace palates with processors; it aimed to shorten the long, expensive slog from idea to shelf by surfacing promising spice blends faster—and they did, launching products built with those tools in 2019. The draw was speed and fewer dead ends, not robot cuisine.
Startups in plant-based and “alt-protein” land have also leaned in, using models to map a desired sensory target (say, the savoriness of a stew) to workable plant combinations. Think of NotCo’s much-covered stunts and product pipeline: it’s not magic, it’s systematic guessing with a larger lens.
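In spirit, the matching problem looks something like the sketch below. Everything in it is a hypothetical stand-in (the ingredient list, the four-dimensional sensory space, the numbers themselves); only the shape of the search matters: describe the target and the candidates as sensory vectors, then hunt for the blend whose profile lands closest.

```python
# Hypothetical illustration only; no real product's method or data is shown here.
# Toy sensory profiles over (umami, sweetness, fattiness, bitterness).

from itertools import combinations

# Target profile we want to hit, e.g. "the savoriness of a stew"
target = (0.9, 0.2, 0.7, 0.1)

# Candidate plant ingredients and their (made-up) sensory vectors
plant_profiles = {
    "mushroom":      (0.8, 0.1, 0.2, 0.2),
    "lentil":        (0.4, 0.2, 0.1, 0.1),
    "coconut cream": (0.1, 0.4, 0.9, 0.0),
    "miso":          (0.9, 0.3, 0.2, 0.1),
    "cocoa":         (0.1, 0.2, 0.5, 0.8),
}

def distance(p, q):
    """Euclidean distance between two sensory profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def best_blends(target, profiles, steps=10):
    """Brute-force search over two-ingredient blends and their mixing ratios."""
    results = []
    for a, b in combinations(profiles, 2):
        for i in range(steps + 1):
            w = i / steps  # fraction of ingredient a in the blend
            blend = tuple(w * x + (1 - w) * y
                          for x, y in zip(profiles[a], profiles[b]))
            results.append((distance(blend, target), w, a, b))
    return sorted(results)[:5]

for dist, w, a, b in best_blends(target, plant_profiles):
    print(f"{w:.0%} {a} + {1 - w:.0%} {b}: distance to target {dist:.3f}")
```

That is all “systematic guessing with a larger lens” amounts to in practice: exhaustive, patient search over a space a human developer would only sample.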
What about restaurants? There’s curiosity and some play, but the center of gravity is still industrial food. Where chefs do tinker, they use it as a sketchbook. Tools like Foodpairing graphs or Sony’s FlavorGraph are used to jog the imagination—to find a bridge note or a left-field pairing—while leaving judgment to craft, place, and the person at the stove. Sony’s own material frames it that way: support the cook’s creativity, don’t supplant it.
Why are any of these people using it? There seem to be three broad reasons:
- It compresses the search for new flavor combinations. Once you model cuisine as relationships—ingredients linked by shared aroma compounds, or recipes linked to images and methods—you can navigate from “I want a darker mid-palate” to “try these candidates” without wandering the pantry for weeks. The original “flavor network” paper made this clear, and showed something culturally important: Western cuisines often share aroma compounds in pairings, while several East Asian cuisines tend to avoid that overlap. That’s already a story—resonance versus counterpoint—that helps clarify the differences between cuisines. (A toy version of this shared-compound scoring appears in the sketch after this list.)
- It’s good at handling limits and constraints. Real-world development is never “make it delicious, full stop.” It’s also cost, nutrition, dietary rules, storability, and so on. Algorithms are good at juggling many goals at once; humans are better at deciding which goals matter. The tools sort; the cooks choose (the same sketch after this list folds a simple cost cap into the scoring).
- It promises to make recipe development less risky. Better triage up front can mean fewer doomed pilots later. That’s the promise—not certainty but hopefully fewer blind alleys.
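The first two points fit in a few lines of code. The sketch below is a toy, assuming made-up compound sets and prices rather than real chemistry or costs: it scores ingredient pairs by shared aroma compounds (the flavor-network signal from the first point) and then keeps only the pairs that clear a simple cost cap (the constraint-juggling from the second).

```python
# Minimal sketch of flavor-network-style pairing plus a constraint filter.
# Compound sets and prices below are toy stand-ins, not real data.

from itertools import combinations

# Ingredient -> aroma compounds it contains (toy data)
compounds = {
    "tomato":    {"hexanal", "furaneol", "damascenone"},
    "parmesan":  {"butanoic acid", "furaneol", "methional"},
    "soy sauce": {"methional", "furaneol", "sotolon"},
    "shiitake":  {"lenthionine", "sotolon"},
    "basil":     {"linalool", "eugenol", "hexanal"},
}

# Ingredient -> cost per kg (toy numbers)
cost_per_kg = {"tomato": 3, "parmesan": 18, "soy sauce": 4, "shiitake": 12, "basil": 9}

def shared_compounds(a: str, b: str) -> int:
    """The 'food pairing' signal: how many aroma compounds two ingredients share."""
    return len(compounds[a] & compounds[b])

def candidates(max_cost: float):
    """Rank ingredient pairs by shared compounds, keeping only those under a cost cap."""
    ranked = []
    for a, b in combinations(compounds, 2):
        score = shared_compounds(a, b)
        cost = cost_per_kg[a] + cost_per_kg[b]
        if score > 0 and cost <= max_cost:
            ranked.append((score, -cost, a, b))
    # Most shared compounds first; cheaper pairs break ties
    return sorted(ranked, reverse=True)

for score, neg_cost, a, b in candidates(max_cost=20):
    print(f"{a} + {b}: {score} shared compounds, about {-neg_cost}/kg combined")
```

The pairing score proposes, the constraint filter disposes, and a cook still decides which of the survivors is worth tasting. That division of labor is the whole pitch.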
But there is reason to be skeptical.
Are there biases built into the data? Much of the public recipe and review data tilts toward Western palates and English-language sources, which bakes cultural assumptions into the models. If your dataset “thinks” parsley and parmesan are universal, it will miss other logics of deliciousness. Even the foundational flavor-network work shows that what counts as a good relation in one cuisine is noise in another. Models inherit those biases.
And flavor pairing isn’t cooking. A list of shared molecules is a spark, not a dish. Texture, temperature, order of tasting, and timing—all the stuff that makes cuisine temporal—can reverse what a static pairing table predicts. Charles Spence’s review on sequencing makes this point clearly: the same elements in a different order yield a different experience. A story needs a plot, and shared flavor molecules don’t give you one.
And it turns out the “law of food pairing”—foods that share flavor molecules pair well—is not a law. Journalists loved the idea of universal rules; the evidence suggests plural principles. Western kitchens often layer similarity; East Asian kitchens often choreograph contrast. That pluralism is good news for cooking—and a caution against grand, homogenizing models.
Explanations are as important as the recommendation. Chefs and R&D teams need reasons, not just data, and they won’t trust a tool that can’t supply them: if it can’t say why this bridge ingredient might work—what it’s doing to the arc of the dish—it becomes a party trick.
Finally, an aesthetic worry masquerading as a technical one: optimization tends to pull toward a house style of engineered savoriness—the comfort of browned notes and glutamates—flattening difference under the sign of “depth.” If a tool makes it easier to get to that thicket, more dishes will camp there. The cure for homogeneity is obvious and old-fashioned: keep the palate curious. Use the map to find strangeness, not to pave it over. The original science—useful and fascinating—was about revealing diversity in culinary logics. We should keep it that way.
In summary, computational gastronomy is a sophisticated way to make the background of cooking—relations, patterns, tendencies—more visible. In factories, it’s already normal because it saves time and money. In restaurants, it’s a sketchbook. In our kitchens, it can be a prompt to think in arcs and relations rather than ingredients alone. Let it widen the search. Don’t let it narrow the palate.