
Product-driven AI Strategy

Noumetic Team

2025-08-27

7 min read

Why your company should rely on atomic information pockets to power a product-driven AI strategy.

TL;DR:

Large Language Models work best with minimal, precise context. Noumetic builds a product-centered knowledge map so the AI always receives the minimum necessary facts for the right audience, reducing errors, hallucinations, and bias.

Andrej Karpathy, former Director of AI at Tesla, recently described today's artificial intelligence as "people spirits": metaphorical beings that combine the brilliance of deep knowledge with the unpredictability of a wild card.

In more technical terms, these "spirits" are Large Language Models (LLMs), a special class of prediction models that power conversational AI systems like ChatGPT. At their core, they work by predicting what's next in a conversation, based on what's been said before.

For simplicity, let's imagine this as:

Given a sequence of words, what's the most likely next one?
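
To make that concrete, here's a toy sketch in Python. The three-word prompt and the probabilities are hand-set for illustration; a real LLM computes such a distribution with a neural network over its entire vocabulary, conditioned on everything said so far.

```python
import random

# Hand-set toy distribution: given a word sequence, how likely is each
# candidate next word? A real LLM computes these probabilities with a
# neural network conditioned on the entire preceding sequence.
next_word_probs = {
    ("the", "42nd", "president"): {"of": 0.92, "was": 0.05, "in": 0.03},
}

def predict_next(sequence):
    """Sample the next word from the distribution for this sequence."""
    probs = next_word_probs[tuple(sequence)]
    words, weights = list(probs), list(probs.values())
    return random.choices(words, weights=weights)[0]

print(predict_next(["the", "42nd", "president"]))  # almost always "of"
```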

To do this, an LLM draws on two distinct sources of knowledge: learned knowledge and context knowledge.


Learned knowledge is what the model picks up during its massive training phase, digesting staggering amounts of text to understand patterns, language structure, and facts. This process is so resource-intensive that only a few tech giants (and some open-source communities) can do it at scale.

Even without any special context, an LLM can carry on a plausible conversation or answer most trivia questions:

"Who was the 42nd president of the USA?"

This is like having a brain full of pub quiz facts, but frozen in time. It can't, for example, reason about your product's battery life under extreme temperatures unless that specific test data is fed into it.

Context knowledge is information you provide in the moment to help the model with a specific task. For example, if you want the LLM to know about your product's battery life, you need to give it those numbers in a readable format.

But here's the catch: you can only fit so much into what's called the context window: the total space for context knowledge, the conversation so far, and the model's answer.
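
As a rough sketch of that budgeting (the window size, the answer reserve, and the one-token-per-word counter below are illustrative stand-ins; real systems use the model's own tokenizer):

```python
CONTEXT_WINDOW = 8_192        # total token budget (varies by model)
RESERVED_FOR_ANSWER = 1_024   # room the model needs for its reply

def count_tokens(text):
    """Crude stand-in: ~1 token per word. Real systems use the
    model's own tokenizer for exact counts."""
    return len(text.split())

def fits_in_window(context_facts, conversation):
    """Context knowledge + conversation must leave room for the answer."""
    used = count_tokens(context_facts) + count_tokens(conversation)
    return used <= CONTEXT_WINDOW - RESERVED_FOR_ANSWER

facts = "ModelX battery life: 11h at 20C, 6h at -10C (2024 bench test)."
question = "How long does the battery last in freezing weather?"
print(fits_in_window(facts, question))  # True: plenty of room left
```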

So how much context knowledge is best? The answer:

Just enough.

That's at the heart of Noumetic's bottom-up, product-driven AI strategy. By building on singular truths, given to the LLM as just enough context, we can ensure it retrieves only current, context-appropriate facts. Let's explore some of the common pitfalls mitigated by this approach.


One notorious LLM failure is hallucination: confidently making things up. It might tell you about a study that perfectly supports your argument… only for you to discover it never existed. Or it might "estimate" your product's heat tolerance out of thin air because the actual test data wasn't in its context.

The risk of hallucination actually grows as the context gets longer, a side effect of the model's probabilistic nature.

When you overload an LLM with too much context, accuracy can drop in more subtle ways than outright hallucination. The model has to sift through a mountain of details to find the right answer, and there are three common traps, collectively known as the needle-in-the-haystack problem:

  • Partial match errors: pulling a detail from the context that's close to the truth but not exact.
  • Recency or frequency bias: giving too much weight to the most recent or most repeated detail instead of the correct one.
  • Hallucination: failing to find the real answer and making something up instead.

For example, imagine you feed it decades of product specs for both your company and your competitors. You ask for your maximum operating temperature in 2004, but it might:

  • Return your 2006 spec instead (recency bias),
  • Give you a competitor's 2004 spec (partial match error), or
  • Invent a number entirely (hallucination).

More context isn't always better: it can blur the model's focus, leading to all three types of mistakes.

To avoid hitting the context limit, you can use information retrieval systems to pull only the most relevant chunks of data. But this introduces context fragmentation: critical connections get lost.

You might retrieve three sets of temperature readings but no associated test dates because that info lived elsewhere in the documents.
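
Here's a minimal sketch of how that happens, using bag-of-words overlap as a crude stand-in for embedding similarity and a made-up four-chunk corpus. The temperature chunks outscore the date chunks, so the dates never make it into context:

```python
import re

# Independently stored chunks: the readings and their test dates were
# split apart when the source documents were chunked.
chunks = [
    "Max operating temperature: 85C.",
    "Max operating temperature: 70C.",
    "The tests above were run in 2004.",
    "The tests above were run in 2006.",
]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def score(query, chunk):
    """Bag-of-words overlap: a crude stand-in for embedding similarity."""
    return len(tokens(query) & tokens(chunk))

def retrieve(query, k=2):
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Both temperature chunks (3 overlapping words) beat the date chunks,
# so the dates are dropped and the model can't tell which reading
# belongs to 2004.
print(retrieve("max operating temperature in 2004"))
```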

Even with perfect retrieval and zero hallucination, the LLM can still fail if the wrong information is fed into it. Different use cases require different data: selling a product, for example, needs marketing-approved specs and competitive positioning, while supporting a product requires troubleshooting steps, warranty terms, and repair procedures.

If these distinctions aren't enforced through data governance (that is, deciding which information is correct for which audience and keeping it up to date), you risk the model giving the right answer to the wrong question. The LLM isn't judging whether a spec is for sales or support; it just sees "data" unless you clearly separate and tag it.


Noumetic addresses these issues by building a product-centered ontology of your company, a structured, interconnected map of concepts where each concept is an atomic information pocket.

Think of an ontology as a detailed blueprint that defines how all the elements of your business relate:

  • Products belong to product lines
  • Product lines belong to your company
  • Projects connect to locations, teams, and timelines
Ontology diagram: products, product lines, company, projects, teams, locations

Each product, product line, project, location, team, and timeline is a self-contained truth with corresponding data. This ensures the LLM retrieves only current, context-appropriate facts.
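
As a sketch of the idea (the Pocket class and relation names below are illustrative, not Noumetic's actual schema), each concept can be modeled as a node that carries its own facts plus explicit links to related concepts:

```python
from dataclasses import dataclass, field

@dataclass
class Pocket:
    """An atomic information pocket: one concept, its facts, its links."""
    name: str
    facts: dict = field(default_factory=dict)
    relations: dict = field(default_factory=dict)  # relation -> [Pocket]

    def link(self, relation, other):
        self.relations.setdefault(relation, []).append(other)

company = Pocket("AcmeCo")
line = Pocket("ModelX Series")
product = Pocket("ModelX", facts={"max_operating_temp_c": 85,
                                  "spec_year": 2024})

product.link("belongs_to", line)
line.link("belongs_to", company)

# A query about ModelX starts at its node: the spec and its year are
# stored together, so they enter the context together or not at all.
print(product.facts)
```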

This structure leads to just enough context:

  • Shorter context = higher retrieval accuracy and fewer hallucinations
  • Even if the context window is exceeded, information stays within a single conceptual "realm," reducing fragmentation

Furthermore, clear separation of information through data governance ensures each role accesses only what it needs. A sales agent, for instance, works with product features, pricing, and positioning, while a support agent draws from troubleshooting steps, legacy tickets, and warranty terms. This way, the LLM delivers the right facts to the right audience without cross-contamination.
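
A minimal sketch of that filtering, with hypothetical roles and audience tags: every fact carries an audience label, and retrieval checks it before anything reaches the model.

```python
# Every fact carries an audience tag set by data governance.
FACTS = [
    {"text": "ModelX battery: 11h (marketing-approved)", "audience": "sales"},
    {"text": "ModelX reset: hold power button 10 seconds", "audience": "support"},
    {"text": "ModelX warranty: 24 months, parts and labor", "audience": "support"},
]

# Which audience tags each role may read.
ROLE_ACCESS = {"sales_agent": {"sales"}, "support_agent": {"support"}}

def facts_for(role):
    """Filter facts by role before anything reaches the model."""
    allowed = ROLE_ACCESS[role]
    return [f["text"] for f in FACTS if f["audience"] in allowed]

print(facts_for("sales_agent"))    # marketing-approved specs only
print(facts_for("support_agent"))  # troubleshooting and warranty only
```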

Data governance diagram: audience-specific information, version control, role-based access

With your product at the center, we can extend AI's reach into project management, customer interaction analysis, and actionable business insights, all grounded in product-centered, governed truth.

These principles come to life most clearly in the two main use cases where product-centered AI creates the highest impact: sales and support. Both functions generate tickets (structured conversations with customers), but these tickets feed back into the product in different ways. Together they create two complementary cycles.

In the sales innovation cycle, a first contact with a prospect leads to the creation of a sales ticket. A sales agent works to convert this prospect into a customer by highlighting features, benefits, and differentiators. Beyond the immediate outcome, the collected tickets also reveal what potential customers are consistently asking for: unmet needs, desired features, or blockers to purchase. When analyzed systematically, these insights fuel product innovation by shaping the roadmap around market demand rather than guesswork.

In the support improvement cycle, a customer problem generates a support ticket. The support agent resolves the issue for the individual customer, but the real value lies in aggregation. By analyzing recurring issues, inefficiencies, or failure patterns across many tickets, teams gain clear insights into where the product can be made more robust, reliable, or user-friendly. These insights then feed back into product improvements that reduce future support load and strengthen customer satisfaction.
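
Both cycles hinge on the same aggregation step. Here's a minimal sketch with made-up ticket data, counting recurring themes per cycle to surface the strongest signals:

```python
from collections import Counter

tickets = [
    {"cycle": "sales", "theme": "longer battery life"},
    {"cycle": "sales", "theme": "longer battery life"},
    {"cycle": "sales", "theme": "cheaper entry model"},
    {"cycle": "support", "theme": "overheating at full load"},
    {"cycle": "support", "theme": "overheating at full load"},
]

def recurring_themes(cycle, top=3):
    """Most frequent themes per cycle: sales feeds innovation,
    support feeds improvement."""
    themes = [t["theme"] for t in tickets if t["cycle"] == cycle]
    return Counter(themes).most_common(top)

print(recurring_themes("sales"))    # unmet needs shaping the roadmap
print(recurring_themes("support"))  # failure patterns to engineer out
```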

Together, these two loops (innovation driven by sales, improvement driven by support) form a continuous feedback system. Both cycles start with customer interaction and end with a better product, reinforcing each other and ensuring that the product itself remains at the center of growth and strategy.

If your company is product-led, now's the time to make your product the center of your AI strategy.
