2026 has dawned. The holiday break gave me the chance to reflect on the journey we are on at Query and the steps forward we’ll take in 2026. (In retrospect, the steps we took in 2025 are captured in this blog from my colleague Mike Bousquet.)

In the past few years, the industry, Query included, has talked about search and its evolution with AI on top. For us, federated search is the foundation. It is the mechanism by which we see and access remote data. But it is a building block, not the destination. We continue to innovate tirelessly on top of it.

End users are looking for knowledge. The analyst will use any effective means that makes knowledge accessible. Just as in life, cybersecurity tech must be driven by a quest for knowledge.


The SIEM Paradox: Drowning in an Ocean of Data

For decades, the SIEM – the industry standard – has suffered from a fundamental paradox: drowning in data, yet starving for knowledge. To paraphrase Coleridge: “water, water, everywhere, nor any drop to drink!”

Many of us, myself included in previous roles, have spent years thinking the solution was to centralize the water (the data) into an ocean. Along with others, I built massive SIEMs, security lakes, and warehouses, hoping that if we put it all in one place, wisdom would emerge. For a while over the past two decades, it did. But that approach no longer works. We just get bigger bills and slower queries trying to create the ocean.

In 2025 we showed how federated search adds value alongside SIEMs and security lakes. That value keeps growing. Let’s see how.


From Mesh to Meaning with AI

At Query, we took a different path. First, we built federated search to stop the data movement madness. Then, we overlaid a security data mesh to make any data, from any source, addressable. Now, as we enter 2026, we have the next leap ready to deploy: Federated Detections and AI Agents leveraging access to normalized distributed data.

Federated search has primarily served reactive use cases: an analyst deciding to respond to or investigate something that has already happened. Yes, analysts sometimes use it proactively, hunting for threats before a potential breach. But they often lack the bandwidth to hunt, especially on small and overworked teams.

Now, with AI, Query has a system that organically surfaces relevant information and alerts. It is early, but customers have already started to see its value, and I am excited about how much more it will create for them in 2026.


A Peek Under the Hood: Extreme Parallelism

You can’t think about automation and AI without a strong foundation that executes queries efficiently and fast. That is even more important when you are dealing with decentralized data. How do we achieve this performance? By favoring extreme parallelism.

In traditional data handling, joins are the enemy of speed. Like locks and semaphores, they slow systems down and create dependencies. Joins are still necessary in some scenarios, but the more you can parallelize intelligently, the better off you are.

At Query, we estimate cardinality, prefetch where it is optimal, and use in-session caches to scale.


The QDM Advantage: Making Data Addressable

One secret weapon in our architecture is the Query Data Model (QDM).

While QDM is based on OCSF (the Open Cybersecurity Schema Framework), we use it as much more than a data transformation and normalization layer. We use QDM to make data addressable on the mesh.

  • From a semantic perspective, the analyst expresses data criteria in QDM and asks our mesh (powered by our federated search query engine) for matching data in a platform-agnostic manner.
  • QDM is then populated, with extreme parallelism, with the events and entities meeting those criteria (e.g., the detections related to a specific user).
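The platform-agnostic step above can be sketched as expressing criteria once in a normalized form and translating them per backend. The field names and translators here are hypothetical, not Query’s actual QDM schema or connectors:

```python
# Illustrative sketch: one set of normalized (QDM-style) criteria,
# translated into platform-native queries. All names are invented.
QDM_CRITERIA = {"class": "detection_finding", "actor.user.name": "alice"}

def to_splunk(criteria: dict) -> str:
    # Hypothetical mapping to a Splunk-style search string.
    clauses = " ".join(f'{k}="{v}"' for k, v in criteria.items())
    return f"search {clauses}"

def to_sql(criteria: dict) -> str:
    # Hypothetical mapping to a SQL-backed security lake.
    where = " AND ".join(
        f"{k.replace('.', '_')} = '{v}'" for k, v in criteria.items()
    )
    return f"SELECT * FROM findings WHERE {where}"

# The analyst writes the criteria once; the mesh speaks each dialect.
queries = {"splunk": to_splunk(QDM_CRITERIA), "lake": to_sql(QDM_CRITERIA)}
```

The point of the sketch is the asymmetry: the analyst’s intent lives in one normalized vocabulary, while each backend keeps its own query dialect.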

The Gap Between Answers and Knowledge

What is the answer the analyst is after? Once the data is populated, our search CoPilot can distill those results into the answer the analyst is looking for. But here lies the critical distinction: Is that answer truly knowledge?

Maybe not. It is the right answer to the current question, but it is transient, scoped to the information in that specific result set at that specific moment. It contains facts derived from that data. The best way to think of it: it holds bits of knowledge, but it doesn’t represent the whole picture.

So, how can we help AI move beyond narrowly focused individual answers toward broader knowledge? The path there is AI leveraging a Knowledge Graph.


AI with Knowledge Graph

In our opinion, knowledge is a collaborative process between the AI and the analyst. While we don’t claim our AI won’t make mistakes, we do show the relevant data that was used to reach the AI’s answer. Our CoPilot has primarily focused on assistance so far; a continually updating Knowledge Graph can make it smarter and more aware of the broader context. Constructing that Knowledge Graph means automating several relevant summary searches daily: some to trigger detections, others to surface interesting events, observations, and analytics.

With Federated Search, QDM and other building blocks, we create a composite AI-enabling picture:

  • QDM+ for the Ontology: QDM with extensions – let’s call it QDM+ for now – will serve as the semantic model for our AI. QDM already captures the OCSF entities, their relationships, the constraints of the environment they exist in, and their activities (normalized events).
  • Mapping the Graph: By modeling that ontology on the mesh, we map and structure the Knowledge Graph.
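As a toy illustration of mapping that ontology into a graph, the sketch below folds normalized events into entity nodes connected by activity edges. The event shapes and entity names are invented, not real OCSF records or Query internals:

```python
# Illustrative sketch: normalized events become edges in a tiny
# knowledge graph of entities and their activities. Data is invented.
from collections import defaultdict

EVENTS = [
    {"actor": "alice", "activity": "logged_into", "target": "vpn-gw-1"},
    {"actor": "alice", "activity": "accessed", "target": "finance-share"},
    {"actor": "svc-backup", "activity": "accessed", "target": "finance-share"},
]

# Adjacency map: entity -> list of (relationship, neighbor) edges.
graph: dict[str, list[tuple[str, str]]] = defaultdict(list)
for e in EVENTS:
    graph[e["actor"]].append((e["activity"], e["target"]))

def neighbors(entity: str) -> list[str]:
    """Entities one hop away: the seed of broader context for an AI."""
    return [t for _, t in graph[entity]]

# Both alice and svc-backup touch finance-share; the graph makes that
# shared context queryable instead of buried in separate result sets.
shared = set(neighbors("alice")) & set(neighbors("svc-backup"))
```

This is the gap between an answer and knowledge in miniature: each event alone answers one question, while the accumulated graph exposes relationships no single result set contains.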

The Goal: Local, Contextualized, and “Aware” AI

Imagine a SOC-controlled private LLM. This model consumes the summarized knowledge every day, digesting the observations derived from the mesh. It stays “aware” this way.

Where should this knowledge live? Who should own it and be responsible for it? Those are questions in front of us we have to answer in 2026, in collaboration with our customers.

Could this ever lead to true sentient awareness of the organization’s security picture? I hope so, at some point in the future. For now, our 2025 foundation and our start in 2026 are stepping stones. They enable a collaboration in which the analyst interacts with an AI that understands the context of the environment, not just the syntax and results of a query.


The 2026 Pathway

Query will help you centralize the knowledge, without centralizing the data.

Last year we broke the shackles of forced data centralization imposed by proprietary SIEMs, giving our customers control and flexibility over their data. In 2026, we will take the journey together with customers: use our mesh, extract meaning, and centralize knowledge (in their AI model).

That is our journey from federated search to knowledge.