Most conversations about detection start with logic. Which rules are you running? How are they tuned? What frameworks are you mapping to?
As an industry, we’re good at this part. We debate thresholds, compare rule packs, and build coverage heat maps. There’s always a discussion about whether something should fire at three events or five.
What gets less attention is the execution surface those detections actually run against.
In most companies, detection coverage is defined by ingestion. If security-relevant data is ingested into the system running detections, it’s in scope. If it isn’t, it effectively doesn’t exist.
That model worked when environments were smaller and more centralized. It gets harder to defend as security-relevant data spreads across cloud platforms, identity providers, line-of-business apps, security tools, and data lakes.
Security teams respond the only way they can. They prioritize sources, adjust retention, and decide which data is worth the cost and effort to ingest. These are rational decisions. They’re also why coverage rarely keeps pace with the environment.
Over time, detection coverage starts to look like an ingestion strategy.
Portability Is Only Part of the Equation
There has been broad recognition that tying detection logic to a single storage system creates limits. Making detection logic portable is an important step.
But portability and reach are not the same thing.
A detection rule that can move between systems is useful. A detection rule that can execute across all security-relevant data, wherever it resides, is something else entirely.
Supporting a few storage backends looks good on a diagram. It does not change what happens during an incident.
The practical question is simple: can detection logic execute across the full set of security-relevant data a team relies on, without requiring that data to be centralized first?
If the answer is no, coverage will always trail the environment.
What Broad Reach Looks Like in Practice
Consider what happens when a new line-of-business app rolls out across the organization. It generates audit logs with meaningful security context. Those logs live in the application’s system of record or in a data lake.
In an ingestion-first model, the next step is predictable. A ticket gets filed. Pipeline work is estimated. Ingestion costs are calculated. Retention is debated. Everyone agrees the data is important. It just might not make this quarter’s roadmap.
Only after that process do detections reliably run against it.
In a reach-first model, connecting the source is enough. Detection logic executes where the data already lives. Coverage expands without redesigning pipelines or relocating large volumes of telemetry.
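The reach-first idea can be pictured as a thin dispatch layer: the detection runs as a pushed-down query against each connected source instead of against a central store. This is only an illustrative sketch; the `Connector` and `run_detection` names, and the query shape, are hypothetical and not a real product API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Connector:
    """Hypothetical source adapter: pushes a query down to where the data
    lives and returns only matching events, not the raw log stream."""
    name: str
    search: Callable[[str], Iterable[dict]]

def run_detection(query: str, connectors: list[Connector]) -> list[dict]:
    """Execute one detection in place across every connected source."""
    hits = []
    for c in connectors:
        for event in c.search(query):
            hits.append({"source": c.name, **event})
    return hits

# Stand-in sources; in practice these might wrap an identity provider's
# audit API and a data lake. No events are ingested or relocated.
idp = Connector("idp", lambda q: [{"user": "alice", "action": "mfa_reset"}])
lake = Connector("data_lake", lambda q: [])

alerts = run_detection("action = mfa_reset", [idp, lake])
```

The point of the sketch is where coverage comes from: adding a third `Connector` to the list expands the execution surface immediately, with no pipeline or retention work in between.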
The same pattern shows up with cloud control plane logs, identity systems, or high-volume telemetry stored in data lakes for long-term analysis. The data is valuable. The friction is architectural.
Reach turns connectivity into coverage.
Changing the Coverage Curve
When detection coverage depends on ingestion, it grows in steps. Each expansion requires engineering effort, cost analysis, and maintenance. Coverage increases, stabilizes, and waits for the next integration cycle.
If you’ve ever seen a roadmap slide that says “Ingest everything,” you know how that usually ends. There’s always a footnote.

Ingest everything: Subject to budget approval, retention limits, and three integration backlogs.
When coverage depends on connectivity, growth is more continuous. As new sources are connected, they become part of the execution surface and detection logic remains stable. The environment evolves and coverage expands with it.
That shift changes how teams think about detection programs. Instead of asking whether a source is worth ingesting, they can ask whether it is worth connecting. Instead of rebuilding pipelines, they can focus on improving detection logic and response.
Reach as a Foundation
This is where the Security Data Mesh comes in.
By connecting to security-relevant data wherever it resides and providing a consistent way to work across it, the mesh establishes a broad execution surface. Detections become one workload running on that surface.
The same foundation supports investigation, threat hunting, incident response, and compliance. Once access to security-relevant data is consistent across systems, workflows can build on top of it without being re-architected each time the environment changes.
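One way to see detections as just one workload on that surface is to treat the mesh as a single search interface and each workflow as a function of it. Again a hedged sketch: `make_surface`, `detection`, and `hunt` are illustrative names under assumed semantics, not a description of any specific product.

```python
from typing import Callable, Iterable

# Hypothetical mesh surface: one uniform search interface spanning every
# connected source, without moving the underlying data.
Surface = Callable[[str], Iterable[dict]]

def make_surface(sources: dict[str, Callable[[str], Iterable[dict]]]) -> Surface:
    def search(query: str) -> Iterable[dict]:
        for name, src in sources.items():
            for event in src(query):
                yield {"source": name, **event}
    return search

# Detection, hunting, and compliance checks are all just callers of the
# same surface; none of them needs its own integration layer.
def detection(surface: Surface) -> list[dict]:
    return list(surface("action = admin_grant"))

def hunt(surface: Surface, indicator: str) -> list[dict]:
    return list(surface(f"indicator = {indicator}"))

surface = make_surface({
    "idp": lambda q: [{"action": "admin_grant", "user": "bob"}]
           if "admin_grant" in q else [],
    "data_lake": lambda q: [],
})
grants = detection(surface)
```

Because every workflow consumes the same interface, connecting a new source extends all of them at once, which is the sense in which reach is a foundation rather than a detection-only feature.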
Detections are often where the constraint becomes visible first. They are not the only place it appears.
Looking Ahead
Most teams have gotten very good at managing ingestion. They have dashboards, budget forecasts, and sometimes entire engineering backlogs dedicated to it.
Fewer teams have stepped back to ask whether ingestion should be the gate for detection coverage. Reach does not remove the need for discipline, but it removes an artificial boundary that has shaped coverage for years.
In the next post, I’ll look at what it means for a detection to be production-grade when it runs across distributed security-relevant data. Federation does not lower the bar. If anything, it raises it.
If you’re ready to expand detection coverage without centralizing more data, take a look at Query Federated Detections and reach out to see how a reach-first model works in practice.
