The compliance document arrives: “You shall protect your endpoint.” “You shall have a cyber security policy.” “You shall monitor your network and systems.”
The language is clear enough. Your organization responds by researching vendors, evaluating platforms, and making a purchase. The intrusion detection system gets installed. When the auditors arrive, they ask one question: “Do you have intrusion detection protection?”
“Yes.”
Check. You’re compliant!
But nobody asks the questions that actually matter. Is it turned on? Is it configured properly? Is it blocking malicious traffic? Does it provide meaningful metrics? The auditor doesn’t ask because that’s not what audits are designed to check. The compliance framework doesn’t specify because it can’t dictate implementation details. You have the solution, you checked the box, you’re compliant, you’re OK to operate.
You’re also potentially just as vulnerable as you were before you spent the money.
This is compliance theater: the appearance of security without the substance. Not because anyone is being dishonest or cutting corners, but because organizations are systematically skipping a critical translation layer between what compliance requires and what actually protects the business.
The compliance-to-product shortcut
Consider a typical compliance requirement: New York’s Department of Financial Services cybersecurity regulations, for instance. Pages of obligations written by lawyers for lawyers. Your organization needs to respond, preferably quickly. There’s pressure to show action, demonstrate progress, and satisfy auditors before their next visit.
So you take what seems like the most direct path. Compliance says you need endpoint protection, so you evaluate endpoint protection tools. You select one, purchase it, deploy it. Now you can tell auditors “yes, we have this.” The requirement appears to be met: you found a solution that claims to satisfy it; you implemented the solution. On paper, it works perfectly.
What gets skipped is the entire middle layer – the translation from legal obligation into company-specific policy, from policy into procedure, from procedure into the actual requirements a solution must satisfy, and from solution into the artifacts that validate the obligation is met.
The result is predictable. You own a tool. It might be installed. It might even be running. But does it actually deliver what the compliance requirement was trying to ensure? Often, nobody can answer that question because nobody defined what “success” looks like beyond “we possess the thing.”
The audit process itself reinforces this gap. Auditors ask “do you have it?” not “how well do you use it?” They verify presence, not effectiveness. This creates a perverse incentive where the fastest path to compliance is buying something – any something – rather than actually solving the underlying security problem.
Nobody is committing fraud. Everyone is following the established process. But the process itself is designed to verify purchases, not protection. You cannot answer a compliance obligation with a technical solution alone; a purchased tool sits two translation steps removed from the requirement it is supposed to satisfy.
What compliance documents can and cannot tell you
Compliance frameworks specify obligations: “You shall protect your endpoint.” “You shall have a cyber security policy.” “You shall implement controls.” They define what outcomes you must achieve.
What they cannot specify is how. Which specific tool to use. Which vendor to select. How to configure it for your environment. What metrics prove it’s working. How to adapt it to your specific organization’s structure and needs.
This isn’t a flaw in compliance frameworks. Lawmakers and lawyers write these documents, not technologists. They must apply across different industries, different organizational sizes, different technical environments. They define obligations, not implementations; outcomes, not methods.
The gap this creates is substantial. Between “you shall protect” and “here’s the specific tool we purchased” lies an entire translation process. That’s where the real work happens, and where most organizations try to skip ahead, jumping directly from the legal requirement to a product purchase.
The assumption underlying this shortcut is that buying something called “intrusion detection” automatically satisfies a requirement to “protect against intrusions.” It’s equivalent to assuming that purchasing a gym membership means you’re physically fit. The tool is an enabler of security, not security itself.
This creates an uncomfortable reality where organizations can be simultaneously compliant and vulnerable. The tool exists. The compliance checkbox is marked. The auditor is satisfied. The money is spent. But the actual security posture hasn’t improved, or hasn’t improved as much as the investment should have delivered. The risk remains largely unchanged.
The translation chain: what should happen instead
There’s a proper sequence, and skipping any step in it undermines everything that follows:
Compliance requirement
↳ Company-specific policy
↳ Procedure and process from a domain-specific framework
↳ Technical requirements
↳ Solution selection
↳ Artifacts to validate the requirements
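The chain above is, at bottom, a traceability exercise: every purchased tool should be traceable back to the obligation it satisfies, and every obligation forward to the evidence that it is met. A minimal sketch of such a record (all names, fields, and values are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field

@dataclass
class ControlTrace:
    """Traceability from legal obligation down to audit evidence."""
    obligation: str          # verbatim compliance language
    policy: str              # our organization's interpretation
    process: str             # who does what, from a domain-specific framework
    requirements: list[str]  # functional + non-functional requirements
    solution: str            # the tool or practice selected
    artifacts: list[str] = field(default_factory=list)  # evidence it works

    def gaps(self) -> list[str]:
        """Layers still empty -- each one is compliance theater waiting to happen."""
        missing = [layer for layer in ("policy", "process", "solution")
                   if not getattr(self, layer)]
        if not self.requirements:
            missing.append("requirements")
        if not self.artifacts:
            missing.append("artifacts")
        return missing

trace = ControlTrace(
    obligation="You shall protect your endpoint.",
    policy="Endpoint protection per NIST CSF protect-function controls.",
    process="Security defines the baseline config; IT deploys and monitors.",
    requirements=["must block known malware", "must report coverage metrics"],
    solution="EDR platform (illustrative)",
)
print(trace.gaps())  # the tool is owned, but no artifacts prove it works
```

The point of keeping a structure like this is that "we bought the tool" fills only one field; an empty `artifacts` list makes the remaining gap explicit rather than invisible.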
From legal obligation to company policy
The translation begins by taking the legal language, often just lawyer-speak that your technical teams cannot directly act upon, and converting it into company-specific policy.
Compliance says “you shall have application security policy.” That statement doesn’t tell you what to do. It’s an obligation without a roadmap.
The translation means specifying:
“Our organization will implement application security following NIST Cybersecurity Framework”
or “We will follow MITRE CAPEC framework to simulate threats to our application.”
You’re taking the legal obligation and declaring which industry-standard framework guides your interpretation. You’re deciding which specific controls from frameworks like NIST 800-53 are relevant to your industry, your size, your risk profile. Without this step, you’re interpreting “shall have security” however the first vendor you talk to chooses to define it.
From policy to process and roles
Policy establishes the framework. Process defines who does what. For application security, this means specifying that the security team provides threat intelligence and defines security best practices. Development teams implement specific controls at specific points in the software development lifecycle – which controls belong in the pipeline, which at the repository, which in the deployment process.
I think about this division like a hamburger. The security team handles the buns: the overarching framework and standards that wrap around everything. Development teams handle the meat: the actual implementation in their specific context and technology stack. Both are essential because neither can do the other’s job effectively.
This step answers critical questions about ownership and boundaries. Where does security team oversight end and development team ownership begin? What are the handoff points? Who is responsible for what? Without answers to these questions, you end up in one of two failure modes: either security becomes a bottleneck because they’re trying to run everything, or nothing gets implemented consistently because everyone is interpreting requirements differently.
From process to technical requirements
Only after defining roles and process can you articulate what a solution actually needs to do.
For supply chain vulnerability management, for instance, the requirements might include:
- must handle known vulnerabilities and zero-day events;
- must allow triage and assessment of residual risk;
- must support corporate single sign-on to access the artifact repository;
- must scan build artifacts for known vulnerabilities.
These are functional requirements. They’re the capabilities the solution must have. But you also need non-functional requirements:
- must scale with our development volume;
- must version-control all configuration changes;
- must enable development to continue working while issues are being assessed;
- must not require constant hands-on management or heavy handholding – or if it does, we need to staff for that reality;
- must integrate with our existing tools and workflows.
Without defining both types of requirements, you’re evaluating solutions based on vendor feature lists and marketing materials rather than your actual needs. You can’t know if a tool is the right fit if you haven’t specified what “right” means for your organization.
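Once both kinds of requirements are written down, solution selection becomes a scoring exercise against your own criteria instead of vendor feature lists. A hypothetical sketch (weights, requirement names, and vendors are all made up for illustration):

```python
# Weighted requirements derived from policy and process -- not from marketing.
# Weight 3 = must-have, 2 = important, 1 = nice-to-have. All values illustrative.
requirements = {
    "handles zero-day events":            3,
    "triage of residual risk":            3,
    "scans build artifacts":              3,
    "corporate SSO support":              2,
    "scales with dev volume":             2,
    "config changes version-controlled":  1,
}

# Which documented requirements each hypothetical candidate satisfies.
candidates = {
    "Vendor A": {"handles zero-day events", "scans build artifacts",
                 "corporate SSO support", "scales with dev volume"},
    "Vendor B": {"handles zero-day events", "triage of residual risk",
                 "scans build artifacts", "config changes version-controlled"},
}

def score(met: set) -> tuple:
    """Return (weighted score, list of unmet requirements)."""
    total = sum(w for req, w in requirements.items() if req in met)
    gaps = [req for req in requirements if req not in met]
    return total, gaps

max_score = sum(requirements.values())
for name, met in candidates.items():
    total, gaps = score(met)
    print(f"{name}: {total}/{max_score}, gaps: {gaps}")
```

Notice that two candidates can score identically while missing very different things; the gap list, not the total, tells you which trade-off your organization can actually live with.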
From requirements to solution selection
Only at the end of this chain should you be selecting solutions. Now you have real criteria. You can evaluate whether a tool actually delivers what your organization needs. You can ask informed questions: Does this handle our specific vulnerability patterns? Does it work with our development cadence? Can our teams actually use it in their daily workflow, or will it sit unused?
In addition to satisfying functional requirements, the ultimate acceptance criterion is whether we have met the legal obligation. The job is done when we can produce evidence that compliance is met.
Each step in this sequence informs the next. You cannot define good requirements without understanding your process. You cannot define good process without clear policy. You cannot write good policy without understanding what the compliance requirement is actually trying to protect against.
When one size fits none: the blanket policy problem
Once an organization has purchased a security tool or established a security policy, the temptation is to apply it uniformly. Consistency seems like a virtue. One standard, one tool, one process across the entire organization.
But different teams work in fundamentally different ways. They run on different cadences. They have different capabilities. They use different technology stacks. They face different constraints. They deliver different things at different speeds. The security policy needs to come first, but it also needs to accommodate these different use cases – understanding capacity, throughput, how people actually do their work.
When you apply a blanket policy without recognizing these differences, you’re dictating by the lowest common denominator. Think about a city implementing a curfew across every neighborhood because one district has problems with late-night disturbances. Would it reduce activity in the problem area? Probably. But you’d also constrain everyone else, including areas that never had an issue. You’ve created a policy that treats dissimilar situations identically.
In practice, this means your highest-performing development team – the one shipping reliably, moving fast, innovating effectively – gets slowed down by security controls designed for teams that lack their discipline or capabilities. Meanwhile, your most constrained team – the one already struggling – gets requirements they cannot possibly meet without grinding to a halt. Same policy, opposite problems, equally counterproductive.
This happens because when you skip the translation layer, you don’t understand the different use cases within your own organization. You don’t recognize that the meat between the security buns can be different for different teams. Different language stacks, for example, have distinctive patterns and different needs. How supply chain vulnerability management applies to a Java development team’s workflow is not the same as how it applies to your Python team or your front-end developers.
That knowledge only exists with people who work in those stacks day in and day out. It’s not a criticism of security teams to say they don’t know these details. If you don’t work with a particular technology every day, you simply wouldn’t know the best practices specific to that environment. That’s exactly where collaboration has to happen, during the policy and process definition stages, not after the solution has already been selected.
Manufacturing operations have learned this lesson thoroughly. They understand their throughput at every stage. They know that speeding up one station in an assembly line doesn’t help overall production if it just creates a bottleneck at the next station. They think in systems, understanding how changes in one area cascade through the entire operation. Security implementations rarely demonstrate this same systems thinking. They tend to operate in terms of mandates and tools.
The real costs of theater
The financial cost is the most visible. Money gets spent on tools that aren’t configured correctly, aren’t being used effectively, or don’t actually match what the organization needs. This isn’t because the tools themselves are deficient, but because requirements were never properly defined before purchase decisions were made.
Operational costs follow. Development teams get slowed down by security processes that don’t account for how they actually work. High performers find themselves constrained by rules designed for the lowest common denominator. Workarounds begin multiplying as teams route around obstacles that seem arbitrary because the reasoning behind them was never explained or validated against actual work patterns.
But the security cost is the most dangerous, precisely because it’s the least visible. The organization believes it’s protected because it has checked compliance boxes. Leadership can report to the board: “We’ve implemented these controls. We’ve purchased these systems. We’re meeting our regulatory obligations.” Everyone feels safer.
Meanwhile, the intrusion detection system might not be properly configured for your specific network architecture. The application security tool might not be catching actual vulnerabilities in your particular tech stack. The supply chain vulnerability management tooling might not be identifying risks in your specific dependencies. You have the tools. You’re compliant. But the actual security posture hasn’t improved. Or hasn’t improved nearly as much as the investment should have delivered.
This creates a false sense of security that’s arguably more dangerous than having no security measures at all. When the security incident eventually occurs, everyone asks the predictable questions: “But we had intrusion detection. We had application security. We were compliant. What went wrong?”
What went wrong is that possessing the tool and actually being protected are entirely different things. Compliance theater just provided the appearance, without the substance. The checklist was completed but the reality was neglected.
Building the translation layer in practice
- Understand the intent behind the compliance requirement. Treat the requirement as a question, not an answer. What is the legal or regulatory obligation actually trying to accomplish? What risk is it addressing? What outcome does it require? “Protect your endpoint” isn’t the goal itself – it’s shorthand for preventing unauthorized access and data exfiltration. Understanding the intent behind the requirement changes how you approach satisfying it.
- Document your organization’s interpretation as policy. Which industry framework are you following? NIST, MITRE, ISO? Which specific controls from that framework apply to your industry, your size, your risk profile? This isn’t just selecting a framework name to drop into a document. Explicitly state: “Here’s how we’re interpreting this compliance obligation. Here’s the standard we’re following. Here’s our reasoning.”
- Define specific roles, responsibilities, and handoffs. For application security, for instance: The security team provides threat intelligence, defines best practices, monitors for emerging threats. Development teams implement controls in their pipelines, scan their dependencies, manage vulnerabilities in their code. Make it explicit. Document it. Ensure everyone understands where security oversight ends and development ownership begins.
- Specify both functional and non-functional requirements. Before evaluating any solutions, answer completely: What must the solution do, specifically? What capabilities does it need to have? How must it perform, scale, and integrate with existing systems? This is where you must account for differences across teams. Different groups may need different solutions, or the same solution configured differently for their specific contexts. As long as all approaches satisfy the policy, variation in implementation is appropriate and often optimal.
- Evaluate solutions against your documented requirements. With this foundation in place, you can ask informed questions and have criteria that matter to your organization specifically. You can determine whether a tool actually fits your needs rather than whether it claims to solve the general category of problem.
- Measure effectiveness, not just presence. After implementation, track whether the solution is actually catching vulnerabilities, reducing exposure, being used effectively by teams. Build feedback loops so that when something isn’t working, you can diagnose which layer needs adjustment. Is the problem with the tool itself? The process around it? The policy? The original requirements? Fixing the right layer prevents the same problem from recurring.
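Measuring effectiveness rather than presence means computing numbers from what the tool actually does. A hypothetical sketch, using made-up findings exported from an unnamed scanner (the metrics shown – coverage, mean time to triage, untriaged high-severity findings – are examples, not a prescribed set):

```python
from datetime import date

# Hypothetical vulnerability findings exported from a scanner.
findings = [
    {"id": "CVE-2024-0001", "severity": "critical",
     "found": date(2025, 1, 2),  "triaged": date(2025, 1, 3)},
    {"id": "CVE-2024-0002", "severity": "high",
     "found": date(2025, 1, 5),  "triaged": date(2025, 1, 12)},
    {"id": "CVE-2024-0003", "severity": "high",
     "found": date(2025, 1, 20), "triaged": None},
]

endpoints_total, endpoints_covered = 400, 352  # illustrative inventory counts

# "Do we have it?" is presence. These three numbers answer "is it working?"
coverage = endpoints_covered / endpoints_total
triaged = [f for f in findings if f["triaged"]]
mean_days_to_triage = sum((f["triaged"] - f["found"]).days
                          for f in triaged) / len(triaged)
untriaged_severe = sum(1 for f in findings
                       if f["triaged"] is None
                       and f["severity"] in ("critical", "high"))

print(f"coverage: {coverage:.0%}")                    # prints: coverage: 88%
print(f"mean days to triage: {mean_days_to_triage}")  # prints: mean days to triage: 4.0
print(f"untriaged high/critical: {untriaged_severe}")
```

Trend lines on metrics like these are the artifacts that let you answer “here’s how we know it’s protecting us” instead of “yes, we purchased this system.”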
Beyond security
This pattern extends well beyond security mandates. Any compliance requirement, operational initiative, or technology mandate that arrives from leadership or external sources follows the same problematic path when the translation layer gets skipped.
The pattern is consistent: An executive demands. The organization jumps to a solution. The translation layer gets skipped. Implementation struggles. Outcomes disappoint. People wonder why the investment didn’t deliver what was promised.
The fix is equally consistent: resist the urge to jump straight to solutions. Build the translation layer. Pick a framework as the North Star to implement the policy. Do the work of moving from requirement to policy to process to specifications to solution selection. Accept that this takes time upfront.
It feels slower. It requires more thinking before action. It involves more stakeholders in the decision-making process. And it’s harder to show immediate motion – to demonstrate that you’re “doing something” in response to the requirement.
But it’s the difference between compliance theater and actual effectiveness. Between spending money on tools and actually solving problems. Between checking boxes and delivering the outcomes that compliance requirements were designed to ensure in the first place.
When someone asks “do you have intrusion detection?” you don’t just want to be able to answer “yes, we purchased this system.” You want to be able to say “yes, and here’s how we know it’s protecting us.” That second answer is only possible when you’ve built the translation layer properly.
The work of translation isn’t bureaucracy added on top of the real work. It is the real work. Everything else is just motion.
Jason Mao is a Systems Architect at Ten Mile Square. He has more than 25 years of experience in software engineering, upholding high engineering standards in cloud operations, security, and product delivery. If you have questions about cybersecurity with respect to cloud infrastructure, or concerns about your “translation layer,” Jason can work with you to ask the questions that really matter. Schedule a discovery call.
