How to Talk Cyber Risk with Nontechnical Stakeholders
Closing the gap between security findings and business decisions
A practitioner methodology for translating technical findings into board-ready insight.
Victoria Mosby, Senior Sales Engineer, PlexTrac
A PlexTrac-sponsored white paper, 2026
Executive Summary
Security teams are doing more testing than ever: continuous assessments, ongoing penetration tests, red and purple teaming. But most of what comes out the other side is still a pile of technical findings that never quite translates into business decisions. Executives get Common Vulnerability Scoring System (CVSS) scores, heat maps, and vulnerability counts when what they really need is a clear story.
The translation gap is not a tooling problem. It is a methodology problem. Reports get filed because they answer questions nobody asked, in language nobody outside security speaks, with no clear path from finding to decision. The same engagement that produces 47 findings could produce three decisions. Most of the time, it produces only the findings.
This paper presents a methodology for closing that gap. The argument has three parts.
First, score for the business, not the system. CVSS in a vacuum cannot answer business questions. A CVSS 9.8 on an isolated test server is not a higher priority than a CVSS 5.3 on a domain controller that touches the enterprise resource planning (ERP) system. Layered context, including asset criticality, business impact, risk appetite, and environmental factors, flips the priority order in ways that match how the business actually experiences risk.
Second, speak the audience's language, not your own. Shared vocabulary between security and the business is built through three operational mechanisms: a jointly owned risk register, cross-functional readouts that present with leaders rather than to them, and purple-team feedback loops that let the business correct the security team's translation in real time.
Third, report as a decision engine, not a deliverable. Every finding has the same anatomy: a conclusion-titled summary, business consequences quantified in dollars and frameworks, and a recommended action with effort estimates and sequencing logic. Reports built this way support quarterly board cycles, not just point-in-time deliverables.
The payoff is not just better reports. It is the reframing of security as a quantifiable cost-saving function. Engagements that show measurable risk reduction over time turn one-time clients into long-term relationships and security teams into trusted advisors rather than perpetual cost centers.
The Translation Gap
Every executive really asks the same question, in some form: are we good? They will not phrase it that way. They may ask about audit readiness, ransomware exposure, board-reportable metrics, or quarterly risk posture. But beneath the phrasing, the question is consistent. Three sub-questions sit underneath it. What could actually hurt us? How badly? What should we do first?
If a security report does not answer those three questions, it is not actionable. It is a document that may be technically accurate, professionally formatted, and rigorously defensible, and still fail at its job.
This is not a hypothetical failure mode. It is the dominant failure mode in security reporting today. The findings are real. The CVSS scores are correctly calculated. The descriptions are technically precise. And the report sits in a SharePoint folder, unread by anyone who could act on it, because nothing in the report tells the reader what to do.
Three patterns drive the problem.
Volume over clarity. More findings do not mean more action. A report with 47 findings sorted by CVSS descending tells a reader that the security team did thorough work. It does not tell the reader what to fix first. When everything is critical, nothing is.
Vulnerability is not risk. A vulnerability is a technical fact. Risk is a business judgment. The same vulnerability has wildly different risk implications depending on what system it lives on, what data flows through that system, what compensating controls exist, and what the business has stated as its tolerance for loss. Reports that conflate the two leave the business judgment to the reader, which is exactly what the report was supposed to provide.
Severity is not priority. CVSS scores severity in a way that is internally consistent and externally meaningless. CVSS does not know what your systems do, who uses them, what data they touch, or whether they are subject to Payment Card Industry Data Security Standard (PCI DSS), the Sarbanes-Oxley Act (SOX), or Health Insurance Portability and Accountability Act (HIPAA) scope. A CVSS of 9.8 on a system that holds nothing important is a lower priority than a CVSS of 5.3 on a system that runs revenue. CVSS-sorted reports inverse-correlate with business priority more often than they align with it.
The result is the gap between what testing produces and what decisions require. The gap closes when reports answer the three questions executives are actually asking, in language executives actually use, with structure that supports the decisions they actually need to make.
The rest of this paper is about how to do that.
Score for the Business, Not the System
The first shift is the most concrete: how findings get scored.
Consider two findings from the same engagement. Finding A is a remote code execution vulnerability with a CVSS of 9.8, identified on an isolated test server. The system has no production data, sits air-gapped from the production network, and has no path to customer impact. Finding B is a weak authentication finding on a domain controller with a CVSS of 5.3. The domain controller has admin paths into the ERP system, which holds $40 million in annual revenue data and falls under SOX scope. During testing, four admin accounts were successfully compromised.
Same scoring system. Same numerical scale. Wildly different business priority. Finding B is the higher priority by every measure that matters to the business, despite scoring 4.5 points lower on CVSS.
The fix is not to abandon CVSS. CVSS is useful as one input. The fix is to add three layers of context on top of it before producing a priority recommendation.
Asset Context. What does the system do? Who uses it? What data does it touch? Is it internet-facing? These are factual questions that any engagement can answer, often from observation alone, even when the client has not provided detailed asset documentation. Naming conventions, data encountered during testing, and exposure visible from external scanning all serve as evidence.
Business Impact. What is the financial exposure if this finding is exploited? What compliance frameworks apply, and what do they require? What is the organization's stated risk appetite, and where does this finding fall against it? What operational consequences would follow? Risk appetite in particular is a term boards and chief information security officers (CISOs) already use, and one most reports ignore. Naming it explicitly and aligning findings against it is a credibility move that costs nothing and pays repeatedly.
Environmental Factors. What is in the way of exploitation? Compensating controls, real-world exploitability evidence, signs of active exploitation in the wild. CVSS environmental metrics gesture at this but rarely capture it well. A finding with public exploit code and active exploitation in adjacent industries is meaningfully different from a theoretical vulnerability with no observed exploitation.
Layered together, these three contexts produce a business priority rating that frequently disagrees with CVSS-sorted ordering. That disagreement is the point. The business priority rating is what the executive needs. The CVSS score is what the auditor needs. Both can coexist in the same report. Only one should drive the priority order.
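As an illustration, the three context layers can be sketched as multipliers applied on top of the CVSS base score. The tiers and weights below are assumptions chosen for demonstration, not part of CVSS or any published rubric; a real program would calibrate them jointly with the business.

```python
from dataclasses import dataclass

# Illustrative multipliers -- these tiers and weights are assumptions
# for demonstration, not a standard scoring scheme.
ASSET_WEIGHT = {"isolated": 0.2, "internal": 1.0, "business_critical": 2.0}
IMPACT_WEIGHT = {"none": 0.2, "moderate": 1.0, "severe": 2.0}  # vs. stated risk appetite
EXPLOIT_WEIGHT = {"theoretical": 0.5, "public_exploit": 1.5, "active_in_wild": 2.0}

@dataclass
class Finding:
    name: str
    cvss: float
    asset: str    # asset context tier
    impact: str   # business impact tier
    exploit: str  # environmental evidence tier

    def business_priority(self) -> float:
        """CVSS stays as one input; the context layers rescale it."""
        return round(self.cvss
                     * ASSET_WEIGHT[self.asset]
                     * IMPACT_WEIGHT[self.impact]
                     * EXPLOIT_WEIGHT[self.exploit], 1)

findings = [
    Finding("RCE on isolated test server", 9.8, "isolated", "none", "theoretical"),
    Finding("Weak auth on domain controller", 5.3, "business_critical", "severe", "public_exploit"),
]

# Sorting by business priority flips the CVSS-descending order.
for f in sorted(findings, key=lambda f: f.business_priority(), reverse=True):
    print(f"{f.name}: CVSS {f.cvss}, priority {f.business_priority()}")
```

With these illustrative weights, the CVSS 9.8 test-server finding scores 0.2 and the CVSS 5.3 domain-controller finding scores 31.8, reproducing the inversion described above.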
When client data is genuinely unavailable, the methodology still works with three substitutes. Use what you observe during the engagement. Use a lightweight intake questionnaire delivered before scoping. Use industry benchmarks for breach cost, ransomware recovery, and regulatory exposure by sector. None of these are perfect proxies for actual client data, but all of them produce more useful priority signals than CVSS alone.
The intake questionnaire deserves a moment of its own. Five questions, ten minutes for the client to fill out, and the deliverable improves materially.
What are the top three to five business-critical systems in scope?
What revenue depends on these systems?
What compliance frameworks apply, and what is your stated risk appetite?
What has changed since your last assessment?
Who receives the report, and what decisions do they need to make?
The audience question is the most often skipped and the most consequential. The audience determines the structure. A report written for the chief financial officer (CFO) is structured differently from a report written for the engineering team, even when the underlying findings are identical. Ask the question. Use the answer.
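For teams that want to operationalize the intake, the five questions can be captured as a minimal machine-readable record so that gaps surface before scoping. The field names and validation below are illustrative assumptions, not a PlexTrac artifact.

```python
# Minimal intake record -- field names are illustrative assumptions.
INTAKE_FIELDS = {
    "critical_systems": "Top three to five business-critical systems in scope",
    "revenue_dependency": "Revenue that depends on these systems",
    "frameworks_and_appetite": "Applicable compliance frameworks and stated risk appetite",
    "changes_since_last": "What has changed since the last assessment",
    "audience_and_decisions": "Who receives the report and what decisions they must make",
}

def unanswered(answers: dict) -> list:
    """Return the questions still missing answers, so gaps surface before scoping."""
    return [question for key, question in INTAKE_FIELDS.items() if not answers.get(key)]

# An empty intake flags all five questions, including the audience question.
print(unanswered({}))
```

A blank or missing audience field can then be treated as a blocker for report structure, rather than discovered after the deliverable is written.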
Speak the Audience's Language
The second shift is harder than the first because it is not a methodology change. It is an operational one.
If a finding cannot be acted on, it is not a finding. It is a fact. The difference is whether the audience has the language and the context to make a decision from what they read. That capability is built, not declared. No glossary at the front of a report substitutes for the work of building shared vocabulary across teams.
Three operational mechanisms produce that vocabulary. None of them happen during the engagement itself. They happen in the spaces between engagements, where most security teams under-invest and most business stakeholders never see security at all.
The risk register as a living artifact. A risk register that gets updated quarterly during board prep is not a risk register. It is a compliance artifact. A working risk register is updated continuously, owned jointly by security and the business, and maps every active finding to the business outcomes it threatens. When a new vulnerability is identified during a routine scan, it shows up in the risk register the same day, with the same business framing the executive team will see at the next quarterly review. The register becomes the shared vocabulary because every entry is written in language both audiences understand.
Cross-functional readouts. The dominant pattern for executive briefings is for security to present to business leaders. The methodology in this paper requires presenting with them. The difference is structural. In the first model, security presents findings, leaders ask questions, and the meeting ends. In the second, security and business jointly walk through the risk register, jointly make priority decisions, and jointly own the outcomes. Cross-functional readouts force translation in both directions. Security learns what business leaders care about by hearing them push back. Business learns the technical context by being asked to weigh in on it.
Purple-team feedback loops. Purple teaming is usually framed as collaboration between offensive and defensive security teams. The methodology extends it to include the business. The business tells security what they need to hear, not what security thinks they need to know. This is uncomfortable. It surfaces the gap between security's view of risk and the board's view of risk in ways that cannot be glossed over. It also produces the most durable shared vocabulary of any of the three mechanisms, because the business is actively correcting security's translation in real time.
A practical artifact accelerates all three mechanisms: the Jargon Glossary. Place it at the front of every report, before the executive summary. The format is two columns. The left column is what security says. The right column is what the audience hears.
What we say → What they hear
RCE: Remote code execution; the ability to execute arbitrary code remotely. → An attacker runs any command they choose; equivalent to handing over admin keys.
IDOR: Insecure direct object reference; an authorization flaw exposing other users' data. → One customer can read another's data; a privacy violation and a contractual breach.
Risk appetite: Stated tolerance for loss across categories. → How much loss the board will accept before it forces a change in posture.
The glossary does not replace the operational mechanisms. It accelerates them. Once the glossary is established, every subsequent finding can be written in the right-column register with confidence that the audience will understand what is meant.
Report as a Decision Engine
The third shift is the structural payoff of the first two. Reports stop being deliverables and start being decision engines.
Every actionable finding has the same anatomy:
Title: asset + business consequence
Consequences: regulatory, financial, and operational impact, with dollar ranges
Action: fix + effort estimate + sequencing logic
The formula reads as obvious in retrospect. Most reports fail at one or more of its three parts. The title names the technical issue rather than the business consequence. The consequences section lists generic risks rather than quantified ones. The action section recommends a fix without estimating effort or proposing a sequence.
A worked rewrite makes the difference visible. Consider a Structured Query Language (SQL) injection finding written two ways. The Before version is the kind of writeup most reports produce. The After version applies the three-part formula, defines business consequences in terms the audience already uses, and proposes a sequenced action with effort estimates. The acronyms in the After version, including personally identifiable information (PII), the European Union (EU), the U.S. Federal Trade Commission (FTC), and web application firewall (WAF), are the working vocabulary of the audience the report is written for.
Before:
Title: SQL Injection in Login Parameter
Severity: Critical (CVSS 9.8)
Description: A SQL injection vulnerability was identified in the login form's username parameter. The application does not properly sanitize user input, allowing an attacker to inject malicious SQL queries.
Recommendation: Implement parameterized queries to prevent SQL injection attacks.
After:
Title: SQL Injection Exposes Customer Database. Estimated $3M to $8M in regulatory and litigation risk.
Business Impact: Critical. Exceeds stated risk appetite. The vulnerability exposes 2.4 million customer records including PII and payment data, creating mandatory notification obligations across 14 states and EU jurisdiction.
What's at risk: Customer data confidentiality, PCI DSS compliance status, customer trust, regulatory standing with the FTC and state attorneys general.
Recommended action: Implement parameterized queries (1 to 2 sprint development cycle, ~80 engineering hours). Deploy WAF rules as compensating control within 48 hours (~4 hours of operations work). Schedule code review of all input handling within 30 days.
The Before version is technically correct. The After version is technically correct and answers the three stakeholder questions.
What could hurt us? Customer data exposure with regulatory consequences.
How badly? $3M to $8M, exceeding risk appetite.
What should we do first? WAF in 48 hours, code fix within two sprints.
The same data underlies both versions. The structural difference is what allows the second to drive a decision.
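One way to enforce the anatomy mechanically is to template it, so a conclusion-first title cannot be omitted. The field names and formatting below are illustrative assumptions, not a PlexTrac schema.

```python
from dataclasses import dataclass

# Illustrative template enforcing the three-part finding anatomy.
# Field names and formatting are assumptions for demonstration.
@dataclass
class ActionableFinding:
    asset: str
    consequence: str
    dollar_low_m: float        # low end of exposure, $M
    dollar_high_m: float       # high end of exposure, $M
    actions: list              # (action, effort, window) tuples, quick wins first

    def title(self) -> str:
        """Title = asset + business consequence, with the dollar range up front."""
        return (f"{self.consequence} on {self.asset}. "
                f"Estimated ${self.dollar_low_m:g}M to ${self.dollar_high_m:g}M at risk.")

finding = ActionableFinding(
    asset="customer database",
    consequence="SQL injection exposure",
    dollar_low_m=3, dollar_high_m=8,
    actions=[("Deploy WAF rules as compensating control", "~4 ops hours", "48 hours"),
             ("Implement parameterized queries", "~80 engineering hours", "1-2 sprints")],
)
print(finding.title())
```

The point of the template is not automation for its own sake; it is that a finding with an empty consequence or no sequenced actions fails to construct at all.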
When this anatomy is applied consistently, an executive summary writes itself. Five sections, in this order:
Risk Exposure Summary. Aggregate dollar range across regulatory, financial, and operational categories. One paragraph.
Priority Decisions Required. Top three to five findings with business impact framing. One short paragraph each.
Remediation Roadmap. Phased and sequenced by risk reduction per unit of effort. Visual where possible.
Risk Trend. For recurring engagements only. Longitudinal comparison of total risk exposure, mean time to remediate, and finding closure rates against prior engagements.
Finding Summary Table. Sorted by business priority, not by CVSS. CVSS appears as a column, not as the sort key.
For external assessment providers, the report's value extends beyond the executive summary. The roadmap-ready structure means the deliverable does not end at the report. It ends when the client knows what to do next. Four design choices make the difference.
Group findings by business function rather than by severity. Provide effort estimates that distinguish configuration changes from code refactors and from multi-sprint projects. Sequence work with quick wins first and strategic items scheduled around their dependencies. Flag dependency relationships explicitly so blocking findings are visible.
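The sequencing choice reduces to a simple ranking: risk reduction per unit of effort, quick wins first. The figures below are assumptions for demonstration only.

```python
# Illustrative roadmap sequencing: rank remediation items by estimated
# risk reduction ($M) per engineering hour. All figures are assumptions.
items = [
    {"fix": "WAF compensating control", "risk_reduction_m": 2.0, "effort_hours": 4},
    {"fix": "Parameterized queries", "risk_reduction_m": 5.0, "effort_hours": 80},
    {"fix": "Input-handling code review", "risk_reduction_m": 1.5, "effort_hours": 40},
]

roadmap = sorted(items,
                 key=lambda i: i["risk_reduction_m"] / i["effort_hours"],
                 reverse=True)

for step in roadmap:
    ratio = step["risk_reduction_m"] / step["effort_hours"]
    print(f'{step["fix"]}: ${ratio:.3f}M per hour')
```

Note that the highest absolute risk reduction (parameterized queries) is not the first step; the quick win leads because it buys the most reduction per hour while the larger fix is planned around its dependencies.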
A useful mental model for individual finding presentation: one slide, one finding, one decision. If a finding's slide cannot be summarized as a single decision the audience needs to make, the slide is doing too much. Split it.
Quantifiable Risk Reduction
The methodology produces an artifact that endures past the engagement. That endurance is the cost-saving function argument.
Cybersecurity is conventionally framed as a cost center. Spending goes up, ostensibly to keep risk down, with the relationship between the two assumed rather than measured. Most boards have learned to budget security spending as overhead and to treat the security team's reports as compliance theater.
The methodology in this paper changes the math. Reports that quantify risk in financial terms enable risk reduction to be quantified the same way. When a recurring engagement shows total risk exposure decreased by 38% since the previous assessment, and mean time to remediate for business-critical findings improved from 47 to 19 days, the security team is no longer reporting cost. It is reporting return.
This is the risk trend section of the executive summary, available only to recurring engagements but transformative when present. Longitudinal data is what turns a deliverable into a relationship. One-time clients receive a report. Recurring clients receive a quarterly view of how their security posture is changing in dollar-quantified terms.
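The trend metrics themselves are simple arithmetic once exposure is quantified per engagement. The dollar figures below are illustrative assumptions; the 47-to-19-day remediation figures are taken from the example above.

```python
def pct_reduction(previous: float, current: float) -> float:
    """Percentage reduction from the previous period to the current one."""
    return round(100 * (previous - current) / previous, 1)

# Illustrative quarter-over-quarter comparison. Exposure figures ($M)
# are assumed for demonstration; remediation days match the example above.
print(pct_reduction(12.4, 7.7))  # total quantified risk exposure, $M
print(pct_reduction(47, 19))     # mean time to remediate, days
```

The same two-line computation, run every quarter against the risk register, is what produces the longitudinal view described here.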
Quantifiable risk reduction is what makes security a cost-saving function, not a cost center. The phrase deserves to be said plainly because it inverts a position the business has held for decades. Done well, security pays for itself, and the methodology in this paper is how that gets demonstrated.
This has two follow-on effects worth naming.
For internal security leaders, it shifts the budget conversation. Defending the security budget on the basis of "we prevented incidents that would have happened" is unfalsifiable and reads as a hedge. Defending it on the basis of "our quantified risk exposure decreased by $4.3M while spend increased by $400K" is a real argument that real CFOs respond to.
For external assessment providers, it shifts the client relationship. A vendor who delivers an annual report is replaceable. A vendor who maintains a longitudinal view of the client's risk posture, in language the client's board uses, is a strategic partner. The work is the same. The framing of the work is what changes.
Appendix: Practitioner Toolkit
The following artifacts accompany this paper as a working toolkit for security teams operationalizing the methodology.
Pre-engagement intake questionnaire. The five-question template, with field descriptions and sample answers from three engagement types.
Business-aligned executive summary template. The five-section structure with prompts and example language for each section.
Before and after finding examples. Three findings, each rewritten from CVSS-sorted technical writeup to conclusion-titled business framing.
Contextual scoring cheat sheet. The three-layer rubric with worked examples showing how CVSS-sorted priorities flip when context is layered on.
Jargon Glossary. Ten technical terms in the two-column format, ready to drop in at the front of any report before the executive summary.
The toolkit and slide deck are available alongside this paper. Each artifact is designed to be used immediately on the next engagement and refined over subsequent ones.