
Why ambiguity in cybersecurity is no longer acceptable

by Helen J. Wolf

There are places where ambiguity and subjectivity work well, but measuring your exposure to cyber risk is not one of them.

One place where clarity is in demand is the C-suite. As both the costs and risks of cybersecurity continue to escalate, CEOs are struggling to see a return on their cybersecurity investment.

One survey found that, when measuring the effectiveness of their company’s cybersecurity, 72% of CEOs receive metrics that “miss meaning or context,” and 87% need a better way to measure the effectiveness of their cybersecurity investments.


MIT Sloan Management Review notes, “Often, executives and directors spend too much time studying technical reports on things like the number of intrusion detection system alerts, antivirus signatures identified, and software patches deployed.” These matters are often delegated to, and confined within, the IT department. Ideally, though, top management should manage and address cybersecurity risks strategically, so that risk management is not driven solely by incidents.

Cybersecurity increasingly has to learn to speak another language. Current reforms in multiple countries – notably Australia and the United States – would expose individual directors and executives to liability for cybersecurity risks. The proposals aim to capture the “content of how a company manages its cybersecurity risk”.

That demands a different view of risk – not one that favors qualitative or ambiguous ‘traffic light’ representations.

The traditional approach has been to rank risks as high, medium, or low, or to rate them as “likely” or “somewhat likely” to affect the business.

Those categories are too vague for the modern world. A security team may think a medium risk should be mitigated, while the management team argues it can be accepted. Defending either position is tricky because the term “medium risk” is inherently ambiguous.

It becomes even more challenging when teams have multiple risks ranked as medium. Which one do you focus on first? Do you spend the same time and resources managing all of them? It’s hard to know for sure with non-quantitative ratings.

Organizations face thousands of IT and cyber risks every year. The challenge is to determine which risks need to be addressed first. Likewise, there can be hundreds of possible security controls; which will provide the most benefit at the least cost?

These are questions that CISOs need answers to. And for that, they need quantitative data. Ambiguous terms must be converted to hard numbers.

Do the math

Enter cyber risk quantification – a monetary measure of IT and cyber risk exposure.

It aims to help practitioners and their employers determine which risks should be prioritized and where cybersecurity resources should be allocated for maximum impact.

Cyber risk quantification typically uses advanced modeling techniques such as Monte Carlo simulation to estimate the Value at Risk (VaR) or expected loss from a risk exposure.
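To make this concrete, here is a minimal sketch of such a simulation in Python. Every input below – the event frequency, the loss-size distribution, and their parameters – is an illustrative assumption, not data from any survey or model cited here; a real exercise would calibrate them from incident history.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
trials = 100_000  # number of Monte Carlo trials

# Illustrative assumptions: loss events arrive roughly Poisson,
# and the size of each loss follows a lognormal distribution.
events = rng.poisson(lam=3, size=trials)  # assume ~3 loss events per year
annual_loss = np.array([
    rng.lognormal(mean=11.5, sigma=1.2, size=n).sum()  # assumed loss sizes ($)
    for n in events
])

expected_loss = annual_loss.mean()        # average annual exposure
var_95 = np.percentile(annual_loss, 95)   # 95% Value at Risk

print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"95% VaR:              ${var_95:,.0f}")
```

The output is a loss distribution rather than a single rating: the expected loss answers “what does this risk cost us in an average year?”, while the 95% VaR answers “how bad could a bad year plausibly get?”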

By quantifying the monetary impact of a risk event, questions like “How much should we invest in cybersecurity?”, “What is the return on investment?” and “Do we have adequate cyber insurance coverage?” can be answered more confidently.

Uncertainty is minimized when exposure to cyber risk is expressed clearly and precisely. It becomes easier to direct security investments when you know how much is at stake and how much a particular control can reduce that exposure. There is far less debate and confusion about the top three cyber threats, why they are ranked that way, or which controls are most relevant to mitigating them. The data is there for everyone to see.
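As a hypothetical illustration of that kind of comparison – the control names, costs, and loss-reduction percentages below are invented for the sketch – quantified exposure lets you rank controls by avoided loss per dollar spent:

```python
expected_loss = 850_000  # annual expected loss, e.g. from a simulation as above

# Hypothetical controls: (name, annual cost, assumed cut in expected loss)
controls = [
    ("MFA rollout",       50_000, 0.40),
    ("EDR platform",     120_000, 0.35),
    ("Security training", 30_000, 0.15),
]

# Rank controls by avoided loss per dollar of cost.
ranked = sorted(controls, key=lambda c: expected_loss * c[2] / c[1], reverse=True)
for name, cost, cut in ranked:
    avoided = expected_loss * cut
    rosi = (avoided - cost) / cost  # return on security investment
    print(f"{name:17} avoids ${avoided:>9,.0f}/yr  ROSI {rosi:6.1%}")
```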

Several stakeholders benefit from this clarity. CISOs gain a deeper understanding of risk impact, helping them make data-driven decisions. Boards see what is at stake for the company in dollar terms. And executives can prioritize cybersecurity investments effectively, aligning cyber programs with business goals.

Six things to keep in mind

To quantify cybersecurity risk, organizations need to consider six key points.

First, establish a common risk language. Standardize risk terminology as much as possible: when everyone in the organization has a different definition for each IT asset, threat, or vulnerability, it becomes difficult to communicate and defend risk decisions.
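One hypothetical way to enforce that shared vocabulary is to capture each risk as a typed record, so every team describes assets and threats using the same fields and the same taxonomy. The schema below is a sketch, not an industry standard:

```python
from dataclasses import dataclass
from enum import Enum

class Threat(Enum):            # the organization's shared threat taxonomy
    RANSOMWARE = "ransomware"
    DATA_BREACH = "data breach"
    INSIDER = "insider"

@dataclass
class RiskScenario:
    asset: str                 # the IT asset at risk
    threat: Threat             # must come from the shared taxonomy
    annual_frequency: float    # expected loss events per year
    loss_per_event: float      # expected loss per event, in dollars

    @property
    def annual_expected_loss(self) -> float:
        return self.annual_frequency * self.loss_per_event

db_risk = RiskScenario("customer DB", Threat.RANSOMWARE, 0.5, 400_000)
print(f"ALE: ${db_risk.annual_expected_loss:,.0f}")  # ALE: $200,000
```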

Second, quantifying cyber risk is a collaborative exercise that extends beyond the IT security department. Involve other departments in identifying critical risk scenarios: the more perspectives brought to the table, the more comprehensive your risk data will be.

Third, cyber risks and threats are constantly evolving. A risk that was critical a year ago may no longer be as important or even relevant. The only way to find out is to re-quantify risks regularly, perhaps once or twice a year.

Fourth, it is neither efficient nor effective to tackle every possible threat and risk scenario simultaneously. Pick an important use case and work through it before moving on to the next.

Fifth, automate where possible. Manually quantifying cyber risks can be complex and time-consuming; automating these workflows allows a wider range of risk exposures to be measured more quickly.
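Continuing the hypothetical RiskScenario sketch from above, automation can be as simple as re-running the quantification over the entire risk register on a schedule, so the top exposures surface without manual ranking:

```python
# Re-quantify the whole register in one automated pass
# (reuses the hypothetical RiskScenario/Threat classes sketched earlier).
register = [
    RiskScenario("customer DB", Threat.RANSOMWARE, 0.5, 400_000),
    RiskScenario("payroll app", Threat.DATA_BREACH, 0.2, 900_000),
    RiskScenario("source repo", Threat.INSIDER,     0.1, 250_000),
]

for s in sorted(register, key=lambda s: s.annual_expected_loss, reverse=True):
    print(f"{s.asset:12} {s.threat.value:12} ALE ${s.annual_expected_loss:>9,.0f}")
```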

Finally, quantification is not a panacea: cyber risk quantification should enhance, not replace, other IT and cyber risk management processes. Its value is best realized when complemented by risk monitoring, qualitative assessments, internal audits, and problem management processes.

While no organization can ever be completely immune to threats and risks, smart, well-calculated risk quantification, measurement, and management can help organizations mitigate risk more effectively.
