“A CEO’s favorite visualization of metrics is a stock chart, a 1-inch square that contains a month’s worth of opening and closing prices, a trend line and several other indicators.”
By no means does Jaquith (or CSO for that matter) think these five metrics are the final word on infosecurity. Quite the contrary, they’re a starting point, relatively easy to ascertain and hopefully smart enough to get CISOs thinking about finding other metrics like these, out in the vast fields of data, waiting to be reaped.
Coverage of devices by your baseline security tools (antivirus, antispyware, firewall, intrusion detection and the like) should be in the range of 94 percent to 98 percent. If in one quarter you have 96 percent antivirus coverage and two quarters later it’s down to 91 percent, you may need more formalized protocols for introducing devices to the network, or a better way to deploy defenses to those devices. “At any given time, your network management software doesn’t know about 30 percent of the IP addresses on your network,” says Jaquith, because those devices were either brought online ad hoc or are transient.

How to get it: Run network scans and canvass departments to find as many devices and their network IP addresses as you can. Then check those devices’ IP addresses against the IP addresses in the log files of your antivirus, antispyware, IDS, firewall and other security products to find out how many devices aren’t covered by your basic defenses.
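A minimal sketch of that comparison, assuming you have already exported the scan results and each tool’s logged IP addresses to plain-text files (one address per line; the file names are illustrative, not from the article):

```python
# Sketch: compute baseline-defenses coverage from a network scan and tool logs.
# Assumes one IP address per line in each input file (file names are examples).

def load_ips(path):
    """Read a set of IP addresses from a text file, one per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

scanned = load_ips("network_scan_ips.txt")            # every device you found
covered_by = {
    "antivirus":   load_ips("antivirus_log_ips.txt"),
    "antispyware": load_ips("antispyware_log_ips.txt"),
    "ids":         load_ips("ids_log_ips.txt"),
}

for tool, ips in covered_by.items():
    covered = scanned & ips
    missing = scanned - ips
    pct = 100.0 * len(covered) / len(scanned)
    print(f"{tool}: {pct:.1f}% coverage, {len(missing)} devices uncovered")
```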
Overall coverage, while an important baseline, is too narrow in scope to give any real picture of your security profile. Breaking coverage down by class of device (for instance, 98 percent antivirus coverage of desktops, 87 percent of servers) or by business unit or geography (for instance, 92 percent antispyware coverage of desktops in operations, 83 percent of desktops in marketing) will help uncover tendencies of certain types of infrastructure, people or offices to miss security coverage. Freshness matters too: 98 percent antivirus coverage of manufacturing servers is useless if the average age of the virus definitions on those servers is 335 days.

One possible visualization: plot the percentages of five business units’ antivirus and antispyware coverage, and the time of their last update, against a companywide benchmark.
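As a rough sketch of that breakdown (not from the article), you might compute it from a device inventory; the CSV file name and column names below are assumptions for illustration:

```python
# Sketch: break coverage down by business unit and check definition freshness.
# Assumes an inventory CSV with illustrative columns:
#   device, business_unit, has_antivirus (0/1), defs_age_days
import pandas as pd

inv = pd.read_csv("device_inventory.csv")

by_unit = inv.groupby("business_unit").agg(
    av_coverage_pct=("has_antivirus", lambda s: 100.0 * s.mean()),
    avg_defs_age_days=("defs_age_days", "mean"),
)
print(by_unit.round(1))
```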
Patch latency is the time between a patch’s release and your successful deployment of that patch. As with basic coverage metrics, patch latency stats may show machines that are missing many patches or whose patches are badly out of date, which might point to the need for centralized patch management or process improvements. At any rate, through accurate patch latency mapping, you can discover the proverbial low-hanging fruit by identifying the machines that are likely the most vulnerable to attack.

One possible visualization: For data where you can sum up the results, such as the total number of missing patches, a “small multiples” graphic works well.
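A minimal sketch of computing patch latency per machine, assuming you can export patch release and deployment dates from your patch-management system (the records below are made up):

```python
# Sketch: compute patch latency (days from patch release to deployment) per machine.
from datetime import date

# (machine, patch_id, released, deployed) -- deployed is None if still missing
records = [
    ("web01", "KB500123", date(2024, 3, 12), date(2024, 3, 20)),
    ("web01", "KB500456", date(2024, 4, 2),  None),
    ("db02",  "KB500123", date(2024, 3, 12), date(2024, 5, 1)),
]

latency = {}   # machine -> list of latencies in days
missing = {}   # machine -> count of patches not yet deployed
for machine, patch, released, deployed in records:
    if deployed is None:
        missing[machine] = missing.get(machine, 0) + 1
    else:
        latency.setdefault(machine, []).append((deployed - released).days)

for machine, days in latency.items():
    avg = sum(days) / len(days)
    print(f"{machine}: avg latency {avg:.0f} days, {missing.get(machine, 0)} patches missing")
```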
Password strength: This metric offers simple risk reduction by sifting out weak passwords so they can be replaced with stronger ones, and by finding potential weak spots where key systems still use default passwords. Password cracking can also be a powerful demonstration tool with executives who themselves have weak passwords.
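One way to start is a simple audit, sketched below with illustrative word lists and thresholds (not a recommended policy), that flags default passwords, very short passwords and low estimated entropy:

```python
# Sketch: a simple password audit -- flag defaults, common choices and short passwords.
# The word list, length cutoff and entropy threshold are illustrative assumptions.
import math
import string

COMMON = {"password", "123456", "admin", "changeme", "letmein", "welcome"}

def charset_size(pw):
    size = 0
    if any(c.islower() for c in pw): size += 26
    if any(c.isupper() for c in pw): size += 26
    if any(c.isdigit() for c in pw): size += 10
    if any(c in string.punctuation for c in pw): size += len(string.punctuation)
    return size or 1

def audit(pw):
    """Return a list of reasons a password looks weak (empty list = no obvious issues)."""
    reasons = []
    if pw.lower() in COMMON:
        reasons.append("common/default password")
    if len(pw) < 12:
        reasons.append("shorter than 12 characters")
    bits = len(pw) * math.log2(charset_size(pw))
    if bits < 60:
        reasons.append(f"estimated entropy only {bits:.0f} bits")
    return reasons

for pw in ["admin", "Tr0ub4dor&3", "correct horse battery staple"]:
    print(pw, "->", audit(pw) or "ok")
```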
Whatever visualization you choose, keep it clean: gratuitous pictures, 3-D bars, florid design and noise around the data diminish its effectiveness.
One possible visualization: An overall score is simple to present; it’s a single number between 1 and 10. To supplement it, consider a tree map. Tree maps use color and space in a field to show “hot spots” and “cool spots” in your data. They are not meant for precision; rather, they’re a streamlined way to present complex data, and they give you a feel for where your problems are most intense. In the case of platform-compliance scores, for instance, you could map the different elements of your benchmark test, assigning each element a color based on how risky it is and a size based on how often it was left exposed. Be warned: tree maps are not easy to do well, but when done right they can have instant visual impact.
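A small sketch of such a tree map, using the third-party squarify library on top of matplotlib; the check names, failure counts and risk weights are illustrative assumptions:

```python
# Sketch: a tree map of benchmark elements, sized by how often each check failed
# and colored by how risky the exposure is. Data is purely illustrative.
import matplotlib.pyplot as plt
import squarify  # pip install squarify

checks = ["open shares", "weak ACLs", "unneeded services", "default accounts", "no host firewall"]
failures = [42, 25, 18, 9, 6]              # how often each element was left exposed
risk = [0.9, 0.7, 0.5, 0.8, 0.3]           # 0 = low risk, 1 = high risk
colors = [plt.cm.Reds(r) for r in risk]    # darker red = riskier

squarify.plot(sizes=failures, label=checks, color=colors, alpha=0.9)
plt.axis("off")
plt.title("Platform compliance hot spots (illustrative data)")
plt.show()
```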
Legitimate e-mail traffic analysis is a family of metrics that includes incoming and outgoing traffic volume, incoming and outgoing message size, and traffic flow between your company and others. There are any number of ways to parse this data; mapping the communication flow between your company and your competitors may alert you to an employee divulging intellectual property, for example. The fascination to this point has been with comparing the amount of good and junk e-mail that companies receive (typically about 20 percent good and 80 percent junk). Such metrics can be disturbing, but Jaquith argues they’re also relatively useless. By monitoring legitimate e-mail flow over time, you can learn where to set alarm points. At least one financial services company has benchmarked its e-mail flow to the point that it knows to flag traffic when message size exceeds several megabytes or when a certain number of messages go out in a certain span of time.

How to get it: First shed all the spam and other junk e-mail from the population of e-mails that you intend to analyze. Then parse the legitimate e-mails every which way you can.

Added benefit: An investigations group can watch e-mail flow during an open investigation, say, when theft of intellectual property is suspected.

Try this: Monitor legitimate e-mail flow over time.
CISOs can actually begin to predict the size and shape of spikes in traffic flow by correlating them with events such as an earnings conference call.
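As a rough sketch of that kind of monitoring (the log format, column names and thresholds below are assumptions, not from the article), you could aggregate already-filtered outbound mail by day and flag days that jump well above the baseline:

```python
# Sketch: flag unusual outbound e-mail activity against a simple baseline.
# Assumes a CSV of already-filtered (legitimate) messages with illustrative columns:
#   timestamp, direction (in/out), size_bytes
import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

daily_count = defaultdict(int)
daily_bytes = defaultdict(int)

with open("legit_mail_log.csv") as f:
    for row in csv.DictReader(f):
        if row["direction"] != "out":
            continue
        day = datetime.fromisoformat(row["timestamp"]).date()
        daily_count[day] += 1
        daily_bytes[day] += int(row["size_bytes"])

counts = list(daily_count.values())
baseline = mean(counts)
spread = stdev(counts) if len(counts) > 1 else 0.0

for day in sorted(daily_count):
    if daily_count[day] > baseline + 3 * spread:
        print(f"{day}: {daily_count[day]} outbound messages "
              f"({daily_bytes[day] / 1e6:.1f} MB) -- well above baseline of {baseline:.0f}")
```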
You can also mine data after unexpected events to see how they affect traffic, and then alter security plans to best address those changes in e-mail flow. A natural way to present this is a time series chart; time series simply means that the X axis delineates some unit of time over which something happens.

Another metric worth tracking is an application risk index for your top business applications.

How to get it: Build a risk indexing tool to measure risks in your top business applications.

Expressed as: A score, a temperature or some other scale for which the higher the number, the higher the exposure to risk. It could also be a series of scores for different areas of risk (for example, a business impact score of 10 out of 16, a compliance score of 3 out of 16 and an other-risks score of 7 out of 16).

Added benefit: A simple index like this is a good way to introduce risk analysis into information security (if it’s not already used) because it follows the principles of risk management without getting too deeply into statistics.

Try this: With your industry consortia, set up an industrywide group that uses the same scorecard and creates industrywide application risk benchmarks to share (confidentially, of course).
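A minimal sketch of such a scorecard, using the example areas and point scales mentioned above (the specific areas and the 1-to-10 scaling are illustrative assumptions, not a statistical model):

```python
# Sketch: a simple application risk index -- a weighted scorecard.
# Each area: (name, maximum points, score assigned in an assessment workshop)
areas = [
    ("business impact", 16, 10),
    ("compliance",      16, 3),
    ("other risks",     16, 7),
]

def risk_index(areas):
    """Return points earned, points possible and an overall index scaled to 1-10."""
    earned = sum(score for _, _, score in areas)
    possible = sum(maximum for _, maximum, _ in areas)
    overall = 1 + 9 * earned / possible   # higher number = higher exposure to risk
    return earned, possible, overall

earned, possible, overall = risk_index(areas)
for name, maximum, score in areas:
    print(f"{name}: {score} out of {maximum}")
print(f"overall risk index: {overall:.1f} on a 1-10 scale ({earned}/{possible} points)")
```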
To this excellent article, the author of this post would add one more metric: the measurement of incidents and their impact.
http://www.csoonline.com/read/070105/metrics.html