Cyber Security Institute

Tuesday, December 05, 2006

The Truth about Patching

According to an April 2006 report from the Yankee Group consultancy in Boston, Mass., the various security investments enterprises have made do, indeed, make it more difficult for “criminals, spies and miscreants” to break into corporate networks.  However, the report says the criminal element is focusing on new attack strategies, one of which is “quickly creating and launching exploits to vulnerabilities before enterprises can patch against them.”  The so-called zero-day (0-day) attack, in which an exploit is launched against a vulnerability before a patch exists to plug it, has long been a great fear of security professionals.  With the criminal element actively seeking out opportunities for such an exploit, it’s more important than ever for organizations to take stock of their patching strategy.

Agent-based and agentless patching systems are simply different ways of performing the same job: one uses a small software “agent” on each target system, the other polls from a central location to collect data.  Here we focus on five criteria for comparing them: accuracy, scalability, bandwidth, speed and coverage.

Some IT professionals believe that residing on the client or server enables agent-based systems to collect more information, and ensures they won’t miss machines that are turned off.  But it’s what you do with that data that matters.  And while it’s true that an agentless architecture cannot poll a machine that’s turned off, it’s also true that end users can, and do, disable software-based agents.
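A disabled agent is easy to miss precisely because it simply goes quiet.  One way to catch this (a sketch of my own, not a technique from the article, with hypothetical host names) is to flag any machine whose agent has not checked in within an expected window:

```python
from datetime import datetime, timedelta

def stale_agents(last_seen, now, max_age=timedelta(hours=24)):
    # A disabled agent looks exactly like a healthy, quiet one
    # until you compare its last check-in time against a threshold.
    return [host for host, ts in last_seen.items() if now - ts > max_age]

now = datetime(2006, 12, 5, 12, 0)
last_seen = {
    "web01": now - timedelta(hours=1),   # reported recently: healthy
    "db02": now - timedelta(days=3),     # silent for days: agent likely disabled
}
flagged = stale_agents(last_seen, now)
print(flagged)  # ['db02']
```

Anything this check flags still needs a human (or an agentless sweep) to decide whether the agent was disabled, the machine was retired, or it is simply powered off.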

Additionally, if a user attaches a rogue machine to the network, it won’t have an agent and may not be found unless the company has another means of detecting such machines.  Even an agent-based system still needs to evaluate the data the agent collects, which means that data must flow over the network at some point, so it does need a certain amount of bandwidth.  And though some older agentless systems consumed significant bandwidth because they had to read entire copies of files across the network to check versions, more advanced agentless systems have overcome this shortcoming and now consume only moderate amounts of bandwidth.
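The bandwidth saving comes from shipping a fingerprint of a file instead of the file itself.  A minimal illustration of the idea (my own sketch, not code from the article; the baseline digest and file contents are made up) compares a file’s hash against a known-good value, so only a few dozen bytes ever need to cross the network:

```python
import hashlib
import os
import tempfile

def file_digest(path, chunk_size=65536):
    # Hash the file in chunks on the target side; only the
    # 32-byte digest, not the file, needs to leave the machine.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def version_matches(path, known_good_digest):
    # The scanner compares the digest against a baseline recorded
    # when the patched binary was first catalogued.
    return file_digest(path) == known_good_digest

# Demo: pretend this temp file is a patched system binary.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"patched binary v2.1")
    path = f.name

baseline = hashlib.sha256(b"patched binary v2.1").hexdigest()
result = version_matches(path, baseline)
print(result)  # True: the file matches the patched baseline
os.unlink(path)
```

Production scanners typically get even cheaper by checking version metadata or registry entries rather than hashing whole files, but the principle is the same: move the comparison, not the data.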

In an agent-based scenario, if all agents report in at once, you should ask whether the central server can keep up.  In practice, though, the scanning tool is unlikely to be the gating factor in how quickly you can get a patch out; that is determined by how quickly the third-party vendor makes the patch available.
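A common way to keep the central server from being swamped (again my own sketch, not something the article prescribes) is to stagger check-ins: each agent hashes its hostname to a deterministic offset within the reporting window, spreading load evenly without any coordination:

```python
import hashlib

def checkin_slot(agent_id, window_seconds=3600):
    # Map each agent to a stable offset within the reporting window,
    # so the same host always reports at the same time but the
    # fleet as a whole is spread uniformly across the hour.
    digest = hashlib.sha256(agent_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % window_seconds

# With 5,000 hypothetical agents and a one-hour window, the server
# sees on the order of one or two check-ins per second instead of
# 5,000 simultaneous connections.
slots = [checkin_slot(f"host-{i}") for i in range(5000)]
```

Because the slot is derived from the agent’s own identity, no central scheduler is needed, and a restarted agent lands back in the same slot.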

http://www.it-observer.com/articles/1288/the_truth_about_patching/
