They are simply different ways of performing the same job: collecting data about a target system either through a small software agent installed on it, or by polling it from a central location. Here we focus on five points of comparison: accuracy, scalability, bandwidth, speed, and coverage.
Some IT professionals believe that residing on the client or server lets agent-based systems collect more information, and ensures they won't miss machines that are turned off. But it's what you do with that data that matters. And while it's true that an agentless architecture cannot poll a machine that's turned off, it's also true that end users can, and do, disable software-based agents.
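One common way to catch machines whose agents have been disabled, or that never had one installed, is to cross-check an independent network discovery scan against the set of hosts whose agents are actually reporting. The sketch below shows the idea; the host names and the notion of a "discovery sweep" feeding one set and agent check-ins feeding the other are illustrative assumptions, not any particular product's design.

```python
def find_unmanaged_hosts(discovered_hosts: set, reporting_agents: set) -> set:
    """Return hosts seen on the network that have no reporting agent.

    `discovered_hosts` might come from a periodic ping/ARP sweep;
    `reporting_agents` is the set of hosts whose agents checked in
    recently. Both inputs here are hypothetical examples.
    """
    return discovered_hosts - reporting_agents


# Example: two machines are on the wire but not reporting.
discovered = {"ws-101", "ws-102", "ws-103", "printer-2"}
reporting = {"ws-101", "ws-103"}
print(sorted(find_unmanaged_hosts(discovered, reporting)))
# → ['printer-2', 'ws-102']
```

The set difference surfaces both rogue machines (never inventoried) and machines whose agents were switched off, which is why many shops run a discovery scan even alongside an agent-based tool.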
Additionally, if a user attaches a rogue machine to the network, it won't have an agent and may go undetected unless the company has another means of finding such machines. Even an agent-based system must evaluate the data its agents collect, which means that data must flow over the network at some point, so it still needs a certain amount of bandwidth. It's true that some older agentless systems consumed significant bandwidth because they read entire copies of files across the network to check versions, but more advanced agentless systems have overcome this shortcoming and now consume only moderate amounts.
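The bandwidth difference comes down to what actually crosses the wire: reading a whole file remotely moves its full contents, while comparing a short fingerprint computed where the file lives moves only a few dozen bytes. The following is a minimal sketch of that fingerprint approach, assuming the digest is computed on the machine that holds the file; the function names and the 64 KB chunk size are illustrative, not drawn from any specific product.

```python
import hashlib


def file_fingerprint(path: str) -> str:
    """SHA-256 digest of a file, read locally in 64 KB chunks.

    If this runs on the machine that holds the file, only the
    64-character hex digest needs to cross the network, instead of
    the file's entire contents.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def needs_patch(remote_digest: str, known_good_digest: str) -> bool:
    """Flag a file whose fingerprint differs from the patched version's."""
    return remote_digest != known_good_digest
```

Comparing digests (or simply reading version metadata) is what lets a modern agentless scanner check thousands of files without hauling their contents over the network.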
In an agent-based scenario, if all agents report in at once, you should ask whether the server can keep up. In practice, though, the scanning tool is unlikely to be the gating factor in how quickly you can get a patch out; that is usually how quickly the third-party vendor makes the patch available.
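A standard way to keep a central server from being swamped by simultaneous check-ins is to add random jitter to each agent's reporting interval, so the load spreads over a window instead of landing in one burst. A minimal sketch, where the base interval and jitter fraction are made-up parameters:

```python
import random


def next_checkin_delay(base_interval_s: float, jitter_fraction: float = 0.25) -> float:
    """Delay before an agent's next report, with random jitter applied.

    With jitter_fraction=0.25 and a 3600 s base interval, check-ins
    land anywhere in a 2700-4500 s window rather than all at once.
    """
    jitter = base_interval_s * jitter_fraction
    return base_interval_s + random.uniform(-jitter, jitter)
```

Because each agent draws its own delay, a fleet that was provisioned at the same moment quickly desynchronizes, smoothing the server's inbound load.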
http://www.it-observer.com/articles/1288/the_truth_about_patching/