CyberSecurity Institute

Security News Curated from across the world


Reinvigorate your Threat Modeling Process

Posted on July 17, 2008 (updated December 30, 2021) by admini

Think about how you protect your home: it’s likely you won’t have thought of everything or implemented defenses against every possible attack, and it’s very unlikely you have a home defense management plan or have ever run a penetration test against your home.

As we build software, regardless of whether we’re in an agile or a waterfall world, we need agreement on what we’re building, what we’re not building, and what we’re doing to ensure we’re building the right thing. In the past few years, threat modeling has acquired a reputation as a heavy, bureaucratic process. There are some good reasons behind the processes that have grown up around it; I’d like to talk about them, about some lessons learned from those processes, and about how to put the fun back into threat modeling while making it an efficient, agile-friendly activity that anyone can do.

Approaches to Threat Modeling
There are many things called threat modeling. Rather than argue about which is “the one true way,” consider your needs, skills, and schedule, and then work with the method that’s best for you. People mean different things when they ask, “What’s your threat model?” and “Have you threat modeled that component?”

One question is about requirements elicitation, the other about design analysis. At Microsoft, we almost always mean the latter. There are more threat modeling methods out there than I can dream of covering in one column, and a tremendous diversity of goals. Should your threat modeling process be fast or deep? Should it focus on assurance and completeness, or on ease of use? Should you involve experts or developers in every meeting? Do you have organizational or industry rules you need to follow, such as the Microsoft Security Development Lifecycle (SDL) or the rules for medical device manufacturers?

The high level objective should be to understand security issues early so you can address them in the design rather than try to overcome design flaws later. Some of the major ways to approach threat modeling activity include the following:

Assets
Asset-driven threat modeling is much like thinking about what you want to protect in your house. You start by listing what assets your software has associated with it, and then you think about how an attacker might compromise those assets. Examples include a database that stores customer credit cards or a file that contains encrypted passwords. Some people may interpret an asset as an element of the threat modeling diagram, thinking that a Web server itself is an asset. Digital assets are things an attacker wants to read, tamper with, or deny you the use of.

Attackers
Attacker-driven threat modeling involves thinking about who might want your assets, and it works from an understanding of their capabilities to an understanding of how they might attack you. This works great when your adversary is a foreign army with a known strategic doctrine, physical world limits, and long-lead-time weapons systems development. This works less well when your adversary is a loosely organized group of anonymous hackers. More generally, it’s not clear this is useful in software threat modeling. There are certainly people for whom “think like an attacker” is an effective part of design analysis. It’s less clear that this is a reproducible process in which people can get training. If you’re going to start from attackers, it’s probably worth using a standard set. It will be helpful to have a small set of these anti-personas written out.

Software Design
Design-driven threat modeling is threat modeling based on where your fences and windows are. You draw diagrams and worry about what can go wrong with each thing in your diagram. (This is the essence of the SDL threat modeling process today because everyone in software knows how to draw diagrams on a whiteboard.) The software equivalents of fences and windows are the various forms of attack surface, such as file parsers or network listening services—sockets, remote procedure call (RPC) services, Web services description language (WSDL) interfaces, or AJAX APIs. They’re the trust boundaries where you should expect an attacker to first get a foothold.

A Quick and Dirty Threat Model
Threat modeling doesn’t have to be a chore. Here is the outline of a basic threat modeling process that will get you going quickly and painlessly. Diagram your application, and use the diagram to tell your app’s story in front of the whiteboard. Use circles for code, boxes for things that exist outside of it (people, servers), and drums for storage; our team draws data stores as funny-looking parallel lines. Draw trust boundaries as dotted lines to distinguish domains. When you get stuck, apply the STRIDE threat model to each element of your app. If all the threats you find are in one place, it may mean you’re worried about the front door and not worrying about anything else. A third-order defense might be an alarm system on the door, and to mitigate the threat of someone cutting the wire, you send a regular message down the wire. If you find yourself worrying about the software equivalent of what happens when someone cuts the phone wire to the alarm system before you worry about locks on the doors, you’re worrying about the wrong things.

File bugs so you can fix what you found while threat modeling. A few of the STRIDE categories, the security property each one violates, and an example of each:

Tampering (integrity): modifying a DLL on disk or DVD, or a packet as it traverses the LAN.
Information disclosure (confidentiality): allowing someone to read the Windows source code, or publishing a list of customers to a Web site.
Denial of service (availability): crashing Windows or a Web site, sending a packet that absorbs seconds of CPU time, or routing packets into a black hole.
Elevation of privilege (authorization): gaining capabilities without proper authorization.
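To make the per-element step concrete, here is a minimal sketch (in Python, purely illustrative) of walking a whiteboard diagram and asking the STRIDE questions against each element. The element names and the per-shape applicability table are assumptions made up for this example, not rules taken from the SDL tool.

# Minimal sketch: enumerate STRIDE threat categories against each element
# of a data-flow diagram. Element names and the per-type applicability
# table below are illustrative assumptions, not the SDL tool's rules.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which categories are usually worth asking about for each diagram shape
# (circle = code/process, box = external entity, parallel lines = data store,
# arrow = data flow). A rough starting point, not an authoritative mapping.
APPLICABLE = {
    "process":         "STRIDE",   # code can suffer all six
    "external_entity": "SR",
    "data_store":      "TRID",
    "data_flow":       "TID",
}

def enumerate_threats(elements):
    """Yield (element, threat category) pairs to discuss at the whiteboard."""
    for name, kind in elements:
        for letter in APPLICABLE[kind]:
            yield name, STRIDE[letter]

if __name__ == "__main__":
    diagram = [
        ("browser",      "external_entity"),
        ("web frontend", "process"),
        ("orders DB",    "data_store"),
        ("HTTP request", "data_flow"),
    ]
    for element, threat in enumerate_threats(diagram):
        print(f"{element}: consider {threat}")

Running the sketch simply prints a checklist of questions ("orders DB: consider Tampering", and so on), which is roughly the conversation the whiteboard session is meant to produce.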

Finally, you need to account for the availability of time and resources both for your threat modeling process and any resulting mitigation and testing.

Microsoft has found that threat modeling works better with a security expert in the room, but there isn’t always one available. You can get decent results by giving people structure and feedback on their work, and by breaking the task down into small, easy pieces with rules and self-checks for each one. To validate the threat model and your mitigation plan, check whether the diagrams actually represent the code and whether developers and testers agree that they do.

http://msdn.microsoft.com/en-us/magazine/cc700352.aspx


Security and Business: Financial Basics

Posted on June 24, 2008 (updated December 30, 2021) by admini

How do you justify spending on something that isn’t designed to increase the bottom line? The fear factor exists, and yet explaining why bulletproof glass is worth more than Plexiglas still requires numbers. With a recession hovering over the United States like some black helicopter, there will be still more pressure to measure what security spending brings to a company.

One big challenge is that the data is rarely simple to pull together. And even though vendors such as Agiliance now offer ROI calculators for information security expenditures, the devil is still in the data.

Here are four well-known metrics and measurement components that, if used properly, can help put the impact of security spending in the financial perspective companies need.

ROI (Return on Investment)

It’s a classic business expectation that if you invest money in something, you can measure the return on your investment by its impact on the bottom line. But understanding the value of security spending presents challenges, since the tension that exists in most branches of IT is that investment does not usually lead directly to profits. For security spending, the problem is bigger: If investing in security works, nothing happens.

But what if nothing would have happened anyway?

“[The trouble with] trying to calculate ROI on security tools is that they destroy the proof of their effectiveness simply by doing their job,” says Ross Leo, CEO of Alliance Group Research, a security consultancy. So ROI has become a somewhat loose measure of how long it will take to recoup the cost of investing in security. It is not a perfect measure, which may be why its usage appears to be dropping.

Some 42 percent of organizations polled in the 2007 Computer Security Institute Computer Crime and Security Survey said they used ROI to measure their information security investments. That was up from 39 percent the year before, but well below the 55 percent who reported using it in 2004.

Other common measures: 21 percent of respondents said they used internal rate of return, and 19 percent used net present value.

ROI can be straightforward for some aspects of physical security. Craig Chambers, CEO of Cernium, which makes software that analyzes surveillance video, says that at a minimum his firm’s tools mean companies can hire fewer security guards, creating obvious savings on salary and benefits. But it’s rarely so straightforward to calculate savings.

Some of the problems with using ROI: strict adherence to it may cause companies to pick the wrong technology to save money. For instance, a firm might find that inexpensive surveillance cameras are not as effective as ones that include built-in analytical tools, but a strict focus on ROI will seem to show a better payback for the inferior product, says Steve Hunt, a security consultant in Evanston, Ill. “ROI is misleading because people don’t understand what they’re trying to accomplish…Look at the benefit you want first, then the ROI,” Hunt says. He doesn’t think ROI numbers work well in security, and he tends to counter clients’ requests for them with a discussion of the losses they would likely face if they don’t invest in security services. Even though he prefers measuring losses, he concedes that unless a firm has recently experienced a breach of some sort, measuring costs becomes an exercise in “throwing darts at a dartboard.”

Otherwise, it’s tough to quantify potential losses, says Anthony Hernandez, managing director of the information risk management practice at Smart Business Advisory and Consulting in Devon, Pa. He notes, for instance, that it was difficult to say what companies would get in return for spending on HIPAA compliance. In the case of PCI, he’s seeing companies receive fines of $25,000 a month. It’s also possible to measure what breaches will cost, thanks in part to incidents like those at TJX, which paid $100 million in fines and another $156 million to resolve lawsuits. It would be harder to say whether TJX suffered any intangible costs, like loss of goodwill (sales actually rose in the wake of the breaches).

Note that there’s also another measure, ROSI (return on security investment), which weighs the expected annual loss (ALE) a control is expected to avert against the expected security spending.
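As a rough illustration of how such a calculation might look, here is a back-of-the-envelope ROSI computation using the commonly cited formulation (annualized loss avoided weighed against the cost of the control). All figures are invented for the example.

# Back-of-the-envelope ROSI, using the commonly cited formulation
# (annualized loss avoided versus what the control costs). The figures
# below are invented for illustration only.

ale_before = 400_000       # annualized loss expectancy with no control ($)
mitigation_ratio = 0.75    # fraction of that loss the control is expected to prevent
annual_cost = 120_000      # yearly cost of the control ($)

loss_avoided = ale_before * mitigation_ratio
rosi = (loss_avoided - annual_cost) / annual_cost

print(f"Expected loss avoided: ${loss_avoided:,.0f}")
print(f"ROSI: {rosi:.0%}")   # 150% on these made-up numbers

The arithmetic is trivial; as the article stresses, the hard part is defending the loss-expectancy and mitigation figures that go into it.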

TCO (Total Cost of Ownership)

An alternative to ROI is to figure the total cost of ownership (TCO) of a security investment. While the purchase cost or ongoing contract costs will be clear, figuring out less obvious spending is harder. For Tyminski, TCO helped justify buying a new intrusion prevention system. Bell will measure the time system administrators need to spend with the product, how long it takes to install or migrate to a software package, what the product itself costs (both up front and for maintenance or support), and how much time the help desk will spend doing hand-holding. Marc Shapiro, senior vice president of Group 4 Securicor, the parent company of Wackenhut, says the firm is seeing more CSOs look for metrics, primarily TCO. Ideally, he likes to contrast those with potential losses, but even in the physical security world, annualized loss estimates “are difficult to get,” he says.

EVA (Economic Value Added)

The best-known version of EVA was developed and trademarked by Stern Stewart, and it offers a way to measure the financial performance of business units. To use EVA in a practical way, take the numbers that feed measures such as total cost of ownership, ROI, and annualized loss expectancy, and compare them with actual costs, including what it would cost to implement and support the investment.
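For readers who want the arithmetic, here is EVA in its textbook form (after-tax operating profit minus a charge for the capital employed), with invented figures; deriving those inputs for a security program is exactly the hard part the article describes.

# EVA in its textbook form: after-tax operating profit minus a charge for
# the capital tied up in the unit. Figures are invented for illustration.

nopat = 2_500_000            # net operating profit after tax ($)
invested_capital = 18_000_000
wacc = 0.09                  # weighted average cost of capital

capital_charge = invested_capital * wacc
eva = nopat - capital_charge

print(f"Capital charge: ${capital_charge:,.0f}")
print(f"EVA: ${eva:,.0f}")   # positive EVA means the unit earned more than its capital cost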

http://www.csoonline.com/article/394963/Security_and_Business_Financial_Basics


The botnet business

Posted on May 14, 2008 (updated December 30, 2021) by admini

First of all, we need to understand what a botnet or zombie network is. A botnet is a network of computers made up of machines infected with a malicious backdoor program. The backdoor enables cybercriminals to remotely control the infected computers (which may mean controlling an individual machine, some of the computers making up the network or the entire network). Malicious backdoor programs that are specifically designed for use in creating botnets are called bots.

Botnets are a powerful cyber weapon and an effective tool for making money illegally. The owner of a botnet can control the computers which form the network from anywhere in the world — from another city, country or even another continent. Importantly, the Internet is structured in such a way that a botnet can be controlled anonymously.

When bots are controlled directly, the cybercriminal establishes a connection with an infected computer and manages it by using commands built into the bot program. In the case of indirect control, the bot connects to the control center or other machines on the network, sends a request and then performs the command which is returned.

Botnets can be used by cybercriminals to conduct a wide range of criminal activity, from sending spam to attacking government networks. It should be noted that spam is not always sent by botnet owners: botnets are often rented by spammers.

The second most popular method of making money via botnets is to use tens or even hundreds of thousands of computers to conduct DDoS (Distributed Denial of Service) attacks. This involves sending a stream of false requests from bot-infected machines to the web server under attack.

Botnets help increase the haul of passwords (passwords to email and ICQ accounts, FTP resources, web services etc.) and other confidential user data by a factor of a thousand.

A botnet can also be used to infect the computers within it with other malicious programs (such as viruses or worms) and to install additional bots on them.

One typical bot command is “flood”: start creating a stream of false requests to a specific Internet server in order to make it fail or to overload channels in a specific segment of the Internet.

Types of botnet

Today’s botnet classification is relatively simple, and is based on botnet architecture and the network protocols used to control the bots. In practice, building decentralized botnets is not an easy task, since each newly infected computer needs to be provided with a list of bots to which it will connect on the zombie network.

Classification of botnets according to network protocols

For a botnet owner to be able to send commands to a bot, it is essential that a network connection be established between the zombie machine and the computer transmitting commands to it.

NetBus and BackOrifice2000 were among the first backdoor programs to include a complete set of functions that made it possible to remotely administer infected computers, enabling cybercriminals to perform file operations on remote machines, launch new programs, make screenshots, open or close CD-ROM drives, etc.

A malicious user then came up with the idea that computers infected with backdoors should establish connections themselves and that they should always be visible online (on the condition that the machine is switched on and working). This user must almost certainly have been a hacker, because new-generation bots employed a communication channel traditionally used by hackers — IRC (Internet Relay Chat). It is also likely that the development of new bots was made easier by the fact that bots working in the IRC system were open source (even though these bots were not designed for remote administration purposes but to respond to user requests such as questions about the weather or when another user had last appeared in chat). When infecting a computer, the new bots connected to IRC servers on a predefined IRC channel as visitors and waited for messages from the botnet owner. The owner could come online at any time, view the list of bots, send commands to all infected computers at once or send a private message to one infected machine. This was the original mechanism for implementing a centralized botnet, which was later christened C&C (Command & Control Center).

Developing such bots was not difficult because the IRC protocol has simple syntax. A specialized client program is not required to use an IRC server — a universal network client, such as Netcat or Telnet, can be used.

Information about the new IRC botnets spread rapidly, and rival hackers soon began hijacking them: by seizing control of the network and redirecting bots to other, password-protected IRC channels, an attacker gained full control over somebody else’s network of infected machines.

First, hackers developed tools for remotely controlling servers based on such popular script engines as Perl and PHP or, more rarely, ASP, JSP and a few others. Then somebody developed a method by which a computer on a local area network could connect to a server on the Internet; this made it possible to control the computer from anywhere in the world. Descriptions of the method for remotely controlling computers on local area networks which bypassed such protection as proxy servers and NAT were published online and it soon became popular in certain circles.

The development of semi-legitimate remote administration tools that could be used to evade protection on machines in local area networks and to gain remote access to such computers paved the way for web-oriented botnets. It is difficult to register a large number of accounts automatically as systems which protect against automated registrations are constantly modified. It turned out that botnets with classic architecture (i.e. a large number of bots with one command and control center) are very vulnerable, since they depend on a critical node — the command and control center.

In a decentralized botnet, by contrast, all that the zombie network’s owner needs to do is send a command to one of the computers on the network, and the bots will spread the command to the other computers in the botnet automatically.

P2P botnets

The Storm Botnet

In 2007, the attention of security researchers was attracted by a P2P botnet created using a malicious program known as the Storm Worm. The authors of the Storm Worm spread their creation so rapidly that it seemed as though they had set up a conveyor belt to create new versions of the malicious program. From January 2007 onwards, we detected between three and five new Storm Worm (Kaspersky Lab classifies it as Email-Worm.Win32.Zhelatin) variants a day.

Clearly, the bot is being developed and distributed by professionals, and both the zombie network architecture and its defense are well-designed.

Mayday

Mayday is another interesting botnet; technically it differs somewhat from its forerunners. Network size is not the only criterion by which Mayday is inferior to its ‘big brother’ Storm: the Mayday botnet uses a non-encrypted network communication protocol, the malicious code has not been tweaked to hinder analysis by antivirus software and, most importantly, new bot variants are not released with anything approaching the frequency we saw with new variants of the Storm Worm. Backdoor.Win32.Mayday was first detected by Kaspersky Lab in late November 2007, and since then just over 20 different variants of the malicious program have made it into our collection. Most users are familiar with ICMP (Internet Control Message Protocol) because it is used by the PING utility to check whether a network host is accessible; this protocol figures in Mayday’s communication with its command and control center, whereas the command and control centers of web-oriented botnets use a mechanism known as CGI (Common Gateway Interface).
Kaspersky Lab did not detect any new variants of the Mayday bot in spring 2008. Perhaps the malicious program’s authors have taken a timeout and the Mayday botnet will resurface in the near future.

The botnet business

The answer to why botnets keep evolving, and why they pose an increasingly serious threat, lies in the underground market that has sprung up around them. Today, cybercriminals need neither specialized knowledge nor large amounts of money to get access to a botnet. The underground botnet industry provides everyone who wants to use a botnet with everything they need, including software, ready-to-use zombie networks and anonymous hosting services, at low prices.

Let’s take a look at the ‘dark side’ of the Internet and see how the botnet industry works to benefit zombie network owners. The first thing needed to create a botnet is a bot, i.e. a program that can remotely perform certain actions on a user’s computer without the user’s knowledge. Software for creating botnets can be easily purchased on the Internet simply by finding an appropriate advertisement and contacting the advertiser. Bot prices vary from $5 to $1000, depending on how widespread the bot is, whether it is detected by antivirus products, what commands it supports, and so on.

A simple web-oriented botnet requires a hosting site where a command and control center can be located. Such sites are readily available, and come complete with support and anonymous access to the server (providers of anonymous hosting services usually guarantee that log files will not be accessible to anybody, including law enforcement agencies). Since stealing botnets is a common practice, most buyers prefer to replace both the malicious programs and the command and control centers with their own, thereby gaining guaranteed control over the botnet. This ‘reloading’ of botnets also helps protect them and preserve anonymity, since IT security experts may already be aware of the ‘old’ C&C and the ‘old’ bot. ExploitPacks, sold on the same market, infect the systems of users who visit a malicious web page by exploiting vulnerabilities in browsers or browser plugins. Sadly, these tools are so accessible that even adolescents can easily find them, and some even try to make money by reselling them.

Interestingly, ExploitPacks were originally developed by Russian hackers, but they later found an audience in other countries as well. These malicious programs have been localized (showing that they were commercially successful on the black market) and are now actively used in China, among other places. Developers of systems such as C&C software or ExploitPacks realize that their customers need not have any specialized knowledge, and they build user-friendly installation and configuration mechanisms into their products in order to make them more popular and increase demand. For example, installation of a command and control center usually involves copying files onto a web server and using the browser to launch an install.php script.

It is well known in the cybercriminal world that sooner or later antivirus products will start detecting any bot program. When this happens, the infected machines on which an antivirus product is installed are lost to the cybercriminals, and the rate of new infections drops significantly. Botnet owners use a number of methods to retain control of their networks, the most effective of which is protecting malicious programs from detection by repeatedly reprocessing (repacking) the malicious code. The ability to gain access to a network of infected computers is determined by the amount of money cybercriminals have at their disposal rather than by whether they have specialized knowledge. Such botnets can be used by governments or individuals to exert political pressure in tense situations.

In addition, anonymous control of infected machines that does not depend on their geographic location could be used to provoke cyber conflicts. Think of ten friends or acquaintances who have computers — out of the ten, one of them is likely to own a machine that is part of a zombie network.

http://www.viruslist.com/en/analysis?pubid=204792003


Tech Insight: Incident Response

Posted on January 18, 2008 (updated December 30, 2021) by admini

Incident response (IR) at many IT shops has traditionally been accomplished by cobbling together tools from various sources, often glued together with scripts that automate the collection of data from the suspect system. All manual incident response is slow response, says Kevin Mandia, president and CEO of Mandiant. A key driver for organizations dealing with incidents, especially those in the financial sector, Mandia says, is speed and minimizing exposure: the IR team must be able to quickly grab information about the incident, determine what’s happening, and respond appropriately to minimize collateral damage. And as industry regulations and legislation now require disclosure of data breaches, it’s increasingly important to handle incidents and internal investigations as quickly as possible.

Guidance Software, thanks to its success as a forensic software company, has been the major player in the enterprise incident response (IR) market for several years. Its Encase Enterprise product integrates IR and traditional forensic capabilities into one interface that’s familiar to users of the company’s standalone Encase Forensic product.

There are network event-focused tools arriving as well: Startup Packet Analytics, for instance, on Tuesday will emerge from stealth mode and roll out its new Net/FSE Network Forensic Search Engine software, which collects and organizes Cisco NetFlow and syslog log data into a searchable format, helping analysts to investigate breaches as soon as they occur.

Key features to consider in enterprise IR tools are the breadth of operating system support, what information can be collected, and whether the tool will complement current internal processes and tools. The ability to collect volatile data such as open ports, running processes, and the contents of memory is one key thing to look for when searching for an IR solution. If you conduct small internal investigations and computer forensics, most IR solutions can collect information in a way that can be easily analyzed by existing forensic products, or within the IR solution itself.
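As a small illustration of the kind of volatile data such tools gather, the sketch below snapshots running processes, listening ports, and logged-in users. It assumes the third-party psutil package and is only a toy example, not a stand-in for any of the products mentioned here.

# Minimal first-responder snapshot of volatile state (running processes,
# listening ports, logged-in users). Assumes the third-party psutil package
# (pip install psutil); listing network connections may require elevated
# privileges on some systems. This only illustrates the kind of data
# enterprise IR tools collect, it is not a substitute for them.
import json
import time

import psutil

def volatile_snapshot():
    return {
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "boot_time": time.strftime("%Y-%m-%dT%H:%M:%S",
                                   time.localtime(psutil.boot_time())),
        "logged_in_users": [u.name for u in psutil.users()],
        "processes": [p.info for p in psutil.process_iter(["pid", "name", "username"])],
        "listening_ports": [
            {"laddr": f"{c.laddr.ip}:{c.laddr.port}", "pid": c.pid}
            for c in psutil.net_connections(kind="inet")
            if c.status == psutil.CONN_LISTEN
        ],
    }

if __name__ == "__main__":
    # Write the snapshot out immediately; volatile data changes under you.
    print(json.dumps(volatile_snapshot(), indent=2))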

Chet Hosmer, senior vice president and chief scientist at WetStone Technologies, says that is one of the key features of WetStone’s LiveWire Investigator: quickly and accurately capturing volatile information, and performing acquisition in such a way that the results can be analyzed within its product or plugged into other vendors’ tools.

Brian Karney, chief operating officer for AccessData, says internal investigations are a primary driver for companies researching, or that already have purchased, enterprise IR tools.

http://www.darkreading.com/document.asp?doc_id=143629&WT.svl=news2_1


After a Data Breach

Posted on October 30, 2007 (updated December 30, 2021) by admini

Bananas.com was caught off guard last year. The musical instrument sales site suffered a data breach that was followed swiftly by a double whammy of consequences. Because its own resources were limited, Bananas referred victims to large credit-reporting agencies to monitor for subsequent financial damage from the breach. Despite its efforts, Bananas apparently failed to meet all the various state notification requirements and was subsequently slammed with fines and fees by major credit companies. The Bananas experience provides a hint of the turmoil a company can face as it tries to cope with disclosure requirements in the wake of a data breach.

With no imminent legislative relief in sight, corporations sometimes resort to blanketing customers with notifications after a breach — lobbing disclosures even in those states that don’t require them, simply to cover all bases. But this practice can have “unintended detrimental consequences,” says Robert Scott, managing partner at the Dallas office of Scott & Scott LLP, a law and IT services firm.

Studies have shown that most customers would take their business elsewhere if they received two or more security breach notices, says Scott. “When faced with a security incident, businesses should carefully determine who has been impacted, review their breach notification laws in the relevant states, and devise a breach notification strategy that satisfies the legal obligations and properly notifies affected consumers,” he says. Others are stepping up encryption efforts, since many states don’t force companies to disclose security incidents if the compromised data was encrypted.

In large companies, disclosure activity often involves multiple jurisdictions, such as the offices of the chief auditor, the chief compliance officer, the chief privacy officer and the chief technology officer or the CIO, says Joseph Rosenbaum, a partner at New York law firm Reed Smith LLP.

“Where responsibilities are partitioned across a diverse set of functions, each office may have the ability to provide greater focus on individual issues, but the challenge of coordination across multiple disciplines is more difficult,” Rosenbaum notes. Moreover, it takes corporate vigilance to keep pace with so many differences in state disclosure laws — variations that start with notification triggers. “For some states, any breach that compromises the security or confidentiality of covered personal information triggers the obligation to notify the affected individuals,” notes Thomas Smedinghoff, a partner at Chicago law firm Wildman, Harrold.

For example, although one state might allow exemptions for compromises of encrypted data, “another state without such an exception would require a notice, even though the data was unreadable,” says Geoff Gray, a privacy and data security consultant at the Cyber Security Industry Alliance in Arlington, Va.

And as Bananas.com learned, the high cost of notification compliance doesn’t stop with the resources it takes to coordinate a response and alert customers. ChoicePoint, for its part, chose to go beyond what the law required after its own breach: “We expanded upon legislation that only existed at the time in California and opted to make nationwide notification of potentially affected consumers, without any state or federal law requiring us to do so,” says Christopher Cwalina, ChoicePoint’s assistant general counsel and vice president for compliance.

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=304931&source=NLT_AM&nlid=1


Bringing Security into the Development Process

Posted on October 11, 2007 (updated December 30, 2021) by admini

“The risks that have been prevalent throughout the years have been mostly risks of Trojans being implanted, allowing individuals to come in and steal information or commit fraud,” Carpenito said.

With this in mind, vendors such as Gamma Enterprise Technologies and Fortify Software are looking to improve security in the development phase.

Gamma, based in Woodland Hills, Calif., offers a data obfuscation tool called InfoShuttle Data Security to protect data in SAP development and test environments. The tool accesses the InfoShuttle Content Library, a repository of SAP objects and relationships, to automatically detect all related fields deep in SAP’s data structures for identifying and masking confidential data. In addition, it disguises data according to different rules, such as shuffling existing key fields and replacing data with unique generated numbers while maintaining consistency across multiple data tables, Gamma officials said. “The development environment by its very nature is an open one with access granted to a wide range of in-house staff and often to outside contractors,” said Suzanne Swanson, executive vice president of Gamma. “Enterprises really have to segment them off from the main network as a minimum, and make sure only strongly authenticated remote access is supported.”
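To illustrate the two masking rules described above in generic terms (this is not Gamma’s InfoShuttle, just a hypothetical sketch), the snippet below shuffles one sensitive column and replaces key fields with generated surrogate numbers that stay consistent across tables.

# Generic sketch of the two masking rules described above: shuffle existing
# key values, and replace sensitive values with generated surrogates that
# stay consistent across tables. An illustration of the technique, not
# Gamma's InfoShuttle product; table and field names are made up.
import random

def shuffle_column(rows, column):
    """Shuffle one column's values across rows, keeping the value set intact."""
    values = [row[column] for row in rows]
    random.shuffle(values)
    for row, value in zip(rows, values):
        row[column] = value

def make_surrogate_mapper(start=100000):
    """Map each original value to the same generated number every time it appears."""
    mapping = {}
    counter = [start]
    def mapper(value):
        if value not in mapping:
            mapping[value] = counter[0]
            counter[0] += 1
        return mapping[value]
    return mapper

customers = [{"cust_id": "C001", "ssn": "123-45-6789"},
             {"cust_id": "C002", "ssn": "987-65-4321"}]
orders    = [{"cust_id": "C001", "amount": 40},
             {"cust_id": "C002", "amount": 75}]

mask_id = make_surrogate_mapper()
for table in (customers, orders):          # same surrogate in both tables
    for row in table:
        row["cust_id"] = mask_id(row["cust_id"])
shuffle_column(customers, "ssn")           # SSNs stay real-looking but detached

print(customers)
print(orders)

Because the surrogate mapper is reused across tables, referential integrity between customers and orders survives the masking, which is the consistency property the article describes.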

Security researchers at Fortify Software reported in their Oct. 9 white paper, “Attacking the Build through Cross-Build Injection,” a class of security vulnerabilities they are calling cross-build injection.

While external dependencies and open-source components do not necessarily represent an unacceptable security risk, Fortify’s researchers demonstrate that they deserve proper vetting to ensure they do not compromise the security of applications that make use of them.

“When software that depends on external components is built, an attacker may either target the server that hosts the open-source component or the DNS server that the build system uses to resolve the name of the remote server,” Jacob West, security research group manager at Fortify, said in an interview with eWEEK.
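One common form of the vetting Fortify recommends is to pin and verify the checksum of any externally fetched component before the build uses it. The sketch below shows the idea in Python; the URL, file name, and hash are placeholders, not real artifacts.

# Minimal sketch of one mitigation for cross-build injection: fetch an
# external component over the network only if its SHA-256 matches a hash
# pinned in source control. The URL and hash below are placeholders.
import hashlib
import urllib.request

PINNED = {
    # filename: expected SHA-256 (placeholder value)
    "somelib-1.2.3.jar": "0000000000000000000000000000000000000000000000000000000000000000",
}

def fetch_verified(url, filename):
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != PINNED[filename]:
        raise RuntimeError(f"{filename}: checksum mismatch, refusing to use it")
    with open(filename, "wb") as fh:
        fh.write(data)
    return filename

# Example (placeholder URL, will fail until a real artifact and hash are pinned):
# fetch_verified("https://example.org/repo/somelib-1.2.3.jar", "somelib-1.2.3.jar")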

http://www.eweek.com/article2/0,1759,2194543,00.asp

