CyberSecurity Institute

Security News Curated from across the world

Category: Uncategorized

5 tips for cybersecurity-training your employees

Posted on January 21, 2010 (updated December 30, 2021) by admini

When Lauer arrived at the agency, he had a list of more than 20 noncompliance items from Federal Information Security Management Act audits.

Now when users log on to the MCC network, they are greeted by a Tip of the Day awareness training application, which asks a question about IT security. Besides giving managers an easy way to assess the agency’s training program, the daily quizzes have also made employees more mindful of security.

“We’ve had a tremendous reduction in viruses,” Lauer said. “Instead of clicking on things, [users] call the help desk. They never used to do that before.”

But not every agency can report such success. Indeed, experts say the goals of user training efforts are still a long way from being realized. “There is a gap, and the gap is costly because it undermines all the technology being thrown at security problems,” said Keith Rhodes, senior vice president and chief technology officer at QinetiQ North America’s Mission Solutions Group. “No approach to training is infallible because human beings are fallible, and of course, human fallibility is what training tries to counter,” Rhodes said.

Four out of five federal IT managers said they provide ongoing classes on security policies and procedures. But even then, almost half had seen employees post passwords in public places, violating one of the most fundamental security proscriptions. The survey highlights one of the hardest tasks in IT security: changing user behavior. For instance, firewalls won’t prevent an employee from stowing passwords under a mouse pad or engaging in other careless practices.

Security managers and industry consultants say there are a few basic techniques for evaluating the effectiveness of IT security training and improving the odds that the lessons will sink in. At MCC, new employees receive IT awareness training as part of their orientation, and the security tip of the day provides ongoing reinforcement. MCC officials keep tabs on employees’ security awareness by tracking responses to those daily quizzes via a monthly performance report.
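
To make the idea concrete, here is a minimal sketch (in Python) of how responses to a daily security quiz could be rolled up into a per-user monthly awareness score. The data model and names are illustrative assumptions, not a description of MCC's actual application.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class QuizResponse:
    user: str
    day: date
    correct: bool

def monthly_awareness_report(responses):
    """Aggregate daily tip-of-the-day quiz results into per-user monthly scores."""
    buckets = defaultdict(lambda: [0, 0])  # (year, month, user) -> [correct, total]
    for r in responses:
        key = (r.day.year, r.day.month, r.user)
        buckets[key][1] += 1
        if r.correct:
            buckets[key][0] += 1
    return {key: correct / total for key, (correct, total) in buckets.items()}

# Example: two users answering the daily quiz
sample = [
    QuizResponse("alice", date(2010, 1, 4), True),
    QuizResponse("alice", date(2010, 1, 5), False),
    QuizResponse("bob", date(2010, 1, 4), True),
]
print(monthly_awareness_report(sample))
```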

Organizations with multiple locations face a tough challenge when it comes to developing training programs and measuring their success. Colorado, for example, is 18 months into a four-year initiative that will meld the IT operations of 16 executive branch agencies under the statewide Office of IT. “To get metrics to prove that end-user security is working, you’ve got to be in a consolidated environment,” said Seth Kulakow, Colorado’s chief information security officer. Consolidation will provide the consistency required to gather the correct metrics, he added.

Barr recommends that agencies use internal IT security employees to conduct quarterly vulnerability assessments and external experts for annual vulnerability assessments.

Elsewhere, Colorado’s Kulakow has recommended making an employee’s adherence to security policy part of his or her performance evaluation.

Content filtering and data loss prevention are among the products agencies can use to counteract human nature, said Keshun Morgan, a networking and security specialist at CDW-G.

Tip no. 1: Make employee testing simple and routine
Tip no. 2: Check what they do, not just what they know
Tip no. 3: Put security in personal terms
Tip no. 4: Invoke consequences for misbehavior
Tip no. 5: Always remember the limits of training

http://fcw.com/articles/2010/01/25/feat-cybersecurity-training-a-must.aspx

Supply Chain Security Threats: 5 Game-Changing Forces

Posted on December 16, 2009 (updated December 30, 2021) by admini

No. 1 Game-Changing Force: ‘Black Swan’ Events

As Nassim Nicholas Taleb explained in his 2007 book of the same name, the term “black swan” refers to an event that is high-impact, hard to predict and rare.
Black swan events need not be negative (9/11, of course, was) and can present times of great opportunity, but CSOs rightfully spend their time worrying about the negative kind. When it comes to the supply chain, black swan events can include everything from disastrous weather to global pandemic to terrorist attacks.

The problem is, if you prepare for the worry du jour, you may leave yourself exposed on other fronts. Warned that a large-scale outbreak of Asian bird flu would put supply chains at risk, global businesses braced for the worst. Executives discussed how the supply chain might be affected if the flu broke out in China. Their plans rested on transporting and storing materials in other places around the world. Then, early this year, H1N1 flu broke out in Mexico and spread quickly to unexpected regions like Australia.

“Companies had to immediately reassess their plans because they were based on specific scenarios,” says Adam Sager, senior manager of business continuity consulting at Control Risks, a security consulting firm in Washington. “Companies realized they needed to better prepare for unexpected events and increase their knowledge of how their organizations could be impacted. If something is emerging on a global basis, they need to act before it affects their supply chain,” says Sager.

When a crisis hits—no matter where on the globe—you need to be able to understand and assess the situation using firsthand country- and location-specific information, says Sager. And you need bidirectional communication between crisis managers and the locale where the event is occurring. Sager notes that companies are discovering gaps between their crisis plans and their operations. “They had security management and crisis management plans in place, but the missing link was integrating them with the business so people around the world could understand management’s position regarding critical things such as uptime, issue resolution and who’s responsible,” he says. This type of information is often not conveyed to the field in advance, a crucial error. Management needs to empower local decision-makers in advance to take action quickly to mitigate damage if certain conditions are met.

The plans have to address not just key supply chain nodes and specific scenarios that could occur, but also emerging security vulnerabilities. “That is a different mind-set and way of planning,” Sager says. “The security department has to come together with the operational/financial side of the business,” looking at all aspects of the supply chain, including where the different components are located and alternative sourcing arrangements. Sager puts his clients through tabletop testing, in which executives sit in a conference room and go through a scenario point by point with the key decision-makers, reviewing how they would respond.

Marc Siegel, commissioner for the ASIS International Global Standards Initiative, is leading the charge to develop an ISO standard for supply chain resilience. ASIS has already published SPC.1, its first organizational resilience standard, and expects the supply chain standard to be ready by the end of the year. “We think standards are the answer for dealing with [black swans],” Siegel says. “Companies have to develop a comprehensive [supply chain resilience] strategy because their resources are limited… This allows you to look at the full picture, rather than just separate out the different things.”

Organizations need to approach risk from a holistic standpoint, Siegel adds. “The problem with the risk du jour is that the likelihood of it happening varies so greatly between organizations that it can divert your attention away from doing a comprehensive risk assessment.” In short, it can make you take your eye off the ball.

No. 2 Game-Changing Force: The Rise of Malware

Information security matters also weigh on CSOs’ minds, though they are not as visibly related to the supply chain as physical security is. An organization (and therefore its supply chain) can be brought low by an attack on its information network as surely as it can be hurt by an attack on its cargo. Many CSOs say they are worried about botnets; two of the most pressing threats related to botnets are spam/phishing attacks on employees and the possibility of a resurgence in the denial-of-service (DoS) attacks that first appeared 10 or more years ago.

Ed Amoroso, CISO of AT&T, blames rampant technological complexity for the rise in malware. “The primary root cause for almost everything we deal with—commercial customers and everything—is complexity. The computers and networks that people set up and use have become way too complicated,” says Amoroso.

“DoS used to be about large-volume traffic hitting your network,” says Lee, an officer for the National Incident Response Team and assistant vice president at the Federal Reserve Bank of New York. Rena Mears, a partner in security and privacy services for Deloitte & Touche, believes the malware supply chain is itself approaching maturity. Lee, for one, does not believe that network service providers can adequately protect against the threats posed by new-breed malware. Many CSOs expect the associated threat pool to continue to widen.

Although the economy is forecast to improve slowly in the coming year or two, many experts expect the reshaped landscape will not necessarily signal a return to prosperity for all, or even most, of society.

This is certainly true in the food/beverage/agribusiness industry, given the obvious importance of keeping the food supply safe from contamination, whether malicious or accidental.

http://www.csoonline.com/article/510943/Supply_Chain_Security_Threats_5_Game_Changing_Forces

Industrialisation Of Hacking Will Dominate The Next Decade

Posted on December 8, 2009 (updated December 30, 2021) by admini

· A move from application to data security as cyber-criminals look for new ways to bypass existing security measures and focus on obtaining valuable information.
· Increasing attacks through social network sites where vulnerable and less technically savvy populations are susceptible to phishing attacks and malware infection.
· An increase in credential theft/grabbing attacks. As the face value of individual credit card records and personal identity records decreases (due to massive data breaches), attackers are looking at more profitable targets. Obtaining application credentials presents an upsell opportunity, because credentials provide greater immediate value to the stolen-data consumers further up the food chain.

· A move from reactive to proactive security, as organisations move from sitting back and waiting to be breached to actively seeking holes and plugging them, as well as trying to anticipate attacks before they materialise.

Application owners need to get their act together and tackle these trends head on. Organisations serious about protecting data will need to address not only the application level but also the source of the data itself. This will mean introducing new technologies, including database firewalls, file activity monitoring, and the next generation of DLP products. These tools should be combined with other technologies, such as web application firewalls and classic DLP solutions, to allow organisations to track dataflow across the enterprise from source to sink.
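
As a rough illustration of the kind of policy check a database firewall or activity-monitoring tool might apply, the sketch below flags SQL statements that fall outside an application's expected query profile. The query patterns and table names are hypothetical assumptions, not any vendor's rule set.

```python
import re

# Hypothetical allowlist of query shapes the application is expected to issue.
EXPECTED_QUERIES = [
    re.compile(r"^SELECT [\w, ]+ FROM orders WHERE customer_id = \?$", re.IGNORECASE),
    re.compile(r"^INSERT INTO audit_log \(.+\) VALUES \(.+\)$", re.IGNORECASE),
]

def review_statement(sql: str) -> str:
    """Return 'allow' if the statement matches a known profile, else 'alert'."""
    normalized = " ".join(sql.split())
    if any(p.match(normalized) for p in EXPECTED_QUERIES):
        return "allow"
    return "alert"

print(review_statement("SELECT id, total FROM orders WHERE customer_id = ?"))  # allow
print(review_statement("SELECT * FROM credit_cards"))                          # alert
```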

He sees the automation of hacking as a major issue, and technical measures will be needed to combat the trend.

Organisations must look to integrate their protection tools with proactive security measures. Admittedly, such measures are not readily available today, but the security community is developing solutions that should become widely available over the next few years.

The next decade must see the IT security industry rise up and stand shoulder to shoulder if it is to win the fight against cyber-criminals.

· Botnet growers / cultivators whose sole concern is maintaining and increasing botnet communities
· Attackers who purchase botnets for attacks aimed at extracting sensitive information (or other more specialized tasks)
· Cyber criminals who acquire sensitive information for the sole purpose of committing fraudulent transactions

As with any industrialisation process, automation is the key factor for success.

Indeed we see more and more automated tools being used at all stages of the hacking process.

Proactive search for potential victims relies today on search engine bots rather than random scanning of the network.

Massive attack campaigns rely on zombies sending a predefined set of attack vectors to a list of designated victims.

Attack coordination is done through servers that host a list of commands and targets.

SQL injection attacks, “Remote File Include” and other application-level attacks, once considered cutting-edge techniques applied manually by savvy hackers, are now bundled into software tools available for download and use by the new breed of industrial hackers.

Search engines (like Google) are becoming an increasingly vital piece of every attack campaign, from the search for potential victims to the promotion of infected pages and even the launch of the attack vectors themselves.

In the last few days, Imperva tracked and analysed a compromise that affected hundreds of servers, injecting malicious code into web pages. The pages were cross-referenced with keywords that scored highly in Google’s search engine, generating traffic and thus creating drive-by attacks.

The scale of this attack, and others like it, is enormous and would not be achievable without total automation at all stages of the process.

Organisations must realize that this growing trend leaves no web application out of reach for hackers.

Attack campaigns are constantly launched not only against high-profile applications but against any available target.

Protecting web applications with application-level security solutions will become a must for large and small organisations alike.

End users who want to protect their own personal data and avoid becoming part of a botnet must learn to rely on automatic OS updates and anti-malware software.

Social networking sites such as Facebook, Twitter and LinkedIn, which previously attracted mainly student communities, are fast infiltrating mainstream populations, with practically every man, and his dog, now ‘on Facebook’.

Elderly people and younger children, who did not grow up with an inherent distrust of web content, may find it very difficult to distinguish between genuinely social messages and widespread attack campaigns.

Attackers will also take advantage of the social networking information made accessible by these platforms to create more credible campaigns (e.g. a phishing email that appears to come from your grandchildren).

The capabilities offered by social platforms, and their growing reach into other applications (webmail, online games), allow attackers to launch huge campaigns of a viral nature while at the same time pinpointing specific individuals.

Much as they search Google for potential target applications, attackers will scan social networks (using automated tools) for susceptible individuals, further increasing the effectiveness of their attack campaigns.

An entire set of tools that would allow us to evaluate and express personal trust in this virtual society has yet to be developed and put to use by platform owners and consumers.

Even when considering manually executed fraud, it is evident that having multiple sets of valid credentials for an online trading application makes the job much easier than merely having the personal data of account owners.

Consumers should protect themselves mainly from Trojan and KeyLogger threats by using the latest anti-malware software.

To date, the security concept has been largely reactive: wait for a vulnerability to be disclosed, create a signature (or some other security rule), then cross-reference requests against these attack methods, regardless of their context in time or source.
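
The reactive model described above can be illustrated with a minimal sketch: incoming request parameters are cross-referenced against a static list of attack signatures, with no regard for who sent them or when. The signatures shown are simplified examples, not a production rule set.

```python
# Illustrative signatures only; real rule sets are far larger and vendor-specific.
SIGNATURES = {
    "sql_injection": "' OR '1'='1",
    "remote_file_include": "http://",  # crude: RFI payloads embed a remote URL in a parameter
    "directory_traversal": "../",
}

def match_signatures(request_params: dict) -> list:
    """Reactive check: flag any parameter containing a known attack string,
    with no regard for source or timing (the limitation noted above)."""
    hits = []
    for name, value in request_params.items():
        for sig_name, pattern in SIGNATURES.items():
            if pattern.lower() in str(value).lower():
                hits.append((name, sig_name))
    return hits

print(match_signatures({"page": "../../etc/passwd", "q": "shoes"}))
```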

http://www.businesscomputingworld.co.uk/?p=2017

Choosing SIEM: Security Info and Event Management Dos and Don’ts

Posted on December 2, 2009 (updated December 30, 2021) by admini

1. Security event management (SEM): Analyzes log and event data in real time to provide threat monitoring, event correlation and incident response. Data can be collected from security and network devices, systems and applications.

2. Security information management (SIM): Collects, analyzes and reports on log data (primarily from host systems and applications, but also from network and security devices) to support regulatory compliance initiatives, internal threat management and security policy compliance management.

Traditional SEM vendors have responded by orienting products previously geared toward real-time event alerting and management toward log management functionality. For instance, ArcSight added its Logger appliance and additional deployment options to address compliance. Meanwhile, SIM players such as SenSage and LogLogic are adding real-time capabilities.

Jon Oltsik, an analyst at Enterprise Strategy Group, sees the market differently. The main driver, he says, is the need to keep up with security complexity. “There is an acute awareness that security attacks are more sophisticated and that security at a system level is harder than at the device level,” he says. Compliance is the second most important factor, he says, and the third is the need to replace early SIEM platforms that don’t scale or provide the right level of analytics and reporting capabilities.

Forrester expects consolidation among the 20-plus SIEM vendors in the next 12 to 36 months, as well as more cloud-based SIEM services.

Core Capabilities
According to Gartner, five critical capabilities differentiate SIEM products, whether you use them for SEM, SIM or both:
· Functions that support the cost-effective collection, indexing, storage and analysis of a large amount of information, including log and event data, as well as the ability to search and report on it.
· Reporting capabilities, including predefined reports, ad hoc reports and the use of third-party reporting tools.
· User and resource access reporting.
· Real-time data collection, a security event console, real-time event correlation and analysis, and incident management support.

The need for compliance has encouraged smaller security staffs to adopt SIEM, and these buyers need predefined functions and ease of deployment and support over advanced functionality and extensive customization.

Large volumes of event data will be collected, and a wide scope of analysis reporting will be deployed. This calls for an architecture that supports scalability and deployment flexibility.
Access Monitoring. This capability defines access policies and discovers and reports on exceptions. It enables organizations to move from activity monitoring to exception analysis. This is important for compliance reporting, fraud detection and breach discovery.
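
As a simple illustration of exception analysis, the sketch below checks access events against a hypothetical policy and reports only the events that fall outside it, rather than listing every access.

```python
# Hypothetical policy: which roles may touch which resources.
ACCESS_POLICY = {
    "payroll_db": {"hr_analyst", "payroll_admin"},
    "cardholder_data": {"payments_service"},
}

def access_exceptions(events):
    """Exception analysis: return only events that violate the defined policy."""
    return [
        e for e in events
        if e["role"] not in ACCESS_POLICY.get(e["resource"], set())
    ]

events = [
    {"user": "jsmith", "role": "hr_analyst", "resource": "payroll_db"},
    {"user": "mlee", "role": "developer", "resource": "cardholder_data"},
]
for e in access_exceptions(events):
    print("policy exception:", e)
```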

SIEM DOs and DON’Ts

DO include multiple stakeholders.
When developing requirements, be sure to collect them from the range of groups that may benefit from collected log data. This includes internal auditors, compliance, IT security and IT operations.
There are certainly customers just looking for log management because of a compliance requirement, and they may not have the internal resources to do anything but collect and document logs, Kavanaugh says. “But many buyers realize the capabilities inherent in log management software—the ability to collect, search and run reports—are valuable to security operations.” Once the security group gets involved, he says, they look at including network security devices, routers and other areas of the network environment where they don’t have great insight, as well as the real-time component.

When selecting a SIEM product at Liz Claiborne, Mike Mahoney, manager of IT security and compliance, involved architecture leaders from eight groups, asking them to respond to an in-depth questionnaire regarding what would help them improve their jobs. It ultimately took six months to complete the evaluation. “I wanted this to be a tool they would benefit from beyond log collection,” Mahoney says.
“Ultimately, the point of intersection is log management, but analytics might be done by two different platforms,” Oltsik says. “Whether you need security or compliance, you’re using the same log data.”

Correlation is a key aspect of SIEM systems, says Larry Whiteside, associate director of information security at the Visiting Nurse Service of New York (VNSNY). SIEM systems normalize logs from various systems, which helps you see the most important data you need out of those logs in a readable format. They also help you correlate events that the human eye could never perceive but that correlation rules can detect. “If you use correlation rules, you can run a report, and two events that are 10 minutes apart will be right on top of each other because they’re directly related to each other,” Whiteside says.

He can also look at specific databases on specific servers and see who’s touching them. Or he can get log events to see what applications are talking to other applications and what database tables they’re hitting. For instance, if Server A is talking to Server B, and activity peaks on Sunday night at 10 p.m., he can drill in further to see what desktops are involved.
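
A minimal sketch of the kind of time-window correlation Whiteside describes might look like the following; the ten-minute window and the event fields are assumptions for illustration, not VNSNY's actual rules.

```python
from datetime import datetime, timedelta

def correlate(events, window=timedelta(minutes=10)):
    """Group normalized events that share a (source, destination) pair and occur
    within the given window, so related entries surface together."""
    events = sorted(events, key=lambda e: e["time"])
    groups = []
    for e in events:
        for g in groups:
            if (g[-1]["src"], g[-1]["dst"]) == (e["src"], e["dst"]) \
                    and e["time"] - g[-1]["time"] <= window:
                g.append(e)
                break
        else:
            groups.append([e])
    return [g for g in groups if len(g) > 1]

logs = [
    {"time": datetime(2009, 12, 6, 22, 0), "src": "server-a", "dst": "server-b", "msg": "db query burst"},
    {"time": datetime(2009, 12, 6, 22, 8), "src": "server-a", "dst": "server-b", "msg": "auth failure"},
    {"time": datetime(2009, 12, 6, 23, 30), "src": "desktop-7", "dst": "server-b", "msg": "login"},
]
for group in correlate(logs):
    print([e["msg"] for e in group])  # the two related server-a -> server-b events surface together
```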

While software is the traditional form factor, Kavanaugh says, vendors have increasingly come out with all-in-one appliances, which do the data collection, analysis and correlation and use their own built-in databases to store copies of logs.

There are also many blended offerings, in which a server performs the real-time analysis, correlation and monitoring, and an appliance covers log collection.
Cincera warns that hardware and software account for one-half or less of the total cost of ownership of a SIEM implementation. The rest, he says, is the labor involved with creating, building and deploying the technology. “You can’t just put someone on the console and have them whip up 10 good correlation rules a day,” he says. “They need to understand things like, ‘These events need to be treated in this manner, or with this level of discretion.’ This requires the governance function to specify which events to care about and what actions to take. … There’s a cost to the organization based on that function,” Cincera says.

Another cost is maintenance, which includes keeping rules up to date, group management, permissions, alerting, monitoring and metrics. “You need to manage interfaces to upstream systems, things that feed information to the engine,” Cincera says. “You need to stay constantly involved, making sure connections stay in sync with one another, and that can be a daunting effort.” The work level grows dramatically based on the number of upstream systems you need to feed, he warns. “Every event you choose not to ignore is one on which you must act, even if it’s just to say, ‘noted,’ ” Cincera says. At some point, Cincera says, the rules, alerts and actions you take lose value and should be decommissioned.

Total cost of ownership is something no vendor is good at communicating, he adds. “They don’t want you to think of all those costs.”

http://www.csoonline.com/article/509553/Choosing_SIEM_Security_Info_and_Event_Management_Dos_and_Don_ts

Six Steps Toward Better Database Security Compliance

Posted on October 10, 2009 (updated December 30, 2021) by admini

1. Database Discovery And Risk Assessment Before organizations can start their database compliance efforts, they must first find the databases — and where the regulated data resides in them.
“That’s a big challenge for a lot of folks. They know where their mainframes are, and they know where a lot of their systems are but…they don’t really know which database systems they have on their network,” says Josh Shaul, vice president of product management for Application Security, a database security company.
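
A first pass at discovery can be as crude as probing hosts for well-known database ports, as in the sketch below. The port list and addresses are illustrative assumptions, and a real inventory would also need authenticated scans to map where regulated data actually lives inside each database.

```python
import socket

# Default ports for common database engines (illustrative, not exhaustive).
DB_PORTS = {1433: "MS SQL Server", 1521: "Oracle", 3306: "MySQL", 5432: "PostgreSQL", 50000: "DB2"}

def discover_databases(hosts, timeout=0.5):
    """Probe well-known database ports to build a rough inventory of database
    listeners on the network."""
    found = []
    for host in hosts:
        for port, engine in DB_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append((host, port, engine))
            except OSError:
                pass  # closed, filtered, or unreachable
    return found

print(discover_databases(["10.0.0.5", "10.0.0.6"]))  # example hosts on a hypothetical subnet
```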

2. Vulnerability And Configuration Management Once an inventory has been developed, organizations need to look at the databases themselves.
“Basic configuration and vulnerability assessment of databases is a key starting point for enterprises,” Shaul says.

3. Access Management and Segregation of Duties Figuring out who has access to regulated data, what kind of access they are given, and whether that access is appropriate for their jobs is at the heart of complying with regulatory mandates. “Sometimes it’s as simple as account management, password controls, and removing default accounts,” Laliberte says. Organizations must also constantly review roles and entitlements to prevent toxic combinations of privileges. Take, for example, a payments clerk who gets a promotion to run the accounts payable department. In the new position, that person “owns” the AP system and has the ability to modify and delete checks that have been written.
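
A segregation-of-duties review can be expressed as a check of each user's entitlements against known toxic combinations, as in this sketch; the entitlement names and pairs are hypothetical.

```python
# Hypothetical pairs of entitlement sets that should never be held by one person.
TOXIC_COMBINATIONS = [
    ({"create_payment"}, {"approve_payment"}),
    ({"modify_check", "delete_check"}, {"own_ap_system"}),
]

def sod_violations(user_entitlements):
    """Flag users whose accumulated entitlements include a toxic combination,
    e.g. a promoted payments clerk who kept the old role's privileges."""
    violations = []
    for user, ents in user_entitlements.items():
        for left, right in TOXIC_COMBINATIONS:
            if left <= ents and right <= ents:
                violations.append((user, left | right))
    return violations

users = {
    "pclerk": {"create_payment", "approve_payment"},
    "apmanager": {"own_ap_system", "modify_check", "delete_check"},
}
print(sod_violations(users))
```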

4. Monitoring Risky Behaviors And Users Unfortunately there is a built-in segregation of duties violation in every database — and it’s one you can’t get rid of, Shaul says.
“Databases in general don’t give you the ability to take DBAs’ data access away from them,” Shaul says. “And that’s what auditors are coming in and flagging folks for, saying, ‘First and foremost, you’ve got this easy-to-find segregation of duties violation.’” This exposure is one reason why database activity monitoring is so critical to enterprises seeking to satisfy regulatory requirements. Unfortunately, all too many organizations fail to log, track, or monitor database activity because they worry that such monitoring may affect database performance. DBAs and other database stakeholders should know that today’s third-party monitoring tools aren’t nearly as burdensome to database performance as in years past, experts say.
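
The sketch below illustrates the kind of rule a database activity monitoring tool might apply to privileged accounts: flag DBA reads of regulated tables for independent review. The account names, table names and log format are assumptions for illustration.

```python
# Illustrative audit-log records from a database activity monitoring feed.
audit_log = [
    {"user": "dba_admin", "statement": "SELECT * FROM customers", "table": "customers"},
    {"user": "dba_admin", "statement": "ALTER TABLE customers ADD COLUMN note TEXT", "table": "customers"},
    {"user": "app_svc",   "statement": "SELECT name FROM customers WHERE id = 42", "table": "customers"},
]

PRIVILEGED_ACCOUNTS = {"dba_admin"}
REGULATED_TABLES = {"customers", "cardholder_data"}

def flag_privileged_data_access(log):
    """Surface DBA reads of regulated tables: structural changes are expected DBA
    work, but SELECTs against regulated data warrant review outside the DBA team."""
    return [
        rec for rec in log
        if rec["user"] in PRIVILEGED_ACCOUNTS
        and rec["table"] in REGULATED_TABLES
        and rec["statement"].lstrip().upper().startswith("SELECT")
    ]

for rec in flag_privileged_data_access(audit_log):
    print("review:", rec["user"], rec["statement"])
```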

5. Reporting On Compensating Controls In those instances where organizations have appropriate compensating controls in place, auditors want proof that these controls actually exist, Laliberte says.

6. Following Defense-In-Depth Strategies Finally, it is important to remember to keep a little perspective on the matter of database security and compliance.
“This is really just a piece of what has to be a pretty large security program that’s going to allow you to meet these regulations,” says Mike Rothman, senior vice president of strategy for eIQnetworks, a security information and event management company.

Phil Lieberman, president of Lieberman Software, a password management company, believes unencrypted backup tapes are one of the biggest database risks of all. The data may be secure on the server, but if someone with ill intent gets hold of an unencrypted tape, it will be compromised all the same.

http://www.darkreading.com/story/showArticle.jhtml?articleID=220600156

Five Ways To Meet Compliance In A Virtualized Environment

Posted on September 3, 2009 (updated December 30, 2021) by admini

“It’s a good idea to talk about the intersection between compliance and security…. A lot of compliance regulations are written assuming the systems are physical — and that only certain administrators have rights to physical systems,” says Jon Oltsik, senior analyst at Enterprise Strategy Group.

“What if financial information sits on a virtual system and on a system with other [applications running on it]? If a financial application runs as a VM on a physical system, where do the access controls need to be? How are the regulations going to change to accommodate that?”

And compliance doesn’t always equal security — just take a look at some of the biggest data breaches of late. Virtualization adds another dimension to that problem. “You can have compliance without security and security without compliance,” Oltsik says.

Configure the virtualization platform, both the hypervisor and the administrative layer, with secure settings, eliminate unused components, and keep up-to-date on patches. Virtualization vendors have their own hardening guidelines, as do the Center for Internet Security and the Defense Information Systems Agency, according to RSA and VMware.
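
Hardening checks of this sort boil down to comparing each host's configuration against a baseline and reporting drift, roughly as sketched below. The baseline keys and values are invented for illustration and are not taken from any vendor, CIS or DISA guideline.

```python
# Hypothetical hardening baseline for a hypervisor host.
BASELINE = {
    "ssh_root_login": "disabled",
    "unused_services": [],
    "patch_level": "2009-08",
}

def audit_host(host_config: dict) -> list:
    """Compare a host's reported configuration against the baseline and list drift."""
    findings = []
    for key, expected in BASELINE.items():
        actual = host_config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

host = {"ssh_root_login": "enabled", "unused_services": ["telnet"], "patch_level": "2009-05"}
for finding in audit_host(host):
    print(finding)
```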

“Virtualization infrastructure also includes virtual networks with virtual switches connecting the virtual machines. All of these components, which in previous systems used to be physical devices, are now implemented via software,” state the RSA and VMware best-practices guidelines. Extend your current change and configuration management processes and tools to the virtual environment as well.

Server administrators should have control over virtual servers, and network administrators over virtual networks, and these admins need to be trained in virtualization software to avoid misconfiguring systems. “Careful separation of duties and management of privileges is an important part of mitigating the risk of administrators gaining unauthorized access either maliciously or inadvertently.”

Deploy virtual switches and virtual firewalls to segment virtual networks, and use your physical network controls in the virtual networks as well as change management systems.

Monitor virtual infrastructure logs and correlate those logs across the physical infrastructure, as well, to get a full picture of vulnerabilities and risks.
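
One simple way to get that full picture is to tag and interleave virtual and physical log entries into a single timeline, as in this sketch; the log fields shown are assumptions for illustration.

```python
from datetime import datetime

def merged_timeline(virtual_logs, physical_logs):
    """Interleave virtual-infrastructure and physical-infrastructure log entries
    by timestamp so an analyst sees one chronological picture."""
    tagged = [dict(e, layer="virtual") for e in virtual_logs] + \
             [dict(e, layer="physical") for e in physical_logs]
    return sorted(tagged, key=lambda e: e["time"])

virtual_logs = [
    {"time": datetime(2009, 9, 1, 2, 14), "msg": "vSwitch port group changed"},
]
physical_logs = [
    {"time": datetime(2009, 9, 1, 2, 13), "msg": "core switch VLAN 40 updated"},
]
for entry in merged_timeline(virtual_logs, physical_logs):
    print(entry["time"], entry["layer"], entry["msg"])
```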

http://www.darkreading.com/security/management/showArticle.jhtml;jsessionid=HQVORXCLBU4A3QE1GHRSKHWATMY32JVN?articleID=219501096
