Humans: The Weakest Link In Information Security

By Jeff Schmidt | Published March 11, 2011

All security is a weakest link problem. An impressive 3-ton steel and concrete bank vault door is useless if the back of the vault is made of drywall. A prisoner can look back at a 15-foot electrified barbed-wire fence as he walks through the unlocked gate. And just about every technical countermeasure that brilliant engineers devised to protect vital computer systems and valuable information can be accidentally or intentionally circumvented by human interaction.

The array of technical countermeasures available to protect information and computer systems has certainly expanded dramatically over the last decade. Most corporate IT departments now allocate a significant portion of their budgets to information security. What isn't clear is whether systems are more secure as a result.

The concept of 'security' itself is nebulous and notoriously difficult to measure. One practical, business-oriented approach is to measure incidents and the resulting damages and losses. By that measure, based on the high-profile breaches of the past 18 months, the industry-wide level of 'security' doesn't seem to be improving; arguably it's getting worse. Awash in technical countermeasures, we have to ask: what are we missing? The answer is that the human remains the weakest link in the information security chain.

Usable security still eludes us

Thousands of years of human evolution have given us the "hairs on the back of our necks" that alert us to possible danger. That instinct protected early humans from predators and still protects us when walking an unfamiliar street at night. No such mechanism exists in the online world; well-trained and well-intentioned humans are all too easily tricked into doing something dangerous. Just ask the employee of security company RSA who innocently opened a benign-looking email attachment and unwittingly weakened millions of RSA SecurID authentication tokens - tokens that secure access to high-security systems at banks, utilities, and governments around the world. And we're all familiar with the obscure "certificate warnings" that our Web browsers occasionally grace us with - warnings so indecipherable and un-actionable that they are routinely ignored.

The risks posed by trusted employees must be actively managed

Employers have to trust their workers; there is no reasonable alternative. All too often, however, employers fail to recognize that the risks posed by trusted personnel are highly dynamic and must be actively managed. Often, employers assess employee risk only once - at the time of hire. Unfortunately, an employee with decades of tenure is capable of the unthinkable if he or she is having trouble making next month's mortgage payment. Moreover, as employees' roles change, their access to sensitive information and level of supervision must be re-evaluated to keep risk at an acceptable level. Just last year, for example, a failure to properly vet an employee's security access allowed a low-level HSBC employee to steal data affecting 24,000 of the private bank's clients - 15 percent of its client base.

Perspectives on information are changing

Generations X and Y grew up in the Internet age, where a seemingly infinite volume of information is as close as the nearest browser. Open Source software, Wikipedia, Napster and Google have created an expectation that digital information is readily available and free. Of course, this has created tension with brands and copyright holders facing rampant piracy of commercial software and media. As the Information Age generations make up more and more of the workforce, their perspectives risk devaluing information as a proprietary resource. Problems arise when employees treat data casually - sharing it widely, emailing it socially, and taking valuable information with them when they leave.

Be mindful of the "easier way in"

When a security mechanism presents a standard "hard" way through and an alternative "easier" way through, the bad guys will always target the easy way. There is no better example than airport security screening: while many decry the screening of pilots, soldiers, children and the elderly, the reality is that relaxing requirements in any part of the system creates an "easy" way through that will be exploited. If pilots are expedited through airport security with less screening than the general population, the bad guys will dress up as pilots.

In the cybersecurity world, automated ("self-service") password reset mechanisms are the norm and are a perfect example of this phenomenon. They're used because they are quick, economical and convenient for both the account issuer and the user. We've all used them: click the "I forgot my password" button and you're either sent an email or prompted to answer a few personal questions. Unfortunately, the security of the alternate (reset) mechanism is often weaker than that of the password itself, and so reset mechanisms have become attractive targets. Just ask the numerous Hollywood starlets who have recently had their mobile accounts compromised via this mechanism. Social networking sites have made it easy for the bad guys to guess the answers to common "personal security questions" such as the street you grew up on, your high school mascot, and so on.
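To see why the reset channel is typically the weaker path, consider the size of the guessing space an attacker faces. The short Python sketch below compares a randomly chosen password against a few typical security-question answers; the answer-space sizes are rough, hypothetical order-of-magnitude assumptions chosen only to illustrate the gap, not measured figures.

```python
import math

# Hypothetical, order-of-magnitude answer-space sizes (illustrative assumptions,
# not measured data): a random password vs. typical security-question answers.
GUESSING_SPACES = {
    "10-char random password (62-symbol alphabet)": 62 ** 10,
    "high school mascot (assume ~2,000 common mascots)": 2_000,
    "childhood street name (assume ~50,000 common names)": 50_000,
    "mother's maiden name (assume ~100,000 common surnames)": 100_000,
}

for secret, space in GUESSING_SPACES.items():
    # log2 of the search space approximates the guessing work an attacker faces.
    print(f"{secret}: ~{math.log2(space):.0f} bits")
```

Even under these generous assumptions, the security questions land in the 11-17 bit range while the random password sits near 60 bits - and a public social networking profile often hands the attacker the answer outright, collapsing the effective space to a single guess.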

In any system where humans play an integral role, vulnerabilities rooted in human nature will permeate it. Any realistic security program builds in redundancies and redoubts that address both technical and human vulnerabilities. The best security systems also mitigate the consequences of the admittedly inevitable breach.