EU – No to X-Ray Scanners

For those who don’t know, the European Union has much stronger safety and privacy laws than the US. The EU just announced their new official policy for the deployment of airport scanners. Two key quotes:

It is still for each Member State or airport to decide whether or not to deploy security scanners, but these new rules ensure that where this new technology is used it will be covered by EU wide standards on detection capability as well as strict safeguards to protect health and fundamental rights.

In order not to risk jeopardising citizens’ health and safety, only security scanners which do not use X-ray technology are added to the list of authorised methods for passenger screening at EU airports.

If only TSA would accept that dosing people with X-rays and taking nude pictures of them isn’t actually necessary for security! Hopefully the new EU regulations will spur Congress to pass similar laws that protect the health and privacy of Americans. As Scientific American reports, the TSA is planning on deploying over 1800 scanners in airports across the country. Write your Representative and Senators now to encourage them to follow the EU’s lead in protecting citizens!

Rant: Who watches the watchers?

Computer security can be an arcane subject, especially for the “uninitiated” who don’t know what phrases like “risk mitigation”, “threat profile”, and “single loss expectancy” mean. But a lot of computer security boils down to fundamental ideas about trust and security that we’re used to in the real world. This week at work I was handed a very frustrating example of these fundamentals.

In security jargon, we talk about “controls” – especially “technical controls” vs. “procedural controls”. Let me break that down into plain English. A procedural control basically means “we told someone not to do a bad thing, and we trust that they’ll listen to us.” A technical control means “we don’t have to trust someone, because the system won’t do the bad thing even if the person wants to.” In the security world, technical controls are almost always preferable, since they take an individual’s trustworthiness out of the equation.
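To make the distinction concrete, here’s a minimal sketch in Python (the class and entries are purely illustrative, not from any real system). The procedural control is a sentence in a runbook; the technical control is an API that simply can’t perform the forbidden action:

```python
# Procedural control: policy lives in prose, and the system will
# happily do the "bad thing" if a person ignores the instruction.
# e.g. a runbook note: "Operators must not delete audit log entries."

class AuditLog:
    """Technical control (illustrative sketch): the API has no delete
    operation and validates appends, so operator trust isn't required."""

    def __init__(self):
        self._entries = []

    def append(self, entry: str) -> None:
        if not entry:
            raise ValueError("empty audit entries are not allowed")
        self._entries.append(entry)

    def entries(self) -> tuple:
        # Hand back an immutable snapshot, not the internal list.
        return tuple(self._entries)

log = AuditLog()
log.append("patch applied to web01")
print(log.entries())
```

The point of the design is that deletion isn’t “forbidden” – it’s simply impossible through the interface, which is what makes it a technical rather than procedural control.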

A simple real-life example of these two types of controls is the lock on a door. In some situations – say, college roommates who grew up together – locking doors isn’t necessary because the people involved are trustworthy. In other situations – the exterior door of your apartment – you can’t trust everyone else, so you demand a reasonable lock to secure your living space. And at further extremes, like protecting weapons or biological agents, the people involved may be trustworthy, but the potential damage is so high that strong locks and other controls (guards, video cameras, fences, etc.) are required.

As you can see from the examples, just because the people involved are trustworthy doesn’t mean systems with lax controls are adequate. If the risk of damage is large, prudence demands that we design a system that “watches the watchers” so to speak.

The example from work wasn’t nearly as dangerous as biological agents. But it was all the more frustrating because, just a few days earlier, I had pointed out how easily the operations team could implement better controls on their patching process. Then yesterday it came out that the swing-shift operators had installed software patches on the wrong boxes – an error facilitated by the lack of technical controls and by the operations leader’s attitude that the problem was “reminding the swing shift guys they shouldn’t patch those machines.”
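For the record, the technical control here could be tiny. A sketch of one possibility – the hostnames, exclusion list, and wrapper function are all hypothetical, not the actual tooling from work:

```python
# Hypothetical exclusion list: machines that must never be patched
# by the regular process.
DO_NOT_PATCH = {"db-prod-01", "db-prod-02"}

def patch_host(hostname: str) -> str:
    """Wrapper around the patch process. The check runs every time,
    regardless of who is on shift or what they remember."""
    if hostname in DO_NOT_PATCH:
        raise RuntimeError(f"refusing to patch excluded host {hostname}")
    # ... real patch deployment would happen here ...
    return f"patch queued for {hostname}"

print(patch_host("web-03"))
try:
    patch_host("db-prod-01")
except RuntimeError as err:
    print("blocked:", err)
```

With a gate like this, “reminding the swing shift guys” stops being the control – the script is.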

No, the problem is that you aren’t willing to learn from your mistakes and implement new controls even after you’ve been burned once…

S/MIME Gotcha

I recently re-enabled S/MIME signing in my Outlook client. (S/MIME is a way to place a digital signature on an email message so the recipients can verify the sender.) When I tested by sending mail back and forth to myself through my various clients, I had no problems. However, when I started sending email to other recipients, they all had trouble opening the mail – most with the error message “Your digital ID name could not be found by the underlying security system.”

This error is normally associated with difficulty opening encrypted mail. Since I wasn’t using encryption, I couldn’t fathom why it was happening. Many Google searches and Microsoft Knowledge Base articles later, I still hadn’t found a solution. I finally had an “Ah-hah!” moment and found the problem. So, in the hope that someone will be spared some of my pain, here’s my problem and solution.

I configured Outlook 2007 to use SHA512 as the signature algorithm. Unfortunately, SHA512 is not as widely supported as one might hope – even another Outlook 2007 installation at work couldn’t open the messages. Changing the signature algorithm back to SHA1 let everyone read my email again.

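The root of the incompatibility is simply whether the recipient’s crypto backend implements the digest algorithm the signature uses. As a loose analogy (this is Python’s hashlib, not Outlook’s actual crypto stack), a backend advertises which algorithms it can handle, and anything outside that set is unverifiable:

```python
import hashlib

# A mail client can only verify a signature whose digest algorithm
# its crypto backend implements. hashlib exposes its supported set:
print(sorted(hashlib.algorithms_guaranteed))

# SHA-1 produces a 160-bit digest; SHA-512 produces a 512-bit digest.
for name in ("sha1", "sha512"):
    digest = hashlib.new(name, b"example message")
    print(name, digest.digest_size * 8, "bit digest")
```

A client whose backend predates SHA-512 support has no way to check the signature – hence the failure, however badly the error message describes it.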
The “Your digital ID name could not be found by the underlying security system” error message is grossly misleading in this case! The system should really report something like “The security system does not support the algorithm used to sign this message.” I don’t normally bash Microsoft, but in this case… you dropped the ball, guys! Since SHA1 has started to show some signs of weakness, I’m hopeful that SHA512 will be more widely supported in the future. But until then, keep your S/MIME settings at SHA1 for signing and AES256 for encryption!