Wednesday, November 9, 2011

The Growing Malware Problem

On Friday, I'm lecturing at Dartmouth College to the TISH workgroup (Trustworthy Information Systems for Healthcare) about the growing malware problem we're all facing.

Have you ever seen a Zombie film? If so, you know that to stop Zombies you must shoot them in the head. The only problem is that the steady stream of Zombies never seems to end, and they keep infecting others. Just when you've eradicated every Zombie but one, the infection gets transmitted and the problem returns. You spend your day shooting them, but you never seem to make any progress.

A Zombie in computer science is a computer connected to the Internet that has been compromised by a cracker, a computer virus, or a Trojan horse, and can be used to perform malicious tasks of one sort or another under remote direction.

Starting in March of 2011, the rise in malware on the internet has created millions of zombie computers. Experts estimate that 48% of all computers on the internet are infected. Malware is transmitted via infected photos (Heidi Klum is the most dangerous celebrity on the internet this year), infected PDFs, infected Java files, ActiveX controls that exploit Windows/Internet Explorer vulnerabilities, and numerous other means.

Here's the problem: this new malware is hard to detect (often hiding in hard disk boot sectors), hard to remove (often requiring a complete reinstallation of the operating system), and no longer reliably caught by anti-virus software.

A new virus is released on the internet every 30 seconds. Modern viruses contain self-modifying code. The "signature" approaches used by anti-virus software to rapidly identify known viruses do not work with this new generation of malware.
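To see why signature matching breaks down, here is a minimal Python sketch, with a made-up payload and signature database: a classic scanner flags a sample only when its hash is already known, so a self-modifying variant that re-encodes itself with a fresh key on every copy evades the check.

```python
import hashlib
import os

# Hypothetical "signature database": hashes of known-bad payloads.
PAYLOAD = b"malicious payload bytes"
KNOWN_BAD_HASHES = {hashlib.sha256(PAYLOAD).hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Classic AV check: flag a sample only if its hash is already known."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def polymorphic_variant(body: bytes) -> bytes:
    """Re-encode the body with a fresh random XOR key, as a polymorphic
    engine would: behavior is unchanged, but every copy looks new."""
    key = os.urandom(1)[0]
    return bytes([key]) + bytes(b ^ key for b in body)

print(signature_match(PAYLOAD))                       # True:  the original is caught
print(signature_match(polymorphic_variant(PAYLOAD)))  # False: the mutated copy slips past
```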

Android attacks have increased 400% in the past year.   Even the Apple App Store is not safe.

Apple OS X is not immune. Experts estimate that Macs account for 15% of some recent virus infections.

If attacks are escalating and our existing tools to prevent them do not work, what must we do?

Alas, we must limit inbound and outbound traffic to corporate networks.

BIDMC will pilot increased restrictions in a few departments to determine whether they reduce the amount of malware we detect and eradicate. I'll report on the details over the next few months.

One of these restrictions will be increased web content filtering. I predict that in a few years corporate networks will advance from content filtering to more restrictive "white listing". Instead of blocking selected content categories, they will allow only those websites reputed to be safe (at that moment, anyway). I think it is likely that corporate networks will block personal email, auction sites, and those social networking sites that are vectors for malware.
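The difference between the two models is easy to sketch. In this minimal example (the category and domain lists are hypothetical placeholders), a blacklist permits everything not known to be bad, while a white list denies everything not known to be good:

```python
from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"gambling", "adult"}             # blacklist model
ALLOWED_DOMAINS = {"bidmc.org", "nih.gov", "cms.gov"}  # white-list model

def blacklist_allows(category: str) -> bool:
    """Blacklist: allow everything except known-bad content categories."""
    return category not in BLOCKED_CATEGORIES

def whitelist_allows(url: str) -> bool:
    """White list: deny everything except explicitly trusted domains."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS or any(
        host.endswith("." + d) for d in ALLOWED_DOMAINS)

print(blacklist_allows("news"))                     # True:  unknown sites pass by default
print(whitelist_allows("http://example.com/page"))  # False: unknown sites are blocked by default
```

Deny-by-default shrinks the attack surface, but it shifts the work to maintaining the allow list.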

It's truly tragic that the internet has become such a swamp, especially at a time when we want to encourage the purchase of consumer devices such as tablets and smartphones.

I've said before that security is a cold war. Unfortunately, starting in March, the malware authors launched an assault on us all. We'll need to take urgent action to defend ourselves, and I'll update you on our pilots to share our successful tactics.

9 comments:

Bernz said...

I might have mentioned it in a previous post, but I believe in maintaining two networks: locked down and free. This is what we did in a secure E-Discovery firm that dealt with HIPAA and GLBA data.

First, we used network switches that required all computers to authenticate to the network. Those computers could only talk to a white list of Internet sites, and they couldn't run any program that wasn't on a white list (a sketch of this kind of allow-list check follows these paragraphs). We had network behavior monitors to watch for traffic anomalies.

Second, we set up a wireless network for any "unauthorized" device. This network only had Internet access, so people could bring in their own devices and do personal email, chat, etc., without poisoning the "locked-down" network.

When it came to remote computers, only authorized machines were allowed to send traffic over the VPN. So we wouldn't authorize every home computer, only those with a certain level of patches/security.
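Here is a minimal sketch of the hash-based program white-listing described in the first paragraph, assuming a placeholder allow list; real deployments use dedicated enforcement tools such as AppLocker rather than a script like this:

```python
import hashlib

# Placeholder allow list: SHA-256 digests of approved executables.
ALLOWED_EXECUTABLE_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_execution_allowed(path: str) -> bool:
    """Permit a program only if its digest appears on the allow list."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in ALLOWED_EXECUTABLE_HASHES

# Deny by default: anything not hashed and approved in advance won't run.
```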

While it created a smidgen of inconvenience, the "bring your own device and use it, unhindered, on the open wireless network" policy seemed to be a good tradeoff, and users seemed happy.

Users understood the need for security (user education was a big part of our push) and were compliant.

The only thing that was inconvenient was when people wanted to do work on a device of their choice. We made sure our "locked down" network worked with the three big OSes (Windows, Mac OS X, and Linux), so people didn't seem to mind that much. That was actually the hardest part: finding a solution that worked across all three platforms.

Alan said...

Why starting in March 2011? A lot of what you write about has been happening for a good number of years.

Chris Howe said...

I'm with Alan actually. Malicious code has been around for a long time, and the "next generation" is always coming out.

There's an old adage that "better locks make for better thieves," and it's true for malware as well. Next-generation AV will combat this "next gen malware" until "next gen malware 2.0" is released. How do you think content filtering will mitigate this cycle?

joebeone said...

Neat, as part of the SHARPS team, I work with Denise Anthony and Tim Stablein who are part of both SHARPS and TISH. If you see them, say hi! Good group up there.

Justin Wiley said...

Good post. I think locking down wire traffic is a half measure, and something we've tried for years with increasingly advanced firewalls. I think rigorous virtualization and app sandboxing at the level of the desktop is a better (albeit imperfect) solution. Intel/McAfee's effort is a good start.

Alan said...

For some applications, such as corporate online banking, extremely restrictive whitelisting has been the way to go for some time: blacklist everything, including use of external drives/media, except the bank.
Got data that needs serious security? Don't network it. Sometimes data use agreements require this.

Alan said...

@Justin

interesting experiment in designing a secure OS using VMs:

http://qubes-os.org/Architecture.html

Michael said...

The white-listing process will prove as ineffective as (or more ineffective than) blacklisting: it'll piss users off even more, and they will work harder to find workarounds. Especially when it impacts their actual work: I recently worked at a large research institution, and my group was looking at the epidemiology of STIs in inner-city kids.

Our filtering agent (WebSense) blocked any sites having to do with sex or sexual organs, not to mention commercial sex workers and drug use. Needless to say, it severely impeded our work (and no one wanted to bother with the 6-month exception process), so we all found ways around it.

Granted, a group clever enough to do that might be less likely to click that Viagra ad (and more likely to keep Flash up-to-date), but who else copied our steps?

Anonymous said...

Good discussion. Doesn't it all come back to removing administrative rights? That seems to have been the only successful approach in my experience.
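A minimal cross-platform sketch of the check behind that idea: a program that detects elevation and refuses to run can't silently make system-wide changes. Both privilege checks below are real APIs (IsUserAnAdmin is a deprecated but still functional Win32 call).

```python
import ctypes
import os
import sys

def running_with_admin_rights() -> bool:
    """Return True if the current process has administrative privileges."""
    if os.name == "nt":
        # Win32 shell API; deprecated but still works for this check.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    return os.geteuid() == 0  # POSIX: effective UID 0 means root

if running_with_admin_rights():
    sys.exit("Refusing to run elevated; re-run as a standard user.")
```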