Tuesday, February 8, 2011

A Multi-Layered Defense for Web Applications

The internet can be a swamp of hackers, crackers, and hucksters attacking your systems for fun, profit, and fraud. Defending your data and applications against this onslaught is a cold war, requiring constant escalation of new techniques against an ever-increasing offense.

Clinicians are mobile people. They work in ambulatory offices, hospitals, skilled nursing facilities, on the road, and at home. They have desktops, laptops, tablets, iPhones and iPads. Ideally, their applications should run everywhere, on everything. That's the reason we've embraced the web for all our built and bought applications. Protecting these web applications from the evils of the internet is a challenge.

Five years ago all of our externally facing web sites were housed within the data center and made available via network address translation (NAT)  through an opening in the firewall.   We performed periodic penetration testing of our sites.  Two years ago, we installed a Web Application Firewall (WAF) and proxy system.    We are now in the process of migrating all of our web applications from NAT/firewall accessibility to WAF/Proxy accessibility.

We have a few hundred externally facing web sites. From a security standpoint there are only two types: those that provide access to protected health information and those that do not. Fortunately, more fall into the latter category than the former.

One of the major motivations for creating a multi-layered defense was the realization that many vendor products are vulnerable, and even when problems are identified, vendors can be slow to correct defects. We need "zero-day protection" to secure purchased applications against evolving threats.

Technologies to include in a multi-layered defense:

1. Filter out basic network probes, such as traffic on unused ports, at the border router.

2. Use Intrusion Prevention Systems (IPS) to block common attacks such as SQL injection and cross-site scripting; we block over 10,000 such attacks per day. You could implement multiple IPSs from different vendors to create a suite of features, including URL filtering, which prevents internal users from accessing known malware sites. A simplified sketch of this kind of signature matching appears after this list.

3. Maintain a classic firewall and Demilitarized Zone (DMZ) to limit the "attack surface".
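
To make the IPS/WAF layer concrete, here is a minimal sketch of the kind of signature matching these devices perform on incoming requests. The two patterns and the blocking logic are illustrative assumptions only; real products ship thousands of vendor-maintained signatures and update them continuously.

```python
import re

# Illustrative signatures only -- a real IPS or WAF maintains thousands
# of vendor-supplied patterns and updates them continuously.
SIGNATURES = [
    ("sql-injection", re.compile(r"('|%27)\s*(or|union|select)\b", re.IGNORECASE)),
    ("cross-site-scripting", re.compile(r"<\s*script\b", re.IGNORECASE)),
]

def inspect_request(query_string):
    """Return the name of the first matching attack signature, or None."""
    for name, pattern in SIGNATURES:
        if pattern.search(query_string):
            return name
    return None

# A classic SQL injection probe is flagged before it reaches the application.
attack = inspect_request("id=1' OR '1'='1")
if attack:
    print("blocked:", attack)  # blocked: sql-injection
```

In practice this inspection happens on dedicated devices in front of the application rather than in the application itself, which is what provides "zero-day protection" for purchased software we cannot patch ourselves.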

Policies and procedures are an important aspect of maintaining a secure environment.   When a request is made to host a new application, we start with a Nessus vulnerability scan.

Applications must pass the scan before we will consider hosting them. We built a simple online form for these hosting requests, both to track them and to store the data in a SQL database. That database provides the data source for automated re-scans of each system.
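
As a sketch of how such a database can drive the automated re-scans, the following assumes a hypothetical hosted_apps table populated by the request form; the schema, hostnames, and the launch_scan placeholder are illustrative, and a real job would invoke the scanner's API.

```python
import sqlite3

def launch_scan(hostname):
    # Placeholder: in practice this would invoke the scanner's API or CLI
    # (for example, kicking off a Nessus scan) and record the results.
    print("queueing vulnerability re-scan for", hostname)

# Hypothetical schema: one row per hosting request captured by the form.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hosted_apps (hostname TEXT, externally_facing INTEGER)")
conn.executemany(
    "INSERT INTO hosted_apps VALUES (?, ?)",
    [("portal.example.org", 1), ("intranet.example.org", 0)],
)

# The re-scan job simply walks every externally facing system on file.
for (hostname,) in conn.execute(
    "SELECT hostname FROM hosted_apps WHERE externally_facing = 1"
):
    launch_scan(hostname)
```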

Penetration testing of internally written applications is a bit more valuable, because internally written applications are easier to update and correct based on the findings.

One caveat: the quality of penetration testing is highly variable. When we hire firms to attack our applications, we often get a report filled with theoretical risks that are not especially helpful, e.g., if your web server were accidentally configured to accept HTTP connections instead of forcing HTTPS connections, the application would be vulnerable. That's true, and if a meteor struck our data center, we would have many challenges on our hands. When choosing a penetration testing vendor, aim for one that can put its findings in a real-world context.
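
Incidentally, that particular finding is easy to verify yourself. Here is a minimal standard-library sketch that checks whether a server redirects plain HTTP to HTTPS; the hostname is a placeholder.

```python
import urllib.error
import urllib.request

def forces_https(hostname):
    """Return True if a plain-HTTP request is redirected to HTTPS."""
    # Don't follow redirects; we only want to see where the server points us.
    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None

    opener = urllib.request.build_opener(NoRedirect)
    try:
        opener.open("http://" + hostname, timeout=10)
        return False  # a 2xx answer over plain HTTP means HTTPS is not forced
    except urllib.error.HTTPError as err:
        location = err.headers.get("Location", "")
        return err.code in (301, 302, 307, 308) and location.startswith("https://")

print(forces_https("example.org"))  # placeholder hostname
```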

Thus, our mitigation strategy is to apply deep wire-based security, utilize many tools including IPS, traditional firewalls, WAF, and proxy servers, and perform recurring internal scans of all systems that are accessible from outside our network.

Of course, all of this takes a team of trained professionals.

I hope this is helpful for your own security planning.

4 comments:

  1. Well, thank you for putting details of network defense onto the internet, where said hackers can read about them and figure out what to do in response.

    Seriously, the only person who can effectively defeat a hacker is another hacker.

  2. I disagree. Hackers with the level of expertise to attack something sophisticated have a deep understanding of the levels of security.

    John, great info. We have started looking at implementing a similar solution. Please let us know how your migration goes.

  3. This is important information, particularly in healthcare. Hospitals, small ones in particular, are vulnerable. This information is already widely available on the 'net, so summarizing it here is a public service. You can't be naive and think hackers don't already know this. We use NSA's "Defense in Depth" for our applications. Thank you Dr. Halamka. This is important information to share.

  4. When I was the sysadmin in the Dept. of Earth Sciences at the local university, I used iptables on the firewall to our local net and implemented an "explicit allow/implicit deny" setup, logging all of the denied connection attempts and not even responding to ICMP packets, kind of like a cloaking device. Wow! Some days, there were hundreds of crack attempts, and we were just a speck on the internet.
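
    A minimal sketch of that explicit-allow/implicit-deny setup, expressed as a Python wrapper around iptables; the allowed ports here are assumptions, and a real ruleset would be more extensive.

    ```python
    import subprocess

    # Explicitly allowed inbound TCP ports; everything else is implicitly
    # denied. SSH and HTTPS are assumptions for illustration.
    ALLOWED_TCP_PORTS = [22, 443]

    def iptables(*args):
        subprocess.run(["iptables", *args], check=True)

    # Implicit deny: the default policy drops anything not explicitly
    # allowed, including ICMP echo requests -- the "cloaking device" effect.
    iptables("-P", "INPUT", "DROP")

    # Always allow loopback traffic and replies to connections we initiated.
    iptables("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")
    iptables("-A", "INPUT", "-m", "conntrack",
             "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT")

    # Explicit allow for each permitted service.
    for port in ALLOWED_TCP_PORTS:
        iptables("-A", "INPUT", "-p", "tcp", "--dport", str(port), "-j", "ACCEPT")

    # Log whatever falls through so the denied attempts are visible.
    iptables("-A", "INPUT", "-j", "LOG", "--log-prefix", "DENIED: ")
    ```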

    Of course, it goes without saying that I, and later the university through a formal policy, always installed my laptop's OS with an encrypted hard drive. At least that would thwart the script kiddies, if they ever got their hands on it. It's mind-boggling how many institutions don't require such basic encryption, given the number of laptops that are stolen.
