Guest Column | November 23, 2009

Web 2.0 Security: Why Filtering and Reputation Schemes Are Not Enough


By Hongwen Zhang, Ph.D., president and CEO of Wedge Networks

Web 1.0, with its static pages, provider-centric content, and thou-shall-click-and-have-a-sip browsing, is so Paleozoic. In the lingo of the new web-savvy, "Twitter-centric" generation, Web 1.0 is "so yesterday."

Enter Web 2.0.

Web 2.0 is a natural evolution that puts the user at the center. It can be summed up as a service platform where localized, user-centric content (tailored and contributed by users, for users), instant-response browsing (through client-side scripting), and simple, integrated services are all available at the user's fingertips. It does not matter where data is sourced, gathered, or aggregated, as long as it is relevant and presented to the end user. Web 2.0 is a platform of collective intelligence, focused on soliciting and surfacing relevant data to empower the end user. Companies that invest in Web 2.0 can bank on a greater competitive edge.

Amazon.com, for example, sells the same products as its competitors and receives the same product descriptions, cover images, and editorial content from vendors. However, Amazon was innovative and proactive in researching user participation and empowerment at the point of purchase. It solicits and presents user reviews and, more importantly, uses user activity to produce better search results and suggestions based on what other users have done on its site. Thanks to Web 2.0 technologies and strong user participation, it is no surprise that Amazon's ranking and sales outpace its competitors'.

Unfortunately, as with any good technology, Web 2.0 can fall into the wrong hands and be put to malicious use. Its power is now harnessed by malware writers, providing the groundwork for a new generation of hacker collaboration and a preferred means of deployment. Hackers now use Web 2.0 technologies to maintain profiles on social networking sites and blogs to keep in touch with their business partners and customers. Further, Web 2.0 makes it much easier to compromise remote PCs and mobilize and control them as botnets. Through strategically placed malware masquerading as user-contributed content on blogs, social networking sites, and the like, PCs can be easily compromised and added to ever-growing botnets without the end user's knowledge. With the power of virtualization and Web 2.0 technologies, botnet controllers can now sell time on their networks to spammers and other hackers, keep blogs on Web 2.0 sites, and use this collaborative intelligence to keep track of what those "malware customers" require.

The challenge becomes how to protect your corporate network from the negative side of Web 2.0. There are two commonly used approaches, classification and reputation, which typically work hand in hand:

1. Classification-based approaches attempt to classify every possible URL as good or bad. Systems using this approach consult a local URL database in real time, periodically updated from a centralized service, to separate the good URLs from the bad.

2. Reputation-based approaches are typically delivered as a cloud service in which a reputation authority is consulted. Systems using this approach pass a URL to the service, and the authority responds in real time with a rating that ranges from good to bad (a simplified sketch of both approaches appears below).
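
To make the distinction concrete, here is a minimal sketch of how a gateway might combine the two approaches. It is illustrative only; the local database, the `query_reputation_service` function, and the threshold are hypothetical placeholders, not any vendor's actual API.

```python
from urllib.parse import urlparse

# Local classification database, periodically refreshed from a central service
# (hypothetical contents for illustration).
LOCAL_URL_DB = {
    "known-good.example.com": "good",
    "known-bad.example.net": "bad",
}

def query_reputation_service(url: str) -> float:
    """Placeholder for a cloud reputation lookup; returns a score in [0, 1],
    where 1.0 is fully trusted and 0.0 is known malicious."""
    return 0.5  # unknown host: middling reputation

def allow_request(url: str, threshold: float = 0.6) -> bool:
    host = urlparse(url).hostname or ""
    # 1. Classification: consult the local URL database first.
    verdict = LOCAL_URL_DB.get(host)
    if verdict == "bad":
        return False
    if verdict == "good":
        return True
    # 2. Reputation: fall back to the cloud authority for unknown hosts.
    return query_reputation_service(url) >= threshold

print(allow_request("http://known-bad.example.net/login"))  # False
```

Note that both lookups judge the URL itself, not the content that actually comes back, which is exactly the limitation discussed next.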

The challenge with both approaches is that bad content can happen to good sites through several channels: third-party content networks (e.g., advertising networks such as the Gator adverts that pushed malicious content onto Facebook), user-contributed content that is inadvertently malicious (e.g., malicious games posted on Facebook walls), or reputable sites (e.g., NBA.com) that have been compromised through SQL injection. Furthermore, bad content can selectively rear its ugly head only when a vulnerability is detected. A compromised site can hide its bad content whenever a reputation authority's crawlers visit, but bring it out in full force when a weakness is detected in an innocent user's browser.
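
This cloaking behavior is easy to picture. The sketch below is purely illustrative (the crawler and browser markers are invented), but it shows why a site can look clean to every reputation crawler while still attacking vulnerable visitors.

```python
# Purely illustrative cloaking logic on a compromised site; the agent
# strings below are invented markers, not real product names.
CRAWLER_AGENTS = ("ReputationBot", "SafetyCrawler")
VULNERABLE_AGENTS = ("MSIE 6.0", "Flash/9")

def choose_page(user_agent: str) -> str:
    if any(bot in user_agent for bot in CRAWLER_AGENTS):
        return "clean_page.html"        # crawlers always see a clean page
    if any(marker in user_agent for marker in VULNERABLE_AGENTS):
        return "exploit_page.html"      # only vulnerable browsers are attacked
    return "clean_page.html"

print(choose_page("Mozilla/4.0 (compatible; MSIE 6.0)"))  # exploit_page.html
print(choose_page("SafetyCrawler/1.0"))                   # clean_page.html
```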

Security providers have witnessed this again and again: companies invest heavily in classification (URL filtering) and reputation systems at the perimeters of their networks to become Web 2.0 secure, yet they do not get the promised benefit. The reason is that Web 2.0 is extremely dynamic; static schemes that take a snapshot and build conclusions from it will not work.

So, can Web 2.0 be secure? Yes, but there are no shortcuts. Just as airport protocol requires everyone to be screened each time they cross into the secure zone, network content has to go through a similar process. The most promising technologies available to solve the problem are the new generation of network security appliances that enforce network content security and combat Web 2.0 attacks. These platforms prevent attacks from ever reaching endpoints by performing content reconstruction, inspection, and manipulation to determine whether the content is safe to deliver to the endpoint. This approach is known as network content inspection.
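
The basic flow can be sketched in a few lines. This is a conceptual outline under assumed names and a toy signature list, not a description of any particular appliance; real products also decode, unpack, and heuristically analyze the reconstructed content.

```python
from typing import Iterable, Optional

# Toy signature list for illustration only.
SIGNATURES = [b"<script>stealCookies()", b"' OR 1=1 --"]

def reassemble(chunks: Iterable[bytes]) -> bytes:
    """Reconstruct the complete object from the network stream."""
    return b"".join(chunks)

def is_safe(payload: bytes) -> bool:
    """Inspect the whole payload before any verdict is made."""
    return not any(sig in payload for sig in SIGNATURES)

def deliver_to_endpoint(chunks: Iterable[bytes]) -> Optional[bytes]:
    payload = reassemble(chunks)
    if is_safe(payload):
        return payload   # safe content is released to the endpoint
    return None          # malicious content is stopped in the network

chunks = [b"<html>", b"<script>stealCookies()", b"</script></html>"]
print(deliver_to_endpoint(chunks))  # None: the attack never reaches the endpoint
```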

However, not all content inspection approaches are equal. Techniques such as those used by UTMs, Web application firewalls, or simple intrusion detection systems attempt real-time content scanning but fail in one respect or another: they stop at packet-level scanning, do not scan deeply enough (by failing to assemble all the content or by subjecting it only to limited search techniques), or simply open up and pass everything through when traffic volumes get too high. That is as bad as waving passengers through because they are about to miss their flight.
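
A toy example shows why packet-level scanning falls short: a malicious signature split across two packets slips past per-packet matching but is caught once the stream is reassembled. The signature and packets below are invented for illustration.

```python
SIGNATURE = b"DROP TABLE users"   # invented signature for illustration

packet_1 = b"...innocuous prefix...DROP TA"
packet_2 = b"BLE users;--...rest of payload..."

per_packet_hit = any(SIGNATURE in p for p in (packet_1, packet_2))
reassembled_hit = SIGNATURE in (packet_1 + packet_2)

print(per_packet_hit)   # False: packet-level scanning misses the attack
print(reassembled_hit)  # True: deep content inspection catches it
```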

So if deep content inspection is the approach you are using, the features to look for in a network content inspection appliance are:

  • Accuracy and In-depth scanning: Ensure the product scans all content, assembling it in real time and unpacking it if needed, and checks it against a full-fledged signature database as well as heuristic and keyword scanning that protect against new threat vectors and programmatic attacks such as SQL injection, cross-site scripting, and code injection;
  • Performance and Robustness: Given that the selected appliance will sit inline with your network's traffic, it must accommodate high volumes with little to no impact on the company's throughput and latency. Even under extreme loads, the appliance should continue to perform without slowing your network; and
  • More for Less: Because the product sits in the path of your network traffic, the selected appliance should cover at least the most common e-mail protocols (SMTP, POP3, IMAP) as well as Web protocols. Every device in the path introduces some form of latency, so an appliance that handles more protocols means fewer devices and less delay.

Because Web 1.0 is a thing of the past, securing your network against Web 2.0 malware is more critical now than ever. There are no shortcuts; you have to roll up your sleeves and get technical with your vendors. Select Web 2.0 security products that deliver high accuracy without compromise, provide high capacity, respond flexibly to changes in the threat landscape, and deploy easily in any network segment to give you the best defense against the malware associated with Web 2.0.

Hongwen Zhang is the president and CEO of Wedge Networks (www.wedgenetworks.com). Its technology leadership in deep content inspection allows companies to protect against new and emerging web-based threats that traditional scanning methods have difficulty intercepting and controlling.