Lippman Offers Prime Explanation of Spam

Speaking on Friday, January 14, at the Thayer School of Engineering, Richard P. Lippman, a researcher at MIT's Lincoln Laboratory, addressed the use of learning machines to protect computers from spam, a significant online threat. Lippman began his presentation by noting that such machines hold great potential in a security capacity, as they can "automate decisions" and "adapt to frequent changes" in spammers' attacks. However, according to Lippman, such machines are also all too easily fooled.

Lippman then detailed the general manner in which such machines can be fooled. Spammers, or internet adversaries, can directly manipulate the features a machine examines in order to produce a desired outcome, or, more insidiously, they can corrupt a defending machine's "training data" and open the floodgates to a torrent of attacks. Because both methods can have deleterious consequences for a computer, an "arms race" has naturally ensued between the attackers and the defenders who create and maintain these machines. Lippman went on to describe the essence of this cyclical relationship, covering not only the nature of the attacks but also the framework of defense.
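To make the two attack modes concrete, the following is a minimal, hypothetical Python sketch rather than anything presented in the talk: a toy bag-of-words spam filter, an evasion attack that pads a spam message with words the filter has only seen in legitimate mail, and a poisoning attack that slips mislabeled spam into the training data. The scoring rule, word lists, and messages are all invented for illustration.

```python
from collections import Counter

def train(messages):
    """Count how often each word appears in spam vs. legitimate training mail."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam in messages:
        (spam_counts if is_spam else ham_counts).update(text.lower().split())
    return spam_counts, ham_counts

def spam_score(text, spam_counts, ham_counts):
    """Crude per-word score: positive means the text looks like spam."""
    return sum(spam_counts[w] - ham_counts[w] for w in text.lower().split())

training = [
    ("cheap pills buy now", True),
    ("meeting agenda attached", False),
    ("buy cheap watches now", True),
    ("lunch tomorrow with the team", False),
]
spam_counts, ham_counts = train(training)

# Evasion: manipulate the features of one message at delivery time by
# padding it with words the filter has only ever seen in legitimate mail.
plain = "cheap pills buy now"
evasive = plain + " meeting agenda attached lunch tomorrow team"
print(spam_score(plain, spam_counts, ham_counts))    # 7: flagged as spam
print(spam_score(evasive, spam_counts, ham_counts))  # 1: nearly neutral

# Poisoning: corrupt the training data itself by submitting spam
# mislabeled as legitimate, so future spam inherits "ham" word counts.
poisoned = training + [("cheap pills buy now", False)] * 5
spam_counts, ham_counts = train(poisoned)
print(spam_score(plain, spam_counts, ham_counts))    # -13: waved through
```

The evasion changes nothing about the defender's machine, only the message presented to it, while the poisoning quietly rewrites what the machine believes legitimate mail looks like, which is why Lippman called the latter the more insidious route.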

According to Lippman, spam has undergone a significant transformation since its beginnings. Starting out as plain text, spam then became images, then images with text, and finally a synthesis of complicated designs, images, and nonsensical words that are difficult for a computer to recognize. Each successive class of spam worked to fool protective machines into classifying it as harmless when it was not. Taking advantage of "social engineering," the practice of tricking a user into trusting a message and clicking on it, the spam then compromises and infects the user's computer and possibly other systems connected to it.

The defense against such attacks, according to Lippman, involves five critical components. First, defenders work to detect spammers and their points of origin by creating "honeypots" that lure spammers into revealing their identities, seriously undermining their ability to launch further attacks. Subsequent steps involve denying both identified and unidentified spammers access to a system, verifying the rate of success in stopping such attacks, and building a robust machine capable of monitoring and evaluating the previous two decisions. The final step is to create a machine capable of integrating previous decisions into its future processes; that is, a machine that would not only detect and deny spammers, verify its rate of success, and monitor its progress, but also learn and evolve to meet spammers' shifting attacks. According to Lippman, such a learning machine would have to incorporate the fundamental capability of recognizing abnormal data in order to be truly effective.
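Lippman's five components form a natural feedback loop. The sketch below is a hypothetical Python outline of such a pipeline, not a description of any system shown in the talk: the class, function names, thresholds, and message format are all assumptions. It detects mail from known-bad senders (as a honeypot-built blocklist might supply) or spammy content, denies it delivery, verifies each decision against ground-truth labels, monitors a running accuracy, and retrains when that accuracy drifts, the learning step Lippman identified as depending on recognizing abnormal data.

```python
class KeywordModel:
    """Toy stand-in for a learned spam classifier (illustration only)."""
    def __init__(self, spam_words):
        self.spam_words = set(spam_words)

    def predict(self, text):
        return any(word in self.spam_words for word in text.lower().split())

    def retrain(self, labeled):
        # Relearn the spam vocabulary from the latest labeled mail.
        self.spam_words = {word for text, is_spam in labeled if is_spam
                           for word in text.lower().split()}

def run_pipeline(stream, blocklist, model, window=4, threshold=0.75):
    delivered, outcomes, history = [], [], []
    for msg in stream:
        # 1. Detect: a known-bad sender (e.g. identified via a honeypot)
        # or content the current model scores as spam.
        flagged = msg["sender"] in blocklist or model.predict(msg["text"])
        # 2. Deny: only unflagged mail is delivered.
        if not flagged:
            delivered.append(msg)
        # 3. Verify: compare each decision against ground truth
        # (in practice, user reports or honeypot-labeled mail).
        history.append((msg["text"], msg["is_spam"]))
        outcomes.append(flagged == msg["is_spam"])
        # 4. Monitor: running accuracy over the most recent decisions.
        recent = outcomes[-window:]
        accuracy = sum(recent) / len(recent)
        # 5. Learn: when accuracy drifts below the threshold, retrain on
        # fresh labeled mail so the filter adapts to shifting attacks.
        if accuracy < threshold:
            model.retrain(history)
    return delivered

mail = [
    {"sender": "friend@example.org", "text": "lunch tomorrow", "is_spam": False},
    {"sender": "bot@spam.example", "text": "cheap pills now", "is_spam": True},
    {"sender": "new@spam.example", "text": "cheap watches now", "is_spam": True},
    {"sender": "other@spam.example", "text": "buy cheap watches", "is_spam": True},
]
inbox = run_pipeline(mail, blocklist={"bot@spam.example"},
                     model=KeywordModel(["pills"]))
# The novel spam slips through once, triggers retraining, and the
# fourth, similar message is then denied.
print([m["text"] for m in inbox])  # ['lunch tomorrow', 'cheap watches now']
```

The design reflects the cyclical relationship Lippman described: each denied or delivered message feeds evidence back into the monitor, and retraining is what keeps the defender in the arms race rather than frozen against last year's spam.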
