Faculty Spotlight: Charles Palmer

Introduction

Dr. Charles C. Palmer is an adjunct professor of Computer Science at Dartmouth College, CTO for Security and Privacy for IBM Research, and a member of several government advisory boards. Working in both industry and academia, Dr. Palmer focuses on special projects relating to security and privacy, unique customer challenges, and national security issues (1).

Dr. Palmer was the founding director of IBM’s Institute for Advanced Security. Based in Washington, D.C., the Institute helps clients, academics, policymakers, and businesses understand the complex, multidisciplinary issues associated with system security (1). At Dartmouth, Dr. Palmer teaches courses titled Security and Privacy, Database Systems, and Software Design & Implementation to undergraduate and graduate students; he is also past Director of Research and Senior Technical Advisor to the Institute for Information Infrastructure Protection (I3P), which is managed by Dartmouth College (2).

Prior to taking on these roles, Dr. Palmer led the Security and Privacy departments at IBM’s Thomas J. Watson Research Center for several years. He continues to work with those teams, assisting with IBM’s products, services, and “ethical hacking” ventures. Dr. Palmer received a Ph.D. in Computer Science from Polytechnic University in Brooklyn, NY in 1994 (2). In an interview with the Dartmouth Undergraduate Journal of Science, Dr. Palmer describes his career trajectory and shares some of his thoughts on the current state of security.

On your first day of class teaching CS 55: Security & Privacy, you ask students to define security. How would you define security?

A simple way to describe it is “no surprises.” The system does what you expect it to do, no more, no less. And that system could be anything – your computer system at home or the automated checkout machine at Home Depot.

Do you think that other people would say something different? What is the public’s perception of security?

That’s tricky. I think most people try to boil it down to their world. For example, if you’re in a classified environment, security means that the secrets don’t leave. If you’re Amazon and you’re worried about books, then it means that the secrets – the books – don’t leave unless someone pays for them. Most people look at security as a context-specific thing.

And don’t forget, the bad guys have security too. They look at it as, “Can I do this and not get caught?” Security is really where you stand. Physical security too – I can stand on the edge of that cliff and assume that the ground is not going to crumble. But at the end of the day, security is whatever you boil it down to.

When did you first get started working in security?

I was actually in it for a long time and didn’t know it. For example, you’re a student and you need to get extra credit on an assignment, so you change the rules of the game…kind of like the Kobayashi Maru test posed to Captain Kirk at Starfleet Academy in the Star Trek series. It was a no-win situation that was supposed to test how he would react, so he hacked into the system and changed the scenario so there was an answer. That’s sort of how I got started – I didn’t think of it as security. I noticed that someone hadn’t set up a system well and I could exploit it to somebody’s advantage, sometimes my whole class’s and sometimes mine. But I really got into it when IBM said to me, “Why don’t you stop that theoretical network design stuff and build a team to go break into customers’ systems, under a contract, and see what they did wrong?” Straight out of the movie Sneakers, that’s what we started doing.

I’ve actually found that a lot of people in security didn’t start there, at least in my generation. Most of us didn’t go to college for it; my crack team of ethical hackers at IBM included two computer science majors, a fine arts photography major, a physicist, and a high school graduate. And we ‘won’ 85% of the time, out of about 2,000 gigs – ‘win’ meaning that we were able to do more than the customer thought we could. The other times, either we were blocked or the customer was so terrified that they tried to change the game, effectively running down the hall ahead of us locking doors. We got to work on both electronic security and physical security, so it was a lot of fun.

You are a Professor in the Dartmouth Computer Science Department, CTO for Security and Privacy for IBM Research, a member of several government advisory boards, and an expert in the field of systems security. How do you feel about balancing research, work, and undergraduate teaching? 

It’s definitely a challenge. But the good news is that before I came to Dartmouth, I gave up management of my department at IBM because I wanted to get technical again. That allows me to come up here to Dartmouth, pursue teaching, and help to run a consortium that was based at Dartmouth for a little while. And now what I do for IBM is go down to Washington to talk to customers, policymakers, and so on, helping them understand security: what they are trying to do, what works, what won’t, what can’t…all that sort of thing.

What is “ethical hacking”? How did that come about? Was it IBM’s idea?

So the customers actually drove it. They said, “We do want to do this Internet thing, it sounds way cool,” but they had also heard about hackers, heard about break-ins, heard about people on the inside doing things that they shouldn’t – things like changing the price of a product for a friend so they could buy it cheaper. One of the guys that I eventually hired, Wietse Venema, had written a paper with Dan Farmer titled “Improving the Security of Your Site by Breaking Into It.” They had done a lot of cool stuff, a bunch of defensive projects, including SATAN (the Security Administrator Tool for Analyzing Networks), which let system administrators scan their own networks for known weaknesses. And, of course, it was misunderstood (given the name and whatnot), but what the world didn’t realize is that the bad guys already had tools like this.
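The basic check that a scanner in SATAN’s spirit automates is easy to sketch. The short Python program below is an illustrative toy, not SATAN itself: it simply asks whether a host is listening on a handful of TCP ports, which is the first question any network audit, friendly or hostile, tends to ask. The host address and port list are placeholders.

    import socket

    def probe(host, ports, timeout=1.0):
        """Return the subset of `ports` accepting TCP connections on `host`."""
        open_ports = []
        for port in ports:
            # A plain TCP connect is the simplest possible "scan."
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                try:
                    s.connect((host, port))
                    open_ports.append(port)
                except OSError:
                    pass  # closed, filtered, or unreachable
        return open_ports

    if __name__ == "__main__":
        # Only probe machines you own or are authorized to test.
        print(probe("127.0.0.1", [22, 25, 80, 443]))

Real scanners layer service fingerprinting and known-vulnerability checks on top of this basic probe, but the probe itself is the whole idea.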

So that kind of noise had already started, and then the customers started flipping out. At large companies like IBM with big research divisions, we started getting hard problems from customers dealing with mainframes, among a bunch of other concerns, and those problems came down to IBM Research. Of course, we had all read Venema’s paper, and when we looked around and talked to our consultants, we saw that there were lots of consultants who could help you set up a system securely, but no one trained to go in and see if they had actually done it right. So it just sort of made sense – we would do this “hacking for contract” or “hacking for hire” thing. Then someone else in IBM came up with the term “ethical hacking” and it all went on from there.

Now all of the big accounting firms, even individual companies, do this. The customers are really excited about it but also terrified. When we go in for an “ethical hack,” they are basically handing us the keys to their company; if I can hack you and I tell someone what I did, then they can hack you too. The clients are worried about losing their intellectual property and losing their money, but their biggest concern is that “CNN Moment”: they don’t want word to get out that they have a problem. And the threat of that “CNN Moment” is profound – it can be anything from a website background changed to orange for Halloween to pornography displayed when a user clicks on a product description. So the customers were ready for a solution.

At that point, we set up a contract. The terms for each company varied from “try anything you want, we’re perfect” to “whatever you do, don’t hit the ‘T’ system because it runs the trains.” Then they started asking for physical tests: can you find the computer room in a business the size of Berry Library? Can you even get into the building? Other customers just wanted to know internal things like, “Do we have wireless networks in this company?” Because anyone can go to Staples and buy a wireless router nowadays, our job was sometimes more of an audit, where people wanted to know what was going on inside their own companies.

What does a conversation with a customer of an “ethical hack” usually look like?

Our first two questions for a customer are always: what do you have and what is it worth to you? What are you protecting, and why? And if you think hard about that, they are very tough questions. These companies would much rather have IBM come in with their professional, trained “white hats” to tell them what they did wrong than have some other guy figure it out. The military has been doing this for years: they make two teams – a “blue team” sets up the defenses and a “red team” tries to break them. The idea has definitely been around.

Now, at the end of the process, when the customer was reflecting on what we’d done, they would ask, “So we’re secure now?” And we’d respond, “Well, at least for the next few minutes.” And that’s always the answer, because people are involved – you never know what’s going to happen. Recently, this type of work has moved much more toward long-term, continuous analysis and evaluation, with much more monitoring than there used to be.

Shifting gears away from industry for a second, do you think that the public’s view of security is more theatrical than we’d like it to be? Is there anything we can do about that?

Could our security, our personal security, be better? Sure! The biggest challenge we have is that we don’t have a culture of security. People are more interested in features and functions than they are in securing their credit card. That trade-off is hard to believe, and we’re getting better, but that’s how it is.

We always used to pick on students and say things like, “Company X is going to see what you put on Facebook.” And maybe they will and maybe they won’t. But what people put online is forever. I’d be more concerned about a future significant other or children seeing that stuff than an employer.

What are some of the most interesting or fun problems you’ve tackled during your career?

Certainly ethical hacking was the most fun ever, because you’re actively helping the customer realize just what they’ve been doing. I guess the most fun things are the ones you can demonstrate. Demonstrating cryptography is not that much fun: “now you can read it, poof, now you can’t – questions?” That’s not really exciting, and it’s very hard to prove stuff in that field. But there are plenty of other things that you can actually do to demonstrate how you can fix the security.

One student demonstrated a few years ago that if he took a flash bulb from his camera, held it close to his CPU, and flashed the bulb at a certain point in the CPU’s processing, it affected the CPU to the point that he could make it do what he wanted. That’s simplifying it a lot, but even so – where does that come from, and how did he do it? It’s not always about breaking things; the real fun has been in showing customers how they could improve.

I don’t know if you’ve watched The Three Stooges, but the typical thing they do is poke someone in the eyes. When Curly learned to put his hand vertically between his eyes to protect himself, Moe had to figure out how to hit him another way. We use that analogy a lot in the industry – we don’t just show you how someone can poke you in the eyes, but also how you can protect yourself in the future. And when a customer realizes that it’s both possible and not that bad, you see their business blossom. They can benefit and truly see the economies of scale because the bulb finally went on – it’s very gratifying.

When you work with a customer, client, or team member and the light comes on: that’s why I teach! When I look out at my class, some students are cross-eyed tired and some are just like, “Argh, I don’t get this,” but then I look around and see some students perk up, and I know “he’s got it!” and “she’s got it!” That’s definitely the most fun. When I see someone understand that they can truly make something better by changing their hygiene, making a new password…that’s really it.
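The flash-bulb trick described above is an instance of hardware fault injection, and the reason a single glitch matters is easy to show in software. The Python sketch below only simulates the effect – the names and the single-bit model are illustrative assumptions, not the student’s actual experiment: if a transient fault flips one bit in the flag a program uses to decide allow-or-deny, the decision flips with it.

    def check_access(supplied_pin, real_pin):
        # The security decision ends up as a single bit in memory.
        return 1 if supplied_pin == real_pin else 0

    def inject_fault(flag, bit=0):
        """Simulate a transient hardware glitch by flipping one bit."""
        return flag ^ (1 << bit)

    flag = check_access("0000", "4321")   # wrong PIN, so flag == 0
    print("before fault:", "allow" if flag else "deny")
    flag = inject_fault(flag)             # the "flash bulb" moment
    print("after fault: ", "allow" if flag else "deny")

Hardware defenses against this class of attack typically add redundant checks or encode flags so that valid values differ in many bits, making a single flipped bit an detectable error rather than the opposite decision.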

How do industry and academia effectively work together to progress the state of security today? 

Well, to start, industry goals vary wildly. Smaller companies may favor more of a “get it out” approach, even if products are not fully secure or don’t completely work. However, that focus has largely changed over time. The larger, more established companies – IBM, HP, Microsoft – realize that every shipped bug costs a lot, far more than it would have cost to catch in testing. So these companies have gotten a clue and started worrying about things like brand and whether a customer will come back. That is what’s going to determine the future of the company, as opposed to just features and functionality. The customer has to believe you when you say that a system is secure, especially when a history of integrity and security is what you’ve got to offer.

Now academia is different, but it has to be. It has to be open and encourage experimentation. And that’s great. We need that. Where else could you do the stuff that some of the researchers here do? Not that it’s illegal, but you need to be in an environment that can tolerate it. Some kinds of research you can’t do in industry because it’s too disruptive. If I want to test a new piece of software, the best thing I can think of doing is giving it to a bunch of students at a university, because they will beat it to death. They may not be looking for security holes, but they will encounter way more bugs than the testers did, because they are users – and students, especially, are unruly users. And that’s good! Where else can you have a student do an experiment where he walks around campus with a box of candy bars and says, “Tell me your password and I’ll give you a candy bar”? You don’t do that at an industrial company or in the government. Now, granted, maybe students are a weird population, but it’s still interesting and it’s a great way to get a feel for that stuff.

Academia plays an extraordinarily important role in all of this. Companies are out there to make money for their shareholders, keep things going, etc. If a problem is really hard, even industrial research organizations may not be allowed to work on it because it takes too long. The payoff may be 10 years or 17 years or an unknown amount of time in the future, especially with things like the “science of security.” HP, IBM, or the government may look at things like that, but the people who will really investigate are the ones with the time and without the worry of quotas and selling: the academics. It’s going to be an academic who has the time and the students to worry about these issues. If you think about all of this as a “security ecosystem,” the academics play a huge role, because when you ask, “Where did all of this stuff come from?” the answer is: those guys.

What would you say to someone interested in pursuing a career in security, be it academic or in industry? 

To be good at security, you have to have an interest in systems. Unless you want to go into something very specific like cryptography, you really need to have a “systems view,” because the key to security is the weakest link. The weakest link might be the network, the box on your desk, the people – it can be any of that stuff. So that’s what we try to do in CS 55: Security and Privacy. We take a step back and say, “Look at this whole thing. This whole thing implements that ‘task.’ Where are the problems?” There have been a lot of glorious hacks that were very simple and had nothing to do with hardware or software; they had to do with things like impersonating a UPS man, picking up a code book, or looking over someone’s shoulder as they typed in a password.

There are lots of ways a system can be broken and we are trying to make it harder and harder, but it’s never going to be done. Trying to understand all of the vulnerabilities and plug all of the holes: it can’t be finished. So if someone wants to go into this, I would say study networking, operating systems, programming, sociology, policy – just about anything you can imagine is going to be related because this is a multidisciplinary science. You can’t just be “Joe Firewall” anymore; that’s not going to work. You’re going to have to be much more than that to make a difference, and we kind of need you to!

References

1. Charles C. Palmer – IBM Research. (n.d.). IBM Research, Researcher and Project Pages. Retrieved October 3, 2013, from http://researcher.watson.ibm.com/researcher/view.php?person=us-ccpalmer

2. Charles C. Palmer. (n.d.). Dartmouth Computer Science Department. Retrieved October 1, 2013, from http://www.cs.dartmouth.edu/~ccpalmer/
