Yesterday I attended a discussion put on by the Atlantic Council’s Cyber Statecraft Initiative. The event, “Lessons from our Cyber Past: The First Cyber Cops,” featured a panel that included:
- Steven Chabinsky, Deputy Assistant Director, Cyber Division, FBI;
- Shawn Henry, former Executive Assistant Director, Criminal, Cyber, Response, and Services Branch, FBI;
- Christopher Painter, Coordinator for Cyber Issues, Dept. of State.
Here are a few key themes I took away:
Public attitudes are changing
It’s amazing to think how public attitudes regarding cyber have changed over the last fifteen years and how much more room there is for evolution. Painter noted that, back when he was a U.S. Attorney prosecuting famed hacker Kevin Mitnick, public opinion was largely in favor of hackers. At Mitnick’s trial, a plane with a banner reading “Free Kevin” circled the courthouse. It seems that this attitude has shifted and that support for “black hat” hackers has diminished considerably.
Public attitudes are perhaps most important today in determining how individuals and companies deal with cybersecurity and attacks against their networks. Companies have historically been very hesitant to report data breaches for fear of losing customers and damaging their reputations. It was therefore thought that forcing companies to report incidents would push them to strengthen security. However, as the number of reported incidents increases, customers have become almost desensitized; they figure that once a company is hit, it is forced to deal with the problem and improve its security, giving it a leg up over companies that haven’t been hit. As long as this perception keeps customers from voting with their feet, company behavior may not change.
We’re hitting the snooze button on wake-up calls
The panelists tended to agree that there has been no single cyber wake-up call, but rather a series of incidents from which lessons have been drawn. Painter characterized it as a wake-up call on which we simply keep hitting the snooze button: an incident causes a splash of publicity, then fades from view until the next one occurs. A few examples:
- During Solar Sunrise in 1998, it looked as though the .mil domain was under attack by a country in the Middle East. The U.S. showed restraint as the interagency process played out, and it was eventually determined that the culprits were two teenagers in California working with another teen in Israel. However, since our .mil networks had been penetrated, it became clear that intruders could stage future attacks from within the U.S. The question arose: would U.S. adversaries show as much restraint as the U.S. did before leaping to conclusions about an attack’s origin and retaliating?
- Another wake-up call came in 2000, when massive denial-of-service attacks took down or slowed multiple internet and media companies. There was debate about the origin of the attacks, with some arguing that the source had to be a nation state given their sophistication. As it turned out, the perpetrator was a 15-year-old Canadian hacker who went by the moniker MafiaBoy. The event highlighted the asymmetric threat inherent in cyber.
- Other wake-up calls, like the ability to damage infrastructure, were unintentional, as when teenagers reset a telephone switch at a local airport, showing that hackers could adversely affect physical infrastructure. Panelists agreed that we’re still waiting for a global wake-up call on infrastructure, though; that likely won’t happen until the lights go out for an extended period of time or someone is harmed.
We aren’t losing, but we’re not winning, either
There are many successes to celebrate, including greater public awareness of cyber issues, enhanced law enforcement capabilities, tactical wins, and improved international cooperation. DOJ has led the way in helping other nations craft laws suited to prosecuting cybercriminals, in an effort to avoid repeating a situation like the one in the Philippines, where a person the U.S. had identified as causing significant monetary damage was arrested by local authorities but had to be released because he hadn’t broken any existing laws there.
Despite significant progress, however, we’re falling behind. While we’re experiencing great tactical successes, we’re not strategically winning, since the threat continues to outpace our own capabilities. So, what does this mean for the future?
Looking ahead
- Define success. Even the best GPS is useless if you don’t have an address to give it. Different organizations will have different risk models, which should be built on careful cost/benefit analysis.
- Government alone cannot provide adequate cybersecurity. The private sector needs to be involved every step of the way. Companies will also need to develop the capacity to hunt down intruders inside their networks; simply trying to build “higher fences” to keep intruders out is a losing battle.
- Raise the costs of committing cybercrime. As long as cybercriminals don’t face enough risk, we won’t be able to stop them.
- Work harder to get victims to come forward. There were some early missteps, when victims were victimized further by authorities who seized their computers and released their names. While that practice has stopped, there needs to be a push to get victims to report incidents; otherwise it will be impossible to fully grasp the scope of the problem.
- Put structure in place to recruit and retain cybersecurity professionals. The FBI has made tremendous progress in its ability to protect against and prosecute cybercrime by hiring from the right talent pool (two of the panelists got into cybersecurity through their love of gaming as teens!), creating a new career path to support those hires, and developing specialized training that undergoes constant review to ensure it stays up to date.
- Don’t wait until you have “all the facts” before taking action. Arguing that we need to better understand the problem before legislating is like putting off buying a new computer out of fear that a newer model will come out in six months. You will always be able to argue that more information is needed, and you run the risk of never taking action. Further, many of the cyber issues we face now are not that different from those of 15 years ago.
If you’re interested in learning more, you can listen to a recording of the panel’s discussion here.
Your first “looking ahead” bullet is a really good one. Do we have any best practices in this area?
Thanks for the comment, Chris. Steven Chabinsky is the one who made the GPS analogy. His take was that we should 1) get much more granular than we are today in defining our vision of success, 2) determine the tactics currently available to get us there, and 3) define the gaps. He stressed that #1 would look different for different users. For instance, critical infrastructure has a lower tolerance for failure than a small nonprofit.
He also stressed that this would require some assurance in attribution, using an interesting analogy to nuclear warheads. If ICBMs were launched toward the U.S., the U.S. would not be able to shoot them all down, nor perfectly deal with the consequences. However, we do have the capacity to identify the trajectory (and therefore the origin) of those ICBMs, which gives us a credible deterrent. Attribution is an important component of the deterrence needed to help achieve success.
For a different take on the panel, check out Alex’s post (via Bob Gourley): https://www.govloop.com/profiles/blogs/a-lesson-from-the-first-cyber-cops