Happy holidays, security professionals!

By Jason Rader, Global VP & Chief Information Security Officer

I know the last thing you wanted to do this season was determine whether a nation-state actor is in your network, has established a foothold, moved laterally, or accessed your data, and whether any of that requires a legal disclosure. As business leaders, much of what we do resides in the relatively mundane, if we’re lucky. In the security space, that can turn on a dime. I was comfortably drinking eggnog the night of Sunday, December 13, when I saw the first news of the SolarWinds situation on a network security feed I subscribe to. I mentioned to my wife that this was gonna be big…I didn’t know just how big. Whoa.

I’ve done everything right…how did this happen?

This may literally send you through the five stages of grief (denial, anger, bargaining, depression, acceptance). I’ll let you do your own self-evaluation:

  • Denial: No, this wasn’t us; we don’t even use SolarWinds. If we do, we’ve surely implemented tools that have prevented this from happening. 
  • Anger: Man, I’m mad at the hackers, the product company, my security tools, my team, and myself!
  • Bargaining: Maybe they just installed the foothold software and didn’t do anything else. We’re not a target they’re after, right?
  • Depression: Man, I need to update my resume. My company’s brand will be destroyed. We.Will.Never.Recover.
  • Acceptance: All right, what do we have to do to deal with the situation and learn from this?

Once you’ve made it to the acceptance phase, good for you! Now let’s talk…

Don’t focus simply on your perimeter. That’s only a third of it.

First off, it’s unlikely you could have prevented this. If your team intentionally acquires and installs software that’s been digitally signed by a Certificate Authority that all of your systems explicitly trust, your perimeter security isn’t going to keep it off your systems. This underscores that we have to think about more than just perimeter-based controls. When something is an unknown/unknown (insert misunderstood Rumsfeld quote here), it’s something you can’t defend against.

Prevent, detect, respond

It was detective controls that uncovered this threat: unusual or unexplained authentication requests from applications behaving oddly tipped folks off. And if you’re not looking for that stuff (or don’t even have the ability to see it), you’ll never discover it’s going on. Once you detect the threat, what will you do? Having a way to correct or remediate a situation like this is big, but some folks are left holding their life jackets on the deck of a sinking ship wondering what to do next. Basically, spend an equal amount of effort on your ability to prevent, detect, and respond to security events.
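To make the "unusual authentication" idea concrete, here is a minimal sketch of such a detective control, assuming a hypothetical, simplified log format (one account and one source application per event): flag any authentication whose account/application pairing never appeared during a baseline period.

```python
def flag_unusual_auth(events, baseline):
    """Flag auth events whose (account, source_app) pairing was
    never observed during the baseline period."""
    known = {(e["account"], e["source_app"]) for e in baseline}
    return [e for e in events if (e["account"], e["source_app"]) not in known]

# Hypothetical, simplified log records for illustration only
baseline = [
    {"account": "svc-backup", "source_app": "backup-agent"},
    {"account": "alice", "source_app": "vpn-gateway"},
]
events = [
    {"account": "svc-backup", "source_app": "backup-agent"},  # normal
    {"account": "svc-backup", "source_app": "powershell"},    # odd: investigate
]
alerts = flag_unusual_auth(events, baseline)
```

In practice this lives in a SIEM correlation rule rather than a script, but the logic is the same: baseline normal behavior, then alert on deviations from it.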

Which brings me to my next point…log aggregation and log retention. If you’re ever asked, “What happened?”, you will most definitely need a huge number of logs, correlated across multiple systems, to formulate an answer. So, you’ve got a SIEM? Great! Is everything in there? Do you have the logs from March? Do you have the expertise in-house to write the correlation rules? What we’ve learned from the most recent events is that we need logs from more sources than we thought, in more detail than we had planned on, kept for a longer period of time than expected. Now is the time to plan for maturing your logging strategy.
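As a toy illustration of the retention question ("do you have the logs from March?"), here is a sketch that reports coverage gaps, under the assumption that you can enumerate which calendar days each log source actually has data for:

```python
from datetime import date, timedelta

def retention_gaps(days_with_logs, start, end):
    """Return every date in [start, end] for which no logs exist."""
    have = set(days_with_logs)
    gaps = []
    day = start
    while day <= end:
        if day not in have:
            gaps.append(day)
        day += timedelta(days=1)
    return gaps

# Hypothetical coverage for one log source: March 2 is missing
covered = {date(2020, 3, 1), date(2020, 3, 3)}
gaps = retention_gaps(covered, date(2020, 3, 1), date(2020, 3, 3))
```

Finding these gaps before an investigation, not during one, is the point of maturing the logging strategy.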

Where’s the data? — Once logging is enabled, you’ll need to focus on knowing your critical systems and the data they process and hold. It’s far easier to narrow the investigation down to the repositories the attackers are most likely after than to boil the ocean. Even if you do have this documented, make sure it’s maintained and up to date. I talk about this all the time, but when you have to prioritize an investigation, this is gold.

The right tools — Responding to a security event requires a combination of people, process, and technology. Tools can make or break you. The different teams (infosec, network, cloud) may have completely different tool sets. Relying on a tool whose provenance you now question is also an unforeseen issue most of us are experiencing for the first time. By all accounts, additional monitoring and sensors will be implemented following a security event. These tools and processes should be documented. A security tools rationalization may be warranted. The bright side of a security situation is that it usually accelerates the organization’s security roadmap.

Security operations — You may be asked to create accounts specifically for the investigation. How do new accounts get created? Who are the global admins? What are the service accounts in use, and what will changing their passwords break? If we need to install software for the investigation, how does it get installed, and who has the permission to do that? If we are asked to collect evidence, do we know how to do this in a way that is forensically sound? Out-of-band communications and secure file storage should be agreed upon and established as well. These are all questions that could easily be documented…the caveat is that they may cross organizational boundaries, have unclear processes, and might not have been assessed for impact to the organization. This is an area that I know we all will pay more attention to in 2021 based on the 2020 we’ve had. 
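On the forensically-sound point, one concrete and widely used practice is hashing each piece of evidence at collection time so you can later prove it wasn’t altered. A minimal sketch (the throwaway temp file here stands in for a real disk image or log export):

```python
import hashlib
import tempfile

def hash_evidence(path, chunk_size=1 << 20):
    """Compute a SHA-256 digest of an evidence file, reading in
    chunks so large disk images don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for collected evidence
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"disk-image-bytes")
    evidence_path = f.name

# Record this fingerprint in the chain-of-custody log at collection time
fingerprint = hash_evidence(evidence_path)
```

Anyone can recompute the digest later; a match demonstrates the evidence is byte-for-byte unchanged since collection.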

Engaging with legal counsel — I really hope it hasn’t come to this. But sooner rather than later, you should engage your legal team. There should be a known process for working with your corporate counsel and for bringing in an external firm. For example, attorney-client privilege has specific requirements that can make or break you, and the jurisdictional and regulatory elements alone in today’s global market are enough to make your head spin. Unless you are both a security expert and an attorney, legal is your friend.

Communicating in a crisis and putting it in writing — Since we’ve talked about some legal aspects: when communicating about the event, even internally, don’t speculate…especially in writing. Certain key words like incident, breach, and disclosure make things complicated. Assume everything will be evidence at some point and behave accordingly. Make sure your employees know not to make any statements to clients, and quickly create a corporate communication process with an email alias for customer requests. This is crucial to keep things focused where they should be.

Cyber incident response plan and tabletop exercises — The worst time to figure out your company’s cyber incident response plan is when you are in the middle of having to deal with an incident. Hiring smart people is a good start. But a bunch of smart folks in a room talking over each other with ideas of what should be done is less than productive. At a minimum, defining and documenting teams, team leaders, and procedures will make things run much more smoothly. Holding a tabletop exercise a few times a year will allow you to refine the plan and make it better with input from teams across the organization.

Tiger Team — During the Apollo 13 mission crisis, a Tiger Team was formed to approach the situation in an innovative and inclusive way. Identifying in advance these key folks across your organization, their skillsets, and their personal contact information will accelerate your ability to respond to your own “Houston, we have a problem.” In the event that infrastructure or collaboration capabilities are compromised, you may have to reach out to the team to enroll new mobile devices, use a new messaging product, or get a laptop shipped. I suggest having these team details stored in a secure location in the cloud or in physical form in your home office. Make sure team members also know the procedures to follow when the team is engaged via these out-of-band channels.

Are the bad guys gone? — How do you know? The reality is, you may never get definitive assurance that you found and removed everything without scorching the earth and starting over from scratch (not a pleasant option). Assuming that you did all your due diligence, put the appropriate monitoring in place, and your teams are vigilant…your outlook should be good. The lessons learned from the event should help you tighten things up initially, and your processes should mature over time if you continue to review and refine. After all is said and done, if you’ve coordinated your efforts evenly across prevention, detection, and response to this threat — utilizing all of the intel available — you should be stronger than you were to begin with.

Leadership! — Often taken for granted, leadership is key to surviving any stressful situation. In a crisis, people want to be told what to do, and they want to feel like they are doing the right thing. This is where your plan, and executing against it, will make or break you. People also expect communication to flow in a crisis. Knowing how to adjust your communications for an executive audience, for your response team, and for external customers is a skillset built with the wisdom that comes from experience. I am certain that leadership, provided before, during, and after this unprecedented event, will prove to be the best investment that can be made in a security program.

Good luck and onward, my friends.

— Jason Rader

National Director, Network and Cloud Security