

START HERE PODCAST: Cyber Public Policy Fundamentals

Welcome to "START HERE," the educational resource for public policymakers seeking to delve into the intricate world of cyber policy. In this evergreen series, we bring you the foundational knowledge you need to navigate the complex realm of cybersecurity with confidence. Join us as we sit down with top experts in the field, uncovering the essential principles and insights that shape cyber policy today.

NEW Episode!

Listen to our new episode: Episode 7 - Digital Identity is now live!

Listen Here >>

Drew Bagley, CIPP/E, is CrowdStrike’s Vice President and Counsel for Privacy and Cyber Policy, responsible for leading CrowdStrike’s data protection initiatives, privacy strategy and global policy engagement. He serves on the Europol Advisory Group on Internet Security, the U.S. Department of State’s International Digital Economy and Telecommunication Advisory Committee, and the DNS Abuse Institute’s Advisory Council. 


Megan Brown is Co-Chair of Wiley Rein LLP’s Privacy, Cyber and Data Governance practice and former senior DOJ official. She advises multinational companies and industries on complex cybersecurity and data privacy challenges, including risk management, incident response and reporting, compliance with emerging regulations, and government investigations.


Sasha Cohen O’Connell, Ph.D., is the Executive in Residence and Senior Professorial Lecturer at AU (SPA), where she teaches cyber policy at the undergraduate, graduate, and executive levels. Additionally, she serves as the Director of Programming and Curriculum at the Shahal M. Khan Institute. Prior to joining the university full-time, she spent the majority of her career at the FBI, where she served most recently as the organization's Chief Policy Advisor, Science and Technology, and as the Section Chief of the Office of National Policy.

MEET THE TEAM BEHIND THE POD

Josh Waldman, Wiley Rein, Content Development & Strategy

Erica Lemen, Wiley Rein, Audio Production

Tatiana Bienvenu & Kuhu Badgi, AU Khan Institute, Streaming & Web Resources

Molly Thornton, CrowdStrike, Graphics


Episode 1 - Welcome to START HERE

In the first episode of “START HERE”, Sasha O’Connell, Drew Bagley, and Megan Brown debut the podcast and explain why it is filling a gap in discussions about cybersecurity and policy.

Listen Here

Transcript: 

Sasha O’Connell

Hello and welcome to Cyber Policy Fundamentals, the Start Here series. My name is Sasha O'Connell and I'm a Senior Professorial Lecturer at American University, and I'm joined by Drew Bagley and Megan Brown from CrowdStrike and Wiley Rein, respectively. We are thrilled to be kicking off this audio resource series. Drew, do you want to introduce yourself and talk a bit about the gap we're hoping to address with this product?

Drew Bagley

Sure, thank you, Sasha. I'm Drew Bagley, and I'm the Vice President and Counsel of Privacy and Cyber Policy at CrowdStrike. And let me pass it to Megan to introduce herself.

Megan Brown

Hi, glad to be here. I'm Megan Brown. I'm a partner at Wiley Rein, a Washington, D.C. law firm, where I co-chair our Privacy, Cyber and Data Governance practice.

Sasha O’Connell

Awesome. Drew, can you talk a bit about what this project is all about and what gap we're trying to address?

Drew Bagley

Sure. We are kicking off the Start Here podcast series to address a gap we have seen in policy discussions. When we think about cyber policy, the focus is on trying to solve problems, but we also have to consider the process and incentives involved in solving those problems. And oftentimes, there can be faulty assumptions when we're talking about cyber policy because it's a complex issue. And faulty assumptions can lead to bad policy, like assuming that regulations are always good and going to solve the problem, or that the private sector is only looking to maximize profits, or that decisions are simpler than they actually are. There is also a lack of common language and fundamental understanding in this field, and that can lead to a lack of productive communication on policy, and these are very important issues. There's also a gap in fundamental educational resources on cyber policy. And ultimately, there's a disconnect between problems, the policies proposed to solve those problems, and the metrics for success in determining whether or not the policies put in place actually solve the problems they were purporting to solve. Cyber policy is about properly identifying the problem that needs to be solved and coming up with a solution that solves it. And the audience here, Sasha, for this podcast is anyone finding themselves needing to get up to speed on cyber policy topics. It's for you.

Sasha O’Connell

Perfect. And when we were thinking about this project and that audience, we also gave a lot of thought to the setup and layout and the way to best deliver this information so that it's super useful for folks who truly need a place to start on these complex topics. So, what we've landed on is short segments, 15- or 20-minute audio files, that are essentially mini classes that can be evergreen. So, as time goes by and new proposals come up, they can serve as that key educational resource or touch point to get up to speed on these issues. Our goal is to produce and deliver a twelve-episode starter pack in 2024, and each of these episodes, again, will serve as a primer, covering the history, people, and policy principles behind these key cybersecurity policy issues of the day. And to do that, our promise here is to use straightforward language, analogies, and storytelling to the extent we can, to try and get out of the weeds and get down to the key issues. Speaking of key issues, Megan, what are the topics? What's at the top of the agenda?

Megan Brown

Well, there is no shortage of topics to address in 2024. The three of us have been living with these policy issues for years now, and so the challenge really was narrowing it down and deciding where to start. So, I think our first episode is going to try and tackle cyber incident reporting mandates and kind of the pros and cons and the different equities there; but we also want to talk about ransomware and do a primer for folks, so they can understand what the buzzwords are, what the trade-offs are, and what's really going on for policymakers. There was a lot of discussion last year, and heading into 2024, about accountability and liability. Who should own this problem where you've got nation-state actors, but you've also got private choices that might have externalities? So, how do you encourage good behavior and set up the right incentives? There's been a lot of discussion about standards, whether it's at NIST or other organizations that put out best practices. Should the private sector face voluntary standards and encouraged best practices, or is it time to move to regulation and sort of nudge behavior? And then finally, kind of in the mix, is who's who: looking at the various regulators, the agencies, the incident responders, the private companies that are involved, as well as the policymakers, the congressional folks, the staffers, and the agencies who are confronting all of these questions that we hopefully will be able to unpack in this series. So, no shortage of things to talk about as we head into 2024, and we really look forward to helping folks approach some of these policy issues.

Sasha O’Connell

Absolutely. Drew, why are you fired up about this project?

Drew Bagley

Well, I'm fired up about this project because I think that the three of us really bring together a lot of different perspectives that often don't come together in policymaking, right? We have, obviously, academia, the private sector, and the cybersecurity world all brought together, and I think that, ultimately, when we think about even the basic questions that need to be asked when proposing cyber policy and trying to solve cyber policy problems, it really is important to understand certain fundamentals and to think about all the different intended and unintended consequences that policies can have. So, I think that this series is going to give us a unique opportunity to really get into those issues.

Sasha O’Connell

I agree, and from my perspective, sort of doing cyber education full-time now in my current role, I know there's a real gap in resources that do exactly what you described. So, I'm excited to get it out to audiences, be they educators who want to use them in the classroom, policymakers who have a new role or new responsibilities, or anyone who all of a sudden needs to take on these issues and needs to find a place to start. Megan, what do you think?

Megan Brown

I think it's exciting because Drew is working at CrowdStrike, so you're seeing the tactical, in-the-trenches kind of trends and can speak to what is coming at policymakers and organizations. I've worked at the Department of Justice and currently advise companies on compliance and policy, from incident response to reporting to DHS and their regulators, and the perspective that I hope we can offer is that practical, not theoretical, perspective, to sort of say, what does this really look like? And some folks in the policy world might not have been in the chair that Drew sits in to understand what some of these choices actually mean.

Sasha O’Connell

Absolutely. Well, thank you all for joining us. We look forward to seeing you at our next episode on Incident Reporting, and please also check out our affiliated website for Start Here. We will drop the link for the website in the show notes. There will be additional resources and other materials associated with the topics covered. See you there!

Episode 2 - Incident Reporting Part One

In the second episode of “Start Here”, Sasha O’Connell, Drew Bagley, and Megan Brown discuss cyber incident reporting. They cover state and federal mandates and proposals, including the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), and discuss the tradeoffs of reporting from both the public and private perspectives.

Cyber Incident Reporting: When an organization experiences a cybersecurity incident and reports it, voluntarily or by mandate.

Listen Here

Key Questions:

  • What are the tradeoffs associated with making incident reporting mandatory? 
  • What is the history of incident reporting, and how has it evolved? 
  • How do California's 2002 data breach reporting obligation and HIPAA relate to incident reporting?
  • What are the pros and cons of implementing mandatory incident reporting?
  • Which agencies should receive incident reports?
  • Should incident reports be in the public or private domain?
  • What level of detail do incident reports require?
  • What should the timing of reporting look like?

Extra Resources: 

Transcript

Sasha O’Connell

Welcome back to Cyber Policy Fundamentals, the Start Here series. My name is Sasha O'Connell, I'm a Senior Professorial Lecturer at American University, and I'm joined by my colleagues, Drew Bagley and Megan Brown from CrowdStrike and Wiley Rein, respectively. We're super excited to launch this episode and dig right into incident reporting. Mandatory cybersecurity incident reporting is a hot topic these days and an ongoing policy discussion at many levels. There are numerous proposals and new mandates coming online, both at the federal and state level. In this episode, we want to break down the fundamentals, assumptions, and tradeoffs so that you can evaluate various proposals and approaches as they come online. So, we're going to start with: what is this policy issue? And to start, let me offer, from where I sit, that the policy issue most simply is whether cybersecurity incident reporting, so the reporting of cybersecurity incidents, should be mandatory and, if so, to whom. And that's really the challenge that policymakers are struggling with as they formulate those rules. Megan, is that right? Is that the place to start?

Megan Brown

Yeah, I think it is. I mean, there's a lot of different tradeoffs, but I think a fundamental principle to keep in mind is that cyberattacks and cyber breaches are fundamentally criminal activities, right? If a major company is subject to a major incident, someone has done a very bad thing and violated several federal laws at a minimum, and it is unusual in our society and sort of civil justice system to require victims of crime to report that crime. In other circumstances, if you think of victims of robbery, identity theft, physical violence, there's not some overarching mandate that says all crime victims have to run to the police or tell anybody. That might be a good thing; it might be nice if people did in certain circumstances, but that's not the assumption when you're the victim of a crime. And so, I think sometimes these discussions forget that private organizations, and governments, frankly, are victims of crimes when we're talking about cyber incidents.

Sasha O’Connell

Absolutely. That is really good context, and I think that analogy to kinetic or real-world crimes is an important one to keep in mind: if there is going to be a change from that tradition, is the justification there? Drew, is there historical context here for cyber incident reporting? What's your thought on that?

Drew Bagley

Absolutely. Cyber incident reporting isn't all that new, especially when we think about the broader family of data breach reporting obligations. So, beginning about 20 years ago, or actually a little more than 20 years ago, in 2002, California passed the first state data breach reporting obligation. The Health Insurance Portability and Accountability Act, also known as HIPAA, was amended in 2003 with its security rule, and with those requirements, what you really had was the beginning of an impact-driven regime. In other words, where there was some sort of impact on victims and the type of data was the type that could be used, say, to perpetrate identity theft, that is, sensitive personal information known as PII, then that sort of personal information breach would need to be reported. It also mattered how many victims there were, what the circumstances were, and whatnot, and these early data breach reporting requirements had very long lead times from when you would discover an incident to when you needed to tell your regulator. And so, what we've seen over the past few decades is that things have gotten more industry specific. The definition of what an incident or breach is has changed, and it really applies in some ways to a broader category of data. And at the federal level, we haven't seen one overarching federal requirement. Instead, we've seen more of these industry-specific approaches. In the same way HIPAA was focused on health care, we now have reporting obligations for critical infrastructure, and we have reporting obligations for the financial sector and other sectors.

Sasha O’Connell

Awesome. I think that is super important context. So maybe next we can revisit this idea of defining terms real quickly, then talk a bit more about what the real problem is that policymakers are trying to solve. Again, Megan, to your point, if this isn't how we do it generally, right, why is cybersecurity different, what problem are we trying to solve, and what are some of the factors that make it particularly tricky? So just going back to the definitions real quick. As I mentioned, incident reporting is when an organization experiences a cybersecurity incident and reports it, generally to a government agency, and it comes in two forms. It can be voluntary, where the organization decides for itself whether it tells someone, or mandatory, where a law, regulation, or contract requires the organization to notify the government. Is that a fair description? Am I forgetting anything, just to level set on what we're talking about here?

Megan Brown

Yeah, no, I think that's right. There also could be, as Drew mentioned earlier, sort of data-driven requirements, right? You may have to notify consumers or others whose data was affected; but, yeah, that's what we're talking about.

Sasha O’Connell

Perfect. So that's what we're talking about. Why is this different? What problem is created in cyber that, to your point, Megan, is not created in other families of crime and reporting? What problem is the government trying to solve with this intervention, Megan?

Megan Brown

Yeah, so for years, on top of the few mandatory regimes that Drew talked about, it's mainly been a system of voluntary reporting of cyber incidents, and Congress has done a few things to try and encourage that. Call the FBI, call DHS. They've enacted statutes to make it easier to do that and to protect companies who do that. And there's a lot of reasons, frankly, why the victims of a cybersecurity incident might not want to go public or might not want to call the government, hesitation about having the FBI in your system. The FBI has done a great job over the past, say, 5 to 10 years of changing that culture, but it's heady to pick up the phone and call a law enforcement agency and ask them to sort of get in your business. In addition, you might have an incident that doesn't really have strong indications of harm to consumers or likely identity theft or systemic impacts. Or you might not know the impacts early on; investigations can often take weeks or months to really figure things out. The victim might not want to tell the government because of regulatory risk. They might not want to tell the public or other companies because of brand impacts as well as class action litigation exposure, or, as we saw after Colonial Pipeline, your CEO gets hauled before Congress and yelled at for any missteps that might have been made. And we've seen this: there is a risk of revictimization after you go public, if reporting happens before your systems are secured or before you know what's going on, and some of the ransomware bad actors will even double dip. So, I see a lot of reasonable hesitancy to make certain incident reports both to the government and to the public.

Sasha O’Connell

Okay, I'm officially not convinced then, right? Drew, what's the argument on the other side? What are proponents of mandating reporting arguing? What problem are they trying to solve?

Drew Bagley

Yeah, supporters of reporting mandates are generally citing several different problems that they're trying to solve, and it really depends on the type of reporting requirement, the sector they're focused on, and whatnot. So, if we take all of that together, the list of some of the most common policy aims would be, first of all, raising awareness of some of these data breach incidents or cyber incidents. In theory, what that means is that if victims know this sort of thing has happened, that helps them better secure their own data. That also helps them think more about who they're going to trust their data with, which then, in theory, can create incentives for companies to bolster their security and make sure that they're protecting data better. There are other arguments depending on the government agency involved in supporting certain types of requirements; there are investigation and enforcement equities at stake. If the government doesn't know about these incidents, then the government doesn't have the information to investigate them, and the government doesn't have the ability to use its resources at scale to potentially disrupt adversaries who may be behind these types of attacks, or to enforce against companies, or even other government agencies, that might not be responsible stewards of data. And so there really is this enforcement angle: if you have laws to protect data to begin with, how are you going to be able to enforce those if you're not aware of when there have been lapses? And then there are some incentives for those companies themselves. If a company or a government agency is aware that it would be publicly exposed in some sort of way, or obligated to tell a regulator or the victims about a cyber incident, then that's going to further incentivize it to devote more resources to protecting its data, so that a breach or a cyber incident doesn't happen to begin with. We certainly see this a lot today in some of the ideas behind modern legislation, where you don't just see a notification obligation, but you also see an obligation to protect data with reasonable measures to begin with, something that's intended to change over time. And then there's also this notion that if you have a bunch of these obligations, and if they're broad enough, you can change the culture overall. So, a couple decades ago, for example, a Social Security number was something that was much more commonly used as an identifier for things that have nothing to do with Social Security or filing taxes, and now, of course, we've seen a migration into other forms of identity for various applications. So there really can be this change of culture when certain categories of data are deemed to be regulated and other categories are less regulated and have fewer penalties attached. And then, ultimately, there are arguments that having some of these obligations in place, requiring this type of data to be produced on cyber incidents and data breaches, can better inform policy as a whole.
So, for example, if you're seeing lots of different patterns that are consistent with certain types of cyber incidents, and they're all coming from a particular threat actor, or using a particular payment method, or targeting a particular sector, that can inform policymakers as to where they should devote their energies and attempt to come up with some sort of new policy solution that, instead of focusing merely on the reporting, focuses on stopping the cause of the problem to begin with.

Sasha O’Connell

It's so interesting, and hearing you both lay those out, it's clear why this is an evergreen issue, right? There's a lot to contemplate in terms of costs and benefits on both sides of the argument. I would say, maybe in summary, and I'm curious what you guys think, the reason this is particularly challenging, or what makes this so hard to nail down and define, falls into a couple of priority buckets. One is that there are clearly, as you both said, complexities and tradeoffs for reporting mandates, because victims and investigations could be harmed by over-reporting or premature reporting, and that's not even necessarily obvious on the investigation side. Additionally, consumers might suffer from notice fatigue, so over-reporting can be harmful in that way too, particularly for organizations or companies that are in a regulated space. And then I hear you guys talking about, too, the move from voluntary to mandatory and its impacts and complexities for the relationship between the private sector and government, the trust piece there, and how that works both positively and negatively. Before we move on to laying out the key players, any other thoughts, you guys, on what makes this particularly tricky, given the arguments you both laid out?

Megan Brown

I like that you just introduced that concept of trust, Sasha, and I think we'll come back to it. The relationship so far in cyber has frequently, but not always, been built on trying to create the trust that makes companies want to call the FBI, or call DHS, or even call their regulator, and there may be downsides to some of these new mandates, because with mandates come accountability and enforcement. So that's, I think, one of the big things that policymakers have to balance, and you're seeing that in a lot of the regulatory comment cycles that are ongoing.

Drew Bagley

Yeah. I think another difficult issue to grapple with in some of these reporting requirements is that we've moved away from an era in which there was a long lead time, to one in which there's an expectation to report nearly immediately. And granted, there definitely are thresholds, thresholds for determining which incidents are high impact, making a materiality determination, etc., but ultimately, in those first few hours and days after there's some sort of data breach, in modern times you're usually dealing with a cyber incident, and usually the priority is to mitigate that incident, stop the bleeding, and ensure that more data is not going to get out. And so sometimes I think that there are many equities and interests at stake with regard to the breach reporting itself, but this isn't done within a vacuum. I think that's really important to remember.

Sasha O’Connell

Yeah, that's great. So, in addition to this idea of voluntary versus mandatory, you're pointing out two other things: how big a breach or cybersecurity incident needs to be before it must be reported, this idea of materiality, said another way, and then, you've mentioned a couple of times, I think it's really important, the time aspect. How long do victims have to report? So those are three important things that need to be considered when thinking about any of these policies going forward. That's super helpful. Let's turn for a second to the key players here and their equities and interests. I think it really helps to understand the context of these discussions to understand where people are coming from and what their incentives and equities are when they come to the debate. Drew, can you start with the private sector side? I know that's a big task. Can you sum up real quick how the private sector feels on this? What are some key points there?

Drew Bagley

Yeah, absolutely. If we look back at the history of data breach reporting, all 50 states, of course, have data breach reporting laws, but now what we've layered upon that are many industry-specific reporting obligations. So now you have many larger organizations in the private sector that really fall into multiple categories. They have a Venn diagram of reporting obligations whenever there is an incident, and there's no consistency over even how those incidents are defined or what the timelines are for reporting them. So, I think that when you think about where the private sector is with anything in the regulatory space, they're always looking for consistency. You can resource around consistency, design processes around consistency, and then comply with the law, and for organizations that don't, there can be enforcement penalties. But where you make things very confusing and layered, it's very hard for organizations to comply with the law, especially smaller ones, when they have potentially competing legal standards. So I think consistency is one of the key things that the private sector is interested in.

Sasha O’Connell

That's perfect. And then, thinking about it from a consumer perspective, or an advocacy, civil society perspective, I think it's important to consider the dual piece of being concerned about one's personal data, security, and privacy, and how those things come out in the wash in terms of balancing protecting one's data and getting that notification against the right to control one's data when it's reported to government and where that data goes. So that's a whole other piece that I think is very much in the mix. Megan, what do you think about both of those, and also, can you talk about the government's perspective real quick?

Megan Brown

Yeah, sure. I think on the consumer side, one element that we've seen is not just an interest among consumers in the security of their data, but also in the reliability of the services that they have come to expect, and so I think you've seen a shift in how the government thinks about the private sector's duties. I think we've moved beyond, frankly, some of the data-focused regulations, and now we're talking about resiliency, availability, etc.; really important services. That's why there's this focus on critical infrastructure. From the government's perspective, there are a lot of actors, and whenever I think about who the key players are in government, because I'm a lawyer, I also think about what their authorities are. Have they been told to go do this? So we have a few key players. The FBI has been doing this for a really long time. The Department of Homeland Security is now a major player, and I think that creates some friction with the FBI in policy decisions, but DHS has the Cybersecurity and Infrastructure Security Agency, which has been given a bunch of new authorities from Congress. They're supposed to be kind of the belly button of the civilian critical infrastructure relationship with government, but you have these other regulators who layer on top, from the Federal Communications Commission to the Federal Trade Commission to the Transportation Security Administration, the EPA, and the securities regulators. They are all doing things on cyber, and I think everyone in the policy world needs to think about what they are doing, how it relates to all the other activities, and what their authorities are. And then finally, another huge player in this space is the state legislatures and regulators. Because, as Drew mentioned, there's not an overarching federal privacy or cyber regime right now, the states are kind of off to the races and creating some additional complexities. California is one, the New York Department of Financial Services is another, but there's a whole bunch. So that's kind of, as you like to call it, Sasha, the who's who in the zoo. From my perspective, on the federal side, it is an ever-expanding set of characters, unfortunately.

Sasha O’Connell

Absolutely. Thank you both for your help breaking this all down. While many know about current or pending incident reporting requirements or proposals, unfortunately, I don't think we often stop and think about the definitions, historical context, and real problems or issues we're trying to solve, as we've covered here. Because we want to take some time on this topic to dig into some of the proposed interventions more specifically, we're going to reconvene for another episode of Start Here to do just that. Thanks for joining us. We look forward to seeing you next time when we continue this conversation, and please be sure to follow the link to our website, available in the show notes, to access additional resources on this and other related topics.

Episode 3 - Incident Reporting Part Two

In the third episode of “START HERE”, Sasha O’Connell, Drew Bagley, and Megan Brown continue the discussion of cyber incident reporting, digging deeper to discuss the main aspects of proposed mandates and new government approaches. This episode addresses the state data breach reporting landscape, new laws like the Cyber Incident Reporting for Critical Infrastructure Act, and new rules at the Securities and Exchange Commission. Sasha, Drew, and Megan discuss hard operational questions, including whether reporting should be public or confidential, timelines for reporting (and tradeoffs of speed versus accuracy), and how reporting mandates can put victims at further risk.

Listen Here

Key Questions: 

  • To whom should reporting be required?
  • How can we achieve uniformity in reporting regulations?
  • Should reports be public or confidential?
  • What should the timing standard be for reporting?
  • What data is used to decide reporting standards?

Transcript

Sasha O’Connell

Welcome back to Cyber Policy Fundamentals, the Start Here series. My name is Sasha O'Connell and I'm a Senior Professorial Lecturer at American University, and I'm joined by my colleagues Drew Bagley and Megan Brown from CrowdStrike and Wiley Rein, respectively. Today we're going to continue the conversation about Incident Reporting Policy. If you're looking for history and context on this topic, I would recommend a listen to Episode 2 first, where we dig into those issues, because today we're going to jump right into policy choices currently under consideration. And with that Drew, can you kick us off?

Drew Bagley

Sure. Well, today there are state data breach reporting obligations in all 50 states. There is no federal equivalent, no federal standard, that supersedes those 50 independent laws. Now, those laws have obviously developed over the course of a similar time period, so many of them are similar in their requirements and in how they define what type of data needs to be reported. But then layered on top of that, in some states and certainly at the federal level, are sector-specific reporting obligations, and that's where the definitions of what needs to be reported change, but also the thresholds of when something needs to be reported. In other words, you could have some sort of incident where you have a data leak, data loss, or data breach, but it doesn't rise to the threshold of what needs to be reported, and that is something that varies greatly across different laws. And then on top of that, we have now seen newer requirements through the rulemaking process, such as by the SEC, for publicly traded companies to report material cybersecurity incidents. That means that you have some requirements where the duty is to either tell a regulator or directly inform a victim of some sort of data breach or cyber incident, and you have a newer requirement where there's a duty to tell the world, and so that's a bit different as well.

During this time too, because you have 50 different state regulators, and then different government agencies that regulate different sectors, even how you report a cyber incident or data breach is different, in terms of what form you fill out, what information is required, who you send it to, and when you need to turn over that information and make that notification. So, for example, something like HIPAA, going back to the early days, that standard is still in place, and that's a 60-day notification window, whereas there are now some that are as short as 24 hours. In addition to that, we have seen some discussions in Washington and some efforts to try to picture what harmonization would look like. So, for example, if we take one of the more recent requirements, CIRCIA, the Cyber Incident Reporting for Critical Infrastructure Act: one of the things that law created, in addition to new mandates for critical infrastructure, was something known as the Cyber Incident Reporting Council. That council is chaired by DHS and recently came out with a report on what harmonization could, should, and would look like in terms of breach reporting, so that even if there were agencies with completely different remits and completely different equities, there could at least be a way in which a victim could report to one place and have their data then sent to the appropriate venue, or venues, depending on how they were regulated.

So we certainly do see an appetite for harmonization, but no clear path at this time.

Sasha O’Connell

So, there are 50 different versions of this, plus the new requirements coming out of CIRCIA and the SEC, addressing critical infrastructure and publicly traded companies, respectively. And this is all a bit of a jumble in terms of the specifics, and we could use some harmonization, perhaps through federal law, but that doesn't seem to be happening. Do we know what the issue is here in terms of getting this all aligned?

Megan Brown

Yeah. I think we've identified at least six that have come up in the development of the law that Drew was describing, CIRCIA, that Congress had to grapple with, but also that DHS and CISA are going to have to – they're resolving right now in many respects, as they implement that law. But the first big question we stumble upon, I think, is to whom should reporting be made, if you're going to have mandates? And one major policy debate underlying that legislation was the role of DHS versus the FBI. Does a mandate to report to CISA undermine voluntary reporting to the FBI, or should the FBI have some sort of more robust role? And I think that's a hard question. Congress resolved it in CIRCIA: major incidents are going to be reported to DHS and CISA. Related to that, should regulators get the information that is reported? Should they have their own mandates? Should they rely on DHS, or should DHS keep that information kind of siloed for operational uses and not use it for regulatory enforcement type things? And related to those questions: does the agency receiving reports, whether it's TSA or the Environmental Protection Agency or DHS, have the capacity and capability to do something meaningful with the reporting? Those are kind of the key policy questions about that first question, which is, to whom should reporting be required?

Sasha O’Connell

And Megan, what's your thought just on that, to finish up "to whom"? Drew also raised this question of public versus confidential. Right? If you're reporting, is that report protected in that way? Or, again, we saw this with the FCC and maybe some other things coming forward, that requirement to make it public too. Right? That adds another complexity.

Megan Brown

I personally think it does add a lot of complexity, and I think policymakers are grappling with that. We saw in the regulatory process at the Securities and Exchange Commission, basically the private sector crying out for less public reporting or at least less public fast reporting, which we'll talk about timing shortly. But this question of, should you report to a law enforcement agency who can operationally help find bad guys, or are you reporting publicly for some other purpose? It goes back to that threshold question Drew identified at the outset, kind of what is the purpose of the reporting? And that is going to be an evergreen question. Whenever a reporting mandate is being discussed, I think policymakers and regulated entities need to think about public or confidential. What are the benefits and tradeoffs of each of those?

Sasha O’Connell

Okay. So, we've got to whom – and we've got a bit of discussion about then what the receiving entity does with it or doesn't do with it. Right? So, and I interrupted you there, but what about the question of what to actually report? Right? And then that triggering mechanism, what triggers it and the timing? I think those are the outstanding issues.

Megan Brown

So, I think in terms of what you report, this is going to be a real challenge for regulators across settings: how detailed do you want reports to be, and what is the purpose of the information you're demanding? I think there can be a tendency to want more and more information because someone might find it useful or interesting, but I think policymakers need to ask themselves what that trade-off is about. The more information you demand, the harder the report is going to be to do quickly and accurately. And then what is going to be the use of that information? You've seen in a lot of comments the private sector encouraging actionable, timely information. And is the government going to do anything with it that will help the private sector, or are they just pulling in a lot of information that may not actually be actionable? I think Drew had some thoughts on some prior legislation.

Drew Bagley

Yeah. What we've certainly seen is that there is a government interest in knowing, especially at a macro level, at a trend level, what sorts of threat actors are posing a threat to companies in the United States, as well as to government in the United States and victims as a whole, so that they can understand where government resources are appropriate to address those, and where victim self-help is the most appropriate way to defensively protect against those. And so, years ago, there was legislation passed that went into effect in 2015 that allowed, and even created conditions to encourage, the sharing of certain information with the government, with DHS; so, cyber threat indicators and defensive measures.

But what we've seen is that that's specific, of course, to sharing with one agency, and there certainly is always skepticism from the private sector in sharing certain types of information with government agencies, because government agencies, depending on the agency, also have the ability to enforce fines against the company. They have the ability to enforce criminal penalties and whatnot, or just even raise costs by creating more process in the form of more information requests, subpoenas, and the like. So, that's where I think it's really important, to Megan's point, to be really thoughtful in terms of what is actually necessary in terms of information sharing. How is that information going to be used? But also, what is the benefit for the victim organization sharing that information? Is it that over time they're going to get some information back that helps them be more secure? Is it that there's some sort of benefit from an immunity perspective with regard to what else could happen with that information being shared, and whatnot? So, that's where those sorts of incentive structures are really at work in this realm.

Sasha O’Connell

That makes sense. Before we sum up, let me throw back on the table this idea of timing too. In addition to everything that's been put out there for consideration, Congress seems to like this idea of 72 hours for reporting. Any quick thoughts on that? I know you guys have been involved in real-life incident reporting. Give us some context for 72 hours. What's happening in response? Is that reasonable? And where do you see the conversation about timing going? Megan, I know you've got thoughts on this one.

Megan Brown

So, yeah, Sasha, there are various timeframes being adopted and considered for incident reporting. Some are really in contrast to existing state data breach laws that may provide that you have to notify a state AG in a “reasonable time” or “as soon as practicable.” We're seeing now more rigid and shorter time frames. Congress seems to like the 72 hours. That's in the new incident reporting law for critical infrastructure. I find 72 hours to be a bit arbitrary. Why that number? Did anybody study the benefits of that as opposed to another period, perhaps longer? The other federal proposals on the table right now range from 8 hours for federal contractors, which I think is frankly wildly unrealistic; some regimes from other agencies require 24 hours. The FCC, for its part, just recently adopted a data breach reporting rule, and it requires reporting “without unreasonable delay and in no case later than 30 days.” So you can see there's a lot going on in this space, and the Cyber Incident Reporting Council that Congress set up at DHS to look at these different incident reporting obligations issued a report in 2023, which hopefully we can include in the resources with this podcast; it identified 45 different federal incident reporting requirements administered by 22 federal agencies, and that's just the federal ones. Other countries are also adopting some pretty aggressive and unrealistic time frames that affect critical infrastructure, and some of those are less than 24 hours, and that's really hard for companies that are multinational. The challenge, as you've alluded to, is that at 72 hours after you've decided that an incident has happened, you have spun up an incident response team, Drew does this regularly with his clients, and you are probably still in the very early stages of figuring out what happened and what your recovery plan looks like. Do you have backups? Are they accessible? Have you figured out where they are in your system, and whether there's persistent activity? You might not have even started to think about root cause analysis. From my personal experience, 72 hours is very fast, because there are a lot of unknowns, and you're going to be expecting companies to tell the government things. People care deeply about being accurate when they're speaking to the government. So, it means that you are involving lawyers, and you are taking time away from your incident responders to validate what you're going to tell the government, and that's just a real challenge. So, from my perspective, 72 hours is real fast, but that's what we're going to be stuck with for certain reporting regimes. And I really hope the government calibrates their – I guess, the government will need to have some recognition of tradeoffs. Drew, what do you think?

Drew Bagley

I completely agree. I mean, in those immediate hours, the priority is always going to be to mitigate the risk, whatever it is, just like with any other risk mitigation practice. And so with a cyber incident, part of that is determining whether or not you actually have full visibility into the incident. Oftentimes you might have some indication that there was some sort of infiltration into a computer, but you might not know right away, and for a bit of time, where else on the network the adversary may be.

And I think that's really important to remember. A lot of these cyber incident reporting requirements are predicated on the notion that a cyber incident is a single moment in time. If we look at some of the more modern trends, like data leak extortion, that's certainly not the case, right? A victim can be revictimized over and over again. And so, what I think is much more important than even nailing down a particular time window is making sure that the threshold is right. In other words, 72 hours might not be an impossible standard to meet if that 72-hour clock does not begin until you actually have a handle on the cyber incident itself, you've mitigated the risk, and then can spend those resources on getting a timely report. Whereas the 72 hours can be very disruptive if the threshold for reporting, and when the clock starts, is much more vague and broad, something that ensures the victim has to report while they're still putting out the fire. I think that's a good way of thinking about it: if your building was on fire, you'd want to put out the fire first, then write up the report on how the fire happened. I think that should really be the intention here. So, if we look historically at HIPAA, there's a 60-day reporting timeline, and the HIPAA security rule has been in place for 20 years, and I don't know that we can say that hospitals are no longer under attack or are more secure because we have that reporting rule in place, or that the health sector is still under attack because the reporting requirement is 60 days instead of 72. Right? I don't think either is true.

Sasha O’Connell

It makes sense, and it really speaks to my last question for both of you, on behalf of the listener who is new to this. There's so much to think about, right? We have the history, the definitions, we have this patchwork, context, efforts at reconciliation and alignment, and then all of these sub-issues within any of the individual policies. In the end, if I were new to this, I'd be wondering: do we know what works? To your point, Drew, right? Are there cost-benefit analyses that have come out? Do we have data on what works given the different policy objectives? Megan, you do this all the time. Is there a place to go to figure out what works? What did Congress use when they were deciding these things for CIRCIA, for example?

Megan Brown

Well, one thing that I think people should stop and sort of scratch their heads about is the lack of data about the effectiveness of some of the prior reporting regimes, or analysis of the uses of that data and its utility. I don't believe there has been an overarching review of what is good and bad in existing reporting. We have not just HIPAA, which Drew mentioned; we've had several years of Department of Defense mandatory reporting under some clauses there that are fast, and that's one of the places the 72 hours came from. And I don't know that folks have looked back and said, has good information come from that? Has DOD been able to help the private sector with that information? Likewise, on the 2015 law Drew mentioned, which was about voluntary sharing of cyber threat indicators, a recent report came out from the intelligence community that suggested that's good stuff, it's effective. But I don't think Congress looked at those precedents to say, how can we build on what has worked and improve what hasn't? They kind of moved to this CIRCIA law assuming that reporting will be beneficial. I think that's a policy blind spot, and people should really think about what the past experience with these regimes is and what we can learn from it, because I don't know that the data would support a 72-hour threshold as being particularly beneficial, but that's what we've got in the law.

Sasha O’Connell

Perhaps a great project for any researchers and academics in our listening community to take on and help our policymakers with. Drew, any thoughts on that before we wrap?

Drew Bagley

Sure. I think with some of these reporting requirements in modern times, we've seen that some regulator somewhere, take 72 hours, for example, which goes back to the European Union and the GDPR, came up with a theory behind a number, or picked an arbitrary number, and then others, for purposes of pure consistency, have piled on, without going back to ask the very questions that Megan has pointed out, as to whether that number makes sense to begin with or whether we should all have a new standard. I think that's really important. I think the other thing is, for some of the information that's out there and been reported, it might be that that information could be really useful if certain actions were taken with it. So, for example, when the private sector is voluntarily sharing information with the government, it might be that the government could use that information in actionable ways to disrupt e-crime actors and their infrastructure. And so, for some of these regimes, we might think of the reporting and not think of an equivalent action that could be taken, and for others, we might say, actually, the part that's missing here is the impetus to use this data in the sort of way that would benefit victims as a whole. So, I always think it's really important to ask: where is there a realm where the government is uniquely situated to do something in cyber, versus where is it that victims need to improve their own defenses, get better about protecting data, and be better about transparency in how they're protecting that data? So, I think that we should never assume that there's any single silver bullet solution for any of this.

Sasha O’Connell

Perfect. I think that just about sums up our laydown on incident reporting. Right? The complexities, the history, and certainly the ongoing activity here make it super relevant. So, with that, we're going to wrap our episode. We hope everybody visits us on the Start Here website, the link will be in the show notes, to see additional resources and to access the transcript of this episode for reference. We hope to see you next time, when we are going to tackle ransomware and other extortion challenges. So, Drew, Megan, thanks so much for joining me, and we'll see you next time.

Episode 4 - Ransomware Part One

In the fourth episode of "START HERE," Sasha O’Connell, Drew Bagley, and Megan Brown delve into the alarming world of ransomware and extortion schemes. This episode addresses the evolution of ransomware and the approaches that organizations often take in response to such threats. Sasha, Drew, and Megan discuss how ransomware affects operational technology and the Colonial Pipeline ransomware attack of 2021. 

Listen Here

Key Questions:

  • What is ransomware?
  • What is operational technology and how is it affected by ransomware attacks?
  • What goes into the decision to pay ransoms?
  • What was the attack on Colonial Pipeline? What can we learn from it?

Extra Resources: 

Transcript: 

Sasha O’Connell

Welcome back to Start Here. In this series of podcasts, we are working to give you, our audience, a framework for analyzing foundational cyber policy questions. In our previous episode, we looked at the policy context, history, and choices surrounding the potential to mandate cyber incident reporting. To follow up on that conversation, I'm joined today by Drew Bagley, Vice President and Counsel, Privacy and Cyber Policy at CrowdStrike, and Megan Brown, partner at Wiley and co-chair of the firm's Privacy, Cyber and Data Governance practice, to take on our next topic: breaking down and exploring ransomware and other extortion challenges. Before we start, or actually, maybe this is a great starting place for providing context for a discussion about ransomware and extortion schemes: let's talk briefly about how you both think cyber policymakers should prioritize issues when the underlying technology or adversarial tactics change so quickly.

Megan, what do you think?

Megan Brown

So I really think this is an important topic, and it highlights a fundamental question for policymakers that applies not just to ransomware and extortion threats, but to other questions across the cyber landscape, the privacy landscape, and others, which is: how should policymakers think about particular types of threats when we see, over and over, for the past two decades at least, that threats and tactics change? So, I think about it as: should Congress or federal agencies be kind of running after specific threats that might be in the news, or should they instead focus on some fundamental principles that might have wider applicability, might be more durable, and don't become quickly obsolete?

Sasha O’Connell

Drew?

Drew Bagley

Ransomware is a great example of Megan's point about policymakers sometimes chasing either a bespoke problem or a timely problem of today, but not necessarily one that's going to be a problem we still need solved the same way in three years, or where the way we're solving today's problem is going to solve future problems. So, when you think about ransomware, it's obviously been all over the front pages for the past several years. However, even as we look at that phenomenon, adversaries have actually changed their tactics in recent years. For example, data leak extortion is now becoming much more of a threat than ransomware in and of itself. By analogy, if you think about it, malware was the hot topic a decade and a half ago, and, therefore, a lot of effort was spent trying to solve the malware problem, both from a technology standpoint and from a policy standpoint. Fast forward to today: there are still certain standards that were designed to address malware, whether or not a file on a computer is malicious, something binary, and yet today, what we see is that actually 71 percent of attacks don't use malware, and even with the data we've seen at CrowdStrike, 80 percent of attacks are actually using legitimate, real credentials. So just with that example alone, tactics have changed, and therefore, the policy means to address them need to change too.

Sasha O’Connell

That makes sense. So just to summarize the context: ransomware is perhaps one version of something that continues to evolve, and thus, what you're both saying is that the important thing for policymakers is to focus on key principles, not so much the specific implementation or threat activity of the day; tailoring interventions more broadly and less specifically makes a lot of sense. With that context in mind, and you started this, Drew, can I ask you: let's start on ransomware. What is it?

Drew Bagley

CISA and the FBI's Joint Ransomware Task Force has a great definition for ransomware. They define it as a form of malware designed to encrypt files on a device, rendering them and the systems that rely on them unusable. The reason why I like that definition is that it is not specific to what an adversary might do after encrypting the files. So, in other words, an adversary may use ransomware for its namesake, locking up files and offering to release them for a ransom, or conversely, the adversary can use ransomware for purely destructive purposes, and we've certainly seen that before, not necessarily asking for something monetary in return. This is helpful in understanding that ransomware is only one form of a broader trend of extortion. For example, in today's trend of data leak extortion, ransomware may be used in addition to file exfiltration, or pure file exfiltration may take place without the use of ransomware. In other words, use of ransomware is no longer as simple as an adversary locking up files, asking for payment, and then potentially unlocking those files in exchange for payment. This means ransomware may be tied to double extortion: asking for a payment to unlock files, and subsequently asking for a payment not to leak or further disclose information. So, I believe the ransomware definition is very helpful, but it's also important to remember that extortion is actually the broad policy problem that we're trying to solve, and ransomware is just one part of it.

Sasha O’Connell

Megan, from the victim's perspective, can you talk us through what people see? I assume that in this era of ransomware as a service, we're not still seeing requests to mail checks to PO boxes. Can you talk a little bit about what it looks like on the receiving end? I know you've worked with clients in that situation.

Megan Brown

You know, the paradigm example is an employee tries to log on to their account or workstation and a banner pops up: “Hi. Your network has been penetrated and your files have been encrypted. To recover your files, send a hundred thousand dollars in Bitcoin to the following address.” Or the security team gets reports that databases just aren't available, and when they start investigating, they find a ransom note embedded in other systems; we've seen artifacts like that in the past. Or executives receive a note that says, we have encrypted your files, here's a screenshot of a file tree, and to get the keys to unlock your data or your system, you have to go to this website on the dark web, on Tor, and prepare to pay us $10,000,000 in digital currency. Drew's team at CrowdStrike has put together some examples that they make available on the website, so maybe you can look at some of those on our resources page. Some of them are very sophisticated. Some are full of typos and seem like something Saturday Night Live or The Onion would put out. It varies, but that's kind of the game, and then that sets off a whole array of choices and playbooks that have to get executed.

Sasha O’Connell

And what are organizations thinking about? What does this mean in those playbooks? What are people balancing and thinking about when that goes down in an organization?

Megan Brown

Well, I've seen an array of reactions and challenges, and Drew's probably seen far more, but every hour that your system is inaccessible, you either can't conduct a part of your business or can't provide some service (hopefully not all of your service), or you're wondering what data has potentially been exfiltrated, and this happens to retailers, hospitals, small businesses. If you're a medical facility or a doctor's office, for example, you might have to work with backup systems or figure out what paper records you have. If the incident is affecting your operational technology, maybe a factory has to shut down or critical services have to be paused because the security team doesn't know the extent of the intrusion, so they're trying to triage: how do we contain the damage, and then think about remediation or getting backups restored?

Sasha O’Connell

Megan, let me actually ask you to just pause for a second. Can you say a little bit more about what you mean by operational technology in this context?

Megan Brown

Yeah, sure. Great point. Operational technology is different from information technology, and as government regulators think about regulation here, they sometimes distinguish between operational technology, OT, and IT. There aren't clean, universal definitions, but you can generally think of operational technology as what makes a service operate or go. In the Colonial Pipeline example, it was making sure that fuel could get where it needed to go. It makes trains run on time; cranes operate. By contrast, IT is more about the back-end business systems, email and billing. Sometimes you need to consider whether you have to pay the folks who claim they have your info so that you can actually determine what data has been accessed. In the example I mentioned of a screenshot of a file tree, you might not know what is in all of those files, and so you might need to buy some data back to figure out, you know, have I triggered my state data breach notification laws? Do I have to call so-and-so? A whole bunch of things. Fundamentally, the threat actors here are making money from that uncertainty, which forces victims to feel they have to pay either to restore service, to assess the damage, or to be able to function and run their business.

Sasha O’Connell

Thanks, Megan. That is so interesting. I never really thought about the assessment piece, the need to potentially pay to even know what was lost, taken, or frozen. Drew, I'm going to ask you about the payments piece and how that works functionally; but before we get there, can you say more on this need to assess, or to pay to even know what was taken? Are there other implications or considerations?

Drew Bagley

Absolutely, Sasha, and actually, for listeners of this episode, this touches on a lot of the topics we discussed in the incident notification episode, because whether it's a cybersecurity mandate or a privacy law mandate, a lot of these breach notification laws require notification to a regulator, or even to individuals, if the impact is great enough. In other words, a victim really needs enough information about what that impact is. There are going to be situations in which a victim does have visibility into their systems, so even if they had some sort of ransomware incident or data extortion, they might actually know what was exfiltrated. But in other cases, to Megan's point, a victim might not have any of that information. They might not have any endpoint detection and response data, any log data, or anything, and therefore they might need to learn a bit about what the adversary may have. Similarly, a victim may need to know which type of adversary this is and what they traditionally do, and that's where a victim may want to analyze threat intelligence to understand who they're dealing with and what the impact may be.

As for payments: sometimes the adversaries demand other methods, for example gift cards, or payment providers like MoneyPak, Paysafecard, and whatnot, but traditionally cryptocurrency is the preferred method of payment, and the way that works, when a victim organization does want the option of paying, is that they need a Bitcoin wallet. In recent years, rather than victims setting up their own Bitcoin or other cryptocurrency wallet and transferring funds to it, they might go through a third-party company that specializes in these payments, holds wallets in a variety of cryptocurrencies, and usually acts as the intermediary that helps negotiate. On the back end, the victim organization works with outside counsel in weighing their options and in checking that they have enough information to make a valid judgment about whether a payment would likely violate any sanctions law, because even if you are a victim of ransomware, you're still subject to other considerations, such as the AML/CFT (anti-money laundering and countering the financing of terrorism) restrictions. You have to ensure that you're not inadvertently paying money that would go on to finance terrorism or facilitate money laundering, and you have to watch out for bans on payments to sanctioned entities. So that's an important consideration in the payment process. In practice, there will usually be some sort of communications portal set up by the adversary: the adversary provides a URL the victim can navigate to, with some sort of online chat, and that's where the victim, or the entity negotiating on the victim's behalf, interacts with the adversary and gets the information about which wallet to send money to.
Now, anytime you're doing that, you are of course weighing that risk against the risk that you never get your files decrypted, because there's certainly no guarantee that an adversary will decrypt your files just because you paid them. In fact, you might be opening the door to being revictimized. On the other hand, sometimes that is the only option for a victim. And interestingly enough, another thing that's happened as data volumes have grown is that the process of decrypting files now takes a lot longer. So oftentimes the victim is also weighing how long it would take to decrypt their files versus how long it would take to recover from a backup, because either one may take a long time. Those are sometimes considerations being weighed in these situations too.
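For readers who want to see what the sanctions-screening step Drew describes might look like mechanically, here is a minimal Python sketch. It assumes a hypothetical local text file of sanctioned cryptocurrency addresses and a hypothetical wallet address taken from a ransom note; real compliance screening is performed with counsel and specialist providers, not a script like this.

```python
# Minimal sketch: screen a ransom wallet address against a locally
# maintained sanctions list. The file name and address are hypothetical.

def load_sanctioned_addresses(path: str) -> set:
    """Load one sanctioned cryptocurrency address per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def is_sanctioned(wallet: str, sanctioned: set) -> bool:
    """True if the address appears on the local list (exact match)."""
    return wallet.strip() in sanctioned

if __name__ == "__main__":
    sanctioned = load_sanctioned_addresses("sanctioned_addresses.txt")  # hypothetical file
    demand_wallet = "bc1q-example-from-ransom-note"  # hypothetical address
    if is_sanctioned(demand_wallet, sanctioned):
        print("STOP: address is on the sanctions list; payment may be prohibited.")
    else:
        print("Not on the local list; this alone does not clear a payment.")
```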

Sasha O’Connell

Thanks. That's super helpful in thinking about how this plays out. Before we pivot to policy intervention options for this persistent challenge, Megan, can you talk a little bit about Colonial Pipeline? It's probably one of the most high-profile examples people are familiar with. What were the decisions, and how did they play out for that organization, as far as we know?

Megan Brown

Sure, I'm happy to, and of course there's tons of publicly reported material; the CEO has been really transparent. Back in 2021, hackers that I believe the government is confident are associated with the Russian Federation infiltrated the computer networks of Colonial Pipeline. They demanded more than $4,000,000 in ransom. The company shut down the pipeline operations, which I would consider the operational technology side of things as opposed to the IT side, and that's a technical distinction policymakers struggle to draw, but the hackers didn't get in and shut down the pipeline operations; the company did that itself. It's really interesting how, in the heat of an unfolding attack, you can see the decisions that have to be made against a backdrop of significant uncertainty. The CEO has described the decision process and the timing both to Congress and to the press, and based on that public sourcing, they detected the attack just before 5 am on a Friday, when an employee found a ransom note on a system in their IT network, and as we discussed a moment ago, IT is the information technology side: billing systems, back-end stuff, not the pipeline operations. That employee notified the operations supervisor in the pipeline control center, and the CEO has described a broad stop-work authority that allows the pipelines to be shut down quickly if there's a concern about safety. Here, they didn't know what was really happening, and they were worried that there had been, or could be future, access to their operational technology, their actual pipelines. So they put in a stop-work order to halt those operations. That was done to contain the attack and help ensure that any malware didn't get into their OT systems, if it hadn't already. As I understand it, within an hour employees began the shutdown process, and then within 15 minutes over 5,000 miles of pipelines had been shut down. The CEO, even though he faced some criticism for it, has said it was an extremely tough decision, but they made that decision. Then, as folks know, they paid the ransom, and it took them, I think, around 6 days to get back online. So you can see there both the important difference between that IT and OT issue and, really, the broader point that decisions in a rapidly unfolding ransomware attack have to be made with imperfect information.

Sasha O’Connell

Well, we've certainly covered some ground here in terms of the context, depth, and implications of both ransomware and other associated extortion schemes. Using examples like Colonial Pipeline, it's easy to see these complexities in action, and they're certainly relevant to our day-to-day.

We're going to pause here and regroup for our next episode where we're going to talk more about what policymakers can and are doing about this challenging issue. Thank you for joining us, and we look forward to seeing you on that next episode; and in the meantime, please don't forget to check out our associated resources in our Pause Here section of our website, where the link is, of course, available in the show notes. We'll see you next time.

Episode 5 - Ransomware Part Two

Join Sasha O’Connell, Drew Bagley, and Megan Brown as they embark on a journey to demystify cyber policy, addressing the vital gap in understanding and communication that hinders effective policy development. Through engaging discussion and expert insights, this episode serves as your gateway into the intricacies of cybersecurity public policy, specifically focusing on ransomware and policy approaches. 

Listen Here

 

In this episode, you'll discover:

Understanding the Ransomware Economy: Delve deeper into the complexity of the ransomware economy, exploring its various facets and implications for policy development.

Regulating Ransomware Attacks:  Explore policy approaches to regulating and mitigating ransomware attacks, considering the multifaceted interests that must be balanced by policymakers.

Challenges and Considerations: Examine the challenges inherent in crafting effective ransomware policies, including the need to address divergent interests and perspectives.

Solutions and Strategies: Gain insights into identifying real problems posed by ransomware and formulating practical policy solutions to combat this evolving threat.

A Holistic Perspective: Benefit from a unique perspective that integrates academia, private sector expertise, and real-world policy application to offer a well-rounded view of ransomware policy.

This episode is just the beginning of a series designed to equip current and future policymakers with the knowledge and tools needed to navigate the complex landscape of cyber policy, with a specific focus on ransomware. Don't miss out on this essential resource—follow and subscribe to stay updated with the latest episodes and insights.

Key Questions: 

  • Why can’t we just “shut down” the ransomware economy?
  • What regulation exists in the United States on ransomware?
  • How effective is paying ransoms?
  • How are organizations handling ransomware attacks?

Transcript

Sasha O’Connell

Welcome back to Start Here, where we're working to give you a framework for analyzing cyber public policy questions. In our previous episode, we looked at the policy context, technology, and impact associated with ransomware and other extortion schemes. I am so pleased to be joined again today, for a follow-up on that conversation, by Drew Bagley, Vice President and Counsel for Privacy and Cyber Policy at CrowdStrike, and Megan Brown, Partner at Wiley and co-chair of the firm's Privacy, Cyber and Data Governance practice. Today we're going to take on the options facing policymakers regarding this really challenging problem. So let's get right to it. Drew, why can't we just shut down the economy that's driving this? Why can't we go after the bad actors who are doing this as a service and close the business down? Is it more complicated than that?

Drew Bagley

This is one of those situations where it's about as complicated as it gets when it comes to disrupting the adversaries behind this. To break it down from a policy standpoint, we obviously already have disincentives in place, criminal penalties for those engaging in criminal behavior, which of course includes ransomware, but that doesn't solve the problem. Ransomware and other related forms of extortion, such as data leak extortion, are gaining traction, so policymakers really need to look at what else they could do to disincentivize this sort of behavior. What can they do to incentivize would-be victims to bolster their defenses ahead of time rather than waiting until they're a victim, and what other sorts of IT hygiene could be incentivized right now? For example, could victims, in addition to implementing better cybersecurity, be incentivized to have better backups in place? Similarly, when there are ransomware attacks or other extortion campaigns, policymakers can look at whether the status quo is sufficient for gathering all of the information that's out there on what the adversary's infrastructure actually is, and for being able to disrupt and take down that infrastructure. I think that's an important framework for policymakers when drawing conclusions about what they can do. Where we've actually seen this debate focus, though, is on forms of payment: whether or not payments for ransomware should be allowed, in other words, whether victims should have that option. While there's a legitimate debate there, there are many other aspects of this problem, and policymakers need to look at it holistically, like they do with other problems.

Sasha O’Connell

So it sounds like what you're really laying out, when it comes to policy options here, is that, first of all, it's not just about payments. This question of outlawing payments seems to be a real hot topic today, but, as you said, Drew, that's not really what it's about. There's a whole set of possibilities around disincentivizing and disrupting this kind of behavior, and then a whole set of choices around how we incentivize good hygiene, either at an individual level or at an organizational level. In that laydown, it sounds like there's still a lot of responsibility on users and the private sector to sort this out, but potentially some role for government. Where is the government on this, Megan? Where are things right now in terms of what they're trying?

Megan Brown

People sometimes roll their eyes when the government indicts bad actors overseas, because they scoff that those actors can't actually be captured and put in jail. I still think there's a process value to the government pursuing the bad actors, even in absentia, to send the message and push the international norm that you should not be housing these open and notorious criminal enterprises in the dark corners of the world. That said, that's a different bucket of issues, namely what the United States government is capable of doing offensively outside the country to disrupt some of these actors, and I'm going to park that as a totally interesting, separate discussion.

There is additional regulation and movement coming to try to change the incentive structures here. Unfortunately, I think a lot of it is targeted at the victims themselves, but that is who the government has in front of them to regulate, because they can't go after the bad guys hiding in the dark corners of the internet. So you've got mandatory reporting. Congress in 2022 passed the Cyber Incident Reporting for Critical Infrastructure Act, which is a mouthful, and the rules for it are currently being developed by the Cybersecurity and Infrastructure Security Agency over at DHS. That's going to be a big deal. Congress has said if you get hit by a ransomware attack and you pay a ransom, you'd better show up and tell the government. They want to be collecting that information, and collecting it quickly. That is new. There is currently no broad prohibition on the payment of ransoms, nor is there a broad reporting obligation. You might have reporting obligations if a ransomware event qualifies as some other kind of cyber event, but the nature of the attack does not currently drive it.

You've also got a lot of voluntary work that goes on, which I think is really important and laudable and shouldn't be lost sight of. Many times, the first piece of advice I give clients in a situation is: call the FBI. Call the FBI because they might know something; call the FBI because if you end up making a payment, you want to show that you worked with the FBI. That is frequently, though not always, the advice. The Secret Service is also involved in some of these issues, and you've got DOD and the intelligence community, since some of these actors are nation-state. New mandatory incident reporting came as a direct response to the Colonial Pipeline attack: regulators started with a security directive aimed at pipelines and have since expanded to other critical infrastructure. The Securities and Exchange Commission has new rules that have come online for reporting of material incidents by publicly traded companies. You've got state government activity: the New York Department of Financial Services has robust cyber rules, and they have issued specific guidance to their regulated entities, here we're talking insurance companies, people offering financial products, debt products, etc., warning them about double extortion. I think we're going to try to drop a copy of that in our resources for the show. And then, as Drew said, you've got Treasury, which administers our sanctions program, a strict liability regime. If you accidentally pay a terrorist group, you can go to jail. So they take this stuff really seriously, even though you're just trying to get your data back online.

Sasha O’Connell

In that context, it's so complicated. Even the FBI is threading a needle here. On the one hand, they absolutely do not support the paying of ransoms, as we talked about; they argue ransom payment not only encourages and furthers the business model but often, as you've both said, goes into the pockets of terrorist organizations, money launderers, or rogue nation-states. But on the other hand, I think the FBI's Assistant Director even testified that the FBI is not encouraging Congress to make payment illegal, because you're then, as you just said, Megan, creating this double extortion risk. I think his quote was, if we ban ransom payments now, you're putting US companies in a position to face yet another extortion, which is being blackmailed for paying the ransom and not sharing that with authorities. So you can see, in the FBI's position alone, the complexity for the government of not wanting to feed this ecosystem on the one hand. Megan, on the government side, beyond banning payments and voluntary measures, what else is being discussed here?

Megan Brown

There are proposals that pop up from time to time to try to ban most extortion payments. Deputy National Security Adviser Neuberger said in May that they're grappling with that, and there have been a couple of large reports on these topics, so I think the government recognizes the tradeoffs. But there are voices who want to go ahead and ban payments, who believe there would be some short-term, hideously painful consequences but that it's the only way to fix the problem. I think those are hard decisions to make. It's hard for me to tell a hospital, go out of business, or let patients' health outcomes be affected, or to tell a small business that has been around for years, you just go out of business, I'm sorry. That's tough, because we don't ban the payment of ransom in other settings. If there is a domestic or an international kidnapping, you do not violate the law by paying the ransom unless you violate the sanctions we mentioned already. So I think it's a big change to consider banning ransom payments, and policymakers have a lot on their plates.

Sasha O’Connell

Absolutely. Drew, what do you guys see at CrowdStrike? I mean, at the end of the day, does paying the ransom work most of the time? It seems like an empirical question that policymakers would want to put in the mix.

Drew Bagley

Yeah. At CrowdStrike we really see the whole gamut, since we're brought in at completely different times, sometimes after the fact, so the options on the table can vary. Sometimes companies are actually in a pretty good position, because even if they were hit with ransomware, they have visibility into their systems, can determine the impact right away, and can see that there might not actually be much risk; there could certainly be a disruption, but not much risk that their data will remain inaccessible. Other times they may not have any visibility into that, which can affect their decision-making process. Ultimately, it is up to each individual organization to weigh their risk, and it depends a lot on the state of their IT, the state of their security, whether they had visibility into their systems, the state of their backups, and even their industry and what's at stake. A hospital might be a lot different from a small business that might have to redo an inventory list or something like that. So all of those variables come into play.

From a policy standpoint, I actually think it's really helpful to think about this not as something specific to ransomware but, listening to some of the examples Megan's giving, as extortion: what can policymakers do, and how can they help victims of cyber extortion, because extortion here can take many forms. When we think about ransomware, we're thinking about the encryption of files and then extortion for payment to regain access to those files. When we think about data that's been exfiltrated, we might be thinking about extortion where data will be leaked, with greater impact, if a payment is not sent to the adversary. But we can also think about modern extortion where an adversary threatens to go to a regulator to report that a victim has faced an incident and not yet reported it; in fact, the victim might be in a position where the incident is not yet reportable because it has not risen to the reportable level of impact, and yet the adversary would nonetheless be able to extort the victim. So I think this notion of extortion is what's really important to address, because the methods used to extort victims have evolved over the past couple of decades and will continue to evolve, and that's really the public policy challenge today.

Sasha O’Connell

Yep, that makes sense. And again, you raise the potential for creating compounding extortion opportunities by outlawing payment, or even by mandating reporting, which is so important to consider and maybe under-considered when thinking about policy interventions. Megan, before we wrap, can you, for the policymakers out there, give us one more time what victim organizations are thinking? What are they balancing when this happens? I think that's a really important and sometimes overlooked perspective for people who haven't sat in that chair and are now sitting in the policy chair. Are they thinking about sanctions? Are they thinking only about company impacts? What's the structure they're running through that policymakers should keep in mind when crafting any kind of intervention?

Megan Brown

Oh, I love that question, because of course I want policymakers to have sat in the seat, or at least to talk to people who have. From a victim company's perspective, they're thinking about all of those things, Sasha: Is it illegal to pay? What are the risks of payment? What are the reputational harms of payment? What are the reputational harms if I do not pay? What are the harms that my customers or employees will face that payment might obviate? If someone is threatening to release all kinds of sensitive health information about your employees, it's perfectly reasonable to consider whether a payment could prevent harm to them from the public release of that data; similarly with customers. There are questions about the actual impact on your operations and how quickly you can diagnose those impacts. Has the data actually been exfiltrated, or are they lying to you? Are your systems encrypted? Are your backups adequate, like Drew mentioned? How quickly can you spin up your backups, because that's not an instantaneous cutover kind of thing? How sensitive are the systems and data at issue? Those things all factor into the decision to negotiate and a final decision to pay, and those factors change over time: what is right on Monday might change by Thursday, by the time you've done certain things, so it is not a binary thing, and I think policymakers need to keep that in mind.

There's also the rebuilding math. If you've got a ransom demand for $2,000,000 and it's going to cost you $20,000,000 to rebuild your systems, that may be a pretty simple calculus; as those numbers get closer, maybe you have less of a need to pay. And because this is extortion more than anything else, there is a heavy degree of psychology. I'm sure Drew, on the matters he's worked on, has seen that you have to try to think about the bad actors' psychology, and that's why it's such an imperfect system.

Finally, there are considerations about whether the incident is already public and what the consequences of that publicity will be, and that, Sasha, relates to a related policy question. There's the policy question of whether payment should be illegal, approved, or acceptable, and then there's the ancillary question of whether we should make you tell on yourself. Should you have to publicly report? There are some really interesting tradeoffs there as well, including a reasonable reluctance on the part of a victim to disclose that they paid a ransom, because disclosing that might make other people see them as a good target in the future. So there are all of these complicated tradeoffs that I think do not lend themselves easily to new federal regulations, but I think policymakers are really grappling with it. They understand all of that; the folks at the NSC are well aware of it and, I think, are trying to approach these issues with thoughtfulness and humility, maybe not with the best data, because that's hard to get. Those are the issues I would love a policymaker to keep in the back of their mind.
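To make the rebuilding math in Megan's example concrete, here is a toy expected-cost comparison in Python. Every input is hypothetical, including the assumed probability that a purchased decryptor actually works; it is an illustration of the calculus, not a decision tool, and it ignores the legal, reputational, and human factors discussed above.

```python
# Toy expected-cost comparison; all inputs are hypothetical.
ransom = 2_000_000          # demanded payment
rebuild = 20_000_000        # cost to rebuild systems from scratch
p_decryptor_works = 0.7     # assumed chance that paying restores the data

# If the decryptor fails, the victim pays the ransom AND still rebuilds.
cost_if_pay = ransom + (1 - p_decryptor_works) * rebuild
cost_if_refuse = rebuild

print(f"Expected cost if paying:   ${cost_if_pay:,.0f}")     # $8,000,000
print(f"Expected cost if refusing: ${cost_if_refuse:,.0f}")  # $20,000,000
# With these inputs paying looks cheaper; as the numbers converge, or as the
# odds that the decryptor works fall, the calculus flips.
```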

Sasha O’Connell

An amazing place to end. No simple answers here, but, as you said, it helps to have more information about what we're really talking about, as you two broke down the technology, the players, the ecosystem, and, importantly, the context that this is a changing threat and we need to keep that in mind, along with the secondary and tertiary consequences of any potential policy intervention. So with that, we are going to wrap this episode. We hope everyone will visit us for future episodes and on our website, where the link will be in the show notes and where you will find additional resources on this and previous topics. Thanks again, Drew and Megan, for joining me today, and thanks to all of you for listening. We will see you next time.

 


Episode 6 - Dataflows

Join Sasha O’Connell, Drew Bagley, and Megan Brown as they embark on a journey to unravel the complexities of cross-border data flows and their impact on global cyber policy. In this enlightening episode, they delve into the benefits of data flows and internet connectivity, highlighting their crucial role in driving innovation, economic growth, and international collaboration.

Listen Here

Key Questions: 

  • What are the benefits of cross-border data flows?
  • Why is there a need for a global workforce for data flows?
  • What is telemetry data?
  • What does data flow regulation look like in the United States?

Transcript

Sasha O'Connell

Welcome back to Start Here. In this series of podcasts, we are working to give you a framework for analyzing foundational cyber policy questions. In our previous episode, we looked at ransomware and some of the challenges companies and governments face in trying to stop it. My name is Sasha O'Connell, and I'm a Senior Professorial Lecturer at American University, and I'm joined again today by Drew Bagley, Vice President and Counsel for Privacy and Cyber Policy at CrowdStrike, and Megan Brown, a partner at Wiley and Co-Chair of the firm's Privacy, Cyber, and Data Governance practice.

We're going to take on the next topic, and that is cross-border data flows. As we all know, being online is completely fundamental to our lives. I start every semester with an exercise where I work with students to list the ways we interact with the internet from sunup to sundown, and sometimes, honestly, overnight, and every time we do it, it's a remarkable list. For the purposes of this episode, I think it's really important we start with the recognition that one of the reasons being online has been so incredibly useful, functional, and frankly a necessity in our daily lives is that it's all built on technical protocols that are global and decentralized, and those in turn depend on the ability of major providers to move data around the world at lightning speed. It's what makes all the magic work. Over time, of course, it turns out that that set of global protocols, and in particular, as relevant to this episode, the associated free flow of data, has some consequences that are important to acknowledge and balance against all the primary benefits it provides.

In an effort to assert their vision of the correct balance of those two things, the costs and benefits, if you will, governments around the world have started to push for data localization policies and laws. This concept essentially requires companies to store data in particular countries and puts restrictions on moving data around the world, sometimes outright bans, sometimes export-licensing requirements. We've even seen some recent moves in the U.S. to limit the transfer of data to certain countries. So we want to talk about it: should data be treated more like a physical product in international trade, with a whole scheme of rules and requirements for bringing it into the U.S. or sending it out? We picked this topic for Start Here because there are real practical and policy issues at play, including impacts on cybersecurity activities, which we're going to talk about in just a second, when governments limit the movement of data across borders or require in-country storage.

So with that, I'm going to turn to Megan and Drew here. I mean, we talked a bit about the benefits, the free flow of data underlying just about everything we do on the internet. Can you talk a little bit more about maybe some of the other benefits that aren't so intuitive?

Drew Bagley

Sure, absolutely, Sasha. As you noted, there are many economic benefits from a business perspective, even in the ability to set up a multinational business across borders, and there are cultural benefits. But importantly, and I think policymakers often overlook this, there are lots of cybersecurity benefits, and even, I would argue, cybersecurity necessities, to the free flow of data. For example, all of the devices we use today have some sort of unique identifier being generated or otherwise statically associated with the device, and those unique identifiers are important because, as we're online, if an adversary is attempting to get into our laptops, our phones, or any other device, there is some sort of interchange of data between unique identifiers that might be associated with that adversary and unique identifiers that could be associated with the victim. Those breadcrumbs are really important for cybersecurity: they can be important in preventing a cyber attack, in detecting adversarial behavior, or in investigating a cyber attack after the fact. In fact, these days there are even cybersecurity mandates, backed up by official guidance, that call for the use of threat hunting, whereby you have people 24/7, and because of time zones that means people around the globe, looking at this sort of telemetry data with its unique identifiers to catch the things that technology alone might not catch, to find that hands-on-keyboard activity. There's also the use of identifiers in red teaming, where you ask a security company to come in and try to penetrate your defenses to see how good a job you do protecting against that. All of those things, by the very nature of how you described the internet, require some sort of cross-border data flows, and this all exists in an era in which, in some legal regimes, even something as public as an IP address is sometimes categorized as regulated data that should be localized. That's where cybersecurity really is affected by data localization.

Sasha O'Connell

Drew, I know you have a paper out that you co-authored on this topic. One thing to go back to, you mentioned it briefly, is the need for a global workforce and the need to move data around. Can you explain that? What does that mean for CrowdStrike, and what does that mean for the need to move data across borders?

Drew Bagley

Sure. I co-authored a paper with Peter Swire and several other co-authors that took the MITRE ATT&CK framework, which is the closest thing cybersecurity has to an industry-standard framework, and applied it to various data localization rules. Essentially, an adversary is going to take data across the border anyway, regardless of the rules; they're not exactly rule followers. Yet if a defender is hamstrung by data localization rules, you could have a defender in one jurisdiction able to look only at a certain set of data, and as soon as there is, let's say, lateral movement within a system, meaning an adversary moving from one machine to another across the network, all of a sudden that same defender might not be allowed to look at data that technically sits in another jurisdiction. Where this really comes into play these days is that most attacks use legitimate credentials, as we've talked about in previous episodes. If you're talking about identity credentials, and those are personal data, there's an interplay going on constantly through various internet protocols to authenticate those identities, and if a defender is not allowed to look at authentication logs once they cross a certain threshold, that's very problematic. Under ideal circumstances, and under official guidance, even from the European Union's cybersecurity agency, ENISA, you have 24/7 security operations centers staffed by people doing threat hunting; and even setting aside that technical cybersecurity, you have 24/7 customer support for when something technical goes wrong, which requires access to data, basic data. If those things are disrupted, that can really only benefit the adversary, who doesn't have to play by the rules, and offers little benefit to the defender, who is boxed in by them.
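A small sketch can illustrate Drew's point about localization rules truncating an investigation. The event records, field names, and regions below are hypothetical; the idea is simply that a credential's login trail crosses borders, and a defender restricted to one region sees only part of it.

```python
# Hypothetical authentication events: (time, credential, host, region).
events = [
    ("09:00", "svc-backup", "web-01", "EU"),
    ("09:05", "svc-backup", "db-01",  "EU"),
    ("09:12", "svc-backup", "fin-07", "US"),  # lateral movement across a border
]

def login_trail(events, credential, allowed_regions=None):
    """Hosts a credential touched, limited to regions the defender may query."""
    trail = []
    for ts, cred, host, region in events:
        if cred != credential:
            continue
        if allowed_regions is not None and region not in allowed_regions:
            continue  # a localization rule blinds the defender to this hop
        trail.append((ts, host, region))
    return trail

print(login_trail(events, "svc-backup"))                          # full trail
print(login_trail(events, "svc-backup", allowed_regions={"EU"}))  # truncated trail
```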

Sasha O'Connell

I heard you say, too, that it makes sense for CrowdStrike, for example, to have folks working the evening when it's daytime in another country, just sort of by definition, which interests me because at the end of the day it's a real people issue, not a technical issue. So that aspect is interesting as well.

Drew Bagley

Even think about cyber workforce shortages today. In a single jurisdiction, even a large one with a big population, I have yet to find a policymaker in the world who doesn't complain about the cyber workforce shortage. So imagine if you've then reduced your pool because of data localization. It's hard enough with a follow-the-sun model, but with a follow-the-sun model you can at least do this. And especially where we have cyber haves and cyber have-nots, a lot of organizations have to depend on managed service providers that have that 24/7 backbone to help them from a security standpoint. So you're really just shifting the burden if you make the rules such that you can only find talent in a certain jurisdiction, within the certain hours they're going to work, and then hope for the best during the other parts of the day.

Megan Brown

Or, worst case, an additional downside of all of this is that you're just introducing friction. You may have to have multiple people doing the same thing because they're in different jurisdictions, and when you're talking about cyber defense or response, speed is really important. So it's not satisfying when I hear some policymakers say, well, you can just contract around that, or there are ways to work around that. Maybe, maybe, but it introduces friction, contract negotiations, additional bodies, and it gets in the way of the speed that Drew's been talking about being so important.

Sasha O'Connell

So, anything else on benefits? Again, we sort of brushed over it, but importantly, probably 90 percent of what we do on the internet is dependent, just from a convenience perspective, on this global flow of data. We obviously have this really interesting addition of the cybersecurity aspect. Any other benefits before we move to what some of the risks of this kind of data flow are?

Megan Brown

I mean, I'll just flag that I think we take for granted in our connected economy that when we travel, for example, our services are going to work. You can hail a ride share in Greece the same way you can here in Washington, and if you don't have data transfer and portability, all of that can be much more difficult, much more costly, and less seamless for end users. In addition, cloud services are enabling huge, cost-effective storage as well as ready access around the globe, and a lot of these data localization questions impact those efficiencies. So policymakers need to understand that whenever you're putting up an additional hurdle to the use or movement of data, you are having these effects on services and technology. Consider a company that wants to be able to send telemetry data to its engineers in India for processing for some cool new thing: if they have to check with their lawyers every time they want to do that, you're introducing a lot of friction into the economy.

Sasha O'Connell

Can I ask you guys to define telemetry data?

Megan Brown

No.

Sasha O'Connell

Thanks. Good talk. Drew? It came up twice. On behalf of our listeners everywhere, telemetry data.

Drew Bagley

Absolutely, I'd be more than happy to. It's such a fascinating term to define. At its core, telemetry data is, generally speaking, and this is something that changes over time as technology evolves, the metadata being generated by a device. We can think of Internet of Things devices generating some sort of data about what's going on on the device, but more commonly, in the context of cybersecurity, it's the metadata about the processes going on on a device. So when you open your office software, for example, an executable file opens. The content of that executable file is not the telemetry; the telemetry is the fact that the executable file opened, and whatever happens subsequently. That file might call out to different libraries on the system, and the operating system might take other actions, and that chain of events is very important from a cybersecurity standpoint. If, for example, you opened a Word document and then all of a sudden there was a file delete event, that chain is just the telemetry itself; you wouldn't have to look at the document to know what's in it, but the pattern might be indicative of ransomware on the system. So that data, again, is useful for cybersecurity, but it's only useful if you're able to identify the adversary and stop the adversary, identify the victim machine and block what's going on on it. If you remove all of those identifiers, you can't have it both ways. I think oftentimes, when these data localization conversations happen, cybersecurity is thought about as if it's 20 years ago, during the malware wars, but most attacks today don't use any malware at all, so you're not talking about matching hashes to a known list of badness. Instead, you're talking about using this telemetry.
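As a rough illustration of how telemetry alone, with no file contents, can surface the pattern Drew describes, here is a minimal Python sketch. The event names, threshold, and time window are hypothetical, not how any particular product works.

```python
# Toy telemetry stream: (timestamp_in_seconds, event_type), time-ordered.
# A document open followed by a burst of file deletions is flagged,
# without ever reading the document's contents.

DELETE_BURST = 50   # hypothetical threshold
WINDOW = 60         # seconds after the open event

def flag_possible_ransomware(events):
    for i, (t_open, kind) in enumerate(events):
        if kind != "document_open":
            continue
        deletes = sum(1 for t, k in events[i + 1:]
                      if k == "file_delete" and t - t_open <= WINDOW)
        if deletes >= DELETE_BURST:
            return True
    return False

stream = [(0, "document_open")] + [(i, "file_delete") for i in range(1, 61)]
print(flag_possible_ransomware(stream))  # True
```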

Sasha O'Connell

Okay, perfect. So the point here is that telemetry data would be included in any data restriction?

Megan Brown

It could be. It's all definitional, and I think that's another thing policymakers just have to keep in mind: don't cast too wide a net when you're defining what data has to be protected.

Sasha O'Connell

Yep. That makes sense. Okay. So, I see a ton of benefits here. Are there any benefits to restricting data? Can you guys explain where this is coming from, either from a general perspective or a more tactical perspective? What are the benefits on the other side of some restrictions?

Megan Brown

So, there's always going to be some risk relating to the collection, handling, sharing, and trade of data, right? There's commercial data, there's sensitive personal data, there's all kinds of data, and that's often what the bad guys are looking for, so it is reasonable to try to minimize that. It may be reasonable in certain circumstances to try to keep certain kinds of data from getting out to countries of concern; policymakers might say, we don't want this kind of data in our adversaries' hands. But there are several justifications that regulators around the world will offer. One frequent one is that a country may have made a value judgment about the privacy and security protections it demands domestically, and it worries that exporting the data will subject it to less protection. So that's one model: exporting your privacy standards to the destinations of your citizens' data. Often, it's to ensure that data stays available in a country so that the government can access it for its own purposes; that might be counterintelligence, or law enforcement, regular old surveillance, being able to get information about someone in their country for law enforcement purposes, for example. And some countries really do want to support their own domestic economic growth by encouraging companies to build data centers and offer cloud services in their countries. That generates jobs and promotes their own growth in the tech space. So that's another kind of motivation that I think many would say is a benefit of some of these limitations on cross-border data movement.

Sasha O'Connell

That makes sense. So, just saying it back to you to make sure I've got it: there's the export of privacy values, right, making sure that as our data goes abroad it's protected the way we do business in our country. There are government uses, be it surveillance or law enforcement. And the last, which I really gravitate toward as one of the human, non-technical aspects, is this idea of literally creating jobs by keeping the data in your country to build data centers or otherwise. Anything else, Drew, or do you think we've covered the benefits?

Drew Bagley

Sure. I'd say sometimes you see a conflation of all of those in different jurisdictions. For example, in Europe there are definitely data localization restrictions that exist under privacy regimes like GDPR; those, of course, have exceptions, and the idea is to create mechanisms for other countries to either bolster their privacy protections or at least bolster contractual protections that follow the data. But there are also, and maybe right now getting even more traction, trade equities at stake: the notion that if you are able to control and regulate the data, then you can, in theory, shape the marketplace and shape how companies play by the rules, especially in jurisdictions where the entire marketplace is dominated by foreign companies. It's a way to have a stake in the market. So in Europe right now, in addition to privacy laws having some data localization requirements, there are also different certifications. For example, there's a certification in France called SecNumCloud that's been proposed that would actually have...

Sasha O'Connell

Good French Drew.

Drew Bagley

Thank you! It would have data sovereignty provisions in addition to data localization, meaning there would even be a component under which the data being stored by a certified company would not be allowed to be subject to foreign laws. So we have to remember that data localization comes in different flavors: there's, as Megan was outlining, data localization where you have to keep a copy of data in a jurisdiction, and then you go all the way to the other extreme, where you're not even allowed to have any other law apply to that data.

Sasha O'Connell

That's fascinating, and creative policymaking, as I hear it. So in a second I want to get back to this balance of trade versus national security, because I think it speaks to a lot of what's going on in the U.S. right now in terms of some ongoing discussions. But where is China on all this, Megan? Can you talk a little bit about that?

Megan Brown

Yeah. I would put China at the extreme of the data localization mandates and the data sovereignty that Drew described. They want data that is created in China to stay in China. They have some permissions for exporting that data, but it has made the business climate for multinational companies very challenging, and they justify their rules on national security grounds; I've heard Chinese government officials at various events say, we need this data to make sure we don't have domestic terrorism, et cetera. So I would put China on the far extreme in terms of domestic obligations and the rights they assert to look at that data, and that has given a lot of U.S. companies and multinationals real heartburn, and it creates problems, which we'll talk about in a minute, for how U.S. law is going to deal with that. India is another example of a pretty big data localization country, though I think some observers have said they're starting to moderate that a bit because they were seeing economic downsides from being kind of an island. But there's a spectrum of how countries have approached it.

Sasha O'Connell

So it's interesting: countries can be in this game for different reasons, Drew, is kind of what you were saying, and maybe it's framed politically one way while there are other stories going on. I'm getting big nods here in the studio. Okay, let's turn to the U.S. It's a really interesting story right now, and maybe we can start with just the laydown of where we are. Megan, are there U.S. data localization laws today? My understanding is that traditionally, globally, we have been all for the free flow of data. Where are we today on that? And then maybe, Drew, you can add in some of the players, and we can talk about what's going on.

Megan Brown

Yeah. I think we are at a really interesting pivot point for U.S. policy in this area. You're right: traditionally, the U.S. has been a champion of not just the global open internet but also the free flow of data, and that's a policy the U.S. Trade Representative long championed as a way to push back on some of the arguably protectionist impulses that Drew discussed, where maybe the European Union is taking a different approach for maybe different reasons. But I think the U.S. government is starting to shift that approach, and you see it in several ways, with a lot of different dynamics going on. You've had pressure from Europe for a long time; they have traditionally looked skeptically at U.S. law because they think there's too much surveillance, and I think we could have a whole separate discussion about whether that's correct or not. They've wanted the U.S. to do more from a privacy perspective, and the U.S. used to push back a little on that but has pivoted now. Notably, the U.S. Trade Representative last fall made a big change in the policy approach: they walked back some longstanding advocacy supporting the free flow of data so that the United States can take a more regulatory approach. One of the reasons the United States wants to take that more regulatory approach and tighten up the flows of data is national security: folks are saying all this data, lots of data, is ending up in China, and we are deeply uncomfortable with genomic data and lots of sensitive personal information going there. So that's caused this, and maybe it's given them an excuse to do something they otherwise wanted to do, but there is a pivot going on at the United States government level to entertain these notions and to start moving toward a more permission-based approach to the global data trade.

Sasha O'Connell

And who do you talk to about these issues in the U.S. government? Which departments and agencies have equities here? Can you kind of break that down for us?

Drew Bagley

Yeah, absolutely. There are many players in the U.S. government. Traditionally, the Department of Commerce has had a very big role here, especially if we think about the Bureau of Industry and Security maintaining a list of different types of export-controlled information; munitions information, for example, is on that list. We can also think about new kids on the block, like the Cyber Bureau at the State Department, having an equity here and definitely being involved in the discussion about cross-border data flows. So even though different data types have always had different rules, right now what we're seeing is that data as a whole, or entire subsets of data like personal data, are facing more friction. And this isn't just happening at the federal level. In fact, just last summer the state of Florida enacted a law regulating healthcare data and the flow of healthcare data that essentially forbids certain types of PHI from being processed outside of the United States or Canada. On its face that seems simple enough, and it was probably intended to apply to medical records, but when we start thinking about how complex data sets are, whether for cybersecurity or for AI training models and whatnot, that sort of rule can quickly trickle down and add a lot of friction to how data flows work.

Sasha O'Connell

Where do you guys see this all going in the U.S.? There's a little movement at the state level, as you described, Drew, and I know maybe we should talk a little bit about a recent executive order that just came out on this. Where do you think this all ends up?

Megan Brown

Well, I used to think this was incremental; I feel like it's accelerating rapidly. We've seen some data localization in the past, for instance through the Committee on Foreign Investment in the United States and the Team Telecom process, which reviews foreign companies' bids to buy parts of U.S. telecommunications companies. They impose mitigation agreements that require data localization, or they prohibit storage in certain countries. So we've seen bits of that. We've now seen the Commerce Department being more active; there's a rulemaking they've kicked off to try to get a better handle on U.S. computing infrastructure, what they call infrastructure as a service, and they want to understand what companies are making use of it here in the United States. But the biggest pivot, which I think Drew's going to address, is this executive order, where we've really just jumped into the deep end of the pool on broad new government oversight of data transfers. It starts out by looking at so-called countries of concern, but I don't know that it stops there, and even if the ultimate restrictions are focused on a few countries, the friction we keep talking about will apply broadly across the economy as companies have to figure out whether they're covered by these new restrictions the President wants to put on data transfer.

Sasha O'Connell

So here in March 2024, for those listening maybe later on down the road, what just happened, Drew? What is this executive order Megan's talking about?

Drew Bagley

Sure, so the President issued an executive order on the bulk transfer of data to foreign countries. Right now we're actually in a holding pattern to figure out what the implementation will look like, because the executive order delegated different rulemaking responsibilities to various agencies. So what we're going to see, for anyone listening early in 2024, are lots of public comment opportunities on how this gets implemented. But the overarching framework the President is laying out is that there should be an apparatus for restricting certain data types from being stored in countries of concern.

All of the certainty is coming later, but that's the general framework. The hypothetical scenarios and threats this appears intended to deal with are things like bulk biometrics being stored in a country that is hostile to the United States. What would that mean in terms of protecting citizens from a national security perspective?

This executive order isn't really designed to address privacy or trade or some of the other things we've talked about. That's not to say that, especially once we see how it's implemented, it won't affect or have an impact on all of those things. But the way it's set up really rests on the notion that there is more data collected about individuals than ever before, that this data can be used for nefarious purposes if it gets into the wrong hands, and that there needs to be some means of oversight over where that data is going, and of curtailing those data flows.

The executive order itself, even though, again, it's not super specific since that's going to come later, still has this notion of exceptions built in. So we'll see where this really ends up. My suspicion is that it's one of those powers the President is laying out so as to have this card to play in some future event, or to have some leverage in different situations down the road, rather than a new overarching framework akin to a privacy framework.

Sasha O'Connell

And I'm thinking as you're talking, does what the U.S. does matter disproportionately because so many of the tech companies that have our data are here in the United States? How does that play into this kind of global discussion?

Drew Bagley

I think that's right, even if you look at some of the examples Megan cited of other countries attempting to influence our behavior. The European view, for example, is: we're going to scrutinize whether or not the United States has a federal privacy law, because U.S. tech companies tend to dominate the market. Whereas the European Union generally does not come out and take an opinion on China, which has laws that literally force encryption keys to be turned over, source code to be turned over, et cetera; their argument is, you know, that it's less relevant to their market.

Megan Brown

Well, I think we do have some indications of where the government's going to go, and I may disagree a little bit with Drew on the ultimate breadth here. The government and the executive order say it has this kind of modest goal, but the devil really is going to be in the details. The DOJ, the Department of Justice, is delegated substantial regulatory power under that executive order, and right on the heels of the executive order they put out what's called an advance notice of proposed rulemaking, or ANPRM. I'm not going to take us down into, you know, administrative law nerdery, but they've telegraphed in this ANPRM that they are interested in a broad array of different data categories as sensitive. They have a lot of questions that show a potentially quite broad reach for the ultimate rules: how they define what is a bulk transfer, for instance. The countries of concern are fairly narrowly circumscribed, but the ANPRM envisions that even if you're not directly giving data to a country of concern, you'll potentially have to put contract terms in your vendor agreements that restrict third parties' ability to give your data to countries of concern. So I think there is a lot to unpack here as this executive order flows through. It really is a big change in how the United States has thought about the free flow of data, a fundamental philosophical move toward perhaps a more license-based, permission-based approach.

Sasha O'Connell

So, on that example of the new executive order: if the issue the U.S. is trying to solve is the potential misuse or abuse of sensitive data of U.S. persons, are these kinds of data restriction regulations or laws going to get us in that direction, Megan? We talked about how we love the free flow of data, but it sometimes has unintended, challenging consequences. Then we make a move to address those consequences, and those policies that restrict data can in turn have their own unintended consequences. What do you think?

Megan Brown

Yeah. I mean, I think the question policymakers have to keep in mind when they're drafting something like the proposed rulemaking at the Department of Justice, or BIS export controls, is: what are the unintended consequences? Are they focused really tightly on the actual problem they're trying to solve? Have they gotten good data about the costs? And those costs are not just the restrictions themselves. Let's hypothesize that they get it right and they're really focused on a few types of data and a few countries of concern. One of the challenges is that all the other companies have to go through the process of figuring out if they're covered, talking to their lawyers, analyzing every word in the new rule. So there's the potential for overbreadth, where the rules themselves may in fact sweep more broadly than the government thinks is necessary, and there's the economic damage done by subjecting businesses to additional uncertainty and more regulatory hurdles. All of that, I think, suggests that folks need to tell the government what they think in response to these public comment opportunities, and that policymakers really need to focus on getting a good cost-benefit analysis, realistic definitions, and input from the people who will actually have to live under these regimes, because there are very real practical impacts on intercompany transfers and all kinds of things they might not anticipate.

Sasha O'Connell

Interesting. Well, this topic is certainly one on the front burner and one to watch going forward. I think with that, we're going to wrap this episode. We hope everyone visits us on our website for Start Here; the link will of course be in the show notes, where you'll find additional resources as well as the transcript. We hope you join us next time, and we want to send a special shout-out for this episode to the production team of Erica Lemen and Josh Waldman, and to the team here at Wiley for hosting us in the studio this month. Woohoo. Thanks guys.

Drew Bagley

Better production quality.

Sasha O'Connell

Exactly. See you next time.

Episode 7 - Digital Identity 

On this episode of START HERE, join Sasha O'Connell, Drew Bagley, and Megan Brown as they unravel the complexities of digital identity and its implications for cybersecurity. Delve into the definition of digital identity, the critical roles of authentication and authorization, and the emerging technologies shaping those processes.

Through insightful discussions and expert analysis, this episode explores the challenges faced by key players in digital identity and authentication, and the workstreams and policies aimed at addressing these challenges.

Listen Here

Resources

Transcript

Sasha O’Connell

Welcome back to Start Here. In this series of podcast episodes, we provide a framework for analyzing foundational cyber public policy questions. In our previous episode, we looked at the international flow of data and how governments are looking at changing rules that apply to that data. Today we're on to the next not so simple challenge, digital identity. To work through the ins and outs with me on this topic, I am again joined by Drew Bagley, Vice President and Counsel for Privacy and Cyber Policy at CrowdStrike and Megan Brown, a partner at Wiley and Co-Chair of the firm's Privacy, Cyber and Data Governance Practice. We are going to take on this next topic and break it down.

So with that, let's get to it. There is a famous 1993 New Yorker cartoon by Peter Steiner, now, somewhat ironically, also an internet meme, which has a dog typing on a computer and quote-unquote talking with another in-real-life dog at his feet, and the caption reads, “On the Internet nobody knows you're a dog.” That seems like a great place to start this episode, because the ability to at least appear or feel anonymous online is one of those fundamental features of the internet, and it's something with both benefits and costs, frankly, that the policy community continues to grapple with in terms of what the right balance is. Digital identity in particular wrestles with some of these challenges as they present themselves in the specific context of cybersecurity, which is a huge issue, while obviously preserving the benefits where possible.

So with that, we want to get into it, and Drew, why should we even talk about digital identity? Doesn't everyone know what that means, and it's crystal clear in the policy community and we can move on or what?

Drew Bagley

Everyone knows what that means, and everyone has a different understanding of what it means, I would say. When we think about identity in the context of information about ourselves, we think we know what it is: it's our PII, or maybe it's even our actual identity card. But in other contexts, identity is actually dealing with the authentication protocols through which either we as individuals sign into computers and devices, or software authenticates itself and is able to interact through APIs with different web platforms. A good illustration of this: recently I was involved in a conversation where one person was talking about identity in the context of authentication and trying to protect it from adversaries and threat groups; another person was thinking about identity more in the sense of a digital identity, kind of like a digital passport; and another was thinking about it in the context of PII. I think that's a great illustration of how identity can mean different things in the cyber context, and yet each aspect of identity is very important when it comes to understanding cyber policy and what we can do about protecting the various forms of identity.

Sasha O’Connell

All right, perfect. So, Megan, in that context, when you think about cybersecurity in particular, how do you think about digital identity, or which of those aspects that you just mentioned are most important?

Megan Brown

Well, I think both of them are, in different ways. So many of the legal frameworks around cybersecurity, and the expectations for companies and organizations, depend on this concept of authorized or unauthorized access. For someone, either a person or a device or a software system, like Drew just mentioned, to be authorized, the system owner has to know what it is: who the person is, whether they are who they say they are, whether the device is what it says it is. The ways in which we know and verify who someone is online are among the key foundational pieces of connected services, networks, and information systems. It affects who's allowed to buy and pay for things, send an email, or make a medical appointment, and organizations have to have processes and technologies to validate who the people and entities connecting to their systems are, whether it's a large enterprise network or your son's school portal. A lot of the privacy and cyber incident reporting requirements turn on whether something is authorized or unauthorized, so you have to be able to tell whether the access you've seen was authorized. And then there are core cyber expectations: you will build systems, and regulate how systems access data, based on digital identity, based on who's allowed to do what. So it really is a foundational piece of doing cybersecurity right, and of what the government's looking at when it thinks about how companies need to be doing cybersecurity.
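To make that authorized-versus-unauthorized gate concrete, here is a minimal sketch in Python. The roles, resources, and permission table are illustrative assumptions, not any particular regulation's or product's scheme:

PERMISSIONS = {
    # (role, resource) -> actions that role is authorized to take
    ("patient", "appointments"): {"read", "create"},
    ("clinician", "medical-records"): {"read", "write"},
}

def is_authorized(role: str, resource: str, action: str) -> bool:
    """Return True only if this identity's role grants the requested action."""
    return action in PERMISSIONS.get((role, resource), set())

# Incident reporting rules often turn on exactly this check:
print(is_authorized("patient", "medical-records", "read"))  # False -> unauthorized access

The table itself is trivial; the hard part Megan describes is the validation step that ties a real person or device to the role before this check ever runs.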

Sasha O’Connell

That makes sense. So it's obviously critically important that we have the capacity to identify people in a secure way. Drew, digital identity is extraordinarily important in cybersecurity, as Megan just outlined. Can you talk about how it's handled? Break it down for us: how do we know who's a dog and who's not a dog, as it were, on the internet?

Drew Bagley

Yeah. As Megan noted, at the end of the day, whether we're talking about my first example of identity as PII or about the authentication piece, it's all focused on validation. Traditionally, when we think about validation in terms of authentication, we've had usernames and passwords going back decades as one way to authenticate us. In recent years, multi-factor authentication has become a lot more common, where it's not just something you know, your username and your password, but also something you have, which in modern times could be your mobile device, authenticating with a code or a push notification that you are who you say you are by having those two things. There's an analogue to verification in the physical world, where we think about having our passport on us, maybe even multiple forms of ID. So even though the contexts are different, in the digital space we're still thinking of different ways to confirm that somebody is who they say they are and that their credentials were not just co-opted by somebody who shouldn't have them. That's one of the fundamental problems of having merely a username and password. In addition to multi-factor authentication, biometrics have become increasingly popular, even on our mobile devices, with either fingerprint or face ID. And then there's also the notion of browser fingerprinting and other aspects of identity, where a composite can be made of different factors in addition to your username and password to get a high degree of confidence that the person logging in really does own those credentials, or should own them. But that's all pretty much on the front end. On the back end, it's not going to scale for a user to log in manually each time one machine talks to another, saying, okay, I need access to this computer, and my computer needs access to these three services for information and these five web services. A user is not just logging in at each layer of the internet.
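As a concrete illustration of the "something you have" factor Drew describes, here is a minimal sketch of a time-based one-time password (TOTP), the kind of six-digit code an authenticator app produces. It follows the standard RFC 6238 construction; the secret, digit count, and time step shown are illustrative defaults:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Derive the one-time code from a shared secret and the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted, window=1):
    """Accept codes from adjacent 30-second steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
               for i in range(-window, window + 1))

Both the server and the authenticator app hold the same secret; possession of the device holding that secret is what makes the code a second factor on top of the password.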

Sasha O’Connell

Got it, Drew. So that all sounds like front-end, user-facing ways to do it. What's happening on the back end? Are there other things going on there we need to understand?

Drew Bagley

Sure. On the back end, fortunately, we generally don't have a universe in which users have to manually authenticate themselves to each individual system with their username, password, and multi-factor authentication. Instead, once that initial front-end authentication occurs, there are what are called tokens on the back end, and that's how machines are able to talk to machines: using a secret that is known to be associated with the user who has logged in. If there's been that validation of the user, then after that, the machines talk to machines based on that initial validation. For purposes of scaling, that has really benefited the user experience in modern times. But in terms of security, there have been lots of problems with the way this authentication architecture has been built, and it actually goes back a quarter century. Some of the flaws that were initially in the on-premises versions of older authentication protocols have now made their way into the cloud era.
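Here is a minimal sketch of the token mechanics Drew describes: after the front-end login, the server mints a signed, expiring token, and back-end services validate that instead of re-running authentication. This is loosely in the spirit of a JWT; real systems use standards like OAuth 2.0 or OpenID Connect, and the signing key and claim names here are illustrative:

import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"server-side-secret"  # hypothetical; real deployments use a managed key

def issue_token(user, ttl_seconds=3600):
    """Mint a signed token once the user passes front-end authentication."""
    claims = {"sub": user, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + signature

def validate_token(token):
    """Return the claims if the token is genuine and unexpired, else None."""
    body, _, signature = token.partition(".")
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # forged or tampered: only the key holder should mint tokens
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None  # expired: tokens should not outlive the session they vouch for
    return claims

The two failure modes Drew returns to later, tokens that never expire and tokens forged by someone who isn't the authentication provider, map directly onto the two checks in validate_token.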

Sasha O’Connell

So I know we're heading toward the classic question of trade-offs that you already mentioned, between user efficiency, access, and ease on one side, and security on the other, making sure things are locked down as appropriate, and I know we're going to think about the policy of balancing those two things. Before we get to the user experience with Megan, Drew, can you say any more about how adversaries take advantage of these systems, what the risk is around digital identity, and how adversaries are targeting that very specific element with cyber-attacks?

Drew Bagley

Absolutely. Digital identities, in the form of credentials as well as tokens, have become immensely valuable in recent years. On the dark web, for example, there are a lot of marketplaces with digital identities for sale. Credentials are for sale, and that allows would-be threat actors to buy access to a victim organization. Because those credentials are so valuable, adversaries are very focused on obtaining them. Those credentials could come from previous data breaches, but they can also come from basic phishing attempts. In fact, especially over the past year, we've seen a group known as Scattered Spider use all sorts of phishing techniques to dupe even very sophisticated organizations into turning over their credentials, and then find ways to co-opt MFA processes, in other words, by transferring SIMs from an individual's phone that is set up for two-factor authentication to the threat actor's device, so that the threat actor can handle the multi-factor authentication once they steal the credentials. So there's been a lot of that going on. And what adversaries want to do when they're breaking in on the front end, or, as is more common now, logging in, is get in and then quickly escalate their privileges. The credentials they get might be for an individual who doesn't have access to much on that victim network, so the adversary will move to escalate those privileges into administrative privileges. Sometimes, even if a user on their face doesn't have access to different things, they might be in some user group that's part of another user group, that's part of another user group, and so on, that does have access, or has full administrative credentials, and that can be incredibly valuable. Other times we've seen threat actors call the help desks at victim organizations to complain that "Oh, I'm logged in, but I just can't seem to access this one folder I'm supposed to have access to," and get the help desk to escalate those privileges for them. In other words, the social engineering doesn't stop at getting the credentials; it can even be used to make those credentials more powerful. But then again, as I mentioned a moment ago and as the Cyber Safety Review Board (CSRB) report pointed out, there are inherent flaws in certain authentication protocols such that tokens don't expire when they should, or tokens can be generated by somebody who isn't the authentication provider. If the threat actor can generate their own tokens, they don't even necessarily need to worry about getting credentials to begin with, and then it's game on for getting access.
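The nested-group escalation Drew describes is easy to see in a small sketch. The directory data below is hypothetical; the point is that effective privilege is the transitive closure of group membership, a chain both defenders and attackers can walk:

from collections import deque

# Hypothetical directory: each group lists the groups it is itself a member of.
MEMBER_OF = {
    "helpdesk-tier1": ["it-staff"],
    "it-staff": ["domain-operators"],
    "domain-operators": ["domain-admins"],  # the dangerous, forgotten link
}

def effective_groups(direct_groups):
    """Breadth-first walk of nested memberships to find every inherited group."""
    seen = set(direct_groups)
    queue = deque(direct_groups)
    while queue:
        for parent in MEMBER_OF.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

# A seemingly low-privilege account turns out, transitively, to be a domain admin.
print("domain-admins" in effective_groups(["helpdesk-tier1"]))  # True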

Sasha O’Connell

So it seems clear then, right? What's the problem? We need to lock all this stuff down to the max possible. That should be the goal of all policy, Megan. Can we just lock this down and stop this nefarious activity?

Megan Brown

No!

 

Sasha O’Connell

Why not? Come on. 

Megan Brown

Stop it, Sasha! Yeah, if we just unplug all of our computers and networking, then we won't have cyber-attacks and we'll all be fine. We can just go back to pen and paper. Oh, but there's still fraud then too.

No. The processes that these threat actors are exploiting have really important, legitimate business purposes, and I think it's this balance that folks are trying to strike. We've seen token issues create and contribute to security incidents; regulators are looking at digital identity and at multi-factor. But I think folks have to keep in mind that when credentials are stolen, the systems they go to don't exist in a vacuum. They're not built just to be secure. They're not only built to resist criminal activity; they're built to facilitate the business. They're built to enable customers to log into their accounts, to enable fast payments and mobile activities. It's about seamless or near-seamless transactions with the minimum amount of friction. So yes, these solutions we all depend on can be exploited by bad guys, but the organizations that manage them, and are being held responsible for them, also have to ensure that they work effectively for the customers they're trying to serve. The government faces this problem too: you want to log on to various government sites, you need to authenticate yourself. So just keep that in mind. You can overcorrect if you lock things down too much. These systems have really important business purposes, and companies are trying to manage the tension: if you want people to be able to access your services quickly, you still have to take certain steps to make sure that only the right people are doing it, and that's not going to be perfect every time.

Sasha O’Connell

Absolutely right, and I think in terms of foundations on these issues, like many things, there's no such thing as perfect security. Even if we locked it all the way down, things continue to evolve and there's always threat risk. And on the other hand, as I always tell my students, as security goes up, efficiency tends to go down; friction is created when you build those security features in. For policymakers, having that as a foundational construct and acknowledging it from the jump is, I think, an important place to start. So what is the government's role here, Megan? Who's active in this space, and how do they think about striking, or guiding, this balance in the private sector that owns many of these systems?

Megan Brown

Well, I think it's a really fragmented space. There are a lot of private sector companies out there developing digital identity solutions and selling them to both the government and the private sector. There are standards groups trying to build consensus about interoperability and how to do various kinds of authentication. Obviously, in terms of our personal and financial identities, there are the bedrock foundational documents the government runs: your Social Security number, your passport, your driver's license. But as we transition to digital identity, the question is how you prove that. How do you take what we all rely on at the airport? I was just at the airport yesterday with TSA looking at my passport; now there's work on things like mobile driver's licenses. And the security challenge is: how can system owners build a system they can reliably use to determine who can have access and what they can access? I don't know that we're going to get into it in this session, but least privilege principles are core parts of zero trust: how much do you put on your users to prove themselves? Moving to the agencies that have actually been active: there's the National Institute of Standards and Technology, or NIST, which we've talked about many times before. They have digital identity guidelines, Special Publication 800-63, which sets some pretty darn specific standards for how federal networks are supposed to do digital identity. President Biden had an executive order on cybersecurity not long ago that mandates that federal agencies use multi-factor authentication. And we see some regulators starting to look at pieces of this, trying to nudge companies along, or force them along, toward multi-factor authentication. Examples are the New York Department of Financial Services, whose cybersecurity regulations require multi-factor for a variety of things, and the Federal Trade Commission's Safeguards Rule under the Gramm-Leach-Bliley Act, which has expectations for multi-factor. So you see this move to get companies to do more, to move away from passwords to multi-factor, and then we can talk a little in the future about better multi-factor, phishing-resistant multi-factor. There is this push to move people along the spectrum of authentication, making it harder, making it more rigorous, to prove your digital identity.

Sasha O’Connell

Awesome. Drew, how do you see those work streams? I know we'll talk about some of the specific policy proposals in a second, and some of the trade-offs inherent in those, but how do you see this globally? Where's the U.S. in comparison to some of our international partners in terms of government thinking on this issue?

 

Drew Bagley

Well, I think Megan's proposal for us to go back to pen and paper is probably the most extreme that has been proposed so far.

Sasha O’Connell

You're right, that would be perfect security, but Megan doesn't like that either, because of fraud. I get it. I get it.

Drew Bagley

If we can't do that, then, as Megan noted, there's been a very big push for multi-factor authentication across the board, even at the state level; if we look at the New York Department of Financial Services, their cybersecurity requirements now require it much more explicitly. But we are in an era in which even MFA is being co-opted, and so there's a lot of push to make sure organizations are actually monitoring the identity plane the same way they've traditionally monitored the network plane, the endpoint plane, and, in more recent years, the cloud plane: looking for things like impossible travel. If somebody logs in from one geolocation and then a minute later accepts a push notification from the other side of the world, maybe that doesn't make sense. In other words, multi-factor authentication is certainly part of basic security hygiene now, but it's not necessarily the end goal anymore, and that's where you see a lot of movement in the space of identity threat detection and response.

On the digital identity side, you really have the full gamut. If you look at countries like Estonia, for example, they embraced full digital identity well over a decade ago, in the sense of individuals having full validation for basic government services, for voting, for everything else. Then you have other places around the globe where digital identity is being embraced just for specific government services, specific things. But nobody's solved this notion of authentication, found the perfect model that is completely secure and yet doesn't create a whole bunch of friction that makes it miserable for the user. So we see a whole marketplace of ideas, but in recent years there's been a lot more interest in governments getting involved in the identity protocols themselves. We've seen less of that in the States, but in the European Union, for example, there's been movement around this notion of an electronic identity: at the EU level, guidelines have been issued on what the protocols should be for digital identities to interact with one another, and on who the validator of that identity should be. Traditionally with authentication and identity, authenticating your driver's license and validating it has been the government's role, while authenticating things online, if we think about the domain name system, has traditionally been the private sector's. With some of these recent proposals, we're kind of seeing a merger of the two.
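The impossible-travel check Drew mentioned a moment ago reduces to simple arithmetic: compute the great-circle distance between two sign-ins and ask whether the implied speed is humanly possible. A minimal sketch, with an illustrative 900 km/h airliner threshold:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900.0):
    """Flag a pair of (timestamp_sec, lat, lon) logins whose implied speed is implausible."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = max((t2 - t1) / 3600.0, 1e-9)  # avoid dividing by zero
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# A login from Washington, D.C., then a push accepted from Singapore 60 seconds later:
print(impossible_travel((0, 38.9, -77.0), (60, 1.35, 103.8)))  # True

Production detection engines weigh many more signals than this, but the geometry of the alert is exactly this check.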

Sasha O’Connell

It's so interesting; again, a place in cyber policy where there really is a patchwork of things going on and nothing's entirely figured out. I know both of you mentioned the Department of Financial Services in New York. Are there other policy proposals? What's CISA up to in this regard? The FTC, the FCC, what else is on the table these days?

Megan Brown

Yeah, so the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, CISA, has been using its bully pulpit. They do not have plenary regulatory authority, other than the incident reporting work we've talked about separately, but they've been churning out performance goals and guidance documents, and they have their More Than a Password program, so they're really pushing MFA, and stronger MFA. One example of their work is their cross-sector cybersecurity performance goals, which lay out the use of passwords of fifteen or more characters and point to phishing-resistant multi-factor authentication as the new emerging standard. They promote MFA on their website as a best practice, so they're definitely leaning into that, the use of biometrics, et cetera. The FCC, the Federal Communications Commission, is starting to play in this space as well. They have long required telecom carriers to authenticate their customers before giving them access to certain data. If you've tried to call your wireless carrier, sometimes that can be frustrating, because you have to go through all these hurdles to prove you are who you say you are, but the FCC wants to safeguard that information. Now the FCC is requiring wireless providers to do more authentication before SIM swaps, because they're worried about the downstream effects of fraudulent SIM swaps, which can lead to other fraud and other problems. So carriers now have rules requiring, quote, "secure methods" to authenticate customers who seek a SIM swap. What I thought was interesting is that the FCC declined to provide regulatory certainty about what it considers adequately secure. People were saying, "Well, there are all these different ways to do secure authentication. Tell us what is adequate." And the FCC said, "No, no, no, you figure it out, because it may evolve over time." Personally, I thought that was a frustrating approach. New York DFS, as we've mentioned several times, really is into MFA, and I've certainly been on the phone with their investigators when they were concerned about the lack of MFA in certain places. Their rules haven't been all that specific to date about precisely which systems and networks have to have MFA, but they've broadened that, and it's now much more broadly required. And then, as we mentioned, the FTC Safeguards Rule requires MFA for access to that subset of protected financial information. Most of these, I will say, have exceptions; they don't require MFA for everything in all instances. They have an exception process where you can show there's an equivalent or more secure alternative, which is a nice piece of flexibility. But lots of agencies are now in the space, and I think you'll see some enforcement actions that rap people on the knuckles for not doing enough, in the government's eyes, to authenticate and to have tools that are secure.

Sasha O’Connell

Interesting. And Drew, how do you see where the private sector is today on this? Are they waiting for clear guidance from government, waiting to be regulated, waiting for enforcement actions, or what's the stance in the private sector today?

Drew Bagley

So on protecting identity, in terms of protecting credentials as we were discussing a moment ago, there are sector-specific MFA requirements popping up, and I think MFA is becoming much more mainstream. As we see enforcement related to cyber incidents and data breaches going forward, and especially litigation, I think more and more you're going to see whether or not MFA was enabled become a factor in whether an organization was negligent, whether it was compliant with various regimes, et cetera. So I think that's becoming mainstream, and already is. Whereas I think it really depends on the maturity of the organization and the sector as to where you see identity threat detection and response being used, some form of actually monitoring that the MFA and the credentials are being used by the people who should be using them, getting to the point where you're challenging who's logging in, over and over again, on a technical basis, making sure tokens expire, things like that. That's really all over the place in the private sector right now. You definitely have some protocols with inherent vulnerabilities where once you're in, you're in, and you can walk around. You're not just in the apartment building; you can walk around and unlock every apartment in the building. That's what we see on the back end with the way a lot of this architecture is set up. So it's all over the place, and there's a long way to go toward getting credentials protected and getting that more secure. And as a society, with every data breach we see, I don't think we've ever figured out how to protect PII. So we've got to solve that one too.

Sasha O’Connell

Yeah. So in solving this, whether from an internal policy perspective, a company's perspective, or even a federal agency working on its own data, or from a public policy perspective in terms of what's required or recommended: we've talked about some of the trade-offs here, increasing friction in the system along with the increase in security, and clearly there's this idea of customization versus uniformity in definitions or requirements. What else is at the heart of this issue in terms of policy trade-offs when we think about this space? Megan, do you want to go first before we wrap?

Megan Brown

Maybe two things. The first is the risk-based approach. Sometimes it seems like regulators just want to say, do MFA, do MFA for everything, and that sounds okay at a surface level, but I think it's important to take a step back, and I could cite you a bunch of NIST documents that would back me up on this, and consider what the use case is and what risks you're trying to address. What is the sensitivity of the data or the information or the service you're trying to protect? I don't think people should look at MFA, for example, as a panacea you just throw on everything, because while that might be prudent in many cases, there may be reasons why a compensating control or something else makes sense.

The other piece is that anything human-facing has to strike the balance we talked about a few minutes ago: making things easy for the customer while achieving some degree of reasonable security, and remembering that customers come with all different levels of knowledge and capability, especially with security. I don't think it's fair to consumers to just say everyone has to use token-based MFA for the majority of online services. There's a lot of stuff that may not need that. You may not need MFA to access Facebook, but maybe you should have MFA to get into your bank account. So I think that's an important trade-off as well. One example I point back to: I remember speaking with NIST about ten years ago, when they were updating their digital identity documents, and they really wanted to tell everyone to stop using text messaging for MFA. There are trade-offs with the use of SMS for MFA. I use it a lot, a lot of people use it; maybe it's not the best thing for banks or for your crypto wallet. But we discussed with them the trade-offs for the average consumer who needs to log on to their Social Security account. Would my grandma be able to handle token-based MFA, or put an app on her smartphone? Footnote: not everybody has a smartphone. So I do wonder, in Estonia, with the online everything Drew mentioned, whether there are people being left behind on the government services side. It's that trade-off of remembering that customer behavior has to be taken into account. People are human, and you want to set up protections for their mistakes, but also not set up a system that's too onerous, especially in light of whatever the risk is and the sensitivity of the data or system.

Sasha O’Connell

It's so interesting. When we think about those different populations, we tend to think about a more senior population having difficulty with that functionality, but I'll tell you, working with young people at the university, their expectations of instantaneous service and their lack of patience for friction in the system are pretty real, and so is their willingness, and I'm not saying all of them, to move away from products that introduce more friction. It's another thing to consider. Drew, other thoughts on trade-offs before we wrap up on this one?

Drew Bagley

Megan hit it spot on with the digital divide. We've been talking about the digital divide for thirty years in other contexts, often just about getting folks access to broadband internet, which is still something we as a country are working on, and there is absolutely also a digital literacy component here. Even if we know what the best practices are, figuring out ways to get them into the hands of the people who are potentially the most vulnerable and need them most is not trivial at all. At the end of the day, we should get to a place in which the person using a service can just focus on using the service and not be over-rotating on the fifteen different things they have to do from a security standpoint to use it. Part of that goes to the secure-by-design movement: the entity best suited to provide security, and to bring security to the table, should be doing that. I think that starts in the identity ecosystem with authentication providers ensuring that their code is secure to begin with, and ensuring there's interoperability so other security can be layered on top. When we think about the systems that drive things like updating your voter registration or your driver's license online, that entire burden shouldn't be shifted to the individual, to the prospective victim; a lot of it should be on the entity providing the service. But there are also cases that are very tricky in modern times. If we look at a lot of critical infrastructure entities, you have operational technology systems that can be very old, or internet-of-things systems that may have been designed in a way where it's not easy to layer security on top. That's certainly challenging, and that's where it's incumbent upon the government to provide the right incentives to make sure the cybersecurity have-nots are able to get cybersecurity, even if that means they first have to modernize their stack to do so. But like Megan said, there's certainly a lens of risk you have to view all of this through. You have to consider whether layering MFA and all these things onto every application is necessary or realistic; it would be great in every application, but is it realistic, or are there other ways to wall off certain types of access from other types? Because that's the other problem: with the way access has been architected over the past several decades, once you have access to one service, it's not hard for an adversary to get access to other things. So I think there's a lot more thinking that can be done on the back end there too.

Sasha O’Connell

All right, with that, as usual, I don't think we solved any of the policy issues, but we certainly framed them up and offered great-

Drew Bagley

Megan did. I think. 

Sasha O’Connell

Megan does as she does but thank you both for joining me. And thanks to our listeners. We are going to wrap it here. I hope everybody visits us at the Start Here website and that is, as always in the show notes, where there will be additional resources on digital identity and also a transcript from this episode for reference. We also look forward to having you join us next time where we're going to kick off our series on who's who in U.S. cyber policy, starting with a bit of a deep dive into the players at the White House who work on cyber policy and how they rack and stack up, which actually might be a little less intuitive than one might think. There's kind of an interesting story there. So we look forward to getting back together soon to share on that. So, Drew, Megan, thanks again for joining me and we'll see everyone next time.

START HERE is sponsored in part by a grant from the Special Competitive Studies Project

Questions, comments, or recommendations for future START HERE episodes?

Contact Us