Catch this episode on YouTube, Apple, Spotify, Amazon, or Google. You can read the show notes here.
The time between a law being proposed and going into effect may move at a snail’s pace, but for cybersecurity and GRC professionals, each new rule can feel like the DNA of an organization needs to change. This week we chat with cybersecurity leaders Tim Chase of Lacework and Matt Hillary of Drata, who delve deep into the ever-evolving landscape of cybersecurity regulations. They explore topics such as the challenges of rapid incident reporting, the role of collaboration in the industry, and the emerging onslaught of AI-related laws and proposed bills.
This Week’s Guests
Tim Chase, Lacework’s Global Field CISO
With over 15 years of experience in the cybersecurity industry, Tim is a Global Field CISO at Lacework, a leading cloud security platform. Tim holds CCSK, CISSP, and GCCC certifications and has a deep understanding of product security, DevSecOps, application security, and the current and emerging threats in the cybersecurity landscape.
Matt Hillary currently serves as VP, Security and Chief Information Security Officer at Drata. With more than 15 years of security experience, Matt has a track record of building exceptional security programs. He most recently served as SVP, Systems and Security, and CISO at Lumio, and he’s also held CISO and lead security roles at Weave, Workfront, Instructure, Adobe, MX, and Amazon Web Services. He is also a closet raver. Like really, actually is.
TL;DR
The landscape of cybersecurity regulations is ever-changing, with new bills and regulations continually emerging that impact businesses of all sizes.
The recent rules released by the SEC regarding the time frame for announcing a breach or incident have significantly impacted organizations. The term "material" is a key aspect of these rules, leading to discussions around what constitutes a material cybersecurity incident.
The role of a CISO is challenging due to the potential for breaches and incidents despite implementing comprehensive security measures. The additional regulations add further complexity to the role.
Transparency and honesty are vital in the event of a breach. Companies that are open about incidents and their impact are viewed more favorably than those that attempt to cover things up.
The concept of 'carrot and stick' in regulation is discussed. There are mixed feelings about this approach, with some preferring collaboration and industry-led standards over punitive measures such as fines. However, there is recognition that both incentives (the carrot) and punitive measures (the stick) can drive companies to improve their cybersecurity measures.
AI is a hot topic in cybersecurity. It has the potential to assist in quickly sorting through data and reducing false positives. However, implementing AI also brings its own set of regulations and challenges.
Editor’s Note
We’ve got a lot of great episodes coming your way, and for the month of June, we are likely to have a weekly cadence instead of every other week. Then in July, we’ll take a quick breather. Here’s what’s coming up next:
OWASP + MITRE @ RSAC
Research: Top Threats Targeting SMBs
The Unstoppable Phish w/ SquareX
Patch Tuesday: Collective Defense + Vuln
SEC Disclosure Rules
In July of 2023, the SEC adopted the Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies.
The new SEC rules require public companies to disclose any material cybersecurity incidents. It’s designed to force or perhaps encourage organizations to be proactive in preparing for potential incidents and have a response plan in place.
According to the initial statement from the SEC, “An Item 1.05 Form 8-K will generally be due four business days after a registrant determines that a cybersecurity incident is material. The disclosure may be delayed if the United States Attorney General determines that immediate disclosure would pose a substantial risk to national security or public safety and notifies the Commission of such determination in writing.”
However, the interpretation of what is considered a material incident is somewhat subjective, and many companies struggle with determining which incidents need to be reported. The urgency of these rules is significant, as companies may need to report incidents within a short time frame while still managing the incident.
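To make the timing concrete, here is a minimal sketch (not from the episode) of how a team might estimate the Item 1.05 Form 8-K due date once materiality is determined. It counts only weekends as non-business days; a real calculation would also need to account for US federal holidays, and the determination itself belongs with counsel.

```python
from datetime import date, timedelta

def form_8k_due_date(determination_date: date, business_days: int = 4) -> date:
    """Estimate the Form 8-K due date: four business days after the
    registrant determines a cybersecurity incident is material.
    Skips weekends only; federal holidays are ignored in this sketch."""
    due = determination_date
    remaining = business_days
    while remaining > 0:
        due += timedelta(days=1)
        if due.weekday() < 5:  # Monday-Friday count as business days
            remaining -= 1
    return due

# Example: materiality determined on Thursday, June 6, 2024
print(form_8k_due_date(date(2024, 6, 6)))  # 2024-06-12, the following Wednesday
```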
Where incidents may once have been discussed, and consequently disclosed, on a quarterly cadence, this new rule forces companies to plan ahead:
“Because it forces you to really get your stuff together before incidents happen,” said Hillary. “And that moment was extremely reflective for me to say, Where's my response plan? What do we have as far as when's the last time we tested it? How well are we doing as far as our detections go? How will we do as far as a response goes? Are we ready to go?”
As far as navigating what is considered material or not, Chase and Lacework suggest structure is a security leader’s best path forward.
“If you look at what has to be reported, they're material cybersecurity incidents, which probably is up to a little bit of interpretation. And that's what I get asked about a lot is how do I determine if something is material? Because there are people that are nervous that maybe they'll determine later that it's material and should have reported it, and then you're in bigger trouble, right? So we developed a cybersecurity framework, a material framework, that has helped a lot of our customers to understand this so that when they go through and they have an incident, they can walk through several steps and say is this material?”
This approach is itself a good reflection of what the SEC has set out to do: force public companies to think proactively about their cybersecurity and governance programs. Lacework’s approach of adding structure and a framework allows security teams to create incident playbooks that answer questions before a situation occurs, and it should trigger internal teams to ask the hard questions. From there, making clear decisions based on those hard questions, combined with structure, puts security leaders in a better position to make rational decisions.
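The episode doesn’t walk through Lacework’s framework step by step, but a materiality walkthrough often reduces to a checklist of weighted questions. The sketch below is purely illustrative; the factors and thresholds are invented placeholders, not Lacework’s actual criteria, and any real determination belongs with counsel and the disclosure committee.

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    records_exposed: int               # count of customer/user records involved
    regulated_data: bool               # e.g., PHI, PII, payment card data
    core_systems_disrupted: bool       # outage of revenue-generating systems
    estimated_cost_pct_revenue: float  # response + remediation cost as % of annual revenue
    likely_press_or_legal: bool        # expected litigation or media coverage

def materiality_signals(a: IncidentAssessment) -> list[str]:
    """Collect reasons an incident *might* be material. Thresholds here are
    hypothetical; 'material' is ultimately a legal judgment, not a score."""
    signals = []
    if a.records_exposed >= 10_000:
        signals.append(f"{a.records_exposed:,} records exposed")
    if a.regulated_data:
        signals.append("regulated data involved")
    if a.core_systems_disrupted:
        signals.append("core business systems disrupted")
    if a.estimated_cost_pct_revenue >= 1.0:
        signals.append("estimated cost >= 1% of annual revenue")
    if a.likely_press_or_legal:
        signals.append("likely litigation or press coverage")
    return signals

signals = materiality_signals(IncidentAssessment(25_000, True, False, 0.4, True))
if signals:
    print("Escalate to disclosure committee:", "; ".join(signals))
```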
The Three Flavors of AI Adoption and Regulation
AI laws, bills, and regulations feel like they are coming from every direction. Right now there are 40+ US state bills already proposed to restrict or add guardrails around AI, the EU AI Act just passed, and several laws have recently gone on the books. From a US federal perspective, those are likely several years out. In anticipation of the onslaught of AI regulations, organizations and governing bodies have generally proposed three components for these guardrails, but with a few notable outliers:
How organizations can adopt AI
How security teams can use AI
How software and technology can embed AI
And then we have outliers like Tennessee, where a law, the ELVIS Act, was recently passed to protect artists’ IP and ensure they own their own voice and likeness.
“This just reminds me a little bit of the cloud back in the day, where security folks were saying, ‘Hang on, we need to secure this. We need to slow down.’ And the business said, ‘No, we don’t. We want to save costs, we need to push out product faster.’ And so security had to catch up, right?” said Chase.
Organizations want to embrace AI, and security teams want to use it to move faster and analyze data more efficiently, so security leaders are not in the business of saying no or becoming blockers. Slow things down? Sure, but not outright ban and block.
“We need to understand how they want to use it, what they want to use it for, and we need to understand in our organization where it is being used. How is it being used? And put appropriate guardrails around it so it’s not shadow IT.”
Beyond the scope of how it is being used and adopted right now, regulation is a near-term challenge for all companies. During RSA Conference 2024, a common theme was that there were dozens of AI-related bills in the works, and untangling and navigating them is going to be a challenge. If you do business in a state that has a new AI law, and it conflicts with another state’s, how will companies handle these case by case?
Chase joked that the answer is a lot of GRC folks, but went on to say that bringing in all relevant stakeholders and technology is going to be key.
“I think some people don’t realize how many GRC folks you may have in the foreground. You may have one systems engineer, and you may have three GRC folks these days that have been working on the privacy team. But I think there’s also the opportunity for some tools to help.”
Beyond alignment, regulation and framework overlap is common. For example, GDPR and CCPA have similar components, but identifying the delta between the two is where organizations will need to spend the most time to ensure coverage.
“I think most of us are going to agree that’s going to be the significant overlap, maybe some things that kind of tweak in there, and what better use of large language models than to be able to consume those and say, where are the deltas here?” said Hillary.
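As a toy illustration of the delta idea (not a tool from the episode), imagine each regulation expressed as a set of control requirements: the overlap is what you satisfy once, and the set differences are where the extra work lives. The control labels below are invented for the example, not official GDPR or CCPA citations.

```python
# Hypothetical control labels, mapped once per regulation
GDPR = {"data_inventory", "dsar_process", "breach_notification_72h",
        "dpo_appointed", "consent_management"}
CCPA = {"data_inventory", "dsar_process", "opt_out_of_sale",
        "privacy_notice_at_collection"}

shared = GDPR & CCPA        # satisfy once, reuse the evidence
gdpr_only = GDPR - CCPA     # the GDPR-side delta
ccpa_only = CCPA - GDPR     # the CCPA-side delta

print(f"Overlap: {sorted(shared)}")
print(f"GDPR-only delta: {sorted(gdpr_only)}")
print(f"CCPA-only delta: {sorted(ccpa_only)}")
```

Automated GRC platforms, and increasingly large language models, do this mapping across dozens of frameworks so evidence is collected once rather than six times.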
Show Transcript
This transcript was automatically created and is undoubtedly filled with typos. As usual, we blame the machines for any errors.
Elliot: Hello, and welcome to Adopting Zero Trust, live from RSA. We are going to be doing something a little bit more unique this time, because we're in person at a conference. That said, we have been fortunate enough to be connected up with Lacework's field CISO, Mr. Tim Chase, over here. I'm also going to be borrowing Matt Hillary, who is the CISO over at Drata, to fill in for Mr.
Neil Dennis. That said, today we're going to be talking about something that is on everyone's mind: the ever-changing landscape of new regulations that are coming, new bills that are in the works, and of course how it's going to impact businesses of all sorts of different sizes. Fortunately, we have two leaders in cybersecurity who know how to navigate these different situations, and that's where this story picks up.
So before we kick into it, Tim, I would just love a little bit of background on yourself. Obviously, you're a field CISO at a well-known, well-equipped organization. Maybe you can give a little bit of background on yourself and how you stepped into those shoes.
Tim Chase: Yeah, absolutely. So I appreciate that.
So I've been in the security field probably about 20 years now. I started out in the application security space back when application security wasn't cool. You were still doing a lot of static scanning. Eventually we started going to the cloud. So I added cloud to that and started doing cloud security.
And then I became a CISO. So I wanted to lead a security program using what I'd learned in the AppSec and the CloudSec space. So I did that two times. And then I said, I'm going to step away, 'cause I thought about being in sales, 'cause I do like talking security. But I decided on the middle ground.
So I'm a field CISO, so I still get to talk to a lot of security leaders, but I also get to do a lot of just talking about security in general. I've been at Lacework now for about three years.
Matt Hillary: I'm Drata's Chief Information Security Officer. I've been with Drata... it'll be my year mark tomorrow. So I'm really excited to be a part of this company.
It's been a wild ride in just a short amount of time, but I'm really grateful to be a part of the company. My history is I originally started my career in the GRC space, mostly at Ernst & Young as an external auditor or assessor. Moved over to AWS, helped start their compliance program.
Ran several aspects of that program there. And then I started off my journey becoming a CISO. I was at a number of companies. One of the core companies that I think helped me grow in this role was Instructure. A lot of us have used Canvas, or our kids have used Canvas before; I helped secure that.
And that's actually where I met the founders of Drata. Ever since then, I've been a fan of them. And they so kindly let me be part of the journey about a year ago to lead their security efforts.
Elliot: Amazing. Thank you for allowing me to abuse you as much as I do. I'm sure you've been in similar situations where they just throw you into things.
Absolutely. It's all good. Fortunately, that means we have two very well-equipped people who are able to handle this conversation: not just folks who understand cybersecurity as organizational leaders, but who probably do a lot of consulting on behalf of Lacework to help guide others through various different aspects. Not just product positioning, but it comes up naturally in conversation, like, how do I handle X, Y, and Z?
Excellent. That's where we are. So I'm going to throw this into really simple territory. Maybe it's not that simple, because it's still changing a little bit. But the SEC has released some pretty recent rules about the amount of time that you have to announce that a breach or an incident occurred if you are a public company.
So maybe we can start there and how you all have seen this impacting organizations. Again, it is changing. It's not black and white as far as the rules go, but we're seeing organizations now putting breach notifications in their financial statements too. So maybe Tim, we can start with you, just to share your perspective a little bit on that.
Tim Chase: Yeah. I think it's really interesting to see how the government in general, the US government, has started to really take notice of cybersecurity all throughout the various, I would say, departments or different areas of government, whether it's the SEC or CISA or whatever.
The SEC one in particular, what we get asked about a lot and what our customers ask about is the word material. If you look at what has to be reported, they're material cybersecurity incidents, which probably is up to a little bit of interpretation. And that's what I get asked about a lot is how do I determine if something is material?
Because there are people that are nervous that maybe they'll determine later that it's material and should have reported it, and then you're in bigger trouble, right? And so that's what we get asked a lot. So we developed a cybersecurity kind of framework, a material framework, that has helped a lot of our customers to understand this so that when they go through and they have an incident, they can walk through several steps and say is this material?
Is this not? Is it material? And so that's the biggest area around the SEC materiality where we've had conversations. So we've had a lot of good luck with that. We had several CISOs that contributed to this framework. But that's really what I talk about a lot when it comes to the SEC.
Elliot: Very interesting. So it sounds like structure is a key component of how to navigate something that does not have as much structure. And Matt, I know you're all about frameworks and structure. So let's navigate over toward you and maybe dig into that.
Matt Hillary: No, it's a great question. I sat down with a CISO back, I want to say, eight months or so ago at Black Hat, and they just shared vulnerably and openly about incidents.
And I think at that time, it was right around when the Caesars and MGM incidents happened. Yeah. And I told him, man, I would never wish an incident on my worst enemy, just because I realized, man, that's gotta be the biggest and most intense pressure period of any kind of CISO or security leader's life.
And those 48 hours, that's a short amount of time. I'm seeing more and more companies start with, hey, 72 hours breach notification to your customers, and now it's 48 hours. And now it's, hey, I needed it solved yesterday, 24 hours. And you're like, oh my gosh, I'm in the midst of trying to figure out what is going on in my environment.
At the same time, I have the pressure of having to disclose something either publicly to the SEC or to our customers or to any other entities that require that level of notification while still trying to figure out what's going on at the same time. So that amount of pressure just increased.
It's almost like throwing gas on a fire of intensity during that very short period of time. You know, this person I was talking to, they reflected back and said, oh, I would totally wish an incident on someone. I'm like, oh my gosh, tell me more about that. You totally just blew my mind with that statement.
Like that sounds so cruel and they were like no. Because it forces you to really get your stuff together before incidents happen. Interesting. And that moment was extremely reflective for me to say, Where's my response plan? What do we have as far as when's the last time we tested it? How well are we doing as far as our detections go?
How will we do as far as a response goes? Are we ready to go? I was the CISO at Weave as well as at Instructure, both publicly traded companies. And at the time, we had disclosure committee meetings. We were ready to disclose stuff quarterly as part of our normal process. But adding this now, not only the breach notification piece, but also having this whole, like you said, material, like you brought that up, material risks as part of your annual risk assessments, having folks that are skilled in security as part of your board, I think, is also an aspect under this, and just really taking risks seriously.
In response to this, I'm seeing security leaders do one of two things. It's a spectrum. One is they're burying their heads in the sand, saying, oh my gosh, I don't want to have any of this written out because I may be personally liable for what I did or didn't do. And the second is, oh my gosh, I'm going to start writing everything out to the point that I can really have it out in front of me and say, I get some credit for having good-faith efforts in this role.
Sometimes they describe this role of the CISO as, I don't know if you've watched Star Trek, but it's a Kobayashi Maru. Basically that whole unwinnable scenario. I feel like we are in that role every day, in the sense that we can literally do everything and still suffer a breach or an incident.
And that is a really frustrating dissonance that exists in my life and, I think, most CISOs' lives. And so having this additional regulation, I think, helps make sure that people have their stuff together. But at the same time it also adds a level of intensity that I worry about personally, because we're still human in this role.
We're kind of doing the best we can. And again, I think it just forces us to be better.
Tim Chase: And it's the notification time that really bugs me a lot, and a lot of security leaders that I talk to, 'cause I know when I was in that seat, that's not a lot of time to have to be able to respond to it.
To even understand, is it material or not, right? Yeah. 'Cause sometimes it takes you a while to, number one, realize that you've been breached, and then you have to backtrack to know where it was breached and what the impact was. And so you need to be able to get all of that together in a way that lets you make this big decision of whether it's reportable or not.
That is one of the things that worries me and a lot of people that I talk to, because that's such a big decision that you have to make in such a quick amount of time. And we have seen some cases of companies just going ahead and making the notification whether it was material or not.
They're like, hey, we haven't quite determined the extent of it; we're going to go ahead and just put it out there that we've had this incident and report it to the SEC. That way we're on the safe side. To me, it just happens so quick. And so that's what I also have a lot of conversations on: what kind of tooling and what kind of things can we use to do these things quicker?
Because if you're having to do all of this correlation, if you're having to pore through your SIEM, if you're having to write rules in your SIEM or do all these queries, that just takes time. So the more things that they can automate, and maybe eventually we'll start to use gen AI in that sort of sense, the more that you can do to help you get to that answer quicker is what we're looking for. How can I sort through all my data quicker?
How can I weed down the false positives? That's a conversation I have all the time. Interesting.
Matt Hillary: I know there's a lot of shame around the disclosure of incidents. In reality, I think it's almost like a phishing assessment, where, hey, someone clicked on a phishing email and, oh my gosh, I feel terrible.
I've seen people directly be defensive of their own behavior, saying, wait, I didn't put in my password. And I'm like, well, what is this password right here? And how did we get it? Like, how do we explain it? I try not to be shameful in that conversation, but to help weed out the defensiveness.
But I feel like there is a lot of confidence in vulnerability and openness. There's a concern about over-disclosure, right? The first person I usually call is our general counsel, saying, hey, this is what happened. Help me understand the materiality of this aspect, and whether we need to disclose. But after that, it's more, man, there's a ton of us in this space that want to learn from our peers. They want to say, hey, they got breached this way and this is what they learned from it, so that we can all up-level our security programs the same way, because it is an evolving landscape, and it's great to be able to learn from each other.
So I see some benefit in not shaming, but also learning from each other, and also helping people feel comfortable in saying companies are victims in this case, too. And we're learning as companies; we want to be better. And at the same time, can we be a little more open in some cases without that litigation, that retaliation toward the company?
I know it's a little tough, but yeah.
Tim Chase: It feels like we're trending that way, right? Like when I look at it, at least me personally, maybe it's 'cause I'm in the industry, but I don't look at a company that's been breached and go, dude, that sucks, I can't believe you did that.
It's more, oh man, let me just read more about what happened so I can learn from it. Where you really get frustrated and where you do get angry is when they try and cover things up. Like when a month later you look back and, oh, that wasn't really what they said.
Like that breach was a little bit broader in the type of data that they got. That's when you start to look at it and be like, oh man. How many emails or letters do you get in the mail these days that are like, hey, by the way, your data was stolen and we're going to give you a year's worth of identity protection?
It's not as big of a deal. To me, it's about being honest, putting it out there, and just doing your best, and not being complacent. That's what I look at.
Elliot: I 100 percent agree. That absolutely makes sense. So maybe we can unpack this and get a little philosophical for a pretty binary question, which is: regulation is usually pushed forward through the concept of a carrot and a stick.
So in this situation there are already fines involved if an incident hits you within this particular scope. But what are your thoughts? Is there value in finding balance between the carrot and the stick, or should the Fed and regulating bodies focus more on encouraging and enticing organizations to be more secure, with the thousand caveats that in the boardroom it's still a drama to push things through?
Tim Chase: I've never been a fan of the fines. I'm more about the collaboration. That's me. I just feel, as an industry, we work better when we collaborate together. We come up with standards and we try and do our best. I feel like when there's the carrot and the stick mentality with the government, there's still a lot of ambiguity.
If you look at a lot of what's being pushed, there's still, what is material, what's not material, or what's valid in a security plan versus not. Like when they're telling you to have a cybersecurity plan, okay, but what does that mean? So to me, sometimes there's still a lot of ambiguity in the carrot and stick mentality, which puts a little bit of an onus on the business to try and figure that out.
So to me, I side more on the, hey, let's as an industry get together, CISA maybe leading it, or with CIS or with CSA, some of these organizations that are a little bit broader. I'm a big fan of doing it that way rather than all of the fines and things like that.
'Cause to me, it's a little bit of, you have people who maybe don't understand the problem who end up trying to force you to do something, and then they realize they didn't get it right the first time, and they come back. If you look at NIS, right? There's a reason it's called NIS 2, because the first time they didn't get NIS right, and they realized, oh, this doesn't work so well. So that's my general perspective. I think we work pretty well as an industry together, and I'd take that mentality.
Matt Hillary: I don't think differently. No, right in line with you, to echo what you're saying as well: trying to think, what is the carrot? What is the stick here?
And why do we need both of them? There's a lot of human behavior associated behind the scenes, right? A lot of folks are worried. There's a lot of egos involved. There's a lot of company egos involved. There's a lot of trust involved. There's a lot of just, hey, we have a presence in this space and we want to maintain that level of trust.
But realizing in this space, oh my gosh, again, that unwinnable scenario, we need to recognize that as a thing, so people do feel more, I guess, safe in saying, hey, look, we are just as human as a company as we are as a person. We are doing our very best. I feel like, I don't know, I'm taking a theme from that book, Work Rules, the Google book you talked about.
People are inherently good, right? And so most of us want to go out and take care of it. Now, there are a few of us that are, oh my gosh, because of ego, because of personal reasons, because of whatever, a little more scared to follow what the regulations are hoping for, and really scared about doing all the work necessary to be ready for when something like that were to happen.
And you are right. In some cases, I think the most painful ones to watch are when someone discloses publicly, like a hacker group, say, on Twitter: we did this. And the company's like, no, we don't have any indication of this. And they're like, oh, no, actually, here are some screen prints and here are some references; this is what we have. And they're like, oh, are you sure about that?
You want to go back and revise what you actually wrote originally? And then they're writing the narrative, and you're like, oh, we need to have our narrative. It's almost like the whole classic red team, blue team thing, where the blue team needs to be able to tell what happened during the incident just as well as the red team can, to make sure stuff's good.
So it's really forcing us to be on our game in all facets there, so that we can say, you're right, we're doing this, we're working through it, we are human. And I think folks are embracing more and more of that vulnerable leadership versus the worry about, gosh, what does this have on me?
And how's this gonna impact our company? Because at the end of the day, we're here working together and learning together.
Elliot: I definitely appreciate that you put that perspective on there, because I have a very jaded perspective on the stick approach: large organizations can just be like, this is now just the cost of doing business.
And they look at the fines as just something they'll plow through. We'll build it into the budget and just keep going. Obviously there are large organizations that take privacy issues that way, which does not benefit anyone else. So I certainly can see value in more encouraging practices.
Plus, insurance premiums go down, if that's not always a problematic situation, all that. Maybe we can pivot over to the fun hot topic, AI, and that's all over RSA. It's not as crazy as I thought it was going to be. I swear I thought every booth was going to be AI-based; it's more like one in five. It's not as bad, but right now there is at least the EU AI Act, which is the biggest one that has broad, sweeping impact, but there are several, dozens of bills, all over the place.
So before we get into the specifics, I'd love perspectives on, just pivoting from that: with AI moving as fast as it is, how should organizations, states, nations, how should we navigate those situations? Because we don't really want to slow down progress and technology, but we also have to get ahead of those things.
Tim Chase: Yeah, I think that we as CISOs need to take it seriously. We need to understand it just like anything else in an organization. I was on a panel yesterday and I mentioned that, to me, this just reminds me a little bit of the cloud back in the day, where the security folks were saying, hang on.
Like, we need to secure this. We need to slow down. And the business said, no, we don't. Like, we've got business stuff that we need to do. We want to save costs, which, you know, I don't know if that happened, but we need to push out product faster. All of these business reasons. And so security had to catch up, right?
And so we can't get caught up in that again. To me, as security professionals, it's like, all right, let's embrace it. We talk a lot about security wanting to use gen AI to help ourselves, but don't forget that the businesses want to use it to help what they're doing as well.
They want to be able to take repetitive tasks that they're doing and put gen AI on them. So we need to understand how they want to use it and what they want to use it for. And we need to understand, in our organization, where is it being used? How is it being used? And put appropriate guardrails around it.
So it's not shadow IT, right? So that's the first thing that I think we need to do as security leaders: not say no, but figure out how to embrace it and how to secure it. And the second part that I'll say is, once again, I'll take the EU AI one.
I looked at it and I thought, this is pretty well done. They've got the different risk factors. They divided it up appropriately according to the type of data or the business sector and things like that. So it's actually pretty well done. But I would also counter that with the classic question once again: do you want to go the big stick approach, where they're going to force some things and make you do it, or can we as an industry just adopt a common model that we want to operate from? So, I'll just throw that out there.
Matt Hillary: Yeah, no, I really like how you referenced the cloud as another part of our technology journey over the last 20 or so years that has had a, you know, impact on our companies.
The way I think about that is, hey, when the cloud came out, I was like, hey, this is the rope that's going to help organizations climb, or it's going to be the rope that they potentially hang themselves with. Like, we saw companies literally, as a result of getting their AWS root accounts compromised, posting the next day, like, we are out of business. And so obviously, with AI, I don't know if that level of capability exists, but it really is the rope that's going to help organizations climb, or it's going to be the rope people are going to start having a lot of concerns about.
And I think that's where the regulation is going to help make sure that rope to potentially hang ourselves is less impactful, right? Where they're saying, hey, let's not use AI to generate bots. But obviously some of these content filtering capabilities are removable and changeable, so people can potentially get that level of capability, to the point that, yeah, that is concerning. But having regulations around it says there's some punitive impact for people using this in the wrong ways, or in unethical ways, or in ways that might cause further damage.
I think it's cool to be able to help keep us from going into that space, knowing what those guardrails really are. Yeah, right there with you on the, hey, let's use this to help our company. Now, as far as the regulations go, one other thing I was going to throw in there: I was really grateful for the OWASP Top 10 for large language models, for embedding this into your own software.
Because you've got a couple different use cases. One is your employees' use of it to augment their capabilities, which is awesome. The bad guys are also using that for more capable, efficient attacks, things like spoofing voice or even faking calls, or whatever it may be, that are really scary on the attacker side.
And then we always have the aspect where we want to include it in our own product to help augment our customers' capabilities. I'm really excited on the security operations side about the ability to detect and respond in a capable way to, obviously, the AI-aided attackers that are coming after us as well.
I'm also excited to see AI used to create a dossier to see how we can further harden our companies, and say, hey, this is what your posture looks like, this is what may potentially be abused. Like, if you had 10 poker chips today as the CISO to apply, these are the things, based off our assessment of your external perimeter, your internal capabilities, the latest threat intelligence, or the actual events that are happening on your endpoints or in your cloud, to focus on today to help your organization have the biggest risk reduction over time.
So I'm excited about AI aiding our capability to make the right decisions there on what to focus on next. Really cool capability.
Elliot: Very cool. We'll unpack that a little bit more, but to sum this up, there are probably two pieces. First, I'd love to know, just very plainly, without mentioning any tools or technology, I'm trying to be a little more vendor-neutral on it, but I'd love to know if you feel like the technology is viable today or if it hallucinates too much.
And this is in regards to security practices and things like that, protecting posture. Do you feel like we're getting there? If you had a scale of one to a hundred, where are we sitting on that?
Tim Chase: I think we're on the lower end. I really do think that we're still trying to figure out where it's gonna best impact the lives of security professionals.
Like, when I look out there and I kind of walk the floor, it seems like everybody's still figuring it out. And so I still think we're at the beginning stages of it, where we're still utilizing it to ask for help with automated remediation guidance or something like that. There are a couple of ones out there that are looking at using AI to help on the SOC side, which is where I think it is really gonna help.
But I still feel like we're in the budding stages of it. On one to a hundred, I'm probably gonna give us a 20. 'Cause I think that we're all wanting to figure out how to make it work for the customer, and we're trying to get past the "everybody has to be able to check a box" sort of thing to where it's truly going to be valuable and take the industry forward.
That's where I think we are right now.
Matt Hillary: I sense it similarly. I think about it as, hey, we're at the beginning of that journey, understanding the capabilities to help get us where we're going. I think ultimately we all want to get to the point where we could remove that disclaimer at the bottom of prompt output saying this may not be accurate, this has been generated by AI. I hope we get to the point where we have models that are trusted enough that we may not necessarily need that level of disclaimer. Right there with you on the endless possibilities in endless domains where AI is going to help us accelerate, which is really exciting on the security side.
Now, when I think about the composition of creating AI: you've got massive amounts of data, you train these models, and you have the infrastructure they run on. To keep that infrastructure secure, we've got many capable products out there to help us with that aspect of this task.
Now, as far as the actual training model side of things, we have a number of companies that have loads of data about all the different attacks, processes running, whatever it may be, in our infrastructure, cloud, or endpoint. And that is here to help aid us too. I'm just hoping that we get to the point with those models.
I think when we're assessing, hey, where are we at in this journey, it's that model capability all built together to get us to the end result we want to be at, which is: how can we further augment ourselves to be better than we ever were, now that we have this?
Elliot: Interesting. So I think we're in this really interesting period where, as you're all saying, there is a lot of ground to still cover.
Yep. Fortunately, or maybe unfortunately I should say, we're also in this cultural shift where people are adopting this technology a lot faster than a lot of other stuff that's out there. But in the back of a lot of our minds, we grew up with Terminator and Skynet and "it's going to take over."
I can't remember... Boston Dynamics with their dogs, claiming there weren't weapons being attached to these things. So in the back of our mind, there's this "it's going to take over." But if we're at like 20 percent right now, we can see the faults and flaws of it, but every few months, maybe every year, it leaps ahead.
So I think that's one of the more unique places. But I'd love to know if you feel that organizations need to focus on that cultural shift and try to help defuse some of the alarms that might be embedded in us, even if, you know, Skynet's not real.
Tim Chase: Yeah. And I think that came up yesterday as well when I was talking to some folks. I think it is a cultural thing we need to get past, both on how to use it and what it's about.
And I just go back to the DevOps mindset. That's a cultural thing that we had to get through our organizations, and I think this is the same way too. But you need to be like, you're not going to have a robot coming in here the next day taking over your job or something like that.
And not only the cultural aspect, but telling them what it is and what it isn't. It's not that it's going to come in and replace a hundred jobs; it's going to be able to take what you're doing, the stuff that you find boring, and automate that, and allow you to actually build or do something to help you in your job.
So I do think there's a big cultural part where we need to educate, both maybe from a security perspective, but also just from a general technology perspective.
Elliot: Interesting. I will say, while I was getting to my hotel room last night, there was a robot that was waiting to get into the elevator with drinks on it.
So someone's job might be taken by that. I don't know if taking drinks up to people at 1 a.m. is great. You want to expand upon that?
Matt Hillary: Hopefully it was good drinks, mezcal or something along those lines.
Elliot: We're not going to talk about what I feed you behind the scenes.
Matt Hillary: You're right. There is a Black Mirror associated with the use of this technology.
And it's fascinating. Seeing some of the episodes in the Black Mirror series, they're like, oh my gosh, what could this technology do on the bad side of things, to really open our eyes. It really starts, I think, internally at our organization with having a really solid acceptable use policy around it.
I know some companies are creating a separate policy that discusses AI specifically. I know at Drata we specifically said, hey, we have an acceptable use policy; AI is a technology, so let's actually put in there how we should use it. And it was very clear: we don't use company-sensitive information.
If you're going to use it personally, make sure you redact this information, whatever it may be, to really help protect those things today. But when you think about our journey around how we embed this and use this, keeping in mind the ethics around it really gets us where we want to go. I think it's that north star that we're trying to get to: not getting distracted by how this could be turned, but also keeping that awareness so that we protect ourselves against it.
One other thing I would add: I am amazed at the amount of additional domain knowledge AI has brought into security teams. Traditionally you have a product or application security engineer, whereas now, as we're embedding this, it's like, oh my gosh, we have new tools that we need to help protect against prompt injections and output manipulation and all the other vectors associated with manipulating the capabilities that we're trying to build there.
So that's one. Two, there's the red team associated with it, to say, yeah, how do we make sure that's working when we build this, that we're not subject to known vulnerabilities that exist in the thing that we just trained? It's almost an additional domain that is bolted on to all the different aspects of security, one that hasn't existed until this came out, in reality.
It's very cool to see that.
Elliot: Yeah, it's true. Very interesting. I've got one final question for y'all. It's gonna be very specifically complex. We have the EU AI Act; that is out. There are now several bills tied to AI and privacy relevant to that. As security leaders, how are you all planning to untangle the overlaps that will no doubt impact organizations and the folks that you chat with?
Tim Chase: Yeah, that's really tough. A lot of GRC people. I mean, that is one reason why I would say, when you're building a cybersecurity program, I think some people don't realize how many GRC folks you may have in the foreground. You may have one systems engineer, and you may have three GRC folks these days that have been working on the privacy team.
So I think there's that. But I think there's also the opportunity for some tools to help, 'cause there's a lot of overlap, right? Some things that you have to do for GDPR and CCPA are exactly the same things. And so that's where a lot of the modern compliance tools can help you, so that you can pull all of this data together and run your reports at the same time.
Sometimes I think we don't realize how important automated GRC actually is, because that is one of the most difficult and time-consuming parts of a modern cybersecurity program, right? Being able to pull all of this data. So to me, it's really investing in that tool set and in that GRC function to be able to manage all that, because that way you're not pulling the same data six times and having to constantly keep track of the new regulations that are coming up. And the other thing that I'll mention is, more and more, there's such a symbiotic relationship between the security team and the privacy team.
I remember back when I was running security for HealthStream, that was really coming about, and I was doing some of the privacy stuff as well. But now it's such a separate function. It's really hard; I'm not sure you could actually do both and be a CISO at the same time.
You know what I mean? And so that's why the chief of the privacy team and the security team have to be really intertwined, because they can help you understand all the new things that are coming up, and then you work with them from a controls perspective.
Elliot: Cool. Matt, I know you've got some opinions.
Matt Hillary: Oh yeah, sure. No, absolutely. When you think about the traditional GRC programs or frameworks, or even just standards that are out there, there's this new kind of overlap, right? And so you're right. And I think the same thing will happen in the AI space, where we'll have some significant overlap between different answers, interpretations, stuff like that.
The good thing is they're focusing on the things that are, like, world-ending types, which is great. I think most of us are going to agree that's going to be the significant overlap, maybe some things that kind of tweak in there, and what better use of large language models than to be able to consume those and say, where are the deltas here?
And you're right, those automated GRC tools really do help us do that in a way that helps us stay on top of it as we progress forward. I think it's just an immense space of learning for all of us, to know, hey, what matters and what doesn't matter, so we can incorporate that. At the end of the day, I think these regulations or standards are coming out for the purpose of helping us build trust in each other's organizations and the way we're doing things.
You are right about the barriers between privacy and security, almost like the barriers between security and legal, in that framework. And so having that close attachment between teams, I think, is going to help really build these kinds of capabilities in, instead of bolting them on, in our whole process.
Elliot: Amazing. All right. That's it, folks. Thank you all for tuning in, semi-live from RSA. Tim, it's been fantastic to have you on; I really appreciate your perspective. I'm also a huge fan of Matt, and I like to terrorize him and bring him on whenever we can. Maybe next time we'll bring in tequila shots like we usually do.
I just didn't this time. So that's it. Tune in for the next episode; I think we're gonna be jumping on to more AI in the future. So Tim, thank you. Matt, thank you as always.
Matt Hillary: Thanks, Tim. Thanks, Elliot.
Announcer: Thank you for joining AZT, an independent series. Your hosts have been Elliot Volkman and Neil Dennis. To learn more about zero trust, go to adoptingzerotrust.com, subscribe to our newsletter, or join our Slack community. Viewpoints expressed during the show do not reflect the brands, employers, or companies of our hosts, guests, or potential sponsors.