
Rethinking Data Security, Governance, and Resilience for the Agentic Era


Anand Eswaran and Rehan Jalil join Patrick Moorhead and Daniel Newman to discuss how enterprises must rethink data governance, security, and resilience as AI shifts toward real-world deployment. The conversation explores why unstructured data is central to AI and how organizations can build trust at scale.

AI is scaling faster than enterprises can secure the data behind it.

At RSAC 2026, Patrick Moorhead and Daniel Newman sit down with Anand Eswaran, CEO of Veeam, and Rehan Jalil, President of Security and AI, to examine how data is becoming the defining layer of AI adoption.

As enterprises push toward agentic workflows, unstructured data is expanding both opportunity and risk. It is no longer just an input to AI systems. It is becoming the layer where outcomes are shaped and where exposure is created.

That shift is forcing a change in how security is approached. The conversation moves beyond perimeter defenses and into questions of visibility, permissions, governance, and recovery at the data level. What emerges is a clearer picture of what it takes to scale AI in production without introducing systemic risk.


Specific challenges include:

🔹 Unstructured data is increasingly shaping both AI outcomes and enterprise risk exposure
🔹 The attack surface is moving closer to the data layer as AI systems interact directly with it
🔹 AI adoption is advancing faster than trust, governance, and control frameworks can keep up
🔹 Data hygiene and context become more critical as AI agents operate across systems at scale
🔹 Security, compliance, and identity signals are converging into a more unified operating model
🔹 Platform-level approaches are emerging to connect governance, resilience, and recovery

Veeam outlines how its platform strategy is evolving toward a unified control layer designed to support faster innovation while maintaining control over data, access, and recovery.

The implication is straightforward. AI does not scale safely unless the data behind it is trusted, governed, and recoverable.

Watch the full conversation at sixfivemedia.com and subscribe to our YouTube channel so you never miss an episode.

Disclaimer: Six Five Media is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript

Anand Eswaran:


We live in a world where structured data was the fuel for business intelligence, BI, but unstructured data is the fuel for the AI era.

Patrick Moorhead:
The Six Five is On The Road here in San Francisco at RSAC 2026. Daniel, unsurprisingly, it's been all about AI. You know, it's funny, we used to talk forever about access control, but it's really about agent control at this point. But it's everything in between, you know? Like I say, you say, tech is not about or, it's about and. Everything just kind of layers on top.


Daniel Newman:
Yeah, the setup's great. Look, security is having a moment. We know that. Sometimes security is being pulled along by the innovation that's taking place, but we're seeing now, as AI moves at such an exponential rate, that companies are really struggling. I think they're here in San Francisco right now working really hard to figure out: these projects, these investments, these proofs of concept that we're taking to production, how do we make sure that we're securing the critical data, the applications, and all the tools so that we can take advantage of the AI opportunity?


Patrick Moorhead:
Yeah, that's right. Data governance is everything. And Daniel, when you and I were at what some people consider the beginning of generative AI, with Sam Altman out in Seattle, the first thing that went through my head was data, enterprise data. How is it going to do this? And I can't imagine two better people to talk about that than Anand and Rehan from Veeam. Great to see you guys. Welcome to The Six Five.

Anand Eswaran:
Pleasure to be here. 


Patrick Moorhead:

Yeah, it's great stuff. Day three for you, right? 

Anand Eswaran:

Day three, what feels like day 300 right now. 

Patrick Moorhead:

By the way, if it makes you feel any better, this is my third different conference of the week.

Daniel Newman:
He was in a different country this morning.


Anand Eswaran:
I feel like that every day, right? Yes. But it's been fabulous. It's such a different RSA than the years past, but I'm sure we'll talk about it.


Daniel Newman:
Totally. Oh, absolutely. So let's start, Anand, with you. I mean, you heard my setup. Enterprises are all up against it right now. I think we're all excited. We're building. Some companies are vibe coding. They're implementing. The boards are putting the pressure on, top down: use AI. Employees want to be more productive. But as all this happens at once, you think shadow IT was scary? I mean, the opportunity for risk, the new threat surfaces. What are enterprises not getting? What do they need to really be thinking about with AI, especially when it comes to their data governance?


Anand Eswaran:
Great question. So I'll first start with your intro, where you talked about security and AI. Perfect setup; that's why we bought Securiti AI, which Rehan was the founder and CEO of. Great name.


Daniel Newman:
That wasn't planned, by the way.


Anand Eswaran:
So that's the key. For us, it was all about how do you become the trusted data platform for the agentic era. But enough of that; I'll go back to your question now. What are enterprises not getting? People are talking about GPUs, compute, models. People are talking about how do I govern my agent and make sure the agent doesn't do anything bad. But people are missing the super critical middle link: it is actually all about your data. We live in a world where structured data was the fuel for business intelligence, BI, but unstructured data is the fuel for the AI era, and 90 percent of all data is unstructured. So you've suddenly just massively exploded your attack surface. You've introduced decision risk based on this attack surface that has exploded. So that's one thing. The second thing is, because of that, the control point of security has changed. That's the other thing people need to think about. You historically said, I need to protect my endpoints, the perimeter, identities, how people access, how people come in. But it's not about that anymore, because you're bringing AI all the way to your unstructured data. So understanding and securing that data at a very granular level is the second thing. And the third thing people are not getting: the infrastructure to deploy AI has far outpaced the infrastructure to trust AI, because now autonomous agents are acting at machine speed. Trusted AI becomes the super critical thing, and trusted AI falls immediately down to, do you trust your data?


Daniel Newman:
But hasn't that always been the push and pull of security? Security was often treated like insurance, like a liability: how do we do as little as possible? I mean, it sounds like AI is just magnifying that.


Anand Eswaran:
It is, sort of, but security stayed at the system level and the perimeter level. AI goes down to a very granular data element level. That is the heart of the difference. And that's why our thesis is, how do we make sure that we are the trusted data platform for the agentic era? When you do that well, you basically make sure that everything else you need to do in an agentic world is a lot simpler.


Rehan Jalil:
Yes. In addition to what you're saying about what they should be thinking, I would also add what they should not be thinking. If you go out on the floor right now, there will be very siloed messages: let's think about identity, let's think about data, let's think about AI security posture, let's think about backups. They should not be thinking that way. On one hand, no question, everything is about data, but it is just as important what is related to the data. You have to tie things back to the data: who created it? When was it created? What's inside that data file? Who has permissions to it? Which AI can touch it? What activity can be done, or is being done? What compliance is related to it? What they should not be thinking is silos; what they should be thinking is relationships. And if you don't think of it that way, then this problem is never going to be solved. It's going to be, I'm going to look through my pristine lens at one bucket or the other. If you're a customer, why do you care about that? You care about solving this problem, and it's everything that's attached to data. And that's fundamentally where we've changed the game: yes, we do data security, yes, we do AI security, yes, we do data backup and resilience, but in a unified fashion that brings everything together, which is actually what's needed. That's what customers have told us. That's what they need.


Patrick Moorhead:
So, Rehan, I want to do a double click on what Anand said. He talked about visibility and control. With these new agentic systems, there are new workflows. There are new ways that we work with data. There's new data repositories that AI is actually creating to do agentic workflows. But let's try to simplify it for the audience here. What are some of the top breakdowns on visibility and control?


Rehan Jalil:
That's a great question, and I'll add one layer to it. When we talked, at some point, about shadow IT and SaaS applications, it was simpler, because it was just: figure out which SaaS applications are being used. That was a huge wave. With agents, they can come from all directions. An agent can be sitting on top of a SaaS application; it can land straight on an endpoint, with people going to the internet and downloading it; it can be your developers putting it in your infrastructure, putting it inside your AWS, developing applications on it. Visibility, all of a sudden, is about all visibility. But as I mentioned before, it is not just about finding which AI models and agents are being used, because even if you make a list, frankly, that list will change every day within your company. It's about what each one can do, on what data, with what permissions, and with what potential damage. The definition of visibility is not AI visibility; it's visibility into what AI can do in your org. That is a must-have, and it's a multi-dimensional thing, not a single entry point. That's the one thing. On controls, to your initial question, people should not be thinking controls and just controls. What we've learned from our customers is what happens if you do not have hygiene in your environment and an agent comes on top of it. What does hygiene mean? You really don't know where your sensitive data is. You don't know who can access it. You don't know what agent will access it. You don't know how much is old garbage data. If you don't have hygiene in the environment, an agent will come in and multiply that lack of hygiene by 100x, a million x, whatever, depending on the number of agents. So before you even talk about protection, do the cleanup. Hygiene, and maintain it, because it's not a one-time effort. There's no home that you clean once and it stays clean forever. You have to keep cleaning it, right?


Patrick Moorhead:
Yeah, it must be the same. I'm smiling because Dan and I are vibe coding and breaking every rule. I'm thinking, yep, I did that. Yeah, we've done that. I get that. I mean, we've created seven agents in a month. Here you go.

Anand Eswaran:
You're slow, mate. You've got to pick up the speed. In a month.


Daniel Newman:
I was reading about, you know, coders now, I think, shipping, what is it, 500 a day to Git, basically at that level of production. That's incredible.


Anand Eswaran:
That is incredible. But, you know, Rehan said something important, and then the third element is resilience and recovery. Because you understand your data at great depth. You understand the relationships of your data, with other data elements, but also with identity, with permissions, with entitlements, with your security posture and compliance. But when you do that, you also need an effective recovery and resilience posture, because when things go bad, which they inevitably will, they will, you've got to be able, at that point, to surgically recover. That's why we call it undo. We call it precision undo. You're undoing five seconds of one data element being changed incorrectly, not using a sledgehammer rewind to take back a day's worth of operations, which impacts the business. So when you bring all of these together in one control plane, one platform, not a patchwork of partnerships, where your data security solutions come together with your security, compliance, privacy, governance, and resilience and recovery, and you have a 360-degree view of your data across its lifecycle, across live and backup, and across every data system on the planet, then you set up every company's AI transformation to be successful.


Daniel Newman:
It's really interesting, too. So much of what I hear, the complaint about security for the longest time, has been how fragmented it is. So when I'm listening to Rehan talk, it occurs to me that there's a new opportunity right now. We are literally in this next era of compute, and companies that have been trying to do security as patchwork have the opportunity to think about it more like you're saying, in a more holistic way.


Rehan Jalil:
Absolutely, and it has to be thought through from the foundation up. Because even if you take these four or five different products, there is no way to smash them together and make them one. So you have to think about creating the right foundation, one that actually captures all these disparate, very important aspects, like identity, data, and your AI, and brings them to one place. And above it, remember, the persona that uses it may still be different. You have to give each of them the experience they care about, the outcomes they derive. For instance, compliance teams want a very different outcome, and they want to see it in a very different way; they engage with it in a very different way. A data security team works in a very different way. A data privacy team, in a very different way again. But if you have the foundation right, you can create the experiences for different personas. And I think that is the name of the game.


Daniel Newman:
Absolutely. I'm hearing here, you guys really want to be the data and AI trust company. What does that really mean? Take the hyperbole out. What does it mean in practice for enterprises to deliver trust? And how does Veeam enable that?


Anand Eswaran:
Yeah, and Rehan touched on it. It literally comes down to, do you understand your data in great depth? And when you think about understanding your data, it has many dimensions to it. First is the breadth dimension. Most companies are focused on one domain: security, cloud, infrastructure. But to understand your data in great depth, to get to data trust, you need to be multi-domain. You need to understand your data across compliance, security, governance, identity, all of that together, and your recovery posture. So do you have the breadth of understanding of your data? Can you connect your data and understand it in relationship to the other elements? Second, you have to get to a very good degree of data depth, not at an S3 bucket level or a database level, but at a very granular level: at the file level, at the row or column level in a database. That's the depth of your data. Do you understand the relationships of your data with everything else: identities, agents, models? And when you bring all of that together, the breadth, the depth, will the visibility and classification work at scale? Because suddenly, a company that is used to looking at a thousand data systems now has to have a view of a billion files and data elements and identities and agents. At that scale, can you still understand your data correctly, the relationships correctly? Can you classify with extreme accuracy? When you bring these dimensions together across breadth, depth, and scale, you create data trust. You create trust that the data feeding an AI agent carries the right permissions, that the agent is not going to access data it shouldn't be seeing and surfacing, and that the right agent gets access to the right data. This plumbing of data is the key to how it feeds the agents. Now, a simple analogy: you hear a lot about monitoring the agents.
If you're driving a car, there are companies saying, I'm going to look at the driver, make sure the driver is driving correctly, and monitor the heck out of the driver. But someone has got to make sure that the fuel is clean, and the engine is running cleanly and smoothly, and the car, the system, is working well. And the heart of that is the data that goes into it. That's what we stand for. When you do that, you create trust in your data. Trust in your data creates trust in the AI that is acting on the data. So that's how we think about the trusted data platform for what will be a very different world in the agentic era. You know, there are no drivers here in San Francisco.


Patrick Moorhead:
I think all of that makes sense to technologists. It makes sense to those who are probably the stewards of the data inside the enterprise. But do the boards get it? Has that changed over the last year, making the connection between resilience and AI and an actual business risk?


Rehan Jalil:
So has that changed? It has changed. We always learn by experience; the fire is not hot until you put your hand on it. Then you learn. That is what is going on. So when you talk about trust, and whether people are learning or not: if you bring an AI into the company, how do you trust it? Let's say you bring a guest into your house. How do you trust them? You trust that they're not going to steal your information or your household things, that they're not going to destroy something. Similarly, if you want to bring AI into the mix, you trust that it's going to give you the right answers, that it's not going to give Anand's information to Rehan or Rehan's information to Anand, and that it is basically not going to do destructive stuff. Then you will trust it, right? And what is happening is that we are enabling many of these, not all, for sure not all, but many of the important aspects of establishing that trust. Many are going to come straight from the model, from the efficacy of the agent and so on; we don't do that. But the aspect where you marry it with your data and your rule book: will the agent follow your rule book? And if it makes a mistake, can you recover from it? If you provide that part of the stack, which we do, then you're actually establishing more data and AI trust, right? Now, coming to the other question, are people getting it? Part of the reason they're getting it is because destructive actions are happening. You can Google it; every day data is getting destroyed, whether it's Meta's news, whether it's Amazon's news, whether it's many others.


Patrick Moorhead:
Or even OpenClaw is a great example. Even though it's not, I mean, it's centralized, but it's not. It's still a great lesson in, I'll call it, AI unchained. Don't delete that production database or that code you don't like.


Rehan Jalil:
Will they learn? Will the board know that the production data got destroyed? They will learn that you need this layer that can create the data and AI trust. And when you do that, then you can have speed. Then you can have speed, and then you should go and deploy. So people are getting it because they're getting hands-on.


Daniel Newman:
Yeah. So let's take this home. We've covered the security angle; let's talk about it from the AI angle. Because with security, as I've said, the challenge has always been getting the boards and the investment to keep up with the speed at which everyone's trying to build out the applications, tools, and technologies. Nobody wants to slow down. Part of your ask here is, hey, slow down, pay attention to your security. And what everyone asks is, how do we do this and keep going this fast, and bring security with us? Anand, how do you do this, keep the pace, meet your board's expectations, deliver on productivity and growth without compromising security?


Anand Eswaran:
It's a great question. Actually, we have done it internally at Veeam. That was the thesis which led us to the acquisition of Securiti: to go fast, you've got to first build a foundation. The foundation is very simple. You have to create a unified platform across data security, privacy, compliance, governance, and resilience. Create the foundation, not randomly put products together, but one control plane, where all of these are different blades or pillars of that unified control plane. You come in and deploy the control plane, deploy the platform, and then a customer can come in at any entry point, start with data security or start with resilience, and then fill out every single pillar. When you do that foundation, you absolutely accelerate innovation. I mean, we took six months to build the foundation, not too long, and now we can move much faster. The foundation is built across all of these pillars, unified, and in our case, being an AI-first company across every function, we're not just saying a workflow has to be enabled or made better by AI; we are reimagining our company's workflows and processes across every single function. The pace of innovation is incredible. That's what we did. So when we saw our thesis was bearing fruit, the next thing we asked was, what is the right company to allow us to build this unified platform, the tech and the culture? That was Securiti AI. That was the acquisition. And two months after the acquisition, we launched the first product drop, Agent Commander, which literally unifies this platform, allows you to detect AI risk, protect every data element at depth, and undo with precision if mistakes were made. That is the unified platform that allows us to think about Veeam as the trusted data platform for the agentic era.


Daniel Newman:
Gentlemen, I want to thank you both so much. Great to have this conversation. I know RSA is, what do you say, day 300? Been a great event, hopefully a lot of great meetings. Congratulations on all the progress, and we look forward to, of course, tracking what you're building and continuing to talk to the market about it.


Anand Eswaran:
Love it. Yeah, the way we talk about success at RSA: it's been three days, and we haven't had time to go see the floor, which means it's been a good three days, because it's time spent with customers and partners. Terrific.


Daniel Newman:
Thank you. Look forward to talking again soon. And thank you, everybody, for being part of our coverage here. We are at RSAC in San Francisco, California. It's 2026. Sometimes I have to think twice about that. We are moving fast and furious. Appreciate you being part of our community. Subscribe to catch all of our coverage here at RSAC and, of course, all of the great coverage on The Six Five. For Patrick and myself, it's time to say goodbye. See you all later.
