
Generative AI is designed to amplify user potential while still requiring users to evaluate its results. If a chatbot's output is insufficient or ill-advised, an employee using it for a business task can reject it and seek out a better alternative.
Agentic AI changes the game. It is designed to operate with a level of autonomy that makes it more of a workplace partner than a technology tool. Consequently, the integration of agentic AI in business processes is forcing companies to reshape their data governance strategies to address new data flows.
"In the past, organizations divided data access into two clear categories: human access and system access," said Mridul Nagpal, Co-Founder and CTO at Krazimo Inc. "For 'system only' access, logging and auditing were far less regulated because the assumption has always been that such access was already vetted through code approvals and necessary tests, and that it operates under predictable, deterministic patterns. But with agentic AI, that assumption is no longer safe."
Nagpal, a former Senior Software Engineer at Google, has deep expertise in building scalable systems, modular workflows, and trustworthy AI agents. His focus at Krazimo is designing and delivering multi-tenant agentic systems while maintaining rigorous standards for testing, reliability, and implementation.
"Companies are now beginning to realize that AI agents can't be treated as system access because of the decision-making authority they have," Nagpal said. "Rather, they need controls similar to those imposed on humans, including sign-off from human overseers."
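The shift Nagpal describes, from blanket system trust to human-style approval gates, can be sketched in a few lines. This is a minimal illustration under assumed names: the action list, the `AgentAction` fields, and the `handle` function are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical risk tiers; real policies would come from a governance catalog.
HIGH_RISK_ACTIONS = {"delete_records", "export_pii", "modify_permissions"}

@dataclass
class AgentAction:
    agent_id: str
    action: str
    target: str

def requires_human_approval(request: AgentAction) -> bool:
    """Treat agent requests like human requests: high-risk actions need
    explicit sign-off rather than the blanket trust given to system accounts."""
    return request.action in HIGH_RISK_ACTIONS

def handle(request: AgentAction, approved_by: str = "") -> str:
    """Execute low-risk actions; queue high-risk ones for a human overseer."""
    if requires_human_approval(request) and not approved_by:
        return "pending_review"
    return "executed"

print(handle(AgentAction("agent-7", "export_pii", "crm_db")))    # pending_review
print(handle(AgentAction("agent-7", "read_metadata", "crm_db"))) # executed
```

The key design choice is that the agent's identity and intended action are evaluated the way a human user's would be, rather than being waved through as pre-vetted system traffic.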
With generative AI, training data was the main security risk. Governance practices protected privacy by anonymizing Personally Identifiable Information (PII) and other sensitive data assets.
Agentic AI creates new risks because the technology has ongoing access to data sources. Consequently, new governance solutions are needed to facilitate data protection.
"Experts in governance have shifted from gatekeeping to collaborating at the beginning phase of digital and AI projects," said Chris Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. "This transformation has created roles such as 'data product manager' and 'AI risk lead,' which require a unique blend of technical and regulatory business acumen."
Hutchins is a nationally recognized leader in healthcare analytics and AI strategy who has more than 30 years of experience helping hospitals and healthcare organizations leverage data and technology to improve patient care. Hutchins now partners with healthcare organizations to unlock the full value of their data through ethical, scalable strategies.
"Key functions now include risk mitigation, ongoing model validation, and monitoring to ensure governance is exercised ethically," Hutchins shared. "Governance is embedded in product and analytics teams rather than siloed in compliance offices, resulting in an organization that is more resilient and responsive."
The autonomy given to AI agents means a single compromised agent can have a considerable blast radius. Consequently, data minimization becomes a powerful data governance initiative.
"As governance policies evolve, there is a move from 'collect everything' to 'collect what you can defend,'" said Artur Balabanskyy, Chief Technology Officer and Co-Founder of Tapforce. "Storage is cheap, but breach and misuse are not. Legal, security, and product teams are beginning to align on data minimization, retention limits, and regional controls. For the enterprise, that reduces the risk surface and forces clearer thinking about what data is actually needed to deliver value."
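Balabanskyy's "collect what you can defend" principle translates directly into policy-as-code. The sketch below is illustrative only: the dataset names, allowed fields, and retention limits are assumptions for the example, and in practice would be set by legal and security review.

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy tables; names and limits are assumptions, not a standard.
RETENTION = {"session_logs": timedelta(days=30),
             "support_tickets": timedelta(days=365)}
ALLOWED_FIELDS = {"support_tickets": {"ticket_id", "status", "region"}}

def minimize(record: dict, dataset: str) -> dict:
    """Keep only the fields the organization can defend collecting."""
    allowed = ALLOWED_FIELDS.get(dataset, set())
    return {k: v for k, v in record.items() if k in allowed}

def is_expired(created_at: datetime, dataset: str, now: datetime) -> bool:
    """Flag records past their retention limit for deletion."""
    return now - created_at > RETENTION[dataset]

ticket = {"ticket_id": 42, "status": "open",
          "email": "user@example.com", "region": "EU"}
print(minimize(ticket, "support_tickets"))  # the email field is dropped
```

Encoding minimization and retention as explicit tables makes the policy auditable and shrinks the blast radius: a breached agent can only leak fields the policy ever allowed to be stored.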
Balabanskyy helps founders and executives bridge the gap between business vision and technical execution. He regularly advises early-stage startups on product development, technical leadership, and scalable infrastructure, drawing from years of hands-on experience launching and guiding technology ventures.
As data governance practices evolve to address agentic AI, data accuracy, privacy, and security can't be the only concerns. Companies seeking greater efficiency through automated governance solutions must also ensure those processes preserve human feedback loops that allow organic insights to emerge.
"Automation has quietly removed the analysis and human feedback loop that used to force key stakeholders to think about their data, debate it, question it, and refine it," said Jared Navarre, Founder of Keyni Consulting and CEO of Onnix. "That's where the important decisions, protections, and adjustments used to happen. A technology tool may be able to replicate many governance tasks faster and cheaper than a human ever could, but it still can't replace the context, institutional knowledge, or brainstorming that came from those human interactions. That human data dialogue was a critical part of governance, one that most companies don't realize they've lost until something breaks."
Navarre is a multidisciplinary founder and creative strategist with a proven track record in launching, scaling, and exiting ventures across IT, logistics, entertainment, and service industries. He has consulted over 250 businesses, specializing in building operational systems, designing resilient technology infrastructure, and developing multi-platform brand ecosystems.
If data governance is to continue protecting and improving data quality and security, it must address AI's new capacity to manage data. Agentic AI has been handed the role of data steward before proving it can fill that role robustly. Consequently, governance teams must ensure data is used properly by focusing on oversight of agentic activity.
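Oversight of agentic activity starts with a reviewable trail of what each agent touched. A minimal sketch of that idea follows; the decorator, field names, and in-memory log are illustrative assumptions, and a production system would write to an append-only, tamper-evident store.

```python
import time

audit_log = []  # stand-in for an append-only, tamper-evident audit store

def audited(agent_id: str):
    """Decorator that records every data access an agent performs,
    giving governance teams a reviewable trail of agentic activity."""
    def wrap(fn):
        def inner(*args, **kwargs):
            audit_log.append({"agent": agent_id, "operation": fn.__name__,
                              "args": repr(args), "timestamp": time.time()})
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("agent-7")
def read_customer(customer_id: int) -> dict:
    # Stand-in for a real data-source call.
    return {"id": customer_id}

read_customer(101)
print(audit_log[0]["operation"])  # read_customer
```

Because the log entry is written before the data call runs, even failed or interrupted accesses leave a record for human reviewers.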
