For business leaders right now, two small words seem almost impossible to avoid: AI agents. Built on the ‘brain’ of an AI model, and armed with a specific purpose and access to tools, agents are autonomous decision-makers that are being increasingly integrated into live business processes.
Unlike normal AI tools, which rely on user prompts, agent-based – agentic – AI can execute tasks iteratively, making decisions that carry real business consequences, and real governance risk. In short, agents aren’t tools, they’re teammates. As well as sitting in an organization’s tech stack, they sit on its org chart.
Marc Benioff, cofounder, chairman and CEO of Salesforce, the software giant valued at around $260 billion, says that today’s CEOs will be the last to manage all-human workforces. (Asked if an agent could replace him some day, Benioff responded, half-joking, “I hope so.”) The sooner businesses recognize this shift, the faster they can move to securing and governing AI for accelerated innovation.
Just as human workers come under the umbrella of human resources (HR), it’s useful to think of agents as non-human resources (NHRs). Like human employees, NHRs come with costs – including computing, architecture and security costs – and they need induction, training and appropriate limits on what they can do, and how they do it.
This is especially true as these NHRs move up the value chain to perform high-skill tasks that once belonged to mid-senior level talent. For example, autonomous agents are actively managing supplier negotiations, handling payment terms, and even adjusting prices based on commodity and market shifts – functions typically handled by teams of trained analysts.
Businesses can’t secure what they don’t understand
Introducing NHRs at the enterprise level requires a wholesale rethink of governance and security. That’s because existing cybersecurity focuses on managing human risk, internally and externally; it’s not built for the realities of always-on, self-directed agents that understand, think, and act at machine speed.
Like the best employees, the most effective agents will have access to enterprise data and applications, from staffing information and sensitive financial data to proprietary product secrets. That access opens the organization up to the risk of attacks from outside, as well as misuse from within.
In 2024, the global average cost of a data breach was $4.9 million, a 10% jump on the previous year and the highest total ever – and that was before the introduction of agents. In the AI era, bad actors have new weapons at their disposal, from prompt injection attacks to data and model poisoning.
Internally, a misaligned agent can trigger a cascade of failures, from corrupted analytics to regulatory breaches. When failures stem from internally sanctioned AI, there may be no obvious attacker, just a compliant agent acting on flawed assumptions. In the age of agents, when actions are driven by non-deterministic models, unintentional behavior is the breach – especially if safeguards are inadequate.
Imagine an agent is tasked with keeping a database up to date, and has access and permissions to insert or delete data. It could delete entries relating to Fast Company, for example, by accurately finding and removing the term ‘Fast Company’.
However, it could equally decide to delete all entries that contain the word ‘Fast’ or even entries starting with ‘F’. This crude action would achieve the same goal, but with a range of unintended consequences. With agents, the question of how they complete their task is at least as important as what that task is.
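To make that concrete, here is a minimal sketch of the kind of guardrail that keeps the ‘how’ in check: before an agent-proposed deletion runs, the condition is dry-run as a count of the rows it would touch, and anything beyond a narrow, governance-set threshold is escalated to a human instead of executed. The table name, threshold and function are illustrative assumptions, not any particular product’s API.

```python
import sqlite3

# Illustrative guardrail (hypothetical names and threshold): dry-run an
# agent-proposed DELETE as a COUNT, and escalate instead of executing when
# the blast radius is wider than governance allows.

MAX_ROWS_WITHOUT_REVIEW = 1  # only narrow, exact-scope deletions run unattended

def guarded_delete(conn: sqlite3.Connection, where_clause: str, params: tuple) -> str:
    """Run a DELETE only if its scope is narrow; otherwise stop and escalate."""
    # Dry run: how many rows would this condition actually touch?
    affected = conn.execute(
        f"SELECT COUNT(*) FROM companies WHERE {where_clause}", params
    ).fetchone()[0]

    if affected == 0:
        return "no-op: nothing matches"
    if affected > MAX_ROWS_WITHOUT_REVIEW:
        # Stop condition: the 'how' is too crude, so ask for help instead of acting.
        return f"escalated: {affected} rows would be deleted, human review required"

    conn.execute(f"DELETE FROM companies WHERE {where_clause}", params)
    conn.commit()
    return f"deleted {affected} row(s)"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (name TEXT)")
conn.executemany("INSERT INTO companies VALUES (?)",
                 [("Fast Company",), ("Fastly",), ("Future plc",), ("Ford",)])

# Precise intent: remove exactly the entry the task refers to.
print(guarded_delete(conn, "name = ?", ("Fast Company",)))   # deleted 1 row(s)
# Crude shortcut: deleting everything starting with 'F' achieves the same goal
# on paper, but the guardrail refuses to run it.
print(guarded_delete(conn, "name LIKE ?", ("F%",)))          # escalated: 3 rows ...
```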
Onboarding agents like employees
As organizations introduce teams of agents – or even become predominantly staffed by agents – that collaborate to make decisions and take action rapidly and with little transparency, the risk is amplified significantly.
The key to effective agentic adoption is a methodical approach from the start. Simply rebadging existing machine learning or GenAI activity, such as chatbots, as ‘agentic’ – a practice known as ‘agent washing’ – is a recipe for a disappointing return on investment.
Equally, arbitrarily implementing agents without understanding where they are truly needed is the same as hiring an employee who is unsuited to the intended role: it wastes time, resources, and can create tension and confusion in the workforce. Rather, businesses must identify which use cases are suitable for agentic activity and build appropriate technology and business models.
The security of the AI model underlying the agent should be extensively red-teamed, using simulated attacks to expose weaknesses and design flaws. When the agent has access to tools and data, a key test is its ability to resist agentic attacks that learn what does and doesn’t work, and adapt accordingly.
From there, governance means more than mere supervision; it means encoding organizational values, risk thresholds, escalation paths, and ‘stop’ conditions into agents’ operational DNA. Think of it as digital onboarding. But instead of slide decks and HR training, these agents carry embedded culture codes that define how they act, what boundaries they respect, and when to ask for help.
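As a rough illustration of what those embedded boundaries might look like in practice, the sketch below defines a small, reviewable ‘charter’ that is checked before each action: which tools the agent may use, how much it may spend, what triggers a hard stop, and who it escalates to. Every name, tool and limit here is a hypothetical example, not a standard or a vendor implementation.

```python
from dataclasses import dataclass, field

# A minimal sketch of 'digital onboarding': all names, tools and limits below
# are hypothetical. The point is that an agent's boundaries live in reviewable
# configuration, not in the model's judgement alone.

@dataclass
class AgentCharter:
    role: str
    allowed_tools: set = field(default_factory=set)             # what it may touch
    spend_limit_usd: float = 0.0                                 # risk threshold
    escalation_contact: str = "procurement-lead@example.com"     # escalation path
    stop_phrases: tuple = ("terminate contract", "share customer data")  # hard stops

    def permits(self, tool: str, action_summary: str, cost_usd: float) -> str:
        """Check a proposed action against the charter before it runs."""
        if tool not in self.allowed_tools:
            return f"blocked: '{tool}' is outside this agent's charter"
        if any(p in action_summary.lower() for p in self.stop_phrases):
            return f"stopped: escalate to {self.escalation_contact}"
        if cost_usd > self.spend_limit_usd:
            return f"paused: ${cost_usd:,.2f} exceeds the spend limit, ask {self.escalation_contact}"
        return "allowed"

charter = AgentCharter(
    role="supplier-negotiation agent",
    allowed_tools={"erp.read", "email.draft"},
    spend_limit_usd=10_000,
)
print(charter.permits("email.draft", "Propose revised payment terms", 0))       # allowed
print(charter.permits("erp.write", "Update supplier master record", 0))         # blocked
print(charter.permits("email.draft", "Terminate contract with supplier X", 0))  # stopped
```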
As autonomous agents climb the (virtual) corporate ladder, the real risk isn’t adoption – it’s complacency. Businesses that treat AI agents as tools rather than dynamic, accountable team members will face escalating failures and eroding customer trust.
Build cross-functional governance from day one
No smart business would let a fresh grad run a billion-dollar division on day one. Likewise, no AI agent should be allowed to enter mission-critical systems without undergoing structured training, testing, and probation. Enterprises need to map responsibilities, surface hidden dependencies, and clarify which decisions need a human in the loop.
For example, imagine a global operations unit staffed by human analysts, with AI agents autonomously monitoring five markets in real-time, and a machine supervisor optimizing output across all of them. Who manages whom – and who gets credit or blame?
And what of performance? Traditional metrics, such as hours logged or tasks completed, don’t capture the productivity of an agent running hundreds of simulations per hour, testing and iterating at scale and creating compounding value.
To help surface and answer these questions, many businesses are hiring Chief AI Officers and forming AI steering committees with cross-department representation. Teams can collaboratively define guiding principles that align not only with each part of the business but with the company as a whole.
A well-configured agent should know when to act, when to pause, and when to ask for help. That kind of sophistication doesn’t happen by accident; it requires a proactive approach to security and governance.
This isn’t just a technical evolution; it’s a test of leadership. The companies that design for transparency, adaptability, and AI-native governance will define the next era. NHRs aren’t coming, they’re already here. The only question is whether we’ll lead them or be led by them.