Proactively mitigating the risks of Agentic AI
ChatGPT introduced generative AI (GenAI) as an experimental tool for generating conversational text, and that alone was enough to catch the world’s attention and spur enterprise investment. Now the technology is evolving to the next step with Agentic AI, a more powerful class of system that can not only analyze and generate content, but also take actions without human intervention. Businesses are already adopting this new automation tool.
But as AI shifts from augmenting human decisions to taking actions on its own, organizations must consider the security risks that come with empowering AI with this new agentic capability.
According to Bell’s Navigating the generative AI and cybersecurity journey: from perils to profits report, almost half of organizations surveyed are already integrating GenAI into specific business functions, with IT, customer support and security leading the way. Yet 60% of organizations admit they don’t know where most of their sensitive data is stored. That exposes them to the risk that AI could access sensitive data it was never intended to touch. While this is a concern for GenAI functions, the risk increases significantly with autonomous Agentic AI.
The risk is clear: Agentic AI agents could access sensitive data and take actions with it without the organization even being aware. Survey takers also indicate they don’t have confidence that AI will make the right decisions. We know that AI can “hallucinate” or make mistakes in its output, which could lead to incorrect actions as well. Organizations rank AI taking incorrect actions as the third-highest overall risk, behind only security vulnerabilities and privacy violations. Yet most professionals say their organizations are willing to accept the risk that AI will make something happen that it shouldn’t.
By empowering AI to take actions across different business functions, are organizations opening the door to a whole new set of risks?
AI that acts on its own
Our research found that organizations aren’t just experimenting with GenAI and Agentic AI; many are already letting these systems take real actions in critical business functions, and others plan to do so in the near future.
- In IT, 47% of organizations already permit GenAI to access applications and data to complete tasks. Another 38% say they are piloting it, while 11% say they aren’t using it yet, but plan to do so in the future.
- In customer support, 41% of organizations say AI already responds to customer inquiries. Another 41% are at the piloting stage, while 14% are considering it for the future.
- In cybersecurity, 41% already use AI agents, 40% are in pilot, and a further 15% are considering their use in the future.
The impacts are not limited to these areas. From finance to marketing to HR, many organizations say they allow AI to complete tasks that a human would previously have handled. This raises important questions: how much risk are organizations incurring with this new level of automation? And how much risk should they be willing to tolerate?
How much can organizations trust AI?
Giving AI the ability to act autonomously means granting it system permissions that would normally be reserved for human employees. But does AI understand when it should act, and more importantly, when it shouldn’t? Could an organization detect if a bad actor has poisoned an AI model that’s key to their security?
Imagine an AI agent tasked with detecting and blocking cyber threats. If it mistakenly isolates a mission-critical server, it could disrupt legitimate operations. If a customer service AI agent automatically processes fraudulent refund requests, the business will suffer financial losses. With AI agents taking actions today, these sorts of risks aren’t just theoretical; they are real and must be mitigated.
Our research found that while most organizations are actively integrating GenAI into business workflows, only 40% say they have a good handle on where their sensitive data is stored. This opens up a whole host of concerns about who could have access to that unmonitored sensitive data, and what they might do with it. Agentic AI introduces another risk on top of that: agents could take incorrect actions with that sensitive data, a scary proposition.
Agentic AI exposes new threat vectors that require organizations to respond:
- Unauthorized access risks: AI systems granted administrative privileges present a high-value target for cybercriminals. A compromised AI agent could be exploited to manipulate data, approve unauthorized transactions or disable security controls.
- Decision-making errors: AI doesn’t always get it right. Without human review, it could incorrectly enforce security policies, deny legitimate access requests or misclassify threats.
- Data exposure and compliance risks: If AI agents access sensitive data and expose it, organizations could be on the hook for compliance violations related to security and privacy commitments.
How to safely pursue the benefits of Agentic AI
So, how can organizations strike the right balance of mitigating risks while still reaping the benefits of AI agents?
Our research highlights several critical best practices, illustrated in the brief sketch that follows this list:
- Limit AI system privileges: AI should be granted only the minimum access to data and systems required for its tasks. Instead of full admin rights, organizations should enforce role-based access controls to prevent AI from executing unwanted actions.
- Monitor and audit AI decisions: Businesses should implement continuous monitoring to track AI actions, flag anomalies and provide human oversight where necessary. Logging AI activities makes it possible to detect when an agent is going off the rails.
- Secure AI models against manipulation: Organizations need strict guardrails, including input validation to prevent AI agents from being exploited through prompt injection attacks.
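To make these practices concrete, here is a minimal Python sketch of how an agent’s actions might be gated with a role-based allowlist, a simple input screen and an audit log. The role names, tool names and patterns are illustrative assumptions, not recommendations from the report, and production guardrails would be considerably more robust.

```python
import logging

# Audit trail: record every action the agent attempts, allowed or not.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("ai_agent_audit")

# Least privilege: each agent role may call only an explicit allowlist
# of tools, rather than inheriting broad administrative rights.
# (Hypothetical roles and tools for illustration only.)
ROLE_PERMISSIONS = {
    "support_agent": {"lookup_order", "draft_reply"},
    "security_agent": {"read_alerts", "flag_incident"},
}

# Simple input screen: reject requests containing common prompt-injection
# markers before the agent acts on them. Real guardrails go much further.
SUSPICIOUS_PATTERNS = ("ignore previous instructions", "system prompt")


def is_suspicious(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS)


def authorize_action(role: str, tool: str, user_input: str) -> bool:
    """Gate an agent action: check the role allowlist, screen the
    input, and write an audit record either way."""
    allowed = tool in ROLE_PERMISSIONS.get(role, set())
    blocked_input = is_suspicious(user_input)
    decision = allowed and not blocked_input
    audit.info(
        "role=%s tool=%s allowed=%s suspicious_input=%s",
        role, tool, decision, blocked_input,
    )
    return decision


# Usage: a support agent may draft a reply, but not issue refunds.
if authorize_action("support_agent", "draft_reply", "Where is my order?"):
    print("Action permitted")
if not authorize_action("support_agent", "issue_refund", "Refund me now"):
    print("Action denied: outside the agent's allowlist")
```

The point is not the specific checks, but that every agent action passes through an authorization and logging layer the organization controls.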
AI agents can be used safely if organizations take a responsible approach with the right governance framework in place to mitigate risks.
Are you ready for Agentic AI?
Our research shows that the shift to Agentic AI is well underway. Organizations see the potential, and many are moving forward despite the risk. But they must proceed with sufficient governance in place, addressing security blind spots before AI autonomy scales even further.
Is your organization ready for AI agents? The answer depends on whether you’ve put the right cybersecurity measures in place for today and tomorrow.
Download the full Bell cybersecurity report to learn how organizations are securing AI-driven automation.