Best Practices for Secure Agentforce Implementation

AI agents are transforming how businesses operate — automating workflows, enhancing user experiences, and scaling support. But without a strong security foundation, generative AI (GenAI) agents can introduce serious risks, from data exposure to unauthorized behavior.

To help mitigate these risks, Agentforce, the agentic layer of the Salesforce Platform for deploying autonomous AI agents across any business function, provides a structured framework for running those agents safely, effectively, and at scale. In this two-part blog series, we begin with the five foundational attributes of Agentforce security and the best practices that apply to each.

5 foundational attributes of secure agents

1. Role

A role defines the AI agent’s persona, scope, and objectives, and sets the foundation for secure interactions. Developers must carefully define the agent’s topics — predefined conversational themes and tasks — that shape its behavior and permissions.

Key security considerations include:

  • What is the agent’s job and scope? Define the agent’s purpose and responsibilities clearly. Is it answering customer support questions, assisting sales teams, or retrieving internal documentation? A tightly scoped role reduces the risk of the agent responding to out-of-scope or unauthorized requests.
  • Who is the target audience? Determine whether the agent will serve public visitors, internal employees, or authenticated customers. Access control policies will differ based on the audience’s trust level and required permissions.

  • Where is the agent deployed? Identify the deployment environment, such as a website, mobile app, or internal system. Each environment introduces unique security considerations, especially related to data exposure and endpoint protection.

Clearly defining each role and its scope lays the foundation for appropriate access controls, minimizing unintended behavior and ensuring that access rights are granted correctly.
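
To make the idea concrete, a tightly scoped role can be modeled as a topic allowlist with deny-by-default behavior. The sketch below is a hypothetical Python illustration, not the Agentforce configuration API, and the topic names are assumptions:

```python
# Hypothetical sketch: a tightly scoped role expressed as a topic allowlist.
# Topic names are illustrative; this is not how Agentforce is configured.

ALLOWED_TOPICS = {"order_status", "shipping_policy", "product_faq"}

def is_in_scope(classified_topic: str) -> bool:
    """Deny by default: only topics tied to the agent's role are handled."""
    return classified_topic in ALLOWED_TOPICS
```

Anything the classifier maps outside the allowlist is refused rather than answered, which is the behavior a tightly scoped role aims for.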

2. Data

Agents run on data, which informs their decisions and interactions. But with that intelligence comes risk: agents that access too much data, or the wrong kind, can expose sensitive information and compromise trust.

Agentforce allows agents to consume data from CRM, Data Cloud, and third-party sources, offering flexibility and customization. That flexibility demands strong governance to maintain security and enforce proper data boundaries.

To protect your data and reduce exposure risk, focus on three core strategies:

  • Data Access: Know what the agent accesses and what it exposes in responses. For example, a support agent trained on CRM case histories shouldn’t access financial records. Unintended exposure can lead to compliance violations or user mistrust.
  • Data Governance: Align data usage with your organization’s management policies. Ensure agents respect boundaries between public, confidential, and restricted data.
  • Data Minimization: Provide only the data the agent truly needs. Avoid loading “nice-to-have” datasets that increase risk without adding core value.

Following these principles helps protect sensitive information while preserving the agent’s effectiveness.
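
Data minimization, in particular, lends itself to a simple pattern: pass the agent only an explicit allowlist of fields from each record. The following is a hypothetical Python sketch with illustrative field names, not a real Agentforce or Data Cloud API:

```python
# Hypothetical sketch of data minimization: expose only an explicit allowlist
# of fields to the agent. Field names are illustrative assumptions.

CASE_FIELDS_FOR_AGENT = {"case_id", "subject", "status"}

def minimize(record: dict) -> dict:
    """Drop every field the agent does not strictly need."""
    return {k: v for k, v in record.items() if k in CASE_FIELDS_FOR_AGENT}
```

An allowlist (rather than a denylist) means newly added sensitive fields stay hidden by default.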

3. Actions

Actions are the “muscle” of the AI agent — predefined tasks like retrieving information, updating records, or initiating workflows. These tasks are carried out using tools such as Flow, Prompt Templates, or Apex, based on instructions from the agent or user input.

In Agentforce, actions are categorized as:

  • Public Actions: These can be performed on behalf of anyone, often without authentication. For example, retrieving business hours or responding to FAQs. Public actions should be tightly scoped to minimize risk.
  • Private Actions: These involve sensitive tasks that require identity verification, such as updating account details or accessing personal records. Agents should be configured to confirm the user’s identity before performing these actions. This can be accomplished using the Customer Verification topic and restricting access to specific topics and actions until verification is complete. Verification requirements can be tailored based on the sensitivity of the action and the organization’s security policies.

Because actions directly affect systems and data, they must align with the agent’s defined role and access controls. Poorly scoped or overly permissive actions can bypass protections established at the data or channel level.
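
The public/private split described above can be sketched as a small authorization check, where private actions run only after identity verification and unknown actions are denied by default. This is a hypothetical Python illustration; the action names and the verification flag are assumptions, not Agentforce APIs:

```python
# Hypothetical sketch of the public/private action split: private actions are
# gated on identity verification (mirroring the Customer Verification pattern),
# and anything outside the agent's defined role is denied by default.

PUBLIC_ACTIONS = {"get_business_hours", "answer_faq"}
PRIVATE_ACTIONS = {"update_account_details", "get_personal_records"}

def can_run(action: str, user_verified: bool) -> bool:
    if action in PUBLIC_ACTIONS:
        return True
    if action in PRIVATE_ACTIONS:
        return user_verified  # only after identity verification
    return False  # unknown action: deny by default
```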

4. Guardrails

While roles, data, and actions define the agent’s core capabilities, guardrails set the operational boundaries, guiding what the agent can and can’t do as it interacts with users. These natural-language instructions serve as dynamic safeguards that help prevent misuse, policy violations, and unintended outputs. In Agentforce, guardrails form a critical line of defense in maintaining AI security, especially in scenarios where the agent must interpret ambiguous or complex requests.

Key guardrail mechanisms include:

  • Supervisory Large Language Models (LLMs): These act as runtime monitors, scanning prompts, responses, and context to detect noncompliant behavior or potential risks before they escalate.
  • Einstein Trust Layer: Salesforce’s built-in security layer enforces protective measures such as Secure Data Retriever, Zero Data Retention agreements with model providers, toxicity filtering, and access restrictions across all agent interactions.
  • Agent Instructions: Natural-language prompts that define what the agent should and shouldn’t do. Clear, specific instructions help prevent the generation of harmful, biased, or out-of-scope responses.

Think of runtime guardrails not as limitations, but as proactive protections that let AI agents operate confidently within safe, predictable boundaries. Together, these mechanisms create a flexible yet robust Agentforce security layer that adapts as your deployment evolves.
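
The shape of a runtime guardrail can be illustrated with a screening step applied to every drafted response before it reaches the user. In practice a supervisory LLM and the Einstein Trust Layer perform this classification; in the hypothetical Python sketch below, a keyword denylist stands in for that check, and the patterns and fallback message are illustrative assumptions:

```python
# Hypothetical sketch of a runtime guardrail: screen a drafted response before
# it reaches the user. A keyword denylist stands in for the supervisory-LLM
# classification used in practice. Patterns and fallback are illustrative.

BLOCKED_PATTERNS = ("social security number", "internal use only")
FALLBACK = "Sorry, I can't share that information."

def apply_guardrail(draft: str) -> str:
    lowered = draft.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        return FALLBACK  # replace the noncompliant draft entirely
    return draft
```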

5. Channel

A channel is the platform or interface where users interact with the AI agent — such as a website, mobile app, Slack workspace, or customer portal. The agent itself may appear in different forms, like a chatbot or embedded UI, depending on the channel. Each environment presents unique security considerations based on user access and exposure risk.

Key considerations for secure deployment include:

  • Endpoint Protection: Secure every deployment endpoint — whether public-facing or internal — using authentication, encryption, and access controls. Tailor protections based on the channel’s exposure level and risk profile.
  • Interface Risks: Public channels, like websites or social platforms, are more vulnerable to spam, adversarial prompts, or social engineering attempts. Internal tools, while less exposed, also require safeguards to prevent unauthorized access or privilege escalation.
  • Standard Security Protocols: Apply application security best practices across all interfaces. This includes HTTPS, session timeouts, secure cookie handling, input sanitization, and regular penetration testing.
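
One of those standard protocols, input sanitization, can be sketched in a few lines: cap the message length and escape HTML before user text reaches the agent or is echoed into a page. This is a hypothetical Python illustration using the standard library; the length limit is an assumption, and a production channel would layer further validation on top:

```python
import html

# Hypothetical sketch of channel-level input sanitization: cap message length
# and escape HTML before the text reaches the agent. The limit is illustrative.

MAX_INPUT_CHARS = 2000

def sanitize_user_input(raw: str) -> str:
    trimmed = raw.strip()[:MAX_INPUT_CHARS]
    return html.escape(trimmed)  # neutralize markup in echoed output
```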

Together, these controls ensure that every channel aligns with Agentforce security principles, so users can engage with AI agents in a safe, trusted, and well-governed environment.

Security shouldn’t be bolted on — it should be built in. By applying core Agentforce security principles to the five foundational attributes, organizations can deploy secure AI agents that operate safely, predictably, and within defined limits. These decisions help minimize risk while maximizing performance, control, and trust.

In Part 2 of this series, we will explore how to put these principles into action, with best practices for governance, permissions, testing, and real-time guardrails that help keep Agentforce secure as your AI strategy evolves and scales.

Salesforce’s security resources

Learn more: Video: 5 Easy Steps for Secure Agentforce Implementation
