6 No-BS Deployment Tips from the Agentic AI Frontlines

For one of New York’s most celebrated jewelry brands, luxury products go hand in hand with a curated white-glove experience. The company recently turned to agentic AI to scale the unmatched customer service it’s famous for, but quickly ran into a common obstacle: latency. Customers were sometimes waiting as long as 15 seconds between agent responses.

With no obvious solution, Salesforce’s Forward Deployed Engineers (FDEs) stepped in to help. FDEs are technical experts who work directly with customers to unblock deployment challenges. Armed with deep technical expertise and embedded in the product team, they provide feedback that directly shapes the Agentforce roadmap. They soon diagnosed two problems: overly prescriptive Apex code was bogging down the system, and a known bug was adding lag to each token before it was sent back to the customer. Partnering with Salesforce engineering, the FDE team devised a solution that cut latency from 15 seconds per response to 3 or 4.

Though each customer faces their own unique challenges in bringing AI agents from pilot to production, every deployment can benefit from the battle-tested best practices gleaned from countless FDE engagements. Below are 6 developer-level tips from the agentic frontlines to unblock your Agentforce deployments.

  1. Effective Topic and Action Design: Focus on creating strong topics and actions that reliably produce the desired output. Think of each topic as a bucket of actions, while instructions are a way to further define the topic. You shouldn’t rely on instructions to perfect your outputs since this can confuse the LLM and lead to inconsistent or hallucinated responses. Best practices for topics and actions are available in the Agentforce Guide to Topics, Instructions and Actions. Always bear in mind these three best practices:
    • Use short, clear instructions to add nuance to agent responses.
    • Let actions handle deterministic logic.
    • Don’t use instructions as a backdoor to prompt the LLM to behave a certain way. Instead, use a Prompt Template. 
  2. Understanding Agent Learning and Improvement: While Agentforce provides a robust suite of analytics tools and ways to collect feedback, agents won’t automatically learn from this data. Agentforce employs an AI-assisted human-in-the-loop model where humans capture feedback and manually update the agent within the planner. The goal is to improve the agent based on user utterances and turn-by-turn responses, although the LLM does not retain knowledge from context variables beyond individual conversations. While tools like Omni Supervisor and Interaction Explorer allow customers to audit agent behavior and quickly identify shortfalls, fixing these gaps still requires human intervention.
  3. Leveraging Context and Custom Variables for Agent Memory: Customers often face scenarios where data needs to be stored and referenced for later use, such as retrieving data and then performing an action with it, or mapping variables to action inputs and outputs. To do this, you can use context and custom variables as a form of memory to connect data to subsequent actions. Custom variables are particularly helpful for deterministic logic. To maintain agent memory beyond the default 5-6 turns, you can store context in a custom variable (e.g., `currentKnowledge`) and reference it throughout the conversation. This effectively creates a persistent knowledge base during the agent’s runtime, enabling the agent to reference responses from earlier in the conversation and addressing the challenge of the agent being unable to remember information from, say, 30 turns ago.
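The custom-variable memory pattern above can be sketched in plain Python. This is an illustrative sketch only, not Agentforce code: the variable name `currentKnowledge` comes from the tip, while the `ConversationMemory` class and its methods are invented for the example.

```python
# Illustrative sketch of the custom-variable memory pattern. The variable
# name `currentKnowledge` is from the tip above; the class and method
# names are hypothetical, not part of any Salesforce API.

class ConversationMemory:
    """Accumulates facts from each turn so later actions can reference
    information from well beyond the model's recent-turn window."""

    def __init__(self):
        self.variables = {"currentKnowledge": []}

    def record(self, fact: str) -> None:
        # Append each salient fact; the list persists for the whole session.
        self.variables["currentKnowledge"].append(fact)

    def context_for_action(self) -> str:
        # Serialize accumulated knowledge so a subsequent action can use it.
        return "; ".join(self.variables["currentKnowledge"])

memory = ConversationMemory()
memory.record("Customer order #1042 is delayed")
memory.record("Customer prefers email follow-up")
print(memory.context_for_action())
# → Customer order #1042 is delayed; Customer prefers email follow-up
```

The key idea is that the accumulated variable, not the LLM’s turn window, is the durable store: every action reads from and writes to it for the lifetime of the session.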
  4. Strategic Use of Structured Responses: While LLMs excel at natural language generation, they generally don’t provide consistently structured responses. Unlike a rules-based chatbot that returns the exact same answer every time, Agentforce responses will be unique each time. Forcing an agent to generate a structured response adds considerable complexity to instructions and prompting, which should be avoided unless there’s a clear business justification. If a structured response is necessary, use Prompt Builder and test different models. Every LLM has unique performance variations in latency, structured response output, and token count; choose the one that works best for your use case. If you need deterministic logic or outcomes, use Flows or Apex to handle the business logic and map custom variables to inputs and outputs. Store the outcome in a custom variable and output it to the agent to be used in the next action.

For use cases like Order Management, avoid using prompts to create logic mappers such as if/else branching or for loops; implement these in Apex or Flow instead. LLMs and instructions are not well suited to business-specific deterministic mappings, math, or calculations. When working with data, moving that logic out of the prompt and into Flow or Apex is what guarantees a deterministic result.
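To make the distinction concrete, here is a minimal sketch of the kind of if/else business mapping the text says belongs in Flow or Apex rather than in LLM instructions. It is written in Python purely for illustration; the function name, thresholds, and tier labels are all invented for the example.

```python
# Hedged sketch: deterministic business logic kept out of the prompt.
# The function, thresholds, and tier names are hypothetical examples,
# standing in for logic you would implement in Apex or Flow.

def shipping_tier(order_total: float, is_member: bool) -> str:
    # Pure, testable logic: the same input always yields the same output,
    # which an LLM prompt cannot guarantee.
    if is_member or order_total >= 100.0:
        return "free_express"
    elif order_total >= 50.0:
        return "free_standard"
    return "paid_standard"

# Per the tip above, the result would be stored in a custom variable and
# handed back to the agent as grounded input for its next action.
print(shipping_tier(120.0, False))  # → free_express
print(shipping_tier(60.0, False))   # → free_standard
print(shipping_tier(10.0, False))   # → paid_standard
```

Because the branching lives in code, it can be unit-tested and audited, and the agent only ever sees the final, already-correct outcome.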

  5. Simulating Topic Sequencing: Though topic chaining isn’t natively supported, you can work around this by prompting the user for a confirmation (e.g., a yes/no question like “Can we escalate this conversation to a live agent?”). This allows for a deterministic Flow, useful for scenarios like case creation or escalation. Prompting for confirmation is particularly important for topics that can’t be easily reverted.
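The confirmation-gate workaround can be sketched as a small routing function. This is an illustrative pattern demo, not Agentforce code; the function name and route labels are invented.

```python
# Illustrative sketch of the confirmation-gate pattern: a yes/no answer
# deterministically routes to the next step (e.g., an escalation Flow)
# rather than relying on the LLM to chain topics on its own.
# Function and route names are hypothetical.

def route_after_confirmation(user_reply: str) -> str:
    """Normalize the user's reply and branch deterministically."""
    reply = user_reply.strip().lower()
    if reply in {"yes", "y", "sure", "please"}:
        return "escalate_to_live_agent"  # e.g., kick off an escalation Flow
    return "continue_with_agent"

print(route_after_confirmation("Yes"))  # → escalate_to_live_agent
print(route_after_confirmation("no"))   # → continue_with_agent
```

Because the routing decision is made in code from an explicit confirmation, an irreversible step like escalation or case creation only fires when the user has clearly opted in.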
  6. Ensuring Clear Descriptions for Standard Actions: For standard actions like querying records, always ensure that objects and fields have clear and precise descriptions. Good data leads to good responses. Whether it’s CRM records, Data Cloud data, knowledge, or external data, all should have clear metadata, descriptions, fields, and content to ensure proper grounding. One common issue for many customers is descriptions that match one item when the intent is another, like “Work Order line items” versus “Order Line items.”
