Agentforce is a platform for building trusted AI agents for the enterprise. But when one leading global consulting firm started its implementation, it hit a snag. Jargon, acronyms, and other institutional knowledge weren’t being interpreted by the agent as expected.
Queries like “Who’s the KSDM for opportunity ABC?” or “Show me opportunities where Company Y is a competitor” returned incorrect answers. The Large Language Model (LLM) powering Agentforce couldn’t connect the dots between the users’ natural language questions and the data in their CRM. But the team didn’t have time to undertake complex data transformation or add technical debt to the project through changes to the underlying metadata.
Our solution? We created a custom action to rewrite queries with additional context. Now, when a user asks nuanced questions, our expanded instructions tell the firm’s AI agents how to navigate the questions and resolve the queries correctly.
This is a common challenge with AI agents, and we can use AI itself to solve it.
This approach takes some time up front writing and testing instructions, but in this firm’s case, it was a more efficient way to give agents context than mapping new fields or cleaning up legacy data.
Below, we’ll share the nine steps that make this custom action work. Tailor your own custom actions to give your agents context for nuanced questions, mitigate misunderstandings, and deliver accurate responses. Let’s dive in!
9 steps to add context to user queries
We used the nine steps below to write instructions in the final prompt template.
Step 1. Identify and classify common queries
First, it’s key to identify the types of questions your users ask that the LLM will get wrong unless you provide more context and instruction for how to answer. Categorizing those questions will give you the initial outline for your instructions. In our case with the consulting firm, the prompt template covers questions related to:
- Team Roles
- Deal Competitors
- Deal Stages Grouping (Early Stage Deals, Won, Lost, Pipeline)
We numbered those terms to structure our instructions in the prompt template below, and included context and sample questions related to each area.
Step 2. Understand the data model
Once you’ve identified the types of questions returning wrong answers, study your data model and CRM schema to see where the wrong answers came from, and identify the path to correct answers. Pay attention to the following (a sketch capturing these notes follows the list):
- Object relationships, like Opportunity to Team Roles, custom territories, and other business concepts
- Field names and picklist values, like Opportunity_Grouping_Role__c, StageName
- Valid field values, like Opportunity stage = “06 - Closed - Won”, Roles = “RFP Designer”
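One lightweight way to keep these findings handy for the mapping work in the next step is to record them as structured notes. Below is a minimal Python sketch; the dictionary layout is our own convention, and the object, field, and value names are simply the ones called out above.

```python
# Illustrative schema notes captured while studying the data model.
# Object, field, and value names come from the article; the layout is our own.
SCHEMA_NOTES = {
    "Opportunity": {
        "fields": {
            "StageName": ["06 - Closed - Won"],  # one valid stage value
        },
        "related_objects": {
            "Opportunity_Grouping_Role__c": {
                "fields": {
                    "Primary__c": [True, False],
                    "Roles": ["RFP Designer"],  # sample valid role value
                },
            },
        },
    },
}
```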
Step 3. Define keyword-to-model mappings
Map natural language phrases to data fields and values.*
- Example: “won deals” → StageName="06 - Closed - Won"
- Example: “KSDM” → Primary__c = true on Opportunity_Grouping_Role__c
Include synonyms or variations users may provide, like “deals” and “targets.”
*Note: In some scenarios, it makes sense to simply include the mappings in the instructions you give the agent. For the consulting firm’s use case, we instead recommend mapping natural language phrases to data fields in the prompt template, as it’s more effective and manageable with complex and numerous use cases.
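As a hedged sketch, these mappings can live as plain data that your prompt template instructions are generated from. The field names and values below come from the examples above; the dictionary format and the normalize helper are illustrative, not part of Agentforce.

```python
# Keyword-to-model mappings kept as plain data, plus synonyms users may say.
# Field and value names are from the examples above; the format is our own.
KEYWORD_MAPPINGS = {
    "won deals": "StageName = '06 - Closed - Won'",
    "KSDM": "Primary__c = true on Opportunity_Grouping_Role__c",
}

# Normalize synonyms before looking up a mapping.
SYNONYMS = {
    "targets": "deals",
}

def normalize(utterance: str) -> str:
    """Replace known synonyms so mappings match more phrasings."""
    for synonym, canonical in SYNONYMS.items():
        utterance = utterance.replace(synonym, canonical)
    return utterance
```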
Step 4. Give examples to structure the prompt output
Provide examples to expand vague user queries into precise, structured prompts that:
- Reference the correct object and field names
- Include necessary filter conditions
- Follow a consistent, SOQL-friendly phrasing style
Example:
Original utterance: “Who is the KSDM for ACME?”
Expanded: “Retrieve the Key Sales Domain Manager from Opportunity_Grouping_Role__c for opportunity ACME, filtering by Primary__c = true.”
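One way to manage these pairs is to keep them as few-shot data and render them into the prompt template. A minimal sketch, reusing the example above; the render_examples helper is hypothetical.

```python
# Few-shot pairs: vague utterance -> precise, SOQL-friendly expansion.
FEW_SHOT_EXAMPLES = [
    (
        "Who is the KSDM for ACME?",
        "Retrieve the Key Sales Domain Manager from Opportunity_Grouping_Role__c "
        "for opportunity ACME, filtering by Primary__c = true.",
    ),
]

def render_examples(pairs):
    """Format the pairs for pasting into the prompt template's instructions."""
    return "\n".join(f'Input: "{q}"\nOutput: "{a}"' for q, a in pairs)
```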
Step 5. Explicitly define how to handle special cases
Specify how to resolve key business concepts such as:
- Key Sales Domain Manager (KSDM)
- Key Lost to Competition
- Stage groupings like Pipeline, Qualified, or Post-Processing
Example: Key Lost to Competition is a special case for competitors: it is the single competitor to which the deal was lost. The value is stored on the Opportunity object in the Key_Lost_To_Competition__c field, which is a reference to the Account object. (A sketch of this rule follows the example inputs below.)
Example inputs for questions about the Key Lost to Competition use case:
"Who is the key lost to competition on the ACME deal?"
"Who is the key lost to competition on the ACME target?"
Step 6. Test with real-world queries
We used four testing tactics that honed our prompt template to successfully interpret natural language questions; a sketch of the success-rate tracking follows the list.
- Validate your expansions using actual user inputs.
- Iterate on the mappings and structure to handle edge cases or unexpected phrasing.
- Develop a business-approved list of common user utterances and test iteratively via the Testing Center.
- Document and track success rates for each run after every release.
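Here is a minimal sketch of the tracking step. The call_prompt_template parameter is a hypothetical stand-in for invoking the custom action; in practice, the Testing Center drives these runs.

```python
# Hypothetical regression loop for business-approved utterances.
# call_prompt_template stands in for invoking the custom action.
def run_test_suite(cases, call_prompt_template) -> float:
    """Return the success rate: share of expansions containing the expected filter."""
    passed = 0
    for utterance, expected_fragment in cases:
        expanded = call_prompt_template(utterance)
        if expected_fragment.lower() in expanded.lower():
            passed += 1
    return passed / len(cases)

CASES = [
    ("Who is the KSDM for opportunity ACME?", "Primary__c = true"),
    ("Show me won deals", "06 - Closed - Won"),
]
```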
Step 7. Keep expansions concise
Limit expanded prompts to 50 words or fewer. Focus only on the necessary details to avoid overcomplicating the query.
For example, in the first paragraph of the consulting firm’s instructions below, you’ll see this line: “The output should include enough details to generate a SQL query, while remaining under 50 words.”
Step 8. Document and operationalize
Document your intent categories, mappings, and expansion patterns.
Ensure this documentation is accessible for future maintenance and scaling.
Integrate your expansions into the query generation workflow as a preprocessing step.
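A minimal sketch of that preprocessing step, where expand_utterance and query_records are hypothetical stand-ins for the Add More Business Context prompt template and the QueryRecords action:

```python
# The expansion runs as a preprocessing step, before query generation.
# expand_utterance and query_records are hypothetical stand-ins for the
# Add More Business Context prompt template and the QueryRecords action.
def answer(utterance: str, expand_utterance, query_records):
    expanded = expand_utterance(utterance)  # add business context first
    return query_records(expanded)          # query against the enriched prompt
```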
Step 9. Connect the prompt template to an agent
To invoke this action, add an instruction like this one to the Employee Agent instruction set.
Add_More_Business_Context must be called at all times. Add_More_Business_Context must be called prior to calling any other action. Add_More_Business_Context must be called before calling IdentifyObjectByName, QueryRecords, QueryRecordsWithAggregate action at all times.
Note: Agents take absolutes like “always” and “never” very literally. We are intentionally using “at all times” in these instructions.
Bring it all together: One consulting firm’s use case
Now let’s look at the consulting firm’s full prompt template created with the nine steps above.
First, we created a new flex prompt template.
Template name: Add More Business Context
Description: Enhances user utterances by incorporating relevant business context specific to opportunity management, enabling more accurate and context-aware responses.
Source name: User Input
Source Type: Free Text

Next, in the prompt template workspace, enter the instructions we developed with the nine steps.

Below is the full text of the instructions that worked for the consulting firm’s opportunity management use case.
Instruction
When a user asks a question, analyze the prompt to determine the user’s intent and, using the instructions below, add more context to the user utterance to be used by a natural language query engine. Use relevant keywords from the following categories (Team Roles, Competitor, Opportunity Stages) to enrich the prompt and ensure it’s optimized for generating a SQL query. The output should include enough details to generate a SQL query, while remaining under 50 words.
User input: {!$Input:User_Input}
--- Instructions ---
Generate output text based on any one or any combinations of cases provided below. The generated output must be easily interpreted by the QueryRecords action.
1. Team Roles: Opportunity role data is stored in a related object and includes multiple role types. Users may ask about roles generally or about specific types like Sales Domain Manager (SDM). The AI should interpret the request, match role types, and retrieve corresponding values accordingly.
The values for opportunity roles are stored in the Opportunity_Grouping_Role__c field on a related object called LJ_Opportunity_Roles__c. The opportunity role values include values like Sales Domain, Sales Domain - Director, Sales Domain - Other, etc. Interpret the request by matching these values to the user’s request and retrieve the corresponding values accordingly.
Detect role-related requests: If the prompt includes terms like "sales roles", "opportunity roles", or named roles (e.g., "RFP Builder"), match against values in the LJ_Role__c field.
Filter the related object LJ_Opportunity_Roles__c using these role values.
Return the corresponding LJ_TeamID values.
Key Sales Domain Manager or KSDM is a special case; retrieve these values from the LJ_KeyTeam__c field on the related object called Opportunity_Grouping_Role__c where the Primary_Team_Member__c is set to true.
Handle KSDM requests separately by filtering for PrimaryInd__c = true and returning OM_TeamID__c.
Example inputs for questions about Sales and Account Team Roles:
"Who is the KSDM for Opportunity ACME LLC?"
"Who is the Sales Domain Manager for Opportunity ACME LLC?"
"Who is the RFP Builder for Opportunity ACME LLC?"
Example output about Team Roles:
"Retrieve the RFP Builder role from the LJ_Opportunity_Roles__c object for the opportunity identified as Acme Auto, specifically filtering by the LJ_Role__c field for the value 'RFP Builder.'"
2. Competitor: When a user asks for the competitors on an opportunity, extract the competitor information from the Competitor__c field on a related object called LJ_Competitor__c, and return the associated account names.
The values of an opportunity competitor are accounts like Mercedes-Benz, Audi, Volkswagen, Toyota, and more. Detect competitor-related prompts. Keywords to match: competitor, rival on the opportunity. Query the related object: LJ_Competitor__c.
Retrieve the value in the Competitor__c field for each record.
Resolve each Competitor__c value to its associated Account.Name.
Return a list of competitor account names involved with the opportunity.
Example inputs for questions about competitors:
"What accounts are the competitors on the Acme Auto opportunity?"
"List the accounts or competitors on the Acme Auto."
"Show me opportunities where Company A is a competitor."
Example output for questions about competitors:
"Retrieve the competitor accounts from the LJ_Competitor__c object for the Acme Auto opportunity, specifically extracting values from the Competitor__c field and resolving them to their associated Account.Name."
3. Stages: Opportunity StageName is a field on the Opportunity object (Opportunity.StageName). The values available in StageName are FA - New, FB - Processing, FC - Sent, FD - Reviewing, FE - Approved, GA - Financial Recording, GB - Financial Record - Dummy. Keywords such as Pipe, Processed, Not Pipe, Active, Won, or Lost describe a predefined group of opportunity stages.
When parsing user prompts related to opportunity stages: Detect if the prompt includes any keywords listed above. Replace the keyword in the prompt with the associated list of StageName values.
Pipeline - FA - New, FB - Processing, FC - Sent, FD - Reviewing, FE - Approved, GA - Financial Recording, GB - Financial Record - Dummy
Qualified - FB - Processing, FC - Sent, FD - Reviewing, FE - Approved, GA - Financial Recording
Unqualified - Only FA - New
Active - FA - New, FB - Processing, FC - Sent, FD - Reviewing, FE - Approved, GA - Financial Recording
Won - GA - Financial Recording, GB - Financial Record - Dummy
Example inputs for questions about opportunity stages:
Give me a list of qualified opportunities?
What is the total amount of opportunities won this year?
--- Response Instructions ---
Carefully analyze the user's prompt to identify relevant keywords and intent. Generate a response with sufficient context for SQL query generation, while keeping the response concise (under 50 words). The output should contain necessary details, such as relevant field names, values, and conditions, formatted for easy SQL query creation.
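To see how the stage groupings play out end to end, here is a hedged sketch of what the second stage example input above might resolve to downstream. The Won stage list is copied from the template; the SOQL is our assumption about what a query action could generate from the enriched prompt.

```python
# Stage groupings mirrored from the template above.
STAGE_GROUPS = {
    "Won": ["GA - Financial Recording", "GB - Financial Record - Dummy"],
}

def won_amount_soql() -> str:
    """Hypothetical SOQL behind 'What is the total amount of opportunities won this year?'"""
    stages = ", ".join(f"'{s}'" for s in STAGE_GROUPS["Won"])
    return (
        "SELECT SUM(Amount) FROM Opportunity "
        f"WHERE StageName IN ({stages}) AND CloseDate = THIS_YEAR"
    )
```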
Recap: Common challenges and solutions
This table summarizes key areas where we see agents misunderstand queries, and how we recommend solving for each.
| Challenge | Solution |
| --- | --- |
| Misinterpretation of unique terms and fields used in a company’s CRM. | Create a custom action to rewrite queries with additional context before calling QueryRecords. Use Prompt Builder to rewrite the query. Use topic instructions to explain custom objects and other unique terms and fields. |
| Lack of understanding of business rules in user queries leads to incorrect results. | Use few-shot examples in topic instructions. Few-shot prompting builds in two or more examples, which helps agents recognize patterns and handle more complex tasks. |
| Inconsistent output for similar queries leads to uncertainty. | Use flows to build deterministic steps for consistent and accurate query processing. |
How will Salesforce natively solve the challenge of business nuances?
Our custom action approach is a stop-gap solution to help orient your new AI agent teammates to your business-specific nuances and jargon.
Salesforce is already working on built-in ways to enhance AI agent understanding of corporate jargon and business rules implicit in user queries. These include:
- Data Prism and Metadata Studio: A set of automated and UI tools to better interpret entity and property names. This will work across CRM and Data Cloud entities.
- Few-Shot Learning: Enhances the system’s ability to understand business rules with a few examples.
These solutions to improve the accuracy and reliability of AI-driven CRM processes are in development or customer pilot testing as of May 2025. Stay tuned for their release!