At Salesforce, we believe responsible AI begins with clear boundaries. We have long advocated for a risk-based approach to AI governance that prioritizes human rights and ethical considerations. That’s why we build AI in alignment with our values, and why we support frameworks like the European Union’s AI Act, which draws clear lines around practices that could harm people’s rights and safety. These prohibitions serve as a foundation for trustworthy AI and align with how we help customers use Agentforce responsibly.
Why guidelines matter in responsible AI
The EU AI Act established a set of clear “prohibited practices,” applicable since February 2025: uses of AI that pose significant risks to fundamental rights, including privacy and security. These are not just theoretical concerns; these practices are now legally banned, and the bans are grounded in international human rights principles.
In this blog, we outline what the EU AI Act prohibits and explore what it means for customers as they embark on their own compliance journeys, whether within their companies or with end users. Our hope is that you will come away with a practical understanding of these restrictions and the confidence to align your business accordingly.
EU AI Act prohibited practices
The EU AI Act bans a range of AI practices that pose unacceptable risks to fundamental rights: subliminal manipulation, exploitation of vulnerable groups, social scoring, predictive policing, indiscriminate biometric surveillance, unauthorized facial recognition, emotion recognition in sensitive contexts and categorization based on sensitive biometric data. For example, the Act restricts the use of facial recognition in public spaces due to serious concerns about privacy, accuracy and potential misuse. At Salesforce, we made the decision years ago not to allow the use of our products for facial recognition, a prohibition that remains in place today and underscores our long-standing commitment to responsible AI development.
At Salesforce, our Acceptable Use Policy and AI Acceptable Use Policy disallow these and similar use cases, and our Trust Layer includes built-in safeguards, such as bias detection, audit trails and privacy features, to help customers stay compliant. For a full list of prohibited practices, we recommend reviewing Article 5 of the EU AI Act, along with our Acceptable Use and External-Facing Services Policy and Artificial Intelligence Acceptable Use Policy.
5 recommendations for deploying AI responsibly
You may be thinking, “Well, what does this mean for me? These practices are pretty far from anything I or my employees would ever do.” And that’s true – these prohibited practices aren’t the daily tasks you would undertake to build greater efficiencies in your opportunity pipeline or better unify data for sales growth. However, understanding them helps you avoid risks as your AI footprint grows.
Consider taking these steps to assess AI use across your company. These five recommendations come from our experience aligning with international regulations and voluntary pledges for ethical and safe AI:
1. Review your own use of AI to ensure it’s in line with relevant regulatory frameworks
Consider doing a gap analysis that compares your company’s current use of AI with regulations like the EU AI Act to confirm alignment and identify gaps that need to be addressed. Use this as the starting point for an AI risk register that you maintain on an ongoing basis; a minimal sketch of one follows the action steps below.
Action steps
- Document all AI systems your business currently uses
- Map each system against relevant regulatory requirements
- Identify any gaps
- Prioritize addressing identified gaps based on risk level, impact and regulatory needs
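To make this concrete, here is one hypothetical way a small risk register could look in code. Everything in it, including the system names, requirement labels and the risk-times-impact scoring heuristic, is an illustrative assumption rather than a prescribed methodology or legal guidance; a spreadsheet or GRC tool works just as well.

```python
# A hypothetical AI risk register. System names, requirement labels
# and the risk-times-impact scoring rule are illustrative assumptions,
# not legal guidance or a Salesforce methodology.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    system: str          # an AI system in use, e.g. a support chatbot
    requirement: str     # the regulatory requirement it maps to
    gap: str             # the identified gap, or "" if aligned
    risk_level: int = 1  # 1 (low) to 5 (high), per your own rubric
    impact: int = 1      # business and user impact, 1 to 5
    owner: str = "unassigned"

    @property
    def priority(self) -> int:
        # Example heuristic: rank open gaps by risk x impact.
        return self.risk_level * self.impact if self.gap else 0

def prioritized_gaps(register: list[RiskEntry]) -> list[RiskEntry]:
    """Return entries with open gaps, highest priority first."""
    return sorted((e for e in register if e.gap),
                  key=lambda e: e.priority, reverse=True)

register = [
    RiskEntry("support-chatbot", "AI transparency disclosures",
              gap="no AI disclosure shown to end users",
              risk_level=3, impact=4, owner="trust-and-safety"),
    RiskEntry("lead-scoring-model", "internal bias-review policy",
              gap="", risk_level=2, impact=2),
]

for entry in prioritized_gaps(register):
    print(f"{entry.system}: {entry.gap} "
          f"(priority {entry.priority}, owner: {entry.owner})")
```

However you record it, the useful property is the same: every AI system appears once, every entry maps to a requirement, and open gaps carry an owner and a priority you can revisit on a regular cadence.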
2. Set up your AI governance framework
Create a cross-functional oversight body that regularly reviews how your company is implementing AI. This bolsters ongoing compliance with regulatory requirements and your company values, while also giving you a forum for edge cases and emerging risks. Consider representation from teams like legal, compliance, trust and safety, security and beyond. Define metrics early on so you can assess improvement over time and measure the body’s success in identifying and mitigating risks; a simple metrics sketch follows the action steps below.
Action steps
- Set up an AI governance body
- Define metrics to track its success
- Implement regular reviews for AI implementation
- Create escalation paths for new risks and edge cases
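If it helps to see the metrics idea concretely, here is a hypothetical sketch that computes a few measures from logged reviews. The record fields and the metrics themselves are assumptions for illustration; your governance charter should define what success means for your own body.

```python
# A hypothetical sketch of governance-body metrics, assuming each review
# is logged as a simple record. Field names and metrics are illustrative;
# choose measures that fit your own governance charter.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Review:
    system: str
    risks_found: int
    risks_mitigated: int
    escalated: bool       # did this review trigger the escalation path?
    days_to_close: int    # time from review to sign-off

def governance_metrics(reviews: list[Review]) -> dict:
    found = sum(r.risks_found for r in reviews)
    mitigated = sum(r.risks_mitigated for r in reviews)
    return {
        "reviews_completed": len(reviews),
        "mitigation_rate": mitigated / found if found else 1.0,
        "escalations": sum(r.escalated for r in reviews),
        "avg_days_to_close": mean(r.days_to_close for r in reviews),
    }

quarter = [
    Review("support-chatbot", risks_found=3, risks_mitigated=2,
           escalated=True, days_to_close=14),
    Review("lead-scoring-model", risks_found=1, risks_mitigated=1,
           escalated=False, days_to_close=5),
]
print(governance_metrics(quarter))
```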
3. Leverage Salesforce’s built-in guardrails
Take advantage of Salesforce’s guardrails, like our Einstein Trust Layer and default model containment policies. Ensure relevant staff are aware of our prebuilt safeguards and integrate them into their workflows; a mapping sketch follows the action steps below.
Action steps
- Map identified risks from your risk register to available product safeguards
- Ensure teams are trained on using built-in guardrails
- Document how each safeguard addresses specific regulatory requirements and mitigates particular risks
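As one way to document that mapping, the sketch below pairs hypothetical risk-register entries with the kinds of safeguards named earlier in this post (bias detection, audit trails, privacy features). The specific risks, pairings and requirement labels are illustrative assumptions, not an authoritative mapping of Salesforce features to legal obligations; validate them against your own register and counsel’s guidance.

```python
# An illustrative mapping from risk-register entries to safeguards.
# Risks, pairings and requirement labels are hypothetical examples.
risk_to_safeguard = {
    "biased outcomes in lead scoring": {
        "safeguard": "Trust Layer bias detection",
        "requirement": "fairness and non-discrimination reviews",
    },
    "sensitive data sent to a model": {
        "safeguard": "Trust Layer privacy features",
        "requirement": "data-protection obligations",
    },
    "untraceable AI-assisted decisions": {
        "safeguard": "Trust Layer audit trails",
        "requirement": "internal record-keeping policy",
    },
}

for risk, mapping in risk_to_safeguard.items():
    print(f"- {risk}\n    safeguard: {mapping['safeguard']}\n"
          f"    requirement: {mapping['requirement']}")
```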
4. Build company-wide AI literacy
Our Responsible AI Trailhead modules and our AI Associate and AI Specialist certifications are a great entry point. When growing your team’s AI literacy, consider baseline resources for all employees as well as more detailed role-specific training (e.g., helping engineers understand model documentation or helping sales teams understand disclosure best practices). Consider setting up team tabletop exercises where colleagues have to navigate emerging AI risks together.
Action steps
- Develop baseline AI literacy training resources for all employees
- Create role-specific guidance and learning paths
- Conduct collaborative tabletop exercises where employees navigate thorny AI dilemmas together
(Pro tip: Use these as an opportunity to socialize your AI governance body and its escalation paths.)
5. Enhance overall transparency
Proactively communicate your practices and what you’ve learned to your customers and employees to increase transparency and help everyone grow together. Consider sharing how you’re responsibly deploying AI, your governance practices and more. Disclosure builds trust and demonstrates your ongoing commitment to responsible AI implementation.
Action steps
- Identify appropriate places to share AI disclosures for customers
- Develop simple, accessible language for disclosures
- Clearly communicate your expectations for customers’ acceptable use of AI
- Ensure you have feedback and abuse reporting channels for customers in case issues are identified
Remember, responsible AI isn’t just about meeting regulatory requirements — it’s about aligning technology with your organization’s values and building systems that are safe, transparent, and human-centered. It’s a collective effort and we are committed to partnering with you on your responsible AI journey. Get the latest info on our tools, policies and commitments at our Salesforce Trusted AI Resource Website.
Note: This blog post is provided for informational purposes only. It does not provide legal advice. Salesforce urges its customers to consult with their own legal counsel to familiarize themselves with the requirements that govern their specific situations. This information is provided as of the date of document publication, and may not account for changes after the date of publication.
Note: Our contractual terms prohibit the use of our products and services in violation of applicable laws, and thus for any of the prohibited practices listed in Article 5 of the EU AI Act. For a list of additional prohibited use cases and their details, we encourage you to review our Acceptable Use and External-Facing Services Policy and Artificial Intelligence Acceptable Use Policy, which provide clear guidance on how our AI products should not be used.