Salesforce, a leading provider of customer relationship management (CRM) and sales software, has set ethical boundaries for the use of its AI products in an effort to prioritize responsible innovation. On August 23, the company released its Artificial Intelligence Acceptable Use Policy, which outlines how its generative AI products can and cannot be used. The policy, written under the supervision of Salesforce’s Chief Ethical and Humane Use Officer, Paula Goldman, includes clear guidelines and restrictions.
The policy restricts where Salesforce’s generative AI products may be used, specifically prohibiting the technology in weapons development, adult content creation, profiling based on protected characteristics such as race, biometric identification, medical or legal advice, decisions that may have legal consequences, and more. By drawing these lines, Salesforce aims to ensure that its AI products are used responsibly and ethically.
In addition to the policy, Salesforce has established a set of internal guidelines for developing generative AI. Transparency is a key aspect of these guidelines, which align with the company’s concept of trusted AI. Singh, Executive Vice President and General Manager of Salesforce Industry Clouds, stressed the importance of a zero-retention policy to prevent the misuse of personally identifiable information, and said Salesforce is actively building filters to remove toxic content and continually refining them.
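To make those two ideas concrete, the sketch below shows roughly what a pre-generation guard combining PII masking, a toxicity check, and a no-persistence flow could look like. The function names, regex patterns, and placeholder blocklist are illustrative assumptions for this article, not Salesforce’s actual Einstein Trust Layer implementation.

```python
import re

# Illustrative sketch only: the pattern names, regexes, and blocklist below
# are assumptions for demonstration, not Salesforce's production logic.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
TOXIC_TERMS = {"slur_example_1", "slur_example_2"}  # placeholder blocklist


def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before the prompt is sent on."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text


def is_toxic(text: str) -> bool:
    """Crude keyword check standing in for a learned toxicity classifier."""
    lowered = text.lower()
    return any(term in lowered for term in TOXIC_TERMS)


def prepare_prompt(raw_prompt: str) -> str | None:
    """Mask PII and reject toxic input; return a prompt that is safe to forward.

    Nothing is persisted here, mirroring the idea of a zero-retention flow:
    the raw prompt is transformed in memory and never written to storage.
    """
    if is_toxic(raw_prompt):
        return None  # block the request instead of forwarding it
    return mask_pii(raw_prompt)


if __name__ == "__main__":
    print(prepare_prompt("Contact jane.doe@example.com, SSN 123-45-6789, about the renewal."))
```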
The policy applies to all services offered by Salesforce and its affiliates, including its flagship generative AI products built on the Einstein GPT platform, which are used for customer service, CRM, and other tasks across various industries. Salesforce’s competitors in this space include HubSpot’s ChatSpot.ai and Microsoft Copilot.
Singh believes companies need to upskill their employees in generative AI, particularly in areas like prompt engineering, and emphasized that a strong trust foundation is essential to keep AI on-task and producing accurate content. He predicts that industries under tighter regulatory oversight will adopt a more cautious, human-in-the-loop approach over the next six months, while other industries will adopt AI more aggressively, and he expects industry-specific LLMs (large language models) to become increasingly relevant in the near future.
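As a rough illustration of the human-in-the-loop approach Singh describes, the sketch below gates AI-drafted output behind an explicit approval step for regulated industries. The data structures, field names, and industry list are hypothetical and not drawn from any Salesforce product.

```python
from dataclasses import dataclass

# Illustrative sketch only: Draft, requires_review, and REGULATED_INDUSTRIES
# are assumed names for demonstration, not part of any Salesforce API.
REGULATED_INDUSTRIES = {"healthcare", "financial_services", "insurance"}


@dataclass
class Draft:
    industry: str
    text: str
    approved: bool = False


def requires_review(draft: Draft) -> bool:
    """Regulated industries route every AI draft through a human reviewer."""
    return draft.industry in REGULATED_INDUSTRIES


def publish(draft: Draft) -> str:
    """Hold AI-generated drafts until a person signs off; release the rest directly."""
    if requires_review(draft) and not draft.approved:
        return "HELD: awaiting human review"
    return f"SENT: {draft.text}"


if __name__ == "__main__":
    claim_reply = Draft(industry="insurance", text="Your claim has been approved for $2,400.")
    print(publish(claim_reply))   # held until a reviewer approves it
    claim_reply.approved = True
    print(publish(claim_reply))   # released after review
```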
Overall, Salesforce’s policy on the use of generative AI sets an important precedent in the industry. By drawing ethical boundaries and establishing guidelines, Salesforce aims to guide the responsible and ethical use of this transformative technology. As AI regulations continue to be discussed and developed, Salesforce’s approach provides a framework for other companies to follow in creating their own policies around generative AI.