As part of its responsible artificial intelligence (AI) strategy, Amazon Web Services (AWS) has introduced Guardrails for Amazon Bedrock (preview), a new capability that lets users implement safeguards customized to their use cases and responsible AI policies. This feature aims to promote safe interactions between users and generative AI applications.
Guardrails for Amazon Bedrock is designed to help developers integrate responsible AI across the AI lifecycle by providing a set of key controls. With these controls, users can define denied topics and content filters that remove undesirable and harmful content from interactions between users and applications. This additional layer of control complements the built-in protections of foundation models (FMs).
With Guardrails for Amazon Bedrock, users can apply the same guardrails to all large language models (LLMs) in Amazon Bedrock, including fine-tuned models and Agents for Amazon Bedrock. This brings consistency to how safeguards are deployed across applications, so users can innovate safely while managing user experiences according to their requirements.
Two of the key controls available in Guardrails for Amazon Bedrock are denied topics and content filters. Denied topics let users define topics that are undesirable in the context of their application; a banking application, for example, might block investment advice. Content filters let users set thresholds for filtering harmful content across the hate, insults, sexual, and violence categories. Together, these controls give users the flexibility to filter interactions according to their specific use cases and responsible AI policies.
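The configuration surface was not public at the time of the preview announcement, but as an illustrative sketch, here is what defining a denied topic and content filters could look like with boto3, assuming a create_guardrail-style interface. The guardrail name, topic definition, filter strengths, and blocked-message text are hypothetical placeholders, and the final API may differ.

```python
import boto3

bedrock = boto3.client("bedrock")

# Sketch: define a guardrail with one denied topic and four content
# filters. All names, descriptions, and messages are placeholders.
response = bedrock.create_guardrail(
    name="banking-assistant-guardrail",
    description="Blocks investment advice and filters harmful content",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": "Recommendations about financial products, "
                              "securities, or investment strategies.",
                "examples": ["Which stocks should I buy this year?"],
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    # Messages returned to the user when an input or output is blocked.
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
guardrail_id = response["guardrailId"]
```

Setting different input and output strengths per category lets an application be stricter about what the model generates than about what users are allowed to ask.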
In addition, Guardrails for Amazon Bedrock will include personally identifiable information (PII) redaction, an upcoming capability. Users will be able to select specific PII types, such as names, email addresses, and phone numbers, to be redacted in FM-generated responses, or block user inputs that contain PII.
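As a sketch of how this could look once released, a PII policy might be passed alongside the other policy configurations when creating or updating a guardrail. The entity types and the ANONYMIZE/BLOCK actions below are assumptions, since the feature was not yet available during the preview.

```python
# Hypothetical sketch: redact names and email addresses in model
# responses, and reject content containing phone numbers. This policy
# would be passed to create_guardrail alongside the configs above.
sensitive_information_policy = {
    "piiEntitiesConfig": [
        {"type": "NAME", "action": "ANONYMIZE"},   # mask in FM responses
        {"type": "EMAIL", "action": "ANONYMIZE"},  # mask in FM responses
        {"type": "PHONE", "action": "BLOCK"},      # block matching content
    ]
}
```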
Guardrails for Amazon Bedrock integrates with Amazon CloudWatch, so users can monitor and analyze the user inputs and FM responses that violate the policies defined in a guardrail.
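One way to wire up this telemetry is Amazon Bedrock's model invocation logging, which can deliver request and response data to CloudWatch Logs. The sketch below assumes that guardrail interventions surface in these logs; the log group name and IAM role ARN are placeholders that must exist in your account.

```python
import boto3

bedrock = boto3.client("bedrock")

# Deliver model invocation inputs and outputs to a CloudWatch Logs
# group for monitoring and analysis. Log group and role are placeholders.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocation-logs",
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
    }
)
```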
The limited preview of Guardrails for Amazon Bedrock is now available. Users can request access through their usual AWS Support contacts. During the preview, guardrails can be applied to all LLMs available in Amazon Bedrock, including Amazon Titan Text, Anthropic Claude, Meta Llama 2, AI21 Jurassic, and Cohere Command. Guardrails can also be used with custom models and Agents for Amazon Bedrock.
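At request time, a guardrail could be attached to a model invocation by referencing its identifier and version. The sketch below assumes boto3 invoke_model parameters of that shape, with a placeholder guardrail ID, and uses Anthropic Claude's text completion request format.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime")

# Invoke a Bedrock model with the guardrail applied. The guardrail ID
# and version come from the create_guardrail response (placeholders here).
response = runtime.invoke_model(
    modelId="anthropic.claude-v2",
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="1",
    body=json.dumps({
        "prompt": "\n\nHuman: Which stocks should I buy?\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)
print(json.loads(response["body"].read()))
```

Because the guardrail is referenced at invocation time rather than baked into the application logic, the same policy can be reused unchanged across the Amazon Titan Text, Anthropic Claude, Meta Llama 2, AI21 Jurassic, and Cohere Command models listed above.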
Guardrails for Amazon Bedrock represents AWS’s commitment to the responsible development of generative AI applications. By providing customizable safeguards and controls, AWS aims to help users build and deploy generative AI applications that align with their responsible AI goals. With this new feature, users can confidently deliver relevant and safe user experiences while upholding their company policies and principles.
The introduction of Guardrails for Amazon Bedrock marks another step toward responsible AI integration and promotes a people-centric approach to generative AI applications. By ensuring the implementation of safeguards tailored to specific use cases and responsible AI policies, AWS is empowering developers to innovate safely in the realm of AI.