Google Implements New Rules for Reporting Offensive AI-Generated Content
Google has announced that it will implement new rules for reporting offensive AI-generated content on its platforms. The company aims to ensure that AI-generated content is safe for users and that user feedback is taken into account.
Starting next year, developers will be required to give users a way to report or flag offensive AI-generated content from within their apps, without needing to exit the app. These reports will inform content filtering and moderation, much like the current in-app reporting system for user-generated content. Google wants to empower users to contribute to the safety and quality of the content they consume.
In addition to this, apps that use AI to generate content will also have to prohibit and prevent the creation of restricted content. This includes content that facilitates the exploitation or abuse of children, as well as content that enables deceptive behavior. By setting these boundaries, Google aims to create a safer online environment for users.
To strengthen user privacy, Google has introduced a new policy that restricts app access to photos and videos. Apps will only be allowed to access these files for purposes directly related to their functionality. Apps that need only occasional or one-time access will be required to use a system picker, such as the Android photo picker. This will help protect user data and ensure that apps only access information that is necessary and relevant.
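For apps in that "occasional access" category, the Android photo picker lets a user hand over a single image without the app holding any broad photo or video permission. A minimal sketch of the flow using the AndroidX `PickVisualMedia` activity-result contract might look like this (the activity name and click handler are illustrative, not from Google's announcement):

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

// Hypothetical activity that needs a photo once (e.g. a profile picture).
class ProfilePhotoActivity : ComponentActivity() {

    // The system photo picker returns a content Uri for the chosen item
    // without the app requesting any photo/video permission, which is
    // exactly the "occasional or one-time access" case the policy covers.
    private val pickMedia = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri: Uri? ->
        if (uri != null) {
            // Use the selected image, e.g. load it into an ImageView.
        }
    }

    fun onChoosePhotoClicked() {
        // Restrict the picker to images; apps that also handle video
        // could pass ImageAndVideo or VideoOnly instead.
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```

Because the selection happens in a system-provided UI, the app only ever sees the items the user explicitly picked, which is the behavior the new policy is steering developers toward.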
Google is also making changes to the use of full-screen intent notifications. For apps targeting Android 14, full-screen intent notifications will only be allowed by default for high-priority use cases, such as alarms or incoming phone and video calls. For other use cases, apps will need to ask the user for permission. This change aims to reduce interruptions and provide a better user experience.
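On Android 14 (API level 34), an app can check whether it may still post full-screen intent notifications and, if not, route the user to the system settings screen where the permission can be granted. A hedged sketch, assuming a plain `Context` is available (the helper function name is illustrative):

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Hypothetical helper: verifies the full-screen intent permission on
// Android 14+ and opens the system settings page for it when missing.
fun ensureFullScreenIntentAllowed(context: Context) {
    // Below Android 14 the permission is granted at install time.
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.UPSIDE_DOWN_CAKE) return

    val nm = context.getSystemService(NotificationManager::class.java)
    if (!nm.canUseFullScreenIntent()) {
        // Send the user to the per-app "full-screen notifications"
        // settings screen; the data URI must name this app's package.
        val intent = Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT)
            .setData(Uri.parse("package:${context.packageName}"))
        context.startActivity(intent)
    }
}
```

Call and alarm apps keep the permission by default; everyone else should expect `canUseFullScreenIntent()` to return false until the user opts in.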
These new rules reflect Google’s commitment to responsible AI practices and its dedication to providing the best possible experiences for its users. The company understands the importance of user feedback and wants to ensure that AI-generated content remains safe and beneficial for everyone.
Overall, these updates are a step in the right direction toward a safer and more user-friendly online environment. By implementing stricter rules and guidelines, Google hopes to improve the overall quality and safety of AI-generated content on its platforms, and users can look forward to a more secure experience in apps that use AI.