Google has updated its set of guidelines and recommendations to assist developers in integrating AI into their apps.
Emphasising testing and responsible use of the tool, the technology giant will now make it mandatory for developers to provide ways for users to report or flag offensive material, under its newly updated AI-Generated Content Policy.
The company also highlighted the importance of responsible promotion of Generative AI apps: apps promoted for inappropriate uses will be removed from the Play Store.
According to the document, to prevent such issues, developers should carefully review their marketing materials to ensure they accurately represent the app’s capabilities and comply with Google’s App Promotion requirements.
The policy now also requires safeguards against prompts that could be manipulated to generate harmful or offensive content. Developers are encouraged to document their testing processes thoroughly, as Google may request this documentation to better understand how user safety is maintained.
Google also unveiled plans to introduce new app onboarding capabilities to make submitting Generative AI apps to Play more transparent and streamlined.
The news comes a week after Google launched a dedicated mobile app for its AI tool, Gemini.