Tech giants are racing to bring text-to-image AI generators, tools that produce artwork or realistic images from written prompts, into mainstream products such as Adobe Photoshop and YouTube. Although the technology dazzled the public last year, it has yet to be widely adopted at home or at work.
Before these tools reach the mainstream, however, users and regulators will need assurance that safeguards are in place to prevent copyright infringement and the creation of troubling content.
Early AI image generators drew backlash, including copyright lawsuits and concerns about deceptive political ads and abusive sexual imagery. Those problems have not been fully resolved, but the makers of a newer wave of image generators say they have built stronger safeguards against copyright violations and unethical use.
For example, Amazon plans to let U.S. customers generate personalized displays on their Fire TV screens by speaking commands such as "Alexa, create an image of cherry blossoms in the snow." Adobe, meanwhile, has released an AI generator called Firefly that was trained on its own Adobe Stock image collection and on licensed content, an approach intended to keep the generated images clear of copyright claims and ease legal and ethical concerns.
Other tech giants are following suit. OpenAI, the maker of ChatGPT, recently unveiled its third-generation image generator, DALL-E 3, which is designed to decline requests for images in the style of living artists. Microsoft showed how it is integrating DALL-E 3 into its graphic design tools and Bing search engine, while YouTube introduced a Dream Screen feature that lets creators customize backgrounds in short videos.
To address concerns surrounding AI-generated content, major AI providers including Amazon, Google, Microsoft, OpenAI, and Adobe have agreed to voluntary safeguards brokered by President Joe Biden's administration. These safeguards include developing methods such as digital watermarking to help identify AI-generated content.
Challenges remain in ensuring that text-to-image generators are used responsibly and ethically. But as tech giants build in safeguards and fold the tools into familiar platforms, mainstream adoption of AI-generated images may finally become a reality.