The surge of interest in artificial intelligence-driven applications in 2023 has been remarkable. Apps such as ChatGPT grow more powerful with each passing day, and they are changing the way our world conducts business.
For a curious technologist, the use of ChatGPT to generate content understandably raises many questions. With these questions comes the need to discuss the reliability of the technology and its potential to infringe on the intellectual property rights of authors, bloggers, and other content creators.
The Human Artistry Campaign, launched by a coalition of more than forty organizations – including the Recording Academy, the National Music Publishers’ Association, and the Recording Industry Association of America – is working to address the questions around AI and copyright. Its objective is to ensure that the rights of songwriters, musicians, and other content creators are upheld in the digital age.
An important part of this discussion revolves around the interpretation of copyright law. Under Section 2(2) of the German Copyright Act, copyright protection extends only to works that are the ‘personal, intellectual’ creation of a human being. This means that if a work contains both human and AI elements, the elements attributable to human authorship must be identified for it to be copyrightable.
Adobe’s Firefly is another example of an AI-driven art generator. Designed to reduce bias and avoid copyright issues, it offers a fresh perspective on the subject. The technology can be used to create entirely new works, easing concerns about bias and opening possibilities to a wider range of creators.
This attempt to automate creativity raises questions of its own. What constitutes functional automation, and how does it affect the generation of content? And how does it relate to copyright and other legal considerations?
Another area of contention is that much AI-generated content is highly inaccurate or incomplete. Information from Google search results, for instance, typically takes precedence over information from AI-generated sources.
At the end of the day, this technology remains a ‘work in progress’. Before trusting it completely, it is important to understand the implications of its use and the accuracy of its content. As professionals, this means exercising caution and questioning the facts before accepting them as absolute.
The company mentioned in the article is OpenAI, a research laboratory focused on developing artificial general intelligence to better understand, augment and automate intelligence. OpenAI is backed by a variety of venture capital firms, technology companies, and non-profit organizations, and it is on a mission to ensure that artificial general intelligence is deployed responsibly and safely.
The person mentioned in the article is Daniel Lohrmann, an expert in cybersecurity and author of several books on the topic. Mr. Lohrmann is the Chief Security Officer of Security Mentor, and in his book ‘Virtual Integrity: Faithfully Navigating the Brave New Web’, he focuses on digital ethics and how to properly navigate the challenges of the online world. He is also the co-author of ‘Cyber Mayday and the Day After: A Leader’s Guide to Preparing, Managing and Recovering From Inevitable Business Disruptions’.