Late last year, we witnessed the launch of fascinating conversational artificial intelligence (AI) chatbots, powered by generative AI (GAI), that could create impressive text, images, code, music, or video in response to user requests. Driven by their promise, versatility, ease of use, and availability, these AI applications quickly gained the interest of various stakeholders, including the media, and became instantly popular. As a result, they attracted a massive user base and were put to diverse uses.
ChatGPT is a general-purpose conversational AI chatbot that can answer both broad and specific questions and generate well-written text on any topic on the fly. It can also refine (regenerate) its responses based on user feedback. ChatGPT attracted its first million users within four days and reached 100 million active users in just two months, a milestone that took TikTok more than nine months to achieve. GAI tools have generated enormous hype and enthusiasm and attracted substantial investment in further development.
The power of GAI has been embraced for both personal and enterprise applications. Recently, OpenAI, the maker of ChatGPT, released a ChatGPT API that enables developers to embed ChatGPT in their own applications. Microsoft is integrating the technology into its Dynamics and Power platforms, and SAP and others are embracing ChatGPT in their application suites. As a result, the use of GAI for various applications across several sectors is rising rapidly.
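Embedding ChatGPT in an application boils down to sending the API a list of chat messages and reading back the model's reply. The following minimal sketch uses the openai Python package as it looked when the ChatGPT API launched (the pre-1.0 interface); the prompt and environment-variable setup are illustrative assumptions, not prescriptions.

import os
import openai

# Read the API key from the environment rather than hard-coding it.
openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_chatgpt(prompt: str) -> str:
    """Send a single user prompt to the ChatGPT API and return the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

print(ask_chatgpt("Summarize the ethical concerns raised by generative AI."))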
Yet, despite its promise, popularity, and hype, GAI has significant limitations. Chatbots like ChatGPT make factual errors, fabricate answers, and give invalid responses, leading to mixed feelings among users. Furthermore, AI content generators raise several critical ethical concerns that developers, users, and regulators must address now. Otherwise, we might face disastrous, unintended consequences that could harm society, businesses, and the economy.
Ethical Concerns
AI-driven content generators raise several ethical concerns related to bias, plagiarism, intellectual property, misuse, and the potential to generate misinformation, fake news, or misleading content. These could have serious ramifications, such as harming reputations, spreading false information, or even inciting violence.
Bias in AI-Generated Content
A primary ethical concern in using AI content generators, particularly text generators like ChatGPT, is the potential for bias in their responses. Because the large language models (LLMs) powering these generators are trained on massive collections of pre-existing text, images, and data drawn from many sources, including the Web, biases present in the training data will be reflected in the model's output. This can lead to unfair, inaccurate, or narrowly focused responses and to discriminatory outcomes, such as racial or gender discrimination. This prejudice is not inherent in GAI; it comes from how the models are developed and trained. If we train LLMs, intentionally or not, on biased information and data, their responses will be biased. To mitigate bias and ensure fair, trustworthy responses, it is therefore crucial to train on unbiased, diverse, and representative datasets.
We could minimize AI bias through careful selection, curation, and filtering of training data. Given the sheer volume of training data, however, these measures are challenging to adopt in practice. Nevertheless, tech companies such as OpenAI are working to reduce chatbot bias and to let users customize a chatbot's behavior.
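To make the curation-and-filtering idea concrete, the toy sketch below processes a corpus of (text, group_label) pairs, dropping examples that contain flagged terms and then downsampling so every group is equally represented. The blocklist entries and the corpus format are hypothetical; production curation pipelines are far more elaborate.

from collections import defaultdict

# Hypothetical blocklist; a real pipeline would use vetted lexicons and classifiers.
BLOCKLIST = {"flagged_term_1", "flagged_term_2"}

def curate(corpus):
    """Drop flagged examples, then downsample so each group is equally represented."""
    by_group = defaultdict(list)
    for text, group in corpus:
        if not (set(text.lower().split()) & BLOCKLIST):
            by_group[group].append((text, group))
    smallest = min(len(examples) for examples in by_group.values())
    return [ex for examples in by_group.values() for ex in examples[:smallest]]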
Misuse and Abuse of AI Generators
Another major ethical concern is the potential for misuse and abuse of AI content generators. For example, text generators like ChatGPT can be used to produce content for malicious purposes, such as spreading misinformation or sexist, racist, or otherwise offensive messages. They could also be used to generate harmful content that incites violence or social unrest, or to impersonate individuals.
Furthermore, since there are no limits to the types of questions a user can ask, bad actors could use chatbots to engage in antisocial activities. For example, they could ask a chatbot how to make explosives, shoplift, or cheat. Safeguards are therefore essential to prevent misuse and to hold those who abuse the technology accountable.
Detecting AI-Assisted Plagiarism (AIgiarism)
The use of tools like ChatGPT can trigger increased plagiarism by authors and students that is challenging to detect. To address plagiarism aided by AI tools, known as AIgiarism, special tools that aim to distinguish AI-written text from human-written text, such as GPTZero and OpenAI's AI Text Classifier, have emerged.
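Such detectors commonly rely on statistical cues; GPTZero, for instance, scores perplexity and burstiness. The toy sketch below illustrates the perplexity idea using GPT-2 through the Hugging Face transformers library. The threshold is an uncalibrated, illustrative assumption, and low perplexity is weak evidence of machine authorship, not proof.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; fluent, predictable text scores low."""
    encoding = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        output = model(encoding.input_ids, labels=encoding.input_ids)
    return float(torch.exp(output.loss))

THRESHOLD = 40.0  # arbitrary illustrative cutoff, not calibrated
sample = "Generative AI systems can produce fluent text on nearly any topic."
verdict = "possibly AI-written" if perplexity(sample) < THRESHOLD else "likely human-written"
print(verdict)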
Security Risks
Hackers could use AI content generators to create personalized, convincing spam messages and images with hidden malicious code, increasing both the scale of cyberattacks and the number of potential victims.
Additionally, users may feed sensitive personal or business information to chatbots, and the developer, who may store and analyze that information, could misuse it. Consider two illustrative cases. Recently, an executive pasted the firm's 2023 strategy document into a chatbot and asked it to create PowerPoint slides for a presentation. In another incident, a doctor fed a patient's name and medical condition into ChatGPT and asked it to craft a letter to the patient's insurance company. These cases highlight the ethical concerns, data privacy issues, and security risks surrounding AI content generators.
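One partial mitigation on the user side is to scrub obvious identifiers before text ever reaches an external chatbot. The sketch below is a minimal example using simple regular expressions for emails and phone-like numbers; the patterns are illustrative and no substitute for a real data-loss-prevention tool.

import re

# Illustrative patterns only; real redaction needs broader coverage (names,
# IDs, addresses) and is usually handled by dedicated DLP tooling.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Write to dr.smith@example.com or call +1 555 123 4567."))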
Legal Concerns
AI-generated content raises copyright questions, such as who owns the rights to an AI-generated essay, piece of music, or work of art: the person who prompted the AI to generate the content, or those who created the data used to train it? In this context, note that the US Copyright Office has declared that images generated by Midjourney and other AI text-to-image tools are not protected by US copyright law because they are not the product of human authorship. Artists have also filed a class-action lawsuit against companies that offer AI-generated art, questioning the legality of training an AI on data without the consent of those who created it.
Other Implications
AI content generators have the potential to automate many tasks currently performed by humans (for example, writing, editing, and customer service), which raises concerns about job displacement and the impact on the workforce.
Furthermore, over-reliance on conversational chatbots for our activities and work raises ethical concerns about the loss of genuine human-to-human conversation and its broader social implications.
Call to Action for Developers, Users, and Regulators of AI
The responsible development and utilization of GAI and its various applications will alleviate ethical concerns and minimize the associated risks while generating significant benefits for individuals, organizations, and ecosystems.
The responsible development and use of GAI is a collective responsibility among developers, users, and regulators. Developers are responsible for ensuring the model is trained on diverse and representative data and for implementing safeguards to prevent misuse. To address the complex challenges posed by GAI, we also need to adopt an interdisciplinary approach that brings together experts from computer science, law, ethics, and other fields.
We need to develop regulatory frameworks to govern the development and use of AI content generators. These frameworks will help ensure that these technologies are used responsibly and ethically and will address privacy, bias, and accountability concerns.
Ensuring accountability and responsibility, as well as developing an appropriate legal framework, will help promote the ethical use of these technologies for the benefit of society. The recently released NIST AI Risk Management Framework offers organizations and individuals approaches that foster the responsible design, development, deployment, and use of AI systems over time.
Users must be mindful of the data they provide to AI systems, including personal information and other sensitive data, and be aware of the limitations of AI content generators and what they can and cannot do. They must use these tools ethically: pose valid and ethical prompts, sense-check and fact-check the responses, and correct and edit them before use. They should never use a system's response without verifying and correcting it as required.
In addition, we must embed general moral principles and a comprehensive overview of AI ethics into AI curricula and education for students and into training programs for AI developers, data scientists, and AI researchers.
Outlook
Although AI content generation is in a nascent stage, it will evolve, improve, and address ethical concerns. Like it or not, AI-enabled content generators will become mainstream and a game changer in many ways. As a result, we will soon be flooded with AI-generated content, for good and for bad.
Therefore, we must embrace the promise of GAI while mitigating its limitations and addressing its ethical concerns and risks. This is the collective responsibility of developers, users, regulators, governments, and society. In addition, we must adopt a responsible and ethical approach to researching, developing, and deploying AI-powered content generators and to using them for diverse purposes, now and in the future.
About the Author
San Murugesan is the director of BRITE Professional Services, Sydney, Australia, and interim editor-in-chief of IEEE Intelligent Systems. He is a Fellow of the Australian Computer Society, a Golden Core member of the IEEE Computer Society, a Life Senior member of the IEEE, a distinguished speaker of the IEEE and ACM, and a former editor-in-chief of IT Professional.
Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent the position of the IEEE, the Computer Society, or its leadership.