ChatGPT: Ethics and AI Chatbots

 

As cutting-edge artificial intelligence (AI)-enabled chatbots become available online, more of us are getting the opportunity to have a seemingly intelligent and fluid conversation with a machine. But to many, the sudden accessibility of these automated conversation partners feels a little disorienting. In the wake of OpenAI’s release of ChatGPT, a powerful AI chatbot, online spaces are abuzz with reactions and “hot takes” as people try to make sense of what this technology means for the world and for their lives. A quick glance at news headlines and at platforms like Twitter illuminates the bizarre, even fantastical, relationships people are forming with automated conversation partners. But these AI chatbots aren’t just being used for casual chit-chat. ChatGPT, for example, is, at least in theory, able to write persuasive homework essays, generate and explain code for programmers, produce financial analysis, and more. This might make ChatGPT “social media’s newest star,” but it also creates some thorny ethical problems for OpenAI.

With such an impressive array of multidisciplinary talents, an AI chatbot like ChatGPT can seem mystifying to engage with. But before falling under the spell of any oracle, it’s important to understand a couple of things. First, ChatGPT, like other cutting-edge AI chatbots, has its limitations. ChatGPT may be useful in a number of areas; it may even save a sleep-deprived college student from writing their dreaded final term paper. But it certainly does not approach anything like AI sentience or consciousness, nor does it approach so-called “artificial general intelligence.” ChatGPT only reflects back information that it synthesizes from the data it was trained on, which is always important to keep in mind. Second, though ChatGPT does not make independent and intelligent decisions, the outputs it creates still have substantial ethical impacts. The way in which a technology is used (from a hammer to a missile, from a navigation system to a new pharmaceutical drug) informs the ethical risks and challenges associated with it. Scholars often highlight how new technologies generate new ethical questions because they open up new theoretical and material possibilities. Given the traction that ChatGPT and other cutting-edge AI chatbots have gained, it’s important to assess which specific ethical questions, new or old, this technology raises. What follows is a brief but systematic roundup of the ethical risks raised by the chatbot.

Perhaps the most widely held concern about AI like ChatGPT, which can generate new content at the click of a button, relates to originality. How can we prove that someone authored something themselves rather than used AI to author it for them? This principle of authorship is at the core of our education system, which is founded upon tests that rank people’s intellect, often in standardized and systematic ways. To consider what an AI system like ChatGPT means for the value of originality, it’s worth noting that originality is itself underpinned by a series of other ethical principles. These principles include fairness, among others. We value fairness when it comes to the importance of originality in writing: we believe, for example, that a good author is one who can spin the story themselves. Hence, we believe it is fair and right that high school students in a literature class should be tested on their ability to craft unique arguments or create new narratives, rather than borrow old ones. The assumption is that everyone starts at an accessible baseline of their own skills: a metaphorical blank page. As a result, it is reasonable to expect them to use those skills to compose, and be assessed on, their original work. ChatGPT poses a challenge to the value of fairness because it can bypass the common, shared baseline of reasoning and thought needed to develop writing skills.

Similarly, we also care about maintaining originality because our society values original and creative thought as pursuits that bring us closer to enlightenment. ChatGPT, by reproducing seemingly complicated thoughts in an instant, skips the pedagogical, generative process of skill-building that most people must go through to produce creative work. The virtue of building enlightenment and creativity through pedagogy is one that is deeply held in our society. As such, ChatGPT seems to pose a risk to originality by feigning it without relying upon the skills needed to build it.

Other ethical concerns with ChatGPT have less to do with the nature of the content it creates and more to do with how that content is made in the first place. ChatGPT is trained on large volumes of data pulled from the internet, which can include content that was not consensually shared. This implicates the privacy of people whose personal information is identifiable in this data, especially when they have not consented to a large language model being trained on their information. In addition, scraping content unscrupulously from the web to train AI systems can cause those systems to learn from data that is biased, prejudicial, and hateful. If language models like ChatGPT learn from this kind of data, the outputs they produce are also likely to be biased, prejudicial, and hateful. These ethical risks are well documented, and although OpenAI has attempted to mitigate them with technical safeguards, several obstacles remain to addressing privacy and bias concerns in large language models.

Another ethical risk concerns the accuracy (or rather the inaccuracy) of ChatGPT’s outputs. OpenAI acknowledges that ChatGPT’s outputs can be factually inaccurate, and users have pointed out that it can give eloquent and convincing answers that are untrue. When ChatGPT spits out masses of information in mere seconds, it is easy to assume that its automated responses must be factual, accurate, or objective. In reality, this is not always the case.

Lastly, there are undoubtedly bad actors who will use ChatGPT for harmful purposes. Scammers have already taken advantage of the ability to churn out fake communications en masse; take robocalling, for example. Nefarious actors will find similar ways to use ChatGPT for scams and other harms. And following the example of “mischief models,” a phenomenon in which people use AI models for the specific purpose of provoking or manipulating others, such as through “trolling,” it’s clear that some people will produce irreverent chaos with ChatGPT just for the hell of it, even at the cost of harming others.

As language models grow more capable, these ethical concerns will only be heightened. But it is our hope that this primer on some of the ethical risks posed by ChatGPT, although brief and not comprehensive, will encourage wider discussion about whether the risks outweigh the rewards. If you open ChatGPT on your computer and ask it how we should mitigate the ethical risks it poses, it replies that “…ultimately the responsibility for the ethical use of ChatGPT and other large language models lies with the users of these tools.” That is an awfully convenient solution for the AI model (and, more specifically, for those who built it) to suggest: simply defer ethical responsibility onto the users. But making users the ultimate arbiters of ChatGPT’s ethical fate is not the answer. Instead, we need a combination of better ethical methods and training for the developers of this technology, realistic regulation, and organizational policies that assess its impacts. We also need to promote user education and awareness to address the moral challenges of ChatGPT and other cutting-edge AI chatbots.

What might these methods, policies, and education systems look like? These are the processes we work on every day at Moral IT, applying them to organizations’ technologies. Implementing such processes for innovative AI chatbots like ChatGPT could allow developers and policymakers to put in place appropriate safeguards against harmful uses and impacts of the technology. These safeguards might make chatbots like ChatGPT ethically sound and safe for users. Alternatively, a comprehensive assessment of the technology’s risks against its benefits may lead some to conclude that AI chatbots of this sort should simply not be mainstream and accessible to us all at the click of a button.

 