Should we be concerned with the rapid development of AI technology?

Photo courtesy of Blake Israel

In light of the recent open letter penned by the Future of Life Institute, an organization funded in part by Elon Musk, we at the Technique Editorial Board wish to discuss issues surrounding artificial intelligence (AI) technology, as well as OpenAI, an organization dedicated to granting the public access to cutting-edge AI developments and research.

In particular, we wish to address concerns about the rapid, largely unregulated development of artificial intelligence technology, as well as recent calls to pause high-level AI research. Such calls have been put forth in hopes of strengthening governing legislation and other checks on AI's power.

We, at the Technique, do acknowledge that AI has many merits. Primarily, certain tools such as ChatGPT, which inspired this discourse, allow users to ask about a topic and receive fairly accurate answers, all in the form of a conversation. Such resources can expand access to information that previously may have been inaccessible or difficult to comprehend.

Technology such as large language models (LLMs) has the potential to impact educational systems by functioning as a tutoring service and reaching a wider audience. In recent years, other developments have produced machine learning systems that aid in medical research and drug development. Additionally, there are now tools that can compile medical symptoms and suggest diagnoses.

However, technology is only as good as the humans who build it, and thus AI often reflects intrinsic human biases. Prediction machines are only as good as the data they are given to dictate their responses, and data drawn from the Internet and other sources carries innate biases that AI technology cannot simply unlearn.

The deepest flaw of AI is its lack of empathy. While AI systems may appear to act of their own volition, they arguably cannot truly exercise free will. This is what distinguishes the machine from the human: it may think like a human, but it is not human. Tools such as ChatGPT are convenient assets, but they lack the human experience. Using AI as a personal writer erases individuality, as it does not know an individual's personal life, sentiments and experiences well enough to truly emulate the human voice.

In the creative world, it strips lived experience from words, art and other forms of expression. Thus, AI cannot replace humans in all jobs, because originality has a merit of its own, something that cannot be coded.

While there are widespread fears about the power of AI and the trope of "robots taking over the world," we do not share this concern. As AI technology evolves, the tools to track and govern it will also evolve. Schools and universities are already putting measures in place to counteract the use of ChatGPT and similar tools in academic settings.

As research in AI continues, so will research in understanding and stemming the power that the technology holds.

We appreciate the benefits associated with AI as a whole, but find it important to highlight some problems involved specifically with the OpenAI research laboratory. According to a 2023 Time article, OpenAI outsourced work to Kenyan laborers who were paid under $2 per hour.

This practice stands in stark contrast to the image of "Silicon Valley success" with which ChatGPT has been touted.

Concerningly, ChatGPT's predecessor, GPT-3, was prone to hateful commentary, exhibiting racism, violence and sexism in its rhetoric. This verbiage was scrubbed by sending thousands of pieces of offensive text to the Kenyan firm so workers could label them and teach the system which language and topics are unacceptable. We strongly condemn this practice.

While AI is a useful tool, it should be one among a toolbox of other resources. One dangerous possibility is over-reliance on AI technology; if, or when, this technology fails, we, as a society, would be unable to function.

As it stands, we do not see AI as something to be feared, but when those who are well-versed in the field, such as Musk, raise alarms, their warnings merit attention. We encourage everyone to educate themselves on the benefits and perils of AI as it rapidly integrates itself into our world.