Liran Razinsky speaks ethics and AI over coffee

Event speaker Liran Razinsky is a lecturer at Bar Ilan University in Israel. His research centers on hermeneutics and cultural studies. // Photo courtesy of the Buber Institute

The modern world is becoming ever more dependent on artificial intelligence (AI), but there are potential philosophical issues that come with the increasing use of AI. On Sept. 28, the GT Ethics, Technology and Human Interaction Center (ETHICx) hosted a gathering focused on just that. 

According to the ETHICx website, the center was founded in collaboration with the Ivan Allen College of Liberal Arts and the College of Computing and works to provide a space to connect discussions of the ethics of technology across campus. At the event, guest speaker Liran Razinsky, senior lecturer at Bar Ilan University in Israel, presented his views on the popular notion that AI algorithms know humans better than they know themselves. 

During the event, Razinsky took the position that AI will never be able to understand the intricacies and complexities of humans, arguing that algorithms lack the full depth and complexity of human knowledge.

“Algorithmic knowledge is of a distinctly different kind from human knowledge. We can’t compare algorithmic knowledge to self-knowledge,” Razinsky said.

The complexity of human interpretations of the world, according to Razinsky, is a result of humans using all of their past experiences to interpret new stimuli. 

Razinsky explained that, unlike an AI system, two humans could have the same experience and walk away with different explanations of the event; he chalks this up to the human condition.

Additionally, Razinsky spoke about how humans take time for reflection and interpretation after receiving a question, unlike AI, which returns its responses almost instantly after receiving data. Razinsky said that due to these factors, “AI fails to sense an inner perspective that we have of ourselves,” and therefore, does not surpass the bar of knowing humans better than they know themselves. He also believes that AI will never expand to this level of knowledge partly because of the human inability to fully understand ourselves.

In Razinsky’s view, every human lacks complete self-knowledge, making humans distinct from AI. For example, humans often do not know the source of their emotions and fail to recognize their own omissions of information in their thoughts. In a way, the failure to completely understand ourselves is a part of who we are, according to Razinsky.

“The absence of full self-awareness characterizes each human. Our individual uniqueness is rooted in our own blindspots,” Razinsky said. He concluded his presentation by saying, “algorithms will never know us completely, even if we barely know ourselves at all.”

The Technique spoke to Michael Hoffmann, Professor of Philosophy and Co-Director of ETHICx, about this event and what ETHICx is doing to promote ethics across campus.

“Ethics is an issue many people have to deal with and are happy to deal with, but what’s missing is some forum for communication and exchange,” Hoffmann said.

Hoffmann and others associated with ETHICx are also involved in several different AI projects across campus, including the National AI Institute for Adult Learning and Online Education (AI-ALOE), the National AI Institute for Advances in Optimization (AI4OPT) and The AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING).

“We have three AI institutes headquartered here at Georgia Tech, which means the discussion within ETHICx developed into artificial intelligence,” Hoffmann said on the projects’ roles.

ETHICx addresses many challenges directed at promoting diversity and inclusion in AI. Additionally, there are privacy and security issues when AI systems collect large amounts of data.

Beyond these more well-known issues that designers need to consider when implementing AI technology, Hoffmann and the other ETHICx members want to address more niche topics.

“Not only do we have an internal ethical analysis, but we are also organizing stakeholder engagement meetings to learn what the impacts of the technology are on wellbeing: positive or negative,” Hoffmann said.

Hoffmann noted that implementing ethical policies in AI technology does not stifle innovation; rather, the two complement each other. As he explained, “I think it’s not slowing innovation because technology teams are interested in what we’re doing. The technology teams are getting feedback that was very hard to get. It’s a synergy.”

Looking to the future, Hoffmann expressed his belief in the good AI can do when used as a tool, but said that it should continue to be monitored and regulated by humans for the foreseeable future.

“I think it’s just a great tool, but it is important to observe what’s going on and ensure that nothing bad happens,” Hoffmann said.

ETHICx hopes to host more such events to continue providing students, faculty and staff a place to discuss ethics issues as campus becomes increasingly involved in the development of AI technologies.

Anyone looking to learn more about ETHICx and future events can find additional information on their operations at