Joseph Krisnanto

Biases in AI: Good or inaccurate?

Artificial intelligence has recently become far more prominent across media, research, and industry. Because of this growing presence, many predict an “AI boom,” much like the “dot-com boom” that inflated the value of many internet-based companies. Yet AI remains in an experimental and incomplete stage, and there is little certainty about how businesses plan to advance its development. One issue, however, rises above the rest: the biases embedded in AI responses.


AI is not a source that can independently generate answers. A prompt-based AI can only respond to questions that have already been answered in its training data, because it cannot reason on its own — hence the name “artificial” intelligence. While drawing on the internet to answer questions may seem like an innovative solution, the internet has long been used, both in the past and the present, to spread misleading and untrue information. As a result, biased AI responses are quite common.


Google’s AI, Gemini, perfectly illustrated this issue. Gemini includes a generator that creates images from text prompts. That generator, however, proved highly susceptible to stereotype-laden prompts, such as “Name a US president.” To counteract this, Google engineers implemented explicit rules to reduce the depiction of white Americans in generated images and limit the AI’s reinforcement of stereotypes. This heavily backfired. When Gemini was asked to generate an image of a Founding Father (all of whom were white American men), the results instead depicted minority ethnicities, such as African Americans or Asians. Many other prompts that should have produced a particular image likewise yielded different subjects because of the implemented bias.



These issues go beyond AI itself; they delve into the ethical implications of manipulating a tool designed to retrieve factual information. The “politically correct” behavior of AI is not inherent to its data-driven intelligence but the product of deliberate adjustments layered on top of it. Google’s well-intentioned efforts to improve its AI serve as a stark reminder of the pitfalls of controlling a force that generates information. This ethical dilemma — whether to present the truth as it is or to alter it in response to recent controversies — demands our immediate attention.


Reference List


Shamim, Sarah. “Why Google’s AI Tool Was Slammed for Showing Images of People of Colour.” Al Jazeera, 9 Mar. 2024, https://www.aljazeera.com/news/2024/3/9/why-google-gemini-wont-show-you-white-people. Accessed 25 May 2024.
