Exploring Ontological Assumptions in AI: A Study's Insights
A study reveals how ontological assumptions in AI limit understanding of diverse perspectives and potentially perpetuate bias.
Key Points
- A recent study critiques the ontological assumptions in AI that shape biases.
- AI outputs often reflect a narrow, biological perspective of humanity, favoring Western philosophies.
- Diverse cultural perspectives are underrepresented in AI systems' evaluations and outputs.
- The authors advocate for new frameworks that embrace a variety of human experiences in AI development.
A recent study published at the CHI Conference on Human Factors in Computing Systems argues that examining the ontological assumptions built into generative AI is essential to combating its inherent biases. Led by Stanford PhD candidate Nava Haghighi, the research finds that current AI systems such as ChatGPT and Google Bard predominantly reflect Western philosophical perspectives while neglecting diverse cultural views.
The study found that when asked to visualize a concept such as a tree, AI tools sometimes produce simplistic representations that omit key elements like roots. This illustrates how cultural background shapes our understanding of even fundamental concepts, and how AI often fails to capture that diversity. The researchers probed various AI models with 14 questions about their ontological frameworks and found that, while some models acknowledge that definitions of humanity differ across cultures, a significant bias toward a narrow biological definition remains.
Moreover, evaluations of AI outputs suggest that AI-generated behavior can be erroneously judged more "human" than that of actual people, raising questions about the criteria used to determine what counts as human-like. The authors recommend that AI development shift to include diverse ontological perspectives and adopt new evaluation methods that emphasize fairness while fostering broader conceptual understanding.
Failure to address these assumptions could entrench dominant truths that constrain human imagination, ultimately shaping how AI technologies affect society.