Generative Artificial Intelligence systems that produce images often reproduce narrow and exclusionary patterns of representation, particularly in contexts associated with success, wealth, and leadership. When prompted with descriptors such as “successful man,” “rich businessman,” or “happy professional,” these systems overwhelmingly depict white men, while Black, Asian, Indigenous, Latino, and other underrepresented groups are systematically omitted. This default reflects assumptions embedded in training data and model design, reinforcing a limited and inequitable visual narrative of success and authority.
Although these groups do not yet hold the majority of positions of economic and corporate power, they are significantly represented within the global economy, occupying leadership roles, entrepreneurial positions, and decision-making spaces. This social reality, however, is not adequately reflected in generative AI outputs, which continue to rely on historically Eurocentric visual references. Social change is already underway, yet it has not been consistently incorporated into AI systems. This persistent underrepresentation raises critical questions about the social norms, cultural narratives, and perceptions of legitimacy and success that these technologies reinforce.
This talk presents a critical analysis through case studies, examining prompts and AI-generated images to reveal recurring patterns of inclusion and exclusion.
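As a rough illustration of the kind of audit underlying these case studies, the pattern analysis can be sketched as a representation tally across generated images per prompt. The prompts, labels, and counts below are hypothetical placeholders, not the talk's actual dataset or annotation scheme:

```python
from collections import Counter

# Hypothetical perceived-demographic annotations for images generated
# from each prompt (illustrative only; not real model outputs).
annotations = {
    "successful man": ["white", "white", "white", "asian", "white"],
    "rich businessman": ["white", "white", "white", "white", "black"],
    "happy professional": ["white", "asian", "white", "white", "white"],
}

def representation_rates(labels):
    """Return each group's share of the generated images for one prompt."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

for prompt, labels in annotations.items():
    rates = representation_rates(labels)
    # Flag the dominant group to surface prompts where one group
    # crowds out all others in the outputs.
    dominant_group, dominant_share = max(rates.items(), key=lambda kv: kv[1])
    print(f"{prompt!r}: dominant group {dominant_group} at {dominant_share:.0%}")
```

In practice, audits of this kind also have to confront who assigns the demographic labels and how: perceived-identity annotation is itself a contested design decision.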
Drawing on Human-Centered Design and Tech Ethics Frameworks, the session explores the ethical, social, and design implications of generative AI. It highlights the responsibility of designers, and the role of the predominantly Eurocentric and North American big tech companies, in shaping more equitable, representative, and socially responsible AI systems. Designing for users requires not only addressing functional needs but also engaging with the social, cultural, and symbolic contexts in which users exist.
Audience Takeaways:
1) Understand how generative AI systems reproduce structural exclusion in visual representations associated with success, power, and leadership.
2) Recognise the impact of training data, algorithmic bias, and design decisions on representational outcomes.
3) Develop a critical perspective grounded in Human-Centered Design, considering users within their broader social and cultural contexts.
4) Apply principles from Tech Ethics Frameworks to assess the ethical, social, and cultural implications of generative AI systems.
5) Reflect on the responsibility of design in shaping technologies that influence collective narratives of belonging, legitimacy, and authority.