The Hidden Costs of Neglecting Diversity Inclusion in AI Systems

3 Jul 2024


(1) Muneera Bano;

(2) Didar Zowghi;

(3) Vincenzo Gervasi;

(4) Rifat Shams.



In this section we briefly discuss a few insights relevant to the wider context in which our proposed process for deriving D&I-focused user stories should take place, and we discuss the limitations of our research.

A. Importance of Long-Term Trade-off Analysis

In the competitive AI-driven business landscape, prioritizing D&I over accuracy and profit may seem like a hard sell. However, it is essential to recognize that neglecting D&I can have adverse long-term effects on a company's reputation, including backlash, loss of user trust, and potential legal consequences. Businesses may initially benefit from the higher accuracy and efficiency of AI systems that do not fully address D&I concerns, and may be tempted to save on the related costs. The approach we have presented in this paper helps focus the investment on those facets of D&I that really matter in the specific context, by considering among the requirements only the user stories that address the attributes, values, roles, and artifacts relevant to the project.
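As an illustration, the relevance-based selection described above can be sketched as a simple filter over candidate user stories. The `UserStory` structure, the facet names, and the example stories below are hypothetical, introduced only to make the idea concrete; they are not part of the proposed process itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserStory:
    """A candidate D&I user story tagged with the facets it touches."""
    text: str
    attributes: frozenset  # diversity attributes, e.g. {"language"}
    roles: frozenset       # human roles involved, e.g. {"end user"}

def select_relevant(stories, project_attributes, project_roles):
    """Keep only stories whose facets intersect the project's D&I scope."""
    return [s for s in stories
            if s.attributes & project_attributes and s.roles & project_roles]

candidates = [
    UserStory("As a visually impaired user, I want screen-reader support.",
              frozenset({"disability"}), frozenset({"end user"})),
    UserStory("As a non-English speaker, I want localized prompts.",
              frozenset({"language"}), frozenset({"end user"})),
]

# A project scoped to language-related inclusion for end users retains
# only the second story, so effort is invested where it matters.
relevant = select_relevant(candidates,
                           frozenset({"language"}), frozenset({"end user"}))
print([s.text for s in relevant])
```

In practice the scoping sets would come from the context analysis of the specific project, not from a hard-coded list.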

B. Responsibility of the AI development team

Biased training datasets can perpetuate and reinforce discriminatory algorithms, because shared and reusable datasets carry their inherent biases forward [11], [31]. In addition, a lack of diversity and inclusion in AI development teams can lead to unconscious biases and stereotypes, as team members may inadvertently project their own perceptions of reality or society into their work [32], [33].

C. D&I Requirements and Non-Functional Requirements

Analysing D&I requirements in AI, like other non-functional requirements, involves concepts such as satisfaction up to a certain level (satisficing), inter-requirement structures based on positive and negative contributions to certain goals, and the determination of acceptable trade-offs in case of conflicts. By embodying general D&I guidelines in concrete user stories, our approach empowers developers to give them the same level of attention and rigour as other vital system characteristics. This ensures that AI systems are not only efficient and reliable but also ethically responsible and inclusive, catering to diverse user needs.
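The contribution structures and conflict detection mentioned above can be sketched minimally: each requirement contributes positively (+1) or negatively (-1) to shared goals, and a conflict arises when selected requirements pull the same goal in opposite directions. The requirement names, goals, and signs below are invented for illustration.

```python
# Hypothetical contribution links from requirements to goals,
# in the spirit of NFR goal-modelling: +1 helps a goal, -1 hurts it.
contributions = {
    "anonymize user data":      {"privacy": +1, "personalization": -1},
    "persona-based profiles":   {"inclusiveness": +1, "personalization": +1},
    "collect demographic data": {"inclusiveness": +1, "privacy": -1},
}

def conflicting_goals(selected):
    """Return goals that receive both positive and negative contributions
    from the selected requirements, signalling a needed trade-off."""
    signs_per_goal = {}
    for req in selected:
        for goal, sign in contributions[req].items():
            signs_per_goal.setdefault(goal, set()).add(sign)
    return sorted(g for g, signs in signs_per_goal.items() if {+1, -1} <= signs)

# "privacy" is helped by one requirement and hurt by the other,
# so an acceptable trade-off must be determined.
print(conflicting_goals(["anonymize user data", "collect demographic data"]))
```

A real analysis would of course weigh contributions qualitatively and consider satisficing thresholds rather than binary signs, but the structure is the same.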

D. Context and Culture

The context surrounding AI system development plays a crucial role in implementing diversity and inclusion requirements. For example, Europe's General Data Protection Regulation (GDPR) [34] enforces strict privacy laws that organizations must adhere to when designing AI systems; this in turn may affect whether certain D&I requirements can be satisfied. In contrast, some countries with less individualistic cultures may have less stringent privacy norms. The realization of D&I requirements in AI systems is also influenced by factors such as organizational culture, governmental and legal frameworks, and the societal norms that govern the development and deployment of these systems.

E. Using GPT-4 for writing D&I in AI Requirements

GPT-4, with its expansive training dataset, is well-equipped to assume diverse personas and extrapolate a broad range of requirements, reflecting potential challenges from myriad perspectives. This capacity aids in generating a holistic view of potential requirements, especially in the early stages of AI system design. However, while it can simulate multiple viewpoints, it inherently lacks the authenticity and depth of real human experiences, especially regarding nuanced aspects of diversity and inclusion. The subjective intricacies and tangible challenges faced by individuals in their lived experiences often provide invaluable insights that may not be entirely understood by GPT-4. Thus, combining the broad, simulated perspectives of GPT-4 with the grounded, real-world experiences of human stakeholders can offer a synergistic approach. Together, they provide a richer, more comprehensive information base that can immensely benefit analysts and developers, ensuring AI systems are both inclusive and cognizant of the intricate facets of human diversity.
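One possible way to operationalize the persona-based elicitation described above is to compose a prompt that asks the model to write user stories from a given persona's perspective. The prompt template and persona fields below are illustrative assumptions, not the paper's actual prompts, and the model call itself is omitted.

```python
def persona_prompt(persona, system_description):
    """Compose a hypothetical elicitation prompt asking a model such as
    GPT-4 to write D&I user stories from one persona's perspective."""
    return (
        f"You are {persona['name']}, {persona['background']}.\n"
        f"The system under design: {system_description}\n"
        "From your perspective, write user stories of the form "
        "'As a <role>, I want <capability>, so that <benefit>' that "
        "capture diversity and inclusion requirements."
    )

# Example persona; in practice, personas would be drawn from the
# project's relevant D&I attributes and roles.
prompt = persona_prompt(
    {"name": "Amina", "background": "a deaf user who relies on captions"},
    "a video-conferencing assistant",
)
print(prompt)
```

The stories such a prompt elicits would then be reviewed against the lived experience of real stakeholders, as argued above, rather than taken at face value.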

F. Limitations

Our research, despite employing a rigorous evidence-based methodology to identify D&I requirement themes, has certain limitations in coverage and applicability. One limitation is that the themes are derived solely from published and publicly available research, which might not cover the entire spectrum of D&I issues in AI systems. Additionally, although the identified requirements are generalizable and applicable to a broad range of AI systems, further analysis is necessary to tailor and adapt them to the specific context of an individual AI system project. The process of deriving D&I user stories that we have outlined is highly reliant not only on the requirements engineer's professional ability but also on their ethical and social sensibility. We believe this to be necessary, and the only way to give due consideration to the nuances of ethical reasoning.

These limitations imply that the current research serves as a foundation for understanding D&I requirements in AI, but more in-depth examination and customization are needed to address the unique challenges and peculiarities that may arise in specific AI implementations and social contexts.

This paper is available on arXiv under a CC 4.0 license.