Systematic errors in AI predictions caused by flawed assumptions in the training data or algorithm. Can lead to unfair or inaccurate outputs for certain groups or scenarios.
AI bias refers to systematic errors or unfairness in AI system outputs that arise from problematic assumptions in the training data, algorithm design, or deployment context. Unlike random errors, biases consistently disadvantage certain groups or skew results in particular directions.
Sources of AI bias:
- Training data: unrepresentative samples or data that encodes past discrimination.
- Algorithm design: choices of features, proxies, and objective functions that favor some groups over others.
- Deployment context: applying a model to populations or situations it was not built or validated for.
Types of bias:
- Historical bias: the data reflects past discrimination that the model then reproduces.
- Selection (sampling) bias: some groups are underrepresented in the training data.
- Measurement bias: labels or features are imperfect proxies for the outcome of interest.
Detecting bias:
- Compare model outcomes and error rates across demographic subgroups.
- Compute fairness metrics such as selection-rate (disparate impact) ratios; a minimal sketch follows below.
- Commission independent bias audits and keep monitoring results after deployment.
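As a rough illustration of the subgroup comparison behind a disparate impact check, here is a minimal Python sketch. The group names, example data, and the 0.8 threshold (the "four-fifths" rule of thumb used in US employment contexts) are assumptions for illustration; a real audit follows the metrics the applicable law or framework actually requires.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the selection (positive-outcome) rate for each group.

    decisions: list of (group, selected) tuples, where selected is True/False.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.

    A ratio below 0.8 is a common red flag (the "four-fifths" rule of thumb).
    """
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes from an automated screening tool
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(decisions)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```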
For US businesses, understanding and mitigating AI bias is crucial for compliance with federal anti-discrimination laws, FTC enforcement actions, and state and local AI regulations such as New York City's Local Law 144 on automated employment decision tools.
We help American businesses identify and mitigate AI bias through rigorous testing, diverse and representative data practices, and ongoing monitoring aligned with the NIST AI Risk Management Framework (AI RMF) and EEOC guidance on algorithmic decision-making.
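To make "ongoing monitoring" concrete, below is a minimal sketch of a periodic subgroup check. The batch names, groups, and alert threshold are hypothetical, and the logging-based alert is just one possible design; in practice the monitored metrics should match whatever the governing framework or audit requires.

```python
import logging
from collections import defaultdict

logging.basicConfig(level=logging.INFO)
THRESHOLD = 0.8  # hypothetical alert threshold (four-fifths rule of thumb)

def monitor_batch(batch_name, outcomes):
    """Check one scoring batch for large gaps in subgroup selection rates.

    outcomes: list of (group, selected) tuples from a period of model decisions.
    Logs a warning when a group's selection rate falls below THRESHOLD times
    the highest group's rate, so the gap can be investigated.
    """
    totals = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        totals[group][0] += int(selected)
        totals[group][1] += 1
    rates = {g: s / n for g, (s, n) in totals.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        if best > 0 and rate / best < THRESHOLD:
            logging.warning("%s: group %s impact ratio %.2f below %.2f",
                            batch_name, group, rate / best, THRESHOLD)
        else:
            logging.info("%s: group %s ok (selection rate %.2f)",
                         batch_name, group, rate)

# Hypothetical weekly batches of automated decisions
monitor_batch("week_01", [("group_a", True), ("group_a", False),
                          ("group_b", True), ("group_b", True)])
monitor_batch("week_02", [("group_a", True), ("group_a", False),
                          ("group_b", False), ("group_b", False)])
```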
"A US employer using an AI hiring tool must comply with EEOC guidelines and state laws like Illinois BIPA and NYC Local Law 144 - requiring bias audits, candidate notification, and ongoing monitoring to avoid discriminatory outcomes."