Algorithmic bias refers to systematic and repeatable errors in a computer system that produce unfair outcomes, often reflecting societal prejudices. These biases can arise from unrepresentative training data or flawed model design, leading to discriminatory impacts in fields ranging from finance to criminal justice. Mitigating such bias is crucial for building equitable AI systems.
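One common way to quantify the kind of disparity described above is the demographic parity difference: the gap in positive-outcome rates between two groups. The sketch below is illustrative only; the group labels, loan-approval framing, and data are hypothetical and not drawn from any real system.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups.
    A value near 0 suggests parity; larger values indicate a disparity
    worth investigating for bias."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions (1 = approved, 0 = denied).
approvals_group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 75% approved
approvals_group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3/8 = 37.5% approved

gap = demographic_parity_difference(approvals_group_a, approvals_group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

Demographic parity is only one of several competing fairness criteria (others condition on qualification or error rates), so a low value here does not by itself establish that a system is unbiased.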