Bias in Background Checks: How Discrimination-Free Screening Works
Background checks occupy a legally sensitive point in the hiring process: they are intended to reduce risk, they make statements about a person’s integrity, and they often decide whether someone gets the job. At the same time, they carry considerable potential for discrimination — not only where discrimination is deliberate, but especially where standardized processes or algorithmic systems replicate and scale unconscious bias. For HR leaders in DACH companies, this creates a tension shaped by three legal frameworks: the General Equal Treatment Act (AGG), the GDPR with its prohibition on automated individual decisions in Article 22 GDPR, and the EU AI Act (Regulation (EU) 2024/1689), which entered into force in August 2024, applies in stages, and classifies HR-related AI systems as high-risk systems.
This article explains which forms of bias appear empirically in background checks, which legal limits apply, which technical and organizational measures are effective, and which ten questions every HR leader should ask a screening provider before putting the tool into production.
What the AGG specifically prohibits
The AGG is the central national implementing act for the European anti-discrimination directives. Under Section 1 AGG, its purpose is to prevent or eliminate disadvantages on grounds of race or ethnic origin, gender, religion or belief, disability, age, or sexual identity. Section 7 AGG contains the actual prohibition of discrimination, which under Section 6 AGG also applies to applicants. The key provision for background-check processes is Section 22 AGG: if, in a dispute, the employee proves facts that give rise to a presumption of discrimination, the other side bears the burden of proving that there was no violation of the anti-discrimination provisions.
The practical effect of this reversal of the burden of proof is often underestimated. An applicant with a non-German-sounding name who is rejected after a background check does not have to prove that the tool discriminated. They only need to present indications — for example, statistical differences in rejection rates — and the employer is then required to prove the opposite. If you operate here without documented test metrics, you simply have no viable defense position.
Which forms of bias occur in background checks
Bias in screening is not a monolithic phenomenon. In empirical research and regulatory reviews, four patterns are documented particularly often:
Name-based bias: Algorithms and manual researchers match names against public databases. Names that are common in certain ethnic groups generate more false-positive hits because near-matches occur more often. Statistically, a Mohammed Ahmed will have more namesakes on watchlists than a Sebastian Meier, without that saying anything about the person themselves (see the sketch after this list).
Geographic bias: Addresses in certain postal code areas are rated more poorly by credit scoring systems or risk-scoring models. Through the proxy variable “place of residence,” this can lead to indirect discrimination under Section 3(2) AGG.
Socioeconomic bias: Gaps in a résumé, unusual educational paths, or changes between atypical employment arrangements are interpreted as risk indicators in automated plausibility checks. These patterns systematically correlate with social background.
Adverse-media over-coverage: Media screening captures people who are covered in the news. Reporting density varies dramatically between countries with a free press and those where press freedom is restricted. An applicant from a heavily covered media environment produces more hits — including positive or neutral ones — than someone from a data-sparse environment.
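To make the name-matching problem concrete, here is a minimal Python sketch (not any vendor's actual algorithm) showing how fuzzy matching against a watchlist produces near-hits for names with many transliteration variants. The watchlist entries and the 0.85 similarity threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Illustrative, made-up entries; real watchlists contain millions of names,
# many of them transliteration variants of the same underlying name.
WATCHLIST = ["Mohamed Ahmad", "Muhammad Ahmed", "Mohammad Ahmadi", "Viktor Petrov"]

def fuzzy_hits(candidate: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return every watchlist entry whose string similarity to the candidate
    name reaches the threshold; each hit is a potential false positive."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, candidate.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A name with many common transliteration variants collides with several entries,
print(fuzzy_hits("Mohammed Ahmed"))   # [('Mohamed Ahmad', 0.89), ('Muhammad Ahmed', 0.86)]
# while a name without such variants on the list collides with none.
print(fuzzy_hits("Sebastian Meier"))  # []
```

The asymmetry has nothing to do with either applicant; it is a property of the list and the matching threshold, which is exactly why per-group false-positive rates need to be measured.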
GDPR Article 22: Limits on automated decisions
Under Article 22(1), the GDPR prohibits decisions based solely on automated processing that produce legal effects concerning the data subject or similarly significantly affect them. A hiring decision unquestionably falls under this rule. In its judgment of 7 December 2023 (C-634/21, SCHUFA Holding), the CJEU clarified that even the creation of a probability score used by third parties as a decisive basis for decisions can constitute an automated decision within the meaning of Article 22.
For background-check tools, this means the following in practice: if the tool outputs a risk score and that score effectively determines the hiring decision, Article 22 applies even if a human formally “clicks” at the end. The exceptions in Article 22(2) — contractual necessity, legal authorization, explicit consent — only apply under narrow conditions. Explicit consent in the application context must be assessed critically because of the structural imbalance under Article 4(11) GDPR and Section 26(2) BDSG. The remaining practical approach is therefore: the tool prepares decisions, but a human reviews them substantively and is free to deviate from the tool’s recommendation.
EU AI Act: High-risk AI in HR
Under Annex III No. 4, the EU AI Act classifies AI systems used to select applicants or assess candidates as high-risk systems. The requirements for high-risk systems are substantial. Especially relevant:
Article 14 AI Act – Human oversight: High-risk systems must be designed so that they can be effectively overseen by natural persons. The oversight must aim to prevent or minimize risks to health, safety, and fundamental rights. Supervisors must understand the system’s capabilities and limitations, recognize any automation bias that may arise, and be able to interpret the system outputs correctly.
Article 26 AI Act – Deployer obligations: Anyone who uses a high-risk system — typically the employer itself — has independent obligations. These include using the system in accordance with its purpose and instructions, ensuring human oversight by qualified persons, monitoring operation, and reporting serious incidents to the provider and, where applicable, to the competent market surveillance authority.
Article 27 AI Act – Fundamental rights impact assessment: Before deployment, certain deployers — above all public bodies and private entities providing public services — must carry out a fundamental rights impact assessment, with particular attention to discrimination risks in HR contexts.
The provisions on high-risk AI will apply in full from 2 August 2026. Anyone currently procuring HR tools with AI components must ensure compliance by then.
Concrete measures against bias
The most effective countermeasures are not technologically exotic, but organizationally disciplined:
Blind review: In the first screening stage, identifying features — name, photo, nationality, address — are masked. Decision-makers initially see only the information relevant to suitability. Identity is unmasked only in later stages (a minimal sketch follows after this list).
Anonymized candidate views: Modern screening platforms offer dedicated views in which sensitive categories are hidden or replaced with placeholders. The hiring manager decides based on standardized, comparable information.
Audit logs: Every access, every decision, every manual deviation from the tool’s recommendation is logged. In a dispute under Section 22 AGG, this is the defense foundation.
Four-eyes principle for adverse decisions: No tool recommendation leads to a rejection on its own. Every negative decision is reviewed by a second qualified person.
Calibration meetings: Regular team reviews of screening decisions ensure that criteria are applied consistently and individual biases do not become entrenched.
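What blind review and audit logging can look like technically is sketched below in Python. This is an illustration under simplifying assumptions, not a vendor implementation: the field names, the masking rules, and the log format are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

# Fields that identify the person and are hidden in the first screening stage.
# This field list is an assumption for the sketch; adapt it to your data model.
IDENTIFYING_FIELDS = {"name", "photo_url", "nationality", "address", "date_of_birth"}

def blind_view(candidate: dict) -> dict:
    """Return a copy of the candidate record with identifying fields masked,
    keeping a stable pseudonym so reviewers can reference the case."""
    pseudonym = hashlib.sha256(candidate["name"].encode()).hexdigest()[:8]
    masked = {k: ("***" if k in IDENTIFYING_FIELDS else v) for k, v in candidate.items()}
    masked["case_id"] = f"case-{pseudonym}"
    return masked

def log_decision(case_id: str, reviewer: str, decision: str, deviation: str | None) -> None:
    """Append one audit entry per decision; in a Section 22 AGG dispute this
    log is what reconstructs the decision history for the individual case."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "reviewer": reviewer,
        "decision": decision,
        "deviation_from_tool": deviation,  # documented reason if the human overrode the tool
    }
    with open("audit.log", "a") as fh:
        fh.write(json.dumps(entry) + "\n")

candidate = {"name": "Aylin Demir", "address": "Berlin", "degree": "M.Sc.", "references_ok": True}
view = blind_view(candidate)        # reviewer sees degree and references, not name or address
log_decision(view["case_id"], "reviewer_2", "proceed", None)
```

The stable pseudonym matters: it lets a second reviewer or a calibration meeting discuss the same case without anyone learning the identity behind it.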
Red-flag analysis: How to spot bias in a screening tool
How can you tell whether a tool is producing bias? A few indicators:
Opaque scores: The tool outputs a number or traffic-light color without making the underlying factors understandable. If you do not know the decision factors, you cannot address bias.
Missing fairness metrics: Providers that do not report Equal Error Rate, Disparate Impact Rate, or comparable metrics cannot demonstrate that their system performs evenly across protected groups.
Proxy variables: Postal code, educational attainment, native language, or job history can lead to indirect discrimination. If these fields are used in scoring logic, extreme caution is required (see the sketch after this list).
No documentation of the training dataset: AI models are only as fair as their training data. A provider that does not supply provenance documentation cannot demonstrate compliance with Article 10 AI Act.
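A pragmatic first test for proxy variables is to measure how strongly each input feature is associated with a protected attribute that the model is not allowed to use. The Python sketch below does this with a simple correlation check; the column names, the data, and the 0.5 cut-off are illustrative assumptions, and in practice larger samples and more robust association measures are needed.

```python
import pandas as pd

# Illustrative applicant data; in practice this comes from your ATS export.
df = pd.DataFrame({
    "postal_code_risk": [0.9, 0.8, 0.2, 0.1, 0.85, 0.15],
    "employment_gaps":  [2, 3, 0, 1, 2, 0],
    "years_experience": [4, 6, 5, 7, 3, 8],
    "ethnic_minority":  [1, 1, 0, 0, 1, 0],   # protected attribute, never used for scoring
})

PROTECTED = "ethnic_minority"
THRESHOLD = 0.5   # illustrative cut-off for "acts as a proxy"

# A feature that correlates strongly with the protected attribute can reproduce
# the discrimination even if the attribute itself is excluded from the model.
for feature in df.columns.drop(PROTECTED):
    r = df[feature].corr(df[PROTECTED])
    flag = "POTENTIAL PROXY" if abs(r) >= THRESHOLD else "ok"
    print(f"{feature:18s} r={r:+.2f}  {flag}")
```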
Testing: How background-check tools are assessed for bias
Empirical bias testing uses established statistical metrics. The two most important are:
The Equal Error Rate (EER) measures whether the rate of false positives and false negatives is consistent across protected groups. If the error rate for group A (for example, men) is 3 percent and for group B (women) is 8 percent, the system produces systematically unequal errors. The Disparate Impact Rate (DIR) comes from the U.S. Equal Employment Opportunity Commission and checks whether the selection rate of a protected group reaches at least 80 percent of the selection rate of the majority group (the four-fifths rule). If this threshold is not met, there is a strong indication of indirect discrimination.
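The arithmetic behind both checks is simple enough to reproduce yourself. The following Python sketch uses made-up counts chosen to match the 3-percent/8-percent example above; the group labels and numbers are illustrative.

```python
# Illustrative screening outcomes per group: false positives, false negatives,
# total decisions, and number of candidates passed. All counts are made up.
groups = {
    "group_a": {"fp": 3, "fn": 3, "total": 200, "passed": 160},
    "group_b": {"fp": 9, "fn": 7, "total": 200, "passed": 110},
}

# Error rate per group: if the rates diverge (here 3% vs. 8%), the system
# makes systematically unequal errors -- the pattern the EER check targets.
for name, g in groups.items():
    error_rate = (g["fp"] + g["fn"]) / g["total"]
    print(f"{name}: error rate {error_rate:.1%}")

# Four-fifths rule: the selection rate of the protected group must reach at
# least 80% of the majority group's selection rate.
rate_a = groups["group_a"]["passed"] / groups["group_a"]["total"]  # 0.80
rate_b = groups["group_b"]["passed"] / groups["group_b"]["total"]  # 0.55
print(f"Disparate impact ratio: {rate_b / rate_a:.2f}")  # 0.69, below 0.80
```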
Reputable providers publish these metrics in a model card or transparency report. They also have their systems independently audited for fairness and provide the corresponding certificates.
What applies in Switzerland, Austria, and across the EU?
Switzerland: Equality Act and Federal Constitution
Switzerland does not have a comprehensive anti-discrimination law comparable to the AGG; instead, it relies on sector-specific protections. The key law is the Gender Equality Act (GlG, SR 151.1), which prohibits discrimination based on sex in employment under Article 3 GlG — explicitly including hiring. Article 8(2) of the Federal Constitution contains a broader prohibition of discrimination, but it primarily applies against the state. In private employment relationships, Article 328 OR (duty of care) and Article 328b OR for data protection in employment relationships also apply. As things stand, Switzerland does not yet have a law for HR AI systems comparable to the EU AI Act; the relevant provisions are the general rules of the revised FADP, especially Article 21 FADP on automated individual decisions, which is structurally similar to Article 22 GDPR.
Austria: Equal Treatment Act and compensation
Austria has a legal framework structurally comparable to the AGG in the form of the Equal Treatment Act (GlBG, Federal Law Gazette I No. 66/2004). Section 17 GlBG prohibits discrimination in employment based on ethnic affiliation, religion, belief, age, and sexual orientation. Of particular practical importance is Section 26 GlBG on claims for damages: if an employment relationship is not entered into because of discrimination, a candidate who would otherwise have been hired is entitled to compensation covering the pecuniary loss and the personal harm suffered, of at least two months’ salary. The Equal Treatment Commission under Sections 1 ff. of the GBK/GAW Act can issue expert opinions that have evidentiary value in labor court proceedings. The EU AI Act will apply directly in Austria; additional national implementing rules are being prepared.
Across the EU: Race Equality and Employment Equality Directives
The European anti-discrimination architecture is primarily based on two directives: the Race Equality Directive 2000/43/EC prohibits discrimination based on race or ethnic origin in numerous areas of life, including employment. The Employment Equality Directive 2000/78/EC extends protection to religion, belief, disability, age, and sexual orientation, limited to the employment context. Both directives have been implemented in all Member States; their interpretation is continuously refined by the CJEU. In addition, there is the Gender Equality Directive 2006/54/EC and, in the AI field, the EU Charter of Fundamental Rights, especially Article 21 (non-discrimination) and Article 8 (data protection). The EU AI Act supports this framework from a technical and organizational perspective; the proposed AI Liability Directive, whose adoption remains open, would sharpen civil liability questions.
Practical checklist: Ten questions for the tool provider
Before introducing a screening tool, HR and data protection officers should ask the provider the following ten questions. Evasive answers are grounds for rejection.
High-risk classification: Is the system classified as high-risk AI within the meaning of Annex III of the AI Act? Is there a CE mark and declaration of conformity?
Training dataset: Which data sources were used to train the model? How was representativeness ensured under Article 10 AI Act?
Fairness metrics: Are Equal Error Rate and Disparate Impact Rate reported? How often are they updated?
Independent audits: Are there independent audit reports on fairness and bias? From which body?
Human oversight: How is human oversight under Article 14 AI Act specifically designed? What intervention options exist?
Transparency of decision factors: For each decision, is it traceable which factors were weighted and how?
Proxy variables: Which input variables are used? Do any of them act as proxies for protected characteristics?
Data hosting and transfer: Where is the data processed? Are there transfers to third countries, and if so, on what basis under Article 44 ff. GDPR?
Right to object and correction: How is the data subject’s right to human review under Article 22(3) GDPR implemented operationally?
Documentation and audit log: What logging is performed? In the event of an AGG dispute, can the decision history and tool output be reconstructed for the individual case?
Conclusion: Compliance as a product feature
Discrimination-free screening is not a by-product of good intentions, but the result of a clean system architecture, documented processes, and regular review. The legal framework is clear: the AGG, GDPR, and AI Act require employers to understand, control, and defend the decisions made by their screening tools. If you rely on black-box systems here, you bear the liability risk alone — without the tool provider being held responsible. Indicium Technologies builds background-check solutions that treat fairness requirements not as a later add-on, but as a core product feature: transparent factors, documented metrics, and audit-ready processes.
Book a demo and see what discrimination-free, AI Act-compliant screening could look like in your company.
Nabil El Berr