Regulation

EU AI Act and HR: When Background-Check Software Becomes High-Risk AI

April 20, 2026

From 2 August 2026, the core obligations of the EU AI Act will apply in full. Article 6, read together with Annex III, classifies AI used for recruiting and personnel selection as high-risk. For HR teams, that means transparency obligations, bias monitoring, and documentation. This guide explains which background-check software falls within that category — and how to become compliant.

What does Article 6 of the AI Act say?

Article 6 of the AI Act defines when an AI system is considered "high-risk". There are two routes:

  1. Product-integrated AI (Art. 6(1)) — AI as a safety component of regulated products (medical devices, vehicles).

  2. Standalone AI under Annex III (Art. 6(2)) — AI systems in sensitive areas.

Annex III, point 4, covers "employment, workers' management and access to self-employment" — specifically:

  • AI for analyzing and filtering applications

  • AI for evaluating candidates (video interview analysis, scoring, ranking)

  • AI for decisions on hiring, promotion, dismissal

  • AI for monitoring work performance

Which background-check software is high-risk?

The distinction is crucial. Not every piece of software with an AI component is automatically high-risk.

High-risk (Annex III)

  • AI-based CV parsing that independently ranks or scores candidates

  • Video interview analysis with automatic personality assessment

  • Automated rejection decisions without human review

  • Predictive hiring — AI that forecasts the likelihood of success

Not necessarily high-risk

  • Pure data-matching tools (sanctions lists, PEP, adverse media) without scoring logic

  • Identity verification with machine vision (biometric verification whose sole purpose is confirming a person's identity is expressly carved out of Annex III, point 1)

  • Automation that merely prepares human review (shortlist, hit display)

Art. 6(3) of the AI Act provides an exception: AI systems that do not pose a significant risk are not high-risk — for example, purely preparatory tasks or narrowly limited procedural work. What matters is whether the AI makes decisions independently or structurally supports a human decision.
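The distinction above can be sketched as a rough first-pass triage. The snippet below is purely illustrative — the `AiTool` record, its flags, and the `likely_high_risk` helper are assumptions invented for this example; the real Annex III / Art. 6(3) assessment is a legal judgment about the concrete deployment context, not a boolean check.

```python
from dataclasses import dataclass

@dataclass
class AiTool:
    """Simplified, illustrative description of an AI tool used in hiring."""
    name: str
    scores_or_ranks_candidates: bool    # CV scoring, ranking, predictive hiring
    decides_without_human_review: bool  # e.g. fully automated rejections
    preparatory_only: bool              # merely prepares a human decision (cf. Art. 6(3))

def likely_high_risk(tool: AiTool) -> bool:
    """First-pass triage mirroring the lists above. Not legal advice:
    the final classification depends on the concrete deployment context."""
    if tool.scores_or_ranks_candidates or tool.decides_without_human_review:
        return True  # squarely within Annex III, point 4
    return not tool.preparatory_only  # Art. 6(3) exception may apply

cv_ranker = AiTool("CV ranking", True, False, False)
list_matcher = AiTool("Sanctions-list matching", False, False, True)
print(likely_high_risk(cv_ranker))     # True
print(likely_high_risk(list_matcher))  # False
```

In practice such a triage only tells you which tools need a proper legal assessment first — it never replaces that assessment.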

Obligations for employers (deployers)

Anyone using high-risk AI is a "deployer" under the AI Act — and has several obligations:

  • Transparency toward candidates (Art. 26(11)): candidates must be informed before the AI evaluation takes place

  • Human oversight (Art. 14): a qualified person must review the results before a decision is made

  • Impact assessment (Art. 27): fundamental rights impact assessment before use, with documentation

  • Input data monitoring (Art. 26(4)): input data must be relevant and sufficiently representative for the system's intended purpose

  • Logging (Art. 26(6)): retain the logs automatically generated by the system for at least six months

  • Employee information (Art. 26(7)): inform the works council / workforce before rollout

Obligations for providers

Software manufacturers have even stricter obligations:

  • Conformity assessment (Art. 43) with technical documentation under Annex IV

  • CE marking for high-risk AI

  • Risk management system (Art. 9)

  • Quality management system (Art. 17)

  • Post-market monitoring (Art. 72)

  • Registration in the EU database for high-risk AI

Providers established outside the EU who offer high-risk AI in the Union must appoint an authorised representative in the EU (Art. 22).

How Indicium is positioned

Indicium is not a high-risk AI system under Annex III. The platform performs structured data matching (sanctions lists, PEP, adverse media), documents hits, and presents the results via a traffic-light system — the final hiring decision is made exclusively by the human decision-maker.

Specifically:

  • No automated rejection — Indicium delivers review results, not hiring recommendations

  • No personality assessment — no video interview analysis, no predictive scoring

  • Audit trail supporting human oversight under Art. 14 AI Act

  • Documented training-data governance for the databases used

For regulated industries (BaFin, FINMA), we can build additional compliance layers to AI Act standards on request.

What applies in Switzerland, Austria, and across the EU?

Switzerland

Switzerland has not adopted the AI Act. A separate AI regulation is being prepared for 2027. Until then, Art. 26 et seq. of the revised FADP (data protection), Art. 328b CO (employer data processing), and the general prohibition of discrimination apply. Swiss companies operating in the EU must nevertheless comply with the AI Act (extraterritorial scope under Art. 2).

Austria

The AI Act applies directly. In addition: § 10 AVRAG (employee data protection) and the Equal Treatment Act (GlBG). The Data Protection Authority (DSB) is likely to be designated as the AI Act supervisory authority for high-risk HR AI.

Across the EU

On 2 February 2026, the Commission published guidelines on the practical implementation of Art. 6 (Art. 6(5)). The full application date for high-risk AI is 2 August 2026.

Checklist for HR teams

  1. Create an inventory of all AI tools used in recruiting — including embedded AI in ATS, scoring tools, and video interview platforms

  2. Check each tool: high-risk under Annex III or exception under Art. 6(3)?

  3. Request conformity documentation from the provider (CE, technical documentation)

  4. Implement deployer obligations: impact assessment, human oversight, logging, candidate information

  5. Involve the works council (§ 87 BetrVG in Germany, § 96 ArbVG in Austria)

  6. Update the data protection impact assessment (Art. 35 GDPR)
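Steps 1–3 of the checklist can be kept as a simple machine-readable inventory. The sketch below is a made-up example — the tool names and classifications are placeholders, not a legal template; the helper merely filters out the tools for which conformity documentation must be requested from the provider.

```python
import csv
from io import StringIO

# Illustrative inventory (checklist steps 1-2); every entry is a placeholder.
INVENTORY_CSV = """tool,function,classification
ATS CV parser,ranks applications,high-risk (Annex III point 4)
Video interview analysis,personality scoring,high-risk (Annex III point 4)
Sanctions-list matching,data matching only,not high-risk (Art. 6(3))
"""

def tools_needing_conformity_docs(raw_csv: str) -> list[str]:
    """Tools flagged high-risk, i.e. those where checklist step 3
    (request CE marking and technical documentation) applies."""
    rows = csv.DictReader(StringIO(raw_csv))
    return [r["tool"] for r in rows
            if r["classification"].startswith("high-risk")]

print(tools_needing_conformity_docs(INVENTORY_CSV))
# ['ATS CV parser', 'Video interview analysis']
```

Keeping the inventory in a plain, versioned file makes it easy to attach to the fundamental rights impact assessment and the GDPR DPIA.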

Conclusion

The AI Act clearly separates background-check tools: pure data-matching platforms like Indicium are not high-risk. Scoring, ranking, and decision AI are. For HR teams, that means: take stock of your tools, check the scope, request documentation. If an AI provider cannot give you a clear AI Act position, you should switch — otherwise the deployer liability sits with you.

Book a demo and see how Indicium structurally supports human decision-making instead of replacing it. You can find all compliance documents in the Trust Center.

Nabil El Berr





Save 70% of your screening time

Every unchecked hire is a risk. Start now with automated background checks.

GDPR-compliant · Made in Europe · Results in minutes

Dashboard of the Indicium platform with different analysis areas.
Display of a candidate's risk level in the Indicium report.

Sign up for the newsletter

Legal Information

Made in Europe

Compliant with Data Protection

Ready to use immediately

Hünenberg (Switzerland) · Hamburg (Germany)

© 2026 Indicium Technologies AG.

All rights reserved.
