
How iCIMS is preparing for the EU AI Act (and other AI legislation)

May 19, 2025
5 min read

iCIMS has delivered leading-edge applicant tracking and related software for over twenty-five years, and during that time our global customer base has empowered us to create transformative software. iCIMS’ experience in the HR sector has given us a deep understanding of the evolving intersection of HR, technology, and regulation, along with a strong respect for its impact. Our team of privacy, legal, engineering, and data science experts is keenly aware of the laws and regulations (both in force and forthcoming) that affect our products and solutions. This is also true of regulations relating to artificial intelligence (“AI”), especially since our acquisition of Opening.io in 2020, which catapulted iCIMS into the AI space. iCIMS has been tracking AI regulations and proposals across multiple global jurisdictions, and we are committed to ensuring that our products and solutions comply with applicable laws, the EU AI Act in particular.

 

What is the EU AI Act?

The EU AI Act is a comprehensive regulatory framework aimed at ensuring the safe and ethical use of AI within the European Union. It classifies AI systems by risk level: minimal, limited, high, or unacceptable. The Act imposes different requirements for each category, with the most stringent regulations applied to high-risk AI systems. These requirements include rigorous conformity assessments, documentation and record-keeping practices, and enhanced transparency and human oversight.

Key dates

Some EU AI Act obligations (concerning unacceptable risk AI and AI literacy) went into effect on February 2, 2025. Obligations relating to EU member state governance and “General Purpose AI” will go into effect on August 2, 2025, but the full EU AI Act obligations concerning high-risk AI systems will apply effective August 2, 2026.

A note on risk levels

Although it may sound foreboding, a “high-risk” designation under the EU AI Act simply reflects the potential for an adverse impact on a person’s fundamental rights under EU law.1 It does not indicate that a system is itself “risky” or poorly designed. Because most of iCIMS’ AI systems will be provided in the employment context, to “be used for the recruitment or selection of natural persons, in particular, to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates,” iCIMS has determined that most of our AI systems fall into the “high-risk” category. We are preparing the required materials, assessments, and documentation to meet the requirements for high-risk systems.

 

What is required by the EU AI Act?

The EU AI Act contains many obligations for providers (“Developers”) of AI systems. As a Developer of “high-risk” AI systems, at a high level, iCIMS must:

  1. Establish a risk management system throughout the development lifecycle;
  2. Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose;
  3. Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance;
  4. Design systems for record-keeping to enable automatic recording of events throughout the system’s lifecycle;
  5. Provide instructions for use to downstream deployers to enable user compliance;
  6. Develop a post-market monitoring system and plan; and
  7. Design high-risk systems to allow human oversight, and to achieve appropriate levels of accuracy, robustness, and cybersecurity.

 

How is iCIMS preparing?

Thanks in large part to our extensive experience with technology regulation and AI, iCIMS is on track to comply with its EU AI Act obligations. We have had robust AI governance in place for many years, including policies and procedures covering everything from documenting development to the proper use of AI in our systems. Our Responsible AI Committee maintains responsibility for risk management and meets regularly to update our risk registers and share important updates on AI development and law. In addition, our preparation for annual external audits, as required by NYC Local Law 144 of 2021, has allowed us to demonstrate fairness in a key feature of our AI technology.

We have conducted a gap assessment to understand what further details and documentation must be prepared for full compliance with the EU AI Act (among other laws), and this cross-departmental plan is already being executed.

To demonstrate our commitment to ongoing governance and transparency, iCIMS has achieved a Responsible AI Certification from TrustArc. This certification provides our customers with a clear demonstration of our ongoing commitment to Responsible AI development while we continue our EU AI Act compliance efforts. iCIMS is the only enterprise recruiting software provider to have obtained this certification.

Deployer obligations

iCIMS customers that operate in the EU must also be prepared to comply with certain obligations, including those laid out in Article 26. iCIMS is preparing the documentation our customers will need to use our AI systems in accordance with the law, as well as the information needed to provide appropriate notifications to their candidates. These documents and details will be provided in advance of enforcement dates.

 

What other laws is iCIMS preparing for?

While the EU AI Act is an important piece of legislation, it is not the only law that places responsibilities on developers and deployers of AI systems. New York City’s “AEDT” law was the first such legislation passed; it aims to ensure that automated employment decision tools do not subject job applicants to discriminatory outcomes. You can read more about our compliance efforts with this law here.

Additionally, new AI legislation has been coming out of the states at a rapid pace. Notably, the Colorado Consumer Protections for Artificial Intelligence Act was signed into law on May 17, 2024, and goes into effect on February 1, 2026. The Colorado Act requires that developers provide information and summaries of high-risk AI systems, implement a risk management program, and use reasonable care to avoid algorithmic discrimination (among other responsibilities). The California AI Transparency Act, approved by the governor on September 28, 2024, will require that adequate documentation is available to users of generative AI systems to provide information on the datasets used in the creation of the AI system. Most recently, the Virginia state senate passed the High-Risk Artificial Intelligence Developer and Deployer Act, which imposes requirements on developers and deployers regarding transparency, impact assessments, and allowing for corrections. If this law is signed by the governor, it will go into effect on July 1, 2026.

iCIMS has a project plan in place to ensure that documentation, impact assessments, and other developer requirements will be ready by the time each of these laws is enforced. iCIMS’ Responsible AI Committee is tracking our obligations under each of these laws (as well as others in the proposal stage) to ensure that we have the materials our customers need to be comfortable using our AI software, and to help with their compliance obligations. We will provide more information as these materials are developed, and look forward to working with our customers on these important issues.


About the author

Christine Raniga

Christine serves as a key liaison between the product development, engineering, and legal teams in her role as AGC, Product and Strategic Programs, and serves as a trusted advisor to iCIMS’ internal teams in multiple legal areas.

She also serves on iCIMS’ Responsible AI Committee, ESG Committee, and provides support and guidance across the business for commercial transactions, partnership programs, and policy development. Christine is licensed to practice law in New York and New Jersey, and holds multiple professional certifications including CIPP/E, CIPP/US, CIPT, AIGP, and FIP. 
