The California Civil Rights Council recently announced the release of proposed regulations to protect against employment discrimination from the use of artificial intelligence (AI), algorithms, and other automated decision-making systems.
Civil Rights Department Director Kevin Kish remarked on the proposed rules:
Through advances in technology and artificial intelligence, we’re taking steps to tackle climate change, develop cutting edge treatments in healthcare, and build the economy of tomorrow. At the same time, we also have a responsibility to ensure our laws keep pace and that we retain hard-won civil rights. The proposed regulations announced today represent our ongoing commitment to fairness and equity in the workplace. I applaud the Civil Rights Council for their work.
Under state law, the California Civil Rights Department is given the authority to enforce many of the state’s civil rights laws. This includes the areas of:
Employment;
Housing;
Business;
Public accommodations; and
State-funded programs and activities.
The Civil Rights Council, a branch of the California Civil Rights Department, develops and issues regulations to implement state civil rights laws. With respect to automated-decision systems, the Council’s initial proposed regulations were informed by several public discussions, including an April 2021 hearing, expert input, and federal reports and guidance. At that hearing, experts explained how algorithms are used to make employment, housing, lending, and healthcare decisions and how they can perpetuate existing biases and inequalities. Speakers presented ideas to address these issues, and members of the public spoke and submitted comments to the Council.
Automated decision-making systems, which may rely on algorithms or artificial intelligence, are increasingly used in employment settings to facilitate a wide range of decisions concerning job applicants or employees, including recruitment, hiring, and promotion. These tools can provide many benefits, but they can also exacerbate existing biases and contribute to discriminatory results. For example, a hiring tool may reject women applicants by mimicking the existing features of a company’s male-dominated workforce, and a job advertisement delivery system may reinforce gender and racial stereotypes by directing cashier ads to women and taxi-driver ads to Black workers. The Council noted that many of these challenges have been well documented.
Among other changes, the Civil Rights Council’s proposed regulations look to do the following:
Explain that it’s a violation of California law to use an automated decision-making system if it harms applicants or employees based on protected characteristics;
Make certain that employers and covered entities keep employment records, including automated decision-making data, for at least four years;
Provide clear examples of tests or challenges used in automated decision-making system assessments that may constitute unlawful medical or psychological inquiries;
Confirm that the use of an automated decision-making system alone does not replace the requirement for an individualized assessment when considering an applicant’s criminal history;
Clarify that third parties are prohibited from aiding and abetting employment discrimination, including through the design, sale, or use of an automated decision-making system; and
Add definitions for key terms used in the proposed regulations, such as “automated-decision system,” “adverse impact,” and “proxy.”
The Civil Rights Council encourages the public to comment on the proposed regulations by July 18, 2024.
Takeaway
The use of AI tools is increasing, and new regulations are following. California is not alone: the Biden administration issued Executive Order 14110 in the fall of 2023, calling for a “coordinated, Federal Government-wide approach” to the responsible development and implementation of AI. Among other things, the Executive Order addresses immigration policy, civil rights issues, wage-and-hour compliance, and labor-related risks and opportunities.
And just last month, the U.S. Department of Labor’s Wage and Hour Division and its Office of Federal Contract Compliance Programs (OFCCP) also issued new AI guidance. The Wage and Hour Division’s guidance clarifies employers’ obligations under federal labor laws as they pertain to the use of automated systems and AI. The OFCCP’s guidance cautions federal contractors that they must monitor their use of AI and automated systems to ensure they do not adversely affect applicants and employees from protected groups.
Bottom Line
California employers that use AI, algorithms, and automated-decision systems should stay up to date on the Council’s proposed regulations, as well as federal agency guidance. If the state rules are adopted, employers and employment agencies could be liable for disparate impact or disparate treatment stemming from the use of these tools.
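For employers wondering what monitoring for “adverse impact” can look like in practice, the sketch below applies one widely used heuristic, the EEOC’s four-fifths rule, to hypothetical screening outcomes. This is purely illustrative: the proposed California regulations do not prescribe this or any particular statistical test, and the group labels, numbers, and 0.8 threshold are assumptions for the example only.

```python
# Illustrative sketch only: flagging potential adverse impact in an automated
# screening tool's selection rates using the EEOC's "four-fifths rule" heuristic.
# The proposed California regulations do not mandate this specific test.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from an automated screen."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group's."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical data: (group label, passed automated screen?)
outcomes = ([("A", True)] * 60 + [("A", False)] * 40 +
            [("B", True)] * 35 + [("B", False)] * 65)

rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "review for potential adverse impact" if ratio < 0.8 else "no flag"
    print(f"Group {group}: selection rate {rates[group]:.0%}, ratio {ratio:.2f} -> {flag}")
```

In this hypothetical, Group B’s selection rate (35%) is only about 58% of Group A’s (60%), below the four-fifths benchmark, which would prompt a closer look at the tool; any real-world assessment should be done with counsel and against whatever standards the final regulations adopt.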