Unfair Automated Hiring Systems...

Lina Khan, chair of the US Federal Trade Commission (FTC), recently highlighted the agency's commitment to regulating AI in an essay published in The New York Times. However, she failed to mention a crucial AI application that requires urgent regulation: automated hiring systems. These systems, ranging from basic resume parsers to more complex decision-making tools, are increasingly used by employers, leaving job applicants with little choice but to submit to them in order to be considered for employment.

In "The Quantified Worker," the author argues that AI technologies, particularly automated hiring systems, reduce American workers to mere numbers in the workplace. These systems often assign scores or ranks to applicants, disregarding the holistic human experience. Moreover, some systems engage in prohibited practices by sorting individuals based on their race, age, and sex during the employment decision-making process.

Ironically, many of these systems are marketed as unbiased and capable of reducing discriminatory hiring practices. However, because regulation is loose, studies have revealed that they deny equal employment opportunities to individuals based on protected categories such as race, age, sex, and disability. Legal actions have been taken against companies like Meta (Facebook's parent company) for selectively showing job advertisements based on gender and age, disproportionately excluding older workers and women from certain positions.

Employers invest in automated hiring systems partly to minimize liability for employment discrimination, and vendors, like all advertisers, are legally required to substantiate their claims of efficacy and fairness. The FTC has jurisdiction over automated hiring systems but has yet to release specific guidelines for advertising them. The author proposes that auditing should be mandatory to ensure that these platforms fulfill their promises: vendors would have to provide clear records of audits demonstrating their systems' effectiveness in reducing bias, following Equal Employment Opportunity Commission (EEOC) guidelines.

Collaborating with the EEOC, the FTC could establish a Fair Automated Hiring Mark, certifying systems that pass rigorous auditing. This mark would serve as a quality signal for both applicants and employers. Additionally, the FTC should allow job applicants, as consumers of AI-enabled online application systems, to sue under the Fair Credit Reporting Act (FCRA). That law can apply whenever a report is created for an "economic decision," which includes applicant profiles generated by online hiring platforms. An education campaign would inform applicants of their rights to petition for access to these reports and to have them corrected.

Drawing from a relevant legal precedent, the case of Thompson v. San Antonio Retail Merchants Ass'n (SARMA), the FTC could create a website to help applicants query automated hiring platforms for their reports and file claims for corrections. Fines and legal action against platforms failing to update inaccurate reports could also be established.

Finally, the author recommends a complete ban on the sale of automated video interviewing systems that claim to analyze human emotions. These claims, rooted in pseudoscience much like the discredited practice of phrenology, perpetuate biases and exclude applicants who are not of the majority race or who have disabilities.

Regulating marketing claims about AI hiring systems is crucial to discourage deceptive practices and ensure a fair playing field for job applicants. By implementing these proposals, the FTC can help ensure that well-intentioned employers invest in hiring tools that work rather than ineffective solutions.
