By: Matthew Mah
Volume IX – Issue I – Fall 2023
I. Introduction and Background
Artificial intelligence (AI) is the simulation of human intelligence in machines. This often includes “learning technology, software, automation, and algorithms” designed to make rules or predictions based on existing data. [1] Recently, chatbots such as ChatGPT catapulted AI to the forefront of public consciousness. These chatbots and other machine learning systems made headlines as they won art competitions [2] and “beat 90% of humans who take the bar to become a lawyer.” [3] Simultaneously, concerns have arisen about AI in the workplace. For employers, the potential benefits of supplementing or even replacing workers with AI are immense; AI could perform repetitive and mundane tasks faster and more accurately than humans [4]—with improved productivity and without compensation. Understandably, many employees are concerned about their positions—recent polling found that 24% of workers fear AI will make their jobs obsolete. [5]
Even more pronounced than workers’ concerns over job security is their unease about AI involvement in managerial tasks such as terminations and evaluations—56% of workers say they are not comfortable with AI assisting in such tasks. [6] Some corporations have already turned to AI when making termination decisions. At Amazon, algorithms—with little to no human oversight—often track and determine which workers to terminate. [7] Fearing the potential for a dystopian workplace controlled by faceless machines, legislators have passed bills to curtail AI managerial influence. U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act” on July 20, 2023. [8] This proposed bill states that AI technologies pose significant challenges for employment choices, labor rights, and workplace safety. The bill recognizes that without the appropriate oversight and safeguards, AI may increase “the risks of discrimination [...] and dangerous working conditions.” [9] The proposed bill requires employers to disclose the use of automated decision-making systems, including details about the data input and outputs from these systems. [10] The No Robot Bosses Act is one of many that attempt to prevent a future where workplace decisions are dominated by machines. Notably, AI legislation proposes solutions—such as requiring disclosure of AI-produced data in employment decision-making—that are likely to unintentionally soften the at-will rule. [11]
In the United States, the foundation of employment law and employment relations for nonunion private-sector businesses in every state except Montana is the at-will rule. This rule permits employers to fire or discipline workers for any reason, so long as the justification is lawful. [12] However, even if an employer fires an employee for an unlawful reason—such as protected class membership—the onus is on the worker to prove the unlawfulness of their termination. The worker must “collect the necessary evidence, prove discriminatory intent, and mount a legal challenge.” [13] For the past fifty years, employee advocates and work law scholars have targeted the at-will rule. [14]
There have been exceptions to at-will—while it is the default rule, it is not absolute. According to Novosel v. Nationwide, there exists a public policy exception to the at-will rule. The public policy exception has three prongs: the public policy must have no remedial structure within the statute, there must be no other reason for termination, and the public policy must be well-established, recognized, and strike at the heart of the public. [15] This exception is narrow. The requirement that there be no other reason for termination is easy for employers to defeat: an employer can almost always point to some reason for terminating an employee that is unrelated to public policy.
However, many labor-focused groups argue that narrow exceptions to at-will are insufficient, and have called for the replacement of at-will with “just cause” [16] —a standard in which “employers could only fire workers for well-documented cases of poor performance, misconduct or loss of business or profit.” [17] The recent wave of bills restricting AI in managerial capacities, with their focus on transparency, may be the first step towards the long-term labor goal of supplanting at-will with a more worker-friendly alternative.
This article will first discuss how the recent legislative attempts at regulating AI emerged, exploring human workers’ aversion to AI managers. Next, this article will focus on the substance of proposed AI-curtailing legislation to determine its purpose and possible side effects. This article concludes with an analysis of those side effects under the framework of the at-will rule—how will these bills designed to limit AI erode at-will?
II. The Human Aversion to AI: Why AI Restrictions are Emerging
In the United States, human attitudes towards AI are largely negative. [18] This negative perception stems from many sources, but this article will focus on two of particular significance: AI bias and the existential threat AI represents. AI bias describes situations in which AI makes decisions that are systematically unfair to particular people. People often imagine AI as perfectly objective, but this view rarely reflects reality. The existential threat of AI refers to the common perception, rooted in popular culture and academic concerns, that AI will inevitably go rogue and ultimately lead to the callous subjugation of humanity. While the latter source of negative AI perception may seem irrational, it consistently and prominently features in the zeitgeist and informs current legislative proposals.
The way AI technology is created partially explains AI bias. Someone—a human being—programs the machines, selects the data analyzed, develops the algorithm, and applies the algorithm. Development is often convoluted and complex—at each step, the opportunity for bias is present, frequently resulting in inequitable AI products. In one instance, facial recognition AI was found to misidentify Black women 35% of the time while correctly identifying Caucasian men nearly every time, [19] a disparity that could lead to wrongful arrests of people of color. In this scenario, researchers attributed the bias to the data the software “trained” on—mugshot databases, employed in identifying people with face recognition algorithms, “recycle racial bias from the past.” [20] This scenario is not an isolated occurrence. “Bad data” exacerbating injustices for historically marginalized groups is a recurring pattern, as seen in several mortgage algorithms charging minority borrowers higher interest rates compared to white borrowers. [21]
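Audits like the one described above surface bias by disaggregating error rates by demographic group rather than reporting a single overall accuracy figure. The following is a minimal sketch of such a disaggregated audit; the function name and data are hypothetical, invented for illustration only:

```python
def error_rates_by_group(records):
    """records: (group, predicted_id, true_id) triples from a
    hypothetical face-recognition audit. Returns each group's
    misidentification rate."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Synthetic data mirroring the disparity reported in the text:
# one group misidentified 35% of the time, the other 1%.
audit = ([("black_women", "wrong", "right")] * 35
         + [("black_women", "ok", "ok")] * 65
         + [("white_men", "ok", "ok")] * 99
         + [("white_men", "wrong", "right")] * 1)

rates = error_rates_by_group(audit)
# rates == {'black_women': 0.35, 'white_men': 0.01}
```

A model with 82% overall accuracy on this data would look acceptable in aggregate; only the per-group breakdown reveals the 35-to-1 disparity.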
As mentioned previously, AI development is incomprehensible to the average person. Employees rarely know why AI determined that they should be fired or whether the same standards are applied to their coworkers. As a result, automated systems sans human oversight “dehumanize and unfairly punish employees.” [22] Simultaneously, AI lacks accountability since non-sentient programs cannot face responsibility for managerial errors. In tandem, these two factors mean that AI involvement further obscures managerial decisions, which are already opaque. There would be no face to confront and no reasoning to challenge.
The discussion of AI growing too powerful for human control and causing harm is common. One thought experiment intended to highlight such a concern is the “Squiggle Maximizer,” better known as the “Paperclip Maximizer.” It proposes the following: consider an artificial general intelligence—an AI capable of behaving intelligently across many domains—whose goal is to maximize the number of paper clips in its collection. Inevitably, this machine would work to increase its own intelligence, because more intelligence would help it accumulate more paper clips. In this pursuit, the machine would use its enhanced abilities to further self-improve, undergoing an intelligence explosion and reaching levels far beyond humans. It would innovate better techniques to maximize the number of paper clips and pour all resources into paper clip manufacturing. This may seem foolish to humans, but for a machine solely focused on producing paper clips, if disregarding human life, joy, or other factors would maximize paper clips, it would do exactly that. [23]
This thought experiment illustrates that an intelligent machine is not necessarily capable of the “moral balancing” humans constantly undergo and it may not reach the same moral conclusions as humans (for example: favoring paper clips over joy or prioritizing workplace productivity over human well-being). Consequently, AI in the workplace may pose a substantial risk, as it could view humans as mere resources, similar to any other tool or asset. This view of AI as an inevitably malevolent force that micromanages humans in the workplace to their detriment is not uncommon. Thus, for many, the concept of an AI manager is fundamentally intolerable.
III. Proposed Legislation
Algorithmic opacity makes direct regulation of AI in decision-making difficult. To ameliorate concerns about AI in the workplace, proposed legislation creatively circumvents direct analysis and approaches regulation from two ends—outcome reviews and transparency. A proposed bill in New York, for example, stipulates three requirements for any employer who uses AI in the hiring process: (1) a disparate impact analysis—akin to the testing “of the extent to which use of an automated employment decision tool is likely to result in an adverse impact to the detriment of any [...] protected class” [24]—must be conducted annually; (2) a summary of the most recent disparate impact analysis must be made public prior to the implementation or continued use of the tool; and (3) the employer must provide the state with a summary of the most recent disparate impact analyses. [25] The bill, moreover, enables the attorney general to initiate an investigation upon suspicion of a violation. [26] This concept is not unique. A New York City law passed in 2022 imposes similar requirements but goes further, requiring such disclosure for promotions as well as hiring, [27] and the Illinois Video Interview Act requires employers who use AI review to submit a demographic breakdown of those who were rejected. [28]
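The New York bill quoted above does not prescribe a method for the disparate impact analysis, but a common benchmark in employment law is the EEOC’s “four-fifths” guideline: a group’s selection rate below 80% of the most-selected group’s rate flags potential adverse impact. The sketch below illustrates that calculation; the function names and data are hypothetical, and this is only one possible way an employer might operationalize the required analysis:

```python
from collections import Counter

def selection_rates(decisions):
    """Compute per-group selection rates from (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest rate.
    Under the four-fifths guideline, a ratio below 0.8 flags
    potential adverse impact."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit: group A hired at 60%, group B at 30%.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(decisions)      # {'A': 0.6, 'B': 0.3}
ratios = adverse_impact_ratios(rates)   # {'A': 1.0, 'B': 0.5}
flagged = [g for g, r in ratios.items() if r < 0.8]  # ['B']
```

Here group B’s ratio of 0.5 falls well below the 0.8 threshold, which is the kind of result the bill would require an employer to summarize and disclose annually.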
While demographic disclosure is important, the initial crux of the Illinois Video Interview Act was a requirement to obtain consent from job applicants whose videos would be analyzed by AI. [29] This concept of consent is a universal part of AI regulation in the workplace and reflects the unique perceived difference between machine and human decision-makers, as there are no laws regulating humans performing the same tasks. [30] Massachusetts’ Bill H.1873, An Act Preventing a Dystopian Work Environment, attempts to address this dignitary disparity. The bill stipulates two notable procedural requirements. First, employees are given a right to dispute the accuracy of data collected on them by AI; an employer is obligated to investigate (and correct, if applicable) disputed data. Second, if an employer relies on AI when making an employment decision—such as hiring, promoting, or terminating—the employer must (1) corroborate the AI data by other means and (2) provide notice to affected workers of what “worker data” the AI relied on, the other means that corroborated the decision, and the worker’s right to dispute. The act also proposes restrictions on the use of worker data, limits electronic monitoring of employees to “the least invasive” means of accomplishing an enumerated purpose, and states that AI must not “result in physical or mental harm to workers.” [31] An Act Preventing a Dystopian Work Environment is an example of what AI regulation may look like, as legislators have generally responded to the threat of AI and the complexity of algorithmic opacity in a similar fashion: by demanding more disclosure and transparency.
IV. Side Effects
Legislation similar to the Dystopian Work Environment Act would require employers to provide more information to employees than they currently do. That act requires employers to disclose to workers fired based on AI assessments that they are being fired and what data—both AI and non-AI—were used in reaching the decision. The worker would be able to dispute the relied-upon data, and if they did so, the employer would be required to investigate and adjust any decision based on inaccurate data.
This transparency is antithetical to the current at-will rule, under which employees can be fired for good cause, bad cause, or no cause at all. For the at-will worker, if their workplace adopts AI in the managerial decision-making process, the transparency of the workplace ironically increases, as they gain the right to know why they are being terminated and the opportunity to dispute the termination. But what are the consequences of weakening at-will, and is it desirable? To answer this, consider the effects of the current at-will standard.
Nearly half of workers have indicated that they had been fired for no reason or a bad reason. [32] This is a subjective perspective, but the statistic speaks to “how workers think about their jobs, their relationships with their managers and supervisors, and their daily experience of work.” [33] At-will is also responsible for eroding workplace protections. The default assumption of the at-will rule is that the employer’s action against an employee is legal unless disproven; as such, the individual worker is responsible for documenting that their treatment was unlawful, presenting a complaint, and pursuing legal recourse. The difficulty of proving employer wrongdoing and seeking redress partially explains why employer mistreatment of workers persists. [34] Similarly, at-will increases the risk of unaddressed employer retaliation. Since workers are required to take affirmative steps in reporting workplace violations, the law goes unenforced if the employee fears retaliation enough to avoid reporting violations. Indeed, fear of retaliation is relatively widespread, especially in low-pay industries—43% of workers in low-pay industries who reported raising complaints or issues with employers claimed to face illegal retaliation such as termination, “threats to call immigration authorities, or threats to cut hours and pay.” [35] Compounding at-will’s negative impacts, the contemporary trend of the workplace—characterized by declining union power and increasingly common mandatory arbitration clauses [36]—exacerbates the power at-will exerts over the workforce.
The Supreme Court’s May 2018 ruling in Epic Systems Corp. v. Lewis has reinforced the prevalence of arbitration clauses. Section 7 of the National Labor Relations Act (NLRA) guarantees workers “the right [...] to engage in other concerted activities for collective bargaining or other mutual aid or protection.” [37] While previous cases interpreted Section 7 to protect a broad range of concerted activity (including non-union and informal activities), [38] Epic Systems limits this interpretation. Epic Systems confines the application of Section 7 to the right to organize unions and bargain collectively, curtailing the expansive protection afforded by previous interpretations. This limitation extends to protection from mandatory arbitration agreements. In essence, Epic Systems establishes that individualized arbitration agreements, as long as they do not run afoul of other existing laws, are valid [39]—an explanation for the ubiquity of mandatory arbitration agreements in the workplace today. From the perspective of workers, facing such an increasingly dire work landscape, almost any reasonable endeavor to reform or replace at-will is desirable.
V. Conclusion
The rise of artificial intelligence in the workplace has triggered both excitement and apprehension. While AI promises efficiency and productivity gains, concerns about job security and the potential for biased decision-making have prompted legislative responses. The negative perception of AI, rooted in biases and fears of uncontrollable machines, has influenced these regulatory efforts. AI's opaque decision-making processes, coupled with the potential for algorithmic bias, raise concerns about accountability and fairness. Legislative proposals attempt to address these issues by requiring disclosure, consent, and dispute resolution mechanisms.
However, these regulatory measures may inadvertently challenge the long-standing at-will employment rule in the United States. The shift towards transparency, while empowering workers with information about AI-driven decisions, contradicts the traditional at-will principle where termination can occur without cause. The unintended consequence of these regulations could be a great boon to workers—a gradual erosion of at-will employment, providing workers with more rights and protections.
Endnotes
[1] Cornell Law School, artificial intelligence (AI), Legal Information Institute, https://www.law.cornell.edu/wex/artificial_intelligence_(ai) (last updated May 2023).
[2] Kevin Roose, An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy., The New York Times, (Sept. 2, 2022), https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
[3] John Koetsier, GPT-4 Beats 90% Of Lawyers Trying To Pass the Bar, Forbes, (Mar. 14, 2023), https://www.forbes.com/sites/johnkoetsier/2023/03/14/gpt-4-beats-90-of-lawyers-trying-to-pass-the-bar/?sh=1b1691f03027
[4] Clifton B. Parker, Artificial intelligence will both disrupt and benefit the workplace, Stanford scholar says, Stanford News, (May 17, 2018), https://news.stanford.edu/2018/05/17/artificial-intelligence-workplace/
[5] Laura Wronski, CNBC|SurveyMonkey Workforce Survey May 2023, SurveyMonkey.com, (May, 2023), https://www.surveymonkey.com/curiosity/cnbc-workforce-survey-may-2023/.
[6] Wronski, supra note 5.
[7] Spencer Soper, Fired by Bot at Amazon: `It’s You Against the Machine’, Bloomberg Law, (Jun. 28, 2021), https://www.bloomberglaw.com/bloomberglawnews/daily-labor-report/X62V090K000000?bna_news_filter=daily-labor-report
[8] Forman et al., The No Robot Bosses Act Aims to Regulate Workplace AI, JDSUPRA, (Aug. 1, 2023), https://www.jdsupra.com/legalnews/the-no-robot-bosses-act-aims-to-4374708/
[9] Bob Casey & Brian Schatz, No Robot Bosses Act of 2023, Bob Casey: U.S. Senator for Pennsylvania, https://www.casey.senate.gov/imo/media/doc/no_robot_bosses_act_of_2023.pdf (last visited Nov. 2, 2023)
[10] Casey & Schatz, supra note 9.
[11] Dallas Estes, Preventing a Dystopian Work Environment: AI Regulation and Transparency in At-Will Employment, onlabor.com, (Sept. 28, 2023), https://onlabor.org/preventing-a-dystopian-work-environment-ai-regulation-and-transparency-in-at-will-employment
[12] Kate Andrias & Alexander Hertel-Fernandez, Ending At-Will Employment: A Guide For Just Cause Reform, Roosevelt Institute, (Jan. 2021), https://rooseveltinstitute.org/publications/ending-at-will-employment-a-guide-for-just-cause-reform/
[13] Andrias & Hertel-Fernandez, supra note 12.
[14] Rachel Arnow-Richman, Just Notice: Re-Reforming Employment At-Will, UF Law Scholarship Repository, (2010), https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=2005&context=facultypub
[15] Novosel v. Nationwide Ins. Co., 721 F.2d 894 (3d Cir. 1983)
[16] Arnow-Richman, supra note 14.
[17] Andrias & Hertel-Fernandez, supra note 12, at 5.
[18] Lisa-Maria Neudert et al., Global Attitudes Towards AI, Machine Learning & Automated Decision Making, Oxford Internet Institute, (Oct. 7, 2020), https://oxcaigg.oii.ox.ac.uk/wp-content/uploads/sites/11/2020/10/GlobalAttitudesTowardsAIMachineLearning2020.pdf
[19] Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, (2018), http://proceedings.mlr.press/v81/buolamwini18a.html
[20] Kade Crockford, How is Face Recognition Surveillance Technology Racist?, ACLU, (Jun. 16, 2020), https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist
[21] Laura Counts, Minority homebuyers face widespread statistical lending discrimination, study finds, BerkeleyHaas, (Nov. 13, 2018), https://newsroom.haas.berkeley.edu/minority-homebuyers-face-widespread-statistical-lending-discrimination-studyfinds/
[22] Kevin Roose, A Machine May Not Take Your Job, but One Could Become Your Boss, New York Times, (Jun. 23, 2019), https://www.nytimes.com/2019/06/23/technology/artificial-intelligence-ai-workplace.html
[23] Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence, nickbostrom.com, https://nickbostrom.com/ethics/ai, (last visited Nov. 3, 2023)
[24] Latoya Joyner, Assembly Bill A7244A, The New York State Senate, https://www.nysenate.gov/legislation/bills/2021/A7244, (last visited Nov. 3, 2023)
[25] Joyner, supra note 24.
[26] Joyner, supra note 24.
[27] New York City Department of Consumer and Worker Protection, Use of Automated Decision Making Tools, 2022, https://rules.cityofnewyork.us/wp-content/uploads/2023/04/DCWP-NOA-for-Use-of-Automated-Employment-Decisionmaking-Tools-2.pdf
[28] Jamie M. Andrade, Jr., Video Interview Act, Illinois General Assembly, (Aug. 9, 2019), https://www.ilga.gov/legislation/BillStatus.asp?DocNum=2557&GAID=15&DocTypeID=HB&SessionID=108&GA=101
[29] Andrade, Jr., supra note 28.
[30] Estes, supra note 11.
[31] Dylan A. Fernandes, An Act Preventing a Dystopian Work Environment, The 193rd General Court of the Commonwealth of Massachusetts, https://malegislature.gov/Bills/193/H1873, (last visited Nov. 3, 2023)
[32] Alexander Hertel-Fernandez, American Workers’ Experiences with Power, Information, and Rights on the Job: A Roadmap for Reform, Roosevelt Institute, (Apr. 30, 2020), https://rooseveltinstitute.org/publications/american-workers-experiences-with-power-information-and-rights-on-the-job-a-roadmap-for-reform/
[33] Andrias & Hertel-Fernandez, supra note 12.
[34] Arnow-Richman, supra note 14.
[35] Annette Bernhardt et al., Broken Laws, Unprotected Workers: Violations of Employment and Labor Laws in America's Cities, NELP, (Sept. 21, 2009), https://www.nelp.org/publication/broken-laws-unprotected-workers-violations-of-employment-and-labor-laws-in-americas-cities/
[36] Alexander J.S. Colvin, The growing use of mandatory arbitration, Economic Policy Institute, (Sept. 27, 2018), https://www.epi.org/publication/the-growing-use-of-mandatory-arbitration/
[37] 29 U.S.C. § 157
[38] Labor Board v. Washington Aluminum Co., 370 U.S. 9 (1962)
[39] Epic Systems Corp. v. Lewis, 584 U.S. (2018)