
Legislation Tackles ‘Responsible Use’ of Artificial Intelligence

A Brave New Year

By Lauren C. Ostberg, Esq. and Michael McAndrew, Esq.

Artificial intelligence — specifically, natural-language chatbots like ChatGPT, Bard, and Watson — has been making headlines over the past year, whether it’s college writing teachers’ attempts to avoid reading machine-generated essays, the boardroom drama of OpenAI, the SAG-AFTRA strike, or existential anxiety about the singularity.

On the frivolous end of the spectrum, one of the authors of this piece used ChatGPT to find celebrity lookalikes for various attorneys at their firm, and learned that ChatGPT defaults to the assumption that, irrespective of race, gender, or facial features, most people (including Lauren Ostberg) look like Ryan Reynolds. On the more serious end, state legislatures, including those of Massachusetts and Connecticut, have labored over bills that would harness, regulate, and investigate the power of AI.


In Massachusetts, for example, the Legislature is considering two bills: one (H.1873) “To Prevent Dystopian Work Environments,” and another (S.31) titled “An Act Drafted with the Help of ChatGPT to Regulate Generative Artificial Intelligence Models Like ChatGPT.” The former would require employers using any automated decision-making system to disclose the use of such systems to their employees and to give employees the opportunity to review and correct the worker data on which those systems rely. The latter, sponsored by Hampden County’s state Sen. Adam Gomez, aims to regulate newly spawned AI models.

While the use of AI to draft S.31 is, in its own right, an interesting real-world application of the technology, the bill’s substance matters more. S.31 proposes a regulatory regime in which “large-scale generative artificial intelligence models” must register with the attorney general. In registering, AI companies would be required to disclose detailed information, including “a description of the large-scale generative artificial intelligence model, including its capacity, training data, intended use, design process, and methodologies.”

In addition to requiring registration, S.31 (if passed) would require AI companies to implement standards to prevent plagiarism and to protect individually identifiable information used as part of the training data. AI companies would also have to “obtain informed consent” before using individuals’ data. To ensure compliance, the bill gives the attorney general enforcement powers and the authority to promulgate regulations consistent with the bill.

While S.31 provides robust protections against using data gathered from citizens of the Commonwealth to train AI models, it may falter because of the amount of disclosure it requires of AI companies. Operating in a new and fast-moving field, those companies may be hesitant to disclose their processes as S.31 demands.

Though commendable in its effort to protect creators and citizens, S.31 may ultimately drive AI-based businesses out of the Commonwealth if they fear their competitively sensitive processes will be disclosed as part of the public registry the bill envisions. The structure of that registry is currently unclear, however; only time will tell how much information would be available to the public. Time will also tell whether S.31 (or H.1873, referenced above) makes it out of committee and into law.

Meanwhile, in Connecticut

This past June, Connecticut passed a law, SB-1103, that recognizes the dystopian nature of the government using AI to make decisions about the treatment of its citizens. It requires that, on or before Dec. 31, 2023, Connecticut’s executive and judicial branches conduct and make available “an inventory of all their systems that employ artificial intelligence.” (That is, it asks the machinery of the state to reveal itself, in part.)


By Feb. 1, 2024, the executive and judicial branches must also conduct (and publicly disclose) an “impact assessment” to ensure that systems using AI “will not result in unlawful discrimination or a disparate impact against specified individuals.” ChatGPT’s presumption, noted above, that every person looks like a symmetrically faced white man would be far more serious in the context of an automated decision-making system that affects the property, liberty, and quality of life of Connecticut residents.

This legislation, proposed in Massachusetts and enacted in Connecticut, is, of course, just the beginning of government’s attempts to grapple with the “responsible use” (an Orwellian term, if ever there was one) of AI and technology. Massachusetts has proposed the creation of a commission to address the executive branch’s use of automated decision-making; Connecticut’s new law has mandated a working group to consider an “AI Bill of Rights” modeled after a federal blueprint for the same. The results — and the inventory, and the assessments — remain to be seen in the new year.

Lauren C. Ostberg is a partner, and Michael McAndrew an associate, at Bulkley Richardson, the largest law firm in Western Mass. Ostberg, a key member of the firm’s intellectual property and technology group, co-chairs the firm’s cybersecurity practice. McAndrew is a commercial litigator who seeks to understand the implications and risks of businesses adopting AI.