The deliberations around the Uniform Electronic Transactions Act envisioned the current advent of fully autonomous transactional agentic systems.
The law.MIT.edu research project on Agentic AI follows up in part on researcher Dazza Greenwood's early and continuous involvement with legal frameworks for automated and autonomous agentic systems since the 1990s.
One thread we plan to take forward in our current research is highlighted in the memo below, which formed part of the deliberations leading to the Uniform Electronic Transactions Act and addresses how the law should treat fully autonomous transactional processes. The memo recognizes such processes as within the scope of the legal framework but, as of that time, still a future application. Based on the recent and emerging applications of generative AI for autonomous agent systems, that future is now upon us.
M E M O R A N D U M
To: Uniform Electronic Transactions Act Drafting Committee;
Professor Patricia B. Fry, Chair and Professor Benjamin Beard, Reporter
From: ABA Section of Business Law, Cyberspace Law Committee Electronic Commerce Subcommittee Working Group on Electronic Contracting Practices Co-Chairs: Daniel Greenwood ([email protected]) and John Muller ([email protected]) (fn1)
Date: February 16, 1999
Subject: Preliminary Draft Report on UETA Legal Treatment of Electronic Agents
1. Introduction
This document is a revised draft (fn2) of the meeting proceedings of the Electronic Contracting Practices Working Group at the American Bar Association's Cyberspace Law Committee meeting in Atlanta, January 15-16, 1999. The Working Group is focusing its efforts on a survey of legal issues arising from the deployment of electronic agents for business purposes, including considerations of commercial, agency, intellectual property and tort law. As part of this survey, the Working Group is monitoring and commenting upon developments in UCC Articles 2, 2B and the UETA with respect to Electronic Agents. An initial rough draft of this document was presented on January 16th to Benjamin Beard, UETA Reporter, who has subsequently requested a more formal submission for consideration by the UETA drafting committee in time for the upcoming February 19-21, 1999 Drafting Committee meeting in Richmond, VA.
This document first explores the definition currently used in the UETA for Electronic Agent, and then examines the core operative legal rule in the current and immediately prior drafts of the UETA, along with a tentative proposed rule developed by the Working Group at the Atlanta meeting. Next, the extent to which operations of an electronic agent should be attributed to a user is discussed, followed by a query about the advisability of developing different operative rules for tool-like versus intelligent electronic agents (or, possibly, simply limiting application of the UETA to non-intelligent systems). Clearly, other sections of the UETA as well as other NCCUSL draft products also deal with the use of electronic agents; however, at this time the Working Group has limited its comments to the provisions noted due to the preliminary nature of this draft.
2. UETA Definitions
January 29, 1999 UETA definition of electronic agent (Section 102(8)):
"Electronic agent" means a computer program, electronic, or other automated means used to initiate or respond to electronic records or performances in whole or in part without review by an individual.
Working Group commentary: The emphasis in the definition on review by an individual may call for clarification that review occurring at any point after completion of the record or performance does not cause a program to fall outside this definition. Perhaps this should be clarified with reference to an objective standard, such as: "without human review up to and including the point at which a reasonable person would have expected the transaction to be concluded." It has also been pointed out to the Working Group that (absent artificial intelligence or malfunction) the future authorized operations of an electronic agent are in fact "reviewed" at the time the user enters the "input" (e.g., enters: "buy 5 shares of X stock at market value").
Query whether "inputs" and/or "outputs" are preferable terms to "performances," because any real-world performances (such as shipping goods) are really the result of computer input or output.
3. Alternative Operative Legal Provisions
Prior UETA Treatment: "A person who configures and enables an electronic device is bound by operations of the device."
Working Group Atlanta Meeting Suggestion: "A person may act through an electronic device, and the resulting operations of that device are the acts of that person."
Current UETA Treatment: Operations of an electronic agent are the acts of a person if the person used the electronic agent for such purposes.
4. Attribution of an Electronic Agent's Operations to the User
The Working Group strongly supports the move in the current draft of the UETA away from the wording that the user of an electronic agent is "bound" by the agent, and towards a simpler attribution rule that the acts of an agent are deemed to be the acts of the person who chose to use the agent, subject to all of the defenses that would be available to the person under existing substantive rules of law. Baldly legislating that a person is to be legally "bound" by the act of an agent strongly suggests a liability and risk allocation rule of strict liability. The scope of application of the UETA is broad enough to raise significant concerns over such a rule. For example, if the underlying transactions gave rise to contract or negligence claims, then application of such a harsh rule would seem out of place because these underlying bodies of law take into account various "escape valves" that allow avoidance of liability under certain circumstances. For example:
Contracts: There exists a contract law doctrine requiring that "a reasonable person would have concluded that an offer had been made" as a condition of the right to accept. The user of an agent that entered an order for 100,000 widgets, when the purchaser had regularly purchased quantities of no more than 1,000, should be able to avail herself of this rule.
Negligence: Similarly, under tort law, the user may be able to establish that she met the reasonable level of care a user must take in supervising its e-agent's actions, beyond which the user is exculpated.
Agency: Under agency law, principals may avoid contract liability for an agent’s acts if (i) the agent acts outside its scope of authority, or (ii) if apparent authority is established, the agent’s act nevertheless was of a character that prohibits reasonable reliance. Under a corollary "escape valve" rule, principals may avoid tort liability for an agent’s acts when the agent’s acts fall outside the scope of employment or scope of duties (i.e. see detour and frolic doctrines).
A rule that calls for a user to be "bound" by the operations of an electronic agent fails to take account of the types of situations for which the exceptions carved out in contract, tort and agency law have been developed. Simply holding that the operations of an agent are to be deemed the acts of a user leaves room for underlying fairness and exculpation rules to operate. It may be appropriate to note the availability of these types of defenses in commentary to the statute.
The Working Group also expressed concern about the potentially confusing terms "configures or enables," which could be read to apply to the person that installs the agent rather than the user. The word "uses" was suggested by the Working Group rather than "configures and enables" an electronic agent. The current draft UETA speaks to one who "used" an electronic agent.
Some members of the Working Group favored indexing attribution of an electronic agent's operations only to a person who engaged in use of the electronic agent for that purpose. The concern raised was that complex implementations may be capable of acts or operations that are non-obvious or unpredictable by a user and to which the user should not be committed. Situations constituting intervening/superseding causes, product malfunctions and unreasonable reliance by another party were discussed. Some members of the Working Group, however, disagree with an intent test such as that seemingly embodied in the current draft's rule: "Operations of an electronic agent are the acts of a person if the person used the electronic agent for such purposes" (emphasis added). Those members take the view that, where an innocent party experienced some loss, the counterparty should not have as a defense that her electronic agent acted against her purposes. If she made the choice to use the agent, she should assume responsibility for its acts, again subject to available defenses such as reasonable reliance. These members would likely prefer the wording proposed by the Working Group at the Atlanta meeting (see above).
The first clause of the Working Group's tentative language (above) is intended to make it clear that no formalistic legal barrier exists to persons wishing to use electronic agents. The second, more substantive clause is a simple rule of attribution deeming the acts of the agent to be the acts of the user. The understanding underlying this approach is that the legal consequences that would flow from those acts would and should be subject to defenses of the type described above. Inclusion of the word "resulting" was intended to link the actions which in fact set the electronic agent in motion with the consequent operations of that agent. In other words, the operations of an agent that is in fact hijacked by another person (or agent), and which then follows that intermeddler's bidding, may not be the result of the original user's activities. This would be an objective determination, however, instead of the more subjective determination of the user's purposes.
Another concept not discussed by the group in Atlanta but considered by the co-chairs is that a "one size fits all" rule for electronic agents may not be workable. Current applications of electronic agent technology tend to be fairly limited by their algorithms and do not attempt to "learn" or predict their user's desires. For example, an Amazon.com program that leads a user through the selection process and completes the sale of a book or CD, and most automated inventory management systems, do not negotiate prices. Agent technology already in existence and starting to be deployed, however, is capable of more complex "autonomous" decision making skills and, particularly when interacting with a community of other agents, so-called "emergent behavior" can result. Emergent behaviors cannot be predicted based solely on an understanding of the constituent parts of the system. In such a case, particularly as between several users of agents who each voluntarily set their agent in motion in a community of agents, it may not be appropriate to hold a user responsible for emergent behavior of its agent. This suggests that a different rule of attribution (and perhaps substantive liability rules as well) may need to be developed to address the unique and complicated results of emergent behavior of artificially "intelligent" mechanisms. Especially in systems designed to accommodate multi-agent transactional environments, group emergent behavior may require development of legal rules that are specially tailored to those contexts and circumstances.
Whatever concepts and wording are chosen as the applicable attribution rule for electronic agents, it would be appropriate to clarify in commentary that the statute does not address the topic of the liability of agent developers for defective design and "manufacture" of the agent, and is not intended either to expand or contract the scope of such liability, if any.
The Working Group is very much in the preliminary stages of its analysis, and welcomes any input and further dialogue on these topics.
-------------------------------------------------------------------------------
© 1999 American Bar Association. All rights reserved.
1. This draft reflects only the views of those individuals listed above. Until and unless finalization and approval of this draft occur, neither the contents of the report, nor the opinions expressed therein, necessarily represent the views of the American Bar Association or any part thereof. This draft is a work in progress, and it is expected that the final work product will be in the form of a report.
2. The revisions relate to the new definition and provisions relating to Electronic Agents in the January 29, 1999 draft of the UETA. Although the document reflects points raised in the course of discussions in Atlanta, it is still very much a work in progress. Members of the Working Group on Electronic Contracting have not yet given the Chairs their comments on this draft. There is a possibility that members of the Working Group or Committee will have further thoughts on this draft. These will be conveyed by either the members themselves or the Chairs on their behalf at the Drafting Committee meeting.