A Legal Theory for Autonomous Artificial Agents

'''Chopra, Samir, and Laurence F. White. A Legal Theory for Autonomous Artificial Agents.''' Ann Arbor, MI: University of Michigan Press, 2011. Rev. Jorge Martins Rosa, "Slaves, Vending Machines, and Bots," Extrapolation 54.2 (Summer 2013): 235-39, our source for this entry.

Rosa notes that "[...] in spite of following a very different route from the established debate concerning the idea of the post-human, namely because more focused on the 'humanization' of machines than on the 'cyborgization' of our biological self, they arrive at very similar conclusions." Hence, Chopra and White argue for "substantive reform" in the status of what we'll call AI entities, "acknowledging them as full-fledged agents." Chopra and White, Rosa asserts, want for these entities "legal personality: 'The most radical (and perhaps in the not-too-distant future, the most just) solution to the contracting problem" (and to other legal issues beyond contracts) "would be to treat artificial agents as legal agents who have legal personality and contracting capacities in their own right' (42)" (Rosa 235), i.e., that at least under the law such entities would be "persons" (Rosa 238).

A recurrent example is the similarity between the proposed treatment of artificial agents and the way Roman law regarded slaves: able to enter into contracts on behalf of their masters but not to sue in their own name. In an enlightening passage, the authors remind us: "The comparison of technologies of automation to slave labor is not new. Norbert Wiener famously noted" in The Human Use of Human Beings (1950/54) that "the automatic machine, whatever we may think of any feelings it may have or may not have, is the precise economic equivalent of slave labor" (Chopra and White 41, quoting Wiener, op. cit., p. 162, end of section IX).

Perhaps surprisingly, Rosa says, Chopra and White do not bring in Isaac Asimov and the Laws of Robotics (pp. 236-37). They do bring in the idea of things, in the special case of artificial agents, having what we usually like to think of as uniquely human attributes: attributes that, as a practical, pragmatic matter, AI entities can also possess, or soon will. Rosa puts it that the most serious barrier to AI/advanced-robot personhood (we use the common SF terms) "is [...] our own [human] stubbornness in believing we are the sole entities gifted with free will, autonomy, and a sense of moral responsibility." However, even with such stubbornness overcome, there would remain the practical problem of identifying the actual agent.

Quoting Legal Theory: "It is not clear whether the subject agent" (sic: apparently a psychological/philosophical "subject," with a psyche and subjectivity, who can act with autonomous agency) "is the hardware, the software, or some combination of the two," a question that can get further complicated when hardware and software are dispersed over different sites. "Or consider an agent system, consisting of multiple copies of the same program in communication, which might alternately be seen as one entity and a group of entities" (Legal Theory 181; Rosa 238).

These technical legal problems can be overcome, and we might be heading toward an expansion of some central human ideas, most specifically toward the "post-humanist" call by Norbert Wiener, made long before the term "post-humanist" was in use, for a redefinition of "life" (Rosa 239). Note this idea of living machines in, for example, the film SHORT CIRCUIT (1986) and its tag-line, referring to the robot Number (Johnny) 5: "Something wonderful has happened...No. 5 is alive."
For what it's worth, note that Clockworks2 Initial Compiler (Rich Erlich) retorted that the "alive" business indicates a failure by humans to imagine spontaneous, autonomous behavior by non-living things: a biocentric failure.