Legal Interpretations of Digital Algorithms and Consequences for Workers

Julia Tomassetti
Assistant Professor
City University of Hong Kong School of Law

This paper investigates the legal interpretation of digital algorithms and the consequences of these
interpretations for workers who are subject to algorithmic management. Legal liability often
turns on judgments about human agency—evaluations of discretion, control, and/or culpability.
When parties to such disputes attribute their actions to digital algorithms they create and manage,
legal decision-makers must interpret the algorithm to determine liability. At first glance, digital
algorithms are simply formulas, backed by computational power, that process inputs into outputs.
But they are also a social cipher: digital algorithms have human authors, but operate on such
scales as to appear hyper-rational and inscrutable. They appear automated and objective, but
require human curation and modulation of their “raw” data inputs. They are insensate, but
generate normative judgments on sundry social phenomena, from fictional literature to politics.
They are fixed by initial programming, but capable of interacting with users in unexpected ways.
Platform companies often use digital algorithms to manage labor—to design, allocate, monitor,
optimize, evaluate, and/or compensate work. For example, Uber uses its algorithm to coordinate
the work of thousands of drivers across space and time; drivers have little in-person interaction
with company personnel. The paper examines three kinds of disputes where the phenomenon of
the digital algorithm assumes legal import: (1) antitrust lawsuits accusing platforms of
price-fixing; (2) disputes over the employee status of platform workers; and (3) disputes over
the employer status of fast food franchisors that use payroll and scheduling algorithms to manage
their franchisees’ employees. I find that platform companies and courts tend to impute three
qualities to digital algorithms in arguing about agency: (1) automatism: the algorithm entails
little or no human intervention in its decisions and actions; (2) extra-human cognition: the
algorithm has calculative abilities at scales and levels of complexity beyond that which the
human mind is capable of coordinating or even comprehending; and (3) impersonal objectivity:
the romantic and animal passions that interfere with human decision-making do not contaminate
algorithmic rationality. I suggest that invoking these qualities performs significant analytical
work in the cases: First, legal actors liken the algorithm to the putative “invisible hand” of the
market, thereby excusing the company in antitrust and employment status disputes for its
replacement of market mechanisms with centralized coordination. Second, invoking algorithmic
impersonality draws on a stubborn association of employment with master-servant relations. The
rhetoric depicts the algorithm as an exemplar of disinterested authority and the antithesis of the
mercurial master, suggesting that platforms and franchisors are not the employers of platform
workers or fast food employees. Finally, these qualities implicate the algorithm in an
interpretative conundrum regarding the role of contractual specification in disputes over
employment status: on the one hand, the contract between a worker and platform company—the
alleged employer—often appears to fix, and therefore exhaust, the company’s authority to
intervene in the work by delegating management to the software; from the worker’s perspective,
however, the software algorithm enables the company to intervene continuously in the work,
and in a discretionary, opaque manner.