Software code is built on rules. The way it enforces them is analogous in certain ways to the philosophical notion of legalism, under which citizens are expected to follow legal rules without thinking too hard about their meaning or consequences. The opacity, immutability, immediacy, pervasiveness, private production, and ‘ruleishness’ of code amplify its ‘legalistic’ nature far beyond anything that could be imposed in the legal domain, however, raising significant questions about its legitimacy as a regulator. With the aim of mitigating this ‘computational legalism’, the article explores how we might critically engage with the text of code, rather than just the effects of its execution. This means contrasting the technical performance of code with the social performativity of law, demonstrating the limits of viewing the latter as merely a regulative ‘modality’ that can easily be supplanted by code. The latter part of the article considers code and the processes and tools of its production, drawing on theories of textual interpretation, linguistics, and critical code studies to consider how that production might be legitimised.
computational legalism, legality, code pragmatics, speech act theory, legal institutionality, critical code studies
1. Rules, Texts, Hermeneutics
Software code is all about rules: an algorithm is a “procedure or set of rules used in calculation and problem-solving”. By executing such a procedure, the computer will often subject its user — a citizen — to a further set of ‘rules’, imposed by the interface and the ‘geography’ of the running code. The code shapes, guides, enables, and inhibits the citizen’s possibilities for action, whether within the ‘virtual’ geography of the software system itself (for example in the rules of a video game, or the affordances of a social media application or word processor) or more broadly in the physical world (for example where a car prevents a driver from breaking the speed limit, or a library’s digital borrowing system automatically prevents the lending of a book).
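The kind of constraint just described can be made concrete with a brief sketch. The following is a minimal, hypothetical illustration in Python of the speed-limiter example; the function and constant names, and the limit value, are invented for the purpose of the example:

```python
# Hypothetical sketch of code-as-regulator: a speed limiter that clamps
# the driver's requested speed to a hard-coded maximum. The names and
# the value of the limit are illustrative, not drawn from any real system.

SPEED_LIMIT_KPH = 50  # the 'rule', fixed at design time


def allowed_speed(requested_kph: float) -> float:
    """Return the speed the vehicle will actually permit.

    The driver cannot argue with this function at run time: whatever
    they request, the code silently enforces the embedded rule.
    """
    return min(requested_kph, SPEED_LIMIT_KPH)
```

However sophisticated the real system, the structure is the same: the possibilities for action are bounded in advance by the text of the rule.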
In the legal world, uncritical deference to rule-following is sometimes characterized under the rubric of legalism. In its strongest forms, legalism becomes an ideology, according to which not only should rules be followed, but they should not be questioned or even interpreted beyond their apparent meaning. Whereas this legalistic stance is a choice in the legal world, in the world of computation it is entirely standard. Software code cannot accommodate the kinds of productive ambiguity that natural language can, and when its ‘ruleishness’ is combined with code’s other characteristics of opacity, immutability, immediacy, pervasiveness, and private production, what results is a ‘legalism’ that is qualitatively and quantitatively far more troubling than the legal alternative.
Broadly, our thinking about code rules fits into two distinct temporal categories – their production and their execution. By focusing on the former, we can identify sites of critical engagement with the text of those rules – the source code – and the tools and methods of its creation. Code text is ‘performative’, creating a new state of affairs in the world when it is executed, but it is also documentary, both describing what will happen and telling us something meaningful about the conditions of its production. By interpreting code as a text, we can make it intelligible and can, in turn, begin to think about ways to shape its production that avoid or at least minimize the slip of its normative force toward the ‘computational legalism’ just described. The problem of computational legalism arises not just in systems to which legal operations have been delegated (though it will have particular salience in that context), but is equally important wherever code is employed as a means of ordering human affairs. Just as we ought not to accept the ideology of legalism in the practice of law, so too should we resist its manifestations in other normative contexts, be it the rules of conduct in a social setting such as a community club or college dormitory, or the code that structures a citizen’s interactions with a smart thermostat or autonomous car.
The contributions of this article are two-fold. First, it sets out the notion of computational legalism, mapping its broad contours with an emphasis on the characteristic of ruleishness and what it means for the interpretation and enforcement of code normativity as compared with textual rules. Secondly, it approaches the phenomenon so identified from the perspectives of speech act theory and textual interpretation, drawing on literature in media studies and the philosophy of language. This enquiry raises many questions, in relation, for example, to seeing source code as a speech act, the ‘autonomy’ of the code-text, and the simultaneously performative and documentary roles that those texts inhabit. Can there be such a thing as a ‘governing ideal’ for the production of code rules, one that minimizes code’s legalism at the level of the machine such that the normativity it produces is acceptable in a democratic society? And to the extent that legal and code texts share characteristics – both are performative, albeit in crucially different ways; both impose rules; both are built around pre-defined constructs, normativity, and ordering – what might this tell us about the appropriate interplay between the two worlds, particularly when code is the medium through which legal norms are enforced? While one article cannot answer these questions exhaustively, I do aim to highlight the need to interpret the code rules to which citizens are subject, which in turn entails the consideration of how the text of those rules is produced. The article concludes by suggesting some avenues for future thinking in this area.
2. Rule by Code
2.1 From Legalism to Strong Legalism
Before we can consider code’s ‘legalistic’ nature, it is necessary to take a step back to understand legalism in its orthodox context. Legalism is the perspective that deems rules, promulgated by an authorized sovereign, to be the proper fundaments of social ordering. In the text of the rule lies the beginning and the end of how citizens are required to behave. Legalism admits of degrees, however. In its stronger, more ideological variant, citizens are expected to act like automatons, minimally interpreting the rules and simply following them as they are laid down. There is little in the way of interpretative flexibility and the respect for individual autonomy that this implies; what is normative under a weaker notion of legalism (asking citizens to follow a rule) becomes simply a command, something to be followed mindlessly. Law is seen as solipsistic, sealed off from the social world it serves; it is “self-contained and autogenerative”. Enfranchised citizens have a political role, but once the outputs of the “dirty business” of politics have been converted into the rules of legislation, they take on the character of scientific data to be processed by the institutions and vocabulary of the legal scientist.
Wintgens extends these characterizations in his in-depth analysis of legalism and its philosophical genesis. For him, strong legalism flows historically from the “conjugation” of various elements, namely (i) the representation by rules of the ‘true’ world, (ii) their embodiment of timeless truths that are not open for debate, (iii) the concealment of the political reasons that animate a particular articulation of a rule, (iv) the belief that the state is the only true source of law, and (v) the view of law as a science. Taken together, these elements form the ‘strategy’ of strong legalism, where the focus for citizens is the following of rules rather than how they came to be. Under strong legalism, the sovereign’s exercise of power is de facto legitimate, and thus not open to question.
Strong legalism contrasts with legality, a concept that acknowledges the importance of rules and legal certainty but views these as elements of a functioning legal system rather than its whole. This more reflexive perspective seeks to respect individual autonomy within a framework of democratically legitimated rules that apply to both the individual and the sovereign rule-giver. Under legality, the bare expectation that citizens follow the rules — which on its own would be strong legalism — is thus tempered by further considerations. This can include the design of the legal rule, which is justified only if it meets minimal formal requirements. These include characteristics such as intelligibility, prospectivity, non-contradiction, and temporal stability, which contribute to the legitimation of the rule independently of its political merits. All things being equal, this makes it reasonable to expect citizens to acquiesce to the resulting rule, because its ‘design’ has been legitimated in advance. A theme that is common in theories of legality is thus the counterbalancing or constraining of the bare power of rule-making. This in turn implies a continual reinvigoration of law’s nature as legal, and a ‘binding to the mast’ of the sovereign’s otherwise unfettered power, under the rubric of the rule of law.
2.2 Computational Legalism
The analysis of legalism and its historical and political roots may seem an odd perspective from which to view the regulative capacity of software code. The connection is deeper than it might at first seem, however. Code constitutes and regulates different forms of behaviour, and in so doing it embodies to a great extent the ideology of strong legalism. In fact, it both concretises that ideology and amplifies it far beyond what is imagined in the legal sphere. As Bańkowski and Schafer put it,
The alternative to legality is not anarchism, it is legalism… ‘not thinking about it’, if left to its own devices, tends to take over the entire social world, or at least cyberspace.
This is a challenge to the cyberlibertarian ideology that implicitly prioritizes the freedom of the commercial designer, rather than the citizen (or ‘user’). Even if we accept the cyberlibertarians’ claims that cyberspace is in fact free of institutional law, what replaces law for the user is not freedom, but rather the unthinking rule-following imposed by the commercial code that makes up that ‘place’.
I am not suggesting that designers and the programmers of code necessarily harbour a legalistic ideology, however. What matters more is that code by its very nature tends toward a kind of strong legalism. This is the case regardless of the intent of the programmer, however vicious or virtuous that may be. The ontological features of code very easily facilitate the strongest of ‘legalisms’, under which the ideological ought of strong legalism (you must follow this rule without thinking too much) becomes the technological is of code (you have no choice but to follow this rule — or, more strongly, the very character and limits of your action are defined from the outset). What was normative becomes simply descriptive. This alone is sufficient cause for concern, but when it is combined with, and amplified by, the other characteristics of code, the picture becomes starker. The sheer speed of execution collapses any opportunity for deliberation, preventing the possibility of reinterpreting its ‘rules’. If we add to this the opacity of its operation (at both ‘back-end’ infrastructure and ‘front-end’ interface levels), the sheer amount of it operating around us and, of course, its production by private enterprise for profit, the simultaneous parallels and contrasts with legalism become clearer.
In the computational domain, code imposes, accelerates, and amplifies the characteristics of strong legalism beyond what a traditional legal system is capable of. This is particularly so given the reliance of law first on text as the medium through which it promulgates rules, and second on the courts as the institutional mechanism of their recognition and enforcement. By contrast, neither legitimacy nor the threat of enforcement is required for code rules to be enforced. By nature, the ability of code to enable and constrain behaviour is deeply latent; the structure and logical flows of its rules may be impossibly complex in aggregate, but they nevertheless lie in wait, fully primed, awaiting only the command to execute. Once the algorithm is set in motion, its execution happens entirely independently of any requirement of recognition on the part of those subject to its rules, and the ‘interpretation’ of the code is solely and entirely within the purview of the machine, at least at the point of performance. Code is deterministic in this important and limited sense. So while one should not overstate the extent to which code rules currently predetermine behaviour in the ‘offline’ world, the risk is that, as we transition into the ‘onlife’, this predetermination becomes ever greater. This underlines the importance of analyzing the production of code ex ante, rather than focusing only on its effects, whose reflexivity might mean our responses are too little, too late.
2.3 Three Consequences of Code Rule
We come now to the consequences of code’s ‘ruleishness’, the central aspect of its legalistic nature. Ruleishness is the idea that, at point of execution, code imposes a clearly delineated and pre-determined set of hard-edged rules. Upon execution, nothing outside the rules will be imposed, and everything inside them will be. This is true regardless of any pragmatic contingencies that mean — or should mean, were we able to argue the point — that some other condition should be taken into account to alter either the fact or the nature of the code’s execution.
There are three primary elements to code’s ruleishness: its mindless execution, the hard edges of the rules, and the necessarily limited ontology that a given system can represent. Of course, the rule might be designed to allow for different possibilities — a scaled value instead of a binary, for example — but the essential point is that this design choice is itself ruleish. In this sense, then, code is representational: it constructs and operates on a particular world, a representation of the real world, and knows nothing of anything that lies beyond the borders so constructed.
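The point that even a scaled value is itself ruleish can be illustrated with a short, hypothetical Python sketch. The discount tiers below are invented; what matters is that the scale, however graduated it appears, is fixed in advance and admits nothing beyond itself:

```python
# Illustrative sketch: a rule designed to allow a range of outcomes
# (a scaled value rather than a binary yes/no). The flexibility is
# itself 'ruleish' -- the tiers, thresholds, and the zero default are
# all determined at design time. Tier values are hypothetical.

def loyalty_discount(years: int) -> float:
    """Return a discount rate based on years of membership."""
    if years >= 10:
        return 0.20
    if years >= 5:
        return 0.10
    if years >= 1:
        return 0.05
    return 0.0  # everything outside the designed tiers gets nothing
```

A member with nine years' loyalty gets exactly the five-year rate, whatever arguments of fairness might be made; the scale's apparent flexibility is bounded by its own hard edges.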
2.3.1 Mindless Execution
The mindlessness of code stems from its execution in every case where the ex ante requirements of the rules it contains are met. This is regardless of the context, consequences, or any reason that implies that the rule should not be executed. To paraphrase Introna, code “produces what it assumes”.
Crucially, this mindless production of the ex ante assumptions of the programmer takes place regardless of her intention; the problem of computational legalism is not simply about the deliberate mis-use of power by the creators of code, but also about their inadvertent exercise of that power. Again, we see the amplification of orthodox legalism: whereas a legislature might produce a statutory rule whose terms inadvertently create the potential for some malady, this can in principle be ignored by those subject to it, and if necessary struck down by a court once the problem has been identified. Not so in the code context. Provided the rules are syntactically valid according to the strictures of the programming language (I will return to this theme below), the code will execute regardless of what the programmer intended or what she thought she was writing. This is an extremely common experience in writing code: programmers iteratively test the rules they write to see if they perform as intended. But it also hints at a more troubling implication: what about latent conditions in the code that the programmer is unaware of? Where these manifest as obvious bugs, they can be detected easily and hopefully fixed. There is always the possibility, however, that they are hiding in plain sight, contained in code that ostensibly performs as intended. In either case, the machine will carry on regardless.
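A hypothetical illustration of this divergence between intention and execution: suppose a programmer intends to admit anyone aged 18 or over, but writes a strict rather than an inclusive comparison. The code is perfectly valid and executes exactly as written, not as intended:

```python
# Sketch of 'mindless execution'. The docstring records the intended
# rule; the code embodies a subtly different one. The machine follows
# the text, not the intention, and the defect is visible only to a
# human reading the code or testing the edge case. Names are invented.

def may_enter(age: int) -> bool:
    """Admit visitors aged 18 or over (the *intended* rule)."""
    return age > 18  # latent bug: excludes exactly-18-year-olds
```

The function silently turns away every eighteen-year-old, and will do so indefinitely unless a human interprets the text and notices the gap between what was written and what was meant.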
2.3.2 Hard Edges
This is the other side of the coin from mindless execution: as long as the conditions in a code-based rule are not met, the rule will never execute, regardless again of any external condition or consideration. There is no Hartian open texture and “penumbra of doubt”; only the core of meaning does or can exist in the world of executable code — there is no possibility of alternative interpretations, at least for the computer. This is the inverse of Hart’s argument about “mechanical jurisprudence”, which he suggested could not exist in a contingent world. Where rules are executed within the computational domain, a mechanical jurisprudence is precisely what is imposed on a contingent world by the ruleishness of code, made manifest and amplified by the interlocking and mutually-reinforcing characteristics of computational legalism. The bright lines of computational rules demonstrate the complete inability of code to accommodate ambiguity. Any apparent ambiguity is illusory, since it has been consciously designed and is treated as ambiguous at the level of human interpretation, rather than existing as such within the internal calculus or logic of the machine. By contrast, all human understanding is predicated on the interpretation of the (potentially) ambiguous, in which an existing ‘horizon’ of tacit knowledge ‘fills out’ the limited depth of the communicated text.
This is not the case with respect to code, however. There is only what there is in the text; the system has no sensitivity to any notion of tacit or background knowledge beyond itself. The conditions that determine the application of the rule are defined ex ante, and there is no possibility of departing from them however justified one might be in wishing or arguing for it.
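A brief sketch may illustrate such a hard edge. Consider a hypothetical submission deadline: a human administrator might accept a filing one second late for good reason, but for the code nothing exists outside its pre-defined condition (the timestamp value below is arbitrary):

```python
# Sketch of a 'hard edge': a hypothetical deadline rule. One second
# past the cut-off is simply 'late'; no surrounding circumstance can
# be weighed, because none is represented. The deadline is an
# arbitrary illustrative Unix timestamp.

DEADLINE = 1_700_000_000


def accept_submission(submitted_at: int) -> bool:
    """Accept a submission only if it arrives by the deadline."""
    return submitted_at <= DEADLINE  # no penumbra, no appeal
```

There is no penumbra of doubt at the boundary: the condition either holds or it does not, and with it the rule fires or lies dormant.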
Code’s ontology is also fundamentally limited. The form of the rule is that which is laid down; a design might allow for some leeway, but, paradoxically, the flexibility is defined inflexibly. While the contextual interpretation of the effects of code’s performance may be feasible, which might include resisting those effects, this is necessarily separate from the text of the code itself. This is precisely why focus on the production of that text is so important. It is there that interpretation can play an important precautionary role, feeding into the process of designing the code to critically identify what aspects of the world must be represented, how the representations are limited, and what the implications are when code assumes them to be the ‘real thing’ in its objects, data structures, and logical flows. I discuss this important focus later in the Section on performativity.
2.3.3 Limited Ontology
The limited ontology of code represents the point at which the previous two concerns become problematic in practice. Code can only respond to features and conditions that are anticipated and represented in its design (again, even code which can apparently adapt to new data — including machine learning algorithms — is reflexively constrained by the design of that very adaptability). Code operates on a closed world assumption, with an ontology that is determined from the outset as a kind of Platonic simulacrum of the phenomenon being represented. If its designers anticipate only responses A, B, and C to conditions X, Y, or Z, these are all that the code will ever recognise. This simplification of reality may be necessary in order to find a pragmatic balance between representation, complexity, and the solution to the problem (assuming those are even the appropriate concepts). Unlike underdetermined textual rules, however, there can be no re-interpretation after-the-fact, to extend the code’s ontology where that might be appropriate. Code can only ever be sensitive to what Hildebrandt terms ‘intra-systemic meaning’, which in this case refers to the rules and representations designed into its ontology, whereas meaning for humans is borne of the interactions that cross the boundaries between systems. In the end, we are stuck with the ontology of the code defined by the programmer, which brings us back to the question of production and the need to ensure the problems of computational legalism are adverted to up-front, during the design process.
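The closed world assumption can be sketched in a few lines, using the placeholder conditions (X, Y, Z) and responses (A, B, C) from the text; the mapping is of course hypothetical:

```python
# Sketch of a limited ontology: the mapping below *is* the system's
# entire 'world'. Conditions and responses are the placeholders used
# in the text, not drawn from any real system.

RESPONSES = {"X": "A", "Y": "B", "Z": "C"}


def respond(condition: str) -> str:
    """Return the designed response to a recognized condition."""
    try:
        return RESPONSES[condition]
    except KeyError:
        # Anything outside the designed ontology is invisible to the
        # system; the best it can do is fail, not re-interpret.
        raise ValueError(f"unknown condition: {condition!r}")
```

Faced with a condition W that its designers never anticipated, the system cannot extend its own ontology; it can only error out or, worse, misclassify W as one of the conditions it does know.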
Code is legalistic, therefore, because even where its text is open to scrutiny there is still no space left for the citizen to interpret. Even if she has access to and can understand the source code, in the course of its performance she has no choice but to acquiesce to the rule as it was designed. And as we saw above, even where there is the possibility of some choice built into the design, the scope of this is itself pre-determined. Even were this not the case, the default settings of a system are often assumed to be immutable, or at least the most sensible option.
When taken together, these elements suggest a ‘legalism’ that goes far beyond even the strongest of orthodox legalisms. Under the rule of code, the possibility of interpreting a rule and disagreeing about how to respond to it is routinely and comprehensively elided, simply by virtue of the nature of the medium.
3. Interpreting the Rules of Code
The previous Section discussed how the ‘ruleishness’ of code is especially problematic in terms of interpreting how it constitutes and regulates behaviour, as a means to contesting and guiding it toward legitimate applications. The question then becomes: What can be done to ameliorate this ‘legalistic’ nature?
One way of analyzing code is to think in terms of its relationships to those who use it or are affected by it. This will include individual citizens, collectives, people with specific (in)capacities (e.g. the visually or cognitively impaired), or those whose roles fit within a broader normative context (e.g. judges, traffic wardens, public administrators). For such relational analyses, the theory of affordance is proving valuable in the legal literature. One can adopt a normative legal perspective when viewing code through this lens: we can ask, for example, what relationships and features the positive law requires a digital artefact to have with respect to a specific class of ‘user’. This is powerful in terms of identifying the features and relationships that code ought to facilitate, but its engagement with the underlying text of a system’s code is only indirect. That text of course constitutes the features of the system, and (one side of) the affordance relationships that it instantiates when it is executed.
Although it is valuable to analyze code in relational terms, in the remainder of this paper I want to consider the possibility of directly analyzing code as a text. The idea is to consciously adopt legality as an interpretative or hermeneutic position, from which to consider code and the tools and practices of its production. The goal is to explore the extent to which the requirements of legality are met at code’s lowest level, prior to its performance.
Before we can begin, however, there is an important question to be asked in relation to interpretation of code: who is the interpreter? If the meaning of a text is supplied not solely by its author, but is construed through its appropriation by an interpreter, we might ask who — or what — it is that appropriates the code-text.
3.1 Code’s Bi-directionality: A Text for both Computers and Humans
The text of code points in two directions: it is both a set of instructions for the computer to execute, and a document that describes what the code will do. This dual character separates it from most other texts, in degree if not in category; while legal texts are also performative, their performativity is temporally inverted relative to that of code.
In code of any complexity, the documentary function becomes crucial for understanding what the system does or is intended to do, especially across time and space and where more than one programmer contributes to its development. The form of the code’s text will to a great extent govern its documentary function, and this in turn is influenced by the vocabulary and grammar of the underlying programming language.
The facilitation of human understanding works on two levels. First, there are the linguistic signs used in the programming language itself, where the vocabulary representing elements in the code is straightforwardly intelligible to the reader (for example, the command ‘print’, or an array of three elements represented as [a, b, c]). The language might even be explicitly designed to prioritize human understanding, as in the case of the Literate Programming paradigm, discussed further below.
At the second level of intelligibility, there is ad hoc documentation in the form of non-executable comments interspersed throughout the executable text. These are often used to explain what particular sections of code are intended to do, why the programmer chose to adopt a particular approach instead of some alternative, or to label sections of code that are incomplete or have been temporarily ‘hacked together’. Such comments are flexible in terms of their documentary function, since they have no necessary connection with the performance of the code. In the comments, the programmer can choose to describe her code in as much or as little detail as she wishes, without this in any way affecting how it will behave when executed.
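The independence of comments from execution can be seen in a short, deliberately contrived sketch, in which the comment misdescribes what the code actually does (the tax rates and function name are invented):

```python
# Sketch of the documentary/performative split: the comment inside the
# function is free-text documentation with no effect on execution, and
# nothing forces it to remain accurate as the code changes. The rates
# here are invented for illustration.

def vat(price: float) -> float:
    # Apply VAT at 17.5%
    # (NB: this comment is stale -- the executable text below actually
    # applies 20%, and the machine runs it regardless of the comment.)
    return price * 1.20
```

The machine ‘reads’ only the executable text; the human reader, relying on the documentary layer, may be told a quite different story.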
On the other side of the intelligibility coin, it is the machine that acts as interpreter. Here, the ‘meaning’ of the text is created by the compiler of the source code, and the CPU that executes it. They are the ‘readers’ of the code. Compilers are themselves software artefacts, constrained by the grammar of a particular programming language (more on such grammars below). Thus, the ‘autonomy of the (code) text’ is at the same time extreme and fundamentally constrained. It is extreme because it is a performative par excellence: its constitutive nature creates profound effects in the world far removed from the geographical or temporal control of the author. However, that autonomy is simultaneously constrained, because its ruleishness delimits its performance absolutely.
For some, the meaning of code lies only in this performative role, but this can only be true if we delegate interpretation of the text to the compiler whilst simultaneously limiting the interpretive role of humans to the observable effects of the code’s execution. While those effects are indeed where computational legalism can be witnessed, they are not where it can be challenged. The documentary purpose of code can give us insight into what those effects are likely to be before they are let loose in the world. The salient difference for computational legalism between the interpreter (human or compiler/CPU) and the object of interpretation (text or effects) thus lies in the temporality of the interpretative practice: in its documentary role, prior to execution, the interpreter of the code is a human. This happens before the point of compilation and closure, and is necessarily ex ante. On the other hand, in its performative role the interpreter of the text is the compiler whose interpreted output is executed by a CPU; the human becomes an ex post interpreter of effects, rather than of the text. The latter is closed off, and so is the possibility of it being re-interpreted and changed.
3.2 Code as a Speech Act
Looked at this way, code could be said to be a kind of speech act: it is a written text with latent performativity that creates a new state of affairs in the world when it is executed. Unlike textual laws, however, the consensual threshold for execution is much lower, and in many cases non-existent; as mentioned above, once execution has been initiated, there is little in the way of scope for mitigatory interpretation or collective agreement not to recognize or act upon the ‘performative’. This up-ends law’s scheme of performativity, which is built upon institutive rules that require the recognition of a community in order to have any practical effect in the world.
In terms of Austin’s theory of speech acts, writing code could then be viewed as perlocutionary, which is to say its performance and the effects that flow from it take place sometime later than the moment of utterance. Perlocutionary speech acts can be contrasted with both constative utterances (true or false statements about the world that have no performative effect) and illocutionary speech acts, which have some effect as they are performed. The classic example of the latter is the utterance ‘I now pronounce you man and wife’, which does not describe a marriage but brings it into being. What Austin calls the ‘happiness’, or effectiveness, of such an utterance is contingent on the requirements of a ‘conventional procedure’ being met at the point it is made. For example, saying ‘I now pronounce you [husband/wife] and [husband/wife]’ will have no effect if the speaker is not a legally-authorized officiant, or if any of the other conditions of marriage are not met (majority and consent of the parties, etc.).
Viewed in terms of computational legalism, the ‘happiness’ of a code utterance does not depend on anything other than the syntactic validity of the statement, as defined by the programming language. Code that meets the ‘grammatical’ requirements of that language will execute whenever its required conditions are met, regardless of the performance that was intended by the programmer (hence the constant need to ‘debug’ software – to fix failed, unwanted, or unexpected performance). The relationship between what was written and what was understood to have been written is especially problematic because of this mindless performance, especially when multiplied by the interplay of different codes within and between systems. This is what makes the legalism of code such a concern — the threshold of performativity is syntactic validity and nothing more; outside of a hardware failure the machine will not stop to consider whether or not performance is appropriate or matches the intention of the programmer.
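A classic hypothetical illustration: the condition below is syntactically valid Python and will happily execute, but it does not mean what a programmer would likely intend, because the second operand of ‘or’ is always truthy:

```python
# Sketch of 'happiness' as mere syntactic validity. The intended rule
# was presumably "the code is 404 or the code is 500", but Python parses
# the condition as "(code == 404) or 500", and the bare integer 500 is
# always truthy -- so every status counts as an error. The function
# name and status codes are illustrative.

def is_error(code: int) -> bool:
    return bool(code == 404 or 500)  # valid syntax, wrong performance
```

The utterance is perfectly ‘happy’ in the computational sense: it parses, it runs, and it performs; nothing in the machine pauses to ask whether the performance matches the intention.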
Code is perlocutionary in that its consequences take place at the point of execution, which is necessarily after the ‘speech act’ of writing the programme. This may be seconds, months, or years after the moment of writing. Code is thus latently performative; the immutable utterance is ‘shipped’, full of possibility, to await execution at some unknowable time in some unknowable context. This highlights the crucial need to focus on the conditions of production. Given that the code utterance creates a set of rules that can have performative effect far beyond the foresight of the programmer, it is essential that the design of those rules is sensitive to those future possibilities. It may not be reasonable to expect programmers to see the future, but we can expect them to design their code to protect those who will be affected by it when the future arrives.
Is the notion of code performativity developed above properly congruent with speech act theory, however? That is, is it appropriate to think of code statements as utterances that perform acts we recognize in our shared social world, as opposed to mere computational steps that are executed deterministically by the machine? An answer to this lies in the pivotal distinction between performance and performativity.
3.2.1 Performance or Performativity?
The normative system of law is constituted in great part by performative speech acts. When executed according to the conventional procedure, these result in constructs that are recognized and have purchase in the legal world. Legal institutions are the ‘templates’ of those constructions, defined by positive law through rules that define how they can be created and terminated, and what the consequences are of each. Individual instances of an institution are called legal-institutional facts, for example a particular contract, marriage, or entity such as a company or university.
The practical existence of legal institutions and of legal-institutional facts depends on a shared commitment to the rule of law. One might disagree about the specifics of a given legal-institutional fact (for example by asserting that a contract is void or a marriage invalid), but mere disagreement will usually not be sufficient to extinguish the institutional fact if its conventional procedure does not provide for this. To do that, some form of adjudication will be required, for which the paradigmatic forum is, of course, the court. Judges interpret the rules and the evidence to determine whether or not the requirements of the procedure have been met. Given that courts are the primary body with authority to make such determinations, there is a kind of temporal balance between on the one hand the ex ante conditions of the conventional procedure, and on the other the ex post determination by the adjudicator that those conditions have been met.
Natural language permits flexibility in assessing whether or not a performative speech act was successful. What might appear to be an unsuccessful performative, for example where the exact requirements of the conventional procedure have not been met, can in principle be remedied by a court where there is sufficient reason to do so. The apparently unsuccessful speech act is thus rendered successful by the court, which looks beyond the bare text of the rule to ‘find’ that the institutional fact does exist (and perhaps always did). This will usually be achieved on some principled basis, weighing up the function of the legal institution, and of the law more generally, alongside the evidence of what actually went on. Sometimes the failure to perfectly follow the procedure is outweighed by the value of recognising the institutional fact.
This kind of flexibility would not be acceptable under strong legalism, and even less so under ‘rule by code’, which in an analogous circumstance would be unwavering in its execution of the rule as laid down. The three elements of ruleishness described above mean that the notion of a code performative being unsuccessful is absolute. It is questionable, therefore, whether it can ever be appropriate to ‘outsource’ the creation of legal-institutional facts to code, without first thinking deeply about what the reflexive consequences would be for the nature of law.
Legal institutions are indeed predefined, at least in the sense that the creation of an institutional fact relies on a conventional procedure being followed. This might trick us into thinking that because there are specifications for their creation, we can simply automate the relevant speech act. Superficially, the legal condition maps easily onto the ‘if this, then that’ structures so common in code. But that is not all that matters, unless we are willing to adopt a normative position that computational legalism is a desirable thing. “[T]o be able to tell the rules of chess is not to know chess”; any move in this direction will require deep sensitivity to the differences between the constitutive nature of code’s normativity and the legal effect of textual normativity, the latter being inherently and productively limited in the extent to which it can direct our actions. The performance of the designer’s utterance, written in the language of code, can have far more immediate constitutive force than a speech act in the legal domain can. Such code performatives do not create institutional facts, but rather constellations of brute fact. These might appear similar or even identical to legal-institutional facts, but in fact they have very little to do with them. Even where the vocabulary in the code-text is intelligible, using verbs like ‘print’ that make some intuitive sense to non-programmers, they mean something very different to their respective readers, i.e. the human on the one hand and the compiler/CPU on the other.
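The superficial mapping can be sketched in Python (a deliberately crude, hypothetical illustration; every name is invented and no real legal system is modelled):

```python
# A legal condition flattened into an 'if this, then that' rule.
# Nothing here is an institutional fact; only a brute fact is flipped
# inside the machine, with no role for interpretation or adjudication.
def form_contract(offer, acceptance, consideration):
    if offer and acceptance and consideration:
        return {"contract": True}
    return {"contract": False}

# Where a court might yet 'find' a valid contract on principled grounds,
# the code admits no ex post remedy: the condition fails, full stop.
print(form_contract(offer=True, acceptance=True, consideration=False))  # {'contract': False}
```

The dictionary returned here might look like a contract to an observer of the system's outputs, but it is a constellation of brute fact, not a legal-institutional one.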
As suggested above, legal-institutional facts combine the ex ante and the ex post; they are constructs recognized within the broader interpretative context of the rule of law. This context is in no way a prerequisite for the constitutive and regulative performance of code, however; its execution has no necessary role for democracy or the citizen. Whereas in the legal domain the rule flows from the legislature to the citizen who interprets and follows it, in the computational domain the programmer ‘sends’ her code to the compiler/CPU, which bypasses the role of the citizen/user in its imposition of the rule upon her.
The author, text, reader, and effect thus play very different normative roles in the two domains. Because of this, code-created ‘institutions’ ought not to be considered isomorphic with any notional counterpart from the legal domain, at least not without an acute awareness of how this will change the nature of legal institutionality. This will necessarily involve consideration not just of the performance/performativity distinction, but also of the other quantitative characteristics of code’s ‘legalism’ that amplify this central concern. The shared social world in which legal institutions exist is bound up in the ‘slowness’ of text, which means that, all things being equal, those institutions can by default be contested.
Even if we assume it is not a category error to attempt to render legal institutions and institutional facts computationally, their nature would be profoundly altered by the ontological characteristics of the code medium, and in particular its immediacy, its immutability at execution time, and its pervasiveness. The role played by programmers in determining the scale and character of that reshaping would be vast, raising many questions about the democratic legitimacy of those making the relevant design choices. The legitimate use of computation in this context will require a set of design practices, and perhaps a programming language, that admits of the flexibility that is necessary to avoid legalism and the collapsing of law’s inherent adaptability. This is a theme I return to below.
3.3 Code as a Document
We have seen above that code’s ‘performativity’ is latent. This means that in order to ensure that its eventual execution avoids the pitfalls of computational legalism, we must consider its design up front. With code, anticipating the future effects of performance requires direct engagement with the text at the stage of programming. By shifting focus to this ex ante point, we can ask questions that cannot be answered by looking only at the effects of execution ex post. We might ask, for example, why a particular programming language was used, which third-party code libraries were incorporated, and why a particular method was used to achieve an output or effect. These in turn suggest further questions about the broader context of production, for example how commercial platform power results in certain languages and tools gaining prominence in programming education, and thereafter in industry practice.
By critically engaging with the text of the code rather than just the effects we are able to observe, we can open up a matrix of points of interpretation — ex ante/ex post, text/effects, human/compiler. Looked at this way, we follow Ricoeur in looking beyond the world that is presupposed by the author of the code-text. This, for him, would constitute a naïve “hermeneutics of faith”, which while necessary as a first step in interpreting a text is insufficient without a further phase of “suspicious” engagement with it. Under the hermeneutics of faith, the text is taken at face-value, and the meaning that the author apparently intends is accepted as such. It does not go further to interrogate the conditions of the text’s production or the assumptions made about the world that it helps create — the reader is passive, taking a “vow of obedience”.
Here there is a connection back to strong legalism: recall how the rules simply ‘are’, the intent of the legislator is deemed to be contained in the text and the political motivations that lie behind the rules are veiled from the subject — she must simply obey. In the context of code, this view is reflected in analyses focused only on the effects of execution. While such effects can be and frequently are subjected to critique in their own right, they take the underlying text as a given. This is necessarily so because, as we have seen, at point of execution there is no alternative — the ‘closure’ of the code-text has already taken place.
In contrast to the hermeneutics of faith, under the hermeneutics of suspicion the reader is rigorous in uncovering “relationships of power, conflicts, and interests implicated in [the text].” This second phase of interpretation engages at a deeper level, lifting the veil to uncover the political interests reflected, even unconsciously, in the text. To interpret code in this way we must look directly at the text, not just at its effects, many of which will not be apparent to us, either because they are hidden from view or because the conditions of their performance have not yet been met and so the effects are yet to materialize.
Expounding this network of meaning becomes critically important in light of the bricolage nature of most code. If we want to design code to mitigate computational legalism, we must, by definition, scrutinize it before it has passed the point of compilation. We adopt the position of an engaged and suspicious reader of the text, rather than just a passive observer of its effects. The goal is to have an ex ante impact on the execution that takes place after closure has happened.
Focusing on the production of the text in this way up-ends Heidegger’s notion of a technology being ready-to-hand (that is, subsumed within a practice such that it recedes from perception) versus present-at-hand (interrupting our practice and intruding into our attention). The latter usually comes after the former, when the artefact breaks down. Here, the idea is not to passively observe the code’s effects, waiting for an anomaly to intrude on our attention. Instead, we proactively interpret the code as it is being produced and is still in a “state of non-functioning”, in order to anticipate its nature after it is compiled and embeds itself in our experiential world as ready-to-hand.
The broader consequences of this for the legal domain are something I will return to below, but for now we can consider a very simple example that engages directly with the text of code and demonstrates some aspects of its bi-directionality.
3.3.1 Hello World!
Showing the phrase ‘Hello World!’ onscreen is traditionally the first step in getting to grips with a new programming language and its workflow. It is generally among the simplest of tasks in any given language, acting as a simple test of whether everything is properly set up and ready for more complex applications to be written.
Consider the following implementation in Python, a language commonly used in machine learning applications:
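In its canonical form this is a single statement:

```python
print("Hello World!")
```

An equivalent in JavaScript is `alert('Hello World!');`, which displays the phrase in a dialog box rather than printing it to the console.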
Even without specialist knowledge of these languages, we can intuitively understand what this code will do when executed. The verbs ‘print’ and ‘alert’ are easily intelligible. The documentary function of the text is quite explicit here, even in such simple examples.
But take another example, written in the ‘esoteric’ language Brainfuck:
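One widely circulated version, reproduced here for illustration (the language is esoteric by design), prints the same phrase:

```brainfuck
++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.
```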
While statements expressed in different programming languages may be for all performative purposes identical, the documentary function — the communication of meaning beyond performance — can vary significantly. Outside of esoteric languages like Brainfuck, which aim to be unintelligible (the clue is in the name), this communicative function is an affordance of the language in question, contingent on the decisions made by its designers. I will return to the theme of language design in the Section on grammar versus use below, but for now we have a hint at the constitutive nature of languages vis-à-vis the rules that can be written in them. When these are combined with the tools and common practices of code production, they affect the nature of the code artefacts that emerge from those processes.
3.4 Producing Code Rules
We have seen that code is simultaneously documentary and performative, with both these capacities being fundamentally shaped by the programming language that is used. The design of the language constitutes a meta-frame for the practice of code’s production, within which the programmer of code is in a sense merely a user, herself ‘programmed’ by pre-existing conditions of production. The designer of the programming language — and indeed of other elements such as user interfaces, standardized libraries of off-the-shelf code, operating systems, and the hardware itself — thus wields significant power over the programmer of the ultimate product. Vismann and Krajewski conceive of this role as the ‘programmer of the programmer’:
The programmer of the programmer, designing the tools and methods of a coding language (such as the compiler, code syntax, abstract data types, and so on) maintains the ultimate power because he or she, as the constructor of the programming language itself, defines what the “normal” programmer, as a user, will be able to do. Both types of programmers establish the conditions for using the computer, and, as such, they behave like lawmakers or, rather, code-makers.
The programmer of the programmer (‘PoP’) is not a single person or platform, but can be interpreted to mean the conditions of possibility that govern what the ‘production programmer’ (i.e. the creator of an artefact’s code) can produce. This framework is partly constitutive of the outputs of the production process, and when the guiding force it provides is itself designed with a normative end in mind, one can think of it as in a sense ‘constitutional’. The PoP, viewed as the collection of tools and practices that pre-configure code production, can have an impact on the characteristics of what is produced from within its framing.
3.4.1 Programming Languages: Grammar versus Use
Programming languages, designed by the PoP, have interesting properties as compared with human languages. The grammar of a natural human language, its langue, is a crystallization of existing usage, or parole. The speakers of a language participate in the evolution of that grammar, however unconscious and marginal that participation might be. The langue will change and adapt over time, as the speakers’ mores and habits are reflected in the parole; what was once ‘bad’ grammar becomes tolerated and then accepted, with communication in the meantime going on unabated. The grammar therefore develops ex post, as a result of the ever-changing uses of the language in practice.
Programming languages by contrast are profoundly ex ante, up-ending this arrangement of constantly evolving meaning. In computation, there is generally no input to the langue from those who ‘speak’ it, which is to say programmers. The langue is determined in advance by the PoP, and there can be no ex post adaptation of it in favour of a shifting parole: as we saw above in relation to performatives, any particular computational ‘parole’ (i.e. the code of some artefact) that fails to meet the precise requirements of the predetermined computational ‘langue’ is entirely ineffectual, failing to compile and execute. The successful performance of the code ‘speech act’ thus depends entirely on its precisely following the predetermined strictures of the computational langue. The line is very thin indeed between perfect execution of the text as written, regardless of what was intended, and the categorical ‘unhappiness’ of the ‘performative’.
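A minimal Python sketch makes the point: a single missing character separates perfect execution from categorical failure, with no intermediate possibility.

```python
# One character separates a 'happy' performative from a categorically
# failed one; there is no partial or approximate execution in between.
valid = 'print("Hello World!")'
invalid = 'print("Hello World!"'   # a single missing parenthesis

compile(valid, "<utterance>", "exec")        # accepted: will run exactly as written
try:
    compile(invalid, "<utterance>", "exec")  # rejected outright, with no effect at all
except SyntaxError:
    print("unhappy performative")
```

No compiler ‘court’ exists to look beyond the bare text and remedy the near-miss; the langue admits only perfect conformity.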
3.4.2 The Linguistic Relativity of Programming Languages
This computational ‘grammar’, along with the design conventions associated with a particular programming language, is especially responsible for framing the solution to a given task. Nearly all modern languages are Turing complete, meaning they can perform the same set of atomic calculations, which in turn are the building blocks of general-purpose computation. Despite this, some languages are designed for, or are especially appropriate for performing, particular kinds of computation. From the programmer’s perspective, this is due to the amount and forms of abstraction they provide, for example by including predetermined functions for achieving particular computational goals in a single step, without the need for bespoke coding. These functions are abstractions integrated into the core grammar of the language, somewhat like idioms in natural language that formalize particular (performative) meanings in a single ‘phrase’ or way of speaking.
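A small Python sketch illustrates the contrast (the example is mine, chosen only for familiarity): the built-in `sorted` is a predetermined ‘idiom’ of the langue, where a language lacking it would demand a bespoke construction of the same computation.

```python
data = [3, 1, 2]

# The idiomatic single 'phrase' the language designers chose to provide:
print(sorted(data))  # [1, 2, 3]

# The bespoke equivalent required where no such idiom exists
# (a deliberately plain insertion sort, for illustration only):
result = []
for x in data:
    i = 0
    while i < len(result) and result[i] < x:
        i += 1
    result.insert(i, x)
print(result)  # [1, 2, 3]
```

Both routes are performatively identical; what differs is how the langue frames the programmer's conception of the task.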
Setting aside what might make a particular language more fashionable at any given moment in programming history, the choice to use one language over another can in principle be tied to the abstractions that that language provides, and how these ‘fit’ the programmer’s mental model of the problem at hand and how best to solve it. The question of what constitutes the ‘best’ solution will of course be contested, but the suggestion here is that the best solution is one that mitigates computational legalism. Programming languages are, as Graham puts it, “not just technologies, but habits of mind”. These habits develop over time, based in part on the kinds of problem being solved. Given that a programmer will usually become ‘fluent’ in only a handful of languages, she will begin to see the problems she is charged with solving through the lens of those languages, with the affordances of their syntax, functions, and data structures coming to frame how she conceives her solutions.
This representational idea mirrors the notion that the natural language we use affects our interpretation of the world. Our mother tongues frame our experiences and understanding, their vocabulary and grammatical structures forming a lens through which our worlds are refracted, culture shaping language and vice versa. As Sapir put it,
[h]uman beings do not live in the objective world alone[...] but are very much at the mercy of the particular language which has become the medium of expression for their society… the ‘real world’ is to a large extent unconsciously built up on the language habits of the group.
Although this is referring to natural language, one can claim something similar of programming languages. The ultimate design of a given code artefact will be affected to some degree by the data structures that the programming language provides (e.g. arrays, data frames, matrices). Similarly, the rules that process the artefact’s inputs and outputs will be affected by the grammar and functions that the language provides — the ideal function might not be available, but the programmer ‘makes do’ by working with what the language makes available. Even at the level of the vocabulary used by a programming language, it is interesting to note that the majority use English verbs and nouns, such as the ‘print’ and ‘alert’ we saw above. This in itself creates a linguistic framing effect separate from, and in addition to, the other features of the programming language, particularly for those programmers for whom English is a foreign language.
Chen notes that once the practice of writing code has begun, the “boilerplate and design patterns” of a given programming language are internalized as “unconscious and automatic idioms”, ready to be “regurgitated on demand”. In a study of the standard ‘split, apply, combine’ task in data science, Chen illuminates the notion of linguistic relativity as between the R, MATLAB, APL, and Julia languages. Where some languages have relatively constrained data structures and ‘habits’, others are more flexible in their generative possibilities. But whatever the language used, the programmer is situated within a set of technical and cultural practices associated with it. These will to a greater or lesser degree guide her understanding of the challenge she is tasked with solving. Weizenbaum emphasizes this power of programming languages to structure understanding:
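The task Chen studies can itself be sketched in plain Python using only built-in structures (the data and names are invented here):

```python
# Invented data: (group key, value) pairs.
records = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]

# Split: gather values by key.
groups = {}
for key, value in records:
    groups.setdefault(key, []).append(value)

# Apply and combine: reduce each group to a summary value.
totals = {key: sum(values) for key, values in groups.items()}
print(totals)  # {'a': 4, 'b': 6}
```

In a language whose langue provides a dedicated grouping idiom, the same task might be a single expression; each language thus lays down a different path of least resistance through the problem.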
…to a person who has only a hammer, the whole world looks like a nail. A [programming] language is also a tool, and it also, indeed, very strongly, shapes what the world looks like to its user… these frameworks cease to serve as mere modes of description and become, like Maslow’s hammer, determinants of their view of the world. The design of a public language, then, is a serious task, one pregnant with consequences and thus laden with extraordinarily heavy responsibility.
There is thus no ‘view from nowhere’ in producing code; the programmer’s perspective is framed from the outset by the tools available to conceptualize and to solve the problem, namely the languages she is familiar with, their attendant functions, data structures, associated libraries, and design patterns, all of which are in place before a single character is typed.
4. The Rule(s) of Code versus the Rule of Law
We saw in the first Sections of this article how the ruleishness of code militates against alternative interpretative possibilities, at least with respect to execution. This is amplified by the other ontological characteristics of code to create a kind of legalism profoundly at odds with the general claim that systems that constrain and enable our behaviour ought to afford at least some basic capacity for contest. The enforcement of rules by ‘legalistic’ digital systems is antithetical to a normative conception of law as, at least in part, an interpretative exercise. This observation provides us with a normative stance from which to interpret code, one that can inform future guidance of its production. We might mitigate computational legalism by interpreting both the text of code per se, and the tools and processes of its production, thereby reflecting the values of legality and the rule of law in the code that is ultimately compiled.
Nothing in code is a given, and so the legalism of code is also not an inevitability, even if its ontological characteristics might tend in that direction. Just as individual systems can be designed differently, so too can the processes, tools, and languages from which they emerge.
In the context of programming languages, we might envisage a design whose vocabulary, grammar, and design patterns — i.e. langue — steer toward a particular set of values in the ‘parole’ of the code that is written using it. Some proposals already exist for such value-driven languages, aimed at reflecting for example a feminist perspective or the particularities of non-Western, non-English-speaking cultures, as is the case in ethnoprogramming. An example of this is قلب (‘heart’), which poses a vivid challenge to the Anglo-centricity of contemporary programming languages and practice.
From the perspective of code’s bi-directionality, it is possible to intertwine closely the documentary and performative roles of code. This is the goal of the Literate Programming paradigm and its WEB language, which tightly weaves together executable code and commentary in a single file. Literate Programming stems from the notion that programming ought to reflect human ways of thinking, such that the culture shapes the programming language more so than the converse. This in turn can provide us with an alternative, potentially normative angle on the linguistic relativity of programming languages. This aim is reflected in the design of business-oriented languages such as COBOL and its ancestor FLOW-MATIC, as well as LOGO, which aims to reflect the programmer’s embodied perception, and Inform 7, which uses entirely natural language sentences for its expressions. The ends of a given language are expressed in its vocabulary and grammar; these could conceivably reflect the broad aims of legality and the rule of law. And even if the underlying language still facilitates the building of legalistic structures (as invariably it will, if it is Turing complete), the integrated development environment (IDE) might provide hints that tend toward a new form of ‘best practice’ that can avoid them. If we bear in mind that of course the IDE is itself designed, we can imagine its own design mandating this kind of process, the PoP entering as a meta-guide to the writing of product code and opening up a hermeneutic in the initial steps of creating a new project. Documenting the text might also be required by the IDE, compilation being prevented before comments have been added.
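The last suggestion can be sketched in Python: a hypothetical toolchain check (every name here is invented) that refuses to ‘compile’ a source text carrying no commentary at all.

```python
# A hypothetical gatekeeper: compilation is refused until the source text
# performs its documentary function as well as its executable one.
def compile_if_documented(source):
    if not any(line.strip().startswith("#") for line in source.splitlines()):
        raise ValueError("refusing to compile: no commentary present")
    return compile(source, "<artefact>", "exec")

try:
    compile_if_documented("x = 1")                     # undocumented: rejected
except ValueError as err:
    print(err)

compile_if_documented("# opening balance\nx = 1")      # documented: accepted
```

A real implementation would of course need a richer notion of ‘documentation’ than the mere presence of a comment, but the sketch shows how the PoP could fold a hermeneutic obligation into the mechanics of production itself.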
These are just a few examples of how computational legalism might be mitigated via an interrogation of the text of code. By making changes at the ‘constitutional’ level of the design process, it is possible to alter the texts that are produced there, and therefore in turn the effects of the resulting performance of the code. Cultivating such normative ideals for code interpretation could result in the production of code that avoids computational legalism in its many troubling manifestations.
This contribution has aimed to do two things. Firstly, to set out the ways in which code manifests the characteristics of legalism, to a degree that far outstrips their manifestation in the legal domain. Code’s ruleishness is particularly problematic, given its resistance to interpretation and contest by the user. That leads, secondly, to the suggestion that we might fruitfully consider the code as a text, before it is executed, through an adapted practice of textual interpretation that seeks to mitigate computational legalism.
In constitutional democracies we tend to want to resist the blanket heteronomy of rules passed down from sovereign legislators. Not only do we require that such rules exhibit certain basic legitimising characteristics, but we also aim for procedural safeguards that allow us to interpret and contest them in practice. We should take the same stance against producers of code that directly constitutes, limits, and structures so much of our environment, behaviour, and actions. One promising way to do this is to critically interrogate its text, during the process of production, from the perspective of legality and the rule of law.
‘About’ (Inform 7) <http://inform7.com/about> accessed 11 June 2021
‘Algorithm, n.’ <http://www.oed.com/view/Entry/4959> accessed 11 June 2021
Arendt H, The Human Condition (2nd ed, Chicago: University of Chicago Press, 1998)
Arns I, ‘Code as Performative Speech Act’ (2005) 4 Artnodes
Austin JL, How to Do Things with Words (Oxford: Oxford University Press, 1962)
Austin JL, ‘Performative Utterances’ in J.O. Urmson and G.J. Warnock (eds.), Philosophical Papers (Oxford University Press, 1979)
Bańkowski Z, ‘Don’t Think About It: Legalism and Legality’ in M.M. Karlsson, Ó. Páll Jónsson and E.M. Brynjarsdóttir (eds.), Rechtstheorie: Zeitschrift für Logik, Methodenlehre, Kybernetik und Soziologie des Rechts (Berlin: Duncker & Humblot, 1993)
Bańkowski Z and N MacCormick, ‘Legality without Legalism’ in W. Krawietz et al. (eds.), The Reasonable as Rational? On Legal Argumentation and Justification; Festschrift for Aulis Aarnio (Berlin: Duncker & Humblot, 2000)
Bańkowski Z and B Schafer, ‘Double-Click Justice: Legalism in the Computer Age’ (2007) 1 Legisprudence 31
Barthes R, ‘The Death of the Author’ in S. Heath (ed.), Image — Music — Text (London: Fontana, 1977)
Binns R, ‘Analogies and Disanalogies between Machine-Driven and Human-Driven Legal Judgement’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law
Bowker GC and SL Star, Sorting Things Out: Classification and Its Consequences (Cambridge, Mass; London, UK: The MIT Press, 2000)
Brownsword R, ‘Lost in Translation: Legality, Regulatory Margins, and Technological Management’ (2011) 26 Berkeley Technology Law Journal 1321
Calo R, ‘Can Americans Resist Surveillance?’ (2016) 83 The University of Chicago Law Review 23
Chen J, ‘Linguistic Relativity and Programming Languages’ arXiv:1808.03916 [cs, stat] <http://arxiv.org/abs/1808.03916> accessed 16 April 2020
Cohen JE, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Connecticut: Yale University Press, 2012)
Danaher J, ‘The Threat of Algocracy: Reality, Resistance and Accommodation’ (2016) 29 Philosophy & Technology 245
Dewitz S, ‘Using Information Technology as a Determiner of Legal Facts’ in Z. Bankowski, I. White and U. Hahn (eds.), Informatics and the Foundations of Legal Reasoning, vol. 21 (Dordrecht: Springer Netherlands, 1995)
Diakopoulos N, ‘Algorithmic Accountability: Journalistic Investigation of Computational Power Structures’ (2015) 3 Digital Journalism 398
Diver LE, ‘Law as a User: Design, Affordance, and the Technological Mediation of Norms’ (2018) 15 SCRIPTed 4
——, ‘Computational Legalism and the Affordance of Delay in Law’ (2021) 1(1) Journal of Cross-disciplinary Research in Computational Law
——, Digisprudence: Code as Law Rebooted (Edinburgh: Edinburgh University Press, 2022)
Donaghy M, Leachim6/Hello-World (Github, 2020) <https://github.com/leachim6/hello-world> accessed 11 June 2021
Dworkin R, Law’s Empire (Cambridge, Mass: Belknap Press, 1986)
Estrin T, ‘Women’s Studies and Computer Science: Their Intersection’ (1996) 18 IEEE Annals of the History of Computing 43
Everett D, ‘Cultural Constraints on Grammar and Cognition in Pirahã: Another Look at the Design Features of Human Language’ (2005) 46 Current anthropology 621
Floridi L (ed), The Onlife Manifesto (Cham: Springer International Publishing, 2015)
Fuller LL, The Morality of Law (Yale University Press, 1977)
Gadamer H-G, Truth and Method (Joel Weinsheimer and Donald G Marshall trs, London: Bloomsbury, 2013)
Galloway AR, Protocol: How Control Exists after Decentralization (Cambridge, Mass: MIT Press, 2004)
Gillespie T, ‘The Relevance of Algorithms’ (2014) 167 Media technologies: Essays on communication, materiality, and society
Goldoni M, ‘The Politics of Code as Law: Toward Input Reasons’ in J. Reichel and A.S. Lind (eds.), Freedom of Expression, the Internet and Democracy (Leiden: Brill, 2015)
Golumbia D, The Cultural Logic of Computation (Cambridge, Mass: Harvard University Press, 2009)
Graham P, ‘Beating the Averages’ (2003) <http://www.paulgraham.com/avg.html> accessed 11 June 2021
Grimmelmann J, ‘Regulation by Software’ (2005) 114 The Yale Law Journal 1719
Gürses S and J van Hoboken, ‘Privacy after the Agile Turn’ in E. Selinger, J. Polonetsky and O. Tene (eds.), The Cambridge Handbook of Consumer Privacy (1st edn., Cambridge University Press, 2018)
Hart HLA, The Concept of Law (2nd edn., Oxford: Clarendon Press, 1994)
Hayles NK, ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis’ (2004) 25 Poetics Today 67
Heidegger M, Being and Time (John Macquarrie and Edward Robinson trs, Oxford: Blackwell, 1962)
Hildebrandt M, ‘Legal and Technological Normativity: More (and Less) than Twin Sisters’ (2008) 12 Techné: Research in Philosophy and Technology 169
——, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (London: Edward Elgar Publishing, 2015)
——, ‘Law As an Affordance: The Devil Is in the Vanishing Point(s)’ (2017) 4 Critical Analysis of Law 116
——, Law for Computer Scientists and Other Folk (Oxford, New York: Oxford University Press, 2020)
——, ‘The Adaptive Nature of Text-Driven Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law
Hoeren T and S Pinelli, ‘Agile Programming – Introduction and Current Legal Challenges’ (2018) 34 Computer Law & Security Review 1131
Ihde D, Technology and the Lifeworld: From Garden to Earth (Indiana University Press, 1990)
‘Introducing Visual Studio IntelliCode’ <https://devblogs.microsoft.com/visualstudio/introducing-visual-studio-intellicode/> accessed 11 June 2021
Introna LD, ‘Hermeneutics and Meaning-Making in Information Systems’ in R.D. Galliers and W.L. Currie (eds.), The Oxford Handbook of Management Information Systems: Critical Perspectives and New Directions (Oxford University Press, 2011)
——, ‘The Enframing of Code: Agency, Originality and the Plagiarist’ (2011) 28 Theory, Culture & Society 113
Jeffery M, ‘What Would an Integrated Development Environment for Law Look Like?’ MIT Computational Law Report <https://law.mit.edu/pub/whatwouldanintegrateddevelopmentenvironmentforlawlooklike/release/2> accessed 11 June 2021
Kerr I, ‘The Devil Is in the Defaults’ (2017) 4 Critical Analysis of Law
Kesan JP and RC Shah, ‘Setting Software Defaults: Perspectives from Law, Computer Science and Behavioral Economics’ (2006) 82 Notre Dame Law Review 583
Kittler FA, ‘Protected Mode’ in J. Johnston (ed.), S. Harris (tr.), Literature, Media, Information Systems: Essays (Psychology Press, 1997)
Knuth DE, ‘Literate Programming’ (1984) 27 The Computer Journal 97
Krajewski M, ‘Against the Power of Algorithms Closing, Literate Programming, and Source Code Critique’ (2019) 23 Law Text Culture 119
La Torre M, ‘Reform and Tradition: Changes and Continuities in Neil MacCormick’s Concept of Law’ in A.J. Menéndez and J.E. Fossum (eds.), Law and Democracy in Neil MacCormick’s Legal and Political Theory: The Post-Sovereign Constellation (Dordrecht: Springer Netherlands, 2011)
Laiti O, ‘The Ethnoprogramming Model’, Proceedings of the 16th Koli Calling International Conference on Computing Education Research (Koli, Finland: Association for Computing Machinery, 2016)
Levi-Strauss C, The Savage Mind (University of Chicago Press, 1966)
Ma M, ‘Writing in Sign: Code as the Next Contract Language? ✍️ ⏭ 💻’  MIT Computational Law Report <https://law.mit.edu/pub/writinginsign/release/1> accessed 11 June 2021
MacCormick N, Law as Institutional Fact (Edinburgh: University of Edinburgh, 1973)
——, ‘Reconstruction after Deconstruction: A Response to CLS’ (1990) 10 Oxford Journal of Legal Studies 539
——, Institutions of Law: An Essay in Legal Theory (New York: Oxford University Press, 2007)
Marino MC, ‘Critical Code Studies’ (2006) electronic book review
——, Critical Code Studies (Cambridge, Massachusetts: The MIT Press, 2020)
McQuillan D, ‘Data Science as Machinic Neoplatonism’ (2018) 31 Philosophy & Technology 253
Nasser R, Nasser/--- (GitHub, 2020) <https://github.com/nasser/---> accessed 11 June 2021
Neff G and DC Stark, ‘Permanently Beta: Responsive Organization in the Internet Era’ (Columbia University Institute For Social And Economic Research And Policy, 2002)
Ong WJ, Orality and Literacy: The Technologizing of the Word (3rd edn., London: Routledge, 2012)
Papert S, ‘Different Visions of Logo’ (1985) 2 Computers in the Schools 3
Pasquale F, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge: Harvard University Press, 2015)
Polack P, ‘Beyond Algorithmic Reformism: Forward Engineering the Designs of Algorithmic Systems’ (2020) 7 Big Data & Society 205395172091306
Radbruch G, ‘Legal Philosophy’ in K. Wilk (ed.), The Legal Philosophies of Lask, Radbruch, and Dabin (Cambridge, MA and London, England: Harvard University Press, 1950)
Ricoeur P, Interpretation Theory: Discourse and the Surplus of Meaning (TCU Press, 1976)
——, Freud and Philosophy: An Essay on Interpretation (Denis Savage tr, Yale University Press, 1977)
Sapir E, ‘The Status of Linguistics as a Science’ (1929) 5 Language 207
Schlesinger A, ‘Feminism and Programming Languages’ <https://www.hastac.org/blogs/ari-schlesinger/2013/11/26/feminism-and-programming-languages> accessed 11 June 2021
Schulz W and K Dankert, ‘“Governance by Things” as a Challenge to Regulation by Law’ (2016) 5 Internet Policy Review
Searle JR, The Construction of Social Reality (New York: Free Press, 1995)
Shklar JN, Legalism (Harvard University Press, 1964)
Swartz P, ‘A Tower of Languages’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
——, ‘How Do Programs Mean?’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
——, ‘The Poetics of Programming’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
——, ‘White Boys’ Code’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
Taylor C, ‘To Follow a Rule’, Philosophical arguments (Cambridge, MA: Harvard University Press, 1995)
Tedre M et al., ‘Ethnocomputing: ICT in Cultural and Social Context’ (2006) 49 Communications of the ACM 126
‘The Rule of Least Power’ (W3C, 2006) <https://www.w3.org/2001/tag/doc/leastPower.html> accessed 11 June 2021
van den Berg B and RE Leenes, ‘Abort, Retry, Fail: Scoping Techno-Regulation and Other Techno-Effects’ in M. Hildebrandt and J. Gaakeer (eds.), Human Law and Computer Law: Comparative Perspectives (Dordrecht: Springer Netherlands, 2013)
van Dijk N, ‘The Life and Deaths of a Dispute: An Inquiry into Matters of Law’ in K. McGee (ed.), Latour and the Passage of Law (Edinburgh University Press, 2015)
Vismann C and M Krajewski, ‘Computer Juridisms’ (2007) 29 Grey Room 90
Waldron J, ‘The Rule of Law and the Importance of Procedure’ (2011) 50 Nomos 3
Weizenbaum J, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: Freeman, 1976)
Wexelblat RL, ‘The Consequences of One’s First Programming Language’ (1981) 11 Software: Practice and Experience 733
Whorf BL, ‘Science and Linguistics’ in J.B. Carroll (ed.), Language, thought, and reality: selected writings of Benjamin Lee Whorf (28th edn., Cambridge, Mass: The MIT Press, 2007)
Wintgens L, ‘Legisprudence as a New Theory of Legislation’ (2006) 19 Ratio Juris 1
——, Legisprudence: Practical Reason in Legislation (Surrey: Routledge, 2012)
Wittgenstein L, Philosophical Investigations (GEM Anscombe tr, Oxford: Blackwell, 1968)
Header image generated with Wombo