Software code is built on rules, and the way it enforces them is analogous in certain ways to the philosophical notion of legalism, under which citizens are expected to follow legal rules without thinking too much. The ontological characteristics of code – its opacity, immutability, immediacy, pervasiveness, private production, and ‘ruleishness’ – amplify its ‘legalistic’ nature far beyond what could ever be imposed in the legal domain, however, raising significant questions about its legitimacy as a regulator. This contribution explores how we might critically engage with the text of code, rather than just the effects of its performance, in order to temper these extremes with the reflexive wisdom of legality. This means contrasting the technical performance of code with the social performativity of law, demonstrating the limits of viewing law as merely a regulative ‘modality’ that can be easily supplanted by code. The second part of the article considers code and the processes and tools of its production from the perspective of legality, drawing on theories of textual interpretation, linguistics, and critical code studies. The goal is to consider to what extent it might be possible to guide that production, in order to ameliorate an ingrained ‘legalism’ that is democratically problematic.
computational legalism, legality, code pragmatics, speech act theory, legal institutionality, critical code studies
Software code is all about rules: an algorithm is a “procedure or set of rules used in calculation and problem-solving”.1 By executing such a procedure, the computer will often subject its user – a citizen – to a further set of ‘rules’, imposed by the interface and geography of the running code. This technological normativity shapes, guides, enables, and inhibits the citizen’s possibilities for action, whether within the ‘virtual’ geography of the software system itself (for example in the rules of a video game, or the affordances of a social media application or word processor) or more broadly in the physical world (for example where a car prevents a driver from breaking the speed limit, or a library’s computerised borrowing system automatically prevents the lending of a book).2
In the legal world, uncritical deference to rule-following is sometimes characterised under the rubric of legalism. In its strongest forms, legalism becomes an ideology, according to which not only should rules be followed, but they should not be questioned or even interpreted beyond their apparent meaning. Whereas the legalistic view is a particular choice in the legal world, in the world of computation it is entirely standard. Software code cannot accommodate the kinds of productive ambiguity that natural language can, and when its ‘ruleishness’ is combined with code’s other characteristics of opacity, immutability, immediacy, pervasiveness, and private production, what results is a ‘legalism’ that is qualitatively and quantitatively far more troubling than the legal alternative.
Broadly, our thinking about code rules fits into two distinct temporal categories – their production and their execution. By focusing on the former, we can identify sites of critical engagement with the text of those rules – the source code – and the tools and methods of its creation. That text is ‘performative’, creating a new state of affairs in the world when it is executed, but it is also documentary, both describing what will happen and telling us something meaningful about the conditions of the code’s production.3 By interpreting the text of code as a text we can make it intelligible,4 and can in turn begin to think about ways to shape its production that avoid or at least minimise the slip of its normative force toward the computational legalism described above.5 The problem of code’s legalism arises not just in systems to which legal operations have been delegated (although it will have particular salience in that context), but is equally important wherever code is employed as a means of ordering human affairs. Just as we ought not to accept the ideology of legalism in the practice of law, so too should we resist its manifestations in other normative contexts, be it the rules of conduct in a social setting such as a club or college dormitory6 or the code that structures a citizen’s interactions with a smart thermostat or autonomous car.
The contributions of this article are two-fold. First, it sets out the notion of computational legalism, mapping its broad contours with an emphasis on the characteristic of ruleishness and what it means for the interpretation and enforcement of code normativity as compared with textual rules. Secondly, it approaches the phenomenon so identified from the perspectives of speech act theory and textual interpretation, drawing on literature in media studies and the philosophy of language. This enquiry raises many questions, for example in relation to seeing source code as a speech act, the ‘autonomy’ of the code-text, and the simultaneously performative and documentary roles that those texts inhabit. Can there be such a thing as a ‘governing ideal’ for the production of code rules, one that minimises code’s legalism at the level of the machine in order that the normativity it produces is acceptable in a democratic society? And to the extent that legal and code texts share characteristics – both are performative, albeit in crucially different ways; both impose rules; both are built around pre-defined constructs, normativity, and ordering – what might this tell us about the appropriate interplay between the two worlds, particularly when code is the medium through which legal norms are enforced? While one article cannot answer these questions exhaustively, I do aim to highlight the need to interpret code rules to which citizens are subject, which entails consideration of how the text of those rules is produced. The article concludes by suggesting some avenues for future thinking in this area.
Before we can consider code’s ‘legalistic’ nature, it is necessary to take a step back to understand legalism in its traditional context. Legalism is the perspective that deems rules, promulgated by an authorised sovereign, to be the proper fundaments of social ordering, and holds that in the text of the rule lies the beginning and the end of how citizens are required to behave. Legalism admits of degrees, however. In its strong, more ideological variant, citizens are expected to act in effect like automatons, minimally interpreting the rules and simply following them as handed down. There is little in the way of interpretative flexibility and the respect for individual autonomy that this implies; what is normative under weak legalism (asking citizens to follow a rule) becomes simply a command, something to be followed mindlessly.7 Law is solipsistic, sealed off from the social world it serves; it is “self-contained and autogenerative”.8 Enfranchised citizens have a political role, but once the outputs of the “dirty business” of politics have been converted into legislation, they take on the character of scientific data to be processed by the institutions and vocabulary of the legal scientist.9
Wintgens extends these characterisations in his in-depth analysis of legalism and its philosophical genesis.10 For him, strong legalism flows historically from the “conjugation” of various elements, namely the representation by rules of the ‘true’ world, their embodiment of timeless truths that are not open for debate, the concealment of the political reasons that animate a particular articulation of a rule,11 the belief that the state is the only true source of law, and the view of law as a science.12 Together, these elements form the ‘strategy’ of strong legalism, where the focus for citizens is the following of rules rather than how they came to be. Under strong legalism the sovereign’s exercise of power is de facto legitimate, and thus not open to question.13
Strong legalism contrasts with conceptions of legality that acknowledge the importance of rules and legal certainty but view these only as elements of a functioning legal system, rather than its whole. This more reflexive perspective seeks to respect individual autonomy within a framework of democratically legitimated rules that apply to both the individual and the sovereign rule-giver. Under legality, the bare expectation that citizens follow the rules – which on its own would be strongly legalistic – is thus tempered by further considerations. This can include the design of the legal rule, which is justified only if it meets minimal formal requirements, such as intelligibility, prospectivity, non-contradiction, and temporal stability, which contribute to the legitimation of the rule independently of its political merits.14 All things being equal, this in turn makes it reasonable to expect citizens to acquiesce to the resulting rule, because its ‘design’ has been legitimated in advance.15 A theme common among theories of legality then is the balancing of the bare power of rule-making against other elements. This in turn implies a continual reinvigoration of law’s nature and a ‘binding to the mast’ of the sovereign’s otherwise unfettered power, under the rubric of the rule of law.16
The analysis of legalism and its historical and political roots may seem an odd lens through which to view the regulative capacity of software code. The connection is deeper than might at first be apparent, however. Code constitutes (makes possible) and regulates (limits) different forms of behaviour, and in so doing it embodies to a great extent the ideology of strong legalism. Indeed, it both reifies that ideology and amplifies it far beyond what is imagined in the legal sphere. As Bańkowski and Schafer put it,
The alternative to legality is not anarchism, it is legalism… ‘not thinking about it’, if left to its own devices, tends to take over the entire social world, or at least cyberspace.17
This is a challenge to a cyberlibertarian ideology that implicitly prioritises the freedom of the commercial designer, rather than the citizen qua ‘user’. Even assuming that the locus of cyberspace is in fact free of law, as many a cyberlibertarian adherent has argued, what replaces law for the individual is not freedom but rather the unthinking rule-following imposed by the commercial code that makes up that ‘place’.
I am not suggesting that designers and the programmers of code necessarily harbour a legalistic ideology, however. What matters more is that code by its very nature tends toward the embodiment of strong legalism, and that this is the case regardless of the intent of the programmer, however vicious or virtuous that may be. The ontological features of code very easily facilitate the strongest of ‘legalisms’, under which the ideological ought of strong legalism (you must follow this rule without thinking too much) becomes the technological is of code (you have no choice but to follow this rule or, indeed, the very nature of your action is constituted by it from the outset such that there is no possibility to act otherwise). What was normative becomes simply descriptive.18 This on its own should be sufficient cause for concern, but when it is combined with and amplified by the other characteristics of code, the picture becomes starker. The sheer speed of execution collapses any opportunity for deliberation, an effect that is in turn amplified by code’s relative immutability, at execution time, which prevents the possibility of reinterpreting its ‘terms’.19 If we add to this the opacity of the rules it imposes (at both ‘backend’ infrastructure and ‘frontend’ interface levels20), the sheer amount of it operating around us and, of course, its production by private enterprise for profit, the simultaneous parallels and contrasts with legalism become clearer.
In the computational domain code imposes, accelerates, and amplifies the characteristics of strong legalism beyond what a traditional legal system is capable of, given the reliance of the latter on text as the medium through which rules are promulgated, and courts as the (ex post) institutional mechanism of their recognition and enforcement. By contrast, neither legitimacy nor the threat of enforcement is required for code rules to take effect. By nature, code normativity is deeply latent; the structure and logical flows of its rules may be impossibly complex in aggregate, but they nevertheless lie in wait, fully primed, until the command to execute is given. Once the algorithm is set in motion, its execution happens entirely independently of any requirement of recognition on the part of those subject to the rules, and the ‘interpretation’ of the code is solely and entirely within the purview of the machine, at least at the point of performance. Code is deterministic in this important and limited sense; so while one should not overstate the extent to which code rules currently predetermine behaviour in the ‘offline’ world, the risk of such predetermination grows ever greater as we transition into the onlife.21 This underlines the importance of analysing the production of code ex ante, rather than focusing only on its effects in the world, a reflexive approach under which our responses might come too little, too late.
We come now to the consequences of code’s ‘ruleishness’,22 the central aspect of its legalistic nature. Ruleishness is the idea that, at point of execution, code imposes a clearly delineated and pre-determined set of hard-edged rules.23 Upon execution, nothing outside the rules will be imposed, and everything within them will be. This is true regardless of any pragmatic contingencies that mean – or should mean, were we able to argue the point – that some other condition should be taken into account to alter either the fact or the nature of the code’s execution.
There are three primary elements to code’s ruleishness: its mindless execution, the hard edges of the rules, and the limited ontology that code can represent. Of course, the rule might be designed to allow for different possibilities – a scaled value instead of a binary, for example24 – but the crucial point is that this design choice is itself ruleish.25 In this sense, then, code is representational: it constructs and operates on a particular world, a representation of the real world, and knows nothing of anything that lies beyond the borders so constructed.
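The point that even apparent flexibility is itself ruleish can be illustrated with a minimal sketch in Python; the function name, thresholds, and scenario are hypothetical, invented purely for illustration:

```python
def assess_discount(loyalty_years: int) -> float:
    """Return a discount rate scaled by loyalty, rather than a binary yes/no.

    The scale looks flexible, but its bounds and increments are fixed
    ex ante: no value outside [0.0, 0.25] can ever be produced, whatever
    the circumstances of the person to whom the rule is applied.
    """
    rate = min(loyalty_years * 0.05, 0.25)  # the 'flexibility' is capped by rule
    return max(rate, 0.0)                   # ...and floored by rule

# The design admits of degrees, but only the pre-determined degrees:
assert assess_discount(2) == 0.10
assert assess_discount(50) == 0.25  # no input can exceed the hard edge
```

The design choice to use a scale rather than a binary is real, but the scale's limits are themselves hard-edged: the flexibility is defined inflexibly.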
The mindlessness of code stems from its execution in every case where the ex ante requirements of the rules it contains are met. This is regardless of the context, consequences, or any reason that implies that the rule should not be executed. To paraphrase Introna, code “produces what it assumes”.26
Crucially, this mindless production of the ex ante assumptions of the programmer takes place regardless of her intention;27 the problem of computational legalism is not simply about the deliberate mis-use of power by the creators of code, but also about their inadvertent exercise of that power. Again, we see the amplification of orthodox legalism: whereas a legislature might produce a statutory rule whose terms inadvertently create the potential for some malady, this can in principle be ignored by those subject to it, and if necessary struck down by a court once the problem has been identified. Not so in the code context. Provided the rules are semantically valid according to the strictures of the programming language (I return to this theme below), the code will execute regardless of what the programmer intended or what she thought she was writing.28 This indeed is an extremely common experience in writing code: programmers iteratively test the rules they write to see if they perform as intended. But it also hints at a darker underside: what about latent conditions in the code that the programmer is unaware of? Where these manifest as obvious bugs they can be detected easily and hopefully fixed, but there is always the possibility they are hiding in plain sight, contained in code that ostensibly performs as intended. In either case, the machine will carry on regardless.
This is the other side of the coin from mindless execution: as long as the conditions in a code-based rule are not met, the rule will never execute, regardless again of any external condition or consideration that might have made that execution desirable or beneficial. There is no Hartian open texture and “penumbra of doubt”; only the core of meaning does or can exist in the world of executable code – there is no possibility of alternative interpretations, at least for the computer.29 This is the inverse of Hart’s argument about “mechanical jurisprudence”, which he suggested could not exist in a contingent world.30 Where rules are executed within the computational domain, a mechanical jurisprudence is precisely what is imposed on a contingent world by the ruleishness of code, made manifest and amplified by the interlocking and mutually-reinforcing characteristics of computational legalism. Taken together, the bright lines of computational rules demonstrate the complete inability of code to accommodate ambiguity.31 Any apparent ambiguity is illusory, since it has been consciously designed and is made manifest at the level of human interpretation, but it does not (and cannot) exist within the internal calculus or logic of the machine. By contrast, all human understanding is predicated on the interpretation of the (potentially) ambiguous, which requires a pre-existing ‘horizon’ of tacit knowledge that necessarily ‘fills out’ the limited depth of the communicated text.32
No such horizon is available to code, however. There is only what there is in the text; the system has no sensitivity to any notion of tacit or background knowledge that lies outside itself.33 The ‘operative facts’ that determine the application of the rule are those defined ex ante, and there is no possibility of departure from their strictures, however justified one might be in wishing or arguing for it.34
Code’s ontology is fundamentally, and in its effects sometimes tragically, limited. The form of the rule is that which is laid down; a design might allow for some leeway, but, paradoxically, this will only ever be within the ruleishly-bound limits of that pre-determined flexibility – the flexibility is defined inflexibly. While the contextual interpretation of the effects of code’s performance may be feasible, which includes resisting those effects, this is necessarily external and in opposition to the text of the code itself, from which those effects necessarily flow. This is precisely why focus on the production of that text is so important. It is there that our powers of interpretation can play an important precautionary role, feeding into the process of designing the code to critically identify what aspects of the world must be represented, how the representations are limited, and what the implications are when code assumes them to be the ‘real thing’ in its objects, data structures, and logical flows. I discuss this important focus later in the Section on performativity.
The limited ontology of code represents the point at which the previous two concerns become problematic in practice. Code can only respond to features and conditions that are anticipated ex ante and represented in its design (again, even code which can apparently adapt to new data – including machine learning algorithms, for example – is constrained reflexively by the ex ante design of that very adaptability). Code operates on a closed world assumption, with an ontology that is determined from the outset as a kind of Platonic simulacrum of the phenomenon being represented.35 If its designers anticipate only responses A, B, and C to conditions X, Y, or Z, these are all that the code can ever react with and to. This simplification of reality may of course be necessary in order to find a pragmatic balance between representation, complexity, and the solution to the problem (assuming those are even the appropriate concepts). Unlike underdetermined textual rules, however, there can be no re-interpretation to extend the code’s ontology where it might be appropriate to do so ex post, and the choice of that simplification will inevitably be framed by the situatedness of the programmer.36 Code can only ever be sensitive to what Hildebrandt terms ‘intra-systemic meaning’, which in this case refers to the rules and representations designed into its ontology, whereas meaning for humans is borne of the interactions that cross the boundaries between systems.37 In the end we are stuck with the ontology of the code as designed, which brings us back to the question of production and the need to ensure the problems of computational legalism are adverted to up-front, during the design process.38
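The closed-world character of such an ontology can be sketched in a few lines of Python; the conditions and responses below are hypothetical placeholders for whatever the designers happened to anticipate:

```python
# The system's entire 'world', fixed ex ante by its designers:
RESPONSES = {
    "condition_x": "response_a",
    "condition_y": "response_b",
    "condition_z": "response_c",
}

def react(condition: str) -> str:
    # The code can respond only with what was anticipated in advance;
    # anything outside the designed ontology simply does not exist for it.
    try:
        return RESPONSES[condition]
    except KeyError:
        raise ValueError(f"'{condition}' is not part of this system's world")

assert react("condition_y") == "response_b"

# A condition the designers did not foresee cannot be accommodated ex post;
# there is no re-interpretation that could extend the ontology at runtime:
try:
    react("condition_w")
except ValueError:
    pass  # the unforeseen condition is simply rejected (or, worse, mishandled)
```

Everything not mapped ex ante is, from the machine's perspective, not a hard case to be reasoned about but a non-entity.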
Code is legalistic, therefore, because even where its text is open to scrutiny there is still no space left for the citizen to interpret; even if she has access to and can ‘read’ and understand the code-text, in the course of its performance she has no choice but to acquiesce to the rule as it was designed. And as we saw above, even where there is the possibility of some choice built into the design, the scope of this is itself pre-determined, and in any event the default settings of code are often taken by users to be an immutable fact, or at least the most sensible configuration.39 Taken together, this implies that computational legalism is potentially far stronger than the strongest of orthodox legalisms. Under the legalistic rule by code, the possibility of interpreting a rule and disagreeing about how to respond to it is routinely and comprehensively elided, by simple dint of the nature of the medium.
The previous Section discussed how the ‘ruleishness’ of code is especially problematic in terms of interpreting how it constitutes and regulates behaviour, as a means to contesting and guiding it toward legitimate applications. The question then arises of what can be done to ameliorate this ‘legalistic’ nature.
One way of analysing code is to think in terms of its relationships to those who use it or are affected by it, for example individual citizens, collectives, people with specific (in)capacities (e.g. the visually or cognitively impaired), or those whose roles have specific normative implications (e.g. advocates, judges, traffic wardens, public administrators). For such relational analyses the theory of affordance is proving valuable in the legal literature.40 One can adopt a normative legal perspective when viewing code through this lens: we can ask, for example, what relationships and features the positive law requires a digital artefact to have with respect to specific classes of ‘user’, be they human or otherwise.41 Although this is powerful in terms of identifying the features and relationships that code ought to facilitate, its engagement with the underlying text of the system’s code is only indirect. That text of course constitutes the features of the system, and (one side of) the relationships that it instantiates when it is executed.
Although it is valuable to analyse code in terms of the relationships it brings into being while in operation, in the remainder of this paper I want to consider the possibility of directly analysing the text of the code, as a text. The idea is to consciously adopt a hermeneutic position, one of legality, in order to interpret the text of the code and the tools and practices through which it is produced. Adopting this position might allow us to see to what extent the requirements of legality are met at code’s lowest level, prior to performance.42
Before we can begin, however, there is an important question to be asked in relation to interpretation of code: who is the interpreter? If the meaning of a text is supplied not solely by its author but is constructed through the appropriation of it by the interpreter,43 we might ask who – or what – it is that appropriates the code-text, and what the pragmatics are that have a bearing on the meaning(s) that text thereby accrues.
The text of code points in two directions: it is both a set of instructions for the computer to execute,44 and a document for the human interpreter, describing what the code will do. These central characteristics of code as both documentary and performative separate it from most other text, in degree if not in category;45 legal texts are also performative, although in a way that is temporally inverted from that of code-text.
In code of any complexity, the documentary function becomes crucial for understanding what the system does or is intended to do, especially across time and space and where more than one programmer contributes to its development. The form of the code’s text will to a great extent govern its documentary function, and this in turn is influenced by the vocabulary and grammar of the underlying programming language.46
The facilitation of human intelligibility works on two levels. First, there are the signs used in the programming language itself, where for example function names and data flow structures are intended to facilitate human understanding. The language might even be explicitly designed to prioritise human understanding over designs that might otherwise facilitate faster computation, as in the case of the Literate Programming paradigm, discussed further below. At the second level of intelligibility, there is ad hoc documentation in the form of non-executable comments interspersed throughout the executable text. These are often used to explain what particular sections of code are intended to do, why the programmer chose to adopt a particular approach instead of some alternative, or to label sections of code that are incomplete or have been temporarily ‘hacked together’. Such comments are flexible in terms of their documentary function, since they have no necessary connection with the performance of the code – in the comments the programmer can choose to describe her code in as much or as little detail as she wishes, without this in any way affecting how it will behave when executed. By the same token, her comments might mischaracterise what the code does, either deliberately or through misunderstanding, or indeed she might fail to document it altogether.
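The independence of the comment from performance can be shown directly. In the contrived fragment below (the function and figures are invented), the documentary text describes one rule while the executable text imposes another; the machine attends only to the latter:

```python
def apply_late_fee(balance: float) -> float:
    # Comment (documentary): "adds a 5% late fee to the outstanding balance".
    # Executable text (performative): in fact adds 15%. The comment has no
    # necessary connection with performance, so the discrepancy -- whether
    # deliberate, mistaken, or the residue of an old version -- changes
    # nothing about what the machine will do.
    return balance * 1.15
```

However the comment is worded, only `balance * 1.15` is ever performed; the human reader relying on the documentary layer is systematically misled.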
On the other side of the intelligibility coin, it is the computer that acts as interpreter. Here, the meaning of the text is created by the compiler of the source code,47 and the CPU that executes it. They are the ‘readers’ of the text. Compilers are themselves software artefacts, constrained by the grammar of a particular programming language (more on such grammars below). Thus the ‘autonomy of the (code) text’ is at the same time extreme and fundamentally constrained. It is extreme because it is a performative par excellence; its constitutive nature creates profound effects in the world far removed from the geographical or temporal control of the author.48 But that autonomy is simultaneously constrained, because its ruleishness delimits its performance absolutely.
For some, the meaning of code lies only in this performative role,49 but this can only be true if we in effect delegate interpretation of the text to the compiler whilst at the same time limiting the interpretive role of humans to the observable effects of the code’s execution. While those effects are indeed where computational legalism can be witnessed, they are not where it will or can be challenged. The documentary purpose of code can give us insight into what the effects are likely to be before they are let loose in the world. The salient difference for computational legalism between interpreter (human or compiler/CPU) and the object of interpretation (text or effects) thus lies in the temporality of the interpretative practice: in its documentary role, prior to execution, the interpreter of the code is a human. This happens before the point of compilation and closure, and is necessarily ex ante.50 On the other hand, in its performative role the interpreter of the text is the compiler whose interpreted output is executed by a processor; the human is demoted to an ex post interpreter of effects, rather than of the text, which becomes closed off to her, and with it the possibilities of re-interpretation and change.51
Code could thus be said to be a kind of speech act: it is a written text with latent performativity that creates a new state of affairs in the world once executed. Unlike textual laws, however, the consensual threshold for execution is much lower, and in many cases non-existent; as mentioned above, once execution has been initiated, there is little in the way of scope for mitigatory interpretation or collective agreement not to recognise or act upon the ‘performative’. This up-ends law’s scheme of performativity, built on institutive rules that require the recognition of a community, both pro- and retro-spective, in order to have any practical effect in the world.52
In terms of Austin’s theory, the act of writing code could then be viewed as perlocutionary, which is to say its performance and the effects that flow from it take place sometime later than the moment of utterance.53 Perlocutionary speech acts can be contrasted both with locutionary utterances, true or false statements about the world that have no performative effect, and illocutionary speech acts, which have an immediate effect as they are uttered. The classic examples of the latter are ‘I now pronounce you man and wife’ and ‘I sentence you to…’. What Austin calls the ‘happiness’ of a performative utterance is contingent on the requirements of a ‘conventional procedure’ being met at the point of utterance.54 For example, saying ‘I now pronounce you [husband/wife] and [husband/wife]’ will have no effect if the speaker is not a legally-authorised officiant, or if any of the other conditions of marriage are not met (majority and consent of the parties, et cetera).55
Viewed in terms of computational legalism, the ‘happiness’ of a code utterance is not contingent on anything other than the syntactic validity of the statement, as defined by the programming language. Code that meets the ‘grammatical’ requirements of that language will execute whenever its required conditions are met, totally regardless of the performance intended by the programmer (hence the constant need to ‘debug’ software – to fix failed, unwanted, or unexpected performance). The relationship between what was written and what was understood to have been written is especially problematic because of this mindless performance, especially when multiplied by the interplay of different codes within and between systems.56 This is what makes the legalism of code so potentially dangerous – the threshold of performativity is syntactic validity and nothing more; outside of a hardware failure the machine will not stop to consider whether or not performance is appropriate or matches the intention of the programmer.
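Python makes this threshold of ‘happiness’ directly observable, since its `compile()` built-in performs the grammatical check separately from execution. The snippets below are invented for illustration: the ungrammatical utterance is rejected before any performance, while the grammatical-but-unintended one performs without hesitation.

```python
# An ungrammatical utterance is 'unhappy' at the threshold: it is rejected
# outright, before any performance can occur.
rejected = False
try:
    compile("if price > :", "<utterance>", "exec")
except SyntaxError:
    rejected = True

# A grammatical utterance, by contrast, is performed whenever invoked --
# even where it departs from what its author intended (here, a refund
# accidentally doubled rather than halved):
source = "refund = price * 2  # the author intended: price / 2"
namespace = {"price": 10}
exec(compile(source, "<utterance>", "exec"), namespace)

assert rejected
assert namespace["refund"] == 20
```

Syntactic validity is thus the sole ‘conventional procedure’ the machine enforces; everything else that Austin would fold into happiness is simply absent.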
Code is perlocutionary in that its consequences take place at the point of execution, necessarily after the ‘speech act’ of writing the programme. This may be seconds, months, or years after the moment of utterance. Code is thus latently performative; the immutable utterance is ‘shipped’, pregnant with (fixed) possibility, awaiting execution at some unknowable time in some unknowable context. This highlights the crucial need to focus on the conditions of production. Given that the utterance puts in place a set of conditions that can have performative effect far beyond the foresight of the programmer, it is essential that those conditions are sensitive to those future possibilities. We may not be able reasonably to expect programmers to see the future, but we can expect them to build features into their code that will protect those affected by it when that future arrives.
Is the notion of performativity just developed properly congruent with speech act theory, however? Is it appropriate to think of code statements as utterances that perform acts we recognise in our shared social world, as opposed to them simply being computational steps that are executed deterministically by the machine? An answer to this lies in the pivotal distinction between performance and performativity.
The normative system of law is constituted in great part by performative speech acts. When executed according to the conventional procedure, these result in constructs that are recognised and have purchase in the legal world. Legal institutions are the ‘templates’ of those constructions, defined by positive law through rules that define their creation and termination, and the consequences of each of these.57 Individual examples of an institution are legal-institutional facts that will be recognised by the legal system, for example a contract, a marriage, or a trust.58 The practical existence of an institution (both the construct and any particular instance of it) is contingent on a shared commitment to the rule of law. One might disagree about the specifics of a given legal-institutional fact (for example by asserting that a contract is void or a marriage invalid) but the mere fact of disagreement will usually not be sufficient to extinguish the institutional fact if its conventional procedure does not provide for this. To do that, some form of adjudication will be required, for which the paradigmatic forum is of course the court. Judges interpret the rules and the evidence to determine whether or not, in the light of law’s governing ideals, the requirements of the conventional procedure have been met. Given that courts are the primary body with authority to make such determinations, there is a kind of temporal balance between on the one hand the ex ante conditions of the conventional procedure, and on the other the retrospective (ex post) determination by the adjudicator that those conditions have been met.59
The under-determinacy of natural language permits flexibility in the assessment of whether or not the purported effect of a speech act was successful. What might appear in law to be an unhappy performative, for example because the precise requirements of the institutive rules have not been met, can in principle be remedied ex post by a court where there is sufficient reason, found outside the bare text of the rule, to do so.60 The incorrectly performed speech act is thus rendered valid by the court, which looks beyond the bare rule to ‘find’, on a principled basis, the institution that was purported to exist all along. That finding is of course itself a speech act, made according to a conventional procedure that gives it legal effect by dint of the court’s authorised status and the point in the legal process when it is made.
This kind of flexibility would not be acceptable under strong legalism, and even less so under ‘rule by code’, which in an analogous circumstance would be unwavering in its execution of the rule as laid down. The three elements of ruleishness described above mean that the notion of a code performative being ‘unhappy’ is absolute. It is questionable therefore whether it can ever be appropriate to ‘outsource’ the creation of legal-institutional facts to code, without first thinking deeply about the reflexive consequences.61
Legal institutions are indeed predefined, at least in the sense that the creation of the relevant institutional fact is contingent in part on an extant conventional procedure being followed. This might trick us into the false notion that because there are ex ante specifications for their institution, we can simply automate the relevant speech act. Superficially, the modus ponens of a legal condition maps easily onto the ‘if this, then that’ structures so common in code, but that is not all that matters, unless we are willing to adopt a normative position that computational legalism is a desirable thing. “[T]o be able to tell the rules of chess is not to know chess”;62 a move in this direction would require deep sensitivity to the difference between the constitutive nature of code’s normativity and the legal effect of textual normativity, the latter being inherently and productively limited in the extent to which it can direct our actions.63 The performance of the designer’s utterance, written in the language of code, can have far more immediate constitutive force than can a speech act in the legal domain. Such code performatives do not create institutions, but rather constellations of brute fact64 that might appear similar to legal-institutional facts but in fact have very little to do with them. Even where the vocabulary in the code-text is intelligible, using verbs like ‘print’ that make some intuitive sense even to non-experts, they mean something different to their respective readers, i.e. the human and the compiler/CPU.65
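The superficial ease of that mapping can be sketched as follows (the rule and threshold are hypothetical): the conditional executes, but nothing in it leaves room for the interpretive work that gives the legal institution its meaning.

```python
# Hypothetical rule: 'a person aged 18 or over may contract'.
MINIMUM_AGE = 18

def may_contract(age: int) -> bool:
    # The predicate is exhaustive: for the system, no 'capacity'
    # exists outside it, whatever a court might later decide.
    return age >= MINIMUM_AGE

print(may_contract(17))  # → False, with no forum for ex post remedy
```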
As suggested above, legal-institutional facts combine the ex ante and the ex post; they are constructs recognised within the broader interpretive context of the rule of law. This context is in no way a prerequisite for the constitutive and regulative performance of code, however; its execution has no necessary role for democracy or the citizen. Whereas in the legal domain the rule flows from the legislature to the citizen who follows it, in the computational domain the programmer ‘sends’ her code to the compiler/CPU, which imposes the rules, bypassing the role of the enfranchised citizen/user who is nevertheless subject to the effects of their performance.66 The author, text, reader, and effect thus play very different normative roles in the two domains. Because of this, code-created ‘institutions’ ought not to be considered isomorphic with any notional counterpart from the legal domain, at least not without an acute awareness of how this will change the nature of legal institutionality.67 This will necessarily involve consideration not just of the performance/performativity distinction, but also the other quantitative characteristics of code’s ‘legalism’, discussed above, that amplify this central concern. The shared Welt of legal institutions, bound up in the slow materiality of text, means that by default those institutions can be contested, all things being equal.68 Even assuming it is not a category error to attempt to render institutions and their performance computationally, their nature would be profoundly altered by the ontological characteristics of the code medium, and in particular its immediacy, its immutability at execution time, and its pervasiveness. The role played by designers in determining the scale and character of that reshaping would be vast, raising many questions about the legitimacy of those making the relevant design choices.
The legitimate use of computation in this context will require a set of design practices, and perhaps a programming language, that admits of the flexibility and ‘spaces’ that are necessary to avoid legalism and the collapsing of law’s inherent adaptability.69 This is a theme I return to below.
We have seen above that code’s ‘performativity’ is latent. This means that in order to ensure its eventual execution avoids the pitfalls of computational legalism, we must directly consider its design up front. With code, anticipating the future effects of performance requires direct engagement with the text at the stage of programming. By shifting focus to this ex ante point, we can ask questions that cannot be answered by looking only at the ex post effects of execution. We might ask for example why a particular programming language was used, which third-party code libraries were incorporated, and why a particular method was used to achieve a particular output or effect. These in turn suggest further questions about the broader context of production, for example how commercial platform power results in certain languages and tools gaining prominence in programming education, and thereafter in industry practice.
By critically engaging with the text of the code beyond its apparent effects, we can open up a matrix of points of interpretation – ex ante/ex post, text/effects, human/compiler. Looked at this way, we follow Ricoeur in looking beyond the world that is presupposed by the code’s author and constituted by the effects of its execution.70 This for him would constitute a naïve “hermeneutics of faith”, which while necessary as a first step in interpreting a text is insufficient without a further phase of “suspicious” engagement. Under the hermeneutics of faith, the text is taken at face value, and the meaning that the author apparently intends is accepted as such. It does not go further to question the conditions of the text’s production or the assumptions about the world that it embodies – the reader is passive, taking a “vow of obedience”.71 There is of course a connection back to strong legalism: recall how the rules simply ‘are’, the intent of the legislator is deemed to be contained in the text and the political motivations that lie behind the rules are veiled from the subject – she must simply obey. In the context of code, this solipsistic perspective is to an extent reflected in analyses that focus only on its effects. While such effects can be and frequently are subjected to critique in their own right, in such cases the underlying text is taken as a given. This is necessarily so, because as we have seen its immutability at point of execution means there is no alternative at the point at which effects are observed – the closure of the code has already taken place.
In contrast to the hermeneutics of faith, under the hermeneutics of suspicion the reader shows willingness to suspect, to be rigorous in uncovering “relationships of power, conflicts, and interests implicated in [the text].”72 This second phase of interpretation engages at a deeper level, lifting the veil to uncover the political interests presupposed and reflected, even unconsciously,73 in the text. To interpret code in this way we must look directly at the text, not just at its effects, many of which will not be apparent to us, either because they are hidden from view or because the conditions of their performance have not yet been met and so the effects are yet to materialise.
Expounding this network of meaning becomes critically important in light of the bricolage nature of most code. If we are to change the code in ways that can mitigate computational legalism, we must by definition scrutinise it before it has passed the point of compilation that will render its rules immutable, at least until the next software update. We adopt the position of an engaged and suspicious reader of the text, rather than merely a passive observer of its effects.74 The goal is to have an ex ante impact on the execution that takes place after closure has happened.
Focusing on the production of the text in this way up-ends Heidegger’s notion of a technology being ready-to-hand (that is, subsumed within a practice such that it recedes from perception) versus present-at-hand (interrupting our practice and intruding into our attention). The latter usually comes after the former, when the artefact breaks down.75 Here, the idea is not passively to observe the code’s effects, waiting to detect some anomaly intruding on our attention that in turn creates new interpretative possibilities about why that has happened.76 Instead, we proactively interpret the code as it is being produced and is still in a “state of non-functioning”,77 in order to anticipate its nature once it is compiled, emerging from the process to embed itself in our experiential world as ready-to-hand.
The broader consequences of this for the legal domain are something I will return to below, but for now we can consider a very simple example that engages directly with the text of code and demonstrates some aspects of its bi-directionality.
Showing the phrase ‘Hello World!’ onscreen is traditionally the first step in getting to grips with a new programming language and its workflow.78 It is generally among the simplest of tasks in any given language, acting as a simple test of whether everything is properly set up and ready for more complex applications to be written, executed, and tested. Consider the following implementation in Python, a language commonly used in machine learning applications:
#!/usr/bin/env python
print("Hello World!")
Even without specialist knowledge of the language we can intuitively understand what this code will do when executed. The verb ‘print’ (like ‘alert’ in an equivalent JavaScript implementation) is easily intelligible.79 The documentary function of the text is quite explicit here, even in such simple examples. But take another example, written in the ‘esoteric’ language Brainfuck:
++++++++[>+++++++++<-]>.<+++++[>++++++<-]>-.+++++++..+++.>++++[>++++++++<-]>.<<------------------------.++++++++++++++++++++++++.+++.------.--------.>>+.
While statements expressed in different programming languages may be for all performative purposes identical, the documentary function – the communication of meaning beyond performance – can vary significantly. Outside of esoteric languages like Brainfuck, which aim to be unintelligible (the clue is in the name), this communicative function is an affordance of that language, contingent on the decisions made by the language’s designers. I will return to the theme of language design in the Section on grammar versus use below, but for now we have a hint at the constitutive nature of languages vis-à-vis the rules that can be written in them. When these are combined with the tools and common practices of code production, they affect the nature of the code artefacts that emerge from those processes.
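To make the contrast concrete, consider a minimal Brainfuck interpreter written in Python (a simplified sketch: the names and the interpreter itself are illustrative, and input handling is omitted; HELLO is one correct ‘Hello World!’ implementation among many). Run over such a program, it produces exactly the same performance as Python’s own print statement, while communicating nothing of it to a human reader.

```python
# An illustrative, minimal Brainfuck interpreter (input and error
# handling omitted). HELLO is one correct 'Hello World!' among many.
HELLO = (
    "++++++++[>+++++++++<-]>."        # 'H' (8 * 9 = 72)
    "<+++++[>++++++<-]>-."            # 'e' (72 + 30 - 1 = 101)
    "+++++++..+++."                   # 'l' 'l' 'o' (108, 108, 111)
    ">++++[>++++++++<-]>."            # ' ' (4 * 8 = 32)
    "<<" + "-" * 24 + "."             # 'W' (111 - 24 = 87)
    + "+" * 24 + "."                  # 'o' (back to 111)
    + "+++.------.--------.>>+."      # 'r' 'l' 'd' '!'
)

def run_brainfuck(program: str) -> str:
    tape, ptr, out = [0] * 30000, 0, []
    # Pre-match the brackets so '[' and ']' can jump directly.
    jumps, stack = {}, []
    for i, ch in enumerate(program):
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[ptr]))
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1
    return "".join(out)

print(run_brainfuck(HELLO))  # → Hello World!
```

The performative result is identical to print("Hello World!"); the documentary gulf between the two source texts is the point.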
We have seen that code is simultaneously documentary and performative, both capacities being fundamentally shaped by the programming language that is used. The design of the language constitutes a meta-frame for the practice of code’s production, within which the programmer of code is in a sense merely a user, herself ‘programmed’ by pre-existing conditions of production. The designer of the language – and indeed of other elements such as user interfaces, standardised libraries of off-the-shelf code, operating systems, and the hardware itself – thus wields significant power over the programmer of the ultimate product. Vismann and Krajewski conceive of this role as the ‘programmer of the programmer’:
The programmer of the programmer, designing the tools and methods of a coding language (such as the compiler, code syntax, abstract data types, and so on) maintains the ultimate power because he or she, as the constructor of the programming language itself, defines what the “normal” programmer, as a user, will be able to do. Both types of programmers establish the conditions for using the computer, and, as such, they behave like lawmakers or, rather, code-makers.81
The programmer of the programmer (‘PoP’) is not a single person or platform, but can be interpreted to mean the conditions of possibility that govern what the ‘production programmer’, i.e. the creator of an artefact’s code, can produce. This framework is partly constitutive of the outputs of the production process; when the guiding force it provides is itself designed with a normative end in mind, one can think of it as in a sense ‘constitutional’. The PoP, viewed as the collection of tools and practices that pre-configure code production, can have an impact on the characteristics of what is produced from within its framing.
Programming languages, designed by the PoP, have interesting properties as compared with human languages. The grammar of a natural human language, its langue, is a crystallisation of existing usage, or parole.82 The speakers of a language participate in the evolution of that grammar, however unconscious and marginal that participation might be. The langue will change and adapt over time, as the speakers’ mores and habits are reflected in the parole; what was once ‘bad’ grammar becomes tolerated and then accepted, with communication in the meantime going on unabated. The grammar thus develops ex post as a result of the ever-changing uses of the language in practice.83
Programming languages by contrast are profoundly ex ante, up-ending this arrangement of constantly evolving meaning.84 In computation, there is generally no input to the langue from those who ‘speak’ it, which is to say programmers. The langue is determined in advance by the PoP,85 and there can be no ex post adaptation of it in favour of a shifting parole; any particular computational ‘parole’ (i.e. the code of some artefact) that fails to meet the precise requirements of the predetermined computational ‘langue’ is entirely ineffectual, usually failing even to compile and execute.86 The successful performance of the code ‘speech act’ is thus entirely dependent on it precisely following the predetermined strictures of the computational langue.87 The line is very thin indeed between perfect execution of the text as written, regardless of what was intended, and the categorical ‘unhappiness’ of the ‘performative’.88
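This categorical threshold can be demonstrated directly (a sketch using Python’s built-in compile function): a single missing character renders the utterance wholly ineffectual, however transparent the intent remains to a human reader.

```python
# One character separates a valid utterance from a categorically
# 'unhappy' one that the langue refuses outright.
valid   = 'print("Hello World!")'
invalid = 'print("Hello World!"'   # missing closing parenthesis

def parses(source: str) -> bool:
    try:
        compile(source, "<utterance>", "exec")
        return True
    except SyntaxError:
        return False

print(parses(valid), parses(invalid))  # → True False
```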
Such computational ‘grammar’ is, along with the design conventions associated with a particular programming language, especially responsible for framing the solution to a given task.89 Nearly all modern languages are Turing complete, meaning they can perform the same set of atomic calculations which in turn are the building blocks of general-purpose computation. Despite this, some languages are designed for, or are especially appropriate for performing, particular kinds of computation.90 From the programmer’s perspective, this is due to the amount and forms of abstraction they provide, for example by including predetermined functions for achieving particular computational goals in a single step without the need for bespoke coding. These functions are abstractions that are integrated into the core grammar of the language, somewhat like idioms in natural language that formalise particular (performative) meanings in a single ‘phrase’ or way of speaking.
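A small sketch of such an idiom: the langue’s built-in abstraction performs in a single ‘phrase’ what would otherwise require bespoke steps.

```python
values = [3, 1, 4, 1, 5]

# Bespoke: the programmer spells out each computational step.
total = 0
for v in values:
    total += v

# Idiomatic: the langue supplies the abstraction in a single 'phrase'.
idiom = sum(values)

print(total, idiom)  # → 14 14
```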
Setting aside what might make a particular language more fashionable at any given moment in history, the choice to use one language over another can in principle be tied to the abstractions that that language provides, and how these ‘fit’ the programmer’s mental model of the problem at hand and how best to solve it.91 The question of what constitutes the ‘best’ solution will of course be contested, but the suggestion here is that the best solution is one that mitigates computational legalism.92 Programming languages are, as Graham puts it, “not just technologies, but habits of mind”.93 These habits develop over time, based in part on the kinds of problem being solved. Given that a programmer will usually become ‘fluent’ in only a handful of languages, she will begin to see the problems she is charged with solving through the lens of those languages, with the affordances of their syntax, functions, and data structures coming to frame how she conceives her solutions.94
This representational idea mirrors the notion that the natural language we use affects our interpretation of the world.95 Our mother tongues frame our experiences and understanding; vocabulary and grammatical structures form a lens through which our worlds are refracted, culture shaping language and vice versa.96 As Sapir put it,
[h]uman beings do not live in the objective world alone[...] but are very much at the mercy of the particular language which has become the medium of expression for their society… the ‘real world’ is to a large extent unconsciously built up on the language habits of the group.97
Although this refers to natural language, one can claim something similar of programming languages.98 The ultimate design of a given code artefact’s ontology will be affected to some degree by the data structures that the programming language accommodates (e.g. arrays, data frames, matrices).99 Similarly, the rules that process the artefact’s inputs and outputs will be affected by the grammar and functions that the language provides – the ideal function might not be available, but the programmer ‘makes do’ by working with what the language provides. Even at the level of a programming language’s vocabulary, one can observe that the majority of programming languages use English verbs and nouns,100 which in itself creates a linguistic framing effect separate from, and in addition to, the other features of the programming language, especially for those programmers for whom English is a foreign language.101
Echoing Whorf, Chen notes that once the practice of writing code has begun, the “boilerplate and design patterns” of a given programming language are internalised as “unconscious and automatic idioms”, ready to be “regurgitated on demand”.102 In a study of the standard ‘split, apply, combine’ task in data science, Chen illuminates the notion of linguistic relativity as between the R, MATLAB, APL, and Julia languages. Where some languages have relatively constrained data structures and ‘habits’, others are more flexible in their generative possibilities. The programmer is necessarily situated within a set of technical and cultural practices that are associated with the language and its related libraries and tools, and thus will to a greater or lesser degree be guided in her understanding of the programming challenge she is tasked with solving.103 Weizenbaum emphasises the power of programming languages to structure understanding:
…to a person who has only a hammer, the whole world looks like a nail. A [programming] language is also a tool, and it also, indeed, very strongly, shapes what the world looks like to its user… these frameworks cease to serve as mere modes of description and become, like Maslow’s hammer, determinants of their view of the world. The design of a public language, then, is a serious task, one pregnant with consequences and thus laden with extraordinarily heavy responsibility.104
There is no ‘view from nowhere’ in producing code; the programmer’s perspective is framed from the outset by the tools available to conceptualise and to solve the problem, namely the languages she is familiar with, along with their attendant functions, data structures, associated libraries, and design patterns, all of which are in place before a single character is typed.
It is of course necessary to critique individual code artefacts in operation, but this level of analysis cannot have the same breadth of application as can analysis of the PoP and how it invariably frames a computational problem, and its answer, before any code is written.
We can consider within this ‘meta frame’ the affordances of particular programming languages, including the programming paradigms, such as object-oriented programming, that they embody. We might also consider production methodologies such as waterfall and agile, and how their relationships with time and with requirements engineering, and in turn commercial incentive, have a bearing on the code they produce. A linear waterfall process, for example, will generally have a lower tolerance for mistakes or new knowledge compared to a cyclical, iterative ‘agile’ process.105
Off-the-shelf code is another important source of critical engagement. New code almost always builds on existing code,106 since it is expensive and potentially dangerous to re-invent the wheel each time one needs standard functionality or to perform a common task.107 The libraries of off-the-shelf code chosen to solve these problems are often very mature and accepted as standard across industry,108 but they might just as easily have little or no pedigree. Conceptually related are application programming interfaces (APIs) that allow disparate code artefacts to communicate. These impose a further level of rule-based, third party-sourced structure on the communicating code; failure to follow the pre-determined rules that define the API, particularly around authentication and data interchange, will mean the code is simply unable to communicate with the third party system.
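The strictness of such interface rules can be sketched in miniature (the schema below is hypothetical): a request that omits a required field is simply refused, with no forum in which to argue about what was meant.

```python
# Hypothetical interface rule: these fields, exactly, or nothing.
REQUIRED = {"api_key", "resource"}

def accept(request: dict) -> bool:
    return REQUIRED.issubset(request.keys())

print(accept({"api_key": "abc123", "resource": "/books"}))  # → True
print(accept({"resource": "/books"}))                       # → False
```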
What these elements point to is the bricolage of code production, where a project develops in a piecemeal, jigsaw-like fashion.109 The code artefact consists of a patchwork of statements made in a programming language that reflect its computational affordances, existing libraries of off-the-shelf code, and communication with external APIs, all of which are brought together within a more or less explicitly followed production methodology. The result is a ‘finished’ artefact that is often precariously constituted rather than clean and pristine, as it might often appear to the end-user.110 Within these overlaps and frictions one can find much of the meaning that lies beyond the performative text.
The guiding role of the programmer of the programmer is to a great extent facilitated at the point of production by the integrated development environment, or IDE. These are essentially sophisticated word processors for code; they provide an interface to enter the text, as well as features that assist in the complex task of keeping track of the multiple code files that make up a project and the logical flows between and within them.111 Functions such as the automatic hierarchical indentation of code-text and the coloured highlighting of the language’s syntax assist the programmer in scanning and understanding the code. IDEs are an essential mental prosthetic, keeping track of a project and maintaining a semblance of order as the code grows and evolves in the process of production.
IDEs also facilitate the raw production of text. They ‘understand’ the langue of the programming language, providing hints as to how to complete code statements as they are being typed, in a fashion not dissimilar to a smartphone’s autocomplete. These hints can take into account not just the objective grammar of the language being used, but also the specifics of the code that has already been written, for example suggesting method and variable names on-the-fly, thus reducing the load placed on the programmer’s memory and her mental model of the project.
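A toy sketch of this kind of completion (simplified far beyond a real IDE; the names are illustrative): suggestions are drawn not from the langue alone but from the identifiers already uttered in the project’s own text.

```python
import re

# A toy project text; a real IDE would parse the syntax tree instead.
project_source = """
def borrow_book(member_id, book_id): ...
def borrowing_limit(member_id): ...
"""

def complete(prefix: str, source: str) -> list[str]:
    # Candidate identifiers are harvested from the code already written.
    identifiers = set(re.findall(r"[A-Za-z_]\w*", source))
    return sorted(n for n in identifiers if n.startswith(prefix))

print(complete("borrow", project_source))
# → ['borrow_book', 'borrowing_limit']
```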
More recently, code itself has become data, used to train machine learning models as to what ‘correct’ code ought to look like. Microsoft’s Visual Studio IDE includes a suggestion system trained on open-source code hosted on the GitHub platform, which the company bought in 2018.112 The claim there is that the code contained in the most popular GitHub projects represents best practice,113 which will in turn be reflected in the code of new projects developed using Visual Studio as a result of the code hints that it provides. Even assuming code quality can be measured quantitatively, popularity is a questionable metric, and once code’s text and the context of its production are opened to critical interpretation it becomes a difficult position to maintain. Many contextual questions can be asked about why a particular project has become so popular, but the answers to these will certainly not be reflected in the suggestions that are based on the code of such projects. While such tools may indeed facilitate good practice in many cases, it is open to question whether in some cases they entrench negative design patterns and the ‘habits of mind’ that, at least potentially, lead to undesirable phenomena exemplified by computational legalism. The popularity of IDEs like Visual Studio and platforms like GitHub might in fact amplify the problem of pervasiveness by encouraging the (re)production of ‘legalistic’ code.
We saw in the first Sections of this article how the ruleishness of code militates against alternative interpretative possibilities, at least with respect to execution. This is amplified by the other ontological characteristics of code to create a kind of legalism profoundly at odds with the general claim that systems that constrain and enable our behaviour ought to afford at least some basic capacity for contest. The enforcement of rules by ‘legalistic’ digital systems is antithetical to a normative conception of law as, at least in part, an interpretative exercise.114 This observation provides us with a normative stance from which to interpret code, one that can inform future guidance of its production. We might mitigate computational legalism by interpreting both the text of code per se, and the tools and processes of its production, thereby reflecting the values of legality and the rule of law in the code that is ultimately compiled.115
Nothing in code is a given, and so the legalism of code is not an inevitability, even if its ontological characteristics and the culture and tools of its production might tend in that direction. Just as individual systems can be designed differently, so too can the processes, tools, and languages from which they emerge.
In the context of programming languages, we might envisage a design whose vocabulary, grammar, and design patterns – i.e. langue – steer toward a particular set of values in the ‘parole’ of the code that is written using it. Some proposals already exist for such value-driven languages, aimed at reflecting for example a feminist perspective116 or the particularities of non-Western, non-English-speaking cultures, as is the case in ethnoprogramming.117 An example of this is قلب (‘heart’), a language that poses a vivid challenge to the Anglo-centricity of contemporary programming languages and practice.118
From the perspective of code’s bi-directionality, it is possible to intertwine closely the documentary and performative roles of code. This is the goal of the Literate Programming paradigm and its WEB language, which tightly weaves together executable code and commentary in a single file from which both documentation and the executable code can be generated.119 Literate Programming stems from the notion that programming ought to reflect human ways of thinking, such that the culture shapes the programming language more so than the converse. This in turn can provide us with an alternative, potentially normative angle on the linguistic relativity of programming languages.120 This aim is reflected in the design of business-oriented languages such as COBOL and its ancestor FLOW-MATIC,121 as well as LOGO, which aims to reflect the programmer’s embodied perception,122 and Inform 7, which uses entirely natural language sentences for its expressions.123 The ends of a given language are expressed in its vocabulary and grammar; these could conceivably reflect the broad aims of legality and the rule of law. And even if the underlying language still facilitates the building of legalistic structures (as invariably it will, if it is Turing complete), the IDE might provide hints that tend toward a new form of ‘best practice’ that can avoid them. If we bear in mind that of course the IDE is itself designed, we can imagine its own design mandating this kind of process, the PoP entering as a meta-guide to the writing of product code and opening up a hermeneutic in the initial steps of creating a new project. Documenting the text might also be required by the IDE, compilation being prevented before comments have been added. 
This suggestion would likely be resisted as being counter to any ‘move fast, break things’ ethos of software development, but the latter is as much a question of culture as the suggested alternative, and can be reflexively challenged if there is sufficient willingness to do so.
These are just a few examples of how computational legalism might conceivably be mitigated. By making changes at the ‘constitutional’ level of the design process, it ought to be possible to alter the text that that process produces, and its resulting performative effects. Cultivating such normative ideals for code interpretation could result in the production of code that avoids computational legalism in its many troubling manifestations.
This contribution has aimed to do two things. Firstly, to set out the ways in which code manifests the characteristics of legalism, to a degree that far outstrips their manifestation in the legal domain. Code’s ruleishness is particularly problematic, given its ontological resistance to interpretation and to contest. That leads secondly to the suggestion that we might fruitfully adapt practices of textual interpretation in service of mitigating that inherent legalism. In constitutional democracies we tend to want to resist the blanket heteronomy of rules passed down from sovereign legislators. We should take the same stance against producers of the code that directly constitutes, limits, and structures so much of our environment and our possibilities of action. One way to do this is to critically interrogate its text, during the process of production, from the perspective of legality and the rule of law.
‘About’ (Inform 7) <http://inform7.com/about> accessed 11 June 2021
‘Algorithm, n.’ <http://www.oed.com/view/Entry/4959> accessed 11 June 2021
Arendt H, The Human Condition (2nd edn., Chicago: University of Chicago Press, 1998)
Arns I, ‘Code as Performative Speech Act’ (2005) 4 Artnodes
Austin JL, How to Do Things with Words (Oxford: Oxford University Press, 1962)
Austin JL, ‘Performative Utterances’ in J.O. Urmson and G.J. Warnock (eds.), Philosophical Papers (Oxford University Press, 1979)
Bańkowski Z, ‘Don’t Think About It: Legalism and Legality’ in M.M. Karlsson, Ó. Páll Jónsson and E.M. Brynjarsdóttir (eds.), Rechtstheorie: Zeitschrift für Logik, Methodenlehre, Kybernetik und Soziologie des Rechts (Berlin: Duncker & Humblot, 1993)
Bańkowski Z and N MacCormick, ‘Legality without Legalism’ in W. Krawietz et al. (eds.), The Reasonable as Rational? On Legal Argumentation and Justification; Festschrift for Aulis Aarnio (Berlin: Duncker & Humblot, 2000)
Bańkowski Z and B Schafer, ‘Double-Click Justice: Legalism in the Computer Age’ (2007) 1 Legisprudence 31
Barthes R, ‘The Death of the Author’ in S. Heath (ed.), Image – Music – Text (London: Fontana, 1977)
Binns R, ‘Analogies and Disanalogies between Machine-Driven and Human-Driven Legal Judgement’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law
Bowker GC and SL Star, Sorting Things Out: Classification and Its Consequences (Cambridge, Mass; London, UK: The MIT Press, 2000)
Brownsword R, ‘Lost in Translation: Legality, Regulatory Margins, and Technological Management’ (2011) 26 Berkeley Technology Law Journal 1321
Calo R, ‘Can Americans Resist Surveillance?’ (2016) 83 The University of Chicago Law Review 23
Chen J, ‘Linguistic Relativity and Programming Languages’ arXiv:1808.03916 [cs, stat] <http://arxiv.org/abs/1808.03916> accessed 16 April 2020
Cohen JE, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Connecticut: Yale University Press, 2012)
Danaher J, ‘The Threat of Algocracy: Reality, Resistance and Accommodation’ (2016) 29 Philosophy & Technology 245
Dewitz S, ‘Using Information Technology as a Determiner of Legal Facts’ in Z. Bankowski, I. White and U. Hahn (eds.), Informatics and the Foundations of Legal Reasoning, vol. 21 (Dordrecht: Springer Netherlands, 1995)
Diakopoulos N, ‘Algorithmic Accountability: Journalistic Investigation of Computational Power Structures’ (2015) 3 Digital Journalism 398
Diver L, ‘Law as a User: Design, Affordance, and the Technological Mediation of Norms’ (2018) 15 SCRIPTed 4
——, ‘Computational Legalism and the Affordance of Delay in Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law
——, Digisprudence: Code as Law Rebooted (Edinburgh: Edinburgh University Press, in press)
Diver LE, ‘Digisprudence: The Affordance of Legitimacy in Code-as-Law’ (PhD thesis, University of Edinburgh 2019) <https://era.ed.ac.uk/handle/1842/36567>
Donaghy M, Leachim6/Hello-World (Github, 2020) <https://github.com/leachim6/hello-world> accessed 11 June 2021
Dworkin R, Law’s Empire (Cambridge, Mass: Belknap Press, 1986)
Estrin T, ‘Women’s Studies and Computer Science: Their Intersection’ (1996) 18 IEEE Annals of the History of Computing 43
Everett D, ‘Cultural Constraints on Grammar and Cognition in Pirahã: Another Look at the Design Features of Human Language’ (2005) 46 Current Anthropology 621
Floridi L (ed), The Onlife Manifesto (Cham: Springer International Publishing, 2015)
Fuller LL, The Morality of Law (Yale University Press, 1977)
Gadamer H-G, Truth and Method (Joel Weinsheimer and Donald G Marshall trs, London: Bloomsbury, 2013)
Galloway AR, Protocol: How Control Exists after Decentralization (Cambridge, Mass: MIT Press, 2004)
Gillespie T, ‘The Relevance of Algorithms’ (2014) 167 Media technologies: Essays on communication, materiality, and society
Goldoni M, ‘The Politics of Code as Law: Toward Input Reasons’ in J. Reichel and A.S. Lind (eds.), Freedom of Expression, the Internet and Democracy (Leiden: Brill, 2015)
Golumbia D, The Cultural Logic of Computation (Cambridge, Mass: Harvard University Press, 2009)
Graham P, ‘Beating the Averages’ (2003) <http://www.paulgraham.com/avg.html> accessed 11 June 2021
Grimmelmann J, ‘Regulation by Software’ (2005) 114 The Yale Law Journal 1719
Gürses S and J van Hoboken, ‘Privacy after the Agile Turn’ in E. Selinger, J. Polonetsky and O. Tene (eds.), The Cambridge Handbook of Consumer Privacy (1st edn., Cambridge University Press, 2018)
Hart HLA, The Concept of Law (2nd edn., Oxford: Clarendon Press, 1994)
Hayles NK, ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis’ (2004) 25 Poetics Today 67
Heidegger M, Being and Time (John Macquarrie and Edward Robinson trs, Oxford: Blackwell, 1962)
Hildebrandt M, ‘Legal and Technological Normativity: More (and Less) than Twin Sisters’ (2008) 12 Techné: Research in Philosophy and Technology 169
——, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (London: Edward Elgar Publishing, 2015)
——, ‘Law As an Affordance: The Devil Is in the Vanishing Point(s)’ (2017) 4 Critical Analysis of Law 116
——, Law for Computer Scientists and Other Folk (Oxford, New York: Oxford University Press, 2020)
——, ‘The Adaptive Nature of Text-Driven Law’ (2021) 1 Journal of Cross-disciplinary Research in Computational Law
Hoeren T and S Pinelli, ‘Agile Programming – Introduction and Current Legal Challenges’ (2018) 34 Computer Law & Security Review 1131
Ihde D, Technology and the Lifeworld: From Garden to Earth (Indiana University Press, 1990)
‘Introducing Visual Studio IntelliCode’ <https://devblogs.microsoft.com/visualstudio/introducing-visual-studio-intellicode/> accessed 11 June 2021
Introna LD, ‘Hermeneutics and Meaning-Making in Information Systems’ in R.D. Galliers and W.L. Currie (eds.), The Oxford Handbook of Management Information Systems: Critical Perspectives and New Directions (Oxford University Press, 2011)
——, ‘The Enframing of Code: Agency, Originality and the Plagiarist’ (2011) 28 Theory, Culture & Society 113
Jeffery M, ‘What Would an Integrated Development Environment for Law Look Like?’  MIT Computational Law Report <https://law.mit.edu/pub/whatwouldanintegrateddevelopmentenvironmentforlawlooklike/release/2> accessed 11 June 2021
Kerr I, ‘The Devil Is in the Defaults’ (2017) 4 Critical Analysis of Law
Kesan JP and RC Shah, ‘Setting Software Defaults: Perspectives from Law, Computer Science and Behavioral Economics’ (2006) 82 Notre Dame Law Review 583
Kittler FA, ‘Protected Mode’ in J. Johnston (ed.), S. Harris (tr.), Literature, Media, Information Systems: Essays (Psychology Press, 1997)
Knuth DE, ‘Literate Programming’ (1984) 27 The Computer Journal 97
Krajewski M, ‘Against the Power of Algorithms: Closing, Literate Programming, and Source Code Critique’ (2019) 23 Law Text Culture 119
La Torre M, ‘Reform and Tradition: Changes and Continuities in Neil MacCormick’s Concept of Law’ in A.J. Menéndez and J.E. Fossum (eds.), Law and Democracy in Neil MacCormick’s Legal and Political Theory: The Post-Sovereign Constellation (Dordrecht: Springer Netherlands, 2011)
Laiti O, ‘The Ethnoprogramming Model’, Proceedings of the 16th Koli Calling International Conference on Computing Education Research (Koli, Finland: Association for Computing Machinery, 2016)
Levi-Strauss C, The Savage Mind (University of Chicago Press, 1966)
Ma M, ‘Writing in Sign: Code as the Next Contract Language? ✍️ ⏭ 💻’  MIT Computational Law Report <https://law.mit.edu/pub/writinginsign/release/1> accessed 11 June 2021
MacCormick N, Law as Institutional Fact (Edinburgh: University of Edinburgh, 1973)
——, ‘Reconstruction after Deconstruction: A Response to CLS’ (1990) 10 Oxford Journal of Legal Studies 539
——, Institutions of Law: An Essay in Legal Theory (New York: Oxford University Press, 2007)
Marino MC, ‘Critical Code Studies’ (2006) electronic book review
——, Critical Code Studies (Cambridge, Massachusetts: The MIT Press, 2020)
McQuillan D, ‘Data Science as Machinic Neoplatonism’ (2018) 31 Philosophy & Technology 253
Nasser R, Nasser/--- (Github, 2020) <https://github.com/nasser/---> accessed 11 June 2021
Neff G and DC Stark, ‘Permanently Beta: Responsive Organization in the Internet Era’ (Columbia University Institute For Social And Economic Research And Policy, 2002)
Ong WJ, Orality and Literacy: The Technologizing of the Word (3rd edn., London: Routledge, 2012)
Papert S, ‘Different Visions of Logo’ (1985) 2 Computers in the Schools 3
Pasquale F, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge: Harvard University Press, 2015)
Polack P, ‘Beyond Algorithmic Reformism: Forward Engineering the Designs of Algorithmic Systems’ (2020) 7 Big Data & Society 205395172091306
Radbruch G, ‘Legal Philosophy’ in K. Wilk (ed.), The Legal Philosophies of Lask, Radbruch, and Dabin (Cambridge, MA and London, England: Harvard University Press, 1950)
Ricoeur P, Interpretation Theory: Discourse and the Surplus of Meaning (TCU Press, 1976)
——, Freud and Philosophy: An Essay on Interpretation (Denis Savage tr, Yale University Press, 1977)
Sapir E, ‘The Status of Linguistics as a Science’ (1929) 5 Language 207
Schlesinger A, ‘Feminism and Programming Languages’ <https://www.hastac.org/blogs/ari-schlesinger/2013/11/26/feminism-and-programming-languages> accessed 11 June 2021
Schulz W and K Dankert, ‘“Governance by Things” as a Challenge to Regulation by Law’ (2016) 5 Internet Policy Review
Searle JR, The Construction of Social Reality (New York: Free Press, 1995)
Shklar JN, Legalism (Harvard University Press, 1964)
Swartz P, ‘A Tower of Languages’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
——, ‘How Do Programs Mean?’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
——, ‘The Poetics of Programming’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
——, ‘White Boys’ Code’, Division III: Essays in Programs as Literature (Hampshire College, 2007)
Taylor C, ‘To Follow a Rule’, Philosophical arguments (Cambridge, MA: Harvard University Press, 1995)
Tedre M et al., ‘Ethnocomputing: ICT in Cultural and Social Context’ (2006) 49 Communications of the ACM 126
‘The Rule of Least Power’ (W3C, 2006) <https://www.w3.org/2001/tag/doc/leastPower.html> accessed 11 June 2021
van den Berg B and RE Leenes, ‘Abort, Retry, Fail: Scoping Techno-Regulation and Other Techno-Effects’ in M. Hildebrandt and J. Gaakeer (eds.), Human Law and Computer Law: Comparative Perspectives (Dordrecht: Springer Netherlands, 2013)
van Dijk N, ‘The Life and Deaths of a Dispute: An Inquiry into Matters of Law’ in K. McGee (ed.), Latour and the Passage of Law (Edinburgh University Press, 2015)
Vismann C and M Krajewski, ‘Computer Juridisms’ (2007) 29 Grey Room 90
Waldron J, ‘The Rule of Law and the Importance of Procedure’ (2011) 50 Nomos 3
Weizenbaum J, Computer Power and Human Reason: From Judgment to Calculation (San Francisco: Freeman, 1976)
Wexelblat RL, ‘The Consequences of One’s First Programming Language’ (1981) 11 Software: Practice and Experience 733
Whorf BL, ‘Science and Linguistics’ in J.B. Carroll (ed.), Language, thought, and reality: selected writings of Benjamin Lee Whorf (28th edn., Cambridge, Mass: The MIT Press, 2007)
Wintgens L, ‘Legisprudence as a New Theory of Legislation’ (2006) 19 Ratio Juris 1
——, Legisprudence: Practical Reason in Legislation (Surrey: Routledge, 2012)
Wittgenstein L, Philosophical Investigations (GEM Anscombe tr, Oxford: Blackwell, 1968)