This issue of the column features Adam Nicholas, who discusses the stickiness of 'ordinary meaning.'
In this issue of the Linguistic Etch-a-Sketch, I welcome my colleague, Adam Nicholas, a historical linguist at the University of Cambridge, as a guest contributor. In his piece, Adam interrogates a common legal construction, the notion of “ordinary meaning”, and offers his expert opinion on its use. The significance for computational law? As Adam reveals, it is precisely because ‘ordinary meaning’ resists definition that translation from law to code is not as intuitive as some might profess. Meaning is a far more nuanced and complex affair, one that cannot be resolved through logic alone. Without a sufficient understanding of how meaning operates and manifests in natural language, constructing law in code is unthinkable. Through his work, we hope to show how the science of language provides a blueprint for the future of legal text.
As a trained linguist, I find the concept of ‘ordinary meaning’ in statutory law to be of great interest. One must, of course, acknowledge that the legal system requires methods of enshrining and interpreting the letter of the law. However, the ‘ordinary meaning’ approach seems fundamentally (and perhaps inevitably) at odds with a tacit tenet of linguistic science: analysis must be descriptive rather than prescriptive. This tenet is largely owed to the fact that linguistic systems, for the most part, remain a mystery, or at least are still under heavy debate. The field of semantics, which explores word meaning, is one such area of contention. Rooted in the semiotics and structuralism of the turn of the 20th century, early theorists of meaning such as Ferdinand de Saussure and Gottlob Frege treated word meaning as a referential mapping between items (or properties) and their assigned symbols (words). In subsequent decades, various observations and developments prompted a shift towards mentalistic theories, which focus on the cognitive make-up behind word meaning.

A commonly cited mentalistic model of meaning is prototype theory, spearheaded by the psychologist Eleanor Rosch and introduced to linguistics by William Labov in the 1970s. Prototype theory holds that meaning revolves around category judgements made according to typicality. For instance, a dove or a robin tends to feel more typically bird-ish than a penguin or an emu; at the same time, all four creatures are recognized as birds since they share enough qualifying characteristics for the category bird. By contrast, puppies and dining tables, which lack such characteristics as feathers, wings, and eggs, do not qualify for the category bird at all. Although pure prototype theory suffers from certain weaknesses (as do all current semantic theories), including a lack of compositionality and unidentifiable prototypes, it is nevertheless a useful tool for modelling the inherent fuzziness and imprecision of word meaning. This troublesome characteristic is perhaps nowhere better exemplified than in cases such as Nix v. Hedden, White City v. PR Restaurants, and United Biscuits (UK) Ltd v. Revenue & Customs, in which the fuzzy meaning and categorization of food items (tomatoes, burritos, and Jaffa Cakes, respectively) had significant legal and financial implications. Moreover, this fuzziness extends beyond nouns (naming words) to all word classes; a prime example is the interpretation of the verb ‘to use (a firearm)’ in Smith v. United States.
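To make the prototype intuition concrete, here is a minimal, purely illustrative sketch (in Python) of graded category membership: each candidate is scored by how many features it shares with a hand-picked ‘bird’ prototype, so that robins and doves come out as more typical than penguins and emus, while puppies and dining tables fall outside the category altogether. The feature lists and the cut-off are invented for the sake of the example; they are not drawn from Rosch’s or Labov’s experimental materials.

```python
# Toy sketch of prototype-style categorisation: membership in the
# category "bird" is graded by overlap with a set of typical features.
# The features, exemplars, and threshold below are invented for
# illustration only.

BIRD_PROTOTYPE = {"feathers", "wings", "lays eggs", "flies", "sings", "small"}

EXEMPLARS = {
    "robin":        {"feathers", "wings", "lays eggs", "flies", "sings", "small"},
    "dove":         {"feathers", "wings", "lays eggs", "flies", "small"},
    "penguin":      {"feathers", "wings", "lays eggs", "swims"},
    "emu":          {"feathers", "wings", "lays eggs", "large"},
    "puppy":        {"fur", "four legs", "small"},
    "dining table": {"four legs", "wooden"},
}

def typicality(features: set[str], prototype: set[str] = BIRD_PROTOTYPE) -> float:
    """Share of prototype features the candidate exhibits (0.0 to 1.0)."""
    return len(features & prototype) / len(prototype)

if __name__ == "__main__":
    for name, feats in EXEMPLARS.items():
        score = typicality(feats)
        verdict = "bird" if score >= 0.5 else "not a bird"  # arbitrary cut-off
        print(f"{name:12s}  typicality={score:.2f}  -> {verdict}")
```

Even this toy model exposes the difficulty facing any codified notion of ‘ordinary meaning’: someone still has to choose the features and the threshold, and those choices are exactly where the fuzziness re-enters.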
The fact is that meaning in language is not clear-cut, and modelling ‘ordinary’ or unequivocally accepted usage seems theoretically unattainable. Part of the complication, alongside this inherent fuzziness, is that language, and therefore word meaning, never stands still: it undergoes constant change and innovation. No matter the age of the reader, it is surely possible to identify words typical of one generation which are eschewed by another. Indeed, every word in every language undergoes change, and often dramatically so. For instance, ‘quick’ used to mean ‘alive’, ‘with’ once meant ‘against’, and ‘gay’ shifted from ‘jolly’ to ‘homosexual’ within the span of a single lifetime. Linguistic change, then, can be rapid, and the implications when reading a legal text written a couple of years, decades, or even centuries ago are surely not to be taken lightly.
A common remedy in the search for ‘ordinary’ meaning is to invoke a legal or general dictionary, such as Black’s Law Dictionary or the Oxford English Dictionary. However, referring to a language such as English as a single, uniform phenomenon is a gross (although convenient) oversimplification. Whether intentionally or not, all language use is subject to the geographical, socio-political and cultural circumstances of the author and to the linguistic norms of the day (which, as discussed, are in a constant state of change). Even dictionaries and their definitions are therefore subject to the same inherent fuzziness, as well as to the indelible stamp of the lexicographers who so diligently compile them in a particular time and place. Moreover, with language changing so rapidly, dictionaries are outdated even at the point of first publication. Finding a dictionary to aid in the interpretation of the particular language recorded in a particular statute on a particular date in a particular state is thus a considerable challenge, and one must be wary of assuming that dictionaries provide a panacea.
Overall, then, speakers of language and writers of law find themselves caught in a maelstrom of definitional fuzziness and intense, inevitable linguistic variation. Quite frankly, discovering a method for establishing a truly recognized ‘ordinary’ meaning would be miraculous.
Header image generated with Wombo