Jerrold Soh is Assistant Professor at the Yong Pung How School of Law and Deputy Director of its Centre of Computational Law. His work explores how technology can, will, and should be used to improve the legal system. Drawing on his training in law, economics, and programming, he has written and published work on legal text analytics, the physics of law, legal applications of blockchain, and AI liability. Jerrold teaches these things too, alongside more traditional subjects like torts and business law. He would be happy for young(er), future lawyers to follow or contact him over LinkedIn or Twitter, though he can’t guarantee he can be of assistance to everyone.
*Opinions expressed here are solely Jerrold’s own and do not represent the views or opinions of his employer.
what aspects of law and technology intrigue you the most? when did you first find an interest in it?
I learnt web development before entering law school and ran an A-level study notes website all throughout my undergrad. At first, my programming and law sides were mostly kept separate. There was not much scope to apply one to the other. When I began to intern at law firms, however, I saw a number of practices which made me think “hmm, actually programmers have better ways of doing similar things”. Many of these were basic things like version control, document management, templating, document style sheets, even naming conventions. As I began to think more about how my programming knowledge could help me be a better lawyer, there happened to be a landslide of news (circa 2016) about the impending disruption of law by technology. Articles would speak of AI judges and AI lawyers coming for our jobs. As any lawyer worth their salt would tell you, understanding the opponent is crucial. I figured that I had better find out what this legal AI stuff was really about; given my tech background, I had no excuse. I quickly learnt that the truth was very different from what the tabloids (and even some allegedly reputable publications) painted. Far from being something to be afraid of, AI (and legal tech more generally) was something I could be building myself. This was precisely where my programming knowledge could help significantly.
“Far from being something to be afraid of, AI (and legal tech more generally) was something I could be building myself.”
how do you think your interdisciplinary background of law, economics, and computational skills has benefitted you in your work and changed the way you approach legal cases/issues?
I could not do, and probably would not have gotten, the job I do today if not for my interdisciplinary background. My approach to research almost always starts from technology. Even when I am not actively programming something to analyse law, I am trying to identify the core aspects of some technology that truly present a challenge to existing law. This has to be premised on a deep understanding of what the technology is. A vague picture created by reading puff pieces and tabloids does not cut it. A quick example: when lawyers and regulators talk or write about AI, you can usually tell quite quickly whether they really know the technology by looking at how often they refer to science fiction or Hollywood movies.
My background training in economics and computer science gives me the confidence to read the scientific papers and reports that foundationally describe the technology, and I often think of myself as a translator of these things for lawyers.
In terms of how I approach legal cases and issues, I’ll just say that disciplinary perspectives are a bit like Instagram filters. Putting on one versus the other quickly helps one see what appears to be the same picture in a radically different way. Law trains us to see things in sepia. Economic and computational thinking on the other hand prioritises things like strategy, optimisation, logic, abstraction and, most importantly, math. All of today’s so-called AI is math, so it is crucial to at least see the math. Legal training is not helpful in this regard.
“…disciplinary perspectives are a bit like Instagram filters. Putting on one versus the other quickly helps one see what appears to be the same picture in a radically different way.”
do you believe that singapore’s justice system is comparatively ahead of other countries in terms of incorporating technology into law? why/why not?
To sensibly answer this, I might have to unpack a few things. First, what do we mean by ahead? If it means the sheer number of (units of) technology that has been incorporated into law, I might say yes, Singapore has built and continues to build legal technology into its justice system. A recent example is our Motor Accident simulator, which I believe is the first instance of a court-built outcome simulator made available to the general public anywhere in the world (except China, though I would need to double-check what the Chinese court AI really does). Of course, you would realise that counting technology units isn’t a sensible metric. The goal is not more technology but more justice. And justice of course means different things to different people, and in different times and places. Take Indonesia and the Philippines as examples. Promoting justice in a country with a few hundred million people spread across tens of thousands of islands would involve different types and extents of technologies relative to Singapore.
The other problem is defining “incorporation into law”. The legal system has many stakeholders. Not all are equally tech-savvy. Our government and court institutions have been very forward with legal tech. Our law firms are a bit less so (relative to firms in the UK and US). So I can’t really give a full answer. If you’d like a nuanced one, I’d direct you to reports we’ve published in 2019 and 2020 on the State of Legal Innovation in the Asia-Pacific region. I think the better question to ask in light of all this is: how well is Singapore’s justice system doing in terms of using technology to promote justice? We have made good progress in recent years, but there is still some way to go.
how might ai help to reduce bias in our legal systems, if at all?
I’ll answer this question with respect to legal systems in general, not just Singapore’s, because the kinds of biases that could arise vary a lot across legal systems. In broad strokes, there are attribute biases, where the system discriminates against people based on some disfavoured attribute (race, gender, etc). There are also cognitive biases, like the example raised in Freakonomics that judges could sentence more leniently after lunch. I believe the most promising use of AI in this regard lies in bias detection. Without going too much into AI, the machine learning systems you hear so much about today are mostly complex statistical algorithms. They are great at calculating averages and standard deviations, and are thus great at detecting numerical outliers. If a judge has consistently been giving out sentences above all of their peers, AI can quickly identify that, and can, if built right, also account for other variables (e.g. mitigating factors) that could explain this. China has apparently built such an AI system.
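The outlier-detection idea described above can be sketched in a few lines of code. This is a toy illustration only: the judge names and sentence figures are invented, a real system would control for case mix and mitigating factors, and the simple leave-one-out z-score below stands in for the far more complex models an actual court system would use.

```python
# Toy sketch: bias detection as statistical outlier detection.
# All data here is invented for illustration.
avg_sentences = {  # each judge's average sentence (months) over comparable cases
    "Judge A": 14.2,
    "Judge B": 15.1,
    "Judge C": 13.8,
    "Judge D": 24.9,  # consistently above peers
    "Judge E": 14.5,
}

def flag_outliers(averages, threshold=2.0):
    """Flag judges whose average sentence lies more than `threshold`
    standard deviations from the mean of their peers (leave-one-out,
    so a judge's own figure doesn't inflate the baseline)."""
    flagged = {}
    for name, value in averages.items():
        peers = [v for n, v in averages.items() if n != name]
        mean = sum(peers) / len(peers)
        std = (sum((p - mean) ** 2 for p in peers) / len(peers)) ** 0.5
        z = (value - mean) / std
        if abs(z) > threshold:
            flagged[name] = round(z, 2)
    return flagged

print(flag_outliers(avg_sentences))  # only Judge D is flagged
```

Note that flagging is only the detection step; as discussed next, interpreting why a judge is an outlier still requires human investigation.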
To be clear, detecting bias is only the first step towards reducing it. AI alone cannot solve bias simply because bias comes from human institutions and society. AI can detect statistical anomalies, but we would still need to qualitatively investigate the true causes of this anomaly, and put in place systems to de-bias our processes. If the judge has really been sentencing with reckless abandon, that might mean firing them. But we might find that the anomaly arose because this one judge has just been assigned all the serious cases. Then perhaps nothing needs to be done. Or, we might learn that judges have been applying certain precedents or sentencing guidelines wrongly. Then those guidelines need to be clarified.
“AI can detect statistical anomalies, but we would still need to qualitatively investigate the true causes of this anomaly, and put in place systems to de-bias our processes.”
Also, you would probably have read a number of tabloid articles about evil, biased AI threatening to undermine all our human rights. Machine learning AI learns from data generated by society and will therefore reproduce societal biases. Ironically, this is why they are great at helping us detect biases. If we dip iodine-laced paper into a clear solution and the paper turns blue, we know there’s starch in the solution. In the same way, if we train AI on some data and the AI comes out racist, we know that data (and the system which produced that data) is racist as well. This insight actually produced a series of influential scientific papers on how AI could be used to measure sexism in the English language, because researchers found that AI models trained on English datasets were consistently putting “doctor” closer to “male” and “nurse” closer to “female”.
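The “doctor closer to male” finding rests on measuring distances between word vectors. The sketch below shows the arithmetic with tiny hand-made vectors; the actual studies used embeddings (e.g. word2vec or GloVe) trained on large English corpora, so the numbers here are purely illustrative.

```python
# Toy illustration of embedding-based bias measurement.
# Vectors are invented; real studies use trained word embeddings.
import math

vectors = {
    "male":   [0.9, 0.1, 0.3],
    "female": [0.1, 0.9, 0.3],
    "doctor": [0.8, 0.3, 0.5],  # skewed toward "male" in this toy data
    "nurse":  [0.2, 0.8, 0.5],  # skewed toward "female"
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# In a biased embedding, "doctor" sits closer to "male" than to "female":
print(cosine(vectors["doctor"], vectors["male"]))
print(cosine(vectors["doctor"], vectors["female"]))
```

Comparing many such word pairs systematically is, in essence, how those papers quantified bias in the underlying language data.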
we are now, more than ever, hearing phrases like “crypto”, “nfts”, “fintech” being thrown around in our daily lives. what notable legal issues do you think might arise from their increasing prominence? (perhaps you could draw on your personal experience in these areas)
“Fintech” has been around for a very long time — even longer than legaltech. “Crypto” has theoretically been here since Nakamoto’s 2008 paper, but really only came into the public consciousness within the last few years when, partly also due to NFTs, people started to see and sell it as a way to get rich quick.
I would say the societal and economic implications of these are more interesting than the legal. The rise of crypto-millionaires and huge crypto projects, some of whom might have made their pot of gold through unsavoury means, makes a mockery of hard work and ethical living. If I were younger by about 10 years, I would surely question why I am being told to study hard and get a stable and meaningful job. The implications of a generation growing up amidst all this, a pandemic, and a climate crisis are interesting to think about, to say the least.
Legally, there are all sorts of interesting theoretical issues that academics have raised about whether cryptocurrencies count as ‘property’, whether they can and should be regulated as ‘securities’, even the extent to which blockchain tech can be used in the legal system (say, as a system to prove ownership for intellectual or real property). But the main practical questions will probably be the ones about what happens in the wake of a crypto-fraud. How far can property in the coins be traced back or returned to the victims of a scam? To what extent could intermediaries (e.g. celebrities pushing a drop) also be held liable?
and finally, how do you think the legal industry is going to evolve five, ten years down the line? what changes can aspiring lawyers look forward to?
As many who are more qualified than me have said or written, the legal industry of the future will be more globalised, more technological, and more multidisciplinary. Globalisation means work (and work hours) will cut across many geographical zones. Because of this, it will get a lot more complex. Technologically, there is no question that legal practice will involve using more and more types of software beyond just Microsoft Word. This was galvanised by the pandemic, but has been a trend for longer than that. A very practical tip I want to share here is for aspiring lawyers to make sure they have the mental openness to pick up new software programs, especially but not only Microsoft Excel and PowerPoint. This is related to multidisciplinarity. Increasingly, legal roles (in Big Law at least) will put you on teams that might also comprise career legal engineers, knowledge managers, project managers, accountants, economists, coders, etc.
“…the legal industry of the future will be more globalised, more technological, and more multidisciplinary.”
These professionals’ software suite extends far beyond Word. If you’d like to communicate and work well with them, be able to speak their language; don’t expect them to speak yours. I’m not sure if these are all things that aspiring lawyers can ‘look forward’ to; some might very well see these as absolute chores. I know that I myself would be a lot happier in a role where I get to work with a variety of professionals to solve challenging, complex problems by making use of the latest, cutting-edge technologies, as compared to being confined in a room full of documents alone reading file after file after file. My view might, of course, be biased by where my path has taken me.