"AI Regulatory": Your Law Firm's Next Practice Group

AI Regulatory? Yes, I can see it now. In the list that drops down when I click on ‘Practices’ at the top of a law firm’s home page.

AI Regulatory? Yes, in retrospect, it will seem inevitable.

AI Regulatory? Right now, it is the legal practice area of the near future. At least for BigLaw and some specialist technology law firms, which will build dedicated groups focused on the law and regulation of artificial intelligence. Why? Because artificial intelligence, though hard to define precisely, is a cluster of emerging technologies poised to become pervasive and to touch every aspect of human existence.

As recently as 18 months ago, legal publications carried little news coverage of AI and its impact on the delivery of legal services. But over the last few months the floodgates seem to have opened. As Kira Systems, Neota Logic, RAVN and ROSS convert pilot applications into full-fledged engagements with law firms, we see almost daily reporting on AI and the law, covering both the uptake of AI systems and AI’s future impact on the legal profession. Much of the discussion – the hype – has been about the promise of “better, faster, cheaper” legal services and the anticipated impact on legal jobs. That’s understandable. And the debate grinds on between the futurists, who see a much-changed legal profession, and the traditionalists, who see little real threat to human lawyers and their role at the heart of legal services. But whatever the implications of AI for law firms as businesses and employers, the dawning reality of artificial intelligence brings with it the need for regulation.

In parallel with the social, economic, political, technological and philosophical issues raised by AI run legal and ethical issues. Testament to this is last week’s White House report, Preparing for the Future of Artificial Intelligence. It anticipates both challenges and opportunities, but it is also clear on one other point: regulation. The language is cautious and balanced, soft even. The report currently tips toward applying existing regulations (as is happening with driverless cars controlled by autonomous AI systems) in the spirit of not impeding AI development. Nonetheless, it signals that regulation is on the government’s mind. And the same is true elsewhere. Also last week, in the UK, a House of Commons committee called for an AI commission to be established to regulate artificial intelligence. And in New Zealand, the Institute of Directors has just issued a paper calling on its government to establish a high-level working group to consider the potential impact of AI technologies. AI-related task forces and government initiatives have also been established, or are being called for, in Canada, Singapore, Japan and South Korea. This demand for review and regulation is bound to catch on as interest groups and governments wake up to the future implications of AI.

While Preparing for the Future of Artificial Intelligence acknowledges that “broad regulation of AI research or practice would be inadvisable at this time” (apparently on the basis that the goals and structure of existing regulations are sufficient and can be adapted as necessary to account for the effects of AI), it seems clear that new and increased regulation will ensue “to safeguard the public good”. And, if nothing else, for legal services providers, regulation means opportunity.

A recent example is the launch of drone practices in 2014 and 2015. A number of firms (such as Hogan Lovells, Crowell & Moring, MoFo and Paul Hastings) now have teams of lawyers who advise on the issues created by the use of UAVs (unmanned aerial vehicles). With the increased focus on driverless cars and trucks, some of those practices are expanding to address autonomous vehicle (AV) systems. As governments around the world apply themselves to the vexed issues of artificial intelligence and its potential impact on the economy, public safety, healthcare, education, transportation, defense and more, it is now inevitable that they will move to legislate and regulate AI products and services. The White House report rightly draws a distinction between automated systems, where humans remain effectively in control, and autonomous systems, where the computer is self-managing. As autonomous AI systems become more common across industries, the questions of legal responsibility and liability now being asked about driverless cars and trucks will be asked again and again, in ever wider contexts.

Whether AI Regulatory will become a single identifiable area of law or remain an interdisciplinary practice requiring collaboration across multiple practice areas remains to be seen. Initially at least, the latter seems the more likely, given that an AI practice could draw not only on established regulatory lawyers but also on lawyers from a firm’s other practices, such as insurance, IP/IT, product liability, cybersecurity and litigation.

To end where I started: with inevitability. Perhaps it is ultimately inevitable that the legal advice on what AI-related law and regulation says will itself come from an AI.