On DoNotPay and the AI Lawyer Experiment
Building and scaling justice technology is a responsibility I hold very dear. Justice tech founders like Cami Lopez of PeopleClerk, Sonja Ebron of Courtroom5, Devshi Mehrotra and Leslie Jones-Dove of JusticeText, and I come from communities directly impacted by the justice gap; we set out to solve justice-related problems because they are deeply personal to us and deeply important to humanity. That is why the DoNotPay Twitter debate was so painful to watch.
There were a number of disappointments in these threads for me: the ‘gamification’ of others’ legal outcomes; the desire to ‘trick’ the system; the risk of justice tech as a category being ridiculed or dismissed wholesale; and, perhaps an umbrella over them all, the empowerment of justice innovation naysayers to shut down even bolder solutions for those in need in the future. A few responses:
We must build responsibly. The Legal Services Corporation reports that 92% of low-income individuals’ civil legal needs go inadequately met or unmet, and that 50% of qualified individuals are turned away by legal services organizations for lack of capacity. We *must* build bold tools for consumers, for self-represented litigants, for those arrested, and more, but we should build with an understanding of second- and third-order consequences. These are real humans, and the lines of code we write have real implications for them. We need to understand existing institutions and structures, and the opportunities within them, rather than ‘trick’ the system or ‘gamify’ a process, if we want to protect end users.
We should be moonshotting. My first thought: What if this AI chatbot earpiece experiment had actually worked? Would all of this conflict have been worth it? The National Immigrant Justice Center reports that “Not surprisingly, individuals with counsel are more likely to pursue relief from deportation and win their cases. Detained immigrants are 11 times more likely to pursue relief when they have legal counsel and are twice as likely to obtain relief than detained immigrants without counsel. Among unaccompanied children with representation, a Syracuse University analysis of immigration court data shows that 73 percent are allowed to remain in the United States whereas only 15 percent of unrepresented children are allowed to stay.” What if an AI chatbot could produce the same results? I believe that we have the capacity to build technology that is game-changing for populations who have been systemically excluded from our legal system, and that we can give individuals in need a fairer shot. So yes, we should be thinking about ways to apply new technology to the justice gap, even in the boldest of ways.
We should be collaborating. At Paladin, every big piece of functionality we build is developed alongside 6–12 legal services, law firm, or corporate partners to ensure we’re building the right thing and are aware of various use cases. We seek to involve the right stakeholders to gain valuable expertise and, ultimately, the buy-in that makes our solutions sustainable. In this case, there was an opportunity to work with innovative judges and court systems to test new tools that help individuals help themselves. I wish those conversations had happened first (or that we had known about them).
Investment in justice tech is still critical. Public-private partnerships are crucial to closing the justice gap, and one organization’s stunts should not discount the dozens of justice tech companies designing well-rounded solutions. We should learn from DoNotPay’s mistakes and continue iterating on and investing in new solutions to make them even better.
We know that lawyers cannot solve the justice gap alone. We know that technology cannot solve the justice gap alone. Only by combining the potential and expertise of each can we build meaningful solutions and help the tens of millions of people in the U.S. who need urgent access to our legal system. It’s not us versus them; it’s us versus the gap. We should act like it.