Some Thoughts on AI

I am in a weird social position when it comes to the discourse over large language models, or artificial intelligence more generally.

Politically, I align with the socialist left and have since I was a teenager. On the internet, I am most well-known as a left-wing economic policy guy who runs the People’s Policy Project think tank and helps left-wing politicians — like Bernie Sanders and Zohran Mamdani — with their policy agendas. I also practice union-side labor law, maintain the NLRB Research database, and publish the NLRB Edge newsletter, which I’m told has been helpful in a small way to the labor movement.

But I am also really into coding, mostly as a hobby and as something I can use for my various other projects, but not as something I have ever wanted to make a career out of. I used to be a lot better at it. When I was a teenager, I got really into Linux and Bash scripting and even contributed some system scripts to a couple of Linux distributions (my most lasting contribution was this rankmirrors script, which is still used by the Arch Linux package manager). Later on, I got very interested in Python, especially when I wanted to learn statistical programming. I can still do both reasonably well, but, as the saying goes, if you don’t use it, you lose it, and that has happened to some extent with me, since I spend most of my work time on legal work and public writing and most of my free time on my kids.

For various reasons — some understandable, others not — there is a lot of reflexive opposition to AI/LLMs on the left. But even though I am of the left, this has not been my reflexive response to the technology, perhaps because of my hobbyist coding background. Instead, when it was first introduced a few years ago, I was very intrigued by it. As it has gotten better, that intrigue has only grown. What generative AI does and how it does it is genuinely amazing, and I have found so many ways to make use of it, both to speed up work I already do and to enable me to do work I didn’t previously have the time or ability to do.

What I Do

For example, I have not written any statistical programming code in many months, and I don’t see why I would ever write any such code again. I know how to do it, but I don’t need to. Instead, I can take my extensive knowledge of the various public microdata sets and simply prompt an LLM harness, like Claude Code, to do it for me, after which I can eyeball the output to make sure it looks right. This is not complicated coding by any means, but handing it off reduces an hour-long task to a few seconds.
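To make concrete what kind of code I mean, here is a minimal sketch of a typical one-off task: computing a weighted statistic from a CPS extract. The file name and the IPUMS-style column names (AGE, INCTOT, ASECWT) are illustrative, not any particular project of mine.

```python
# A sketch of a typical one-off microdata task. The file and the
# IPUMS-style column names (AGE, INCTOT, ASECWT) are illustrative.
import pandas as pd

df = pd.read_csv("cps_extract.csv")

# Restrict to working-age adults.
adults = df[(df["AGE"] >= 25) & (df["AGE"] <= 64)]

# Survey microdata requires person weights for valid population estimates.
weighted_mean_income = (
    (adults["INCTOT"] * adults["ASECWT"]).sum() / adults["ASECWT"].sum()
)
print(f"Weighted mean income, ages 25-64: {weighted_mean_income:,.0f}")
```

Nothing here is hard. The point is that describing it in English is now faster than typing it, and the result is easy to check by eye.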

For another example, my NLRB Edge publication tracks all new documents put out by the agency using scrapers and the NLRB Research database, both of which I used AI coding assistants to help construct (I did a lot of this before those assistants got as good as they are now, but even in their more error-prone days, they were helpful). Those new documents are then summarized by an LLM and posted to the NLRB Edge newsletter, which has 13,000 subscribers.
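The summarization step is conceptually simple. Here is a minimal sketch of that kind of pipeline, assuming the Anthropic Python SDK; fetch_new_documents() is a hypothetical stand-in for the scrapers, and the model name and prompt are illustrative rather than my actual setup.

```python
# Minimal scrape-and-summarize pipeline sketch using the Anthropic
# Python SDK. fetch_new_documents() is a placeholder for real scrapers.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def fetch_new_documents() -> list[str]:
    """Placeholder: in reality this would scrape the agency's new filings."""
    return ["...full text of a new NLRB decision..."]

def summarize(document_text: str) -> str:
    response = client.messages.create(
        model="claude-opus-4-5",  # illustrative model name
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": "Summarize this NLRB document for a labor-law "
                       "newsletter in 3-5 sentences:\n\n" + document_text,
        }],
    )
    return response.content[0].text

for doc in fetch_new_documents():
    print(summarize(doc))
```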

More recently, I spent a lot of time with Claude Code and the Opus 4.5 model working through how to reliably pull down election, case, and docket data from the NLRB website. This is not straightforward because the NLRB website is a mess: the parts that serve up this kind of information have a bunch of frustrating technical problems. Claude Code struggled with them too. At one point, it even sharply concluded that the NLRB had done “sloppy database work,” which is the meanest I have ever seen it be. But by bouncing ideas back and forth with Opus 4.5 and having it quickly write code to test them, I eventually figured out how to produce the clean, comprehensive, self-updating datasets I wanted.
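The “reliably” part mostly comes down to defensive plumbing. Here is a sketch of the sort of retry wrapper that ends up everywhere in a project like this; the URL and parameters are hypothetical, not actual NLRB endpoints.

```python
# Retry-with-backoff wrapper for a flaky data endpoint. The URL and
# parameters in the usage example are hypothetical.
import time
import requests

def fetch_with_retries(url: str, params: dict, attempts: int = 5) -> dict:
    for attempt in range(attempts):
        try:
            response = requests.get(url, params=params, timeout=30)
            response.raise_for_status()
            return response.json()
        except (requests.RequestException, ValueError):
            # Back off exponentially before retrying: 1s, 2s, 4s, ...
            time.sleep(2 ** attempt)
    raise RuntimeError(f"Gave up on {url} after {attempts} attempts")

# Example: data = fetch_with_retries("https://example.gov/api/cases", {"page": 1})
```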

For legal research, the LLMs have proven less helpful than I once thought they would be because they all have limited “context windows,” which refers to how much material you can feed the model along with your prompt. But even there, I have figured out ways to make them useful. One is to narrow the legal documents I want information from through a more conventional search first and then feed only those matches to the LLM (this is called RAG, for retrieval-augmented generation). Another, even more useful path has been to take a large chunk of cases where I think the information might be, have an LLM go through them one at a time and summarize each, and then feed all of the summaries back to the LLM and prompt it from there (this is called Map/Reduce). I’ve used this approach to generate a legal reference book about the NLRB that summarizes the most-cited points of NLRB case law (to be released soon).
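A minimal sketch of the Map/Reduce approach, again assuming the Anthropic SDK; the case texts, the model name, and the final question are all illustrative.

```python
# Map/Reduce over a pile of cases: summarize each one separately (map),
# then answer against the combined summaries (reduce). Inputs are
# illustrative.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    response = client.messages.create(
        model="claude-opus-4-5",  # illustrative model name
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

cases = ["...full text of case 1...", "...full text of case 2..."]

# Map: each case is summarized in its own call, so no single request
# has to fit the whole pile into the context window.
summaries = [
    ask("Summarize the key legal holdings in this NLRB case:\n\n" + text)
    for text in cases
]

# Reduce: the real question is asked once, against the condensed material.
answer = ask(
    "Based on these case summaries, what are the most-cited points of law?\n\n"
    + "\n\n".join(summaries)
)
print(answer)
```

The trade is more API calls in exchange for an effectively unlimited corpus size, which is exactly what the context window problem calls for.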

Even in my free time with my kids, it has provided amusement: we make simple JavaScript games together, which we have posted at GamesAxolotl.com.

Skepticism

The skepticism I’ve read about AI/LLMs comes in various forms, and it’s important not to lump them all together:

  1. Skepticism of the technology itself. This is the most indefensible type of skepticism, but also the least concerning. If something like this actually works, it will prove itself because people will make use of it. Of course, they already are making use of it, but it’s early days, and many sectors remain unaffected. Sometimes, this skepticism comes along with examples of it not working. But everything has an error rate, and things only work if you use them correctly. On this point, I think it’s useful to distinguish between people who just prompt general-knowledge chatbots without providing any of their own material (“context”) to work with and people who prompt the LLMs along with context (see the sketch after this list). A lot of regular people have only ever experienced the former use of LLMs, but that is the most error-prone. Professional applications of LLMs all use context. Indeed, much of the design of those applications is dominated by context management questions.
  2. Skepticism of the valuation of the technology. This is what most of the “AI bubble” discourse is about. This particular skepticism can be fueled by (1), but is logically distinct from it. After all, it could be that the technology works and is really useful, but also that financial markets think it is 10x more valuable than it actually is or will be. This seems like very reasonable skepticism to me, but it is also not that interesting, and hard to really evaluate. Overvaluation of companies and sectors happens all the time. If you can spot it and also time the correction, you can make a lot of money shorting it. But that’s easier said than done.
  3. Skepticism of the distributive effects of the technology. A labor-saving technology like this will necessarily result in the reallocation of labor (i.e., people having to change their jobs, which also involves spells of unemployment). The companies that win the scramble for market share will end up minting many billionaires, and the creation of extremely rich industrialists can have hugely negative impacts on American society and politics, as we’ve seen. These concerns also seem quite reasonable to me, but I see them as valid critiques of the capitalist system, not of LLMs. It’s a great case for socialism, but not really a case against any particular labor-saving technology. I’ve written extensively about the fact that capitalism distributes the gains from economic production and innovation in a completely insane and indefensible way, but that’s a problem with capitalism, not with economic production and innovation per se.
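Here is the context distinction from item (1) in code form: the same kind of question asked bare versus asked against material you supply. The SDK usage is real; the case name and file are hypothetical.

```python
# The two modes of use from item (1). Acme Widgets is a hypothetical
# case name and acme_widgets.txt a hypothetical local file.
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    response = client.messages.create(
        model="claude-opus-4-5",  # illustrative model name
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Context-free: the model answers from training data alone, which is
# where hallucinated holdings and citations come from.
ask("What did the NLRB hold in Acme Widgets?")

# With context: the model answers against material you supply, which is
# how professional applications work.
decision_text = open("acme_widgets.txt").read()
ask("Here is an NLRB decision:\n\n" + decision_text + "\n\nWhat did the Board hold?")
```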

One of the ironies of the left-tinted skepticism of AI is that it seems in part to be related to dislike of the tech sector, which is also very reasonable. But the sector that is going to be most shaken up by this is the tech sector. The labor that is being replaced is tech labor, coding labor. Whether this results in a total decimation of tech head counts (same output, but now done by LLMs) or in an expansion of tech output (more output, enabled by LLMs) is hard to say at this point. But if you don’t like the tech sector, a new technology that makes coding trivially easy seems like exactly the thing you would invent to stick it to them!

For me, what I ultimately like is the technology, not any particular story I have about what its distributive results will be. It reminds me of what it was like when I first figured out that you can use a computer not just to run other people’s software, but to write your own, to have it do whatever you want. It reminds me of what it was like when I first got into open source software and the free software movement, where you could read any code you wanted and therefore, with enough time, figure out how to do whatever you wanted. This is all of that on steroids, and it’s very nice to have a tool that enables you to do a lot more of what you want to do than you could before.