I believe (correct me if I’m wrong) their point is that, over time, we’re writing less code ourselves and more through LLMs. This can make people disconnected from the “joy” of using certain programming languages over others. I’ve only used CL for toy projects and use Elisp to configure my editor. As models get better (they’re already very good), the cost of trashing code spirals downwards. The nuance of one language being aesthetically better than another will matter less over time.
FWIW, I also think performant languages like Rust will gain way more prominence. Their main downside is that they’re more “involved” to write, but they’re fast and have good type systems. If humans aren’t writing code directly anymore, would a language being simpler or cleverer to read and write ultimately matter? Why would you ask a model to write your project in Python, for instance? If only a model will ever interact with the code, the choice of language becomes purely functional. I know we’re not fully there yet, but the latest models like Opus 4.6 are extremely good at reasoning and often one-shot solutions.
Going back to lower level languages isn’t completely out of the picture, but models have to get way better and require way less intervention for that to happen.
I used to appreciate Lisp for the enhanced effectiveness it granted to the unaided human programmer. It used to be one of the main reasons I used the language.
But a programmer+LLM is going to be far more effective in any language than an unaided programmer is in Lisp—and a programmer+LLM is going to be more effective in a popular language with a large training set, such as Java, TypeScript, Kotlin, or Rust, than in Lisp. So in a world with LLMs, the main practical reason to choose Lisp disappears.
And no, LLMs are doing more than just generating text, spewing nonsense into the void. They are solving problems. Try spending some time with Claude Opus 4.6 or ChatGPT 5.3. Give it a real problem to chew on. Watch it explain what's going on and spit out the answer.
> But a programmer+LLM is going to be far more effective in any language than an unaided programmer is in Lisp—and a programmer+LLM is going to be more effective in a popular language with a large training set, such as Java, TypeScript, Kotlin, or Rust, than in Lisp. So in a world with LLMs, the main practical reason to choose Lisp disappears.
You are working on the assumption that humans don't ever need to look at the code again. At this point in time, that is not true.
The trajectory over the last 3 years does not lead me to believe that it will be true in the future.
But let's assume that in some future it is true: if that is the case, then Lisp is a better representation than those other languages for LLMs to program in. After all, why have the LLMs write in JavaScript (or Java, or Rust, or whatever), which a compiler frontend parses into an AST, which then gets lowered into machine code?
Much better to program in the AST itself.
IOW, why program in an intermediate language like JS, Java, or Rust when you can program in the lowered language?
For humans, using JS, Java, or Rust lets us verbosely describe the AST in terms humans can understand; however, the more compact AST is unarguably better for the way LLMs work (token prediction).
So, in a world where all code is written by LLMs, using an intermediate verbose language is not going to happen unless the prompter specifically forcibly selects a language.
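The "program in the AST itself" point has a concrete basis: Lisp source read in is already a plain data structure. A minimal Clojure sketch (purely illustrative; it says nothing about how LLM tokenizers actually see the text):

```clojure
;; A Lisp program is literally its own syntax tree: reading source text
;; yields nested lists that can be inspected and evaluated as data.
(def form '(+ 1 (* 2 3)))  ; quoted code is just a nested list
(first form)               ; => +  (the operator is the head of the list)
(eval form)                ; => 7  (evaluate the tree directly)
```

There is no separate parse step to recover structure; the reader output is the tree.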
> The trajectory over the last 3 years does not lead me to believe that it will be true in the future.
Everything changed in November of 2025 with Opus 4.5 and GPT 5.2 a short time later. StrongDM is now building out complex systems with zero human intervention. Again, stop and actually use these models first, then engage in discussion about what they can and can't do.
> But, lets assume that in some future, it is true: If that is the case, then Lisp is a better representation than those other languages for LLMs to program in; after all, why have the LLMs write in Javascript (or Java, or Rust, or whatever), which a compiler backend lowers into an AST, which then gets lowered into machine code.
That's your human brain thinking it knows better. The "bitter lesson" of AI is that more data = better performance, and even if you try to build a system that encapsulates human-brain common sense, it will be trounced by a system simply trained on more data.
There is vastly, vastly more training data for JavaScript, Java, and Rust than there is for Lisp. So, in the real world, LLMs perform better with those. Unlike us, they don't give a shit about notation. All forms of token streams look alike to them, whether they involve a lot of () or a lot of {;}.
> That's your human brain thinking it knows better. The "bitter lesson" of AI is that more data = better performance, and even if you try to build a system that encapsulates human-brain common sense, it will be trounced by a system simply trained on more data.
I feel you glossed over what I was saying.
Let me try to rephrase: if we ever get to a future where humans are not needed to look at or maintain code again, all the training data would be LLM-generated.
In that case, the ideal language for representing logic in programming is still going to be a Lisp-like one.
The difference between the programming tools available before and LLM-based programming tools is the difference between your hammer and that of Fix-it Felix, which magically "fixes" anything it strikes. We are living in that future, now. Actually try it with frontier models and agentic development loops before you opine.
Assuming that everybody who disagrees with such takes simply can't have tried the latest generator is quite telling. Consider that maybe I'm just not as easily impressed.
This feels like half the document is missing. What does this HTTPS metadata actually look like? What do the payloads look like? Section 3 states "Authorization, policy enforcement, and result verification are defined by AIIP" but this is the AIIP specification and doesn't define any of them. Authorization and policy enforcement are somehow supposed to be big selling points that are solved in some amazing way, but are not specified at all.
I don't get how this is better than an HTTP API (especially since payloads are just UTF-8 JSON), and that's entirely down to the document not telling us anything of substance. I get that it's "experimental", but there isn't much of an experiment being described here, apart from a different message frame that lets us leave out the HTTP headers and add a signature (while apparently assuming that each IP only hosts one AIIP service).
That's the protocol specification, not the URI scheme. And the packet diagram is amazing because it's missing every odd bit and "SigLen (16)" is twice the width it should be. I guess they vibecoded it.
I tried this on a small Clojure codebase and asked it to write some tests. It couldn't get its parentheses balanced. After 10 attempts or so it tried to write a smaller test file first, but again failed.
Regardless of the parentheses, the test code it came up with was quite basic and arbitrary. It didn't try to come up with interesting edge cases or anything.
To me, it was never about the hardware. It was not even about LISP.
It is about "clean design" and what a great computing environment was capable of, and still would be, had its potential not been shredded by the advent of cheap, addictive hardware combined with an "operating system" so "simple and elegant" that even today a program simply segfaults, leaving you with nothing (instead of showing at least an inspectable stack trace). So "simple and elegant" that the only data formats end users deal with are "copy & paste text", "files", and "screenshots". An operating system so "pure" that every program lives in its own uninteroperable walled garden, understanding nothing about the environment and data loaded around it. We lost a whole computing world, and it might still take ages to get it back.
Fun fact: NeXT's Interface Builder was originally built in Lisp. So Apple software was really good at one point, in part because someone wanted to bring the Lisp machine to the NeXT environment.
A transducer transforms a reducing function. Its signature is rfn->rfn. The resulting rfn can then be used to reduce/fold from any collection/stream type into any other collection/stream type.
I don't see what your functions have to do with that.
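For concreteness, the rfn->rfn shape can be seen directly in plain Clojure (a minimal sketch using only core functions):

```clojure
;; A transducer is a function from reducing fn to reducing fn (rfn -> rfn).
;; `conj` is a reducing fn; `(map inc)` transforms it into another one.
(def rf  conj)
(def rf' ((map inc) rf))              ; rfn -> rfn

(reduce rf' [] [1 2 3])               ; => [2 3 4]
(transduce (map inc) conj [] [1 2 3]) ; => [2 3 4], same thing, idiomatically
```

Because `rf'` is itself just a reducing fn, it can feed `reduce`, `transduce`, `into`, or a core.async channel without caring what the source or sink collection is.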
(HN is going to collapse this comment because the code makes it too long).
My functions are exactly equivalent to transducers.
My link in the original comment goes over it at a more theoretical level.
But if you want runnable code, I've included Clojure below that translates between the two representations (there's some annoying multi-arity stuff I haven't handled very rigorously, but that's mainly an artifact of the complection in the traditional Clojure representation of transducers and goes away when you think of them as just `a -> List<b>`).
There's a line in the original article where the author doesn't go far enough: "Everything is a fold." It's more than that: folding is not just a function over a list; a fold is a list and vice versa (this holds for any algebraic datatype). Transducers are just one example, where Clojure has decided to turn a concrete data structure into a higher-order function (wrongly, I believe; although I haven't gone to the effort of truly specializing all my functions to verify my suspicion that you can actually get even better performance with the concrete data representation, as long as you use specialized data containers with specialized behavior for the zero- and one-element cases).
(defn tmap
  "Map transducer"
  [f]
  (fn [x] [(f x)]))

(defn tfilter
  "Filter transducer"
  [f]
  (fn [x] (if (f x) [x] [])))

(defn ttake
  "Take n elements"
  [n]
  (let [n-state (volatile! n)]
    (fn [x]
      (let [current-n @n-state]
        (if (pos? current-n)
          (do (vswap! n-state dec) [x])
          [])))))

(defn simple-transducer->core-transducer
  [simple-transducer]
  (fn [rf]
    (fn
      ([] (rf))
      ([result] (rf result))
      ([result input]
       (reduce rf result (simple-transducer input))))))

(defn core-transducer->simple-transducer
  [core-transducer]
  ;; NB: doesn't rigorously handle the completing arity or `reduced` wrapping
  (fn [x]
    ((core-transducer #(cons %2 %1)) [] x)))

(defn catcomp
  ([f g]
   (fn [x] (mapcat g (f x))))
  ([f g & fs]
   (reduce catcomp (catcomp f g) fs)))

(def example-simple-transducer
  (catcomp
   (tmap inc)
   (tfilter even?)
   (tmap inc)
   (ttake 2)))

(defn example-simple-transducer-manual
  [x]
  (->> ((tmap inc) x)
       (mapcat (tfilter even?))
       (mapcat (tmap inc))
       ;; Stateful transducers are hard to work with manually:
       ;; the state has to be defined outside of the function to be maintained.
       ;; This is true for traditional transducers as well.
       ;; (mapcat (ttake 2))
       ))

(def example-core-transducer
  (comp
   (map inc)
   (filter even?)
   (map inc)
   (take 2)))

;; Yields [3 5]
(into [] (simple-transducer->core-transducer example-simple-transducer) [1 2 3 4 5])

;; Also yields [3 5]
(into [] example-core-transducer [1 2 3 4 5])

;; Yields [3] when ttake's state is fresh (the `into` call above has
;; already consumed example-simple-transducer's state, so re-evaluate
;; the def first)
(example-simple-transducer 1)

;; Also yields [3] (as a seq)
((core-transducer->simple-transducer example-core-transducer) 1)
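As a sketch of the "a fold is a list and vice versa" remark, a list can be encoded as the function that folds over it and recovered from that function (a Boehm-Berarducci-style encoding; the names here are mine):

```clojure
;; A list represented as its own fold: `list->fold` turns a collection
;; into a function awaiting a reducing fn and an init value; `fold->list`
;; recovers the concrete collection by folding with `conj`.
(defn list->fold [xs] (fn [f init] (reduce f init xs)))
(defn fold->list [fold] (fold conj []))

(fold->list (list->fold [1 2 3])) ; => [1 2 3]
((list->fold [1 2 3]) + 0)        ; => 6, folding with a different rf
```

The round trip loses nothing, which is the sense in which the fold and the list are interchangeable representations.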
Maybe not "elegant", but it's quite a readable compiler implementation compared to what I have seen. And which real-world compiler has an "elegant" implementation anyway?
It's quite different; the way you model things is entirely different, even though both are able to deliver highly concurrent computation.
A process in Elixir has an id and state, and messages are sent to its address. The messages are queued inside the process and handled one at a time.
One process can spawn another, and so on.
It's more like white collar workers sending emails to each other.
In core.async, a process is anonymous: it doesn't have an id, it doesn't have an address, and you cannot send messages to it.
Instead a process is more like a worker on an assembly line with conveyor belts.
What matters are the conveyor belts and what's on them. Where things go from one belt to the next and what happens to the things as they flow through. You can have multiple workers working a belt and if something jams dependent belts stop. The belts are called Channels.
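The conveyor-belt picture maps directly onto core.async primitives. A minimal sketch (assumes the org.clojure/core.async dependency is on the classpath; the worker here is just an `inc` stage):

```clojure
(require '[clojure.core.async :as a])

;; Two conveyor belts (channels) with one anonymous worker between them.
(def in  (a/chan))
(def out (a/chan))

;; The worker: takes from `in`, transforms, puts on `out`. It has no id
;; and no mailbox; it only knows the belts it is connected to.
(a/go-loop []
  (when-some [v (a/<! in)]
    (a/>! out (inc v))
    (recur)))

(a/>!! in 41)   ; put something on the input belt
(a/<!! out)     ; => 42, taken off the output belt
```

If `out` backs up (nobody taking), the worker parks on its put and `in` backs up too, which is exactly the "dependent belts stop" behaviour described above.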