It's referencing other variables and functions that aren't defined in that snippet. Presumably you can somewhat guess what it's trying to do (call a function for each element of the response), but it doesn't seem like a very good example. I also don't quite understand how this utilises virtual threads effectively, isn't the code still adding the elements of the list one at a time instead of concurrently?
so what's the point? we can already write blocking code. The reason why people want to use reactive frameworks is because they don't want to block threads just because some I/O is happening.
The point is exactly that you can block, because blocking is cheap with virtual threads: you can have hundreds of thousands of blocking calls without exhausting system resources. A virtual thread takes a few KB of memory instead of a few MB, so having plenty of them is a feature. With this you can write plain blocking code and get the benefit of async scalability, but with a simple API structure and debuggable synchronous code.
Nima uses virtual threads, where a blocking call blocks only the current virtual thread but not the underlying carrier thread, which can run another virtual thread while waiting for the I/O to complete.
But in this example, the entire code is still blocking because it has to wait for one call of "callRemote" to complete while waiting for the next one to be executed. So I don't get the point.
for (int i = 0; i < count; i++) {
resp.add(callRemote(client));
}
Before virtual threads, a Java server would reserve a full OS thread while that code was being processed. OS threads use lots of memory and there might be a limited number available. With a single incoming request or low traffic that is not a problem, but with many parallel requests the server would choke on using too much memory or too many OS threads.
Javascript moved on from callbacks to async/await, which colors the async functions and requires the programmer to be aware of the issues related to async/await.
For Java, Project Loom takes a different approach. For example, with server applications, the server would start running the code in a virtual thread, and most of the code that looks blocking will actually work like non-blocking code. You get to use regular blocking code but you get async performance.
When the code needs to wait for a response, JVM can unmount the virtual thread from the OS thread and then the JVM can run another virtual thread in the OS thread. Once the response is returned and the OS thread is free, JVM can continue executing the original request.
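A minimal sketch of that mount/unmount behavior using the plain JDK API (assuming JDK 21+; the sleep is just a stand-in for blocking on a response):

```java
// Demonstrates that blocking code in a virtual thread looks like ordinary
// thread code; the JVM handles unmounting from the carrier thread behind
// the scenes.
public class VirtualThreadSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                // While this virtual thread is parked, its carrier (OS)
                // thread is free to run other virtual threads.
                Thread.sleep(100); // stand-in for waiting on a response
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        vt.join();
        System.out.println("virtual=" + vt.isVirtual());
    }
}
```

The calling code never deals with the mounting and unmounting; that is exactly the point.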
There are some caveats to virtual threads. There is some overhead (but not a lot), some IO calls still block (but you don't get an indication about using blocking IO) and you might stumble on new kinds of bugs related to resource exhaustion (try reserving a million OS file handles).
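One concrete example of such a caveat (as of JDK 21, per JEP 444): blocking while inside a `synchronized` block can pin the virtual thread to its carrier, so `java.util.concurrent` locks are the recommended pattern in virtual-thread-heavy code. A rough sketch of the `ReentrantLock` alternative:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockChoice {
    private static final ReentrantLock LOCK = new ReentrantLock();
    private static int counter = 0;

    // Preferred in virtual-thread code on JDK 21: a virtual thread blocked
    // on a ReentrantLock can unmount, keeping its carrier thread free,
    // whereas blocking inside a synchronized block can pin the carrier.
    static void increment() {
        LOCK.lock();
        try {
            counter++;
        } finally {
            LOCK.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = Thread.ofVirtual().start(LockChoice::increment);
        Thread t2 = Thread.ofVirtual().start(LockChoice::increment);
        t1.join();
        t2.join();
        System.out.println("counter=" + counter);
    }
}
```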
But no more async/await, reactive programming, callbacks. Mostly just regular imperative blocking-looking code. I'm very excited about that.
This could be solved by using for..of in an async function though.
It might look like surprising behavior maybe, but putting the async keyword outside and looping without non-awaited callbacks is the only possible way to keep the semantics sound, no?
In other words, forEach would have to return a Promise too.
That's where function coloring rightfully annoys programmers to catch their mistakes; there are of course also lint rules for this.
AFAIK there are two active proposals that aim to provide this natively (and additionally enable working with lazy and/or infinite iterators or generator functions):
AsyncIterator and
tc39/proposal-async-iterator-helpers
Things don’t have to be that way. You can reject the use of async/await in contexts that aren’t aware of it (delegating it to special APIs that properly handle it).
var remoteResultA = callRemoteA(client);
var remoteResultB = callRemoteB(f(remoteResultA));
Where handling a request means you have to make two HTTP calls and those two calls have some order dependence, i.e. the result of one is used to construct the call to get the other.
What virtual threads give is the ability to write the code like I did above and still get ideal performance, as opposed to:
callRemoteA(client)
.then(remoteResultA -> callRemoteB(f(remoteResultA)))
.then(... your world lives here now ...);
Virtual threads are, from a language semantics point of view, the same as regular threads. Virtual threads map maybe 10,000 "logical threads" to a handful (maybe 8 or so) of actual operating system threads.
The big difference is that for each regular Thread you have exactly one backing Operating System thread.
I agree that this would be a better example that would make more sense (and I also think that the reasoning you provided should be added as context to that code snippet).
That entire section is running on a virtual thread, which is scheduled on a platform (OS) thread. While it blocks for each callRemote invocation, the platform thread is free to process other virtual threads.
In addition, this implementation detail might help in gaining some insight: "The synchronous networking Java APIs, when run in a virtual thread, switch the underlying native socket into non-blocking mode. When an I/O operation invoked from Java code does not complete immediately (the native socket returns EAGAIN - “not ready” / “would block”), the underlying native socket is registered with a JVM-wide event notification mechanism (a Poller), and the virtual thread is parked. When the underlying I/O operation is ready (an event arrives at the Poller), the virtual thread is unparked and the underlying socket operation is retried."
Yeah but if for some reason, no other (or few other) request is coming in concurrently, then the CPU will just sit by idly. I don't understand why you wouldn't issue the "callRemote" calls concurrently, that seems like it's missing the point of writing non-blocking code.
Exactly. That's the point. It is beneficial only when you have a huge number of requests relative to the number of platform threads.
The idea is that you can map a million of these virtual threads on to a small number of platform threads, and the JVM will schedule the work for you, to achieve maximum throughput without needing to write any complicated code. Virtual threads are for high throughput on concurrent blocking requests.
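A rough illustration of that scaling, assuming JDK 21+ (the task count and sleep time are arbitrary stand-ins for many concurrent blocking requests):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyVirtualThreads {
    public static void main(String[] args) {
        AtomicInteger done = new AtomicInteger();
        Instant start = Instant.now();
        // One virtual thread per task; the JDK multiplexes them onto a
        // small pool of carrier threads (roughly one per CPU core).
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(100); // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        long ms = Duration.between(start, Instant.now()).toMillis();
        System.out.println("completed=" + done.get() + " in " + ms + "ms");
    }
}
```

Ten thousand 100ms "blocking" tasks complete far faster than they would on a fixed pool of platform threads, because the sleeps overlap on a handful of carriers.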
EDIT: the sequential calls to "callRemote" still process in sequence, blocking on each call. Don't get confused there. But the overall HTTP request itself is running on a virtual thread and does not block other HTTP requests while waiting for the callRemote invocations to complete.
> EDIT: the sequential calls to "callRemote" still process in sequence, blocking on each call. Don't get confused there. But the overall HTTP request itself is running on a virtual thread and does not block other HTTP requests while waiting for the callRemote invocations to complete.
Yeah I got that. But then the use case seems to be much narrower, because not every scenario is one where many concurrent requests come in at the same time. If you write non-blocking code you'll get high throughput no matter the number of requests. Ok, maybe non-blocking code is harder to write (and I don't think it's actually that hard if you use, say, Kotlin coroutines), but honestly this seems to me like something that developers should learn eventually anyway.
Or maybe it's me being weird. But I remember 10 years ago when node.js was being hyped for being "fast" because it was "async", and now suddenly using threads and blocking operations is all the rage again.
The difference now is that it's implemented in the JVM instead of a library / framework. It is easier, simpler, and probably more efficient. You can get higher throughput from existing code with minor refactoring.
Also there are some traceability issues involved in using asynchronous APIs: "In the asynchronous style, each stage of a request might execute on a different thread, and every thread runs stages belonging to different requests in an interleaved fashion. This has deep implications for understanding program behavior: Stack traces provide no usable context, debuggers cannot step through request-handling logic, and profilers cannot associate an operation's cost with its caller. "
Collecting those potentially very expensive stack traces, especially in highly concurrent environments with high throughput requirements, would be great for debugging but is probably gonna kill the performance of the system. But it would be nice (even if as a very heavy hammer) as an option.
I'm not saying that virtual threads aren't a good thing (other runtimes than the JVM have had green threads for a very long time now), but that it doesn't seem like such a paradigm shift. It's still threads (with all the benefits and drawbacks), it's just now that they have less overhead.
Presumably I could just keep an existing server written in Spring MVC or a similar technology and just wait for the underlying container to support virtual threads to get the same benefits. I believe Jetty already does support them. So why would I need a new framework?
It removes the need to write async code in many cases. It is a more straightforward and efficient way to accomplish what we have already been doing for years on the JVM.
You certainly could make the callRemote calls in parallel, and there are easy and safe ways to do that (good old CompletableFuture or the new structured concurrency stuff). But doing that or not is completely independent of using Nima, so I think here they're just showing some very simple code, because this is an example of using Nima, not an example of writing concurrent code in general.
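For what it's worth, a sketch of issuing the two calls concurrently with good old CompletableFuture on virtual threads; callRemote here is a hypothetical stand-in for the blocking call in the example (assuming JDK 21+):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelCalls {
    // Hypothetical stand-in for the blocking remote call in the example.
    static String callRemote(String client) {
        try { Thread.sleep(100); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result-for-" + client;
    }

    public static void main(String[] args) {
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            // Issue both blocking calls concurrently, each on its own
            // virtual thread.
            CompletableFuture<String> a =
                CompletableFuture.supplyAsync(() -> callRemote("a"), vt);
            CompletableFuture<String> b =
                CompletableFuture.supplyAsync(() -> callRemote("b"), vt);
            // join() blocks only the current thread, which is cheap if
            // it is itself a virtual thread.
            System.out.println(a.join() + " / " + b.join());
        }
    }
}
```

Whether to parallelize like this is, as noted, orthogonal to using Nima at all.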
Because concurrent calls are confusing and create bugs.
The point this train wreck of an example is trying to make (and failing) is that it's fine to write sequential blocking code in your request handlers. They can take as long as they want ... seconds, minutes, days ... because the handler threads are now an infinite resource as far as the JVM is concerned.
I'm not well versed in the capabilities of virtual threads. However, both the Servlet and JAX-RS models are "thread agnostic" in that while we "know" we're running on a thread, the API doesn't expose it directly.
So, if something like Tomcat "simply" switched its internal threading model from the current model to the virtual one, would the applications know any different? Could you drag and drop a JAX-RS service or a Java Servlet into a container running on virtual threads, or would you rapidly run into some leaky wall?
Seems like a virtual thread model of Tomcat would be quite useful in some scenarios.
It definitely would. I - for one - won't be rewriting our 20+ year old servlet app in anything else but servlets, because it has proven to be an exceptionally stable environment. More than virtual threads for Tomcat requests, I'd love cheap, structured threads to be run from within a request, eg to access the database while I'm reading a webservice.
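Those cheap in-request threads already fit the standard executor APIs; here is a rough sketch (JDK 21+) of fanning out from inside a request handler, where queryDatabase and callWebService are hypothetical blocking stand-ins:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

public class FanOutInRequest {
    // Hypothetical blocking stand-ins for a DB query and a web-service call.
    static String queryDatabase() { pause(); return "db-row"; }
    static String callWebService() { pause(); return "ws-payload"; }

    static void pause() {
        try { Thread.sleep(50); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        // From inside a servlet/request handler, fan out both blocking
        // calls onto cheap virtual threads and wait for both.
        try (var vt = Executors.newVirtualThreadPerTaskExecutor()) {
            var results = vt.invokeAll(List.<Callable<String>>of(
                FanOutInRequest::queryDatabase,
                FanOutInRequest::callWebService));
            System.out.println(results.get(0).get() + " + " + results.get(1).get());
        }
    }
}
```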
Granted, they may not be used much, but if you have an existing application it's probably easier to migrate away from the annotations than to switch to a different framework altogether.
I am looking forward to trying Nima. The blocker for me is IntelliJ supporting Java 21, which I believe will be this winter [1]; I don't absolutely need IntelliJ, but I am very lazy.
At the moment, my apps are using either the JDK HttpServer, which is easy to use, but of questionable robustness and lacking websocket support, or a Netty-based server, which works very well and supports websockets, but is often awkward to program against because it's asynchronous. Nima should combine a nice simple synchronous API with a fairly luxurious feature set and good scalability.
Of course, it will turn out there are things terribly wrong with it, because this is real life. But for now I'm optimistic!
Non-MicroProfile Helidon uses a reactive interface for most things, which lends itself well to async operations. It's a very nice, simple and powerful abstraction that makes async relatively easy. IMO anything webserver or client related should be async when virtual threads aren't available, due to Java's tendency to quickly build up a ton of threads when you use the one-thread-per-request model. That being said, virtual threads do make that model viable at scale, so reactive async probably isn't as necessary.
Helidon 4 is a massive refactoring to support Java 21 / virtual threads from the ground up, to the point that most of the reactive stuff is removed and replaced with plain sync APIs.
As soon as you start talking thread pools, you're talking about async task dispatching at some level, even if it's abstracted away at the API level. What I'm talking about is using a virtual thread, not event-driven shuttling of stateful objects between threads via task queues, to complete a request from start to finish.
The main problem I have with reactive programming in Java, at least with Project Reactor, which is the only one I've used, is the cryptic stack traces. You can regain some information by enabling runtime instrumentation in development, but it has too much of a performance hit to be used in production, and it doesn't produce much better stack traces either. Trying to read a profiler flamegraph is nigh impossible for reactive code. For the debugger, you need to manually insert breakpoints in each step of the chain, because otherwise you'll just be looking around the reactive library's internal code.
The other problem I have is that as soon as you need to do anything blocking (e.g. talking to SQLite or RocksDB over JNI), the whole thing falls apart in terms of threading. Doing an HTTP request somewhere in a reactive chain that does some blocking operations - well shit, now you're blocking the reactor-http-netty-* threads, which are fixed in number, so you're worse off than with thread-per-request at that point. Or even just using Caffeine cache, or any caching library really, but Caffeine is one of the few that prevents async stampede - well shit, now you're running on the ForkJoin pool during cache misses, but "reactive" threads on cache hits. You can see how this quickly becomes a tangled mess of thread-hopping and avoiding blocking event loops, and it's extremely complicated to get this right, given how subscribeOn/publishOn work and how the profiler can't tell you which thread runs what code because the stack traces are filled with garbage.
You also opt out of a lot of what Java offers in terms of synchronization. Using the synchronized keyword is a big no-no: the JVM can park the thread while waiting for the lock. Now that thread cannot schedule other tasks just because one task ran into lock contention, and it's most likely a fixed pool of threads. You're basically left with CAS, AtomicLong/AtomicBoolean/etc, and that's it. I've even seen BlockHound, Project Reactor's tool for finding out whether you are blocking somewhere, trigger "Blocking call to LockSupport.park()!" during a ConcurrentHashMap lookup. Add to this that most caching libraries are implemented on top of ConcurrentHashMap. Getting async right in reactive code has a huge mental overhead: you need intricate knowledge of the libraries you depend on, the platform you run on, what it is that actually enables async in the kernel, and where it is unavailable. A good example of this: that UUID library you use uses a CSPRNG. Have you set the JVM flag to read it from the non-blocking /dev/urandom rather than the default /dev/random on Linux?
I think reactive programming in Java is decent as long as all you're really doing is accepting/performing HTTP requests and talking to a database that has a reactive client connector. In essence, anything that strictly deals with networking, since that's all that can really be made truly async currently, thanks to epoll/kqueue/iocp on linux/mac/windows respectively. Hopefully io_uring will broaden that to file IO as well eventually, but unfortunately that's linux-only for now.
Contrast this with Go's scheduler, which just takes care of this for you - no need to think about it. Or Rust + Tokio, where it is explicit with tokio::spawn_blocking, and which offers async counterparts to std Mutex, RwLock, channels, etc. that don't block the current thread on lock contention.
I've been using intellij to write code that uses a Java 21 or 22 dev build for months. Do you perhaps mean support for running the IDE itself on Java 21?
IDE productivity started with Smalltalk and Lisp Machines, and was adopted by C++, Visual Basic and Delphi, among several 4GLs, years before Java was invented.
Everyone is free to use Java in vi with make, if they feel happy doing so.
In fact, there were hardly any Java IDEs when the language was released in 1996; they were quickly provided by Smalltalk, Delphi and C++ vendors.
I've worked on plenty of projects where I can navigate around in a pretty dumb text editor and run "make" and things mostly work. In Java land most things are several folders deep and things are injected in from places I don't understand without tooling to show me what is going on. I mean, have you tried using jdb? It basically exists as an advertisement for an integrated debugger.
No, you’re being facetious. GDB and jdb are not comparable at all. jdb is not meant for actual use. This becomes immediately obvious if you spend like ten seconds with it. There’s no readline support, no tab completion, no fancy breakpoints, poor expression evaluation… I can go on and on. GDB has a learning curve but is unquestionably a professional tool that sees heavy use. Similarly, people can make all sorts of spaghetti projects with C(++), but you can also make very nice ones. With Java, the state of the art basically requires you to juggle a classpath at all times. You surely know this: all the tools are designed to be driven by automation. Why are you claiming otherwise?
It's not so much that you can't do Java without an IDE—Java is verbose in part because it doesn't assume that you have an IDE to give you context—it's that IntelliJ is such a good IDE that it's hard to go back to weaker tooling once you've become comfortable with it.
All my co-workers use VS Code for TypeScript, but I feel crippled when I can't use WebStorm. Not because I can't program without it, but because I'm missing a powerful force multiplier.
I used to think that WebStorm was terrible for refactoring; but somehow it's still very far ahead of other editors like vscode, and vscode itself is very far ahead of almost everything else too.
WebStorm is worse than IntelliJ Java, but that's because TypeScript is much harder to statically analyze. There are fewer safe refactorings in a language that has so much flexibility.
That's because Java IDEs are ridiculously powerful. When I work in vscode and Python, I am always like: this is so much easier when I am using Java and IntelliJ.
It is not mere IDE dependency. It is IDE supremacy. Java is the leader of the IDE pack; other languages' IDEs need to squint to see how far ahead Java IDEs are.
I've seen many frameworks shared on Hacker News. But rarely do I see anyone commenting on their IDE's capability to use that framework, for Go, Rust, C, C++, etc.
But then they complain they had to manually type an import statement ... before going back to opining on how bad IDEs are.
I find the whole notion quite fascinating: software engineers rejecting the usefulness of software applications, when building them is the primary focus of their own profession.
If a developer does not want to try this framework because an IDE does not yet support a particular version of Java, then the situation has gone beyond an IDE being just useful.
Personally I use various IDEs, lightweight IDEs/editors and even vi/vim, depending on the language & situation.
Think of it this way: if I asked my team to develop a project/POC with this framework and they came back to me saying that they have to wait for (or prefer to wait for, or whatever) JetBrains to add Java 21 support to do this, I would not be very happy.
JDK21 hasn't even hit general availability yet according to its own schedule [0]. It seems a little impatient to get upset that tooling isn't 100% there before it's even released. Naturally tool vendors aren't going to release their official support until they can test against the final version. This indeed is part of the maturity people do like about the Java ecosystem.
So you're saying that you are unhappy that Jetbrains doesn't officially support JDK21 yet? Cool story bro. It's a tool preference. Same deal with any other language. Nothing is preventing your developers writing code for JDK21 today, even with the Jetbrains IDE. You can write and compile JDK21 projects and ignore any related highlighted syntax/semantic errors.
Good job on distorting my comments. Hopefully the developer who made the original comment will read your recommendations and will decide to give this framework a try.
I'm saying your comments basically boiled down to that. You are unhappy that developers might prefer a tool that doesn't officially support newer features of a language, even though it doesn't really prevent them from developing with the new features using the same or a different IDE, or a text editor, or whatever. Same deal with C# or C++ and every other language supported by IDEs, hence C.S.B. This is an issue with programming in general, not Java specific.
If a developer says "The blocker for me is IntelliJ supporting Java 21", clearly there's an IDE dependency, powerful enough to keep a developer from adopting a new framework.
It's entirely possible to write Java without an IDE. I've done it, and one of my colleagues has been using VSCode quite happily (we've bullied him into stopping because it doesn't have a formatter and he keeps checking in wonky code - but that's another story).
But writing any language is much more productive with a good IDE. For me, the added value in using Java 21 now rather than in a few months is not enough to outweigh giving up the use of an IDE.
I write both Kotlin and Java but Kotlin is my go to. Kotlin's when statements alone are a reason to use it. Then there's the null safety, extension functions, built in lazy initialization, and smart casts. Kotlin is just a joy to write in.
Was there ever? What made Kotlin fundamentally different in that respect?
Kotlin is great for fixing some legacy design oopsies (Java really should have been smarter about null, and it can't be fixed without breaking legacy code), but I wouldn't expect it to do something that was impossible or infeasible in Java.
Yes, from a pure functional perspective it's really down to null handling at this point. But Kotlin has a lot of nice syntactic sugar still. Same with Groovy.
What do you mean, how Groovy ended? It's still going strong with regular releases and continues to keep up with new Java features! It's a pretty nice language if you don't mind dynamic typing, which has its uses.
Well I care a lot that it exists. And many other people I know do as well. Just because you don't seem to like it, you shouldn't imagine everyone else is like you.
There are many solid-seeming ones that don't show those signs: Elixir, TypeScript, Clojure, C++, Kotlin, etc. I think enumerating short-lived languages is not representative, since programming languages generally have a very high share of short-lived/abandoned ones.
But it's not necessarily a "don't go there" sign that there's a shorter average lifespan. System languages can't really go away unless the whole platform can and are doomed to be a bit clunky. Picking a language from the category that might only have a 15-year heyday is fine for a lot of apps if it's otherwise better than the system language.
I actually think Groovy sits a bit different to other JVM languages in this respect, because it never was at its core, pitched as an alternative to Java (for a brief moment it was considered in that light, but that was mainly because Java itself became so stagnated). They aren't competing to be the same thing the way other JVM languages are - Groovy is trying to be a really good scripting language and Java is trying to be a really good application language. Similar to how Clojure will probably never be popular but it will probably also never go away.
Coroutines are one of the most prominent features of Kotlin, so it does make sense to bring this up in the discussion.
OTOH, I'm unconvinced that virtual threads make coroutines, reactive APIs, etc. obsolete. Those are about structured concurrency, and with Project Loom, the structured concurrency proposal is still in the incubating stage.
The magic of Kotlin's structured concurrency is that it "just works" without you having to think about it, and it's difficult to screw up. This looks a fair bit more involved, though I'm excited for it nonetheless.
Oooh, this looks promising. Seeing Helidon Níma's take on Java microservices with virtual threads is certainly a different direction. Way back, while working with a Java microservices codebase, we were stuck with OJ (ObscuraJ; to be honest I'm not sure if it was publicly available or internal, it was so many years ago) and that was pure hell. The configuration overhead, especially the cumbersome dynamic routing setup, and the layers upon layers of indirection and dependency injection were a headache. Níma's approach piques my interest, albeit with a bit of caution; I'll need to dig into the source more.
It's a weird name, because it's a transliteration of the Greek word for "thread", which would conventionally be "nema" in English (as in nematode worms, nematocysts, Treponema, etc).
“Zip” is penis in Arabic, so I remember at university our female colleagues would read it as an abbreviation “ZIP” zed-eye-pee to avoid the embarrassment!