Helidon Níma: A Java microservices framework based on virtual threads (helidon.io)
92 points by philonoist on Aug 19, 2023 | hide | past | favorite | 117 comments


I'm not really grokking the code sample on their frontpage. What is it doing? Explain it like I am a Java developer with 18 years of experience.


It's referencing other variables and functions that aren't defined in that snippet. Presumably you can somewhat guess what it's trying to do (call a function for each element of the response), but it doesn't seem like a very good example. I also don't quite understand how this utilises virtual threads effectively, isn't the code still adding the elements of the list one at a time instead of concurrently?


One at a time, not concurrently.


so what's the point? we can already write blocking code. The reason why people want to use reactive frameworks is because they don't want to block threads just because some I/O is happening.


The point is exactly that you can block, because blocking is cheap with virtual threads: you can have hundreds of thousands of blocked calls without exhausting system resources. A virtual thread takes a few KB of memory instead of a few MB, so having plenty of them is a feature. With this you can write plain blocking code and get the scalability benefits of async, but with simple APIs and debuggable, synchronous code.


Nima uses virtual threads, where a blocking call blocks only the current virtual thread but not the underlying carrier thread, which can run another virtual thread while waiting for the I/O to complete.
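A minimal standalone sketch of that idea (not Nima-specific; the sleeps are hypothetical stand-ins for blocking I/O): thousands of virtual threads all block at once, yet they finish in roughly the time of a single blocking call, because a blocked virtual thread is unmounted from its carrier.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class CheapBlockingDemo {
    // Start n virtual threads that each block for ~100 ms, then wait for all.
    // With platform threads, thousands of concurrent blockers would strain
    // memory and OS limits; a blocked virtual thread just gets unmounted
    // from its carrier thread, so the few carriers stay busy running others.
    static void runMany(int n) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            threads.add(Thread.ofVirtual().start(() -> {
                try {
                    Thread.sleep(Duration.ofMillis(100)); // stands in for blocking I/O
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }));
        }
        for (Thread t : threads) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        runMany(10_000);
        System.out.println("10,000 blocking calls in "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```

Run sequentially, 10,000 sleeps of 100 ms would take over 16 minutes; overlapped on virtual threads the whole batch completes in roughly the time of one sleep plus scheduling overhead.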


But in this example, the entire code is still blocking because it has to wait for one call of "callRemote" to complete before the next one is executed. So I don't get the point.

  for (int i = 0; i < count; i++) {
    resp.add(callRemote(client));
  }


Before virtual threads, a Java server would reserve a full OS thread while that code was being processed. OS threads use lots of memory and only a limited number may be available. With a single incoming request or low traffic that is not a problem, but with many parallel requests the server would choke from using too much memory or too many OS threads.

Previously in Java if you wanted to avoid code that blocks the OS thread you would need to use callback based code or reactive programming: https://www.alibabacloud.com/blog/how-java-is-used-for-async...

Javascript moved on from callbacks to async/await, which colors the async functions and requires the programmer to be aware of the issues related to async/await.

For Java, Project Loom takes a different approach. For example with server applications, the server starts running the code in a virtual thread, and most of the code that looks blocking will actually behave like non-blocking code. You get to write regular blocking code but you get async performance.

When the code needs to wait for a response, JVM can unmount the virtual thread from the OS thread and then the JVM can run another virtual thread in the OS thread. Once the response is returned and the OS thread is free, JVM can continue executing the original request.

There are some caveats to virtual threads. There is some overhead (but not a lot), some IO calls still block (but you don't get an indication about using blocking IO) and you might stumble on new kinds of bugs related to resource exhaustion (try reserving a million OS file handles).

But no more async/await, reactive programming, callbacks. Mostly just regular imperative blocking-looking code. I'm very excited about that.


what are the issues with async/await?


the mental gymnastics.

Having fun using async/await in loops? https://zellwk.com/blog/async-await-in-loops/

An actual blocking model is much easier to grok.


This could be solved by using for..of in an async function though.

It might look like surprising behavior maybe, but putting the async keyword outside and looping without non-awaited callbacks is the only possible way to keep the semantics sound, no?

In other words, forEach would have to return a Promise too. That's where function coloring rightfully annoys programmers to catch their mistakes; there are of course also lint rules for this.

AFAIK there are two active proposals that aim to provide this natively (and additionally enable working with lazy and/or infinite iterators or generator functions):

AsyncIterator and tc39/proposal-async-iterator-helpers


Things don’t have to be that way. You can reject the use of async/await in contexts that aren’t aware of it (delegating it to special APIs that properly handle it).


Can you elaborate? Do you mean like the Promise API with the "specialist APIs"?


You could have a forEach that awaits the promise created each iteration, for example.


Because that's what "blocking" calls were invented to avoid.


A better example would be

    var remoteResultA = callRemoteA(client);
    var remoteResultB = callRemoteB(f(remoteResultA));
Where handling a request means you have to make two HTTP calls, and those two calls have some order dependence, i.e. the result of one is used to construct the call to get the other.

What virtual threads give is the ability to write the code like I did above and still get ideal performance, as opposed to:

    callRemoteA(client)
       .then(remoteResultA -> callRemoteB(f(remoteResultA)))
       .then(... your world lives here now ...);
Virtual threads are, from a language semantics point of view, the same as regular threads. Virtual threads map maybe 10000 "logical threads" to a handful (maybe 8 or so) actual operating system threads.

The big difference is that for each regular Thread you have exactly one backing Operating System thread.
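A small illustration of that sameness, using only the standard `java.lang.Thread` API (the empty tasks are just placeholders): both kinds are created, started, and joined the same way, and the only visible difference here is `isVirtual()`.

```java
public class SameApiDemo {
    public static void main(String[] args) throws InterruptedException {
        // Both kinds of thread share the same java.lang.Thread API and
        // semantics; what differs is the backing: one OS thread each for
        // platform threads, a shared carrier pool for virtual threads.
        Thread platform = Thread.ofPlatform().start(() -> {});
        Thread virtual  = Thread.ofVirtual().start(() -> {});
        platform.join();
        virtual.join();
        System.out.println("platform.isVirtual() = " + platform.isVirtual()); // false
        System.out.println("virtual.isVirtual()  = " + virtual.isVirtual());  // true
    }
}
```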


I agree that this would be a better example that would make more sense (and I also think that the reasoning you provided should be added as context to that code snippet).


That entire section is running on a virtual thread, which is scheduled on a platform (OS) thread. While it blocks for each callRemote invocation the platform thread is free to process other virtual threads.

https://docs.oracle.com/en/java/javase/20/core/virtual-threa...


In addition, this implementation detail might help in gaining some insight: "The synchronous networking Java APIs, when run in a virtual thread, switch the underlying native socket into non-blocking mode. When an I/O operation invoked from Java code does not complete immediately (the native socket returns EAGAIN - “not ready” / “would block”), the underlying native socket is registered with a JVM-wide event notification mechanism (a Poller), and the virtual thread is parked. When the underlying I/O operation is ready (an event arrives at the Poller), the virtual thread is unparked and the underlying socket operation is retried."

https://inside.java/2021/05/10/networking-io-with-virtual-th...


Yeah but if for some reason, no other (or few other) request is coming in concurrently, then the CPU will just sit by idly. I don't understand why you wouldn't issue the "callRemote" calls concurrently, that seems like it's missing the point of writing non-blocking code.


Exactly. That's the point. It is beneficial only when you have a huge number of requests relative to the number of platform threads.

The idea is that you can map a million of these virtual threads on to a small number of platform threads, and the JVM will schedule the work for you, to achieve maximum throughput without needing to write any complicated code. Virtual threads are for high throughput on concurrent blocking requests.

EDIT: the sequential calls to "callRemote" still process in sequence, blocking on each call. Don't get confused there. But the overall HTTP request itself is running on a virtual thread and does not block other HTTP requests while waiting for the callRemote invocations to complete.
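A standalone sketch of that model (the `callRemote` here is a hypothetical stand-in that just sleeps, not Nima's actual client): inside each simulated request the blocking calls still run in sequence, but one virtual thread per request lets the requests themselves overlap freely.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerRequestDemo {
    // Hypothetical stand-in for the blocking callRemote in the Nima snippet.
    static void callRemote() {
        try { Thread.sleep(100); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    // One virtual thread per "HTTP request". Within each request the three
    // callRemote calls still run sequentially (~300 ms per request), but the
    // requests overlap instead of queueing for a platform-thread pool.
    static long handle(int requests) {
        long start = System.nanoTime();
        try (ExecutorService server = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                server.submit(() -> {
                    for (int j = 0; j < 3; j++) callRemote();
                });
            }
        } // close() waits for all submitted tasks to finish
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("1,000 requests handled in " + handle(1_000) + " ms");
    }
}
```

Sequentially, 1,000 requests at ~300 ms each would take about five minutes; with a virtual thread per request the wall time stays close to that of a single request.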


> EDIT: the sequential calls to "callRemote" still process in sequence, blocking on each call. Don't get confused there. But the overall HTTP request itself is running on a virtual thread and does not block other HTTP requests while waiting for the callRemote invocations to complete.

Yeah I got that. But then the use case seems to be much narrower, because not every scenario is one where many concurrent requests come in at the same time. If you write non-blocking code you'll get high throughput no matter the number of requests. Ok, maybe non-blocking code is harder to write (and I don't think it's actually that hard if you use, say, Kotlin coroutines), but honestly this seems to me like something that developers should learn eventually anyway.

Or maybe it's me being weird. But I remember 10 years ago when node.js was being hyped for being "fast" because it was "async" and now suddenly using threads and blocking operations is again all the rage now.


The difference now is that it's implemented in the JVM instead of a library / framework. It is easier, simpler, and probably more efficient. You can get higher throughput from existing code with minor refactoring.

  Thread thread = Thread.ofVirtual().start(() -> System.out.println("Hello"));
  thread.join();


Also there are some traceability issues involved in using asynchronous APIs: "In the asynchronous style, each stage of a request might execute on a different thread, and every thread runs stages belonging to different requests in an interleaved fashion. This has deep implications for understanding program behavior: Stack traces provide no usable context, debuggers cannot step through request-handling logic, and profilers cannot associate an operation's cost with its caller. "

https://openjdk.org/jeps/444


In theory an async-aware runtime can stitch together a coherent and useful backtrace, but in practice most legacy tooling won’t :(


Collecting those potentially very expensive stack traces, especially in highly concurrent environments with high throughput requirements, would be great for debugging but would probably kill the performance of the system. Still, it would be nice to have as an option, even if only as a very heavy hammer.


I'm not saying that virtual threads aren't a good thing (other runtimes than the JVM have had green threads for a very long time now), but that it doesn't seem like such a paradigm shift. It's still threads (with all the benefits and drawbacks), it's just now that they have less overhead.

Presumably I could just keep an existing server written in Spring MVC or a similar technology and just wait for the underlying container to support virtual threads to get the same benefits. I believe Jetty already does support them. So why would I need a new framework?


It removes the need to write async code in many cases. It is a more straightforward and efficient way to accomplish what we have already been doing for years on the JVM.

You don't need to switch frameworks.

https://spring.io/blog/2022/10/11/embracing-virtual-threads


You certainly could make the callRemote calls in parallel, and there are easy and safe ways to do that (good old CompletableFuture or the new structured concurrency stuff). But doing that or not is completely independent of using Nima, so I think here they're just showing some very simple code, because this is an example of using Nima, not an example of writing concurrent code in general.
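For illustration, a hedged sketch of the CompletableFuture route on a virtual-thread executor; the `callRemote` below is a hypothetical stand-in for the example's client, not Nima's real API.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class FanOutDemo {
    // Hypothetical stand-in for the blocking callRemote in the example.
    static String callRemote(int id) {
        try { Thread.sleep(100); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "response-" + id;
    }

    // Fan the blocking calls out, one virtual thread each, then join them all.
    static List<String> fanOut(int n) {
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            List<CompletableFuture<String>> futures = IntStream.range(0, n)
                .mapToObj(i -> CompletableFuture.supplyAsync(() -> callRemote(i), vt))
                .toList();
            // join() blocks cheaply when this runs on a virtual thread.
            return futures.stream().map(CompletableFuture::join).toList();
        }
    }

    public static void main(String[] args) {
        System.out.println(fanOut(5));
    }
}
```

Whether to fan out like this is a per-handler decision; the framework's example stays sequential because the calls there were presumably meant to be simple, not concurrent.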


Because concurrent calls are confusing and create bugs.

The point this train wreck of an example is trying to make (and failing) is that it's fine to write sequential blocking code in your request handlers. They can take as long as they want ... seconds, minutes, days .... because the handler threads are now an infinite resource as far as the JVM is concerned.


"callRemote" simulates a blocking outbound http-call.


I'm not well versed in the capabilities of virtual threads. However, both the Servlet and the JAX-RS models are "thread agnostic" in that while we "know" we're running on a thread, the API doesn't expose it directly.

So, if something like Tomcat "simply" switched its internal threading model from the current model to the virtual one, would the applications know any different? Could you drag and drop a JAX-RS service or a Java Servlet into a container running on virtual threads, or would you rapidly run into some leaky wall?

Seems like a virtual thread model of Tomcat would be quite useful in some scenarios.


It definitely would. I - for one - won't be rewriting our 20+ year old servlet app in anything else but servlets, because it has proven to be an exceptionally stable environment. More than virtual threads for Tomcat requests, I'd love cheap, structured threads to be run from within a request, eg to access the database while I'm reading a webservice.


JDK 21 has structured concurrency. It works with virtual threads, so the main task will be moved aside (off the OS thread) until the subtasks finish.

https://www.infoq.com/news/2023/06/structured-concurrency-jd...
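The StructuredTaskScope API itself is still a preview in JDK 21, but its shape — fork subtasks, block the parent until they finish, keep results scoped — can be approximated with stable APIs. A sketch using `invokeAll` on a virtual-thread executor; the database read and web-service call are simulated with sleeps:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubtasksDemo {
    // Fork two subtasks -- e.g. a database read and a web-service call,
    // both simulated here -- and block the parent (cheaply, if it is a
    // virtual thread) until both finish. Nothing outlives this method,
    // which is the structured-concurrency shape the comment asks about.
    static String fetchBoth() throws Exception {
        try (ExecutorService scope = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> results = scope.invokeAll(List.<Callable<String>>of(
                () -> { Thread.sleep(100); return "db row"; },
                () -> { Thread.sleep(100); return "ws payload"; }
            ));
            return results.get(0).get() + " + " + results.get(1).get();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchBoth()); // prints "db row + ws payload"
    }
}
```

The real StructuredTaskScope adds cancellation policies (shutdown-on-failure, shutdown-on-success) on top of this basic fork/join shape.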


Yes, that's what I was referring to.


In order to use those things, you have to write annotations.

With Nima, you just write the code to do the thing.

It's a huge difference.

If you have the wisdom to see how toxic annotation-driven programming in Java is, you will like Nima.

Helidon Nima is more comparable to building services in golang. It's more like Vert.x but with a serial API.


There are ways of writing Spring without annotations (or XML config): https://blog.frankel.ch/annotation-free-spring/

Granted, they may not be used much, but if you have an existing application it's probably easier to migrate away from the annotations than to switch to a different framework altogether.


Counter-example: https://javalin.io/ uses Servlets, and seems to be doing quite fine without annotations.


Servlets existed before annotations, so no, you are wrong.


I am looking forward to trying Nima. The blocker for me is IntelliJ supporting Java 21, which i believe will be this winter [1]; i don't absolutely need IntelliJ, but i am very lazy.

At the moment, my apps are using either the JDK HttpServer, which is easy to use, but of questionable robustness and lacking websocket support, or a Netty-based server, which works very well and supports websockets, but is often awkward to program against because it's asynchronous. Nima should combine a nice simple synchronous API with a fairly luxurious feature set and good scalability.

Of course, it will turn out there are things terribly wrong with it, because this is real life. But for now i'm optimistic!

[1] https://intellij-support.jetbrains.com/hc/en-us/community/po...


Non-MicroProfile Helidon uses a reactive interface for most things, which lends itself well to async operations. It's a very nice, simple and powerful abstraction that makes async relatively easy. IMO anything webserver or client should be async when virtual threads aren't available, due to Java's tendency to quickly build up a ton of threads when you use the one-thread-per-request model. That being said, virtual threads do make that model viable at scale, so reactive async probably isn't as necessary.


Reactive APIs are horrible, and precisely what I'm trying to avoid.


Helidon 4 is a massive refactoring to support Java 21 virtual threads from the ground up, to the point that most of the reactive stuff has been removed and replaced with plain sync APIs.


I’ve had no problems scaling jetty to 400-500 qps per node on small thread pools.


Jetty can be used with virtual threads: https://webtide.com/jetty-12-virtual-threads-support/


As soon as you start talking thread pools, you're talking about async task dispatching at some level, even if it's abstracted away at the API level. What I'm talking about is using a virtual thread, not event-driven shuttling of stateful objects between threads via task queues, to complete a request from start to finish.


The main problem I have with reactive programming in Java, at least with Project Reactor, which is the only one I've used, is the cryptic stack traces. You can regain some information by enabling runtime instrumentation in development, but it has too much of a performance hit to be used in production. It doesn't produce much better stack traces either. Trying to read a profiler flamegraph is nigh impossible for reactive code. For the debugger, you need to manually insert breakpoints in each step of the chain, because otherwise you'll just be looking around the reactive library's internal code.

The other problem I have is that as soon as you need to do anything blocking (e.g. talking to SQLite or RocksDB over JNI), the whole thing falls apart in terms of threading. Doing an HTTP request somewhere in a reactive chain that does some blocking operations - well shit, now you're blocking the reactor-http-netty-* threads, which are a fixed pool, so you're worse off than with thread-per-request at that point. Or even just using Caffeine cache, or any caching library really, but Caffeine is one of the few that prevents async stampedes - well shit, now you're running on the ForkJoin pool during cache misses, but "reactive" threads on cache hits. You can see how this quickly becomes a tangled mess of thread-hopping and avoiding blocking event loops, and it's extremely complicated to get this right with how subscribeOn/publishOn works and how the profiler can't tell you which thread runs what code because the stack traces are filled with garbage.

You also opt out of a lot of what Java offers in terms of synchronization. Using the synchronized keyword is a big no-no; the JVM can park the thread while waiting for the lock. Now that thread cannot schedule other tasks just because that one task ran into lock contention, and it's highly likely a fixed pool of threads. You're basically left with CAS, AtomicLong/AtomicBoolean/etc, and that's it. I've even seen BlockHound, Project Reactor's tool to help you find if you are blocking somewhere, trigger "Blocking call to LockSupport.park()!" during a ConcurrentHashMap lookup. Add to this that most caching libraries are implemented on top of ConcurrentHashMap. Getting async right in reactive code has a huge mental overhead, and you need intricate knowledge of the libraries you depend on, the platform you run on, and what it is that actually enables async in the kernel, and where it is unavailable. Good example of this: that UUID library you use uses a CSPRNG. Have you set the JVM flag to read it from the non-blocking /dev/urandom rather than the default /dev/random on Linux?

I think reactive programming in Java is decent as long as all you're really doing is accepting/performing HTTP requests and talking to a database that has a reactive client connector. In essence, anything that strictly deals with networking, since that's all that can really be made truly async currently, thanks to epoll/kqueue/iocp on linux/mac/windows respectively. Hopefully io_uring will broaden that to file IO as well eventually, but unfortunately that's linux-only for now.

Contrast this with Go's scheduler that just takes care of this for you - no need to think about it. Or Rust + Tokio, where it is explicit with tokio::spawn_blocking and offering async counterparts to std Mutex, RWLock, Channels, etc that doesn't block the current thread on lock contention.


not to distract from your valid points but, when used properly, Caffeine + Reactor can work together really nicely [1].

[1]: https://github.com/ben-manes/caffeine/blob/master/examples/c...


I've been using intellij to write code that uses a Java 21 or 22 dev build for months. Do you perhaps mean support for running the IDE itself on Java 21?


IntelliJ doesn’t have support for Java 21 syntax features, right? I tried running a dev build, but I couldn’t get it to work.


Use the Language Level 20 (preview), or if even that doesn't cover what you need, use "Experimental".


The IntelliJ Early Access Program will be out sometime before then, if you’re willing to go for a potentially unstable IDE experience.


I feel like i have a sufficiently unstable IDE experience using the general availability releases!


> The blocker for me is IntelliJ supporting Java 21

Pretty sure this works with Java 20 with the Loom preview enabled.


This is one of the things that bother me about Java; IDE dependency.


IDE productivity started with Smalltalk and Lisp Machines, and was adopted by C++, Visual Basic and Delphi, among several 4GLs, several years before Java was invented.

Everyone is free to use Java in vi with make, if they feel happy doing so.

In fact, there were hardly any Java IDEs when the language was released in 1996; they were quickly provided by Smalltalk, Delphi and C++ vendors.


Java is somewhat unique in how annoying the typical project is to work on without an IDE.


All languages are annoying to use without IDEs, unless we are talking about toy implementations.

I became an XEmacs user in 1995, because everything else in UNIX just sucked in comparison with PC, Mac OS and Amiga IDEs of the time.


I've worked on plenty of projects where I can navigate around in a pretty dumb text editor and run "make" and things mostly work. In Java land most things are several folders deep and things are injected in from places I don't understand without tooling to show me what is going on. I mean, have you tried using jdb? It basically exists as an advertisement for an integrated debugger.


That is what many call gdb as well, so it isn't far off.

As for messy code navigation, I have similar experiences in large C codebases, so maybe C is also unusable without IDEs.


Yeah, no.


Yeah, most surely yes, especially with offshoring stuff.


No, you’re being facetious. GDB and jdb are not comparable at all. jdb is not meant for actual use. This becomes immediately obvious if you spend like ten seconds with it. There’s no readline support, no tab completion, no fancy breakpoints, poor expression evaluation…I can go on and on. GDB has a learning curve but is unquestionably a professional tool that sees heavy use. Similarly, people can make all sorts of spaghetti projects with C(++), but you can also make very nice ones. With Java the state of the art basically requires you to juggle a classpath at all times. You surely know this: all the tools are designed to be driven by automation. Why are you claiming otherwise?


It's not so much that you can't do Java without an IDE—Java is verbose in part because it doesn't assume that you have an IDE to give you context—it's that IntelliJ is such a good IDE that it's hard to go back to weaker tooling once you've become comfortable with it.

All my co-workers use VS Code for TypeScript, but I feel crippled when I can't use WebStorm. Not because I can't program without it, but because I'm missing a powerful force multiplier.


I used to think that WebStorm was terrible for refactoring; but somehow it's still very far ahead of other editors like vscode, and vscode itself is very far ahead of almost everything else too.


WebStorm is worse than IntelliJ Java, but that's because TypeScript is much harder to statically analyze. There are fewer safe refactorings in a language that has so much flexibility.


That's because Java IDEs are ridiculously powerful. When I work in vscode and Python, I am always like: this is so much easier when I am using Java and IntelliJ.

It is not mere IDE Dependency. It is IDE Supremacy. Java is the leader of the IDE master race - the other PL IDE's need to squint to see how far ahead Java IDE's are.


This is a fair point.

I've seen many frameworks shared in hackernews. But rarely do I see anyone commenting on their IDE's capability to be able to use that framework for go-lang, rust, c, c++, etc.

For example, Rust tokio 1.0:

https://news.ycombinator.com/item?id=25520353

I don't see anyone mentioning an IDE there.


Because they tend to be on the stone age of tooling.

In what concerns C++, have a look at VCL, Firemonkey, Qt, Unreal, Godot, DirectX, Metal,...

All Apple frameworks for Objective-C and Swift.


> It is not mere IDE Dependency. It is IDE Supremacy.

Just like how our current civilization is dependent on industrialized agriculture and distribution systems.

We cannot imagine how it would be possible to live without those functions. In a way, yes, it is "supremacy", but only if you have access.


While true, Smalltalk, Common Lisp and .NET are also part of the party.


No it is straightforward to setup a Maven project manually and use Vim if that's your preference.


But then they complain they had to manually type an import statement ... before going back to opining on how bad IDEs are.

I find the whole notion of software engineers rejecting software applications being useful as a concept while it being the primary focus of their own profession quite fascinating.


If a developer does not want to try this framework because an IDE does not yet support a particular version of java, then the situation has gone beyond an IDE being just useful.

Personally I use various IDEs, light weight IDE/Editors and even vi/vim depending on the language & situation.

Think of it this way:

If I asked my team to develop a project/POC with this framework and they came back to me saying that they have to wait for (or prefer to wait for, or whatever) jet brains to add java 21 support to do this, I would not be very happy.


JDK21 hasn't even hit general availability yet according to its own schedule [0]. It seems a little impatient to get upset that tooling isn't 100% there before it's even released. Naturally tool vendors aren't going to release their official support until they can test against the final version. This indeed is part of the maturity people do like about the Java ecosystem.

[0] https://openjdk.org/projects/jdk/21/


So you're saying that you are unhappy that Jetbrains doesn't officially support JDK21 yet? Cool story bro. It's a tool preference. Same deal with any other language. Nothing is preventing your developers writing code for JDK21 today, even with the Jetbrains IDE. You can write and compile JDK21 projects and ignore any related highlighted syntax/semantic errors.


Good job on distorting my comments. Hopefully the developer who made the original comment will read your recommendations and will decide to give this framework a try.


I'm saying your comments basically boiled down to that. You are unhappy that developers might prefer a tool that doesn't officially support newer features of a language, even though it doesn't really prevent them developing with the new features using the same or different IDE or a text editor or whatever. Same deal with C# or C++ and every other language supported by IDE's, hence C.S.B. This is an issue with programming in general, not Java specific.


There are some weirdly hair-shirt beliefs about development practices in this industry.


If a developer says "The blocker for me is IntelliJ supporting Java 21", clearly there's an IDE dependency. Powerful enough for a developer not to adopt a new framework.


It's entirely possible to write Java without an IDE. I've done it, and one of my colleagues has been using VSCode quite happily (we've bullied him into stopping because it doesn't have a formatter and he keeps checking in wonky code - but that's another story).

But writing any language is much more productive with a good IDE. For me, the added value in using Java 21 now rather than in a few months is not enough to outweigh giving up the use of an IDE.



> keeps checking in wonky code

Time to add a format check to pre-merge CI.


Java 21 is expected to be released in two months....

That's being a bit entitled regarding support of preview features.

If said developer is savvy enough to use a preview SDK, they should be able to cope with the downsides of lacking all the nice support.


This is a killer "app" for Java 21. There's now little reason to choose Kotlin or some other language to build efficient API's etc.


I write both Kotlin and Java but Kotlin is my go to. Kotlin's when statements alone are a reason to use it. Then there's the null safety, extension functions, built in lazy initialization, and smart casts. Kotlin is just a joy to write in.


Plus...Kotlin is compatible with Java, so even a small improvement can still be reason enough to use it


Good point. But getting Java devs (ie the rest of my team) to join in on the Kotlin fun has proven difficult.


Was there ever? What made Kotlin fundamentally different in that respect?

Kotlin is great for fixing some legacy design oopsies (Java really should have been smarter about null, and it can't be fixed without breaking legacy code), but I wouldn't expect it to do something that was impossible or infeasible in Java.


Yes, from a pure functionality perspective it's really down to null handling at this point. But Kotlin has a lot of nice syntactic sugar still. Same with Groovy.


> it's really down to null handling at this point.

and reified generics, if you care about that sort of thing.

Plus immutability/final by default, the stdlib being a joy to use, etc.


We all know how Groovy ended.

Being syntax sugar isn't enough to keep a language going.


What do you mean, how Groovy ended? It's still going strong with regular releases and continues to keep up with new Java features! It's a pretty nice language if you don't mind dynamic typing, which has its uses.


No one cares that it exists beyond Gradle scripts; Gradle saved the language from joining the fate of BeanShell, jTcl, Jython, ....

Long are the days where it was proposed to have parity alongside Java in Java EE, and Spring used to talk about it as it does with Kotlin nowadays.

Or when Grails was a common subject of talk proposals at German JUG meetings.


Well I care a lot that it exists. And many other people I know do as well. Just because you don't seem to like it, you shouldn't imagine everyone else is like you.

Maybe Grails is no longer used as much (like Rails itself), but Groovy found other usages since then, like https://spockframework.org/ and Jenkins pipelines (https://www.jenkins.io/doc/book/pipeline/syntax/). It's not going anywhere, and I see no reason for anyone to be upset about it.


No one is upset, it's only a reality check on how guest languages eventually fade away as the underlying platform and the related "system" language evolve.

I never saw Spock being used outside Grails projects, and Jenkins seems to become less and less relevant in modern cloud pipelines.


There are many solid seeming ones that don't show those signs: Elixir, TypeScript, Clojure, C++, Kotlin etc. I think enumerating short lived languages is not representative since programming languages generally have very high share of short-lived/abandoned ones.

But it's not necessarily a "don't go there" sign that there's a shorter average lifespan. System languages can't really go away unless the whole platform can and are doomed to be a bit clunky. Picking a language from the category that might only have a 15-year heyday is fine for a lot of apps if it's otherwise better than the system language.


> guest languages eventually fade away

I actually think Groovy sits a bit different to other JVM languages in this respect, because it never was at its core, pitched as an alternative to Java (for a brief moment it was considered in that light, but that was mainly because Java itself became so stagnated). They aren't competing to be the same thing the way other JVM languages are - Groovy is trying to be a really good scripting language and Java is trying to be a really good application language. Similar to how Clojure will probably never be popular but it will probably also never go away.


You should go out more. Spock is really popular, and Jenkins is still extremely relevant, quite ridiculous to say otherwise.


Coroutines are one of the most prominent features of Kotlin, so it does make sense to bring this up in the discussion.

OTOH, I'm unconvinced that virtual threads make coroutines, reactive APIs etc. obsolete. Those are about structured concurrency, and with Project Loom, the structured concurrency proposal is still in the incubating stage.


SC will be a preview level API in Java 21: https://openjdk.org/jeps/453


The magic of Kotlin's structured concurrency is that it "just works" without you having to think about it, and it's difficult to screw up. This looks a fair bit more involved, though I'm excited for it nonetheless.


Oooh, this looks promising. Seeing Helidon Nima's take on Java microservices with virtual threads is certainly a different direction. Way back, while working with a Java microservices codebase, we were stuck with OJ (ObscuraJ; to be honest I'm not sure if it was publicly available or internal, it was so many years ago) and that was pure hell. The configuration overhead, especially the cumbersome dynamic routing setup, and the layers upon layers of indirection and dependency injection were a headache. Nima's approach piques my interest, albeit with a bit of caution; I'll need to dig into the source more.



I'll take Scala + Akka over this. Underrated multithreading framework to say the least.


Of course, this is not fully comparable to Akka, but you might find this interesting: https://github.com/ebarlas/game-of-life-csp


What problem does this solve?


Taking advantage of a new Java feature for pretty much what the feature is designed for.


Was an entire framework needed for that? What are the chances that none of the current solutions will add support for this?


I clicked on "see the code" and got: "404 - page not found. The main branch of helidon does not contain the path nima."


Yep and it says

> The technology preview is now available with Helidon 4.0.0-ALPHA5. We will continue releasing previews as the development evolves.

and in the default branch `helidon-3.x` there's no `nima` path either; same for the `helidon-4.x` branch.


Great project but the name is very unfortunate for Chinese speakers.


Care to elaborate?


“Nima” could be read as “your momma” in mandarin


It's a weird name, because it's a transliteration of the Greek word for "thread", which would conventionally be "nema" in English (as in nematode worms, nematocysts, Treponema, etc).


It's the most common Chinese swear word, "your mother" (你妈).


“Zip” is penis in Arabic, so I remember at university our female colleagues would read it as an abbreviation “ZIP” zed-eye-pee to avoid the embarrassment!



