Hacker News
Java JEP 461: Stream Gatherers (openjdk.org)
127 points by haspok on Nov 3, 2023 | 169 comments


A lot of comments here focus on the readability, conciseness, and expressiveness of Java streams compared to other languages. IMO they are missing the point and are just reiterating the same complaints everyone has about the Java language in x different ways.

Java streams bring real benefits compared to other mature languages. My favorite is that sequential streams can be efficiently parallelized with a single operation, .parallel(), and sequentialized back with .sequential(), on any stream, without manually configuring a single knob (although you certainly can); I am unaware of an equivalent in any other mature language. This lets operations such as .collect() and its mutable reductions leverage multiple threads for effectively zero additional programming time.

edit: A lot of people are focusing on my favorite feature of parallelizing or serializing a stream with a single command, which apparently you can also do in C#; that was just an example, guys. Other cool things you can do natively with Java streams now include leveraging virtual threads (take that, C#), using them asynchronously with completable futures, defining how elements are accessed/gathered in streams via spliterators, etc. Streams in Java are very composable (not just as in composition, but as in utility), and enable leveraging nearly every other part of the language natively. In other mature languages, streams are rigid and feel non-composable. I'm not saying these things are impossible in other languages, but in Java, streams feel like a first-class citizen.
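The one-call claim above can be seen in a minimal sketch (class name is illustrative): the same reduction runs on one thread or on the common fork-join pool depending on a single method call.

```java
import java.util.stream.IntStream;

public class ParallelToggle {
    public static void main(String[] args) {
        // Same pipeline, same result; only the execution strategy differs.
        long sequential = IntStream.rangeClosed(1, 1_000_000)
                .asLongStream()
                .sum();
        long parallel = IntStream.rangeClosed(1, 1_000_000)
                .parallel()          // flip to the common fork-join pool
                .asLongStream()
                .sum();
        System.out.println(sequential + " == " + parallel);
    }
}
```

Because sum() is an associative reduction, the parallel and sequential runs produce the same value.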


> My favorite is that sequential streams can be efficiently parallelized with a single operation, .parallel(), and sequentialized back, .sequential()

That's not actually true. .parallel and .sequential set a state flag for the entire stream. A stream that is opened, parallelized, then sequentialized will actually just execute sequentially. [1]

[1]: https://docs.oracle.com/en/java/javase/14/docs/api/java.base...
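The flag behavior is easy to observe with isParallel(): whichever of parallel()/sequential() is called last before the terminal operation decides the mode for the whole pipeline. A minimal sketch (class name is illustrative):

```java
import java.util.stream.Stream;

public class StreamMode {
    public static void main(String[] args) {
        // parallel()/sequential() do not split the pipeline into a parallel
        // part and a sequential part; they flip one flag for the whole
        // pipeline, and the last call wins.
        boolean a = Stream.of(1, 2, 3).parallel().sequential().isParallel();
        boolean b = Stream.of(1, 2, 3).sequential().parallel().isParallel();
        System.out.println(a + " " + b); // false true
    }
}
```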


In practice, such frivolous parallelism is frowned upon and rarely useful.


+1. Almost all uses of parallel streams I’ve seen in Java caused reliability or performance issues.

Not for any reason other than that the people using it chose to use the feature before learning about it.


One of the most important comments in this thread.


That's a common feature nowadays, not unique to Java.

C# has had `enumerable.AsParallel()` since .NET 4.0 (2010). Rust has rayon `input.par_iter()`.


> Rust has rayon `input.par_iter()`

rayon is a 3rd party library though, not part of the language itself, compared to the Java streams discussed here.

With C#, I'm not sure whether .NET counts as a library or not. Doesn't all C# tooling ship with .NET by default?


That's just the way Rust does things, like with the `rand` crate. Lean stdlib, easy dependency management. For the purposes of "does Rust do X", popular crates should be included in that consideration.


Ok, so then what is this discussion about? Java Streams have existed basically forever as 3rd party libraries, but that doesn't matter here in our conversation about Java language features, where 3rd party libraries somehow change what the language is...


I guess it's nice for Java that they did this, but the OP's assertion that no other language handles this as nicely doesn't really stand. For example, this quote:

> In other mature languages, streams are very rigid and feel non-composable

I'd like to see some motivating examples that make me say to myself, "Yeah, Java's got a neat trick there".


Whether it ships with it or not is, however, irrelevant. As long as the library is sufficiently popular and accepted by the community, it can be seen as an advantage of a particular platform/language.

In this regard, while it is nice that PLINQ and various `Parallel`-related APIs come out of the box in C#, it is a marginal difference from Rust, where building parallel loops is `cargo add rayon` away.


If it doesn't ship with the default distribution of the language, then there's the risk that multiple mutually incompatible libraries emerge or that the adoption by the ecosystem is spotty. Best example: the async runtime mess in the Rust ecosystem.


There is no async runtime mess. Tokio is the preferred one, and given Rust's goals it is difficult to do better (pluggable async executors and the degree of flexibility such an abstraction offers: running both on big servers and with bare cooperative multitasking on microcontrollers).

In addition, multiple versions of transitive dependencies can coexist in Rust without conflicting with each other, so there is no such risk.


Quite so. But I am under the impression that not every library supports Tokio as a runtime. And while it might be possible to run multiple runtimes in the same process, or use compatibility wrappers, it sounds like trouble.


It is definitely a mess, given the incompatible semantics that make it a herculean effort to write runtime-agnostic async libraries.


> Whether it ships with it or not is, however, irrelevant

When talking about projects in the wild, sure. But if we're specifically talking about language features, then "something being a part of the language" is wildly different than "installable 3rd party library".


This mentality is the exact cause of both the .NET and JVM worlds being worse at enjoying the OSS-first benefits of their ecosystems than Rust or, God forbid, Go, where it is expected to import widely known, good packages for solving a particular task.

Sometimes it is also scar tissue from dealing with NIH syndrome - at least you can use the out-of-the-box tools for combating it, but the NIH itself is the actual source of people being resistant to adopting proven and good solutions developed by the community.


On the contrary, it means I can be sure that wherever there is an implementation, the features I depend on are available out of the box, and I don't depend on someone's late nights to add support for a given platform.

It also means that I don't need to download the whole Internet for basic features.


I'm not sure if I understand your point, but I haven't seen any NIH at the places where I have worked. We have been encouraged to use popular stable libraries when possible.

Java has several 3rd party dependencies that are expected to just be there in projects where I have worked. Examples are Lombok, Apache Commons and SLF4J. These have become so widely used, that I have stopped thinking about them as external dependencies.

Guava used to be more popular too, but now that Java has Optional and Streams, I don't see it as often.


It certainly can, if you want to be technical; it is the Base Class Library, BCL for short.

.NET is the only language ecosystem that is comparable to Java, as they are a kind of yin/yang between themselves.


Does C# allow you to convert a sequential stream into a parallel stream, or do streams have to be parallelized when initiated? Genuinely asking, I do not know C#


That's literally what AsParallel() does.


As much as I like Java, I don't think this is a good point. Even C lets you do this[1]

[1] https://en.wikipedia.org/wiki/OpenMP


My point was that it is built into Java streams and requires 0 extra configuration or work on the programmer's end. Not that you can't parallelize in other languages.


Well yeah, this is true for OpenMP as well.

In my experience, parallel streams rarely deliver a substantial performance boost. There are cases where they do, but it's more the exception than the rule. The overhead from the streams API, along with the synchronization penalties, means it mostly only makes sense for coarse-grained, I/O-laden operations.


In the entire time I worked at a web shop with a large Java backend I never found the parallelization to be necessary, and in many cases was outright dangerous. We just had it disabled on high-traffic services (thread limit set at 1) to prevent foot injuries.

Don't mean to detract from the point :-). I love how neat and elegant the feature is.


> We just had it disabled on high-traffic services (thread limit set at 1) to prevent foot injuries.

Can you please elaborate?


The default thread pool will use all available cores on the machine. With multiple developers arbitrarily using parallel streams in business logic because it "sounds faster" there is more risk of disrupting concurrent requests.


> The default thread pool will use all available cores on the machine

This is actually a bit of a headache on many-core machines for another reason.

e.g. my production machine has 128 cores, and carelessly parallelizing a memory-hungry task (not realizing it will run across 128 threads) risks allocating several hundred gigabytes of RAM, and may not even be faster given all the NUMA overhead when thrashing on all cores.

Have to be careful to always down-tune the common pool size to 32 or something.
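One way to do that down-tuning globally is the common-pool parallelism system property, which must be set before the common pool is first touched, so typically on the command line (the value 32 is just the example from the comment above; class name is illustrative):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        // Parallel streams default to ForkJoinPool.commonPool(), which is
        // sized from the number of available processors. To cap it globally,
        // start the JVM with e.g.:
        //
        //   java -Djava.util.concurrent.ForkJoinPool.common.parallelism=32 App
        //
        System.out.println("common pool parallelism: "
                + ForkJoinPool.getCommonPoolParallelism());
    }
}
```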


OpenMP is not a language? You can take any language and add libraries to achieve a specific goal. My point is that Java streams are first-class citizens compared to those of other mature programming languages, and that they enable leveraging other aspects of the Java ecosystem without running into the rough edges you will with OpenMP.

Also, that is definitely not the rule? If your problem is parallelizable by nature, parallelism provides massive speedups. The overhead from parallelizing a stream is minimal (effectively nonexistent) now if you leverage Java virtual threads (JDK 21). Additionally, most problems are not parallelizable, yes, but most solutions consist of individual steps, and there is a high likelihood that at least one of those steps will benefit from parallelism. Java streams can switch between .parallel() and .sequential() execution on the go. You almost certainly don't want to leverage stream parallelism for most I/O operations, unless you leverage Java's CompletableFuture and managed blocking (but any gains here are probably minimal anyway), but the point is that you can leverage them, because streams in Java are a first-class citizen.


OpenMP is a compiler flag away.

> Also, that is definitely not the rule? If your problem is parallelizable by nature, parallelism provides massive speedups. Overhead from parallelizing a stream is minimal (effectively nonexistent) now if you leverage Java virtual threads (JDK 21)

This is just not true at all. Not only do virtual threads cope very poorly with I/O other than network I/O, you still get memory barriers. Virtual threads make the memory overhead lower, since you don't need full separate stacks, and reduce the cost of spawning new threads, neither of which was ever an issue with streams since they use the common thread pool. Virtual threads don't significantly increase per-thread performance.


> Not only do virtual threads cope very poorly with I/O other than network I/O, you still get memory barriers.

Which is why I said you probably don't want to use the paradigm I outlined for I/O, but if you did, you would want to leverage Java's managed blockers and async CompletableFuture for those exact reasons.

> Virtual threads makes the memory overhead lower since you don't need full separate stacks and reduces the cost of spawning new threads, neither of which was ever an issue with streams since they use the common thread pool. Virtual threads don't significantly increase the per-thread performance.

Of course virtual threads don't significantly increase per-thread performance? They make the overhead of spawning multiple threads minimal-to-zero compared to native OS/platform threads, minimizing the cost of jumping from sequential streams to parallel streams. Also, parallel streams don't have to use the global fork-join pool; you can use your own fork-join pool? Which is possible in Java because, once again, streams are treated as a first-class citizen and can leverage nearly all other parts of the language efficiently and natively (although I will say Java's verbosity/boilerplate can suck if you want to leverage your own fork-join pool, but that's a widespread complaint about Java, not one specific to its streams).
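The custom-pool technique mentioned here relies on an implementation detail rather than a documented contract: a parallel stream started from inside a fork-join task runs its subtasks in that task's pool instead of the common pool. A minimal sketch (class name is illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class CustomPoolStream {
    public static void main(String[] args) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(4); // our own 4-thread pool
        try {
            // Subtasks of this parallel stream execute in `pool`,
            // not in ForkJoinPool.commonPool().
            long sum = pool.submit(
                    () -> IntStream.rangeClosed(1, 1_000).parallel().asLongStream().sum()
            ).get();
            System.out.println(sum); // 500500
        } finally {
            pool.shutdown();
        }
    }
}
```

This isolates the stream's CPU usage from other users of the common pool, at the cost of depending on undocumented behavior.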


>OpenMP is not a language?

Why does it matter that OpenMP is only a standard, rather than a language?

>my point is that Java streams are first-class citizens

They aren't. They were bolted on later and are quite cumbersome.

>without running into the rough edges you will with OpenMP.

I'm pretty sure it is easier to write fast parallel code with OpenMP than with Java streams. In fact, I am always surprised how well OpenMP works, when I get to use it.


Which is a completely separate language.


manjalyc said it is not a language.


#pragma omp target parallel for map(to:v1,v2) map(from:v3)

is sure as hell not part of the C specification.


It's part of the OpenMP specification. C sans OpenMP will just ignore it.


C# has had PLINQ (aka arr.AsParallel()) since time immemorial. It also works quite well at picking the right parallelization strategy and is very similar in use to Rust Rayon's .par_iter().


Scala has all of the benefits you are listing, plus more, and natively, with a more concise and composable syntax.


Yep, and I love writing Scala as much as I hate writing Java, but I wouldn't say its maturity is in the same league as Java, C/C++, etc.


A language doesn't just keep getting better and better with age. "Maturity" is about achieving a level of stability and quality in language features, tooling, runtime, library ecosystem, etc. Scala and Java share the same runtime and library ecosystem, and Scala is arguably more mature in its language features, since Java has been forced to add features (and incur extra complexity) playing catch-up to newer JVM languages (including Scala).

Both languages actually suffer from the age of their tooling: their standard build and dependency-management tools (Maven and sbt) are outdated and crufty, while newer languages such as Go and Rust have tooling that was built more recently, with the benefit of more recent experience.


One of the strengths of Scala is also a weakness. They aren't afraid to break things and make changes that are not backwards compatible. The Scala devs have been much more careful about breakage than in the past, so I think they have reached a good compromise between stability and evolution. The masterful execution of the Scala 3 upgrade is an example.

Java by contrast will never clean up the bad, inconsistent or obsolete cruft in the language. If it is really important for you to run ancient JARs on Java 21, then the Java approach is superior.


> My favorite is that sequential streams can be efficiently parallelized with a single operation, .parallel()

I would never use it, because I can't reason about what is going on under the hood, and whether performance will improve or dramatically degrade, since all the parallel machinery has significant overhead compared to vectorized single-threaded logic.


I actually really like Java streams; sure, they could be better, but they are extremely useful. The parallel() though is very bad, as it's too easy to ruin your entire app. It's backed by the common ForkJoinPool, which many have no control over, and if Java was unable to detect the CPU count, it could be set to unbounded.


How often is it used? I write a lot of Java, and I have never used this feature. For me, it made the streams implementation unbearably complex, to the point that I can't read its sources, for a feature that I will probably never use.

I personally wish they had never implemented this parallel feature. For me, the JDK would be better without it.


Starting to program with Java streams is weird, because you are using functional constructs in a language that historically had little notion of them. When I first started using them, it felt worthless when I could achieve the same thing in classic OOP faster (and often run it faster too). But after a while you get a feel for the fluid style of programming streams enable (and, imo, cleaner code). These days, with ChatGPT, it's probably a lot easier to get started.

That said, you should almost always write a stream thinking only sequentially first, then identify the steps that can benefit from .parallel() and parallelize only those steps. It's leveraging .parallel() efficiently that provides an advantage at run time, and why I tend to use it.


I guess I was unclear, but I wrote specifically about parallel streams. Of course I use ordinary streams on a daily basis. But using a parallel stream in a server application that processes dozens of other requests simultaneously and runs on a server with a dozen other applications (a very typical use case for Java) just makes very little sense, because the CPUs are already loaded and it'll just result in more context switches. I could imagine a use case for it (a very urgent request that must be completed at the expense of other requests and includes heavy collection processing), but I've yet to encounter one.


Yeah, I don't imagine native stream parallelism will help when CPUs are already loaded. Presumably you're using Spring or Rx, in which case you can probably leverage reactive streams and/or managed blockers, but that's really just taking advantage of async patterns and not necessarily parallelism. The only case where I could envision a concrete benefit is if you used your own fork-join pool leveraging virtual threads, instead of the global fork-join pool, to prevent platform threads from hogging the CPU, and then used reactive streams on top of the virtual threads. Although this would (theoretically) raise responsiveness, it would almost certainly come at the cost of throughput. All that is to say, parallelism generally only provides as much value as you have idle CPU cores.


Indeed, and some things become really hard to write because the operations are required to cope with parallel execution that I never want. Anyone doing serious parallelism is going to reach for another library anyway (Rx etc). Making streams parallel was a massive mistake and has just led to enormous accidental complexity. God how I wish they’d just added map/filter/reduce to the collection interfaces.


The Stream API is insanely useful with just serial execution alone. A nested for loop with random breaks (that over time will do some random side-effect here and there, making it completely unreadable mess) is much worse than the “pipeline-y” behavior of streams.


It is useful, but it also has weaknesses. For example, I’ve lost count of the number of times I’ve seen someone forget to close() an IO-based stream (e.g. Files.lines). But probably in 99% of cases you could get away with slurping the entire file into memory and returning a list. The streams API is optimised for the 1% of cases that need the additional complexity, at the expense of worse ergonomics for the common case.

I get why it ended up that way. I think it would have been difficult at that time to get traction for adding FP features just as a convenience. So they needed to do all the parallel stuff to justify it. But, as I said, anyone I see doing serious parallelism in Java is not using the streams API.
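For the close() pitfall mentioned above, the usual fix is try-with-resources, since Files.lines holds the underlying file handle until the stream is closed. A minimal sketch (class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class CountLines {
    // Files.lines keeps the file open until the stream is closed,
    // so the Stream itself is the resource to manage.
    static long nonEmptyLines(Path file) throws IOException {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.filter(line -> !line.isEmpty()).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, List.of("a", "", "b"));
        System.out.println(nonEmptyLines(tmp)); // 2
    }
}
```

Without the try-with-resources, the file descriptor leaks silently, which is exactly the bug class the comment describes.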


I use parallelization all the time. It's easily added later thanks to this feature, unlike say rewriting a code base from a single threaded language because you thought async constructs would be enough. Never making that mistake again.


Do you have an example or a link to examples that makes you say "In other mature languages, streams are very rigid and feel non-composable"? I'd be interested in seeing what advantages it has over something like Rayon.


Did I get here before AbstractFactoryBean meme? The famous class no one actually used?


The author of the JEP, Viktor Klang, held a nice talk about this at Devoxx. If you're interested, you can watch the talk here: https://www.youtube.com/watch?v=8fMFa6OqlY8


I really wish I could convince our leadership that these conferences are worth going to.

I’d love to be able to ask questions and learn first hand from the people developing these.


    long numberOfWords =
        Stream.of("the", "", "fox", "jumps", "over", "the", "", "dog")  // (1)
            .filter(Predicate.not(String::isEmpty))                     // (2)
            .collect(Collectors.counting());                            // (3)


    This programming style is both expressive and efficient. 

Err... I assume the author here is aware that it is neither, but perhaps struggles for something else to say in motivating this API. It's certainly an amusing remark.

I'd prefer either an imperative,

    for(w <- words) if(!w.empty) count++
or, an ergonomic dsl,

    words.filter( w -> !w.empty ).count
The strange namespacing of functional primitives, glued into an API via 'gather', 'collect' etc., is neither expressive nor efficient. Though I concede it may be a necessary route for a JEP.


That example could be easily written like this:

    var num = words.stream().filter(w -> !w.isEmpty()).count();
This is shorter and closer to what you're expecting. The reason it's written the long way is because of the labels. The prior sentence is:

A stream pipeline consists of three parts: a source of elements, any number of intermediate operations, and a terminal operation. For example: ...

so the goal here is to make the different components of the pipeline clear conceptually for the explanation of what's to come, not to write the shortest code possible. It therefore uses code that makes the underlying abstractions explicit rather than the sugar that avoids typing.

The reference to "expressive and efficient" meanwhile is likely about the fact that Java can parallelize streams in many cases (efficient). As for "expressive", well, that's somewhat debatable but for sure there are some cases where the functional pipeline style is easier to read and more expressive than the imperative style. In my own code I use both because indeed especially for short operations over lists and the like, sometimes a for loop is just clearer to my eye. But it's a matter of taste.

BTW the same code written in Kotlin:

    val num = words.filter { !it.isEmpty() }.count()
Very similar. You may slightly prefer the syntax or dislike the "magical" auto-naming of the lambda parameter, I've seen it argued both ways.

----

https://docs.oracle.com/en/java/javase/21/docs/api/java.base...()


Kotlin has lots of nice features for this kind of stuff. Including extension functions and it comes with lots of those for existing Java APIs. And a nice thing with extension functions is that they also work against interfaces, generic types, etc. So, there is no need for a Stream.of in Kotlin. You just call .filter, .map, etc. on anything that you'd want to call that on that is a List, Map, Array, Flow, Sequence, whatever. Java Streams too probably but I haven't had a need for using those in five years.

As for doing things in parallel, that's where Kotlin co-routines shine.


I think Scala is hands down the best collections/stream library, I especially like their _ shorthands:

  val num = words.count(!_.isEmpty())


Understanding bird droppings like “!_.()” relies on quite some prerequisites.


When I was introducing a team to Scala I wrote everything in a more verbose style just to make it clear, e.g.

  val num = words.count(word => word.nonEmpty)
The "bird droppings" (particularly the clever uses of _ found across Scala) were hard for me to grasp when I first started working in the language. Now they're second nature, but I try to remain cognizant of future maintainers who may be coming in to make contributions or bug fixes without first marinating in the language.


! negates a boolean. _.isEmpty() is a lambda applying the isEmpty function to its string parameter. It’s just the two things together.


I know all this. But as I said, pulling it all together involves quite a number of concepts represented by idiosyncratic punctuation, much more than “count all non-empty elements”.


Then your team can write lambdas in a more verbose, but less cryptic style.


The nice thing is you can just write it like `x => !x.isEmpty` if you want


Or more succinctly

  val num = words.count(_.nonEmpty)


I think that is a horrifying crime against man and nature. If I wanted to write Perl, I would do so.


What is hard to understand about it? _ is universally used as a placeholder, and it is also usable in pattern matching. Scala can be used to make some perl-y <!! operators, but this is not like that at all, and makes for very readable, understandable code.


Also targeted at Java 22: https://openjdk.org/jeps/456 (Unnamed Variables & Patterns)


Same, I wish more languages had that shorthand


The kotlin example can be made even more concise:

  val num = words.count { it.isNotEmpty() }


Well, my comment was more about it being a bit funny as an opening example to be described as expressive and efficient -- given how overly namespaced and partitioned the presentation of the API is.

I can appreciate its essentially a desugared version of what you'd usually write.

The JEP doesnt need to be positioned to defend this API to outsiders, but the opening part appears to do that -- but then, imv, oddly offers this example in the course of doing so.


Note that Java also allows you to write that as

    words.filter(w -> !w.isEmpty()).count()
And they just end up calling the same methods. The author probably used "expressive" and "efficient" to refer to the underlying APIs for the fact that you can compose operations without fully evaluating the whole stream until the terminal one.
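That laziness is observable: nothing runs until the terminal operation, and a short-circuiting terminal such as findFirst pulls only as many elements as it needs. A minimal sketch (class name is illustrative):

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyStreams {
    public static void main(String[] args) {
        AtomicInteger pulled = new AtomicInteger();
        // peek records how many elements flow through the pipeline.
        // findFirst stops as soon as an element passes the filter,
        // so only 1 and 2 are ever pulled from the source.
        Optional<Integer> first = Stream.of(1, 2, 3, 4, 5)
                .peek(i -> pulled.incrementAndGet())
                .filter(i -> i % 2 == 0)
                .findFirst();
        System.out.println(first.get() + " after pulling " + pulled.get()); // 2 after pulling 2
    }
}
```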


In fairness, you are omitting the list instantiation, and collecting only a count rather than an assignment of results. Apples to apples, your alternative algorithm would be implemented like this:

    words.stream().filter(w -> !w.isEmpty()).count()
Which is hardly any longer, and arguably more readable intent with the English word "filter".


Streams are too complex for what they do, and they don't even parallelize well. Here is something that does roughly the same thing but that I think is way better:

See https://github.com/paulhoule/pidove

https://central.sonatype.com/artifact/com.ontology2/pidove


This is far less readable than Java streams to me.

  sum(filter(x->x%2==0,List.of(5,3,4,19,75,6)));
Streams are formatted linearly similar to Clojure threading macros.


The advantage of the static-import DSL is that it is more composable: anyone else can define operators that interoperate with my operators without the complexity of the scheme linked above, not to mention with more generality. This goes not just for the low-level operators like "filter" that you might want to supplement, but for the many more operators you can write that are implemented out of mine.

I prototyped an API that works in the same direction as streams by having something like

https://paulhoule.github.io/pidove/apidocs/com/ontology2/pid...

in that it wraps all Iterables returned by my methods and has not just the teardown facility but also all the operators attached as instance methods, which lets you write the chaining style you ask for, which I know is in demand.

If I was going to go any further on pidove it would have involved more use of code generation and this system

https://github.com/paulhoule/ferocity

which was supposed to be a code generator for writing code generators, and it could code generate stubs that would let you write expression trees in Java as S-expressions and build them up into methods and either compile the code to real Java source code or execute the methods by evaluating the expression tree in place.

Like Common Lisp, you can write syntactic macros in that metalanguage, because an Expression<Expression<X>> can be evaluated at compile time - one of quite a few concepts, like "quoting", that I encountered in that spike.

The idea was ferocity would get to the point where it synthesizes a more complete and perfect pidove.


> evaluation begins only when a terminal operation is invoked. In this example, line (1) creates a stream, but does not evaluate it, line (2) sets up an intermediate filter operation but still does not evaluate the stream, and finally the terminal collect operation on line (3) evaluates the entire stream pipeline.

It's quite clear, reading beyond just the code, that the original author is talking about _computational_ efficiency.

It feels like you missed the point — that streams are only evaluated lazily as needed — in order to poke fun at the (toy) example.

The Java code can (and in a real-world application would) be made more concise (e.g. via static imports), but the author is deliberately being more verbose to be more explicit about what's going on.


> It feels like you missed the point — that streams are only evaluated lazily as needed — in order to poke fun at the (toy) example.

GP may be closer to the point than you realise, even if unintentionally so. I mentioned Streams failing to be lazy in another comment, and predictably got down-votes rather than corrections.

So anyway...

https://bugs.java.com/bugdatabase/view_bug?bug_id=8079264 Submitted: 2015-05-01

https://bugs.java.com/bugdatabase/view_bug?bug_id=8149614 Submitted: 2016-02-10

https://bugs.java.com/bugdatabase/view_bug?bug_id=8155217 Submitted: 2016-04-25

https://bugs.java.com/bugdatabase/view_bug?bug_id=8189234 Submitted: 2017-10-11

https://bugs.java.com/bugdatabase/view_bug?bug_id=8196106 Submitted: 2018-01-24

https://bugs.java.com/bugdatabase/view_bug?bug_id=8229983 Submitted: 2019-08-21

https://bugs.java.com/bugdatabase/view_bug?bug_id=8267758 Submitted: 2021-05-25

I wonder if 2024 will be the year of the lazy Stream.


I think the point that you are making — that even if Streams are meant to be efficient on paper, but reality is different because of bugs in implementation — is an interesting one that moves on the discussion! Surprised to hear you were downvoted for it.

Personally, I don't think that's what the other comment was getting at though, as they were commenting on efficiency regards to syntax.


I hope that 2024 will be the year when people stop pointing to duplicate or fixed bugs...


Firstly, I put multiple bug reports there to show that 'real programmers' are running into real bugs in the wild. This is necessary because otherwise when I point out bugs, I'm dismissed as a "PL fan" whose ivory tower views don't line up with the mainstream. See elsewhere in the comments.

Secondly, if I click into my first link (2015), it says resolved 2015. And by 'resolved' it means 'duplicate but not fixed'. Because when I clicked into its duplicate, it wasn't 'resolved' until 2018. What did they resolve if tickets are still being filed in 2019 and 2021?


My understanding is that Stream internals are a very complex piece of software that can have multiple different bugs. By the way, I agree with you that eager versions of the Stream methods should also be available directly on the collections.


The last three are open, open and won't fix. Not so promising for the flatMap method.


Efficient compared to what? Previous Java: yes. C++: yes. C#: maybe not. Functional programming language X with two dozen language features supporting it: definitely not.

But this is the JDK Enhancement Proposal process, and within this language, AND given that this is a specification/documentation example, it is efficient.


Seems expressive and efficient to me.


Expressive, sure.

Efficient? Not so much. It's easy to find cases where performing an operation using the streams API comes at a 90% performance hit.


TSMT

Fingers crossed that the upcoming Valhalla project can improve the situation w.r.t. the number of allocations.

I've noticed a general trend where languages without support for proper compile-time metaprogramming (and the corresponding optimizations) often produce uglier code (despite being higher level) to avoid the performance hit. For example, the mutable Vec2D type in various physics-related libraries.

Last time I tested it was still faster to implement various string operations using Unsafe, than with standard APIs. I want to be able to trust my compiler/JIT.


Is it that fun to shit on Java with cherrypicked worst-case and unfair examples?

You could use any library's StringUtils.hasText (any serious app has one) to avoid the Predicate.not, and use static imports (StringUtils, Collectors) to avoid explicitly naming the classes containing those static methods...

Also, your example misses the string collection creation part.

Ultimately we should get, put on one line like your code:

  words.stream().filter(hasText).collect(counting());
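Filled out into a self-contained, runnable sketch (with a hypothetical `hasText` helper standing in for a library's `StringUtils.hasText`, and `count()` instead of `collect(counting())` since they're equivalent here):

```java
import java.util.List;

public class WordCount {
    // hypothetical stand-in for a library's StringUtils.hasText
    static boolean hasText(String s) {
        return s != null && !s.isBlank();
    }

    public static void main(String[] args) {
        List<String> words = List.of("alpha", " ", "", "beta");
        long count = words.stream().filter(WordCount::hasText).count();
        System.out.println(count); // 2
    }
}
```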


The kids hate Java, and that's ok. No sense arguing with them. I am happy a tool I use frequently is getting better. I am also happy there are other tools out there.


> Is it that fun to shit on Java with cherrypicked worst-case and unfair examples?

The person doing the shitting did not pick the examples.


If you look closely you’ll see that the given code is equivalent to your second example, though admittedly the Java syntax is a bit clumsy.


The do notation would be ideal here, but it's probably a bit too much to ask...


My favorite is XPath:

   let $numberOfWords := count($words[.])
or

   let $numberOfWords := $words[.] => count()


Looking forward to this JEP. This is, IMO, how Streams should have been implemented from the get go. All intermediate operations can now be expressed with Gatherers rather than having dedicated methods that can never cover all intermediate operations people come up with.


Streams are fine and Java is clearly evolving in a good direction, but what makes me tear my hair out is that they needed 7 years to add a toList() method to streams. This is so typical of the way the Java language evolved. Ergonomics always came last with Java, and if it's not clunky and painful to use at first, then the Java stewards don't want it. Maybe in another 7 years we will get filter() and map() directly on Collection.


Java has always preferred introducing features late over introducing something they later can't get rid of anymore. Whether something is truly useful can often be judged only in hindsight. Streams are in the standard library, after all.

To enable more rapid iteration, it would have been smarter to place it in a separate package of a `jdk.*` module (like `jdk.httpserver`) for which weaker compatibility guarantees apply. But that would completely divorce it from the other collection classes. Fixing that would require adding extension methods to Java.


Is it really a new feature to have the most traveled path be comfortable? I wouldn't agree. I think whoever is in charge of Java should be conservative and careful but what I would just wish is for them to give the developer experience some priority.


I'd rather they defer decisions until they are sure. Libraries came out immediately to help with verbosity complaints around streams. I'd rather each step they take be sure-footed.


I really like the idea but I don't understand the naming. Why "gatherer"? Doesn't "gather" mean more or less the same as "collect"?


I would say in computing gather usually appears together with scatter and they are used when one thing gets split up into several things (scatter) and then they get collected back together (gather) into one thing. Breaking up one big task into several smaller ones, sending them somewhere for processing, and then collecting the results back together to combine them into one final result would be the prototypical example of scatter gather operations. Also in SIMD processing you can come across scatter and gather operations, there they essentially break apart or assemble the data vectors in memory.
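The SIMD-style gather described above can be mimicked in plain Java to show the idea: pick elements out of one array at the positions named by another (the data and index values here are just illustrative).

```java
import java.util.Arrays;

public class GatherDemo {
    public static void main(String[] args) {
        // "gather": select elements of data at the positions given by indices
        int[] data = {10, 20, 30, 40};
        int[] indices = {3, 0, 2};
        int[] gathered = Arrays.stream(indices).map(i -> data[i]).toArray();
        System.out.println(Arrays.toString(gathered)); // [40, 10, 30]
    }
}
```

A "scatter" is the inverse: writing values out to positions given by an index array.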


And 'fold'. And 'reduce'.


For not having seriously touched Java in over 10 years, since Java 8, I was still able to follow along with this JEP. I suppose I could pick it up again if I had to. If my 401k runs out when I retire in 10-15 years, I will always know I can still code some shitty Java for some shitty company and make ends meet.


Precisely. That has been Java's design goal from the beginning. It's not flashy or exciting or quick. It's a language for writing code that will keep on cranking along for decades, and maintained by multiple generations of developers.

They resisted adding any significant features at all for many years. They did gradually add some elements from languages like Scala and Kotlin, once they had proven that those features made really significant improvements to real code rather than just looking good in isolation.

It's the Honda Civic of languages. It's not especially fun and it doesn't keep up with the times. You just get in and go so that you can get something done.


> It's the Honda Civic of languages

That’s Golang. Java 21 is Toyota: unmatched stability, continuity and bang-for-buck in a deceptively simple shell that hides some of the most advanced tech there is.


> It's not flashy or exciting or quick.

Java's performance is fine for the vast majority of applications. Similar languages perform about as well. The JVM is an incredible piece of technology.

> It's not especially fun

I love Java, even though I don't write it much anymore.

The ecosystem is incredibly mature. There are hundreds if not thousands of high quality libraries.


> It's the Honda Civic of languages.

A Honda Civic doesn't devour RAM.


It's not quick? Haha... clueless.


The example for finding suspicious temperature changes (btw, kelvin hasn't used the ° sign since 1968) is unnecessarily complex.

  List<List<Reading>> findSuspicious(Stream<Reading> source) {
    var suspicious = new ArrayList<List<Reading>>();
    Reading previous = null;
    for (Reading next : source.toList()) {
        if (previous != null && isSuspicious(previous, next))
            suspicious.add(List.of(previous, next));
        previous = next;
    }
    return suspicious;
  }
Doesn't look bad to me. Actually, the streaming example here is quite underwhelming. It still needs a separate check for the first "iteration" (window.size() == 2). While reading the classic for-loop example, I expected that this, especially, would be something the streaming approach handles better.
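For comparison, the pairwise sliding windows that example relies on can be built today with a small helper on current JDKs (a sketch, not the JEP's Gatherers API):

```java
import java.util.List;
import java.util.stream.IntStream;

public class SlidingWindow {
    // minimal sliding-window-of-2 over a list, the shape a windowSliding(2)
    // gatherer would produce
    static <T> List<List<T>> windows2(List<T> xs) {
        return IntStream.range(0, Math.max(0, xs.size() - 1))
                .mapToObj(i -> List.of(xs.get(i), xs.get(i + 1)))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(windows2(List.of(1, 2, 3, 4))); // [[1, 2], [2, 3], [3, 4]]
    }
}
```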

In my opinion, people overdo streams. I'm not sure I ever saw someone actually using the parallelization, which is one of the stronger features of streams. Instead I get entire method bodies, with a lot going on, all shoved into one line, which is for good reason discouraged anywhere else. And streams are tedious to debug. I really dislike when people (auto)refactor a 'for (var x : list) { ... }' into a 'list.stream().forEach(x -> { ... })' without any other benefit/change.

Just this week I encountered a quite crazy case where someone wrote a

   IntStream.iterate(0, i -> i < getSize(), i -> i + 1).mapToObj(this::getObjectByIndex).toList()
Two anonymous classes and who knows how many extra method calls, just to not write a simple for-loop where you could even initialize the list with the known size.
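If a stream is wanted there at all, IntStream.range is the idiomatic form; otherwise a plain loop with a pre-sized list does the job. (getSize and getObjectByIndex are stand-ins here for the original code's methods.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.IntStream;

public class RangeDemo {
    static int getSize() { return 3; }                           // stand-in
    static String getObjectByIndex(int i) { return "obj" + i; }  // stand-in

    public static void main(String[] args) {
        // idiomatic stream version: range instead of iterate with a predicate
        List<String> viaStream = IntStream.range(0, getSize())
                .mapToObj(RangeDemo::getObjectByIndex)
                .toList();

        // plain loop, with the list pre-sized to the known length
        List<String> viaLoop = new ArrayList<>(getSize());
        for (int i = 0; i < getSize(); i++) {
            viaLoop.add(getObjectByIndex(i));
        }
        System.out.println(viaStream.equals(viaLoop)); // true
    }
}
```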


Just last week I got really frustrated with the Streams API because .collect() did almost exactly what I wanted but didn't return a stream.


collecting an entire stream means reading the entire stream into a fixed data structure, so it makes sense the result is not a stream.


What's wrong with imperative versions of these examples? Is imperative code not allowed or something?


For code coverage purposes I love the fact that there's only one branch of code to focus on. A lot of my code with streams instantly starts with a return statement and then has a 5-10 lines of stream operation invocations. Very clean stuff.


Streams can tend to be more readable and easier to write for the right problem set. This is because it is super easy to compose basic functions together which are then invoked by the Stream API. It also makes it really easy to parallelize operations.
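A minimal illustration of that last point: flipping a pipeline between sequential and parallel is a single call, and an associative reduction gives the same result either way.

```java
import java.util.stream.LongStream;

public class ParallelDemo {
    public static void main(String[] args) {
        // same pipeline twice; only the .parallel() call differs
        long seq = LongStream.rangeClosed(1, 1_000_000).map(x -> x * x % 97).sum();
        long par = LongStream.rangeClosed(1, 1_000_000).parallel().map(x -> x * x % 97).sum();
        // sum is associative, so the parallel split/merge yields the same answer
        System.out.println(seq == par); // true
    }
}
```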


There's a lot of haters in this thread. You don't need to use Java.

I don't use Java day to day anymore, but I'm excited to see the language evolve. I would highly recommend watching some of the talks from the Java architects. They're very smart and have thought a lot about how to design languages. Java obviously prioritizes different things than newer languages, so decisions are always made in the context of maintaining backwards compatibility.


Coincidentally, I just watched the talk on this from Nikolai: https://www.youtube.com/watch?v=epgJm2dZTSg. It explains what this is. His other talk, https://www.youtube.com/watch?v=pNQ5OXMXDbY, shows how to use/implement it.


If you're like me and your first thought was "why can't we just use flatMap and mapMulti?", the summary table in the linked article convincingly explains why:

https://cr.openjdk.org/~vklang/Gatherers.html


    How to implement flatMap(mapper)

    public final static <T,R> Gatherer<T, ?, R> flatMap(Function<? super T, ? extends Stream<R>> mapper){
        return Gatherer.of(
            () -> (Void)null,
            (nothing, element, downstream) -> {
                try(Stream<? extends R> s = mapper.apply(element)) {
                    return s == null || s.sequential().allMatch(downstream::flush);
                }
            },
            (l,r) -> l,
            (nothing, downstream) -> {}
        );
    }


Better title: how scala won the war but lost every battle


> Better title: how scala won the war but lost every battle

Isn't it the opposite? Scala did many individual things better, but lost the adoption war, maybe because of excessive complexity...


war = FP; battles = how to do FP


So yes, naming is one of those qualitative things, but the term 'Gatherer' doesn't sound great to my ears, and introducing a new 'Integrator' term to represent the Gatherer equivalent of a Collector's accumulator doesn't seem necessary either.

In terms of streams, it seems like this is a fancier 'pipe' ( in unix terms ), so I would have called it .pipe() instead of .gather() ... having a series of .pipe()s seems more intuitive than having a series of .gather()s on a stream.


I wish there was something similar to F# |> operator for using arbitrary functions in call chains.

It would be useful in more cases and have far simpler signature/interface.


Stream Gatherers

Nice name. I think of:

Those Moisture Farmers of Mos Eisley have certainly updated their methods. They are even approaching the Dew Gatherers of Arrakis in efficiency.


Streams are one of my favorite features in Java, and this looks like a great step in making them even more flexible.


They haven’t referenced Clojure transducers, and I find that surprising.

Wouldn’t that be a bit of a nicer interface?


Ah, Java adding yet another C# feature 15 years later.


Well yes. This is Java's stated development strategy. Let other languages evaluate new features, and implement them once they're demonstrated to be good.


LISP has had higher order functions from day 1. Even Java's big rival at the time, Smalltalk, had them. Yet, they were only added to Java in 2014.

I think that Java suffers from NIH syndrome. Nothing that other languages do is good enough for Java until it has been debated for years and reimplemented using new patterns and unknown terminology.


The unstated part is implementing them poorly.

Optionals that throw NPEs, Streams that operate on more than 1 item when called with .limit(1). Futures that don't cancel. Lambdas that don't play nicely with exceptions. Non-extendable streams.


"Poorly" is very subjective. Yes, some aspects require fixing (e.g. the interaction of lambdas with checked exceptions; there's much we can do there, but we need to design this carefully), but others are tradeoffs; e.g. they require complicating the language in a way that hurts in other areas.

It's not a coincidence that many languages that PL fans think do things "right" end up being far less popular than languages that PL fans think do things "wrong". Developers have different and conflicting preferences, and they are not distributed evenly. For example, the sweet spot for mainstream, super-popular languages over more/less compile-time checking in some areas vs. language complexity is probably not the sweet spot preferred by most developers. It's a little like the VHS vs. Betamax debate.

Improving a super-popular language in some small way that doesn't harm its popularity can have a far bigger impact on software quality than features that complicate the language to a point where it will not be taught as a first language in a lot of schools and will thus never be super-popular. Designing language features for industry to maximise their impact requires far more complex considerations than just PL theory.


> Yes, some aspects require fixing (e.g. the interaction of lambdas with checked exceptions; there's much we can do there, but we need to design this carefully)

I do like the Java platform in general, but this sort of argument from hypothetical future Java fixes strikes me as incredibly disingenuous. People aren't programming in a hypothetical future Java.

I can literally pose no argument against your imagined solution to this problem without very specific details. (Implication being, it's a straw man at best.)

And saying stuff like (in an adjacent thread):

> You're assuming that deconstructing and exhaustive patterns will continue to be restricted to ADTs only. That may or may not be the case.

Is just complete fiction. (EDIT: Are you going to solve the halting problem, or are you going to provide details before just promising "it'll be ok"? I assume the Halting Problem is out of the question, but what's the plan, then? Details, pls)

Either point to extant JEPs or explain in detail what the exact plan is, please.


First of all, you completely misunderstand my point. I neither ask people to judge Java based on some hypothetical future, nor ask for feedback on future designs. I acknowledged there's a real problem and mentioned we're exploring solutions, but even with the problem Java is the world's most popular typed programming language in 2023. Not every problem is big, and PL fans tend to grossly overestimate the cost of some cumbersomeness in a language and grossly underestimate the cost of language complexity because their preferences are often in the minority.

Having said that, we have a few experiments with type-safe "checked exception transparency" for lambdas (and methods in general), but as always we like to sit on them for a few years because the cost of a bad feature may be much higher than the cost of a missing one. We only publicly discuss specific solutions once there's something to be gained by such a discussion and once we've decided that the problem merits a solution in the short term.

> Is just complete fiction.

I was responding to a statement about deprecating Optional in favour of an ADT, and hinted at this design, which is under exploration (https://openjdk.org/projects/amber/design-notes/patterns/pat...):

    case Optional.empty() -> ...
    case Optional.of(var x) -> ...
It's not complete fiction, this is actual stuff we're working on, but not everything we explore will end up in the language.


Optionals only throw NPE if you’re using them wrong.

You are not wrong about cancelable futures, though.


Either way, it feels like Optionals are going to be deprecated real quick now that we have sealed interfaces and record matching. They let you get rid of so much of the unpleasantness of dealing with this API. Instead you can

    public sealed interface Maybe<T> {

        <T2> Maybe<T2> map(Function<T, T2> mapper);
        <T2> Maybe<T2> flatMap(Function<T, Maybe<T2>> mapper);
        T elseGet(T fallback);

        record Just<T>(T value) implements Maybe<T> {
            public <T2> Maybe<T2> map(Function<T, T2> mapper) {
                return new Just<T2>(mapper.apply(value));
            }
            public <T2> Maybe<T2> flatMap(Function<T, Maybe<T2>> mapper) {
                return mapper.apply(value);
            }
            public T elseGet(T fallback) {
                return value;
            }
        }

        record Empty<T>() implements Maybe<T> {
            public <T2> Maybe<T2>  map(Function<T, T2> mapper) {
                return new Empty<>();
            }
            public <T2> Maybe<T2>  flatMap(Function<T, Maybe<T2>> mapper) {
                return new Empty<>();
            }
            public T elseGet(T fallback) {
                return fallback;
            }
        }

    }

    public void demo() {
        Maybe<String> foo = new Maybe.Empty<String>();

        System.out.println(
            switch (foo) {
                case Maybe.Just(String val) -> "Hello " + val;
                case Maybe.Empty() -> "Fine, leave me hanging";
            }
        );

    };


Since this defines an interface, does this solve the null problem? e.g.

    Maybe<String> foo = null;
    foo.map(s -> "bar");
explicit pattern matching is usually discouraged anyway though, and you can do this now with Optional

    Optional<String> foo = Optional.empty();
    String message = foo.map(s -> "Hello " + s).orElse("Fine, leave me hanging");
Other languages like Scala also have an Option.fold method for this specific case.


Java started with Records + improved switch expressions to pave the way for pattern matching, but the feature is far from done. Deconstruction patterns for classes are being worked on. In fact, Optional has been brought up as an example in internal discussions time and time again.

The way Pattern matching works right now is just the beginning.


You're assuming that deconstructing and exhaustive patterns will continue to be restricted to ADTs only. That may or may not be the case.


Not really, I'm just highlighting that this is possible with records today.


The more likely scenario is that Optional will get made into a value type.


Showing an error on broken code will cost you backwards compatibility.

The best you will get is a warning, and the JVM will deoptimize Optional back to a regular object, because somewhere, someone set an Optional to null in a library. It doesn't even have to be as obvious as Optional<String> o = null;

Any cast to Optional can let a null pointer slip through an Object reference. A List<Optional<String>> is allowed to contain null pointers, and you are allowed to assign o = list.get(0), which generates an implicit cast.

There is no good solution for this, due to the way generics are implemented.
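The loophole is easy to demonstrate: generics are erased, so nothing stops a null reference from flowing through a List<Optional<String>>.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class NullOptional {
    public static void main(String[] args) {
        List<Optional<String>> list = new ArrayList<>();
        list.add(null); // nothing stops a null "Optional" from entering the list
        Optional<String> o = list.get(0); // no cast needed in source; erasure lets null through
        System.out.println(o == null); // true: the Optional reference itself is null
    }
}
```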


Ah, you mean the Optional itself being null. That’s not any bigger an issue than anything else being Optional. Also, value types might come combined with nullability — so you might have Optional<String>!, which can’t be null, and will efficiently be stored in-place.


Optional *is going to* be made a value type.

However, unlike the binary choice of class/value type, Project Valhalla will provide a more granular approach, coming with incremental performance benefits and constraints.

In the current spec draft, the type system boils down to 4 categories:

- Fully mutable, polymorphic classes

- Immutable, monomorphic classes

- Immutable, monomorphic classes + null-hostile

- Immutable, monomorphic classes + null-hostile + multithreading-unsafe

A separate nullability story is being investigated as well, which would allow for the full performance gain along with full backwards compatibility.


It's like desktop Windows, server Linux or Android of programming languages. So big it does not need to be good or innovative.


It's not about that.

1. The rate of change that the industry demands of super-popular languages is not the same rate that PL fans demand. Java innovates a lot in, e.g., GC algorithms and low-overhead profiling because there's more demand for more rapid innovation in those areas than in language features. Java's strategy from day one is to have an innovative runtime and a conservative language (James Gosling called it "a wolf in sheep's clothing") under the assumption that that's what the industry wants. So far, that strategy has worked very well.

2. Because Java will likely continue to be super-popular for many years to come, it's more important to avoid introducing harmful features than to introduce beneficial ones. If you've introduced a "good" feature five years after a less popular language introduced it, you've only lost five years of possible slightly better productivity; if you're introducing a "harmful" feature, you've harmed your language for decades to come. Super-successful languages need to think more long-term than languages vying for more users right now.


I feel that there is a particular "race horse" mindset when it comes to programming languages. It's very academic - for a thing to be good, it needs to have novel, never before implemented features, it needs to have intellectually superior properties, it needs to have them first, etc. I used to think more this way in school. It leads one to view the world as two camps: the language makers, who compete with the best new features in order to woo the language users (the industry). This results in being disappointed and puzzled that the language users consistently ignore the new shiny stuff and pick an ostensibly boring unsexy choice.

However languages are not startup companies trying to sell a product, it's the other way around. The popular ones are largely driven by industry to suit the industry needs, which are largely very conservative. Nobody wants another Perl 6 or Python 3.


I don't think it's fair to couch this in terms of opposition to "breakneck speed" language development. Even when Java was created, many languages supported FP, but Java chose not to. They had to retrofit it in many years later and it's still awkward to use compared to many other languages (not just ones created in the last 10 years).


But Java is a mainstream language, and so we generally try to adopt features that a large majority of programmers are ready for, which means, pretty much by definition, that we adopt things later than most languages. Just to give you an example, we only recently added algebraic data types, but in some ways we still added them too early because far too many programmers don't yet know how to work with ADTs, but we felt we had to add them because we thought they're the best way to work with simple data in a language like Java, especially since we didn't want to add properties or Kotlin-like data classes, as we think these are bad features. This emphasises another point, and that is that we're not "catching up". Most features don't prove their worth and will never find their way to Java, because we try to pick the minimal set of features that will give the most bang, and that reduces the growth of language complexity, which can truly harm adoption. In this way we also managed to avoid async/await.

Another example that people often ask about is named and default parameters. It looks like lots of languages have them, but the problem is that it's usually implemented in a way that harms those languages. For example, C#, Kotlin, and Swift all generally try to support separate compilation and binary compatibility, but their named & default parameters break that. We'll only add the feature to Java once we know how to do it right (something that I think no other language that cares about separate compilation has done yet), and we have some ideas.

And I disagree that FP features are awkward to use in Java (well, some pure-functional theorems don't work because of null, but they also don't work because of reflection, anyway, and they don't work in most non-pure-FP languages, either).


You're misattributing cause and effect. Some programmers aren't "ready" for keyword params, proper FP, etc. because Java decided not to include these things, even though languages that were much older, such as Smalltalk, had them. And of course, many other programmers have been "ready" for these things for ages (since the days of LISP).

> And I disagree that FP features are awkward to use in Java

Two words: checked exceptions.
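A sketch of the friction: a lambda that throws a checked exception won't fit java.util.function.Function, so people end up writing wrapper helpers like this (the interface and helper names here are illustrative, not a JDK API):

```java
import java.util.List;
import java.util.function.Function;

public class Unchecked {
    // illustrative functional interface: like Function, but allows checked exceptions
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // adapts a checked-throwing function so it can be used in stream pipelines
    static <T, R> Function<T, R> wrap(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (Exception e) {
                throw new RuntimeException(e); // rethrow as unchecked
            }
        };
    }

    public static void main(String[] args) {
        // the wrapped lambda could throw a checked exception without breaking map()
        List<Integer> parsed = List.of("1", "2", "3").stream()
                .map(wrap(s -> Integer.valueOf(s)))
                .toList();
        System.out.println(parsed); // [1, 2, 3]
    }
}
```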


Languages aren't meant to serve features; they're meant to serve users. Our goal is not to educate users about language features, nor to evangelise for features, but to give users the features that will hopefully result in the largest value from the software they write. Back in 1995, I guess, Java's original designers decided that ultimately educating users about functional programming may harm Java's adoption (because most programmers prefer familiarity, and most programmers didn't know Lisp, ML, or Smalltalk) and so would offer less value than not trying to do that. More productivity was to be gained by offering a GC in a familiar language than offering it in a language that would have been slightly more productive but more threatening. You can't educate the market on too many things at once; that's just not a winning move from a product perspective.

A feature contributes such value once there's some reasonable combination of productivity benefit and demand, and that combination differs over time; the same feature can have more value in, say, 2015 than in 2005. The difference can be because the kind of software people write is different, the hardware is different, or fashion is different (which affects demand).

> Two words: checked exceptions.

Checked exceptions come into play primarily when there's IO or other side effects involved, so we're already in an area that isn't entirely functional and there are special challenges with FP, anyway. Checked exceptions are rare in pure computation (and arise there primarily around parsing strings). I fully acknowledge there's a problem in the interaction of checked exceptions and functional combinators (and we're exploring solutions), but while it makes functional composition of IO cumbersome, I wouldn't go so far as to say that it makes FP awkward.

BTW, while it is certainly not as elegant as it could be, combining IO with FP is already much better in JDK 21:

    <T> Result<T> doManyIOTasks(List<Callable<T>> ioTasks) throws MyException {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var ts = ioTasks.stream().map(scope::fork).toList();
            scope.join().throwIfFailed(this::handleException); // handle/rethrow exceptions
            // all IO is done & no more checked exceptions here; we can now do functional processing
            return ts.stream().map(Supplier::get). ...;
        }
    }


Java was pushed onto universities, so of course what people are familiar with is what Java decided to support. I don't think it's any harder for people to learn about algebraic data types than it is to learn about inheritance, generics, covariance and contravariance - it's just an accident of history that the industry still thinks that overcomplicated OOP hierarchies, annotation-driven development, the Java classloader etc. are "natural", but other approaches aren't.

You're acting as if Java is just reacting to what people are demanding - but I think that Java has been actively shaping how people learn programming for several decades now, and so should take responsibility for it.

> I fully acknowledge there's a problem in the interaction of checked exceptions and functional combinators (and we're exploring solutions)

Why does it take Java 10 years to fix something that other languages have done correctly from day 1? Swift has "rethrows", what's wrong with just adopting a similar approach?


> You're acting as if Java is just reacting to what people are demanding - but I think that Java has been actively shaping how people learn programming for several decades now, and so should take responsibility for it.

Obviously it's a combination of both, but education about programming paradigms is not the ultimate goal. All features are there to serve the goal of producing working software. Of course, we add new features to serve that goal and to use them requires education but there's only so much you can educate your market to change their habits. You have to do it slowly, as we're doing now with ADTs. Also, PL fans tend to overestimate the actual impact of language paradigms. For example, for years Haskell fans have said that their paradigm leads to significantly more correct programs, but research hasn't found much evidence to support that. For various theoretical and practical reasons (some of which were predicted long ago), programming languages now see diminishing returns, and productivity differences between languages in similar domains are not that big.

The other languages in Java's very small club of super-popular languages are JS and Python (and to a lesser degree C and C++). They're not very innovative language-wise, either, because you can't be unfamiliar and popular. The under-20-year-old languages that are doing very well are TypeScript (which does innovate in the language but in the field of how to add gradual typing to JS) and Go (which is less innovative than Java).

Trying out novel language approaches and seeing how much of an impact they actually make (which is growing ever smaller) is the job for less popular languages. We try to pick ideas that have proven themselves after years of practice.

> Why does it take Java 10 years to fix something that other languages have done correctly from day 1?

Sometimes it's because they haven't done it correctly enough and the problem is harder than it seems, and sometimes we just have more urgent things to do.

> Swift has "rethrows", what's wrong with just adopting a similar approach?

Maybe nothing; maybe the fact that it doesn't work with streams (where exceptions thrown by lambdas aren't thrown by the intermediate operators but by the terminal operation). But whether it's something like this or something completely different, we have more urgent things to work on.


> Our goal is not to educate users about language features

Except, you know, that thing Steele said about dragging C programmers halfway to Lisp, which can be seen as describing an educational attempt.


The slow rate of change is arguably the biggest selling point for Java. You don't need to waste time on language churn, can spend the entire time building stuff instead.


> The slow rate of change is arguably the biggest selling point for Java.

Unfortunately, since Java 9 the rate of change seems to have increased a lot.

> You don't need to waste time on language churn, can spend the entire time building stuff instead.

In my experience, every time you upgrade to a new Java LTS, either something breaks or a library you depend on is incompatible with that Java LTS, and you have to waste some time fixing the breakage (the most recent one I saw: latest Mockito is incompatible with Java 21, unless you set a magic system property).


Since Java 9, new features are finally coming again. However, the ecosystem had gotten reliant on certain JDK internals being accessible and stable, while in fact they were neither. That will require some more years to properly transition over.

The only features that really got obsoleted are the Security Manager and the underscore variable name. Primitive wrapper class constructors are on the chopping block next. Most other things are merely being restricted to harden the platform, with ample time to prepare.

Edit: LTS hopping is probably an unwise strategy as one will harvest only disadvantages: few updates, no performance improvements, no new features, and complicated migrations when it is time to upgrade. One best upgrades as soon as the new version becomes available (as people did before Java 8) or stays on LTS until the software is retired.


The latest Mockito works with Java 21; you'll just see a warning about the usage of the bytebuddy agent. In the future, you'll need to explicitly approve the loading of agents (you can call it "magic"), which will improve the security of your programs.


> The latest Mockito works with Java 21

That was not the case a couple of days ago when I last looked; I see now that they released Mockito 5.7.0 two days ago, which probably fixes this issue. But that still shows my point: for every new Java LTS, you have to either fix some breakage, or update some library you depend on to a new version (which might then require further changes to your code, or even dropping compatibility with some older Java LTS).

Edit: I just tried with the latest Mockito (5.7.0), and it still gave me the same "Java 21 (65) is not supported by the current version" error, when run without the magic system property. It seems something else earlier in the dependencies had a dependency on an older release of byte-buddy, so I will have to manually upgrade byte-buddy, and hope that it doesn't break that earlier dependency.


You have to update dependencies that operate on bytecode, like ASM, bytebuddy or aspectj. In 99% of cases those dependency upgrades don't break the library that uses them.

I upgraded to JDK 21 on the day of release and haven't had issues with mockito (or spring, hibernate, junit, etc.)


I haven't really experienced this. I'm on Java 21 and haven't had any problems with Mockito whatsoever.

I needed to bump the version of guice, but beyond that it was a very smooth transition.


> Unfortunately, since Java 9 the rate of change seems to have increased a lot.

Since Java 9, they don't introduce changes; they introduce new features, which is a bit different.

IT is a field that (should) attract people who thrive in fast-paced environments, not old-school, slow-to-innovate industries.


Why waste such a powerful term as "stream" on good old trivial 'list comprehensions'?

https://en.wikipedia.org/wiki/List_comprehension


Because it applies to things other than lists?


Are you making an assumption based on some specific "lists" or are Java 'streams' infinite?

https://en.wikipedia.org/wiki/List_(abstract_data_type)

https://en.wikipedia.org/wiki/Stream_(abstract_data_type)


Java Streams apply to Collections, which is a broader abstraction than a list.


"A list comprehension is a syntactic construct available in some programming languages for creating a list based on existing lists." (https://en.wikipedia.org/wiki/List_comprehension)

Streams' terminal operations offer many more options than just "creating a list".
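To illustrate (class and method names here are made up for the example): the same source can terminate in a count, a numeric reduction, or a grouped map, none of which a list comprehension expresses directly.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Demo of several terminal operations beyond "build a list".
public class TerminalOps {
    // Terminal op: count matching elements.
    public static long countLong(List<String> words) {
        return words.stream().filter(w -> w.length() > 3).count();
    }

    // Terminal op: numeric reduction over a primitive stream.
    public static int totalLength(List<String> words) {
        return words.stream().mapToInt(String::length).sum();
    }

    // Terminal op: mutable reduction into a Map via a Collector.
    public static Map<Integer, List<String>> byLength(List<String> words) {
        return words.stream().collect(Collectors.groupingBy(String::length));
    }

    public static void main(String[] args) {
        List<String> words = List.of("ab", "abcd", "abcde");
        System.out.println(countLong(words));   // 2
        System.out.println(totalLength(words)); // 11
        System.out.println(byLength(words));
    }
}
```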


> or are Java 'streams' infinite?

They are in toy examples.

I don't follow your original point 100%, but I will say that 90% of the time when I use Streams, I'm really just wishing List had map and filter, etc.
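For what it's worth, infinite streams are not only for toy examples: `Stream.iterate` is lazy, so an unbounded source stays finite once a `limit` is applied, and since Java 16 `toList()` gets close to the "List with map and filter" wish. A sketch (class name is hypothetical):

```java
import java.util.List;
import java.util.stream.Stream;

public class InfiniteDemo {
    // An infinite stream is lazy: nothing is computed until the terminal
    // operation runs, and limit() keeps the pipeline finite.
    public static List<Integer> firstEvenSquares(int n) {
        return Stream.iterate(0, i -> i + 1) // 0, 1, 2, ... unbounded
                .map(i -> i * i)
                .filter(sq -> sq % 2 == 0)
                .limit(n)
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(firstEvenSquares(4)); // [0, 4, 16, 36]
    }
}
```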


Streams work on any Java Collection, such as Maps.


Map doesn't implement the Collection interface.

https://github.com/openjdk/jdk/blob/master/src/java.base/sha...
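Right; `Map` has no `stream()` method of its own. The standard workaround is to stream one of its collection views: `entrySet()`, `keySet()`, or `values()`. A minimal sketch (class and method names invented for illustration):

```java
import java.util.List;
import java.util.Map;

public class MapStreams {
    // Map is not a Collection, so we stream its entrySet() view instead.
    public static List<String> keysWithValueAbove(Map<String, Integer> m, int min) {
        return m.entrySet().stream()
                .filter(e -> e.getValue() > min)
                .map(Map.Entry::getKey)
                .sorted() // Map.of gives no iteration-order guarantee
                .toList();
    }

    public static void main(String[] args) {
        Map<String, Integer> m = Map.of("a", 1, "b", 5, "c", 9);
        System.out.println(keysWithValueAbove(m, 3)); // [b, c]
    }
}
```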



