I can't make it past the first sentence in the opening without just wanting to scream in annoyance. Nothing about immutable objects means that you are going to write thread-safe code. Pretty much period.
Worse, nothing about functional programming necessarily implies immutability. It is a convenience on small and easy problems, but far from required. Consider how your computer adds two numbers. Lots of things change state, under tight timing, to give you that answer. It is not some magic table lookup over immutable facts.
I may just be bitter about a system right now that is using immutable objects everywhere. I realize this is a straw man, but with heavy reliance on deep (3+ levels) nesting of immutable relationships, it is not uncommon to find crap that is the equivalent of this:
String foo = "hello";
Character bar = foo.charAt(0);
...
//We need to make sure bar is capitalized everywhere
Character baz = Character.toUpperCase(bar); //foo still says "hello"
//Now to find everywhere this matters and let the caller know.
I realize lenses and zippers and such can help, but so can just being a bit more disciplined about using modifiable items in the first bloody place.
Obviously this is just a short fragment and explicitly it's a straw man, but I can't help but get this incredibly strong feeling that this code is immutability bolted on instead of embraced. Lenses, yes, could be used to solve something like this, but it feels like you're facing a problem which is better avoided than fixed.
I'm not really trying to rain on your tirade---surely there are ways to misuse immutability---but I can't look at this example and not feel that immutability is telling you something about a better architecture.
Oh, I certainly agree. And really... this is my point. Misused immutability is just as garbage heavy as misused mutability. It does not, ipso facto, prevent bugs or make threaded code easy to do.
Your example makes no sense to me, particularly the "find everywhere this matters" part. Working with immutable data is fairly straightforward. You perform an operation on a data structure, which produces a new version of that data structure, and then you return it.
This only works if you still have a reference to the root of the structure everywhere. Which is why something like Huet's zipper works. If you did not build such a structure, then you are likely out of luck.
A better example would be
SomeLargeTreeThing foo = ...;
SomeLargeTreeThing bar = sanitizeTreeThing(foo);
//Now, make extra sure nobody uses foo.
//Not even accidentally in the future.
Now, this can be solved with more types and more layers. So, yes, it is bad code. But, "write less crappy code" is far less sexy than "immutability prevents errors!"
Your complaint doesn't make any sense. I think you're struggling with immutability because you don't understand that it actually works very differently from mutability.
In this example, it's not about "making sure nobody uses foo". Nobody is making you use it again. If you don't want to use the variable again, just... don't use it again. Or, you can use it again, if you want to, but if you make an error your tests should fail or your compiler should complain about an unused variable. It's not rocket science.
I literally just had to fix this error in my colleague's code. It looked a little more like:
public Bar foo(Bar bar) {
Bar updatedBar = baz(bar);
//Lots (10+ lines) of code
return finishProcess(bar); //Note, this should have been updatedBar.
}
Now, again, I will not defend this as good code. Neither would my coworker. Mistakes happen, though. And while I will not claim that mutable code is free of defects or makes it easy to write trouble-free code, I do take issue with the claim that immutability accomplishes this.
In my opinion, there is a reason for that. Consider the example above, slightly changed:
public Bar foo(Bar bar) {
bar = baz(bar);
//Lots (10+ lines) of code.
bar = qux(bar);
//Lots (10+ lines) of code.
return finishProcess(bar);
}
It is hard to spot the second reuse of variable bar. This can make code harder to understand; it's a lot easier to just look at the point of definition of a variable and assume it never changes. At least, that is why I tend to use final when writing Java, and val when writing Scala.
Of course, usually when adding final to messy Java code I'm trying to understand, this ends up telling me that the whole piece of code was fragile, hard to understand and in dire need of a refactoring. All of which makes final useful to me.
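For what it's worth, here is a minimal Java sketch of that habit. Bar, baz, qux, and finishProcess are the made-up names from the example above, not a real API:

```java
// Each step of the pipeline gets its own final local, so the compiler
// rejects any later reassignment and the flow of data is explicit.
public class FinalLocals {
    record Bar(String value) {}

    static Bar baz(Bar bar) { return new Bar(bar.value() + ":baz"); }
    static Bar qux(Bar bar) { return new Bar(bar.value() + ":qux"); }
    static String finishProcess(Bar bar) { return bar.value(); }

    public static String foo(Bar bar) {
        final Bar bazzed = baz(bar);   // reassigning bazzed later is a compile error
        final Bar quxed  = qux(bazzed);
        // With distinct final names, accidentally returning finishProcess(bar)
        // is at least visible at the call site, and an unused 'bazzed' would be
        // flagged by most IDEs and linters.
        return finishProcess(quxed);
    }

    public static void main(String[] args) {
        System.out.println(foo(new Bar("start"))); // start:baz:qux
    }
}
```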
Again, if you forget to use a variable, your compiler should let you know. If that wouldn't have prevented this error, then oh well, but it's not like this kind of problem is particularly difficult to solve or wrap your head around.
Also, I don't know why you're acting like this kind of error represents the ultimate failure of functional programming. Yes, sometimes you get your variables a little mixed up when you're using immutable data structures. You get used to it, and if that's the biggest headache that you come across when working with immutable data structures, then it seems like everything is more or less working as advertised.
The variable was used again, just not in the final line like it should have been.
Where did I say this was the ultimate failure of functional programming? I do not think it is. Nor do I think functional programming is a failure. It is a good skill to have. But so is understanding some of the non-immutable algorithms out there.
If the above function mutated 'bar', then it wouldn't be referentially transparent, which is a lot more error prone and problematic than the trivial error your colleague made.
That said, that function looks strange. What were those other lines of code doing if they didn't contribute to the result? Doesn't look like a referentially transparent function to me.
Is this an unfortunate language that won't let you overwrite or "shadow" bindings? In F#, for instance, I try to scope and/or rebind identifiers that shouldn't be used.
This sounds like a no-true-Scotsman argument. I could have just reassigned the value; however, we have style guidelines saying variables have to be final. So, yes, I could have shadowed it in another block or function. At that point, I'm beginning to question just how immutable we want the view of the world to be.
This gets back to my point in a sibling comment that more types and better organization could have helped. Sure. I'm not even going to claim that immutability caused this bug. It didn't. But it also was not a silver bullet that made this code bug free.
In any real software development scenario one of the problems faced is precisely "making sure nobody uses foo again", and "just don't use it again" is not a solution.
How do you ensure that a maintenance coder ten years from now doesn't use it again?
How do I ensure that a maintenance coder ten years from now doesn't replace the entire function with
printf("butt\n");
? Yes, sometimes developers make mistakes, but -- again -- this is what the tests are for. And I have seen no evidence that immutable data structures are more prone to errors in "maintenance coding" than mutable data structures.
This is a completely disingenuous argument. My point is that a sincerely interested maintenance coder could easily accidentally use the wrong variable. This was the GP's point as well.
This is not at all difficult to understand... for anyone who has ever actually worked on shipping code.
At no point did I ever indicate that it is impossible for a maintenance coder to accidentally use the wrong variable. In fact, it'll probably happen at some point. I did indicate that you should have tests in place to catch regressions like this. This is true when you're working with mutable data structures as well as immutable ones. I also indicated this type of error, once detected, is typically very easy to spot and correct.
I don't know how anything I've said has even been slightly controversial.
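To make the "tests catch it" claim concrete, here is a hedged Java sketch; foo, baz, and finishProcess are the hypothetical names from the example upthread:

```java
// Fixed version of the function from the example. The original bug returned
// finishProcess(bar) instead of finishProcess(updatedBar); any test that
// checks the output catches that immediately.
public class RegressionSketch {
    record Bar(boolean updated) {}

    static Bar baz(Bar bar) { return new Bar(true); }
    static String finishProcess(Bar bar) { return bar.updated() ? "ok" : "stale"; }

    static String foo(Bar bar) {
        Bar updatedBar = baz(bar);
        return finishProcess(updatedBar); // buggy version passed bar here
    }

    public static void main(String[] args) {
        // Against the buggy version this prints "stale" and the test fails.
        System.out.println(foo(new Bar(false))); // ok
    }
}
```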
//Now to find everywhere this matters and let the caller know.
Do you mean callers, who may be threads some of which have decided that the first character of foo should be lowercase while others have been sticking a prefix on foo? How do you coordinate all these separate changes to your foo string? Is there a String manager that coordinates all the threads and decides what the best string update policy is? Maybe I'm not following your argument for mutable strings.
Yeah, screaming is a touch heavy. I should have just said I drop my head in sadness. And that I lost a little more hope for the industry.
It is funny that this would lead to meditation, as there is a good parable that seems appropriate here.[1] Not all fads are processes, and I'm sure there is a "truly immutable patterns don't have this problem" hiding somewhere.
No one prevents you from using immutable structures in OOP.
Getting some help from the language is nice, but after 10+ years of programming, all those patterns (e.g. immutability) become so natural that you don't really think much about them. You know what data structure you want and see what the language offers. So the whole "which language" deal doesn't make so much difference (although it's easier/harder in some languages). Tooling/libs and community matter way more.
Good point. Fortunately, I had pretty good engineers in my recent teams, but, remembering my older projects, I totally agree with you that it's good when language makes it hard to write bad code for less competent engineers.
As long as there is no decent pattern matching in most of the OO languages, they're pretty useless. No matter how big the community is or how hip the tooling.
You've hit the nail right on the head. People praising FP don't realize (but start to after a while) that you should combine the FP paradigm with /OTHER/ paradigms as well. Your arsenal of patterns will help you write code. Don't limit yourself to one.
That's an awful lot of text to advocate for immutable data to prevent shooting yourself in the foot when things go concurrent.
I've got this renderer that takes a huge data structure and uses simple OpenMP constructs to draw images. There is an assumption that the underlying data will not change during a render. Do I need a functional language to enforce this? No. Would it be nice if C++ had a feature to enforce it? Yes. Will I eventually want to make something that can carefully modify the data while rendering? Maybe, but I'm not sure that even makes sense.
I think a bigger threat is event driven code which IMHO is notorious for making things happen in unexpected order.
You can. If you want to write immutable C++ simply declare every single variable const. Done.
The only difference between C++ and "immutable by design" languages is that in C++ you'd have to write your own little processor to ensure that your rule is followed (this is no big deal, and I've written much more complex processors to enforce C++ coding standards as part of the build.)
> I think a bigger threat is event driven code which IMHO is notorious for making things happen in unexpected order.
What do you mean by unexpected order? I find event driven code to be the easiest to understand and maintain.
Typically, you will be forced to specify order explicitly, i.e. nesting callbacks for sequential execution or not for parallel.
Events are fine if your app is stateless (boring). Otherwise you may get events when things are not ready for them. You can get around that with lots of checking, or perhaps building a nice OO system underneath may help. I have a hard time explaining this because it can be subtle. See the Therac-25 for an example.
Immutability is basically like explicit version control. If the program is interactive, sooner or later you need to write a state machine that updates HEAD. A good example is the foldp function in Elm. [1]
You can probably do this in any language, but efficiency is another issue.
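For illustration, here is roughly what a foldp-style update loop looks like in Java; Counter and the event strings are invented for the sketch:

```java
import java.util.List;

// Fold a stream of events over an immutable state, producing a new state
// per event. Only the local 'state' reference (the "HEAD") ever moves.
public class FoldState {
    record Counter(int clicks) {}

    static Counter step(Counter state, String event) {
        return event.equals("click") ? new Counter(state.clicks() + 1) : state;
    }

    static Counter run(Counter initial, List<String> events) {
        Counter state = initial;
        for (String e : events) {
            state = step(state, e); // each step allocates a new state
        }
        return state;
    }

    public static void main(String[] args) {
        Counter result = run(new Counter(0), List.of("click", "noop", "click"));
        System.out.println(result.clicks()); // 2
    }
}
```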
Immutability is nothing like explicit version control. So far as I know no language has anything close to explicit version control for state.
This is unfortunate, because it could potentially be very useful if every single atom of data came with a built-in list of previous states, with timestamps, and with an identifier for the code/process that last changed the state.
Obviously, this is usually what debuggers try to do. But there's no reason why something like this couldn't be built into a language.
(Yes, it would be slow. No, it wouldn't be that slow if done right. It wouldn't even have to be slower than many other standard practices. But yes, it could potentially use a lot of memory unless given a cutoff. Even so...)
CS generally still seems stuck in the "memory is a line of pigeonholes" model, and immutability seems to be a crude memory management hack, not a complete state control solution.
I sometimes wonder if a lot of the issues with state are caused by the fact that languages have extremely limited tools for dealing explicitly with representations of historical causality and past/present/future, and not because state itself is inherently evil.
> Immutability is nothing like explicit version control. So far as I know no language has anything close to explicit version control for state.
Well, it depends on which version control system we're imagining. In my mind, the immutable data-structure approach is like git, and I think what you're imagining is more like CVS.
Consider: immutability by itself doesn't enforce versioning; it just makes the state-of-the-world equivalent to state-of-the-root-pointers-of-your-heap, because the pointed-to things cannot change. (I.e., referential transparency.) So it pushes the versioning up to whatever agents manage the root pointers. This is sort of like git: you build an immutable DAG, and you have mutable HEAD pointers that move around in it. (The difference is that git has one HEAD for the tree while this world has one HEAD per global or stack-local root.) When you have a reference, you have a snapshot of the world in time that will never change.
Your "each slot has a list of previous states" is more like CVS in that it versions each "file" separately. This is also useful, but for a different reason: sometimes you really do want that history as a first-class concept. You can derive the per-file history in a git-like system too but it involves some computation. (And coming back from the analogy, in such a memory-management scheme, you would need to formalize the changes to root pointers into some sort of transaction system.)
All of that said, I think what you describe would be really useful for debugging, albeit probably pretty expensive (each store instruction touches an undo log?). Time-reversible debuggers may have done something like this?
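As a rough sketch of the "immutable DAG plus mutable HEAD" picture (all names invented, nothing beyond the standard library):

```java
import java.util.concurrent.atomic.AtomicReference;

// History nodes are immutable and share structure; only the HEAD
// reference ever moves, much like a git branch pointer.
public class VersionedState {
    // An immutable commit: a value plus a pointer to its parent.
    record Commit(String value, Commit parent) {}

    private final AtomicReference<Commit> head =
            new AtomicReference<>(new Commit("initial", null));

    // "Committing" never mutates old commits; it just advances HEAD.
    public void commit(String newValue) {
        head.updateAndGet(parent -> new Commit(newValue, parent));
    }

    // Any reader holding a Commit has a snapshot that can never change.
    public Commit snapshot() { return head.get(); }

    public static void main(String[] args) {
        VersionedState state = new VersionedState();
        Commit before = state.snapshot();
        state.commit("second");
        System.out.println(before.value());           // initial
        System.out.println(state.snapshot().value()); // second
    }
}
```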
If you do not use mutation, all you have is immutable data. You can then send that data between different threads, and it will not get corrupted. The reason is that whenever you attempt to change a structure, a new instance of it is allocated instead.
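A small Java sketch of that point, assuming nothing beyond the standard library:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Stream;

// An immutable list is shared with another thread; "changing" it allocates
// a new list, so the other thread's view cannot be corrupted mid-update.
public class ShareImmutable {
    static List<String> append(List<String> xs, String x) {
        return Stream.concat(xs.stream(), Stream.of(x)).toList(); // new instance
    }

    public static void main(String[] args) throws Exception {
        List<String> shared = List.of("a", "b"); // unmodifiable snapshot

        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> sizeSeenByThread = pool.submit(shared::size);

        List<String> updated = append(shared, "c"); // 'shared' is untouched

        System.out.println(sizeSeenByThread.get()); // 2
        System.out.println(updated.size());         // 3
        pool.shutdown();
    }
}
```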
There is a lot to recommend a functional programming style in general and forgoing mutation of data in particular, but these are programming practices. Clojure allows mutation. Common Lisp allows it in ways that are as mind-blowingly difficult to grok as macros. The shared structure that allows efficient immutability semantics allows a tsunami of value changes if someone is careless or malicious.