Languages do not become successful due to their intrinsic qualities. Languages become successful when they are coupled to a successful platform. For example, C became popular because Unix became popular, JavaScript became popular because of the browser, Objective-C became popular because of the iPhone, and so on.
Therefore observing that a language or paradigm is popular or unpopular does not say anything about whether it is good or bad. JavaScript is the most popular language in the world. If Netscape had decided to use a Scheme-like language or a BASIC-like language it would still be the most popular language in the world. So paradigm has nothing to do with it.
Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.
I don't buy that functional languages are unpopular because they are unintuitive. Lots of stuff in JavaScript is highly unintuitive. That didn't prevent it from becoming the most popular language in the world. People will learn what they need to learn to get the job done.
>Languages do not become successful due to their intrinsic qualities. Languages become successful when they are coupled to a successful platform.
That doesn't seem right. There are plenty of languages that were born from platforms, but I'm skeptical that it's anywhere near the majority. Some platformless examples off the top of my head:
- Java
- Rust
- Python
- C++
- Go
- Lua
In my opinion: C is not popular because of Unix. Unix is popular because it's written in C. The same is arguably true of Kubernetes. Neither Docker nor k8s drove mainstream adoption of Go. Go did that with its own properties, and the same is true of Rust, C, etc.
> Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.
The question here though is why?
The author argues it's because FP is unwieldy for the general software case (which I assume means enterprise CRUD apps). And I have to agree: state management is a huge part of CRUD apps.
------
EDIT: I am not presuming k8s is popular because of Go. The argument here is that the success of the platform and the choice of language are related, not directly consequential. K8s could have been written in Rust and would probably be just as popular because of its feature set.
Java was the only way to write applets and code for feature phones early on. Nobody thought of writing application servers in Java in the early '90s; Java was supposed to be the way to write code for the internet of things.
C++ was the easiest way to write kinda-OO code and use C libraries. Then it became the de facto standard for gamedev and desktop app programming.
In the Linux world it's still the case that if you want to write a desktop app you should write it in C/C++. Use any other language and you will struggle with dependency management and package managers forever.
Python is the only clear example out of these languages of succeeding without a platform, and the fact that after decades Python is still less popular than PHP shows that historical accidents and having a good platform are more important than any quality of the language.
On the other hand Perl is dead so maybe it's not as bad :)
For a lot of users, Python is its own platform. As a so called "scientific" programmer, when I'm programming, I'm very much "inside" Python. I don't need to know or care very much about the details of the platform that it's running on. I start up my IDE or Jupyter and the rest of the system melts away.
This may be because Python lends itself to being used in integrated environments such as Jupyter.
>Java was the only way to write applets and code for feature-phones early on.
That's not how it got its popularity. Applets died very soon (the novelty lasted maybe 1-2 years), and feature-phone apps were never a big thing. Unlike, e.g., mobile apps, which quickly overtook the desktop, feature-phone J2ME apps at the time were a peanuts business (and there were other ways, like native APIs from Palm, BlackBerry, and Windows Mobile - of yore, not their post-iPhone smartphone OS).
Java, even back in 2000, was big in the enterprise space and remained so (which is why Sun quickly emphasized that and let the applet sdk languish).
> In the Linux world it's still the case that if you want to write a desktop app you should write it in C/C++. Use any other language and you will struggle with dependency management and package managers forever.
I do agree in a very general sense, but there are definitely a few good options out there for desktop software.
For example, Lazarus/FreePascal is one of the best solutions for writing GUI apps even nowadays. It's a shame that it's dead as far as market share is concerned, even if it has a community around it, is open source and still receives regular updates with pretty good platform support.
Though I guess you could say the same about any technology stack that produces executables that are for the most part statically linked and don't complicate dependency management.
Of course, Java with Swing also hasn't disappeared anywhere and is still perfectly capable of producing most desktop software as long as there is any sort of a JDK or JRE on the device. There's also JavaFX/OpenJFX which is supposed to be more modern, but I've experienced more issues with it in comparison to Swing.
Python took the shell scripting platform from Perl just like PHP took the web scripting platform. It then extended to web servers (e.g., Django as an alternative to Ruby on Rails) and ML, but that was later.
For Java, JVM was the "platform". I still remember learning Java in university - what left an impression on me was that my first ever university homework where I used Java (i.e. non-trivial project) ran perfectly on first successful compilation [+]. GC is huge help, especially if you don't use smart pointers (which were far less prevalent at the time).
[+] to clarify, I was a somewhat experienced programmer, this was one of the final year projects, for distributed systems or parallel computing I believe. A simple system without doubt, but still, a system not a "hello world". Something that I wouldn't have expected to work on first try, had it been written in C.
Yes. I like this in Java, and in Rust, and it's annoying for me in languages like Python where you mostly don't have this. I want the compiler to tell me my program is nonsense, so then I can fix it, rather than wait until the program has done most of its work and then, oh, did you notice this needs to be an integer but you provided a string? Sorry, program crash, fix the bug and run the whole thing again.
But of course although it's the default in Java, and not provided out of the box in Python, I also wrote some web framework stuff that adds these runtime errors to Java (by using Reflection to make decisions at runtime instead) and I've seen Python code with more type safety that would tell you earlier that there's a problem. So it's ultimately not only a matter of programming language although that does definitely set the tone.
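The distinction the two comments above are circling can be sketched in a few lines of Python (names here are made up for illustration): without annotations, a type mismatch only surfaces when the bad line actually executes; with annotations, a static checker such as mypy can flag the same call before the program runs at all.

```python
def total_price(quantity, unit_price):
    # Without any checking, passing quantity="3" only fails HERE,
    # at runtime, possibly after the program did lots of other work.
    return quantity * unit_price

# total_price("3", 2.5)  # raises TypeError only when this executes

def total_price_typed(quantity: int, unit_price: float) -> float:
    # With annotations, a checker like mypy reports
    # total_price_typed("3", 2.5) as an error before anything runs.
    return quantity * unit_price

assert total_price_typed(3, 2.5) == 7.5
```

This is exactly the "tell me my program is nonsense before it runs" property the parent wants; the annotations don't change runtime behaviour by themselves, which matches the point that type safety is partly tooling, not only language.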
I think platform in this argument is meant to mean something users obtain and install apps (or websites) into. The JVM is not a platform really, it's a runtime. For a while the JRE was sort of a platform because users downloaded it separately, but that hasn't been true for a long time and yet Java is still popular.
It is almost forgotten now, but the platform which carried Java to critical mass was the browser. Java successfully pivoted to server-side, but initially it was considered a client-side language.
C++ was the preferred application development language on Windows.
Python is popular because of the numeric and ML ecosystem.
I would say that in the enterprise software world Java was an irrelevant toy before the pivot to server side. At that point though: whoosh, stratosphere.
I'm not sure that's quite right. I feel like there was a decent span of time where many desktop apps were written in Java (with AWT or Swing), particularly enterprise apps where you valued development speed and ease of deployment more than a slick UI or high performance.
My $JOB is maintaining such a desktop CRUD app that's been in use for the past 20 years. It uses Swing (via a proprietary higher-level framework) for its GUI. Over the years it's accreted a few dozen services (also in Java) in its orbit, but the central GUI app and its database still remain and continue to be extended as new requirements arise.
Python is popular because they teach it in US schools and colleges as the next best thing after BASIC in terms of ease. During the rise of ML, Python became a common denominator among mostly US-based scholars.
Other than security, the other bad thing about applets was that they were very slow to start, and required a Java runtime that did not come bundled with the browser - you had to download an installer for it manually, install it, and then the thing was constantly nagging you about updates (and, IIRC, the updating agent or whatever it was that sat in my tray noticeably degraded the machine's performance even if nothing was using JRE).
-> Yeah, so? They are all crap: slow to start, big on resource bloat, slow to draw, and they don't offer much.
-> Ew. Time to stop doing that.
Then 15+ years passed, during which JS became the dominant way to build CRUD/Enterprise/form-based/and more apps, got big features (from WebRTC to an embedded DB, and from accelerated canvas and 2D graphics to MIDI). And it even hit big on the server too.
Then: - I've had a brilliant idea! We need a bytecode-based VM in the browser so a web page can run arbitrary code.
WebAssembly is faster than Java applets were back then, has easier ties to the DOM for UI (as opposed to being a sandbox), can be used to supplement conventionally written web apps, and is not tied to a single company.
So, quite a lot of differences due to time, and also in the characteristics of the technologies involved.
Well, it’s not hard being faster than 20-year-old tech. Also, WebAssembly still doesn’t have proper bindings to the DOM (and seriously, at the time we could be happy that a static DOM could be displayed at all), and Java is one of the very few languages that actually has a proper specification allowing independent implementations, rather than just saying “the spec is whatever code we write”.
So these comparisons frankly make no sense as every such shortcoming could have been easily fixed in 20 years.
>Well, it’s not hard being faster than 20-year-old tech.
Yes, but it was even harder to make Java applets run at any acceptable speed in 1997, which is what mattered for their deprecation. Understandably, people didn't just say "let's suffer them for 10 years until the hardware catches up to make them tolerable".
>Also, webassembly still doesn’t have proper bindings to the DOM
Yes, but still better than what applets had :-)
>So these comparisons frankly make no sense as every such shortcoming could have been easily fixed in 20 years.
It would make even less sense for people in 1997-1999 to stick with applets because "such shortcomings will be easily fixed in 20 years". People use what works now, or at least offers a serious advantage despite the shortcomings. Java applets then didn't offer much.
> Understandably people didn't just say "let's suffer them for 10 years until the hardware catches up to make them tolerable".
It’s funny because that is essentially what happened. Other than Flash (which suffered from essentially the same shortcomings as applets, but at least had a productive environment for creating them), there was nothing that replaced these technologies for many years to come. Canvas rendering came much later, and only with much more recent browsers did it have acceptable performance. It is no accident that many people long for the old web, which was strangely more interactive in some cases than what we have today.
Don’t get me wrong, Java applets were shitty. But we sort of threw it all away instead of fixing it, even though in hindsight fixing it seems to have been the easier road (since the JVM has always been a state-of-the-art runtime, and JS engines had to catch up from zero). DOM integration into an object-oriented language would have been much easier than what WebAssembly does through JS bindings, and the lack of security was frankly more of a mindset back then; the JVM could have been sandboxed just as “easily”* as JS engines are.
* it is a hard thing to do, but it happened to JS engines because the money was there. Integrating the JVM with proper sandboxing would have required less money/energy.
The problem with Java applets was not the use of bytecode or a VM. The problem was that it was slow to start so you looked at a grey rectangle for a long time before anything happened. Flash was much snappier, which is why it won out over Java. Flash also had better development tools targeted multimedia and games.
>It is almost forgotten now, but the platform which carried Java to critical mass was the browser
I think that's backwards. I was there. Aside from a few e.g. banking and government applets one was forced to use and a couple of exceptions, applets never got anywhere and died fast.
Enterprise Java became what Java is all about very very soon. Java landed in 1996. Servlets/Tomcat landed in 1998.
By 2000 there was plenty of enterprise Java development - in fact that was the year MS created its own Java copy, in the form of C#/.NET after having tried to extend their version of Java for the Windows platform.
Was Shockwave a platform for Flash then, or something?
I don't know any technical details here, but from a user perspective Shockwave did turn into Flash. The Flash file extension "swf" even stands for "ShockWave Flash".
It was two separate products with separate origins. Macromedia had Shockwave and then they purchased Flash (which was called Splash then) and branded it Shockwave Flash.
Shockwave was somewhat similar to Flash, but it was developed for "multimedia" CD-ROMs, which meant the files were generally too big for online use. Flash was much more compact, which is why it won out.
Shockwave = plugin for playing Macromedia Director content. Director was a GUI builder for interactive multimedia apps which were programmed in a custom language called Lingo.
Flash was originally called FutureSplash, if I recall correctly. When Macromedia bought it they rebranded it to fit their general branding theme, hence the confusion. Flash wasn't really intended to be an app platform, it started out as a vector animation format, but later they added scripting using a dialect of JavaScript called ActionScript.
Most users ended up with the Flash plugin but not the Shockwave plugin. Macromedia Director was huge in its day - my first programming job involved writing Lingo - but it died out pretty quick when the internet started taking off.
Shockwave was Flash's larger brother, feature-wise. Shockwave content could contain Flash files, but Shockwave could do things Flash couldn't, or could do them earlier. (And if I remember correctly, Shockwave came first, and when Macromedia acquired it they then built Flash.)
From my perspective, Python became popular as the simplest and friendliest general scripting language along the lines of Perl or TCL, and the easiest for somebody familiar with other languages like C or Java to pick up.
If I wanted to do something just a bit too fiddly for a shell script, I'd reach for Python (and still do, although I've recently started using Node too).
Perl initially became popular as the best text-wrangling language for CGI scripts and the like, but (I think) was a bit too weird to be a general-purpose hit for the masses like Python. Likewise Tcl/Tk was great for whipping up quick UIs, but both parts seemed a little too weird to stand on their own.
Ruby was another strong contender; I think Python was already too well-established to be displaced by Ruby, but Ruby found its audience via Rails (which I'm guessing is better than Django).
Rails was the first of that type of web framework. Django and all similar frameworks are Rails clones. At that time Rails was the killer app for Ruby. Unfortunately for Ruby the other languages were willing to put in the effort to copy it, and Rails is no longer a unique advantage.
> Django and all similar frameworks are Rails clones.
Hmm, are you sure? I’m not saying you’re wrong, because I’m not sure myself! I heard of Django before I ever heard of Rails but that doesn’t mean much.
Wikipedia suggests they were released at around the same time -- Django started slightly earlier, Rails open sourced earlier.
Just a minor nitpick: Rust still has miles to go before it catches up with Python or Go or Java, so it's not yet in the same league. Most Rust crates today depend on C, so Rust isn't going to replace C in the next two decades; it will still depend on C.
Hopefully it will start replacing C going forward but in that space Rust is competing with Zig and Nim.
I do not know about Zig, but Nim is used in production at multiple companies - biotech, cryptocurrencies, finance, had an attempted commercial video game app, etc., etc. See here [1] for example.
> And I have to agree: State management is a huge part of CRUD apps.
I see this point come up often, but it doesn’t really line up with my own experiences. I find the explicit state control in functional languages makes it much easier to write enterprise CRUD.
Oh, how I miss BLISS. It was the first language I learned that got rid of the silly distinction between statements and expressions. What a revelation! Everything was an expression. Everything returned a value. I guess a bit like Ruby, but 40 years ago.
While Google's support was definitely a factor, Go also had some important language features going for it. Most importantly it is targeting a relatively empty niche in the programming language landscape, i.e. that of high performance, close to the metal languages with little performance overhead, while still being easy to write. "Easy to write" for 80% comes down to being memory-safe, unlike C/C++. If you're in that niche, you have few alternatives. The other options for mainstream memory-safe languages are interpreted scripting languages and Java-style jitted languages. Go easily beats both in resource consumption without being much harder to program in. Rust isn't really comparable because while it is memory-safe its memory management system still forces the programmer to think about the memory and resource usage, thus being slower to program in.
On a more serious note: I decided to read the Delphi documentation recently, because I’m old enough to have heard a lot about it, but not quite old enough to have written anything in it. It had discriminated unions. It did! I can’t imagine my life without them, as I write stuff exclusively in OCaml-like languages, so the only question in my mind is “how the hell did we manage to go backwards?”
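For readers who haven't met them: a discriminated union is a type with a fixed set of tagged alternatives that you can dispatch on exhaustively. A rough sketch of the idea in Python (the `Shape`/`area` names are invented for illustration, not from any of the languages discussed):

```python
from dataclasses import dataclass
from typing import Union

# Each alternative carries its own payload, like an OCaml variant
# or a Delphi variant record.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

Shape = Union[Circle, Rect]

def area(s: Shape) -> float:
    # Dispatch on the "tag" - here, the concrete class.
    if isinstance(s, Circle):
        return 3.14159 * s.radius ** 2
    if isinstance(s, Rect):
        return s.width * s.height
    raise TypeError(f"not a Shape: {s!r}")

assert area(Rect(2, 3)) == 6
```

In OCaml-like languages the compiler also checks that the dispatch covers every case, which is the part that's hard to recover once a language has "gone backwards".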
It’s so weird.
Go has some restrictions, but all in all it's a great little pragmatic language, which solves a lot of practical problems (utf-8 strings, concurrency, garbage collection, cross compilation, single binary deployment, performance, readability) and which I can easily keep in my head as opposed to most of the other languages I've used in my career.
In that way it's like a Delphi for the modern world.
Did google still have fans among the technical crowd when golang launched? I'd expect Rob Pike and Ken Thompson to be the fanboy-attractors rather than google.
> Python was taught in many universities for its flat learning curve, then started to be used in academic research.
Python was used in academia long before it was used for teaching. “Courting” scientists was done from extremely early on in the language history (matrix-sig was founded in 1995) and the “extended” indexing added at their request (auto-tupling and slicing predate PEPs as those were introduced for and by Python 2).
Java & Go are sponsored by megacorps; their success or otherwise has little to do with the language's strength in isolation. They are more examples in favour of successful platforms being leveraged to promote a language.
Python - which took Perl's lunch over a decade or so, and more recently became the obvious language when data scientists and AI/ML practitioners needed to adopt a platform. Its popularity has even led to it expanding into education in a huge way after Python became the most popular scripting language.
Java - the JVM is an utterly amazing piece of engineering, and yet to this day most people still write Java instead of the other, better languages (Kotlin, Scala, Clojure) hosted on that platform, and they deploy only to Linux environments. The platform, with its write-once-run-anywhere and industry-leading garbage collector that gets better than 50% memory utilisation, isn’t the thing that attracts people - it's the language.
Functional languages used to be the cool thing before object-oriented design came along and unlocked the ability to build bigger systems. The ultimate programming language (Lisp) has always encouraged functional style since, what, the '60s?
I would argue that python's popularity comes with numpy/pandas, jupyter notebooks, and the recent popularity of AI/ML/data science. Yes it's useful in other areas (flask and django are somewhat popular) but nowhere near as popular as its use for ML.
Scala actually became MUCH more popular with Spark. I think the original point stands - the platform pulls the language.
Java is popular due to the jvm - it was the first jvm language after all. It got popular before others like scala & clojure managed to get off the ground, at all.
> I would argue that python's popularity comes with numpy/pandas, jupyter notebooks, and the recent popularity of AI/ML/data science.
Python was hugely popular long before these were a thing. It was already seen as one of if not the most beginner friendly language at the beginning of the 2000s. It became the language of choice for ML because it was already popular with beginners not the other way round.
That's not how history played out at all. Java spent much of its life being pitched as a way to make C++ developers more productive.
The jvm was actively hated by many for years - slow startup times, excessive memory usage. It didn't use to enjoy the rich tooling ecosystem it has now, and it was seen as opaque and hard to tweak for performance.
To this day it's not that hard to find Java devs who want off the hotspot jvm - whether that's to go onto other JVMs with different tradeoffs (Azul or whatever) or whether to go native compilation (GraalVM) instead.
I wrote another comment where I explained "popular due to jvm" better - the productivity gain was real, but came from the GC which was....JVM. No, the JVM wasn't hated initially - not until it became hugely popular and people started pushing it to the limit. Or well, at least that's how I remember the history, I might be wrong; human brains are notoriously fallible and I'm too lazy to search and validate/confirm my version of it :)
Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance. Java being memory safe is a huge part of that, which is the JVM platform. Java the language had the major feature of being superficially similar to C++, that helped take over the business market but is now no longer relevant. When Java was first marketed as a business application language there were few competitors. Nowadays there are, but nowadays the major reason to choose Java is because it is entrenched and good enough.
However all the Java shops I hear about are also looking into Kotlin at some level of adoption. From all the new JVM languages Kotlin integrates the best with the existing Java environment and for displacing a language in an existing niche an easy upgrade path is the most important. So I think Kotlin will become an important language in the JVM ecosystem, it will just take a long time because these types of businesses are conservative in their tech choices.
> Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance
I fail to think of any other platform that could run these monstrosity CRUD enterprise apps as fast as the JVM can. Sure, C++ can be written to utilize hardware better, but with all the classes and interfaces around and everything being virtual, a good JIT compiler can devirtualize method lookups in a way AOT-compiled languages can't.
> Java nowadays is mostly used for business applications, where developer productivity is more important than runtime performance
While both of these things are true, they are not connected in the way you imply, since Java is a pretty low-level language by today's standards.
In the Java school of business app engineering, writing the code is rarely a big part of the effort, so it doesn't matter if the language is not very good or expressive. Java wins by having a big, commodity-like labour pool of programmers, and there's a lot of inertia and stability in the platform.
There are of course a lot of people who use more expressive and creative tools in making business apps, like eg the many companies using Clojure, Scala, Ruby, Python etc for them, so it's not the only way to skin the cat.
Both of my last 2 large bank gigs (kind of the last places you'd expect cutting edge tech) were going all in on Kotlin. New projects were Kotlin only, and there was active work on sunsetting/migrating Java applications towards Kotlin. None of these were Android applications.
Sure, this is anecdotal. But I'd say the same of Java's dominance in the JVM space. Java's continued dominance is not a sure thing from my vantage point.
> JVM is written in a mix of Java and C++, let me know when they start rewriting it in Kotlin.
This is less relevant today. The host blessed languages do have an advantage, but I would not say it is insurmountable. It might have been the case in the past, but the modern JVM is a platform, it is no longer a glorified Java language interpreter.
> Now a couple of places are adopting Kotlin outside Android, nice, eventually will migrate back in about 5 years time.
Maybe. Maybe not. Most developers I talked to that have experienced the transition do not want to go back to Java.
This isn't to say Java will die. It will continue to thrive. But Java dominance (on the JVM or as a whole) isn't a sure thing anymore.
There is a functional programming language that is coupled to a platform, that most people on HackerNews wouldn't think of: the M language for PowerQuery. It is used to make ETL pipelines for Excel (and I think other Microsoft apps?). It is very popular, although among people who don't consider themselves "programmers" or "software engineers", but rather analysts.
In fact, Excel itself can be considered a functional, reactive programming environment, and if we grant that, then it is the most popular programming language on the planet.
It has just occurred to me that Excel is an excellent example to use when explaining functional languages to people who don't understand what they are.
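The spreadsheet model really is the functional/reactive idea in miniature: every cell is a pure function of other cells, and editing one input updates everything downstream. A toy sketch in Python (purely illustrative - this is the user-visible model, not how Excel is implemented):

```python
# Each "cell" is a pure function of other cells, looked up by name.
cells = {
    "A1": lambda: 2,
    "A2": lambda: 3,
    # B1 is a formula cell: =A1+A2
    "B1": lambda: get("A1") + get("A2"),
}

def get(name):
    # Recompute on demand, like spreadsheet recalculation.
    return cells[name]()

assert get("B1") == 5

# Changing an input cell changes every formula depending on it,
# just as editing a cell in a spreadsheet updates its dependents.
cells["A1"] = lambda: 10
assert get("B1") == 13
```

There's no assignment inside a formula and no hidden state; the whole sheet is just composed expressions, which is why Excel makes a good first explanation of functional programming.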
> If Netscape had decided to use a Scheme-like language or a BASIC-like language it would still be the most popular language in the world.
If whatever Netscape added as their scripting language had been too weird I think there is a good chance that a competing browser would’ve implemented a less weird language and that such a less weird language would have won over the hypothetical too weird language.
For me, Objective-C is too weird so I never bothered with it but I love Swift and only after Swift came out did I start making apps for iOS. And that’s even though I had been wanting to make mobile apps for iOS for a long time.
Microsoft introduced VBScript support in Internet Explorer. VBScript was a variant of Visual Basic and therefore a lot more familiar than JavaScript. VB was already used in MS Office and a whole generation had learned programming starting with BASIC. But it didn't matter.
Nobody would use a language which wasn't supported by the browser with the largest market share, regardless of the merits of the language.
I disagree. At school we studied both C and LISP during the same semester, writing a lot of similar things in procedural vs. functional styles... I can assure you that most people (who didn't have any prior programming exposure) really preferred C.
This data point is confounded by Lisp's weird syntax; I would expect people not used to programming to hate the excessive indentation and nesting that Lisp's syntax imposes on expressions.
> JavaScript became popular because of the browser
I would also say it is mainly due to Gmail and Google Maps' extensive use of XMLHttpRequest, and Douglas Crockford's insight that JS has closures and functions as first-class citizens - it is just Scheme in C clothing. Check out his Little JavaScripter. That, and of course the introduction of JSON.
I think we must always see C in the context of assembly language. Pascal and C really helped move things forward in terms of abstraction. C seems hard now, and it is; however, assembler is way harder, and when things started to evolve from 8-bit to 16- and 32-bit computers, we were all glad that machine language was a thing of the past. The same goes, for the most part, for BASIC.
Not sure if it's implied, but JavaScript was basically conceived as Scheme with C syntax in the browser. That concept didn't take off for quite some time, but pure functional programming within React actually got quite popular. Even "plain React" borrows a lot of concepts from FP, especially when combined with Flux and other add-ons. During that time Clojure also gained some popularity due to Om (React with Clojure).
One language where FP has basically arrived in the mainstream, by the way, is Scala. And there are other, newer languages which allow for FP-ish programming, like Kotlin.
But otherwise it's true: languages like Scheme or Haskell will probably not arrive in the mainstream soon.
>Languages do not become successful due to their intrinsic qualities. Languages become successful when they are coupled to a successful platform.
That's true for system and low-level application languages (JavaScript is an exception: on the web platform there's no alternative, so it's used for everything).
Perl, Python, and Java didn't become successful because of platform ties (yes, Java was made by a platform vendor, but most of its programmers used Windows and deployed on Linux or AIX, not Solaris).
> Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.
WhatsApp backend is written in Erlang. A large part of the world's telecom infra is written in Erlang (2G, 3G, 4G, 5G). It powers the highest availability systems in the world. Still nobody cares about it....
> JavaScript became popular because of the browser
That is a very popular perspective. In the past I have seen this sentiment used heavily by people who hate JavaScript as a means to rationalize how JavaScript could have become popular at all.
Unfortunately this sentiment is disqualified by data.
The browser became a popular platform in the late '90s and early 2000s as business and social interest in the web grew. JavaScript has been around cross-platform in a mostly consistent way since 1998, thanks to the publication of the early ECMAScript standards.
This did not make JavaScript popular, though. JavaScript would not become popular until more than a decade later.
In 2008 a couple of things happened. Chrome launched, offering the first JavaScript JIT. Before this, JavaScript was hundreds of times slower than it is now. Also, jQuery started gaining popularity at this time, which allowed employers to hire any dumbass off the street to write in it. Douglas Crockford also heavily evangelized the language for its expressive capabilities: functions as first-class citizens, optional OOP (easily ignored/bypassed), native lexical scope.
A little after that around 2009/2010 Node.js launched and GitHub became common knowledge. It’s about this time that JavaScript exploded in popularity. The web was already popular for more than a decade before this.
> Functional programming is less popular because no major successful platform has adopted a functional language as the preferred language.
I would argue functional programming is incredibly popular but less common in the workplace because OOP conventions are what’s taught at school.
> Unfortunately this sentiment is disqualified by data.
Your comment seems to support, rather than disprove, the claim.
JS was around for years without much success before the browser became a popular platform for applications. People used to only write applications for the server or for platforms like Java and Flash that were just embedded as plugins in the browser. They only started using JS after improvements to browsers made it a better platform than the alternatives.
Javascript became mainstream with the Web 2.0/AJAX hype of 2004. It was still Google's "fault", but it was the slick Google Maps site (and also GMail) and not Chrome that was the inciting factor. With an assist from Microsoft's Web Outlook and XMLHttpRequest.
Really, the reason Chrome invested heavily in optimizing Javascript was because it had started to be widely used in more than trivial scripts.
Functions were invented by functional languages…but they originally were a lot more flexible than the way most procedural languages use them now.
Let’s try rewriting your example this way
add = function(a,b) { + a b }
Why is this better?
Because + is a function, and add is an object, so you can then refactor the code to say this
add = +
Try doing that in C.
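For what it's worth, Python can get fairly close to this idea (a sketch; `operator.add` is the stdlib function behind `+`):

```python
# "+" is available as a plain function, so "add" can literally be
# rebound to it -- the refactor from the comment above.
import operator

def add(a, b):
    return a + b

add = operator.add   # in effect: add = +
print(add(2, 3))     # 5
```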
As I see it, the code for some programs has a high degree of symmetry (i.e. repeated patterns), and the best language is a notation that lets you capture that symmetry best.
If you find yourself cutting, pasting and modifying things slightly each time, then your language is holding you back.
Test-script code is the best example of this…particularly hardware tests. You have the same code copied and pasted dozens or hundreds of times.
Thoreau once said that government is best that governs least. That language is best that makes you type the least.
Function pointers. They are abused to no end in C programs. C has the original callback hell, not JS. (And that is mainly because C has no proper way to build abstractions.)
Yes, that's functional programming! I think you're right to say that many programmers use FP much of the time. The modern trend is definitely to borrow many FP techniques and use them inside an imperative shell.
Your example is strict FP, where a and b are fully evaluated before add() executes. There's also non-strict FP, where everything uses lazy evaluation by default. That has some big advantages but also a few pitfalls, and generally makes things significantly weirder when compared to imperative languages.
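To make the strict/non-strict distinction concrete, here's a loose Python analogy using generators (Python itself is strict; generators only approximate laziness):

```python
# A lazy, conceptually infinite sequence: nothing is computed until demanded.
import itertools

def naturals():
    n = 0
    while True:
        yield n
        n += 1

# Forcing only the first five elements terminates fine,
# even though the sequence has no end.
first_five = list(itertools.islice(naturals(), 5))
print(first_five)   # [0, 1, 2, 3, 4]
```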
Pure FP is where you only have functions and expressions, no variables at all. I think that's where most mainstream programmers draw the line -- sometimes you just want to store something somewhere, without being forced to jump through what seem like weird hoops.
The original elevator pitch for Haskell was that it's a "non-strict, purely functional" language (aimed at unifying a bunch of different research languages, plus the proprietary language Miranda).
Despite Rust being around for quite some time now, highly liked (hyped) by its users and being a good language with many features that make it stand apart while having major commercial support, it's still a very niche language.
People are learning Python because they want to do ML or scientific computing - not the other way around. Python has its current popularity because of NumPy and the whole numeric and ML ecosystem.
I love Python as a language, but realistically it could just as well have been Ruby or some other language.
Python was already quite popular by the time 3.0 came out in 2008, well before the AI summer was in full swing, which is a large part of what drove the popularity of NumPy et al.
You're not wrong now, but that's just the chicken; the egg included web applications and Python's use as a scripting language.
And most educators decided on Python because? You don't think it has anything to do with people liking the language and therefore wanting to use it and teach in it?
Because there was a huge push by the Python designers towards this direction?
They got funding from DARPA in 1999 for a proposal entitled "Computer Programming for Everybody", the first part of which was to "develop a new computing curriculum suitable for high school and college students".
The Python project did a lot of outreach and marketing specifically targeted at educators to explain why Python was a great teaching language (and did a lot of work to make it so - don't get me wrong). It worked.
It can also be due to tooling -- once a student has got the python binary installed you can get them writing and running code without having to make sure they have the correct version of x, y, z (don't even need conda or pip) or teaching them what a compiler is etc.
Javascript is quite popular as an introductory language too -- students can open up a web browser and type stuff in the console in the middle of a lecture.
I agree. It even features a basic IDE, IDLE, as part of the default install. No need to figure out how to configure your text editor to interact with the Python shell. I remember trying to learn Ruby before Python, several years ago, but then got stuck trying to configure Geany to work with Ruby.
>They chose Python since it's essentially a free, open-source alternative to Matlab.
Different domains of education had different reasons but some classes had nothing to do with Matlab.
- Professor Gerald Sussman of the famous MIT 6.001 SICP class said they switched from Scheme to Python because (paraphrasing) it's more high-level, with libraries to get immediate work done. (E.g. a class project to control a robot.)
- Peter Norvig teaching Artificial Intelligence classes switched from Lisp to Python because he noticed his students kept getting stuck on Lisp syntax instead of progressing on the more important AI concepts. Switching to Python made teaching the class easier.
One can google for their interviews on why they switched to Python.
Why didn't they use Scheme? That was pushed by a very prestigious institution so had some momentum in the education field. Instead they switched to Python and basically nobody uses Scheme to teach any longer. Python won because people preferred it over lisp, simple as that.
Teaching Scheme etc always got pushback from the outside, because regardless of how well it works for education, people get it in their head that you are teaching something that "industry doesn't use" and thus is bad.
It's a really intuitive one for new people to learn - you're far from the machine, but the syntax doesn't get in the way of the concepts much. Of all the languages I have taught people in, Python is the quickest route to understanding/independence.
Python was popular before those libraries came out though. And then people wrote those libraries in Python because it was popular, making it more popular. People using popular languages to create platforms or on their platforms isn't evidence that having a platform is what drives language popularity.
For me OOP is weird. I never understood what people mean when they say it is close to how they think. It feels like a way to obfuscate the flow of code and requires to make decisions about what thing should have which responsibility and relation with which other thing and building weird hierarchies that will bite you back in the long term.
Relations between Objects change, customers don't know what they want, change is always pain with OOP.
Now people say, composition over inheritance. Right, but isn't that the point of functional programming?
Functional programming maps to how I think. The most important question is always: What data structures do I need? Get your data in order and the rest will flow naturally.
I don't use functional programming because I am a math nerd or something, I am not even good at math. I use it because it is composable and easy to understand. I can refactor without any fear of side effects.
Now is pure functional programming practical? For some tasks, sure but yeah not always. Imperative programming gets stuff done. Work with the strengths of both.
The thing you are not used to always feels weird, that is a problem with you not the thing you try to learn.
For me functional programming was just like a “missing link”. Wait, I can just code like I think, not in this weird stateful way? It’s just so practical for me. I can deliver 10x the value for 0.1x the effort. It’s just not fair; I know for a fact that there are more people with the same model of thinking, I just want them to feel as liberated.
I'd add that, for me, "functional programming at the edges" is sufficient: I don't need to go full straight jacket / Haskell-style to get a huge lot of the benefits of FP. I can still use an imperative outer shell and still have lots of functional parts in my code which are easy to reason about.
I tend to mix OOP and functional together, as I use objects as a way to relate functionality or steps within the process. I almost never use objects as a means to model the actual data. At most, I use objects in this instance as glorified structs. The only real benefit that OOP as implemented in most languages has is the ability to write to an interface, whether you use an explicit interface type or an abstract class which concrete implementations derive from. Like Robert C. Martin said in many lectures, inversion of control is really where OOP shines. Anything else, most other paradigms do it better.
I find that OOP languages (as long as they support FP and immutability) have the benefit of using classes as nothing more than first-class parameterised modules (without any mutable state), so I don't have to repeatedly pass arguments from one function to the next but can look them up from a shared context. Haskell doesn't really have that option, although people have told me that OCaml can do that.
You are describing the way I wrote in Scala, and now in Python: using class instances that are initialised with shared context like database connections, serialisers, etc.
This is exactly how I roll these days! Avoid classes until I find myself doing too much argument plumbing, or having too many arguments to a function. Also love me some memoization. If I'm honest, I haven't tried aggressively using closures, but I suppose that would have been another way to do it...
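A minimal Python sketch of that pattern (all the names here are made up for illustration):

```python
# A class instance as a parameterised "module": shared context is set once
# in __init__ and is read, never mutated, by the methods.
class ReportService:
    def __init__(self, currency, tax_rate):
        self.currency = currency   # shared context instead of extra arguments
        self.tax_rate = tax_rate

    def net(self, gross):
        # Pure computation: reads the shared context, mutates nothing.
        return round(gross * (1 - self.tax_rate), 2)

svc = ReportService("EUR", 0.19)
print(svc.net(100.0))   # 81.0
```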
> I can refactor without any fear of side effects.
Can you? Like, say you changed your sum() function to only sum every other element, because for the code you're working on right now that made sense. Well, now you've screwed up the other places which relied on sum() summing all the elements.
Dumb example because you'd know better, but surely you can still shoot your foot off like that with FP?
And yeah I know that's not the side effects you were talking about, but when I'm changing my OOP stuff, that's the kind of side effect I'm most worried about.
I've never really used a proper functional language though, so it's certainly possible I'm unenlightened.
> Can you? Like, say you changed your sum() function to only sum every other element, because for the code you're working on right now that made sense. Well, now you've screwed up the other places which relied on sum() summing all the elements.
I would not change sum but just filter out every other element and feed that new list to the sum function. (If you need it often, write a new helper.)
Your sum function should not decide which elements should or should not be added, that is the callers job. It doesn't even have the context to decide on that.
So yes, I would have the guarantee that nothing would break because I did not change sum in the first place.
In practice you probably wouldn't even start with that sum function but simply implement a function that takes two numbers and adds them together. Then the caller can use higher order function like fold and filter to do whatever it needs.
(Of course we assume it is not literally adding two numbers but some complicated logic, otherwise don't even write a helper for it, just sum your stuff when you need it.)
And yes, you can still have logic bugs in functional programming languages and it can't really protect yourself from that when refactoring but I wanted to note how keeping your design simple and having easy to understand composable functions can help avoid bugs.
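In Python, the approach described above might look like this (a sketch; `add` stands in for whatever the real combining logic would be):

```python
# Leave the combining function alone; the caller selects the elements.
from functools import reduce

def add(a, b):
    return a + b

numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# The caller filters to every other element...
every_other = [x for i, x in enumerate(numbers) if i % 2 == 0]

# ...then folds with the unchanged add function.
total = reduce(add, every_other, 0)
print(total)   # 14
```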
> I would not change sum but just filter out every other element and feed that new list to the sum function.
Of course, and I would do the same in my imperative program. A dumb example like I said.
> Then the caller can use higher order function like fold and filter to do whatever it needs.
Right but then at some point you have a chain of ten of these calls, and you have it _all over_, and so you figure hmm lets make a separate function out of that, code duplication is bad after all.
And then you find your new function has a bug and you need to change it... Will all the users of this new function be OK with that bugfix?
If you don't wrap up these chains into new functions, how do you find all the places you need to change once you need to make that bugfix?
> having easy to understand composable functions can help avoid bugs
Sure, this I get, which is why my imperative code also contains lots of "do one thing" methods that are used as Lego bricks. And I use a fair bit of functional-ish code, à la LINQ, where it makes sense.
I wish I had discovered functional programming at an earlier stage, where I had more time to experiment. I think it would be very informative to make two non-trivial feature-equal programs in either style so I could compare.
One thing I want to point out is that when I first read "do one thing", I thought I knew what they were saying and I didn't. It took a long time for me to finally grasp that message.
One of the ways I finally learned that was after learning async/await in C#. Every beginner to async methods dreads the moment they realize the "zombification" of the Task&lt;T&gt; return type, where if you call an async function 6 layers deep into your program you need to change the return type all the way up the stack. Almost always now I call the async function at the very top layer. If I need some value or list, I compute it and return it all the way up, then make the async call at the top based on it.
I learned to split my functions into two types, pure functions and impure functions. Async functions are an example of an impure function. The only thing those functions are allowed to do is their impure thing. Make a web call, push a value into a database, whatever. The pure functions are where you actually do your computations and transformations.
If you have a pure function that has a bug and you need to change it, because it's a pure function it is inherently testable. Just run it and see. If you're not sure, it's trivial to create a new function with the bugfix and only call that function from the places that you are sure of. But then try to make sure.
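A Python sketch of that split (the names and the `db.insert` call are hypothetical):

```python
# Pure: all the computation lives here, trivially testable in isolation.
def build_report(rows):
    return {"total": sum(r["amount"] for r in rows),
            "count": len(rows)}

# Impure: does its one impure thing and nothing else.
def save_report(db, report):
    db.insert("reports", report)   # hypothetical database API

# At the top layer you'd wire them together, e.g.:
#   save_report(db, build_report(fetch_rows(db)))
print(build_report([{"amount": 2}, {"amount": 3}]))
```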
> I know that's not the side effects you were talking about, but when I'm changing my OOP stuff, that's the kind of side effect I'm most worried about.
That's not a "side effect"[1] at all. It's a logical bug in the implementation that only affects the function's return value. You absolutely still have to worry about those sorts of bugs in FP, which is why you still need tests or a fancy enough type system to prove the bug cannot happen (for your example that is unlikely to be practical).
"Without any fear of side effects" doesn't mean you don't have to fear anything at all, but it's one less thing to worry about.
Yes, like I said. Point is, in my imperative OOP code, I'm relatively seldom worried about actual side effects, and more worried about introducing bugs due to me not fully grasping all the interactions of the code I'm changing.
Did someone somewhere make an assumption about how this code works (sums all numbers), and am I breaking that assumption when I'm making this change (summing every other number)?
> I'm relatively seldom worried about actual side effects
I'll give you one concrete example: In a number of ORMs, I have no idea when database calls actually happen and this can lead to a) subtle logic bugs (inconsistent state), or b) bad performance (N+1 queries).
I've had such problems both with Ruby's Active Record and Hibernate, two of the most popular ORMs. I've even deployed code that had subtle bugs, even though I wrote tests, because while my tests verified that the correct state of the object was written to the DB, the updated state was not reflected in the JSON response object. I'm sure similar things happen in other frameworks as well.
A PFP language can prevent that sort of craziness from happening.
Confidence in a program's behaviour is not an all-or-nothing proposition. You can statically verify certain properties of your program (through immutability, type systems, linters etc.) without having to write a full formal proof of your program's entire behaviour, and you can gain more confidence in the entire program working correctly.
In the example you just gave (summing over a list of numbers), in almost all programming languages you'd resort to runtime verification instead, i.e. tests, just because it's so much more practical. But if you have a PFP language, you still have the confidence that the behaviour of the inputs and outputs is the only thing you have to test, since the function cannot do anything else.* Even in a non-pure FP language, you know that it cannot mutate other variables, although it might have side effects.
* Ok, it might loop forever or crash, but that's comparatively rare.
There's a certain kind of personality that really gets into that (the 'architecture astronaut'), specifically because it is counterintuitive and they like convoluted things. I think the idea that really gets them going is "look, it knows to XYZ itself!"
I do scientific programming, and most of the time, functional style makes way more sense. Doesn't stop someone with the OOP bug from turning all the logic inside out and making a brittle mess, though.
There are problems (or parts of problems) that are more about behavior and others that are more about data. The latter of course benefits from an FP mindset/language, but I wager that OOP is a better fit for the former.
What OOP gives is a way to encapsulate some given part of a program, and make it a working little building block you can later reuse. It can contain mutable state, but is promised to be correct as only that class can touch it, it can be used as a slightly different version of a common interface, etc.
Some of the weirdness is so completely avoidable that it frustrates me, but languages and libraries dig in on ‘you get used to it.’ I’m talking about the f(g(h(x))) vs x | h | g | f. Right to left chains of functions seem more popular than chains of pipes in FP, but even being someone who has drunk the kool-aid deeply, I always prefer pipes.
‘Take the pan out of the oven once it’s been in for 30min, where the pan contains the folded-together eggs and previously mixed flour and sugar’ isn’t more FP than ‘Mix the flour with the sugar, then combine with eggs, then fold together, then put it into the oven, then take out after 30min.’ But people new to it so often think it’s an FP thing for no good reason. It’s a new-user issue that ought to be totally fixable as a community.
FP has enough great ideas that I’d recommend everyone learn pure functional solutions just to put into their tool belt, but it’s absolutely true that getting up to speed is harder than it needs to be. My hot take: it won’t be really mainstream until someone figures out how to make dependent types really ergonomic, which seems a long way away.
I'm relatively an FP newbie, and I have a few questions.
1. I've never seen the "pipe" notation you talk about in any lang except bash, which is not an FP right? In which languages does "|" denote function application?
2. Aren't dependent types orthogonal to whether a language is functional? Just to make sure we're on the same page on what dependent types mean, I wanna give an example (a contrived one, sorry) of dependent types in Python.
    from typing import overload, Literal

    @overload
    def str_if_zero(num: Literal[0]) -> str: ...
    @overload
    def str_if_zero(num: int) -> int: ...
    def str_if_zero(num: int) -> int | str:
        if num == 0:
            return "That's a zero"
        return num
You have to plan to use them in advance. With a pipeline operator you can figure out the command you want at one stage and then tack on another part of the pipeline without needing to go back and alter the way you began the line.
> which is not an FP right?
> Aren't dependent types orthogonal to whether a language is functional?
You're correct, and I think that's kinda the parent's point: much of the syntax ergonomics are orthogonal to FP, which means they don't have to be as weird/frustrating/unusual/unergonomic (insert preference) as they are.
A good example is how piping vs nesting are different syntaxes for function composition: x|A|B|C vs C(B(A(x)))
The former might feel more comfortable, familiar, and left-to-right readable for someone new to FP. That said, I think a few languages do use piping. F# has |> and Clojure has the ->> threading macro, I think. -- signed, humble java programmer.
> piping vs nesting are different syntaxes for function composition: x|A|B|C vs C(B(A(x)))
> The former might feel more comfortable, familiar, and left-to-right readable for someone new to FP.
C(B(A(x))) is the syntax introduced in school, but note that if you continue on in algebra the notation flips from C(B(A(x))) to x_{ABC}. Everyone agrees that it's easier to list the transformations in the order they happen.
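A tiny Python sketch of the two orderings (the `pipe` helper is made up, not a stdlib function):

```python
# pipe(x, A, B, C) lists the transformations in the order they happen,
# whereas C(B(A(x))) nests them right-to-left.
from functools import reduce

def pipe(value, *fns):
    return reduce(lambda acc, fn: fn(acc), fns, value)

A = lambda x: x + 1
B = lambda x: x * 2
C = lambda x: x - 3

print(C(B(A(4))), pipe(4, A, B, C))   # 7 7
```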
Dependent types are not totally orthogonal of FP. They generally require immutability and function purity to work well and become extremely difficult to work with in the absence of those features.
Well it's luckily a little better than that. Type checking doesn't have to be undecidable and the compiler doesn't have to be vulnerable because you don't have to execute any side effects in the compiler even if the code is meant to execute any side effects at runtime.
It's more that the utility of dependent types is generally restricted to functions that are pure and deal with immutable values, which means one possibility is to specially mark which of your functions are allowed to be used in a dependent type and which aren't. But the fragment of your program that can be used in dependent types is generally going to be a pure FP fragment.
Side effects aren't the only reason type checking can be undecidable though. It depends on the type system, not just referential transparency. Look at Cayenne or ATS. Also, don't forget it's still possible to DoS the system (although this is already more than possible with C++ and such, so what are we complaining about).
Yeah, dependent types would be nonsensical for code that isn't functional. Marking which parts are and aren't is a good idea. Idris did something similar for totality checking. I think it'd be nice to have a system which differentiated between functions and procedures though.
> I think it'd be nice to have a system which differentiated between functions and procedures though.
That's essentially what 'IO' types are for (or finer-grained alternatives like 'Eff', etc.). At the value level, we have things like 'do notation', 'for/yield', etc.
I don’t know if I would say it’s very common(?) but it was introduced as a named concept (“uniform function call syntax”) AFAIK in D and is also a feature of Nim
(2) No, dependent types are not the same as overloading. Basically it's when the type depends on the values; so like having a type for odd numbers or having an array that knows its length at compile time (for example to avoid array out of bounds exceptions at compile time)
1. Others have answered it well. Tons of languages have pipe operators. I used | in my example because I figured more readers will recognize bash piping than other notations.
2. Yes that's what I mean by dependent types, where types depend on values, like f(0): string and f(1): int. And yes, they totally are orthogonal. However, pure functional programming tends to lean on its type system more than other styles, in my experience. But then without dependent types, something as simple as loading a CSV gets hairy relative to something like pandas's pd.read_csv. Maybe others can go into why pure FP and static typing go so much together, but I gotta get back to work.
The pipeline operator |> exists in Elm[0]. It is available in Bash as | and the popular (in my limited experience) package flow[1] also implements it as |> for readability. It is a proposed addendum to JavaScript[2] but I have no idea how seriously the JS powers-that-be are taking that.
The example you gave in Python doesn't have tagged unions. I really don't know much at all about Python, so maybe the type checker can tell you that you're handling all the cases, but with tagged union types you have to handle every possible output of `str_if_zero` when calling it. That doesn't have to be an FP thing, but I only find it when doing Elm and Haskell in my experience.
That sort of chaining seems a bit like the method chains you have in some languages, like Java streams, C# LINQ and Rust iterators. That sort of "functional constructs when it fits" is getting popular, even if pure functional isn't gaining much popularity.
D doesn't have pipes, being a C-like (it probably could, but it's untypical), but it has something almost as good: "uniform function call syntax", meaning that a function that takes X as its first parameter can be called as if it was a method of X. `foo(x)` => `x.foo()`. This makes functional programming in D a lot closer to the pipe example, in that you're usually passing a lazy iterator ("range") through a sequence of chained calls.
That sounds amazing, but I wonder if it could get unwieldy as well.
Would have absolutely loved it in Go, where over the course of time I had to build up dozens if not hundreds of validators on strings, maps, etc. Ended up resorting to storing them in `companyInitials-dataType` packages, EG `abcStrings.VerifyNameIsCJK()`, but the unified call syntax would be super tidy.
I’ve hacked pipes in python many times just because it’s so much better. And no, “a=f(a);\n b=g(a);\n c=h(b);\n a=i(c)” isn’t “more declarative” or “more expressive” than a | f | g | h, and even in python a |pipe| b |pipe(blah)| c |pipe(blah, blah)| d is so much nicer than nesting the parens.
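One way to hack it in, as a sketch (operator overloading on a small wrapper class; all names invented):

```python
# Abuse "|" via __ror__ so plain values flow left-to-right into functions.
class Pipe:
    def __init__(self, fn):
        self.fn = fn
    def __ror__(self, value):      # evaluated for: value | Pipe(fn)
        return self.fn(value)

inc = Pipe(lambda x: x + 1)
double = Pipe(lambda x: x * 2)

print(3 | inc | double)   # 8, i.e. double(inc(3))
```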
It's easier, but there are several considerations:
- There's workarounds. Eg. in F# you can redefine the pipe operator in debug mode [0] so it can be stepped into and inspected.
- I think some IDE plugins are also working on allowing pipeline contents to be automatically inspected.
- And of course, there's the good old tee operator for printf debugging. FP style rarely mutates variables, so a few print statements are just as good as a live object inspector.
As for why you actively wouldn't want to have those variables... much like comments, sometimes variable names add precious information and should be typed out, but sometimes they're just unnecessary noise, mental overhead, and scope pollution.
E.g. compare:
    let suppliers = getSuppliers()
    let sortedSuppliers = sortBy getSupplierPriority suppliers
    let bestSupplier = first sortedSuppliers
to:
    let bestSupplier =
      getSuppliers()
      |> sortBy getSupplierPriority
      |> first
Elixir includes IO.inspect[1], which prints a value and then returns it. It makes it easy to insert this function into a pipeline without disrupting the existing logic[2].
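A Python analogue of that tee-style helper is only a few lines (a sketch, not a stdlib function):

```python
# Print an intermediate value, then return it unchanged so it can sit
# in the middle of a pipeline without disturbing the existing logic.
def inspect(value, label="debug"):
    print(f"{label}: {value!r}")
    return value

total = sum(inspect([1, 2, 3], label="input"))
print(total)   # 6
```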
In terms of debugging inconvenience for intermediate values, `->(x) | a | b | c` is the same as `c(b(a(x)))`.
I find debugging /w a REPL using functional-esque code much easier. You can play around and exec/inspect lines/snippets at will without changing state & "breaking" the debug session.
Agreed they are pretty much the same inconvenience level, but for me that level is too much either way. Maybe it's just my workflow, but I always preferred step-by-step debugging compared to a REPL, which is why I like to see many intermediate results being named.
when I hack it into python, it’s slightly annoying. when done properly, no, it’s no worse than f(a+g(b+c())), which is everywhere in code already. And debuggers / error messages should indicate what sub expression caused it, which would clean that up
> ‘Take the pan out of the oven once it’s been in for 30min, where the pan contains the folded-together eggs and previously mixed flour and sugar’ isn’t more FP than ‘Mix the flour with the sugar, then combine with eggs, then fold together, then put it into the oven, then take out after 30min.’
Oddly enough, the language that put the most thought into this problem is Perl, which provided the pronoun variables to represent "whatever I just computed".
Natural languages always offer multiple strategies for ordering the various parts of a sentence, because the order in which things are mentioned is critical for several purposes, from making it easier for the listener to follow a chain of thoughts to building or defusing tension or correctly positioning the punchline of a joke.
I'm always surprised how unpopular perl (and even more, its spiritual successor raku) is with the HN crowd. It does not square with my experience of various languages' ergonomics.
It's not at all surprising that perl put thought into things like this, the entire language concept optimises for elegance of expression.
I can't speak for Perl's general usefulness, but pronoun variables are just not terribly useful, because you can make them up on the spot. For example,
    result = input
    result = transform(result)
    result = transform2(result)
    return result
Perhaps I misunderstood the idea of pronoun variables. A related idea is the idea of `self` or `this`, in that computation is viewed from the perspective of an agent (which is ironically called an object when it is typically a grammatical subject).
Perhaps this is why we like OOP and CSP patterns so much, it meshes well with our social abilities.
> Perhaps I misunderstood the idea of pronoun variables.
Their point is the same as the one in human language. You can't make them up on the spot -- they already exist, with predetermined reference logic that we learn as part of learning a language. All you can do is mention them. This means they often reduce both the reading and comprehension load in comparison with inventing a new name and binding it to some referent.
For example, in this sentence:
> I can't speak for Perl's general usefulness, but pronoun variables are just not terribly useful, because you can make them up on the spot.
Who's "I"? And "you"? What's "them"? These are all obvious. Compare that with:
> I = the person writing this. This = the writing you are reading. You = the person reading this. I can't speak for Perl's general usefulness, but pronoun variables are just not terribly useful, because you... [You = someone writing code] can make them [Them = pronoun variables (though, as noted, you can't make them)] up on the spot.
Consider what it would take to write code displaying the first 10 numbers in the fibonacci sequence in some PL.
Here it is in Raku using pronouns:
say .[^10] given 0, 1, *+* ... Inf # (0 1 1 2 3 5 8 13 21 34)
I grant that you have to learn this aspect of Raku to be able to read and write the above code. But it only takes a few seconds to learn that:
* `.foo` means applying `foo` to "it".
* `[^10]` means up to the 10th element. That's got nothing to do with pronouns, but whatever.
* The `*` in `*+*` is the pronoun "whatever" and, when used in combination with an unary or binary operator forms a lambda for that operation. So `*+*` reads as "whatever plus whatever" and represents a two argument lambda that adds its arguments.
* `...` is Raku's "sequence" operator, which uses the preceding function/lambda as the generator (which in turn uses the preceding N arguments per the generator's arity). Again, nothing to do with pronouns, and yet more you have to learn, but that's how PLs are.
It will no doubt look awfully weird, almost as weird as, say, Chinese (presuming you don't know Chinese). But that (it looking weird) is something completely distinct from whether it (using it) is extremely pleasant once you just accept it.
With a sufficiently open mind it (accepting using a "whatever" pronoun) can take literally a few seconds. (Thus kids tend to find this notion of "whatever" in Raku easy to grok, whereas adults sometimes struggle.)
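For comparison, here is a rough Python version of the same task, with no pronoun variables available: every intermediate value needs an explicit name (`fib_upto`, `seq`, `a`, `b` are names invented just for this sketch).

```python
def fib_upto(n):
    # Build the first n Fibonacci numbers with explicitly named state.
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fib_upto(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The content is the same as the Raku one-liner; the difference is that the reader has to track three invented names instead of "whatever" and "it".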
I think maybe part of it is that most people are introduced to FP through Haskell, if I had to guess. And Haskell (at least the tutorials) is IMO guilty of the top-down approach where the final value is written first, referencing variables that haven't appeared yet. OCaml, on the other hand, has you define your components before the final value, which is more familiar to most programmers.
I think Haskell doesn't always look super readable, but I can appreciate it at least because it reminds me of math papers where the final theorem is introduced first, and then it's broken into lemmas, and each of those lemmas is proved, etc.
A lot of the difficulty in learning FP is in unlearning imperative programming.
I hypothesise that for someone who has never programmed before, FP and declarative paradigms are easier to reason about.
> FP has enough great ideas that I’d recommend everyone learn pure functional solutions just to put into their tool belt
Completely agree! I got my first taste of FP writing unreadable nests of python list comprehensions and it's shaped my Java tremendously. I've never spent much time in FP dedicated languages, but it's as good a frame to understand as OOP. I find they even mix and match well.
I’ve “drunk the koolaid” and I mostly went the opposite direction on left-to-right pipes vs. right-to-left compose: with a(b(c(d))), order of evaluation will always be arguments->function call (in most languages) and, so d is the first thing evaluated. Pipes make it seem more consistent, but it also introduces an inconsistency.
Isn’t the first thing you read being the first thing evaluated just better? We read left to right, so having to see and remember a before c(d), but then having some of a’s arguments at the very end, is just annoying vs pipes.
It's hard for me to verbalize, but the rtl composition just sort of makes sense. For one thing, it's the way functions already sort of work:
a(b(c(d))) === (a • b • c)(d) // the functions are in the same order
And, when you think about the typical order of evaluation, a(b(c(d, e, f))) is evaluated:
- look up the value of d, e and f
- evaluate c(d, e, f) (call it g)
- evaluate b(g) (call it h)
- evaluate a(h) (call it i)
- return i
Pipes add an inconsistency between the order of evaluation of arguments vs. function calls and the order of evaluation of functions. (Also, completely separately, I've mostly come around to disliking operators and thinking Lisps are right here: precedence and associativity are "intuitive" for basic math operators because we've spent years studying math and getting used to PEMDAS, but anything beyond this is just asking for trouble: aside, maybe, from APL/J's strict right-associativity without precedence)
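The two orderings can be sketched in Python with a couple of hypothetical helpers (`compose` and `pipe` are my own illustrative names, not stdlib functions):

```python
from functools import reduce

def compose(*fns):
    # (a . b . c)(x) == a(b(c(x))): the rightmost function runs first,
    # matching the evaluation order of the nested-call syntax.
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

def pipe(x, *fns):
    # x | a | b | c: the leftmost function runs first, matching reading order.
    return reduce(lambda acc, f: f(acc), fns, x)

inc = lambda n: n + 1
dbl = lambda n: n * 2

compose(inc, dbl)(5)  # inc(dbl(5)) == 11
pipe(5, dbl, inc)     # also 11, but read left to right
```

Both compute the same thing; the disagreement in the thread is purely about which order is easier to read and which is more consistent with how calls evaluate.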
I have an algebra book on my shelf* that puts function arguments on the left, as in (x)f instead of the usual f(x). It is definitely more “natural”, as functions compose in the normal reading direction. But even so, it is incredibly hard to read, because the break with the usual convention is so radical. Needless to say, it didn’t catch on.
* In my office, but I am at home. Don’t remember the author; sorry. But it was written back in the ‘60s, I think.
What I like about pipes it that it allows two ways to write chains of function calls. For calls that are more branch-like, the nested structure of standard function calls makes the most sense to me:
article = create(author("John"), Page.article())
For cases where you're modifying data sequentially, the pipe structure makes it really easy to read:
5th_prime = 0.. | filter(is_prime) | nth(5)
Having a way to write function calls forwards and backwards lets you choose whatever syntax feels most appropriate for the job.
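In Python, which has no pipe operator, the same sequential shape can be approximated with lazy iterators (the `nth` helper is my own invention for this sketch, and 1-based to match the comment's `nth(5)`):

```python
from itertools import count, islice

def is_prime(n):
    # Trial division; fine for a small illustration.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def nth(iterable, n):
    # 1-based: nth(..., 5) returns the 5th element.
    return next(islice(iterable, n - 1, None))

fifth_prime = nth(filter(is_prime, count(0)), 5)
print(fifth_prime)  # 11  (the primes begin 2, 3, 5, 7, 11)
```

Reading inside-out (`count` -> `filter` -> `nth`) gives the same left-to-right story the pipe syntax tells directly.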
that same logic works for reversed-compose - a|b|c|d = (a)|(b~c~d). It is inconsistent with the normal kind but I think it’s worth it just for the ability to write list |map| func |filter| predicate |group-by| keyfunc |map| .values() |map| sum, which is so nice because you start with the value and as you read the thing that’s happening is precisely the thing you’re reading, and everything’s in one place - it’s
They can be extended arbitrarily by defining collectors. The Stream interface is very well designed. For example your "toFoo" would be .collect(Collector(<foo-provider>, <foo-accept>, <foo-merge>, <foo-finisher>)). All of them can be arbitrary functions working on arbitrary types. That's the standard library implementation of "toList", in fact!
> applies only to streams
You're exactly correct here, but for legacy reasons Java did not have a choice. I'm the first to admit mylist.stream().map(f).toList() is dumb. A list is a functor, dammit!
It's the best possible solution given the existing constraints, but it's definitely limited in what it can do.
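For readers unfamiliar with Java's Collector, a rough Python analogue of the supplier/accumulator/finisher shape (omitting the merge step, which Java uses for parallel streams) might look like this — `collect` is a made-up name for the sketch:

```python
def collect(items, supplier, accumulator, finisher=lambda acc: acc):
    # A mutable container is created by `supplier`, each element is folded
    # in by `accumulator`, and `finisher` produces the final result.
    acc = supplier()
    for item in items:
        accumulator(acc, item)
    return finisher(acc)

# A "toList"-style collector:
collect([1, 2, 3], supplier=list, accumulator=list.append)  # [1, 2, 3]

# A joining collector with a finisher:
collect("abc", supplier=list, accumulator=list.append,
        finisher=lambda acc: "-".join(acc))  # 'a-b-c'
```

The point of the design is that arbitrary terminal operations fall out of choosing those three (in Java, four) functions.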
Functional programming isn't the norm because — while it's extremely good at describing "what things are and how to describe relationships of actions on them" — it sucks at "describing what things do and describing their relationships to each other". Imperative programming has exactly the opposite balance.
I find the former to just be more valuable and applicable in 80% of real world business cases, as well as being easier to reason about.
Entity Relationship Diagrams for example are an extremely unnatural match to FP in my eyes, and they're my prime tool to model requirements engineering. Code in FP isn't structured around entities, it's structured in terms of flow. That's both a bug as well as a feature, depending on what you're working on.
Most of the external, real world out there is impure. External services, internal services, time. Same thing for anything that naturally has side effects.
If I ask an imperative programmer to turn me on three LEDs after each other, they're like: Sure, boss!
for led in range(3): led.turn_on(); time.sleep(1); led.turn_off()
If I ask an FP guy to turn me on three LEDs after each other, first they question whether that's a good idea in the first place and then they're like... "oh, because time is external to our pure little world, first we need a monad." Whoa, get me outta here!
Obviously with a healthy dose of sarcasm.
Don't get me wrong, for the cases where it makes sense, I use a purely functional language every day: it's called SQL and it's awesome despite looking like FORTRAN 77. I also really like my occasional functional constructs in manipulating sequences and streams.
But for the heavy lifting? Sure give me something that's as impure and practical as all of the rest of the world out there. I'll be done before the FP connoisseur has managed to adapt her elegant one-liner to that dirty, dirty world out there.
I have my share of gripes about Haskell (which I'm assuming is the language you have in mind when you're talking about a pure FP language), but even with the sarcasm disclaimer, this is a pretty extreme strawman.
EDIT: I would also strongly dispute the idea that FP is structured around flow instead of data structures. In fact I'd say that FP tries to reduce everything to data structures (this is most prominent in the rhetoric of the Clojure community but exists to varying degrees among all FP languages). Nor is SQL an FP language: logic programming a la Prolog, and ultimately therefore SQL, is a very different thing from FP.
FP's biggest drawback is that to really buy into it, you pretty much need a GC. That also puts an attendant performance cap on how fast your FP code can be. So if you really need blazing fast performance, you at least need some imperative core somewhere (although if you prefer to code in a mainly FP style, you can mainly get around this by structuring your app around a single mutable data structure and wrapping everything else in an FP layer around it).
Without knowing Haskell, it looks like there are bugs in the code. Specifically, there's no delay after LEDTurnOff in the first example, and you have the same function name twice in the second example.
If those are bugs, I'd forgive that. If those AREN'T bugs, then keep me far, far away from FP!
Also, what is the point of i? Clearly, each LED should have its own index, but then i is never used again. (I understand this could be pseudocode or there's a lot of other code not included.)
And the ranges are inclusive in Haskell? I feel like a lot of friction between Matlab and Python involves how each language's indexing/slicing/ranges are represented, so it's interesting to see each language's approach (indenting like Python, lower camel case, delays in us, etc.) --- but with every language difference, I'm personally less inclined to want to learn something new without a great reason.
turnOnThreeLEDs = for_ [1..3] (\led ->
do
turnOn led
threadDelay (10^6)
turnOff led
)
It should be the above (i is changed to led), where I thought the original was automatically going to a new led and didn't realize that `led` was actually an integer and `turn_on` and `turn_off` are basically pseudo-methods (or extension methods). (The original code also only sleeps after turning an LED on, not off)
Indeed the second example is a typo that should have on vs off.
for_ [1..3] (\led -> do { turnOn led; threadDelay (10^6); turnOff led })
The joys of writing code on mobile and too much copy pasting.
Thanks for all the info. I want to give FP a proper try one day, and there are many different roads, but it's always a rocky start for me with a new language. Having a clear translation from one to the other is important, so I'm glad you updated this.
True that this matches the original example. I guess my mind filled in the second delay automatically when it noticed, "this isn't gonna blink to the naked eye!"
The point is to make it easy to program imperatively (with effects, where relevant) while simultaneously reclaiming the ability to check for correctness and maintain laziness by default.
What's so good about implicit sequential evaluation? Shouldn't the effect ordering be explicit? Isn't explicit better than implicit?
Use of that type is easily limited in Haskell code. For instance, in my chess counting project [1] only a few lines in Main.hs use IO (), while the other approximately thousand lines of code have nothing to do with IO ().
So, you recognize imperative is a subset of functional? :-P
do-blocks have perfectly functional semantics, so if you consider that to be imperative as well, this means that a sequence of instructions changing state is both imperative and functional, as long as you declare where the state is being handled in your code.
And yes, of course functional code can handle state. The good thing about this 'Haskell imperative' style is that it doesn't fall prey of side effects, the bane of imperative programs (uncontrolled side effects are NOT a good thing). In Haskell, you control why and where you allow them.
One could also make a language that has exactly the same visual syntax as C where ; is specified as a functional composition operator instead of a separator of instructions. These kinds of mind games are pointless: if your code is sequencing instructions, it's imperative; if it's denoting, it's functional.
If you do that, then you have to admit different kinds of imperative code: C-imperative style that can modify any state in the application as side effects, and Haskell-imperative where you can only modify state explicitly declared as input to the procedure.
It's not just mind games; the difference has very real implications for the architecture of the whole program and the control you can exert over unpredictable side effects.
I mean if pure FP is enough to write imperative code as well then the distinction between the two doesn't seem all that important to me. What would be a non-imperative equivalent to illustrate your idea?
Small examples of mutating state in Haskell are about as meaningful as small examples of pure functions in Java. Small examples are really easy to do and not that ugly, bigger more complex examples don't look so nice. The whole reason people want to use Haskell is because doing complex state mutations is horribly ugly and unergonomic so people don't do it, that is a feature of the language.
> If I ask an FP guy to turn me on three LEDs after each other, first they question whether that's a good idea in the first place
a proper FP engineer would model the problem of turning on LEDs one after another as a set of states. A simple way would be a bit set of the LEDs, in an array where each element is the LED's on/off state, like ['000', '100', '110', '111'].
Then, the problem decomposes into two, simpler problems: 1) how to create the above representation, and 2) how to turn the above representation into a set of instructions to pipe into hardware (e.g., send signals down a serial cable).
The latter problem is imperative by nature, but the former - that of the representation of states, is very pure by design! So the FP model provides a solution that solves a bigger, more general problem of turning LEDs into patterns, and this solution is just one instance of a pattern.
So if your boss asks you in the future to switch the bit patterns to be odd/even (like flashing christmas lights), you can do it in 1 second, whereas the imperative version will struggle to encode that in a for-loop.
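As a sketch of the state-list idea (function names are invented for illustration), generating either pattern is just swapping the pure half; the imperative half that pipes states to the hardware stays the same:

```python
def sequential_states(n):
    # ['000', '100', '110', '111'] for n == 3: each state lights one more LED.
    return ['1' * k + '0' * (n - k) for k in range(n + 1)]

def alternating_states(n):
    # Christmas-light style: even-indexed LEDs on, then odd-indexed LEDs on.
    first = ''.join('01'[i % 2] for i in range(n))
    return [first, first.translate(str.maketrans('01', '10'))]

sequential_states(3)   # ['000', '100', '110', '111']
alternating_states(3)  # ['010', '101']
```

Each string is one frame; the driver loop just walks the list and sets each LED to its bit.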
I promise you, a "proper" C programmer will do the same thing, faster. And I say this as an FP fan!
A bad C programmer will write horrible spaghetti code, but it will probably be enough to do the job. A bad Haskell programmer will get absolutely nowhere.
> A bad Haskell programmer will get absolutely nowhere.
that's a feature, not a bug in my books! Maintaining code written by other bad programmers is the bane of my life (despite getting paid to do it, so i can't complain).
Now I’m wondering what the worst mainstream language is for maintaining somebody else’s legacy code. C can definitely be pretty bad... but I’m thinking maybe Perl?
> So if your boss asks you in the future to switch the bit patterns to be odd/even (like flashing christmas lights), you can do it in 1 second, where as the imperative version will struggle to encode that in a for-loop.
I guess you are talking about embedded, so I'll concentrate on the LED example. In embedded, code size and performance matter, so you try to be as straightforward as you can be. And I think applying "your boss might ask you in the future" to every piece of code is what drives some development far past the point of necessary complexity.
Should I spend a week creating a super-complex infrastructure for turning on/off some LEDs just in case my boss asks me to change the pattern? Should I spend a week thinking up the right code pattern, or trying to "solve a bigger, more general problem"? It's just 3 LEDs blinking... just write the damn for-loop!
At the end of the day, my microcontroller only "digests" sequential instructions. So the simplest thing (for embedded) is to think and feed the microcontroller with sequential instructions. All the rest is just ergonomics for the sake of programmer's comfort or taste.
I'll do the sequence. If my boss asks me to change the sequence, I'll change the sequence. It's not a big deal.
I don't know if in this case one would "struggle" to modify this particular for-loop. And I can think of at least three five-minute solutions in C that don't require FP to structure a program for quickly changing the pattern if required.
I'm at a very similar place to you at this point. It makes sense for FP to be good at "describing relationships of actions", since the base unit of reasoning is a function, or an action.
The beauty of modern programming is that we don't have to stick to a pure example of either paradigm. We can use FP techniques where it makes sense and turn to imperative otherwise.
In your example, we could have a nice, purely functional model of an LED that enforces the invariants that make sense. We could then "dispatch" the updated LED entity to an imperative shell that actually takes the action. All without using the M-word!
I'm probably - unfairly - treating your example more seriously than you intended it, but I think I'm leading to the same conclusion as you at a slightly different place. I want to have a purely functional domain that I wrap in an imperative shell. Trying to model side-effects in a purely functional manner using something like applicative functors just doesn't give the productivity boost that I want.
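That split can be sketched in Python: a pure function builds a plan as plain data, and a thin imperative shell executes it (the `turn_on`/`turn_off` callables are stand-ins for real hardware calls; all names here are invented for the sketch).

```python
import time

def led_plan(n, dwell=1.0):
    # Pure core: a data structure describing the blink sequence, no IO.
    plan = []
    for i in range(n):
        plan.append((i, 'on', dwell))
        plan.append((i, 'off', 0.0))
    return plan

def run(plan, turn_on, turn_off):
    # Imperative shell: the only code that touches hardware and the clock.
    for led, action, delay in plan:
        (turn_on if action == 'on' else turn_off)(led)
        time.sleep(delay)
```

The pure half is trivially testable (it's just a list), and swapping blink patterns means swapping the plan builder, not the shell.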
> I use a purely functional language every day: it's called SQL
This is my favourite way to annoy FP advocates (despite probably being one myself). Everyone is a closet mathematician in FP-land, but no one wants to admit how beautiful relational algebras are.
> I want to have a purely functional domain that I wrap in an imperative shell. Trying to model side-effects in a purely functional manner using something like applicative functors just doesn't give the productivity boost that I want.
Functional Reactive is a very good way to create that mix. Web front-end developers have realized that, and that's the reason most modern frameworks have been slowly veering towards this model, with Promises and Observers everywhere.
When you represent state as an asynchronous stream that you process with pure functional methods, you get a straightforward model with the best of both paradigms.
I like FRP, but prefer to imitate it in a synchronous manner now - I mainly work on the JVM and I've personally found debugging to be too painful when working asynchronously. If I need async then FRP is definitely the first tool in the toolchest that I'd reach for.
Elixir's pipe operator is a brilliant tool that I wish every language had. I mainly use kotlin day-to-day and definitely abuse the `let` keyword to try to get closer.
> I want to have a purely functional domain that I wrap in an imperative shell.
Like you, I too think the ML side of functional programming got it right. Sadly, their most popular language committed the unforgivable sin of not being written by Americans and is therefore condemned to never be as popular as Haskell. I console myself by using F# when I can.
It's not about the nationality of the author. It's about where they worked from and with whom. Except for Ruby, which failed, all the languages you are talking about were developed in the USA.
Van Rossum moved to the USA in the 90s, got funds from DARPA and went to work at Google quite quickly. Stroustrup developed C++ while at Bell Labs in New Jersey. Lerdorf moved to Canada as a teenager before going to work in the USA. Hejlsberg made C# and TypeScript at Microsoft in Seattle. Yukihiro Matsumoto could be an exception, but as you rightfully pointed out, Ruby always remained somewhat niche even after its move to Heroku in San Francisco. James Gosling is Canadian but did his PhD in the USA before developing Java at Sun. Wirth did his PhD at Berkeley before moving to Stanford, where he did most of the work on ALGOL W and on what would become Pascal, and did multiple sabbaticals at Xerox PARC.
> Yukihiro Matsumoto could be an exception but as you rightfully pointed Ruby always remained somewhat niche even after its move to Heroku in San Francisco.
I'm not sure what you mean in terms of "its move to Heroku in San Francisco". Also, Ruby didn't "fail" and it's not niche (GitHub is written in RoR, as well as discourse). However, I would argue that Ruby remained relatively niche outside Japan until it was discovered by DHH and used for the Ruby on Rails framework (to this date, it's somewhat hard to find work in Ruby outside of RoR). DHH lived in Denmark at the time but moved to the US shortly thereafter.
When I checked, it seemed that Yukihiro Matsumoto moved to San Francisco to work for Heroku but that's after developing Ruby while in Japan.
> Ruby didn't "fail" and it's not niche (GitHub is written in RoR, as well as discourse).
Ruby definitely is a niche language. I have never seen it used outside of the web, and it's pretty much always mentioned alongside RoR. That doesn't preclude success stories developed with Ruby from existing.
It failed in the sense that it has little momentum and didn't gain much traction if you compare it to something like Python. In a way, it's somewhat comparable to Ocaml which was the "failure" I was mentioning initially despite being a nice language itself and seeing interesting development right now.
> When I checked, it seemed that Yukihiro Matsumoto moved to San Francisco to work for Heroku but that's after developing Ruby while in Japan.
I didn't actually know that, so fair enough. Still, I think DHH probably had a larger impact in popularising Ruby in the US (and, by extension, other parts of the world).
> Ruby definitely is a niche language. I have never seen used outside of the web
That's only if you consider the web to be "niche" and if you do that, then JavaScript is "niche" too.
It's true that Ruby outside of Ruby on Rails is somewhat rare, but several other successful technologies are written in Ruby, for example:
- Homebrew (macOS package manager)
- Chef (server provisioning software)
- Vagrant (VM provisioning software)
- Cocoapods (iOS package manager)
> It failed in the sense that it has little momentum and didn't gain much traction if you compare it to something like Python. In a way, it's somewhat comparable to Ocaml [...]
I think you're way off base.
Yes, Python is extremely popular and Ruby can't compare overall - although I have a feeling that Ruby still overtakes Python when it comes to web dev, but obviously Python is huge in other areas and is also not exactly niche in web either.
But Ruby is #13 on TIOBE, while OCaml doesn't even feature in the top 50. Github and Discourse are only examples, we could also mention Airbnb, Shopify, Kickstarter, Travis CI and many others. I've personally worked at several Ruby companies, in fact I maintain a small Ruby codebase even now at my current company (although it's not our main language), etc.
Ruby had huge momentum in the 2000s and even early 2010s. It didn't catch on in the enterprises much, true, but it was the cool thing back when everyone was annoyed at the complexity of Java EE or the mess that was PHP back then. Ruby was also the language Twitter was originally written in before they migrated to Scala. It lost a significant amount of momentum since then and basically all of the hype (people migrated to Node, then later to Elixir, Clojure and co. and some like me jumped back to statically typed languages once they became more ergonomic), but it's still maintained by quite a sizeable number of companies.
More than that, RoR had an outsized influence on the state of current backend frameworks to the point where I claim that even one of the most heavily used frameworks today, Spring Boot, takes a lot of inspiration from it (while, of course, being also very different in many areas). I would also argue that Ruby inspired Groovy, which in turn inspired Kotlin, and that internal DSLs such as RSpec also were emulated by a number other languages later.
I'd also add Roberto Ierusalimschy (Lua) and José Valim (Elixir) to this list, both from Brazil. But as a fellow commenter points out, place of birth is less important, when compared to how well the author is integrated into the anglophone old boys network of computer science.
People need to be familiar with both approaches.
And there's silliness on both sides. I've seen influential imperative OOP programmers on Stack Overflow model a bank account using a single mutable variable, even when they surely know accountants use immutable ledgers.
Most imperative languages since Fortran contain declarative elements, otherwise we'd be adding numbers with side effects. Similarly, most FP languages offer imperative programming. But the real power of FP comes from its restrictions, and yes, query languages are one such (excellent) application. Config languages and contract languages are others.
Your LED example is an interesting one. In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand waved as hardware.
A pixel array can be trivially modelled as a pure datastructure and then you can use the whole corpus of transformations which are the bread and butter of FP.
A screen is as IO as it comes for the most average consumers of a screen, we aren't peeking into its internals.
And for me, that's the point of FP - it's not that IO is to be avoided, it's about finding ways of separating your IO from the core logic. I loosely see the monad (as used in industry) as a formalised and more generic "functional core imperative shell"
Now when it comes to pure FP languages, they keep you honest and guide you along this paradigm. That said, it's perfectly possible to write very impure imperative Haskell - I've seen it with my own eyes in some of the biggest proprietary Haskell codebases
But imperative languages don't generally help you in the same way, if you want to do functional core imperative shell, you need a tonne of discipline and a predefined team consensus to commit to this
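The pixel-array point above can be made concrete in Python: model the framebuffer as a pure value, and keep the actual blitting at the edge (a minimal sketch; `blank` and `put_pixel` are my own names, and 0/1 stand in for pixel values).

```python
def blank(width, height):
    # A screen modelled as a pure 2D array of pixels (0 = off, 1 = on).
    return [[0] * width for _ in range(height)]

def put_pixel(screen, x, y, value=1):
    # Pure update: returns a new screen, leaving the old one untouched.
    return [
        [value if (cx, cy) == (x, y) else px for cx, px in enumerate(row)]
        for cy, row in enumerate(screen)
    ]

s0 = blank(3, 2)
s1 = put_pixel(s0, 1, 0)
# s0 is unchanged; s1 has one lit pixel at (1, 0).
```

The whole corpus of pure transformations (map, zip, fold) now applies to the screen value; only the final "hand this array to the hardware" step is IO.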
> In the basic model of a computer architecture, the screen is abstracted as a pixel array in memory - set those bits and the screen will render the pixels. The rest is hand waved as hardware.
It was. I still remember the days.
It was nice to be able to put pixels on the screen by poking at a 2D array directly. It simplified so much. Unfortunately, it turned out that our CPUs aren't as fast as we'd like them at this task, said array having 10^5, and then 10^6 cells - and architecture evolved in a way that exposes complex processing API for high-level operations, where the good ol' PutPixel() is one of the most expensive ones.
It's definitely a win for complex 3D games / applications, but if all you want is to draw some pixels on the screen, and think in pixels, it's not so easy these days.
Screen real estate size in memory increased by square law while clock speed and bus speeds increased only linearly, it was pretty clear that hardware acceleration was the way forward by the mid-eighties when the first GDPs became available. I even wrote a driver for one attached to the BBC Micro to allow all of the VDU calls to be transparently routed to the GDP for a fantastic speed increase.
I don't know. What was the GP's point? That FP people like to think too much and sometimes you just want to get stuff done?
Or that FP purists don't know how to actually build useful things? Trololol it took Haskell until the mid 90s to figure out how to do Hello World with IO
To be honest FP is a moving target but I see it as one of the mainstream frontiers of PLT crossing over into industry.
I can accept that to some, exploring FP is not a good fit for their business requirements today, but if companies didn't keep pushing the boat out with language adoption, we'd still be stuck writing Fortran, COBOL or even assembly.
Once upon a time lexical scoping was scoffed at as being quaint and infeasible.
Ruby and Python were also once quaint languages.
Java added lambdas in Java 8.
Rust uses HM type inference.
So what was their point? That FP people spend too much time thinking and don't know how to ship? In which case - I'm grateful that there are people out there treading alternative paths in the space of ways to write code in search for improvement.
In any case their example was pretty spurious, anyone who's written real code in production knows IO boundaries quickly descend into a mess of exception handling because things fail and that's when patterns like railway oriented programming assist developers in containing that complexity
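A minimal railway-oriented sketch in Python, using `(ok, value)` tuples as a stand-in for a proper Result type (all names invented for illustration): each step either stays on the success track or short-circuits to the failure track.

```python
def bind(result, fn):
    # Apply fn only while we're on the success track.
    ok, value = result
    return fn(value) if ok else result

def parse_int(s):
    try:
        return (True, int(s))
    except ValueError:
        return (False, f"not a number: {s!r}")

def check_positive(n):
    return (True, n) if n > 0 else (False, f"not positive: {n}")

def pipeline(raw):
    return bind(parse_int(raw), check_positive)

pipeline("42")   # (True, 42)
pipeline("abc")  # (False, "not a number: 'abc'")
pipeline("-3")   # (False, 'not positive: -3')
```

The failure handling lives in `bind` once, instead of being re-stated as try/except at every IO boundary.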
Would love to know what has been proved? Very up for an open and honest discussion.
I'm back to writing imperative after years of functional. I think it is a very pragmatic choice today to go with an imperative language but I find class-oriented programming to be backwards and I think functional code will yield something more robust and maintainable given how IO and failure are treated explicitly. I'm not quite sure where the balance tips between move fast but ship something unmaintainable vs moving slower but having something more robust and maintainable.
Programming in a pure language is quite radical, it's a full paradigm shift so it feels cumbersome especially if you've invested 10+ years in doing something different. I'd liken it to trying to play table tennis with your off hand in terms of discomfort. There are plenty of impure functional languages around - OCaml, Scala, Clojure, Elixir.... And Javascript (!?!?)
FP is relatively new as a discipline and still comparatively untrodden. What if equal amounts of investment occurred in FP - maybe an equivalent of that ease of led.turn_on would surface.
And tbh it probably just looks like a couple of bits - one for each LED and a centralised event loop. Which so happens to have been a pattern which works quite nicely in FP but emerged in industry to build some of the most foundational things we rely on...
I think functional programming is less popular simply because people just aren't good at it.
Functional programming is a good way to describe how a system works: by describing the input, the processing, and the output, you describe the whole diagram.
However, in reality, people are just bad at thinking systematically. It's likely that the whole education system never taught you how to do it.
Everyone was taught to do things step by step and smash the results together to see if they work, instead of preparing everything before doing any actual work. And if something actually needs preparing, there is usually a pre-made checklist for doing it easily.
That type of thinking process isn't common in our daily lives, and of course no one is used to it.
But I think people should at least try it once. Even if you keep programming imperatively afterwards, it could benefit you greatly and make you a better programmer.
I agree with you but see another relevant reason: in FP, you HAVE to consider side effects 1) from the beginning and 2) completely, which as anyone can guess is quite a task.
In imperative you can just ignore it and produce objectively worse code, as you are not even aware of all side effects possible. And sure, for the LED project it wouldn't even matter, but the decision FP vs imperative is then more of a design / quality criterion in general - the notion of one being better than the other is just wrong.
Also, a monad seems much more complicated if you don't really understand it, which makes judging it a bit unfair.
What is a side effect? Getting the time? Pushing a result to an output channel? A debug printf? Setting a flag to cache a computation as an optimization? Is it not: evaluating a thunk? Implicitly allocating some memory to store the result of a computation?
Haskellers are trained to have a very inflexible view of what a side effect is. It is dictated by the runtime / the type system. In my view, there are lots of things that Haskellers call "side effects" that I would just shrug my shoulders at, and also lots of things that they do not call side effects but that I care about. It really depends on the situation.
This fixed dichotomy imposed by the language does more harm than good in my experience. NB: I'm aware that running a computation that for example gets the system time will get a different time it runs. That does not mean that I _have_ to consider it a side effect. I usually do not have a good reason to run the procedure multiple times and expect the runs to be totally identical by all means. In an imperative language, I have very precise control when this procedure runs.
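To make that concrete, here's a tiny JavaScript sketch (function names invented): whether "getting the time" counts as a side effect can be a design choice - pass the clock reading in as an argument, and the function is pure while the caller keeps precise control of when the clock is actually read.

```javascript
// Impure version: reads the ambient clock internally, so two calls
// can disagree and the caller has no control over when time is sampled.
function isExpiredImpure(deadline) {
  return Date.now() > deadline;
}

// Pure version: the clock reading is just another input. Same inputs,
// same output - and the caller decides exactly when the clock is read.
function isExpired(now, deadline) {
  return now > deadline;
}

console.log(isExpired(1000, 500));  // true
console.log(isExpired(1000, 2000)); // false
```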
Apart from language-imposed limitations (Haskell is nowhere near the theoretical completeness of category theory, e.g. the bottom type), the "pure" nature of FP forces the use of abstract structures able to handle effects (e.g. monads). So before you can write code containing side effects, you have to think every possible side effect through, which by definition is a stronger criterion for catching unwanted effects than the imperative approach, where you can produce whatever you want. And sure, it is in no way a guarantee of good code; it is just a stronger condition. It effectively boils down to "assembly is just as good as C", and we see where that took us.
Anyone telling me they think it through just as rigorously in an imperative language is practically lying to themselves, unless they're actually verifying their code.
I don't know, I'm positive I'm not part of the sacred circle, but just a data point. The Haskell applications that I've managed to produce were all uniformly slow-compiling and unmaintainable. And I promise it wasn't for lack of thinking about "side effects".
In my view, the problem is that functional languages give you a toolset to compose functions (code) by connecting them in structures. In Haskell, that is made harder by the restrictive type system (a very limited language for doing type computations) that you must master, including a myriad of extensions, which invariably led me down dead-end paths that I didn't know how to back out of without starting all over.
But Haskell's restrictive type system aside, every programmer I consider worth their salt has understood that it's not about the code. Good programmers worry about the data, not the code. Composing code is not a problem for me; I just write one piece of code after another, there isn't much else that is needed. I just think about aligning the data so that the final thing the machine has to do is as straightforward as possible. Then the code becomes easy to write as a result.
The possibilities for designing data structures in Haskell are obviously limited by its immutability. Which is, quite frankly, hilarious. "State" is almost by definition central to any computation - and Haskell tries to eliminate it (which of course is only an illusion; in practice we're bending over backwards to achieve mutability). For Haskell in particular, which does not even have decent record syntax, basic straightforward programming is often just not possible in my perception. I refuse to resort to a hard-to-use library like lenses to do basic operations that should be _easy_ to code.
Even though Haskell is popular, and many programmers (including me) go through a Haskell phase, I haven't seen many large mature Haskell codebases (I know basically about Pandoc; and ghc if a Haskell compiler counts). Why is that?
I think trying to eliminate state, replacing unnecessary state with getters, is generally a good thing. One of the biggest bug categories programmers encounter, `forgot to sync XXX`, can be totally eliminated if you don't copy that state in the first place.
But eliminating all of it... just looks silly to me. You need state anyway, so why not just write it in a sane way?
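A quick JavaScript sketch of that idea (names invented): store the items once and derive the dependent value with a getter, and `forgot to sync XXX` can't happen for it.

```javascript
// Duplicated state: `total` must be kept in sync with `items` by hand.
const cartWithBug = { items: [], total: 0 };
cartWithBug.items.push({ price: 5 });
// ...oops: nobody updated cartWithBug.total - the classic sync bug.

// Derived state: `total` is computed from `items` on demand,
// so it can never drift out of sync.
const cart = {
  items: [],
  get total() {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  },
};
cart.items.push({ price: 5 });
cart.items.push({ price: 3 });
console.log(cart.total); // 8
```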
I already have plans for improving it (especially the layout system), but overall it works pretty well and is reasonably featureful with little code. It's not perfect but "state" is certainly not a problem at all.
I can't tell you something like this can't be coded in maintainable Haskell, but I can tell you that _I_ wouldn't have managed, and googling around it doesn't seem like there are a lot of people who can do it.
It's been ages since I've used a "real" functional language but wouldn't it be nice to parameterise externalities like time, and have events occur "spontaneously" that force application state to update? Kind of like interrupts. Or now that I think about it... it sounds a bit like React where DOM events etc force application state to update (in a pure fashion)
Has this been done before?
So your LED function in pseudocode looks like
ToggleLeds(leds, t):
for each LED
LED.power = (LED.start + 1s) > t ? ON : OFF
And this is invoked from main() as follows
main():
ToggleLeds(this.LEDs, Events.Time)
Where Events.Time is some kind of event stream which allows the runtime to reevaluate main() and any other dependent functions each time it's updated
edit: And to sidestep the obvious performance issue with the function being reevaluated every few microseconds :D you would implement something like this
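The event-stream idea above can be sketched in plain JavaScript - everything here is invented for illustration, and a real runtime would also need the change-detection / memoization just mentioned to avoid needless re-evaluation:

```javascript
// Pure function of (leds, t): an LED is ON for one second after its start.
const toggleLeds = (leds, t) =>
  leds.map(led => ({ ...led, power: led.start + 1000 > t ? 'ON' : 'OFF' }));

// A stand-in for Events.Time: subscribers are re-run on every tick.
function makeTimeStream() {
  const subs = [];
  return {
    subscribe: fn => subs.push(fn),
    emit: t => subs.forEach(fn => fn(t)),
  };
}

const leds = [{ start: 0 }, { start: 500 }];
const time = makeTimeStream();

let view; // the latest "rendered" output
time.subscribe(t => { view = toggleLeds(leds, t); });

time.emit(1200); // first LED's second has elapsed, the second LED's hasn't
console.log(view.map(l => l.power)); // [ 'OFF', 'ON' ]
```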
Indeed, C came for $0 with UNIX, which AT&T wasn't allowed to sell and provided on source tapes for a symbolic price, in a time when systems cost several hundred $$$$.
C++ came in the same package as C compilers, for some of which it was just a compiler switch away.
Both were picked up by OS vendors building on top of UNIX clones.
The fact that you linked so many different companies is evidence that this wasn't just some push by a single company. Those things happened in parallel because the language was popular, and that is evidence of a vibrant community more than anything else. Being popular means many will want to make things with it, yes; saying it got popular because many people did things with it doesn't make sense.
Yes, because as I mentioned in the other comment you ignored: thanks to UNIX and C being born at AT&T, and to the $0 cost of UNIX tooling, with source code, up to the mid-80s.
Had C++ been born somewhere else, as Objective-C was, its popularity wouldn't exist.
> Yes, because as I mentioned on the other comment you ignored
I am not the other person you responded to.
Anyway, don't you think the fact that so many others decided to copy the language and implement their own versions of it is a testament to its popularity and not just that it got pushed by a single company?
It was, and that still proves that a single company marketing it with $500M isn't what happened. Are you also going to say the same about Python, or can we end this discussion?
Exactly! Thanks for proving the point. It was a collective effort of people/companies pushing a good programming language rather than a single company (Oracle) doing it. Not to mention that they license it in certain cases.
My point is that OP’s comparison is useless because Java most definitely has marketing costs mostly associated with that 8+ million Java developers, the same as other popular languages.
How is this relevant when the topic was that Oracle literally spent $500 million marketing Java? The community of anything popular is marketing itself, yes, but that is a very different thing from having an actual marketing budget to push it to popularity.
The sarcasm didn't make sense. The conference was started way, way later, by community organizers from all companies. This completely disproves that it was a major money push from one company.
Fully functional programming (e.g. Haskell) isn't popular because it isn't ideal and computers work imperatively.
Even in a "functional" task (e.g. compiling) sometimes you just want an imperative algorithm. And while Haskell allows this via state monads, they're clunky.
Meanwhile there's practically no tradeoff to writing functional code in an imperative language. In fact there's lots of functional code in most C++/Java/Python programs. Most "imperative" languages are actually hybrid functional and imperative, supporting "functional" constructs like lambdas and pattern matching.
99% of the time a functional program will get executed sequentially and strictly anyway. For the other 1%, in an imperative language you can explicitly encode the functionality (e.g. generic stream operators which work concurrently and on lazy streams).
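For what it's worth, generators make those lazy stream operators easy to sketch in an imperative language like JavaScript - a minimal, illustrative version, not a production library:

```javascript
// Lazy, functional-style stream operators built on generators:
// nothing is computed until a consumer pulls values out.
function* naturals() {
  for (let n = 1; ; n++) yield n;
}

function* map(fn, iter) {
  for (const x of iter) yield fn(x);
}

function* take(n, iter) {
  for (const x of iter) {
    if (n-- <= 0) return;
    yield x;
  }
}

// An infinite stream of squares, evaluated only as far as needed.
const squares = [...take(4, map(x => x * x, naturals()))];
console.log(squares); // [ 1, 4, 9, 16 ]
```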
Functional programming absolutely has its use cases, and I love a bit of functional style thrown into imperative language one hundred percent.
That being said, I often end up arguing with "pure functional programmers" at work - they can be incredibly hard to work with, married to the idea that _everything needs to be functional_ because it's their preference, even though your average college student is going to be able to pick up and iterate on code that just uses a damn for loop or whatnot.
I _like_ functional, I find it incredibly useful, but like everything else, it's just a tool to solve a problem and there are tradeoffs.
But some of the most annoying and must-do-things-academically-because-look-how-cool-monads-are, this-is-the-only-proper-way people are very vocal about functional, which is a pretty big turnoff IMO.
Haskell is a good example. There are totally cases where Haskell will make your life easier. But if you're trying to reinvent the wheel and redefine our architecture by injecting Haskell everywhere you can because _you like it_, that drives me absolutely bonkers.
I have personally gone from FP acolyte and back; there's some great stuff there, and learning an FP language will improve your non-FP code. That said, bending over backwards to fit the FP mindset is suboptimal, the same way writing fully OO code is. E.g. IMO some stuff from FP should _generally_ be avoided in favour of making code easier to read and debug - e.g. recursion, maps, monads, generators, etc.
"Meanwhile there's practically no tradeoff to writing functional code in an imperative language."
I would have to disagree with you there. It partly depends on what you mean, but I quickly give up on functional solutions in, say, Javascript (despite Javascript technically having all the bits I'd need!) just because of the lack of immutable data structures and cheap copying. Not to mention the constant pain point of asking "does this modify in place, or does it return a new one?" when dealing even with the standard library, let alone -user- code. I either need a library like Immutable.js, or I need to jump through hoops that are both not ergonomic AND have performance penalties, or I embrace mutation (and at that point frequently rule out a fully recursive approach, because the bookkeeping involved far outweighs any benefit).
> Not to mention the constant pain point of asking "does this modify in place, or does it return a new" when dealing even with the standard library, let alone -user- code
I don’t see how this is a pain point; JS is pass-by-value for primitives and pass-by-reference for objects. Ergonomics, sure, but isn’t the lack of pattern matching a much bigger obstacle than the lack of immutable semantic sugar?
Immutability is easy to gain, though: just make your variable/object `writable: false`.
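For what it's worth, `writable: false` only pins the binding; the data behind it stays mutable, so you'd also need `Object.freeze` (applied recursively) for deep immutability. A quick sketch:

```javascript
// A non-writable property blocks reassignment of the binding...
const state = {};
Object.defineProperty(state, 'config', {
  value: { retries: 3, hosts: ['a'] },
  writable: false,
});

try {
  state.config = { replaced: true }; // throws in strict mode, no-op otherwise
} catch (e) {}
console.log('replaced' in state.config); // false: reassignment was blocked

// ...but the object behind it is still freely mutable.
state.config.retries = 99;
state.config.hosts.push('b');
console.log(state.config.retries, state.config.hosts); // 99 [ 'a', 'b' ]
```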
Because any recursive algorithm that needs to backtrack requires special care.
That is, let's say I have a sudoku solver. I write a recursive solution wherein I guess the next value on the sudoku board based on the constraints. When an execution path completes and fails to be solved, my board is now populated with that 'bad' state, and I've lost the state I was in prior to the recursive call, at each level of recursion.
I either need to make copies at every step (very unwieldy and inefficient), or explicitly write the 'undo' mechanism for each step. It's doable, but it's extra bookkeeping and logic to keep track of. Whereas immutable data structures give me an ergonomic and reasonably efficient way to maintain the prior state, and to create the next state.
And, the mix of mutation and non-mutation constantly messes with me. Array.concat returns a new array, but Array.push() modifies the array in place. Array.slice returns a new array; Array.splice modifies the array in place. This -constantly- messes with me.
The lack of a feature is just inconvenience that I work around. Having a mix of modalities causes me to trip over myself constantly.
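A tiny demo of that mixed modality:

```javascript
const a = [1, 2, 3];

const b = a.concat(4);   // returns a NEW array; `a` is untouched
a.push(5);               // mutates `a` in place

const c = a.slice(1, 3); // returns a NEW array; `a` is untouched
a.splice(1, 2);          // mutates `a` in place, removing two items

console.log(b); // [ 1, 2, 3, 4 ]
console.log(c); // [ 2, 3 ]
console.log(a); // [ 1, 5 ]
```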
That's a slightly more wieldy way of creating a copy of the array. So still has the inefficiencies. It also is only a syntactical help with adding things onto an array; it doesn't help with removing items.
Why would you worry about low-level details in a high-level language? Optimization is an implementation detail. Javascript, in my opinion, lends itself to functional-style programming.
Because he cares about his users, who might be using his software on a smartphone? If you disregard performance completely, your users will either use your app because they have to and hate it, or switch to something else if they can.
Define functional js/ts. If it's just "oh, hey, I am using first class functions everywhere" - yes, Javascript is great at that. If it's "I'm writing a bunch of recursive solutions for naturally recursive problems", Javascript is a pretty poor fit for it. You can do it, but it requires extra bookkeeping and care, as I mention on another comment.
> isn't popular because it isn't ideal and computers work imperatively.
Eh, I think it has more to do with tooling and framework support.
Imagine if the world were flipped and C++/Java/Python had zero IDE support, slow compilers, and weird/unreliable build and package management tooling, and only third-party bindings to any platform or framework. And Haskell had incredible IDE support, a fast compiler, great build and package management tooling, and official support for stuff everywhere you wouldn't have to think twice about using in a commercial context.
There are tradeoffs to writing functional code in languages designed primarily for imperative programming. Another comment mentioned efficiency.
Another is that in a pure language, you can look at a function and know for a fact that nothing you're seeing can ever be mutated. This eliminates a massive source of cognitive load when trying to read and understand code, as well as eliminating all sorts of bugs related to the program being in the wrong state. If you're trying to write functional code in a language not designed for it, you're missing out on those benefits.
Now, it's absolutely the case that the tradeoff may be worth making depending on the person and circumstance. But it does exist.
> Another is that in a pure language, you can look at a function and know for a fact that nothing you're seeing can ever be mutated. This eliminates a massive source of cognitive load when trying to read and understand code, as well as eliminating all sorts of bugs related to the program being in the wrong state. If you're trying to write functional code in a language not designed for it, you're missing out on those benefits.
When you write functional code in imperative languages, these constraints aren't enforced by language, but they should be enforced by convention and code review.
I use F# a lot and like it, but imperative code was awkward to write.
Now using Rust, I think I'm even more functional than ever. One key thing is that I can put a simple loop here and there (inside functions) and still the interfaces are functional.
I’m surprised you are finding Rust good for functional programming to be honest. It lacks a do-notation system, currying by default and guaranteed tail-call optimisation.
> It lacks a do-notation system, currying by default and guaranteed tail-call optimisation.
And instead gives plain structs with separate impls, traits like Into/From that cut tons of custom-made transformation code, more uniform iterators (in F# some things don't implement iterators that other things do, because everyone needs to reimplement all of them), and other idioms that are very practical.
It really isn't. Just see Racket, Clojure, Elixir, and F# for practical functional programming languages.
This article is pretty terrible. There isn't even a functional implementation, just a half-hearted attempt to understand how to do one.
Functional languages excel at state machines. The example is pretty terrible too with regards to imperative programming being a good fit, because literally no one enjoys the real life statefulness and mutation when cooking. If you mess up, there's no going back. Why would you want to embrace or repeat that feature in your implementation? Functional state machines just pass explicit state around instead of mutating some random collection of variables.
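A minimal sketch of that style in JavaScript (a made-up traffic-light machine, purely for illustration): the transition function returns the next state instead of mutating anything, so every earlier state stays available untouched.

```javascript
// Explicit transition table: state -> event -> next state.
const transitions = {
  red:    { timer: 'green' },
  green:  { timer: 'yellow' },
  yellow: { timer: 'red' },
};

// Pure transition function: unknown events leave the state unchanged.
const step = (state, event) => transitions[state][event] ?? state;

// Fold a list of events over the machine; no variables are mutated,
// and intermediate states could be kept for replay or backtracking.
const events = ['timer', 'timer', 'timer'];
const finalState = events.reduce(step, 'red');
console.log(finalState); // 'red' (red -> green -> yellow -> red)
```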
Concur about the C++ examples, but that's not just on the language. The author seems to be averse to whitespace to separate one chunk of code from the next.
Functional programmer here. It's a different way of thinking is all; I don't see it as solving puzzles exactly. It's unfortunate if people see it as being about complexity or being clever. Done well, it should be the opposite. Once you understand the key abstractions, it can make code a lot easier to write and reason about in some cases. That's the joy of it for me. It furthers my day-to-day software engineering concerns of readability, maintainability, testability etc. Nothing more.
It's just another tool though and there are many places where it's not appropriate. It's also a personal taste thing, I get that. It's worth persisting with it on a deeper level though just for the sheer joy of those 'a-ha' moments that come. Even if you don't use it in your day job, just bending your brain with different programming paradigms is well worth it.
I feel like your first and second paragraphs are directly at odds. It's not about "solving puzzles" and you decry cleverness and complexity, but you also think we should persist for joyous "a-ha" moments and brain-bending?
Ha, yes OK that sounds a bit contradictory so let me clarify.
Firstly learning ANY new paradigm is inherently brain bending. By definition right? If you didn't know logic programming or oop you'd have to bend that brain.
Secondly, using FP day-to-day doesn't require any deep insight. You can use various tools, monads and whatnot, and not think too hard about it.
However, when you really grok the paradigm you will get those a ha moments. The same is true of other paradigms too, but there is a particular satisfaction that comes with deep understanding in this domain that I can't explain. It's kind of beautiful to a certain mind I guess...
> However, when you really grok the paradigm you will get those a ha moments. The same is true of other paradigms too, but there is a particular satisfaction that comes with deep understanding in this domain that I can't explain. It's kind of beautiful to a certain mind I guess...
Right, but you understand that 90% of developers can't bend their brain that way, which does make maintaining functional code very difficult.
It makes expressing some solutions so elegant and easy it’s totally worth using a functional approach. Other things may be more elegant to express using an imperative approach.
Just try to implement recursion in MS 8-bit BASIC.
I don't know. I don't write functional because it's hard and I'm super smart, I write functional because it's easy and I'm lazy.
I'm also not sure the example in TFA is much of an argument. From a glance looks like the programmer got entangled in their own cleverness, but the slice we are shown is too small to show if there's some external reason to do it like that.
Story time: I once did a compile-time parser generator with C++ templates. "Zero-cost" abstraction and all that jazz. Turns out the binary was so large that a type-erased vtable system ran faster, for all its nonzero cost.
Templates are orthogonal to functional or imperative. It's a code generator. Understand what you generate!
I have settled on the immutable core + imperative shell pattern, because it’s the easiest way to write robust software in my experience. It’s much easier to do this in a functional language like F# or OCaml than it is in a mainstream OOP language, even one with all the bells and whistles like C#.
It's getting much easier in C++ due to constexpr becoming more and more powerful. A constexpr function is a mathematical function, in that the compiler guarantees that its output is a deterministic function of its input.
True. Especially when guiding a team C# makes life hard because so much of immutability is by convention only. I suppose it will get better once we can move to C#9 with Records, but even so, some discipline is required.
I've had this experience too. It doesn't take a lot of template use for both binary size and build time to blow up. When iterating with `make` starts taking over a couple of minutes even on a smallish-medium-sized project, it becomes a bit much.
I didn't do anything fancy like compile-time parser generation. it was overuse of the standard library features, like `std::variant`.
Fact: functional programming takes more time to learn. It's not full of people who are "smarter" or "snobs"; it's just full of people who have taken the time and invested in educating themselves to the point where they can do it well. It's not "better"; it's just a useful tool for certain types of problems. I'd advise any young engineer to learn how to do both "imperative" and "functional" programming well and avoid getting dogmatic about it. The vitriol here is in the dogma, to be sure.
When I was in college, a few decades ago, we had a class on object oriented programming. Out of about 60 people in the class, you could count on the fingers of one hand how many people passed. It was so bad the professor had to offer a redo. (Why yes, of course I passed, duh).
It was a long time ago and in C++, which is a touch trickier than Java or C#, but "traditional" programming isn't particularly simpler. It's just what's taught in school and what all the examples you google show. It's a self fulfilling prophecy.
Procedural vs functional? Yeah, the former is more intuitive to a beginner. But as soon as you go a little further than that, it's basically all about what you've been exposed to.
To add more weight here: when I was teaching programming in college, I found it quite surprising which concepts students found difficult to understand.
They struggled to learn pointers, which makes sense to me. They also struggled to learn OO. And they struggled to learn recursion and state machines. All this stuff seems so simple once you've internalized it.
I think there's something real to the idea that humans have a hardwired instinct for stories. And that makes learning imperative programming easier. But you move beyond your intuitive instincts pretty fast. And when you do, it really matters how well your language or environment helps you to think.
The problem with C++ templates is simply that they aren't very good language for thinking in. They're a mashup of functional concepts in an imperative language with bad syntax. But functional programming doesn't have to be done badly. For a comparison, look at spreadsheets. They're arguably the most popular programming languages in the world. And they're purely functional.
I struggled with pointers as a kid until I learned assembly. Then C pointers became trivial.
I think the problem is this: humans love to learn something when it simplifies their lives. That is the spirit of innovation.
A prerequisite for learning is that the student must have already been exposed to the messy complex primeval swamp before they can appreciate what learning has to offer. Where education gets stuck is that they take students with no prior exposure to the problem domain and then teach them the solution as an answer to a set of questions they had never asked.
The mind rebels against pre-canned solutions.
Students who have prior exposure to the problem domain pick it up rapidly. Schools aren’t built to take $$$$$ from students just to show them how not to do things…yet that’s what’s needed for a student to learn.
I disagree. C++ already has this mechanism - the preprocessor it inherited from C. That one is a textual preprocessor step, and it's very important for the programmer to have this as a mental model - then they'll be able to use it in a smart way[0] and avoid dumb mistakes.
I'm not sure if this is the best model, but a good one for C++ templates is that they are tree-level code generators. Kind of like really dumbed down Lisp macros. They don't do textual substitution, but AST-level one. But they generate code nonetheless.
(I suppose exposure to Lisp and Lisp macros may be helpful here too - just to have a comparison with a system that's a proper compile-time AST-level code generator, and that with minimal syntax, you can actually write your code as AST directly and be as readable as syntaxful languages.)
--
[0] - I know the mainstream opinion in application-level C++ is to avoid using it at beyond includes, include guards, and occasional conditional compilation or third-party package configuration. There are good reasons to avoid it - beyond complexity, quite a lot of support tooling like IDEs get confused by more complex macros. But still, there are many instances where even the newest C++20 features won't help you reduce obvious mechanical boilerplate. I've learned to use the preprocessor in those cases - mostly for readability reasons.
In my university, I was a part of an experimental class that did the introduction to programming class in Ocaml (we had to prove that we can write C beforehand).
There weren't issues with it. If functional programming was introduced earlier, people would have less trouble learning it.
Meanwhile Stanford does an analogous class in Python. And all of those alumni go on to define the industry trends.
Indeed. Some five years ago, I was explaining the basics of OOP (in Java) to an aerospace engineer, who needed to get into IT. It took him several weeks, and a whole bunch of exercises, to master. He already had experience with FORTRAN.
The classes at CMU in SML provide some alternative "facts". FP != Haskell. If you start with FP, OOP seems weird and vice versa. My guess it is just our wetwares' expectations being violated.
I haven’t tracked this down, but I believe the authors of How to Design Programs did some research and basically found that whether FP or OOP is harder depends on which you learned first.
Which kind of makes sense. But then again, I learned OOP in college, and I always found it really hard to remember all those design patterns they made us learn. Years into programming, I never really saw any big advantage of OOP hierarchies and spreading one function across 15 different files (yay for the person debugging this).
Then I found out what we call OOP is just a misunderstanding. Erlang kind of did "proper" OOP and that makes sense.
Then I got to functional programming through F# (and Scott Wlaschin's amazing page F# fsharpforfunandprofit.com), and it just immediately clicked. Maybe because Scott's material is so good. But also because there's not so much non-sense boilerplate and extra complexity as you see in every "good" OOP example.
Scott also wrote “Domain Modeling Made Functional”[0], which demonstrates functional modeling of a business system like you’d find in the real world. I found it really helpful even though I don’t work much in F#.
> Fact: functional programming takes more time to learn.
Is this a fact? I was taught scheme at age 12 and in four weeks I had a working compiler. And I have heard that a team taught middle schoolers erlang and got them writing a chat network in 2 days.
Define "functional programming". It's a style, and you can write imperative Haskell just like you can write functional C. In fact, a language that is expressive enough to use different programming paradigms at the same time will be superior to a single-paradigm one.
Functional programming is popular because it's intuitive!
Perhaps "immanent" or "diagrammatic" is a better word than functional. I mean that a function _is_ what it does, but not so with a procedure. One cannot bake a value as one does a cake!
See React's dominance. See the usefulness of declarative state machines. See async/await.
Functional programming gets a bad rep. People often fail to see the functional elements in more imperative code and the imperative elements of more functional code. In fact very few language constructs are necessary to open up 99% of functional possibilities in imperative languages (functions as values) or imperative possibilities in functional code (do-notation). Any critique that dismisses one or the other misses the point.
> Functional programming is popular because it's intuitive! I mean that a function _is_ what it does, but not so with a procedure. One cannot bake a value as one does a cake!
I suppose it really depends on what you've learned first. Myself, I started programming (C++) in my early teens, a bit before I was introduced to the concept of a function in math classes. I didn't have much problems with that part of math, but I knew something doesn't feel right about it. Only many years later I realized that I never fully internalized the concept of mathematical function being a relationship and not a procedure. And I had the same issue with "=" symbol. After being exposed to the concept of assigning values to variables in imperative programming, it took me years before my brain fully internalized the concept of equivalence relationship.
Maybe the problem is that we evaluate functional programming as a way to write programs, but it’s real potential is that it gives us more ways to rewrite programs.
I've seen a handful of articles and conference talks where the main thrust is "why isn't functional programming more popular". Apparently it works well and is "intuitive" for some people. I think the first thing we should do is recognize the amount of neurodivergence amongst programmers.
The WISC-IV GAI tests verbal comprehension and perceptual reasoning. It provides an estimate of general intellectual ability, with less emphasis on working memory and processing speed. My GAI is 124, which means I'm fairly intelligent (sic. "superior") when it comes to understanding language and reasoning. It makes me a fairly good problem solver. I'm an OK programmer (judged against the hundreds of programmers I've paired with through my consulting work).
That said, I've tried to learn Haskell three times and given up each time. The functional model of D3js kicks my ass. I cannot grok LISP.
It turns out my WMI (working memory index from Wechsler Adult Intelligence Scale) is BELOW AVERAGE. This makes holding lots of recursion or abstraction in my head at once quite difficult. I also suffer from adult ADHD.
I love long functions in a procedural style. I'm quite proficient working that way. There's not too much abstraction and behavior hiding (why I grew to hate OO). You can see how my neurological makeup impacts what "good" code looks like. I also understand that James Gosling has a kind of synesthesia and that impacts the structure of his code to the point of causing problems for others (see Lex Friedman interview).
I said all that to say this. What makes code understandable/readable/clean is in the eye of the beholder. There's quite a bit of neurodivergence in our field and we need to account for that when deciding on how to critique code. After all shouldn't code be written first and foremost for humans to understand and only incidentally for compilers? (paraphrased from SICP)
Thank you for bringing up the WISC-IV – it's a fascinating way to categorize intelligence.
My son (11yo) is on the autism spectrum and recently received a WISC-IV evaluation, which found him off the charts for spatial reasoning. It immediately had me thinking about object-oriented programming and conceptualizing digital processes spatially.
Perhaps functional patterns are preferred by those with faster "processing speed" of thought or "working memory" – basically thinking primarily of "just before" and "just after".
My anecdotal experience has shown that the coworkers who think "the fastest" prefer functional patterns, whereas those who think slower but more comprehensively prefer object oriented.
Also anecdotally, functional programmers rarely leave the keyboard, whereas OOP programmers spend a lot of time on the mouse – both are equally productive at the end of the day.
Yes. I took those tests (and more as I presume your son did) for my ASD diagnosis. It was very enlightening and I feel like everyone should have a psych profile like that done.
Not sure why you are being downvoted (maybe the "I'm a slightly above average programmer").
I think you are entirely correct, FP is both more suitable to certain type of people and to certain types of problems.
I've used FP in some situations and loved it, and had to grapple with FP code that was utterly incomprehensible because the person that produced it seemed more interested in showing off what he could do with FP rather than actually getting the job done (or producing efficient code, for that matter).
It depends on the audience's background/worldview. From the Christian moral tradition: "modesty is a virtue". From modern rational thought: self-assessments like this are not reliable (see, for example, the Dunning-Kruger effect), and the fact that the author does not realize this erodes trust in the remainder of his argument.
"Apparently it works well and is "intuitive" for some people. I think the first thing we should do is recognize the amount of neurodivergence amongst programmers."
Yep that's it! It suits my brain quite well but I totally get that it doesn't suit everyone's. This sort of point is under-appreciated, not just with FP, but with so many things in computing and life more generally.
Programming is about expressing how to solve a problem. Of course different ways to express that will fit different people. Every time I see people trying to push "that language is the best!", I see people trying to have a language as the only spoken and written language in the world, and I find that incredibly sad. People shape languages, languages shape people. Seeing someone that's very comfortable with a certain programming style that's not the one you're used to usually means that you can learn a lot by talking to them.
When you look at programming as a form of communication, it's easier to understand a lot of things. You can't build your tower of Babel with thousands of people if you all talk different languages in different ways, you need something that can be understood easily by everyone. But that's also not the language you would use when writing something deeply personal.
Interesting, drawing parallels to religion here. In the end it might be our genes and the inherent tribalism driving us to behave that way, while thinking we are rational agents. There is an essay about it IIRC, the Lambda tribe and the Machine tribe.
Talking about things I don't really understand here, but there's a nice parallel to draw to Gödel's incompleteness theorems. Same thing with the blind men and the elephant, or 2D lizards that see a cube pass through their dimension. Maybe we're cursed to only ever see part of the truth, and only be able to understand part of it. Maybe there's no single "truth" at all and we have to learn to deal with this.
We are not cursed, we just can't see it. Immanuel Kant called reality "Das Ding an sich", which I can try to translate as "the thing in itself". He argued that since we perceive reality via our senses, which are easily fooled, we can never be sure that we perceive reality as it really is.
I wonder how much C++ matters here. I rarely reach for functional patterns when the language I'm using does not provide proper constructs for readable, declarative code, like pattern matching, nominal discriminated unions, and pipe operators.
Sure, you can fake all that with other constructs or patterns. But then it really does feel more like a puzzle.
IMO, good FP is good when it is maximally declarative and organized in the linear way in which humans reason best.
Maybe the author should try F# or OCaml instead of Haskell! I love FP and have also tried and hated Haskell three or four times.
OpenAI Codex is pretty good at translating code between languages. It's far from perfect - but it can get simple cases perfectly. It knows languages like Rust, Haskell, Erlang, and more.
I found that I could translate Python to Haskell using Codex and get more real working Haskell code far, far quicker than I could manually.
I think Codex will be a simply incredible tool for learning new programming languages, could definitely make FP easier. It may not be super "smart", but it's smart enough, it knows a ton, and it's got superhuman recall.
Another idea is that you could filter Codex suggestions for those that pass the type-checker, and keep regenerating until it passes.
It's so incredibly fun. It made me want to never stop coding.
Yeah, I thought the author might mean "some of you may be under the impression that the primary appeal of functional programming is its weirdness. However, that is not the case."
The easiest and clearest way to resolve it would be to say "unpopular" rather than "not popular", although arguably they don't mean exactly the same thing.
I find it’s not the most straightforward language for expressing precise thoughts and logic…
“The Dog Walking Ordinance
The following transcript of a Borough Council meeting in England illustrates the difficulties of expressing a simple idea in precise and unambiguous language.
From the Minutes of a Borough Council Meeting:
Councillor Trafford took exception to the proposed notice at the entrance of South Park: "No dogs must be brought to this Park except on a lead." He pointed out that this order would not prevent an owner from releasing his pets, or pet, from a lead when once safely inside the Park.
The Chairman (Colonel Vine): What alternative wording would you propose, Councillor?
Councillor Trafford: "Dogs are not allowed in this Park without leads."
Councillor Hogg: Mr. Chairman, I object. The order should be addressed to the owners, not to the dogs.
Councillor Trafford: That is a nice point. Very well then: "Owners of dogs are not allowed in this Park unless they keep them on leads."
Councillor Hogg: Mr. Chairman, I object. Strictly speaking, this would prevent me as a dog-owner from leaving my dog in the back-garden at home and walking with Mrs. Hogg across the Park.
Councillor Trafford: Mr. Chairman, I suggest that our legalistic friend be asked to redraft the notice himself.
Councillor Hogg: Mr. Chairman, since Councillor Trafford finds it so difficult to improve on my original wording, I accept. "Nobody without his dog on a lead is allowed in this Park."
Councillor Trafford: Mr. Chairman, I object. Strictly speaking, this notice would prevent me, as a citizen, who owns no dog, from walking in the Park without first acquiring one.
Councillor Hogg (with some warmth): Very simply, then: "Dogs must be led in this Park."
Councillor Trafford: Mr. Chairman, I object: this reads as if it were a general injunction to the Borough to lead their dogs into the Park.
Councillor Hogg interposed a remark for which he was called to order; upon his withdrawing it, it was directed to be expunged from the Minutes.
The Chairman: Councillor Trafford, Councillor Hogg has had three tries; you have had only two . . . .
Councillor Trafford: "All dogs must be kept on leads in this Park."
The Chairman: I see Councillor Hogg rising quite rightly to raise another objection. May I anticipate him with another amendment: "All dogs in this Park must be kept on the lead."
This draft was put to the vote and carried unanimously, with two abstentions. ”
It's probably just me, but I find loops much more intuitive than recursion for those non-recursion-oriented problems. That is to say, I prefer to use recursion ONLY for those problems that are unintuitive to solve with loops in a trivial way.
Loops are intuitive and easy to understand, because they are concrete. They can also be cluttered and hard to read due to all the required bookkeeping.
For simple loops, I prefer something like map/filter/fold when appropriate. I try to avoid chaining them, because that often makes the code impossible to understand and makes me feel stupid. If the loop can't be modeled naturally as a higher-order function, an iterator-based loop is easier to read than an index-based loop, which in turn is better than tail recursion.
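To make the comparison concrete, here is a toy sketch in Python (the names and numbers are invented for illustration): the same "double every element over 2 and sum them" written first as an index-based loop, then as a filter/map/fold chain.

```python
from functools import reduce

nums = [3, 10, 7, 1]

# index-based loop: correct, but the reader must track i and total by hand
total = 0
for i in range(len(nums)):
    if nums[i] > 2:
        total += nums[i] * 2

# the same computation expressed as filter -> map -> fold
total_fp = reduce(lambda acc, x: acc + x,
                  map(lambda x: x * 2,
                      filter(lambda x: x > 2, nums)),
                  0)

print(total, total_fp)  # 40 40
```

Both versions compute the same value; which one reads better is exactly the kind of taste question this thread is about.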
With complex loops, there is no way around concrete loops with comments describing the invariants and what each part of the iteration does.
Many problems can be modeled naturally with recursion. Such problems are particularly common with graphs. However, you should never use explicit language-level recursion unless you can guarantee that recursion depth will be small. It's easy to get a stack overflow, so recursion should usually mean a loop with an explicit stack of states.
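A minimal Python sketch of that last point (the function names are invented): the same counting traversal written with language-level recursion, which can hit the recursion limit, and with an explicit stack of pending states, which is bounded only by heap memory.

```python
def count_down_recursive(n):
    # natural recursion: blows the call stack once n nears the recursion limit
    if n == 0:
        return 0
    return 1 + count_down_recursive(n - 1)

def count_down_iterative(n):
    # the same "recursion" as a loop over an explicit stack of pending states
    count, stack = 0, [n]
    while stack:
        m = stack.pop()
        if m == 0:
            continue
        count += 1
        stack.append(m - 1)
    return count

print(count_down_iterative(100_000))  # fine; the recursive version would raise RecursionError here
```

For real graph traversals the stack would hold node references instead of integers, but the shape of the transformation is the same.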
Generally when you learn recursion you also learn about higher order functions. Most of the use cases of loops get replaced with map/filter/fold. What’s left are typically those “unintuitive” problems that need recursion, such as when you’re dealing with trees.
I like functional code, except that it's awkward to "escape" from lambdas:
nums.iter().map(|x| if x > 10 break(?))
This little thing is the major problem with functional chains (the above is only a toy version of the issue). Because lambdas introduce their own scope, it is hard to make them truly part of the current execution flow.
So, sometimes I change a chain of iterators into simpler loops for things like this. If iterators were part of the current scope, it would look far more natural, IMHO...
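The early-exit problem sketched above can be illustrated in Python (function names are invented): a plain loop can just `return`/`break` mid-iteration, while the chained style has to lean on laziness, here via a generator plus `next()`.

```python
def first_over(nums, limit):
    # plain loop: early exit is just a return in the middle of the iteration
    for x in nums:
        if x > limit:
            return x
    return None

def first_over_fp(nums, limit):
    # chained style: a lazy generator plus next() stands in for "break"
    return next((x for x in nums if x > limit), None)

print(first_over([1, 5, 20, 3], 10))     # 20
print(first_over_fp([1, 5, 20, 3], 10))  # 20
```

Languages with eager `map` and no lazy escape hatch make this pattern genuinely awkward, which is the complaint here.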
Loops are nice, but after having written enough Erlang, you get used to using foldl and map when they make sense, and recursing yourself when you need to. I can imagine starting with Erlang and learning about loops later and being weirded out.
I've grown to sometimes enjoy how a recursive loop looks in code. Sometimes you put the end case first, sometimes you put the end case last, depending on which makes most sense while writing (or perhaps while editing while reading later).
With languages with loops and no pattern matching, I'll always write a loop as a loop, if I used one with both, I dunno, there's probably a few loops I'd do as recursion and most would be loops.
OTOH, Erlang is certainly functional, but it's very pragmatic as well, there's not a mention of monads, and one tends not to do a whole lot of weird stuff with closures (although you can). Sure, there's a good number of functions that take functions as arguments (which you can do in C anyway), and there are anonymous functions, but you get better error reporting if you use functions with names, so at least I tend to name anything with more than a few lines. I also tend to have very short functions, but that is somewhat influenced by stack traces only having function names and not line numbers when I started 10 years ago.
Recursion is great for recursive structures, like trees. When you say "a binary tree is either a leaf, or a node which is a value and two binary trees", it's really easy to imagine how you will use a recursive function on it.
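A toy Python version of exactly that definition (representation and names are invented): a tree is either a leaf (`None`) or a `(value, left, right)` tuple, and the function's shape mirrors the data's shape.

```python
def tree_sum(tree):
    # a binary tree is either a leaf (None) or (value, left-subtree, right-subtree)
    if tree is None:
        return 0
    value, left, right = tree
    # one case per constructor: the code follows the data definition
    return value + tree_sum(left) + tree_sum(right)

t = (1, (2, None, None), (3, None, (4, None, None)))
print(tree_sum(t))  # 10
```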
Having written Haskell for a few years, the concepts are kind of interchangeable in my mind. I think of recursion as being basically a loop, and a loop as basically the same thing as recursion. To the point where it seems natural to me to name a recursive helper function `loop`. Obviously they're not exactly the same, but with familiarity they can be translated trivially.
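The trivial translation mentioned here can be sketched in Python (invented names, toy computation): a tail-recursive function and the while loop it mechanically becomes, where the accumulator argument turns into a mutable local.

```python
def sum_to_recursive(n, acc=0):
    # a tail-recursive "loop": all state lives in the arguments
    if n == 0:
        return acc
    return sum_to_recursive(n - 1, acc + n)

def sum_to_loop(n):
    # the mechanical translation: arguments become mutable locals
    acc = 0
    while n > 0:
        acc, n = acc + n, n - 1
    return acc

print(sum_to_recursive(100), sum_to_loop(100))  # 5050 5050
```

(Python doesn't eliminate tail calls, so the loop version is the one you'd ship there; in Haskell or Scheme the recursive form compiles to the same kind of loop.)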
I think it's really just a matter of prolonged exposure to make recursion seem intuitive.
Functional programming forces you to think a bit like a theoretical physicist seeking to write equations for fundamental laws: the universe is modeled as interacting particles that start free, collide and interact and end up free again. No concept of historical dependence or sequential evolution. In physics these attributes emerge only as descriptive crutches in the context of poorly understood complex, macroscopic, systems undergoing various transitions. We have much less clue how to write universal equations for such systems.
So FP might have a theoretical advantage as a building block of low-level software systems that are not throwaway labor (that is: only valuable in specific historical window / context).
In any case the factors driving programming language popularity have changed dramatically ever since the developer universe got truly "connected". There is a winner-take-all network mechanic that basically inflates whatever initial advantage into a (difficult to explain ex-post) catholic dominance.
Before I learned about functional programming I found that I'd break a task down until I found a pure functional core that transformed data from one form to another. I'd try to separate that from all the imperative and async stuff going on, and all the validation. It sometimes meant my Python programmes looked a bit weird, but I found that when I managed to do that I'd reached a point where I really understood the problem space and what I was trying to do.
Unsurprisingly when I learned functional programming I found that it was a great match for the way I think, or at least the big picture way I approach programming. I doubt it's how all programmers think, but it's definitely the way I do.
That the programming paradigm that's used primarily by a small number of people who are into mathematical purity and is generally regarded as hard to wrap your head around, isn't how people think?
That the programming paradigm that tries so hard not to expose concepts like RAM or a program counter isn't how computers think?
Or that the combination of these makes it unsuitable for communication between the two?
> That the programming paradigm that tries so hard not to expose concepts like RAM or a program counter isn't how computers think?
Are you somehow under the mistaken impression that OOP does this? What happens in C++ when you have exhausted the heap and you try to new() an object? Have you ever done pointer arithmetic in Java?
I'm very much of the opinion that you're not going to find a single answer to "how people think". It's unobservable and not really subject to social pressure, so there's going to be a huge variance in mental models.
That's why you get these unproductive discussions about which is best. It's like having some red-green-colorblind people and some blue-yellow-colorblind people try to agree on wallpaper for you.
> It's like having some red-green-colorblind people and some blue-yellow-colorblind people try to agree on wallpaper for you.
I agree, it's exactly like that.
It's like that one person you're annoyed with because they keep saying the colors don't match, even though they obviously do, but you don't know that person is tetrachromatic - you don't even know tetrachromacy is a thing. Or, "picture this in your mind" meaning something completely different to people with aphantasia (like myself), and most people with and without it are not even aware aphantasia is a thing, and just assume everyone imagines (or doesn't) the world the way they do. Same thing with internal monologue - some people talk to themselves in their heads all the time (I'm in this group), others don't, and - again - most aren't aware there are people who aren't like them.
The more we talk and learn about each other, the more it seems our internal experiences are pretty diverse.
Functional programming. You don't need a long attention span to understand how your data are changing. It's the case that humans can only keep a small number of things in their head at once, no?
I am not a huge functional programmer, but the big weapon in FP's arsenal is that pure, side-effect-free FP is far better suited to multicore/threading/processing, which is where all the future gains in processing will probably come from.
Functional programming is the most popular current paradigm. What is everyone on about? Javascript, a functional language, is one of the most popular languages ever. Rust and go, two other popular languages, also eschew object-oriented programming, the former dominant paradigm, for a truly functional one.
C++ is gaining more and more functional features, and this is quickly becoming the predominant style.
Rust and Go pull in some stuff from functional programming (first class functions) but are not really designed around functional programming as the way to get things done.
Most Go programming I've seen is like Python, where you are slinging around maps that get modified all over the place. Very antithetical to functional programming.
You are confusing functional programming for pure functional programming. This is like confusing Smalltalk-style OOP with OOP in general and then complaining about C++.
Mutability and functionalness are orthogonal, although the immutable style lends itself well to functional programming and mutability is more kludgy in FP.
But given that both Rust and Go have first class closures, they qualify as a functional programming language. The definition of FP is that functions are data. Python, Rust, Go, etc all fit this definition. Modern c++ does as well.
There is no reason not to include these languages, not only they support functions as data, they also support higher order functions (functions that take callbacks accepting callbacks accepting callbacks, ad infinitum).
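The "functions are data" definition used above can be shown in a few lines of Python (the helper names are invented): functions are built, stored, passed as callbacks, and returned like any other value.

```python
def compose(f, g):
    # functions are ordinary values: they can be built and returned at runtime
    return lambda x: f(g(x))

def apply_twice(f, x):
    # a higher-order function: it accepts another function as data
    return f(f(x))

shout_len = compose(str, len)
print(shout_len("hello"))               # "5"
print(apply_twice(lambda n: n + 3, 1))  # 7
```

By this yardstick Python, Rust, Go, and modern C++ all clear the bar, which is precisely the claim being made.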
While I agree with you, I think, in general, it’s functional patterns that people are after but not necessarily pure functional programming which I’d argue often times still is weird and counterintuitive (at least to people who have started with imperative paradigms in a procedural or object oriented language).
JavaScript isn’t a functional language, and neither are Rust or Go (Go in general is probably the one that bets the least on functional patterns all around); they all just provide constructs from functional languages, like mapping over arrays.
I vehemently disagree that JS, Rust, and Go are not functional languages. We have no problem classifying C++ as OOP despite it being an equally good procedural language. For some reason, we don't extend the same benefit to FP because it's foreign.
It's true that JS, Rust and Go are multi-paradigm languages (so is Haskell and other FP BTW), but FP is one of those paradigms.
I have my doubts. If I need to implement, say, Dijkstra's in Haskell and pick up a random algorithms book, then it's not going to help me at all. If I then ask a hundred programmers how they would do it, I'd get a hundred wildly different answers. From my point of view, algorithms and data structures are not well understood or explained in functional languages, because most of them are inherently stateful. If functional programming were the most popular paradigm, I'd expect a lot more literature on this topic.
In my opinion, Go is a case in point. Yes, it eschews object-oriented programming. Yet, funnily enough, whenever I use Go, I notice that I can work fine with it when thinking about it in an object-oriented way. Structs + methods were all the features I used in object-oriented programming anyway.
I've been going a similar way. Data and the methods to act upon the data.
But I also find myself writing in a more functional style too, limiting the amount of side effects, using pure functions when I can manage it. It makes testing so much easier when you don't have to have a bunch of set-up and teardown logic to get your object into the right state before running a test.
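The testing point can be made concrete with a toy Python example (the function and numbers are invented): because the function is pure, the whole test is a single assertion, with no object state to arrange first and nothing to tear down afterwards.

```python
def apply_discount(total, rate):
    # pure: the result depends only on the inputs, no hidden state
    return round(total * (1 - rate), 2)

# the entire "test": no setup, no teardown, no mocks
assert apply_discount(100.0, 0.15) == 85.0
```

Compare that with testing a method that only behaves correctly after several other methods have put the object into the right state.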
This logical grouping is what Rich Hickey called the attractive thing about objects. So a namespace in Clojure does exactly that: lets you refer to “a logical grouping of data and the functions specific to acting upon it” with a meaningful name. (It also handles the specifying of dependencies, and seems like the right place to do that, but that’s not relevant here.)
Clojure also supports private functions with “defn-“ but oddly requires using metadata for binding ns-private vars. I wonder how many folks have written their own “def-“ macro to do this. It’s never come up for me, but I haven’t written production code with it either.
It eschews inheritance and achieves polymorphism and abstraction through structural typing and implicit interfaces. Whether it is more "object oriented" than other languages is a bigger discussion.
Here's "clippy", Rust's style enforcement system, complaining about an insufficiently functional form.
warning: manual implementation of `Option::map`
--> src/viewer/regionindex.rs:265:13
|
265 | / match vlinkopt {
266 | | Some(vlink) => Some((*vlink).clone()),
267 | | None => None
268 | | }
| |_____________^ help: try this: `vlinkopt.map(|vlink| (*vlink).clone())`
|
note: `#[warn(clippy::manual_map)]` on by default
for further information visit
https://rust-lang.github.io/rust-clippy/master/index.html#manual_map
It wants a map and a closure used on a single value, to replace a simple conditional. I am told that the map and closure will all optimize out, but I haven't looked at the generated code.
Becoming fluent in using all the different ways to iterate over traditional collections (map, flatMap, fold, reduce, etc.) already takes some time. Extending that to things that programmers don't normally think of when they think "loop over something" is definitely a hurdle. It doesn't help that these operations have very generic names.
That being said, once you get used to it it is extremely convenient to just map over any sort of "container" and focus on business logic. The above concept even extends to async programming. You can do things like iterating over a Future and let the runtime handle waiting for the value. Functions returning a Future or a Promise can be composed just like you would iterate over a list.
There are of course different ways of achieving this (async/await) but the power of learning FP concepts is that you can apply the same fundamentals to almost everything that has a defined notion of iteration (--> is a monad).
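A duck-typed toy in Python can hint at why "map over any container" generalizes (the `fmap` name and dispatch rules here are invented, not a real library API): lists map element-wise, an absent value short-circuits, and a bare value is treated as a one-element box.

```python
def fmap(f, box):
    # toy "map over any container" (invented semantics, for illustration only)
    if box is None:
        return None                 # mapping over an absent value does nothing
    if isinstance(box, list):
        return [f(x) for x in box]  # lists map element-wise
    return f(box)                   # a bare value acts as a one-element box

print(fmap(lambda x: x + 1, [1, 2, 3]))  # [2, 3, 4]
print(fmap(lambda x: x + 1, None))       # None
print(fmap(lambda x: x + 1, 41))         # 42
```

Real FP languages get this generality from types (Functor/Monad instances) rather than runtime dispatch, but the user-facing idea is the same: one mapping vocabulary across lists, options, and futures.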
> It doesn't help that these operations have very generic names
As if "for" and "while" aren't generic names. They're so interchangeable that ziglang has basically replaced C's "for" with "while".
And with imperative loops, you have to reconstruct in your mind the state of what you're looping over and remember all possible variables closed over the loop and how they could be in flight during the loop. Map is basically declarative: "this thing is going to happen to each thing in your list"; Reduce is like a for loop, except you explicitly declare what you have to keep track over through the loop.
Eh. It reminds me of writing vectorized mathematical code in NumPy or MATLAB. Oh, it's so much cleaner because you don't have an ugly loop! If all you have is a basic dot product, sure. But a lot of the time, a loop is a hell of a lot easier to understand, even if it is slower.
Almost universally, I find the map/reduce/filter idioms to be inscrutable, write-only code compared to a good old loop
Tell me you can write that as an "easier to understand" set of loops. Tell me that this code is "write-only" and that you don't immediately understand what is happening, without even knowing the language it's in.
The pipe operator really is Elixir's secret sauce. The whole OTP platform is amazing, but web servers can live without it. But the developer ergonomics of the pipe operator for code that is to a large extent
for and while aren't generic in the context of collections. It's very obvious what
for item in itemsList:
doSomething(item)
means. In any case I think map etc. are perfectly ok but it gets tricky when you talk about Options, Futures, Either etc. since I would argue most people don't immediately see how "looping" (I know it's not really looping) can apply here.
In some cases I've found it impossible to use .map() - especially when needing to use "self" and dealing with lifetimes of data borrowed from "self". I really wish Rust had proper disjoint capture for private methods in these instances.
I like the puzzle analogy but I'm not sure it holds too much weight. Didn't imperative style also kind of feel like a puzzle when you first started learning how to code?
Remember those light bulbs that went off when you finally incremented an index variable at the right moment, or when you finally figured out the right condition for a while loop? Clearly all styles are puzzle-like when you are first learning them.
I think functional programming has more or less the same learning curve as other styles.
In my view, the reason functional programming isn't more popular is because of Comp Sci curricula.
There are plenty of institutions where there may be one course on functional programming (if you're lucky). Then, many students seem to (incorrectly) internalize the idea that anything their degree doesn't cover extensively must be of lesser importance.
For me the problem is that "functional programming" embodies a lot of questionable ideas:
- the idea that readability is not the first thing to pursue
- the idea that macros are great
- the idea that leaving performance off the table is OK
- the idea that objects and subtyping are useless
- the idea that mutability is hard to reason about
- the idea that recursion is super important
- the idea that somehow closures are more readable than objects
- the idea that tuple are somehow better than structs with named fields
It is a style that thrives _in opposition_ to the style of useful programs out there: readable, no macros, fast, with a UI, with mutability, with objects, with names.
You're making so many broad, inaccurate and false generalizations that I won't bother replying to every item.
But let's take macros: yes, as an average application developer, you should probably avoid macros. But for some problems they're a much better tool than alternatives like runtime reflection or code generation. For instance, I'm happy to use macro-based libraries that automatically derive JSON codecs. Unless you're a library developer, macros and their inherent complexity (outside of the Lisp family, I guess) aren't an issue in the FP languages that I know of.
> yes, as an average application developer, you should probably avoid macros.
I disagree. I think you should just learn to use them correctly. They're just another tool, and not a hard one to grok. If you're allowed to design code on your own - write modules, classes, APIs - then you should be able to handle macros responsibly as well.
In your experience with FP languages, how often do you reach for macros over functions, if you could ballpark it? Or if it makes more sense, in which scenarios are you more likely to reach for macros?
The example I always hear about is creating DSLs. I haven’t done that yet with FP (Clojure), and I have not yet created a macro outside of learning exercises. So I have only a vague notion of when they are most useful. I’m not arguing to avoid them, but I have internalized the Clojure preference for “data over functions over macros.”
This question is open to anyone. I’m mostly interested in the opinions of folks who have learned to use macros as “just another tool,” and who have a good set of heuristics for knowing when it’s the right tool.
I did a quick scan of one of my side projects, and found a defun:defmacro ratio of 10:1, and (defun + defmethod):defmacro ratio of 16:1. In another codebase I worked on, the ratio is 7:1 and 9:1, respectively, but it gets tricky to count with grep[0]. Correcting for greppability, I'd estimate the ratio closer to 20:1.
That may seem like a lot of macros, but it's actually few functions. A big reason to use macros is to have them generate functions for you, or remove the need for having them in the first place[1].
The macros I write tend to cluster in two categories, that you could almost call "functional" and "imperative".
1. The "functional" macros are code generators that eliminate boilerplate. For example:
(switch-key key-event
("RET" (funcall on-activate widget)
t) ;; return T to mark event as consumed
("ESC" (... do some stuff ...)
t)
(t nil))
The snippet above looks like a simple switch-case, but it hides quite a bit of complexity. It ultimately expands to `cond` (Common Lisp's if/else/elseif), but:
- It injects appropriate code to compare against data hidden in key-event, which is a structure containing codes and modifiers.
- It injects code to transform string key descriptors like "RET" or "C-a" (CTRL+A) or "C-M-<F12>" (CTRL+ALT+F12) into something that can be compared against data in key-event.
- It detects when the switch keys are string literals (like in example above), and instead of injecting code to convert them, it executes it during compilation[2] time - this means string literals have zero runtime performance impact, they're compiled into the right data structures up front.
Conceptually, the macro is rather trivial - but this is already what Lisp people would call a DSL. This particular example is driven by the desire to have a switch-like API for key events over an Emacs-style keybinding notation, that's also zero-cost if keys are known at compile time. I have more macros in this style.
What makes these macros "functional" is that the code they generate doesn't have any compile-time or runtime side effects (other than those introduced by user code that's given as input to a macro invocation).
2. The "imperative" macros, or state-modifying macros, that modify compilation state. E.g.
- A global variable containing event definition - as metadata for debugging, reflection, compile-time correctness checking.
- A bunch of top-level functions like `make-foo` and `send-foo` (note the function name contains the name used in define-event).
Other macros like this generate classes, or family of classes, or fill in some global hash tables, etc. These macros create proper DSLs - they contain complexity that allows me to make concepts like "events" or "components" behave as if they were a part of the language already. This almost always forces them to be "imperative" - they alter the language runtime state[3], by defining new functions, methods, classes, types, creating or modifying global variables, etc. But the resulting functions, or instances of classes, can be then used in regular functional code.
--
On the "data over functions over macros", I'm not exactly sure what Clojurists mean by this, but I think I may have an idea. Many times, I've found myself writing an "imperative" macro that would define something as a global entity in the system, only to later wish I kept it as a data structure I can pass around.
My favorite example is defining REST API routing. In one project I worked on, we had a bunch of top-level macro invocations like:
The macro did the obvious thing - created a handler function, adorned with REST-related boilerplate, and registered it as the handler for the route "/api/foo/bar" in the global routing table. This pattern is something I see people commonly do, but it becomes a problem the moment you want to have more than one routing table - whether for testing or extending via composition.
So these days, I try to avoid macros like these - that define a global state for things that are only "global" in the sense that, in production, there should be only one of them. From one little job I did in it, I recall the Clojure style is to keep such definitions as data structures, and I think it's a good idea.
--
[0] - Macros are often used to generate functions, so in many places, code that's technically a function definition sits inside a macro invocation, making it hard to count with casual grepping like I'm now doing.
[1] - In my experience, the two main roles of a function are modularity (including testability) and readability. Macros eliminate some of the latter - they let you simplify code that would otherwise have you create functions just for the sake of keeping another function's body simple.
[2] - Actually, during "macro expansion time", which is separate from "compilation time" in Common Lisp, but it's usually the same thing.
[3] - The distinction between "compile time" / "load time" and "run time" in Lisp family of languages is mostly a matter of opinion. Defining a class isn't much different conceptually from modifying a global hash table, except it usually happens during what one would consider "compilation" instead of "execution".
Thanks so much for the extensive reply! There’s a lot to chew on in there.
I think I’d be comfortable implementing the first kind of macro (switch-key) in my own programs because it’s quite targeted in its goal and encapsulates some stuff to make the overall flow more readable. This makes sense to me, and I’m thinking about how they could improve my learning project. I probably just have to try sprinkling some in to get a better feel for them.
Your second example (define-event) is more foreign to me, likely because I’m working in Clojure. Compile-time correctness checking isn’t really a thing there, and debugging is improving but (from what I hear) nothing like other Lisps. That leaves using imperative macros to create top-level functions / classes / hash table entries, and that’s beyond my current level of need/understanding and may not be idiomatic for the language. I don’t know enough to say for sure.
Your last example definitely illustrates “prefer data over functions over macros.” Your estimate of 20 functions : 1 macro is also good evidence for at least the second half of that, too.
I was doing some Haskell and Racket and I found the basics not too hard to learn, but quite hard to master - though I think that has more to do with the complexity of the problems I tried to solve than with the languages themselves.
Something I've wondered is why we can't have a language that looks totally procedural but uses monads under the hood to reason about state at any given point in the process. This would mostly be sugar, but I think it could make that stuff much more accessible.
We get small, specialized fragments of this in Rust's borrow-checker and TypeScript's control-flow analysis, but I'm not aware of something that tries to take the whole of IO-monads and reshape them into a familiar format.
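As a toy sketch of what "procedural-looking on the surface, state threaded underneath" could mean, here is a generator-based State-monad runner in Python (the design and all names here are mine, not an existing library): the body of `counter` reads like straight-line imperative code, but no variable is ever mutated - each `yield` hands an action to the runner, which threads the state through explicitly.

```python
def get():
    """State action: read the current state."""
    return lambda s: (s, s)

def put(new_state):
    """State action: replace the state."""
    return lambda s: (None, new_state)

def run_state(gen_fn, state):
    """Drive a generator, threading state through each yielded action."""
    gen = gen_fn()
    value = None
    while True:
        try:
            action = gen.send(value)
        except StopIteration as stop:
            return stop.value, state
        value, state = action(state)

def counter():
    n = yield get()      # reads state; looks like an ordinary call
    yield put(n + 1)     # "writes" state with no visible mutation
    m = yield get()
    return m

result, final = run_state(counter, 41)
# result == 42, final == 42
```

This is essentially what Haskell's `do` notation desugars to, minus the type checking; the interesting question is whether a language could make this the default surface syntax while still letting tooling reason about the state underneath.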
do
putStr "What is your first name? "
first <- getLine
putStr "And your last name? "
last <- getLine
let full = first ++ " " ++ last
putStrLn ("Pleased to meet you, " ++ full ++ "!")
This might work out well in practice when tracking a single effect with a monad (e.g. with Haskell's `do` notation), but I think it quickly falls apart when working with multiple effects. Monads don't have a general form of composition - at the very least, the order in which they're composed matters to the outcome of the program. Procedural code is generally flexible enough to let you mix and match how effects get handled, which I think would be difficult and impractical to work out in monadic contexts.
If you want to mix and match effects without regard to order of composition, that's what Lawvere theories are for (commonly known as 'algebraic effects'). You're right that monads don't give you this, but there are ways of describing these patterns without resorting to "procedural" idioms.
Rust's borrow checker (or rather its affine typing, which the borrow checker partly implements - "affine" rather than linear, b/c the compiler is free to drop() objects that are no longer used) actually gets in the way of using monads to reason about state. That's one key reason why the language still lacks a general monad construct.
It seems to me that if it looks totally procedural, then most programmers are going to reason about it totally procedurally. Using monads under the hood isn't going to buy you anything. In particular, it isn't going to make FP any more accessible, because the programmers won't be doing FP. They'll be doing procedural programming. Sure, the language may turn it into FP under the hood, but that's a detail of how the compiler generates its intermediate forms. That's not the claimed magic of FP; it's all in how you think. And if you think you're doing procedural, well... you don't get any FP magic that way.
True enough, when you're writing imperative Haskell, you're not hitting the claimed magic of FP.
But:
1. Haskell lets you write the majority of your code in pure style. And it helps you separate out the imperative parts (as in functional core, imperative shell).
2. The compiler tries hard to generate the same machine code as C. The difference is that it gives the programmer more options: because the imperative parts are values (actions), it's straightforward to calculate them, which you can't do in C (you'd need the preprocessor or maybe Lisp macros).
I believe that functional programming is not as popular as some think it should be, because the more popular languages stole a lot of its mojo. Other languages absorbed and used many functional programming concepts.
It became like, "No need to go over there, you can do most of it here, in a language you already know."
As for OOP, I think many people have found ways to circumvent or minimize how much they use it, so they don't go over the edge with inheritance and other kinds of spaghetti. If they can use a struct, record (Delphi/Free Pascal also have advanced records), prototype (JavaScript, Lua...), OOP without classes (Go, Vlang...), or classes with only data, then they do. There is more awareness now of how to avoid the traps and excesses of class-based OOP.
I actually don't like recipes written in an imperative style and I don't think this is how cooks think about them internally. They are the norm for written recipes, but whenever I read one I (and I assume other experienced cooks) translate it into a functional style for internal storage.
Consider a recipe for macaroni cheese. In imperative style it might be written like this:
Thread 1, step 1: melt butter, stir in flour to make a roux, then whisk in milk over heat to make a sauce.
Thread 1, step 2: remove sauce from heat and stir in cheese.
Thread 2, step 1: bring water to boil, add salt, add 125g of pasta, simmer.
Thread 2, step 2: drain pasta.
Join threads: Fold pasta into sauce and serve.
Nobody thinks about how to make macaroni cheese like this. My internal record for macaroni cheese is more like this:
"Basically it's macaroni and cheese sauce mixed together. You cook pasta in boiling water with salt. Cheese sauce is béchamel sauce with grated cheese. Béchamel sauce is made with a roux and milk. A roux is butter and flour."
In fact, if you look in any decent cookery book, the information will be stored like this. A good book teaches you techniques and builds up raw ingredients into basic building blocks and then final recipes. Only a stupid book tells you how to make béchamel sauce 20 times because 20 different dishes use it.
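That nested, compositional record translates almost directly into pure functions. A Python sketch (the decomposition follows the cookbook description above; the function names are mine), where each building block is defined once and reused:

```python
def roux():
    return "butter + flour"

def bechamel():
    # Bechamel sauce is made with a roux and milk.
    return f"({roux()}) + milk"

def cheese_sauce():
    # Cheese sauce is bechamel sauce with grated cheese.
    return f"({bechamel()}) + grated cheese"

def cooked_pasta():
    return "macaroni simmered in salted boiling water"

def macaroni_cheese():
    # Basically it's macaroni and cheese sauce mixed together.
    return f"{cooked_pasta()}, folded into {cheese_sauce()}"
```

Twenty dishes that need bechamel all just call `bechamel()` - the functional analogue of the cookbook teaching the technique once.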
Hm, what about a language like Joy or Factor? It is written sequentially, as a recipe, and (in the case of Factor) can have side effects. And yet it is deeply functional, because of the compositionality of words.
I really like Haskell, but I think Factor is going to be my next thing (although it needs a better type system, as Kitten has attempted), for the reason that it further simplifies Haskell's (already simple) syntax.
I am quite intrigued by functional programming and I spent a fair amount of time with Haskell. I believe that there are cases where functional programming should be relevant, for example numerical software, where by definition you pick up an input once, do a lot of calculations and produce an output (think of LAPACK, or numerical functions). There is no event loop or continuous polling for input to adjust state. Shouldn't functional languages be ideal for implementing such algorithms in a simple way, and perform better than Fortran? They clearly don't thrive there, but they should. So in the end I got the impression that functional languages are about the mental challenge of finding a non-trivial problem that a functional language can solve better than a more traditional language. It might be a form of art in programming: finding the best problem for the tool.
There is an opinion that learning things more thoroughly is helped by making said things in some particular ways[0] harder to learn (so we struggle at first, make our discoveries and get that dopamine boost when we grasp a concept).
If we take it as a given, then (controversial opinion) it seems possible that popularity of languages and paradigms that are easy and make intuitive sense (and are not weird, like functional languages) could by itself lead to a lowering of average knowledge level across the industry, even if the difficulty of a language has nothing to do with it being objectively somehow better.
1. It's weird because it's different. It's different because it's on the other branch of the skill tree from the one most people have been on for years.
2. To learn things on the other branch, people need to unlearn until they reach the common ancestor node.
3. Unlearning is a form of subtraction. Humans suck at subtraction, be it in learning things, making products, or human civilization itself - it's often easier to collapse and start over than to subtract.
It's like the claim that Japanese is much harder to learn than German: that assumes the learner speaks English, while Chinese speakers who don't know English would mostly find Japanese much easier.
I often wonder whether, if LOGO (which was inspired by Lisp) had become more popular than BASIC, functional programming would have caught on earlier. Many Gen X kids like me were exposed to both, but BASIC was more broadly available and often was the OS of your computer. LOGO was really good for its time, though, and it made it simple to structure your program around functions and build upon functions in a semi-Lispy/Schemey way. It instantly changed the way I thought about programming, and I often attempted to replicate that method of programming in BASIC.
I don’t think so. Most programming is CRUD which is by its very nature impure. Trying to make it pure just adds a lot of overhead and baggage.
I do think the stateless paradigm that serverless cloud has created is a step in the right direction. Most AWS lambdas are by their very nature impure though since you have to send the output somewhere at the final stage of the lambda.
I think the actual problem is that a lot of the functional programming discipline is about writing code to be understood by machines (eg: compilers, theorem provers, etc) in ways that can trade off against the convenience of people reading and writing it. When I have to write a program that computes functions of programs, I always reach for my FP knowledge, first thing, and start using the more "theoretical" branches quite quickly.
But program analysis and synthesis are different actual endeavors from writing some bog-standard, everyday code.
I'm not convinced by a convoluted C++ example. Why would you reason about FP in a language that was never built for it? Also, there are some mixed-paradigm languages (like Clojure) that avoid the pitfalls of pure FP. And yes, I do consider purity a pitfall for some kinds of applications. Actually, Clojure even has OOP features that are a la carte and don't create the artificial inconveniences that make you build enormous GoF-pattern-based solutions to overcome them.
For me functional languages lose to imperative ones because they make it harder to reason about what actually happens when the program is executed. When you see a nested loop, you immediately know that it will probably perform O(n^2) operations. This is not so in functional languages. I clearly understood it when I was reading about Haskell and learned that out of `foldl` and `foldr` one works in constant time and another in linear. I don't remember which.
Both have to consume the whole list, so none of them can be constant. You're probably thinking of space complexity, and yeah, in Haskell it's even more tricky - it depends on how the function's caller uses the result.
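The eager (non-lazy) version of the space story is easy to show in Python, where a recursive right fold consumes stack proportional to the list while a left fold is a constant-space loop - Haskell's laziness makes the real picture subtler, as the parent notes. The last two assertions also show why fold direction matters for non-associative operations:

```python
def foldr(f, acc, xs):
    """Right fold: recursion depth grows linearly with len(xs)."""
    if not xs:
        return acc
    return f(xs[0], foldr(f, acc, xs[1:]))

def foldl(f, acc, xs):
    """Left fold: a plain loop, constant stack space."""
    for x in xs:
        acc = f(acc, x)
    return acc

xs = list(range(1, 101))
assert foldl(lambda a, x: a + x, 0, xs) == 5050
assert foldr(lambda x, a: x + a, 0, xs) == 5050

# For non-associative operations the two folds disagree:
assert foldl(lambda a, x: a - x, 0, [1, 2, 3]) == -6   # ((0-1)-2)-3
assert foldr(lambda x, a: x - a, 0, [1, 2, 3]) == 2    # 1-(2-(3-0))
```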
It depends on the domain. In performance critical code, I agree that it usually makes sense to use an imperative language. However, not all code is performance critical. Also note none of this is to say that fp is inherently incapable of high performance code - properly tuned Haskell can be competitive with C. Libraries like Streamly[1] are designed with extremely high performance in mind. But I agree reasoning about performance is much more difficult and requires more expertise.
Haskell execution is definitely tricky to reason about. Though a declarative language doesn't always lose: the obvious counterexample is SQL, which replaced imperative approaches.
Pure Functional programming’s cost comes up front when you are trying to solve and express your solution. Imperative programming’s cost comes after you have written code, because it is inevitably buggy and incorrect. Imperative programmers and managers don’t mind because fixing bugs feels and looks like productive work, and they don’t believe bugs are largely avoidable.
Why does functional programming need to be popular or not popular? Functional programming is one of several programming paradigms in the Python, C++20, and Java programming languages. It can be used when appropriate, just like the other programming paradigms. I think FP is the norm, when needed, in Python and now increasingly in Java and C++.
This article is from 2016, so if we're going to look back a bit, then maybe some perspective is in order?
In the grand scheme of things, I look at ReactJS today, with its modern incarnation of hooks, and that "language" reminds me a lot more of SML (yes, anyone remember SML?) than it does of C or Java. I started programming professionally back in 2005. Back then OCaml was the rage because we thought it would support OO, and bridge FP and OO, and ReactJS did not exist yet. It was taken for granted that OO and "imperative" were practical. So it's interesting to me that modern ReactJS is more like SML than it is like OCaml. And it's actually practical. People for whatever reason don't consider it weird. Meanwhile, "idempotency" is something desired by the masses, even in infra / devops. For example, at least amongst my coworkers, we generally agree that e.g. terraform is "better behaved" than ansible.
If you're looking for a revolution in these sort of things, I don't think you're going to get it, because FP a la Haskell is indeed too "weird." As working engineers, the most important thing at the end of the day is to deliver some working tool or product.
The current generation of actual-FP languages have been too much of a pain to be practical. They had to be. They were breaking new ground as far as what could be done. In my mind, Haskell proved the limits of System F. The language was pretty much a test-bed for what you could do if you eschewed any attempt at "backwards-compatibility" with previous mental methods of programming. From a research perspective, that was hard enough to do.
But meanwhile, Swift is a massively better language to code in because Haskell and SML existed.
Change did not come as a revolution, the way we expected it. People understand FP better than they ever have, on some instinctive level, and that's how the world was able to move further away from C++ / Java, and towards SML. Ironically this happened in UI – and in the web – which in 2005 seemed like the last place where FP could possibly succeed. It was generally understood in 2005 that FP was bad at "state," and UIs, of all things, were highly stateful. (To be clear, Haskell is still a pain as far as state goes, but not as much of a pain as it used to be.) So when you put this all together, we can say that the world works in weird ways. It wouldn't surprise me too much if the languages of tomorrow actually turned out to be weird, but that'll probably take another 15-20 years to see out.
I think the big difficulty in trying to bake the functional cake is that the author tried to name every step, when he didn't do this for the imperative cake.
Comments are bringing out grand theses or bikeshedding about FP, but what OP actually wrote the article about was being forced to use recursion instead of iteration, and considering that to be FP.
If OP had titled his post recursion is weird I think we'd be seeing some different line of comments.
Secondly, his recursion example does not really support his opening comments about FP and cakes.
Thirdly, you don't have to do things backwards in FP.
My experience is that FP is notoriously bad at dealing with errors. Errors are quite common in everyday business cases, although usually ignored (like in C#), which makes FP hard to fit into whatever you are working with. Any language that doesn't treat errors as a first-class citizen in its syntax is a bad one, and I feel languages like Haskell don't do this well. Also, errors are contextual: something that counts as an error in one context may not in another. So the language must be flexible enough to handle both in a nice way.
I think Java is kind of a good example of this: it had checked exceptions to signal errors, but when introducing FP syntax they completely botched how errors work. This left you with a mess, and programs kept crashing because people didn't know the code could throw errors.
>Ah screw it I can’t finish this. I don’t know how to translate the steps without mutable state. I can either lose the ordering or I can say “then mix in the bananas,” thus modifying the existing state. Anyone want to try finishing this translation in the comments? I’d be interested in a version that uses monads and one that doesn’t use monads.
He's wrong, and he completely misunderstood functional programming. It's not weird at all. Let's get things straight. First off, there's a difference between functional programming and types. Monads are more of a type-centric thing, and also Haskell-centric. Haskell is a big part of functional programming, but it's only one style. You can do functional programming without monads and also without type checking. Additionally, type checking - even Haskell-like type checking AND monads - can exist in imperative languages as well (like Rust). Does Rust having monads make it functional? No.
Second, functional programs can be very, very readable. It depends on how you structure them and on your naming conventions. See the example below:
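The original snippet didn't survive the quoting, but a hypothetical Python rendering of the kind of code those English sentences describe would look like this - every step returns a new named value rather than mutating one:

```python
def heat(oven, degrees):
    """Produce a heated oven; the input oven is untouched."""
    return f"heated {oven} ({degrees} degrees)"

def grease_and_flour(pan):
    """Produce a prepared pan from a plain pan."""
    return f"{pan} with grease and flour"

def whisk(bowl, *ingredients):
    """Produce a bowl of whisked ingredients."""
    return f"{bowl} with whisked " + " + ".join(ingredients)

heated_oven = heat("oven", 175)
prepared_pan = grease_and_flour("pan")
dry_mix = whisk("small bowl", "salt", "flour", "baking soda")
```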
I mean that's so simple I can literally translate it into English:
Create a heated oven by heating an oven to 175 degrees.
Create a pan with grease and Flour by putting grease and flour into a pan.
Create a bowl with whisked ingredients by whisking salt, flour and baking soda into a small bowl with a whisk.
....
You get the picture... It's just that real, practical English tends to be less pedantic, that's it.
Think about it this way. Let's say we live in an imperative world where we have an oven. We then heat the oven. Now in the imperative world the oven is destroyed and replaced by a heated oven. We only refer to the "heated oven" as an "oven" for convenience but in fact the original "oven" is actually destroyed as in it no longer exists.
In functional programming THE only difference is that the original oven is NOT DESTROYED. That's it. You have a heated oven, but you still have a reference to the original oven. But you don't ever need to refer to original oven if you don't want to. This makes English describing imperative processes more or less IDENTICAL to functional processes.
Another perspective to think about this is that in the English language we create the heated oven but the original oven is not destroyed but moved! The oven is now moved into a namespace called the past, we can only refer to the original oven by appending or prepending one of the many names and phrases of the "past" namespace to the oven! For example: The oven "before it was heated", the oven "from the past", the "original" oven,... etc.
So from that perspective then the only difference between FP and english is that variables once operated on, are moved to a different namespace. So literally just extra past-tense embellishment on English grammar is the ONLY difference.
It gets crazier than this. Is anything in reality really mutable? What does mutability even mean? We live in a universe where each instant of time is its own namespace. The universe is a function of time. At t1 there is an oven, at t2 there is a heated oven. So really, mutability is just syntactic sugar over immutability, where we gloss over mentioning which time namespace we're in, for convenience. Yes, you heard that right: everything is actually immutable, and mutability doesn't exist. Our concept of "mutability" arises from language shortcuts we take and assumptions of present tense.
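That "time as a namespace" idea can be sketched directly (a toy Python illustration of my own, not anyone's API): each update returns a new world value, and the old one remains addressable as "the oven from the past".

```python
# The world at t=0: a cold oven.
history = [{"t": 0, "oven": {"temp": 20}}]

def heat(world, temp):
    """Return a new world state; the old one is untouched."""
    oven = dict(world["oven"], temp=temp)  # copy, don't mutate
    return {"t": world["t"] + 1, "oven": oven}

later = heat(history[0], 175)
history.append(later)

# "The original oven" still exists in the t=0 namespace:
assert history[0]["oven"]["temp"] == 20
assert later["oven"]["temp"] == 175
```

Dropping the `history` list and keeping only the latest entry is exactly the "syntactic sugar" that mutation provides.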
Either way from the examples this guy writes it's pretty obvious functional programming didn't click for him. He understands the rules like immutability and he sees all the crazy tricks and nutty types people associate with functional programming but he missed the big picture. Hopefully this explanation will let the concept of functional programming click a bit more with people.
There is in fact a deeper intuition for why functional programming is "better" than imperative programming. But that's another long explanation. Most people don't ever reach this point of realizing how functional programming is better. They could spend years doing functional programming and completely miss it. Instead they reach a state where they think functional programming is sort of a style of programming used to express intellectual arrogance with no practical benefits and they give it up and move back to imperative programming. Well, they are wrong and they missed a deeper understanding of a fundamental concept. If this sounds like you, I'm telling you... you missed the big picture.
Literally the OP is a picture perfect example of this. Completely missed the point of FP but spent enough time with FP to understand what a monad is, even though a monad isn't really technically an FP thing.
This is about as far from "popular kids make fun of nerds for being weird and nerdy" as you can get. It's more like two different groups of nerds having a nerd-fight about which kind of nerd is the better kind.
Declarative languages are characterised by having the programmer specify _what they want_ but not _how to get it_ (i.e to get a sorted list, you would specify that the elements are in increasing order, but not the specific algorithm to use). In contrast, functional languages are characterised by treating functions as first class values (amongst other things, but these are harder to summarise).
Prolog for example is a declarative language, Haskell is a functional language.
Someone can correct me on this, but I've never seen this distinction you're making anywhere else. And it doesn't make sense to me either. The Wikipedia page[0] for FP says that FP is a "declarative programming paradigm". Can you give me an example of a "functional" piece of code that is "not declarative"?
Prolog is a declarative language but not a functional language. Hence functional languages and declarative languages are not equivalent. Functional languages are not just defined by lacking side effects but also by using functions as first class citizens, meaning that SQL isn't a functional language either.
Python and Scheme are the two classic examples of non-declarative functional languages; they are both instructing low-level VMs to mutate machine state, but also both have functional-programming tools and first-class functions. The Scheme (set!) form is a great example of imperative mutation within a functional paradigm.
It's impressive to have people say that Lisp is "downright ugly" and Haskell is "freaking unapproachable and the syntax is weird" on an article that includes C++ templates. It's not the "aesthetics" that matter; it's popularity and what you're used to. Part of the reason Rust is popular is that it didn't break people's habits too much, and even so it constantly gets flak for its syntax.
I didn't want to comment about looks because it wouldn't add anything to the discussion but since we are talking about looks I just can't get over how ugly c++ templates are.
I will even admit that the only reason I never gave Rust a chance was because it reminded me too much of this type of angle bracket monstrosity.