Whilst it's obviously powerful, I often find myself wishing math used syntax even half as easy to understand as any decent programming language.
I suppose it's a result of being developed on a chalkboard, but math seems to value _terseness_ above all else. Rather than a handful of primitives and simple named functions, it's single Greek characters and invented symbols. That kind of shenanigans would never pass a code review, but somehow when we're talking about math it's "elegant" and "powerful".
However, I'd like to add that in mathematics we are often discussing very generic situations. For instance, we are not talking about the radius of some specific circle, which perhaps should be named `wheelRadius`, but about the radius of an arbitrary circle, or even an arbitrary number.
I wouldn't really know a better name for an arbitrary number than `x`. The alternative `arbitraryNumber` gets old fast, especially once a second number needs to be considered -- should it be called `arbitraryNumber2`? I'll take `y` over that any day :-)
Also, there are context-dependent but generally adhered-to naming conventions which help to quickly gauge the types of the objects involved. For instance, `x` is usually a real number, `z` a complex number, `C` a constant, `n` and `m` natural numbers, `i` a natural number used as an array index, `f` and `g` functions, and so on.
My favorite symbol, by the way, is `よ`, which denotes the Yoneda embedding and is slowly catching on. All the other common symbols for the Yoneda embedding clashed with other common names, which has been a real nuisance when studying category theory.
We use i, x, y, etc. all the time as variable names as professional programmers.
So you're sort of arguing against a straw man there, almost no programmer would expect you to name such a concept 'arbitraryNumber2', we would also name it x or y if it made sense in the code.
Sorry, I didn't want to argue against a straw man. You are right. I just wanted to indicate that in mathematics, we are much more often in such generic situations than in programming. Accordingly, I wanted to argue that the increased brevity in mathematics is to some extent to be expected.
"My favorite symbol is by the way `よ` which denotes the Yoneda embedding" which was named after its mathematician inventor/discoverer Yoneda [1]
The character is the syllable "yo" in Japanese hiragana, and although not everyone reads hiragana, it is still mnemonic for "Yoneda" rather than being wholly arbitrary. [2]
Every programming language overloads the same few ASCII non-alphanumeric characters with multiple meanings. A colon : can mean several different things in most languages, or it gets combined into digraphs. Even a symbol like less-than < takes on wildly different meanings depending on context: comparison, template parameter, XML tag, the << operator (bonus if overloaded), the <- operator (ugh, mixed up with minus for bonus confusion), etcetera.
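Python alone, for example, gives the colon at least four unrelated jobs:

```python
scores = {"alice": 10}     # 1: key-value separator in a dict literal
tail = [1, 2, 3][1:]       # 2: slice boundary
def bump(n: int) -> int:   # 3: type annotation...
    return n + 1           # ...and 4: the colon that ends every block header
```

Same character, four grammars, disambiguated purely by context.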
I think saying programming languages are better than Mathematics is just due to your familiarity.
Oooh don't even get me started on how they name things after people instead of anything remotely descriptive or helpful. Imagine if you named functions after yourself.
And then there is the Hann window function, sometimes called ...
"named after Julius von Hann, and sometimes referred to as Hanning, presumably due to its linguistic and formulaic similarities to the Hamming window." Wikipedia, window function
The Krebs cycle, HeLa cells and Lugol's solution in medicine as well; and also the Allen key, the Phillips screwdriver, and many of the physical units (ampere, volt, tesla, weber, newton, ...).
It happens in all fields. For that matter, the Hebrew word for masturbation is named after Onan, who was described in the bible as having done so.
I find it a good thing to name something after the person who discovered it or pioneered a branch of the field. Sometimes it makes things confusing, but most of the time the name reference makes it very easy to remember as well.
If that were true, then why don't we do the same thing for programming? We only do that for languages and algorithms (which I'm still not a fan of), but everything else tries to have descriptive names because then it's easier to understand how it fits together with other concepts. If we called for loops "bobs" and while loops "jims", how are you supposed to know how they are related, structurally?
Mathematical concepts are more general and abstract, so a short description is sometimes hard or inconvenient to come up with without overlapping dozens of other concepts. Wherever it makes sense there are all sorts of descriptive names: loops, knots and so on.
Comparing programming languages to maths doesn't really make sense, because they serve to express vastly different things. Programming languages need to unambiguously describe how to transform input data into output data. Maths language is more like a natural language and is used to communicate. It evolves the way natural languages evolve, and any attempt to codify it precisely is futile, because there will always be idiomatic expressions, exceptions to the rules, and heavy dependence on context.
You use maths language to write a story or tell a friend what you did last night; you use a programming language to build a shed or bake bread.
It might be awful from the outsider's perspective, but so is any foreign language you never learned. Hard to complain about it, though: if you want to know what others are talking about, there is no way around learning it. It won't change to make things easier for you; it will change to make things easier for its speakers.
A codebase easily contains thousands of identifiers - sometimes millions. You need a verbose, (hopefully) unique way to refer to them, because otherwise you will never find out which variable refers to what.
On the other hand, in a typical math textbook, the kind that will take you a full year to read through, the list of "all the symbols ever used in this book" usually fits in a single page.
There's no point in writing "CircumferenceRatio" when π does the job. Imagine solving a partial differential equation with CircumferenceRatio appearing five times each line.
Algebraically manipulating stuff (factoring, rearranging, cancelling, expanding, simplifying, etc.) without the terseness of math notation sounds like a nightmare, regardless of whether I'm using a chalkboard or an endless sheet of paper.
> Those kind of shenanigans would never pass a code review
Yes, because code is used in very different ways to a mathematical expression. When you see code in a repository or a textbook, I doubt you find yourself copying it out over and over again in your own work.
Both math(s) and (almost?) all programming languages have their quirks and inconsistencies. What immediately comes to mind (a thing I've recently learned) is the template syntax of C++, where getting a single character wrong produces dozens of lines of error messages (there's a code golf challenge on exactly this).
At least with programming, you generally don't see different semantics depending on the value of something! With math, there's sin^2 as in:
sin^2 theta + cos^2 theta = 1, which reads as "the square of the sine of theta", and so on.
But then there's
sin^-1
which means the inverse sine, AKA arcsine, and NOT 1 / sine, which would have been consistent with the previous usage.
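Spelled out in code, the three readings are forced apart, which makes the inconsistency easy to see (standard Python math functions):

```python
import math

theta = 0.3
squared = math.sin(theta) ** 2        # sin^2 θ: the square of the sine
inverse = math.asin(math.sin(theta))  # sin^-1 θ: the inverse function (arcsine)
recip = 1 / math.sin(theta)           # what sin^-1 "ought" to mean by analogy

print(squared + math.cos(theta) ** 2)  # ≈ 1.0, the Pythagorean identity
```

Three distinct operations, and in the blackboard notation two of them share the exact same superscript form.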
The use of invisible operators is obnoxious because it means symbol names must all be atoms. Why is yz (multiply y z) but 23 (toint (cat "2" "3"))? A great deal of mathematical syntax is genuinely ambiguous as written, too. Plenty of it is fine, but it's intellectually dishonest to deny that many common notations have no merit beyond widespread historical usage. Which, in case it isn't clear, means yes, of course students should learn them for the benefit of reading great works of the past.
In computer science people can use pseudocode or descriptive variable and function names, and sometimes do, but still often fall back on math notation and Greek letters.
Sometimes the terseness, and leaving certain details implicit, actually adds to clarity rather than hurting it. The eye can only take in so much at one time.
My main frivolous gripe with math notation is how everyone uses radians by default, to the point where your first visual clue that something is an angle is not any kind of unit, but the fact that it's being multiplied or divided by some multiple or fraction of pi. I think the most sensible universal angle unit is "rotations": 360 degrees is 1, 45 degrees is 1/8, and so forth. Radians are only useful in a few special cases, like determining how far a car rolls if its 10-inch-radius tire rotated by 300 radians. (I wonder if somewhere there's a mathematician who has modded their car's tachometer to output radians per second rather than revolutions per minute, just to make the math work out easier...)
Anyways, programming languages generally follow math notation, and use radians for trig functions and so on. Usually that's not too much of a problem, but when applied to file formats like VRML which were meant to be human readable, the results are ugly.
For the most part though, I think math notation is pretty good. At least when compared to something like standard music notation, which is full of weird rules and historical accidents.
Algorithms for calculating trig functions would probably not look good using degrees. It might look OK with what I assume is usually used (lookup tables + interpolation?), but for the Taylor series expansion you have to multiply by powers of pi/180 everywhere.
Calculus is generally worse with degrees. The derivative of sin(pi/180 x) is pi/180 cos(pi/180 x). That's pretty inconvenient, especially if you're writing any sort of models that need to solve differential equations. Same reason base e is preferred for exponents.
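A quick numerical sketch of the point (my function names, not production code): in radians the Maclaurin series for sine needs no conversion factors, while a degree-based version has to fold in pi/180 before it can even start.

```python
import math

def sin_series(x, terms=10):
    # Maclaurin series in radians: x - x^3/3! + x^5/5! - ...
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def sin_degrees_series(deg, terms=10):
    # with degrees, the pi/180 factor has to be smuggled in first
    return sin_series(deg * math.pi / 180, terms)

print(sin_series(math.pi / 6))   # ≈ 0.5
print(sin_degrees_series(30))    # ≈ 0.5, but only after the hidden conversion
```

Every derivative, integral, and series coefficient picks up another power of that pi/180 if you insist on degrees, which is exactly why analysis settles on radians.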
Radians vs. degrees isn't notation, it's a convention. And you even state the reason it is the convention: multiplying the radius by the angle in radians gives you the arc length. It is the only representation of angles with this property. I mean, why should 360 represent 1 rotation? Why not use rotations themselves? That way 1/4 is 1/4, 1/8 is 1/8, and so forth.
The reverse operators in APL are very visually suggestive if you imagine a matrix and mirror it about the centerlines-
Reverse around last axis: ⌽
Reverse around first axis: ⊖
The grade operators (indices by which one could index to sort ascending or descending) are likewise easy to remember-
Grade up: ⍋
Grade down: ⍒
I suspect that the reason only a handful of APL's notational ideas made it into mainstream mathematics is because few mathematicians felt the need to describe algorithmic processes, and those who did were willing to settle for big sigma/pi, set builder notation, piecewise function notation, or a handwave at ALGOL, Pascal, or whatever else was in vogue at the time.
This somewhat begs the question that those symbols are the best they can be, though. That is, yes, if you know one of those set of "grade" symbols, and you know that you can grade "up" or "down", then it somewhat follows. If you just show me those symbols flat out, though, I have no idea what they do. (Granted, I'm not entirely clear what it means to "grade" in this sense, either...)
Of course, that also begs the question of "do we need characters for all operations?" But, I'm not entirely sure we do. What is the advantage? There are plenty of operations that are just fine with their symbol being a word. (As evidenced by programming, in general, right?)
"grade" (as opposed to "sort") means "find a permutation that puts this into order". So the sorted version of the list "name" is in APL, name[⍋name] (read "name indexed by the permutation that puts name into ascending order), but if you have two corresponding lists, "name" and "age", then you can get names sorted by age using name[⍋age]. It is, in that sense, a more fundamental operation than sorting.
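In more mainstream terms, grade is an argsort; a small Python sketch of the name[⍋age] idiom (variable names are mine):

```python
def grade_up(xs):
    # indices that would sort xs ascending -- APL's ⍋
    return sorted(range(len(xs)), key=lambda i: xs[i])

name = ["carol", "alice", "bob"]
age = [35, 41, 28]

# the APL idiom name[⍋age]: names reordered by ascending age
by_age = [name[i] for i in grade_up(age)]
print(by_age)  # ['bob', 'carol', 'alice']
```

Sorting itself is then just the special case `xs[grade_up(xs)]`, which is the sense in which grade is the more fundamental operation.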
Before Iverson introduced the ceiling and floor symbols, those weren't known either, even though you'd have had a better chance of guessing them.
And w.r.t. "characters for all operators" - this is, of course, subjective. But I've never met a programmer who prefers COBOL's "ADD 1 TO X GIVING X" to C's "++x". Anything that's common enough deserves a symbol, both because it's shorter and easier to recognize visually, and because it reduces the language barrier (in the same way that "3+4" is easy for a six-year-old Thai speaker while "3 plus 4" in English letters isn't).
In my opinion Arthur Whitney's refinement of Iverson's ideas (called "K", an APL-family language) is the right way to go. He mostly converged on ~50 operations that deserve symbols. Many algorithms end up as orthogonal combinations of those, e.g. |/0(0|+)\ which is a complete, efficient O(n) solution of the maximum-subarray-sum problem; or ",//:" which is a complete, though not very efficient, implementation of "flatten".
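For those who don't read K, here's my rough Python translation of what |/0(0|+)\ does: the scan keeps a running sum clamped at zero, and the reduce takes its maximum (this is Kadane's algorithm, allowing the empty subarray):

```python
def max_subarray_sum(xs):
    # K's 0(0|+)\ : running prefix sum, reset to 0 whenever it goes negative
    # K's |/      : maximum over those prefix values
    best = running = 0
    for x in xs:
        running = max(0, running + x)
        best = max(best, running)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```

Eight K characters versus eight Python lines; whether that ratio is "elegant" or "write-only" is pretty much this whole thread.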
Certainly I think that for things you do regularly, and that can be pulled back to the standard tools of associative and distributed computations, it makes sense to do so. I confess that I am less clear on why to do this for things that cannot be pulled back to those conventions. I also have to acknowledge that there is no fine line on where to stop, though. Otherwise, nearly every job function at work would have a unique symbol. :D
That is to say, I was not intending to contradict you. Just trying to shine light on the ambiguous part.
My personal favorites are x*y and x⍟y for pow and log, and the monadic versions *x and ⍟x for exp and ln. I always found it weird that exponentiation and logarithms don't have operators in conventional math notation.
Knuth [0] discusses in more detail some advantages of indicator notation, in particular that, when you eliminate the limits on summations, it makes it much easier to do algebra on summations.
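A standard example of the kind of manipulation Knuth means (my rendering, not quoted from the paper): once the bounds move into an Iverson bracket, interchanging a double summation is purely mechanical, because the bound variables carry no hidden constraints.

```latex
\sum_{k=1}^{n} a_k \;=\; \sum_{k} \,[\,1 \le k \le n\,]\; a_k ,
\qquad
\sum_{j}\sum_{k} \,[\,1 \le j \le k \le n\,]\; a_{j,k}
\;=\;
\sum_{k}\sum_{j} \,[\,1 \le j \le k \le n\,]\; a_{j,k} .
```

In the bounded form the same interchange forces you to rewrite the limits as $\sum_{j=1}^{n}\sum_{k=j}^{n}$ versus $\sum_{k=1}^{n}\sum_{j=1}^{k}$, which is exactly the bookkeeping the bracket eliminates.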
I'm sure there's some amount of font twiddling and changing to a different terminal that'll fix this, but using these characters out of the box is problematic.
I only mention this because I've used these to decorate tables in my own notes in the past. It's not really worth the trouble, but I remembered being impressed with how nice these characters looked at the time.
If anyone else wants to try it, paste this into a python REPL:
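The snippet itself didn't make it into this thread; a hypothetical stand-in that shows off the same box-drawing characters might be:

```python
# Hypothetical reconstruction (the original snippet is lost): print the
# Unicode box-drawing block, U+2500-U+257F, sixteen characters per row.
for row in range(0x2500, 0x2580, 16):
    print("".join(chr(cp) for cp in range(row, row + 16)))
```

Whether the lines join up cleanly is exactly the terminal/font issue discussed below.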
I don't think it is controllable by the font; the only issue I can see is that these lines and corners are not extended all the way to each character cell's bounding box.
I find these kind of discussions of notation really interesting. They're not at all the bike-shedding they might first appear to be. One of the last EWDs[1] is an interesting meditation on the topic.
Whenever someone tells me "it's just syntax sugar, it doesn't matter" I usually point them to mathematics. This is both beautiful and horrifying in equal measure and _I love it_
I knew Iverson introduced his bracket because I've always heard it called "the Iverson bracket"—and I absolutely love it—but I had no idea he invented the floor and ceiling functions, too.
At the risk of flippancy, who would think that the creator of APL was also the source of such intuitive notations? Perhaps you have to be willing to explore crazy out-there notation to be able to find these occasional gems.
Agreed. Say he introduced 100 bits of notation, 3 good and 97 bad. The net result is that the world absorbed 3 good ideas, and the rest have vanished from memory. Not a bad track record. I hope to give the world three good ideas.
Besides notation, APL introduced some ideas about vector computing that have been adopted in languages like R and Python (NumPy).
Echoing a lot of the sentiment here, I wish math would simply use English (or natural language). It seems that's too much to ask even of a programming language, and especially of pure math.
In one undergrad math course, we were allowed to state our proofs in English without any notation. It was the only course that did this. I found it easier to write and reason about my proofs, it was incredibly easy to read my peers', and the professor had no issues with my work, while he did have some trouble with some of the notation.
When I first started, I asked whether there existed an encyclopedia or an OED of mathematical notation. I was told there wasn't. I then enquired how I would go about deciphering something I didn't understand, and was told that I should ask my professors and peers. I brought one of the papers I had found online to one of my professors, and he asked me to write to the author because he couldn't understand it either.
Meanwhile, I can download any old piece of code and, given enough time with the compiler, work out what it means to express. I shudder to think of the sheer body of work that relies on a vague understanding of the notation in the underlying proofs.
For a science that believes in precision of logic, I ask: how is this still a thing?
> Amiga called the left and right brackets 'bra' and 'ket' which I always thought was clever.
Wouldn’t know about Amiga, but this is standard notation in quantum mechanics (though you may be aware of this already). <φ| is a ‘bra’, and |ψ> is a ket — the two are adjoints of each other. Naturally, <φ|ψ> is a ‘braket’. (As my QM lecturer is fond of saying, ‘Dirac invented both the notation, and the pun’.)
I use brackets with a subscript as notation for the modulo operation. This way expressions with multiple / nested moduli become much more readable. I'm particularly fond of how concisely the Chinese Remainder Theorem can be stated:
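The statement itself doesn't appear in this thread; with $[x]_n$ for $x \bmod n$, one common rendering (my reconstruction, not necessarily the commenter's exact formula) is:

```latex
\gcd(m,n)=1 \;\implies\;
[x]_{mn} \,\longmapsto\, \bigl([x]_m,\,[x]_n\bigr)
\ \text{ is a bijection } \ \mathbb{Z}/mn \;\to\; \mathbb{Z}/m \times \mathbb{Z}/n .
```

The subscripted brackets keep each modulus attached to its residue, which is what makes nested moduli readable.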
I don't think notation is what's making things hard.
The concepts behind it are the hard thing. If you can't figure out the concept, the notation doesn't make much sense - and since the notation is what you meet first, it seems to be the culprit.
Which is not to say that all notation is equally good - some is exceptionally bad, and some just confusing. The article mentions [x], which was in use for almost 200 years despite being confusing - but was then fixed into something non-confusing.
Similarly, for most uses Leibniz's differential notation (dy/dx) is superior to Newton's (y with a dot on top) - and is now universally used for them - but for a long time Newton's dominated, mostly for political reasons (yes, for over 250 years).
But these are the exceptions: usually, as in the floor/ceil case, when a better notation comes along it is quickly (30-40 years...) adopted.
Concepts can be explained and demonstrated. You can read the demonstration, and eventually understand.
If you can find it.
But there is no way to look up any given dumb-ass notation, or even to find out how to pronounce it. Most notations are used to mean a dozen different things, so even if you do find something, you can't be sure you're reading about the right one. And even if you find the right one, it still won't tell you how to pronounce it, so you can't even ask about it without sounding like a dummy.
It is all very insular. Or, maybe, was, before Youtube.
It doesn't help with stuff found in the wild, but if you're working from a text most math books will either list all symbols at the beginning of the index with references to where the notation is introduced and defined, or they have a separate symbol index or glossary of notation.
> I suppose it's a result of being developed on a chalkboard, but math seems to value _terseness_ above all else. Rather than a handful of primitives and simple named functions, it's single Greek characters and invented symbols. That kind of shenanigans would never pass a code review, but somehow when we're talking about math it's "elegant" and "powerful".
I call bullshit. Math syntax is bad.