With respect to your CS prof at one of my favorite places (Go Cavaliers!), this is not an issue of discrete vs. continuous. If your structure can have either 9 or 10 supports, but not 9.001, it is discrete, not continuous, regardless of failure mode. And if you remove one of the three legs of your stool, you probably wouldn't have two thirds of its support remaining, but whether you did or didn't would be a question of proportionality, not continuity.
There are a lot of jumbled concepts here, most of which don't matter anyway, because what you are talking about is the phenomenon of graceful degradation. In the physical world, both natural and man-made, almost nothing at the macro scale is ever perfect, so the best designs tend to be those that remain good enough under the widest range of circumstances vs. a more common software goal of being perfect under perfectly controlled circumstances.
As software gradually moves out from the walled garden of a single mainframe to fill the world with interacting systems spanning diverse machines, sensors, communications channels, data types, etc., design for graceful degradation becomes more and more of a focus for professional software architects.
Coding in the gracefully degrading way is much harder than coding in the "if even one of your ten lines is wrong, you crash" tradition. The fact that even the latter is so hard for us humans means we will need more and more help from machines that learn what to do without being explicitly told by us.
I agree that "discrete vs continuous" is not the perfect way of expressing the difference; it's just an analogy. (But the structural support example is continuous. You could have 9.5 supports by adding a 10th support with half the strength, etc. "Amount of support" is the continuous measure.)
But it's not just an issue of graceful degradation. The fact that tiny changes in a program can have very large effects is a feature, not a bug. We grade programming languages on their ability to concisely express complex operations, and that conciseness necessarily means that very different operations are going to have similar expressions (e.g. subsetting rows vs columns of a matrix typically differ only by a small transposition of characters, but the effect is completely different).
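To make the row-vs-column point concrete, here's a minimal sketch using NumPy (the array values are just illustrative): the two expressions differ only in where the `:` sits, yet they select completely different data.

```python
import numpy as np

# A small 3x4 matrix:
# [[ 0  1  2  3]
#  [ 4  5  6  7]
#  [ 8  9 10 11]]
m = np.arange(12).reshape(3, 4)

row = m[0, :]  # first row: swap two characters and you get...
col = m[:, 0]  # ...the first column instead

print(row)  # [0 1 2 3]
print(col)  # [0 4 8]
```

One transposed pair of characters, two entirely different operations: exactly the conciseness-vs-fragility trade-off described above.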
You can write software that degrades gracefully, but one syntax error (or other "off-by-one-character" problem) is still going to kill the program. You can talk about running your program on a large set of redundant servers with no single point of failure, so that you can update them one-by-one with no downtime, and that makes you robust against even syntax errors. But that's not helping you teach novices how to write code.
There are continuous programming languages out there - DNA is one, I guess. But I don't think the discrete vs. continuous nature of a programming language is what makes it difficult. It's more that a person's mind may not conceptualize tasks algorithmically, and making that switch is difficult for someone who isn't already in that frame of mind.
That's a good point. DNA as a programming language has to be at least somewhat continuous, or else evolution has nothing to optimize because every change has a random effect.
DNA is a lot less discrete than you might think. There are epigenetic factors and population proportions, for example.
But even considering DNA as just a 4-letter language with discrete characters, my point is that many, even most, small sequence changes to a genome (e.g. single-nucleotide variants) have small effects or no effect at all, which gives evolution a smooth enough gradient to optimize things over time. That's what I mean by continuous in this context. The opposite would be, for example, a hash function, where any change, no matter how small, completely changes the output. Hence you couldn't "evolve" a string with a hash of all 7s by selecting for "larger proportion of 7s in the hash function", because hash functions are completely discontinuous by design. But you can evolve a bacterium that includes more of a given amino acid in its proteins by selecting for "larger proportion of that amino acid in protein extract".
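The hash-function point is easy to demonstrate. Here's a minimal sketch using Python's standard `hashlib` (the "fitness" function counting 7s is my own illustrative stand-in): a one-character change to the input scrambles the whole digest, so the fitness score jumps around with no gradient to follow.

```python
import hashlib

def sevens(s: str) -> int:
    """'Fitness': how many 7s appear in the SHA-256 hex digest (0-64)."""
    return hashlib.sha256(s.encode()).hexdigest().count("7")

# Three inputs that differ by a single character; their digests share
# nothing, so the scores bear no relation to how 'close' the inputs are.
for candidate in ("hello", "hellp", "hellq"):
    print(candidate, sevens(candidate))
```

Selection can't climb a landscape like this, which is the whole point of a cryptographic hash; a genome, by contrast, mostly changes a little when you mutate it a little.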