Not sure if it is just you, but normally you DO want this information in the production build. It is quite a bad situation to have a runtime exception in PROD and no idea how it happened. Sure, there is defensive programming and checks and asserts, but most of the time you cannot foresee everything.
I get the point about external symbols and a location database, but oftentimes time is precious, and having a fully laid out stack trace in the log will let you get to the root cause much faster.
> I get the point about external symbols and a location database, but oftentimes time is precious, and having a fully laid out stack trace in the log will let you get to the root cause much faster.
You can also set up a service that automatically symbolicates everything in a log file as soon as it is generated, before a human ever even looks at it.
Granted, yes, this is slightly more complicated, but the point is that the toolchain should let the developers choose which strategy they want to use.
Absolutely not. Developers making choices about stuff like that is asking for trouble. I'm an SRE, and this is how production issues start: some dev decides that, somehow, something like this shouldn't be there in production, until problems pop up, and then it's too late.
For us it's simple: the moment it hits the CI/CD pipeline, it's a production build, even if it just ends up on some test or staging environment. If you might need it there, you need it in production. Our way of working means that the exact same build artifact should be promotable to production.
Not the parent commenter, but I interpreted that as: given the choice between alternatives A (fast and good enough) and B (more complex and much better), you would want to choose B but end up doing A for various reasons (lack of time, unclear ROI, etc.).
That’s a language feature; we’re talking about build flags. IMO, adding a build flag wouldn’t meaningfully change how simple the language is to learn and use, whereas there is at least a credible argument that generics would.
There are very different production scenarios. In many of them, no one will ever look (or even be able to look) at a stack trace if it crashes after it's shipped (at best you'll record bug reports from customers and attempt to reproduce them on your test hardware), so the debug information is literally useless there. And these are the same scenarios where an extra 50 MB of disk and memory matters more than it does for software running in a cloud environment.
I was pleasantly surprised how good Microsoft's tooling was around firing up a debugger to examine the final state in a crash dump using external symbols from that build. Everything seemed to work except you couldn't resume any thread. I agree symbols don't need to be embedded in every running binary, but having a warm copy somewhere can be pretty helpful.
External symbols have forever been the default on Windows, with the binary only containing the path to the pdb file.
Of course this is partially out of necessity; Windows is proprietary software, so they don't want to give you full debug info. But then in practice their tooling is just so vastly superior. You can fire up WinDbg, a rock stable debugger, attach to a random process and get a proper backtrace including full symbols for all the proprietary Windows stuff, because they run a public symbol server and all the symbol data you need for your Windows build is downloaded in a few seconds, fully transparently. And they ship the same symbol server with their tooling so you can run it for your own binaries, too. You can't do that on any Linux distro without starting to manually install random -dbg packages.
(And don't get me started on trying to debug crashed processes on Linux. A big reason Android has their custom libc is so that they can install default signal handlers for things like SIGSEGV, where on a normal Linux system that process goes immediately to core dump and is essentially fucked for a debugger wanting to look at its state.)
In my experience you would strip the symbols out of the prod binary, and save them separately somewhere.
Then your production binary will give you stack traces like [0x12345, 0xabcde, ...], but you can use the separately-stored files to symbolicate them and get the source file/line info.
Not sure if this is possible on all platforms, but it is at least for all combinations of {C, C++, Objective-C, Rust} and {Linux, macOS, iOS}.
And if that added operational complexity is not worth the size savings, you can freely choose not to do it, and things will work like they do in Go.
Separable debuginfo which can be loaded at runtime.
DWARF uses an efficient encoding for this sort of address-to-line mapping (the line-number information is a compact bytecode program, much smarter than a flat table). And of course there are things like core dumps and crash dumps being sent to automated processing, where the dev tools have the full debug symbols while production deployments do not.
Go's insistence on not just reinventing the wheel but actively ignoring core infrastructure improvements made in the last 20 years is bizarre.
That is debug information. Just store it elsewhere (not in the binary you ship everywhere) and use it in conjunction with your core dump to debug.
A lot of them have symbol files separate from the binary. Unixy tooling doesn't do this by default but for example objcopy(1) in binutils can copy symbols to another file before you run strip(1), and on Mac my memory is rusty but I think it may be dsymutil(1) that lets you copy to a .dSYM bundle. Microsoft has its .pdb files and never even keeps debug info inside the binary proper.
The debug info is in a separate file. You only need that file when you’re inspecting a crash report, so it doesn’t need to be pushed out to the host device(s).
Because it's not a problem, so everybody does the same. And it's not about the programming language, it's about the programmer's choice: if they want debug info inside a production program, the language lets it happen. In today's age the size of your executable is a non-issue. The only issue should be your performance.
Here is an example from my past. As an embedded programmer I once manually added a hundred lines of constants, which initially were just an array generated at startup, and that increased the code size by about 5%. Why? Because I gained 5 ms in execution speed, and in the embedded world that's huge, especially when your code is executed on the lowest 10 ms cycle. So the department head approved such a huge change, because code size doesn't matter (you can always buy a bigger chip), but if your car doesn't start because the watchdog keeps resetting the car's on-board computer, then speed of code execution is everything.
> Because it's not a problem, so everybody does the same. And it's not about the programming language, it's about the programmer's choice: if they want debug info inside a production program, the language lets it happen. In today's age the size of your executable is a non-issue. The only issue should be your performance.
I figured that it would be fairly obvious why claiming that executable size means little today and that performance is the only metric that matters is a gross misrepresentation of the software industry.
I do not know much about Go, but languages like C++ and Java give you the tools to make tradeoffs appropriate to your situation: externalizing or stripping symbols and/or debugging information.