It's a hardcoded map of int to string. That's a fairly reasonable way of doing it in vanilla C. How efficient it actually is comes down to your compiler. And some compilers impose limits on the number of cases a `switch` statement can have.
An alternative would be a precomputed hash table, similar to what `gperf` generates. That requires more work in the build system, though.
Besides the curious length of the switch, I'm not seeing anything particularly 'bad' about this code... I'm sort of left wondering what the big deal is. This is perfectly reasonable code.
It doesn't look too terrible, but it looks like it is optimized for speed, not maintainability. I'm assuming that's why there is a lot of repeated code (instead of putting it into a function/loop). The comments are a little spartan.
Sure. And it's a no-go on a laptop. Though you can probably get a ~5-year-old desktop with PCIe basically for free if you ask around friends and relatives. If you ssh into it, there's no need for a monitor either.
Any OpenCL/CUDA program for GPGPU would require this. Different GPUs have different characteristics (core count, available instructions, memory sizes and speeds) that need to be taken into account and can be optimized for. This can only be done once you know the target device, which is known only at run time.
This is why, for GPGPU programs, you often supply the kernel/shader as C source or another intermediate representation (e.g. vendor-specific assembly), and the final compilation step is done by the GPU driver at run time.