
How useful would it be as a GPU?


Not very useful; this is the polar opposite of a GPU design.

It does share the latency-hiding-through-parallelism design, but GPUs do that scheduling at a fairly coarse granularity (viz. the warp). The barrel processors on this thing round-robin through their hardware threads on every instruction.
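To make the contrast concrete, here is a minimal sketch (illustrative only, not based on any specific chip's documentation) of what round-robin barrel scheduling looks like: a different hardware thread issues on every cycle, so a long-latency memory access in one thread never stalls the pipeline.

```python
# Hypothetical sketch of barrel-processor scheduling: one instruction
# from a different hardware thread issues each cycle, round-robin.
from collections import deque

def barrel_schedule(num_threads, cycles):
    """Return which thread issues on each cycle under round-robin."""
    threads = deque(range(num_threads))
    issued = []
    for _ in range(cycles):
        issued.append(threads[0])
        threads.rotate(-1)  # next cycle, next thread
    return issued

# With 4 hardware threads, consecutive cycles always come from
# different threads, hiding per-thread memory latency:
print(barrel_schedule(4, 8))  # → [0, 1, 2, 3, 0, 1, 2, 3]
```

A GPU, by contrast, schedules at warp granularity: the same warp may issue for many consecutive cycles, and the scheduler only switches warps to cover stalls.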

GPUs are designed for dense compute: lots of predictable data accesses and control flow, and high arithmetic intensity (FLOPS per byte moved).

In contrast, this is designed for lots of data-dependent, unpredictable accesses at 4-8 byte granularity, with little to no floating-point work.


These particular chips seem targeted more at HPC workloads (and an HPC price point).

As for this sort of architecture, I wouldn't be surprised if current GPUs were doing something similar.

If you think about executing a shader program, you are typically running the same code over a bunch of data, which maps naturally onto multiple threads.

https://en.wikipedia.org/wiki/Thread_block_(CUDA_programming...

https://yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-...
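A toy illustration of the SIMT idea from the links above: the same "shader" function runs once per data element, with each invocation treated as a logical thread grouped into blocks. The names here (`shader`, `launch`, `block_size`) are illustrative, not CUDA API.

```python
# Toy sketch of SIMT-style execution: identical code runs per thread,
# each thread indexed into its own data element, threads grouped in
# blocks the way a GPU grid maps onto the data.

def shader(thread_id, data):
    # Every logical thread executes the same code on its own element.
    return data[thread_id] * 2

def launch(kernel, data, block_size=4):
    results = [None] * len(data)
    # Iterate block by block, then thread by thread within a block,
    # mimicking how a grid of threads covers the input.
    for block_start in range(0, len(data), block_size):
        block_end = min(block_start + block_size, len(data))
        for tid in range(block_start, block_end):
            results[tid] = kernel(tid, data)
    return results

print(launch(shader, [1, 2, 3, 4, 5]))  # → [2, 4, 6, 8, 10]
```

On real hardware the threads in a block execute in lockstep warps rather than sequentially, but the mapping of one thread per data element is the same.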



