
You are talking about two different things. One is the latency of moving data from main memory to GPU memory, and the other is whatever video-game metric you are using.


No, they are effectively the same thing.

To get 60fps, your video card needs to push out a frame every ~16ms. If, say, 6ms of that is actual on-card processing and 10ms is CPU/API overhead, and you then add another CPU call plus a system-memory access costing 30-35ms of latency, you are now at ~50ms per frame, which means you can only output about 20 frames per second.
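A quick back-of-the-envelope version of that arithmetic (the 6ms / 10ms / 30-35ms figures are the ones above; the script itself is just illustrative):

    # Frame-budget arithmetic from the comment above (illustrative only).
    def fps(frame_time_ms):
        return 1000.0 / frame_time_ms

    gpu_work = 6.0    # ms of on-card processing
    cpu_api  = 10.0   # ms of CPU/API overhead
    extra    = 32.5   # ms, midpoint of the 30-35ms system-memory round trip

    print(fps(gpu_work + cpu_api))          # ~62.5 fps -> the 60fps case
    print(fps(gpu_work + cpu_api + extra))  # ~20.6 fps -> roughly 20fps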



