Hacker News

For CUDA and ML you'd be much better off choosing a 3060. Honestly, if you've only got the money for a 4 GB 3050, you're probably better off working in Google Colab.


With layer offloading enabled, I don't necessarily agree. Not being able to load an entire model into VRAM isn't a dealbreaker these days: you keep some layers on the GPU and run the rest from system RAM. You can even spill onto swap space if your drive is fast enough, so there's really no excuse not to use the hardware if you have it. Unless you just like the cloud and hate setting stuff up yourself, or what have you.
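To make the offloading point concrete, here's a back-of-envelope sketch of how many layers fit on a small card. All the numbers are hypothetical (a 32-layer model with ~0.12 GiB per quantized layer is a rough stand-in for a 4-bit 7B model, and the 1 GiB reserve for activations and the CUDA context is a guess); real tools like llama.cpp expose this as a layer-count knob rather than computing it for you.

```python
GiB = 1 << 30

def layers_on_gpu(n_layers, layer_bytes, vram_bytes, reserve_bytes=1 * GiB):
    """Estimate how many model layers fit in VRAM, holding back a
    reserve for activations and the driver/runtime context.
    The remaining layers stay in system RAM (or swap)."""
    usable = max(0, vram_bytes - reserve_bytes)
    return min(n_layers, usable // layer_bytes)

# Hypothetical 4 GB card vs. a 32-layer model at ~0.12 GiB per layer:
fit = layers_on_gpu(n_layers=32, layer_bytes=int(0.12 * GiB), vram_bytes=4 * GiB)
print(fit)  # most, but not all, layers land on the GPU
```

The takeaway is that even a 4 GB card can host the majority of a small quantized model's layers, with the remainder streamed from host memory at some speed cost.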



