justaboutanyone | 15 days ago | on: Qwen3-Coder-Next
Running llama.cpp rather than vLLM, it's happy enough to run the FP8 variant with 200k+ context using about 90 GB of VRAM.
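For anyone who wants to try the same setup, here's a minimal sketch using the llama-cpp-python bindings; the GGUF filename and the exact context/offload settings are assumptions on my part, not necessarily what was used above.

    # Minimal sketch: load an 8-bit GGUF of Qwen3-Coder-Next with a long
    # context window via llama-cpp-python. The filename is hypothetical;
    # tune n_ctx / n_gpu_layers to whatever your hardware actually fits.
    from llama_cpp import Llama

    llm = Llama(
        model_path="Qwen3-Coder-Next-Q8_0.gguf",  # hypothetical filename
        n_ctx=200_000,     # ~200k context, as mentioned above
        n_gpu_layers=-1,   # offload every layer to the GPU
    )

    out = llm("Write a C function that reverses a singly linked list.", max_tokens=256)
    print(out["choices"][0]["text"])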
cmrdporcupine | 15 days ago
Yeah, but what did you get for tok/sec there? Memory bandwidth is the limitation with these devices. With 4-bit I didn't get over 35-39 tok/sec, and averaged more like 30 when doing actual tool use with opencode. I can't imagine FP8 being faster.
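If you want to compare numbers directly, a rough way to measure throughput with the same bindings (prompt, filename, and settings are placeholders):

    # Rough throughput check: time one generation and divide emitted tokens
    # by wall-clock time. Prompt processing is lumped in with decoding here,
    # so treat the result as a lower bound on pure decode speed.
    import time
    from llama_cpp import Llama

    llm = Llama(model_path="Qwen3-Coder-Next-Q8_0.gguf", n_ctx=8192, n_gpu_layers=-1)

    start = time.perf_counter()
    out = llm("Explain how a B-tree handles inserts.", max_tokens=512)
    elapsed = time.perf_counter() - start

    n_tokens = out["usage"]["completion_tokens"]
    print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tok/sec")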