
Yeah. But it’s likely an 8-bit quantised, very small model with few parameters, which translates into poor recall and lots of false positives.

How many parameters does the model you’re running on the Hailo have? What quantisation is it using, and which model is it actually?



Honestly I have no idea what you are asking about. It's just dedicated hardware running a YOLO-like object detection model.


They are asking about LLMs. There is some confusion, it seems -- you are thinking of the object detection model (YOLO), which runs perfectly fine in (near) real time on a Coral or other NPU. The parent is referring to the LLaVA part, which is a full-fledged language model with a vision projector glued onto it for vision capability. Large language models are generally quantized (converted from full-precision floats to less precise floats or ints, e.g. F16, Q8, Q4) because they would otherwise be extremely large and slow and require a ton of RAM. The model has to read the entire set of weights for every token generated, so without a large amount of VRAM you would be pushing many tens of gigabytes of weights through the system bus for each token.
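The bandwidth argument above can be sketched as back-of-envelope arithmetic. This is a hypothetical illustration (the parameter count, quantisation, and bus speed are assumed example values, not figures from the thread), assuming generation is bound purely by reading every weight once per token:

```python
# Rough upper bound on tokens/second if generation is limited only by
# streaming the full model weights through memory once per token.
def tokens_per_second(n_params: float, bits_per_weight: int, bandwidth_gb_s: float) -> float:
    weight_bytes = n_params * bits_per_weight / 8  # total size of the weights
    return bandwidth_gb_s * 1e9 / weight_bytes     # reads per second = tokens per second

# Example: a 7B-parameter model at Q4 over an assumed ~50 GB/s system bus.
print(round(tokens_per_second(7e9, 4, 50.0), 1))
```

The same model at F16 would be 4x larger, so the ceiling drops by 4x -- which is why quantisation matters so much on bandwidth-starved hardware.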


Recall and false positives are classification metrics, which relate to the YOLO part.
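For reference, these metrics reduce to simple counts over a detector's binary decisions. A minimal sketch (the labels and predictions here are made-up illustrative values):

```python
# Recall = fraction of true objects the detector found;
# false positives = detections where nothing was actually there.
def recall_and_false_positives(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return recall, fp

# Example: 3 real objects, detector finds 2 of them plus 1 spurious hit.
print(recall_and_false_positives([1, 1, 0, 0, 1], [1, 0, 1, 0, 1]))
```

A heavily quantised or undersized model tends to lower the first number and raise the second, which is the complaint in the comment above.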



