I don't think I'm falling for the ELIZA effect.* I just feel that if you have a small enough model that can accurately handle a wide enough range of tasks, and that is resistant to a wide enough range of perturbations of its input, it's simpler to assume it's doing some sort of meaningful simplification inside. I didn't call it intelligence.

* But I guess that's what someone who's falling for the ELIZA effect would say.
