
I think it may not be a great comparison. Word-level n-grams of human speech/writing are far more deterministic than the kinds of problems ML usually tackles. If you write the word "because", then "of", "the", or some pronoun is an extremely safe bet for the next word, regardless of the recorded probabilities. I imagine you could even randomize the probabilities entirely and not see any issues.
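To make the claim concrete, here is a minimal sketch (the corpus and function names are hypothetical, just for illustration): a word-level bigram table built from a toy text, with a sampler that can either respect the recorded counts or ignore them and pick uniformly among observed successors. For a word like "because" with essentially one plausible successor, the two modes agree.

```python
import random
from collections import defaultdict, Counter

def build_bigrams(text):
    """Count word-level bigram successors."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def sample_next(table, word, randomize=False, rng=random):
    """Pick a successor of `word`. If randomize=True, discard the
    recorded counts and choose uniformly among observed successors."""
    succ = table[word]
    if not succ:
        return None
    if randomize:
        return rng.choice(sorted(succ))
    choices, counts = zip(*succ.items())
    return rng.choices(choices, weights=counts)[0]

corpus = ("the cat sat because of the mat "
          "the dog ran because of the rain")
table = build_bigrams(corpus)
# Every occurrence of "because" in the corpus is followed by "of",
# so weighted and randomized sampling give the same answer:
print(sample_next(table, "because"))                  # of
print(sample_next(table, "because", randomize=True))  # of
```

Of course, for higher-entropy contexts (words with many successors) the two modes diverge; the point is just how often natural language pins the next word down almost regardless of the probabilities.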

But I'm no expert and hardly even an amateur, so maybe it is a similar kind of thing with ML. I know randomized optimization is a big deal in ML, though I'm not sure to what extent that can be analogized to randomizing Markov model probabilities.


