Hacker News

They claim it's twice as fast as TensorFlow, which is not a blow-you-out-of-the-water number (compare to the ~50x speedup a GPU gives in many cases), but it's a solid speedup.

It's easily parallelizable on GPUs, or so the claim goes.

Its configuration language is much, much shorter than Caffe's, but on inspection it's also much less flexible, and they've implemented a damn sight less: no recurrent anything, no LSTMs, none of the gating machinery you'd need for LSTMs, no residual-net stuff, just off the top of my head.

The docs look much, much less complete than TensorFlow's or Theano's. Note that the dropout probability is mentioned in the user docs, but the actual documentation for the dropout feature is hidden away inside the repo.
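For anyone unfamiliar with what that dropout probability controls: here's a minimal sketch of inverted dropout in NumPy (my own illustration, not this library's implementation), where `p` is the probability of zeroing a unit.

```python
import numpy as np

def dropout(x, p, rng):
    # Zero each unit independently with probability p, and scale the
    # survivors by 1/(1-p) so the expected activation is unchanged
    # (so no rescaling is needed at inference time).
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones((4, 5))
y = dropout(x, 0.5, rng)
# Surviving entries are scaled to 2.0; the rest are 0.0.
```
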

The important thing, however, is that they claim a significant improvement when training on extraordinarily sparse datasets, like the ones behind recommender systems. It seems very specialized for that exact purpose: see that it only accepts data in NetCDF format, which is common enough in climatology-land but much less common in machine-learning-land proper.
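To make "extraordinarily sparse" concrete, here's a toy user-by-item ratings matrix in SciPy's CSR format (my own hypothetical data, nothing to do with this library's internals): almost every entry is an implicit zero, and only the handful of observed interactions are stored.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical recommender data: 3 users x 5 items, but only
# 4 observed ratings. CSR stores just the nonzeros plus indices,
# which is the kind of representation sparse training exploits.
rows = np.array([0, 0, 1, 2])   # user indices
cols = np.array([1, 4, 2, 0])   # item indices
vals = np.array([5.0, 3.0, 4.0, 1.0])  # ratings
ratings = csr_matrix((vals, (rows, cols)), shape=(3, 5))
```

Real recommender matrices are far more extreme (millions of users and items, density well under 0.1%), which is why dense-matrix frameworks waste most of their work on zeros there.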

The test coverage... To a first approximation, there is no test coverage. It feels quite research-project-y.



