This is great to see. I've seen lots of other lists on the web, but I'm always happy for more (perhaps this one will stay up to date).
I'd suggest adding a publication-date column (should be easy to scrape if need be). One thing I usually look at as an ML researcher is recency; it's not always a good proxy for quality, but I'd much rather take a classification model from 2019 than one from 2014, for example.
We've packaged a number of models in OpenFaaS and publish the containers in our function store. You can check images for nudity, colourise images, or do OCR, and an ImageNet classifier is also available (called inception). I'd welcome contributions; TensorFlow models are very easy to serve.
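For anyone curious what "serving a model as a function" looks like: the OpenFaaS Python template expects a `handler.py` exposing a `handle(req)` function. Here is a minimal sketch; the `classify` stub stands in for real TensorFlow inference (loading an actual model, the JSON shape, and the label values are all illustrative assumptions, not the function store's real code).

```python
# handler.py -- entry point shape used by the OpenFaaS Python template.
# A real function would load a TensorFlow model once at import time;
# a stub classifier stands in here so the sketch is self-contained.

import json


def classify(image_bytes):
    # Placeholder for model inference (e.g. a TensorFlow SavedModel call).
    # Returns a fixed answer purely for illustration.
    return {"label": "cat", "confidence": 0.87}


def handle(req):
    """OpenFaaS invokes this with the raw request body; the return
    value becomes the HTTP response body."""
    result = classify(req)
    return json.dumps(result)
```

Because each model lives behind the same tiny interface, swapping the OCR container for the coloriser is just a different function name in the URL.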
Thanks for pointing me at OpenFaaS! I've been working on packaging a bunch of tedious things (facial recognition, filling PDF forms, file conversion, etc.) into self-contained Docker containers with REST endpoints for use in my projects, and it never occurred to me that I was basically implementing FaaS. Now I know where to find more (and I'll eventually submit some of mine if they're missing).
modelzoo.co is good for finding the best/most-used models, but it's far from a complete zoo and is missing models for certain tasks.
For a while I liked the GAN Zoo, which was specifically for GANs, but I guess there got to be too many GANs, so it's no longer maintained.
I like the approach ONNX is taking by standardizing the format. Hopefully a standard format also leads to a central place, or at least a standard way, to find all of these models.
Have you thought of adding GitHub-like features such as `forking` and `cloning`? I've been thinking along those lines. E.g. one could fork the ResNet model and then transfer-learn it to make it e-commerce-apparel-specific or self-driving-specific.
Great idea; I hope it will lead to more practical use of deep learning. Any plans to build one for datasets as well? Something like Kaggle's collection, but for datasets used in research papers.
You just re-fit the model on different data; it's called transfer learning. The idea is that the model will encounter similar features in your input data, so the weights won't need to be adjusted much. You can also freeze the weights of entire layers and train only the new ones.
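A toy sketch of the freeze-and-re-fit idea, using NumPy so it stays self-contained: a fixed random projection stands in for the pretrained frozen layers, and only a fresh "head" is trained on the new task. Names like `W_frozen` and `w_head`, and the toy task itself, are illustrative assumptions, not any real model-zoo API.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a fixed random projection standing in
# for the frozen layers of a network trained on the source task.
W_frozen = rng.normal(size=(4, 8))  # frozen: never updated below


def features(X):
    # The frozen layer: its weights are used but never changed.
    return np.tanh(X @ W_frozen)


# Toy target-task data: label is 1 when the first input is positive.
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(float)

# Fresh head for the new task -- the only weights we train.
w_head = np.zeros(8)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Fine-tune the head with plain gradient descent on logistic loss.
H = features(X)  # frozen features, computed once
for _ in range(500):
    p = sigmoid(H @ w_head)
    grad = H.T @ (p - y) / len(y)
    w_head -= 0.5 * grad  # only the head moves; W_frozen stays put

accuracy = ((sigmoid(H @ w_head) > 0.5) == y).mean()
```

In a real framework the same idea is a one-liner per layer (e.g. marking pretrained layers as non-trainable before compiling), but the mechanics are exactly this: reuse the frozen features, fit only the new weights.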
Also, Papers with Code (https://paperswithcode.com/) has leaderboards and code :) and, most of the time, a pre-trained model.