There's a lot more than the few applications described at the end of the article. Even smaller models can handle many useful tasks: editing text, summarizing (not-too-long) documents, writing reasonable emails, expanding on existing text, adding details to a document, changing a turn of phrase, imitating someone's writing style... and more!
RAG is a very difficult topic. A basic RAG will just be crap and fail to answer questions properly most of the time. Once you accumulate techniques to improve beyond that baseline, however, it can become something very close to a proficient assistant on a specific domain (assuming you've indexed the files of interest), and it doubles as a local search engine.
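To make "basic RAG" concrete, here's a minimal sketch of the retrieval half: index some text chunks, rank them by similarity to the question, and paste the top hits into the LLM prompt. Everything here is illustrative; the bag-of-words "embedding" is a toy stand-in for a real sentence-embedding model, and the example chunks are made up.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector over lowercase tokens.
    # A real RAG system would use a learned sentence-embedding model here.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank indexed chunks by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical indexed documents, split into chunks.
chunks = [
    "The invoice module exports monthly reports as CSV.",
    "Authentication uses OAuth2 with refresh tokens.",
    "Reports can be scheduled from the admin panel.",
]

top = retrieve("how can reports be scheduled", chunks)
# The retrieved chunks would then go into the LLM prompt as context.
```

The failure modes come from exactly these steps: bad chunking, weak retrieval, or answers that need several chunks at once, which is why the improvement techniques matter so much.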
LLMs have many limitations, but once you understand their constraints, they can still do a LOT.
Could you elaborate on your opinions on RAG? My impression is that RAG is the industry's magic bullet for all the downsides and challenges posed by LLMs.
Sure.
RAG is an effective tool to make up for limited context lengths and to ensure you have relevant information to answer specific questions. But where it is not a magic bullet is that the accuracy you get from a RAG is by default very low. You can verify this by building a set of questions and related ground truths and checking how often your RAG gets to the truth or close enough. A vanilla RAG system without much added work will hover around 50% accuracy or less, and even lower if you focus only on complex questions that require combining quite a few different chunks to get to the real answer.
Overall, you don't know how good your RAG is until you test it extensively.
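A minimal sketch of that kind of test harness, under stated assumptions: `rag_answer` is a hypothetical stand-in for your full pipeline (retrieve + LLM call), and the crude substring check is a placeholder for a real grader (semantic similarity or an LLM judge).

```python
def evaluate(rag_answer, eval_set):
    # Crude scoring: an answer counts as correct if it contains the ground truth.
    # Real evaluations would use semantic matching or an LLM-as-judge instead.
    hits = sum(1 for question, truth in eval_set
               if truth.lower() in rag_answer(question).lower())
    return hits / len(eval_set)

# Hand-built question / ground-truth pairs (made-up examples).
eval_set = [
    ("What format are reports exported in?", "CSV"),
    ("Where can reports be scheduled?", "admin panel"),
]

def fake_rag(question):
    # Stub standing in for a real RAG pipeline, for demonstration only.
    return "Reports are exported as CSV files."

accuracy = evaluate(fake_rag, eval_set)  # 0.5: one answer out of two contains its truth
```

Running a set like this before and after each change is the only way to know whether a chunking or reranking tweak actually helped.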