I know it's a Microsoft project. My reasoning is that if Ollama supported ONNX and could deliver performance on par with or better than llama.cpp's, it would make sense for Microsoft to acquire Ollama for distribution reasons.
Llama.cpp is the valuable bit here; Ollama is only good for end-user convenience. It saves you 20 minutes of googling and futzing with the million-and-one llama.cpp wrappers available for every language, and it lets you set things up to load on startup, but if you're building something for scale or for a backend, neither llama.cpp nor Ollama is coming along for the ride. At best it'll survive the proof-of-concept stage, but as soon as you start caring about performance it gets discarded.
Microsoft isn't going to pay for something that amounts to a useful setup script wrapped around an inefficient convenience library meant to let people run AI on consumer hardware. There's no exploitable value proposition there, whereas building their own closed-source AI systems, tightly coupled to the Windows ecosystem and favoring cloud services, lets them extract maximum rent.