I’ve spent a lot of time with Boost.Asio, serialising objects into a boost::variant to send across the wire; the server visits the variant to process each message. I’ve also used Boost shared memory for file data.
This was for both Unix domain sockets and TCP.
There are plenty of Boost examples around, so I’d suggest taking those and adapting them for your framework.
As I’m sure you’re aware, a clean, easy-to-read example will make a difference.
It’s great that you’re open source and I hope you get some traction.
Indeed, examples from every angle are probably the one deficit of the existing documentation. There are a couple, such as the perf_demo described in the blog post. I’d like to add ones showing integration with:
- epoll based event loop
- boost.asio based event loop
(Boost.interprocess and boost.asio are huge inspirations and are both used inside!)
As for traction: it’s tough! We have to get eyeballs, and then convey a sense of being worth one’s trust.
Integration with Boost.Asio would be of interest to many - myself included. It is the de facto standard for anyone who’s got past Stevens’ Unix Network Programming.
For what it’s worth at this time (obviously acting on the following statement will require some level of trust):
It is very much ready to use with boost.asio. (I know that, because I myself use boost.asio religiously. If it were not compatible with it, I'd pretty much have to not use Flow-IPC myself.) Though, it could (fairly easily) gain a number of wrapper classes that would turn our stuff into actual boost.asio I/O objects; then it'd be even more straightforward.
There's even the little section entitled, "I'm a boost.asio user. Can't I just give your constructor my io_context, and then you'll place the completion handler directly onto it?"
To summarize, though...
-1- You can have Flow-IPC create background threads as-needed and ping your completion handler (e.g., "message received") from such threads.
-2- You can have it not create any background threads, instead asking you to .async_wait() (via boost.asio, most easily; but also manually with poll() or whatever you want) whenever it needs internally to async-await something. Your own completion handler (e.g., handle just-received message M) shall execute synchronously at only predictable points, in non-blocking fashion.
-3- Direct integration with boost.asio - meaning ipc::transport::Channel (e.g.) would take an io_context/executor/whatever in its ctor, and .async_X(F) would indeed post F onto that io_context/executor/whatever = essentially syntactic sugar = a TODO. (I'd best file an Issue, I just remembered.)
The perf_demo (partially recreated in the blog-post) integrates into a single-threaded boost.asio io_context, using technique #2 above. In the source code snippets in the blog, we avoided anything asynchronous just to keep it focused for the max # of readers (hopefully).
Top tip: ensure your Asio code is not exported from a shared library.
I’ve been bitten by CephFS being built against one Asio version while my own code used another.
The fixes were simple, though.
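One common way to guard against this (assuming GCC or Clang; the library and source names below are made up for illustration) is to keep non-exported symbols, including Asio’s template instantiations, hidden inside each shared object, so two libraries built against different Asio versions can’t collide:

```shell
# Hide non-exported symbols (including Asio's instantiations) inside the .so;
# only symbols marked __attribute__((visibility("default"))) are exported.
g++ -fvisibility=hidden -fvisibility-inlines-hidden -fPIC -shared \
    -o libmyipc.so myipc.cpp
```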
Edit: as for performance, I’d not focus on that too much; it’ll depend on the end user’s circumstances. Myself, I’d instrument the interfaces with stack-based timings, dump them to a JSON file at exit, then graph under various loads and do an A/B comparison.
As an example, on a dedupe system I measured LZO performing better than LZ4, on HPE rack units with spinning-rust disks.
Edit 2: I’ve forwarded your GitHub to my work account. I’ll offer the research to a colleague (Jira backlog) to look at when “someone” wants our new system to be faster. We have a Boost.Asio solution I wrote that works over local Unix domain sockets, on Hitachi NAS.