
I am going to be honest, I am somewhat disappointed.

I was hoping for a CP system with sharding + replication.

Without which we can't have things like multi-key atomic operations etc...

Will wait and see though, maybe things are better than they seem.



I'm pretty sure that if you are a Redis user, whether you hate or like the current Redis Cluster design, what you never wanted is a CP system. CP systems need agreement on every query, so the latency and ops/sec figures are not suitable for Redis-like use cases.

Actually, with an existing CP system you can easily build the same Redis data structures, since CP systems are linearizable: you can have a CP shell that internally runs a Redis kernel. Imagine the CP system is Raft; you just treat Redis as your internal state machine, and every operation that is committed is applied to Redis by sending it the command.
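To make the "CP shell around a Redis kernel" idea concrete, here is a minimal sketch in Python. The consensus layer is stubbed out (a real Raft implementation would replicate entries to a quorum before committing); the point is only the apply-on-commit step, where each committed command drives a Redis-like in-memory state machine. All names here are illustrative.

```python
class KVStateMachine:
    """A tiny Redis-like in-memory store driven by committed commands."""
    def __init__(self):
        self.data = {}

    def apply(self, command):
        op, *args = command
        if op == "SET":
            key, value = args
            self.data[key] = value
            return "OK"
        if op == "GET":
            return self.data.get(args[0])
        raise ValueError(f"unknown command: {op}")


class ReplicatedKV:
    """CP shell: commands go through the (stand-in) consensus log,
    then are applied to the state machine in commit order."""
    def __init__(self):
        self.log = []               # committed entries, in commit order
        self.sm = KVStateMachine()

    def submit(self, command):
        # A real Raft layer would block here until a quorum has
        # replicated the entry; in this sketch commit is immediate.
        self.log.append(command)
        return self.sm.apply(command)


kv = ReplicatedKV()
kv.submit(("SET", "x", "1"))
print(kv.submit(("GET", "x")))  # -> 1
```

Because every replica applies the same committed log in the same order, all of them converge on the same state, which is what makes the shell linearizable.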


That is actually exactly what I wanted. :)

I have done something similar with Zookeeper recently (though I wrote it in Ruby, so it's terribly slow).

You would be surprised how few CP stores are actually available. There is an abundance of AP stores but very, very few useful CP stores, especially ones with a shared-nothing architecture.

You don't need every query to traverse the consensus algorithm. It's sufficient to use the consensus algorithm to agree on the master shard for each chunk of data, and then use synchronous replication within each shard.

Considering Redis is designed for 100% in-memory workloads, if you are operating on a low-latency network this is more than fine. Especially because this behavior would only be required for writes: reads could be served from either the master or a sync replica (and, in theory, with configurable read consistency, even a possibly out-of-date async replica).
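A rough sketch of the design described above: consensus decides only the shard map (which node is master for each chunk of the keyspace), while individual writes use synchronous replication inside a shard and reads can hit the master or any sync replica. The class names and the hash-based key routing are illustrative assumptions, not any real system's API.

```python
class Shard:
    """One chunk of the keyspace: a master plus its sync replicas."""
    def __init__(self, master, sync_replicas):
        self.master = master
        self.sync_replicas = sync_replicas
        # One in-memory store per node, to simulate replication.
        self.stores = {n: {} for n in [master] + sync_replicas}

    def write(self, key, value):
        # The master applies the write and waits for every sync
        # replica to ack before acknowledging the client.
        for node in [self.master] + self.sync_replicas:
            self.stores[node][key] = value
        return "OK"

    def read(self, key, node=None):
        # Reads may be served by the master or any sync replica,
        # since sync replication keeps them up to date.
        node = node or self.master
        return self.stores[node].get(key)


class ShardMap:
    """The piece the consensus algorithm would actually agree on:
    the assignment of keyspace chunks to master shards."""
    def __init__(self, shards):
        self.shards = shards

    def shard_for(self, key):
        return self.shards[hash(key) % len(self.shards)]


smap = ShardMap([Shard("a1", ["a2"]), Shard("b1", ["b2"])])
s = smap.shard_for("user:42")
s.write("user:42", "alice")
print(s.read("user:42", node=s.sync_replicas[0]))  # served by the replica
```

Only shard-map changes (failover, resharding) would pay the consensus round-trip; steady-state writes pay one synchronous replication hop, and reads pay nothing extra.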

I guess what I am sad about is just the lack of useful CP stores right now; I probably just need to suck it up and start writing my own.



