
Is there an easy way to get a single-node production-ready Kubernetes instance? I'd like to start using the Auto DevOps features that GitLab is adding, but all the tutorials I can find either have you installing minikube on your laptop or setting up a high-availability cluster with at least 3 hosts. Right now I'm using CoreOS and managing Docker containers with systemd and shell scripts, which works all right but is tedious and kind of hard to keep track of. I don't have anything that needs to autoscale or fail over or do high availability, I just want something that integrates nicely and makes it easy to deploy containers.

EDIT: I should have clarified, I want to self-host this on our internal VMware cluster, rather than run it on GKE.



> Is there an easy way to get a single-node production-ready Kubernetes instance?

Not really. There are plenty of ways of getting a single node instance. None of them will give you a "production-ready" one, because they don't define it that way (and I happen to agree). You can of course do whatever you want.

Since you are using VMware anyway, why can't you spin up more VMs (maybe smaller)? You can vMotion them away to different nodes when you are ready to actually make the cluster HA. It is a really, really good idea to keep master and workers separate, even if you run a single node for each.

Failure of the worker will of course bring down your applications. When you recover it or spin up another one, K8s will recover your apps for you. Failure of the master will not adversely affect your running systems, only the cluster's ability to manage itself and its self-healing capabilities (which will affect uptime at some point).

Failure of the combined master/worker/etcd node should be recoverable, but frankly, at this point, should you care? I would just shoot it in the head and add some automation to provision a brand new cluster and deploy those applications again. Since you are not worried about HA and just want a place to deploy the containers, just make the k8s single-node-cluster cattle.


Sure. Install kubeadm on the node, run "kubeadm init", install a pod network, then remove the master taint.
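A minimal sketch of those three steps (flannel is just one pod-network choice, and the manifest URL shown is illustrative):

```shell
# On a fresh node with Docker and kubeadm already installed:
kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR expected by flannel

# Point kubectl at the new cluster's admin credentials
export KUBECONFIG=/etc/kubernetes/admin.conf

# Install a pod network add-on (flannel shown here; any CNI add-on works)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Remove the master taint so regular workloads can schedule on this node
kubectl taint nodes --all node-role.kubernetes.io/master-
```

After the taint is removed, the single node acts as both control plane and worker.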


Reminds me of the plumbus "How It's Made".


^ This.


I am actually in a very similar situation. I want to run a microservices system on a single k8s node for testing purposes, put nginx with SSL in front of it and add Jenkins for automation.

Took the day to set up minikube on a CentOS server and play around; however, I wasn't able to expose anything to the outside world. Looking into Ingress at the moment, but the documentation is a bit loose there, I think.
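For what it's worth, the simplest way to expose something outside the cluster without an Ingress controller is a NodePort service (the deployment name "web" here is a placeholder):

```shell
# Expose port 80 of a hypothetical "web" deployment on a high port
# (30000-32767 range) bound to the node's own IP
kubectl expose deployment web --type=NodePort --port=80

# Look up which node port was assigned
kubectl get service web -o jsonpath='{.spec.ports[0].nodePort}'
```

An nginx reverse proxy with SSL on the host can then forward to that node port until Ingress is sorted out.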

Another comment suggests setting up a single-node cluster and removing the taint on the master; maybe I will try that instead.

edit: any advice much appreciated!


I ran into the same problem with exposing it to the outside world, and also found the documentation there to be a problem. I got closest with kubespray; originally I was working from the Kubernetes documentation on a 4-node cluster.


You can run Google Kubernetes Engine with a single node, just set the size to 1. It will complain about the high availability, etc. but it works just fine if you don't need that.


To the best of my knowledge, there is no such thing as a single-node, production-ready k8s cluster.

> I don't have anything that needs to autoscale or fail over or do high availability

You should not be using Kubernetes.


What should you be using, then, if your development workflow outputs container-images, and you want to deploy them to your own hardware with remote orchestration + configuration management (and you consider Docker Swarm dead-in-the-water)?

That is: what is the “run your own thing-that-is-slightly-lower-level-than-Heroku on a fixed pool of local hardware resources” solution in 2018?


Just to be clear: you can run Kubernetes on a single node. It just won't be "production ready", because the minimum qualifications for "production ready" have been raised in the Kube world. A single node running anything isn't production ready, let alone Kube. A single node running an nginx server isn't "production ready" anymore.

But kubeadm will still do it. Kops if you're on AWS, GKE if you're on GCP. Plain Docker would be easier to set up, though, and that may be all the OP really needs.


I love how there is a plurality of solutions which seem to address everything but your use case...


I would probably just use either docker-compose or systemd+runc for a single node, then use Ansible to manage configurations. Kubernetes strongly assumes you've got a cluster, not a single node.
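For the systemd route, a minimal unit file per container is enough to get restart-on-failure and boot-time startup; a sketch (the unit name, image, and port are placeholders):

```shell
# Write a hypothetical single-container unit, /etc/systemd/system/myapp.service
cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container from a previous run, then start fresh
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 registry.example.com/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now myapp
```

Ansible can template that unit file per service, which keeps the whole setup declarative without pulling in an orchestrator.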


To be honest for something small and simple I’d just throw up a Rancher server and call it from Ansible.


You could try "The canonical distribution of Kubernetes" https://www.ubuntu.com/kubernetes . That has a single node deployment using lxd, IIRC.


that's Canonical, with a capital C


I think kubespray[1] does what you want, but I'm not 100% sure.

[1]: https://github.com/kubernetes-incubator/kubespray


I've built a bunch of simple scripts for this that you can find here [0]. It's not polished for public consumption or updated to k8s 1.10 yet, but it's what I use for production clusters, some of them single-node. Run the preparation script, then the master script, setup an .env file beforehand with the few required variables, and you're good. Feel free to ask questions here or in the repo issues section.

EDIT to add: It's assuming Ubuntu LTS as the node's OS, not sure if that fits your use case. Should be possible to adapt this to ContainerLinux or anything else without much trouble.

I haven't worked with GL's Auto DevOps yet, but I think the cluster should have everything necessary to get going with that.

[0] https://github.com/seeekr/kubeops


If you want a middle ground between hand-written shell scripts and full-blown Kubernetes, we use Hashicorp's Nomad[0] on top of CoreOS at $dayjob and are quite happy with it.

Similar use case - self-hosted VMs, for low-traffic, internal tools, and no need for autoscaling.

I can't speak to how well it integrates with Gitlab's Auto DevOps, but Nomad integrates very well with Terraform[1] and I'd be surprised if there wasn't a way to plug Terraform into Gitlab's process.

0: https://www.nomadproject.io/

1: https://www.terraform.io/


I would be very cautious about calling anything single-node production-ready.


This is for a bunch of internal tools, so if it goes down it's more of a nuisance than anything. Is there something that makes a single-node kubernetes setup less reliable than a single server without kubernetes?


I have my group's internal Jenkins service hosted on a single node EC2 instance running Kubernetes (t2.medium) and I would echo all of the advice you're getting. Kubeadm, definitely. And moreover, don't call it production-ready.

A production-ready cluster has dedicated master(s), period. In order to get your single-node cluster to work (so you can schedule "worker" jobs on it) you're going to "remove the master taint," which signals that this node is no longer reserved for kube-system pods only. That will mean that if you do your resource planning poorly with limits and requests, you will easily be able to swamp your "production" cluster and put it underwater, until a reboot.

(The default configuration of a master will ensure that worker pods don't get scheduled there, which makes it harder to accidentally swamp your cluster and break the Kube API, but also won't do anything but basic Kubernetes API stuff.)
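Setting requests and limits on every workload is what keeps a combined master/worker from being swamped; a sketch of a pod spec with both, applied via stdin (the name, image, and values are arbitrary):

```shell
# Illustrative pod with resource requests (for scheduling) and limits
# (a hard ceiling enforced at runtime)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
EOF
```

With limits in place, a runaway pod gets throttled or OOM-killed instead of starving the API server sharing the same node.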

If things go south, you're going to be running `kubeadm reset` and `kubeadm init` again because it's 100% faster than any kind of debugging you might try to do, and you're losing money while you try to figure it out. That's not a production HA disaster readiness or recovery plan.
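That "shoot it in the head" recovery, as a sketch:

```shell
# Tear down all cluster state on this node
kubeadm reset

# Stand the cluster back up from scratch
kubeadm init

# ...then reinstall the pod network, remove the master taint, and
# redeploy your applications from backed-up manifests and volumes
```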

But it 100% works. Practice it well. Jenkins with the kubernetes-plugin is awesome, and if I have a backup copy of the configuration volume and its contents, I can start from scratch and be back to exactly where I was yesterday in about 15-20 minutes of work.

My 1.5.2 cluster's SSL certificate expired a few weeks ago, on the server's birthday. After several hours spent trying to reconcile how SSL certificate management has changed, find the proper documentation on replacing the certificate in that ancient version, and weigh whether to upgrade and what that would mean (read: figuring out how to configure or disable RBAC, at the very least)... I conceded that it was easier to implement the "DR-plan Lite" that we had discussed, went ahead and reinstalled over the same instance "from scratch" again with v1.5.2, and got back to work in short order.

I've spoken with at least half a dozen people that said administering Jenkins servers is an immeasurable pain in the behind. I don't know if that's what you intend to do, but I can tell you that if it's a Jenkins server you want, this is the best way to do it, and you will be well prepared for the day when you decide that it really needs more worker nodes. It was easy to deploy Jenkins from the stable Helm chart.


I've done a number of 1.5 to 1.9 migrations, if you need help figuring out what API endpoints/etc have changed I can give you some guidance if you ping me on k8s slack; mikej.

Once you get onto 1.8+ w/ CRDs you can manage your SSL certs automatically via Jetstack's cert-manager; https://github.com/jetstack/cert-manager/tree/master/contrib...


Thanks! I will check it out!

It just hasn't been a priority. I have no need for RBAC at this point, as I am the only cluster admin, and the whole network is fairly well isolated.

I couldn't really think of a good reason to not upgrade when it came time to kubeadm init again, but then I realized I could probably save ten minutes by not upgrading, it was down, and I didn't know what the immediate consequences of adding RBAC would be for my existing Jenkins deployment and jobs.

Chances are it would have worked.


Honestly, for the situation you presented you'll find very few QOL improvements by upgrading. You could probably sit on 1.5 on that system (internal Jenkins) forever.


The biggest driver is actually just to not be behind.

You can tell already from what little conversation we've had that "always be upgrading" is not a cultural practice here (yet.)

We have regular meetings about changing that! Had two just yesterday. Chuckle


I don't think so, provided it has the necessary resources to run everything in a single node. There are a few more moving parts which you won't really be using to any great extent.


more parts = more things that can go wrong


That's not quite true. More parts == more things that can fail, but whether those failures result in the entire system failing depends on how you've combined the parts.

If you make each of the pieces required parts of the whole, then yes - adding more of them will increase the chance that the whole system fails. But in kubernetes, the additional pieces (nodes) are all redundant parts of the whole, and can fail without affecting the availability of the whole system. The more nodes you add, the more redundancy you're adding, and the less chance that the system as a whole will be affected.

Mathematically:

If a component fails with probability F, then adding N of them "in series" (all of them need to work) means your whole system fails with probability 1-(1-F)^N. In other words, as N goes up, the system approaches a 100% chance of failure.

OTOH, if you combine the parts "in parallel", and you only need any one[1] of the components to work in order for the whole system to work, then the system has an F^N chance of failure. As N goes up, this system approaches a 0% chance of failure.

[1] Kubernetes (etcd) isn't quite this redundant, since etcd needs a majority quorum to be functional, not just any single node. But the principle is similar and still gets more reliable as you add nodes.
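A quick numeric check of the two formulas, taking F = 1% per-component failure and N = 3 components:

```shell
awk 'BEGIN {
  F = 0.01; N = 3
  # Series: every component must work, so failures compound
  printf "series failure:   %.6f\n", 1 - (1 - F)^N
  # Parallel: any one working component suffices
  printf "parallel failure: %.6f\n", F^N
}'
```

Three components in series fail about 2.97% of the time, while three in parallel fail one time in a million.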


You can set the node count to 1 when you create the cluster in GitLab


For more information on how to connect the cluster to GitLab please see https://docs.gitlab.com/ee/user/project/clusters/


Use GKE, it's super easy to set up and the master is free. If it's for internal tools you could even run it on a group of preemptible instances, which will auto-restart and self-heal when one gets terminated.
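A sketch of that setup (the cluster name and zone are placeholders):

```shell
# Single-node GKE cluster on a preemptible VM; the master is managed by
# Google (and free), so only the one node is billed
gcloud container clusters create internal-tools \
  --zone us-central1-a \
  --num-nodes 1 \
  --preemptible
```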



