In Kubernetes you can use Persistent Volumes to add persistent storage to your Docker containers. When creating a Persistent Volume (Claim), you have to configure a storage type and a storage capacity. When your application becomes successful and your data outgrows that capacity, you have to either extend the volume or create a new Persistent Volume. The latter isn't feasible in a production environment, but extending a Persistent Volume isn't supported out of the box in Kubernetes either. There is a solution though: extending the volume outside Kubernetes!
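As a minimal sketch of where that storage type and capacity get fixed, here is a PersistentVolumeClaim definition; the claim name, storage class, and 10Gi size are illustrative assumptions, not values from the post:

```shell
# Write a minimal PersistentVolumeClaim manifest; the names and the size
# are placeholders for illustration only.
cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
EOF
# In a real cluster you would submit it with: kubectl apply -f pvc.yaml
```

The `storage: 10Gi` request is exactly the limit the post is talking about: once the application's data approaches it, the claim itself offers no built-in way to grow.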
Generating certificate signing requests (CSRs), certificates, and keys can be a hassle. CloudFlare introduced the CFSSL and CFSSLJSON tools to make this a lot easier for all of us!
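For example, creating a self-signed CA takes a short JSON profile and a single pipeline of the two tools; the CN and key parameters below are illustrative assumptions, and the `cfssl` step only runs if the tools are installed:

```shell
# A minimal CSR profile for a self-signed CA; CN and key settings are
# placeholders for illustration.
cat > ca-csr.json <<'EOF'
{
  "CN": "example-ca",
  "key": { "algo": "rsa", "size": 2048 }
}
EOF

# cfssl emits JSON on stdout; cfssljson splits it into ca.pem, ca-key.pem
# and ca.csr on disk (guarded so the snippet is a no-op without the tools).
if command -v cfssl >/dev/null && command -v cfssljson >/dev/null; then
  cfssl gencert -initca ca-csr.json | cfssljson -bare ca
fi
```

That pipeline shape (cfssl producing JSON, cfssljson writing the files) is the pattern you reuse for server and client certificates as well.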
For the last few weeks I have been working with Kubernetes and OpenStack. It's a steep learning curve to get a production-ready Kubernetes cluster running on OpenStack, especially because I didn't want to use the available ready-to-use tools. In the next few blog posts, I want to share my experience of running Kubernetes on an OpenStack platform.
In this first blog post, I will discuss the infrastructure and how I use the OpenStack platform to run a production-ready Kubernetes cluster.
My continuous integration and continuous delivery pipeline uses Docker containers and a private Docker registry to distribute and deploy my applications automatically. Unfortunately, the Docker command-line tool can't really manage the Docker registry; it is only capable of pushing and pulling images (tags). This is frustrating because, when your continuous integration pipeline builds containers, pushes them to the registry, and pulls them again to run QA, the registry eats up all your disk space since the images are never removed. To clean up your "mess", you have to remove the images manually, but it's way cooler (and simpler) to use the Docker registry API for this job.
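A sketch of that cleanup against the registry's v2 HTTP API: the registry URL, image name, and tag are placeholders, and the registry must be started with `REGISTRY_STORAGE_DELETE_ENABLED=true` for the DELETE call to be accepted. The network calls are guarded so the snippet is a no-op when no registry is reachable.

```shell
REGISTRY=https://registry.example.com   # placeholder registry URL

# Helper: the v2 manifest endpoint for an image and a tag (or digest).
manifest_url() { echo "$REGISTRY/v2/$1/manifests/$2"; }

if curl -fsI "$REGISTRY/v2/" >/dev/null 2>&1; then
  # 1. Resolve the tag to its manifest digest. The v2 Accept header is
  #    required, otherwise the returned digest won't match the stored one.
  DIGEST=$(curl -sI \
    -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    "$(manifest_url myapp build-42)" \
    | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')

  # 2. Delete the manifest by digest; tags cannot be deleted directly.
  curl -fs -X DELETE "$(manifest_url myapp "$DIGEST")"

  # 3. Disk space is only reclaimed after running garbage collection
  #    inside the registry container:
  #    registry garbage-collect /etc/docker/registry/config.yml
fi
```

The two-step tag-to-digest dance is the part the `docker` CLI hides from you, and the garbage-collect run afterwards is what actually frees the blobs.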
For DevOps engineers like me, command-line tools are what make automation possible. Rancher provides two CLI tools: rancher and rancher-compose. I use these two tools to automate deployments, upgrades, and cleaning up environments that are no longer in use. Basically, it is possible to completely automate your continuous integration (CI) and continuous deployment (CD) pipeline (but that is something for another blog post!).
The number one challenge when using Docker in production environments is storage. On your local development machine you can mount a local directory path into your Docker container, but in production this isn't an option: you don't know whether the path exists, or whether the container will always run on the same node. One solution is NFS, and, when using Rancher, Rancher-NFS.
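To make the contrast concrete, here is a hedged sketch; the paths, image, and volume name are illustrative assumptions, and the `docker run` only executes when a Docker daemon is reachable:

```shell
# Development-only bind mount: the container sees the host directory, but
# only if that exact path exists on the node the container lands on,
# which is precisely what breaks in production.
mkdir -p /tmp/app-data
if docker info >/dev/null 2>&1; then
  docker run --rm -v /tmp/app-data:/data alpine ls /data || true
fi

# With a shared driver such as Rancher-NFS the data lives on the NFS
# server instead of on one node, so any host can mount it, e.g. declared
# as a named volume in docker-compose.yml:
#   volumes:
#     app-data:
#       driver: rancher-nfs
```

The difference is who owns the data: a bind mount ties it to one node's filesystem, while an NFS-backed named volume follows the container to whatever host it is scheduled on.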