I'm using GitLab CI, configured with a docker+machine executor, to build and test my app on spot instances.
My main app requires a few microservices to be available in production as well as in the test step. All of these microservices are built and tested on the same GitLab CI server (each in its own pipeline). The output of each microservice is a Docker image that is pushed to the GitLab Docker Registry.
The test step I'm trying to build:
1. Provision a spot instance (if there's no idle one), installed with the microservice docker
2. Test step
2.1. Provision a spot instance (if there's no idle one), installed with app docker
2.2. Testing script
2.3. Stop the app container, release the spot instance
3. Stop the microservice container, release the spot instance
I've got 2.1, 2.2, and 2.3 to work by following the instructions here, but I'm not sure how to achieve the rest. I can run docker-machine explicitly in the YAML, but I'd like to use GitLab's docker+machine executor, as it's configured with the credentials, limitations, off-peak settings, etc.
Is this possible with GitLab's executor? How?
What's the "correct" way to go about doing something like this? I'm sure I'm not the first one testing with microservices, but I couldn't find any info on how to do so.
You are probably looking for the CI Services functionality. They have a couple of examples of how to use a service (MySQL, PostgreSQL, Redis), and if you use another Docker image, the service will get a hostname derived from the image name (e.g., tutum/wordpress will have DNS hostnames of tutum-wordpress and tutum__wordpress; for more info, refer to the details about hostnames).
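For example, here's a minimal sketch of a .gitlab-ci.yml test job that runs one of your microservice images as a CI service. The registry path, alias, and test script are hypothetical, and the alias keyword assumes a reasonably recent GitLab version:

test:
  image: docker:latest
  services:
    # hypothetical microservice image pushed to your GitLab registry
    - name: registry.gitlab.com/mygroup/my-microservice:latest
      alias: my-microservice
  script:
    # the service is reachable via its alias (or derived) hostname
    - ./run_tests.sh --host my-microservice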
There are also details about running PostgreSQL in the shell executor if you were so inclined, and there is a presentation on Testing things with GitLab CI and Docker.
This may be a stupid question.
Does Hyperledger Fabric require Docker for its operations?
I'm just wondering whether Docker is needed only if we want to run the Fabric peer, orderer, or CouchDB as containers on the same physical machine. I think Docker might not be necessary if we install that software (peer, orderer, CouchDB, etc.) natively on separate servers or on the same server.
Thank you.
Just so this point does not go unnoticed, while you do not need to run the peer in a Docker container, endorsing peers (the ones which run chaincode) need access to a Docker daemon (ideally on the same host). Chaincode is currently only deployed via Docker containers.
The question as to whether Docker is required to run a peer, orderer, fabric-ca, etc. depends on what effort you are willing to expend.
The Hyperledger Fabric community publishes stable, tested Docker images for x86, PowerPC, and s390 (mainframe) architectures for each of its releases. These images are based on Ubuntu.
To use the Hyperledger Fabric published release images, you need Docker and some form of orchestration support. For sample use cases, we provide some simple Docker Compose definitions. Hyperledger Cello and other provisioning platforms, such as the IBM sandbox, provide Kubernetes Helm charts.
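As an illustration, here's a minimal sketch of a Docker Compose service for a peer, loosely along the lines of the published samples (the image tag and the single environment variable shown are illustrative; real definitions set many more variables). Note how the host's Docker socket is mounted, since endorsing peers need a Docker daemon to launch chaincode containers:

peer0:
  image: hyperledger/fabric-peer:latest
  environment:
    # point the peer at the host's Docker daemon for chaincode launches
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
  volumes:
    - /var/run/docker.sock:/host/var/run/docker.sock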
It is possible to build the binaries outside of their Docker images without modification of the source. However, if you wish to build for an alternative OS (e.g. Windows, RHEL, or CentOS), you will need to modify the build process. That said, it can and has been done. I suggest you reach out to the hyperledger-fabric@lists.hyperledger.org mailing list to see if anyone in the community who has built for an alternative deployment will share their work.
Starting with HLF 2.0, things have changed. According to the documentation, chaincode can also run in 'external containers':
https://hyperledger-fabric.readthedocs.io/en/release-2.0/cc_launcher.html
Yes, it is the second heading on the prerequisites page at http://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html
Docker and Docker Compose
I am wondering how to make the machines that host Docker easily replaceable. I would like something like a Dockerfile that contains instructions on how to set up the machine that will host Docker. Is there a way to do that?
The naive solution would be to create an official "Docker host" binary image to install on new machines, but I would like something that is reproducible and transparent, like a Dockerfile.
It seems like tools like Vagrant, Puppet, or Chef may be useful, but they appear to be for virtual machine procurement, and they all seem to require setting up some sort of "master node" server. I am not going to be spinning up and tearing down regularly, so a master server is a waste of a server; I just want something that is reproducible in the event I need to set up or replace a machine.
This is basically what docker-machine does for you: https://docs.docker.com/machine/overview/
Other "orchestration" systems will make this automated and easier as well.
There are lots of solutions to this with no real one-size-fits-all answer.
Chef and Puppet are the popular configuration management tools that typically use a centralized server. Ansible is another option that typically runs without a server and just connects over SSH to configure the host. All three work very similarly, so if your concern is simply managing the CM server, Ansible may be the best option for you.
For VMs, Vagrant is the typical solution, and it can be combined with other tools like Ansible to provision the VM after creating it.
In the cloud space, there are tools like Terraform or vendor-specific tools like CloudFormation.
Docker is working on a project called Infrakit to deploy infrastructure the way compose deploys containers. It includes hooks for several of the above tools, including Terraform and Vagrant. For your own requirements, this may be overkill.
Lastly, for designing VM images, Docker recently open sourced their Moby project, which creates a VM image containing a minimal container OS, the same one used under the covers in Docker for Windows, Docker for Mac, and possibly some of the cloud hosting providers.
We automate Docker installation on hosts using Ansible + Jenkins. Given the proper SSH access, provisioning new Docker hosts is a matter of triggering a Jenkins job.
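For reference, a minimal sketch of an Ansible playbook that installs Docker on Ubuntu hosts. The docker_hosts inventory group name is a hypothetical assumption, and a real setup would also pin versions and configure the daemon:

- hosts: docker_hosts
  become: true
  tasks:
    - name: Install prerequisites for the Docker repository
      apt:
        name: [apt-transport-https, ca-certificates, curl]
        state: present
        update_cache: true
    - name: Add Docker's APT signing key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add the Docker APT repository for this Ubuntu release
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable
        state: present
    - name: Install Docker Engine
      apt:
        name: docker-ce
        state: present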
We're thinking about using Mesos and Mesosphere to host our Docker containers. Reading the docs, it says that a prerequisite is that:
Docker version 1.0.0 or later needs to be installed on each slave node.
We don't want to manually SSH into each new machine and install the correct version of the Docker daemon. Instead we're thinking about using something like Ansible to install Docker (and perhaps other services that may be required on each slave).
Is this a good way to solve it or does Mesosphere/DCOS or any of Mesos ecosystem components have other ways of dealing with this?
I've seen the quick intro where someone from Mesosphere just uses dcos resize to change the cluster size on the Google Cloud Platform. Is there a way to hook into this process and install additional services on the (Google) instance when it has booted? Or is this something we should avoid and instead just use a "pre-baked image"?
In your own datacenter, using your favorite configuration tool, such as Ansible or Salt, is probably a good choice.
On the cloud it might be easier to use virtual machine images that provide Docker; for example, DC/OS on AWS uses CoreOS, which comes with Docker out of the box. It shouldn't be too difficult with Ubuntu either...
Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such).
During development you want to be as close as possible to production environment with some important changes:
Deployed locally (or at least somewhere where you and only you can access)
Use the latest source code on page refresh (supposing it's a website; ideally, auto-refresh the page on local file save, which can be done if you mount the source code and use something like Yeoman).
Similarly one may want a non-public environment to do continuous integration.
Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?
Update (2016-07-15)
With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development.
You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
We've been working on a tool to do this. The basic idea is that you have a remote Kubernetes cluster, effectively a staging environment, and then you run code locally and it gets proxied to the remote cluster. You get transparent network access, environment variables copied over, access to volumes... as close as feasible to the remote environment, but with your code running locally and under your full control.
So you can do live development. Docs at http://telepresence.io
That sort of "hot reload" is something we have plans to add, but it is not as easy as it could be today. However, if you're feeling adventurous, you can use rsync with docker exec, kubectl exec, or osc exec (all do roughly the same thing) to sync a local directory into a container whenever it changes. You can use rsync with kubectl or osc exec like so:
# rsync using osc as netcat
$ rsync -av -e 'osc exec -ip test -- /bin/bash' mylocalfolder/ /tmp/remote/folder
I've just started with Skaffold.
It's really useful for applying changes in the code automatically to a local cluster.
To deploy a local cluster, the best way is Minikube, or just Docker for Mac and Windows; both include a Kubernetes interface.
EDIT 2022: By now, there are obviously dozens of ways to provision k8s, unlike in 2015 when we started using it: kubeadm, microk8s, k3s, kube-spray, etc.
My advice: if your cluster can't fit on your workstation/laptop, rent a Hetzner server for 40 euros a month, and run WSL2 if you're on Windows.
Set up a k8s cluster on the remote machine (with any of the above; I prefer microk8s these days). Set up Docker and Telepresence on your local Linux/Mac/WSL2 env. Install kubectl and connect it to the remote cluster.
Telepresence will let you replace a remote pod with a local Docker container, with access to local files (hopefully the same git repo that's used to build the pod you're developing/replacing), and possibly nodemon (or another language-specific auto-source-code-reload system).
Write bash functions. I cannot stress this enough; this will save you hundreds of hours. If replacing the pod and starting to develop isn't one line / two words, then you're not doing it well enough.
2016 answer below:
Another great starting point is this Vagrant setup, esp. if your host OS is Windows. The obvious advantages being
quick and painless setup
easy to destroy / recreate the machine
implicit limit on resources
ability to test horizontal scaling by creating multiple nodes
The disadvantages: you need a lot of RAM, and VirtualBox is VirtualBox... for better or worse.
A mixed advantage / disadvantage is mapping files through NFS. In our setup, we created two sets of RC definitions - one that just download a docker image of our application servers; the other with 7 extra lines that set up file mapping from HostOS -> Vagrant -> VirtualBox -> CoreOS -> Kubernetes pod; overwriting the source code from the Docker image.
The downside of this is the NFS file cache: with it, it's problematic; without it, it's problematically slow. Even setting mount_options: 'nolock,vers=3,udp,noac' doesn't get rid of the caching problems completely, but it works most of the time. Some Gulp tasks run in a container can take 5 minutes when they take 8 seconds on the host OS. A good compromise seems to be mount_options: 'nolock,vers=3,udp,ac,hard,noatime,nodiratime,acregmin=2,acdirmin=5,acregmax=15,acdirmax=15'.
As for automatic code reload, that's language specific, but we're happy with Django's devserver for Python, and Nodemon for Node.js. For frontend projects, you can of course do a lot with something like gulp+browserSync+watch, but for many developers it's not difficult to serve from Apache and just do traditional hard refresh.
We keep 4 sets of yaml files for Kubernetes. Dev, "devstable", stage, prod. The differences between those are
env variables explicitly setting the environment (dev/stage/prod)
number of replicas
devstable, stage, and prod use Docker images
dev uses Docker images, and maps an NFS folder with source code over them.
It's very useful to create a lot of bash aliases and autocomplete; I can just type rec users and it will do kubectl delete -f ... ; kubectl create -f .... If I want the whole setup started, I type recfo, and it recreates a dozen services, pulling the latest Docker images, importing the latest db dump from the Staging env, and cleaning up old Docker files to save space.
See https://github.com/kubernetes/kubernetes/issues/12278 for how to mount a volume from the host machine, the equivalent of:
docker run -v hostPath:ContainerPath
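In Kubernetes terms, that docker run -v mapping looks roughly like this hostPath Pod sketch (the names, image, and paths here are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: dev-webserver
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: source-code
          mountPath: /usr/share/nginx/html   # the ContainerPath
  volumes:
    - name: source-code
      hostPath:
        path: /home/me/myapp/src             # the hostPath on the node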
Having a nice local development feedback loop is a topic of rapid development in the Kubernetes ecosystem.
Breaking this question down, there are a few tools that I believe support this goal well.
Docker for Mac Kubernetes
Docker for Mac Kubernetes (Docker Desktop is the generic cross-platform name) provides an excellent option for local development. For virtualization, it uses HyperKit, which is built on the native Hypervisor framework in macOS instead of VirtualBox.
The Kubernetes feature was first released as beta on the edge channel in January 2018 and has come a long way since, becoming a certified Kubernetes in April 2018, and graduating to the stable channel in July 2018.
In my experience, it's much easier to work with than Minikube, particularly on macOS, and especially when it comes to issues like RBAC, Helm, hypervisor, private registry, etc.
Helm
As far as distributing your code and pulling updates locally, Helm is one of the most popular options. You can publish your applications via CI/CD as Helm charts (and also the underlying Docker images which they reference). Then you can pull these charts from your Helm chart registry locally and upgrade on your local cluster.
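As a rough illustration, the chart you publish is mostly metadata plus templated manifests. Here's a minimal sketch of a Chart.yaml (Helm 3 style; the names and versions are hypothetical):

apiVersion: v2
name: myapp
description: Hypothetical chart wrapping the myapp Docker image
version: 0.1.0        # chart version, bumped by CI/CD on publish
appVersion: "1.2.3"   # the Docker image tag the chart deploys

Locally, you would then pull the latest chart from your registry and run a helm upgrade against your local cluster.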
Azure Draft
You can also use a tool like Azure Draft to do simple local deploys and generate basic Helm charts from common language templates, sort of like buildpacks, to automate that piece of the puzzle.
Skaffold
Skaffold is like Azure Draft but more mature, much broader in scope, and made by Google. It has a very pluggable architecture. I think in the future more people will use it for local app development for Kubernetes.
If you have used React, I think of Skaffold as "Create React App for Kubernetes".
Kompose or Compose on Kubernetes
Docker Compose, while unrelated to Kubernetes, is one alternative that some companies use to provide a simple, easy, and portable local development environment analogous to the Kubernetes environment that they run in production. However, going this route means diverging your production and local development setups.
Kompose is a Docker Compose to Kubernetes converter. This could be a useful path for someone already running their applications as collections of containers locally.
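For instance, given a minimal docker-compose.yml like the sketch below (the image and port are hypothetical), running kompose convert in the same directory emits corresponding Kubernetes Deployment and Service manifests:

version: "3"
services:
  web:
    # hypothetical application image
    image: myorg/myapp:latest
    ports:
      - "8080:8080"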
Compose on Kubernetes is a recently open sourced (December 2018) offering from Docker which allows deploying Docker Compose files directly to a Kubernetes cluster via a custom controller.
Kubespray is helpful for setting up local clusters. Mostly, I used a Vagrant-based cluster on my local machine.
Kubespray configuration
You can tweak these variables to get the desired Kubernetes version.
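For example, kube_version is the Kubespray variable that pins the cluster version; the file path and the value below are just an illustration of overriding it in your inventory's group vars:

# group_vars/k8s-cluster/k8s-cluster.yml (illustrative path)
kube_version: v1.16.3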
The disadvantage of using Minikube is that it spawns another virtual machine on top of your machine. Also, the latest Minikube version requires a minimum of 2 CPUs and 2 GB of RAM from your system, which makes it pretty heavy if your system doesn't have enough resources.
This is the reason I switched to microk8s for development on Kubernetes, and I love it. microk8s supports DNS, local storage, the dashboard, Istio, ingress, and many more: everything you need to test your microservices.
It is designed to be a fast and lightweight upstream Kubernetes installation isolated from your local environment. This isolation is achieved by packaging all the binaries for Kubernetes, Docker.io, iptables, and CNI in a single snap package.
A single node kubernetes cluster can be installed within a minute with a single command:
snap install microk8s --classic
Make sure your system doesn't have any docker or kubelet service running. Microk8s will install all the required services automatically.
Please have a look at the following link to enable other add-ons in microk8s.
https://github.com/ubuntu/microk8s
You can check the status using:
velotio@velotio-ThinkPad-E470:~/PycharmProjects/k8sClient$ microk8s.status
microk8s is running
addons:
ingress: disabled
dns: disabled
metrics-server: disabled
istio: disabled
gpu: disabled
storage: disabled
dashboard: disabled
registry: disabled
Have a look at https://github.com/okteto/okteto and Okteto Cloud.
The value proposition is to have the classical development experience of working locally, prior to Docker, where you can have hot reloads, incremental builds, debuggers... but with all your local changes immediately synchronized to a remote container. Remote containers give you access to the speed of the cloud, allow a new level of collaboration, and integrate development into a production-like environment. Also, it eliminates the burden of local installations.
As specified before by Robert, minikube is the way to go.
Here is a quick guide to get started with minikube. The general steps are:
Install minikube
Create a minikube cluster (in a virtual machine, which can be VirtualBox or Docker for Mac, or Hyper-V in the case of Windows)
Create a Docker image of your application (by using a Dockerfile)
Run the image by creating a Deployment
Create a Service which exposes your application so that you can access it (a sketch of both manifests follows)
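A minimal sketch covering the last two steps; the name, labels, image, and ports are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0   # hypothetical image built in the previous step
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort   # reachable via the minikube node's IP and assigned port
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080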
Here is the way I did a local setup for Kubernetes on Windows 10:
Use Docker Desktop
Enable Kubernetes in the settings of Docker Desktop
In Docker Desktop, the default memory allocation is 2 GB, so to use Kubernetes with Docker Desktop, increase the memory
Install kubectl as a client to talk to the Kubernetes cluster
Run kubectl config get-contexts to get the available clusters
Run kubectl config use-context docker-desktop to use the Docker Desktop cluster
Build a Docker image of your application
Write a YAML file (a descriptive method to create your Deployment in Kubernetes) pointing to the image created in the above step
Expose a Service of type NodePort for each of your Deployments to make them available to the outside world