Screenshot: my docker-compose file for WordPress
Last week I learned how to deploy three containers: WordPress, phpMyAdmin, and MySQL. They work fine. The containers are connected to each other through a shared volume and the same network, and everything is configured from a docker-compose .yml file. I used Git on my host operating system to version the changes.
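A simplified sketch of such a compose file, with placeholder credentials and names (not my exact file), looks like this:

version: "3"
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: db
volumes:
  db_data: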
But then I found another way to do the same thing:
I started from a Debian image, installed git, apache2, mariadb, and phpmyadmin inside it, wired everything together, and ran "docker commit" every time I wanted to save the state of my development.
Then a coworker told me to use a Dockerfile, add volumes, and use Git for versioning.
Which is the best way?
What problems do the first and second approaches have?
Is there another way?
From my point of view you are looking for an optimal deployment structure; it's a long road, and there is a lot of information to find along the way. Here are my opinions:
1. (docker-compose, with Git on the host OS) I wouldn't recommend this variant, because mixing operating systems (Windows/Linux) can cause big problems, for example with line breaks and folder/file names. But the docker-compose idea itself is the right way to set up a local test/dev environment.
2. (Debian image plus docker commit) Your work lives outside of Git, which is not optimal, but it does save everything.
3. (Dockerfile, volumes, and Git) This is alright, but you already have that with docker-compose. The use of volumes here can cause the same problems as variant 1. You can use Git on the command line to develop, but I don't recommend it.
Alternative Ways
Use software that can deploy remotely to the PHP server, like PhpStorm, Eclipse, or WinSCP: develop the application locally and link it to the Apache/PHP machine or container over FTP/SFTP. You work locally and transfer the changed files into the running machine or container. Git versioning is done on the local machine. You can also use MySQL tools to back up the database locally, so if the Docker container breaks you can easily set it up again.
Make sure you also keep the Apache, PHP, and MySQL config files in Git; that makes re-creating the Docker container painless.
Use GitLab & GitLab CI, Bitbucket & Bamboo, or Git & Jenkins to deploy your PHP changes to the servers or Docker containers.
Ideally, read up on continuous delivery and continuous integration.
This option is suitable for rolling out to customer systems or to dev/beta systems.
Kubernetes seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such).
During development you want to be as close as possible to production environment with some important changes:
Deployed locally (or at least somewhere where you and only you can access)
Use the latest source code on page refresh (supposing it's a website; ideally the page auto-refreshes on local file save, which can be done if you mount the source code and use something like Yeoman).
Similarly one may want a non-public environment to do continuous integration.
Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?
Update (2016-07-15)
With the release of Kubernetes 1.3, Minikube is now the recommended way to run Kubernetes on your local machine for development.
You can run Kubernetes locally via Docker. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version.
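As a rough illustration (not an official example), a pod spec along these lines mounts a host directory into an nginx container; names and paths are placeholders, and on a single-node local cluster the "node" is your own machine:

apiVersion: v1
kind: Pod
metadata:
  name: dev-web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: src
          mountPath: /usr/share/nginx/html   # nginx serves files from here
  volumes:
    - name: src
      hostPath:
        path: /home/me/myapp/public          # directory on the Kubernetes node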
We've been working on a tool to do this. Basic idea is you have remote Kubernetes cluster, effectively a staging environment, and then you run code locally and it gets proxied to the remote cluster. You get transparent network access, environment variables copied over, access to volumes... as close as feasible to remote environment, but with your code running locally and under your full control.
So you can do live development against the remote cluster. Docs at http://telepresence.io
That sort of "hot reload" is something we have plans to add, but it is not as easy as it could be today. However, if you're feeling adventurous you can use rsync with docker exec, kubectl exec, or osc exec (they all do roughly the same thing) to sync a local directory into a container whenever it changes. You can use rsync with kubectl or osc exec like so:
# rsync using osc as netcat
$ rsync -av -e 'osc exec -ip test -- /bin/bash' mylocalfolder/ /tmp/remote/folder
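A simpler (if less incremental) alternative is kubectl cp, which copies a whole local folder into the pod; the pod name and paths below are placeholders:

# copy a local folder into the pod named "test"
kubectl cp mylocalfolder/ test:/tmp/remote/folder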
I've just started with Skaffold
It's really useful to apply changes in the code automatically to a local cluster.
To run a local cluster, the easiest options are Minikube or Docker Desktop for Mac and Windows; both include a Kubernetes interface.
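As an illustration (the schema version changes between releases, so treat this as a sketch rather than a drop-in file), a minimal skaffold.yaml that rebuilds an image and redeploys your manifests on every change could look like this:

apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-app            # placeholder image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml             # placeholder path to your Kubernetes manifests

Running skaffold dev then watches the source tree, rebuilds the image, and re-applies the manifests.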
EDIT 2022: By now there are obviously dozens of ways to provision k8s, unlike 2015 when we started using it: kubeadm, microk8s, k3s, kubespray, etc.
My advice: if your cluster can't fit on your workstation/laptop, rent a Hetzner server for about 40 euros a month, and run WSL2 if you are on Windows.
Set up k8s cluster on the remote machine (with any of the above, I prefer microk8s these days). Set up Docker and Telepresence on your local Linux/Mac/WSL2 env. Install kubectl and connect it to the remote cluster.
Telepresence will let you replace a remote pod with a local docker pod, with access to local files (hopefully the same git repo that's used to build the pod you're developing/replacing), and possibly nodemon (or other language-specific auto-source-code-reload system).
Write bash functions. I cannot stress this enough: this will save you hundreds of hours. If replacing the pod and starting to develop isn't one line / two words, you're not doing it well enough.
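As a rough sketch (Telepresence v1 flags shown; the deployment name and the local command are placeholders, so adjust for your Telepresence version and stack):

# swap the remote deployment "$1" for a locally running process
dev() {
  telepresence --swap-deployment "$1" --run npm run dev
}
# usage: dev users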
2016 answer below:
Another great starting point is this Vagrant setup, especially if your host OS is Windows. The obvious advantages are:
quick and painless setup
easy to destroy / recreate the machine
implicit limit on resources
ability to test horizontal scaling by creating multiple nodes
The disadvantages: you need a lot of RAM, and VirtualBox is VirtualBox... for better or worse.
A mixed advantage/disadvantage is mapping files through NFS. In our setup, we created two sets of RC definitions: one that just downloads a Docker image of our application servers; the other with 7 extra lines that set up file mapping from host OS -> Vagrant -> VirtualBox -> CoreOS -> Kubernetes pod, overwriting the source code from the Docker image.
The downside of this is the NFS file cache: with it, it's problematic; without it, it's problematically slow. Even setting mount_options: 'nolock,vers=3,udp,noac' doesn't get rid of the caching problems completely, but it works most of the time. Some Gulp tasks run in a container can take 5 minutes when they take 8 seconds on the host OS. A good compromise seems to be mount_options: 'nolock,vers=3,udp,ac,hard,noatime,nodiratime,acregmin=2,acdirmin=5,acregmax=15,acdirmax=15'.
As for automatic code reload, that's language specific, but we're happy with Django's devserver for Python, and Nodemon for Node.js. For frontend projects, you can of course do a lot with something like gulp+browserSync+watch, but for many developers it's not difficult to serve from Apache and just do traditional hard refresh.
We keep 4 sets of yaml files for Kubernetes. Dev, "devstable", stage, prod. The differences between those are
env variables explicitly setting the environment (dev/stage/prod)
number of replicas
devstable, stage, and prod use Docker images
dev uses Docker images and maps the NFS folder with the source code over them
It's very useful to create a lot of bash aliases and autocompletion: I can just type rec users and it will do kubectl delete -f ... ; kubectl create -f .... If I want the whole setup started, I type recfo, and it recreates a dozen services, pulling the latest Docker images, importing the latest DB dump from the staging environment, and cleaning up old Docker files to save space.
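A sketch of what such a helper can look like (the file layout and names are just examples):

# recreate the Kubernetes objects defined in <name>.yaml, e.g. "rec users"
rec() {
  kubectl delete -f "$1.yaml"
  kubectl create -f "$1.yaml"
}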
See https://github.com/kubernetes/kubernetes/issues/12278 for how to mount a volume from the host machine, the equivalent of:
docker run -v hostPath:ContainerPath
Having a nice local development feedback loop is a topic of rapid development in the Kubernetes ecosystem.
Breaking this question down, there are a few tools that I believe support this goal well.
Docker for Mac Kubernetes
Docker for Mac Kubernetes (Docker Desktop is the generic cross platform name) provides an excellent option for local development. For virtualization, it uses HyperKit which is built on the native Hypervisor framework in macOS instead of VirtualBox.
The Kubernetes feature was first released as beta on the edge channel in January 2018 and has come a long way since, becoming a certified Kubernetes in April 2018, and graduating to the stable channel in July 2018.
In my experience, it's much easier to work with than Minikube, particularly on macOS, and especially when it comes to issues like RBAC, Helm, hypervisor, private registry, etc.
Helm
As far as distributing your code and pulling updates locally, Helm is one of the most popular options. You can publish your applications via CI/CD as Helm charts (and also the underlying Docker images which they reference). Then you can pull these charts from your Helm chart registry locally and upgrade on your local cluster.
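For example, pulling charts and upgrading on a local cluster might look like this (the repository name, URL, and chart name are placeholders):

# add your chart repository and deploy/upgrade the app on the local cluster
helm repo add mycompany https://charts.example.com
helm repo update
helm upgrade --install myapp mycompany/myapp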
Azure Draft
You can also use a tool like Azure Draft to do simple local deploys and generate basic Helm charts from common language templates, sort of like buildpacks, to automate that piece of the puzzle.
Skaffold
Skaffold is like Azure Draft but more mature, much broader in scope, and made by Google. It has a very pluggable architecture. I think in the future more people will use it for local app development for Kubernetes.
If you have used React, I think of Skaffold as "Create React App for Kubernetes".
Kompose or Compose on Kubernetes
Docker Compose, while unrelated to Kubernetes, is one alternative that some companies use to provide a simple, easy, and portable local development environment analogous to the Kubernetes environment that they run in production. However, going this route means diverging your production and local development setups.
Kompose is a Docker Compose to Kubernetes converter. This could be a useful path for someone already running their applications as collections of containers locally.
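For example (assuming a docker-compose.yml in the current directory):

# generate Kubernetes manifests from an existing Compose file
kompose convert -f docker-compose.yml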
Compose on Kubernetes is a recently open sourced (December 2018) offering from Docker which allows deploying Docker Compose files directly to a Kubernetes cluster via a custom controller.
Kubespray is helpful for setting up local clusters. Mostly, I have used a Vagrant-based cluster on my local machine.
Kubespray configuration
You can tweak these variables to get the desired Kubernetes version.
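For example, the version is typically controlled by a group variable (the exact file layout varies between Kubespray releases, so treat this as an illustration):

# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_version: v1.24.6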
The disadvantage of using Minikube is that it spawns another virtual machine on top of your machine. Also, the latest Minikube version requires a minimum of 2 CPUs and 2 GB of RAM from your system, which makes it pretty heavy if your system does not have enough resources.
This is the reason I switched to microk8s for development on Kubernetes, and I love it. microk8s supports DNS, local storage, the dashboard, Istio, ingress, and many more add-ons: everything you need to test your microservices.
It is designed to be a fast and lightweight upstream Kubernetes installation isolated from your local environment. This isolation is achieved by packaging all the binaries for Kubernetes, Docker.io, iptables, and CNI in a single snap package.
A single node kubernetes cluster can be installed within a minute with a single command:
snap install microk8s --classic
Make sure your system doesn't have any docker or kubelet service running. Microk8s will install all the required services automatically.
Please have a look at the following link to enable other add-ons in microk8s.
https://github.com/ubuntu/microk8s
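For example, the add-ons mentioned above can be switched on with a single command (names as listed by microk8s.status):

microk8s.enable dns dashboard storage ingress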
You can check the status using:
velotio#velotio-ThinkPad-E470:~/PycharmProjects/k8sClient$ microk8s.status
microk8s is running
addons:
ingress: disabled
dns: disabled
metrics-server: disabled
istio: disabled
gpu: disabled
storage: disabled
dashboard: disabled
registry: disabled
Have a look at https://github.com/okteto/okteto and Okteto Cloud.
The value proposition is to have the classic development experience of working locally, prior to Docker, where you get hot reloads, incremental builds, debuggers... but with all your local changes immediately synchronized to a remote container. Remote containers give you access to the speed of the cloud, allow a new level of collaboration, and integrate development into a production-like environment. It also eliminates the burden of local installations.
As specified before by Robert, minikube is the way to go.
Here is a quick guide to get started with minikube. The general steps are listed below, followed by a command-line sketch:
Install minikube
Create minikube cluster (in a Virtual Machine which can be VirtualBox or Docker for Mac or HyperV in case of Windows)
Create Docker image of your application file (by using Dockerfile)
Run the image by creating a Deployment
Create a service which exposes your application so that you can access it.
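A rough command-line sketch of those steps (the app and image names are placeholders):

# 1. start a local cluster
minikube start
# 2. build the application image from the Dockerfile in the current directory
docker build -t myapp:local .
# 3. make the locally built image visible inside the minikube VM
minikube image load myapp:local
# 4. create a Deployment from the image and expose it as a Service
kubectl create deployment myapp --image=myapp:local
kubectl expose deployment myapp --type=NodePort --port=8080
# 5. open the service in the browser
minikube service myapp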
Here is the way I did a local setup of Kubernetes on Windows 10:
Use Docker Desktop
Enable Kubernetes in the settings option of Docker Desktop
By default Docker Desktop allocates 2 GB of memory, so increase the memory allocation in order to use Kubernetes with Docker Desktop
Install kubectl as a client to talk to the Kubernetes cluster
Run kubectl config get-contexts to list the available clusters
Run kubectl config use-context docker-desktop to use the Docker Desktop cluster
Build a Docker image of your application
Write a YAML file (the declarative way to create your Deployment in Kubernetes) pointing to the image built in the step above (see the manifest sketch after this list)
Expose a Service of type NodePort for each of your Deployments to make them reachable from outside the cluster
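A minimal manifest along those lines (image name, labels, and ports are placeholders) might look like this, applied with kubectl apply -f myapp.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:local        # the image built in the previous step
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080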
In our team we currently use vagrant as a development environment. Now I want to replace it with docker, but I can't understand the team workflow with it.
This is what confuses me: with vagrant I create a project repo with a Vagrantfile in it, and every developer pulls a repo and runs vagrant up. If the project needs some changes in environment, I edit Vagrantfile, chef recipe or requirements-file, and developers must run vagrant provision to get an updated environment.
But with docker I see at least two options:
Create a Dockerfile and put it in the repo; every developer builds an image from it and rebuilds their own image on every change.
Build an image and put it on a server; every developer pulls and runs it, and on every change the image is rebuilt on the server (maybe with some auto-rebuilds on the server and auto-pull scripts).
Docker's philosophy is 'build once, run anywhere', but at the same time we have a Dockerfile in the repo... What do you think about this? How do you handle it in your team?
Docker is more for production than for development
Docker is a deployment tool to package apps for production (or tests environments). It is not so much a tool for development. It is meant to create an isolated environment to run your already developed application somewhere on a server, the cloud or your laptop.
Use a Dockerfile to document the packaging
I think it is nice to have a Dockerfile in your project. Similar to a Vagrantfile, it is a kind of executable documentation which describes what your production environment should look like. Somebody who is new to your project can just run the file and will get a packaged, ready-to-run container. Cool!
Use a registry to integrate Docker
I think you should provide a (private) Docker registry if you integrate Docker into your (CI) workflow (e.g. into test and build systems). A single repository to store validated and tested images of all your products will definitely speed up the creation of new test or production systems (e.g. to scale your app or to set up an installation for a demo or a customer). If your product is open source, consider the public Docker index so people can find your stuff there. You can configure your build system to create a new Docker image after each (successful) build and to push it to the registry. Since the images are layered (and those layers are shared), this will be fast and will not take too much disk space.
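For example, such a build step might run something like this (the registry host and image name are placeholders):

# tag the freshly built image with the commit hash and push it to the private registry
docker build -t registry.example.com/myproduct:$(git rev-parse --short HEAD) .
docker push registry.example.com/myproduct:$(git rev-parse --short HEAD)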
If you want to integrate Docker into your development workflow, I don't see that many possibilities:
You can create a repository with final images (as described before)
Or you can use Docker images to develop against them (e.g. to run a MongoDB)
Maybe you have a team A which programs against the API of team B and always needs a running instance of team B's product. Then you could package this product into a Docker image and share it with team A. In this case, team B should provide the image in a repository, and team A shouldn't have to care how it is built and can use it as a black box.
Edit: If you depend on many external apps
To make this "team A and team B" thing more clear: If you develop an app against many other tools, e.g. an app from another team, a MongoDB or an Elasticsearch, you can package those apps into Docker images, run them (locally) and develop against them. You will also have a good chance to find popular apps (such as MongoDB) in the public Docker Index. So instead of installing them manually, you can just pull and start them. But to put together an environment like this, you will need Vagrant again.
You could also use Docker for test environments (build and run an image and test against it). But this wouldn't be a replacement for Vagrant in development.
Vagrant + Docker
I would suggest to use both. Provide a Vagrantfile to build the development environment and provide a Dockerfile to build the production environment.
Also take a look at http://docs.vagrantup.com/v2/provisioning/docker.html. Vagrant has had a Docker integration for a while, so you can create Docker containers/environments with Vagrant.
I work for Docker.
Think of Docker (and the Docker index/registry) as equivalent to Git. You don't have to make this very hard. If you change a Dockerfile, it is a cheap and quick operation to update an image. If you use "Trusted Builds" in our registry, then you can have it build automatically off of any branch at any time you want.
These are basic building blocks, but it works great for development. Docker itself is built and developed inside of Docker containers, so we know it works fine.
I have a hunch that docker could greatly improve my webdev workflow - but I haven't quite managed to wrap my head around how to approach a project adding docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
Git
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run sql-dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. That way you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map the MySQL data directories from your host/development machine using data volumes. This way you can manage the MySQL data files (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
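For example (paths, password, and image tag are placeholders):

# keep MySQL's data files in a directory on the host
docker run -d --name cms-db \
  -v /home/user/dev/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7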
Yes, I think you should have a separate container for db.
I am just using a basic script:
#!/bin/bash
# start both containers detached and capture their container IDs
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySQL=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount the directory read-only, so no changes will be made to it if you want (your app should store its data somewhere else anyway).
docker run -v /home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running.
Yes, I would recommend having separate containers for your web server and your database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this links a specific folder in the container to a folder on the host running the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turn key solution achieving all of the needs you have listed.
I've put together an easy to use docker compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with xDebug
The README.md should be clear enough to get you started.
I am a total noob to Linux containers and have been spending some time learning about Docker, so forgive any confusion throughout this question. Currently, I have a Rails app in production deployed via Capistrano. My cloud servers are maintained with Opscode Chef on the Debian Wheezy distribution. For development, I have a Vagrant VM preinstalled with the app and services.
If I were to employ Docker, where would my app sit? The container or the host? How would I deploy (production) and share directories (development)? Can I run all my additional services, i.e. memcache, redis, postgresql, etc., on the same server using Docker? I can maybe envision the potential of Docker, but I'm having trouble seeing its practical use.
Seems like containers are part of the future. Any guidance for someone making the switch from virtualization?
If I were to employ Docker, where would my app sit?
It could sit inside the container or it could sit on the host (you can use docker build to copy the app into the container).
How would I deploy (production) and share directories (development)?
Deploying your app would mean committing your local container into an image (or building one from a Dockerfile), publishing it, and running a container from the published image on your servers. I have not tried sharing directories between host and container, but you can try this: https://gist.github.com/jpetazzo/5668338. You can also write a Dockerfile which copies a directory to a target path in the container; Docker's docs on building images will help you there.
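As a rough sketch of that flow (the registry, image name, and ports are placeholders):

# on your machine or CI: bake the app into an image and publish it
docker build -t registry.example.com/myrailsapp:v1 .
docker push registry.example.com/myrailsapp:v1
# on the server: pull the published image and run it
docker pull registry.example.com/myrailsapp:v1
docker run -d -p 80:3000 registry.example.com/myrailsapp:v1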
Can I run all my additional services ie memcache, redis, postgresql, etc on the same server using docker?
Yes. You will be running multiple containers on the same server.
I'm no expert and I haven't even used Docker myself, but as I understand it, your app sits inside a Docker container. Ideally you would deploy a whole container with your own Ruby version installed and so on.
The big benefit is that you can test in your staging system exactly the same container that you're going to ship to production. So you're able to test the complete system with all installed C extensions, the exact same ls command, and so on.