[Screenshot: my docker-compose file for WordPress]
Last week I learned how to deploy three containers: WordPress, phpMyAdmin, and MySQL. They work fine. The containers are connected to each other using a volume and a shared network, and Docker is configured from a docker-compose .yml file. I used Git on my native operating system to version the changes.
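In outline, the compose file looks something like this (the image tags, ports, and credentials here are illustrative placeholders, not my real values):

version: "3"

services:
  db:
    image: mysql:5.7
    networks: [wpnet]
    volumes:
      - db_data:/var/lib/mysql            # shared volume so data survives restarts
    environment:
      MYSQL_ROOT_PASSWORD: example        # placeholder credential
      MYSQL_DATABASE: wordpress

  wordpress:
    image: wordpress
    networks: [wpnet]
    ports: ["8080:80"]
    environment:
      WORDPRESS_DB_HOST: db               # reach MySQL over the shared network
      WORDPRESS_DB_PASSWORD: example      # must match the db password

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    networks: [wpnet]
    ports: ["8081:80"]
    environment:
      PMA_HOST: db                        # point phpMyAdmin at the db service

volumes:
  db_data:

networks:
  wpnet: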
But then I found another way to do the same:
I installed a Debian image, then added git, apache2, mariadb, and phpmyadmin inside it, connected everything, and used "docker commit" to save the changes of my development every time.
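In practice that workflow looked roughly like this (the container and image names are placeholders):

docker run -it --name devbox debian bash
# inside the container: apt-get install git apache2 mariadb-server phpmyadmin, then configure and exit
docker commit devbox mydev/lamp:snapshot1   # freeze the container's filesystem as a new image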
Then a coworker told me to use a Dockerfile, add volumes, and use Git for versioning.
Which is the best way?
What problems have the first and second ways?
Is there another way?
From my point of view you are searching for an optimal deployment structure; it's a long way to go, with a lot of information to work through. Here are my opinions:
The first way: I wouldn't recommend versioning from your native OS, because the mix of operating systems (Windows/Linux) can cause big problems, for example with line breaks and folder/file naming. But the docker-compose idea is the right way to set up the test and dev environment locally.
The second way ("docker commit") keeps your work outside of Git; that's not optimal, but it is a workable solution as long as it saves everything.
The third way is alright, but you have that covered already with docker-compose. The usage of volumes here can cause the same problems as the first way. You can use Git in command-line mode inside the container to develop, but I don't recommend it.
Alternative Ways
Use software that can deploy remotely to the PHP server, like PhpStorm, Eclipse, or WinSCP: develop the application locally and link it to the Apache/PHP machine or container over FTP/SFTP. You work locally and transfer the changed files into the running machine or container. Git versioning is done on the local machine. You can also use MySQL tools to back up the database locally, so if the Docker container breaks you can set it up again easily.
Make sure you also save the config files of Apache, PHP, and MySQL in Git; that makes re-creating the Docker container painless.
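For the database backup, something along these lines should work (the container name, credentials, and database are placeholders):

# dump the database out of the running container into a local file
docker exec db-container mysqldump -u root -pSECRET wordpress > backup.sql
# after recreating the container, load the dump back in
docker exec -i db-container mysql -u root -pSECRET wordpress < backup.sql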
Use (GitLab & GitLab CI), (Bitbucket & Bamboo), or (Git & Jenkins) to deploy your PHP changes to the servers or Docker containers.
Ideally, read some articles about continuous delivery and continuous integration.
This option is suitable for rollouts to customer systems or to dev and beta systems.
Related
I'm trying to set up the deployment of Docker images to a Linux server (Debian 10).
I searched the internet for an easy solution to deploy images from a Docker repository onto a server automatically.
I know that Docker Hub has webhooks.
Also, there is an option to use Kubernetes, but it seems to be a bit too much for a simple application running on one server.
What I am looking for is a way for the server to detect that a Docker image has been updated, so that it downloads and runs the newest version.
Currently, I have set up automatic builds of Docker images on Azure DevOps that are pushed to a private repository on Docker Hub (I will most likely move to a privately hosted Nexus repository).
I am looking for suggestions on how to do this with relatively low complexity (e.g. should I use docker-compose for it, or some sort of bash script on the server).
The closest thing to what I am looking for is this solution: How to auto deploy Docker Image on own server with GitLab?
I would like to know if this is the recommended way to do it, or whether there are other, possibly easier, ways to approach it.
I found this project, which looks like a good solution for my case.
https://containrrr.github.io/watchtower/
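Watchtower's documented quick start is to run it as a container with access to the Docker socket; it then polls for newer versions of your running images and restarts the affected containers:

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower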
Motivation
As of now, we are using five Docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it does seem to require mirrored files. We also use WSL.
As @FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and Docker inside a Linux VM on Windows, and saw roughly a 96% time saving in things like starting a server and watching code for changes, which makes this setup my preferred way now.
The standardisation of devcontainer.json and being able to use github codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your Docker host and code on it, but in my view this is less good, because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the Docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
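For reference, the relevant part of a devcontainer.json along those lines might look like this (the image name and host paths are illustrative assumptions, not our actual configuration):

// .devcontainer/devcontainer.json (sketch; the format allows // comments)
{
    "name": "app-dev",
    "image": "debian:bookworm",
    "workspaceFolder": "/workspace",
    // the workspace itself has to be a bind mount, as noted above
    "workspaceMount": "source=/srv/src/app,target=/workspace,type=bind",
    // extra bind mounts for SQL/Redis data; docker volumes would also work here
    "mounts": [
        "source=/srv/data/sql,target=/var/lib/mysql,type=bind"
    ]
}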
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
I am currently working on a project which needs to be deployed on customer infrastructure (not in the cloud), and it will not have internet access.
We currently deploy our application manually and install dependencies using tarballs. Can Docker help us here?
Note:
Application stack:
NodeJs
MySql
Elasticsearch
Redis
MongoDB
We will not have internet.
You can use docker save to export Docker images in TAR format and docker load to import them again. If you package your application files within these images, this can be used to deliver your project to your customers.
Also note that the destination machines must all have Docker Engine installed and running.
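A sketch of the round trip (the image names and versions are placeholders for your actual stack):

# on a machine with internet access: bundle all required images into one tarball
docker save -o app-bundle.tar myapp:1.0 mysql:8.0 redis:7 mongo:6 elasticsearch:8.11.1
# transfer app-bundle.tar to the customer host (USB drive, internal network, ...), then:
docker load -i app-bundle.tar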
If you have control over your dev environment, you can also use Nexus or Gitlab as your private Docker repository. You can then pull your images from there into production, if it makes sense for your product.
I think the most advantage can be had in your local dev setup. Instead of installing, say, MySQL locally, you can run it as a Docker container. I use docker-compose for all client services in my current project. This helps keep your computer clean, makes it easy to avoid versioning hell (if you use different versions for each release or stage) and you don't have to mess around with configuration for each dev machine.
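For example, a throwaway MySQL for development instead of a host install (the version tag and password are placeholders):

# data lives only inside the container here; add a volume if you need it to persist
docker run -d --name dev-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=devpass mysql:8.0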
In my previous job every developer had a local Oracle SQL install, and that was not a happy state of affairs.
In our team we currently use vagrant as a development environment. Now I want to replace it with docker, but I can't understand the team workflow with it.
This is what confuses me: with vagrant I create a project repo with a Vagrantfile in it, and every developer pulls a repo and runs vagrant up. If the project needs some changes in environment, I edit Vagrantfile, chef recipe or requirements-file, and developers must run vagrant provision to get an updated environment.
But with docker I see at least two options:
create a Dockerfile and put it in the repo; every developer builds an image from it, and on every change they rebuild their own image.
build an image and put it on a server; every developer pulls and runs it, and on every change the image is rebuilt on the server (maybe with auto-rebuilds on the server and auto-pull scripts).
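In command form, the two options look roughly like this (the image and registry names are placeholders):

# option 1: every developer builds locally from the Dockerfile in the repo
docker build -t myproject-dev .

# option 2: one central build is pushed, developers only pull
docker push registry.example.com/myproject-dev:latest    # done by the build machine
docker pull registry.example.com/myproject-dev:latest    # done by each developer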
Docker's philosophy is 'build once, run anywhere', but at the same time we have a Dockerfile in the repo... What do you think about it? How do you do this in your team?
Docker is more for production than for development
Docker is a deployment tool to package apps for production (or test environments). It is not so much a tool for development. It is meant to create an isolated environment to run your already-developed application somewhere on a server, in the cloud, or on your laptop.
Use a Dockerfile to document the packaging
I think it is nice to have a Dockerfile in your project. Similar to a Vagrantfile, it is a kind of executable documentation which describes what your production environment should look like. Somebody who is new to your project can just build from the file and get a packaged, ready-to-run container. Cool!
Use a registry to integrate Docker
I think you should provide a (private) Docker registry if you integrate Docker into your (CI) workflow (e.g. into test and build systems). A single repository to store validated and tested images of all your products will definitely speed up the creation of new test or production systems (e.g. to scale your app or to set up an installation for a demo or a customer). If your product is open source, consider the public Docker index so people can find your stuff there. You can configure your build system to create a new Docker image after each (successful) build and push it to the registry. Since the images are layered (and those layers are shared), this will be fast and will not take too much disk space.
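As a sketch, the post-build step of such a pipeline can be as simple as this (the registry address and tagging scheme are placeholders):

# tag each successful build and push it to the private registry
docker build -t registry.example.com/myproduct:$BUILD_NUMBER .
docker push registry.example.com/myproduct:$BUILD_NUMBER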
If you want to integrate Docker into your development, I don't see that many possibilities:
You can create a repository with final images (as described before)
Or you can use Docker images to develop against them (e.g. to run a MongoDB)
Maybe you have a team A which programs against the API of team B and always needs a running instance of team B's product. Then you could package this product into a Docker image and share it with team A. In this case, team B should provide the image in a repository (and team A shouldn't have to care how it is built; they can use it as a black box).
Edit: If you depend on many external apps
To make this "team A and team B" thing more clear: if you develop an app against many other tools, e.g. an app from another team, a MongoDB or an Elasticsearch, you can package those apps into Docker images, run them (locally), and develop against them. You will also have a good chance of finding popular apps (such as MongoDB) in the public Docker Index. So instead of installing them manually, you can just pull and start them. But to put together an environment like this, you will need Vagrant again.
You could also use Docker for test environments (build and run an image and test against it). But this wouldn't be a replacement for Vagrant in development.
Vagrant + Docker
I would suggest using both: provide a Vagrantfile to build the development environment and a Dockerfile to build the production environment.
Also take a look at http://docs.vagrantup.com/v2/provisioning/docker.html. Vagrant has had Docker integration for a while now, so you can create Docker containers/environments with Vagrant.
I work for Docker.
Think of Docker (and the Docker index/registry) as equivalent to Git. You don't have to make this very hard. If you change a Dockerfile, it is a cheap and quick operation to update an image. If you use "Trusted Builds" in our registry, then you can have it build automatically off of any branch at any time you want.
These are basic building blocks, but it works great for development. Docker itself is built and developed inside of Docker containers, so we know it works fine.
I have a hunch that docker could greatly improve my webdev workflow - but I haven't quite managed to wrap my head around how to approach a project adding docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
Git
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this all looks quite nice on paper, BUT I am not quite sure whether this is the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run sql-dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map MySQL's data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
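For example (the paths and names are illustrative):

# keep MySQL's data files on the host, where they can be versioned or reset
docker run -d --name cms-db -v /home/user/dev/mysql-data:/var/lib/mysql mysql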
Yes, I think you should have a separate container for db.
I am just using a basic script:
#!/bin/bash
# run both containers detached; docker run -d prints the new container's ID
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySql=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount read-only, so no changes will be made to this directory if you don't want them (your app should store its data somewhere else anyway).
docker run -v=/home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment, I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir.
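A minimal Dockerfile along those lines could look like this (the base image is an assumption; note that ADD/COPY sources must live inside the build context, so the Dockerfile would sit next to cmsdir rather than referencing an absolute host path):

FROM debian:stable
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
COPY cmsdir/ /var/www/cmsdir/
CMD ["apache2ctl", "-D", "FOREGROUND"]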
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
I understand this post is over a year old at this time, but I recently asked myself very similar questions and found several good answers to yours.
You can set up a MySQL Docker instance and have the data persist in a stateless data container; the data container does not need to be actively running (see the sketch after this list).
Yes, I would recommend having separate instances for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run, and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this links a specific folder on the host to a folder inside the container.
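For the first point, the classic data-container pattern looks like this (the names are placeholders; named volumes are the modern replacement for this pattern):

# create a stopped container whose only job is to own the /var/lib/mysql volume
docker create -v /var/lib/mysql --name mysql-data busybox /bin/true
# run MySQL with the data container's volumes mounted in
docker run -d --volumes-from mysql-data --name mysql mysql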
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turnkey solution achieving all of the needs you have listed.
I've put together an easy-to-use docker-compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with Xdebug
The README.md should be clear enough to get you started.