Let's say I have a container that is fully equipped to serve a Rails app with Passenger and Apache, and I have a vhost that routes to /var/www/app/public in my container. Since a container is supposed to be sort of like a process, what should I do when my Rails code changes? If the app was cloned with Git and there are pending changes in the repo, how can the container pull in these changes automatically?
You have a choice on how you want to structure your container, depending on your deployment philosophy:
Minimal: You install all your Rails prerequisites in the Dockerfile (RUN commands), but have the ENTRYPOINT be something like "git pull && bundle install --deployment && rails server". At container boot time it will fetch your latest code.
Snapshot: Same as above, but also run the git pull and bundle install as RUN steps in the Dockerfile. This way the image ships with a pre-installed snapshot of the code, but it will still update when the container boots. This can speed up boot time (e.g. if most of the gems are already installed).
Container as Deployment: Same as above, but change the ENTRYPOINT to just "rails server". This way, your container is your code. You'll have to build a new image every time your Rails code changes (automation!). The advantage is that the container never needs to contact your code repo at all. The downside is that you always have to remember which image is the latest (tags can help), and right now Docker doesn't have a good story on cleaning up old containers.
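The first two variants above could be sketched in a Dockerfile like this (the repo URL, app path, and Ruby version are assumptions, not from the question):

```dockerfile
FROM ruby:2.7

# "Snapshot" variant: bake a snapshot of the code and gems into the image.
# Drop these two steps for the purely "Minimal" variant.
RUN git clone https://example.com/you/app.git /var/www/app
RUN cd /var/www/app && bundle install --deployment

WORKDIR /var/www/app

# Either way, refresh the code on every boot, then start the server.
ENTRYPOINT ["sh", "-c", "git pull && bundle install --deployment && bundle exec rails server"]
```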
In this scenario, it sounds like you have built an image and are now running it in a container.
Using the image your running container originates from, you could add another build step that does a git pull of your most up-to-date code. I'd consider this an incremental update, since you're building on a preexisting image. I'd recommend tagging and pushing to your index (assuming you're using a private index) as appropriate. The new image would then be available to run.
Depending on the need, you could also rebuild the base image of your software. I'm assuming you're using a Dockerfile to build your original image, which includes a git checkout of your software. You could then tag and push to your index as appropriate.
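That tag-and-push step might look like this (the registry and image names are hypothetical):

```shell
docker build -t registry.example.com/myapp:v2 .
docker push registry.example.com/myapp:v2
```

Anyone with access to the registry can then docker pull and run the new image.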
In Docker v0.8, it will be possible to start a new command in a running container, so you will be able to do what you want.
In the meantime, one solution would consist in using volumes.
Option 1: Docker managed volumes
FROM ubuntu
...
ADD host/src/path /var/www/app/public
VOLUME ["/var/www/app/public"]
CMD start rails
Start and run your container, then when you need to git pull, you can simply:
$ docker ps # -> retrieve the id of the running container
$ docker run --volumes-from <container id> <your image with git installed> sh -c 'cd /var/www/app/public && git pull'
This will result in your first running container having its sources updated.
Option 2: Host volumes
You can start your container with:
$ docker run -v `pwd`/srcs:/var/www/app/public <yourimage>
and then simply git pull in your host's sources directory; it will update the container's sources.
Related
I am trying to run Ubuntu in Docker. I use the command docker run -it ubuntu, and I want to install some packages and store some files. I know about volumes, but I have only used them in docker-compose. Is it possible to persist all of the container's data, and how can I do that properly?
When you run a container, Docker creates a namespace and loads the image filesystem into that namespace. Any changes you apply in a running container, including installing packages, only remain for the lifetime of the container; if you remove the container and rerun it, they're gone.
If you want your changes to be permanent, you have to commit the running container, which actually creates a new image from it:
sudo docker commit [CONTAINER_ID] [new_image_name]
As David pointed out in the comments:
You should pretty much never run docker commit. It leads to images that can't be reproduced, and you'll be in trouble if there's a security fix you're required to take a year down the road.
If you have an app inside the container, like MySQL, and want the data stored by that app to be permanent, you should map a volume from the host like this:
docker run -d -v /home/username/mysql-data:/var/lib/mysql --name mysql mysql
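A named volume works too, and avoids hard-coding a host path (the volume and container names here are just examples):

```shell
docker volume create mysql-data
docker run -d -v mysql-data:/var/lib/mysql --name mysql mysql
```

The data then survives a docker rm of the container, as long as the volume itself isn't removed.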
I use Docker to execute a website I make.
When a release has to be delivered, I have to build a new Docker image and start a new container from it.
The problem is that images and containers accumulate and take up huge amounts of space.
Besides the delivery itself, I need to stop the running container, delete it, and delete the source image too.
I don't need Docker command lines, just a checklist or process so I don't forget anything.
For instance:
- Stop the running container
- Delete the stopped container
- Delete the old image
- Build the new image
- Start the new container
Am I missing something?
I'm not used to Docker; maybe there are best practices for this pretty classical use case?
The local workflow that works for me is:
Do core development locally, without Docker. Things like interactive debuggers and live reloading work just fine in a non-Docker environment without weird hacks or root access, and installing the tools I need usually involves a single brew or apt-get step. Make all of my pytest/junit/rspec/jest/... tests pass.
docker build a new image.
docker stop && docker rm the old container.
docker run a new container.
When the number of old images starts to bother me, docker system prune.
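The loop above might look like this in practice (the image and container names are placeholders):

```shell
docker build -t myapp:latest .
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 8080:8080 myapp:latest
docker system prune   # occasionally, to reclaim disk space
```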
If you're using Docker Compose, you might be able to replace the middle set of steps with docker-compose up --build.
In a production environment, the sequence is slightly different:
When your CI system sees a new commit, after running the repository's local tests, it docker build && docker push a new image. The image has a unique tag, which could be a timestamp or source control commit ID or version tag.
Your deployment system (could be the CI system or a separate CD system) tells whatever cluster manager you're using (Kubernetes, a Compose file with Docker Swarm, Nomad, an Ansible playbook, ...) about the new version tag. The deployment system takes care of stopping, starting, and removing containers.
If your cluster manager doesn't handle this already, run a cron job to docker system prune.
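A sketch of the CI build step, tagging with the source control commit ID (the registry and image names are hypothetical):

```shell
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"
```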
You should use:
docker system df
to investigate the space used by docker.
After that you can use
docker system prune -a --volumes
to remove unused components. You have to stop containers yourself before doing this, but this way you are sure to cover everything. Note that -a removes all unused images, not just dangling ones, and --volumes removes unused volumes as well.
I would like to run and test parse-dashboard via Docker, as documented in the readme.
I am getting the error message, "Parse Dashboard can only be remotely accessed via HTTPS." Normally, you can bypass this by adding the line "allowInsecureHTTP": true in your parse-dashboard-config.json file. But even if I have added this option to my config file, the same message is displayed.
I tried to edit the config file in the Docker container, whereupon I discovered that none of my local file changes were present in the container. It appeared as though my project was an unmodified version of the code from the GitHub repository.
Why do the changes that I make to the files in my working directory on the host machine not show up in the Docker container?
But what actually gets uploaded to my Docker image is, in fact, the config file from my master branch.
It depends:
what that "docker" is: the official DockerHub or a private docker registry?
how it is uploaded: do you build an image and then use docker push, or do you simply do a git push back to your GitHub repo?
Basically, if you want to see the right files in the Docker container you run, you must be sure to run an image you have built (docker build) from a Dockerfile which COPYs the files from your current workspace.
If you do a docker build from a folder where your Git repo is checked out at the right branch, you will get an image with the right files.
The Dockerfile from the parse-dashboard repository you linked uses ADD . /src. This is a bad practice (because of the problems you're running into). Here are two different approaches you could take to work around it:
Rebuild the Image Each Time
Any time you change anything in the working directory (which the Dockerfile ADDs to /src), you need to rebuild for the change to take effect. The exception is src/Parse-Dashboard/parse-dashboard-config.json, which we'll mount in with a volume. The workflow would be nearly identical to the one in the readme:
$ docker build -t parse-dashboard .
$ docker run -d -p 8080:4040 -v $(pwd)/src/Parse-Dashboard/parse-dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json parse-dashboard
Use a Volume
If we're going to use a volume to do this, we don't even need the custom Dockerfile shipped with the project. We'll just use the official Node image, upon which the Dockerfile is based.
In this case, Docker will not run the build process for you, so you should do it yourself on the host machine before starting Docker:
$ npm install
$ npm run build
Now, we can start the generic Node Docker image and ask it to serve our project directory.
$ docker run -d -p 8080:4040 -v $(pwd):/src node:4.7.2 sh -c "cd /src && npm run dashboard"
Changes will take effect immediately because you mount ./ into the container as a volume. Because it's not done with ADD, you don't need to rebuild the image each time. We can use the generic Node image because, if we're not ADDing a directory and running the build commands, there's nothing our image would do differently from the official one.
So let's say we just spun up a Docker container and allow users to SSH into it by mapping port 22:22.
Users then install some software like git or whatever they want. So that container is now polluted.
Later on, suppose I want to apply some patches to the container, what is the best way to do so?
Keep in mind that the user has modified contents in the container, including some system-level directories like /usr/bin. So I cannot simply replace the running container with another image.
So to give you some real life use cases. Take Nitrous.io as an example. I saw they are using docker containers to serve as user's VM. So users can install packages like Node.js global packages. So how do they update/apply patch to containers like a pro? Similar platforms like Codeanywhere might work in the same way.
I tried googling it but failed. I am not 100 percent sure whether this is a duplicate, though.
User then installed some software like git or whatever they want ... I want to apply some patch to the container, what is the best way to do so ?
The recommended way is to plan your updates through a Dockerfile. However, if you are unable to achieve that, then any additional changes or new packages installed in the container should be committed before it exits.
For example, below is a simple image which does not have vim installed:
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
pingimg 1.5 1e29ac7353d1 4 minutes ago 209.6 MB
Start the container and check if vim is installed.
$ docker run -it pingimg:1.5 /bin/bash
root@f63accdae2ab:/#
root@f63accdae2ab:/# vim
bash: vim: command not found
Install the required packages, inside the container:
root@f63accdae2ab:/# apt-get update && apt-get install -y vim
Back on the host, commit the container with a new tag before stopping or exiting the container.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f63accdae2ab pingimg:1.5 "/bin/bash" About a minute ago Up About a minute modest_lovelace
$ docker commit f63accdae2ab pingimg:1.6
378e0359eedfe902640ff71df4395c3fe9590254c8c667ea3efb54e033f24cbe
$ docker stop f63accdae2ab
f63accdae2ab
Now docker images should show both tags or versions of the image. Note that the updated image shows a larger size.
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
pingimg 1.6 378e0359eedf 43 seconds ago 252.8 MB
pingimg 1.5 1e29ac7353d1 4 minutes ago 209.6 MB
Run the recently committed image, and you can see that vim is installed:
$ docker run -it pingimg:1.6 /bin/bash
root@63dbbb8a9355:/# which vim
/usr/bin/vim
Verify the contents of the previous version of the image, and you should notice that vim is still missing:
$ docker run -it pingimg:1.5 /bin/bash
root#99955058ea0b:/# which vim
root@99955058ea0b:/# which vim
root@99955058ea0b:/# vim
bash: vim: command not found
Hope this helps!
There's a whole branch of software called configuration management that seeks to solve this issue, with solutions such as Ansible and Puppet. Whilst designed with VMs in mind, it is certainly possible to use such solutions with containers.
However, this is not the Docker way. Rather than patch a Docker container, throw it away and replace it with a new one. If you need to install new software, add it to the Dockerfile and build a new container as per #askb's solution. By doing things this way, we can avoid a whole set of headaches (similarly, prefer docker exec to installing ssh in containers).
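For one-off inspection or debugging of a running container, docker exec gives you a shell without installing an SSH server (the container name is a placeholder):

```shell
docker exec -it mycontainer /bin/bash
```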
I'm using Docker on Ubuntu. During the development phase I cloned all the source code from Git on the host, edited it in WebStorm, and then ran it with Node.js inside a Docker container with -v /host_dev_src:/container_src so that I could test it.
Then when I wanted to send them for testing: I committed the container and pushed a new version. But when I pulled and ran the image on the test machine, the source code was missing. That makes sense as in test machine there's no /host_src available.
My current workaround is to clone the source code on the test machine and run docker with -v /host_test_src:/container_src. But I'd like to know if it's possible to copy the source code directly into the container and avoid that manipulation. I'd prefer to just copy, paste and run the image file with the source code, especially since there's no Internet connection on our testing machines.
PS: It seems docker cp only supports copying files from the container to the host.
One solution is to have a git clone step in the Dockerfile which adds the source code into the image. During development, you can override this code with your -v argument to docker run so that you can make changes without rebuilding. When it comes to testing, you just check your changes in and build a new image. Now you have a fully standalone alone image for testing.
Note that if you have a VOLUME instruction in your Dockerfile, you will need to make sure it occurs after the git clone step.
The problem with this approach is that if you are using a compiled language, you only want your binaries to live in the final image. In this case, the git clone needs to be replaced with some code that either fetches or compiles the binaries.
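A sketch of such a Dockerfile (the repo URL and start command are assumptions):

```dockerfile
FROM node:4

# Bake the source into the image; during development, override it
# with: docker run -v /host_dev_src:/container_src ...
RUN git clone https://example.com/you/app.git /container_src
WORKDIR /container_src
RUN npm install

CMD ["npm", "start"]
```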
Please treat your source code as data, then package it as a data container; see https://docs.docker.com/userguide/dockervolumes/
Step 1 Create app_src docker image
Put one Dockerfile inside your git repo like
FROM busybox
ADD . /container_src
VOLUME /container_src
Then you can build source image like
docker build -t app_src .
During development period, you can always use your old solution -v /host_dev_src:/container_src.
Step 2 Transfer this Docker image like an app image
You can transfer this app_src image to the test system the same way as your application image, probably via a Docker registry.
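Since the question mentions test machines without an Internet connection, docker save and docker load let you move the image as a plain file instead of going through a registry:

```shell
docker save -o app_src.tar app_src   # on the build machine
# copy app_src.tar to the test machine, then:
docker load -i app_src.tar
```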
Step 3 Run as data container
On the test system, run the app container on top of it. (I use ubuntu for the demo.)
docker run -d -v /container_src --name code app_src
docker run -it --volumes-from code ubuntu bash
root@dfb2bb8456fe:/# ls /container_src
Dockerfile hello.c
root@dfb2bb8456fe:/#
Hope this helps!
(Credits to https://github.com/toffer/docker-data-only-container-demo , from which I got the detailed ideas.)
Adding to Adrian's answer, I do git clone, and then do
CMD git pull && start-my-service
so the latest code at the checked out branch gets run. This is obviously not for everyone, but it works in some software release models.
You could try having two Dockerfiles. The base one would know how to run your app from a predefined folder, but would not declare it a volume. When developing, you would run this container with your host folder mounted as a volume. The other one, the package one, would inherit from the base one and copy/add the files from your host directory, again without volumes, so that you carry all the files to the tester's host.
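A sketch of the two-Dockerfile idea (image names, paths, and the Node base are assumptions):

```dockerfile
# Dockerfile.base -- runs the app from a predefined folder, no VOLUME
FROM node:4
WORKDIR /app
CMD ["npm", "start"]

# Dockerfile.package -- inherits the base and bakes the files in
# (assumes the base was built with: docker build -t myapp-base -f Dockerfile.base .)
FROM myapp-base
COPY . /app
RUN npm install
```

During development you would run the base image with -v /your/src:/app; for testing you build and ship the package image.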