I use the following module:
https://docs.ansible.com/ansible/latest/modules/docker_service_module.html?highlight=ansible%20doc
I can create and start docker container using this module. However, is it possible to execute tasks (and preserve changes) on this container?
I mean:
install a yum package
insert a bash script into the container.
Could you give me some clues?
As a general rule, you don't install software on a running container. If you need a container with some software installed in it, you should build a custom image that has the software you need, and set it up so that it can do everything it needs on its own once you start it. (As an even broader rule, you shouldn't need to docker exec into a running container except to debug things; it definitely isn't part of the core "how to do things with containers" workflow.)
I would recommend following a standard Docker tutorial, such as Docker's official tutorial on building and running custom images. Once you have a working Docker image workflow, you'd use the Ansible docker_container module in place of the docker run command.
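For illustration, a minimal sketch of that workflow could look like this (the image name, package, and script path are placeholders). First, a Dockerfile that bakes the yum package and the bash script into a custom image:

FROM centos:7
RUN yum -y install nmap-ncat
COPY myscript.sh /usr/local/bin/myscript.sh
RUN chmod +x /usr/local/bin/myscript.sh
CMD ["/usr/local/bin/myscript.sh"]

Then, after building it once with docker build -t myimage:1.0 ., an Ansible task starts a container from that image instead of exec-ing into a running one:

- name: Start a container from the prebuilt image
  docker_container:
    name: myapp
    image: myimage:1.0
    state: started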
I'm having difficulty understanding Docker. No matter how many tutorials I watch or guides I read, to me docker-compose looks like a way to define multiple Dockerfiles, i.e. multiple containers. I can define environment variables, ports, commands, and base images in both.
I read in other questions/discussions that a Dockerfile defines how to build an image and docker-compose defines how to run an image, but I don't understand that: I can build Docker containers without having a Dockerfile.
That's mainly for local development, though. Does the Dockerfile have an important role when deploying to AWS, for example (where it probably comes out of the box for EC2)?
So is the reason I can work locally with only docker-compose that the base image is my computer (doing the job a Dockerfile is supposed to do)?
Think about how you'd run some program, without Docker involved. Usually it's two steps:
Install it using a package manager like apt-get or brew, or build it from source
Run it, without needing any of its source code locally
In plain Docker without Compose, similarly, you have the same two steps:
docker pull a prebuilt image with the software, or docker build it from source
docker run it, without needing any of its source code locally
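Concretely, with plain Docker that might look like this (the image name is just an example):

docker build -t myapp:1.0 .        # build the image from the Dockerfile in the current directory
docker run -d -p 8080:8080 myapp:1.0   # run it; no source code needed on the host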
I'd aim to have a Dockerfile that creates an immutable image of your application, with all of its source code and library dependencies as part of the image. The ideal is that you can docker run your image without -v options to inject source code or providing the command at the docker run command line.
The reality is that there are a lot of moving parts: you probably need to docker network create a network to get containers to communicate with each other, and use docker run -e environment variables to specify host names and database credentials, and launch multiple containers together, and so on. And that's where Compose comes in: instead of running a series of very long docker commands, you can put all of the details you need in a docker-compose.yml file, check it in, and run docker-compose up to get all of those parts put together.
So, do:
Use Compose to start multiple containers together
Use Compose to write down complex runtime options like port mappings or environment variables with host names and credentials
Use Compose to build your image and start a container from it with a single command
Build your application code, and a standard CMD to run it, into your Dockerfile.
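Put together, a minimal docker-compose.yml along these lines (service names, ports, and credentials are only examples) captures all of that in one file:

version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db
      DB_PASSWORD: example
  db:
    image: mariadb:10.5
    environment:
      MYSQL_ROOT_PASSWORD: example

A single docker-compose up --build then builds the app image, creates the shared network, and starts both containers.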
I'm using a platform (Cytomine) on Ubuntu 18.04 to run some deep-learning containerized applications (this platform handles the Docker images and containers automatically, so I only need to create the image and provide its download URL to the platform). So far it's working well, but now I need to enable GPU support to run the model efficiently. So I did some local tests with nvidia-docker to manually run the model container with GPU support; it was really easy to get it working because I just had to add one option to the run command:
docker run --gpus all
However, because I cannot add this option to the code on the Cytomine platform I need to find a way of adding/enabling that option by default to all the containers run by docker.
I tried adding this option to the files /etc/docker/daemon.json and /etc/docker/key.json and then restarted Docker with sudo systemctl restart docker. However, it didn't work.
Also, I found out how to create Docker config files (docker config); however, this seems to work only with Docker Swarm, and I'm not going to use a Swarm for this project.
Thus, I'm looking for a straightforward solution that can be deployed properly. Is there any way to enable this option (--gpus all) by default when running any Docker container? (Like somehow including it in the Dockerfile?)
Thanks!
I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the earlier created dynamic user X.
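For illustration, the script does roughly the following (the image and names are placeholders):

docker run -d --name devbox my-dev-image sleep infinity
docker exec devbox groupadd -g "$(id -g)" devgrp
docker exec devbox useradd -m -u "$(id -u)" -g devgrp "$USER"          # dynamic user X
docker exec devbox useradd -m -o -u "$(id -u)" -g devgrp vscode        # 'alias' user with the same UID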
In VSCode I can attach to this container. Two questions:
How can I set up VSCode to perform all actions as the 'vscode' user or as the user X? (When using devcontainer.json to create the container this is trivial, but now I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. E.g. manual intervention and committing the image, as suggested below, is possible but would make it much harder for users to just use my Docker image.
I updated to vscode 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VSCode installs extensions after creating the container by using the docker exec command.
And now the recipe. The easiest way is to start from a container already created by VSCode:
Run "Open Folder in Container" to create the dev container.
Once the container is ready and you can work in VSCode, stop your environment by clicking "Close Remote Connection".
Run docker ps -a. You should see the most recently stopped containers, something like:
a7aa5af7ec08 vsc-typescript-2ea9f347739c5397afc431028000c02b
This is your container with all extensions installed. It doesn't matter whether you installed the extensions manually or configured them via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a Docker image with all your favorite software installed. You can push this image to your favorite Docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VSCode to this container. This answers your first question.
Also, you can review the VSCode documentation and use devcontainer.json configurations when you attach to already running containers, and even to containers running on remote machines.
VSCode now implements a "remoteUser" property which you can set in the image configuration. This ensures that VSCode logs into the container as the correct user.
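For example, an attached container configuration file along these lines (the extension ID is only an example) covers both points, assuming a VSCode version that supports these properties for attached containers:

{
  "remoteUser": "vscode",
  "extensions": [
    "dbaeumer.vscode-eslint"
  ]
}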
In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. SystemD) doesn’t get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try to change the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
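With the image from the question, that would be something like:

docker run -d --privileged --name mariadb-test pbellamk/mariadb /sbin/init
docker exec -it mariadb-test sh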
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
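As a rough sketch, an s6-overlay based image is usually assembled along these lines (the release version and URL are only examples; check the s6-overlay README for the current installation instructions):

FROM centos:7
RUN yum -y install mariadb-server
ADD https://github.com/just-containers/s6-overlay/releases/download/v2.2.0.3/s6-overlay-amd64.tar.gz /tmp/
RUN tar xzf /tmp/s6-overlay-amd64.tar.gz -C /
ENTRYPOINT ["/init"]

Each service then gets its own run script under /etc/services.d/, and s6 supervises them all once the container starts.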
You can avoid running a systemd daemon inside a Docker container altogether. You can even avoid writing a special start.sh script - that is another benefit of using the docker-systemctl-replacement script.
Its systemctl.py can parse normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services - those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine in your setup specifically.
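A Dockerfile using it could look roughly like this (it assumes you have copied systemctl.py from the project's repository next to the Dockerfile, and that the base image provides a Python interpreter for it):

FROM centos:7
RUN yum -y install mariadb-server
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
CMD ["/usr/bin/systemctl"]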
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its Readme. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.
My goal is to use Docker to create a mail setup running postfix + dovecot, fully configured and ready to go (on Ubuntu 14.04), so I could easily deploy on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and other services upon running it? I also don't quite understand the difference between using a Dockerfile to automate installations vs just installing it manually and exporting the image.
Configure multiple docker images using Dockerfiles
Each Docker container should run only one service: one container for Postfix, one for Dovecot, and so on. You can have your running containers communicate with each other.
Build those images
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it.
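For example (the variable names here are made up - they depend entirely on how you build the image):

docker run -d --name postfix -e MAIL_DOMAIN=example.com -e MAIL_HOSTNAME=mail.example.com my-postfix-image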
You should not install something directly inside a running container.
This defeats the purpose of having a reproducible setup with Docker.
Your step #2 should be RUN entries inside a Dockerfile, which is then used with docker build to create an image.
This image could then be used to start and stop running containers as needed.
See the Dockerfile RUN entry documentation. This is usually used with apt-get install to install needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general it is recommended to have just one process in each image.
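As a rough sketch for one of those single-service images (Dovecot here; the config file is one you provide, and -F keeps the daemon in the foreground so the container stays up):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y dovecot-imapd dovecot-pop3d
COPY dovecot.conf /etc/dovecot/dovecot.conf
ENTRYPOINT ["dovecot", "-F"]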