A while ago I created a VM running postfix and s-nail to pipe mails to a bash script. The script verifies the sender and recipient, scans for some keywords, and makes an HTTP GET request. It is a restricted service running only on our local network.
Now I would like to forge this into a Docker image. As I am new to creating Docker images, I need some help.
How can I pack postfix, s-nail, and the script into a Docker image? Do the services need to be separated, or can everything go in one single container?
There are many different ways to do that.
You can put it all together in one single Docker image: use ubuntu as a base image and install your software on it. It works and is simple, and it is probably a good starting point for learning Docker.
FROM ubuntu
# install dependencies non-interactively (-y and noninteractive debconf), otherwise the build hangs or aborts on prompts
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y postfix s-nail
# you also need an ENTRYPOINT script that starts both services
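As a minimal sketch of such an entrypoint (the script path, alias setup, and log file here are assumptions, not part of your setup):
#!/bin/bash
# hypothetical entrypoint sketch: start postfix, then block in the foreground
# so the container keeps running; paths and names are placeholders
set -e
service postfix start
# postfix delivers incoming mail to your bash script (e.g. via an aliases entry),
# so the only remaining job is to keep a foreground process alive
exec tail -F /var/log/mail.log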
But generally, we try to put only one service in a container. If you want to split things up, you need a way to orchestrate the containers (spinning them up, making sure they can talk to each other).
If you do split things up, then I recommend docker-compose. It's very easy.
If you want a more advanced setup, then you can use kubernetes.
I'm having difficulty understanding Docker. No matter how many tutorials I watch or guides I read, docker-compose looks to me like a way to define multiple Dockerfiles, i.e. multiple containers: I can define environment variables, ports, commands, and base images in both.
I read in other questions/discussions that a Dockerfile defines how to build an image and docker-compose defines how to run an image, but I don't understand that, since I can build Docker containers without having a Dockerfile.
That's mainly for local development, though. Does the Dockerfile have an important role when deploying to AWS, for example (where it probably comes out of the box for EC2)?
So is the reason I can work locally with docker-compose alone that the base image is my computer (taking care of the task the Dockerfile is supposed to do)?
Think about how you'd run some program, without Docker involved. Usually it's two steps:
Install it using a package manager like apt-get or brew, or build it from source
Run it, without needing any of its source code locally
In plain Docker without Compose, similarly, you have the same two steps:
docker pull a prebuilt image with the software, or docker build it from source
docker run it, without needing any of its source code locally
I'd aim to have a Dockerfile that builds a self-contained, immutable image of your application, with all of its source code and library dependencies included. The ideal is that you can docker run your image without -v options to inject source code and without providing the command on the docker run command line.
The reality is that there are a lot of moving parts: you probably need to docker network create a network to get containers to communicate with each other, and use docker run -e environment variables to specify host names and database credentials, and launch multiple containers together, and so on. And that's where Compose comes in: instead of running a series of very long docker commands, you can put all of the details you need in a docker-compose.yml file, check it in, and run docker-compose up to get all of those parts put together.
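To make those moving parts concrete, the plain-Docker version of such a setup might look roughly like this (the network, image names, ports, and credentials are all placeholders):
# create a shared network so containers can reach each other by name
docker network create appnet
# start the database, passing credentials as environment variables
docker run -d --net appnet --name db -e POSTGRES_PASSWORD=secret postgres:15
# start the application, telling it the database host name and password
docker run -d --net appnet --name app -p 8080:8080 \
  -e DATABASE_HOST=db -e DATABASE_PASSWORD=secret myorg/app:latest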
So, do:
Use Compose to start multiple containers together
Use Compose to write down complex runtime options like port mappings or environment variables with host names and credentials
Use Compose to build your image and start a container from it with a single command
Build your application code, and a standard CMD to run it, into your Dockerfile; a minimal example of a matching docker-compose.yml follows below.
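A docker-compose.yml capturing the same pieces as those long commands (all names and values here are placeholders) could look like:
version: "3"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
  app:
    build: .                  # your Dockerfile, with source code and CMD baked in
    ports:
      - "8080:8080"
    environment:
      DATABASE_HOST: db
      DATABASE_PASSWORD: secret
    depends_on:
      - db
# `docker-compose up --build` then builds the image and starts both containers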
I use the following module:
https://docs.ansible.com/ansible/latest/modules/docker_service_module.html?highlight=ansible%20doc
I can create and start a Docker container using this module. However, is it possible to execute tasks (and preserve the changes) in this container?
I mean:
install some yum package
insert some bash script into the container.
Could you give me some clues?
As a general rule, you don't install software on a running container. If you need a container with some software installed in it, you should build a custom image that has the software you need, set up so that it can do everything it needs on its own once it is started. (As an even broader rule, you shouldn't need to docker exec into a running container except to debug things; it definitely isn't part of the core "how to do things with containers" workflow.)
I would recommend following a standard Docker tutorial, such as Docker's official tutorial on building and running custom images. Once you have a working Docker image workflow, you'd use the Ansible docker_container module in place of the docker run command.
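A rough playbook sketch of that workflow, assuming the community.docker collection is available (the image name, build path, and variables are placeholders):
# build a custom image from a Dockerfile, then run it (instead of `docker run`)
- name: Build the custom image
  community.docker.docker_image:
    name: myorg/myservice
    source: build
    build:
      path: /srv/myservice        # directory containing the Dockerfile
- name: Start a container from that image
  community.docker.docker_container:
    name: myservice
    image: myorg/myservice
    state: started
    restart_policy: unless-stopped
    env:
      APP_MODE: "production"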
I am installing OpenStack Keystone now.
A standalone Keystone needs three components: MySQL, Python, and Apache2.
Obviously I can't use all of them as the base, so I made python the base image, and the others were added as RUN statements that install mysql and apache2.
I think those RUN statements are duplicated effort, because all three components already exist as images on the Docker public registry.
Is there any good solution or proper way to reuse the existing external Dockerfiles?
There seems to be some confusion here about what a Dockerfile does: it defines a single Docker image. In general, the recommended way to run applications in Docker is to have a container for each service and have them connect to other services in other containers as needed (more on this later).
In your case, it sounds like your application consists of OpenStack Keystone (which requires Python and Apache to run) and MySQL. So I would install Python & Apache in your Dockerfile, and set up MySQL (possibly just using the image from the public repository) as a separate container that the OpenStack container connects to over the network.
As mentioned above, this scenario is the recommended way to run Docker applications - it follows the Unix paradigm of "each app does only one thing, but does it very well". Each container does one thing only and connects to any other services in other containers. But it is possible to run multiple services in the same container - e.g. Keystone running on Apache/Python AND MySQL in the same container. If this is your goal, you would write a Dockerfile that installs everything and gets everything running together. This Dockerfile is likely to be pretty complicated and will require an ENTRYPOINT that gets both MySQL and Apache working together. You're likely to wind up duplicating a lot of the work that has already gone into the standard MySQL and Apache images.
You can make use of Docker Compose to run your application with mysql, python, and apache2.
Using Docker Compose allows you to control the application setup with a single command. You just need to write a docker-compose.yml file plus the Dockerfiles for the containers you will set up.
In your case you can have one Dockerfile that sets up a Python and Apache2 container, and another Dockerfile with mysql as the base image for setting up that container; see the sketch below.
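As a hedged sketch of what that docker-compose.yml might look like (service names, build paths, image tags, ports, and credentials are placeholders):
version: "3"
services:
  keystone:
    build: ./keystone           # Dockerfile installing Python, Apache2 and Keystone
    ports:
      - "5000:5000"
    environment:
      DB_HOST: db
      DB_PASSWORD: example
    depends_on:
      - db
  db:
    image: mysql:5.7            # official image instead of installing mysql yourself
    environment:
      MYSQL_ROOT_PASSWORD: example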
My goal is to use Docker to create a mail setup running postfix + dovecot, fully configured and ready to go (on Ubuntu 14.04), so that I can easily deploy it on several servers. As far as I understand Docker, the process to do this is:
Spin up a new container (docker run -it ubuntu bash).
Install and configure postfix and dovecot.
If I need to shut down and take a break, I can exit the shell and return to the container via docker start <id> followed by docker attach <id>.
(here's where things get fuzzy for me)
At this point, is it better to export the image to a file, import it on another server, and run it? How do I make sure the container will automatically start postfix, dovecot, and other services when it is run? I also don't quite understand the difference between using a Dockerfile to automate installations and just installing everything manually and exporting the image.
Configure multiple docker images using Dockerfiles
Each Docker container should run only one service, so one container for postfix, one for each other service, and so on. Your running containers can communicate with each other.
Build those images
Push those images to a registry so that you can easily pull them on different servers and have the same setup.
Pull those images on your different servers.
You can pass ENV variables when you start a container to configure it.
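Put together, that workflow might look roughly like this (the registry address, image name, and variables are placeholders):
# on the build machine: build the image from its Dockerfile and push it
docker build -t registry.example.com/mail/postfix:1.0 ./postfix
docker push registry.example.com/mail/postfix:1.0
# on each server: pull the same image and start it, configured via ENV variables
docker pull registry.example.com/mail/postfix:1.0
docker run -d --name postfix -p 25:25 \
  -e MAIL_DOMAIN=example.com registry.example.com/mail/postfix:1.0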
You should not install something directly inside a running container.
This defeats the purpose of having a reproducible setup with Docker.
Your step #2 should be a RUN entry inside a Dockerfile, which is then used with docker build to create an image.
This image could then be used to start and stop running containers as needed.
See the Dockerfile RUN entry documentation. This is usually used with apt-get install to install needed components.
The ENTRYPOINT in the Dockerfile should be set to start your services.
In general it is recommended to have just one process in each image.
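A rough Dockerfile sketch along those lines; the config files and the start-services.sh script are placeholders you would provide yourself:
FROM ubuntu:14.04
# install the mail components at build time, not inside a running container
RUN apt-get update && DEBIAN_FRONTEND=noninteractive \
    apt-get install -y postfix dovecot-imapd dovecot-pop3d
# bake the configuration into the image so every server gets an identical setup
COPY main.cf /etc/postfix/main.cf
COPY dovecot.conf /etc/dovecot/dovecot.conf
COPY start-services.sh /usr/local/bin/start-services.sh
ENTRYPOINT ["/usr/local/bin/start-services.sh"]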
I want to use OpenStack Heat to create an application which consists of several Docker containers, and to monitor some metrics of these containers, such as CPU/memory utilization and other application-specific metrics.
So is it possible to install cloud-init and heat-cfntools when preparing the Docker image via a Dockerfile, and then run a Docker container based on that image with cloud-init and heat-cfntools running in it?
Thanks!
So is it possible to install cloud-init and heat-cfntools when preparing the Docker image via a Dockerfile
It is possible to use cloud-init inside a Docker container, if you (a) have an image with cloud-init installed, (b) have the correct commands configured in your ENTRYPOINT or CMD script, and (c) your container is running in an environment with an available metadata service.
Of those requirements, (c) is probably the most problematic; unless you are booting containers using the nova-docker driver, it is unlikely that your containers will have access to the Nova metadata service.
I am not particularly familiar with heat-cfntools, although a quick glance at the code suggests that it may work without cloud-init by authenticating against the Heat CFN API using ec2-style credentials, which you would presumably need to provide via environment variables or something.
That said, it is typically much less necessary to run cloud-init inside Docker containers, the theory being that if you need to customize an image, you'll use a Dockerfile to build a new one based on that image and re-deploy, or specify any necessary additional configuration via environment variables.
If your tools require monitoring processes on the host, you'll probably want to run with
docker run --pid=host
This is a feature introduced in Docker Engine version 1.5.
See http://docs.docker.com/reference/run/#pid-settings