How to create production-ready Docker images using best practices?

Creating Docker images is a simple task for testing environments, but when it comes to production implementations we have to follow best practices to avoid security and workflow issues.
What are the best practices for creating a production-ready Docker image?

As described in Create Production Docker Images in 5 Steps by DevopsAnswers, the following steps serve as a comprehensive guide to creating production-ready Docker images.
When creating production Docker images, you should have a thorough understanding of Docker best practices.
Step 1: Use lightweight base Docker images
It's better to use a lightweight base image rather than a bulky one, since the resulting Docker image is faster to build, push, and pull when it's smaller in size.
If you plan to use Docker in highly critical production systems, where you cannot afford even a few seconds of downtime, the first thing to choose is a lightweight base image for your custom Docker image.
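As a quick illustration (the image tags below are just examples), you can compare a minimal base image with a full distribution image locally:
# Pull a minimal base image and a full distribution image
docker pull alpine:3.18
docker pull ubuntu:22.04
# Compare on-disk sizes; the Alpine image is typically an order of magnitude smaller
docker images | grep -E '^(alpine|ubuntu)'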
Step 2: Reduce intermediate layers
In a Dockerfile, every instruction such as FROM, LABEL, RUN, CMD, ADD, etc. adds a new layer to the Docker image. Reducing the number of times the same instruction is repeated is therefore a best practice, as it gives you a slightly smaller image.
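For example (the packages here are only illustrative), chaining related commands into a single RUN instruction produces one layer instead of three:
# Three separate RUN instructions create three layers
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# One chained RUN instruction creates a single, smaller layer
RUN apt-get update \
 && apt-get install -y curl \
 && rm -rf /var/lib/apt/lists/*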
Step 3: Choose specific versions
It's good practice to pin specific versions in Docker instructions, because it keeps things stable for a production implementation.
Imagine we use ubuntu:latest as the base image. It will use the currently available latest Ubuntu image for our custom Docker image, and we will set up all the software components on top of that Ubuntu version.
When Ubuntu updates the latest tag with a newer base image on Docker Hub, you might experience package dependency issues or incompatibilities in your production Docker image.
In addition, we should always try to install specific package versions rather than the general package.
Example
Recommended: apt-get install mysql-server-5.5
Not recommended: apt-get install mysql-server
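A minimal Dockerfile sketch of pinning both the base image tag and the package version (the versions shown are only illustrative):
# Pin the base image to a specific release instead of "latest"
FROM ubuntu:18.04
# Pin the package to a specific version rather than the general package
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server-5.7 \
 && rm -rf /var/lib/apt/lists/*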
Step 4: Do not include sensitive data
Handling sensitive data such as database credentials and API keys is a challenging task in Docker.
Do not hard-code any login credentials within a Docker image.
To overcome this limitation, use environment variables effectively.
For example, if you create a Drupal image that connects to a MySQL database, you can keep the Drupal MySQL DB settings blank as below.
$databases = array (
  'default' =>
    array (
      'default' =>
        array (
          'database' => '',
          'username' => '',
          'password' => '',
          'host' => '',
          'port' => '',
          'driver' => 'mysql',
          'prefix' => '',
        ),
    ),
);
Now we can use an ENTRYPOINT script that reads environment variables at runtime and fills in the Drupal MySQL DB credentials, as below.
#!/bin/sh
set -e
# Apache gets grumpy about PID files pre-existing
rm -f /var/run/apache2.pid
# Define Drupal home file path
DRUPAL_HOME="/var/www/html"
# Define Drupal settings file path
DRUPAL_SETTINGS_FILE="${DRUPAL_HOME}/sites/default/settings.php"
# Check the availability of environment variables
if [ -n "$DRUPAL_MYSQL_DB" ] && [ -n "$DRUPAL_MYSQL_USER" ] && [ -n "$DRUPAL_MYSQL_PASS" ] && [ -n "$DRUPAL_MYSQL_HOST" ] ; then
  echo "Setting up MySQL DB in $DRUPAL_SETTINGS_FILE"
  # Set database name
  sed -i "s/'database' *=> *''/'database' => '"$DRUPAL_MYSQL_DB"'/g" $DRUPAL_SETTINGS_FILE
  # Set MySQL username
  sed -i "s/'username' *=> *''/'username' => '"$DRUPAL_MYSQL_USER"'/g" $DRUPAL_SETTINGS_FILE
  # Set MySQL password
  sed -i "s/'password' *=> *''/'password' => '"$DRUPAL_MYSQL_PASS"'/g" $DRUPAL_SETTINGS_FILE
  # Set MySQL host
  sed -i "s/'host' *=> *''/'host' => '"$DRUPAL_MYSQL_HOST"'/g" $DRUPAL_SETTINGS_FILE
fi
# Stream Apache logs to the container's stdout
tail -F /var/log/apache2/* &
# Start Apache in the foreground
exec /usr/sbin/apache2ctl -D FOREGROUND
Finally, you can simply define the environment variables when running the container, like below.
docker run -d -t -i \
  -e DRUPAL_MYSQL_DB='database' \
  -e DRUPAL_MYSQL_USER='user' \
  -e DRUPAL_MYSQL_PASS='password' \
  -e DRUPAL_MYSQL_HOST='host' \
  -p 80:80 \
  -p 443:443 \
  --name <container name> \
  <custom image>
Step 5: Run CMD/ENTRYPOINT as a non-privileged user
It's always a good choice to run production systems as a non-privileged user, which is also better from a security perspective.
You can simply put a USER entry before CMD or ENTRYPOINT in the Dockerfile, as follows.
# Set running user of ENTRYPOINT
USER www-data
# Start entrypoint
ENTRYPOINT ["entrypoint"]

Related

Docker "artifact image" vs "services image" vs "single FROM image" vs "multiple FROM image"

I'm trying to understand the pros and cons of these four methods of packaging an application using Docker after development:
Use a very lightweight image (such as Alpine) as the base of the image containing the main artifact, then update the original docker-compose file to use it along with the other services when creating and deploying the final containers.
Something else I could do is first docker commit, then use the resulting image as the base image for my artifact image.
One other method could be using a single FROM only, to base my image on one of the required services, and then use RUN commands to install the other required services as "Linux packages" (e.g. apt-get install another-service) inside the container when it's run.
Should I use multiple FROMs for those images? Wouldn't that be complicated and only needed in more complex projects? Also, it seems unclear how to decide in what order those FROMs should be written if none of them is more important than the others as far as my application is concerned.
For context: in the development phase, I used a docker-compose file to run multiple Docker containers. Then I used these containers to develop a web application (accessing files on the host machine through a bind volume). Now I want to write a Dockerfile to create an image that will contain my application's artifact, plus the services present in the initial docker-compose file.
I'd suggest these rules of thumb:
A container only runs one program. If you need multiple programs (or services), run multiple containers.
An image contains the minimum necessary to run its application, and no more (and no less -- do not depend on bind mounts for the application to be functional).
I think these best match your first option. Your image is built FROM a language runtime, COPYs its code in, and does not include any other services. You can then use Compose or another orchestrator to run multiple containers in parallel.
Using Node as an example, a super-generic Dockerfile for almost any Node application could look like:
# Build the image FROM an appropriate language runtime
FROM node:16
# Install any OS-level packages, if necessary.
# RUN apt-get update \
# && DEBIAN_FRONTEND=noninteractive \
# apt-get install --no-install-recommends --assume-yes \
# alphabetical \
# order \
# packages
# Set (and create) the application directory.
WORKDIR /app
# Install the application's library dependencies.
COPY package.json package-lock.json ./
RUN npm ci
# Install the rest of the application.
COPY . .
# RUN npm run build
# Set metadata for when the application is run.
EXPOSE 3000
CMD npm run start
A matching Compose setup that includes a PostgreSQL database could look like:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      PGHOST: db
  db:
    image: postgres:14
    volumes:
      - dbdata:/var/lib/postgresql/data
    # environment: { ... }
volumes:
  dbdata:
Do not try to (3) run multiple services in a container. This is complex to set up, it's harder to manage if one of the components fails, and it makes it difficult to scale the application under load (you can usually run multiple application containers against a single database).
Option (2) suggests doing setup interactively and then docker commit an image from it. You should almost never run docker commit, except maybe in an emergency when you haven't configured persistent storage on a running container; it's not part of your normal workflow at all. (Similarly, minimize use of docker exec and other interactive commands, since their work will be lost as soon as the container exits.) You mention docker save; that's only useful to move built images from one place to another in environments where you can't run a Docker registry.
Finally, option (4) discusses multi-stage builds. The most obvious use of these is to remove build tools from a final build; for example, in our Node example above, we could RUN npm run build, but then have a final stage, also FROM node, that NODE_ENV=production npm ci to skip the devDependencies from package.json, and COPY --from=build-stage the built application. This is also useful with compiled languages where a first stage contains the (very large) toolchain and the final stage only contains the compiled executable. This is largely orthogonal to the other parts of the question; you could update the Dockerfile I show above to use a multi-stage build without changing the Compose setup at all.
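A rough sketch of that multi-stage variant (it assumes the build step emits a dist/ directory; adjust the copied paths to your actual build output and runtime files):
# Build stage: install all dependencies (including devDependencies) and build the app
FROM node:16 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: production dependencies plus the built output only
FROM node:16
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["npm", "run", "start"]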
Do not bind-mount your application code into the container. This hides the work that the Dockerfile does, and it's possible the host filesystem will have a different layout from the image (possibly due to misconfiguration). It means you're "running in Docker", with the complexities that entails, but it's not the image you'll actually deploy. I'd recommend using a local development environment (try running docker-compose up -d db to get a database) and then using this Docker setup for final integration testing.
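As a sketch of that workflow (this assumes you also publish the database port on the db service, e.g. ports: ['5432:5432'], which the Compose file above does not do by default):
# Start only the database container in the background
docker-compose up -d db
# Run the application locally against it (variable names follow the Compose example above)
PGHOST=localhost PGPORT=5432 npm run start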

How to start two services in one docker container

I need to start two services/commands in Docker. From Google I learned that I can use ENTRYPOINT and CMD to pass different commands, but when I start the container only the ENTRYPOINT script runs and CMD does not seem to run. Since I am new to Docker, can you help me understand how to run two commands?
Dockerfile :
FROM registry.suse.com/suse/sle15
ADD repolist/*.repo /etc/zypp/repos.d/
RUN zypper refs && zypper refresh
RUN zypper in -y bind
COPY docker-entrypoint.d/* /docker-entrypoint.d/
COPY --chown=named:named named /var/lib/named
COPY --chown=named:named named.conf /etc/named.conf
COPY --chown=named:named forwarders.conf /etc/named.d/forwarders.conf
ENTRYPOINT [ "./docker-entrypoint.d/startbind.sh" ]
CMD ["/usr/sbin/named","-g","-t","/var/lib/named","-u","named"]
startbind.sh:
#! /bin/bash
/usr/sbin/named.init start
Thanks & Regards,
Mohamed Naveen
You can use the Supervisor tool to manage multiple services inside a single Docker container.
Check out the example below (running Redis and a Django server using a single CMD):
Dockerfile:
# Base Image
FROM alpine
# Installing required tools
RUN apk --update add nano supervisor python3 py3-pip redis
# Adding Django Source code to container
ADD /django_app /src/django_app
# Adding supervisor configuration file to container
ADD /supervisor /src/supervisor
# Installing required python modules for app
RUN pip3 install -r /src/django_app/requirements.txt
# Exposing container port for binding with host
EXPOSE 8000
# Using Django app directory as home
WORKDIR /src/django_app
# Initializing Redis server and Gunicorn server from supervisors
CMD ["supervisord","-c","/src/supervisor/service_script.conf"]
service_script.conf file
## service_script.conf
## Main Supervisor process; nodaemon=true means we are not running in daemon mode
[supervisord]
nodaemon=true

## 1st service: Redis. autorestart=true restarts it on failure,
## and errors/output are logged to the container's standard output.
[program:redis_script]
command=redis-server
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0

## 2nd service: the Django app served by Gunicorn, with the same log settings
[program:django_service]
command=gunicorn --bind 0.0.0.0:8000 django_app.wsgi
autostart=true
autorestart=true
stderr_logfile=/dev/stdout
stderr_logfile_maxbytes = 0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes = 0
Final output: Redis and Gunicorn services running in the same Docker container.
Options for running more than one service within a container are described really well in this official Docker article: multi-service_container.
I'd recommend reviewing why you need two services in one container (shared data volume, init, etc.), because by properly separating the services you'll have a ready-to-scale architecture, more useful logs, easier lifecycle/resource management, and easier testing.
Within startbind.sh you can do:
#! /bin/bash
# Start the second service here and push it to the background:
/usr/sbin/secondservice.init start &
# Then run the last command:
/usr/sbin/named.init start
Your /usr/sbin/named.init start command (the last command in the entrypoint) must NOT go into the background; you need to keep it in the foreground.
If this last command is not kept in the foreground, the container will exit.
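As a minimal sketch (reusing the named flags from the question's original CMD; the second service name is a placeholder), keeping the main process in the foreground could look like:
#!/bin/bash
# Start the secondary service in the background (name is illustrative)
/usr/sbin/secondservice.init start &
# Keep named in the foreground (-g) so the container stays alive
exec /usr/sbin/named -g -t /var/lib/named -u named
Using exec lets named replace the shell as PID 1, so stop signals sent to the container reach it directly.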
You can add both service start commands to startbind.sh. You can also use the RUN instruction; RUN executes commands inside the image at build time. If that doesn't work, feel free to ask for more help.

ECS Container Environment Configuration

I have a recently-Dockerized web app that I would like to get running on AWS ECS, and a few fundamental concepts (which I don't see explained in the AWS docs) are throwing me off.
First, when you Edit/configure a new container, it asks you to specify the image to use, but then also has an Environment section:
The Entry point, Command and Working directory fields look suspiciously similar to the commands I already specified when creating my Docker image (here's my Dockerfile):
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
So if ECS is asking me for an image (that's already been built using this Dockerfile), why in tarnation do I need to re-specify the exact same values for WORKDIR, EXPOSE, ENTRYPOINT, CMD, etc.?!?
Also outside of ECS I run my container like so:
docker run -it -p 9200:9200 -d --net="host" --env-file ~/myapp-local.env --name myapp myapp
Notice how I specify the env file? Does ECS support env files, or do I really have to enter each and every env var from my env file into this UI here?
Also I see there is a Docker Labels section near the bottom:
Are these different than env vars, or are they interchangeable?
Yes, you need to add environment variables either through the UI or through the CLI.
For the CLI, you pass them as part of a JSON task definition.
Also, if you have already specified these values in the Dockerfile, then you don't need to pass them again.
Any values passed externally will override the internal/default values from the Dockerfile.
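As a rough sketch (the family, image URI, memory value, and variable names below are illustrative), environment variables go into the container definition of a task definition registered with the AWS CLI:
cat > taskdef.json <<'EOF'
{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 9200 }],
      "environment": [
        { "name": "SPRING_PROFILES_ACTIVE", "value": "production" },
        { "name": "DB_HOST", "value": "db.example.com" }
      ]
    }
  ]
}
EOF
# Register the task definition with ECS
aws ecs register-task-definition --cli-input-json file://taskdef.json
Docker labels, by contrast, are just metadata attached to the container; they are not passed to the process as environment variables.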

Confusion while deploying docker-composer image

I've been working on a sample Ruby on Rails application and deploying its Docker image to a Linux server (Ubuntu 14.04).
Here is my Dockerfile:
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
# CMD bundle exec rails s -p 3000 -b 0.0.0.0
# EXPOSE 3000
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    image: atulkhanduri/rails_docker_demos
    volumes:
      - .:/rails_docker_demo
    ports:
      - "3000:3000"
    depends_on:
      - db
deploy.sh:
#!/bin/bash
docker build -t atulkhanduri/rails_docker_demo .
docker push atulkhanduri/rails_docker_demo
ssh username@ip-address << EOF
docker pull atulkhanduri/rails_docker_demo:latest
docker stop web || true
docker rm web || true
docker rmi atulkhanduri/rails_docker_demo:current || true
docker tag atulkhanduri/rails_docker_demo:latest atulkhanduri/rails_docker_demo:current
docker run -d --restart always --name web -p 3000:3000 atulkhanduri/rails_docker_demo:current
EOF
Now my problem is that I'm not able to use docker-compose commands like docker-compose up to run the application server.
When I uncomment the last two lines from the Dockerfile, i.e.,
CMD bundle exec rails s -p 3000 -b 0.0.0.0
EXPOSE 3000
then I'm able to run the server on port 3000, but I get the error could not translate host name "db" to address: Name or service not known (my database.yml has "db" as the host). This is because the postgres image is not running, since I'm not using the docker-compose file.
EDIT:
Output of docker network ls:
NETWORK ID NAME DRIVER SCOPE
b466c9f566a4 bridge bridge local
7cce2e53ee5b host host local
bfa28a6fe173 none null local
P.S.: I've searched a lot on the internet but am not yet able to use the docker-compose file.
Assumptions
If I am reading what you've done here correctly, my answer assumes the following two things.
You are using docker-compose to run the database container.
You are using plain docker commands (not docker-compose) to start the application server ("web").
First, I would suggest not doing that, it is a lot simpler to use docker-compose for both. However, I'll answer based on the above, assuming that there is some valid reason you cannot use docker-compose to run the "web" container.
About container and network names
When you run the docker-compose command to start the db container, two things happen (among others).
The container is given a new name, composed of the directory you run the compose setup from, the static name in compose (db), and a number. So let's say you have this all in a directory named myapp; you would have a new container named myapp_db_1. You can see what it is named using docker ps.
A network bridge is created if it didn't already exist, named something like myapp_default - again, named after the directory that the compose setup is inside of.
Connecting to the right network
The problem is that your non-compose container is attached to the default network (probably docker_default), but your db container is attached to myapp_default. The two networks do not know about each other. You need to connect them. It probably makes more sense to tell the web app container to attach to the compose network.
First, get the correct network name. You can see all networks using docker network ls. It might look like this:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
c1f5764a112b bridge bridge local
175efb89adef docker_default bridge local
5185ff0e1054 myapp_default bridge local
Once you have the correct name, update your run command to know about the network using the --network option.
docker run -d --restart always --name web \
-p 3000:3000 --network myapp_default \
atulkhanduri/rails_docker_demo:current
Once it is attached to the proper network, the name "db" should resolve correctly.
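Alternatively (as a sketch, reusing the names above), an already-running container can be attached to the Compose network without recreating it:
# Attach the running "web" container to the Compose network
docker network connect myapp_default web
# Verify which networks the container is now attached to
docker inspect web --format '{{json .NetworkSettings.Networks}}'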
If you used docker-compose to start both of them, this would not be necessary (this is one of the things docker-compose just takes care of for you silently).
Getting this to run on your server
In the comments, you mention that you are having some issues with compose on the server. Specifically you said:
Do I need to copy my complete project on the server? Can't I run the application from docker image only? Actually, I've copied docker-compose in server and it throws errors for Gemfile, then I copied Gemfile, then it says it should be a rails app. So I guess I need to copy my complete folder in server. Can you please confirm?
Let's look at some parts of your Dockerfile. I'll add some comments inline.
## Make a new directory, and then make it the current directory
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
## Copy Gemfile and Gemfile.lock into this directory from outside
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
## Run the bundle installer, which will install to this directory
RUN bundle install
## Finally, copy everything from the outside local dir to here
ADD . /rails_docker_demo
So, clearly, /rails_docker_demo is your application directory within the container. You've installed a bunch of stuff here, and this will become a part of your image. When you push your image to the registry, then pull it down on the server (as you do in the deploy script), this will all come with it.
Now let's look at (some of) docker-compose.yml.
services:
  web:
    volumes:
      - .:/rails_docker_demo
Here you have defined a volume mount, mounting the current directory (wherever docker-compose.yml lives) as /rails_docker_demo. When you do that, whatever happens to exist on the server is now available in /rails_docker_demo, but this mount undoes all the work from the Dockerfile that I just mentioned above. Instead of having the resources you installed when you built the image, you have only whatever is on the server in the . directory. The mount is on top of the image's existing /rails_docker_demo directory, hiding its contents and replacing them with whatever is on the server at the moment.
Unless there is a reason you put this mount here, you probably just need to remove that volume mount from docker-compose.yml. You will still need docker-compose.yml on the server, but you should not need the rest of it (aside from the image, of course).
This mount you have done is a useful thing - for development purposes. It would let you use the container to run the application and quickly have code changes show up (without rebuilding the image). But in the case of your deployment, it is just causing trouble.
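For reference, a sketch of the docker-compose.yml from the question with that development volume mount removed (everything else unchanged):
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    image: atulkhanduri/rails_docker_demos
    ports:
      - "3000:3000"
    depends_on:
      - db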
Try moving the EXPOSE above CMD, e.g.
FROM ruby:2.1.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /rails_docker_demo
WORKDIR /rails_docker_demo
ADD Gemfile /rails_docker_demo/Gemfile
ADD Gemfile.lock /rails_docker_demo/Gemfile.lock
RUN bundle install
ADD . /rails_docker_demo
EXPOSE 3000
CMD bundle exec rails s -p 3000 -b 0.0.0.0

how to ignore some container when i run `docker-compose rm`

I have four containers: node, redis, mysql, and data. When I run docker-compose rm, it removes all of my containers, including the data container. My MySQL data lives in that container and I don't want to remove it.
Why must I remove those containers?
Sometimes I have to change some configuration files for node and mysql and rebuild, so I must remove the containers and start again.
I have searched Google over and over and found nothing.
As things stand, you need to keep your data containers outside of Docker Compose for this reason. A data container shouldn't be running anyway, so this makes sense.
So, to create your data-container do something like:
docker run --name data mysql echo "App Data Container"
The echo command will complete and the container will exit immediately, but as long as you don't docker rm the container you will still be able to use it in --volumes-from commands, so you can do the following in Compose:
db:
  image: mysql
  volumes_from:
    - data
And just remove any code in docker-compose.yml to start up the data container.
An alternative to docker-compose, written in Go (https://github.com/michaelsauter/crane), lets you create container groups -- including overriding the default group so that you can ignore your data containers when rebuilding your app.
Given you have a "crane.yaml" with the following containers and groups:
containers:
  my-app:
    ...
  my-data1:
    ...
  my-data2:
    ...
groups:
  default:
    - "my-app"
  data:
    - "my-data1"
    - "my-data2"
You can build your data containers once:
# create your data-only containers (safe to run several times)
crane provision data # needed when building from Dockerfile
crane create data
# build/start your app.
crane lift -r # similar to docker-compose build && docker-compose up
# Force re-create of your data-only containers...
crane create --recreate data
PS! Unlike docker-compose, even if building from Dockerfile, you MUST specify an "image" -- when not pulling, this is the name docker will give the image locally! Also note that the container names are global, and not prefixed by the folder name the way they are in docker-compose.
Note that there is at least one major pitfall with crane: it simply ignores misplaced or wrongly spelled fields! This makes it harder to debug than docker-compose YAML.
@AdrianMouat Now I can specify a *.yml file when starting all containers with the new version 1.2rc of docker-compose (https://github.com/docker/compose/releases), like this:
file: data.yml
data:
  image: ubuntu
  volumes:
    - "/var/lib/mysql"
Thanks for your very useful answer.
