Multiple Dockerfiles - docker

I am trying to use docker and want to create an Ubuntu base with three containers that do the following:
Container: Install Wildfly
Container: Install MySQL
Container: Other Required Packages
Does that mean I have to create three Dockerfiles in three different directories, each containing the following top line?
FROM ubuntu:18.04

Orchestrate the containers with docker-compose
1- Create docker-compose.yml
2- Inside define:
version: '3'
services:
  wildfly:
    build:
      context: .
      dockerfile: Dockerfile_Wildfly
  mysql:
    build:
      context: .
      dockerfile: Dockerfile_Mysql
  anotherpackages:
    build:
      context: .
      dockerfile: Dockerfile_AnotherPackages
It is not always the case that you need to write a Dockerfile; for the database service, for example, you can simply pull the image from Docker Hub and use it directly, something like below:
db:
  image: mysql
3- Create the files and define in each the commands you prefer:
Dockerfile_Wildfly
FROM wildfly
Dockerfile_Mysql
FROM mariadb
Dockerfile_AnotherPackages
FROM node
FROM nginx

You can create many Dockerfiles and specify their names in the build command, as suggested in another answer (Krumelur's answer), but you can also use Docker Compose and pull the images directly from docker.io (if the base images for those dependencies on the hub match your needs).
That way you don't need any Dockerfile at all.
It should look like this:
version: '3.3'
services:
  wildfly:
    # this image will be automatically downloaded from your registry (by default Docker Hub)
    image: jboss/wildfly
    ports:
      - '8080:8080'
      - '9990:9990'
    volumes:
      - 'wildfly_data:/wildfly_data'
    environment:
      - WILDFLY_PASSWORD=password
  mysql:
    # this image will be automatically downloaded from your registry (by default Docker Hub)
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: 'db'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'user'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      # <Port exposed>:<MySQL port running inside container>
      - '3306:3306'
    expose:
      # Opens port 3306 on the container
      - '3306'
    # Where our data will be persisted
    volumes:
      - my-db:/var/lib/mysql
  otherService:
    image: busybox
volumes:
  my-db:
  wildfly_data:
Then you just need to run the command: docker-compose up
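For example, a typical session might look like this (a minimal sketch; flags as commonly used):
docker-compose up -d     # build any missing images and start all services detached
docker-compose ps        # verify the services are up
docker-compose logs -f   # follow the combined logs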

You can have more than one Dockerfile in the same directory if desired. To specify which Dockerfile to use, pass the -f argument, e.g.
docker build -f wildfly.Dockerfile ./wildfly
docker build -f mysql.Dockerfile ./mysql
docker build -f other.Dockerfile ./other
In Compose, these arguments correspond to the dockerfile and context properties.
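A minimal compose sketch of that mapping, assuming each Dockerfile lives inside its own context directory:
version: '3'
services:
  wildfly:
    build:
      context: ./wildfly
      dockerfile: wildfly.Dockerfile
  mysql:
    build:
      context: ./mysql
      dockerfile: mysql.Dockerfile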
You can, of course, have them share the same context, e.g.
docker build -f wildfly.Dockerfile .
docker build -f mysql.Dockerfile .
docker build -f other.Dockerfile .
Just be aware that the context is sent in full to the daemon (respecting .dockerignore) so this might lead to longer build times if there is a lot of redundant data.
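If the shared context contains large directories the builds don't need, a .dockerignore along these lines (entries are illustrative) keeps them out of the upload:
.git
node_modules
*.log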
If there is a lot of reuse between the Dockerfiles, you can even have all of them in one file, e.g.
FROM ubuntu:20.04 as base
...
FROM base AS wildfly
(install wildfly)
FROM base AS mysql
(install mysql)
...
Then you can build the specific image with e.g.
docker build --target wildfly .
In Compose, these arguments correspond to the target and context properties.
These are called multi-stage builds. They are not always a good idea, but they are sometimes helpful to mitigate Docker's lack of support for #include.
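As a minimal sketch of selecting a stage from Compose (note that build.target needs Compose file format 3.4 or newer; stage names match the Dockerfile above):
version: '3.4'
services:
  wildfly:
    build:
      context: .
      target: wildfly
  mysql:
    build:
      context: .
      target: mysql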

Related

Can you use the current project_name in a docker compose file?

I see lots of questions around setting/changing the COMPOSE_PROJECT_NAME or PROJECT_NAME using ENV variables.
I'm fine with the default project name, but I would like to reference it in my compose file.
version: "3.7"
services:
app:
build: DockerFile
container_name: app
volumes:
- ./:/var/app
networks:
- the-net
npm:
image: ${project_name}_app
volumes:
- ./:/var/app
depends_on:
- app
entrypoint: [ 'npm' ]
networks:
- the-net
npm here is arbitrary; hopefully the fact that it could be run as its own container, or in other ways, does not distract from the question.
Is it possible to reference the project name without setting it manually first?
Unfortunately it is not possible.
As alluded to, you can create a .env file and populate it with COMPOSE_PROJECT_NAME=my_name, but the config option does not present itself in your environment by default.
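A minimal sketch of that workaround (assuming you are willing to commit the name): Compose also reads .env for variable substitution, so the same variable becomes referenceable in the compose file itself.
# .env
COMPOSE_PROJECT_NAME=my_name
Then in docker-compose.yml:
npm:
  image: ${COMPOSE_PROJECT_NAME}_app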
Unfortunately, the env substitution in docker-compose is fairly limited, meaning we cannot use the available PWD env variable and greedy-match on it at all:
$ cd ~
$ pwd
/home/tqid
$ echo "Base Dir: ${PWD##*/}"
Base Dir: tqid
When we use this reference, compose has issues:
$ docker-compose up -d
ERROR: Invalid interpolation format for "image" option in service "demo": "${PWD##*/}"
It's probably better to be explicit anyway: COMPOSE_PROJECT_NAME is based on your directory name, so if someone clones the project into a differently named folder it gets out of whack. Including the .env file in source control provides a reusable and consistent place to reference the name.
https://docs.docker.com/compose/reference/envvars/#compose_project_name
Using the same image as another container was what I was after ... reuse the image and change the entry point.
Specify the same build: options for both containers.
This seems inefficient, in that it will trigger the build sequence twice and docker images will list both of them. However, the way Docker's layer caching works, if identical RUN commands are run on identical input images, the resulting layer will simply be reused, and the two final images will have the same image ID; they will literally be the same image with two names.
The context I've run into this the most is with a Python application where the same code base is used for a Django or Flask Web server, plus a Celery worker. The Docker-level setup is fairly language-independent, though: specify the same build: for both containers, and override the command: for the container(s) that need to do a non-default task.
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
    environment:
      REDIS_HOST: redis
  worker:
    build: . # <-- same as app
    command: npm run worker # <-- overrides Dockerfile CMD
    environment:
      REDIS_HOST: redis
  redis:
    image: redis
It is also valid to specify build: and image: together in the docker-compose.yml file; this sets the name the built image will be given. It's frequently useful to specify this explicitly, because you will need to point at a specific Docker Hub or other registry location to push the built image. If you do this, then you'll know the image name and don't need to depend on the project name.
version: '3.8'
services:
  app:
    build: .
    image: registry.example.com/my/app:${TAG:-latest}
  worker:
    image: registry.example.com/my/app:${TAG:-latest}
    command: npm run worker
You will need to manually docker-compose build in this setup. Compose's workflow doesn't have a way to specify that one container's build must run before a different container can start.
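A sketch of the resulting build-then-run sequence (with TAG unset, the ${TAG:-latest} default applies):
docker-compose build     # builds app and tags it registry.example.com/my/app:latest
docker-compose push      # optional: publish the image to the registry
docker-compose up -d     # worker reuses the image app just built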

Docker container not updating on code change

I have a Dockerfile to build my node container, it looks as follows:
FROM node:12.14.0
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4500
CMD ["npm", "start"]
Based on this Dockerfile, I am using Docker Compose to run this container and link it to a Mongo container so that it refers to mongo-service. The docker-compose.yml looks as follows:
version: '3'
services:
  backend:
    container_name: docker-node-mongo-container
    restart: always
    build: .
    ports:
      - '4700:4500'
    links:
      - mongo-service
  mongo-service:
    container_name: mongo-container
    image: mongo
    ports:
      - "27017:27017"
Expected behavior: Every time I make a new change to the project on my local computer, I want docker-compose to restart so that the new changes are reflected.
Current behavior: To make the new changes show up in docker-compose, I have to do docker-compose down and then delete the images. I am guessing that it has to rebuild the images. How do I make it so that whenever I make a change, the Dockerfile builds a new image?
I understand that I need to use volumes. I am just failing to understand how. Could somebody please help me here?
When you make a change, you need to run docker-compose up --build. That will rebuild your image and restart containers as needed.
Docker has no facility to detect code changes, and it is not intended as a live-reloading environment. Volumes are not intended to hold code, and there are a couple of problems people run into attempting it (Docker file sync can be slow or inconsistent; putting a node_modules tree into an anonymous volume actively ignores changes to package.json; it ports especially badly to clustered environments like Kubernetes). You can use a host Node pointed at your Docker MongoDB for day-to-day development, and still use this Docker-based setup for deployment.
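A sketch of that split, assuming the app reads a MONGO_URL environment variable (a hypothetical name; use whatever your code actually reads):
docker-compose up -d mongo-service               # only the database runs in Docker
MONGO_URL=mongodb://localhost:27017 npm start    # the app runs directly on the host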
In order for you to 'restart' your docker application, you need to use docker volumes.
Add into your docker-compose.yml file something like:
version: '3'
services:
  backend:
    container_name: docker-node-mongo-container
    restart: always
    build: .
    ports:
      - '4700:4500'
    links:
      - mongo-service
    volumes:
      - .:/usr/src/app
  mongo-service:
    container_name: mongo-container
    image: mongo
    ports:
      - "27017:27017"
The volumes tag is simply saying: "Hey, map the current folder outside the container (the dot) to the working directory inside the container."
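One caveat worth sketching for Node projects: that bind mount also hides the node_modules that npm install created inside the image. A common workaround (with the limitations noted in the previous answer) is an extra anonymous volume:
volumes:
  - .:/usr/src/app
  - /usr/src/app/node_modules   # anonymous volume preserves the image's node_modules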

Deploy with docker-compose.yml

Not sure if this is a duplicate question; I tried to search, but I'm not sure whether my situation is similar to others'.
I am new to Docker and trying to set up a deployment for a small website.
So far I have a folder which has 3 files:
index.html - has basic html
Dockerfile - which has
FROM ubuntu:16.04
COPY . /var/www/html/
docker-compose.yml - which has
version: '2.1'
services:
  app:
    build: .
    image: myname/myapp:1.0.0
  nginx:
    image: nginx
    container_name: nginx
    volumes:
      - ./host-volumes:/cont-volumes
    network_mode: "host"
  phpfpm56:
    image: php-fpm:5.6
    container_name: phpfpm56
    volumes:
      - ./host-volumes:/cont-volumes
    network_mode: "host"
  mysql:
    image: mysql:5.7
    container_name: mysql
    ports:
      - "3306:3306"
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql:
Now I am using Jenkins to create the build, putting all my code into host volumes to make it available to the containers, and then I would run
docker-compose build
Now it creates an image and I push it to Docker Hub. Then I log in to the remote server, pull the image, and run it. But that won't work, because I still need to run docker-compose up inside the container.
Is this the right approach, or am I missing something here?
The standard way to do this is to copy your code into the image. Do not bind-mount host folders containing your code; instead, use a Dockerfile COPY directive to copy in the application code (and, in a compiled language, use a RUN command to build it). For example, your PHP container might have a corresponding Dockerfile that looks like this (referencing this base Dockerfile):
FROM php-fpm:5.6
# Base Dockerfile defines a sensible WORKDIR
COPY . .
# Base Dockerfile sets EXPOSE 9000
# Base Dockerfile defines ENTRYPOINT, CMD
Then your docker-compose.yml would say, in part
version: '3'
services:
  phpfpm56:
    build: .
    image: me/phpfpm56:2019-04-30
    # No other settings
And then your nginx configuration would say, in part (using the Docker Compose service name as a hostname)
fastcgi_pass phpfpm56:9000;
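For context, that directive would typically sit inside a standard PHP location block, roughly like this (the fastcgi parameter details are illustrative):
location ~ \.php$ {
    fastcgi_pass phpfpm56:9000;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}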
If you use this in production, I think you need to comment out the build: lines.
If you're extremely set on a workflow where there is no hostname other than localhost and you do not need to rebuild Docker images to update code, you at least need to restart (some of) your containers after you've done the code push.
docker-compose stop app phpfpm56
docker-compose up -d
You might look into a system-automation tool like Ansible or Chef to automate the code-push mechanism. Those same tools can also just install nginx and PHP, and if you're trying to avoid the Docker image build sequence, you might have a simpler installation and deployment system running servers directly on the host.
docker-compose up should not be run inside a container but on a Docker host. So this could be run via sh on a host, but you need to have access to the compose file wherever you run the command.
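A sketch of what that looks like in practice (host name and path are placeholders):
scp docker-compose.yml user@remote-host:/srv/app/
ssh user@remote-host 'cd /srv/app && docker-compose pull && docker-compose up -d'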

How to configure Dockerfile and docker-compose to deploy two containers to docker hub?

I'm trying to migrate working docker config files (Dockerfile and docker-compose.yml) so they deploy working local docker configuration to docker hub.
Tried multiple config file settings.
I have the following Dockerfile and, below, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or will talk to each other via the "db" and the database "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify them so I get the same behavior on docker hub. Being able to have working local containers is necessary for development, but others need to use these containers on docker hub so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db
  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expect to see running containers on Docker Hub, but cannot see how these files need to be modified to get that. Thanks.
Add an image attribute.
app:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8089:8080"
  image: docker-hub-username/app
Replace "docker-hub-username" with your username. Then run docker-compose push app

Building and uploading images to Docker Hub, how to from Docker Compose?

I have been working in a Docker environment for PHP development and finally I got it working as I need. This environment relies on docker-compose, and the config looks like:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reynierpm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
There are some configurations like extra_hosts and env_file that are giving me some headaches. Why? Because I don't know if the image will work under such circumstances.
Let's say:
I have run docker-compose up -d and the image reynierpm/php55-dev with tag latest has been built
I have everything working as it should be because I am setting the proper values in the docker-compose.yml file
I have logged in to my account and pushed the image to the repository: docker push reynierpm/php55-dev
What happens if tomorrow you clone the repository and try to run docker-compose up, changing the docker-compose.yml file to fit your settings? How does the image behave in this case? I mean, does it make sense to create/upload the image to Docker Hub if any time I run docker-compose up it will be built again due to the changes in the config file?
Maybe I am completely wrong and some magic happens behind the scenes, but I need to know if I am doing this right.
If people clone your git repository and do a docker-compose up -d, it will in fact build a new image. If you only want people to use your image from Docker Hub, drop the build section of docker-compose.yml and publish the compose file on your Docker Hub page. Below you can see the proposed docker-compose.yml.
Just paste this in your page:
version: '2'
services:
  php-apache:
    image: reynierpm/php55-dev
    ports:
      - "80:80"
    environment:
      DOCKERHOST: 'yourhostip'
      PHP_ERROR_REPORTING: 'E_ALL & ~E_DEPRECATED & ~E_NOTICE'
    volumes:
      - ~/var/www:/var/www
If your env_file just has a couple of variables, it is better to set them directly in the Dockerfile. It is also better to replace extra_hosts with an environment variable and, in your php.ini or wherever you use the extra host, refer to that variable instead:
.....
xdebug.remote_host = ${DOCKERHOST}
.....
You can in your Dockerfile define a default value for this variable:
ENV DOCKERHOST=localhost
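A sketch of overriding that default at run time (the IP is a placeholder):
docker run -e DOCKERHOST=192.168.1.100 -p 80:80 reynierpm/php55-dev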
Hope it helps
Regards
