Newbie here. I created an empty solution, added WebApplication1 and WebApplication2. I then added docker support (Docker for Windows, Windows Containers). Compose file looks like this:
version: '3.4'
services:
  webapplication1:
    image: compositeapp
    build:
      context: .\WebApplication1
      dockerfile: Dockerfile
  webapplication2:
    image: compositeapp
    build:
      context: .\WebApplication2
      dockerfile: Dockerfile
So both services are tagged with a single image name. WebApplication1's Dockerfile has ENV LICENSE=abc123 and WebApplication2's Dockerfile has ENV LICENSE=abc456.
After building and starting the containers, I used exec -it powershell to get into the two containers and ran get-item env:license. Both containers returned abc456.
As a newbie, I was expecting one container to return abc123 and the other abc456. I just made up LICENSE as the variable name, but what does one do when a per-container environment variable is needed?
I suspect the issue you noticed comes from the fact that you specified the same image name for both services: both containers then run the image produced by whichever Dockerfile was built last, and therefore see the same ENV variable.
Could you try this instead?
version: '3.4'
services:
  webapplication1:
    image: compositeapp1
    build:
      context: .\WebApplication1
      dockerfile: Dockerfile
  webapplication2:
    image: compositeapp2
    build:
      context: .\WebApplication2
      dockerfile: Dockerfile
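Once each service has its own image name, rebuilding and checking the variable again should show the two different values. A quick way to verify, roughly:

# rebuild the images and restart the containers
docker-compose build
docker-compose up -d
# each service should now report its own value
docker-compose exec webapplication1 powershell -Command "Get-Item env:LICENSE"
docker-compose exec webapplication2 powershell -Command "Get-Item env:LICENSE"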
Anyway, even if that works, I assume your two Dockerfiles are almost identical (?), in which case I would rather suggest using a single Dockerfile and a single image tag, and customizing each service's environment with an environment section in your docker-compose.yml (or an env_file section along with external .env files, sketched below after the first example).
For example, you may want to write something like this:
version: '3.4'
services:
  webapplication1:
    image: compositeapp
    build:
      context: .\WebApplication
      dockerfile: Dockerfile
    environment:
      - LICENSE=abc123
  webapplication2:
    image: compositeapp
    environment:
      - LICENSE=abc456
(not forgetting to remove the ENV LICENSE=... line from the Dockerfile)
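If you prefer to keep the values out of the compose file itself, the env_file variant mentioned above could look roughly like this (a minimal sketch; the file names webapp1.env and webapp2.env are made up for the example):

# webapp1.env
LICENSE=abc123

# webapp2.env
LICENSE=abc456

# docker-compose.yml
version: '3.4'
services:
  webapplication1:
    image: compositeapp
    build:
      context: .\WebApplication
      dockerfile: Dockerfile
    env_file:
      - webapp1.env
  webapplication2:
    image: compositeapp
    env_file:
      - webapp2.env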
Related
I am trying to build two services from the same image name but with two different Dockerfiles. However, Docker always ends up using only one Dockerfile for both, even though two are defined:
version: '3.4'
services:
  serviceA:
    image: myimage
    build:
      dockerfile: ./Dockerfile
      context: ${project.basedir}/${project.artifactId}-docker/target
    depends_on:
      - serviceB
  serviceB:
    image: myimage
    build:
      dockerfile: ./Dockerfile-cloud
      context: ${project.basedir}/${project.artifactId}-docker/target
Even though I also specify depends_on, running
docker-compose -f docker-compose.yml up
only uses Dockerfile-cloud for both.
I guess your problem is that you tag both images as myimage (implicitly using the latest tag). Docker will build a first version of myimage from Dockerfile, then build another version of myimage from Dockerfile-cloud, and in the end both services run the latest version, which is why Dockerfile-cloud is used for both. To fix it, remove image: myimage or do something like:
serviceA:
  image: myimage:serviceA
  ...
serviceB:
  image: myimage:serviceB
Since you're building the two containers' images from different Dockerfiles, they can't really be the same image; they'll have different content and metadata.
Compose is capable of assigning unique names to the various things it generates. Unless you need to push the built images to a registry (for example with docker-compose push), you can generally just omit the image: line. The two containers will use separate images built from their own Dockerfiles, and the Compose-assigned names avoid the ambiguity you're running into here.
version: '3.8'
services:
  serviceA:
    # no image:
    # short form of build: with default dockerfile: and no args:
    build: ${project.basedir}/${project.artifactId}-docker/target
    depends_on:
      - serviceB
  serviceB:
    # no image:
    build:
      context: ${project.basedir}/${project.artifactId}-docker/target
      dockerfile: ./Dockerfile-cloud
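For reference, Compose derives a name for each built image from the project and service names (with Compose v1 typically <project>_<service>, the project defaulting to the directory name), so you can still refer to the images manually if you ever need to:

# build both images; Compose tags them automatically
docker-compose build
# list the generated images to see the names Compose chose
docker image ls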
I'm running docker compose as follows:
docker-compose -f docker-compose.dev.yml up --build -d
the contents of docker-compose.dev.yml are:
version: '3'
services:
  client:
    container_name: client
    build:
      context: frontend
    environment:
      - CADDY_SUBDOMAIN=xxx
      - PRIVATE_IP=xxx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - express
    volumes:
      - /home/ec2-user/.caddy:/root/.caddy
  express:
    container_name: express
    build: express
    environment:
      - NODE_ENV=development
    restart: always
Then I want to create images from these containers to use on a testing server, by pushing them to AWS ECR and pulling them on the test server, to avoid rebuilding everything there from scratch. Simply using docker commit did not work.
What is the correct approach to creating images from the output of docker-compose?
thanks
You should basically never use docker commit. The standard approach is to describe how to build your images using a Dockerfile, and check that file into source control. You can push the built image to a registry like Docker Hub, and you can check out the original source code and rebuild the image.
The good news is that you basically have this setup already. Each of your Compose services has a build: block that has the data on how to build the image. So it's enough to
docker-compose build
and you'll get a separate Docker image for each component.
Often if you're doing this you'll also want to push the images to some Docker registry. In the Compose setup, you can specify an image: for each service as well. If you have both build: and image:, that specifies the image name to use for the built image (otherwise Compose will pick one based on the project name).
version: '3.8'
services:
  client:
    build:
      context: frontend
    image: registry.example.com/project/frontend
    et: cetera
  express:
    build: express
    image: registry.example.com/project/express
    et: cetera
Then you can have Compose both build and push the images
docker-compose build
docker-compose push
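If the registry is AWS ECR (as in the question), you would authenticate Docker against it before pushing; a rough sketch, assuming AWS CLI v2 and placeholder account/region values:

# log Docker in to the ECR registry (account id and region are placeholders)
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
# then build and push as above
docker-compose build
docker-compose push

The image: names in the compose file would then point at that registry, e.g. 123456789012.dkr.ecr.eu-west-1.amazonaws.com/project/frontend.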
One final technique that can be useful is to split the Compose setup into two files. The main docker-compose.yml file has the setup you'd need to run the set of containers, on any system, with access to the container registry. A separate docker-compose.override.yml file would support developer use where you have a copy of the source code as well. If you're using Compose for deployment, you only need to copy the main docker-compose.yml file to the target system.
# docker-compose.yml
version: '3.8'
services:
  client:
    image: registry.example.com/project/frontend
    ports: [...]
    environment: [...]
    restart: always
    # volumes: [...]
  express:
    image: registry.example.com/project/express
    ports: [...]
    environment: [...]
    restart: always
# docker-compose.override.yml
version: '3.8'
services:
  client:
    build: frontend
    # all other settings come from main docker-compose.yml
  express:
    build: express
    # all other settings come from main docker-compose.yml
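With that split in place, the workflow could look roughly like this: Compose reads docker-compose.yml plus docker-compose.override.yml automatically on the developer machine, while the deployment target only has the main file:

# developer machine: both files are picked up by default
docker-compose build
docker-compose push

# deployment target: only docker-compose.yml was copied over
docker-compose pull
docker-compose up -d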
I have a docker-compose setup where I want to use environment variables from an env file in my Dockerfile. I need these variables at build time, since the version number is concatenated into a string to form a download URL.
Here is just the relevant part of the files I'm using, to keep the focus on the point of my question.
.env
MY_APP_VER=v1.2.3
docker-compose.yml
version: "2"
services:
my-app:
build: .
container_name: my_app
environment:
- my_app_version=$MY_APP_VER
Dockerfile
FROM scratch
ENV my_app_ver=$my_app_version
RUN echo $my_app_ver
I have checked various sources but without any success. I'm not sure if this is even possible, or whether I'm just using the wrong syntax (should I use quotes, e.g. "$my_app_ver", or curly brackets, ${my_app_ver}?).
For version 3.8 you can do it in the following way
version: '3.8'
services:
  my-app:
    build: .
    ports:
      - ${CONTAINER_PORT}:${PORT} # for example
    env_file: .env
    container_name: my-app-${NODE_ENV} # for example
    environment:
      MYSQL_DATABASE: ${DB_NAME} # for example
      my_app_version: ${MY_APP_VER} # for your case
Find more information in the documentation.
Also, you can find more information about the usage of env variables in Dockerfile and docker-compose here.
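Note that environment: and env_file: only affect the running container. If, as in the question, the value is needed while the image is being built, it has to be forwarded as a build argument instead; a minimal sketch of that route, assuming MY_APP_VER still comes from the .env file (the alpine base image here is only a stand-in, since RUN cannot execute on scratch):

# docker-compose.yml
version: '3.8'
services:
  my-app:
    build:
      context: .
      args:
        MY_APP_VER: ${MY_APP_VER}

# Dockerfile
FROM alpine
ARG MY_APP_VER
# the value is available at build time, e.g. to assemble a download URL
RUN echo "would download https://example.com/app-${MY_APP_VER}.tar.gz"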
There is an option called env_file in docker-compose that you can leverage: https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option
version: "3"
services:
my-app:
build: .
container_name: my_app
env_file:
- .env.dev
Be aware that the .env file is loaded by default if it is present in the current context, so you only have to use env_file if the file is named differently or lives in a different folder.
I have written a Dockerfile which uses two arguments:
FROM jessie
MAINTAINER Zeinab Abbasimazar
#Build Arguments
ARG REP_USER
ARG REP_PASS
# Build
RUN echo 'REP_USER:'$REP_USER', REP_PASS:'$REP_PASS
I wrote a docker-compose.yml for build:
version: "2"
services:
ui:
build:
context: .
dockerfile: Dockerfile
args:
REP_USER: $REP_USER
REP_PASS: $REP_PASS
I don't want to define these arguments directly in the compose file, so I tried to send them during docker compose build:
REP_USER=myusername REP_PASS=mypassword docker-compose build
Which didn't work. I then changed my Dockerfile to use these values as environment variables instead, so I removed the ARG lines:
FROM jessie
MAINTAINER Zeinab Abbasimazar
# Build
RUN echo 'REP_USER:'$REP_USER', REP_PASS:'$REP_PASS
And docker-compose.yml:
version: "2"
services:
ui:
build:
context: .
dockerfile: Dockerfile
And ran REP_USER=myusername REP_PASS=mypassword docker-compose build; still no result.
I also tried to save this information into an env file:
version: "2"
services:
ui:
build:
context: .
dockerfile: Dockerfile
env_file:
- myenv.env
But it seems env files don't have any effect at build time; they only take part at run time.
EDIT 1:
Docker version is 1.12.6 which doesn't support passing arguments with --build-arg.
EDIT 2:
I tried using .env file as described here:
cat .env
REP_USER=myusername
REP_PASS=mypassword
I then called docker-compose config which returned:
networks: {}
services:
  ui:
    build:
      args:
        REP_PASS: mypassword
        REP_USER: myusername
      context: /home/zeinab/Workspace/ZiZi-Docker/Test/test-exec-1
      dockerfile: Dockerfile
version: '2.0'
volumes: {}
Which means this resolved my issue.
EDIT 3:
I also tried the third section of the docker-compose args documentation in my docker-compose.yml file:
version: "2"
services:
ui:
build:
context: .
dockerfile: Dockerfile
args:
- REP_USER
- REP_PASS
And executed:
export REP_USER=myusername;export REP_PASS=mypassword;sudo docker-compose build --no-cache
Still not getting what I wanted.
You can set build arguments with docker compose as described here:
docker-compose build [--build-arg key=val...]
docker-compose build --build-arg REP_USER=myusername --build-arg REP_PASS=mypassword
Btw, AFAIK build arguments are a compromise between usability and deterministic building. Docker aims to build in a deterministic fashion. That is, wherever you execute the build the produced image should be the same. Therefore, it appears logical that the client ignores the environment (variables) it is executed in.
The correct syntax for variable substitution in a docker-compose file is ${VARNAME}.
Try with this one:
version: "2"
services:
ui:
build:
context: .
dockerfile: Dockerfile
args:
REP_USER: ${REP_USER}
REP_PASS: ${REP_PASS}
I finally found the solution; I mentioned it in the question too. I first tried this and it failed, then I found out that I had a typo in the name of the .env file; it was .evn.
I tried using .env file as described here:
cat .env
REP_USER=myusername
REP_PASS=mypassword
I then called docker-compose config which returned:
networks: {}
services:
  ui:
    build:
      args:
        REP_PASS: mypassword
        REP_USER: myusername
      context: /home/zeinab/Workspace/ZiZi-Docker/Test/test-exec-1
      dockerfile: Dockerfile
version: '2.0'
volumes: {}
Which means this resolved my issue. I should mention that this answer was really helpful.
Trying to use docker-compose for the first time, but not having much luck. I have the following setup:
docker-compose version 1.8.0, build f3628c7
/home/GabeThermComposer contains the docker-compose.yml
/home/GabeThermComposer/GabeThermApache contains Dockerfile
/home/GabeThermComposer/GabeThermPHPMyAdmin contains Dockerfile
/home/GabeThermComposer/GabeThermDB contains Dockerfile and nest-init.sql
When I create docker images using the Dockerfile in each subdir, it all works without issues. I was hoping to use the docker-compose.yml to do all the separate image builds at once.
The docker-compose.yml looks like this:
version: '2'
services:
  GabeThermDB:
    build:
      context: ./GabeThermDB
      dockerfile: Dockerfile
  GabeThermApache:
    build:
      context: ./GabeThermApache
      dockerfile: Dockerfile
      ports:
        - "80:80"
  GabeThermPHPMyAdmin:
    build:
      context: ./GabeThermPHPMyAdmin
      dockerfile: Dockerfile
      ports:
        - "8080:80"
When trying to run "docker-compose up", I get the following error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.GabeThermPHPMyAdmin.build contains unsupported option: 'ports'
services.GabeThermApache.build contains unsupported option: 'ports'
I have no clue what is wrong with this. I think I did exactly as other examples have shown. Btw, I do know that the "context:" and "dockerfile:" lines are redundant here, but since I'm new, I wanted to be explicit about which files I'm pointing at rather than rely on Compose automatically diving into the subdir and picking up the Dockerfile.
Any help is appreciated.
You have to move the ports out of the build block.
version: '2'
services:
  GabeThermDB:
    build:
      context: ./GabeThermDB
      dockerfile: Dockerfile
  GabeThermApache:
    build:
      context: ./GabeThermApache
      dockerfile: Dockerfile
    ports:
      - "80:80"
  GabeThermPHPMyAdmin:
    build:
      context: ./GabeThermPHPMyAdmin
      dockerfile: Dockerfile
    ports:
      - "8080:80"