I need to add some paths to my PATH in docker-compose.yml.
In docker-compose.yml I have tried:
app:
  ...
  environment:
    - PATH /code/project
However, that just overwrites the existing PATH, whereas I want to append to it.
A docker-compose.yml does not offer you any means to extend an environment variable that is already set in a Docker image.
The only way I see to do this is to have a Docker image which expects some environment variable (let's say ADDITIONAL_PATH) and extends its own PATH environment variable with it at run time.
Let's take the following Dockerfile:
FROM busybox
ENV PATH /foo:/bar
CMD export PATH=$PATH:$ADDITIONAL_PATH; /bin/echo -e "ADDITIONAL_PATH is $ADDITIONAL_PATH\nPATH is $PATH"
and the following docker-compose.yml file (in the same directory as the Dockerfile):
app:
  build: .
Build the image with docker-compose build, then start a container with docker-compose up. You will get the following output:
app_1 | ADDITIONAL_PATH is
app_1 | PATH is /foo:/bar:
Now change the docker-compose.yml file to:
app:
  build: .
  environment:
    - ADDITIONAL_PATH=/code/project
Start a container again with docker-compose up, and you will now get the following output:
app_1 | ADDITIONAL_PATH is /code/project
app_1 | PATH is /foo:/bar:/code/project
Also note a syntax error in your docker-compose.yml file: there must be an equal sign (=) character between the name of the environment variable and its value.
environment:
  - PATH=/code/project
instead of
environment:
  - PATH /code/project
I know this is an old thread, but I think there are a couple of things that can be clarified.
In a docker-compose file you can only reference variables from the host machine, therefore it is NOT possible to extend the image's PATH from docker-compose.yml:
app:
  ...
  environment:
    - PATH=/code/project:$PATH
On the other hand, using export in a RUN instruction will not suffice either, because exported variables do not persist across image layers. Since every Dockerfile instruction generates an intermediate image, the exported value lives only in the shell of that one step and never reaches the final image where you actually need it.
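The non-persistence of RUN export can be sketched in plain shell: each parenthesized subshell below models one RUN step running in its own layer, so an export made inside it does not survive into the next step (this is an analogy, not Docker itself):

```shell
# Each ( ... ) models one RUN step running in its own shell/layer.
( export PATH="$PATH:/extra" )        # like: RUN export PATH=$PATH:/extra
# A later step no longer sees /extra:
case ":$PATH:" in
  *":/extra:"*) echo "still there" ;;
  *)            echo "gone in the next step" ;;
esac
```

This prints "gone in the next step" (assuming /extra wasn't already on your PATH), which is why ENV, not RUN export, is the right tool.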
The best option would be to use build option in docker-compose.yml:
app:
  build: .
and adding ENV option to a Dockerfile:
ENV PATH /path/to/bin/folder:$PATH
This is suggested in issue #684, and I would also suggest looking at this answer: docker ENV vs RUN export.
You can add your value to the existing PATH.
To do so you need the name or ID of the container. Run this to find it:
docker ps
This will print details of all running containers. Look for your container and copy its ID or name. Then run this:
docker inspect <container ID>
It will print all the details of the specified container. Look for the Env section and find the PATH environment variable. Copy its value, extend it with your new entries, and set it again in the "environment" section of your docker-compose.yml:
app:
  environment:
    - PATH=value-you-copied:new-value:new-value:etc
Note that you shouldn't remove anything from the initial value of PATH; just extend it with your new values.
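For example, if the value you copied from docker inspect were /usr/local/bin:/usr/bin:/bin (a hypothetical value), extending it is plain colon-separated string concatenation:

```shell
# Hypothetical PATH value copied from `docker inspect`
COPIED_PATH="/usr/local/bin:/usr/bin:/bin"
# Append your new entries, colon-separated
NEW_PATH="$COPIED_PATH:/code/project:/code/lib"
echo "$NEW_PATH"
# prints /usr/local/bin:/usr/bin:/bin:/code/project:/code/lib
```

The resulting string is what goes after PATH= in the environment section.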
@Thomasleveil's answer works only for containers built directly from the docker-compose file (via build), and it gives you no control over the command executed.
I needed this functionality for containers downloaded from (our) repository where this does not quite work.
I have found solution using the entrypoint and command.
Let's have a base image base and another one, java7, built on top of it, and finally a docker-compose file using the java7 image to run some stuff.
Probably the most important file here is entrypoint.sh:
$ cat base/script/entrypoint.sh
#!/bin/bash
export PATH="$PATH_ADD:$PATH"
echo "Path modified to $PATH"
exec "$@"
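The entrypoint's pattern of extending PATH and then executing its arguments can be checked locally without Docker (the echo stands in for the real command, and the function stands in for the script):

```shell
# Mimic entrypoint.sh's structure: extend PATH, then run the given command
demo() {
  PATH_ADD="/app/sbin"
  export PATH="$PATH_ADD:$PATH"
  "$@"    # the real script uses exec here to replace the shell
}
demo echo "command ran with extended PATH"
# prints: command ran with extended PATH
```

This is why the script ends by executing its arguments: whatever command docker-compose passes to the container runs with the already-extended PATH.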
Dockerfile for base container
$ cat base/Dockerfile
FROM xxx
# copy entrypoint script that extends current PATH variable by PATH_ADD
COPY script/entrypoint.sh /usr/sbin
ENTRYPOINT ["/usr/sbin/entrypoint.sh"]
Dockerfile for java7 container
$ cat java7/Dockerfile
FROM base
# download java7
RUN curl ... /opt/java/jdk7
ENV JAVA_HOME /opt/java/jdk7
Commands run by docker-compose
$ cat sbin/run-app1.sh
exec $JAVA_HOME/bin/java -version
$ cat sbin/run-app2.sh
exec $JAVA_HOME/bin/java -version
Docker-compose using these:
$ cat docker-compose.yml
version: '3'
services:
  app1:
    image: java7
    command: run-app1.sh
    environment:
      PATH_ADD: /app/sbin
    volumes:
      - "./sbin:/app/sbin:cached"
  app2:
    image: java7
    command: run-app2.sh
    environment:
      PATH_ADD: /app/sbin
    volumes:
      - "./sbin:/app/sbin:cached"
File structure
$ tree
.
├── base
│   ├── script
│   │   └── entrypoint.sh
│   └── Dockerfile
├── java7
│   └── Dockerfile
├── sbin
│   ├── run-app1.sh
│   └── run-app2.sh
└── docker-compose.yml
To add a single location to PATH in your docker-compose.yml file:
app:
  environment:
    - PATH=/code/project:$PATH
To add multiple locations to your PATH in your docker-compose.yml file:
app:
  environment:
    - PATH=/code/project:/code/lib:/foo/bar:$PATH
To add to your PYTHONPATH:
app:
  environment:
    - PYTHONPATH=/code/project:/code/lib:/foo/bar
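Note that the $PATH references above are interpolated by docker-compose on the host, so the container ends up with your host machine's PATH appended, not the image's original one. A quick shell analogy (both PATH values are hypothetical):

```shell
HOST_PATH="/usr/local/bin:/usr/bin"   # what $PATH expands to on the host
IMAGE_PATH="/opt/tool/bin:/usr/bin"   # what the image originally had (lost here)
# docker-compose substitutes the host value before the container starts:
CONTAINER_PATH="/code/project:$HOST_PATH"
echo "$CONTAINER_PATH"
# prints /code/project:/usr/local/bin:/usr/bin
```

If the image's own PATH entries matter, the ENV-in-Dockerfile approach from the earlier answers is safer.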
Related
How do you specify a mount volume in docker-compose, so your Dockerfile can access files from it?
I have a docker-compose.yml like:
version: "3.6"
services:
  app_test:
    build:
      context: ..
      dockerfile: Dockerfile
    volumes:
      - /tmp/cache:/tmp/cache
And in my Dockerfile, I want to access files from /tmp/cache via RUN like:
RUN cat /tmp/cache/somebinary.tar.gz | processor.sh
However, running docker-compose gives me the error:
/tmp/cache/somebinary.tar.gz does not exist
Even though on the host, ls /tmp/cache/somebinary.tar.gz confirms it does exist.
Why is docker-compose/Docker unable to mount or access my host directory?
Dockerfile RUN commands are executed at build time of the image.
The volume is mounted at run time once the image is run as a container. So the mounted files will not be available until you spawn a container based on your image.
To define the commands to use at run time, use CMD, or depending on how you intend your image to be used ENTRYPOINT.
You would need to add this at the end of your Dockerfile:
CMD cat /tmp/cache/somebinary.tar.gz | processor.sh
I have my project architecture like this:
.
├── app/
├── context/
│   ├── Dockerfile
│   ├── .dockerignore
│   └── php.ini
├── database/
├── http/
├── composer.json
└── docker-compose.yml
and in docker-compose.yml I have the following configuration:
version: '3.8'
services:
  app:
    container_name: "ERP"
    restart: always
    build:
      context: ./context
      dockerfile: Dockerfile
    stdin_open: true
    tty: true
    ports:
      - '8000:80'
    links:
      - db_server
    volumes:
      - .:/usr/src/app
    working_dir: /usr/src/app
  db_server:
    container_name: "db_server"
    image: 'mysql:8.0'
    ports:
      - '3306:3306'
But when I run docker-compose up with the following Dockerfile content:
FROM ubuntu:20.04
WORKDIR /usr/src/app
RUN cat composer.json
It says "No such file or directory composer.json". Why?
UPDATE
I managed to solve the problem using an ENTRYPOINT configuration.
As far as I understand (I'm new to Docker), the ENTRYPOINT defines a script that runs when a container starts. That script therefore runs at container run time, after the initializations specified in the docker-compose.yml file, so the contents of the mounted directory are available to it by the time it runs.
Thank you all for your answers.
That's because you define the context to be "./context", so you are confined to this folder, where composer.json isn't.
Use "." for the context and context/Dockerfile for the dockerfile.
Then the build context (and the volume mount of '.') will cover the whole directory, and not only the ./context one.
The build process to create an image occurs before the runtime process that takes that image and runs a container. The compose file includes a build section to allow you to easily build the image before running it, but all of the other parts of the compose file define runtime configurations like volume mounts and networking.
At build time, you do not get to specify the volume sources, at most you can define a required volume target in the image with a VOLUME step. (Note if you do that, future RUN steps within the Dockerfile may have issues modifying that directory since many build tools mount an anonymous volume as you've requested, but only capture the changes to the container filesystem, not the volume filesystem).
If you need the contents of a file or directory in your image, you must perform a COPY or ADD step in the Dockerfile to copy from the build context (typically imported as . or the current directory) into the image.
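For example, to bake the file into the image so a RUN step can read it (a sketch assuming the build context is the project root, so composer.json sits at its top level):

```dockerfile
FROM ubuntu:20.04
WORKDIR /usr/src/app
# COPY pulls composer.json from the build context into the image
# at build time, so the following RUN step can see it
COPY composer.json .
RUN cat composer.json
```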
The key build.context defines the path to a directory containing a Dockerfile. This is the context for building the image, and during the build process Docker doesn't have access to the composer.json file (it is outside the context).
The RUN command runs during the build. If you want to run a command when the container is starting, you should use CMD:
FROM ubuntu:20.04
WORKDIR /usr/src/app
CMD cat composer.json
What I'm trying to accomplish is this: I want to cache the current user's ~/.aws directory inside a container so that I can use it during the build of another container.
I have the following docker-compose.yml:
version: "3.7"
services:
  worker:
    depends_on:
      - aws
  aws:
    build:
      context: ~/.aws
      dockerfile: ./ctx.dockerfile
      args:
        - workdir=/root/.aws
These are the contents of ctx.dockerfile:
FROM alpine:3.9
ARG workdir
WORKDIR ${workdir}
COPY . .
And in my worker service Dockerfile I have the following:
...
COPY --from=aws_ctx:local /root/.aws /root/.aws
...
The Problem
docker-compose isn't treating the dockerfile path in the aws service as relative to the docker-compose.yml; it is instead assuming it is relative to the context path. Is there any way I can have docker-compose load the ctx.dockerfile from the same directory as docker-compose.yml AND set the context the way that I am?
I'm up for changing my approach to the problem, but I have a few constraints:
any solution must be workable on Windows, OSX, and Linux
any solution must only require docker and/or docker-compose, I can't run a shell script beforehand
Is there any way I can have docker-compose load the ctx.dockerfile from the same directory as docker-compose.yml AND set the context the way that I am?
AFAIK: No, there isn't.
Everything that the Dockerfile interacts with on build time must be in the defined context. So, you need .aws and the current folder where the docker-compose.yml etc. lives to be in the same context, i.e. the context would need to be the highest level of your relevant directory structure and then you would have to define relative paths to the files you need (the Dockerfiles and .aws).
Maybe you could set /home/$USER as your build context (or even higher level, depending on where your Dockerfiles etc. live), but then you would also have to create a .dockerignore file and ignore everything in the context besides .aws and the current folder... As you can see, this would be a mess and not very reproducible.
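If you did try the large-context route anyway, the .dockerignore would look roughly like this sketch (the project folder name is hypothetical):

```
# .dockerignore at the root of the big build context
# ignore everything...
*
# ...except the .aws folder
!.aws
# ...and the folder holding docker-compose.yml and the Dockerfiles
!my-project
```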
I would suggest using a volume instead of COPYing the ~/.aws folder into your container.
Example:
nico#lapap12:~$ ls -l ~/.aws
total 0
-rw-r--r-- 1 nico nico 0 May 22 17:45 foo.bar
docker-compose.yml:
version: "3.7"
services:
  allinone:
    image: alpine:latest
    volumes:
      - ~/.aws:/tmp/aws:ro
    command: ls -l /tmp/aws
nico#lapap12:~/local/so$ docker-compose up
Creating so_allinone_1 ... done
Attaching to so_allinone_1
allinone_1 | total 0
allinone_1 | -rw-r--r-- 1 1000 1000 0 May 22 15:45 foo.bar
so_allinone_1 exited with code 0
You could go from there and copy the content of /tmp/aws to /root/.aws if you want to change this folder's content in the container, but don't want to touch it on the actual host.
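A sketch of that copy step, using temporary directories in place of /tmp/aws and /root/.aws so it can be tried anywhere (the credentials file name is hypothetical):

```shell
# Stand-ins for the read-only mount and the writable target
src=$(mktemp -d)   # plays /tmp/aws (the ro volume)
dst=$(mktemp -d)   # plays /root/.aws
touch "$src/credentials"          # hypothetical file from ~/.aws
# Copy the mount's contents into the container-private directory
cp -r "$src/." "$dst/"
ls "$dst"
# prints: credentials
```

In the container this would typically run in an entrypoint script before the main command, so changes to /root/.aws never touch the host's ~/.aws.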
I am following lynda Docker tutorials and performing stuff related to docker compose file.
This is my docker-compose.yml file.
more docker-compose.yml
version: '3'
services:
  web:
    image: jboss/wildfly
    volumes:
      - ~/deployments:/opt/jboss/wildfly/standalone/deployments
    ports:
      - 8080:8080
As per the author, I am trying to copy the webapp.war file to the deployments/ folder, which gives me an error. It looks like the volume mapping for the docker-compose file is not working.
cp /home/user/Demos/docker-for-java/chapter2/webapp.war deployments/
cp: cannot create regular file ‘deployments/’: Not a directory
docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------
helloweb_web_1 /opt/jboss/wildfly/bin/sta ... Up 0.0.0.0:8080->8080/tcp
I think you might be misinterpreting the tutorial. I haven't seen the tutorial itself, but checking the documentation for the WildFly Docker image here, there's a mention that you need to extend the base image and add your war file inside:
To do this you just need to extend the jboss/wildfly image by creating a new one. Place your application inside the deployments/ directory with the ADD command (but make sure to include the trailing slash on the deployment folder path, more info). You can also do the changes to the configuration (if any) as additional steps (RUN command).
This means that you need to create a Dockerfile with approximately this contents (change your-awesome-app.war with the path to your war file):
FROM jboss/wildfly
ADD your-awesome-app.war /opt/jboss/wildfly/standalone/deployments/
After that you need to change your docker-compose.yml to build from your Dockerfile instead of using jboss/wildfly (note the use of build: . instead of image: jboss/wildfly):
version: '3'
services:
  web:
    build: .
    ports:
      - 8080:8080
Try that and comment if you run into any issues.
I've got a directory called thing with my docker compose project:
thing
├── Dockerfile
└── docker-compose.yml
The contents of docker-compose.yml:
master:
  build: .
Whenever I run docker-compose build in this folder, this will produce a Docker image called thing_master. I'd like to specify another name for my image in docker-compose.yml. Is this possible?
I know I can run docker-compose -p [image_name] build or set the environment variable COMPOSE_PROJECT_NAME, but that's not what I wish to do.
This is now possible in Compose 1.6 (rc2 is out now).
Using 1.6, you can have both build and image on the same service:
version: "2"
services:
  master:
    build: .
    image: my_image_name:my_tag
Build will use the image name to tag the image.