I'm trying to get the variable from the command line using:
sudo docker-compose -f docker-compose-fooname.yml run -e BLABLA=hello someservicename
My file looks like this:
version: '3'
services:
  someservicename:
    environment:
      - BLABLA
    image: docker.websitename.com/image-name:latest
    volumes:
      - /var/www/image-name
    command: ["npm", "run", BLABLA]
All of this is so that I can run a script defined by whatever I pass as BLABLA on the command line; I've tried following the official documentation.
Tried several options including:
sudo COMPOSE_OPTIONS="-e BLABLA=hello" docker-compose -f docker-compose-fooname.yml run someservicename
UPDATE:
I have to mention that as it is, I always get:
WARNING: The FAKE_SERVER_MODE variable is not set. Defaulting to a blank string.
Even when I just run the following command (be it remove, stop, etc.):
sudo docker-compose -f docker-compose-fooname.yml stop someservicename
For the record: I'm pulling the image first; I never build it myself, but my CI/CD tool (GitLab) does. Does this affect it?
I'm using docker-compose version 1.18, docker version 18.06.1-ce, Ubuntu 16.04
That docker-compose.yml syntax doesn't work the way you expect. If you write:
command: ["npm", "run", BLABLA]
A YAML parser will turn that into a list of three strings npm, run, and BLABLA, and when Docker Compose sees that list it will try to run literally that exact command, without running a shell to try to interpret anything.
If you set it to a string, Docker will run a shell over it, and that shell will expand the environment variable; try
command: "npm run $BLABLA"
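To see what the shell form buys you, here is a minimal plain-shell sketch (no Docker or Compose involved, just an illustration of exec-form vs shell-form variable expansion):

```shell
#!/bin/sh
BLABLA=hello
export BLABLA

# Exec-form analogue: the arguments are pre-split; nothing expands $BLABLA.
printf '%s\n' 'npm run $BLABLA'    # prints: npm run $BLABLA

# Shell-form analogue: a shell parses the string and expands the variable.
sh -c 'echo "npm run $BLABLA"'     # prints: npm run hello
```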
That having been said, this is a slightly odd use of Docker Compose. As the services: key implies, the more usual use case is to launch some set of long-running services with docker-compose up; you might npm run start or some such as a service, but you wouldn't typically have a totally parametrizable block with no default.
I might make the docker-compose.yml just say
version: '3'
services:
  someservicename:
    image: docker.websitename.com/image-name:latest
    command: ["npm", "run", "start"]
and if I did actually need to run something else, run
docker-compose run --rm someservicename npm run somethingelse
(or just use my local ./node_modules/.bin/somethingelse and not involve Docker at all)
Related
What is the difference?
Which is preferred?
Should CMD be omitted if command is defined?
command: overrides the CMD in the Dockerfile.
If you control the Dockerfile yourself, put the command there; it is the cleanest way.
If you want to test something, or need to alter the CMD while developing, overriding it with command: is faster than changing the Dockerfile and rebuilding the image every time.
And if it is a prebuilt image and you don't want to build a derived FROM ... image just to change the CMD, doing it via command: is also a quick solution.
In the common case, you should have a Dockerfile CMD and not a Compose command:.
command: in the Compose file overrides CMD in the Dockerfile. There are some minor syntactic differences (notably, Compose will never automatically insert a sh -c shell wrapper for you) but they control the same thing in the container metadata.
However, remember that there are other ways to run a container besides Compose. docker run won't read your docker-compose.yml file and so won't see that command: line; neither will tools like Kubernetes. If you build the CMD into the image, it will be honored in all of these places.
The place where you do need a command: override is if you need to launch a non-default main process for a container.
Imagine you're building a Python application. You might have a main Django application and a Celery worker, but these have basically the same source code. So for this setup you might make the image's CMD launch the Django server, and override command: to run a Celery worker off the same image.
# Dockerfile
# ENTRYPOINT is not required
CMD ["./manage.py", "runserver", "0.0.0.0:8080"]
# docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports: ['8080:8080']
    # no command:
  worker:
    build: .
    command: celery worker
I have this script: docker run -it -p 4000:4000 bitgosdk/express:latest --disablessl -e test
How do I put this command into a Dockerfile with its arguments?
FROM bitgosdk/express:latest
EXPOSE 4000
???
I went through your Dockerfile contents.
The command running inside the container is:
/ # ps -ef | more
PID USER TIME COMMAND
1 root 0:00 /sbin/tini -- /usr/local/bin/node /var/bitgo-express/bin/bitgo-express --disablessl -e test
The command looks like this because the ENTRYPOINT set in the Dockerfile is ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/node", "/var/bitgo-express/bin/bitgo-express"], and the --disablessl -e test arguments are the ones provided while running the docker run command.
The --disablessl -e test arguments can be set inside your Dockerfile using CMD:
CMD ["--disablessl", "-e", "test"]
New Dockerfile:
FROM bitgosdk/express:latest
EXPOSE 4000
CMD ["--disablessl", "-e", "test"]
Refer to the Dockerfile reference to know the difference between ENTRYPOINT and CMD.
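To illustrate how the two combine (a plain-shell sketch, not Docker itself): at startup Docker concatenates the ENTRYPOINT array with the CMD array (or with any arguments passed to docker run) to form the container's full command line:

```shell
#!/bin/sh
# Reconstruct the effective command line seen in `ps -ef` above.
entrypoint='/sbin/tini -- /usr/local/bin/node /var/bitgo-express/bin/bitgo-express'
cmd='--disablessl -e test'

# ENTRYPOINT + CMD = the container's PID 1 command
echo "$entrypoint $cmd"
```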
You don't.
This is what docker-compose is used for.
i.e. create a docker-compose.yml with contents like this:
version: "3.8"
services:
  test:
    image: bitgosdk/express:latest
    command: --disablessl -e test
    ports:
      - "4000:4000"
and then execute the following in a terminal to access the interactive terminal for the service named test.
docker-compose run test
Even if @mchawre's answer seems to directly answer the OP's question "syntactically speaking" (as a Dockerfile was asked for), a docker-compose.yml is definitely the way to go to make a docker run command, as custom as it might be, reproducible in a declarative way (a YAML file).
Just to complement @ChrisBecke's answer, note that the writing of this YAML file can be automated. See, e.g., the FOSS tool (under MIT license) https://github.com/magicmark/composerize
FTR, the snippet below was automatically generated from the following docker run command, using the accompanying webapp https://composerize.com/:
docker run -it -p 4000:4000 bitgosdk/express:latest
version: '3.3'
services:
  express:
    ports:
      - '4000:4000'
    image: 'bitgosdk/express:latest'
I omitted the CMD arguments --disablessl -e test on purpose, as composerize does not seem to support these extra arguments. This may sound like a bug (and FTR a related issue is open), but meanwhile it might just be viewed as a feature, in line with @DavidMaze's comment…
I am working on a Docker app. The purpose of this repo is to output some JSON into a volume. I am using a Dockerfile, docker-compose, and a Makefile; I'll show the contents of each file below. The goal/desired outcome is that when I run make up, the container runs and outputs the JSON.
Directory looks like this:
docker-compose.yaml
Dockerfile
Makefile
main/ # a directory
Here are the contents of the main directory:
example.R
I'm not sure of the best order to show these files. Throughout my setup I refer to a variable $PROJECTS_DIR, which is a global environment variable on the host/local machine:
echo $PROJECTS_DIR
/home/doug/Projects
Here are my files:
docker-compose.yaml:
version: "3.5"
services:
  nextzen_ga_extract_marketing:
    build:
      context: .
    environment:
      start_date: "2020-11-18"
      start_date: "2020-11-19"
    volumes:
      - ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline:/home/rstudio/Projects/nextzen_google_analytics_extract_pipeline
Dockerfile:
FROM rocker/tidyverse:latest
ADD main main
WORKDIR "/main"
RUN apt-get update && apt-get install -y \
    less \
    vim
ENTRYPOINT ["Rscript", "example.R"]
Makefile:
.PHONY: build
build:
	docker-compose build

.PHONY: up
up:
	docker-compose pull
	docker-compose up -d

.PHONY: restart
restart:
	docker-compose restart

.PHONY: down
down:
	docker-compose down
Here are the contents of example.R, the 'main' file of the Docker app:
library(jsonlite)
unlink("../output_data", recursive = TRUE) # delete any existing data from previous runs
dir.create('../output_data')
write(toJSON(mtcars), '../output_data/ga_tables.json')
If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/main and then run sudo Rscript example.R, the file runs and outputs the JSON in '../output_data/ga_tables.json' as expected.
I am struggling to get this to happen when running the container. If I navigate into ${PROJECTS_DIR}/Zen/nextzen_google_analytics_extract_pipeline/ and then run make up in the terminal, which runs:
docker-compose pull
docker-compose up -d
I then see:
make up
docker-compose pull
docker-compose up -d
Creating network "nextzengoogleanalyticsextractpipeline_default" with the default driver
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 ...
Creating nextzengoogleanalyticsextractpipeline_nextzen_ga_extract_marketing_1 .
It 'looks' like everything ran as expected with no errors, except that no output appears in the output_data directory as expected.
I guess I'm misunderstanding or misusing ENTRYPOINT in the Dockerfile with ENTRYPOINT ["Rscript", "example.R"]. My goal is that this file would run when the container is run.
How can I 'run' (if that's the correct terminology) my app so that it outputs json into /output_data/ga_tables.json?
Not sure what other info to provide? Any help much appreciated, I'm still getting to grips with docker.
If you run your application from /main and its output is supposed to go into ../output_data (so effectively /output_data), you need to bind mount this directory to have this output available on host. Therefore I would update your docker-compose.yaml to read something like this:
volumes:
  - /path/to/output_data/on/host:/output_data
Bear in mind however that your script will not be able to remove /output_data when bind-mounted this way, so you might want to change your step to removing directory contents and not directory itself.
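For example, a small shell sketch of the "clear the contents, keep the directory" pattern (simulated here with a temporary directory, since the bind-mounted /output_data itself can't be removed from inside the container):

```shell
#!/bin/sh
# Stand-in for the bind-mounted output directory (/output_data in the container).
dir=$(mktemp -d)
touch "$dir/stale.json"

# Remove the contents, but not the mount point itself:
rm -rf "$dir"/*

ls -A "$dir"    # prints nothing: the directory survives, empty
```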
In my case, I got this working when I used full paths as opposed to relative paths.
I have a container that uses a volume in its entrypoint. For example:
CMD bash /some/volume/bash_script.sh
I moved this to Compose, but it only works if my Compose file points to a Dockerfile in the build section; if I try to write the same line in the command section, it does not act as I expect and throws a file-not-found error.
I also tried docker-compose run <specific service> bash /some/volume/bash_script.sh, which gave me the same error.
The question is: why don't I have this volume at the time the docker-compose command is executed? Is there any way to make this work / override the CMD in my Dockerfile?
EDIT:
I'll show specifically how I do this in my files:
docker-compose:
version: '3'
services:
  base:
    build:
      context: ..
      dockerfile: BaseDockerfile
    volumes:
      - code:/volumes/code/
  my_service:
    volumes:
      - code:/volumes/code/
    container_name: my_service
    image: my_service_image
    ports:
      - 1337:1337
    build:
      context: ..
      dockerfile: Dockerfile
volumes:
  code:
BaseDockerfile:
FROM python:3.6-slim
WORKDIR /volumes/code/
COPY code.py code.py
CMD tail -f /dev/null
Dockerfile:
FROM python:3.6-slim
RUN apt-get update && apt-get install -y \
    redis-server \
    alien \
    unixodbc
WORKDIR /volumes/code/
CMD python code.py;
This works.
But if I try to add to docker-compose.yml this line:
command: python code.py
Then this file doesn't exist at the time the command runs. I was expecting this to behave the same as the CMD instruction.
Hmm, nice point!
command: python code.py is not exactly the same as CMD python code.py;!
The unbracketed Dockerfile CMD is a shell-form command: Docker wraps it as /bin/sh -c "python code.py;", so a shell parses and runs it. The Compose command: given as a string, on the other hand, is split into words and run directly, with no shell involved, much like an exec-form CMD ["python", "code.py"].
The problem is about the differences between these two types of CMD (i.e. CMD ["something"] vs CMD something).
For more info about these two, see the shell and exec form sections of the Dockerfile reference.
But you may still be wondering what's wrong with your example: the practical difference in your case is that the Dockerfile CMD gets a shell and the Compose command: does not.
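A rough plain-shell analogue of the two forms (illustrative only, no Docker involved):

```shell
#!/bin/sh
# Compose's `command: python code.py` is split into words, like a
# hand-built argv that is executed directly:
set -- python code.py
echo "argv has $# words"    # prints: argv has 2 words

# The unbracketed `CMD python code.py;` instead becomes
# /bin/sh -c "python code.py;" -- one string that a shell re-parses,
# which is why shell syntax such as the trailing ';' is legal there:
sh -c 'echo "one string, parsed by a shell"'
```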
The (partial) answer is that the error that was thrown was not at all what the problem was.
Running the following command worked fine: bash -c 'python code.py'. I still can't explain why there was a difference between CMD in the Dockerfile and the docker-compose command option, but this solved it for me.
I found out this will work:
command: python ./code.py
I have the following Dockerfile:
FROM <image-of-nodejs>
COPY docker/node/entry.sh /var/entries/entry.sh
RUN apt-get update
RUN apt-get install ant -y
CMD ["/var/entries/entry.sh"]
the image is used by a docker-compose file:
version: "3.3"
services:
  my_node:
    build:
      context: ./
      dockerfile: docker/node/Dockerfile-build-dev
    volumes:
      - type: bind
        source: ./
        target: /var/proj
and the entry.sh file is the following:
#!/bin/bash
export QNAMAKER_SUB_KEY=b13615t
If I then start the container and enter it, I won't find my env variable set:
docker-compose up --force-recreate -d
docker-compose run my_node bash
root@9c081bedde65:/# echo ${QNAMAKER_SUB_KEY}
<empty>
I would prefer to set my variables through my script rather than with the ENV Dockerfile instruction. What's wrong?
There are a couple of things going on here.
First, docker-compose run doesn't run a command inside the container you started with docker-compose up. It starts a new container to run a one-off command. You probably want docker-compose exec.
The reason you don't see the variable when using docker-compose run is that you are overriding your CMD by providing a new command (bash) on the docker-compose run command line.
You could consider:
Using ENV statements in your Dockerfile.
Using the environment key in your docker-compose.yml
The former will embed the information into your image, while the latter would mean that the variable would be unset if you didn't explicitly set it in your docker-compose.yaml file (or using -e on the docker run command line).
You may be able to accomplish your goal using an ENTRYPOINT script and setting the value there, but that won't impact the environment visible to you when using docker exec (or docker-compose exec).
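If you do want to set the variable from a script, the usual pattern is to make entry.sh an ENTRYPOINT wrapper that exports the variable and then execs whatever command it was given, so the main process (and docker-compose run overrides) inherit it. A minimal sketch, reusing the variable from the question; the wrapper itself is a suggestion, not the asker's exact file:

```shell
#!/bin/sh
# Hypothetical entry.sh used as an ENTRYPOINT wrapper: export the variable,
# then hand off to the command supplied as arguments (the image's CMD, or a
# `docker-compose run` override), which inherits the environment.
export QNAMAKER_SUB_KEY=b13615t
exec "$@"
```

You would then reference it with ENTRYPOINT ["/var/entries/entry.sh"] in the Dockerfile and keep the real command in CMD.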