I have script: docker run -it -p 4000:4000 bitgosdk/express:latest --disablessl -e test
How do I put this command into a Dockerfile, including the arguments?
FROM bitgosdk/express:latest
EXPOSE 4000
???
I went through your image's Dockerfile.
The command running inside the container is:
/ # ps -ef | more
PID USER TIME COMMAND
1 root 0:00 /sbin/tini -- /usr/local/bin/node /var/bitgo-express/bin/bitgo-express --disablessl -e test
The command looks like this because the entrypoint set in the Dockerfile is ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/node", "/var/bitgo-express/bin/bitgo-express"], and the arguments --disablessl -e test are the ones provided while running the docker run command.
The --disablessl -e test arguments can be set inside your Dockerfile using CMD:
CMD ["--disablessl", "-e", "test"]
New Dockerfile:
FROM bitgosdk/express:latest
EXPOSE 4000
CMD ["--disablessl", "-e", "test"]
Refer to this to learn the difference between ENTRYPOINT and CMD.
You don't.
This is what docker-compose is used for.
i.e. create a docker-compose.yml with contents like this:
version: "3.8"
services:
  test:
    image: bitgosdk/express:latest
    command: --disablessl -e test
    ports:
      - "4000:4000"
and then execute the following in a terminal to access the interactive terminal for the service named test.
docker-compose run test
Even if @mchawre's answer seems to directly answer the OP's question "syntactically speaking" (as a Dockerfile was asked), a docker-compose.yml is definitely the way to go to make a docker run command, as custom as it might be, reproducible in a declarative way (a YAML file).
Just to complement @ChrisBecke's answer, note that writing this YAML file can be automated. See e.g. the FOSS tool (under the MIT license) https://github.com/magicmark/composerize
FTR, the snippet below was automatically generated from the following docker run command, using the accompanying web app https://composerize.com/:
docker run -it -p 4000:4000 bitgosdk/express:latest
version: '3.3'
services:
  express:
    ports:
      - '4000:4000'
    image: 'bitgosdk/express:latest'
I omitted the CMD arguments --disablessl -e test on purpose, as composerize does not seem to support these extra arguments. This may sound like a bug (and FTR a related issue is open), but meanwhile it might just be viewed as a feature, in line with @DavidMaze's comment…
I have a Django docker image and use docker-compose to start it along with PostgreSQL.
# docker-compose -p torsha-single -f ./github/docker-compose.yml --project-directory ./FINAL up --build --force-recreate --remove-orphans
# docker-compose -p torsha-single -f ./github/docker-compose.yml --project-directory ./FINAL exec fastapi /bin/bash
# My version of docker = 18.09.4-ce
# Compose file format supported till version 18.06.0+ is 3.7
version: "3.7"
services:
  postgresql:
    image: "postgres:13-alpine"
    restart: always
    volumes:
      - type: bind
        source: ./DO_NOT_DELETE_postgres_data
        target: /var/lib/postgresql/data
    environment:
      POSTGRES_DB: project
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: abc123
      PGDATA: "/var/lib/postgresql/data/pgdata"
    networks:
      - postgresql_network
  webapp:
    image: "django_image"
    depends_on:
      - postgresql
    ports:
      - 8000:8000
    networks:
      - postgresql_network
networks:
  postgresql_network:
    driver: bridge
Now, after I do docker-compose up for the first time, I have to create dummy data using:
docker-compose exec webapp sh -c 'python manage.py migrate';
docker-compose exec webapp sh -c 'python manage.py shell < useful_scripts/intialize_dummy_data.py'
After this, I don't need to do the above again.
Where should I place this script so that it checks whether this is the first run and only then runs these commands?
One of the Django documentation's suggestions for Providing initial data for models is to write it as a data migration. That will automatically load the data when you run manage.py migrate. Like other migrations, Django records the fact that it has run in the database itself, so it won't re-run it a second time.
This then reduces the problem to needing to run migrations when your application starts. You can write a shell script that first runs migrations, then runs some other command that's passed as arguments:
#!/bin/sh
python manage.py migrate
exec "$@"
This is exactly the form required for a Docker ENTRYPOINT script. In your Dockerfile, COPY this script in with the rest of your application, set the ENTRYPOINT to run this script, and set the CMD to run the application as before.
# probably already there
COPY . .
# must be JSON-array form
ENTRYPOINT ["./entrypoint.sh"]
# unchanged
CMD python manage.py runserver 0.0.0.0:8000
(If you already have an entrypoint wrapper script, add the migration line there. If your Dockerfile somehow splits ENTRYPOINT and CMD, combine them into a single CMD.)
Having done this, the container will run migrations itself whenever it starts. If this is the first time the container runs, it will also load the seed data. You don't need any manual intervention.
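To see concretely what exec "$@" does, here is a tiny local demo that runs without Docker (the file path and echo messages are made up; a real entrypoint would run python manage.py migrate instead of the echo):

```shell
# Write a stand-in entrypoint script. "$@" expands to all arguments the
# script was called with, and exec replaces the shell with that command,
# exactly as Docker passes the CMD to the ENTRYPOINT.
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/sh
# First-time setup would go here (e.g. running migrations); simulated:
echo "running migrations (simulated)"
# Hand off to whatever command was passed in (the container's CMD).
exec "$@"
EOF
chmod +x /tmp/entrypoint-demo.sh

# Calling it with a command runs the setup, then that command:
/tmp/entrypoint-demo.sh echo "now running the CMD"
```

Because of the exec, the passed-in command becomes PID 1 in a container, so it receives signals (e.g. from docker stop) directly.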
docker-compose run is designed for this type of problem. Pair it with the --rm flag to remove the container when the command completes. Common examples include exactly the sort of migrations and initialization you are trying to accomplish.
This is right out of the manual page for the docker-compose run command:
docker-compose run --rm web python manage.py db upgrade
You can think of this as a sort of disposable container, that does one job, and exits. This technique can also be used for scheduled jobs with cron.
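For instance, a host crontab entry for such a scheduled one-off job might look like the following (the project path and management command are made up):

```
# Hypothetical crontab entry: every night at 03:00, run a disposable
# container for one job and remove it when it exits.
0 3 * * * cd /srv/myapp && docker-compose run --rm web python manage.py clearsessions
```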
I have a Dockerfile:
FROM tomcat:9.0.45-jdk8-adoptopenjdk-hotspot
RUN mkdir -p /opt/main
WORKDIR /opt/main
COPY run.sh test.sh runmain.sh /opt/main
RUN chmod +x /opt/main/run.sh && bash /opt/main/run.sh
ENTRYPOINT bash /usr/local/tomcat/bin/runmain.sh && /usr/local/tomcat/bin/catalina.sh run
An env file
ENV_MQ_DETAILS=tcp://10.222.12.12:61616
ENV_DB_HOST=10.222.12.12
runmain.sh file has the following code
#!/bin/bash
echo ${ENV_MQ_DETAILS}
echo ${ENV_DB_HOST}
When I run the docker run command
docker run --env-file .env bootstrap -d
the docker logs show both env variable values printed.
But when I use the docker-compose file
version: "3"
services:
  bootstrap:
    image: bootstrap
    container_name: bootstrap
    hostname: bootstrap
    ports:
      - 8080:8080
and run the command
docker-compose -f docker-compose-bootstrap.yaml --env-file .env bootstrap -d
I get two issues:
1. While running with docker-compose-bootstrap.yaml, the environment variables aren't shown in the logs, so I can't use them in the later part of the code. Why is this? Please help to resolve it (highest priority).
2. In both cases (docker run and docker-compose run), it keeps echoing the files in the /opt/main folder. Nothing to worry about, but why?
Please help in resolving the above issues.
I have docker-compose.yml like this:
version: '3'
services:
  php-fpm:
    command: php-fpm --allow-to-run-as-root
    restart: always
    links:
      - postgresql
    build: ./php
    ports:
      - '9090:9000'
    volumes:
      - ../../:/var/www/html/
      - ./php/config/php.ini:/usr/local/etc/php/php.ini
    networks:
      - backend
And I want to set an environment variable containing the IP of the php-fpm container, inside that same container. For example, if I call
docker exec -it php-fpm /bin/sh export ALLOWED_ID
I want to see the container's dynamic IP address, 172.21.0.4 (for example).
I tried adding this code to the Dockerfile:
RUN export ALLOWED_ID=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
But it doesn't work when I enter the container.
I tried adding a sh command to the docker-compose command section like this:
command: php-fpm --allow-to-run-as-root && export ALLOWED_ID=<some expression>
But that isn't correct syntax. I also read about the entrypoint section in the docker-compose file, but I don't understand how it works and how to keep the "php-fpm --allow-to-run-as-root" command.
It's pretty unusual to need to know a container's Docker-internal IP address. Docker provides an internal DNS service where the name of each Docker Compose service resolves to that container's IP address; you can use the service names php-fpm or postgresql as host names as normal (without a links: block; I would recommend removing it on general principle).
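For example, the application's database host can simply be set to the service name (the variable names here are made up; your PHP code would read them however it reads configuration):

```yaml
services:
  php-fpm:
    build: ./php
    environment:
      DB_HOST: postgresql   # Docker's DNS resolves this to the postgresql container
      DB_PORT: "5432"
  postgresql:
    image: postgres:13-alpine
```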
When a container starts up, if it has an entrypoint, Docker runs the entrypoint (only), passing the command (if any) as command-line arguments to it. So a very typical path, if you do need to do first-time setup like this, is to write the entrypoint as a shell script that sets up environment variables and then runs the command it's given. In your case this script could look like:
#!/bin/sh
export ALLOWED_ID=$(ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1')
exec "$@"
(Assuming ifconfig is installed in your container; it's usually present, especially on Debian/Ubuntu based images, but rarely actually used.)
In your Dockerfile, at the same place you COPY in the application code, also COPY in this entrypoint script, and set it to be the image's default entrypoint (being careful to use the square-bracket form).
...
COPY . /var/www/html
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm", "--allow-to-run-as-root"]
(Because this pattern is so useful, I tend to recommend defaulting to running the main container process as the CMD, so that you don't have to redesign everything if you do need to add an ENTRYPOINT wrapper.)
I found a solution:
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
ENV ALLOWED_IP=${ALLOWED_IP}
and entrypoint.sh
#!/bin/sh
set -e
export ALLOWED_IP="$(hostname -i)"
exec "$#"
And then if I run the container with
docker-compose run --rm php-fpm env
I see my ALLOWED_IP variable with the container IP.
I'm trying to get the variable from the command line using:
sudo docker-compose -f docker-compose-fooname.yml run -e BLABLA=hello someservicename
My file looks like this:
version: '3'
services:
  someservicename:
    environment:
      - BLABLA
    image: docker.websitename.com/image-name:latest
    volumes:
      - /var/www/image-name
    command: ["npm", "run", BLABLA]
All of this is so that I can run a script determined by what I pass as BLABLA on the command line; I've tried following the official documentation.
Tried several options including:
sudo COMPOSE_OPTIONS="-e BLABLA=hello" docker-compose -f docker-compose-fooname.yml run someservicename
UPDATE:
I have to mention that, as it is, I always get:
WARNING: The FAKE_SERVER_MODE variable is not set. Defaulting to a blank string.
Even when I just run the following command (be it remove, stop..):
sudo docker-compose -f docker-compose-fooname.yml stop someservicename
For the record: I'm pulling the image first; I never build it, but my CI/CD tool (GitLab) does. Does this affect it?
I'm using docker-compose version 1.18, docker version 18.06.1-ce, Ubuntu 16.04
That docker-compose.yml syntax doesn't work the way you expect. If you write:
command: ["npm", "run", BLABLA]
A YAML parser will turn that into a list of three strings npm, run, and BLABLA, and when Docker Compose sees that list it will try to run literally that exact command, without running a shell to try to interpret anything.
If you set it to a string, Docker will run a shell over it, and that shell will expand the environment variable; try
command: "npm run $BLABLA"
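The difference can be illustrated in plain shell, without Docker (a rough analogue; the variable value "start" is made up):

```shell
BLABLA=start
export BLABLA

# List (exec) form analogue: three literal argv entries, no shell involved,
# so the variable name is NOT expanded -- the process sees the string BLABLA.
set -- npm run BLABLA
echo "exec form sees: $3"    # -> BLABLA

# String (shell) form analogue: a shell parses the line first,
# so $BLABLA is expanded from the environment.
sh -c 'echo "shell form sees: $BLABLA"'    # -> shell form sees: start
```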
That having been said, this is a slightly odd use of Docker Compose. As the services: key implies, the more usual use case is to launch some set of long-running services with docker-compose up; you might npm run start or some such as a service, but you wouldn't typically have a totally parametrizable block with no default.
I might make the docker-compose.yml just say
version: '3'
services:
  someservicename:
    image: docker.websitename.com/image-name:latest
    command: ["npm", "run", "start"]
and if I did actually need to run something else, run
docker-compose run --rm someservicename npm run somethingelse
(or just use my local ./node_modules/.bin/somethingelse and not involve Docker at all)
In the documentation of Docker Compose file version 3, from what I understood, to run some commands after a container has started I need to add the command key as follows:
version: "3"
services:
  broker:
    image: "toke/mosquitto"
    restart: always
    ports:
      - "1883:1883"
      - "9001:9001"
    command: ["cd /etc/mosquitto", "echo \"\" > mosquitto.pwd", "mosquitto_passwd -b /etc/mosquitto/mosquitto.pwd user pass", "echo \"password_file mosquitto.pwd\" >> mosquitto.conf", "echo \"allow_anonymous false\" >> mosquitto.conf"]
The log returns /usr/bin/docker-entrypoint.sh: 5: exec: cd /etc/mosquitto: not found
A workaround could be to specify in the Compose file which Dockerfile to use and add the commands that should run to it, so I created this Dockerfile:
FROM toke/mosquitto
WORKDIR .
EXPOSE 1883:1883 9001:9001
ENTRYPOINT cd /etc/mosquitto
ENTRYPOINT echo "" > mosquitto.pwd
ENTRYPOINT mosquitto_passwd -b mosquitto.pwd usertest passwordtest
ENTRYPOINT echo "password_file mosquitto.pwd" >> mosquitto.conf
ENTRYPOINT echo "allow_anonymous false" >> mosquitto.conf
The container keeps restarting and the log doesn't return anything. I've also tried changing ENTRYPOINT to CMD, with no change in the output.
As an addendum, when I point the Compose file at a specific Dockerfile, it fails to parse and says:
ERROR: The Compose file '.\docker-compose.yml' is invalid because:
Unsupported config option for services.broker: 'dockerfile'
As in, it can't parse or doesn't understand the dockerfile key. Does anyone know how to configure a Dockerfile, or even docker-compose, to run the commands intended in this post to configure an MQTT broker?
The command entry in the Compose file is not a list of commands to run; it's a single command and its arguments.
For example, to run mosquitto -c /etc/mosquitto/mosquitto.conf:
command: ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
As for the Dockerfile, there should only be one ENTRYPOINT or CMD. If you want to run multiple commands, create a shell script to run them, add it to the container, and then use ENTRYPOINT or CMD to run the script.
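Sticking with the question's example, the chained steps could be collected into one script like the following sketch (paths and credentials copied from the question; mosquitto_passwd and mosquitto only exist inside the image, so the last line here merely syntax-checks the script):

```shell
# Write the one-shot setup script that replaces the stack of ENTRYPOINT lines.
cat > /tmp/mosquitto-setup.sh <<'EOF'
#!/bin/sh
set -e
cd /etc/mosquitto
: > mosquitto.pwd                                  # create an empty password file
mosquitto_passwd -b mosquitto.pwd usertest passwordtest
echo "password_file /etc/mosquitto/mosquitto.pwd" >> mosquitto.conf
echo "allow_anonymous false" >> mosquitto.conf
exec mosquitto -c /etc/mosquitto/mosquitto.conf    # hand off to the broker
EOF

# Outside the image we can at least verify the script parses:
sh -n /tmp/mosquitto-setup.sh && echo "syntax OK"
```

In the Dockerfile you would then COPY the script in and run it as the single entrypoint, e.g. ENTRYPOINT ["/mosquitto-setup.sh"] in JSON-array form.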