I am trying to run a docker-compose file that starts a Jupyter notebook server, and I want it to execute a command to export the current notebooks as HTML (for visual reference) every time I run it. But the container doesn't keep running afterwards. How do I fix that?
My docker-compose file:
version: "3"
services:
  jupy:
    build: .
    volumes:
      - irrelevant:/app/
    ports:
      - "8888:8888"
    # This command executes and exits
    # I want it to run and then continue working
    command: bash -c "jupyter nbconvert Untitled.ipynb --template toc2 --output 'Untitled_toc2.html'"
My Dockerfile:
FROM python:3.7-slim-stretch
# Setup and installations
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]
You are overriding the command that would normally be executed in your container with the jupyter nbconvert command. Since this is a one-off command, the behaviour you see is expected.
A simple solution would be to modify the CMD of your image to include the jupyter nbconvert, something like this:
FROM your_image
#
# YOUR DOCKERFILE LINES
#
# Run the one-off export first, then start the long-running server;
# if the server came first, the part after && would never be reached.
CMD jupyter nbconvert Untitled.ipynb --template toc2 --output Untitled_toc2.html && YOUR_CURRENT_CMD
The nbconvert command is a one-off command; just set up the container for its main purpose, running Jupyter, and use nbconvert when you need it, as it's not needed for the container to run.
Maybe set up an alias or a Makefile to avoid typing the command; otherwise you need to restart the container to re-export those HTML files, which is not worth it at all.
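For example, a small shell function could play that role (a sketch: the service name jupy and the file names are taken from the compose file above, and nothing runs until you call it):

```shell
# Re-export the notebook from the already-running "jupy" service,
# without restarting the container:
nb2html() {
    docker-compose exec jupy jupyter nbconvert Untitled.ipynb \
        --template toc2 --output Untitled_toc2.html
}
```

Calling nb2html after editing then refreshes the HTML without touching the container.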
Thank you all, your advice is appreciated.
The easiest way I found is to run the command in a separate container and assign the same image name so it doesn't build twice:
version: "3"
services:
  jupy_export:
    image: note-im
    build: .
    volumes:
      - irrelevant:/app/
    command: jupyter nbconvert Untitled.ipynb --template toc2 --output "Untitled.html"
  jupy:
    image: note-im
    build: .
    volumes:
      - irrelevant:/app/
    ports:
      - "8888:8888"
Otherwise I could run this command on my notebooks as:
!jupyter nbconvert Untitled.ipynb --template toc2 --output "Untitled.html"
Which would allow me to keep working without stopping the container.
Related
I am trying to make the binary file /bin/wkhtmltopdf from the container wkhtmltopdf available in the web container. I try to achieve this with a named volume.
I have the following docker container setup in my docker-compose.yml:
services:
  web:
    image: php:7.4-apache
    command: sh -c "mkdir -p /usr/local/bin && touch /usr/local/bin/wkhtmltopdf"
    entrypoint: sh -c "exec 'apache2-foreground'"
    volumes:
      - wkhtmltopdfvol:/usr/local/bin/wkhtmltopdf
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    command: sh -c "touch /bin/wkhtmltopdf"
    entrypoint: sh -c "tail -f /dev/null" # workaround to keep container running
    volumes:
      - wkhtmltopdfvol:/bin/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
However, I get the following error when running docker-compose up:
ERROR: for wkhtmltopdf Cannot create container for service wkhtmltopdf:
source /var/lib/docker/overlay2/42e7082b8024ae4ebb13a4f0003a9e17bc18b33ef0677431dd002da3c21dde88/merged/bin/wkhtmltopdf is not directory
.../bin/wkhtmltopdf is not directory
Does that mean that I can't share one file between containers but only directories through a named volume? How do I achieve this?
Edit: I also noticed that /usr/local/bin/wkhtmltopdf inside the web container is a directory and not a file as I expected.
It can be tricky to share binaries between containers like this. Volumes probably aren't the mechanism you're looking for.
If you look at the Docker Hub page for the php image you can see that php:7.4-apache is an alias for (currently) php:7.4.15-apache-buster, where "Buster" is the name of a Debian release. You can then search on https://packages.debian.org/ to discover that Debian has a prepackaged wkhtmltopdf package. You can install this using a custom Dockerfile:
FROM php:7.4-apache
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --assume-yes --no-install-recommends \
      wkhtmltopdf
# COPY ...
# Base image provides EXPOSE, CMD
Then your docker-compose.yml file needs to build this image:
version: '3.8'
services:
  web:
    build: .
    # no image:, volumes:, or command: override
Just in terms of the mechanics of sharing binaries like this, you can run into trouble where a binary needs a shared library that's not present in the target container. The apt-get install mechanism handles this for you. There are also potential troubles if a container has a different shared-library ecosystem (especially Alpine-based containers), or using host binaries from a different operating system.
The Compose file you show mixes several concepts in a way that doesn't really work. A named volume is always a directory, so trying to mount that over the /bin/wkhtmltopdf file in the second container causes the error you see. There's a dependency issue for which container starts up first and gets to create the volume. A container only runs a single command, and if you have both entrypoint: and command: then the command gets passed as extra arguments to the entrypoint (and if the entrypoint is an sh -c ... invocation, effectively ignored).
If you really wanted to try this approach, you should make web: {depends_on: [wkhtmltopdf]} to force the dependency order. The second container should mount the volume somewhere else, it probably shouldn't have an entrypoint:, and it should do something like command: cp -a /bin/wkhtmltopdf /export. (It will exit immediately once this cp finishes, but that shouldn't matter.) The first container can then mount the volume on, say, /usr/local/bin, and not specially set command: or entrypoint:. There will still be a minor race condition (you're not guaranteed the cp command will complete before Apache starts) but it probably wouldn't be a practical problem.
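Put together, the copy-out approach sketched above might look like the following (untested; I've mounted the volume at /opt/wkhtmltopdf in the web container rather than /usr/local/bin, so it can't shadow the PHP image's own scripts that live there):

```yaml
version: '3.8'
services:
  wkhtmltopdf:
    image: madnight/docker-alpine-wkhtmltopdf
    entrypoint: []                       # clear the image's own entrypoint
    command: cp -a /bin/wkhtmltopdf /export
    volumes:
      - wkhtmltopdfvol:/export           # mount the volume somewhere neutral
  web:
    image: php:7.4-apache
    depends_on:
      - wkhtmltopdf
    volumes:
      - wkhtmltopdfvol:/opt/wkhtmltopdf  # binary appears as /opt/wkhtmltopdf/wkhtmltopdf
volumes:
  wkhtmltopdfvol:
```

The race mentioned above still applies (Apache may start before the cp finishes), and the shared-library caveats make the apt-get install route the more robust option.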
I have this script: docker run -it -p 4000:4000 bitgosdk/express:latest --disablessl -e test
How do I put this command into a Dockerfile with its arguments?
FROM bitgosdk/express:latest
EXPOSE 4000
???
I've gone through your Dockerfile contents.
The command running inside the container is:
/ # ps -ef | more
PID USER TIME COMMAND
1 root 0:00 /sbin/tini -- /usr/local/bin/node /var/bitgo-express/bin/bitgo-express --disablessl -e test
The command looks like this because the entrypoint set in the Dockerfile is ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/node", "/var/bitgo-express/bin/bitgo-express"], and the arguments --disablessl -e test are the ones provided while running the docker run command.
The --disablessl -e test arguments can be set inside your Dockerfile using CMD:
CMD ["--disablessl", "-e", "test"]
New Dockerfile:
FROM bitgosdk/express:latest
EXPOSE 4000
CMD ["--disablessl", "-e", "test"]
Refer to this to learn the difference between ENTRYPOINT and CMD.
You don't.
This is what docker-compose is used for.
i.e. create a docker-compose.yml with contents like this:
version: "3.8"
services:
  test:
    image: bitgosdk/express:latest
    command: --disablessl -e test
    ports:
      - "4000:4000"
and then execute the following in a terminal to get an interactive terminal for the service named test (note that plain docker-compose run does not publish the service's ports; add --service-ports if you need port 4000 mapped):
docker-compose run --service-ports test
Even if @mchawre's answer seems to directly answer the OP's question "syntactically speaking" (since a Dockerfile was asked for), a docker-compose.yml is definitely the way to go to make a docker run command, as custom as it might be, reproducible in a declarative way (a YAML file).
Just to complement @ChrisBecke's answer, note that writing this YAML file can be automated. See e.g. the FOSS tool (under MIT license) https://github.com/magicmark/composerize
FTR, the snippet below was automatically generated from the following docker run command, using the accompanying webapp https://composerize.com/:
docker run -it -p 4000:4000 bitgosdk/express:latest
version: '3.3'
services:
  express:
    ports:
      - '4000:4000'
    image: 'bitgosdk/express:latest'
I omitted the CMD arguments --disablessl -e test on purpose, as composerize does not seem to support these extra arguments. This may sound like a bug (and FTR a related issue is open), but meanwhile it might just be viewed as a feature, in line with @DavidMaze's comment…
I found the following guideline to set up a Jupyter notebook locally:
version: "3"
services:
  datascience-notebook:
    image: jupyter/datascience-notebook
    volumes:
      - /Absolute/Path/To/Where/Your/Notebook/Files/Will/Be/Saved:/home/jovyan/work
    ports:
      - 8888:8888
    container_name: datascience-notebook-container
Now I want to add one more library to this image. The command is conda install -c conda-forge fbprophet. It's explained here how to achieve it with a Dockerfile. However, how can I achieve that using Compose?
You can override the entrypoint in the docker-compose file. Since this overrides the entrypoint defined in any ancestor Dockerfile, you need to make sure your override still calls that original entrypoint command.
The entrypoint of jupyter/base-notebook (root of docker image you are using) is
ENTRYPOINT ["tini", "-g", "--"]
Adding the following line to the compose file may do what you want (I haven't tried it). Note that && is shell syntax, so the whole thing has to go through a shell; in a plain exec-form array, "&&" would be passed to conda as a literal argument:
entrypoint: sh -c "conda install --yes -c conda-forge fbprophet && tini -g -- start-notebook.sh"
(start-notebook.sh is the image's default CMD, which has to be restated here because overriding the entrypoint this way swallows it.)
But the downside is that this command will run every time the container starts, slowing down container start-up. So the recommended way is the solution you mentioned in your question.
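For completeness, the build-time route the question links to can be as small as this (a sketch; fbprophet and the base image name come from the question):

```dockerfile
FROM jupyter/datascience-notebook
RUN conda install --yes -c conda-forge fbprophet
```

Then point the compose service at it with build: . instead of image: jupyter/datascience-notebook, so the install happens once at build time rather than on every container start.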
I have a container that uses a volume in its entrypoint, for example:
CMD bash /some/volume/bash_script.sh
I moved this to Compose, but it only works if my compose file points to a Dockerfile in the build section; if I try to write the same line in the command section it does not act as I expect and throws a file-not-found error.
I also tried docker-compose run <specific service> bash /some/volume/bash_script.sh, which gave me the same error.
The question is: why don't I have this volume at the time the docker-compose command is executed? Is there any way to make this work / override the CMD in my Dockerfile?
EDIT:
I'll show specifically how I do this in my files:
docker-compose:
version: '3'
services:
  base:
    build:
      context: ..
      dockerfile: BaseDockerfile
    volumes:
      - code:/volumes/code/
  my_service:
    volumes:
      - code:/volumes/code/
    container_name: my_service
    image: my_service_image
    ports:
      - 1337:1337
    build:
      context: ..
      dockerfile: Dockerfile
volumes:
  code:
BaseDockerfile:
FROM python:3.6-slim
WORKDIR /volumes/code/
COPY code.py code.py
CMD tail -f /dev/null
Dockerfile:
FROM python:3.6-slim
RUN apt-get update && apt-get install -y redis-server \
    alien \
    unixodbc
WORKDIR /volumes/code/
CMD python code.py;
This works.
But if I try to add this line to docker-compose.yml:
command: python code.py
Then the file doesn't exist at the time the command runs. I was expecting this to behave the same as the CMD instruction.
Hmm, nice point!
command: python code.py is not exactly the same as CMD python code.py;!
The difference lies in how the two are interpreted. CMD python code.py; in the Dockerfile is a shell-form command: Docker runs it through /bin/sh -c (which is also why the trailing ; is accepted there).
The problem is about the differences between these two types of CMD (i.e. CMD ["something"] vs CMD "something").
For more info about these two, see here.
But, you may still be wondering what's wrong with your example.
In your case, Compose takes the string in command: python code.py and splits it into an exec-form argument array, ["python", "code.py"], which is executed directly with no shell involved.
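The shell-form vs exec-form difference is easy to see outside Docker: in shell form the line is handed to /bin/sh -c, so shell syntax such as a trailing ; is interpreted, while in exec form arguments reach the program verbatim. A minimal illustration:

```shell
# Shell form: /bin/sh parses the line, so the trailing ';' is shell
# syntax and never reaches echo.
sh -c 'echo hello;'    # prints: hello

# Exec form: no shell involved, so a literal ';' in an argument is
# passed straight through.
echo 'hello;'          # prints: hello;
```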
The (partial) answer is that the error that was thrown was not at all what the problem was.
Running the following command worked fine: bash -c 'python code.py'. I still can't explain why there was a difference between CMD in the Dockerfile and the docker-compose command option, but this solved it for me.
I found out this will work:
command: python ./code.py
I have just started learning Docker, and have run into this issue, which I don't know how to get around.
My Dockerfile looks like this:
FROM node:7.0.0
WORKDIR /app
COPY app /app
COPY hermes-entry /usr/local/bin
RUN chmod +x /usr/local/bin/hermes-entry
COPY entry.d /entry.d
RUN npm install
RUN npm install -g gulp
RUN npm install gulp
RUN gulp
My docker-compose.yml looks like this:
version: '2'
services:
  hermes:
    build: .
    container_name: hermes
    volumes:
      - ./app:/app
    ports:
      - "4000:4000"
    entrypoint: /bin/bash
    links:
      - postgres
    depends_on:
      - postgres
    tty: true
  postgres:
    image: postgres
    container_name: postgres
    volumes:
      - ~/.docker-volumes/hermes/postgresql/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: password
    ports:
      - "2345:5432"
After starting the containers up with:
docker-compose up -d
I tried running a simple bash cmd:
docker-compose run hermes ls
And I got this error:
/bin/ls cannot execute binary file
Any idea on what I am doing wrong?
The entrypoint to your container is bash. By default bash expects a shell script as its first argument, but /bin/ls is a binary, as the error says. If you want to run /bin/ls you need to use -c /bin/ls as your command. -c tells bash that the rest of the arguments are a command line rather than the path of a script, and the command line happens to be a request to run /bin/ls.
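The failure is easy to reproduce without a container (assuming a Linux box with bash and /bin/ls present):

```shell
# bash treats its first non-option argument as a script to read;
# /bin/ls is an ELF binary, so bash refuses it ("cannot execute
# binary file") and exits non-zero.
bash /bin/ls

# With -c, bash treats the argument as a command line instead,
# so this actually runs ls.
bash -c /bin/ls
```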
You can't run Gulp and Node at the same time in one container. Containers should always have one process each.
If you just want Node to serve files, remove the entrypoint from your hermes service.
You can add another service to run Gulp; if you are having it run tests, you'd have to map the same volume and add a command: ["gulp"].
And you'd need to remove RUN gulp from your Dockerfile (unless you are using it to build your Node files), then run docker-compose up.
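Putting those pieces together, the two-service layout might look like this (a sketch based on the question's files; the postgres service is omitted for brevity, and the gulp service is only needed when you actually want Gulp running as its own process):

```yaml
version: '2'
services:
  hermes:
    build: .
    volumes:
      - ./app:/app
    ports:
      - "4000:4000"
    # no entrypoint: override, so the image's default node command runs
  gulp:
    build: .
    volumes:
      - ./app:/app
    command: ["gulp"]
```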