Installing Jupyter notebook locally with docker: Extra commands - docker

I found the following guideline to set up a Jupyter notebook locally:
version: "3"
services:
datascience-notebook:
image: jupyter/datascience-notebook
volumes:
- /Absolute/Path/To/Where/Your/Notebook/Files/Will/Be/Saved:/home/jovyan/work
ports:
- 8888:8888
container_name: datascience-notebook-container
Now I want to add one more library to this image. The command is conda install -c conda-forge fbprophet. It's explained here how to achieve it with a Dockerfile. However, how can I achieve that using Compose?

You can override the entrypoint in the Compose file. Since this overrides the entrypoint command defined anywhere in the image's ancestry, you need to make sure you also call the original entrypoint command.
The entrypoint of jupyter/base-notebook (the root of the image you are using) is
ENTRYPOINT ["tini", "-g", "--"]
Adding something like the following to the compose file may do what you wanted to do (I haven't tried it). Note that an exec-form list is never run through a shell, so the && has to be wrapped in bash -c; and since overriding entrypoint: in Compose also clears the image's default CMD, the notebook start command (start-notebook.sh in the Jupyter stacks images) has to be named explicitly:
entrypoint: bash -c "conda install -y -c conda-forge fbprophet && exec tini -g -- start-notebook.sh"
But the downside is that this command will run every time the container is started, slowing down container startup. So the recommended way is the Dockerfile-based solution you mentioned in your question.
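For reference, a minimal sketch of that recommended approach (untested; it reuses the image, package, and paths from the question):
# Dockerfile
FROM jupyter/datascience-notebook
RUN conda install -y -c conda-forge fbprophet

# docker-compose.yml
version: "3"
services:
  datascience-notebook:
    build: .
    volumes:
      - /Absolute/Path/To/Where/Your/Notebook/Files/Will/Be/Saved:/home/jovyan/work
    ports:
      - 8888:8888
    container_name: datascience-notebook-container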

Related

CMD in dockerfile vs command in docker-compose.yml

What is the difference?
Which is preferred?
Should CMD be omitted if command is defined?
command overrides the CMD in the Dockerfile.
If you control the Dockerfile yourself, put it there. It is the cleanest way.
If you want to test something or need to alter the CMD while developing, overriding it with command is faster than changing the Dockerfile and rebuilding the image every time.
Or, if it is a prebuilt image and you don't want to build a derivative FROM ... image just to change the CMD, doing it with command is also a quick solution.
In the common case, you should have a Dockerfile CMD and not a Compose command:.
command: in the Compose file overrides CMD in the Dockerfile. There are some minor syntactic differences (notably, Compose will never automatically insert a sh -c shell wrapper for you) but they control the same thing in the container metadata.
However, remember that there are other ways to run a container besides Compose. docker run won't read your docker-compose.yml file and so won't see that command: line; it's also not read in tools like Kubernetes. If you build the CMD into the image, it will be honored in all of these places.
The place where you do need a command: override is if you need to launch a non-default main process for a container.
Imagine you're building a Python application. You might have a main Django application and a Celery worker, but these have basically the same source code. So for this setup you might make the image's CMD launch the Django server, and override command: to run a Celery worker off the same image.
# Dockerfile
# ENTRYPOINT is not required
CMD ["./manage.py", "runserver", "0.0.0.0:8080"]

# docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports: ['8080:8080']
    # no command:
  worker:
    build: .
    command: celery worker
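As a usage note (a hedged example; manage.py comes from the sketch above): the same image can also be reused for one-off administrative tasks by overriding the command for a single run:
docker-compose run --rm web ./manage.py migrate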

Docker-compose run command and keep the container running with ports

I am trying to run a Docker Compose file that runs Jupyter notebooks, and I want it to execute a command to export the current notebooks as HTML (for visual reference) every time I run it. But the container doesn't keep running afterwards. How do I fix that?
My docker-compose file:
version: "3"
services:
jupy:
build: .
volumes:
- irrelevant:/app/
ports:
- "8888:8888"
#This command executes and exists
#I want it to run and then I continue working
command: bash -c "jupyter nbconvert Untitled.ipynb --template toc2 --output "Untitled_toc2.html""
My Dockerfile:
FROM python:3.7-slim-stretch
# Setup and installations
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]
You are overriding the command that would normally be executed in your container with the jupyter nbconvert command. Since this is a one-off command the behaviour you see is expected.
A simple solution would be to modify the CMD of your container to include the jupyter nbconvert, something like this:
FROM your_image
#
# YOUR DOCKERFILE LINES
#
CMD YOUR_CURRENT_CMD && jupyter nbconvert Untitled.ipynb --template toc2 --output Untitled_toc2.html
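Filling that in with the CMD from the question's Dockerfile, a minimal (untested) sketch could look like this, with the export placed first so the long-running notebook server is what keeps the container up:
CMD jupyter nbconvert Untitled.ipynb --template toc2 --output Untitled_toc2.html && \
    jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root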
The nbconvert command is a one-off command. Just set up the container for its main purpose, running Jupyter, and use nbconvert when you need it, as it's not needed for the container to run.
Maybe set up an alias or a Makefile to avoid typing the command; otherwise you would need to restart the container to re-export those HTML files, which is not worth it at all.
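For example, a minimal sketch of such a Makefile (the target name is hypothetical; it assumes the jupy service from the compose file above is already up):
export-html:
	docker-compose exec jupy jupyter nbconvert Untitled.ipynb --template toc2 --output Untitled_toc2.html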
Thank you all, your advice is appreciated.
The easiest way I found is to run the command in a separate service and assign the same image name so it doesn't build the image twice:
version: "3"
services:
jupy_export:
image: note-im
build: .
volumes:
- irrelevant:/app/
command: jupyter nbconvert Untitled.ipynb --template toc2 --output "Untitled.html"
jupy:
image: note-im
build: .
volumes:
- irrelevant:/app/
ports:
- "8888:8888"
Otherwise, I can run this command from inside a notebook as:
!jupyter nbconvert Untitled.ipynb --template toc2 --output "Untitled.html"
Which would allow me to keep working without stopping the container.

docker-compose debugging: show `pwd` and `ls -l` of the service at run time?

I have a docker-compose file with a service called 'app'. When I try to run it I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the output of the build process and view the results.
I hope this answer helps you.
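To actually see that RUN output you may need to adjust how the build is invoked, depending on your Compose/BuildKit version (a hedged example; the legacy builder prints RUN output by default, while BuildKit hides it unless you ask for plain progress):
docker-compose build --no-cache
# or, with Compose v2 / BuildKit:
docker compose build --no-cache --progress=plain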
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
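As an illustration of that last point (a hypothetical sketch; the question's Dockerfile already uses a single CMD):
# problematic split: an override like "ls -l ./apps" would be passed to python
# ENTRYPOINT ["python"]
# CMD ["apps/index.py"]

# combined into a single CMD, which docker-compose run can replace cleanly
CMD ["python", "apps/index.py"]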

How does the command key in docker-compose file work

I am trying to understand the Docker sample application 'example-voting-app'. I am trying to build the app with docker-compose. I am confused by the behaviour of the 'command' key in the docker-compose file and the CMD instruction in the Dockerfile. The application includes a service called 'vote'. The configuration for the vote service in the docker-compose.yml file is:
services:   # we list all our application services under this 'services' section
  vote:
    build: ./vote   # build the image for this service from the ./vote directory
    command: python app.py
    volumes:
      - ./vote:/app
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
The configuration of the Dockerfile provided in ./vote directory is as below:
# Using official python runtime base image
FROM python:2.7-alpine
# Set the application directory
WORKDIR /app
# Install our requirements.txt
ADD requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
# Copy our code from the current folder to /app inside the container
ADD . /app
# Make port 80 available for links and/or publish
EXPOSE 80
# Define our command to be run when launching the container
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:80", "--log-file", "-", "--access-logfile", "-", "--workers", "4", "--keep-alive", "0"]
My doubt here is which command ('python app.py' or 'gunicorn app:app -b ...') will be executed when I bring the application up with docker-compose up.
The Docker Compose command:, or everything in a docker run invocation after the image name, overrides the Dockerfile CMD, so in this case python app.py is what runs.
If the image also has an ENTRYPOINT, the command you provide here is passed as arguments to the entrypoint in the same way the Dockerfile CMD does.
For a typical Compose setup you shouldn't need to specify a command:. In a Python/Flask context, the most obvious place it's useful is if you're also using a queueing system like Celery with the same shared code base: you can use command: to run a Celery worker off of the image you build, instead of a Flask application.

command/CMD in docker-compose is not equivalent to CMD in Dockerfile

I have a container that uses a volume in its entrypoint, for example:
CMD bash /some/volume/bash_script.sh
I moved this to Compose, but it only works if my Compose file points to a Dockerfile in the build section; if I try to write the same line in the command section it does not act as I expect and throws a file-not-found error.
I also tried docker-compose run <specific service> bash /some/volume/bash_script.sh, which gave me the same error.
The question is: why don't I have this volume at the time the docker-compose 'command' is executed? Is there any way to make this work / override the CMD in my Dockerfile?
EDIT:
I'll show specifically how I do this in my files:
docker-compose:
version: '3'
services:
  base:
    build:
      context: ..
      dockerfile: BaseDockerfile
    volumes:
      - code:/volumes/code/
  my_service:
    volumes:
      - code:/volumes/code/
    container_name: my_service
    image: my_service_image
    ports:
      - 1337:1337
    build:
      context: ..
      dockerfile: Dockerfile
volumes:
  code:
BaseDockerfile:
FROM python:3.6-slim
WORKDIR /volumes/code/
COPY code.py code.py
CMD tail -f /dev/null
Dockerfile:
FROM python:3.6-slim
RUN apt-get update && apt-get install -y redis-server \
alien \
unixodbc
WORKDIR /volumes/code/
CMD python code.py;
This works.
But if I try to add to docker-compose.yml this line:
command: python code.py
Then the file doesn't exist at the time the command runs. I was expecting this to behave the same as the CMD instruction.
Hmm, nice point!
command: python code.py is not exactly the same as CMD python code.py;!
The Dockerfile line is a shell-form CMD: Docker wraps it in /bin/sh -c, so a shell parses and runs it at container start (which is also why the trailing ; is tolerated). Compose, on the other hand, never inserts a shell wrapper: a string given to command: is split into an argument list and executed directly, like an exec-form command.
The problem is about the differences between these two types of CMDs (i.e. CMD ["something"] vs CMD "something").
For more info about these two, see here.
So in your example the Dockerfile CMD gets a shell at run time while the Compose command: does not, and that parsing difference, rather than the volume itself, is the most likely source of the behaviour you are seeing.
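To make the difference concrete, a hedged sketch of the equivalences (using the file names from the question):
# Dockerfile shell form: effectively runs /bin/sh -c "python code.py;"
CMD python code.py;

# Compose: the string is split into ["python", "code.py"] and run with no shell
command: python code.py

# Compose: asking for a shell explicitly reproduces the Dockerfile behaviour
command: bash -c "python code.py"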
The (partial) answer is that the error that was thrown was not at all what the problem was.
Running the following command worked fine: bash -c 'python code.py'. I still can't explain why there was a difference between CMD in the Dockerfile and the docker-compose "command" option, but this solved it for me.
I found out this will work:
command: python ./code.py

Resources