Docker jupyter/datascience-notebook won't copy to own NB_USER

I think I'm officially going mad as I can't seem to figure out what I am doing wrong with my docker configuration.
What I'm trying to achieve:
set up a Jupyter server using jupyter/datascience-notebook as a base
copy a bunch of startup scripts into the /home/my-user/.ipython/profile_default/startup folder
use my own user name instead of the default jovyan user
What I have:
a Dockerfile
a docker-compose.yml
the startup file I'm trying to copy
Dockerfile
FROM jupyter/datascience-notebook
USER $NB_USER
ENV JUPYTER_ENABLE_LAB=yes
COPY --chown=${NB_UID}:${NB_GID} notebook-startup-scripts/00-s3fs.py .ipython/profile_default/startup
docker-compose.yml
version: "3.9"
services:
jupyter-1:
build:
context: ./jupyter-lab
user: root
working_dir: /home/my-user/work
ports:
- "8888:8888"
volumes:
environment:
NB_USER: "my-user"
CHOWN_HOME: "yes"
restartable: "yes"
stdin_open: true
tty: true
command: /usr/local/bin/start-notebook.sh --ServerApp.allow_origin="*" --ServerApp.open_browser=False --ServerApp.allow_remote_access=True --ServerApp.trust_xheaders=True --ServerApp.password=${JUPYTER_PASSWORD}
I start the beast using docker compose up.
For some reason I always end up with my 00-s3fs.py file being synced to /home/jovyan/.../ as opposed to /home/my-user/.../ and it drives me nuts.
I tried all kinds of things, e.g. using
COPY --chown=${NB_UID}:${NB_GID} notebook-startup-scripts/00-s3fs.py /home/${NB_USER}.ipython/profile_default/startup
but nothing works :(.

If you move NB_USER into the build args in your docker-compose.yml:
version: "3.9"
services:
jupyter-1:
build:
context: ./jupyter-lab
args:
NB_USER: "my-user"
user: root
working_dir: /home/my-user/work
ports:
- "8888:8888"
volumes:
environment:
CHOWN_HOME: "yes"
restartable: "yes"
stdin_open: true
tty: true
command: /usr/local/bin/start-notebook.sh --ServerApp.allow_origin="*" --ServerApp.open_browser=False --ServerApp.allow_remote_access=True --ServerApp.trust_xheaders=True --ServerApp.password=${JUPYTER_PASSWORD}
This sets $NB_USER before the build process runs. Setting it under environment: only sets $NB_USER after the image has already been built, by which point the file COPY has already happened.
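Note that a build arg is only visible during the build if the Dockerfile declares it (an ARG NB_USER line after FROM), and it only takes effect when the image is actually rebuilt. A minimal rebuild, assuming the jupyter-1 service name from above:
docker compose build --no-cache jupyter-1
docker compose up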

Related

Conditional volumes_from path

I'm struggling with providing environment-specific configuration (say dev/qa/prod) to an application container. The closest I've got to my goal is this docker-compose.yml:
version: '2'
services:
  app:
    image: my-application:latest
    tty: true
    volumes_from:
      - configs:ro
  configs:
    image: my-configs:latest
    tty: true
    volumes:
      - /configs/$ENV
If, say, we're deploying in the QA environment, the relevant configuration will be accessible at /configs/qa (from within the app service). That means that in order to find these configs, the application has to be aware of the environment it is running in, and I don't think that is something the application developers should be concerned about.
So my goal is to have config's /configs/$ENV accessible as simply /configs from within the app service. How can I achieve that? My current idea is rebuilding config's image:
FROM my-configs:latest
ARG env
RUN cp -rf /configs/$env /tmp/configs && rm -rf /configs && cp -rf /tmp/configs /configs
ENTRYPOINT /bin/sh
Then updated docker-compose.yml will look like this:
version: '2'
services:
  app:
    image: my-application:latest
    tty: true
    volumes_from:
      - configs:ro
  configs:
    build:
      context: .
      args:
        env: $ENV
    tty: true
    volumes:
      - /configs
Are there any better options or should I go through with my idea?
You can directly use environment variables in the volumes: specification
volumes:
  - /config/$ENV:/config
I'd put this in the service definition that's actually using it. You don't need to put this into its own image. I'd also tend to avoid volumes_from: in favor of being explicit about what volumes are actually getting mounted.
version: '3'
services:
  app:
    image: my-application:latest
    volumes:
      - /config/$ENV:/config
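Compose substitutes $ENV from the shell environment (or a .env file) when you bring the stack up, so deploying to a given environment is just (qa used here as an example value):
ENV=qa docker-compose up -d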

Docker: Why does my project have a .env file?

I'm working on a group project involving Docker that has a .env file, which looks like this:
DATABASE_URL=xxx
DJANGO_SETTINGS_MODULE=xxx
SECRET_KEY=xxx
Couldn't this just be declared inside the Dockerfile? If so, what is the advantage of making a .env file?
Not sure if I'm going in the right direction with this, but this Docker Docs page says (emphasis my own):
Your configuration options can contain environment variables. Compose uses the variable values from the shell environment in which docker-compose is run. For example, suppose the shell contains POSTGRES_VERSION=9.3 and you supply this configuration:
db:
  image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example, Compose resolves the image to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an empty string. In the example above, if POSTGRES_VERSION is not set, the value for the image option is postgres:.
You can set default values for environment variables using a .env file, which Compose automatically looks for. Values set in the shell environment override those set in the .env file.
If we're using a .env file, then wouldn't I see some ${...} syntax in our docker-compose.yml file? I don't see anything like that, though.
Here's our docker-compose.yml file:
version: '3'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    env_file: .env.dev
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./server:/app
    ports:
      - "8500:8000"
    depends_on:
      - db
    stdin_open: true
    tty: true
  db:
    image: postgres
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    command: bash -c "npm install; npm run start"
    volumes:
      - ./client:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - server
The idea there is probably to have a place to keep secrets separate from docker-compose.yml, so the compose file itself can be kept in VCS and/or shared.
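For what it's worth, the two mechanisms do different things: a .env file sitting next to docker-compose.yml is read automatically for ${...} substitution inside the compose file itself, while env_file: injects variables into the container's environment (your compose file uses the latter, pointing at .env.dev). A small sketch, with made-up values:
# .env, next to docker-compose.yml
POSTGRES_VERSION=13

# docker-compose.yml
services:
  db:
    image: "postgres:${POSTGRES_VERSION}"   # resolved by Compose before containers start
    env_file: .env.dev                      # contents become env vars inside the container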

golang program running fine outside of docker, but exiting with 0 when dockerized

I have the following docker-compose.yml file:
version: "3.3"
services:
api:
build: ./api
expose:
- '8080'
container_name: 'api'
ports:
- "8080:8080"
depends_on:
- db
stdin_open: true
tty: true
networks:
- api-net
db:
build: ./db
expose:
- '27017'
container_name: 'mongo'
ports:
- "27017:27017"
networks:
- api-net
networks:
api-net:
driver: bridge
and the Dockerfile for the api container is as follows:
FROM iron/go:dev
RUN mkdir /app
COPY src/main/main.go /app/
ENV SRC_DIR=/app
ADD . $SRC_DIR
RUN go get goji.io
RUN go get gopkg.in/mgo.v2
# RUN cd $SRC_DIR; go build -o main
CMD ["go", "run", "/app/main.go"]
If I run the code for main.go outside of a container, it runs as expected; however, if I try to run the container as part of docker-compose I get an exit 0. I have seen other threads on Stack Overflow that suggested using stdin_open and tty, but these have not helped. I have also tried creating an .env file, in the same directory I issue docker-compose up from, with COMPOSE_HTTP_TIMEOUT=8000 in it, and this has not worked either. I am looking for help and suggestions as to what I need to do in order for my api container to stay up.
I know that --verbose can be issued with docker-compose, however I'm not sure what I should be looking for in the output that this produces.
I finally managed to get to the bottom of this. In the code that worked outside of a container I had:
http.ListenAndServe("localhost:8080", mux)
The fix was simply to remove localhost, so that I now have:
http.ListenAndServe(":8080", mux)

Running docker-compose up, stuck on an "infinite" "creating... [container/image]" with php and mysql images

I'm new to Docker, so I don't know if it's a programming mistake or something else. One thing I found strange is that on a Mac it worked fine, but on Windows it doesn't.
docker-compose.yml
version: '2.1'
services:
  db:
    build: ./backend
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - /var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=demo
      - MYSQL_USER=user
      - MYSQL_PASSWORD=123
  php:
    build: ./frontend
    ports:
      - "80:80"
    volumes:
      - ./frontend:/var/www/html
    links:
      - db
Dockerfile inside ./frontend
FROM php:7.2-apache
# Enable mysqli to connect to database
RUN docker-php-ext-install mysqli
# Document root
WORKDIR /var/www/html
COPY . /var/www/html/
Dockerfile inside ./backend
FROM mysql:5.7
COPY ./demo.sql /docker-entrypoint-initdb.d
Console:
$ docker-compose up
Creating phpsampleapp_db_1 ... done
Creating phpsampleapp_db_1 ...
Creating phpsampleapp_php_1 ...
It stays like that forever; I tried a bunch of things.
I'm using Docker version 17.12.0-ce with Linux container mode enabled.
I think I don't need the "version" and "services" keys, but anyway.
Thanks.
In my case, the fix was simply to restart Docker Desktop. After that, everything went smoothly.

Docker-compose volumes don't copy any files

I'm on Fedora 23 and I'm using docker-compose to build two containers: app and db.
I want to use this Docker setup as my dev environment, but having to execute docker-compose build and up every time I change the code isn't nice. So I searched around and tried the "volumes" option, but my code doesn't get copied into the container.
When I run the build, a "RUN ls" command doesn't list the "app" folder or any of its files.
Note: in the root folder I have: docker-compose.yml, .gitignore, app (folder), db (folder)
Note: if I remove the volumes and working_dir options and instead use a "COPY . /app" command inside app/Dockerfile, it works and my app runs, but I want it to sync my code.
Anyone know how to make it work?
My docker-compose file is:
version: '2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=myuser
      - DATABASE_PASSWORD=mypass
      - DATABASE_NAME=dbusuarios
      - PORT=3000
    volumes:
      - ./app:/app
    working_dir: /app
  db:
    build: ./db
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=dbusuarios
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=mypass
Here you can see my app container Dockerfile:
https://gist.github.com/jradesenv/d3b5c09f2fcf3a41f392d665e4ca0fb9
Here's the output of the RUN ls command inside the Dockerfile:
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
A volume is mounted in a container. The Dockerfile is used to create the image, and that image is used to make the container. What that means is a RUN ls inside your Dockerfile will show the filesystem before the volume is mounted. If you need these files to be part of the image for your build to complete, they shouldn't be in the volume and you'll need to copy them with the COPY command as you've described. If you simply want evidence that these files are mounted inside your running container, run a
docker exec $container_name ls -l /
Where $container_name will be something like ${folder_name}_app_1, which you'll see in a docker ps.
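Equivalently, with Compose you can skip looking up the generated container name and go through the service name (using the app service and /app mount point from the question):
docker-compose exec app ls -l /app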
Two things: have you tried version: '3'? Version 2 seems to be outdated. Also, try putting the working_dir into the Dockerfile (as WORKDIR) rather than into docker-compose; maybe it's not supported in version 2?
This is a recent docker-compose I have used with volumes and workdirs in the respective Dockerfiles:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    ports:
      - 3001:3001
    volumes:
      - ./frontend:/app
    networks:
      - frontend
  backend:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/app
    networks:
      - frontend
      - backend
    depends_on:
      - "mongo"
  mongo:
    image: mongo
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    networks:
      - backend
networks:
  frontend:
  backend:
You can also extend or override the docker compose configuration. See https://docs.docker.com/compose/extends/ for more info.
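For example, docker-compose automatically layers a docker-compose.override.yml on top of docker-compose.yml, so the bind mount and working_dir could live only in a dev-time override (a sketch based on the question's app service):
# docker-compose.override.yml
version: '2'
services:
  app:
    volumes:
      - ./app:/app
    working_dir: /app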
I had this same issue in Windows!
volumes:
  - ./src/:/var/www/html
On Windows, the ./src/ syntax might not work in the regular command prompt, so use PowerShell instead and then run docker-compose up -d.
It should work if it's a mounting issue.
