Issues rewriting url route/add a PathPrefix to jupyter lab with traefik - docker

I am having trouble rewriting the route, or adding a path prefix to a route, for a JupyterLab service in Docker so that http://jupyter-test.localhost/user starts JupyterLab. I also tried removing the stripprefix middleware, with no luck. Any help would be appreciated, thank you.
docker-compose.yml
version: "3.8"
services:
reverse-proxy:
image: traefik:v2.4
command: --api.insecure=true --providers.docker # --log.level=DEBUG
ports:
- "80:80"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- traefik.enable=false
jupyter_rewrite_path:
restart: always
image: jupyter/scipy-notebook
command: jupyter-lab --ip='*' --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.base_url=/user
labels:
- traefik.http.routers.jupyter_rewrite_path.rule=Host(`jupyter-test.localhost`) && PathPrefix(`/user`)
- traefik.http.services.jupyter_rewrite_path.loadbalancer.server.port=8888
- "traefik.http.routers.jupyter_rewrite_path.middlewares=jupyter_rewrite_path_stripprefix"
- "traefik.http.middlewares.jupyter_rewrite_path_stripprefix.stripprefix.prefixes=/user"
Start the stack with docker-compose up.

When I start containers using your docker-compose.yml file, I see that the jupyter_rewrite_path container is marked as "unhealthy". Look at the STATUS column in this output:
$ docker compose ps
NAME ... STATUS ...
jupyter_jupyter_rewrite_path_1 ... Up 58 seconds (unhealthy) ...
jupyter_reverse-proxy_1 ... Up 58 seconds ...
Traefik will not direct traffic to an unhealthy service; if you look at your Traefik dashboard (http://localhost:8080/dashboard/#/http/routers), you'll see that the Jupyter container doesn't show up in the list.
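A quick way to confirm this is to ask Docker for the health status directly (a sketch; the container name on your machine may differ):

docker inspect --format '{{.State.Health.Status}}' jupyter_jupyter_rewrite_path_1
# prints "unhealthy" once the healthcheck's retries are exhausted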
The container is marked unhealthy because of a healthcheck defined in the image; we can see that with docker image inspect, which shows us:
"Healthcheck": {
"Test": [
"CMD-SHELL",
"wget -O- --no-verbose --tries=1 --no-check-certificate http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1"
],
"Interval": 5000000000,
"Timeout": 3000000000,
"StartPeriod": 5000000000,
"Retries": 3
},
So it's connecting to /api on the container and expecting a successful response (the durations above are in nanoseconds, i.e. a 5-second interval with a 3-second timeout). As we can see from the container logs, it is in fact getting a 404 error:
jupyter_rewrite_path_1 | [W 2023-02-02 20:50:38.456 ServerApp] 404 GET /api (6d36d539cca44c57bb06702c21c5cc9b#127.0.0.1) 0.84ms referer=None
And that's because you've set --NotebookApp.base_url=/user, but the healthcheck requests /api rather than /user/api.
If you look at the healthcheck, you can see that it builds the URL from a number of variables:
http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api
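Those are standard shell parameter expansions: ${GEN_CERT:+s} appends an s only when GEN_CERT is set, and ${JUPYTERHUB_SERVICE_PREFIX:-/} substitutes / when that variable is unset. A quick sketch of the two cases:

$ JUPYTER_PORT=8888; unset GEN_CERT JUPYTERHUB_SERVICE_PREFIX
$ echo "http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api"
http://localhost:8888/api
$ JUPYTERHUB_SERVICE_PREFIX=/user/
$ echo "http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api"
http://localhost:8888/user/api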
By setting the JUPYTERHUB_SERVICE_PREFIX variable, we can get the healthcheck to connect to Jupyter at the expected path. That looks like:
jupyter_rewrite_path:
  restart: always
  image: docker.io/jupyter/scipy-notebook
  environment:
    JUPYTERHUB_SERVICE_PREFIX: /user/
  command:
    - jupyter-lab
    - --ip=*
    - --NotebookApp.token=
    - --NotebookApp.password=
    - --NotebookApp.base_url=/user
  labels:
    - traefik.enable=true
    - traefik.http.routers.jupyter_rewrite_path.rule=Host(`jupyter-test.localhost`) && PathPrefix(`/user`)
    - traefik.http.services.jupyter_rewrite_path.loadbalancer.server.port=8888
You'll note I've dropped the stripprefix bits here, because they're no longer necessary: by setting the --NotebookApp.base_url option, you're telling Jupyter that it's hosted at /user, so we don't need (or want) to strip the prefix.
With the above configuration, I can successfully access the notebook server at http://jupyter-test.localhost/user/ (plain http://localhost/user/ won't match the Host rule).
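To double-check the routing yourself, a couple of quick probes (a sketch, assuming the compose file above); Jupyter's /api endpoint should answer with a JSON version string both through Traefik and inside the container:

# through Traefik, matching the Host && PathPrefix rule
curl http://jupyter-test.localhost/user/api
# inside the container, the same URL the healthcheck uses
docker compose exec jupyter_rewrite_path wget -qO- http://localhost:8888/user/api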

Related

Docker: Unable to access Minio Web Browser

I am having trouble accessing the MinIO embedded web-based object browser. The http://127.0.0.1:9000 and http://127.0.0.1:45423 addresses immediately show "This page isn't working. ERR_INVALID_HTTP_RESPONSE".
The http://172.22.0.8:9000 and http://172.22.0.8:45423 addresses load until timeout and land on "This page isn't working. ERR_EMPTY_RESPONSE".
Am I missing something in my Docker setup?
docker-compose.yml:
version: "3.7"
services:
minio-image:
container_name: minio-image
build:
context: ./dockerfiles/dockerfile_minio
restart: always
working_dir: "/minio-image/storage"
volumes:
- ./Storage/minio/storage:/minio-image/storage
ports:
- "9000:9000"
environment:
MINIO_ROOT_USER: minio-image
MINIO_ROOT_PASSWORD: minio-image-pass
command: server /minio-image/storage
Dockerfile
FROM minio/minio:latest
CMD wget https://dl.min.io/client/mc/release/linux-amd64/mc && \
    chmod +x mc
From minio-image container logs:
API: http://172.22.0.8:9000 http://127.0.0.1:9000
Console: http://172.22.0.8:45423 http://127.0.0.1:45423
Documentation: https://docs.min.io
WARNING: Console endpoint is listening on a dynamic port (45423), please use --console-address ":PORT" to choose a static port.
Logging into the container through the CLI and running pwd and ls gives /minio-image/storage and airflow-files mlflow-models model-support-files, respectively.
I see a few problems here.
First, you're only publishing port 9000, which is the S3 API port. If I run your docker-compose.yml, access to port 9000 works just fine; on the Docker host, I can run curl http://localhost:9000 and get:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>16A25441E50432A4</RequestId><HostId>b1eed50d-9218-488a-9df6-fe008e758b27</HostId></Error>
...which is expected, because I haven't provided any credentials.
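If you want to exercise the S3 API with the credentials, a sketch using the MinIO client (assuming mc is installed on the host) would be:

mc alias set local http://localhost:9000 minio-image minio-image-pass
mc ls local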
If you want to access the console, you need to do two things:
1. As instructed by the log message, set a static console port using --console-address.
2. Publish this port in the ports section of your docker-compose.yml.
That gives us:
version: "3.7"
services:
minio-image:
container_name: minio-image
build:
context: ./dockerfiles/dockerfile_minio
restart: always
working_dir: "/minio-image/storage"
volumes:
- ./Storage/minio/storage:/minio-image/storage
ports:
- "9000:9000"
- "9001:9001"
environment:
MINIO_ROOT_USER: minio-image
MINIO_ROOT_PASSWORD: minio-image-pass
command: server /minio-image/storage --console-address :9001
Running the above docker-compose.yml, I can access the MinIO console at http://localhost:9001 and log in using the minio-image/minio-image-pass credentials.
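As a quick sanity check from the Docker host (a sketch; /minio/health/live is MinIO's unauthenticated liveness endpoint):

curl -I http://localhost:9000/minio/health/live   # S3 API: expect HTTP/1.1 200 OK
curl -I http://localhost:9001                     # console: expect the login page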

Traefik + Docker for Windows: Failed to create a client for docker, error: protocol not available & Provider connection error protocol not available

I'm having trouble getting a basic Traefik routing setup to work.
My goal is to get basic routing with two helloworld apps (each different to tell apart), both on port 80, e.g.:
demo1.localhost -> helloworld1
demo2.localhost -> helloworld2
Each of the images works fine if I run them via docker run in isolation.
Using PowerShell from my project dir, /app, when I run docker-compose up I get the following:
The Traefik service launches, I can visit the dashboard just fine but the routing table doesn't show my routes. demo1 and demo2 launch just fine, but obviously I can't connect to them because the routing isn't working.
Even though the services all launch successfully, I repeatedly get the following errors:
traefik | ... "Failed to create a client for docker, error: protocol not available" providerName=docker
traefik | ... "Provider connection error protocol not available, retrying ..." providerName=docker
I've included my docker-compose.yml file below, which is the only file in my dir, /app.
docker-compose.yml:
# app/docker-compose.yml
version: '3.8'
networks:
  myweb:
    driver: nat
services:
  proxy:
    image: traefik:v2.3.0-rc4-windowsservercore-1809
    container_name: traefik
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - source: '\\.\pipe\docker_engine'
        target: '\\.\pipe\docker_engine'
        type: npipe
    command:
      - "--api.insecure=true"
      - "--providers.docker"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    networks:
      - myweb
    labels:
      - "traefik.http.routers.dashboard.rule=Host(`dash.localhost`)"
      - "traefik.docker.network=app_myweb"
  demo1:
    image: helloworld:1
    container_name: demo1
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=app_myweb"
      - "traefik.port=80"
      - "traefik.http.routers.demo1.rule=Host(`demo1.localhost`)"
    # Have tried this below, doesn't help.
    # volumes:
    #   - source: '\\.\pipe\docker_engine'
    #     target: '\\.\pipe\docker_engine'
    #     type: npipe
    networks:
      - myweb
    depends_on:
      - proxy
  demo2:
    image: helloworld:2
    container_name: demo2
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=app_myweb"
      - "traefik.port=80"
      - "traefik.http.routers.demo2.rule=Host(`demo2.localhost`)"
    networks:
      - myweb
    depends_on:
      - proxy
I saw a suggestion somewhere that I should enable the setting "Expose daemon on tcp://localhost:2375 without TLS" in Docker Desktop settings, which I have done, but it doesn't help.
My setup is:
Docker Desktop (v19.03.12) for Windows
Docker using Windows Containers
Windows 10 (10.0.18363 Build 18363)
Question #1:
Anybody have any idea what might be causing the problem?
Question #2:
Notice in my file I also have a route set up for the dashboard, to route from dash.localhost to localhost:8080/dashboard, but even that doesn't work. Any idea how to get that working? Do I need to tell it to route from 80->8080 for the dashboard?
According to a ticket on their GitHub, you seem to be:
- missing --providers.docker.endpoint=npipe:////./pipe/docker_engine in the Traefik command line
- sharing \\.\pipe\docker_engine when Docker is expecting .\pipe\docker_engine
Try making those two changes and see if that helps Traefik connect to your Docker daemon. None of your routes will work until Traefik can talk to Docker to read the labels of your containers.
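As a sketch, the proxy service's command section with the missing endpoint flag added (the rest of the compose file stays as it is; the npipe path is the one from the GitHub ticket):

command:
  - "--api.insecure=true"
  - "--providers.docker"
  - "--providers.docker.endpoint=npipe:////./pipe/docker_engine"
  - "--providers.docker.exposedbydefault=false"
  - "--entrypoints.web.address=:80"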

Docker-compose setting problem about Domjudge server

I want to build a DOMjudge server with mariadb, phpmyadmin, and judgehost in Docker, based on Debian 9.
I've installed Docker and docker-compose.
When I run docker-compose up -d, some WARNING and ERROR messages pop out.
Here is the entire docker-compose.yml file:
http://codepad.org/souBFdFz
WARNING and ERROR messages:
WARNING: some networks were defined but are not used by any service: phpmyadmin, dj-judgedameons_1, dj-judgedameons_2
ERROR: for domjudge_dj-judgedameons_2_1 Cannot start service dj-judgedameons_1 : OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:311:getting Starting domjudge_dj-judgedameons_1_1
...and a lot of error messages that I can't even read (binary code or addresses, I think)
Please help me fix it, or tell me if there is an easy way to set up a DOMjudge server with mariadb, phpmyadmin, and judgehost.
Thanks!
Update
I've tried this file several times; the result differs each time, but it still can't connect to the server (DOMjudge & phpmyadmin).
Here is the message:
https://i.stack.imgur.com/qDcDd.jpg
Unfortunately what you want to do is not really possible because of how the application is built: containers need to wait for each other and some of them need manual actions.
However, this is a sequence of actions that works and will bring all containers up and running.
NOTE: I removed the networks declarations because they don't add any value.
version: '3'
services:
  dj-mariadb:
    image: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_DATABASE=domjudge
      - MYSQL_USER=domjudge
      - MYSQL_PASSWORD=djpw
    command: --max-connections=1000
  dj-domserver:
    image: domjudge/domserver:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - CONTAINER_TIMEZONE=Asia/Taipei
      - MYSQL_ROOT_PASSWORD=rootpw
      - MYSQL_DATABASE=domjudge
      - MYSQL_USER=domjudge
      - MYSQL_PASSWORD=djpw
    ports:
      - 9090:80
    links:
      - dj-mariadb:mariadb
  dj-judgehost:
    image: domjudge/judgehost:latest
    privileged: true
    hostname: judgedaemon-0
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - DAEMON_ID=0
      - JUDGEDAEMON_PASSWORD=domjudge
    links:
      - dj-domserver:domserver
  dj-judgehost_1:
    image: domjudge/judgehost:latest
    privileged: true
    hostname: judgedaemon-1
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - DAEMON_ID=1
      - JUDGEDAEMON_PASSWORD=domjudge
    links:
      - dj-domserver:domserver
  dj-judgehost_2:
    image: domjudge/judgehost:latest
    privileged: true
    hostname: judgedaemon-2
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    environment:
      - DAEMON_ID=2
      - JUDGEDAEMON_PASSWORD=domjudge
    links:
      - dj-domserver:domserver
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: myadmin
    ports:
      - 8888:80
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=dj-mariadb
    links:
      - dj-mariadb:db
1. Start the database and wait for it to initialize (otherwise the server will exit because it cannot find the schema it needs):
docker-compose up -d dj-mariadb
2. Start the server:
docker-compose up -d dj-domserver
3. Get the admin password from the logs:
docker-compose logs dj-domserver
Look for the line saying Initial admin password is .... and save the password.
4. Set the judgehost password in the web interface: open http://localhost:9090 and log in with user admin and the password you saved from the previous step. Go to Users and click on the judgehost user. In there, change the password to domjudge (according to what you set in the docker-compose.yml for JUDGEDAEMON_PASSWORD). Save the data.
5. Start the rest of the containers:
docker-compose up -d
6. Verify that all containers are up and running:
docker-compose ps
Output should look similar to this:
Name                        Command                          State   Ports
---------------------------------------------------------------------------------------------------
domjudge_dj-domserver_1     /scripts/start.sh                Up      0.0.0.0:9090->80/tcp
domjudge_dj-judgehost_1     /scripts/start.sh                Up
domjudge_dj-judgehost_1_1   /scripts/start.sh                Up
domjudge_dj-judgehost_2_1   /scripts/start.sh                Up
domjudge_dj-mariadb_1       docker-entrypoint.sh --max ...   Up      3306/tcp
myadmin                     /run.sh supervisord -n -j ...    Up      0.0.0.0:8888->80/tcp, 9000/tcp
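If you'd rather script step 3 instead of scanning the logs by eye, something like this should work (a sketch; the exact log wording may vary between domserver versions):

docker-compose logs dj-domserver | grep -i "initial admin password"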

How to fix 'Cookie file /var/lib/rabbitmq/.erlang.cookie must be accessible by owner only' error in windows server 2019 with DockerProvider service

I installed Docker on Windows Server 2019 with DockerProvider, using this code:
Install-Module DockerProvider
Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview
[Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
After that, I installed Docker-Compose with this code:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\Docker\docker-compose.exe
After that, I used this docker-compose file:
version: "3.5"
services:
rabbitmq:
# restart: always
image: rabbitmq:3-management
container_name: rabbitmq
ports:
- 5672:5672
- 15672:15672
networks:
- myname
# network_mode: host
volumes:
- rabbitmq:/var/lib/rabbitmq
networks:
myname:
name: myname-network
volumes:
rabbitmq:
driver: local
Everything is OK up to here, but after I call the http://localhost:15672/ URL in my browser, RabbitMQ crashes and I see this error in docker logs <container-id>:
Cookie file /var/lib/rabbitmq/.erlang.cookie must be accessible by owner only
This .yml file works correctly in Docker for Windows, but after running it on Windows Server, I see this error.
The solution is to map the volume to a different path, so the cookie file is not created on the volume:
https://github.com/docker-library/rabbitmq/issues/171#issuecomment-316302131
So for your example, not:
- rabbitmq:/var/lib/rabbitmq
but:
- rabbitmq:/var/lib/rabbitmq/mnesia
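In the compose file from the question, that is a one-line change to the service's volumes (a sketch; mounting the named volume at the mnesia subdirectory keeps the data persistent while letting the image create .erlang.cookie with the permissions it wants):

services:
  rabbitmq:
    image: rabbitmq:3-management
    volumes:
      - rabbitmq:/var/lib/rabbitmq/mnesia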
You also have the option to overwrite the command of the docker image to fix the issue it is complaining about. Assuming that your cookie file is /var/lib/rabbitmq/.erlang.cookie, replace the original docker image command, which is probably:
["rabbitmq-server"]
with:
["bash", "-c", "chmod 400 /var/lib/rabbitmq/.erlang.cookie; rabbitmq-server"]
In your docker-compose file it will look like this:
...
image: rabbitmq:3-management
...
ports:
- "5672:5672"
- "15672:15672"
volumes:
- ...
command: ["bash", "-c", "chmod 400 /var/lib/rabbitmq/.erlang.cookie; rabbitmq-server"]
Of course, this introduces a workaround/technical debt: you're assuming that rabbitmq-server will keep behaving like that in the future.

Docker-Compose can't resolve names of containers?

I have a weird problem, as it seems to have been working fine until today; I can't tell what's changed since then, however. I run docker-compose up --build --force-recreate and the build fails, saying that it can't resolve the host name.
The issue is specifically because of curl commands inside one of the Dockerfiles:
USER logstash
WORKDIR /usr/share/logstash
RUN ./bin/logstash-plugin install logstash-input-beats
WORKDIR /tmp
COPY templates/winlogbeat.template.json winlogbeat.template.json
COPY templates/metricbeat.template.json metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/metricbeat-6.3.2 -d#metricbeat.template.json
RUN curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_template/winlogbeat-6.3.2 -d#winlogbeat.template.json
Originally, I had those commands running inside of the Elasticsearch container, but it stopped working, reporting Could not resolve host: elasticsearch; Unknown error.
I thought maybe it was trying to do the RUN commands too soon, so I moved the process to the Logstash container, but the issue remains. Logstash depends on Elasticsearch, so Elastic should be up and running by the time the Logstash container runs this.
I've tried deleting images, containers, the network, etc., but nothing seems to let me run these curl commands during the build process.
I'm thinking that perhaps the Docker daemon is caching DNS names, but I can't figure out how to reset it, as I've already deleted and recreated the network several times.
Can anyone offer any ideas?
Host: Ubuntu Server 18.04
SW: Docker-CE (current version)
ELK stack: All are the official 6.3.2 images provided by Elastic.
Docker-Compose.YML:
version: '2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    # ports:
    #   - "9200:9200"
    #   - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      HOSTNAME: "elasticsearch"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5000:5000"
      - "5044:5044"
      - "5045:5045"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config:ro
    # Port 5601 is not exposed outside of the container
    # Can be accessed through Nginx Reverse Proxy only
    # ports:
    #   - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
  nginx:
    build:
      context: nginx/
    environment:
      - APPLICATION_URL=http://docker.local
    volumes:
      - ./nginx/conf.d/:/etc/nginx/conf.d:ro
    ports:
      - "80:80"
    networks:
      - elk
    depends_on:
      - elasticsearch
  fouroneone:
    build:
      context: fouroneone/
    # No direct access, only through Nginx Reverse Proxy
    # ports:
    #   - "8181:80"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  esdata:
Running a curl against elasticsearch from a Dockerfile is the wrong shortcut: during docker build the build container is not attached to the compose network, so the elasticsearch hostname cannot resolve, and Elasticsearch may not even be up yet. The Dockerfile is the wrong place for this altogether.
Also, I would not put this script in the Dockerfile; at most, you could use it to alter the ENTRYPOINT for the image if you really wanted to keep it there (again, I would not advise it).
Best to do here is to have the logstash service's Dockerfile contain only the image plus the updated input plugin, and remove all the rest of the lines. You could then have a logstash_setup service which does the setup bits, using the logstash image or (even cleaner) a basic centos image, which has bash and curl installed, since all you do is run a couple of curl commands passing some files; see the compose sketch after the script below.
The script I am talking about might look something like this:
#!/bin/bash
set -euo pipefail

es_url=http://elasticsearch:9200

# Wait for Elasticsearch to start up before doing anything.
until curl -s $es_url -k -o /dev/null; do
    sleep 1
done

# then put your code here
curl -XPUT -H 'Content-Type: application/json' http://elasticsearch:9200/_ ...
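Wired into the compose file, the setup service might look like this (a sketch: es_setup.sh is a hypothetical name for the script above, and centos:7 is just one convenient image that ships bash and curl; the template paths are assumptions to adapt):

logstash_setup:
  image: centos:7
  volumes:
    - ./es_setup.sh:/es_setup.sh:ro
    - ./logstash/templates:/tmp/templates:ro
  command: ["bash", "/es_setup.sh"]
  networks:
    - elk
  depends_on:
    - elasticsearch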
