I have a docker-compose project with the following layout:
./
+-- .env
+-- docker-compose.yml
+-- jupyterhub/
| +-- Dockerfile
| +-- jupyter-config.py
+-- jupyterlab/
| +-- Dockerfile
+-- reverse-proxy/
| +-- traefik.toml
I followed the recipe from opendreamkit.org and managed to get the system up and running. However, when I run docker-compose down and then docker-compose up again, I get the following error:
jupyterhub_hub | [E 2020-03-31 08:28:38.108 JupyterHub user:477] Unhandled error starting tester1's server: The 'ip' trait of a Server instance must be a unicode string, but a value of None was specified.
I suspect it has something to do with the following message I get when I build the system:
WARNING: The DOCKER_NETWORK_NAME variable is not set. Defaulting to a blank string.
But I was wondering if anyone could provide me with a workaround or an explanation of why this error occurs?
Thanks in advance for any help in the matter (during these Corona times).
Edit: my docker-compose.yml file:
version: '3'

services:
  # Configuration for Hub+Proxy
  jupyterhub:
    build: jupyterhub              # Build the container from this folder.
    container_name: jupyterhub_hub # The service will use this container name.
    volumes:                       # Give access to Docker socket.
      - /var/run/docker.sock:/var/run/docker.sock
    environment:                   # Env variables passed to the Hub process.
      DOCKER_JUPYTER_IMAGE: jupyterlab_img
      DOCKER_NETWORK_NAME: ${COMPOSE_PROJECT_NAME}_default
      HUB_IP: jupyterhub_hub
    labels:                        # Traefik configuration.
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:x.x.x.x"

  # Configuration for reverse proxy
  reverse-proxy:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - ./reverse-proxy/traefik.toml:/etc/traefik/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/certs:/etc/certs

  # Configuration for the single-user servers
  jupyterlab:
    build: jupyterlab
    image: jupyterlab_img
    command: echo

volumes:
  jupyterhub_data:

networks:
  jupyter:
    # internal:
Disclaimer: take this answer with a huge pinch of salt, as I am just learning how to run JupyterHub and JupyterLab in Docker containers. And in fairness to the JupyterHub, Docker, and Traefik experts: it is not easy for a beginner to figure out which settings to use.
I had the same problem as the OP, but his solution (in the comment of 2020-04-06) didn't work for me. What did work was the following entry in jupyterhub_config.py:
c.DockerSpawner.remove = True
This removes the JupyterLab containers after their users log out of JupyterHub. If they are left to "linger", then the "Unhandled error starting tester1's server: The 'ip' trait of a Server instance must be a unicode string..." error occurs when the same user logs in again. Don't ask me why. Inspiration came from this SO question.
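For context, this is roughly where that line sits in jupyterhub_config.py; the surrounding DockerSpawner settings below are a sketch based on the environment variables from the question, not necessarily the exact file:
import os

c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.JupyterHub.hub_ip = os.environ['HUB_IP']
c.DockerSpawner.image = os.environ['DOCKER_JUPYTER_IMAGE']
c.DockerSpawner.network_name = os.environ['DOCKER_NETWORK_NAME']
c.DockerSpawner.remove = True  # remove single-user containers once they stop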
PS: I used Docker version 19.03.12 on an Ubuntu 20.04.1 host, Traefik version 2.2, and JupyterHub/JupyterLab version 1.2.2.
The command docker-compose down removes the current network. After docker-compose up -d, a new network is created with the same name but a different NetworkID. You can check the NetworkID of a container with:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' CONTAINER_ID
I suppose this is the main reason for the error:
The 'ip' trait of a Server instance must be a unicode string, but a value of None was specified
because the single-user containers still reference the old network. So, in order to restart a specific user container after docker-compose down, you can disconnect it from the old network and connect it to the new one:
docker network disconnect NETWORK_NAME CONTAINER_ID
docker network connect NETWORK_NAME CONTAINER_ID
Finally, you can start this container by logging into the JupyterHub web interface as the user whose container hit this error.
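To double-check that the IDs really differ, you can also inspect the freshly created network itself (NETWORK_NAME and CONTAINER_ID are placeholders, as above):
# ID that the stale container still references:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' CONTAINER_ID
# ID of the network that docker-compose up just created:
docker network inspect --format='{{.Id}}' NETWORK_NAME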
Indeed, in this situation you can use Laryx Decidua's answer to remove containers after docker-compose down. Another option is to create an external network, as in this accepted answer: https://stackoverflow.com/a/51476836/12247535
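With the external-network approach, the network is created once, outside the compose lifecycle (docker network create jupyter_network), so down/up cycles no longer recreate it. The compose file then references it along these lines (a sketch; the network name is an example):
networks:
  default:
    external:
      name: jupyter_network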
More about this docker-compose behavior/problem in this thread: https://github.com/docker/compose/issues/5745
Related
I'm new to using Docker and Spark.
My docker-compose.yml file is
volumes:
  shared-workspace:

services:
  notebook:
    image: docker.io/jupyter/all-spark-notebook:latest
    build:
      context: .
      dockerfile: Dockerfile-jupyter-jars
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
And the Dockerfile-jupyter-jars is:
FROM docker.io/jupyter/all-spark-notebook:latest
USER root
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
RUN mv mysql-connector-java-8.0.28.jar /usr/local/spark/jars/
USER jovyan
To start it up I run
docker-compose up --build
The server is up and running and I'm interested in using spark-sql, but it is throwing an error when trying to connect to the MySQL server:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
I can see mysql-connector-java-8.0.28.jar in the "jars" folder, and I have used the same SQL instruction in a non-Docker Apache Spark installation, where it works.
The MySQL server is also reachable from the same host I'm running Docker on.
Do I need to enable something to reach external connections? Any idea?
Reference: https://hub.docker.com/r/jupyter/all-spark-notebook
The docker-compose.yml and Dockerfile-jupyter-jars files were correct. Since I was using mysql-connector-java-8.0.28.jar, the connector requires SSL, or it has to be disabled explicitly:
jdbc:mysql://user:password@xx.xx.xx.xx:3306/inventory?useSSL=FALSE&nullCatalogMeansCurrent=true
I'm going to leave this example here for: Docker - all-spark-notebook with MySQL dataset.
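For anyone debugging a similar setup, two quick checks from the host can rule out the usual suspects (the service name notebook comes from the compose file above, and the bash /dev/tcp trick merely tests TCP reachability):
# Confirm the connector jar is on Spark's classpath inside the container:
docker-compose exec notebook ls /usr/local/spark/jars | grep mysql

# Confirm the MySQL host/port is reachable from inside the container:
docker-compose exec notebook bash -c 'timeout 3 bash -c "</dev/tcp/xx.xx.xx.xx/3306" && echo reachable'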
So I need rolling updates with Docker on my single-node server. Until now I was using docker-compose, but unfortunately I can't achieve what I need with it. Reading the web, Docker Swarm seems to be the way to go.
I have found how to run an app with multiple replicas on a single node using swarm:
docker service create --replicas 3 --name myapp-staging myapp_app:latest
myapp:latest being built from my docker-compose.yml:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
working_dir: /app
depends_on:
- "postgres"
env_file:
- ".env"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Unfortunately, this doesn't work, since it doesn't pick up the configuration from the docker-compose.yml file: the .env file, the command entry, etc.
Searching deeper, I find that using
docker stack deploy -c docker-compose.yml <name>
will create a service using my docker-compose.yml config.
But then I get the following error message:
failed to update service myapp-staging_postgres: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
So it seems I have to use a registry and push my image there for it to work. I understand this need for a multi-node architecture, but in my case I don't want to do that. (Images are heavy to transfer, I don't want my image to be public, and after all, the image is already here, so why should I move it across the internet?)
How can I set up my Docker service using a local image and the config written in docker-compose.yml?
I could probably manage using docker service create options, but that wouldn't use my docker-compose.yml file, so it would be neither DRY nor maintainable, which is important to me.
docker-compose is a great tool for developers; it is sad that we have to dive into DevOps tools to achieve a feature as common as rolling updates. This whole Swarm architecture seems too complicated for my needs at this stage.
You don't have to use a registry in your single-node setup. You can build your "app" image on your node from a local Dockerfile using this command (cd to the directory of your Dockerfile first):
docker build . -t my-app:latest
This will create a local Docker image on your node. This image is only visible to your single node, which is beneficial in your use case, but I wouldn't recommend this in a production setup.
You can now edit the compose file to be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
image: "my-app:latest"
depends_on:
- "postgres"
env_file:
- ".env"
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Now you can run your stack from this node; it will use your local app image, and you benefit from image-based service management (updates, rollbacks, etc.).
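For example, deployment and a later rolling update could look like this (the stack and service names follow the question's myapp-staging example, and Swarm mode must already be initialized with docker swarm init):
# Deploy (or re-deploy) the stack from the compose file:
docker stack deploy -c docker-compose.yml myapp-staging

# After rebuilding the image under a new tag, roll the service onto it:
docker service update --image my-app:v2 myapp-staging_app

# If something goes wrong, roll back to the previous service spec:
docker service rollback myapp-staging_app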
I do have a side note on your stack file, though. You are using the same env file for both services; please mind that Swarm will look for the ".env" file relative to (next to) the ".yml" file, so if this is not intentional, please revise the location of your env files.
Also, on a side note: this solution is only feasible on a single-node cluster. If you scale the cluster, you will have to use a registry, but registries don't have to be public; you can deploy a private registry on your cluster so that only your nodes can access it (or make it public). The accessibility of your registry is your choice.
Hope this will help with your issue.
Instead of an image, you can use a Dockerfile directly. Please check the example below.
version: "3.7"
services:
webapp:
build: ./dir
The error occurs because Compose is unable to find the image on the Docker public registry.
The above method should solve your issue.
Basically, you need to use Docker images in order to make rolling updates work in Docker Swarm. I would also like to clarify that you can host a private registry and use it instead of a public one.
Detailed Explanation:
When you trigger a rolling update, Docker Swarm checks whether the image used by the service has changed; if so, it schedules the service update according to the update criteria you have configured and works through it.
Let us say there is no change to the image; then what happens? Docker will simply not apply the rolling update. Technically you can specify the --force flag to force-update the service, but it will just redeploy the service.
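For example (the stack/service name is a placeholder):
docker service update --force <stackname>_app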
Hence, create a local registry, store the images in it, and use those image names in the docker-compose file used for the swarm. You can secure the registry using SSL, user credentials, or firewall restrictions, which is up to you. Refer to this for more details on deploying a Docker registry server.
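A minimal local-registry sketch (registry:2 is the official registry image; the port and names are examples):
# Run a registry container on the node itself:
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Tag the locally built image for that registry and push it:
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest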
Corrections in your compose file:
Since docker stack uses the image to create the service, you need to specify image: "<image name>" in the app service, as is done in the postgres service. As you have included a build instruction, an image name is mandatory, because docker-compose otherwise doesn't know what to name the built image. Reference.
A registry server is needed if you are going to deploy the application on multiple servers. Since you have mentioned it's a single-node deployment, just having the image pulled/built on the server is enough. But the private-registry approach is the recommended one.
My recommendation is: don't club all the services into a single docker-compose file. The reason is that when you deploy/destroy using that docker-compose file, all the services are taken down, which is a kind of tight coupling. Of course, I understand that all the other services depend on the DB; in such cases, make sure the DB service is brought up before the other services.
Instead of specifying the env file, make it part of the Dockerfile instructions: either copy the env file and source it in the entrypoint, or use ENV instructions to define the variables.
Also just an update:
A stack is just a way to group services in Swarm.
So your compose file should be:
version: "3.6"
services:
postgres:
env_file:
- ".env"
image: "postgres:11.0-alpine"
volumes:
- "/var/run/postgresql:/var/run/postgresql"
app:
build: "."
image: "image-name:tag" #the image built will be tagged as image-name:tag
working_dir: /app # note here I've removed .env file
depends_on:
- "postgres"
command: iex -S mix phx.server
volumes:
- ".:/app"
volumes:
postgres: {}
static:
driver_opts:
device: "tmpfs"
type: "tmpfs"
Dockerfile:
FROM baseimage:tag
COPY .env /somelocation
# your further instructions go here
RUN ... && \
    ... && \
    ... && chmod a+x /somelocation/.env
ENTRYPOINT source /somelocation/.env && ./file-to-run
Alternative Dockerfile:
FROM baseimage:tag
ENV a $a
ENV b $b
ENV c $c # here a, b, and c have to be exported in the shell before building the image
ENTRYPOINT ./file-to-run
And you may need to run
docker-compose build
docker-compose push # optional; needed to push the image into the registry, in case a registry is used
docker stack deploy -c docker-compose.yml <stackname>
NOTE:
Even though you can create the services as mentioned here by @M.Hassan, I've explained the ideal, recommended way.
Traefik's Getting Started guide is difficult to follow in any step-by-step fashion. It has the following problems:
Getting Started suggests running traefik as a command, but no commands can be run on the traefik image; you must instead use traefik:alpine, even to shell into the container with docker exec -it ....
Getting Started makes hardly any mention of a traefik.toml file.
#1 leaves a new reader confused as to whether traefik is intended to be run as a container that automatically updates per newly deployed containers, like jwilder's nginx proxy, or whether it's intended to be run on a Docker host.
Their original docker-compose.yml file looks like this:
version: '3'

services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker #--consul --consul.endpoint=127.0.0.1:8500 # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events

  whoami:
    image: containous/whoami # A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
Then you can run it with:
docker-compose up -d reverse-proxy
This is fine, and you can add new services here and specify new labels like the one above, e.g. traefik.frontend.rule=Host:whoami-other.docker.localhost.
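For example, a second service following the same pattern might look like this (a sketch, not taken from the official guide):
  whoami-other:
    image: containous/whoami
    labels:
      - "traefik.frontend.rule=Host:whoami-other.docker.localhost"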
You can test this with curl, specifying the Host header like so:
curl -H Host:whoami.docker.localhost http://127.0.0.1
Issue 1)
Line 5 must be changed to use the image traefik:alpine.
image: traefik:alpine # The official Traefik docker image
You can now actually docker exec into this container. You can only use sh (not /bin/bash) on the alpine image. We can now do the following:
docker exec -it traefik_reverse-proxy_1 sh
docker exec -it traefik_reverse-proxy_1 traefik --help
Issue 2)
From the default docker-compose.yml, there is no mention of a traefik.toml file. Even if I docker-compose up -d [some_new_service] and can reach those services, shelling into the container shows no traefik.toml file. It's nowhere in the container, despite the bottom of Basics saying that Traefik looks for it in default locations such as /etc/traefik/, $HOME/.traefik/, and . (the working directory). Is this referring to the host or the container? In the container I run find piped through grep and only see the binary:
/ # find / | grep traefik
/usr/local/bin/traefik
Is Traefik storing my services' configuration in memory?
The next logical page in the documentation (Basics) immediately starts detailing the configuration of traefik.toml, but I have no such file to experiment with.
I had to go back to Getting Started and read at the bottom of that page to find that a static traefik.toml file must be supplied via a volume when using their official image, running it like this:
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
So with this, I changed the volumes section in the original docker-compose.yml under the reverse-proxy service to something similar:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
  - $PWD/traefik.toml:/etc/traefik/traefik.toml
Even with this, I don't have a base traefik.toml file to use (there isn't even one in the examples folder of their GitHub). I had to go find one, and wasn't even sure how it would apply to the existing configuration of services I had running (i.e. whoami and/or whoami-other). Finally, running find / | grep traefik in the container shows the same traefik.toml file in /etc/traefik/traefik.toml, but it has no mention of the services (which I can still reach with curl -H Host:whoami.docker.localhost http://127.0.0.1 from my Docker host). Where is the configuration, then?
It is here
https://raw.githubusercontent.com/containous/traefik/v2.0/traefik.sample.toml
Somehow the Traefik documentation is confusing for a newbie (I am one).
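Note that the question itself uses the v1 label syntax. For v1, a minimal traefik.toml along these lines (keys taken from the v1 docs; treat it as a starting sketch) reproduces roughly what the --api --docker flags were doing:
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

[api]
# the dashboard listens on :8080 by default

[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true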
The situation is this:
I have three different docker-compose files for three different projects: a frontend, a middleware, and a backend. The FE is Ember; the middleware and backend are Spring (Boot), which should not matter here, though. The middleware uses an external_link to the backend, and the frontend (UI) uses an external_link to the middleware.
When I start with a clean Docker state (docker stop $(docker ps -aq), docker rm $(docker ps -aq)), everything works fine: I start the backend with docker-compose up, then the middleware, then the frontend. All the external links work (I'm also running Cypress e2e tests on this setup, which work fine).
Now, when I change something in the middleware, rebuild the image, stop the container (Ctrl+C) and restart it using docker-compose up, and then try to restart the frontend (Ctrl+C and then docker-compose up), Docker tells me:
Starting UI ... error
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: Encountered errors while bringing up the project.
Now what irritates me:
Where is the "32f2db8e96a1" coming from? The middleware container name is set to "middleware", which is also used in the external link of the UI, and it works fine for every clean startup (meaning, after docker rm-ing everything first). Also, docker ps shows me that a container for the middleware is actually running.
Unfortunately, I cannot post the compose files here, but I am willing to add any info needed.
Running on Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
I would like to restart any of these containers without a broken external link. How do I achieve this?
Since you guys are taking the time to help me, I took the time to clean up two of the compose files. This is the UI/frontend one:
version: '2.1'
services:
  ui:
    container_name: x-ui
    build:
      dockerfile: Dockerfile
      context: .
    image: "xxx/ui:latest"
    external_links:
      - "middleware:backend"
    ports:
      - "127.0.0.1:4200:80"
    network_mode: bridge
This is the middleware:
version: '2.1'
services:
  middleware:
    container_name: x-middleware
    image: xxx/middleware:latest
    build:
      dockerfile: src/main/docker/middleware/Dockerfile
      context: .
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9003:9000"
    external_links:
      - "api"
    network_mode: "bridge"
The "api" one is essentially the same as middleware.
Please note: I removed volumes and environment, and I renamed things, so the names will not match the error message perfectly. The naming scheme is the same, though: the service name is e.g. "middleware", while the container name uses a prefix, e.g. "x-middleware".
My problem is that I have a docker-compose.yml file and an haproxy.cfg file, and I want docker-compose to copy the haproxy.cfg file into the container. As per the post Docker composer copy files, I can use volumes to do it, but in my case I'm getting the error below. Can anybody help me achieve this?
Below is the code and the output.
docker-compose.yml
version: "3.3"
services:
###After all services are up, we are initializing the gateway
gateway:
container_name: gateway-haproxy
image: haproxy
volumes:
- .:/usr/local/etc/haproxy
ports:
- 80:80
network_mode: "host"
Folder Structure
Command output
root#ubuntu:/home/karunesh/Desktop/Stuff/SelfStudy/DevOps/docker# docker-compose up
Creating gateway-haproxy ...
Creating gateway-haproxy ... done
Attaching to gateway-haproxy
gateway-haproxy | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -f /usr/local/etc/haproxy/haproxy.cfg -Ds
gateway-haproxy | [ALERT] 219/163305 (6) : [/usr/local/sbin/haproxy.main()] No enabled listener found (check for 'bind' directives) ! Exiting.
gateway-haproxy | <5>haproxy-systemd-wrapper: exit, haproxy RC=1
gateway-haproxy exited with code 1
Try this:
volumes:
  - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
Instead of mounting the whole directory, this will only mount haproxy.cfg. The ro is an abbreviation for read-only, and its use guarantees the container won't modify the file after it is mounted.
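Applied to the compose file from the question, the service block would then look like this (a sketch):
version: "3.3"
services:
  gateway:
    container_name: gateway-haproxy
    image: haproxy
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    ports:
      - 80:80
    network_mode: "host"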
In order to bake additional files into the container image, you have to build on top of the existing haproxy image.
For example, your Dockerfile should look like this:
FROM haproxy:latest
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Then you can update your docker compose file accordingly.
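For example (assuming the Dockerfile above sits next to docker-compose.yml, so the build context is .):
version: "3.3"
services:
  gateway:
    container_name: gateway-haproxy
    build: .
    ports:
      - 80:80
    network_mode: "host"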
If you plan on using this for local development, just mount the file(s); see @MatTheWhale's answer.
See more at the official haproxy Docker page