Traefik config file location using Docker

Traefik's Getting Started guide is difficult to follow in any step-by-step fashion. It has the following problems:
1. Getting Started suggests running traefik as a command, but no commands can be run on the traefik image; you must instead use traefik:alpine, even just to shell into the container with docker exec -it ....
2. Getting Started makes hardly any mention of a traefik.toml file.
Problem #1 leaves a new reader confused as to whether Traefik is intended to run as a container that automatically updates itself for newly deployed containers, like jwilder's nginx proxy, or whether it is intended to run directly on a Docker host.
Their original docker-compose.yml file looks like this:
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker #--consul --consul.endpoint=127.0.0.1:8500 # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
  whoami:
    image: containous/whoami # A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
Then you can run it with:
docker-compose up -d reverse-proxy
This works, and you can add new services here with their own labels, e.g. traefik.frontend.rule=Host:whoami-other.docker.localhost (a sketch follows after the curl example below).
You can test this with curl, specifying the Host header like so:
curl -H Host:whoami.docker.localhost http://127.0.0.1
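For example, a hedged sketch of adding a second service next to whoami (the name whoami-other here is just an illustration, not part of the official example):
  whoami-other:
    image: containous/whoami # Another instance of the demo container
    labels:
      - "traefik.frontend.rule=Host:whoami-other.docker.localhost"
You could then test it the same way: curl -H Host:whoami-other.docker.localhost http://127.0.0.1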
Issue 1)
The image line of the reverse-proxy service must be changed to use traefik:alpine:
image: traefik:alpine # The official Traefik docker image
You can now actually docker exec into this container. Note that you can only use sh (not /bin/bash) on the Alpine image. We can now do the following:
docker exec -it traefik_reverse-proxy_1 sh
docker exec -it traefik_reverse-proxy_1 traefik --help
Issue 2)
From the default docker-compose.yml, there is no mention of a traefik.toml file. Even when I docker-compose up -d [some_new_service] and can reach those services, shelling into the container reveals no traefik.toml file. It's nowhere in the container, even though the bottom of the Basics page says Traefik looks for it in default locations such as /etc/traefik/, $HOME/.traefik/, and . (the working directory). Is this referring to the host or the container? In the container I run find piped through grep and only see the binary:
/ # find / | grep traefik
/usr/local/bin/traefik
Is Traefik storing my services' configuration in memory?
The next logical page in the documentation (Basics) immediately starts detailing the configuration of traefik.toml, but I have no such file to experiment with.
I had to go back to Getting Started and read at the bottom of that page to find that a static traefik.toml file must be supplied as a volume when using their official image, running it like this:
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
So with this, I changed the volumes section of the original docker-compose.yml under the reverse-proxy service to something similar:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
      - $PWD/traefik.toml:/etc/traefik/traefik.toml
Even with this, I don't have a base traefik.toml file to use (there isn't even one in the examples folder of their GitHub repository). I had to go find one, and I wasn't sure how it would apply to the existing configuration of the services I had running (i.e. whoami and/or whoami-other). Finally, running find / | grep traefik in the container now shows the traefik.toml file at /etc/traefik/traefik.toml, but it makes no mention of my services, which I can still reach with curl -H Host:whoami.docker.localhost http://127.0.0.1 from my Docker host. Where is the configuration then?

It is here:
https://raw.githubusercontent.com/containous/traefik/v2.0/traefik.sample.toml
The Traefik documentation is admittedly confusing for a newbie (I am one).
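For the 1.x-style setup in the question (the traefik.frontend.rule labels imply Traefik 1.x, while that sample targets v2.0), a minimal traefik.toml that mirrors the --api --docker flags might look like the following sketch (hedged; adjust the domain to your environment):
# HTTP entry point on port 80
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

# Equivalent of the --api flag: dashboard/API on port 8080
[api]

# Equivalent of the --docker flag: watch the Docker socket for containers
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "docker.localhost"
watch = true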

Related

Can docker write logs to an external directory?

We have written our own logger that writes about 10 different log files: an skl log, an HTTP request log, a separate log for each client, etc.
If you run the service through Docker, is it possible to tell it to write these logs not inside the container itself, but to an external folder?
From what I've read, I've only gathered so far that Docker logs output to the console, and into one shared file.
You can run a Docker container with volumes, where you can map your expected log directory, settings file, app resources, etc.
Here is a straightforward way to create such a container with volumes:
docker create --name YOUR_SERVICE_NAME -p 80:80 -v /APP_DIR_OUT_SIDE_OF_CONTAINER/settings/appsettings.json:/app/appsettings.json -v /APP_DIR_OUT_SIDE_OF_CONTAINER/Logs:/app/Logs:z YOUR_DOCKER_IMAGE_REPO_URL:IMAGE_TAG
Below is the docker-compose sample that should be running a container with volumes.
version: '3.4'
services:
  YOUR_SERVICE_NAME:
    image: IMAGE_URL
    container_name: CONTAINER_NAME
    ports:
      - "80:80"
    volumes:
      - /APP_DIR_OUT_SIDE_OF_CONTAINER/config/appsettings.json:/app/appsettings.json
      - /APP_DIR_OUT_SIDE_OF_CONTAINER/logs:/app/Logs:z
    restart: always
Also, you can find all the possible ways to run a Docker container with volumes in Use volumes.
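Once the container is running with those mounts, the log files written by the application should appear directly on the host, for example (the log file name here is hypothetical):
ls -l /APP_DIR_OUT_SIDE_OF_CONTAINER/logs
tail -f /APP_DIR_OUT_SIDE_OF_CONTAINER/logs/requests.log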

Docker-compose jupyterhub issue

I have a docker-compose setup with the following layout:
./
+-- .env
+-- docker-compose.yml
+-- jupyterhub/
| +-- Dockerfile
| +-- jupyter-config.py
+-- jupyterlab/
| +-- Dockerfile
+-- reverse-proxy/
+-- traefik.toml
I followed the recipe from opendreamkit.org and managed to get the system up and running. However, when I run docker-compose down and then up again, I get the following error:
jupyterhub_hub | [E 2020-03-31 08:28:38.108 JupyterHub user:477] Unhandled error starting tester1's server: The 'ip' trait of a Server instance must be a unicode string, but a value of None was specified.
I suspect it has something to do with the following message I get when I build the system:
WARNING: The DOCKER_NETWORK_NAME variable is not set. Defaulting to a blank string.
But I was wondering if anyone could provide me with a workaround, or an explanation of why this error occurs?
Thanks in advance for any help in the matter (during these Corona times).
Edit: my docker-compose.yml file:
version: '3'
services:
  # Configuration for Hub+Proxy
  jupyterhub:
    build: jupyterhub                # Build the container from this folder.
    container_name: jupyterhub_hub   # The service will use this container name.
    volumes:                         # Give access to Docker socket.
      - /var/run/docker.sock:/var/run/docker.sock
    environment:                     # Env variables passed to the Hub process.
      DOCKER_JUPYTER_IMAGE: jupyterlab_img
      DOCKER_NETWORK_NAME: ${COMPOSE_PROJECT_NAME}_default
      HUB_IP: jupyterhub_hub
    labels:                          # Traefik configuration.
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:x.x.x.x"
  # Configuration for reverse proxy
  reverse-proxy:
    image: traefik
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - ./reverse-proxy/traefik.toml:/etc/traefik/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/certs:/etc/certs
  # Configuration for the single-user servers
  jupyterlab:
    build: jupyterlab
    image: jupyterlab_img
    command: echo
volumes:
  jupyterhub_data:
networks:
  jupyter:
  # internal:
Disclaimer: take this answer with a huge pinch of salt as I am just learning how to run JupyterHub and JupyterLab in Docker containers. And in all fairness to all JupyterHub and Docker and Traefik experts, it is not easy for a beginner to figure out which settings to use.
I had the same problem as the OP, but his solution (in the comment of 2020-04-06) didn't work for me. What did work was the following entry in jupyterhub_config.py:
c.DockerSpawner.remove = True
This removes the JupyterLab containers after their users log out of JupyterHub. If they are left to "linger", then the "Unhandled error starting tester1's server: The 'ip' trait of a Server instance must be a unicode string..." will occur if the same user logs in again. Don't ask me why. Inspiration came from this SO question.
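For context, a hedged sketch of how that setting can sit alongside the usual DockerSpawner options in jupyterhub_config.py (the image and network names are assumptions taken from the compose file above):
# jupyterhub_config.py (sketch)
import os

c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = os.environ.get("DOCKER_JUPYTER_IMAGE", "jupyterlab_img")
c.DockerSpawner.network_name = os.environ.get("DOCKER_NETWORK_NAME", "jupyter_default")
c.DockerSpawner.remove = True  # remove single-user containers on stop so stale network state is not reused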
PS: I used Docker version 19.03.12 in an Ubuntu 20.04.1 host, Traefik Version 2.2, JupyterHub and -Lab Version 1.2.2.
The docker-compose down command kills the current network. After docker-compose up -d, a new network is created with the same name but a different NetworkID. You can check the NetworkID of your containers with:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' CONTAINER_ID
I suppose this is the main reason for the error:
The 'ip' trait of a Server instance must be a unicode string, but a value of None was specified
To restart a specific user's container after docker-compose down, you can disconnect it from the old network and connect it to the new one:
docker network disconnect NETWORK_NAME CONTAINER_ID
docker network connect NETWORK_NAME CONTAINER_ID
Finally, you can start the container again by logging into the JupyterHub web interface as the user whose container hit the error.
Indeed, in this situation you can use Laryx Decidua's answer to remove containers after docker-compose down. Another option is to create an external network, as in this accepted answer (a short sketch follows at the end of this answer): https://stackoverflow.com/a/51476836/12247535
More about this behavior / problem of docker-compose is discussed in the thread https://github.com/docker/compose/issues/5745
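A hedged sketch of that external-network option: create the network once on the host (docker network create jupyter_network) and mark it external in the compose file, so docker-compose down leaves it alone (the name jupyter_network is just an example):
version: '3'
services:
  jupyterhub:
    build: jupyterhub
    networks:
      - jupyter_network
networks:
  jupyter_network:
    external: true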

Networking in Docker Compose file

I am writing a Docker Compose file for my web app. If I use links to connect services with each other, do I also need to include ports? And is depends_on an alternative to links? What is the best way to connect services to one another in a Compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this forces server to start before client starts, but it is not a guarantee that server is up, running, and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.
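For reference, manually declaring a network would look like the sketch below; it behaves the same as the default network Compose already creates for you, which is why it is usually unnecessary:
version: '3'
services:
  server:
    image: nginx
    networks: [app]
  client:
    image: busybox
    command: wget -O- http://server/
    networks: [app]
networks:
  app: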

How to configure a dockerfile and docker-compose for Jenkins

I'm absolutely new to Docker and Jenkins as well. I have a question about the configuration of the Dockerfile and the docker-compose.yml file. I tried to use the simplest configuration to be able to set up these files correctly. Building and pushing work correctly, but the Jenkins application is not running on my localhost (127.0.0.1).
If I understand it correctly, it should now be listening by default on port 50000 (ARG agent_port=50000 in the official Jenkins Dockerfile). I tried 50000, 8080, and 80 as well; nothing is working. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is: what's the best way to handle crashes of the container? Let's say the container crashes and I want to recreate a new container with the same settings. Is the best way just to create a new shell file like "crash.sh" that holds the information needed to create a new container with the same settings? As mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Thank you for any advice.
docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - 8080:8080
      - 50000:50000
    # uncomment for docker in docker
    privileged: true
    volumes:
      # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
      - /var/wisestep/data/jenkins_home:/var/jenkins_home
      # mount docker sock and binary for docker in docker (only works on linux)
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
Replace the ports 8080 and 50000 as needed on your host.
To recreate a new container with the same settings:
The mounted volume jenkins_home is the place where all your jobs, settings, etc. are stored.
Take a backup of the mounted jenkins_home volume whenever you create a job, or on whatever schedule suits you.
Whenever there is a crash, run Jenkins with the same docker-compose file and replace the jenkins_home folder with the backup (a shell sketch follows after the commands below).
Rerun/restart Jenkins again.
List the container
docker ps -a
Restart container
docker restart <Required_Container_ID_To_Restart>
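A hedged shell sketch of that backup-and-restore flow (the paths follow the compose file above; the archive name is only an example):
# back up the bind-mounted jenkins_home on the host
tar czf jenkins_home_backup.tar.gz -C /var/wisestep/data jenkins_home
# after a crash: restore the folder and bring the stack back up
tar xzf jenkins_home_backup.tar.gz -C /var/wisestep/data
docker-compose up -d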
I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
  jenkins-home:
services:
  jenkins:
    image: jenkins-docker
    build: .
    restart: unless-stopped
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    volumes:
      - jenkins-home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
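For example, the initial administrator token can usually be read either from the logs or from the standard secrets path inside jenkins_home (this path is the jenkins/jenkins image default):
docker logs jenkins-docker
docker exec jenkins-docker cat /var/jenkins_home/secrets/initialAdminPassword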
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker
Hyper-V with Docker for Windows:
In that case, you must be sure to port-forward any published port (like 5000).
Open the Hyper-V Manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:5000 reaches your VM:5000.
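If you prefer the command line over the Hyper-V GUI, a hedged alternative is Windows' netsh port proxying (replace VM_IP with your VM's address; this is a different mechanism than the GUI rule described above):
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=5000 connectaddress=VM_IP connectport=5000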

Development workflow for server and client using Docker Compose?

I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is restarting the applications when there are changes in the code.
Personally, I launch my applications in development containers using forever:
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude changes to some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
Edit: a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml for the "server" app could contain this kind of information (I added a few different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine.
With this solution, each docker-compose.yml file could be commited in the repository of the related app.
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it starts (at runtime). Apologies if you're already doing this and I'm pointing out the obvious, but it's not clear from your docker-compose.yml definition.
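A hedged sketch of what that could look like in your development compose file (the ./server and ./client paths and the npm start command are assumptions, not taken from your setup):
server:
  image: node:0.10
  volumes:
    - ./server:/src
  working_dir: /src
  command: npm start
client:
  image: node:0.10
  volumes:
    - ./client:/src
  working_dir: /src
  links:
    - server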
To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example web_server and web_client (where web is the directory containing your docker-compose.yml file, or the name of the project).
Once you have the name of the container you want to connect to, you can run this command to get a bash shell in exactly the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.

Resources