How to configure Jaeger with Elasticsearch? - docker

I have tried executing this Docker command to set up the Jaeger agent and Jaeger collector with Elasticsearch.
sudo docker run \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-e SPAN_STORAGE_TYPE=elasticsearch \
--name=jaeger \
jaegertracing/all-in-one:latest
but this command gives the error below. How do I configure Jaeger with Elasticsearch?
"msg":"Failed to init storage factory","error":"health check timeout: no Elasticsearch node available","errorVerbose":"no Elasticsearch node available\

After searching for a solution for some time, I found a docker-compose.yml file with the Jaeger query, agent, collector, and Elasticsearch configurations.
docker-compose.yml
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
networks:
- elastic-jaeger
ports:
- "127.0.0.1:9200:9200"
- "127.0.0.1:9300:9300"
restart: on-failure
environment:
- cluster.name=jaeger-cluster
- discovery.type=single-node
- http.host=0.0.0.0
- transport.host=127.0.0.1
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- xpack.security.enabled=false
volumes:
- esdata:/usr/share/elasticsearch/data
jaeger-collector:
image: jaegertracing/jaeger-collector
ports:
- "14269:14269"
- "14268:14268"
- "14267:14267"
- "9411:9411"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
command: [
"--es.server-urls=http://elasticsearch:9200",
"--es.num-shards=1",
"--es.num-replicas=0",
"--log-level=error"
]
depends_on:
- elasticsearch
jaeger-agent:
image: jaegertracing/jaeger-agent
hostname: jaeger-agent
command: ["--collector.host-port=jaeger-collector:14267"]
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
depends_on:
- jaeger-collector
jaeger-query:
image: jaegertracing/jaeger-query
environment:
- SPAN_STORAGE_TYPE=elasticsearch
- no_proxy=localhost
ports:
- "16686:16686"
- "16687:16687"
networks:
- elastic-jaeger
restart: on-failure
command: [
"--es.server-urls=http://elasticsearch:9200",
"--span-storage.type=elasticsearch",
"--log-level=debug"
]
depends_on:
- jaeger-agent
volumes:
esdata:
driver: local
networks:
elastic-jaeger:
driver: bridge
This docker-compose.yml file brings up Elasticsearch and the Jaeger collector, query, and agent.
Install Docker and Docker Compose first:
https://docs.docker.com/compose/install/#install-compose
Then execute these commands in order:
1. sudo docker-compose up -d elasticsearch
2. sudo docker-compose up -d
3. sudo docker ps -a
Make sure all the Docker containers - the Jaeger agent, collector, query, and Elasticsearch - are running; start any stopped container with:
sudo docker start container-id
Then access the Jaeger UI at http://localhost:16686/
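As an optional sanity check (assuming Elasticsearch is published on 127.0.0.1:9200 as in the compose file above), you can confirm that Jaeger is writing to Elasticsearch once some traces have been reported:
# Jaeger creates daily jaeger-span-* and jaeger-service-* indices
curl "http://127.0.0.1:9200/_cat/indices?v"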

As I mentioned in my comment on the OP's first answer above, I was getting an error when running the docker-compose exactly as given:
Error: unknown flag: --collector.host-port
I think this CLI flag has been deprecated by the Jaeger folks since that answer was written. So I poked around in the jaeger-agent documentation a bit:
https://www.jaegertracing.io/docs/1.20/deployment/#discovery-system-integration
https://www.jaegertracing.io/docs/1.20/cli/#jaeger-agent
And I got this to work with a couple of small modifications:
1. I added the port mapping "14250:14250" to the jaeger-collector ports.
2. I updated the jaeger-agent command to: command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
3. Finally, I updated the Elasticsearch version in the image tag to the latest version available at this time (though I doubt this was required).
The updated docker-compose.yaml:
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
networks:
- elastic-jaeger
ports:
- "127.0.0.1:9200:9200"
- "127.0.0.1:9300:9300"
restart: on-failure
environment:
- cluster.name=jaeger-cluster
- discovery.type=single-node
- http.host=0.0.0.0
- transport.host=127.0.0.1
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- xpack.security.enabled=false
volumes:
- esdata:/usr/share/elasticsearch/data
jaeger-collector:
image: jaegertracing/jaeger-collector
ports:
- "14269:14269"
- "14268:14268"
- "14267:14267"
- "14250:14250"
- "9411:9411"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
command: [
"--es.server-urls=http://elasticsearch:9200",
"--es.num-shards=1",
"--es.num-replicas=0",
"--log-level=error"
]
depends_on:
- elasticsearch
jaeger-agent:
image: jaegertracing/jaeger-agent
hostname: jaeger-agent
command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
networks:
- elastic-jaeger
restart: on-failure
environment:
- SPAN_STORAGE_TYPE=elasticsearch
depends_on:
- jaeger-collector
jaeger-query:
image: jaegertracing/jaeger-query
environment:
- SPAN_STORAGE_TYPE=elasticsearch
- no_proxy=localhost
ports:
- "16686:16686"
- "16687:16687"
networks:
- elastic-jaeger
restart: on-failure
command: [
"--es.server-urls=http://elasticsearch:9200",
"--span-storage.type=elasticsearch",
"--log-level=debug"
]
depends_on:
- jaeger-agent
volumes:
esdata:
driver: local
networks:
elastic-jaeger:
driver: bridge
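To confirm the collector actually came up and can reach Elasticsearch after this change, something like the following should work (14269 is the collector's admin/health port published in the compose file above; treat the exact endpoint behaviour as an assumption and check the logs either way):
docker-compose up -d
curl http://localhost:14269/                      # collector health check
docker-compose logs jaeger-collector | tail -n 20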

If you would like to deploy Jaeger with Elasticsearch and Kibana to quickly validate and check the stack, e.g. in kind or Minikube, the following snippets may help you.
#######################
## Add jaegertracing helm repo
#######################
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
#######################
## Create a target namespace
#######################
kubectl create namespace observability
#######################
## Check and use the jaegertracing helm chart
#######################
helm search repo jaegertracing
helm install -n observability jaeger-operator jaegertracing/jaeger-operator
#######################
## Install the Elasticsearch (ECK) operator via its all-in-one manifest
#######################
kubectl apply -f https://download.elastic.co/downloads/eck/1.1.2/all-in-one.yaml
#######################
## Create an elasticsearch deployment
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.7.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
EOF
PASSWORD=$(kubectl get secret -n observability quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
kubectl create secret -n observability generic jaeger-secret --from-literal=ES_PASSWORD=${PASSWORD} --from-literal=ES_USERNAME=elastic
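Before installing the Jaeger instance it is worth waiting for the Elasticsearch cluster to report a green health status; a quick check (the pod label is assumed from ECK's conventions):
kubectl get elasticsearch -n observability
kubectl get pods -n observability -l elasticsearch.k8s.elastic.co/cluster-name=quickstart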
#######################
## Kibana to visualize the trace data
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.7.0
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
kubectl port-forward -n observability service/quickstart-kb-http 5601
## To get the password
kubectl get secret -n observability quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
Login at:
https://localhost:5601
username: elastic
password: <the output of the command above>
#######################
## Deploy a jaeger tracing application
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod
spec:
  agent:
    strategy: DaemonSet
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://quickstart-es-http:9200
        tls:
          ca: /es/certificates/ca.crt
        num-shards: 1
        num-replicas: 0
    secretName: jaeger-secret
  volumeMounts:
    - name: certificates
      mountPath: /es/certificates/
      readOnly: true
  volumes:
    - name: certificates
      secret:
        secretName: quickstart-es-http-certs-public
EOF
## To visualize it
kubectl --namespace observability port-forward simple-prod-query-<POD ID> 16686:16686
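If you would rather not look up the pod ID, port-forwarding the query service should work as well, since the operator typically creates a <name>-query service for the production strategy:
kubectl --namespace observability port-forward service/simple-prod-query 16686:16686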
#######################
## To test the setup
## Of course it will also work if you deploy it to another namespace; the only things that matter are the collector URL and port
#######################
cat <<EOF | kubectl apply -n observability -f -
apiVersion: v1
kind: List
items:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jaeger-k8s-example
      labels:
        app: jaeger-k8s-example
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jaeger-k8s-example
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: jaeger-k8s-example
        spec:
          containers:
            - name: jaeger-k8s-example
              env:
                - name: JAEGER_COLLECTOR_URL
                  value: "simple-prod-collector.observability.svc.cluster.local"
                - name: JAEGER_COLLECTOR_PORT
                  value: "14268"
              image: norbertfenk/jaeger-k8s-example:latest
              imagePullPolicy: IfNotPresent
EOF
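To verify that the example deployment came up and is talking to the collector, something along these lines can help:
kubectl get pods -n observability -l app=jaeger-k8s-example
kubectl logs -n observability deployment/jaeger-k8s-example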

If Jaeger needs to be set up in a Kubernetes cluster as a Helm chart, one can use this: https://github.com/jaegertracing/helm-charts/tree/master/charts/jaeger
It can deploy either Elasticsearch or Cassandra as a storage backend, which is just a matter of passing the right value to the chart:
storage:
  type: elasticsearch
This section of the chart README shows the helm command as an example:
https://github.com/jaegertracing/helm-charts/tree/master/charts/jaeger#installing-the-chart-using-a-new-elasticsearch-cluster
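For example, an install along these lines should provision an Elasticsearch data store and point Jaeger at it (value names taken from the chart README; double-check them against the chart version you use):
helm install jaeger jaegertracing/jaeger \
  --set provisionDataStore.cassandra=false \
  --set provisionDataStore.elasticsearch=true \
  --set storage.type=elasticsearch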

For people who are using OpenTelemetry, Jaeger, and Elasticsearch, here is one way to do it.
Note that the images being used are jaegertracing/jaeger-opentelemetry-collector and jaegertracing/jaeger-opentelemetry-agent.
version: '3.8'
services:
  collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/conf/opentelemetry-collector.config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./opentelemetry-collector.config.yaml:/conf/opentelemetry-collector.config.yaml
    ports:
      - "9464:9464"
      - "55680:55680"
      - "55681:55681"
    depends_on:
      - jaeger-collector
  jaeger-collector:
    image: jaegertracing/jaeger-opentelemetry-collector
    command: ["--es.num-shards=1", "--es.num-replicas=0", "--es.server-urls=http://elasticsearch:9200", "--collector.zipkin.host-port=:9411"]
    ports:
      - "14250"
      - "14268"
      - "9411"
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      - LOG_LEVEL=debug
    restart: on-failure
    depends_on:
      - elasticsearch
  jaeger-agent:
    image: jaegertracing/jaeger-opentelemetry-agent
    command: ["--config=/config/otel-agent-config.yml", "--reporter.grpc.host-port=jaeger-collector:14250"]
    volumes:
      - ./:/config/:ro
    ports:
      - "6831/udp"
      - "6832/udp"
      - "5778"
    restart: on-failure
    depends_on:
      - jaeger-collector
  jaeger-query:
    image: jaegertracing/jaeger-query
    command: ["--es.num-shards=1", "--es.num-replicas=0", "--es.server-urls=http://elasticsearch:9200"]
    ports:
      - "16686:16686"
      - "16687"
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      - LOG_LEVEL=debug
    restart: on-failure
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.9.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200/tcp"
Then you just need to run:
docker-compose up -d
Reference: https://github.com/jaegertracing/jaeger/blob/master/crossdock/jaeger-opentelemetry-docker-compose.yml

Related

Trouble converting docker-compose containing volume to Kubernetes manifest

I am learning Kubernetes and I am starting to convert an existing docker-compose.yml to Kubernetes manifests one component at a time. Here is the component I am currently trying to convert:
version: '3'
services:
  mongodb:
    container_name: mongodb
    restart: always
    hostname: mongo-db
    image: mongo:latest
    networks:
      - mynetwork
    volumes:
      - c:/MongoDb:c:/data/db
    ports:
      - 27017:27017
networks:
  mynetwork:
    name: mynetwork
I am able to log into the Mongo instance when the container is running in Docker, but I cannot get this working in Kubernetes. Here is the Kubernetes manifest I have tried so far:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:latest
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-data
              mountPath: /data/db
          ports:
            - containerPort: 27017
              hostPort: 27017
      volumes:
        - name: mongodb-data
          hostPath:
            path: "C:\\MongoDb"
With this manifest I see the error Error: Error response from daemon: invalid mode: /data/db when I do a kubectl describe pods.
I understand the mapping of volumes from Docker to Kubernetes isn't 1-to-1, so is this reasonable to do in the Kubernetes space?
Thank you and apologies if this feels like a silly question.

Swarm: Traefik returning 404 on compose services

I have two devices running Docker; an Intel NUC and a Raspberry Pi. My NUC is used as a mediaplayer/mediaserver. This also is the manager node. The Pi is being used as Home Assistant and MQTT machine and is set as worker node. I wanted to add them to a swarm so I could use Traefik for reverse proxy and HTTPS on both machines.
NUC:
1 docker-compose file for Traefik, Consul and Portainer.
1 docker-compose file for my media apps (Sabnzbd, Transmission-vpn, Sonarr, Radarr etc).
Pi:
1 docker-compose file for Home Assistant, MQTT etc.
Traefik and Portainer are up and running. I got them set up with `docker stack deploy`. Next I tried to set up my media apps, but they don't need to be connected with the Pi, so I tried `docker compose`. Portainer shows the apps are running, but when I go to their subdomain, Traefik returns 404 page not found. This makes me conclude that apps running outside the swarm but connected to Traefik don't work. They also don't show up in the Traefik dashboard.
docker-compose.traefik.yml - 'docker stack deploy'
version: '3.7'
networks:
  traefik_proxy:
    external: true
  agent-network:
    attachable: true
volumes:
  consul-data-leader:
  consul-data-replica:
  portainer-data:
services:
  consul-leader:
    image: consul
    command: agent -server -client=0.0.0.0 -bootstrap -ui
    volumes:
      - consul-data-leader:/consul/data
    environment:
      - CONSUL_BIND_INTERFACE=eth0
      - 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}'
    networks:
      - traefik_proxy
    deploy:
      labels:
        - traefik.frontend.rule=Host:consul.${DOMAINNAME?Variable DOMAINNAME not set}
        - traefik.enable=true
        - traefik.port=8500
        - traefik.tags=${TRAEFIK_PUBLIC_TAG:-traefik-public}
        - traefik.docker.network=traefik_proxy
        - traefik.frontend.entryPoints=http,https
        - traefik.frontend.redirect.entryPoint=https
        - traefik.frontend.auth.forward.address=http://oauth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true
  consul-replica:
    image: consul
    command: agent -server -client=0.0.0.0 -retry-join="consul-leader"
    volumes:
      - consul-data-replica:/consul/data
    environment:
      - CONSUL_BIND_INTERFACE=eth0
      - 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}'
    networks:
      - traefik_proxy
    deploy:
      replicas: ${CONSUL_REPLICAS:-3}
      placement:
        preferences:
          - spread: node.id
  traefik:
    image: traefik:v1.7
    hostname: traefik
    restart: always
    networks:
      - traefik_proxy
    ports:
      - target: 80
        published: 80
      - target: 443
        published: 443
      - target: 8080
        published: 8145
    deploy:
      replicas: ${TRAEFIK_REPLICAS:-3}
      placement:
        constraints:
          - node.role == manager
        preferences:
          - spread: node.id
      labels:
        traefik.enable: 'true'
        traefik.backend: traefik
        traefik.protocol: http
        traefik.port: 8080
        traefik.tags: traefik-public
        traefik.frontend.rule: Host:traefik.${DOMAINNAME}
        traefik.frontend.headers.SSLHost: traefik.${DOMAINNAME}
        traefik.docker.network: traefik_proxy
        traefik.frontend.passHostHeader: 'true'
        traefik.frontend.headers.SSLForceHost: 'true'
        traefik.frontend.headers.SSLRedirect: 'true'
        traefik.frontend.headers.browserXSSFilter: 'true'
        traefik.frontend.headers.contentTypeNosniff: 'true'
        traefik.frontend.headers.forceSTSHeader: 'true'
        traefik.frontend.headers.STSSeconds: 315360000
        traefik.frontend.headers.STSIncludeSubdomains: 'true'
        traefik.frontend.headers.STSPreload: 'true'
        traefik.frontend.headers.customResponseHeaders: X-Robots-Tag:noindex,nofollow,nosnippet,noarchive,notranslate,noimageindex
        traefik.frontend.headers.customFrameOptionsValue: 'allow-from https:${DOMAINNAME}'
        traefik.frontend.auth.forward.address: 'http://oauth:4181'
        traefik.frontend.auth.forward.authResponseHeaders: X-Forwarded-User
        traefik.frontend.auth.forward.trustForwardHeader: 'true'
    domainname: ${DOMAINNAME}
    dns:
      - 1.1.1.1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ${USERDIR}/docker/traefik:/etc/traefik
      - ${USERDIR}/docker/shared:/shared
    environment:
      CF_API_EMAIL: ${CLOUDFLARE_EMAIL}
      CF_API_KEY: ${CLOUDFLARE_API_KEY}
    command:
      #- "storeconfig" #This is the push to consul, secondary traefik must be created and interfaced to this traefik. Remove this traefik's open ports, it shuts down once consul is messaged.
      - '--logLevel=INFO'
      - '--InsecureSkipVerify=true' #for unifi controller to not throw internal server error message
      - '--api'
      - '--api.entrypoint=apiport'
      - '--defaultentrypoints=http,https'
      - '--entrypoints=Name:http Address::80 Redirect.EntryPoint:https'
      - '--entrypoints=Name:https Address::443 TLS TLS.SniStrict:true TLS.MinVersion:VersionTLS12 CipherSuites:TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256'
      - '--entrypoints=Name:apiport Address::8080'
      - '--file'
      - '--file.directory=/etc/traefik/rules/'
      - '--file.watch=true'
      - '--acme'
      - '--acme.storage=/etc/traefik/acme/acme.json'
      - '--acme.entryPoint=https'
      # not yet ready?
      # - "--acme.TLS-ALPN-01=true"
      - '--acme.dnsChallenge=true'
      - '--acme.dnsChallenge.provider=cloudflare'
      - '--acme.dnsChallenge.delayBeforeCheck=60'
      - '--acme.dnsChallenge.resolvers=1.1.1.1,1.0.0.1'
      - '--acme.onHostRule=true'
      - '--acme.email=admin#${DOMAINNAME}'
      - '--acme.acmeLogging=true'
      - '--acme.domains=${DOMAINNAME},*.${DOMAINNAME},'
      - '--acme.KeyType=RSA4096'
      #Let's Encrypt's staging server,
      #caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      - '--docker'
      - '--docker.swarmMode'
      - '--docker.domain=${DOMAINNAME}'
      - '--docker.watch'
      - '--docker.exposedbydefault=false'
      #- "--consul"
      #- "--consul.endpoint=consul:8500"
      #- "--consul.prefix=traefik"
      - '--retry'
      - 'resolvers=[192,168.1.1:53,1.1.1.1:53,]'
    depends_on:
      - consul-leader
docker-compose.media.yml - 'docker compose'
sabnzbd:
  image: linuxserver/sabnzbd
  container_name: sabnzbd
  restart: always
  network_mode: service:transmission-vpn
  # depends_on:
  #   - transmission-vpn
  # ports:
  #   - '${SABNZBD_PORT}:8080'
  volumes:
    - ${USERDIR}/docker/sabnzbd:/config
    - /media/Data/Downloads:/Downloads
    # - ${USERDIR}/Downloads/incomplete:/incomplete-downloads
  environment:
    PUID: ${PUID}
    PGID: ${PGID}
    TZ: ${TZ}
    UMASK_SET: 002
  deploy:
    replicas: 1
    labels:
      traefik.enable: 'true'
      traefik.backend: sabnzbd
      traefik.protocol: http
      traefik.port: 8080
      traefik.tags: traefik_proxy
      traefik.frontend.rule: Host:sabnzbd.${DOMAINNAME}
      # traefik.frontend.rule: Host:${DOMAINNAME}; PathPrefix: /sabnzbd
      traefik.frontend.headers.SSLHost: sabnzbd.${DOMAINNAME}
      traefik.docker.network: traefik_proxy
      traefik.frontend.passHostHeader: 'true'
      traefik.frontend.headers.SSLForceHost: 'true'
      traefik.frontend.headers.SSLRedirect: 'true'
      traefik.frontend.headers.browserXSSFilter: 'true'
      traefik.frontend.headers.contentTypeNosniff: 'true'
      traefik.frontend.headers.forceSTSHeader: 'true'
      traefik.frontend.headers.STSSeconds: 315360000
      traefik.frontend.headers.STSIncludeSubdomains: 'true'
      traefik.frontend.headers.STSPreload: 'true'
      traefik.frontend.headers.customResponseHeaders: X-Robots-Tag:noindex,nofollow,nosnippet,noarchive,notranslate,noimageindex
      # traefik.frontend.headers.frameDeny: "true" #customFrameOptionsValue overrides this
      traefik.frontend.headers.customFrameOptionsValue: 'allow-from https:${DOMAINNAME}'
      traefik.frontend.auth.forward.address: 'http://oauth:4181'
      traefik.frontend.auth.forward.authResponseHeaders: X-Forwarded-User
      traefik.frontend.auth.forward.trustForwardHeader: 'true'
I already tried multiple things, like removing the deploy section and just using labels, but that didn't help at all. My Traefik logs also don't show anything that might indicate what's going wrong.
Are you using a .env file to set the environment variables? The .env feature is not currently supported by docker stack. You must source the .env manually by running export $(cat .env) before running docker stack deploy.
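For example (assuming a simple KEY=VALUE .env file with no quoting to worry about; the stack name here is just a placeholder):
# load the variables into the current shell, skipping comment lines
export $(grep -v '^#' .env | xargs)
docker stack deploy -c docker-compose.traefik.yml traefik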

Traefik v2 listen on port

I am using Traefik v2 with Docker Swarm. I want to achieve the following routing:
mydomain.com:9000 -> Traefik dashboard
mydomain.com:5000 -> my application
docker-compose-traefik.yml
version: "3.7"
services:
traefik:
image: "traefik:v2.0"
networks:
- traefik-net
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker.swarmMode=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:5000"
ports:
- "80:80"
- "9000:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
traefik-net:
external:
name: traefik-net
docker-compose-whoami.yml
version: "3.7"
services:
whoami:
image: "jwilder/whoami"
networks:
- traefik-net
deploy:
replicas: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`mydomain.com`)"
- "traefik.http.routers.whoami.entrypoints=web"
- "traefik.http.services.whoami.loadbalancer.server.port=8000"
networks:
traefik-net:
external:
name: traefik-net
jwilder/whoami exposes port 8000 in its Dockerfile. I want to redirect port 5000 (my entrypoint defined in docker-compose-traefik.yml) to port 8000 in the container.
I created the network traefik-net with: docker network create -d bridge traefik-net.
I deployed both stacks with:
docker stack deploy -c docker-compose-traefik.yml Traefik
docker stack deploy -c docker-compose-whoami.yml Whoami
When I visit mydomain.com:9000 it opens the Traefik dashboard as it should. When I visit mydomain.com:5000 it says "This site can’t be reached".
My question is: how do I redirect a request to port 5000 (mydomain.com:5000) to port 8000 inside the whoami container?
For anyone else having similar problems, I found a solution. I needed to change the ports section in docker-compose-traefik.yml from
ports:
  - "80:80"
  - "9000:8080"
to
ports:
  - "80:80"
  - "9000:8080"
  - "5000:5000"   # <-- add this
Hope this helps someone. :)
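After redeploying the Traefik stack with the extra port mapping, a quick way to confirm the entrypoint is reachable (assuming mydomain.com resolves to a swarm node):
curl http://mydomain.com:5000/
# the whoami router should also appear under HTTP routers in the dashboard at
# http://mydomain.com:9000/dashboard/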

Docker Compose file to Docker Stack

I have found an awesome repo that uses docker-compose; however, I would like to use it with Docker Cloud stacks (I am a newbie).
What would I have to do to be able to swap it over?
docker-compose:
version: "2"
services:
mariadb:
image: wodby/mariadb:10.2-3.0.2
php:
# 1. Images with vanilla WordPress – wodby/wordpress:[WP_VERSION]-[PHP_VERSION]-[STABILITY_TAG].
image: wodby/wordpress:4-7.2-3.3.1
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
PHP_FPM_CLEAR_ENV: "no"
volumes:
- codebase:/var/www/html
nginx:
image: wodby/wordpress-nginx:4-1.13-3.0.2
environment:
NGINX_STATIC_CONTENT_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
volumes:
- codebase:/var/www/html
depends_on:
- php
labels:
- 'traefik.backend=nginx'
- 'traefik.port=80'
- 'traefik.frontend.rule=Host:wp.docker.localhost'
varnish:
image: wodby/wordpress-varnish:4.1-2.3.1
depends_on:
- nginx
environment:
VARNISH_SECRET: secret
VARNISH_BACKEND_HOST: nginx
VARNISH_BACKEND_PORT: 80
labels:
- 'traefik.backend=varnish'
- 'traefik.port=6081'
- 'traefik.frontend.rule=Host:varnish.wp.docker.localhost'
redis:
image: wodby/redis:4.0-2.1.4
blackfire:
image: blackfire/blackfire
environment:
BLACKFIRE_SERVER_ID:
BLACKFIRE_SERVER_TOKEN:
mailhog:
image: mailhog/mailhog
labels:
- 'traefik.backend=mailhog'
- 'traefik.port=8025'
- 'traefik.frontend.rule=Host:mailhog.wp.docker.localhost'
portainer:
image: portainer/portainer
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- 'traefik.backend=portainer'
- 'traefik.port=9000'
- 'traefik.frontend.rule=Host:portainer.wp.docker.localhost'
traefik:
image: traefik
command: -c /dev/null --web --docker --logLevel=INFO
ports:
- '8000:80'
# - '8080:8080' # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
codebase:

One host for multiple containers and switch container based on the path

I use traefik with a docker backend. Here is how I started traefik:
$ cat docker-compose.yml
version: '2'
networks:
  default:
    external:
      name: proxy
services:
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker --logLevel=WARNING
    container_name: traefik
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
    labels:
      - "traefik.frontend.rule=Host:dashboard.docker"
      - "traefik.port=8080"
I want two containers: one that is the Docker registry, and a second that is a UI for the registry. I would like all HTTP requests like registry.docker/v2/* to go to the registry container, but any other requests (registry.docker/, registry.docker/repositories/20, ...) to go to the UI container.
Here is what I tried:
$ cat docker-compose.yml
version: '2'
networks:
  default:
    external:
      name: proxy
services:
  registry:
    image: registry:2
    container_name: registry
    environment:
      - REGISTRY_STORAGE_DELETE_ENABLED=true
    labels:
      - traefik.frontend.rule=Host:registry.docker, PathPrefix:/v2
      - traefik.frontend.port=5000
  registry-ui:
    image: konradkleine/docker-registry-frontend:v2
    container_name: registry-ui
    environment:
      - ENV_DOCKER_REGISTRY_HOST=registry.docker
      - ENV_DOCKER_REGISTRY_PORT=80
      - ENV_DOCKER_REGISTRY_USE_SSL=false
    labels:
      - traefik.frontend.rule=Host:registry.docker
But all requests go through the registry container. What should I change?
I think you have a typo here. Based on files I have, here is a possible solution.
Try this:
version: '2'
networks:
  default:
    external:
      name: proxy
services:
  registry:
    image: registry:2
    container_name: registry
    environment:
      - REGISTRY_STORAGE_DELETE_ENABLED=true
    labels:
      traefik.frontend.rule: "Host:registry.docker;PathPrefix:/v2"
      traefik.frontend.port: "5000"
  registry-ui:
    image: konradkleine/docker-registry-frontend:v2
    container_name: registry-ui
    environment:
      - ENV_DOCKER_REGISTRY_HOST=registry.docker
      - ENV_DOCKER_REGISTRY_PORT=80
      - ENV_DOCKER_REGISTRY_USE_SSL=false
    labels:
      traefik.frontend.rule: "Host:registry.docker"
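A quick way to check that the /v2 prefix now reaches the registry rather than the UI (assuming Traefik listens on port 80 of the local host and registry.docker is routed to it):
curl -H "Host: registry.docker" http://localhost/v2/_catalog   # should return the registry's JSON repository list
curl -H "Host: registry.docker" http://localhost/              # should return the registry-ui HTML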
