Tracing requests over their lifetime … through Docker Containers?

Tracing makes it much easier to find the parts of the code that are worth a developer's time and attention. For that reason, I attached Jaeger as a tracer to a set of microservices running inside Docker containers. I use Traefik as the ingress controller/service mesh to route and proxy requests.
The problem I am facing is that something is wrong with the tracing configuration in Traefik. Jaeger cannot find the span context to connect the individual, service-dependent spans into a whole trace.
The following line appears in the logs:
{
  "level": "debug",
  "middlewareName": "tracing",
  "middlewareType": "TracingEntryPoint",
  "msg": "Failed to extract the context: opentracing: SpanContext not found in Extract carrier",
  "time": "2021-02-02T23:16:51+01:00"
}
What I have tried/searched/confirmed so far:
I already checked the ports (they are open inside the Docker host network) and everything is reachable, so interconnectivity is not the problem here.
Header forwarding is set via a Docker Compose label: loadbalancer.passhostheader=true.
The following snippets describe the Docker Compose setup.
Traefik: Ingress Controller
This is a stripped-down version of the Traefik container.
# Network
ROOT_DOMAIN=example.test
DEFAULT_NETWORK=traefik
---
version: '3'
services:
  controller:
    image: "traefik:2.4.2"
    hostname: "controller"
    restart: on-failure
    security_opt:
      - no-new-privileges:true
    ports:
      - "443:443"
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
      - "8082:8082"
      - "8083:8083"
    networks:
      - default
    working_dir: /etc/traefik
    volumes:
      - /private/etc/localtime:/etc/localtime:ro
      - ${PWD}/controller/static.yml:/etc/traefik/traefik.yml:ro
      - ${PWD}/controller/dynamic.yml:/etc/traefik/dynamic.yml:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - cert-storage:/usr/local/share/ca-certificates:ro
      - ${PWD}/logs/traefik:/var/log/traefik
volumes:
  cert-storage:
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/certs/certs
networks:
  default:
    external: true
    name: ${DEFAULT_NETWORK}
Traefik is set up using the file provider as the base, with Docker Compose labels on top of it:
# static.yml (Traefik conf)
debug: true
log:
  level: DEBUG
  filePath: /var/log/traefik/error.log
  format: json
serversTransport:
  insecureSkipVerify: true
api:
  dashboard: true
  insecure: true
  debug: true
providers:
  docker:
    exposedByDefault: false
    swarmMode: false
    watch: true
    defaultRule: "Host(`{{ normalize .Name }}.example.test`)"
    endpoint: "unix:///var/run/docker.sock"
    network: traefik
  file:
    filename: /etc/traefik/dynamic.yml
    watch: true
tracing:
  serviceName: "controller"
  spanNameLimit: 250
  jaeger:
    samplingType: const
    samplingParam: 1.0
    samplingServerURL: http://tracer:5778/sampling
    localAgentHostPort: 127.0.0.1:6831
    gen128Bit: true
    propagation: jaeger
    traceContextHeaderName: "traefik-trace-id"
    collector:
      endpoint: http://tracer:14268/api/traces?format=jaeger.thrift
Jaeger: OpenTracing / OpenTelemetry
---
version: '3'
services:
  tracer:
    image: "jaegertracing/all-in-one:1.21.0"
    hostname: "tracer"
    command:
      - "--log-level=info"
      - "--admin.http.host-port=:14269"
      - "--query.ui-config=/usr/local/share/jaeger/ui/conf.json"
    environment:
      SPAN_STORAGE_TYPE: memory
    restart: on-failure
    security_opt:
      - no-new-privileges:true
    expose:
      - 5775/udp
      - 6831/udp
      - 6832/udp
      - 5778
      - 14250
      - 14268
      - 14269
      - 14271
      - 16686
      - 16687
    volumes:
      - /private/etc/localtime:/etc/localtime:ro
      - ${PWD}/tracer/conf:/usr/local/share/jaeger
      - ${PWD}/logs/jaeger:/var/log/#TODO
      - cert-storage:/usr/local/share/ca-certificates
    networks:
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=${DEFAULT_NETWORK}"
      # Admin UI router
      - "traefik.http.routers.tracer-router.rule=Host(`tracer.$ROOT_DOMAIN`)"
      - "traefik.http.routers.tracer-router.entrypoints=https"
      - "traefik.http.routers.tracer-router.tls=true"
      - "traefik.http.routers.tracer-router.tls.options=default"
      - "traefik.http.routers.tracer-router.service=tracer"
      # Service / load balancer
      - "traefik.http.services.tracer.loadbalancer.passhostheader=true"
      - "traefik.http.services.tracer.loadbalancer.server.port=16686"
      - "traefik.http.services.tracer.loadbalancer.server.scheme=http"

I'm not 100% sure what problem you're experiencing, but here are some things to consider.
According to this post on the Traefik forums, the message you're seeing is logged at debug level because it's not something you should be worried about. It just records that no trace context was found in the incoming request, so a new one will be created. That second part is not in the message, but apparently that's what happens.
You should check whether you're getting data in Jaeger at all. If you are, that message is probably nothing to worry about.
If you are getting data in Jaeger but the spans are not connected, that will be because Traefik can only work with trace context that is already present in inbound requests; it cannot add trace context to outbound requests. Within your application, you'll need to implement trace propagation so that your outbound requests include the trace context that was received as part of the incoming request. Without that, every request is sent without trace context and starts a new trace when it reaches the next Traefik ingress point.

The problem actually was with the traceContextHeaderName. Sadly, I cannot tell exactly what the problem was, as the git diff only shows that nothing changed around Traefik and Jaeger at the point where I fixed it. I assume the config got "stuck" somehow. I tracked down the related lines in the source, but as I am not a Go developer, I can only guess whether there is a bug.
What I did was switch back to uber-trace-id, which magically "fixed" it. After I ran some traces and connected another service (Node, npm jaeger-client with process.env.TRACER_STATE_HEADER_NAME set to the same value), I switched back to traefik-trace-id and things worked.
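For anyone running into the same thing, here is a minimal sketch of the two places where the header name has to agree; the downstream service name api and its image are placeholders, not part of the setup above:
# static.yml (Traefik)
tracing:
  jaeger:
    propagation: jaeger
    traceContextHeaderName: "traefik-trace-id"

# docker-compose.yml of a downstream Node service; the app reads this variable
# and passes it to the jaeger-client config, as described above
services:
  api:                                # placeholder service name
    image: my-node-service:latest     # placeholder image
    environment:
      TRACER_STATE_HEADER_NAME: "traefik-trace-id"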

Related

Traefik 2: access to .well-known acme folder

I had a problem with my mail server: it was not working properly when I added the mailbox to Outlook. Outlook warned me that I was using the wrong certificate; on closer inspection it turned out to be the default certificate generated by the mail Docker image, even though the webmail is protected with SSL from Traefik.
After researching, I managed to come up with the setup below. It is a combination of someone else's docker-compose (I cannot find the link to his post anymore) and my previous attempt to host the ACME challenge on my Flask website, which had been overridden by default by Nginx Proxy Manager, so I had abandoned it.
But now, working with Traefik, which provides much more flexibility, I was able to do it.
This is a page on my Flask website that returns files from within the .well-known folder, which is mapped into each Docker container that needs access to the Let's Encrypt certification process behind the reverse proxy.
# Imports needed by this view; `app`, `config` and `_auto_values` are defined
# elsewhere in the application.
import os
import json

from flask import Response, render_template


@app.route('/.well-known/acme-challenge/<acmechalleng>')
def acme(acmechalleng):
    try:
        # acme_folder is the path to the folder where challenges are stored,
        # for example: acmechall_folder = os.path.abspath('acmefiles/')
        acme_folder = config.acmechall_folder
        extr = {}
        # File names that must never be exposed (i.e. the paths index itself)
        ignore_files = ['.paths.json']
        try:
            items = json.load(open(os.path.join(acme_folder, '.paths.json')))
        except Exception:
            items = ""
        # Walk the ACME folder and build an index of the files it contains
        for root, dirs, files in os.walk(acme_folder):
            for f in files:
                if f not in ignore_files:
                    full_path = os.path.join(root, f)
                    path = full_path.split(acme_folder, 1)[1]
                    path = path.replace("\\", "/")
                    extr.update({path[1:]: 0})
        # Rewrite the index only when the folder contents have changed
        if items != extr:
            with open(os.path.join(acme_folder, '.paths.json'), 'w', encoding='utf-8') as f:
                json.dump(extr, f, ensure_ascii=False, indent=4)
        # Only files listed in .paths.json are allowed to be read
        items = json.load(open(os.path.join(acme_folder, '.paths.json')))
        existing = list(items.keys())
        if acmechalleng in existing:
            with open(os.path.join(acme_folder, acmechalleng), "rb") as f:
                content = f.readlines()
            return Response(content)
        return render_template('404.html', **_auto_values), 404
    except Exception:
        return render_template('404.html', **_auto_values), 404
Poste.io mail server docker-compose file, with access to the .well-known challenges for Let's Encrypt:
version: '3'
networks:
  traefikauth_net:
    external: true
services:
  mailserver:
    image: analogic/poste.io
    container_name: mailserver
    hostname: mail.mydomain.com
    networks:
      - traefikauth_net
    restart: unless-stopped
    ports:
      - "25:25"
      - "587:587"
      - "993:993"
    environment:
      - HOSTNAME=poste.mydomain.com
      - TZ=Europe/London
      - LETSENCRYPT_EMAIL=admin@mydomain.com
      - LETSENCRYPT_HOST=mydomain.com
      - VIRTUAL_HOST=poste.mydomain.com
      - HTTPS=OFF
      - DISABLE_CLAMAV=TRUE
      - DISABLE_RSPAMD=TRUE
    volumes:
      - /opt/traefik/.well-known:/opt/www/.well-known/acme-challenge
      - /opt/mailserver:/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.poste-letsencrypt.rule=HostRegexp(`mydomain.com`,`{subdomain:[a-z]*}.mydomain.com`) && PathPrefix(`/.well-known/`)"
      - "traefik.http.routers.poste-letsencrypt.entrypoints=http"
      - "traefik.http.routers.poste-letsencrypt.service=mail"
      - "traefik.http.services.poste-letsencrypt.loadbalancer.server.port=80"
      - "traefik.http.routers.mail.rule=Host(`poste.mydomain.com`)"
      - "traefik.http.routers.mail.entrypoints=https"
      - "traefik.http.routers.mail.tls.certresolver=cloudflare"
      - "traefik.http.routers.mail.service=mail"
      - "traefik.http.services.mail.loadbalancer.server.port=80"
The main website, on mydomain.com, also needs Traefik labels:
volumes:
  - /opt/traefik/.well-known:/var/www/wellknown
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.flask-letsencrypt.entrypoints=http"
  - "traefik.http.routers.flask-letsencrypt.rule=HostRegexp(`mydomain.com`) && PathPrefix(`/.well-known/`)"
  - "traefik.http.services.flask-letsencrypt.loadbalancer.server.port=80"
  - "traefik.http.routers.flask-letsencrypt.service=myflask"
  - "traefik.http.routers.myflask.entrypoints=https"
  - "traefik.http.routers.myflask.rule=Host(`mydomain.com`)"
  - "traefik.http.routers.myflask.service=myflask"
  - "traefik.http.routers.myflask.tls=true"
  - "traefik.http.routers.myflask.tls.certresolver=cloudflare"
  - "traefik.http.services.myflask.loadbalancer.server.port=80"
I also have a normal certresolver for all the other domains, defined in the main Traefik config file, traefik.yml:
certificatesResolvers:
  cloudflare:
    acme:
      email: admin@mydomain.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
Traefik 2 docker-compose:
version: '3'
services:
  traefik:
    image: traefik
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/traefik.yml:/traefik.yml:ro
      - /opt/traefik/conf:/additionalsettings
      - /opt/traefik/folderacme:/letsencrypt
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.api.rule=Host(`traefik.mydomain.com`)'
      - 'traefik.http.routers.api.entrypoints=https'
      - 'traefik.http.routers.api.service=api@internal'
      - 'traefik.http.routers.api.tls=true'
      - 'traefik.http.routers.api.middlewares=authelia@docker'
      - "traefik.http.routers.api.tls.certresolver=cloudflare"
      - "traefik.http.routers.api.tls.domains[0].main=mydomain.com"
      - "traefik.http.routers.api.tls.domains[0].sans=*.mydomain.com"
      - "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://logmein.mydomain.com/"
      - "traefik.http.middlewares.authelia.forwardauth.trustforwardheader=true"
      - 'traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User, Remote-Groups, Remote-Name, Remote-Email'
      - 'traefik.http.middlewares.authelia-basic.forwardauth.address=http://authelia:9091/api/verify?auth=basic'
      - 'traefik.http.middlewares.authelia-basic.forwardauth.trustForwardHeader=true'
      - 'traefik.http.middlewares.authelia-basic.forwardauth.authResponseHeaders=Remote-User, Remote-Groups, Remote-Name, Remote-Email'
      # global redirect to https
      - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
      - "traefik.http.routers.http-catchall.entrypoints=http"
      - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
      # middleware redirect
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    environment:
      - CF_API_EMAIL=<cloudflare api email>
      - CF_API_KEY=<cloudflare api key for certresolver>
    ports:
      - 80:80
      - 443:443
    command:
      - '--api'
      - '--providers.docker=true'
      - '--providers.docker.exposedByDefault=false'
      - '--entrypoints.http=true'
      - '--entrypoints.http.address=:80'
      - '--entrypoints.http.http.redirections.entrypoint.to=https'
      - '--entrypoints.http.http.redirections.entrypoint.scheme=https'
      - '--entrypoints.https=true'
      - '--entrypoints.https.address=:443'
What do you think? I believe the Python code could be much smaller, especially the part that stores the paths in JSON, but I reuse it from another project of mine, where I needed more than just the path to the file. This way, I believe nobody should be able to access other files, only those in the ACME folder.

Ghost Docker SMTP setup

I created a Ghost instance on my VPS with the official docker-compose file of the Ghost CMS and modified it to use a Mailgun SMTP account as follows:
version: '3.1'
services:
  mariadb:
    image: 'docker.io/bitnami/mariadb:10.3-debian-10'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=bn_ghost
      - MARIADB_DATABASE=bitnami_ghost
    volumes:
      - 'mariadb_data:/bitnami'
  ghost:
    image: 'ghost:3-alpine'
    environment:
      MARIADB_HOST: mariadb
      MARIADB_PORT_NUMBER: 3306
      GHOST_DATABASE_USER: bn_ghost
      GHOST_DATABASE_NAME: bitnami_ghost
      GHOST_HOST: localhost
      mail__transport: SMTP
      mail__options__service: Mailgun
      mail__auth__user: ${MY_MAIL_USER}
      mail__auth__pass: ${MY_MAIL_PASS}
      mail__from: ${MY_FROM_ADDRESS}
    ports:
      - '80:2368'
    volumes:
      - 'ghost_data:/bitnami'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  ghost_data:
    driver: local
but when I try to invite authors to the site, it gives me the following error:
Failed to send 1 invitation: dulara@thinksmart.lk. Please check your email configuration, see https://ghost.org/docs/concepts/config/#mail for instructions
I am certain that my SMTP credentials are correct.
I logged into the Ghost container's bash shell and checked its files there; the mail section of its configuration is empty.
I still can't find my mistake. I am not sure about the variable names, but I took them from the official documentation.
My example:
url=https://www.exemple.com/
# admin__url=XXX // remove it (in my case the redirection failed with it)
database__client=mysql
database__connection__host=...
database__connection__port=3306
database__connection__database=ghost
database__connection__user=ghost
database__connection__password=XXX
privacy__useRpcPing=false
mail__transport=SMTP
mail__options__host=smtp.exemple.com
mail__options__port=587
# mail__options__service=Exemple // remove it
mail__options__auth__user=sys@exemple.com
mail__options__auth__pass=XXX
# mail__options__secureConnection=true // remove it
mail__from=Exemple Corp. <sys@exemple.com>
In your case, change:
mail__auth__user => mail__options__auth__user
mail__auth__pass => mail__options__auth__pass
and delete mail__options__service.
(https://github.com/metabase/metabase/issues/4272#issuecomment-566928334)
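Applied to the compose file from the question, the ghost service's mail settings might then look roughly like this; this is only a sketch, and the Mailgun SMTP host and port are assumptions to be checked against your Mailgun account:
ghost:
  image: 'ghost:3-alpine'
  environment:
    # ...database settings unchanged...
    mail__transport: SMTP
    mail__options__host: smtp.mailgun.org      # assumed Mailgun SMTP endpoint
    mail__options__port: 587                   # assumed submission port
    mail__options__auth__user: ${MY_MAIL_USER}
    mail__options__auth__pass: ${MY_MAIL_PASS}
    mail__from: ${MY_FROM_ADDRESS}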

Certificate of K3S cluster

I'm using a K3S cluster in a docker(-compose) container in my CI/CD pipeline to test my application code. However, I have a problem with the certificate of the cluster: I need to communicate with the cluster using its external address. My docker-compose file looks as follows:
version: '3'
services:
  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent
    environment:
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      # get the kubeconfig file
      - .:/output
    ports:
      # - 6443:6443
      - 6080:6080
      - 192.168.2.110:6443:6443
  node:
    image: rancher/k3s:v0.8.1
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
    ports:
      - 31000-32000:31000-32000
volumes:
  k3s-server: {}
Accessing the cluster from Python gives me:
MaxRetryError: HTTPSConnectionPool(host='192.168.2.110', port=6443): Max retries exceeded with url: /apis/batch/v1/namespaces/mlflow/jobs?pretty=True (Caused by SSLError(SSLCertVerificationError("hostname '192.168.2.110' doesn't match either of 'localhost', '172.19.0.2', '10.43.0.1', '172.23.0.2', '172.18.0.2', '172.23.0.3', '127.0.0.1', '0.0.0.0', '172.18.0.3', '172.20.0.2'")))
Here are my two (three) questions:
How can I add additional IP addresses to the certificate generation? I was hoping the --bind-address flag in the server command would trigger that.
How can I fall back to HTTP? Providing --http-listen-port didn't achieve the expected result.
Any other suggestion on how I can enable communication with the cluster?
Changing the Python code is not really an option, as I would like to keep the code unaltered for testing. (Falling back to HTTP works via kubeconfig.)
The solution is to use the --tls-san parameter:
server --disable-agent --tls-san 192.168.2.110
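In the docker-compose file from the question, that flag would presumably go into the server service's command, along these lines:
services:
  server:
    image: rancher/k3s:v0.8.1
    # add the external address to the API server certificate's subject alternative names
    command: server --disable-agent --tls-san 192.168.2.110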

rsyslog not connecting to elasticsearch in docker

I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog transform and send these messages to Elasticsearch.
I found a nice article on the configuration at https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup that it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitly
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not. Second, after running the containers on the same network, log in to the rsyslog container and check whether port 9200 on Elasticsearch is reachable.
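As a rough sketch of both changes, assuming the network from the compose file above (the omelasticsearch server/serverport parameters shown in the comment are how rsyslog would then address the Compose service name instead of localhost):
# docker-compose.yml: attach the rsyslog service to the same network as elasticsearch
rsyslog:
  image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
  networks:
    - logging-network

# rsyslog.conf: target the service name rather than localhost, e.g.
# action(type="omelasticsearch" server="elasticsearch" serverport="9200"
#        template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")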

Adding Traefik StripPrefix middleware to docker-compose labels results in 504

I have developed several dockerized full-stack web applications that I am trying to route requests to using Traefik. I want to take advantage of dynamic configuration via docker-compose labels. I would like to apply the stripPrefix middleware so I can use the same application routing as if each app were served at the root. However, once these rules are applied, the result is a 504 Gateway Timeout response.
Here's my setup:
Traefik 2.0.1
Docker 19.03.2, Compose 1.24.1
NGINX:latest images
A global docker network on which the Traefik container runs
Multiple multi-container applications, each of which includes an NGINX web server
All applications have their own local networks and the NGINX containers are also on the global network.
Each application is configured to be listening at /
Here is the docker-compose.yml definition for the offending NGINX container:
nginx:
image: nginx:latest
container_name: "mps_nginx"
volumes:
- ./nginx/confs/nginx.conf:/etc/nginx/default.conf
- ./static:/www/static
restart: "always"
labels:
- traefik.http.routers.mps.rule=Host(`localhost`) && PathPrefix(`/mps`)
- traefik.http.middlewares.strip-mps.stripprefix.prefixes=/mps
- traefik.http.routers.mps.middlewares=strip-mps#docker
networks:
- default
- mps
The aggravating part is that when I comment out the middleware labels, it runs just fine but cannot find a matching URL pattern.
Prior to this, I tested my approach using the whoami container that is defined in the Traefik Quickstart Tutorial:
# Test service to make sure our local docker-compose network functions
whoami:
  image: containous/whoami
  labels:
    - traefik.http.routers.whoami.rule=Host(`localhost`) && PathPrefix(`/whoami`)
    - traefik.http.middlewares.strip-who.stripprefix.prefixes=/whoami
    - traefik.http.routers.whoami.middlewares=strip-who@docker
A request to http://localhost/whoami returns (among other things)
GET / HTTP/1.1.
This is exactly how I expected my routing approach to work for all my other applications. The Traefik dashboard shows green all around for every middleware I register, and yet all I see is 504 errors.
If anyone has any clues, I would sincerely appreciate it.
To summarize my solution to a similar problem:
my-service:
  image: some/image
  ports: # no need to publish the port, just to show that the service listens on 8090
    - target: 8090
      published: 8091
      mode: host
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.myservice-router.rule=PathPrefix(`/myservice/`)"
    - "traefik.http.routers.myservice-router.service=my-service"
    - "traefik.http.services.my-service.loadbalancer.server.port=8090"
    - "traefik.http.middlewares.myservice-strip.stripprefix.prefixes=/myservice"
    - "traefik.http.middlewares.myservice-strip.stripprefix.forceslash=false"
    - "traefik.http.routers.myservice-router.middlewares=myservice-strip"
Now the details.
Routing rule
... route all calls whose path starts with /myservice/
- "traefik.http.routers.myservice-router.rule=PathPrefix(`/myservice/`)"
... to the service my-service
- "traefik.http.routers.myservice-router.service=my-service"
... on port 8090
- "traefik.http.services.my-service.loadbalancer.server.port=8090"
... using the myservice-strip middleware
- "traefik.http.routers.myservice-router.middlewares=myservice-strip"
... this middleware strips /myservice from the path
- "traefik.http.middlewares.myservice-strip.stripprefix.prefixes=/myservice"
... and does not add a trailing slash (/) if the path becomes an empty string
- "traefik.http.middlewares.myservice-strip.stripprefix.forceslash=false"
There is an issue with prefixes that do not end with '/'.
Test your config like this:
- "traefik.http.routers.whoami.rule=Host(`localhost`) && (PathPrefix(`//whoami/`) || PathPrefix(`/portainer`))"
- "traefik.http.middlewares.strip-who.stripprefix.prefixes=/whoami"
I had a similar problem that was solved by adding
- "traefik.http.middlewares.strip-who.stripprefix.forceslash=true"
It makes sure the StripPrefix middleware doesn't also remove the forward slash.
You can read more in the forceslash documentation: https://docs.traefik.io/middlewares/stripprefix/
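For the whoami test service from the question, that would just be one extra label next to the existing stripprefix one, for example:
- "traefik.http.middlewares.strip-who.stripprefix.prefixes=/whoami"
- "traefik.http.middlewares.strip-who.stripprefix.forceslash=true"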
