Building a reverse proxy with Docker and Traefik, I want to dispatch several paths on the same host to two different backend servers, like this:
1. traefik.test/ -> app1/
2. traefik.test/post/blabla -> app1/post/blabla
3. traefik.test/user/blabla -> app2/user/blabla
If I only needed rules #2 and #3, I could do it like this in docker-compose.yml:
app1:
  image: akky/app1
  labels:
    - "traefik.backend=app1"
    - "traefik.frontend.rule=Host:traefik.test;PathPrefix:/post,/comment"
app2:
  image: akky/app2
  labels:
    - "traefik.backend=app2"
    - "traefik.frontend.rule=Host:traefik.test;PathPrefix:/user,/group"
However, adding the root '/' to the first PathPrefix seems to shadow the /user prefix on app2. The following does not work; everything goes to the app1 backend:
- "traefik.frontend.rule=Host:traefik.test;PathPrefix:/,/post,/group"
The rules "Host:" and "PathPrefix" seems working as 'AND', but I wanted to use 'OR' ( exact /, OR starting with /post ). I searched and came to know that multiple rules can be directed since version 1.3.0, according to pull request #1257 by making multiple lines with adding service names.
By knowing that, what I did is like this,
app1:
  image: akky/app1
  labels:
    - "traefik.app1_subfolder.backend=app1"
    - "traefik.app1_subfolder.frontend.rule=Host:traefik.test;PathPrefix:/post,/group"
    - "traefik.app1_rootfolder.backend=app1"
    - "traefik.app1_rootfolder.frontend.rule=Host:traefik.test;Path:/"
app2:
  image: akky/app2
  labels:
    - "traefik.backend=app2"
    - "traefik.frontend.rule=Host:traefik.test;PathPrefix:/user"
Now it works as required: root access is dispatched to app1/.
My question is: is this the proper way? It does not look like it to me, since dispatching between the root and subfolders should be a typical use case.
You might consider adding priority labels so that the app2 rules take precedence over the app1 rules. Then you can simplify the app1 config:
app1:
  image: akky/app1
  labels:
    - "traefik.backend=app1"
    - "traefik.frontend.priority=10"
    - "traefik.frontend.rule=Host:traefik.test;PathPrefix:/,/post,/group"
app2:
  image: akky/app2
  labels:
    - "traefik.backend=app2"
    - "traefik.frontend.priority=50"
    - "traefik.frontend.rule=Host:traefik.test;PathPrefix:/user"
Update: I originally had the priorities in the wrong order. Larger priority values take precedence over smaller ones. According to the docs, matching is based on (priority + rule length), and the larger value wins; with the labels above that works out to roughly 50 + 34 = 84 for app2 versus 10 + 43 = 53 for app1, so /user requests go to app2.
I have a systemd service evil, whose state is described in init.sls. This service requires the presence of a file /etc/evil/wicked.txt. That file pulls in updates via a python function fetch_darkness that contacts a server, whose response changes very infrequently.
evil:
  pkg.installed:
    - name: {{ package_name }}
  service.running:
    - watch:
      - file: /etc/evil/wicked.txt
      - pkg: evil
/etc/evil/wicked.txt:
  file.managed:
    - contents: {{ salt.evil.fetch_darkness(grains['id'], pillar.node.hostname) }} # this changes infrequently
    - show_changes: false
    - allow_empty: false
    - user: evil
    - group: evil
    - mode: 600
    - makedirs: true
    - require:
      - user: evil
    - watch_in:
      - service: evil
This creates a problem: every highstate restarts the service evil, even if the actual contents of /etc/evil/wicked.txt haven't changed, which is 99% of the time.
So the next solution was to create a temporary file /etc/evil/wicked-temp.txt that isn't watched by evil. /etc/evil/wicked.txt pulls its contents from wicked-temp.txt whenever that file changes, so evil is only restarted when the file actually updates instead of on every highstate:
evil:
  pkg.installed:
    - name: {{ package_name }}
  service.running:
    - watch:
      - file: /etc/evil/wicked.txt
      - pkg: evil
/etc/evil/wicked-temp.txt:
  file.managed:
    - contents: {{ salt.evil.fetch_darkness(grains['id'], pillar.node.hostname) }} # this changes infrequently
    - show_changes: false
    - allow_empty: false
    - user: evil
    - group: evil
    - mode: 600
    - makedirs: true
    - require:
      - user: evil
/etc/evil/wicked.txt:
  file.managed:
    - source: /etc/evil/wicked-temp.txt
    - show_changes: false
    - allow_empty: false
    - user: evil
    - group: evil
    - mode: 600
    - makedirs: true
    - onchanges:
      - file: /etc/evil/wicked-temp.txt
    - require:
      - user: evil
    - watch_in:
      - service: evil
"rm /etc/evil/wicked-temp.txt":
  cmd.run:
    - onfail:
      - file: /etc/evil/wicked.txt
However, the issue now is that there are several servers that each get highstated, and fetch_darkness is often unable to reach the main server, causing the highstate to fail because wicked.txt/wicked-temp.txt can't be populated.
This isn't great, because more than 99% of the time the server response doesn't change. If wicked.txt doesn't exist, then sure, the highstate should fail. But if it does exist, I'd like to keep using whatever is already in wicked.txt rather than have a failed highstate.
Is there a way to solve this problem, where:
1. The service must not restart unless the file contents change
2. The file must pull updates
3. The highstate must not fail if the file already exists but is unable to pull updates
EDIT: I should have mentioned that the file's contents do change every time. The file is a randomly generated key fetched from the central server, with a label attached, so something like 1-xxxxxx or 2-yyyyyy. However, 1-xxxxxx counts as identical to 1-zzzzzz even though the actual contents differ, which is why evil should not restart when 1-xxxxxx changes to 1-zzzzzz, but only when it changes to 2-yyyyyy.
You already have a custom module that does the fetch, so why not create a custom state module that handles the rest? That way you can determine whether the file actually changed, handle it when it does, and recover from errors when the fetch doesn't work.
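For illustration only, a hedged sketch of how the SLS could consume such a hypothetical custom state (the module name evil_file, the state function fetched, and its parameters are all invented here; the real interface is whatever you implement):
/etc/evil/wicked.txt:
  evil_file.fetched:                 # hypothetical custom state wrapping fetch_darkness
    - node: {{ pillar.node.hostname }}
    - compare: label_prefix          # only report a change when the 1-/2- label changes
    - keep_existing_on_error: true   # if the fetch fails and the file exists, succeed without changes
    - watch_in:
      - service: evil
With the change detection and error handling inside the state module, the temp-file dance and the cmd.run cleanup become unnecessary.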
In your first example, though, nothing should report the file as changed unless something in the file actually is changing. You might want to double-check what is changing in the file.
I'm using Traefik 2.0 with the Docker provider (swarm mode), and I wish to provide a default way for services to publish themselves on Traefik while avoiding naming conflicts.
I managed to create a default rule matching my needs, but I'm now struggling because I don't see a way to provide a default middleware to strip away prefixes.
Is there a way to add a Docker service label without having to provide a specific router name, while still adding a middleware to whatever router was implicitly created by Traefik?
Or is there a way to define a default middleware as there is for the default rule?
The solution I'm aiming for is to remove all the variable substitutions from the following labels, reducing the verbosity of the whole definition without exposing myself to naming conflicts:
- traefik.enable=true
- traefik.http.services.${ENV:-dev}_${STACK}_whoami.loadbalancer.server.port=80
- traefik.http.middlewares.${ENV:-dev}_${STACK}_whoami.stripprefix.prefixes=/${STACK}
- traefik.http.routers.${ENV:-dev}_${STACK}_whoami.entrypoints=http
- traefik.http.routers.${ENV:-dev}_${STACK}_whoami.rule=PathPrefix(`/${STACK}/whoami`)
- traefik.http.routers.${ENV:-dev}_${STACK}_whoami.middlewares=${ENV:-dev}_${STACK}_whoami#docker
I'm hoping it could become something like the following, where default would be a magic word meaning the implicit service name assigned by Docker when deploying the stack:
- traefik.enable=true
- traefik.http.services.default.loadbalancer.server.port=80
- traefik.http.middlewares.default.stripprefix.prefixes=/${STACK}
- traefik.http.routers.default.entrypoints=http
- traefik.http.routers.default.rule=PathPrefix(`/${STACK}/whoami`)
- traefik.http.routers.default.middlewares=default#docker
I tried the following, but apparently the Go template doesn't get replaced:
- traefik.enable=true
- traefik.http.services.{{ .Name }}.loadbalancer.server.port=80
- traefik.http.middlewares.{{ .Name }}.stripprefix.prefixes=/${STACK}
- traefik.http.routers.{{ .Name }}.entrypoints=http
- traefik.http.routers.{{ .Name }}.rule=PathPrefix(`/${STACK}/whoami`)
- traefik.http.routers.{{ .Name }}.middlewares={{ .Name }}#docker
I haven't tested it, but according to this doc, Go templating is only supported in dynamic (YAML/TOML) configuration files.
So I suggest you add a dynamic configuration file (see here) and write something like:
http:
  routers:
    {{range $i, $e := until 100 }}
    router{{ $e }}:
      middlewares:
        - middleware{{ $e }}
    {{end}}
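To complement that, a hedged sketch of a stripPrefix middleware defined in such a dynamic file (the middleware name strip-stack and the prefix are illustrative assumptions, standing in for /${STACK}):
http:
  middlewares:
    strip-stack:
      stripPrefix:        # strips the prefix before forwarding to the service
        prefixes:
          - "/mystack"    # illustrative prefix
In Traefik 2.x a file-defined middleware can then be referenced from Docker labels with the name@provider syntax, e.g. traefik.http.routers.whoami.middlewares=strip-stack@file.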
Hope this helps.
I have a bunch of micro-services hosted on AWS. I am using StatsD, Graphite, and Grafana to monitor them. Now I want to expand this to monitor the queues (SQS) through which these micro-services talk to each other. How can I leverage Graphite/Grafana to do this? Or is there a better approach if there isn't any support/plugin for the same? Thanks :)
PS: If it has to be Zipkin, please tell me whether they can co-exist, or whether there is a catch to using multiple tracers.
Alright, I'm going to answer this based on what you said here:
Or is there a better approach if there isn't any support/plugin for the same?
The way that I do it is through Prometheus, in combination with cloudwatch_exporter and alertmanager.
The configuration for cloudwatch_exporter to monitor SQS would be something like this (this is only two metrics; you'll need to add more based on what you're looking to monitor):
tasks:
  - name: ec2_cloudwatch
    default_region: us-west-2
    metrics:
      - aws_namespace: "AWS/SQS"
        aws_dimensions: [QueueName]
        aws_metric_name: NumberOfMessagesReceived
        aws_statistics: [Sum]
        range_seconds: 600
      - aws_namespace: "AWS/SQS"
        aws_dimensions: [QueueName]
        aws_metric_name: ApproximateNumberOfMessagesDelayed
        aws_statistics: [Sum]
You'll then need to configure Prometheus to scrape the cloudwatch_exporter endpoint at an interval; for example, here is what I do:
- job_name: 'somename'
  scrape_timeout: 60s
  dns_sd_configs:
    - names:
        - "some-endpoint"
  metrics_path: /scrape
  params:
    task: [ec2_cloudwatch]
    region: [us-east-1]
  relabel_configs:
    - source_labels: [__param_task]
      target_label: task
    - source_labels: [__param_region]
      target_label: region
You would then configure alertmanager to alert based on those scraped metrics; I don't alert on these particular metrics, so I can't give you an example. But to give you an idea of the architecture: Prometheus scrapes cloudwatch_exporter, which in turn queries the CloudWatch API, and alertmanager fires alerts based on the scraped metrics.
If you need to use something like StatsD, you can use statsd_exporter. And, just in case you were wondering: yes, Grafana supports Prometheus as a data source.
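If it helps, a minimal sketch of a Grafana datasource provisioning file for that (assuming Grafana 5+ provisioning, and Prometheus reachable at http://prometheus:9090; both are assumptions):
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus            # built-in Prometheus datasource
    access: proxy               # the Grafana server proxies the queries
    url: http://prometheus:9090 # assumed Prometheus address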
In Prometheus with the blackbox exporter I have managed to configure 10+ URLs for application availability. All of them are identified by their URL, and some are longer than the example shown below. So instead of displaying the URL as the instance name, how can I give each target a unique label?
For example
static_configs:
  - targets:
      - https://www.google.co.in/ # called as GoogleIndia
      - https://www.google.co.uk/ # called as GoogleUK
      - https://www.google.fr/ # called as GoogleFrance
You can use metric_relabel_configs to construct an instance label (or a completely new one) based on the instance name you specified, as described in this blog post; a sketch follows.
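A hedged sketch of that approach (the regex and the name label are illustrations, not taken from the blog post):
metric_relabel_configs:
  - source_labels: [instance]                # the target URL ends up in 'instance'
    regex: 'https://www\.google\.co\.in/'    # match one specific target
    target_label: name
    replacement: GoogleIndia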
Or you can specify your targets like this, assigning them arbitrary labels in the process:
static_configs:
  - targets: ['https://www.google.co.in/']
    labels:
      name: GoogleIndia
  - targets: ['https://www.google.co.uk/']
    labels:
      name: GoogleUK
  - targets: ['https://www.google.fr/']
    labels:
      name: GoogleFrance
It's more verbose, but also easier to understand and more powerful.
I would like to employ Prometheus relabeling to add a label hostname, which should be a more concise version of instance as provided by the targets. This should allow more compact legends in Grafana dashboards.
For instance, when __address__ is myhost.mydomain.com:8080, hostname should be set to myhost. I am using __address__ rather than instance as the source label, because the latter is apparently not yet set when relabeling occurs.
The relevant excerpt of my prometheus.yaml looks as follows (it is meant to employ a lazy regular expression):
- job_name: 'node_exporter'
  static_configs:
    - targets: ['myhost1.mydomain.com:8080',
                'myhost2.mydomain.com:8080']
  relabel_configs:
    - source_labels: ['__address__']
      regex: '^([^\.:]+?)'
      replacement: ${1}
      target_label: 'hostname'
The expected new label hostname is not yet added. What could be wrong in my setup?
With this regex (using a non-capturing group to consume the rest of the address, since Prometheus anchors the regex against the entire value), things have come to work: '(.+?)(?:[\.:].+)?'.
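For completeness, that regex plugged back into the relabeling block from the question:
relabel_configs:
  - source_labels: ['__address__']
    regex: '(.+?)(?:[\.:].+)?'   # capture up to the first '.' or ':', swallow the rest
    replacement: '${1}'
    target_label: 'hostname'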