Using robots.txt with Traefik 1.7 - docker

Is there a way in Traefik v1 to serve a static robots.txt file on /robots.txt on all the sites proxied?
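One common approach (a rough sketch, not a verified setup): Traefik itself does not serve files, so you can run a tiny web server that only serves robots.txt and attach a host-agnostic Path rule with a high enough priority that it wins over the per-site Host rules. The service name, image and priority value below are placeholders:

# added under services: in the existing docker-compose.yml
  robots:
    image: nginx:alpine
    volumes:
      # a single static file served from nginx's default document root
      - ./robots.txt:/usr/share/nginx/html/robots.txt:ro
    labels:
      # no Host in the rule, so it matches /robots.txt on every proxied site
      - traefik.frontend.rule=Path:/robots.txt
      # raise the priority so this frontend beats the longer per-site Host rules
      - traefik.frontend.priority=1000
      - traefik.port=80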

Related

How to use service name as URL in Docker?

What I want to do:
My docker-compose file contains several services, each of which has a service name. All services use the exact same image. I want them to be reachable via the service name (like curl service-one, curl service-two) from the host. The use-case is that these are microservices that should be reachable from a host system.
services:
  service-one:
    container_name: container-service-one
    image: customimage1
  service-two:
    container_name: container-service-two
    image: customimage1
What's the problem
Lots of tutorials say that this is the way to build microservices, but it's usually done using ports, whereas I need service names instead of ports.
What I've tried
There are lots of very old answers (5-6 years old), but not a single one gives a working solution. There are ideas like parsing the IP of each container and then using that, using hostnames internally between docker containers, or complex third-party tools like building your own DNS.
It feels weird that I'm the only one who needs several APIs reachable from a host system; this feels like a standard use-case, so I think I'm missing something here.
Can somebody tell me where to go from here?
I'll start from basic to advanced as far as I know.
For starters, every service that's part of the same Docker network (by default, every service defined in the compose file) can already reach the others by service name, so container-to-container access by name comes "for free".
If you want to use the service names from the host itself, you can set up a reverse proxy like nginx and, based on the server name (which in your case would be equal to the service name), route to the appropriate port on the host running the docker containers.
The basic idea is to intercept all communication to port 80 on the server and route each request according to the incoming DNS name.
Here's an example:
compose file:
version: "3.9"
services:
nginx-router:
image: "nginx:latest"
volumes:
- type: bind
source: ./nginx.conf
target: /nginx/nginx.conf
ports:
- "80:80"
service1:
image: "nginx:latest"
ports:
- "8080:80"
service2:
image: "httpd:latest"
ports:
- "8081:80"
nginx.conf
worker_processes auto;
pid /tmp/nginx.pid;

events {
    worker_connections 8000;
    multi_accept on;
}

http {
    server {
        listen 80;
        server_name 127.0.0.1;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://service1:80;
        }
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://service2:80;
        }
    }
}
In this example, if the server name is localhost, I'm routing to service2, which is the httpd image (Apache HTTP Server), and we get the default Apache "It works" HTML page.
And when we access via the 127.0.0.1 server name we should see nginx, and indeed that is what we get.
In your case you'd use the service names instead, after setting them up as DNS records and using those records to route to the appropriate service.
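To complete the picture for the original goal (curl service-one, curl service-two from the host), here is a minimal sketch of that last step, assuming the nginx-router setup above and the service names from the question: point the names at the Docker host (via /etc/hosts or a real DNS record) and add one server block per name.

# /etc/hosts on the host machine (or equivalent DNS records)
127.0.0.1   service-one service-two

# additional server blocks inside the http {} section of nginx.conf
server {
    listen 80;
    server_name service-one;
    location / {
        proxy_set_header Host $host;
        # Docker's embedded DNS resolves the compose service name inside the network
        proxy_pass http://service-one:80;
    }
}
server {
    listen 80;
    server_name service-two;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://service-two:80;
    }
}

With that in place, curl http://service-one from the host hits the router on port 80 and is dispatched by the Host header to the right container.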

Nginx API Gateway in Docker Compose

(Disclaimer: I've seen a lot of versions of this question asked on here, but none seem to really answer my question.)
I want to use NGINX as an API Gateway to route requests to microservice APIs in docker-compose.
For my sample app, I have two microservice APIs (A and B). Any request endpoint that starts with /a should go to API-A and any request endpoint that starts with /b should go to API-B.
Some issues I've had are:
I want paths like /a/foo/bar to match API-A but not /ab/foo
I want routing to work regardless of whether or not the path ends in a / (aka both /a/foo and /a/foo/ work)
My docker-compose file looks like this:
version: "3.8"
services:
gateway:
build:
context: ./api-gw
ports:
- 8000:80
apia:
build:
context: ./api-a
ports:
- 8000
apib:
build:
context: ./api-b
ports:
- 8000
and my sample NGINX config file looks like this:
server {
    listen 80;
    server_name localhost;
    location ^~ /a {
        proxy_pass http://apia:8000/;
    }
    location ^~ /b {
        proxy_pass http://apib:8000/;
    }
}
How can I setup my NGINX config to properly route my requests?
Thanks for your help!
you need to change your Nginx locations to anchored regex matches:
match for API-A:
location ~ ^/a(/.*)?$
match for API-B:
location ~ ^/b(/.*)?$
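For context, a sketch of how those matches could be wired into the gateway config from the question. One caveat: nginx does not allow a URI part on proxy_pass inside a regex location, so the prefix is stripped with a rewrite instead (this assumes the upstream APIs expect prefix-less paths, as the original trailing-slash proxy_pass implied):

server {
    listen 80;
    server_name localhost;

    # matches /a, /a/foo and /a/foo/bar, but not /ab/foo
    location ~ ^/a(/.*)?$ {
        rewrite ^/a/?(.*)$ /$1 break;   # strip the /a prefix before proxying
        proxy_pass http://apia:8000;
    }

    # matches /b, /b/foo and /b/foo/bar, but not /ba/foo
    location ~ ^/b(/.*)?$ {
        rewrite ^/b/?(.*)$ /$1 break;
        proxy_pass http://apib:8000;
    }
}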

How do I configure nginx to serve files with a prefix?

I am trying to host static files in kubernetes with an nginx container, and expose them on a private network with istio.
I want the root of my server to exist at site.com/foo, as I have other applications existing at site.com/bar, site.com/boo, etc.
My istio virtual service configuration:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cluster-foo
  namespace: namespace
spec:
  gateways:
    - cluster-gateway
  hosts:
    - site.com
  http:
    - match:
        - name: http
          port: 80
          uri:
            prefix: /foo
      route:
        - destination:
            host: app.namespace.svc.cluster.local
            port:
              number: 80
All of my static files exist in the directory /data on my nginx container. My nginx config:
events {}
http {
    server {
        root /data;
        location /foo {
            autoindex on;
        }
    }
}
Applying this virtual service, and a kube deployment that runs an nginx container with this config, I get an nginx server at site.com/foo that serves all of the static files in /data on the container. All good. The problem is that nginx's autoindexing does not respect the prefix /foo: all the file links nginx generates at site.com/foo look like site.com/page.html rather than site.com/foo/page.html.
Furthermore, when I put site.com/foo/page.html in my browser manually, it is displayed correctly, so I know that nginx is serving the files at the correct location; only the links in the generated index are wrong.
Is there any way to configure nginx autoindex with a prefix?
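One common workaround, sketched below under two assumptions (the files live directly under /data, and the /foo prefix is not stripped before the request reaches nginx): autoindex emits relative links, so they only resolve under the prefix when the index page itself is served at a URL ending in a slash. Redirecting the bare /foo and serving the directory via alias keeps the prefix in the generated links.

events {}
http {
    server {
        # send /foo to /foo/ so autoindex's relative links resolve under the prefix
        location = /foo {
            return 301 /foo/;
        }
        # /foo/page.html -> /data/page.html
        location /foo/ {
            alias /data/;
            autoindex on;
        }
    }
}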

serving static files from jwilder/nginx-proxy

I have a web app (django served by uwsgi) and I am using nginx for proxying requests to specific containers.
Here is a relevant snippet from my default.conf.
upstream web.ubuntu.com {
    server 172.18.0.9:8080;
}
server {
    server_name web.ubuntu.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi://web.ubuntu.com;
    }
}
Now I want the static files to be served from nginx rather than uwsgi workers.
So basically I want to add something like:
location /static/ {
    autoindex on;
    alias /staticfiles/;
}
to the automatically generated server block for the container.
I believe this should make nginx serve all requests to web.ubuntu.com/static/* from the /staticfiles folder.
But since the configuration (default.conf) is generated automatically, I don't know how to add the above location to the server block dynamically :(
A location block can't be outside a server block, right? And there can be only one server block per virtual host, so I don't see how to get the location block in there, unless I append it to default.conf after nginx comes up and then reload it.
I did go through https://github.com/jwilder/nginx-proxy, and I only see examples of changing location settings per host and for the default host, but nothing about adding a new location altogether.
I already posted this in Q&A for jwilder/nginx-proxy and didn't get a response.
Please help me if there is a way to achieve this.
This answer is based on this comment from the #553 issue discussion on the official nginx-proxy repo. First, you have to create the default_location file with the static location:
location /static/ {
    alias /var/www/html/static/;
}
and save it, for example, into nginx-proxy folder in your project's root directory. Then, you have to add this file to /etc/nginx/vhost.d folder of the jwilder/nginx-proxy container. You can build a new image based on jwilder/nginx-proxy with this file being copied or you can mount it using volumes section. Also, you have to share static files between your webapp and nginx-proxy containers using a shared volume. As a result, your docker-compose.yml file will look something like this:
version: "3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx-proxy/default_location:/etc/nginx/vhost.d/default_location
- static:/var/www/html/static
webapp:
build: ./webapp
expose:
- 8080
volumes:
- static:/path/to/webapp/static
environment:
- VIRTUAL_HOST=webapp.docker.localhost
- VIRTUAL_PORT=8080
- VIRTUAL_PROTO=uwsgi
volumes:
static:
Now, the server block in /etc/nginx/conf.d/default.conf will always include the static location:
server {
    server_name webapp.docker.localhost;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        include uwsgi_params;
        uwsgi_pass uwsgi://webapp.docker.localhost;
        include /etc/nginx/vhost.d/default_location;
    }
}
which will make Nginx serve static files for you.
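One practical detail not shown above: something has to put the files into the shared static volume. With Django served by uwsgi, as in the question, that is typically done by pointing STATIC_ROOT at the mounted path and running collectstatic; a minimal sketch, where the path just has to match the webapp volume mount from the compose file:

# settings.py (sketch)
STATIC_URL = "/static/"
STATIC_ROOT = "/path/to/webapp/static"  # same path the "static" volume is mounted on

# then, e.g. in the container's entrypoint or image build:
#   python manage.py collectstatic --noinput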

Traefik and Nginx with HTTPS on Docker / 400 Bad Request

I'm trying to build a stack with Traefik and Nginx based on Docker. Without HTTPS everything is fine, but I get an error as soon as I add the HTTPS configuration.
I'm getting this error from Nginx on example.com: 400 Bad Request / The plain HTTP request was sent to HTTPS port. In the address bar I can see the green lock saying the connection is secure.
Certbot works fine, so I have a real SSL certificate inside the proper folder.
I can get to the Traefik dashboard when I visit traefik.example.com, but I have to accept the browser's no-SSL warning, and the dashboard also works without HTTPS.
docker-compose.yml
version: '3.4'
services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - ../letsencrypt:/etc/letsencrypt
    labels:
      - traefik.backend=traefik
      - traefik.frontend.rule=Host:traefik.example.com
      - traefik.port=8080
    networks:
      - traefik
  nginx:
    image: nginx:latest
    volumes:
      - ../www:/var/www
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ../letsencrypt:/etc/letsencrypt
    labels:
      - traefik.backend=nginx
      - traefik.frontend.rule=Host:example.com
      - traefik.port=80
      - traefik.port=443
    networks:
      - traefik
networks:
  traefik:
    driver: overlay
    external: true
    attachable: true
traefik.toml
defaultEntryPoints = ["http", "https"]

[web]
address = ":8080"

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/letsencrypt/live/example.com/fullchain.pem"
      keyFile = "/etc/letsencrypt/live/example.com/privkey.pem"

[docker]
domain = "example.com"
watch = true
exposedByDefault = true
swarmMode = false
nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://www.example.com$request_uri;
}
server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    root /var/www/public;
    index index.html;
}
Thanks for your help.
First, there is no need to have SSL redirection configured in both Traefik and Nginx. Also, the Traefik frontend rule matches only the non-www variant, while the backend app expects www. Finally, the Traefik web provider is deprecated, so the newer api provider should be used instead.
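In concrete terms, that could look roughly like this (a sketch under the assumption that Traefik terminates TLS and the Nginx container only serves plain HTTP): keep a single traefik.port label pointing at 80, let the frontend rule match both host variants, replace [web] with [api] in traefik.toml, and strip the certificates and HTTPS redirects from nginx.conf.

# labels on the nginx service
labels:
  - traefik.backend=nginx
  - traefik.frontend.rule=Host:example.com,www.example.com
  - traefik.port=80

# traefik.toml: replace the deprecated [web] section
[api]

# nginx.conf: one plain-HTTP server, no certificates, no redirect to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/public;
    index index.html;
}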
As I just stumbled upon a similar problem with Traefik v2
400 Bad Request / The plain HTTP request was sent to HTTPS port
with an Nginx error log stating
400 client sent plain HTTP request to HTTPS port while reading client request headers
and scratching my head over it, I finally found the source of that error. It's not that the TLS certs were invalid or that something in the transport was broken, but that the wiring between routers, services and port mappings was off.
Previously I had not noticed that the Docker Compose stack had an Nginx container listening only on 80/tcp. I assumed everything was OK, since I had attached the ports to Traefik load balancers on a separate service per http/https endpoint, with separate routers. This somehow did not work:
- "traefik.http.services.proxy.loadbalancer.server.port=80"
- "traefik.http.services.proxy-secure.loadbalancer.server.port=443"
As an intermediate workaround I opened the ports - "8008:80" - "8443:443" and got it working. I am still investigating what is wrong with the Traefik ports, as those should be exposed per default. This is not a solution, since those ports are now available to the outside world, but I am leaving this explanation here because I could not find anything on this topic that would point me in the right direction; hopefully it is helpful for someone else later on.
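For reference, the wiring that avoids the exposed workaround ports would be to keep the Nginx container on plain HTTP only and point a single Traefik v2 service at port 80, with TLS terminated on the router; a rough sketch with placeholder router/service names:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.proxy.rule=Host(`example.com`)"
  - "traefik.http.routers.proxy.entrypoints=websecure"
  - "traefik.http.routers.proxy.tls=true"
  - "traefik.http.services.proxy.loadbalancer.server.port=80"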
