Traefik yml acme email value from environment variables - docker

I use a compose file with two services (a Python app and Traefik), and in the Dockerfile I load all the environment variables.
For Traefik I use a YML file to define the services. In that YML file I have a node for certificatesResolvers that looks like this:
certificatesResolvers:
  letsencrypt:
    acme:
      email: "email@domain.com"
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
I want to set the email from an environment variable, so the YML file should look like this:
certificatesResolvers:
  letsencrypt:
    acme:
      email: '{{env "USER_EMAIL"}}'
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
With the YML written this way, I get this in the logs:
level=info msg="Starting provider *acme.Provider {\"email\":\"{{env \\\"USER_EMAIL\\\"}}\",\"caServer\":\"https://acme-v02.api.letsencrypt.org/directory\",\"storage\":\"/etc/traefik/acme/acme.json\",\"keyType\":\"RSA4096\",\"httpChallenge\":{\"entryPoint\":\"web\"},\"ResolverName\":\"letsencrypt\",\"store\":{},\"ChallengeStore\":{}}"
level=error msg="Unable to obtain ACME certificate for domains \"domain.com\": cannot get ACME client acme: error: 400 :: POST :: https://acme-v02.api.letsencrypt.org/acme/new-acct :: urn:ietf:params:acme:error:invalidEmail :: Error creating new account :: \"{{env \\\"USER_EMAIL\\\"}}\" is not a valid e-mail address, url: " providerName=letsencrypt.acme routerName=web-secure-router#file rule="Host(`domain.com`)"
I tried with:
email: '{{env "USER_EMAIL"}}'
email: '`{{env "USER_EMAIL"}}`'
email: "{{env 'USER_EMAIL'}}"
email: "{{env USER_EMAIL}}"
But none of those worked.
In the same YML file I have a node that looks like this:
http:
  routers:
    web-secure-router:
      rule: 'Host(`{{env "PROJECT_HOSTNAME"}}`)'
      entryPoints:
        - web-secure
      service: fastapi
      tls:
        certResolver: letsencrypt
In that section I get the right value of the PROJECT_HOSTNAME variable, in this case domain.com, as you can see in the logs above.

This may not be the solution, but it is a different way of doing things you can try:
instead of using a Traefik YML file, pass the configuration as commands in the docker-compose YML, for example:
https://github.com/nasatome/docker-network-utils/blob/389324b6795d07684dac9bfe7dc5315bcd7eef7c/reverse-proxy/traefik/docker-compose.yml
Another thing to try would be to use:
${USER_EMAIL}
instead of
{{env "USER_EMAIL"}}

To clarify why you cannot use your own user-defined environment variable for certificatesResolvers: it is part of the static configuration, whereas the http node is part of the dynamic configuration (where you can use your own variables, like PROJECT_HOSTNAME).
You can still use Traefik's own environment variables to set the email for your certificate resolver, via TRAEFIK_CERTIFICATESRESOLVERS_<NAME>_ACME_EMAIL.
I haven't tested this myself, but I think the following should do the trick:
services:
  traefik:
    environment:
      TRAEFIK_CERTIFICATESRESOLVERS_LETSENCRYPT_ACME_EMAIL: ${USER_EMAIL}

Related

Traefik certificate resolver folder/file privilege

I set up Traefik in docker compose. I create an acme folder and hand it to the certificate resolver as in the configuration below, and Traefik then creates acme.json.
I see that its owner:group is the same as the owner of the acme folder. Are there any constraints on what the owner/group of the acme folder or the acme.json file needs to be? This certificate is used by Traefik and the mode is 600, so I assume the owner is important, but it seems to work with any owner I set there.
certificatesResolvers:
  certResolver:
    acme:
      email: "admin@setlog.com"
      storage: "/acme/acme.json"
      httpChallenge:
        entryPoint: "web"

Serverless Lambda: Socket hang up with Docker Compose

We have a lambda with a POST event. Before deploying we need to test the whole flow locally, so we're using serverless-offline. Our main app runs inside docker-compose, and we are trying to call this lambda from the main app. We're getting this error: Error: socket hang up
I first thought it could be a Docker configuration issue in the Dockerfile or docker-compose.yml, but we tested with an Express app using the same Dockerfile as the lambda and I can hit that endpoint from the main app. So now we know it is not a Docker issue but rather the serverless.yml:
service: csv-report-generator
custom:
  serverless-offline:
    useDocker: true
    dockerHost: host.docker.internal
    httpPort: 9000
    lambdaPort: 9000
provider:
  name: aws
  runtime: nodejs14.x
  stage: local
functions:
  payments:
    handler: index.handler
    events:
      - http:
          path: /my-endpoint
          method: post
          cors:
            origin: '*'
plugins:
  - serverless-offline
  - serverless-dotenv-plugin
Here is our current configuration; we've been trying different configurations without success. Any idea how we can hit the lambda?
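Not an answer, just to make the local test concrete: assuming a recent serverless-offline (which prepends the stage to HTTP routes unless --noPrependStageInUrl is set) and that the main app reaches the host via host.docker.internal, the call from the main app's container would look roughly like this (payload is illustrative):
# run from inside the main app container; serverless-offline listens on httpPort 9000
curl -X POST "http://host.docker.internal:9000/local/my-endpoint" \
     -H "Content-Type: application/json" \
     -d '{"test": true}'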

How to collect Docker logs using Filebeat?

I am trying to collect this kind of log from a Docker container:
[1620579277][642e7adc-74e1-4b89-a705-d271846f7ebc][channel1][afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set] ex02 set
[1620579277][ac9f99b7-0126-45ed-8a74-6adc3a9d6bc5][channel1][afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction] Aval =201 Bval =301 after performing the transaction
[1620579277][9211a9d4-3fe6-49db-b245-91ddd3a11cd3][channel1][afca2a976fa482f429fff4a38e2ea49f337a8af1b5dca0de90410ecc792fd5a4][usecase_cc][set][Transaction] Transaction makes payment of X units from A to B
[1620579280][0391d2ce-06c1-481b-9140-e143067a9c2d][channel1][1f5752224da4481e1dc4d23dec0938fd65f6ae7b989aaa26daa6b2aeea370084][usecase_cc][get] Query Response: {"Name":"a","Amount":"200"}
I have set up filebeat.yml this way:
filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/container-id/container-id.log'
processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"
  - dissect:
      tokenizer: '{"log":"[%{time}][%{uuid}][%{channel}][%{id}][%{chaincode}][%{method}] %{specificinfo}\"\n%{}'
      field: "message"
      target_prefix: ""
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "changeme"
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
logging.json: true
logging.metrics.enabled: false
Although Elasticsearch and Kibana are deployed successfully, I am getting this error when a new log is generated:
{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index
[filebeat]","resource.type":"index_or_alias","resource.id":"filebeat","index_uuid":"_na_",
"index":"filebeat"}],"type":"index_not_found_exception","reason":"no such index
[filebeat]","resource.type":"index_or_alias","resource.id":"filebeat","index_uuid":"_na_",
"index":"filebeat"},"status":404}
Note: I am using version 7.12.1, and Kibana, Elasticsearch and Logstash are deployed in Docker.
I had tried Logstash as an alternative to Filebeat, but the actual mistake was an incorrectly mapped path for the logs in the Filebeat configuration file. To solve this issue, roughly as sketched below:
I created an environment variable pointing to the right place;
I passed that environment variable as part of the Docker volume;
I pointed the path in the configuration file to the path of the volume inside the container.
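A sketch of that setup, with an illustrative HOST_LOG_PATH variable and mount point (neither is the poster's actual value):
docker-compose.yml (excerpt):
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.12.1
    volumes:
      # The host directory with the container logs is taken from an
      # environment variable instead of being hard-coded.
      - ${HOST_LOG_PATH}:/usr/share/filebeat/logs:ro
filebeat.yml (excerpt):
filebeat.inputs:
  - type: container
    paths:
      # Points at the volume's path inside the Filebeat container.
      - '/usr/share/filebeat/logs/*.log'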

Traefik: no space left on device

I'm trying to enable the file provider for registering dynamic configuration, but I get this error:
Cannot start the provider *file.Provider: error adding file watcher: no space left on device
Traefik uses fsnotify to add new watchers, and the limit comes from the Linux setting /proc/sys/fs/inotify/max_user_watches.
I tried to change the setting inside the Docker container with sudo:
sudo sysctl -w fs.inotify.max_user_watches=12288
but I get an error:
sysctl: error setting key 'fs.inotify.max_user_watches': Read-only file system
Traefik configuration:
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
providers:
  docker: {}
  file:
    directory: '/config'
    watch: true
api:
  dashboard: true
certificatesResolvers:
  le:
    acme:
      email: myemail@mail.com
      storage: acme.json
      httpChallenge:
        entryPoint: web
Traefik version: 2.2.1
When I run Traefik on another machine or on my Mac, or when I set watch to false in the configuration, it works like a charm, but I need to watch file changes.
Please tell me how I can change this variable with sudo in an Alpine container, or how to solve this issue another way.
Well, I had tried to change max_user_watches inside the Docker container, which was the wrong idea. I needed to change max_user_watches on the Linux machine where I run the Docker containers.
After running the command:
sudo sysctl -w fs.inotify.max_user_watches=12288
it worked fine.
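To keep that value across reboots of the Docker host, the setting can also be persisted; this is a standard sysctl step, not Traefik-specific:
# /etc/sysctl.d/99-inotify.conf on the host, not inside the container
fs.inotify.max_user_watches=12288
# reload all sysctl configuration files without rebooting
sudo sysctl --system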

Configure a docker-registry to use multiple certificates using config.yml

I'm setting up a private Docker registry on CentOS using docker-distribution, and I use its config.yml to configure the registry. This file looks as follows:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  tls:
    certificate: /certs/certificate.crt
    key: /certs/key.key
auth:
  htpasswd:
    realm: somerealm
    path: /auth/registry.password
Everything works well so far, but I would like to use two different certificates: one for local traffic and one for remote traffic over the internet. The problem is that I don't know how to specify multiple certificate/key files in the config. I already tried using a wildcard like "/certs/*.crt" and adding another entry "certificate: /certs/certificate_2.crt", but neither worked.
I could not find any documentation or posts about this. Does anyone have an idea how I could achieve this?
