Authelia (Docker-Compose) can't find/read existing configuration files (Volumes) - docker

I tried to install Authelia as an OAuth server with Docker Compose. But every time I start the container, the logs say this:
time="2020-05-23T16:51:09+02:00" level=error msg="Provide a JWT secret using \"jwt_secret\" key"
time="2020-05-23T16:51:09+02:00" level=error msg="Please provide `ldap` or `file` object in `authentication_backend`"
time="2020-05-23T16:51:09+02:00" level=error msg="Set domain of the session object"
time="2020-05-23T16:51:09+02:00" level=error msg="A storage configuration must be provided. It could be 'local', 'mysql' or 'postgres'"
time="2020-05-23T16:51:09+02:00" level=error msg="A notifier configuration must be provided"
panic: Some errors have been reported
goroutine 1 [running]:
main.startServer()
github.com/authelia/authelia/cmd/authelia/main.go:41 +0xc80
main.main.func1(0xc00009c000, 0xc0001e6100, 0x0, 0x2)
github.com/authelia/authelia/cmd/authelia/main.go:126 +0x20
github.com/spf13/cobra.(*Command).execute(0xc00009c000, 0xc000020190, 0x2, 0x2, 0xc00009c000, 0xc000020190)
github.com/spf13/cobra#v0.0.7/command.go:842 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc00009c000, 0xc0007cdf58, 0x4, 0x4)
github.com/spf13/cobra#v0.0.7/command.go:943 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/cobra#v0.0.7/command.go:883
main.main()
github.com/authelia/authelia/cmd/authelia/main.go:143 +0x166
and the container keeps restarting.
I don't really understand where this behavior comes from. I've tried named volumes as well as bind mounts, but it is still the same error. Maybe someone can tell me where I'm making a (probably stupid) mistake, because I don't see it.
My compose.yml file:
version: '3.7'

services:
  authelia:
    image: "authelia/authelia:latest"
    container_name: authelia
    restart: "unless-stopped"
    # security_opt:
    #   - no-new-privileges:true
    networks:
      - "web"
      - "intern"
    volumes:
      - ./authelia:/var/lib/authelia
      - ./configuration.yml:/etc/authelia/configuration.yml:ro
      - ./users_database.yml:/etc/authelia/users_database.yml
      # Had to bind-mount this volume; without it, Docker creates its own volume
      # with an empty configuration.yml and users_database.yml
      - ./data:/etc/authelia
    environment:
      - TZ=$TZ
    labels:
      - "traefik.enable=true"
      # HTTP Routers
      - "traefik.http.routers.authelia-rtr.entrypoints=https"
      - "traefik.http.routers.authelia-rtr.rule=Host(`secure.$DOMAINNAME`)"
      - "traefik.http.routers.authelia-rtr.tls=true"
      - "traefik.http.routers.authelia-rtr.tls.certresolver=le"
      # Middlewares
      - "traefik.http.routers.authelia-rtr.middlewares=chain-no-auth@file"
      # HTTP Service
      - "traefik.http.routers.authelia-rtr.service=authelia-svc"
      - "traefik.http.services.authelia-svc.loadbalancer.server.port=9091"

networks:
  web:
    external: true
  intern:
    external: true
The files and folders under the volumes section exist, and configuration.yml is not empty. I use an admin (non-root) user with sudo permissions.
Can anybody tell me what I'm doing wrong and why Authelia isn't able to find or read the configuration.yml?

Verify your configuration.yml file. These errors show up when your YAML syntax is incorrect. In particular:
double-check indentation,
put your domain names in quotation marks (this was my problem when I encountered this error).
See also the discussion here.
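For reference, here is a minimal sketch of the five sections those startup errors complain about, based on the Authelia v4 configuration schema of that era (all values are placeholders; note the quoted domain):

jwt_secret: "a_long_random_secret"

authentication_backend:
  file:
    path: /etc/authelia/users_database.yml

session:
  name: authelia_session
  secret: "another_long_random_secret"
  domain: "example.com"

storage:
  local:
    path: /etc/authelia/db.sqlite3

notifier:
  filesystem:
    filename: /etc/authelia/notification.txt

When Authelia reports all five errors at once, as in the question, that usually means the configuration file was not read at all (e.g. mounted at the wrong path), rather than five separate options being malformed.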

Related

docker-compose Permissions Denied when accessing secrets with authelia

I'm starting on a fresh system to deploy a simple docker-compose stack with swag and authelia. Previously I've just included my "secrets" in the .env file or directly in the authelia configuration file, but I'm trying to employ some best practices here and properly hide the secrets using Docker secrets. However, when starting up my containers, authelia complains about permission denied when trying to access them.
In the different guides I've looked at, none of them mention permissions on anything other than the secrets directory/files, which should be root-owned with 600 permissions.
My docker directory is in ~/docker with the secrets in ~/docker/secrets. The secrets directory is root-owned with 600 permissions. My docker directories are owned by uid 1100:1100, and I have the following docker-compose file (slightly edited for posting):
version: "3.9"
secrets:
authelia_duo_api_secret_key:
file: $DOCKERSECRETS/authelia_duo_api_secret_key
authelia_jwt_secret:
file: $DOCKERSECRETS/authelia_jwt_secret
authelia_notifier_smtp_password:
file: $DOCKERSECRETS/authelia_notifier_smtp_password
authelia_session_secret:
file: $DOCKERSECRETS/authelia_session_secret
authelia_storage_encryption_key:
file: $DOCKERSECRETS/authelia_storage_encryption_key
x-environment: &default-env
TZ: $TZ
PUID: $PUID
PGID: $PGID
services:
swag:
image: ghcr.io/linuxserver/swag
container_name: swag
cap_add:
- NET_ADMIN
environment:
<<: *default-env
URL: $DOMAINNAME
SUBDOMAINS: wildcard
VALIDATION: dns
CERTPROVIDER: zerossl #optional
DNSPLUGIN: cloudflare #optional
EMAIL: <edit>
DOCKER_MODS: linuxserver/mods:swag-dashboard
volumes:
- $DOCKERDIR/appdata/swag:/config
ports:
- 443:443
restart: unless-stopped
authelia:
image: ghcr.io/authelia/authelia:latest
container_name: authelia
restart: unless-stopped
volumes:
- $DOCKERDIR/appdata/authelia:/config
user: "1100:1100"
secrets:
- authelia_jwt_secret
- authelia_session_secret
- authelia_notifier_smtp_password
- authelia_duo_api_secret_key
- authelia_storage_encryption_key
environment:
AUTHELIA_JWT_SECRET_FILE: /run/secrets/authelia_jwt_secret
AUTHELIA_SESSION_SECRET_FILE: /run/secrets/authelia_session_secret
AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE: /run/secrets/authelia_notifier_smtp_password
AUTHELIA_DUO_API_SECRET_KEY_FILE: /run/secrets/authelia_duo_api_secret_key
AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: /run/secrets/authelia_storage_encryption_key
And the errors I'm getting in my log are:
authelia | 2022-07-28T23:45:05.872818847Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_session_secret into key 'session.secret': open /run/secrets/authelia_session_secret: permission denied"
authelia | 2022-07-28T23:45:05.872844527Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_jwt_secret into key 'jwt_secret': open /run/secrets/authelia_jwt_secret: permission denied"
authelia | 2022-07-28T23:45:05.872847757Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_duo_api_secret_key into key 'duo_api.secret_key': open /run/secrets/authelia_duo_api_secret_key: permission denied"
authelia | 2022-07-28T23:45:05.872850957Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_storage_encryption_key into key 'storage.encryption_key': open /run/secrets/authelia_storage_encryption_key: permission denied"
authelia | 2022-07-28T23:45:05.872853157Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: secrets: error loading secret path /run/secrets/authelia_notifier_smtp_password into key 'notifier.smtp.password': open /run/secrets/authelia_notifier_smtp_password: permission denied"
authelia | 2022-07-28T23:45:05.872855307Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: option 'jwt_secret' is required"
authelia | 2022-07-28T23:45:05.872857277Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: duo_api: option 'secret_key' is required when duo is enabled but it is missing"
authelia | 2022-07-28T23:45:05.872859417Z time="2022-07-28T21:15:05-02:30" level=error msg="Configuration: storage: option 'encryption_key' is required"
authelia | 2022-07-28T23:45:05.872861397Z time="2022-07-28T21:15:05-02:30" level=fatal msg="Can't continue due to the errors loading the configuration"
I'm sure I'm missing something simple here. Does everything have to run as root in order to access the secrets? Does that mean changing my whole docker directory in my home folder to root ownership, just to hide credentials? I'm a little confused by this; any help would be greatly appreciated.
I had similar permission errors, which I was able to get rid of by using Docker volumes. I based my setup on the example here.
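For context: with plain (non-Swarm) docker-compose, file-based secrets are bind-mounted into /run/secrets and keep their host ownership and permissions, so a root-owned file with mode 600 is unreadable by the container's user 1100:1100. A minimal sketch of the permissions route instead (paths and UID taken from the question; adjust to taste):

# Assumption: secrets live in ~/docker/secrets and the container runs as 1100:1100.
# Keep root as owner, give the files the container user's group, and allow group read.
sudo chown root:1100 ~/docker/secrets/authelia_*
sudo chmod 640 ~/docker/secrets/authelia_*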

Why am I getting this error when creating a stack on docker?

I'm a complete docker noob. Just installed docker and docker-compose as well as portainer following a tutorial.
Now I would like to set up traefik reverse proxy on portainer.
This is the traefik.yml file in /etc/traefik
global:
checkNewVersion: true
sendAnonymousUsage: false # true by default
# (Optional) Log information
# ---
# log:
# level: ERROR # DEBUG, INFO, WARNING, ERROR, CRITICAL
# format: common # common, json, logfmt
# filePath: /var/log/traefik/traefik.log
# (Optional) Accesslog
# ---
# accesslog:
# format: common # common, json, logfmt
# filePath: /var/log/traefik/access.log
# (Optional) Enable API and Dashboard
# ---
api:
dashboard: true # true by default
insecure: true # Don't do this in production!
# Entry Points configuration
# ---
entryPoints:
web:
address: :80
# (Optional) Redirect to HTTPS
# ---
# http:
# redirections:
# entryPoint:
# to: websecure
# scheme: https
websecure:
address: :443
# Certificates configuration
# ---
# TODO: Customize your Cert Resolvers and Domain settings
#
This is the docker-compose file:
version: '3'

volumes:
  traefik-ssl-certs:
    driver: local

services:
  traefik:
    image: "traefik:v2.5"
    container_name: "traefik"
    ports:
      - "80:80"
      - "443:443"
      # (Optional) Expose Dashboard
      - "8080:8080" # Don't do this in production!
    volumes:
      - /etc/traefik:/etc/traefik
      - traefik-ssl-certs:/ssl-certs
      - /var/run/docker.sock:/var/run/docker.sock:ro
But when I try to start the container I get this error:
2021/12/08 18:08:07 command traefik error: yaml: line 19: did not find expected key
I can get the container to run when I remove the whole "volumes" section under "services" from the docker-compose file, but I need it for my traefik setup. I have no clue what I did wrong, as I am following a video tutorial for this 1:1.
I think you should check your traefik.yml indentation. There are some keys at different levels, and YAML is pretty sensitive to this. I'm talking especially about:
global
checkNewVersion
sendAnonymousUsage
api
Check the number of spaces before them.
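For those keys specifically, the expected nesting indents each child under its parent, along these lines (values taken from the question):

global:
  checkNewVersion: true
  sendAnonymousUsage: false
api:
  dashboard: true
  insecure: true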
I cannot recreate your exact error message, but I got an error when using your exact traefik.yml config file (as posted in the question) as the syntax is invalid (as pointed out in another answer).
I reduced the compose file to the minimum:
version: '3'
services:
  traefik:
    image: "traefik:v2.5"
    container_name: "traefik"
    volumes:
      - 'c:\temp\traefik\etc\traefik.yml:/etc/traefik/traefik.yml'
And mounted just the traefik.yml file into the container as you can see. The file is shown below with commented out lines removed:
global:
checkNewVersion: true
sendAnonymousUsage: false # true by default
api:
dashboard: true # true by default
insecure: true # Don't do this in production!
entryPoints:
web:
address: :80
websecure:
address: :443
Running a docker-compose up on this gives the following error:
c:\Temp\traefik>docker-compose up
[+] Running 1/0
- Container traefik Created 0.0s
Attaching to traefik
traefik | 2021/12/09 08:44:36 command traefik error: no valid configuration found in file: /etc/traefik/traefik.yml
traefik exited with code 1
When I fix the indentation in the traefik.yml file (and turn on DEBUG logging) I have this:
global:
  checkNewVersion: true
  sendAnonymousUsage: false # true by default
api:
  dashboard: true # true by default
  insecure: true # Don't do this in production!
entryPoints:
  web:
    address: :80
  websecure:
    address: :443
log:
  level: DEBUG
and running docker-compose up again I now get this:
c:\Temp\traefik>docker-compose up
[+] Running 2/2
- Network traefik_default Created 0.2s
- Container traefik Created 29.5s
Attaching to traefik
traefik | time="2021-12-09T08:49:30Z" level=info msg="Configuration loaded from file: /etc/traefik/traefik.yml"
traefik | time="2021-12-09T08:49:30Z" level=info msg="Traefik version 2.5.4 built on 2021-11-08T17:41:41Z"
traefik | time="2021-12-09T08:49:30Z" level=debug msg="Static configuration loaded {\"global\":{\"checkNewVersion\":true},\"serversTransport\":{\"maxIdleConnsPerHost\":200},\"entryPoints\":{\"traefik\":{\"address\":\":8080\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"udp\":{\"timeout\":\"3s\"}},\"web\":{\"address\":\":80\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"udp\":{\"timeout\":\"3s\"}},\"websecure\":{\"address\":\":443\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":\"10s\"},\"respondingTimeouts\":{\"idleTimeout\":\"3m0s\"}},\"forwardedHeaders\":{},\"http\":{},\"udp\":{\"timeout\":\"3s\"}}},\"providers\":{\"providersThrottleDuration\":\"2s\"},\"api\":{\"insecure\":true,\"dashboard\":true},\"log\":{\"level\":\"DEBUG\",\"format\":\"common\"},\"pilot\":{\"dashboard\":true}}"
traefik | time="2021-12-09T08:49:30Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://doc.traefik.io/traefik/contributing/data-collection/\n"
traefik | time="2021-12-09T08:49:30Z" level=info msg="Starting provider aggregator.ProviderAggregator {}"
...
...
So it can be seen that traefik starts up. This is not necessarily the same issue you have, but it shows how to approach troubleshooting it. Once you know your traefik configuration is good, you can add DEBUG logging and then try adding the other volumes and configuration to see if they are OK too.
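Independent of traefik, you can also syntax-check the file before mounting it, for example with Python's PyYAML (a quick sketch, assuming Python and PyYAML are available):

python -c "import yaml; yaml.safe_load(open('traefik.yml')); print('syntax OK')"

This catches errors like "did not find expected key" without having to start the container at all.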

Ghost Docker SMTP setup

I created a Ghost instance on my VPS with the official docker compose file of the Ghost CMS, and I modified it to use a Mailgun SMTP account as follows:
version: '3.1'

services:
  mariadb:
    image: 'docker.io/bitnami/mariadb:10.3-debian-10'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=bn_ghost
      - MARIADB_DATABASE=bitnami_ghost
    volumes:
      - 'mariadb_data:/bitnami'
  ghost:
    image: 'ghost:3-alpine'
    environment:
      MARIADB_HOST: mariadb
      MARIADB_PORT_NUMBER: 3306
      GHOST_DATABASE_USER: bn_ghost
      GHOST_DATABASE_NAME: bitnami_ghost
      GHOST_HOST: localhost
      mail__transport: SMTP
      mail__options__service: Mailgun
      mail__auth__user: ${MY_MAIL_USER}
      mail__auth__pass: ${MY_MAIL_PASS}
      mail__from: ${MY_FROM_ADDRESS}
    ports:
      - '80:2368'
    volumes:
      - 'ghost_data:/bitnami'
    depends_on:
      - mariadb

volumes:
  mariadb_data:
    driver: local
  ghost_data:
    driver: local
But when I try to invite authors to the site, it gives me the following error:
Failed to send 1 invitation: dulara@thinksmart.lk. Please check your email configuration, see https://ghost.org/docs/concepts/config/#mail for instructions
I am certain that my SMTP credentials are correct.
I logged in to the Ghost container's bash shell and checked its files there; the mail section is empty.
I still can't find my mistake. I am not sure about the variable names, but I took them from the official documentation.
My example:
url=https://www.exemple.com/
# admin__url=XXX // Remove it (for me, the redirection failed)
database__client=mysql
database__connection__host=...
database__connection__port=3306
database__connection__database=ghost
database__connection__user=ghost
database__connection__password=XXX
privacy__useRpcPing=false
mail__transport=SMTP
mail__options__host=smtp.exemple.com
mail__options__port=587
# mail__options__service=Exemple // Remove it
mail__options__auth__user=sys@exemple.com
mail__options__auth__pass=XXX
# mail__options__secureConnection=true // Remove it
mail__from=Exemple Corp. <sys@exemple.com>
In your case, change:
mail__auth__user => mail__options__auth__user
mail__auth__pass => mail__options__auth__pass
And delete: mail__options__service
(https://github.com/metabase/metabase/issues/4272#issuecomment-566928334)
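Applied to the compose file from the question, the ghost service's mail environment would then look roughly like this (a sketch; smtp.mailgun.org and port 587 are Mailgun's usual SMTP endpoint, but verify them in your Mailgun domain settings):

    environment:
      mail__transport: SMTP
      mail__options__host: smtp.mailgun.org
      mail__options__port: 587
      mail__options__auth__user: ${MY_MAIL_USER}
      mail__options__auth__pass: ${MY_MAIL_PASS}
      mail__from: ${MY_FROM_ADDRESS}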

Certificate of K3S cluster

I'm using a K3S cluster in a docker(-compose) container in my CI/CD pipeline to test my application code. However, I have a problem with the cluster's certificate: I need to communicate with the cluster using its external address. My docker-compose script looks as follows:
version: '3'
services:
  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent
    environment:
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      # get the kubeconfig file
      - .:/output
    ports:
      # - 6443:6443
      - 6080:6080
      - 192.168.2.110:6443:6443
  node:
    image: rancher/k3s:v0.8.1
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
    ports:
      - 31000-32000:31000-32000
volumes:
  k3s-server: {}
Accessing the cluster from Python gives me:
MaxRetryError: HTTPSConnectionPool(host='192.168.2.110', port=6443): Max retries exceeded with url: /apis/batch/v1/namespaces/mlflow/jobs?pretty=True (Caused by SSLError(SSLCertVerificationError("hostname '192.168.2.110' doesn't match either of 'localhost', '172.19.0.2', '10.43.0.1', '172.23.0.2', '172.18.0.2', '172.23.0.3', '127.0.0.1', '0.0.0.0', '172.18.0.3', '172.20.0.2'")))
Here are my two (three) questions:
How can I add additional IP addresses to the cert generation? I was hoping the --bind-address in the server command would trigger that.
How can I fall back on HTTP? Providing an --http-listen-port didn't achieve the expected result.
Any other suggestion for how I can enable communication with the cluster?
Changing the Python code is not really an option, as I would like to keep the code unaltered for testing. (Fallback on HTTP works via kubeconfig.)
The solution is to use the --tls-san parameter:
server --disable-agent --tls-san 192.168.2.110
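In the compose file from the question, that means extending the server command, roughly like this (192.168.2.110 is the external address already used in the ports mapping):

services:
  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent --tls-san 192.168.2.110

The generated serving certificate should then include that address among its SANs, so the Python client can verify it.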

rsyslog not connecting to elasticsearch in docker

I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog capture, transform and send these messages to Elasticsearch.
I found a nice article on the configuration at https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup that it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitly
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not. Second, after running the containers on the same network, log in to the rsyslog container and check whether port 9200 is reachable.
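A sketch of those two steps against the files above: attach the rsyslog service to logging-network in the compose file, and note that omelasticsearch defaults to localhost:9200, which inside the rsyslog container is not the Elasticsearch container, so the action should point at the compose service name (server is a standard omelasticsearch parameter):

  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    networks:
      - logging-network

and in rsyslog.conf:

action(type="omelasticsearch" server="elasticsearch"
       template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")

A quick reachability check from inside the container: docker-compose exec rsyslog wget -qO- http://elasticsearch:9200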
