Error output when running a postfix container - docker

I have the following in my docker-compose.yml file:
simplemail:
  image: tozd/postfix
  ports:
    - "25:25"
So far, so good. But I get the following output when I run docker-compose run simplemail:
rsyslogd: cannot create '/var/spool/postfix/dev/log': No such file or directory
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
rsyslogd: activation of module imklog failed [try http://www.rsyslog.com/e/2145 ]
rsyslogd: Could no open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]
* Starting Postfix Mail Transport Agent postfix [ OK ]
What can I do to fix the errors above?

The documentation for the tozd/postfix image states:
You should make sure you mount spool volume (/var/spool/postfix) so
that you do not lose e-mail data when you are recreating a container.
If a volume is empty, image will initialize it at the first startup.
Your docker-compose.yml file should then be:
version: "3"
volumes:
  postfix-data: {}
services:
  simplemail:
    image: tozd/postfix
    ports:
      - "25:25"
    volumes:
      - postfix-data:/var/spool/postfix
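If you would rather keep the spool on the host where you can inspect it, a bind mount works as well. This is only a sketch; the host directory ./postfix-spool is an arbitrary choice, not something from the image docs:

```yaml
services:
  simplemail:
    image: tozd/postfix
    ports:
      - "25:25"
    volumes:
      # host directory is populated on first start and survives container recreation
      - ./postfix-spool:/var/spool/postfix
```

The named volume above is the documented approach; a bind mount just makes the data easier to back up and inspect.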

Related

Issue accessing a URL from outside a Docker container

I have a Docker container in which Prometheus metrics are served on 127.0.0.1:9615.
I want to access those metrics from my host machine, so I set up the port binding 0.0.0.0:9615->9615. But I am still not able to curl that URL; localhost:9615/metrics gives me
curl: (56) Recv failure: Connection reset by peer
My docker-compose file looks like this:
version: '2'
services:
  polkadot:
    container_name: polkadot
    image: parity/polkadot
    ports:
      - 30333:30333 # p2p port
      - 9933:9933 # rpc port
      - 9944:9944 # ws port
      - 9615:9615
    command: [
      "--name", "PolkaDocker",
      "--ws-external",
      "--rpc-external",
      "--rpc-cors", "all"
    ]
What mistake am I doing?
After pulling down your docker-compose.yaml, it seems you were just missing one additional CLI flag: --prometheus-external.
Updated docker-compose.yaml:
version: '2'
services:
  polkadot:
    container_name: polkadot
    image: parity/polkadot
    ports:
      - 30333:30333 # p2p port
      - 9933:9933 # rpc port
      - 9944:9944 # ws port
      - 9615:9615
    command: [
      "--name", "PolkaDocker",
      "--ws-external",
      "--rpc-external",
      "--rpc-cors", "all",
      "--prometheus-external" # NEW FLAG HERE
    ]
Now if you hit localhost:9615/metrics you should see data:
# HELP polkadot_block_height Block height info of the chain
# TYPE polkadot_block_height gauge
polkadot_block_height{status="best"} 0
polkadot_block_height{status="finalized"} 0
# HELP polkadot_block_verification_and_import_time Time taken to verify and import blocks
# TYPE polkadot_block_verification_and_import_time histogram
polkadot_block_verification_and_import_time_bucket{le="0.005"} 1076
...
Based on the CLI polkadot --help the flag is described like so:
$ polkadot --help
polkadot 0.9.8-3a10ee63c-x86_64-linux-gnu
Parity Technologies <admin@parity.io>
Polkadot Relay-chain Client Node

USAGE:
    polkadot [FLAGS] [OPTIONS]
    polkadot <SUBCOMMAND>

FLAGS:
    ...
    --prometheus-external
        Listen to all Prometheus data source interfaces.
        Default is local.
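The pattern behind this issue generalizes: a process listening on 127.0.0.1 inside a container is unreachable through a published port, because the port mapping forwards to the container's network interface, not its loopback. The flag simply makes the exporter bind 0.0.0.0. Conversely, if you want the published metrics port reachable from the host only, you can restrict the host side of the mapping (a sketch):

```yaml
ports:
  - "127.0.0.1:9615:9615"  # reachable from the host itself, not from other machines
```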

Copy single file into container via docker-compose and environmental variable

Any ideas why this isn't putting the file into the container?
k6:
  image: loadimpact/k6:0.32.0
  environment:
    - ENVIRONMENT=${ENVIRONMENT}
    - AMOUNT_OF_USERS=${AMOUNT_OF_USERS}
    - RUN_TIME=${RUN_TIME}
  command: run /test.mjs
  volumes:
    - ./load-test.mjs:/test.mjs # <-- this works
    - ./${USERS_FILE}:/users.csv # <-- this does not work
I'm running with the command:
ENVIRONMENT=bla AMOUNT_OF_USERS=5 RUN_TIME=1m USERS_FILE=users2.csv docker-compose up
I did a check inside the container:
console.log(exists('users.csv'))
console.log(exists('test.mjs'))
Results:
k6_1 | time="2021-05-24T11:31:51Z" level=info msg=false source=console
k6_1 | time="2021-05-24T11:31:51Z" level=info msg=true source=console
The file named by the USERS_FILE variable, users2.csv, exists in the current working directory.
If I set the volume to the following it works:
- ./users2.csv:/users.csv
Using ${PWD} to build an absolute path for the bind mount works:
volumes:
  - ./load-test.mjs:/test.mjs
  - ${PWD}/${USERS_FILE}:/users.csv
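For what it's worth, the interpolation itself can be checked in plain shell: Compose substitutes ${USERS_FILE} textually, so both spellings name the same file (the value below is hypothetical, mirroring the command line used):

```shell
# Mirror the variable passed on the docker-compose command line (hypothetical value)
USERS_FILE=users2.csv
rel="./${USERS_FILE}"       # what the failing volume line expands to
abs="${PWD}/${USERS_FILE}"  # what the working volume line expands to
echo "$rel"
echo "$abs"
```

Since both resolve to the same file, the difference lies in how Compose handles the relative form, and the absolute ${PWD}-based path sidesteps it.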

Authelia (Docker-Compose) can't find/read existing configuration files (Volumes)

I tried to install Authelia as an OAuth server with Docker Compose. But every time I start the container, the logs say this
time="2020-05-23T16:51:09+02:00" level=error msg="Provide a JWT secret using \"jwt_secret\" key"
time="2020-05-23T16:51:09+02:00" level=error msg="Please provide `ldap` or `file` object in `authentication_backend`"
time="2020-05-23T16:51:09+02:00" level=error msg="Set domain of the session object"
time="2020-05-23T16:51:09+02:00" level=error msg="A storage configuration must be provided. It could be 'local', 'mysql' or 'postgres'"
time="2020-05-23T16:51:09+02:00" level=error msg="A notifier configuration must be provided"
panic: Some errors have been reported
goroutine 1 [running]:
main.startServer()
        github.com/authelia/authelia/cmd/authelia/main.go:41 +0xc80
main.main.func1(0xc00009c000, 0xc0001e6100, 0x0, 0x2)
        github.com/authelia/authelia/cmd/authelia/main.go:126 +0x20
github.com/spf13/cobra.(*Command).execute(0xc00009c000, 0xc000020190, 0x2, 0x2, 0xc00009c000, 0xc000020190)
        github.com/spf13/cobra@v0.0.7/command.go:842 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc00009c000, 0xc0007cdf58, 0x4, 0x4)
        github.com/spf13/cobra@v0.0.7/command.go:943 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v0.0.7/command.go:883
main.main()
        github.com/authelia/authelia/cmd/authelia/main.go:143 +0x166
and the container is restarting.
I don't really understand where this behavior comes from. I've tried named volumes as well as bind mounts, but it is still the same error. Maybe someone can tell me where I'm making a (probably stupid) mistake, because I don't see it.
My compose.yml file:
version: '3.7'
services:
  authelia:
    image: "authelia/authelia:latest"
    container_name: authelia
    restart: "unless-stopped"
    # security_opt:
    #   - no-new-privileges:true
    networks:
      - "web"
      - "intern"
    volumes:
      - ./authelia:/var/lib/authelia
      - ./configuration.yml:/etc/authelia/configuration.yml:ro
      - ./users_database.yml:/etc/authelia/users_database.yml
      # Had to bind this volume; without it, docker creates its own volume with an empty
      # configuration.yml and users_database.yml
      - ./data:/etc/authelia
    environment:
      - TZ=$TZ
    labels:
      - "traefik.enable=true"
      # HTTP Routers
      - "traefik.http.routers.authelia-rtr.entrypoints=https"
      - "traefik.http.routers.authelia-rtr.rule=Host(`secure.$DOMAINNAME`)"
      - "traefik.http.routers.authelia-rtr.tls=true"
      - "traefik.http.routers.authelia-rtr.tls.certresolver=le"
      # Middlewares
      - "traefik.http.routers.authelia-rtr.middlewares=chain-no-auth@file"
      # HTTP Service
      - "traefik.http.routers.authelia-rtr.service=authelia-svc"
      - "traefik.http.services.auhtelia-svc.loadbalancer.server.port=9091"
networks:
  web:
    external: true
  intern:
    external: true
The files and folders under the volumes section exist, and configuration.yml is not empty. I use an admin (non-root) user with sudo permissions.
Can anybody tell me what I'm doing wrong and why authelia isn't able to find or read the configuration.yml?
Verify your configuration.yml file. These errors show up when your YAML syntax is incorrect. In particular:
double-check indentation,
put your domain names in quotation marks (this was my problem when I encountered it).
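For orientation, a minimal configuration.yml skeleton covering the keys the error messages complain about might look like this. This is only a sketch with placeholder values; check the Authelia documentation for the exact schema of your version:

```yaml
jwt_secret: replace_with_a_long_random_secret
authentication_backend:
  file:
    path: /etc/authelia/users_database.yml
session:
  domain: "example.com"   # quote domain names
storage:
  local:
    path: /etc/authelia/db.sqlite3
notifier:
  filesystem:
    filename: /etc/authelia/notification.txt
```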

ValueError: invalid width 0 (must be > 0)

I'm trying to use an expect script in an Arch Linux Docker container, but I get the following error:
spawn protonvpn init
[ -- PROTONVPN-CLI INIT -- ]
Traceback (most recent call last):
File "/usr/sbin/protonvpn", line 8, in <module>
sys.exit(main())
File "/usr/lib/python3.8/site-packages/protonvpn_cli/cli.py", line 73, in main
cli()
File "/usr/lib/python3.8/site-packages/protonvpn_cli/cli.py", line 96, in cli
init_cli()
File "/usr/lib/python3.8/site-packages/protonvpn_cli/cli.py", line 212, in init_cli
print(textwrap.fill(line, width=term_width))
File "/usr/lib/python3.8/textwrap.py", line 391, in fill
return w.fill(text)
File "/usr/lib/python3.8/textwrap.py", line 363, in fill
return "\n".join(self.wrap(text))
File "/usr/lib/python3.8/textwrap.py", line 354, in wrap
return self._wrap_chunks(chunks)
File "/usr/lib/python3.8/textwrap.py", line 248, in _wrap_chunks
raise ValueError("invalid width %r (must be > 0)" % self.width)
ValueError: invalid width 0 (must be > 0)
send: spawn id exp6 not open
while executing
"send "$env(ID)\r""
(file "/sbin/protonvpnActivate.sh" line 5)
But when I run it manually in the linux container everything goes well.
[root@e2c097bb81ed /]# /usr/bin/expect /sbin/protonvpnActivate.sh
spawn protonvpn init
[ -- PROTONVPN-CLI INIT -- ]
ProtonVPN uses two different sets of credentials, one for the website and official apps where the username is most likely your e-mail, and one for
connecting to the VPN servers.
You can find the OpenVPN credentials at https://account.protonvpn.com/account.
--- Please make sure to use the OpenVPN credentials ---
Enter your ProtonVPN OpenVPN username:
Enter your ProtonVPN OpenVPN password:
Confirm your ProtonVPN OpenVPN password:
Please choose your ProtonVPN Plan
1) Free
2) Basic
3) Plus
4) Visionary
Your plan: 3
Choose the default OpenVPN protocol.
OpenVPN can act on two different protocols: UDP and TCP.
UDP is preferred for speed but might be blocked in some networks.
TCP is not as fast but a lot harder to block.
Input your preferred protocol. (Default: UDP)
1) UDP
2) TCP
Your choice: 1
You entered the following information:
Username: xxx
Password: xxxxxx
Tier: Plus
Default protocol: UDP
Is this information correct? [Y/n]: Y
Writing configuration to disk...
Done! Your account has been successfully initialized.
Here is the launch script, which uses the expect command:
#!/usr/bin/expect
spawn protonvpn init
expect "username:"
send "$env(ID)\r"
expect "password:"
send "$env(PASSWORD)\r"
expect "password:"
send "$env(PASSWORD)\r"
expect "plan:"
send "3\r"
expect "choice:"
send "1\r"
expect "correct?"
send "Y\r"
expect eof
And here is the docker-compose.yml :
version: "3.7"
services:
protonvpn:
image: protonvpn:archlinux_template
environment:
- ID=***
- PASSWORD=******
volumes:
- "/opt/protonvpn/entrypoint.sh:/sbin/entrypoint.sh:rw"
- "/opt/protonvpn/protonvpnActivate.sh:/sbin/protonvpnActivate.sh:rw"
entrypoint: ["/bin/bash", "/sbin/entrypoint.sh"]
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun
stdin_open: true
privileged: true
restart: unless-stopped
In advance, thank you for all the help you can provide.
It looks like the script is trying to read the terminal width, but the terminal width is 0. Maybe try adding tty: true to the docker-compose, like so:
version: "3.7"
services:
protonvpn:
image: protonvpn:archlinux_template
environment:
- ID=***
- PASSWORD=******
volumes:
- "/opt/protonvpn/entrypoint.sh:/sbin/entrypoint.sh:rw"
- "/opt/protonvpn/protonvpnActivate.sh:/sbin/protonvpnActivate.sh:rw"
entrypoint: ["/bin/bash", "/sbin/entrypoint.sh"]
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun
stdin_open: true
tty: true
privileged: true
restart: unless-stopped
There is also an old docker bug that might be related: https://github.com/moby/moby/issues/33794
If that's the issue, you would edit your environment section to be the following:
environment:
  - COLUMNS=`tput cols`
  - LINES=`tput lines`
  - ID=***
  - PASSWORD=******
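The traceback itself is easy to reproduce: protonvpn-cli passes the detected terminal width to textwrap.fill, and when there is no TTY that width comes out as 0. A small shell sketch:

```shell
# With no controlling terminal, tools detect a width of 0; textwrap then
# raises exactly the error seen in the container log.
python3 - <<'EOF'
import textwrap
try:
    textwrap.fill("hello world", width=0)
except ValueError as e:
    print(e)  # -> invalid width 0 (must be > 0)
EOF
```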

rsyslog not connecting to elasticsearch in docker

I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog capture, transform and send these messages to elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup: it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
    networks:
      - logging-network
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.1
    depends_on:
      - logstash
    ports:
      - 5601:5601
    networks:
      - logging-network
  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    environment:
      - TZ=UTC
      - xpack.security.enabled=false
    ports:
      - 514:514/tcp
      - 514:514/udp
    volumes:
      - ./rsyslog.conf:/etc/rsyslog.conf:ro
      - rsyslog-work:/work
      - rsyslog-logs:/logs
volumes:
  rsyslog-work:
  rsyslog-logs:
networks:
  logging-network:
    driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitely
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not. Second, after putting the containers on the same network, log in to the rsyslog container and check whether port 9200 is reachable.
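Concretely, the rsyslog service in the compose file above is the only one without a networks entry, so it lands on the default network while Elasticsearch sits on logging-network. A sketch of the fix (rest of the service definition unchanged):

```yaml
  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    networks:
      - logging-network
    # ... environment, ports, volumes as before
```

Note that even on a shared network, localhost inside the rsyslog container refers to the container itself; omelasticsearch accepts a server parameter, so the action should target the Elasticsearch service by its compose service name, e.g. action(type="omelasticsearch" server="elasticsearch" ...).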
