ValueError: invalid width 0 (must be > 0) - docker

I'm trying to use an expect script in an Arch Linux Docker container, but I get the following error:
spawn protonvpn init
[ -- PROTONVPN-CLI INIT -- ]
Traceback (most recent call last):
  File "/usr/sbin/protonvpn", line 8, in <module>
    sys.exit(main())
  File "/usr/lib/python3.8/site-packages/protonvpn_cli/cli.py", line 73, in main
    cli()
  File "/usr/lib/python3.8/site-packages/protonvpn_cli/cli.py", line 96, in cli
    init_cli()
  File "/usr/lib/python3.8/site-packages/protonvpn_cli/cli.py", line 212, in init_cli
    print(textwrap.fill(line, width=term_width))
  File "/usr/lib/python3.8/textwrap.py", line 391, in fill
    return w.fill(text)
  File "/usr/lib/python3.8/textwrap.py", line 363, in fill
    return "\n".join(self.wrap(text))
  File "/usr/lib/python3.8/textwrap.py", line 354, in wrap
    return self._wrap_chunks(chunks)
  File "/usr/lib/python3.8/textwrap.py", line 248, in _wrap_chunks
    raise ValueError("invalid width %r (must be > 0)" % self.width)
ValueError: invalid width 0 (must be > 0)
send: spawn id exp6 not open
while executing
"send "$env(ID)\r""
(file "/sbin/protonvpnActivate.sh" line 5)
But when I run it manually in the linux container everything goes well.
[root@e2c097bb81ed /]# /usr/bin/expect /sbin/protonvpnActivate.sh
spawn protonvpn init
[ -- PROTONVPN-CLI INIT -- ]
ProtonVPN uses two different sets of credentials, one for the website and official apps where the username is most likely your e-mail, and one for
connecting to the VPN servers.
You can find the OpenVPN credentials at https://account.protonvpn.com/account.
--- Please make sure to use the OpenVPN credentials ---
Enter your ProtonVPN OpenVPN username:
Enter your ProtonVPN OpenVPN password:
Confirm your ProtonVPN OpenVPN password:
Please choose your ProtonVPN Plan
1) Free
2) Basic
3) Plus
4) Visionary
Your plan: 3
Choose the default OpenVPN protocol.
OpenVPN can act on two different protocols: UDP and TCP.
UDP is preferred for speed but might be blocked in some networks.
TCP is not as fast but a lot harder to block.
Input your preferred protocol. (Default: UDP)
1) UDP
2) TCP
Your choice: 1
You entered the following information:
Username: xxx
Password: xxxxxx
Tier: Plus
Default protocol: UDP
Is this information correct? [Y/n]: Y
Writing configuration to disk...
Done! Your account has been successfully initialized.
Here is the launching script, which uses the expect command:
#!/usr/bin/expect
spawn protonvpn init
expect "username:"
send "$env(ID)\r"
expect "password:"
send "$env(PASSWORD)\r"
expect "password:"
send "$env(PASSWORD)\r"
expect "plan:"
send "3\r"
expect "choice:"
send "1\r"
expect "correct?"
send "Y\r"
expect eof
And here is the docker-compose.yml :
version: "3.7"
services:
  protonvpn:
    image: protonvpn:archlinux_template
    environment:
      - ID=***
      - PASSWORD=******
    volumes:
      - "/opt/protonvpn/entrypoint.sh:/sbin/entrypoint.sh:rw"
      - "/opt/protonvpn/protonvpnActivate.sh:/sbin/protonvpnActivate.sh:rw"
    entrypoint: ["/bin/bash", "/sbin/entrypoint.sh"]
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    stdin_open: true
    privileged: true
    restart: unless-stopped
Thanks in advance for any help you can provide.

It looks like the script is trying to read the terminal width, but the terminal width is 0. Maybe try adding tty: true to your docker-compose file, like so:
version: "3.7"
services:
  protonvpn:
    image: protonvpn:archlinux_template
    environment:
      - ID=***
      - PASSWORD=******
    volumes:
      - "/opt/protonvpn/entrypoint.sh:/sbin/entrypoint.sh:rw"
      - "/opt/protonvpn/protonvpnActivate.sh:/sbin/protonvpnActivate.sh:rw"
    entrypoint: ["/bin/bash", "/sbin/entrypoint.sh"]
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    stdin_open: true
    tty: true
    privileged: true
    restart: unless-stopped
There is also an old docker bug that might be related: https://github.com/moby/moby/issues/33794
If that's the issue, you would edit your environment section to be the following:
environment:
  - COLUMNS=`tput cols`
  - LINES=`tput lines`
  - ID=***
  - PASSWORD=******
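That diagnosis matches how pseudo-terminals behave: a freshly allocated pty reports a 0x0 window size until something sets it, so a terminal-size query inside the container returns width 0 and textwrap.fill(..., width=0) raises exactly the error in the traceback. A minimal Python sketch (separate from protonvpn-cli, which may query the size differently) that reproduces both the failure and the effect of the fix:

```python
import fcntl
import os
import pty
import struct
import termios
import textwrap

# A brand-new pty (like the one expect spawns, or a container with no
# `tty: true`) reports a 0x0 window size on Linux.
master, slave = pty.openpty()
print(os.get_terminal_size(slave).columns)  # 0

# textwrap.fill refuses width 0 -- this is the crash in the traceback.
try:
    textwrap.fill("some text", width=0)
except ValueError as e:
    print(e)  # invalid width 0 (must be > 0)

# Giving the pty a real size (what an attached terminal, or an explicit
# `stty columns 80`, would do) fixes the width query.
fcntl.ioctl(slave, termios.TIOCSWINSZ, struct.pack("HHHH", 24, 80, 0, 0))
print(os.get_terminal_size(slave).columns)  # 80
os.close(master)
os.close(slave)
```

This is why either attaching a real terminal (tty: true) or exporting sane COLUMNS/LINES values makes the init step survive.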

Related

FreeIPA Docker Compose WEB UI

After spending hours searching why I cannot access my web UI, I turn to you.
I set up FreeIPA on Docker using docker-compose. I opened some ports to gain remote access via host-ip:port from my own computer. FreeIPA is supposed to run on my server (let's say 192.168.1.2) and the web UI should be accessible from any other local computer on port 80/443 (192.168.1.4:80 or 192.168.1.4:443).
When I run my .yaml file, FreeIPA gets set up with a "the ipa-server-install command was successful" message.
I thought it could come from my tight iptables rules and tried to set all policies to ACCEPT to debug. It didn't help.
I'm a bit lost as to how I could debug this or find how to fix it.
OS : ubuntu 20.04.3
Docker version: 20.10.12, build e91ed57
freeipa image: freeipa/freeipa:centos-8-stream
Docker-compose version: 1.29.2, build 5becea4c
My .yaml file:
version: "3.8"
services:
  freeipa:
    image: freeipa/freeipa-server:centos-8-stream
    hostname: sanctuary
    domainname: serv.sanctuary.local
    container_name: freeipa-dev
    ports:
      - 80:80
      - 443:443
      - 389:389
      - 636:636
      - 88:88
      - 464:464
      - 88:88/udp
      - 464:464/udp
      - 123:123/udp
    dns:
      - 10.64.0.1
      - 1.1.1.1
      - 1.0.0.1
    restart: unless-stopped
    tty: true
    stdin_open: true
    environment:
      IPA_SERVER_HOSTNAME: serv.sanctuary.local
      IPA_SERVER_IP: 192.168.1.100
      TZ: "Europe/Paris"
    command:
      - -U
      - --domain=sanctuary.local
      - --realm=sanctuary.local
      - --admin-password=pass
      - --http-pin=pass
      - --dirsrv-pin=pass
      - --ds-password=pass
      - --no-dnssec-validation
      - --no-host-dns
      - --setup-dns
      - --auto-forwarders
      - --allow-zone-overlap
      - --unattended
    cap_add:
      - SYS_TIME
      - NET_ADMIN
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - ./data:/data
      - ./logs:/var/logs
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.lo.disable_ipv6=0
    security_opt:
      - "seccomp:unconfined"
    labels:
      - dev
I tried to tinker with the deployment file (adding or removing config found on the internet, such as adding/removing IPA_SERVER_IP, or adding/removing an external bridge network).
Thank you very much for any help =)
Alright, for those who might have the same problem, I will explain everything I did to debug this.
I relied extensively on the answers found here: https://floblanc.wordpress.com/2017/09/11/troubleshooting-freeipa-pki-tomcatd-fails-to-start/
First, I checked the status of each service with ipactl status. Depending on the problem you might have different output, but mine was like this:
Directory Service: RUNNING
krb5kdc Service: RUNNING
kadmin Service: RUNNING
named Service: RUNNING
httpd Service: RUNNING
ipa-custodia Service: RUNNING
pki-tomcatd Service: STOPPED
ipa-otpd Service: RUNNING
ipa-dnskeysyncd Service: RUNNING
ipa: INFO: The ipactl command was successful
I therefore checked the Tomcat logs in /var/log/pki/pki-tomcat/ca/debug-xxxx. I realised I had a connection refused error related to the certificates.
First, I checked that my certificate was present in /etc/pki/pki-tomcat/alias using sudo certutil -L -d /etc/pki/pki-tomcat/alias -n 'subsystemCert cert-pki-ca'.
## output :
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4 (0x4)
...
...
Then I made sure that the private key could be read using the password found in /var/lib/pki/pki-tomcat/conf/password.conf (with the tag internal=…):
grep internal /var/lib/pki/pki-tomcat/conf/password.conf | cut -d= -f2 > /tmp/pwdfile.txt
certutil -K -d /etc/pki/pki-tomcat/alias -f /tmp/pwdfile.txt -n 'subsystemCert cert-pki-ca'
I still found nothing strange, so at this point I assumed that:
pki-tomcat is able to access the certificate and the private key
the issue is likely on the LDAP server side
I tried to read the user entry in LDAP to compare it to the certificate using ldapsearch -LLL -D 'cn=directory manager' -W -b uid=pkidbuser,ou=people,o=ipaca userCertificate description seeAlso but got an error after entering the password. Because my certs were OK and the LDAP service was running, I assumed something was off with the certificate dates.
Indeed, during the install FreeIPA sets up the certs using your current system date as a base. But it also installs chrony for server time synchronization. After a reboot, my chrony conf was wrong and set my host date 2 years ahead.
I couldn't figure out the problem with the chrony conf, so I stopped the service and set the date manually using timedatectl set-time "yyyy-mm-dd hh:mm:ss".
I restarted the FreeIPA services and my pki-tomcat service was working again.
After that, I set the FreeIPA IP in my router as DNS. I restarted the services and the computers on the local network so the DNS config was refreshed. After that, the web UI was accessible!
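For anyone wondering why a wrong clock breaks pki-tomcat at all: a certificate is only valid between its notBefore and notAfter dates, so a host clock jumped two years ahead lands outside that window and every TLS handshake fails. A small illustrative Python sketch with made-up dates (not the actual FreeIPA certificates):

```python
from datetime import datetime, timedelta

# Hypothetical validity window, as written into a cert at install time.
not_before = datetime(2022, 1, 10)
not_after = not_before + timedelta(days=2 * 365)  # ~2-year lifetime

def cert_is_valid(now):
    """A cert is trusted only while `now` falls inside its window."""
    return not_before <= now <= not_after

print(cert_is_valid(datetime(2022, 6, 1)))  # True: clock is correct
print(cert_is_valid(datetime(2024, 6, 1)))  # False: clock 2 years ahead
```

This is why fixing the date with timedatectl was enough to bring the service back without touching the certificates themselves.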

Ghost Docker SMTP setup

I created a Ghost instance on my VPS with the official docker-compose file of the Ghost CMS, and I modified it to use a Mailgun SMTP account as follows:
version: '3.1'
services:
  mariadb:
    image: 'docker.io/bitnami/mariadb:10.3-debian-10'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=bn_ghost
      - MARIADB_DATABASE=bitnami_ghost
    volumes:
      - 'mariadb_data:/bitnami'
  ghost:
    image: 'ghost:3-alpine'
    environment:
      MARIADB_HOST: mariadb
      MARIADB_PORT_NUMBER: 3306
      GHOST_DATABASE_USER: bn_ghost
      GHOST_DATABASE_NAME: bitnami_ghost
      GHOST_HOST: localhost
      mail__transport: SMTP
      mail__options__service: Mailgun
      mail__auth__user: ${MY_MAIL_USER}
      mail__auth__pass: ${MY_MAIL_PASS}
      mail__from: ${MY_FROM_ADDRESS}
    ports:
      - '80:2368'
    volumes:
      - 'ghost_data:/bitnami'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  ghost_data:
    driver: local
But when I try to invite authors to the site, it gives me the following error:
Failed to send 1 invitation: dulara@thinksmart.lk. Please check your email configuration, see https://ghost.org/docs/concepts/config/#mail for instructions
I am certain that my SMTP credentials are correct.
I logged in to the Ghost container's bash shell and checked its files there.
Its mail section is empty.
I still can't find my mistake. I am not sure about the variable names, but I took them from the official documentation.
My example:
url=https://www.exemple.com/
# admin__url=XXX // Remove it (For my side, the redirection is failed)
database__client=mysql
database__connection__host=...
database__connection__port=3306
database__connection__database=ghost
database__connection__user=ghost
database__connection__password=XXX
privacy__useRpcPing=false
mail__transport=SMTP
mail__options__host=smtp.exemple.com
mail__options__port=587
# mail__options__service=Exemple // Remove it
mail__options__auth__user=sys@exemple.com
mail__options__auth__pass=XXX
# mail__options__secureConnection=true // Remove it
mail__from=Exemple Corp. <sys@exemple.com>
In your case, change:
mail__auth__user => mail__options__auth__user
mail__auth__pass => mail__options__auth__pass
And delete mail__options__service.
(https://github.com/metabase/metabase/issues/4272#issuecomment-566928334)
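The renaming matters because Ghost, like many Node apps, maps environment variables onto its nested config by splitting names on double underscores, so every nesting level must appear in the variable name. A rough Python illustration of that convention (not Ghost's actual implementation):

```python
def env_to_config(env):
    """Split keys on '__' and build the corresponding nested dict."""
    config = {}
    for key, value in env.items():
        *parents, leaf = key.split("__")
        node = config
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return config

# mail__auth__user lands under mail.auth, a path Ghost never reads;
# mail__options__auth__user lands under mail.options.auth as required.
print(env_to_config({"mail__auth__user": "me"}))
print(env_to_config({"mail__options__auth__user": "me"}))
```

With the shorter name, the credentials end up in a config path the mailer never looks at, which is why the mail section appeared empty.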

Certificate of K3S cluster

I'm using a K3s cluster in a docker(-compose) container in my CI/CD pipeline to test my application code. However, I have a problem with the cluster's certificate. I need to communicate with the cluster using the external address. My docker-compose file looks as follows:
version: '3'
services:
  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent
    environment:
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      # get the kubeconfig file
      - .:/output
    ports:
      # - 6443:6443
      - 6080:6080
      - 192.168.2.110:6443:6443
  node:
    image: rancher/k3s:v0.8.1
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
    ports:
      - 31000-32000:31000-32000
volumes:
  k3s-server: {}
Accessing the cluster from Python gives me:
MaxRetryError: HTTPSConnectionPool(host='192.168.2.110', port=6443): Max retries exceeded with url: /apis/batch/v1/namespaces/mlflow/jobs?pretty=True (Caused by SSLError(SSLCertVerificationError("hostname '192.168.2.110' doesn't match either of 'localhost', '172.19.0.2', '10.43.0.1', '172.23.0.2', '172.18.0.2', '172.23.0.3', '127.0.0.1', '0.0.0.0', '172.18.0.3', '172.20.0.2'")))
Here are my two (three) questions:
How can I add additional IP addresses to the certificate generation? I was hoping the --bind-address in the server command would trigger that.
How can I fall back on HTTP? Providing an --http-listen-port didn't achieve the expected result.
Any other suggestions for how I can enable communication with the cluster?
Changing the Python code is not really an option, as I would like to keep the code unaltered for testing. (Falling back on HTTP works via kubeconfig.)
The solution is to use the parameter tls-san
server --disable-agent --tls-san 192.168.2.110
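Some background on why this works: a TLS client accepts a server certificate only if the hostname or IP it dialed appears among the certificate's subjectAltName entries, and --tls-san adds 192.168.2.110 to that list when k3s generates its cert. A deliberately simplified Python sketch of the check (real verification, e.g. in the ssl module, also handles wildcards and IP parsing):

```python
def host_matches_cert(host, sans):
    """Accept the cert only if the dialed host appears among its SANs."""
    return host in sans

# SANs similar to those in the error message above (abridged)
sans = ["localhost", "172.19.0.2", "10.43.0.1", "127.0.0.1", "0.0.0.0"]
print(host_matches_cert("192.168.2.110", sans))                       # False
print(host_matches_cert("192.168.2.110", sans + ["192.168.2.110"]))  # True
```

So the SSLCertVerificationError disappears once the external address is baked into the certificate, with no change to the client code.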

Error output when running a postfix container

I have the following in my docker-compose.yml file:
simplemail:
  image: tozd/postfix
  ports:
    - "25:25"
So far, so good. But I get the following output when I run docker-compose run simplemail:
rsyslogd: cannot create '/var/spool/postfix/dev/log': No such file or directory
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted. rsyslogd: activation of module imklog failed [try http://www.rsyslog.com/e/2145 ]
rsyslogd: Could not open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]
 * Starting Postfix Mail Transport Agent postfix [ OK ]
What can I do to fix the errors above?
The documentation for the tozd/postfix image states:
You should make sure you mount the spool volume (/var/spool/postfix) so that you do not lose e-mail data when you are recreating a container. If a volume is empty, the image will initialize it at the first startup.
Your docker-compose.yml file should then be:
version: "3"
volumes:
  postfix-data: {}
services:
  simplemail:
    image: tozd/postfix
    ports:
      - "25:25"
    volumes:
      - postfix-data:/var/spool/postfix

Pihole and Unbound in Docker Containers - Unbound Not Receiving Requests

I'm trying to run 2 Docker containers on a Raspberry Pi 3, one for Unbound and one for Pi-hole. The idea is that Pi-hole will first block any requests before using Unbound as its DNS server. I've been following Pi-hole's documentation to get this running (found here) and have got both containers starting, and Pi-hole working. However, when running docker exec pihole dig pi-hole.net @127.0.0.1 -p 5333 or -p 5354 I get a response of:
; <<>> DiG 9.10.3-P4-Debian <<>> pi-hole.net @127.0.0.1 -p 5354
;; global options: +cmd
;; connection timed out; no servers could be reached
I theorized this could be due to the Pi-hole container not being able to communicate with the Unbound container through localhost, so I updated my docker-compose to try to correct this using a network bridge. However, after that I still get the same error, no matter what ports I try. I'm new to Docker and Unbound, so this has been a bit of a dive in at the deep end! My docker-compose.yml and unbound.conf are below.
docker-compose.yml
version: "3.7"
services:
  unbound:
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
    container_name: unbound
    image: masnathan/unbound-arm
    ports:
      - 8953:8953/tcp
      - 5354:53/udp
      - 5354:53/tcp
      - 5333:5333/udp
      - 5333:5333/tcp
    volumes:
      - ./config/unbound.conf:/etc/unbound/unbound.conf
      - ./config/root.hints:/var/unbound/etc/root.hints
    restart: always
    networks:
      - unbound-pihole
  pihole:
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - 53:53/udp
      - 53:53/tcp
      - 67:67/udp
      - 80:80
      - 443:443
    volumes:
      - ./config/pihole/:/etc/pihole/
    environment:
      - ServerIP=10.0.0.20
      - TZ=UTC
      - WEBPASSWORD=random
      - DNS1=127.0.0.1#5333
      - DNS2=no
    restart: always
    networks:
      - unbound-pihole
networks:
  unbound-pihole:
    driver: bridge
unbound.conf
server:
    # If no logfile is specified, syslog is used
    # logfile: "/var/log/unbound/unbound.log"
    verbosity: 0
    port: 5333
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    # May be set to yes if you have IPv6 connectivity
    do-ip6: no
    # Use this only when you downloaded the list of primary root servers!
    root-hints: "/var/unbound/etc/root.hints"
    # Trust glue only if it is within the server's authority
    harden-glue: yes
    # Require DNSSEC data for trust-anchored zones; if such data is absent, the zone becomes BOGUS
    harden-dnssec-stripped: yes
    # Don't use capitalization randomization, as it is known to sometimes cause DNSSEC issues
    # see https://discourse.pi-hole.net/t/unbound-stubby-or-dnscrypt-proxy/9378 for further details
    use-caps-for-id: no
    # Reduce EDNS reassembly buffer size.
    # Suggested by the unbound man page to reduce fragmentation reassembly problems
    edns-buffer-size: 1472
    # TTL bounds for cache
    cache-min-ttl: 3600
    cache-max-ttl: 86400
    # Perform prefetching of close-to-expired message cache entries
    # This only applies to domains that have been frequently queried
    prefetch: yes
    # One thread should be sufficient; can be increased on beefy machines
    num-threads: 1
    # Ensure kernel buffer is large enough to not lose messages in traffic spikes
    so-rcvbuf: 1m
    # Ensure privacy of local IP ranges
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    private-address: 172.16.0.0/12
    private-address: 10.0.0.0/8
    private-address: fd00::/8
    private-address: fe80::/10
Thanks!
From the docs (https://nlnetlabs.nl/documentation/unbound/unbound.conf/), under the access-control section:
By default only localhost is allowed, the rest is refused. The default is refused, because that is protocol-friendly. The DNS protocol is not designed to handle dropped packets due to policy, and dropping may result in (possibly excessive) retried queries.
The unbound server, by default, listens for connections from localhost only. In this case, requests to the DNS server will only be accepted from inside the Docker container running unbound.
Therefore, to allow the DNS to be resolved by unbound in this docker-compose setup, add the following to unbound.conf:
server:
    access-control: 0.0.0.0/0 allow
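To see why the queries timed out: unbound applies the most specific matching access-control rule, and a non-localhost client that only matches the default gets refused. In the compose file above, Pi-hole reaches unbound from its bridge-network address (something like 172.18.x.x), not from 127.0.0.1. A rough Python model of that rule matching, using the standard ipaddress module (the rule set is illustrative, not unbound's code):

```python
import ipaddress

# Illustrative default: only localhost is allowed; anything that
# matches no rule is refused.
rules = [(ipaddress.ip_network("127.0.0.0/8"), "allow")]

def action_for(client_ip):
    """Return the action of the most specific matching rule."""
    ip = ipaddress.ip_address(client_ip)
    matches = [(net, act) for net, act in rules if ip in net]
    if not matches:
        return "refuse"
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(action_for("127.0.0.1"))   # allow  (inside the unbound container)
print(action_for("172.18.0.3"))  # refuse (the pihole container's address)

# The suggested `access-control: 0.0.0.0/0 allow` adds a catch-all rule:
rules.append((ipaddress.ip_network("0.0.0.0/0"), "allow"))
print(action_for("172.18.0.3"))  # allow
```

A tighter alternative would be to allow only the compose network's subnet rather than 0.0.0.0/0, since the containers sit on a private bridge.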
