I tried synchronising the timezone from host to container at runtime using:
docker run -v $(pwd)/Data:/code/Data -v /etc/timezone:/etc/timezone:ro -v /etc/localtime:/etc/localtime:ro --restart unless-stopped intermediateservice
This does not appear to work: running the docker logs command
docker logs -f -t zen_blackwell |tee output.log
produces timestamps that are approximately two hours behind:
2021-01-13T10:43:22.372893697Z Ready...
This is incorrect: running timedatectl on the host (Ubuntu 18.04 LTS, Bionic Beaver) shows:
timedatectl
Local time: Wed 2021-01-13 12:51:00 SAST
Universal time: Wed 2021-01-13 10:51:00 UTC
RTC time: Wed 2021-01-13 10:51:02
Time zone: Africa/Johannesburg (SAST, +0200)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
What am I missing here?
You can look at this answer:
For the docker run -e TZ={timezone} workaround to work tzdata of course has to be installed in the container you’re trying to run.
What I was initially looking for (Docker container automatically having same timezone as host system) can be achieved through an ugly hack in the run script.
It can query geoip.ubuntu.com (or any other geo-IP service) once it is started on the new network and then set the server's timezone based on the response:
https://askubuntu.com/a/565139
The askubuntu code snippet:
echo "Setting TimeZone..."
tz=$(wget -qO - http://geoip.ubuntu.com/lookup | sed -n -e 's/.*<TimeZone>\(.*\)<\/TimeZone>.*/\1/p') && timedatectl set-timezone "$tz"
# timedatectl prints "Time zone: <name> ...", so match "Time zone" and take the third field
tz=$(timedatectl status | grep "Time zone" | awk '{print $3}')
echo "TimeZone set to $tz"
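The fragile part of that hack is the XML scrape. It can be sanity-checked offline by piping a canned lookup response (the XML below is a made-up sample, not real geoip output) through the same sed expression:

```shell
# Simulate the geoip.ubuntu.com response and extract the <TimeZone>
# value with the same sed expression the run script uses.
sample='<Response><TimeZone>Africa/Johannesburg</TimeZone></Response>'
tz=$(printf '%s' "$sample" | sed -n -e 's/.*<TimeZone>\(.*\)<\/TimeZone>.*/\1/p')
echo "TimeZone parsed as $tz"
```

If the service ever changes its response format, this is the line that silently breaks, so testing it against a fixed sample is cheap insurance.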
I want to run some cron jobs in a Docker container and send the output to stdout. I read this post: How to run a cron job inside a docker container?
To try this out with a simple example, I created a demo crontab:
my-crontab:
* * * * * date > /dev/stdout 2> /dev/stderr
# empty line
Then I run an interactive shell inside a Docker container based on the image my scripts will need:
docker run -it --entrypoint bash python:3.10.3-bullseye
/# apt update
/# apt install cron
/# crontab < my-crontab
/# cron -f
If I wait 60 seconds, I expect to see some output to the console attached to the container once every minute. But there is no output.
Finally, I found the output in /var/spool/mail/mail. Here is one message:
From root@5e3c82cb3651 Tue May 10 20:04:02 2022
Return-path: <root@5e3c82cb3651>
Envelope-to: root@5e3c82cb3651
Delivery-date: Tue, 10 May 2022 20:04:02 +0000
Received: from root by 5e3c82cb3651 with local (Exim 4.94.2)
(envelope-from <root@5e3c82cb3651>)
id 1noW5S-0000SA-0T
for root@5e3c82cb3651; Tue, 10 May 2022 20:04:02 +0000
From: root@5e3c82cb3651 (Cron Daemon)
To: root@5e3c82cb3651
Subject: Cron <root@5e3c82cb3651> date > /dev/stdout 2> /dev/stderr
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1noW5S-0000SA-0T@5e3c82cb3651>
Date: Tue, 10 May 2022 20:04:02 +0000

Tue May 10 20:04:01 UTC 2022
Then it looks like /bin/sh is completely ignoring the shell redirection in the crontab.
@DavidMaze answered this in a comment (above; I can't find a link to it): redirecting to /proc/1/fd/1 (and /proc/1/fd/2 for stderr) works. Thank you, David.
Nevertheless, that's counterintuitive. /dev/stdout and /dev/stderr do exist as symlinks independent of cron, but they point to /proc/self/fd/1 and /proc/self/fd/2, not /proc/1/fd/1 and /proc/1/fd/2. For a command spawned by cron, "self" is that command's own process, whose stdout cron has already captured (in order to mail it), so cmd > /dev/stdout and cmd > /proc/1/fd/1 are not interchangeable in a crontab.
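On a typical Linux system you can check the symlink target directly; it points at /proc/self/fd/1, which resolves differently for every process that opens it:

```shell
# /dev/stdout is per-process: it is a symlink to /proc/self/fd/1,
# so what it refers to depends on which process opens it.
readlink /dev/stdout
```

For a cron child, "self" is the child itself, whose fd 1 cron has already replaced with a pipe, which is why only the explicit /proc/1/fd/1 path reaches the container's stdout.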
cron was written quite a while ago, and unsurprisingly it isn't Docker-friendly: it expects tasks to produce no output, and if they do, that is treated as an error and cron tries to email the output to the responsible user. There are tricks to make cron tasks' output show up in docker logs, but why not choose a Docker-friendly cron implementation?
One such implementation is supercronic:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
    && apk add --no-cache supercronic shadow \
    && useradd -m app
USER app
COPY crontab .
crontab:
* * * * * date
A gist with a bit more info.
Another good one is yacron, but it uses YAML.
ofelia can also be used, but it seems to focus on running tasks in separate containers. That's probably not a downside, but I'm not sure why I'd want to do that.
But if you insist on traditional ones, you can find a couple in my other answer.
I have a logrotate config:
/opt/docker_folders/logs/nginx/*.log {
dateext
daily
rotate 31
nocreate
missingok
notifempty
nocompress
postrotate
/usr/bin/docker exec -it nginx-container-name nginx -s reopen > /dev/null 2>/dev/null
endscript
su docker_nginx root
}
folder permissions:
drwxrwxr-x. 2 docker_nginx root 4096 Oct 13 10:22 nginx
nginx is a host folder mounted into the Docker container.
docker_nginx is a host user with the same uid as the nginx user inside the container (uid 101).
If I run these commands (as root):
# /sbin/logrotate -v /etc/logrotate.d/nginx_logrotate_config
# /sbin/logrotate -d -v /etc/logrotate.d/nginx_logrotate_config
# /sbin/logrotate -d -f -v /etc/logrotate.d/nginx_logrotate_config
All working like a charm.
Problem:
But when logs are rotated automatically by cron, I get the error
logrotate: ALERT exited abnormally with [1]
in /var/log/messages
As a result, the logs are rotated as usual, but nginx doesn't create the new files (access.log, etc.).
It looks like the postrotate nginx -s reopen script is failing.
Linux version is CentOS 7.
SELinux disabled.
Question:
At a minimum, how can I find out what happens when logrotate runs from cron?
And what might the problem be?
PS: I know I could also use docker restart, but I don't want the short service interruption.
PS2: I also know about the nocreate parameter in the config. It is there because I want nginx itself to create the new log files (to avoid wrong permissions on them). In any case, if nginx -s reopen really is failing, nginx may not reopen the newly created files.
EDIT1:
I edited the /etc/cron.daily/logrotate script and captured its logs.
There is only one line about the problem:
error: error running non-shared postrotate script for /opt/docker_folders/logs/nginx/access.log of '/opt/docker_folders/logs/nginx/*.log '
So I still don't understand what causes this problem... When I run the script manually, everything works fine.
Okay, answering my own question.
The -it parameters can't be used with cron tasks (and logrotate runs as a cron task), because cron has no interactive session (TTY).
I figured it out by running /usr/bin/docker exec -it nginx-container-name nginx -s reopen > /dev/null 2>/dev/null as a cron task myself: I got the error message "The input device is not a TTY".
So my new logrotate config looks like
/opt/docker_folders/logs/nginx/*.log {
dateext
daily
rotate 31
nocreate
missingok
notifempty
nocompress
postrotate
/usr/bin/docker exec nginx-container-name /bin/sh -c '/usr/sbin/nginx -s reopen > /dev/null 2>/dev/null'
endscript
su docker_nginx root
}
And it finally works.
I have to understand a parameter before using it.
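The check that bit here can be reproduced without Docker: docker exec -t requires stdin to be a terminal, which is the same condition the shell's [ -t 0 ] test reports. A minimal sketch:

```shell
# Prints the same kind of message docker gives when -t is used
# without a TTY, e.g. under cron or with stdin redirected.
if [ -t 0 ]; then
  echo "stdin is a TTY"
else
  echo "The input device is not a TTY"
fi
```

Run it in an interactive shell and it reports a TTY; run it from cron (or with </dev/null) and it takes the other branch, just like docker exec -it did.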
I am running a docker container for my development stack which I pulled from Docker Hub. The image was created for a different timezone than the one where my application will be deployed.
How do I change timezone in a docker container?
I tried to change the timezone config within the container by running
echo "Africa/Lusaka" > /etc/timezone
and restarted the container but I still get the same timezone.
You can override it during the run stage as suggested by @LinPy, but if you want to set it in your Dockerfile, you can use ENV, since tzdata is already present in the base image:
FROM postgres:10
ENV TZ="Africa/Lusaka"
RUN date
Build
docker build -t dbtest .
RUN
docker run -it dbtest -c "date"
Now you can verify on DB side by running
show timezone;
You will see Central Africa Time in both the container and Postgres.
In an Alpine-based image the environment variable alone will not work; the zoneinfo files come from the tzdata package. You will need to run:
RUN apk add --no-cache tzdata && \
    cp /usr/share/zoneinfo/Africa/Lusaka /etc/localtime && \
    echo "Africa/Lusaka" > /etc/timezone
There are a few ways to do it.
You can declare the time zone directly as an environment variable in the docker compose file
environment:
  - TZ=Asia/Singapore
  - DEBIAN_FRONTEND=noninteractive
Or you can map the container's timezone and localtime files to the host machine's in the docker-compose file:
volumes:
  - /etc/timezone:/etc/timezone:ro
  - /etc/localtime:/etc/localtime:ro
I personally prefer the second method; that way, all of my containers have the same time configuration as the host machine.
Simply point /etc/localtime at the desired zone under the /usr/share/zoneinfo directory.
Follow these steps:
First, open a bash session in your container:
docker exec -u 0 -it mycontainer bash
Then remove the existing symlink (/etc/localtime); sudo is unnecessary since -u 0 already gives a root shell:
rm -f /etc/localtime
Identify the timezone you want to configure and create a symbolic link for it.
For instance, I would like to set Asia/Tehran timezone:
ln -s /usr/share/zoneinfo/Asia/Tehran /etc/localtime
Now verify it by:
date
and the output should show your timezone:
Sat Jan 30 14:22:17 +0330 2021
The best way is to set the TZ environment variable in your run stage:
-e TZ=Africa/Lusaka
and make sure that the tzdata package is present in the container.
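The effect of the TZ variable (given tzdata on the system) can be seen with plain date, no container required:

```shell
# The same instant rendered under two zones; %Z prints the
# zone abbreviation the C library resolves for TZ.
TZ=UTC date +%Z
TZ=Africa/Lusaka date +%Z
```

With tzdata installed, the second line prints CAT (Central Africa Time); without it, the C library silently falls back to UTC, which is exactly the symptom described above.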
A simpler method would be to add an env var to your deployment:
env:
  - name: TZ
    value: "Europe/London"
(Kubernetes deployment YAML)
If you have TZ env set correctly and you still get the wrong time, make sure the tzdata system dependency is installed.
This question was about a postgres base image; mine was an Alpine base, but going by the Alpine Wiki, what I can glean of best practice makes my Dockerfile look like:
FROM alpine:3.14
RUN apk add --no-cache alpine-conf && \
setup-timezone -z Europe/London
https://wiki.alpinelinux.org/wiki/Alpine_setup_scripts#setup-timezone
For anyone using --env-file, add
# .env
TZ=Asia/Shanghai
to your .env file, and the container will get the timezone you want.
Use
ls /usr/share/zoneinfo/
to list all available zones.
I have a local Docker registry (registry:2.6.2), and my Web-UI constantly logs an error:
time="2019-08-13T13:58:43Z" level=error msg="Failed to retrieve an updated list of tags for http://172.20.0.20:5000" Error="Get http://172.20.0.20:5000/v2/myrepo/tags/list: http: non-successful response (status=404 body=\"{\\\"errors\\\":[{\\\"code\\\":\\\"NAME_UNKNOWN\\\",\\\"message\\\":\\\"repository name not known to registry\\\",\\\"detail\\\":{\\\"name\\\":\\\"myrepo\\\"}}]}\\n\")" Repository Name=myrepo file=allregistries.go line=71 source=ap
It happens because of the empty repository "myrepo" which exists in my registry:
curl -X GET http://172.20.0.20:5000/v2/_catalog
{"repositories":["myrepo","myrepo2","myrepo3"]}
curl -X GET http://172.20.0.20:5000/v2/myrepo/tags/list
{"errors":[{"code":"NAME_UNKNOWN","message":"repository name not known to registry","detail":{"name":"myrepo"}}]}
The question is, how to delete this empty repository?
NOTE: All example code below is executed with root privileges on Ubuntu 20.04 LTS.
I use the following command to delete the empty repositories:
docker exec -it registry sh -c '
for t in $(find /var/lib/registry/docker/registry/v2/repositories -name tags)
do
  TAGS=$(ls "$t" | wc -l)
  REPO=${t%%/_manifests/tags}
  LINKS=$(find "$REPO/_manifests/revisions" -name link | wc -l)
  if [ "$TAGS" -eq 0 ] && [ "$LINKS" -eq 0 ]; then
    echo "REMOVE empty repo: $REPO"
    rm -rf "$REPO"
  fi
done'
docker restart registry
Here registry is the name of the registry container running on the machine; you can check the name with docker ps:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3c40930b56b konradkleine/docker-registry-frontend:v2 "/bin/sh -c $START_S…" About an hour ago Up 42 minutes 80/tcp, 0.0.0.0:30443->443/tcp, :::30443->443/tcp reg-ui
787dd0c13058 registry:2.7.1 "/entrypoint.sh /etc…" 5 days ago Up 34 minutes 0.0.0.0:443->443/tcp, :::443->443/tcp, 5000/tcp registry
#
Make sure that both the number of tags and the number of links are zero before removing.
The registry container must be restarted if any directories were removed.
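The pruning logic can be dry-run outside the container against a mock directory tree. The layout below imitates the registry's on-disk storage format; the paths and repo names are made up for the test:

```shell
# Build a fake registry tree: myrepo is empty (no tags, no links),
# myrepo2 has one tag and one revision link and must survive.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/repositories/myrepo/_manifests/tags" \
         "$ROOT/repositories/myrepo/_manifests/revisions"
mkdir -p "$ROOT/repositories/myrepo2/_manifests/tags/latest" \
         "$ROOT/repositories/myrepo2/_manifests/revisions/sha256"
touch "$ROOT/repositories/myrepo2/_manifests/revisions/sha256/link"

# Same loop as in the container: remove repos with no tags and no links.
for t in $(find "$ROOT/repositories" -name tags); do
  TAGS=$(ls "$t" | wc -l)
  REPO=${t%%/_manifests/tags}
  LINKS=$(find "$REPO/_manifests/revisions" -name link | wc -l)
  if [ "$TAGS" -eq 0 ] && [ "$LINKS" -eq 0 ]; then
    echo "REMOVE empty repo: $REPO"
    rm -rf "$REPO"
  fi
done
```

Only the empty myrepo directory is removed; myrepo2 keeps its tag and link, matching the tags-and-links-both-zero rule above.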
If the command works fine, you can turn it into a script:
cat<<EOM | tee /usr/local/bin/docker-registry-remove-empty-repo.sh
#!/bin/bash
RM_LOG=\$(docker exec -it registry sh -c '
for t in \$(find /var/lib/registry/docker/registry/v2/repositories -name tags)
do
TAGS=\$(ls \$t | wc -l)
REPO=\${t%%\/_manifests\/tags}
LINKS=\$(find \$REPO/_manifests/revisions -name link | wc -l)
if [ "\$TAGS" -eq 0 -a "\$LINKS" -eq 0 ]; then
echo "REMOVE empty repo: \$REPO"
rm -rf \$REPO
fi
done')
if [ -n "\$RM_LOG" ]; then
echo "\$RM_LOG"
docker restart registry
fi
EOM
chmod +x /usr/local/bin/docker-registry-remove-empty-repo.sh
You can even run it every minute by registering the script in cron:
sudo sed -i '/docker-registry-remove-empty-repo.sh/d' /var/spool/cron/crontabs/$USER
cat <<EOM | tee -a /var/spool/cron/crontabs/$USER
* * * * * /usr/local/bin/docker-registry-remove-empty-repo.sh
EOM
systemctl restart cron
After that, everything is OK if you see entries like the following in the system log:
# tail -f /var/log/syslog
Jun 7 02:46:01 localhost CRON[38764]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:46:01 localhost CRON[38763]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:47:01 localhost CRON[38774]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:47:01 localhost CRON[38773]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:48:01 localhost CRON[38795]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:48:01 localhost CRON[38794]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:49:01 localhost CRON[38804]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:49:02 localhost CRON[38803]: (CRON) info (No MTA installed, discarding output)
Jun 7 02:50:01 localhost CRON[38814]: (root) CMD (/usr/local/bin/docker-registry-remove-empty-repo.sh)
Jun 7 02:50:01 localhost CRON[38813]: (CRON) info (No MTA installed, discarding output)
^C
#
I'm using Docker version:
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:25:01 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:25:01 UTC 2015
OS/Arch: linux/amd64
I'm on Centos 7
I have a Jenkins-container running in my Docker Environment.
When I access the Jenkins container and try to run a Docker command, I get this error:
libsystemd-journal.so.0: cannot open shared object file: No such file or directory
I tried:
[root@localhost lib64]# sudo ln -s /usr/lib64/libsystemd.so.0 libsystemd.so.0
ln: failed to create symbolic link ‘libsystemd.so.0’: File exists
I saw this issue after solving this: question
Here is the same issue: https://botbot.me/freenode/docker/2015-12-01/?page=4
After multiple comments on the previous question, the OP Jenson confirms making it work with:
I will have to make a dockerfile because the run command is too much.
But it works at the moment:
docker run -d --name jenkins --volumes-from jenkins-dv --privileged=true \
-t -i \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/bin/docker \
-v /lib64/libsystemd-journal.so.0:/usr/lib/libsystemd-journal.so.0 \
-v /lib64/libsystemd-id128.so.0:/usr/lib/libsystemd-id128.so.0 \
-v /lib64/libdevmapper.so.1.02:/usr/lib/libdevmapper.so.1.02 \
-v /lib64/libgcrypt.so.11:/usr/lib/libgcrypt.so.11 \
-v /lib64/libdw.so.1:/usr/lib/libdw.so.1 \
-p 8080:8080 jenkins
I mentioned that running docker from a container ("cic": "container-in-container") means mounting the docker executable and /var/run/docker.sock.
Apparently, that particular image needs a bit more to run from within a container.
For my development environment, I'm running docker-compose and I connect to an Ubuntu 14.04 LTS container (I mount /var/run/docker.sock as well).
Since an update of my host ubuntu system yesterday evening, I had the same error when I wanted to run a docker command inside the dev container :
[dev@docker_dev]:~$ docker ps
docker: error while loading shared libraries: libsystemd-journal.so.0: cannot open shared object file: No such file or directory
So I did an update and installed libsystemd-journal0:
[dev@docker_dev]:~$ sudo apt-get update
[dev@docker_dev]:~$ sudo apt-get install libsystemd-journal0
And now my dev container is working fine with docker commands
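When a binary complains about a missing shared object, ldd is the quickest way to see what it links against and what is unresolved. Here it is run against /bin/ls as a stand-in for the docker client; missing libraries show up as "not found":

```shell
# List the shared libraries /bin/ls needs; any line containing
# "not found" names a library that must be installed or mounted in.
ldd /bin/ls
```

Running ldd on the docker binary inside the container would have pointed straight at libsystemd-journal.so.0 as the unresolved dependency.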
From the error, it appears that a shared library required by your executable is missing. One way to resolve the issue:
Use the COPY instruction in the Dockerfile to copy the shared libraries/dependencies into the container. Example: COPY {local_path} {docker_path}
Then set the environment variable that is searched for shared libraries before the standard set of directories; on Linux this is LD_LIBRARY_PATH. Environment variables can be set via Docker's ENV instruction. Example: ENV LD_LIBRARY_PATH={docker_path}:$LD_LIBRARY_PATH
The other option is to statically link your binary with its dependencies (language dependent).
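The two steps above can be sketched as a Dockerfile fragment (the base image tag and the /opt/extra-libs path are placeholders, not taken from the question):

```dockerfile
FROM ubuntu:20.04
# 1. Copy the needed shared libraries from the build context into the image.
COPY ./libs/ /opt/extra-libs/
# 2. Have the dynamic linker search that directory before the standard ones.
ENV LD_LIBRARY_PATH=/opt/extra-libs:$LD_LIBRARY_PATH
```

Note that libraries copied this way must match the container's libc and architecture, which is why static linking is often the more robust option.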