Docker Compose to CoreOS

I'm currently learning Docker and have made a nice and simple Docker Compose setup: 3 containers, each with its own Dockerfile. How could I go about converting this to work on CoreOS so I can set up a cluster later on?
web:
  build: ./app
  ports:
    - "3030:3000"
  links:
    - "redis"

newrelic:
  build: ./newrelic
  links:
    - "redis"

redis:
  build: ./redis
  ports:
    - "6379:6379"
  volumes:
    - /data/redis:/data

Taken from https://docs.docker.com/compose/install/
The only thing is that /usr is read-only on CoreOS, but /opt/bin is writable and in the path, so:
sd-xx~ # mkdir /opt/
sd-xx~ # mkdir /opt/bin
sd-xx~ # curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   403    0   403    0     0   1076      0 --:--:-- --:--:-- --:--:--  1080
100 7990k  100 7990k    0     0  2137k      0  0:00:03  0:00:03 --:--:-- 3176k
sd-xx~ # chmod +x /opt/bin/docker-compose
sd-xx~ # docker-compose
Define and run multi-container applications with Docker.

Usage:
  docker-compose [options] [COMMAND] [ARGS...]
  docker-compose -h|--help

Options:
  -f, --file FILE           Specify an alternate compose file (default: docker-compose.yml)
  -p, --project-name NAME   Specify an alternate project name (default: directory name)
  --verbose                 Show more output
  -v, --version             Print version and exit

Commands:
  build              Build or rebuild services
  help               Get help on a command
  kill               Kill containers
  logs               View output from containers
  port               Print the public port for a port binding
  ps                 List containers
  pull               Pulls service images
  restart            Restart services
  rm                 Remove stopped containers
  run                Run a one-off command
  scale              Set number of containers for a service
  start              Start services
  stop               Stop services
  up                 Create and start containers
  migrate-to-labels  Recreate containers to add labels

I've created a simple script for installing the latest Docker Compose on CoreOS:
https://gist.github.com/marszall87/ee7c5ea6f6da9f8968dd
#!/bin/bash
mkdir -p /opt/bin
curl -L `curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r '.assets[].browser_download_url | select(contains("Linux") and contains("x86_64"))'` > /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose
Just run it with sudo.

The proper way to install or run really anything on CoreOS is either to:
- install it as a unit, or
- run it in a separate Docker container.
For docker-compose you probably want to install it as a unit, just like you have docker as a unit. See Digital Ocean's excellent guides on CoreOS and the systemd units chapter to learn more.
Locate your cloud config based on your cloud provider or custom installation, see https://coreos.com/os/docs/latest/cloud-config-locations.html for locations.
Install docker-compose by adding it as a unit
#cloud-config

coreos:
  units:
    - name: install-docker-compose.service
      command: start
      content: |
        [Unit]
        Description=Install docker-compose
        ConditionPathExists=!/opt/bin/docker-compose

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/mkdir -p /opt/bin/
        ExecStart=/usr/bin/curl -o /opt/bin/docker-compose -sL "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-linux-x86_64"
        ExecStart=/usr/bin/chmod +x /opt/bin/docker-compose
Note that I couldn't get the uname -s and uname -m expansions to work in the curl statement, so I just replaced them with their expanded values.
Validate your config file with
coreos-cloudinit -validate --from-file path-to-cloud-config
It should output something like
myhost core # coreos-cloudinit -validate --from-file path-to-cloudconfig
2016/12/12 12:45:03 Checking availability of "local-file"
2016/12/12 12:45:03 Fetching user-data from datasource of type "local-file"
myhost core #
Note that coreos-cloudinit doesn't validate the contents-blocks in your cloud-config. Restart CoreOS when you're finished, and you're ready to go.
Update: As @Wolfgang comments, you can run coreos-cloudinit --from-file path-to-cloud-config instead of restarting CoreOS.

I would also suggest running docker-compose in a Docker container, like the one from dduportal.
For the sake of usability I extended my cloud-config.yml as follows:
write_files:
  - path: "/etc/profile.d/aliases.sh"
    content: |
      alias docker-compose="docker run -v \"\$(pwd)\":\"\$(pwd)\" -v /var/run/docker.sock:/var/run/docker.sock -e COMPOSE_PROJECT_NAME=\$(basename \"\$(pwd)\") -ti --rm --workdir=\"\$(pwd)\" dduportal/docker-compose:latest"
After updating the cloud-config via sudo coreos-cloudinit -from-url http-path-to/cloud-config.yml and a system reboot, you are able to use the docker-compose command like you are used to on every other machine.
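For example, a trivial check after the reboot (assuming a project directory containing a docker-compose.yml):

cd ~/my-project
docker-compose up -d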

CenturyLink Labs created a rubygem called fig2coreos.
It translates fig.yml to .service files.
fig is deprecated since docker-compose was created, but the syntax seems to be the same, so it could probably work.
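For a rough idea of the output, the web service from the compose file above would map onto a systemd unit along these lines (an illustrative sketch of a generated .service file, not verbatim fig2coreos output; the container and image names are assumptions):

[Unit]
Description=web
After=redis.service
Requires=redis.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f web
ExecStart=/usr/bin/docker run --name web -p 3030:3000 --link redis:redis web
ExecStop=/usr/bin/docker stop web

[Install]
WantedBy=multi-user.target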

Simple 3 Steps:
sudo mkdir -p /opt/bin
Grab the command from the official website https://docs.docker.com/compose/install/ and change the output path from /usr/local/bin/docker-compose to /opt/bin:
sudo curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
Make executable:
sudo chmod +x /opt/bin/docker-compose
Now you have docker-compose :)

Here it is, the best way I found:

core@london-1 ~ $ docker pull dduportal/docker-compose
core@london-1 ~ $ cd /dir/where-it-is-your/docker-compose.yml
core@london-1 ~ $ docker run -v "$(pwd)":/app \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e COMPOSE_PROJECT_NAME=$(basename "$(pwd)") \
  -ti --rm \
  dduportal/docker-compose:latest up

Done!

Well, CoreOS supports Docker, but it is bare-bones Linux with clustering support, so you need a base image for all your containers (use FROM, and in the Dockerfile you might also need to RUN yum -y install bzip2 gnupg, etc.) that has the bins and libs needed by your app and redis (better to take some Ubuntu base image).
You can put all of them in one container, or keep them separate; if you keep them separate, you need to link the containers and optionally volume mount. Docker has some good notes about this (https://docs.docker.com/userguide/dockervolumes/).
At last, you need to write a cloud-config which specifies the systemd units. In your case you will have 3 units that will be started by systemd (systemd replaces the good old init system in CoreOS), and feed it to coreos-cloudinit (tip: coreos-cloudinit -from-file=./cloud-config -validate=false). You also need to provide this cloud-config on the Linux bootcmd for persistence.
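For illustration, the cloud-config entry for one such unit might look like this (a minimal sketch for the redis container; the unit name and docker run flags are assumptions based on the compose file in the question):

#cloud-config

coreos:
  units:
    - name: redis.service
      command: start
      content: |
        [Unit]
        Description=Redis container
        After=docker.service
        Requires=docker.service

        [Service]
        Restart=always
        ExecStartPre=-/usr/bin/docker rm -f redis
        ExecStart=/usr/bin/docker run --name redis -p 6379:6379 -v /data/redis:/data redis
        ExecStop=/usr/bin/docker stop redis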

Currently, the easiest way is to use docker-compose against a CoreOS Vagrant VM. You just need to make sure to forward the Docker port.
If you are not particularly attached to using docker-compose, you can try CoreOS running Kubernetes. There are multiple options and I have implemented one of those for Azure.

For using docker-compose with Fedora CoreOS you may run into issues with Python; however, running docker-compose from a container works perfectly.
There is a handy bash wrapper script, and it is documented in the official documentation here: https://docs.docker.com/compose/install/#alternative-install-options under the "Install as a container" section.
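A minimal sketch of such a wrapper, assuming the docker/compose image from Docker Hub (the official script is more elaborate):

#!/bin/sh
# Run docker-compose from a container: mount the Docker socket so it can
# talk to the host daemon, and mount the current directory so relative
# paths in docker-compose.yml resolve correctly.
exec docker run --rm -ti \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" \
  -w "$PWD" \
  docker/compose:latest "$@"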

Related

Docker is not running on Colab

I have tried to install Docker on Google Colab in the following ways:
(1)https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
(2)https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
(3)https://colab.research.google.com/drive/10OinT5ZNGtdLLQ9K399jlKgNgidxUbGP
I started the Docker service and checked its status, but it showed 'Docker is not running'. Maybe Docker cannot work on Colab.
I'm confused and would like to know the reason.
Thanks
It's possible to run Docker in Colab, but with limited functionality.
There are two methods of running the Docker service: a regular one (more restrictive) and rootless mode (dockerd inside RootlessKit).
dockerd
Install by:
!apt-get -qq install docker.io
Use the following shell script:
%%shell
set -x
# Start the Docker daemon in the background (see the notes on the arguments below).
dockerd -b none --iptables=0 -l warn &
# Wait up to ~10 seconds for the Docker socket to appear.
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker info
docker network ls
docker pull hello-world
docker pull ubuntu
# docker build -t myimage .
docker images
# Stop the daemon started above.
kill $(jobs -p)
As shown above, before each docker command you have to start the Docker service (dockerd) in the background and then kill it. Unfortunately, you have to run dockerd in each cell where you want to run docker commands.
Notes on dockerd arguments:
-b none/--bridge none - Disables a network bridge to avoid errors.
--iptables=0 - Disables addition of iptables rules to avoid errors.
-D - Add to enable debug mode.
However, in this mode most containers will generate errors related to the read-only file system.
Additional notes:
To disable cpuset support, run: !umount -vl /sys/fs/cgroup/cpuset.
Related issue: https://github.com/docker/for-linux/issues/1124.
Here are some notepads:
https://colab.research.google.com/drive/1Lmbkc7v7XjSWK64E3NY1cw7iJ0sF1brl
https://colab.research.google.com/drive/1RVS5EngPybRZ45PQRmz56PPdz9nWStIb (without cpuset support)
Rootless dockerd
Rootless mode allows running the Docker daemon and containers as a non-root user.
To install, use the following code:
%%shell
useradd -md /opt/docker docker
apt-get -qq install iproute2 uidmap
sudo -Hu docker SKIP_IPTABLES=1 bash < <(curl -fsSL https://get.docker.com/rootless)
To run the dockerd service, there are two methods: using a script (dockerd-rootless.sh) or running rootlesskit directly.
Here is the script which uses dockerd-rootless.sh to run a hello-world container:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
# Start the rootless Docker daemon in the background.
/opt/docker/bin/dockerd-rootless.sh --experimental --iptables=false --storage-driver vfs &
# Wait up to ~10 seconds for the Docker socket to appear.
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker run "$@"
jobs -p
kill $(jobs -p)
To run the above script:
!sudo -Hu docker bash -x docker-run.sh hello-world
The above may generate the following warnings:
WARN[0000] failed to mount sysfs, falling back to read-only mount: operation not permitted
To remount some folders with write access, you can try:
!mount -vt sysfs sysfs /sys -o rw,remount
!mount -vt tmpfs tmpfs /sys/fs/cgroup -o rw,remount
[rootlesskit:child ] error: executing [[ip tuntap add name tap0 mode tap] [ip link set tap0 address 02:50:00:00:00:01]]: exit status 1
The above error comes from the dockerd-rootless.sh script, which adds extra network parameters to rootlesskit, such as:
--net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin
This has been reported at https://github.com/rootless-containers/rootlesskit/issues/181 (however, it was ignored).
To work around the problem, we can pass our own arguments to rootlesskit using the following file instead:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
# Run rootlesskit directly with our own arguments instead of dockerd-rootless.sh.
rootlesskit --debug --disable-host-loopback --copy-up=/etc --copy-up=/run /opt/docker/bin/dockerd -b none --experimental --iptables=false --storage-driver vfs &
# Wait up to ~10 seconds for the Docker socket to appear.
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker "$@"
jobs -p
kill $(jobs -p)
Then run as:
!sudo -Hu docker bash docker-run.sh run --cap-add SYS_ADMIN hello-world
Depending on your image, this may generate the following error:
process_linux.go:449: container init caused "join session keyring: create session key: operation not permitted": unknown.
This could be solved by !sysctl -w kernel.keys.maxkeys=500; however, Colab doesn't allow it. Related: Error response from daemon: join session keyring: create session key: disk quota exceeded.
Notepad showing the above:
https://colab.research.google.com/drive/1oRja4v-PtY6lFMJIIF79No4s3s-vbqd4
Suggested further reading:
Finding the minimal set of privileges for a docker container.
I had the same issue, and apparently Docker is not supported in Google Colab, according to the answers on this issue in its GitHub repository: https://github.com/googlecolab/colabtools/issues/299#issuecomment-615308778.
I know it is an old question, but this is an old answer (2020) by a member of the Google Colaboratory team:
"this isn't possible, and we currently have no plans to support this."
The virtualization/isolation provided by Docker is available in Colab anyway: each Colab session is an isolated environment in itself where one can install the required libraries, with hardware abstraction (Colab by default offers a free GPU, which one can choose at runtime). I have used conda, and when I switched to Docker there was a distinct difference in performance: Docker never had GPU memory fragmentation, while conda (bare metal) did. I have been trying single Colab sessions for training in TF2 and will soon add testing and monitoring sessions (using TensorBoard), so I can fully evaluate whether having Docker in Colab is good or not. I will come back and post my feedback soon.

There is any "Podman Compose"?

I recently found out about Podman (https://podman.io). Having a way to use Linux fork processes instead of a Daemon and not having to run using root just got my attention.
But I'm very used to orchestrate the containers running on my machine (in production we use kubernetes) using docker-compose. And I truly like it.
So I'm trying to replace docker-compose. I will try to keep docker-compose and use podman as an alias for docker, since Podman uses the same syntax as Docker:
alias docker=podman
Will it work? Can you suggest any other tool? I really intend to keep my docker-compose.yml file, if possible.
Yes, that is doable now. Check podman-compose; this is one way of doing it. Another way is to convert the docker-compose YAML file to a Kubernetes deployment using Kompose. There is a blog post from Jérôme Petazzoni @jpetazzo: from docker-compose to kubernetes deployment.
Update 6 May 2022: Podman now supports Docker Compose v2.2 and higher (see the Podman 4.1.0 release notes)
Old answer:
Running docker-compose with Podman as a normal user (rootless)
Requirement: Podman version >= 3.2.1 (released in June 2021)
Install the executable docker-compose
curl -sL -o ~/docker-compose https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)
chmod 755 ~/docker-compose
Alternatively you could also run docker-compose in a container image (see below).
Run
systemctl --user start podman.socket
Set the environment variable DOCKER_HOST
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
Run
~/docker-compose up -d
Running docker-compose with Podman as root
Requirement: Podman version >= 3.0 (released in February 2021)
Follow the same procedure but remove the flag --user
systemctl start podman.socket
Running docker-compose in a container image
Use the container image docker.io/docker/compose to run docker-compose:
podman \
run \
--rm \
--detach \
--env DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock \
--security-opt label=disable \
--volume $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock \
--volume $(pwd):$(pwd) \
--workdir $(pwd) \
docker.io/docker/compose \
--verbose \
up -d
(the flag --verbose is optional)
The same command with short command-line options on a single line:
podman run --rm -d -e DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock --security-opt label=disable -v $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock -v $(pwd):$(pwd) -w $(pwd) docker.io/docker/compose --verbose up -d
Regarding SELinux: running Podman with SELinux is preferable from a security point of view, but I didn't get it to work on a Fedora 34 computer, so I disabled SELinux by adding the command-line option
--security-opt label=disable
Troubleshooting tips
Test the Docker REST API
A minimal check to see that the Docker REST API is working:
$ curl -H "Content-Type: application/json" \
--unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
http://localhost/_ping
OK$
Avoid short container image names
If any of your docker-compose.yaml or Dockerfile files contain a short container image name, for instance
$ grep image: docker-compose.yaml
image: mysql:8.0.19
$
$ grep FROM Dockerfile
FROM python:3.9
$
edit the files to use the whole container image name instead
$ grep image: docker-compose.yaml
image: docker.io/library/mysql:8.0.19
$
$ grep FROM Dockerfile
FROM docker.io/library/python:3.9
$
Most often, short names have been used to reference DockerHub Official Images (a catalogue), so a good guess would be to prepend the container image name with docker.io/library/.
There are currently many different container image registries, not just DockerHub (docker.io). Writing the whole container image name is thus a good practice. Podman might otherwise complain, depending on how Podman is configured.
Rootless users can't bind to ports below 1024
If for instance
$ grep -A1 ports: docker-compose.yml
ports:
- 80:80
$
edit docker-compose.yaml so that the host port number is >= 1024, for instance 8080:
$ grep -A1 ports: docker-compose.yml
ports:
- 8080:80
$
An alternative solution is to adjust net.ipv4.ip_unprivileged_port_start with sysctl (see Shortcomings of Rootless Podman)
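For example, to let rootless processes bind host ports from 80 upwards:

sudo sysctl net.ipv4.ip_unprivileged_port_start=80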
In case Systemd is missing
Most Linux distributions use Systemd where you would preferably start the Podman service (providing the REST API) by "starting" the Podman socket
systemctl --user start podman.socket
or
systemctl start podman.socket
but in case Systemd is missing you could also start the Podman service directly
podman system service --time 0 unix:/some/path/podman.sock
Systemd gives the extra benefit that the Podman service is started on demand with Systemd socket activation and stops after some time of inactivity.
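If you want the socket to come up automatically and keep working after you log out, you can enable it and, for the rootless case, turn on lingering (standard systemd commands):

systemctl --user enable --now podman.socket
loginctl enable-linger $USER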
Caveat: Swarm functionality is missing
A difference from Docker is that the functionality relating to Swarm is not supported when using docker-compose with Podman.
References:
https://www.redhat.com/sysadmin/podman-docker-compose
https://github.com/containers/podman/discussions/10644#discussioncomment-857897
Ensure Podman is installed on your machine.
You can install Podman Compose in a terminal with the following command:
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
cd into the directory your docker-compose file is located in
Run podman-compose up
See the following link for a decent introduction.

Running PHP, Apache and MySQL with Docker & docker-compose

I can't figure out how to use my containers after they are running in Docker (through docker-compose).
I've been reading up on Docker for the last few weeks, and I'm interested in building an environment that I could develop on and replicate efficiently to collaborate with my colleagues.
I've read through the following articles:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-centos-7
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-and-phpmyadmin-with-docker-compose-on-ubuntu-14-04
https://www.digitalocean.com/community/tutorials/how-to-work-with-docker-data-volumes-on-ubuntu-14-04
I've installed Docker and Docker Compose through a Bash script made up of the commands found in the previous articles (run as sudo):
## Install Docker package
function docker_install {
  echo "__ installing docker"
  # Run installer script
  wget -qO- https://get.docker.com/ | sh
  # Add user parameter to docker group
  usermod -aG docker $1
  # Enable docker on boot
  sudo systemctl enable docker.service
  # Start docker now
  sudo systemctl start docker.service
}

## Install docker-compose package
function docker-compose_install {
  echo "__ installing docker-compose"
  # Update yum repo
  sudo yum install epel-release
  # Install pip with "yes" flag
  sudo yum install -y python-pip
  # Install SSL Extension
  sudo pip install backports.ssl_match_hostname --upgrade
  # Install docker-compose through pip
  sudo pip install docker-compose
  # Upgrade python
  sudo yum upgrade python*
}

# Call Functions
docker_install "$SUDO_USER"  # pass the invoking user so usermod has a target
docker-compose_install
And I have a docker-compose.yml file for my Docker images. (For now I'm only pulling PHP and Apache as a proof-of-concept, I will include MySQL once can get PHP and Apache working together)
php:
  image: php
  links:
    - httpd

httpd:
  image: httpd
  ports:
    - 80:80
I run sudo docker-compose up -d and I don't receive any errors:
Creating containerwrap_httpd_1
Creating containerwrap_php_1
And my question is:
When I run php -v and service httpd status after the images are running, I receive:
-bash: php: command not found
and
● httd.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
If the images are running, why am I not able to use PHP or see the status of Apache? How am I supposed to utilize my PHP and Apache containers?
Update
I've asked a more informed version of this question again.
What you are missing is that containers are like different machines. Running the commands on your droplet, you will not see anything running in those machines; you need to connect to them. One easy way is to use:
docker exec -it CONTAINER_ID /bin/bash
This will give you access to each container's bash.
For your Apache server that would be containerwrap_httpd_1, and the name will always change unless you set up your docker-compose.yaml to use a constant name each time the service is created and started. Of course you can also curl localhost, or open a browser and browse to your Droplet's IP address, provided it forwards HTTP on the default port (or use the forwarding port). You will see Apache's default web page, because you have set up the 80:80 port mapping for the service.
It seems you have some misunderstanding of Docker's concepts. You can imagine each Docker container as a lightweight virtual machine (although a Docker container is different from a real VM). So basically, after you created the php and httpd containers, the php and httpd commands are only available inside those containers' bash. You cannot perform these commands on your host, because your host is a different machine from your containers. If you want to attach to a container's bash, check out the exec command. You should then be able to run php or httpd commands inside the containers.
If you want to connect to your php container from your host, you can try the docker run -p parameter, which publishes a container's port(s) to the host.
Or if you want to connect your php and httpd containers together, you should consider reading Docker's network documents to figure out how to use link or docker network.
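For example, with a user-defined network the containers can reach each other by name (the network and container names here are just for illustration; php:fpm and httpd are official images):

docker network create mynet
docker run -d --network mynet --name php php:fpm
docker run -d --network mynet --name httpd -p 80:80 httpd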

Docker-compose does not install or run properly on boot2docker

I have successfully installed docker-machine on my Windows computer, and I'm able to use the Docker CLI on my windows box to run docker commands on a boot2docker VM.
I have docker-machine version 0.2.0, and docker 1.6.2, and the VM yields "4.0.3-boot2docker" when I run "uname -r" on it.
Now I want to install docker-compose to manage that boot2docker VM. Does docker-compose run on my Windows machine and manage the VM "remotely", as docker does, or do I have to install it on the VM itself?
On a related note, I tried installing docker-compose on my VM by doing the following:
C:\ docker-machine ssh dev
$ whoami
docker
$ sudo -i
# curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
# exit
$ which docker
/usr/local/bin/docker
$ which docker-compose
/usr/local/bin/docker-compose
This is fine, but when I try to run docker-compose it doesn't work.
$ docker-compose up
-sh: docker-compose: not found
The file is in /usr/local/bin, and it has exactly the same privileges as docker.
docker@dev:/usr/local/bin$ ls -al do*
-rwxr-xr-x 1 root root 15443675 May 13 21:24 docker
-rwxr-xr-x 1 root root  5263681 May 19 00:09 docker-compose
docker@dev:/usr/local/bin$
Is there something I'm missing?
Have a good look at the curl output. It seems that the download URL is not valid anymore. I found that
curl -L https://github.com/docker/compose/releases/download/1.2.0/docker-compose-Linux-x86_64
gave
{"error":"Not Found"}
For me, the current release 1.3.2 worked well, i.e.:
curl -L https://github.com/docker/compose/releases/download/1.3.2/docker-compose-Linux-x86_64
NOTE: When using current CoreOS, don't try to output to /usr/local/bin/docker-compose as noted here. Instead use /opt/bin/docker-compose (the dir may need to be created first), i.e.:
mkdir -p /opt/bin
curl -L https://github.com/docker/compose/releases/download/1.3.2/docker-compose-Linux-x86_64 > /opt/bin/docker-compose
I found that the download links don't work for older versions and the "install" fails silently, resulting in the problem you describe. Have a look here to find a download link to a current version:
https://github.com/docker/compose/releases
Like mkoertgen said, you can always view the output from the curl command in the terminal to check that you don't get "not found" or something similar, or run cat /usr/local/bin/docker-compose to verify that it's not a text file containing "not found".
You can install docker-compose on your Windows host too.
It will manage your docker remotely. You can think of docker-compose as a more abstract interface to docker.
After running boot2docker init, run boot2docker shellinit | Invoke-Expression. This will tell docker and docker-compose where the docker server is running.
More info on installing it on Windows can be found here: http://docs.docker.com/installation/windows/

How to make sure docker's time syncs with that of the host?

I have Docker containers running on Linode servers. At times, I see that the time is not right in the containers. Currently I have changed the run script in every container to include the following lines of code:
yum install -y ntp
service ntpd stop
ntpdate pool.ntp.org
What I would ideally like to do however is that the docker should sync time with the host. Is there a way to do this?
The source for this answer is the comment to the answer at: Will docker container auto sync time with the host machine?
After looking at the answer, I realized that there is no way a clock drift will occur in the Docker container. Docker uses the same clock as the host, and the container cannot change it. That means doing an ntpdate inside the container does not work.
The correct thing to do is to update the host time using ntpdate.
As far as syncing timezones is concerned, -v /etc/localtime:/etc/localtime:ro works.
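For example, on the host (not inside a container), assuming ntpdate is installed:

sudo ntpdate pool.ntp.org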
You can add your local files (/etc/timezone and /etc/localtime) as volume in your Docker container.
Update your docker-compose.yml with the following lines.
volumes:
  - "/etc/timezone:/etc/timezone:ro"
  - "/etc/localtime:/etc/localtime:ro"
Now the container time is the same as on your host.
This will reset the time in the docker server:
docker run --rm --privileged alpine hwclock -s
Next time you create a container the clock should be correct.
Source: https://github.com/docker/for-mac/issues/2076#issuecomment-353749995
If you are using boot2docker and ntp doesn't work inside the docker VM (you are behind a proxy which does not forward ntp packets) but your host is time-synced, you can run the following from your host:
docker-machine ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
This way you are sending your machine's current time (in UTC timezone) as a string to set the docker VM time using date (again in UTC timezone).
NOTE: in Windows, inside a bash shell (from the msys git), use:
docker-machine.exe ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
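For reference, date -u +%m%d%H%M%Y packs the current UTC time into the MMDDhhmmYYYY form that the busybox date applet in boot2docker accepts when setting the clock:

date -u +%m%d%H%M%Y   # e.g. 061215042016 for 15:04 UTC on June 12, 2016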
This is what worked for me with a Fedora 20 host. I ran a container using:
docker run -v /etc/localtime:/etc/localtime:ro -i -t mattdm/fedora /bin/bash
Initially /etc/localtime was a soft link to /usr/share/zoneinfo/Asia/Kolkata, which is Indian Standard Time. Executing date inside the container showed me the same time as on the host. I exited the shell and stopped the container using docker stop <container-id>.
Next, I removed this file and linked it to /usr/share/zoneinfo/Singapore for testing purposes, so the host time was set to the Singapore time zone. Then I did docker start <container-id>, accessed its shell again using nsenter, and found that the time was now set to the Singapore time zone.
docker start <container-id>
docker inspect -f {{.State.Pid}} <container-id>
nsenter -m -u -i -n -p -t <PID> /bin/bash
So the key here is to use -v /etc/localtime:/etc/localtime:ro when you run the container first time. I found it on this link.
Hope it helps.
This easy solution fixed our time sync issue for the docker kernel in WSL2.
Open up PowerShell in Windows and run this command to resync the clock.
wsl -d docker-desktop -e /sbin/hwclock -s
You can then test it using
docker run -it alpine date
Reference: https://github.com/docker/for-win/issues/10347#issuecomment-776580765
I have the following in the compose file:

volumes:
  - "/etc/timezone:/etc/timezone:ro"
  - "/etc/localtime:/etc/localtime:ro"

Then all is good in the Gerrit Docker container, with its replication_log showing the correct timestamp.
If you're using docker-machine, the virtual machines can drift. To update the clock on the virtual machine without restarting, run:
docker-machine ssh <machine-name|default>
sudo ntpclient -s -h pool.ntp.org
This will update the clock on the virtual machine using NTP and then all the containers launched will have the correct date.
I was facing a time offset of -1 hour and 4 minutes.
Restarting Docker itself fixed the issue for me.
To set the timezone in general:
ssh into your container:
docker exec -it my_website_name bash
run dpkg-reconfigure tzdata
run date
docker-compose usage:
Add /etc/localtime:/etc/localtime:ro to the volumes attribute:
version: '3'
services:
  a-service:
    image: service-name
    container_name: container-name
    volumes:
      - /etc/localtime:/etc/localtime:ro
It appears there can be time drift if you're using Docker Machine, as this response suggests: https://stackoverflow.com/a/26454059/105562, due to VirtualBox.
Quick and easy fix is to just restart your VM:
docker-machine restart default
For docker on macOS, you can use docker-time-sync-agent. It works for me.
With Docker for Windows I had to tick
MobyLinuxVM > Settings > Integration Services > Time synchronization
in Hyper-V Manager, and it worked.
Windows users:
The solution is very simple. Simply open a PowerShell prompt and enter:
docker run --privileged --rm alpine date -s "$(Get-Date ([datetime]::UtcNow) -UFormat "+%Y-%m-%d %H:%M:%S")"
To check that it works, run the command:
docker run --rm -it alpine date
My solution is inspired by something I found in a Docker forum thread. Anyway, it was the only solution that worked for me on Docker Desktop, except for restarting my machine (which also works). Here's a link to the original thread: https://forums.docker.com/t/syncing-clock-with-host/10432/23
The difference between the thread answer and mine is that mine converts the time to UTC time, which is necessary for e.g. AWS. Otherwise, the original answer from the forum looks like this:
docker run --privileged --rm alpine date -s "$(date -u "+%Y-%m-%d %H:%M:%S")"
Although this is not a general solution for every host, someone may find it useful. If you know where you are based (the UK for me), then look at tianon's answer here:
FROM alpine:3.6
RUN apk add --no-cache tzdata
ENV TZ Europe/London
This is what I added to my Dockerfile ^ and the timezone problem was fixed.
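To verify the result, you could build the image and check the container's clock (tz-test is a hypothetical tag):

docker build -t tz-test .
docker run --rm tz-test date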
Docker Usage
Here's a complete example which builds a docker image for a go app in a multistage build. It shows how to include the timezone in your image.
FROM golang:latest as builder
WORKDIR /app
ENV GO111MODULE=on \
CGO_ENABLED=0 \
GOOS=linux \
GOARCH=amd64
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main
### Certs
FROM alpine:latest as locals
RUN apk --update --no-cache add ca-certificates
RUN apk add --no-cache tzdata
### App
FROM scratch
WORKDIR /root/
COPY --from=locals /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder app/main .
COPY --from=builder app/templates ./templates
COPY --from=locals /usr/share/zoneinfo /usr/share/zoneinfo
ENV TZ=Asia/Singapore
EXPOSE 8000
CMD ["./main"]
For me, restarting Docker Desktop did not help; shutting down Windows 10 and starting it again did.
I saw this on Windows, launching Prometheus from docker-compose. I had a 15-hour time drift.
If you are running Docker Desktop on WSL, you can try running wsl --shutdown from a powershell prompt.
Docker Desktop should restart, and you can try running your docker container again.
Worked for me, and I didn't have to restart.
I've discovered that if your computer goes to sleep then the Docker container goes out of sync:
https://forums.docker.com/t/time-in-container-is-out-of-sync/16566
I have made a post about it here: Certificate always expires 5 days ago in Docker
Enabling Hyper-V in Windows Features solved the problem.
For whatever reason, none of these answers solved my problem. It had nothing to do with the date/time of the images I was creating; it had to do with my local WSL date and time. Once I ran sudo ntpdate-debian everything worked.
If you don't have ntp, just install it and run the command. If you aren't using Debian, you probably won't have the ntpdate-debian shell script, but you can use ntpd -gq as well. Basically, just update the date in your main WSL distro.
This code worked for me:
docker run -e TZ="$(cat /etc/timezone)" myimage
