How to disable clock sync in the docker-desktop VM on macOS

I'm using macOS Catalina and Docker Desktop.
The time in the containers syncs perfectly with the time in the Docker Desktop VM.
But I need to test one container with a date in the future.
I don't want to advance my Mac's clock because of iCloud services.
So I should be able to achieve this by just changing the time in the docker-desktop VM.
I run:
docker run --privileged --rm alpine date -s "2023-02-19 11:27"
It changes the time OK, but it only lasts a few seconds. Clearly there is some kind of "synchronizer" that keeps changing the time back.
How do I disable this "synchronizer"?

There's only one time in Linux; it's not namespaced. So when Docker runs NTP in the VM to keep it synchronized (in the past it would get out of sync, especially after the host laptop was put to sleep), that sync applies to the Linux kernel, which applies to every container, since it's the same kernel value for everything. Therefore it's impossible to set this for just one container in the Linux kernel.
Instead, I'd recommend going with something like libfaketime, which can be used to alter the response applications see when they query the time. It basically sits as a layer between the kernel and the application, and injects an offset based on an environment variable you set.
# Build a test image with libfaketime preloaded
FROM debian
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
 && apt-get install -y libfaketime \
 && rm -rf /var/lib/apt/lists/*
# Preload libfaketime into every process in the container
ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1
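Build the image first; the tag test-faketime is the one the run examples below assume:
docker build -t test-faketime .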
And then to run it, set FAKETIME:
$ docker run --rm test-faketime date
Thu Feb 17 14:59:48 UTC 2022
$ docker run -e FAKETIME="+7d" --rm test-faketime date
Thu Feb 24 14:59:55 UTC 2022
$ date
Thu 17 Feb 2022 09:59:57 AM EST
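Since the original goal was a fixed future date rather than an offset, note that libfaketime also accepts absolute timestamps. A hedged example (the @ prefix starts the fake clock at that instant; exact format support may vary by libfaketime version):
$ docker run -e FAKETIME="@2023-02-19 11:27:00" --rm test-faketime date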

I found that you can kill the SNTP service that syncs the VM's time to the host's time.
First, use this guide to get a shell inside the VM.
Then, find the sntpc service:
/ # ps a | grep sntpc
1356 root 0:00 /usr/bin/containerd-shim-runc-v2 -namespace services.linuxkit -id sntpc -address /run/containerd/containerd.sock
1425 root 0:00 /usr/sbin/sntpc -v -i 30 127.0.0.1
3465 root 0:00 grep sntpc
Take the number at the beginning of the /usr/sbin/sntpc line, and use kill to stop the process.
/ # kill 1425
I have found that Docker Desktop does not seem to restart this process if it dies, and you can change the VM time without SNTPC changing it back.
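If you'd rather not hunt for the PID interactively, a rough one-liner along the same lines is below. This is an untested sketch: it assumes BusyBox's nsenter and pkill applets are available in the alpine image, and that the process is still named sntpc in your Docker Desktop version.
docker run --rm --privileged --pid=host alpine \
  nsenter -t 1 -m -u -n -i sh -c 'pkill sntpc; date -s "2023-02-19 11:27"'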

Related

Fedora 36 - Docker - Cannot connect to the Docker daemon [closed]

OS: Fedora 36
I noticed this when my docker containers stopped working out of the blue; Fedora said docker-compose stopped working. After system updates and a restart, I did the following:
sudo service docker start
Which worked, as I then ran sudo service docker status:
redirecting to /bin/systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor p>
Active: active (running) since Wed 2022-09-14 10:29:01 MDT; 1s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 2778 (dockerd)
Tasks: 22
Memory: 114.0M
CPU: 347ms
CGroup: /system.slice/docker.service
└─ 2778 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/con>
Sep 14 10:29:00 fedora dockerd[2778]: time="2022-09-14T10:29:00.385376990-06:00>
Sep 14 10:29:00 fedora dockerd[2778]: time="2022-09-14T10:29:00.439821904-06:00>
Sep 14 10:29:00 fedora dockerd[2778]: time="2022-09-14T10:29:00.696795461-06:00>
Sep 14 10:29:00 fedora dockerd[2778]: time="2022-09-14T10:29:00.839972916-06:00>
Sep 14 10:29:00 fedora dockerd[2778]: time="2022-09-14T10:29:00.895624616-06:00>
Sep 14 10:29:00 fedora dockerd[2778]: time="2022-09-14T10:29:00.994809032-06:00>
Sep 14 10:29:01 fedora dockerd[2778]: time="2022-09-14T10:29:01.017873180-06:00>
Sep 14 10:29:01 fedora dockerd[2778]: time="2022-09-14T10:29:01.018007624-06:00>
Sep 14 10:29:01 fedora systemd[1]: Started docker.service - Docker Application >
Sep 14 10:29:01 fedora dockerd[2778]: time="2022-09-14T10:29:01.035944310-06:00>
So I can see it is working; it is running. I ran this again five minutes later, with the same result.
Next I ran docker ps -a and got:
Cannot connect to the Docker daemon at
unix:///home/XXXXX/.docker/desktop/docker.sock. Is the docker daemon
running?
Which is odd, so next I checked who owns the docker.sock:
sudo ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 Sep 14 10:29 /var/run/docker.sock
For some reason it's owned by root, so I decided to change it to my user:
sudo chown XXXXX:docker /var/run/docker.sock
Now it shows as me, XXXXX:docker (blanked-out user name):
srw-rw---- 1 XXXXX docker 0 Sep 14 10:29 /var/run/docker.sock
Now we stop and start again, as above. As before, it also shows as running after sudo service docker status.
Now if I try and do docker ps -a I still get:
Cannot connect to the Docker daemon at
unix:///home/XXXXX/.docker/desktop/docker.sock. Is the docker
daemon running?
I have googled and I have searched, but I am so confused: docker is running, but apparently it's not running?
How do I fix this?
The only thing I can think of is blowing away docker completely and re-installing, but that seems drastic.
Everywhere I look it's:
Make sure it's running - check
Change owner/group of the sock file - done
Restart docker - done
Check status - done
Another thing that I stumbled upon was:
sudo dockerd
Which gave me a bunch of output but at the end it was:
failed to start daemon: error while opening volume store metadata
database: timeout
I'm also running Fedora 36.
Docker is looking in your home directory for docker.sock
/home/XXXXX/.docker/desktop/docker.sock
This is most likely because you had, or still have, Docker Desktop installed.
If you remove Docker Desktop, the .docker folder may still be present in your home directory. When you then reinstall Docker Engine instead of Docker Desktop, the client will still try to connect to the docker.sock in the above directory.
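A quick way to confirm this is what's happening (assuming your CLI is new enough to support contexts) is to check which endpoint the client is pointed at:
docker context ls
echo $DOCKER_HOST
If the active context's endpoint (or DOCKER_HOST) references ~/.docker/desktop/docker.sock, the client is still targeting the leftover Docker Desktop socket rather than /var/run/docker.sock.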
I had the same problem and solved it by doing the following:
Uninstall Docker Desktop
Uninstall Docker Engine as per the installation guide
Delete the .docker folder: rm -R ~/.docker
Reinstall Docker Engine as per the installation guide
Add your user to the docker group: sudo usermod -aG docker $USER
Do one of the following so your terminal picks up the group change:
run newgrp docker
OR log out and log back in on your machine
OR restart your machine
Run docker run hello-world to confirm it's working now
The above is what fixed the issue on my machine, though there is always the possibility that our setups aren't exactly the same.
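For reference, a consolidated sketch of those steps (treat this as a sketch: the exact uninstall commands depend on how Docker Desktop and Docker Engine were installed on your Fedora machine):
# 1-2. uninstall Docker Desktop and Docker Engine per their guides
rm -R ~/.docker                  # 3. remove the stale Desktop socket config
# 4. reinstall Docker Engine per the installation guide
sudo usermod -aG docker $USER    # 5. add your user to the docker group
newgrp docker                    # 6. pick up the group change in this shell
docker run hello-world           # 7. verify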

Time in Docker container out of sync with host machine

I'm trying to connect to CosmosDB through my Spring Boot app. I have all of this working if I run the app with Spring or via IntelliJ. But when I run the app in Docker I get the following error message:
com.azure.data.cosmos.CosmosClientException: The authorization token is not valid at the current time.
Please create another token and retry
(token start time: Thu, 26 Mar 2020 04:32:10 GMT,
token expiry time: Thu, 26 Mar 2020 04:47:10 GMT, current server time: Tue, 31 Mar 2020 20:12:42 GMT).
Note that in the above error message the current server time is correct, but the other times are 5 days behind.
What I find interesting is that I only ever receive this in the docker container.
FROM {copy of zulu-jdk11}
ARG JAR_FILE
#.crt file in the same folder as your Dockerfile
ARG CERT="cosmos.cer"
ARG ALIAS="cosmos2"
#import cert into java
COPY $CERT /
RUN chmod +x /$CERT
WORKDIR $JAVA_HOME/lib/security
RUN keytool -importcert -file /$CERT -alias $ALIAS -cacerts -storepass changeit -noprompt
WORKDIR /
COPY /target/${JAR_FILE} app.jar
COPY run-java.sh /
RUN chmod +x /run-java.sh
ENV JAVA_OPTIONS "-Duser.timezone=UTC"
ENV JAVA_APP_JAR "/app.jar"
# run as non-root to mitigate some security risks
RUN addgroup -S nonroot && adduser -S nonroot -G nonroot
USER nonroot:nonroot
ENTRYPOINT ["/run-java.sh"]
One thing to note is ENV JAVA_OPTIONS "-Duser.timezone=UTC", but removing this didn't help me at all.
I basically run the same steps from IntelliJ and I have no issues there, but in Docker the expiry date seems to be 5 days behind.
version: "3.7"
services:
orchestration-agent:
image: {image-name}
ports:
- "8080:8080"
network_mode: host
environment:
- COSMOSDB_URI=https://host.docker.internal:8081/
- COSMOSDB_KEY={key}
- COSMOSDB_DATABASE={database}
- COSMOSDB_POPULATEQUERYMETRICS=true
- COSMOSDB_ITEMLEVELTTL=60
I think it should also be mentioned that I changed the network_mode to host. And I also changed the CosmosDB URI from https://localhost:8081 to https://host.docker.internal:8081/
I would also like to mention that I built my dockerfile with the help of:
Importing self-signed cert into Docker's JRE cacert is not recognized by the service
How to add a SSL self-signed cert to Jenkins for LDAPS within Dockerfile?
Docker containers don't maintain a separate clock; container time is identical to the Linux host's, since time is not a namespaced value. This is also why Docker removes the permission to change the time inside a container: doing so would impact the host and other containers, breaking the isolation model.
However, on Docker Desktop, docker runs inside a VM (allowing you to run Linux containers on non-Linux desktops), and that VM's time can get out of sync when the laptop is suspended. This is currently being tracked in an issue on GitHub, which you can follow to see the progress: https://github.com/docker/for-win/issues/4526
Potential solutions include restarting your computer, restarting docker's VM, running NTP as a privileged container (a sketch follows the PowerShell below), or resetting the time sync in the Windows VM with the following PowerShell:
Get-VMIntegrationService -VMName DockerDesktopVM -Name "Time Synchronization" | Disable-VMIntegrationService
Get-VMIntegrationService -VMName DockerDesktopVM -Name "Time Synchronization" | Enable-VMIntegrationService
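For the "NTP as a privileged container" option mentioned above, a one-shot resync of the VM clock can look like the sketch below (the NTP pool address is an arbitrary choice, and BusyBox's ntpd is used simply because it ships in the alpine image):
docker run --rm --privileged alpine ntpd -d -q -n -p pool.ntp.org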
With WSL 2, restarting the VM involves:
wsl --shutdown
wsl
There is a recent known problem with WSL 2 time shift after sleep, which was fixed in the 5.10.16.3 WSL 2 Linux kernel. That kernel is not yet included in the Windows 10 version 21H1 update, but it can be installed manually.
How to check WSL kernel version:
> wsl uname -r
Temporary workaround for the old kernel that helps until the next sleep:
> wsl hwclock -s
Here's an alternative that worked for me on WSL2 with Docker Desktop on Windows:
Since it's not possible to set the date inside a Docker container, I just opened Ubuntu in WSL2 and ran the following command to synchronize the clock:
sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
It worked well, so I added the following line in my root user's crontab:
# Edit root user's crontab
sudo crontab -e
# Add the following line to run it every minute of every day:
* * * * * sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
After that, I just restarted my Docker containers, and the dates were correct since they seemed to use the WSL2 Ubuntu dates.
Date before (incorrect):
date
Thu Feb 4 21:50:35 UTC 2021
Date after (correct):
date
Fri Feb 5 19:01:05 UTC 2021

GitLab (via Docker) on a QNAP NAS with ARM CPU ("exec format error")

I just bought a QNAP TS-832X NAS (Firmware: 4.3.4.0695 Build 20180830).
This machine comes with an ARM CPU (Annapurna Labs Alpine AL324 Quad-core ARM Cortex-A57 CPU @ 1.70GHz).
I bought it only to install GitLab on it, but the official image doesn't seem to work.
When I try to run the image it fails.
[~] # docker run -d --name gitlab-server --hostname build1 -p 10080:10080 -p 10022:22 -p 10443:443 -v /share/GitLab/config:/etc/gitlab -v /share/GitLab/logs:/var/log/gitlab -v /share/GitLab/data:/var/opt/gitlab --restart always gitlab/gitlab-ce:latest
[~] # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a176158729ad gitlab/gitlab-ce:latest "/assets/wrapper" 5 seconds ago Restarting (1) 1 second ago gitlab-server
[~] # docker logs a1
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
standard_init_linux.go:185: exec user process caused "exec format error"
After googling I figured it might be caused by the host architecture, so I tried running ulm0/gitlab, but with the same result.
I also tried other images with "ARM" in their tags like arm64v8/ubuntu. This one didn't even give any logs.
[~] # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b2b68bc912c arm64v8/ubuntu:latest "/bin/bash" 7 seconds ago Restarting (0) 1 second ago ubuntu-arm
a176158729ad gitlab/gitlab-ce:latest "/assets/wrapper" 2 hours ago Restarting (1) 51 seconds ago gitlab-server
[~] # docker logs 2b
[~] #
uname -a
Linux build1 4.2.8 #2 SMP Thu Aug 30 07:33:01 CST 2018 aarch64 GNU/Linux
docker version
Client:
Version: 17.09.1-ce
API version: 1.32
Go version: go1.8.3
Git commit: a9fd393
Built: Fri Aug 3 04:31:20 2018
OS/Arch: linux/arm64
Server:
Version: 17.09.1-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: a9fd393
Built: Fri Aug 3 04:31:20 2018
OS/Arch: linux/arm64
Experimental: false
Sorry to hear about your problem; unfortunately, I don't believe there is any official GitLab Docker image for ARM devices.
From personal experience, I've found that most developers make Docker images for Intel devices that won't work on ARM devices.
This topic has been discussed on the QNAP Forums already:
My QNAP is Intel based, so I can't corroborate your results, but quoting a few sentences from a page about docker on Raspberry Pi:
"Docker-based apps you use have to be packaged specifically for ARM architecture! Docker-based apps packaged for x86/x64 will not work and will result in an error such as:
FATA[0003] Error response from daemon: Cannot start container 0f0fa3f8e510e53908e6a459e817d600b9649e621e7dede974d6a65761ad39e5: exec format error
Keep this in mind when searching for apps on the Docker Hub - the source for Docker apps/images. If you see the keyword RPI or ARM in the heading or description, this app can usually be used for the Raspberry Pi."
The TS-831X has an "AnnapurnaLabs, an Amazon company Alpine AL-314 Quad-core 1.7 GHz Cortex-A15 processor" CPU, which is an ARM architecture much like the Raspberry Pi's.
So, I suspect you may be limited in what Docker images you have access to and unless an official/canonical maintainer of an app also makes an ARM build, you may be stuck with either rolling your own or trusting a 3rd party hobbyist to do so...
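One way to check up front whether an image publishes an ARM variant is to inspect its manifest list. A sketch (docker manifest was an experimental CLI feature at the time, so it may need to be enabled on older clients):
docker manifest inspect gitlab/gitlab-ce:latest | grep architecture
If no arm64 entry shows up, the image won't run on this NAS.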
I hate to say this but I'd say you should have picked up an Intel one instead.
I have a QNAP TS-251+ (Intel based) with 8GB RAM and 2x8TB in a RAID Configuration and this works perfectly for my Gitlab instance, in addition to running PLEX and using it as a Webserver as well.
I would also suggest when you do finally get it up and running to map the volumes to directories that are easy to access so you can make configuration changes easily.

Can I use "mount" inside a Docker Alpine container?

I am Dockerising an old project. A feature in the project pulls in user-specified Git repos, and since the size of a repo could cause the filing system to be overwhelmed, I created a local filing system of a fixed size, and then mounted it. This was intended to prevent the web host from having its file system filled up.
The general approach is this:
IMAGE=filesystem/image.img
MOUNT_POINT=filesystem/mount
# Number of MB to set aside for this filing system
SIZE=20
PROJECT_ROOT=`pwd`
# "mount" with no arguments lists the currently mounted filesystems
MOUNTCMD=/bin/mount
dd if=/dev/zero of=$IMAGE bs=1M count=$SIZE &> /dev/null
# Format: the -F permits creation even though it's not a "block special device"
mkfs.ext3 -F -q $IMAGE
# Mount if the filing system is not already mounted
$MOUNTCMD | cut -d ' ' -f 3 | grep -q "^${PROJECT_ROOT}/${MOUNT_POINT}$"
if [ $? -ne 0 ]; then
    # -p Create all parent dirs as necessary
    mkdir -p $MOUNT_POINT
    /bin/mount -t ext3 $IMAGE $MOUNT_POINT
fi
This works fine in a local or remote Linux VM. However, I'd like to run this shell code, or something like it, inside a container. Part of the reason I'd like to do that is to contain all the fiddly stuff inside a container, so that building a new host machine is kept as simple as possible (in my view, setting up custom mounts and cron-restart rules on the host works against that).
So, this command does not work inside a container ("filesystem" is an on-host Docker volume)
mount -t ext3 filesystem/image.img filesystem/mount
mount: can't setup loop device: No space left on device
It also does not work on a container folder ("filesystem2" is a container directory):
dd if=/dev/zero of=filesystem2/image.img bs=1M count=20
mount -t ext3 filesystem2/image.img filesystem2/mount
mount: can't setup loop device: No space left on device
I wonder whether containers just don't have the right internal machinery to do mounting, and thus whether I should change course. I'd prefer not to spend too much time on this (I'm just moving a project to a Docker-only server) which is why I would like to get mount working if I can.
Other options
If that's not possible, then a size-limited Docker volume, that works with both Docker and Swarm, may be an alternative I'd need to look into. There are conflicting reports on the web as to whether this actually works (see this question).
There is a suggestion here to say this is supported in Flocker. However, I am hesitant to use that, as it appears to be abandoned, presumably having been affected by ClusterHQ going bust.
This post indicates I can use --storage-opt size=120G with docker run. However, it does not look like it is supported by docker service create (unless perhaps the option has been renamed).
Update
As per the comment convo, I made some progress; I found that adding --privileged to the docker run enables mounting, at the cost of removing security isolation. A helpful commenter says that it is better to use the more fine-grained control of --cap-add SYS_ADMIN, allowing the container to retain some of its isolation.
However, Docker Swarm has not yet implemented either of these flags, so I can't use this solution. This lengthy feature request suggests to me that this feature is not going to be added in a hurry; it's been pending for two years already.
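For reference, the two docker run variants discussed above look like this (a sketch; --privileged is what I verified works, and --cap-add SYS_ADMIN may still need access to loop devices depending on the engine's device rules):
docker run --rm -it --privileged -v "$PWD/filesystem:/filesystem" alpine sh
docker run --rm -it --cap-add SYS_ADMIN -v "$PWD/filesystem:/filesystem" alpine sh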
You won't be able to safely do this inside of a container. Docker removes the mount privilege from containers because using this you could mount the host filesystem and escape the container. However, you can do this outside of the container and mount the filesystem into the container as a volume using the default local driver. The size option isn't supported by most filesystems, tmpfs being one of the few exceptions. Most of them use the size of the underlying device which you defined with the image file creation command:
dd if=/dev/zero of=filesystem/image.img bs=1M count=$SIZE
I had trouble getting docker to create the loop device dynamically, so here's the process to create it manually:
$ sudo losetup --find --show ./vol-image.img
/dev/loop0
$ sudo mkfs -t ext3 /dev/loop0
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 10240 1k blocks and 2560 inodes
Filesystem UUID: 25c95fcd-6c78-4b8e-b923-f808517b28df
Superblock backups stored on blocks:
8193
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
When defining the volume, mount options are passed almost verbatim from the mount command you would run on the command line:
docker volume create --driver local --opt type=ext3 \
--opt device=filesystem/image.img app_vol
docker service create --mount type=volume,src=app_vol,dst=/filesystem/mount ...
or in a single service create command:
docker service create \
--mount type=volume,src=app_vol,dst=/filesystem/mount,volume-driver=local,volume-opt=type=ext3,volume-opt=device=filesystem/image.img ...
With docker run, the command looks like:
$ docker run -it --rm --mount type=volume,dst=/data,src=ext3vol,volume-driver=local,volume-opt=type=ext3,volume-opt=device=/dev/loop0 busybox /bin/sh
/ # ls -al /data
total 17
drwxr-xr-x 3 root root 1024 Sep 19 14:39 .
drwxr-xr-x 1 root root 4096 Sep 19 14:40 ..
drwx------ 2 root root 12288 Sep 19 14:39 lost+found
The only prerequisite is that you create this file and loop device before creating the service, and that this file is accessible wherever the service is scheduled. I would also suggest making all of the paths in these commands fully qualified rather than relative to the current directory. I'm pretty sure there are a few places that relative paths don't work.
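To tear the setup down afterwards, the reverse order applies (a small sketch, assuming the names used above):
docker service rm <your-service>   # stop whatever uses the volume first
docker volume rm app_vol           # then remove the named volume
sudo losetup -d /dev/loop0         # finally detach the loop device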
I have found a size-limiting solution I am happy with, and it does not use the Linux mount command at all. I've not implemented it yet, but the tests documented below are satisfying enough. Readers may wish to note the minor warnings at the end.
I had not tried mounting Docker volumes prior to asking this question, since part of my research stumbled on a Stack Overflow poster casting doubt on whether Docker volumes can be made to respect a size limitation. My test indicates that they can, but you may wish to test this on your own platform to ensure it works for you.
Size limit on Docker container
The below commands have been cobbled together from various sources on the web.
To start with, I create a volume like so, with a 20m size limit:
docker volume create \
--driver local \
--opt o=size=20m \
--opt type=tmpfs \
--opt device=tmpfs \
hello-volume
I then create an Alpine Swarm service with a mount on this container:
docker service create \
--mount source=hello-volume,target=/myvol \
alpine \
sleep 10000
We can ensure the container is mounted by getting a shell on the single container in this service:
docker exec -it amazing_feynman.1.lpsgoyv0jrju6fvb8skrybqap sh
/ # ls -l /myvol
total 0
OK, great. So, while remaining in this shell, let's try slowly overwhelming this disk, in 5m increments. We can see that it fails on the fifth try, which is what we would expect:
/ # cd /myvol
/myvol # ls
/myvol # dd if=/dev/zero of=image1 bs=1M count=5
5+0 records in
5+0 records out
/myvol # dd if=/dev/zero of=image2 bs=1M count=5
5+0 records in
5+0 records out
/myvol # ls -l
total 10240
-rw-r--r-- 1 root root 5242880 Sep 16 13:11 image1
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image2
/myvol # dd if=/dev/zero of=image3 bs=1M count=5
5+0 records in
5+0 records out
/myvol # dd if=/dev/zero of=image4 bs=1M count=5
5+0 records in
5+0 records out
/myvol # ls -l
total 20480
-rw-r--r-- 1 root root 5242880 Sep 16 13:11 image1
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image2
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image3
-rw-r--r-- 1 root root 5242880 Sep 16 13:12 image4
/myvol # dd if=/dev/zero of=image5 bs=1M count=5
dd: writing 'image5': No space left on device
1+0 records in
0+0 records out
/myvol #
Finally, let's see if we can get an error by overwhelming the disk in one go, in case the limitation only applies to newly opened file handles in a full disk:
/ # cd /myvol
/myvol # rm *
/myvol # dd if=/dev/zero of=image1 bs=1M count=21
dd: writing 'image1': No space left on device
21+0 records in
20+0 records out
It turns out we can, so that looks pretty robust to me.
Nota bene
The volume is created with a type and a device of "tmpfs", which sounded to me worryingly like a RAM disk. I've successfully checked that the volume remains connected and intact after a system reboot, so it looks good to me, at least for now.
However, I'd say that when it comes to organising your data persistence systems, don't just copy what I have. Make sure the volume is robust enough for your use case before you put it into production, and of course, make sure you include it in your back-up process.
(This is for Docker version 18.06.1-ce, build e68fc7a).

How to limit `docker run` execution time?

I want to run a command inside a docker container. If the command takes more than 3 seconds to finish, the container should be deleted.
I thought I can achieve this goal by using --stop-timeout option in docker run.
But it looks like something goes wrong with my command.
For example, docker run -d --stop-timeout 3 ubuntu:14.04 sleep 100 command creates a docker container that lasts for more than 3 seconds. The container is not stopped or deleted after the 3rd second.
Do I misunderstand the meaning of --stop-timeout?
The document says
--stop-timeout Timeout (in seconds) to stop a container
Here's my docker version:
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:03:51 2017
OS/Arch: darwin/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
The API version is newer than 1.25.
You can try:
timeout 3 docker run ...
There is an open issue on that subject:
https://github.com/moby/moby/issues/1905
See also
Docker timeout for container?
The --stop-timeout option is the maximum amount of time docker should wait for your container to stop when using the docker stop command.
A container will stop when it's told to or when the command it is running finishes, so if you change your sleep from 100 to 1, you'll see that the container is stopped after a second.
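You can see this directly (a quick illustration; the container name is arbitrary):
$ docker run -d --name short-lived ubuntu:14.04 sleep 1
$ sleep 2; docker ps -a --filter name=short-lived
The STATUS column will show Exited (0), because the container stopped as soon as its sleep finished.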
What I'd advise you to do is to change the ENTRYPOINT of your container to a script that you create, which executes what you want, tracks the execution time from within, and exits on timeout.
After that you can start your container using the --rm option, which will delete it once the script finishes.
A small example.
Dockerfile:
FROM ubuntu:16.04
ADD ./script.sh /script.sh
ENTRYPOINT /script.sh
script.sh:
#!/bin/bash
timeout=5
sleep_for=1
# Start the actual workload in the background
sleep 100 &
find_process=$(ps aux | grep -v "grep" | grep "sleep")
# Poll until the workload exits or the timeout budget is used up
while [ ! -z "$find_process" ]; do
    find_process=$(ps aux | grep -v "grep" | grep "sleep")
    if [ "$timeout" -le "0" ]; then
        echo "Timeout"
        exit 1
    fi
    timeout=$(($timeout - $sleep_for))
    sleep $sleep_for
done
exit 0
Run it using:
docker build -t testing .
docker run --rm testing
This script executes sleep 100 in the background, checks whether it is still running, and exits once the 5-second timeout is reached.
This might not be the best way to do it, but if you want to do something simple it may help.
docker run --rm ubuntu timeout 2 sh -c 'echo start && sleep 30 && echo finish'
will terminate after 2 seconds, and "finish" will never be output.
Depending on what exactly you want to achieve, the --ulimit parameter to docker run may do what you need. For example:
docker run --rm -it --ulimit cpu=1 debian:buster bash -c '(while true; do true; done)'
After about 1s, this will print Killed and return. Without the --ulimit option, it would run forever.
However, note that this only limits the CPU time, not the wall clock time. You can happily run sleep 24h with a --ulimit cpu=1 because sleep does not consume CPU time.
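A quick way to see that distinction (a sketch):
docker run --rm --ulimit cpu=1 debian:buster sleep 5 && echo "still alive"
This completes normally, because sleep accrues almost no CPU time even though five seconds of wall-clock time pass.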
In my case, I had a docker container that started an Express server, and then remained running, and I wanted a simple test on CI to check that the container can start without any immediate error (such as configuration errors).
I made sure my code returned a non-zero exit code if something failed during start, and then ended up with this:
timeout 10 docker run [ container params ]; test $? -eq 124 && echo "Container ran for 10 seconds without failing"
This will send SIGTERM to the docker container after 10 seconds if it has not already exited. If it stays alive long enough for the timeout to occur, timeout returns 124, which is what the test checks for. In other words, this verifies that the container ran long enough to reach the timeout; any error (or early exit with code 0!) is treated as a failure.
