In docker inspect, what do "StartedAt" and "FinishedAt" mean?

This is the result of running docker inspect on a running container:
$ docker inspect some_container | jq .[0].State
{
  "Status": "running",
  "Running": true,
  "Paused": false,
  "Restarting": false,
  "OOMKilled": false,
  "Dead": false,
  "Pid": 16086,
  "ExitCode": 0,
  "Error": "",
  "StartedAt": "2021-09-16T02:36:12.036585245Z",
  "FinishedAt": "2021-09-16T04:36:10.87103895+02:00"
}
Nobody was logged into that system at the times listed in the StartedAt and FinishedAt entries, and it doesn't seem like the container was restarted after a crash:
$ docker inspect lxonlinedlservice_rabbitmq_1 | grep RestartCount
"RestartCount": 0,
What do the StartedAt and FinishedAt entries mean?

From GitHub:
startedAt - time at which the previous execution of the container started
finishedAt - time at which the container last terminated
You mentioned a crash. Maybe the container started after a crash at 2021-09-16T02:36:12.036585245Z, and at 2021-09-16T04:36:10.87103895+02:00 there was another crash? (Note the two timestamps use different timezone offsets: converted to UTC, FinishedAt is 02:36:10, roughly two seconds before StartedAt, which is consistent with a stop followed by an immediate restart.)
Or might it be that the Docker host where the container runs was rebooted?
I also suggest checking that your clock is synced via NTP; check this Docker best practice.
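As a quick way to check clock sync, a minimal sketch (assuming a systemd-based host where timedatectl is available):
$ timedatectl status | grep -i 'synchronized'
This prints a line such as "System clock synchronized: yes".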

To get the exact start time of a container: docker inspect -f '{{ .State.StartedAt }}' CONTAINER_ID
StartedAt: when you last started the container
FinishedAt: when the container last stopped
(from this answer https://stackoverflow.com/a/28203469/500902)
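Building on that answer, a minimal sketch for computing uptime from StartedAt (assumes GNU date for the -d flag; some_container is the container from the question above):
$ started=$(docker inspect -f '{{ .State.StartedAt }}' some_container)
$ echo "$(( $(date +%s) - $(date -d "$started" +%s) )) seconds of uptime"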

Related

How to bring up a failed container

I have a container that failed after a long setup, and I want to log in (exec bash) at that point instead of running the slow setup again. Is there any way?
The container is left over from a docker build process; it is still in the FROM ... AS builder stage.
If I try to start it, it fails right away:
$ docker start -ai 3d35a7f7a7b4
/bin/sh: mvn: command not found
Trying to exec anything right away doesn't work either:
$ docker start 3d35a7f7a7b4 & docker exec 3d35a7f7a7b4 -it /bin/sh
[1] 403273
3d35a7f7a7b4
unable to upgrade to tcp, received 500
[1]+ Done docker start 3d35a7f7a7b4
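(An aside on the failed exec: docker exec expects its flags before the container name, i.e. docker exec -it 3d35a7f7a7b4 /bin/sh. Even with the corrected order it would fail here, though, because the container exits again immediately after starting.)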
More info:
$ docker inspect 3d35a7f7a7b4
[
  {
    "Id": "3d35a7f7a7b4018ebbbd9aa59356714d7fed291a43752cbcb86dd852c946cc1e",
    "Created": "2022-07-06T23:56:37.001004587Z",
    "Path": "/bin/sh",
    "Args": [
      "-c",
      "mvn --version"
    ],
    "State": {
      "Status": "exited",
      "Running": false,
      "Paused": false,
      "Restarting": false,
      "OOMKilled": false,
      "Dead": false,
      "Pid": 0,
      "ExitCode": 127,
      "Error": "",
      "StartedAt": "2022-07-07T00:02:35.755444447Z",
      "FinishedAt": "2022-07-07T00:02:35.75741167Z"
    },
    "Image": "sha256:4819e2469963fdf531ec5bce5401b7ae7d28cd403528c0109512b5170ef61752",
    ...
This is not an optimal answer; it's here for documentation (and for people to vote up if it is the best one can do with Docker).
docker run can be used on the image of the stopped container, and you can pass the CMD right away. But any other peculiarity of the stopped container (e.g. its network setup) will also have to be repeated.
For the example in the question:
host$ docker run -it sha256:4819e2469963fdf531ec5bce5401b7ae7d28cd403528c0109512b5170ef61752 /bin/bash
container# _
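A commonly suggested alternative, offered as a sketch rather than a confirmed fix for this case: snapshot the stopped container's filesystem with docker commit and open a shell in the resulting image (debug-image is an arbitrary tag):
$ docker commit 3d35a7f7a7b4 debug-image
$ docker run -it --entrypoint /bin/sh debug-image
Unlike re-running the original image, this preserves whatever the slow setup wrote into the container's filesystem before the failing step.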

Finding the reason why Docker Swarm restarts some services at deploy time even if there are no changes

We have a Docker Swarm stack with 36 services deployed on Azure Cloud, running Docker 20.10.12 on 7 nodes with 3 managers. Every time a deploy is done in the cluster, 3 of our services get restarted even if there are no updates for them.
The issue does not happen for the other services, so we want to understand why these 3 are restarted.
For all services, including these, we use explicitly tagged images (not latest); two of the restarted services have a healthcheck, one doesn't.
Inspecting the shut-down containers doesn't reveal the cause; container health looks fine until the container is apparently shut down by the swarm:
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-06-28T11:07:50.253278648Z",
"FinishedAt": "2022-06-28T12:09:38.028324252Z",
"Health": {
"Status": "unhealthy",
"FailingStreak": 0,
"Log": [
{
"Start": "2022-06-28T12:07:31.823395345Z",
"End": "2022-06-28T12:07:31.877107802Z",
"ExitCode": 0,
"Output": ""
}...
Docker stack ps for one of the services: "CurrentState":"Shutdown 2 hours ago","DesiredState":"Shutdown","Error":"","ID":"986fpoefleoj" ...
We've set the Docker daemon log level to debug, and I can see the running task's desired state being set to shutdown and a new task being created, but no reason why this happens:
new task ---> time="2022-06-28T12:08:26.336889162Z" level=debug msg="task kcu8p5kk29s7a3bqxaaifvqnm was marked pending allocation" module=node node.id=g9to4w40wx0met9q8h64h29b7
...
existing task ---> time="2022-06-28T12:08:26.533482977Z" level=debug msg=assigned module=node/agent node.id=g9to4w40wx0met9q8h64h29b7 task.desiredstate=SHUTDOWN task.id=986fpoefleoj8mpnhu0eaop62
time="2022-06-28T12:08:26.533530078Z" level=debug msg=assigned module=node/agent node.id=g9to4w40wx0met9q8h64h29b7 task.desiredstate=READY task.id=kcu8p5kk29s7a3bqxaaifvqnm
The existing container is disabled and a new one is created:
DisableService 134bcd810301dad5ce535324960e66de085d3b5db07e4ac80cdef4fb4b2e6d69 START
...
time="2022-06-28T12:08:45.247316912Z" level=debug msg="EnableService b00270822ac16280d40d162d34eed1b01a05af06debe6fd10e7bf90fd0cd1c7e START"
...
Found these posts that are somewhat similar, but they don't seem to have been answered:
https://forums.docker.com/t/why-does-docker-swarm-set-desired-status-to-shutdown/69379
How to investigate Docker Swarm Mode shutting down containers?
Any idea how we could find the cause why these services are restarted?
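One way to surface the swarm's own reason for a task state change, as a sketch (my_service is an illustrative name; the task ID is the one from the stack ps output above, and inspecting a task requires running on a manager node):
$ docker service ps --no-trunc --format '{{.Name}} {{.CurrentState}} {{.Error}}' my_service
$ docker inspect 986fpoefleoj
The task JSON's Status.Message and Status.Err fields sometimes carry the scheduler's reason, though in cases like this they may simply be empty.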

Difference in Docker Hyper-V when connecting to a local volume vs a network volume [Windows Container]

I am building an image for which an external network drive needs to be mapped. There is a strange difference in the way Docker interacts with the volumes when using Hyper-V isolation.
Case 1:
Hyper-V isolation, local drive C:\data
docker run -v "C:\data":"C:\images" -i --isolation hyperv dockerimage
This executes perfectly and doesn't cause trouble. However, it is not ideal for my use-case, as the data lives on a server in the local network.
Case 2:
a) Hyper-V isolation, remote drive \\192.xxx.0.xx\data
The remote path is mapped locally to a drive letter, using
New-SmbGlobalMapping -RemotePath \\192.xxx.0.xx\data -LocalPath H:
The Docker image is run again,
docker run -v "H:/":"C:/images" -i --isolation hyperv dockerimage
which gives the following error:
{
  "Id": "98efda4f99108b5b55a5294f4063e0178c4eb5cb0c4c90dff892b02a7cb53784",
  "Created": "2020-11-17T16:02:00.6358887Z",
  "Path": "cmd",
  "Args": [],
  "State": {
    "Status": "created",
    "Running": false,
    "Paused": false,
    "Restarting": false,
    "OOMKilled": false,
    "Dead": false,
    "Pid": 0,
    "ExitCode": 128,
    "Error": "hcsshim::CreateComputeSystem 98efda4f99108b5b55a5294f4063e0178c4eb5cb0c4c90dff892b02a7cb53784: The parameter is incorrect.\n(extra info: {\"SystemType\":\"Container\",\"Name\":\"98efda4f99108b5b55a5294f4063e0178c4eb5cb0c4c90dff892b02a7cb53784\",\"Owner\":\"docker\",\"IgnoreFlushesDuringBoot\":true,\"LayerFolderPath\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\98efda4f99108b5b55a5294f4063e0178c4eb5cb0c4c90dff892b02a7cb53784\",\"Layers\":[{\"ID\":\"da562984-7fd6-595c-9fb2-dfc33bbbfc90\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\859133db54b33dbe98f51e28660da06c3a299d417841a45ed5f00c3d3b2698fa\"},{\"ID\":\"86486f96-8a0d-5098-8d22-4115528554f8\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\84f5ccf5da7976b5af2375bfdc76c65e1089ed84948278c2d3b0e6213d4cdc9d\"},{\"ID\":\"2fc30483-1251-5e25-b949-ba907b2555da\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\50faff83b3942f7551a9b0538ace20126e7345efcf2992aa5972ae8885123baa\"},{\"ID\":\"5e04bd25-484e-5d28-9f1a-f61c08d3b89d\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\b7b60b5fa26b4ad2835e8d10ceacc0c25e82c6a4f9ad296c8c11f7ecc9ab876c\"},{\"ID\":\"37385a8f-d779-5a3e-a4f1-2e10fc948496\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\5491ad561bb73035517f646a522f506a1869f9d477915348409f1a3eb2cfba19\"},{\"ID\":\"8ee96973-01f4-5a0b-9be9-82725d7e25af\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\cdd6f63701ab3d22ee79917d89a4e6c94149d5c29fa3f9db9114cd5d5c4bc264\"},{\"ID\":\"b2559eb2-1b51-5d04-a419-6d1393fbf21f\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\d9964c2a8c1a800268f9d3d6d57ed20929499698e52e01c4a0ec77eda8e76505\"},{\"ID\":\"f9e55ff6-10ae-5d1c-9146-cb04fd16dc6f\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\110665c19202b6e830ba27ba4ac9cda37da25b323c2bb4867136208b40c29e84\"},{\"ID\":\"3d845347-bfed-561e-b051-0a804df11edd\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\e428b1c7d890891419929dc40bb5d891efe69dac7e70844976cb8466bb35094a\"},{\"ID\":\"ee1dce4e-c131-53c3-bb05-7223d917aca8\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\22f70d8b8c8edd3710c1bcbc160487fb48c01a40cfb8d8b51c624947cbce42d5\"},{\"ID\":\"4b33e55b-aa1e-5574-91bc-cbdbb1857c6f\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\b47e9eb54f5da950c1c1926e5e0df0b89a8852afab06a134373057c29b151f6f\"},{\"ID\":\"37460943-6f35-510d-8eac-d1884ca7b2e4\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\8336953b188c7daad94e83b2c64ad652f1827d11818e07219a7a656010fd8efb\"},{\"ID\":\"0144cfde-366a-52d7-95cb-e872a47dc74e\",\"Path\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\64c264e7affc657fe876cdef72289936eb63df4f31d1d76848f5453a43182e86\"}],\"HostName\":\"98efda4f9910\",\"MappedDirectories\":[{\"HostPath\":\"h:\\\\\",\"ContainerPath\":\"c:\\\\images\",\"ReadOnly\":false,\"BandwidthMaximum\":0,\"IOPSMaximum\":0,\"CreateInUtilityVM\":false}],\"HvPartition\":true,\"EndpointList\":[\"041D1842-5520-4BD7-A5F1-1F00A6EC724E\"],\"HvRuntime\":{\"ImagePath\":\"C:\\\\ProgramData\\\\Docker\\\\windowsfilter\\\\8336953b188c7daad94e83b2c64ad652f1827d11818e07219a7a656010fd8efb\\\\UtilityVM\"},\"AllowUnqualifiedDNSQuery\":true})",
    "StartedAt": "0001-01-01T00:00:00Z",
    "FinishedAt": "0001-01-01T00:00:00Z"
  },
b) Process isolation, remote drive \\192.xxx.0.xx\data
docker run -v "H:/":"C:/images" -i --isolation process dockerimage
Runs perfectly fine.
Link to the isolation modes : https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container
Is this something to do with permissions inside the Hyper-V environment? The error message from Docker unfortunately does not give a clear picture. Using Hyper-V isolation is necessary in my case due to compatibility issues between the Windows container OS and the host OS: https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-20H2%2Cwindows-10-20H2.
Try using Linux-style paths, e.g. instead of "C:\images" use /c/images (no quotes are necessary unless your path contains spaces).

Check if docker container is stopped or failed

I am attempting to check whether (and handle all edge cases where) a container has been stopped or has exited in an unclean state. I am using the 'State' block returned by docker inspect <container> to try to resolve this.
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-03-01T18:56:19.541980678Z",
"FinishedAt": "2018-03-01T18:56:24.618264625Z"
},
I know the 'ExitCode' of a stopped container will be 137, but there's a lot of other information there. Will filtering on State.ExitCode == 137 be enough to filter for stopped instances?
EDIT: I should mention that the reason I am attempting this, instead of using pause and unpause to manage my containers, is that I want an active/standby arrangement of containers with port bindings. A container in the paused state still holds its port bindings, which I need released when it is in the standby state.
To list stopped containers:
docker ps -f status=exited -f name=$container_name
See the docker ps filtering documentation.
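For scripting the stopped-vs-failed distinction, a minimal sketch ($container_name as in the answer above). Note that 137 specifically means the process was killed by SIGKILL (128 + 9), e.g. by docker kill or by a docker stop whose timeout expired; any non-zero ExitCode indicates an unclean exit:
$ status=$(docker inspect -f '{{ .State.Status }}' "$container_name")
$ code=$(docker inspect -f '{{ .State.ExitCode }}' "$container_name")
$ if [ "$status" = "exited" ] && [ "$code" -ne 0 ]; then echo "unclean exit (code $code)"; fi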

docker healthcheck in config.v2.json

docker ps --quiet | xargs docker inspect --format '{{ .Id }}: Health={{ .State.Health.Status }}'
c1ab47fdc94858275e9327ce56d039010cb9db1eb7865e0917f3d8a74862367e: Health=unhealthy
Template parsing error: template: :1:27: executing "" at <.State.Health.Status>: map has no entry for key "Health"
I just want to know why the error map has no entry for key "Health" is reported by the docker inspect command. The status should be in the container's config.v2.json file; however, in that file there is no unhealthy under Status, so I want to know where "Health=unhealthy" comes from.
Thanks.
The output of the docker inspect command is a JSON response.
If you look at the response, there is nothing called Health in it; hence the error. (.State.Health only exists for containers whose image or run command defines a HEALTHCHECK.) There is, however, State -> Status, whose value is running, so just use .State.Status instead of .State.Health.Status:
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 23570,
"ExitCode": 0,
"Error": "",
"StartedAt": "2016-10-30T07:06:14.114090476Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
Since you wanted to see Status, the command below shows the desired output:
sudo docker ps --quiet | xargs sudo docker inspect --format '{{ .Id }}:Health={{ .State.Status }}'
5db8668eb121bd67b6fdeba12269fa7f194c48140b5d547c70befe70b2c3f607:Health=running
For comparison, here is the Status value for another container that is no longer running:
$ sudo docker inspect --format '{{ .Id }}:Health={{ .State.Status }}' 060d98f7838e
060d98f7838ec901fd7d3c855254af0d15702d2758d61f6754af8899bee9613a:Health=exited
Hope this is helpful.
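If you do want Health where it exists, without triggering the template error on containers that have no healthcheck, a sketch using an if guard in the Go template (mirroring the docker ps pipeline from the question; this relies on recent Docker versions treating a missing Health as empty inside if):
$ docker ps --quiet | xargs docker inspect --format '{{ .Id }}: {{ if .State.Health }}Health={{ .State.Health.Status }}{{ else }}Status={{ .State.Status }}{{ end }}'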
