VSCode gets disconnected from remote container constantly - docker

I have a setup to work remotely in a container inside a VM, and I have been working with this setup for several months. I followed this guide to set up my client environment.
Today it got disconnected when I removed the container (because I was building a new one), and since that moment I can't connect to work. I get connected and am instantly disconnected. I have tried several things:
I closed and reopened Visual Studio Code
I restarted the computer
I did a docker system prune (both client and server)
I removed a lot of docker images that were not being used
I deleted the container and created a new one
The container is running; the problem is that VSCode constantly disconnects from it.
In the logs I can read:
[23918 ms] Start: Run in container: set -o noclobber ; mkdir -p '/home/root/.vscode-server/data/Machine' && { > '/home/root/.vscode-server/data/Machine/.installExtensionsMarker' ; } 2> /dev/null
[24517 ms]
[24517 ms]
[24517 ms] Exit code 1
[24518 ms] Start: Run in container: for pid in `cd /proc && ls -d [0-9]*`; do { echo $pid ; readlink -f /proc/$pid/cwd ; xargs -0 < /proc/$pid/environ ; xargs -0 < /proc/$pid/cmdline ; } ; echo ; done 2>/dev/null
[25583 ms] Extension host agent is already running.
[25583 ms] Start: Run in container: cat /home/root/.vscode-server/bin/58bb7b2331731bf72587010e943852e13e6fd3cf/.devport 2>/dev/null
I think that Exit code 1 may be telling me that something is not right.
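One way to see what is actually failing (a debugging sketch; CONTAINER is a placeholder for the container name or ID) is to re-run the command from the log by hand, without the 2> /dev/null that hides the error message:

# Re-run the failing command manually to see the real error.
# Note: with noclobber set, the > redirection fails if the marker file
# already exists, which would also produce exit code 1.
docker exec -it CONTAINER sh -c "set -o noclobber ; mkdir -p '/home/root/.vscode-server/data/Machine' && { > '/home/root/.vscode-server/data/Machine/.installExtensionsMarker' ; }"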

Related

Get return code of a Docker container run with --rm -d

If I docker run a container with some script inside using --rm and --detach, how can I find the return code of that container? I.e., whether the script inside the container finished successfully or failed?
Because of the --rm flag I can't see that container in docker ps --all after it finishes.
You can't, since you're explicitly asking Docker to clean up after the container. That cleanup includes all of the metadata, like the exit status.
On the other hand, if you're planning to check the status code anyway, you'll have the opportunity to do the relevant cleanup yourself:
CONTAINER_ID=$(docker run -d ...)
...
docker stop "$CONTAINER_ID" # if needed
CONTAINER_RC=$(docker wait "$CONTAINER_ID") # prints the container's exit status
docker rm "$CONTAINER_ID"
if [ "$CONTAINER_RC" -ne 0 ]; then
    echo "container failed" >&2
fi
The best way to check whether the script works is to first capture the script's output using command1 > everything.txt 2>&1
And lastly, you can go inside the running container using docker exec -it <mycontainer> bash
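If you skip --rm and clean up yourself, as in the script above, you can also read the exit status from docker inspect instead of docker wait (a sketch; myimage is a placeholder image name):

CONTAINER_ID=$(docker run -d myimage)            # no --rm, so the metadata survives
docker wait "$CONTAINER_ID" > /dev/null          # block until the container exits
docker inspect --format '{{.State.ExitCode}}' "$CONTAINER_ID"
docker rm "$CONTAINER_ID"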

Why can't I always kill a docker process with Ctrl-C?

I have a script which I want to optionally run within a container. I have observed that if I run it via an intermediate script it can be killed with Ctrl-C; however, if I do not, it can't.
Here is an example:
test1.sh:
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $#"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test2.sh $#
fi
test2.sh:
#!/bin/bash
/test1.sh true "$@"
basic-Dockerfile:
FROM alpine:3.7
RUN apk add --no-cache bash
COPY test1.sh test2.sh /
ENTRYPOINT ["bash"]
Running ./test1.sh true foo bar will happily print out true foo bar, and running ./test1.sh foo bar will do the same in a container. Sending Ctrl-C will kill the process and delete the container as expected.
However, if I try to remove the need for an extra file by changing /test2.sh "$@" to /test1.sh true "$@":
test1.sh
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $#"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true $#
fi
then the process can no longer be terminated with Ctrl-C, and instead must be stopped with docker kill.
Why is this happening?
Docker version 18.06.1-ce running on Windows 10 in WSL
That's a common misunderstanding in docker, but it's there for a good reason.
When a process runs as PID 1 in Linux it behaves a little differently. Specifically, it ignores signals such as SIGINT (which is what Ctrl-C sends) and SIGTERM unless the script installs a handler for them. This doesn't happen when the PID is greater than 1.
That's why the scenario with the intermediate script works (PID 1 is test2.sh, and the signal reaches test1.sh, which stops because it is not PID 1), but the direct one doesn't (test1.sh is PID 1 and thus doesn't stop on Ctrl-C).
To solve that, you can trap the signal in test1.sh and exit:
exit_func() {
echo "SIGTERM detected"
exit 1
}
trap exit_func SIGTERM SIGINT
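Putting it together, test1.sh with the trap installed might look like this (a sketch based on the scripts above):

#!/bin/bash
# Install the handler before anything else so Ctrl-C works even as PID 1.
exit_func() {
    echo "SIGTERM or SIGINT detected"
    exit 1
}
trap exit_func SIGTERM SIGINT

if [ "${1}" = true ]; then
    while true; do echo "args: $@"; sleep 1; done
else
    docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true "$@"
fi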
Or tell docker run to init the container with a different process as PID 1. Specifically, if you add --init to docker run with no more arguments, it uses a default program, tini, prepared to handle these situations:
docker run --rm -it --init $(docker build -f basic-Dockerfile -q .) /test1.sh true "$@"
You can also use exec to replace the current shell with the new process, which can then be stopped with Ctrl-C.
For example, a start.sh script which starts the nginx server and then runs uwsgi:
#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini
should be changed to:
#!/usr/bin/env bash
service nginx start
exec uwsgi --ini uwsgi.ini
After these changes, Ctrl-C will stop the container.
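To double-check what ends up as PID 1 inside the running container after such a change (a quick sketch; myapp is a placeholder container name):

# Print the command line of PID 1 inside the container (works without ps installed).
docker exec myapp sh -c 'tr "\0" " " < /proc/1/cmdline; echo'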

Does the docker run command run in the background?

I am running a docker run command from bash, and it looks like immediately after firing this command, control starts executing the next line instead of waiting for the docker container to start up.
Is this how docker works?
UPDATE: I am using -dit as an option, which means the container runs in detached mode, and I think that explains why it jumped to the next line immediately. As it is a VM's startup script it will have to be detached, but is there any option so we can at least wait until the docker container is done with its provisioning?
The -d is causing the container to detach immediately. Every container has a different idea of when it is "done with its provisioning", and Docker can't know how the internals of every container work, so it's hard for Docker to be responsible for this.
Docker has added a HEALTHCHECK so you can define a test specifically for your container. Then you can query the container's state in your script and wait for it to become healthy:
HEALTHCHECK --interval=1m --timeout=5s \
CMD curl -f http://localhost/ || exit 1
Then wait in the script
now="$(date +%s)"
let timeout=now+60
while sleep 5; do
res="$(docker inspect --format='{{.State.Health}}' container_id) 2>&1"
if [ "res" == "healthy" ]; then break; fi
if [ "$(date +%s)" -lt "$timeout" ]; then
echo "Error timeout: $res"
# handle error
break
fi
done
You can modify the wait to run any command, like a curl or nc, if you want to forgo the HEALTHCHECK in the container.
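For example, a curl-based wait with the same 60 second budget (a sketch; it assumes the container publishes a web port on localhost:8080):

deadline=$(( $(date +%s) + 60 ))
# Poll the service itself until it answers or the deadline passes.
until curl -fs http://localhost:8080/ > /dev/null; do
    if [ "$(date +%s)" -gt "$deadline" ]; then
        echo "Error: service did not come up in time" >&2
        break
    fi
    sleep 5
done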
docker logs container_id may also include the information you need to wait for. Most daemons will log something like "Ready to accept connections"
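A minimal sketch of that log-based wait, with container_id as the same placeholder as above (the "Ready to accept connections" string stands in for whatever your daemon prints):

# Follow the logs until the ready line appears; give up after 60 seconds.
# grep -q exits on the first match; docker logs is cleaned up on its next
# write or when the timeout fires.
timeout 60 sh -c 'docker logs -f container_id 2>&1 | grep -q "Ready to accept connections"'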

Tail stdout from multiple Docker containers

I have a script that starts 10 containers in background mode (fig up -d option). I want to aggregate the stdout or the log in /var/log from all of them. How can I do this?
The containers are started using different docker-compose files, so I cannot do docker-compose up target1 target2 target3.
docker logs only accepts one container as a parameter.
I was considering creating a volume from /var/log on all containers, mapping them to a directory outside of docker, making sure the logs do not have colliding names, and then using bash tail -f *. But I would appreciate a more elegant solution.
This bash script will do what you want:
docker-logs
#!/bin/bash
if [ $# -eq 0 ]; then
    echo "Usage: $(basename "$0") containerid ..."
    exit 1
fi
pids=()
cleanup()
{
    kill "${pids[@]}"
}
trap cleanup EXIT
while [ $# -ne 0 ]
do
    (docker logs -f -t --tail=10 "$1" | sed -e "s/^/$1: /") &
    pids+=($!)
    shift
done
wait
Usage:
$ docker-logs containerid1 containerid2 ... containeridN
The output of this script has each line from the tracked logs prepended with the container id.
The script works in --follow mode and must be interrupted with Ctrl-C.
Note that the options of docker logs are hardcoded in the script. If you need to be able to control the options of docker logs from the command line then you will need to parse the command line arguments (for example with getopts).
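For example, to follow every container currently running (a usage sketch):

./docker-logs $(docker ps -q)                       # all running containers
./docker-logs $(docker ps -q --filter name=web)     # only containers whose name matches "web"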
Docker does not support this as of 1.12 yet, but I have a workaround via bash:
docker ps | grep -w <filter-text> | for i in `awk '{ print $1 }'`; do docker logs -f --tail=30 $i & done
I am using docker swarm mode, which comes with 1.12, and deploying many replicas, so all of my containers contain a common text which is the same as the service name. To tail all of their logs on a docker node, I use this on each docker node; <filter-text> filters only my containers.
If you want to stop tailing, this works for me;
pkill -f 'docker logs'
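On Docker releases newer than the 1.12 mentioned above, swarm services also get a built-in aggregated log command, which may replace the grep workaround (SERVICE_NAME is a placeholder):

docker service logs -f --tail 30 SERVICE_NAME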

How to tell if a docker container run with -d has finished running its CMD

I want to make a simple bash script which runs one docker container with -d and then does something else, if and only if the container has finished running its CMD. How can I do this while avoiding timing issues, since the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
Made it work with the tailing logs method, but it seems a bit hacky:
docker run -d \
    --name sauceconnect \
    sauceconnect

# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read LINE
do
    echo "$LINE"
    if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
        pkill -P $$ docker
    fi
done
You should be fine to check the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, when the CMD has finished, it could emit a log line such as JOB ABC is successfully started.
Your script can detect that line and run the rest of the jobs once it sees it.
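A sketch of the marker-file idea from the question itself, assuming the container's CMD touches a hypothetical /tmp/ready file once it has finished provisioning (sauceconnect is the container name reused from the update above):

# Poll until the CMD inside the container has created the marker file.
until docker exec sauceconnect test -f /tmp/ready; do
    sleep 2
done
echo "CMD finished, continuing with the rest of the script"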
