Why does executing docker logs -f return data from scratch? - docker

I have a simple .NET Core console application that simply counts forever:
class Program
{
    static void Main(string[] args)
    {
        int counter = 1;
        while (true)
        {
            counter++;
            Console.WriteLine(counter);
        }
    }
}
I containerized it with Docker and ran it via VS Docker run.
But every time I execute > docker logs -f dockerID, it starts showing the counting from scratch (1, 2, 3, ...). I expected that whenever I run this command, it would show the logs from the last integer counted!
Does "docker logs -f" cause a new instance of my application to run every time?

When running docker help logs you see this:
--tail string Number of lines to show from the end of the logs
(default "all")
So you should do:
docker logs --tail 100 -f dockerID
Additional reference: docker logs documentation > Options

docker logs -f should work similarly to tail -f, so it should show just the new logs.
Therefore, if you're watching the logs repeat, it looks like your program is being executed in a loop.
The only thing that comes to my mind is a little trick; although it's not an elegant solution, it can be useful in other similar situations:
Define your system history time format to match docker logs:
export HISTTIMEFORMAT="%Y-%m-%dT%H:%M:%S "
Get the date of your last docker logs command with a simple script:
LAST_DOCKER_LOG=`history | grep " docker logs" | tac | head -1 | awk '{print $2}'`
Get logs after your last docker logs execution:
docker logs --since $LAST_DOCKER_LOG <your_docker>
In one line:
docker logs -t --since `history | grep " docker logs" | tac | head -1 | awk '{print $2}'` <your_docker>
Note: history time and docker log times have to be in the same timezone.
In spite of everything, I'd have a look at your loop execution. Maybe the docker logs -t timestamp option can help you solve it.
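If you use this often, the pieces above can be wrapped into a small function in your interactive shell (e.g. in ~/.bashrc). This is only a sketch: docker_logs_since_last is a made-up name, and the same timezone caveat applies.
# Show timestamped logs emitted since the last "docker logs" command in your history.
# Assumes HISTTIMEFORMAT is exported as above.
docker_logs_since_last() {
    local container="$1"
    local last
    last=$(history | grep " docker logs" | tac | head -1 | awk '{print $2}')
    docker logs -t --since "$last" "$container"
}
Call it as docker_logs_since_last <your_docker>.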

Related

browserless/chrome docker taking too much cpu on certain websites

We are using the browserless Docker image to read content on certain websites. For a few websites, the CPU usage increases in proportion to the number of open sessions.
Specifics
Docker image: browserless/chrome:1.35-chrome-stable
Script to reproduce once the container is up and running:
#!/bin/bash
HOST='localhost:3000'

curl_new_session() {
    echo $(curl -XPOST http://$HOST/webdriver/session -d '{"desiredCapabilities":{"browserName":"chrome","platform":"ANY","chromeOptions":{"binary":"","args":["--window-size=1400,900","--no-sandbox","--headless"]},"goog:chromeOptions":{"args":["--window-size=1400,900","--no-sandbox","--headless"]}}}' | jq '.sessionId') | tr -d '"'
}

# we open the session and keep it running
curl_visit_url() {
    local id=$1
    local url=$2
    echo "http://$HOST/webdriver/session/$id/url"
    echo $(curl http://$HOST/webdriver/session/$id/url -d '{"url":"'$url'"}' | jq '')
}

for i in {1..5}
do
    id=$(curl_new_session)
    echo $id
    curl_visit_url $id 'http://monday.com' &
    sleep 0.5
    echo '.'
done
This specific site (monday.com) uses too much CPU, but this can happen with other sites as well.
Question
Considering that we encounter such websites from time to time, what's the best way to handle them?
We want the sessions to be kept alive and not closed after opening them.

How do I check that a docker host is in swarm mode?

After executing this:
eval $(docker-machine env mymachine)
How do I check if the docker daemon on mymachine is a swarm manager?
To check general swarm membership, my preferred method is to use the formatted output from docker info. The possible values of this are currently inactive, pending, active, locked, and error:
case "$(docker info --format '{{.Swarm.LocalNodeState}}')" in
inactive)
echo "Node is not in a swarm cluster";;
pending)
echo "Node is not in a swarm cluster";;
active)
echo "Node is in a swarm cluster";;
locked)
echo "Node is in a locked swarm cluster";;
error)
echo "Node is in an error state";;
*)
echo "Unknown state $(docker info --format '{{.Swarm.LocalNodeState}}')";;
esac
To check for manager status, rather than just a node in a cluster, the field you want is .Swarm.ControlAvailable:
docker info --format '{{.Swarm.ControlAvailable}}'
That will output "true" for managers, and "false" for any node that is a worker or not in a swarm.
To identify worker nodes, you can combine the two:
if [ "$(docker info --format '{{.Swarm.LocalNodeState}}')" = "active" \
-a "$(docker info --format '{{.Swarm.ControlAvailable}}')" = "false" ]; then
echo "node is a worker"
else
echo "node is not a worker"
fi
You could also use docker info to see the value of the Swarm property (inactive or active).
For example:
function isSwarmNode() {
    if [ "$(docker info | grep Swarm | sed 's/Swarm: //g')" == "inactive" ]; then
        echo false;
    else
        echo true;
    fi
}
I don't have a swarm node handy at the moment, but it looks as if you could simply run something like docker node ls. When targeting a docker daemon that is not in swarm mode, that results in:
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
And it returns a nonzero exit code
$ echo $?
1
So the test would look something like:
if docker node ls > /dev/null 2>&1; then
  echo this is a swarm node
else
  echo this is a standalone node
fi
In addition to larsks' answer, if you run docker node ls when pointing to a worker node, you'll get the following message:
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
You can use this to differentiate between worker nodes and nodes not in a swarm at all.
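For example, a rough way to put the two cases together (a sketch only; it keys off the exact error strings quoted above, which may change between Docker versions):
# Classify the current node using the exit status and error text of "docker node ls".
output=$(docker node ls 2>&1)
if [ $? -eq 0 ]; then
    echo "manager node"
elif echo "$output" | grep -q "Worker nodes can't be used"; then
    echo "worker node"
else
    echo "not part of a swarm"
fi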
Make sure your Docker environment variables are set properly:
$ env | grep DOCKER_
Compare the URL and port values with the output from:
$ docker-machine ls
Select the swarm master machine name, and you can reset the environment variables using:
$ eval $(docker-machine env your_master_machine_name)
Once the environment variables are set properly, the command
$ docker info | egrep '^Swarm: ' | cut -d ' ' -f2
should give the correct result.
To get the IP address of a manager from any node (either worker or manager) using bash you can do:
read manager_ip _ <<<$(IFS=':'; echo $(docker info --format "{{ (index .Swarm.RemoteManagers 0).Addr }}"))
echo "${manager_ip}"
As mentioned above, the most direct way to identify if the current node is a manager is by using:
docker info --format '{{.Swarm.ControlAvailable}}'

How to know if my program is completely started inside my docker with compose

In my CI chain I execute end-to-end tests after a "docker-compose up". Unfortunately, my tests often fail because, even if the containers are properly started, the programs contained in them are not.
Is there an elegant way to verify that my setup is completely started before running my tests?
You could poll the required services to confirm they are responding before running the tests.
curl has built-in retry logic, or it's fairly trivial to build retry logic around some other type of service test.
#!/bin/bash
await(){
  local url=${1}
  local seconds=${2:-30}
  curl --max-time 5 --retry 60 --retry-delay 1 \
       --retry-max-time ${seconds} "${url}" \
    || exit 1
}
docker-compose up -d
await http://container_ms1:3000
await http://container_ms2:3000
run-ze-tests
The alternative to polling is an event-based system.
If all your services push notifications to an external service (scaeda gave the example of a log file, or you could use something like Amazon SNS), your services can emit a "started" event. Then you can subscribe to those events and run whatever you need once everything has started.
Docker 1.12 did add the HEALTHCHECK build command. Maybe this is available via Docker Events?
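If your image does define a HEALTHCHECK, one option is to poll docker inspect until the reported status is healthy. This is only a sketch: wait_healthy is a made-up helper, and it assumes the container actually has a healthcheck configured.
# Wait (up to ~60 seconds) for a container with a HEALTHCHECK to report "healthy".
wait_healthy() {
    local container="$1"
    for i in $(seq 1 60); do
        status=$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null)
        [ "$status" = "healthy" ] && return 0
        sleep 1
    done
    echo "timed out waiting for $container to become healthy" >&2
    return 1
}
docker-compose up -d
wait_healthy container_ms1 && run-ze-tests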
If you have control over the Docker engine in your CI setup, you could execute docker logs [Container_Name] and read out the last line, which could be emitted by your application.
RESULT=$(docker logs [Container_Name] 2>&1 | grep [Search_String])
logs output example:
Agent pid 13
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
#host SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
parse specific line:
RESULT=$(docker logs ssh_jenkins_test 2>&1 | grep Enter)
result:
Enter passphrase (empty for no passphrase): Enter same passphrase again: Identity added: id_rsa (id_rsa)

docker exec command doesn't return after completing execution

I started a docker container based on an image which has a file "run.sh" in it. Within a shell script, I use docker exec as shown below:
docker exec <container-id> sh /test.sh
test.sh completes execution, but docker exec does not return until I press Ctrl+C. As a result, my shell script never ends. Any pointers to what might be causing this?
I could get it working by adding the -it parameters:
docker exec -it <container-id> sh /test.sh
Mine works like a charm with this command. Maybe you only forgot the path to the binary (/bin/sh)?
docker exec 7bd877d15c9b /bin/bash /test.sh
File location at
/test.sh
File Content:
#!/bin/bash
echo "Hi"
echo
echo "This works fine"
sleep 5
echo "5"
Output:
ArgonQQ#Terminal ~ docker exec 7bd877d15c9b /bin/bash /test.sh
Hi
This works fine
5
ArgonQQ#Terminal ~
My case is a script a.sh with content like:
php test.php &
If I execute it like:
docker exec container1 a.sh
it also never returned.
After half a day of googling and trying, I changed a.sh to:
php test.php >/tmp/test.log 2>&1 &
It works!
So it seems related to stdin/stdout/stderr. Please try adding:
>/tmp/test.log 2>&1
Note that my test.php is an endless-loop script that monitors a specified process; if the process is down, it restarts it. So test.php will never exit.
As described here, this "hanging" behavior occurs when you have processes that keep stdout or stderr open.
To prevent this from happening, each long-running process should:
be executed in the background, and
close both stdout and stderr or redirect them to files or /dev/null.
I would therefore make sure that any processes already running in the container, as well as the script passed to docker exec, conform to the above.
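As an illustration, a test.sh along these lines should let docker exec return promptly (a sketch only; long_running_daemon and the log path are made-up placeholders):
#!/bin/bash
# Start the long-running process detached from the exec session:
# backgrounded, with stdin closed and stdout/stderr redirected away.
nohup long_running_daemon >/var/log/daemon.log 2>&1 </dev/null &
echo "daemon started with PID $!"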
OK, I got it. After running:
docker stop a590382c2943
docker start a590382c2943
things are OK again:
docker exec -ti a590382c2943 echo "5"
now returns immediately (adding -it or not makes no difference).
Actually, in my program the daemon held stdin, stdout, and stderr open, so I changed my Python daemon as follows, and things work like a charm:
import os
import sys
import time

if __name__ == '__main__':
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # exit first parent
            os._exit(0)
    except OSError, e:
        print "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        os._exit(0)

    # decouple from parent environment
    #os.chdir("/")
    os.setsid()
    os.umask(0)

    # redirect stdin, stdout and stderr to /dev/null
    si = file('/dev/null', 'r')
    so = file('/dev/null', 'a+')
    se = file('/dev/null', 'a+', 0)
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    # do second fork
    while True:
        try:
            pid = os.fork()
            if pid == 0:
                serve()
            if pid > 0:
                print "Server PID %d, Daemon PID: %d" % (pid, os.getpid())
                os.wait()
                time.sleep(3)
        except OSError, e:
            #print "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
            os._exit(0)

Where is a log file with logs from a container?

I am running several containers using docker-compose. I can see the application logs with the command docker-compose logs. However, I would like to access the raw log file, for example to send it somewhere. Where is it located? I guess there is a separate log per container (inside the container?), but where can I find it?
A container's logs can be found in:
/var/lib/docker/containers/<container id>/<container id>-json.log
(if you use the default logging driver, which is json-file)
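Each line in that file is a JSON object with a log field, so (assuming root access and that jq is installed) you can follow the raw file directly with something like:
# Follow the raw json-file log and print only the message text.
# -j joins the output, since each "log" value already ends with a newline.
sudo tail -f /var/lib/docker/containers/<container id>/<container id>-json.log | jq -j '.log'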
You can docker inspect each container to see where their logs are:
docker inspect --format='{{.LogPath}}' $INSTANCE_ID
And, in case you were trying to figure out where the logs are in order to manage their collective size, or to adjust parameters of the logging itself, you will find the following relevant.
Fixing the amount of space reserved for the logs
This is taken from Request for the ability to clear log history (issue 1083):
As of Docker 1.8 and docker-compose 1.4, there already exists a method to limit the log size, using the docker-compose log driver and the log-opt max-size option:
mycontainer:
  ...
  log_driver: "json-file"
  log_opt:
    # limit logs to 2MB (20 rotations of 100K each)
    max-size: "100k"
    max-file: "20"
In docker-compose files of version '2', the syntax changed a bit:
version: '2'
...
mycontainer:
  ...
  logging:
    # limit logs to 200MB (4 rotations of 50M each)
    driver: "json-file"
    options:
      max-size: "50m"
      max-file: "4"
(note that in both syntaxes, the numbers are expressed as strings, in quotes)
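For containers started directly with docker run rather than compose, the same limits can be passed as flags (a sketch; myimage stands in for whatever image you run):
# Equivalent log rotation settings for a container started with docker run.
docker run --log-driver json-file --log-opt max-size=50m --log-opt max-file=4 myimage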
Possible issue with docker-compose logs not terminating
issue 1866: command logs doesn't exit if the container is already stopped
To see how much space each container's log is taking up, use this:
docker ps -qa | xargs docker inspect --format='{{.LogPath}}' | xargs ls -hl
(you might need a sudo before ls).
docker inspect <containername> | grep log
On Windows, the default location is: C:\ProgramData\Docker\containers\<container-id>-json.log.
Here is the location for
Windows 10 + WSL 2 (Ubuntu 20.04), Docker version 20.10.2, build 2291f61
Let's say
DOCKER_ARTIFACTS == \\wsl$\docker-desktop-data\version-pack-data\community\docker
The container logs can then be found in:
DOCKER_ARTIFACTS\containers\[Your_container_ID]\[Your_container_ID]-json.log
To directly view the logfile in less, I use:
docker inspect $1 | grep 'LogPath' | sed -n "s/^.*\(\/var.*\)\",$/\1/p" | xargs sudo less
Run it as ./viewLogs.sh CONTAINERNAME.
As of 8/22/2018, the logs can be found in:
/data/docker/containers/<container id>/<container id>-json.log
To see the size of the logs per container, you can use this bash command:
for cont_id in $(docker ps -aq); do cont_name=$(docker ps | grep $cont_id | awk '{ print $NF }') && cont_size=$(docker inspect --format='{{.LogPath}}' $cont_id | xargs sudo ls -hl | awk '{ print $5 }') && echo "$cont_name ($cont_id): $cont_size"; done
Example output:
container_name (6eed984b29da): 13M
elegant_albattani (acd8f73aa31e): 2.3G
