Docker - `top`-like real-time metrics in the command line

When running Docker containers, how can I get real-time metrics for all my running containers?
I'd like to see memory, CPU and network usage in real time, like the top command on Linux.

You have some tools from Docker itself to start with:
docker stats
https://docs.docker.com/engine/reference/commandline/stats/
docker top
https://docs.docker.com/engine/reference/commandline/top/
Keep in mind that you can pass various ps options to docker top.
An example
$ docker top b753f4832fb -o pid,cmd
will show something like
PID CMD
6103 /usr/bin/vi
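For the live, top-like view, docker stats is the closest match; a small sketch showing how to trim its columns (the --format string is optional, and --no-stream would print a single snapshot instead of a continuously updating view):
$ docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"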

I found a nice tool to help you with this requirement.
ctop (github link). A top-like interface for container metrics. It works really well and is easy to use.
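If you'd rather not install it on the host, the ctop README also shows running it as a container (the image name below is the one documented there, so double-check it against the current README):
docker run --rm -ti \
  --name ctop \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  quay.io/vektorlab/ctop:latest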
Note - you can also use docker stats, but it has fewer options and a simpler display.
I hope this helps!

Related

docker compose pull - suppress per-layer status information to clean up terminal output when there are many layers

Moving from docker-compose to the newer built-in docker compose, the output is much more verbose and becomes a problem when there are a lot of container layers and I deploy over an SSH client. If there aren't enough vertical lines, the SSH terminal scrolls at an unreadable speed.
The original docker-compose showed the data in a single line per service and looked clean.
With the new docker compose, the output looks fine when there is enough vertical terminal space. If the user does not have enough screen resolution, it pushes the previous output off the screen and scrolls it out of the terminal buffer, creating many thousands of lines.
One workaround would be to pull each service individually with something like
docker compose ps --services | xargs -n 1 docker compose pull
However, I was hoping there was a flag to just suppress the layer output and show a single status line per service, like the older docker-compose.
I am not looking to make it quiet (-q / --quiet), just not messy.
Reference: https://docs.docker.com/engine/reference/commandline/compose_pull/
The reference docs don't seem to offer a solution here. I was curious whether there is another method (without going back to the older docker-compose).
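A sketch of the per-service workaround I mean, combined with --quiet per service so each one at least gets its own one-line status (not quiet overall; it assumes service names contain no whitespace):
for svc in $(docker compose ps --services); do
    printf 'Pulling %s... ' "$svc"
    docker compose pull --quiet "$svc" && echo done
done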

Recover docker container's run arguments

I often find myself needing to re-create a container with minor modifications to the arguments originally passed to docker run (things like changing published ports, network, or memory limits).
Now I am making images and running them in place of old containers.
This works fine, but I don't always have the original docker run parameters saved, and sometimes (especially when there is a lot to define) it becomes a pain to recover them.
Is there any way to recover the docker run arguments from an existing container?
Sorry for being a couple of years late, but I had a similar question and hadn't found a satisfying answer, so I still needed to find my own way out.
I've found two sources addressing the issue:
A gist
To run, save this to a file, e.g. run.tpl and do docker inspect --format "$(<run.tpl)" name_or_id_of_running_container
A docker image
Quick run:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
Both solutions are quite simple to use, but the second one failed to generate the command for an Nginx container because it did not manage to quote the command properly, like this: "nginx" "-g" "daemon off;"
So, I focused on the first solution, which is a golang template intended to feed the --format parameter of docker inspect. I liked it because it was kind of simple, elegant, and no other tool needed.
I've made some improvements in my forked gist and notified the original author about it.
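If you only need a handful of the original settings rather than the whole command line, a much smaller format string already goes a long way; a minimal sketch using standard docker inspect fields (adjust the fields to whatever you need to recover):
docker inspect --format 'image:  {{.Config.Image}}
cmd:    {{join .Config.Cmd " "}}
ports:  {{range $p, $b := .HostConfig.PortBindings}}{{$p}} {{end}}
memory: {{.HostConfig.Memory}}' name_or_id_of_running_container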
A couple of answers to this. First, run your containers using docker-compose; then you can just run the compose files and retain all your configuration. Obviously compose is designed for multi-container applications, but it is massively underrated for single-container use cases with complex run arguments.
The second is to put your run command into a LABEL on the image. Take a look at Label Schema's docker.cmd etc. Then you can easily retrieve it from the image (or from your Dockerfile).
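A sketch of that label approach (the image name, port, and run command are made-up placeholders; org.label-schema.docker.cmd is the Label Schema key mentioned above).
In the Dockerfile:
FROM nginx:alpine
LABEL org.label-schema.docker.cmd="docker run -d -p 8080:80 --name web myorg/web"
Read it back later:
docker inspect --format '{{index .Config.Labels "org.label-schema.docker.cmd"}}' myorg/web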
The best way to do this is not to type the commands manually. Put them into a shell script - a .sh file on Linux/Mac, or a .cmd file on Windows. Then you just run the script to create your container, you never have to worry about re-typing the commands and options, and you'll never get them wrong.
Personally, I write my scripts as "npm scripts" in my package.json file, but the same thing can be done with any tool that can run a command-line program with arguments.
I do this along with a few other tricks to make sure I never fail to build my images or run my containers. It makes life with Docker so much easier. :)
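A minimal sketch of such a script (the name, network, memory limit, port, and image are all placeholders):
#!/usr/bin/env sh
# run.sh - keep the full docker run invocation in version control next to the Dockerfile
docker run -d \
  --name myapp \
  --network mynet \
  --memory 512m \
  -p 8080:80 \
  myorg/myapp:latest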
You can use docker inspect to get the container's configuration. Reconstructing the docker run command from that can be somewhat tedious though.
Another option is to search your shell history using either history | grep "docker run" or ctrl+r (if you use bash). That way, you don't need to go out of your way to save the commands but can still recover them quickly.

Slowness in Docker container

I'm using Docker for Mac, and a curl command run from inside a Docker container takes much longer than the same command run directly on my Mac. The container is using the default bridge network.
See below the curl command from inside the container:
The same command from my Mac:
Thanks.
It is a known issue that networking in bridge/NAT mode in Docker is slow. You could use host networking instead; this should also be addressed by the macvlan driver.
For further reference, please look at this bug.
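A sketch of the macvlan approach mentioned above (the parent interface, subnet, and test image are assumptions, and macvlan support on Docker for Mac is limited, so verify it applies to your setup):
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet
docker run --rm --network macnet curlimages/curl:latest \
  -s -o /dev/null -w '%{time_total}\n' https://example.com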
This is known and solved in different ways; please see the benchmarks here: https://github.com/EugenMayer/docker-sync/wiki/4.-Performance
You can also see there that the new :cached mount will not help with application performance, but it can be used with docker-sync to speed up the sync.
:delegated will help with application performance but will still take a while to land in Docker for Mac (d4m).
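For reference, those consistency flags are passed as a suffix on the volume mount; a sketch (the paths and image are placeholders, and the flags are specific to Docker for Mac):
docker run --rm -v "$(pwd)":/app:cached myorg/app     # host's view of the mount is authoritative
docker run --rm -v "$(pwd)":/app:delegated myorg/app  # container's view of the mount is authoritative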
You can try using http://docker-sync.io with the native_osx strategy - it results in a 60-100x speedup, depending on your hardware performance and project size (closer to 100x with bigger projects and/or slower hardware).
I am biased, so you may want to look at the alternatives here: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync - and a detailed write-up of how the strategies differ is here: https://github.com/EugenMayer/docker-sync/wiki/8.-Strategies
One option is to switch to Docker Machine; Docker Machine doesn't have this problem the way Docker for Mac does.

Chaos Monkey equivalent for Docker?

Is there a tool out there like Chaos Monkey that will randomly shut down Docker containers, so we can test the resiliency of our system?
You can use Pumba. It can be started as a Docker container and will start randomly killing other containers.
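A sketch along the lines of the Pumba README (the image name, interval, and the "re2:^myapp" name filter are assumptions - check the README of the version you use, as the flags have changed over time):
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gaiaadm/pumba --random --interval 30s kill --signal SIGKILL "re2:^myapp"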
Try this:
https://github.com/giantswarm/mizaru
I think this is what you are looking for.
Take a look at Blockade.
You can play with network partitions; under the hood, Blockade uses both iptables and tc.
It's not Docker specific, but testing resiliency is about more than just killing Docker containers.
If you want to degrade network performance, tamper with HTTP requests/responses, etc., you could look at Muxy. I often use it in (e.g. docker compose) setups to test how the overall set of Docker services behaves.
Note that I am the author of the tool.

Is it possible/sane to develop within a Docker container?

I'm new to Docker and was wondering if it is possible (and a good idea) to develop within a Docker container.
I mean create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container then becomes my main machine (for CLI-related work).
When I'm on the go (or when I buy a new machine), I can just push the container and pull it on my laptop.
This solves the problem of having to keep and synchronize your dotfiles.
I haven't started using Docker yet, so is this realistic, or something to avoid (disk space problems and/or push/pull timing issues)?
Yes, it is a good idea with the correct set-up. You'll be running code as if it were in a virtual machine.
The Dockerfile configuration for creating a build system is not very polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, once you have built your own image that creates the users and working environment, it won't be necessary to build it again. Plus, you can mount your own file system with the -v parameter of the run command, so you can have the files you need on both the host and the container. It's versatile.
> sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image-id>
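For the "push the container and pull it on the laptop" part of the question, a minimal sketch (the container, image, and registry names are hypothetical) would be to snapshot the dev container as an image and move it through a registry:
docker commit dev-box myuser/dev-env:latest   # snapshot the container's filesystem as an image
docker push myuser/dev-env:latest
# on the other machine
docker pull myuser/dev-env:latest
docker run -it -v ~/Workspace:/workspace myuser/dev-env:latest bash
Note that data in mounted volumes is not included in the commit.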
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long-running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion it goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
A Docker container for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and instead mount the directory with the sources from the host into the container, as per Alex's suggestion. If you do so, beware of potential performance issues on macOS with boot2docker.
I would not expect much from a workflow of pushing images around to sync between dev environments. IMHO, keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles in the same place to build the image(s) and run the container(s).
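A minimal sketch of such a Makefile kept next to the Dockerfile (the image name and mount paths are placeholders; recall that recipe lines must be indented with tabs):
IMAGE := myorg/devbox

build:
	docker build -t $(IMAGE) .

run: build
	docker run -it --rm -v $(PWD):/src -w /src $(IMAGE) bash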

Resources