I have a Docker image called my_image which launches a command and then exits.
When I run the image in a container with docker run --rm my_image, is it possible to measure the execution time of the container?
Edit:
I need to see the timing information after the container has finished, so I can't use the time command.
I was hoping Docker kept some container execution history even when --rm is used, but if that doesn't exist, then tgogos' answer fits my needs.
The goal is to compare the execution time of several images in order to draw conclusions about the different tools they use.
1st approach: time
time docker run --rm --name=test alpine ping -c 10 8.8.8.8
...
real 0m10.261s
user 0m0.228s
sys 0m0.044s
but this will also include the time needed to create and remove the container.
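If you want a rough feel for how much of that is pure container overhead, one option (just a sketch, assuming the alpine image is available locally) is to time a container that does nothing:
time docker run --rm alpine true    # the 'real' value approximates create/start/remove overhead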
2nd approach: container information
The information you are looking for is stored by Docker and can be retrieved with docker container inspect.
docker run --name=test alpine ping -c 10 8.8.8.8
* Notice that I didn't use --rm, because the next step is to inspect the container; you will have to remove it manually afterwards. The timestamps you might be interested in are:
"Created": "2018-08-02T10:16:48.59705963Z",
"StartedAt": "2018-08-02T10:16:49.187187456Z",
"FinishedAt": "2018-08-02T10:16:58.27795818Z"
$ docker container inspect test
[
    {
        "Id": "96e469fdb437814817ee2e9ad2fcdbf468a88694fcc998339edd424f9689f71f",
        "Created": "2018-08-02T10:16:48.59705963Z",
        "Path": "ping",
        "Args": [
            "-c",
            "10",
            "8.8.8.8"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-08-02T10:16:49.187187456Z",
            "FinishedAt": "2018-08-02T10:16:58.27795818Z"
        }
        ...
Duration calculation example (with bash):
You can put these timestamps in bash variables with single commands like this:
START=$(docker inspect --format='{{.State.StartedAt}}' test)
STOP=$(docker inspect --format='{{.State.FinishedAt}}' test)
Then you can convert them to UNIX epoch timestamps (seconds since Jan 01 1970, UTC):
START_TIMESTAMP=$(date --date="$START" +%s)
STOP_TIMESTAMP=$(date --date="$STOP" +%s)
and if you subtract the two, you get the duration in seconds:
echo $(($STOP_TIMESTAMP-$START_TIMESTAMP)) seconds
9 seconds
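Putting the pieces together, a minimal wrapper script could look like this (a sketch, assuming GNU date; the container name timing-test and the image name my_image are placeholders):
#!/bin/bash
# Run a container without --rm, read its start/finish timestamps, print the duration.
NAME=timing-test
docker run --name="$NAME" my_image
START=$(docker inspect --format='{{.State.StartedAt}}' "$NAME")
STOP=$(docker inspect --format='{{.State.FinishedAt}}' "$NAME")
echo "$(( $(date --date="$STOP" +%s) - $(date --date="$START" +%s) )) seconds"
docker rm "$NAME" > /dev/null  # clean up, since --rm was not used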
You have to consider a couple of things:
How to get the execution time of a process, given its PID.
Run docker exec against PID 1, because the container's running time is the running time of its entrypoint.
Combining the two, you get the container's elapsed execution time with:
docker exec -ti <container_id> ps -o etime= -p "1"
This gives you more precision than the STATUS column of docker ps.
If you're running several processes in the container and need the execution time of one of them, just replace "1" with its PID inside the container.
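For example, a rough sketch (assuming the ps inside the container supports these options, as in the command above): first list the processes, then query the one you care about:
docker exec -ti <container_id> ps -o pid,comm          # find the PID of the target process
docker exec -ti <container_id> ps -o etime= -p "<pid>" # elapsed time of that process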
Another possible approach may be to override the default entrypoint with the time command.
$ docker run --rm --name=test --entrypoint=time alpine ping -c 10 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=51.213 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=7.844 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=8.120 ms
64 bytes from 8.8.8.8: seq=3 ttl=37 time=10.859 ms
64 bytes from 8.8.8.8: seq=4 ttl=37 time=10.975 ms
64 bytes from 8.8.8.8: seq=5 ttl=37 time=12.520 ms
64 bytes from 8.8.8.8: seq=6 ttl=37 time=7.994 ms
64 bytes from 8.8.8.8: seq=7 ttl=37 time=8.904 ms
64 bytes from 8.8.8.8: seq=8 ttl=37 time=6.674 ms
64 bytes from 8.8.8.8: seq=9 ttl=37 time=7.132 ms
--- 8.8.8.8 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 6.674/13.223/51.213 ms
real 0m 9.02s
user 0m 0.00s
sys 0m 0.00s
Doing this won't include the container start-up time. You can even combine both:
time docker run --rm --name=test --entrypoint=time alpine ping -c 10 8.8.8.8
and compare the outer and inner timings to see how long just the container start-up and teardown take.
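For a rough sense of the overhead itself, you can compare the figures shown earlier: the full docker run in the 1st approach reported about 10.26s real, while the command timed inside the container here reported about 9.02s, leaving roughly 1.2s for creating, starting and removing the container. These were separate runs, though, so treat the difference only as a ballpark estimate.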
Related
I am kind of new to Docker and I have a Next.js application running in a Docker container. The app uses some environment variables to communicate with the server, and this is where the problem appears. For some reason, when I run the container and pass the env variables, they are created correctly: using docker inspect containerId I can see the correct value. However, when the app makes the actual call to the server, the value (the server API address) is the one that was set at build time.
I build the image and pass the parameter; let's say SERVER_API=127.1.2.3:
docker build -t miTestImage --build-arg SERVER_API=$(SERVER_API) --rm --no-cache myNextjsApp/
By running the following command I can see the correct value was set up.
docker image inspect imageId
BUT, when running the image
docker run -itd -e SERVER_API=http://127.0.3.9:4000 --name myContianerApp -p 5000:5000 --rm imageId
and sending a request to the server, it uses the old value (127.1.2.3) instead of the new one (http://127.0.3.9:4000).
And by doing:
docker inspect myContianerApp
I can see the new value properly set, but I don't understand why it is not being picked up by the app.
I was reading this article, where they describe the same flow; I'm doing the same steps but it is not working for me. Am I missing something?
Any help or clue is really appreciated.
Here's a working example:
FROM busybox
ARG SERVER="google.com"
ENV SERVER=${SERVER}
ENTRYPOINT "/bin/ash" "-c" "ping ${SERVER}"
Then:
docker build --tag=62270940 --file=./Dockerfile .
docker inspect 62270940 --format="{{.Config.Env}}"
[PATH=... SERVER=google.com]
docker run \
--interactive --tty \
62270940
PING google.com (172.217.14.238): 56 data bytes
64 bytes from 172.217.14.238: seq=0 ttl=52 time=13.850 ms
64 bytes from 172.217.14.238: seq=1 ttl=52 time=11.494 ms
docker run \
--interactive --tty \
--env=SERVER="stackoverflow.com" \
62270940
PING stackoverflow.com (151.101.1.69): 56 data bytes
64 bytes from 151.101.1.69: seq=0 ttl=55 time=13.763 ms
64 bytes from 151.101.1.69: seq=1 ttl=55 time=13.800 ms
64 bytes from 151.101.1.69: seq=2 ttl=55 time=25.678 ms
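To double-check which value the running process actually sees, you could also read the environment from inside the container; a quick sketch, where <container_id> is a placeholder for the container started above:
docker exec <container_id> env | grep SERVER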
I am trying to connect two locally developed projects running on docker-compose by using external networking.
On one side I have the 1st application, intended to be exposed. Its Compose file contains the services app and rabbit:
version: '3.4'
services:
  app:
    # ...
  rabbit:
    # ...
networks:
  default:
    driver: bridge
On the other side I have the 2nd application, which is expected to see the 1st one:
version: '3.4'
services:
  app:
    # ...
    networks:
      - paymentservice_default
      - default
networks:
  paymentservice_default:
    external: true
Reaching host rabbit.paymentservice_default is possible.
However, the service app (1st) conflicts with app (2nd):
root@6db86687229c:/app# ping app.paymentservice_default
PING app.paymentservice_default (192.168.80.6) 56(84) bytes of data.
root@6db86687229c:/app# ping app
PING app (192.168.80.6) 56(84) bytes of data.
In general, from the 2nd project's perspective, the hosts app and app.paymentservice_default share the same IP, making app.paymentservice_default undiscoverable.
The question is: is this configuration correct, and can the conflict be avoided without renaming the app services? Why this constraint? Keep in mind that every docker-compose configuration may be shared across projects and developed independently in a micro-services world.
$ docker-compose --version
docker-compose version 1.17.1, build unknown
$ docker --version
Docker version 19.03.4, build 9013bf583a
Thank you.
I used the following configuration on Docker Playground.
paymentservice.docker-compose.yml
version: '3.4'
services:
  app:
    image: busybox
    # keep container running
    command: tail -f /dev/null
  rabbit:
    image: rabbitmq
networks:
  default:
    driver: bridge
other.docker-compose.yml
version: '3.4'
services:
  app:
    image: busybox
    # keep container running
    command: tail -f /dev/null
    networks:
      - paymentservice_default
      - default
networks:
  paymentservice_default:
    external: true
Run both projects
$ COMPOSE_PROJECT_NAME=paymentservice docker-compose -f paymentservice.docker-compose.yml up -d
$ COMPOSE_PROJECT_NAME=other docker-compose -f other.docker-compose.yml up -d
Show Docker IPs
$ docker ps -q | xargs -n 1 docker inspect --format '{{ .Name }} {{range .NetworkSettings.Networks}} {{.IPAddress}}{{end}}' | sed 's#^/##';
I got
other_app_1 172.20.0.2 172.19.0.4
paymentservice_app_1 172.19.0.3
paymentservice_rabbit_1 172.19.0.2
and I pinged paymentservice_app_1 (172.19.0.3) from other_app_1 using app.paymentservice_default
$ docker exec -it other_app_1 ping -c 1 app.paymentservice_default
PING app.paymentservice_default (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.258 ms
--- app.paymentservice_default ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.258/0.258/0.258 ms
and I pinged other_app_1 (172.20.0.2) from other_app_1 using app
$ docker exec -it other_app_1 ping -c 1 app
PING app (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.054 ms
--- app ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.054/0.054 ms
As you can see, I can access the 1st app (of paymentservice.docker-compose.yml) from the 2nd app (of other.docker-compose.yml).
The same works in the other direction. I pinged other_app_1 (172.19.0.4) from paymentservice_app_1 using app.paymentservice_default
$ docker exec -it paymentservice_app_1 ping -c 1 app.paymentservice_default
PING app.paymentservice_default (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: seq=0 ttl=64 time=0.198 ms
--- app.paymentservice_default ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.198/0.198/0.198 ms
I pinged paymentservice_app_1 (172.19.0.3) from paymentservice_app_1 using app
$ docker exec -it paymentservice_app_1 ping -c 1 app
PING app (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.057 ms
--- app ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.057/0.057 ms
As you can see, I can access app service of both projects. If I like to access the service of the same project, I use the default network of the project. If I'd like to access the service of another project, I use the external network shared between both projects.
Note: I would recommend making this more explicit by creating the shared network outside of the projects on the command line
docker network create shared-between-paymentservice-and-other
and declaring it as external in both projects.
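As a sketch (assuming the network shared-between-paymentservice-and-other has been created as above), each project's compose file would then attach its services to it like this:
services:
  app:
    # ...
    networks:
      - default
      - shared-between-paymentservice-and-other
networks:
  shared-between-paymentservice-and-other:
    external: true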
Note: There is still the limitation that service discovery may not work if you have 3 projects with the same service name (e.g. app) in the same (external) network (sort of a namespace). In that case, it might be a better idea to rename your services, use multiple external networks, define aliases or use a totally different approach to discover/identify the Docker containers.
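If you go the alias route, a minimal sketch for the paymentservice project (the alias payment-app is just an illustrative name, not from the original setup) could look like this; any container attached to the shared network can then resolve payment-app without clashing with its own app service:
services:
  app:
    # ...
    networks:
      default:
        aliases:
          - payment-app
networks:
  default:
    driver: bridge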
Afterword
Was that the requirement? I tried to reproduce your issue, but I'm not sure if I did the same as you. For example, I'm not sure where you are running ping. Is root@6db86687229c the Docker host or a Docker container? Which container? I assumed it is the Docker container of the app service of other.docker-compose.yml. Please comment if I'm missing something or have misinterpreted your question, and I will update my answer. Then I can explain in more detail or suggest another approach to service discovery between multiple Docker Compose projects.
Appendix
Cleanup
$ COMPOSE_PROJECT_NAME=other docker-compose -f other.docker-compose.yml down
$ COMPOSE_PROJECT_NAME=paymentservice docker-compose -f paymentservice.docker-compose.yml down
Versions
$ docker --version
Docker version 20.10.0, build 7287ab3
$ docker-compose --version
docker-compose version 1.26.0, build unknown
I am trying to set up a cluster with replicas using docker-compose scale graylog-es-slave=2, but with a version 3 compose file, unlike Docker compose and hostname.
What I am trying to figure out is how to reach a specific node in the replica set.
Here is what I have tried:
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave -c 2
PING graylog-es-slave (172.19.0.4): 56 data bytes
64 bytes from 172.19.0.4: icmp_seq=0 ttl=64 time=0.067 ms
64 bytes from 172.19.0.4: icmp_seq=1 ttl=64 time=0.104 ms
--- graylog-es-slave ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.067/0.085/0.104/0.000 ms
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave.1 -c 2
ping: unknown host
D:\p\liberty-docker>docker exec 706814bf33b2 ping graylog-es-slave_1 -c 2
ping: unknown host
The docker-compose.yml
version: '3'
services:
  graylog-es-slave:
    image: elasticsearch:2
    command: "elasticsearch -Des.cluster.name='graylog'"
    environment:
      ES_HEAP_SIZE: 2g
    deploy:
      replicas: 2 # <-- this is ignored by docker-compose, just putting it here for completeness
Instead of ., use _ (underscore), and add the project name as a prefix (by default this is the name of the directory that holds your docker-compose.yml; I assume here that it is liberty-docker):
ping liberty-docker_graylog-es-slave_1
You can verify this by running docker network ls, finding the right network, and then running docker network inspect <network_id>.
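For example, a sketch (assuming the project name really is liberty-docker, so the project's default network is liberty-docker_default):
docker network inspect liberty-docker_default --format '{{range .Containers}}{{.Name}} {{end}}'
docker exec 706814bf33b2 ping -c 2 liberty-docker_graylog-es-slave_1
docker exec 706814bf33b2 ping -c 2 liberty-docker_graylog-es-slave_2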
When I manually execute a Jenkins job, it would save an annoying step if, after clicking the Build button, the next screen were the console output instead of the project page.
Is there a configuration setting that makes this happen?
There is no native option of that kind, as far as I know. Recent Jenkins builds do offer a shorter way to access the console output: just click on the gray/blue ball to the left of the currently running build.
If you still think one click is an annoying extra step, the next option I'd suggest is the Jenkins CLI. It requires some more one-time effort, though.
Have Java installed on your workstation.
Download jenkins-cli.jar from http://<your-jenkins-host>/jnlpJars/jenkins-cli.jar
Then you can start builds like this:
java -jar jenkins-cli.jar -s http://jenkins-url build buildname -w -s -v -p parameterN=valueN
java -jar jenkins-cli.jar -noKeyAuth -s http://jet:8080 build tst-so -w -s -v -p host2ping=google.com
Started tst-so #17
Started from command line by anonymous
Building in workspace /var/lib/jenkins/jobs/tst-so/workspace
[workspace] $ /bin/sh -xe /tmp/hudson5079113569382475588.sh
+ echo Hello from tst-so job
Hello from tst-so job
+ ping -c 6 google.com
PING google.com (216.58.209.206) 56(84) bytes of data.
64 bytes from bud02s22-in-f14.1e100.net (216.58.209.206): icmp_seq=1 ttl=54 time=51.6 ms
64 bytes from bud02s22-in-f14.1e100.net (216.58.209.206): icmp_seq=2 ttl=54 time=51.9 ms
64 bytes from bud02s22-in-f14.1e100.net (216.58.209.206): icmp_seq=3 ttl=54 time=51.8 ms
64 bytes from bud02s22-in-f14.1e100.net (216.58.209.206): icmp_seq=4 ttl=54 time=51.8 ms
64 bytes from bud02s22-in-f14.1e100.net (216.58.209.206): icmp_seq=5 ttl=54 time=51.8 ms
64 bytes from bud02s22-in-f14.1e100.net (216.58.209.206): icmp_seq=6 ttl=54 time=51.7 ms
--- google.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5009ms
rtt min/avg/max/mdev = 51.684/51.815/51.900/0.306 ms
Finished: SUCCESS
Completed tst-so #17 : SUCCESS
In my example, Jenkins is set up with no authentication, for simplicity.
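If your Jenkins does require authentication, the CLI accepts credentials as well; a sketch, where the user name and API token are placeholders:
java -jar jenkins-cli.jar -s http://jenkins-url -auth <user>:<api-token> build buildname -w -s -v -p parameterN=valueN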
I am looking for a solution to ping a Docker container using its hostname, from another Docker Container.
I tried as follow:
starting the first Docker container:
docker run --rm -ti --hostname=repohost --name=repo repo
starting the second Docker container, linking it to the first, and starting bash:
docker run --rm -ti --hostname=repo2host --link repo:rp repo2 /bin/bash
in the bash session started in repo2:
ping repohost
it hangs without any response.
Can someone tell me if there is a solution for this?
You should be able to ping using the alias you gave in the link command (the part after the :); in your case, ping rp should work.
The following works for me, given a running container called furious_turing:
$ docker run -it --link furious_turing:ft debian /bin/bash
root@06b18931d80b:/# ping ft
PING ft (172.17.0.3): 48 data bytes
56 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.136 ms
56 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.091 ms
56 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.092 ms
^C--- ft ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.091/0.106/0.136/0.000 ms
root@06b18931d80b:/#
If you need to ping it by another name, you can add entries to /etc/hosts with the --add-host argument to docker run.
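For example, a sketch (the IP 172.17.0.3 is the one shown in the ping output above; in practice you would look it up with docker inspect first):
docker run -it --link furious_turing:ft --add-host repohost:172.17.0.3 debian /bin/bash
Inside that container, both ping ft and ping repohost should then resolve.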
One way to achieve what you need would be with WeaveDNS.