How to access task and node information within a Docker service container

With docker service ps I can list the running tasks and their associated nodes, as shown below. I am wondering how each running task can retrieve its own node ID and task name. Is there an environment variable for those? If not, how can I set one?
$ docker service ps appservice
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
0qihejybwf1x appservice.1 appservice:3.0.5 manager1 Running Running 8 seconds
bk658fpbex0d appservice.2 appservice:3.0.5 worker2 Running Running 9 seconds
5ls5s5fldaqg appservice.3 appservice:3.0.5 worker1 Running Running 9 seconds
8ryt076polmc appservice.4 appservice:3.0.5 worker1 Running Running 9 seconds
1x0v8yomsncd appservice.5 appservice:3.0.5 manager1 Running Running 8 seconds
71v7je3el7rr appservice.6 appservice:3.0.5 worker2 Running Running 9 seconds
4l3zm9b7tfr7 appservice.7 appservice:3.0.5 worker2 Running Running 9 seconds
9tfpyixiy2i7 appservice.8 appservice:3.0.5 worker1 Running Running 9 seconds
3w1wu13yupln appservice.9 appservice:3.0.5 manager1 Running Running 8 seconds
8eaxrb2fqpbn appservice.10 appservice:3.0.5 manager1 Running Running 8 seconds

I was able to achieve this by setting environment variables in the docker-compose file. To retrieve the task ID, I added the following lines to the service configuration:
environment:
  - MYTASKID={{.Task.ID}}
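The Go-template placeholders documented for docker service create --env should cover the node ID and task name as well; a minimal sketch (the variable names are just examples):
environment:
  - MYNODEID={{.Node.ID}}
  - MYNODEHOSTNAME={{.Node.Hostname}}
  - MYTASKNAME={{.Task.Name}}
  - MYSERVICENAME={{.Service.Name}}
Inside the container, each task can then read its own values, e.g. echo "$MYNODEID".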

Related

Trouble connecting to my docker app via VM IP

Solved at the bottom. But why do I have to append :4000?
I'm following the Docker get-started guide here: https://docs.docker.com/get-started/part4/
I'm fairly certain I've done everything correctly, but am wondering why I can't connect to view the app after deploying it.
I've set my shell environment to my VM, myvm1, for the following commands.
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099e16249604 beresj/getting-started:part2 "python app.py" 12 seconds ago Up 12 seconds 80/tcp getstartedlab_web.5.y0e2k1r1ev47u24e5iufkyn3i
6f9a24b343a7 beresj/getting-started:part2 "python app.py" 12 seconds ago Up 12 seconds 80/tcp getstartedlab_web.3.1pls3osj3uhsb5dyqtt4ts8j6
docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
beresj/getting-started <none> e290b6208c21 22 hours ago 131MB
docker stack ls
NAME SERVICES ORCHESTRATOR
getstartedlab 1 Swarm
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v18.09.6
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.09.6
docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
vkxx79fh3h85 getstartedlab_web.1 beresj/getting-started:part2 myvm2 Running Running 3 minutes ago
qexbaa3wz0pd getstartedlab_web.2 beresj/getting-started:part2 myvm2 Running Running 3 minutes ago
1pls3osj3uhs getstartedlab_web.3 beresj/getting-started:part2 myvm1 Running Running 3 minutes ago
ucuwen1jrncf getstartedlab_web.4 beresj/getting-started:part2 myvm2 Running Running 3 minutes ago
y0e2k1r1ev47 getstartedlab_web.5 beresj/getting-started:part2 myvm1 Running Running 3 minutes ago
curl 192.168.99.100
curl: (7) Failed to connect to 192.168.99.100 port 80: Connection refused
docker info
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 1
Server Version: 18.09.6
...
Swarm: active
NodeID: 0p9qrax9h3by0fupat8ufkfbq
Is Manager: true
ClusterID: 7vnqdk85n8jx6fqck9k7dv2ka
Managers: 1
Nodes: 2
Default Address Pool: 10.0.0.0/8
...
Node Address: 192.168.99.100
Manager Addresses:
192.168.99.100:2377
...
Kernel Version: 4.14.116-boot2docker
Operating System: Boot2Docker 18.09.6 (TCL 8.2.1)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 989.4MiB
Name: myvm1
I would expect to see what I saw when I just ran it on my local machine, instead of on a VM in a swarm (I think I have the lingo correct?).
Not sure how to check open ports.
Again: this works if I simply remove the stack, unset the docker-machine environment, and just run:
docker stack deploy -c docker-compose.yml getstartedlab
that is, locally rather than on the VM.
Thank you in advance. (Also, I'm new, hence the get-started guide, so I appreciate any help.)
Edit
It works if I append :4000 to the VM IP in the URL, e.g. 192.168.99.100:4000 or 192.168.99.101:4000. It shows the two container IDs listed in 'docker container ls' for myvm1, and the other three are from myvm2. Could anyone tell me why I have to append :4000? Is it because I have ports: "4000:80" in my docker-compose.yml?
Not sure if this will help, but if you use docker inspect <instance_id_here>, you can see which ports are exposed.
Exposed ports aren't open ports. You would need to bind a host port to a container port in the docker-compose.yml in order for it to be open.
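For example, using names from the outputs above (note that for a swarm service the published port is recorded on the service, not on the individual container):
# Ports the image merely EXPOSEs (metadata only; not reachable from the host by itself):
docker inspect --format '{{json .Config.ExposedPorts}}' getstartedlab_web.5.y0e2k1r1ev47u24e5iufkyn3i
# Ports the service actually publishes through the routing mesh:
docker service inspect --format '{{json .Endpoint.Ports}}' getstartedlab_web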

Docker Swarm - Can't pull from private registry ON UPDATE, only works on initial DEPLOY

I am having issues with swarm worker nodes not updating images when doing either an update or a deploy on an existing stack. The stack always works when it is first created.
The deployment below only works correctly on creation.
To reproduce the issue, do the following:
1) Build an image, something like httpd with an index.html, and push it to private-registry.example.com/path/image
2) Create test.yml:
version: '3.4'
services:
  test:
    # Use the build in the current pipeline
    image: private-registry.example.com/path/image
    deploy:
      replicas: 3
3) Deploy the stack:
docker login private-registry.example.com
docker stack deploy --with-registry-auth --compose-file=test.yml test
4) Update the image (change some text) and push it again
5) Re-deploy the stack:
docker login private-registry.example.com
docker stack deploy --with-registry-auth --compose-file=test.yml test
The swarm manager will have the latest image; the worker nodes will not:
docker service ps test_test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
j9497jwolee4 test_test.1 private-registry.example.com/path/image swarm-slave-01.example.com Running Running 5 seconds ago
zsqxx3m0mpk3 \_ test_test.1 private-registry.example.com/path/image swarm-slave-01.example.com Shutdown Shutdown 7 seconds ago
sjjggcqmjcvo test_test.2 private-registry.example.com/path/image swarm-master.example.com Running Running 10 seconds ago
uyey60wv2vsc \_ test_test.2 private-registry.example.com/path/image swarm-slave-01.example.com Shutdown Rejected 20 seconds ago "No such image: private-registry..."
ttzvf4j3whk3 \_ test_test.2 private-registry.example.com/path/image swarm-slave-01.example.com Shutdown Rejected 25 seconds ago "No such image: private-registry..."
x77e3r46zl1j \_ test_test.2 private-registry.example.com/path/image swarm-master.example.com Shutdown Rejected 31 seconds ago "No such image: private-registry..."
5a7lywn6zycz \_ test_test.2 private-registry.example.com/path/image swarm-master.example.com Shutdown Rejected 36 seconds ago "No such image: private-registry..."
qp1acqgthl33 test_test.3 private-registry.example.com/path/image swarm-slave-02.example.com Running Running 11 seconds ago
osyn19o6c30j \_ test_test.3 private-registry.example.com/path/image swarm-master.example.com Shutdown Shutdown 12 seconds ago
WORKAROUND
This pulls the latest image every time without issue:
docker login private-registry.example.com
docker stack rm test
docker stack deploy --with-registry-auth --compose-file=test.yml test
System
Server Version: 18.06.1-ce
Operating System: Ubuntu 18.04.1 LTS
Try skipping the image digest check by passing --resolve-image never. It looks like it works:
docker stack deploy --prune --with-registry-auth --resolve-image never -c docker-compose.yml xxxx
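An alternative worth sketching (an assumption drawn from common swarm practice, not from the answer above): push every build under a unique tag so the task spec changes on each deploy and the workers have no stale image to reuse. The TAG scheme here is hypothetical:
# Tag the build uniquely, e.g. with the short git commit SHA:
TAG=$(git rev-parse --short HEAD)
docker build -t private-registry.example.com/path/image:$TAG .
docker push private-registry.example.com/path/image:$TAG
# Reference the tag in test.yml via variable substitution, i.e.
#   image: private-registry.example.com/path/image:${TAG}
docker login private-registry.example.com
TAG=$TAG docker stack deploy --with-registry-auth --compose-file=test.yml test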

Manager node stuck in preparing state for service

I have a swarm of 3 managers and 3 workers, as below:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
ocnuul8dcbrf4gjtdzv06t0yf * manager1 Ready Active Leader 18.06.0-ce
z297dhtfon50pt4hllu4qfz6i manager2 Ready Active Reachable 18.06.0-ce
ondpdzyq06pd3oysn34p4xi9o manager3 Ready Active Reachable 18.06.0-ce
0bls0g65gee1wbv7wr6rwgbjk worker1 Ready Active 18.06.0-ce
mxtg28slr5rvljrayaf4k1wkk worker2 Ready Active 18.06.0-ce
hqu1436bvbar9srbr34er3fl4 worker3 Ready Active 18.06.0-ce
All managers are available.
However, when I deploy a service on the swarm, the task on manager3 is stuck in the preparing state:
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
lmhpsgeqax13 web-fe.1 nigelpoulton/pluralsight-docker-ci:latest worker1 Running Running 19 minutes ago
nivas3gkh0pa web-fe.2 nigelpoulton/pluralsight-docker-ci:latest worker3 Running Running 19 minutes ago
5plwh46jri3t web-fe.3 nigelpoulton/pluralsight-docker-ci:latest worker2 Running Running 19 minutes ago
l1ykqzgzbgmb web-fe.4 nigelpoulton/pluralsight-docker-ci:latest manager2 Running Running 19 minutes ago
q788hrm6rba9 web-fe.5 nigelpoulton/pluralsight-docker-ci:latest manager3 Running Preparing 21 minutes ago
I could see in /var/log/docker.log on manager3 that it's failing while trying to establish a connection with manager2's IP (192.168.99.105:2377):
7T00:10:54.230023789Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {192.168.99.105:2377 0 <nil>}. Err :connection error: desc = \"transport: Err
7T00:10:54.230049538Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420a86940, TRANSIENT_FAILURE" module=grpc
Since manager1 is the leader, I was expecting manager3 to send the message/signal to manager1 while preparing, but I don't understand why it's trying to connect to manager2.
Could someone help me understand? Also, how do I recover from this and move manager3 from the preparing to the running state?
Regards
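No answer was posted for this one, but a couple of checks are worth sketching (assuming the standard swarm ports; nc may not be preinstalled on minimal hosts). Note that swarm managers maintain raft connections with every other manager, not only the leader, so manager3 dialing manager2 on 2377 is expected; the problem is that the connection is failing:
# From manager3: can it reach the other managers on the swarm ports?
# (2377/tcp cluster management, 7946/tcp+udp node gossip, 4789/udp overlay VXLAN)
nc -zv 192.168.99.105 2377
nc -zv 192.168.99.105 7946
# From any healthy manager: what does the cluster think of manager3?
docker node ls
# If manager3 is wedged, a common recovery path is to demote it from the leader,
# run 'docker swarm leave' on manager3, then rejoin it with a fresh manager token.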

Ports not accessible

I installed Docker and issued a 'docker swarm init' command.
I'm trying to launch a stack using the following command: docker stack deploy -c docker-compose.yml mystack
The docker-compose file can be found here, the first Dockerfile here, and the second here.
The output of 'docker ps' is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f582b3e8d33e tons/ip2country:latest "/bin/sh -c 'java -D…" 8 seconds ago Up 6 seconds 8080/tcp ip2flag_country-service.1.t5rvuqaw8tj7v20u0xo0dgy6x
bbf2c8304f1a tons/ip2flag:latest "/bin/sh -c 'java -D…" 10 seconds ago Up 8 seconds 8080/tcp ip2flag_app.1.z00gz8adj2yshpgimaw2o55d3
cbc7eaace4bf portainer/portainer "/portainer" 39 minutes ago Up 39 minutes 0.0.0.0:9000->9000/tcp portainer
The output of 'docker service ls' is:
ID NAME MODE REPLICAS IMAGE PORTS
ex51pyh1oyyo ip2flag_app replicated 1/1 tons/ip2flag:latest *:8080->8080/tcp
yhbt97lmjqan ip2flag_country-service replicated 1/1 tons/ip2country:latest
Since I'm running this on localhost, I'd expect http://localhost:8080/ to return some sort of data, but it just times out. If I attach to the container and execute something like wget localhost:8080/some/path, it works as expected. So the service is running and listening on port 8080 inside the container, but the port isn't reachable outside of Docker's network. Furthermore, launching with 'docker-compose up' works just fine, but not with 'docker stack deploy'. Any clue what I'm doing wrong?
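No accepted answer appears here, but two quick checks (a sketch, assuming a Linux host) can show whether the swarm actually bound the published port, since with docker stack deploy the ingress routing mesh, not the container, owns it:
# Is anything listening on 8080 on the host? With swarm it is dockerd, not the app process:
sudo ss -tlnp | grep 8080
# Try loopback explicitly; the ingress mesh should answer on every node:
curl -v http://127.0.0.1:8080/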

How to use curl -4 http://localhost in the Docker part 3 tutorial?

Using the Docker tutorial, I'm stuck at this part: https://docs.docker.com/get-started/part3/#run-your-new-load-balanced-app
I use curl -4 http://localhost but I get a curl: (7) Failed to connect to localhost port 80: Connection refused error.
Output of the previous step:
docker service ps getstartedlab_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
kqu5qggifnlm getstartedlab_web.1 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
prhrmm6hpop3 getstartedlab_web.2 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
ytrwy5gxp2rk getstartedlab_web.3 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
mayvauijghbj getstartedlab_web.4 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
r625x2k7n6ta getstartedlab_web.5 s1mpl3/get-started:part2 moby Running Running 29 minutes ago
So the ERROR and PORTS columns are empty.
What should I analyse to fix this issue?
For part 4, when you deploy to your swarm, you get a URL with docker-machine ls.
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v17.10.0-ce
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v17.10.0-ce
In the docker-compose.yml file, change 80:80 to 4000:80.
Then use 192.168.99.100:4000 and it should work.
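A minimal sketch of the relevant section of docker-compose.yml (the service name web follows the tutorial; 4000 is the host port, 80 the container port):
services:
  web:
    image: s1mpl3/get-started:part2
    ports:
      - "4000:80"   # host:container - browse to <node-ip>:4000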