JMeter and Docker

I have seen some blog posts where people talk about JMeter and Docker. I understand that Docker is helpful for setting up a container with all the dependencies, but they all create and run the containers on the same host, so all the containers share that host's resources. It is like running multiple instances of JMeter on the same host, which will not help generate more load.
When a host has 12 GB of RAM, I think one JMeter instance with a 10 GB heap can generate more load than 10 containers each running one JMeter instance.
What is the point of running Docker here?

I made an automated solution that can be easily integrated with Jenkins.
The Dockerfile extends the java:8 image and adds the JMeter build. I will call this image jmeter-base:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz \
&& tar -xvzf apache-jmeter-3.3.tgz \
&& rm apache-jmeter-3.3.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-3.3/
# Add JMeter to the PATH
ENV PATH $JMETER_HOME/bin:$PATH
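A quick sanity check after building it (assuming, as the orchestration script at the end does, that this Dockerfile is saved in a jmeter-base/ sub-directory):
docker build -t jmeter-base jmeter-base
# print the JMeter version to confirm it is on the PATH
docker run --rm jmeter-base jmeter --version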
If you want to use a master-slave setup, this is the JMeter master Dockerfile:
FROM jmeter-base
WORKDIR $JMETER_HOME
# Ports to be exposed from the container for JMeter Master
RUN mkdir scripts
EXPOSE 60000
And this is the JMeter slave Dockerfile:
FROM jmeter-base
# Ports to be exposed from the container for JMeter Slaves/Server
EXPOSE 1099 50000
# Application to run on starting the container
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
-Dserver.rmi.localport=50000 \
-Dserver_port=1099
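The orchestration script further down drives docker-compose, so it also needs a docker-compose.yml that defines a master and a slave service built from these two images. Here is a minimal sketch under my own assumptions (the two Dockerfiles live in jmeter-master/ and jmeter-slave/ sub-directories, container_name keeps the master addressable simply as master, and tty keeps its default bash process alive), written as a heredoc so it stays copy-pasteable:
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  master:
    build: jmeter-master
    container_name: master
    tty: true
  slave:
    build: jmeter-slave
EOF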
Now, with both images built, you need to know all the slave IPs in order to run the tests. This script does all the work:
#!/bin/bash
# Number of slave containers to start (defaults to 1)
COUNT=${1-1}
docker build -t jmeter-base jmeter-base
docker-compose build && docker-compose up -d && docker-compose scale master=1 slave=$COUNT
# Collect the IPs of all slave containers as a comma-separated list
SLAVE_IP=$(docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) | grep slave | awk -F' ' '{print $2}' | tr '\n' ',' | sed 's/.$//')
# Working directory inside the master container
WDIR=$(docker exec -it master /bin/pwd | tr -d '\r')
mkdir -p results
for filename in scripts/*.jmx; do
    NAME=$(basename $filename)
    NAME="${NAME%.*}"
    # Copy the test plan into the master, run it against all slaves, then fetch the results
    eval "docker cp $filename master:$WDIR/scripts/"
    eval "docker exec -it master /bin/bash -c 'mkdir $NAME && cd $NAME && ../bin/jmeter -n -t ../$filename -R$SLAVE_IP'"
    eval "docker cp master:$WDIR/$NAME results/"
done
docker-compose stop && docker-compose rm -f
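Save the script under any name you like (run-tests.sh below is just a placeholder), put your .jmx test plans in a scripts/ directory next to it, and pass the desired number of slaves as the first argument:
# 1 master + 5 slaves; results end up in results/<test-plan-name>/
./run-tests.sh 5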

I came to understand from this post by a friend of mine that we should not be running multiple Docker containers on the same host to generate more load:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/
Instead, the point of Docker here is to quickly set up the JMeter environment.

Related

Trying to copy a script into a detached Docker container, and execute it with docker exec

Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container, you also need to allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.

How to migrate volume data from docker-for-mac to colima

How do I move volumes from docker-for-mac into colima?
The script below will copy all the volumes from docker-for-mac and move them to colima.
Note: there will be a lot of volumes you may not want to copy over since they are temporary ones; you can skip them by adding a | grep "YOUR FILTER" to the for loop, either before or after the awk.
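For instance, to restrict the migration to volumes whose names contain a project prefix (myapp is purely illustrative), the loop header would change like this (shown here with a dry-run body):
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}' | grep "myapp"); do
    echo "would migrate: $volume_name"
done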
The following code makes 2 assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need; now copy and paste this into your terminal. No need to touch anything.
(
# set -x # uncomment to debug
set -e
# ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
tmpconfig=$(mktemp);
# Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
(limactl show-ssh --format config colima | grep -v "^ *ControlPath\|^ *User"; echo " ForwardAgent=yes") > $tmpconfig;
# Set up the root account
ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
# Loop over each volume inside docker-for-mac
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}'); do
    echo $volume_name;
    # Make the volume backup
    DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox sleep infinity;
    DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
    DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
    DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
    # Restore the backup inside colima
    DOCKER_CONTEXT=colima docker volume create $volume_name;
    ssh -F $tmpconfig root@lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
    scp -r -F $tmpconfig /tmp/$volume_name.tar root@lima-colima:/tmp/$volume_name.tar;
    ssh -F $tmpconfig root@lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
done
)

Unable to run systemd inside docker which is being run inside jenkins

I'm trying to get Jenkins to run a Docker container that runs systemd.
So far I've been able to run systemd inside Docker locally, without Jenkins. Here are the steps to do that:
# pull unop/fedora-systemd and create and run the container for it
sudo docker run --cap-add=SYS_ADMIN -e container=docker --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro -t -i unop/fedora-systemd
# on a different terminal window, I can:
# get the container id of the "unop/fedora-systemd" image
sudo docker ps
# then exec bash on it
sudo docker container exec -t -i a98aa2bcd19e bash # where a98aa2bcd19e is the container id found above
# once inside the container, I can run systemd without any problems. examples:
systemctl status
systemctl start dbus.service
systemctl status dbus.service
The above works locally and I am able to run systemd inside the docker container.
The problem I get is when I try the same thing, but inside Jenkins.
I've tried tweaking the Jenkinsfile several times, but none of my previous tries seemed to work. Under Jenkins I always get an error similar to this:
+ systemctl status
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
This is the latest Jenkinsfile that I've tried:
pipeline {
    agent {
        docker {
            image 'unop/fedora-systemd'
            args '--cap-add=SYS_ADMIN -e container=docker --tmpfs /run --tmpfs /tmp -v /sys/fs/cgroup:/sys/fs/cgroup:ro -t -i'
        }
    }
    stages {
        stage('test') {
            steps {
                sh "echo hello world"
                sh "systemctl status"
                sh "systemctl start dbus.service"
                sh "systemctl status dbus.service"
            }
        }
    }
}
On previous iterations of the Jenkinsfile, I tried replacing --cap-add=SYS_ADMIN -e container=docker with --privileged, but that didn't help; I still got the same errors.
Does anyone have an idea of how I can get this to work? Why does the above work locally but not on Jenkins? What am I missing here?
Note: the Jenkins version is 2.150.2, and this is the Dockerfile used by unop/fedora-systemd:
FROM fedora:rawhide
MAINTAINER http://fedoraproject.org/wiki/Cloud
ENV container docker
RUN dnf -y update && dnf clean all
RUN dnf -y install systemd && dnf clean all && \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup", "/tmp", "/run" ]
CMD ["/usr/sbin/init"]
PS: I've seen a related question, but what they were asking is different
I did not know about the related question. Let me point out again that you do not need to run a systemd daemon in a container if it is just about running multiple services in it. Simply overwrite /usr/bin/systemctl with the docker-systemctl-replacement script and register it with CMD ["/usr/bin/systemctl"] as the init process of the container.
That's it. Now you can start any systemctl-controlled service of the operating system. It works to the extent that even provisioning with Ansible/Puppet scripts has no problem at all. Specifically, I am using this to provision Jenkins images with the operating system that the developers like to have as a basis. No privileged mode required.
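A minimal sketch of that approach (entirely illustrative: it assumes you have downloaded the replacement script from the docker-systemctl-replacement project and saved it next to the Dockerfile as systemctl3.py; adjust the file name to whatever version you actually fetch):
cat > Dockerfile <<'EOF'
FROM fedora:rawhide
RUN dnf -y install python3 && dnf clean all
# overwrite systemctl with the docker-systemctl-replacement script
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl
# register the replacement as the init process of the container
CMD ["/usr/bin/systemctl"]
EOF
docker build -t fedora-systemctl-replacement .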
You may try an image that has Fedora with systemd already active, using this command:
docker run -d --name systemd-fedora --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro jrei/systemd-fedora
Then you just need to run:
docker exec -it systemd-fedora /bin/bash
and there you can just install, start and restart any service you need.
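For example, inside that shell (httpd is just an illustrative package/service):
dnf -y install httpd
systemctl start httpd
systemctl status httpd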

Enabling ssh at docker build time

Docker version 17.11.0-ce, build 1caf76c
I need to run Ansible during docker build time to build and deploy some Java projects to WildFly, so that when I run the Docker image I have everything set up. However, Ansible needs ssh to localhost. So far I have been unable to make it work. I've tried different Docker images and I have now ended up with phusion (https://github.com/phusion/baseimage-docker#login_ssh). What I have at the moment:
FROM phusion/baseimage
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd -d &
RUN ssh -tt root@127.0.0.1
CMD ["/bin/bash"]
But I still get
Step 11/12 : RUN ssh -tt root@127.0.0.1
---> Running in cf83f9906e55
ssh: connect to host 127.0.0.1 port 22: Connection refused
The command '/bin/sh -c ssh -tt root@127.0.0.1' returned a non-zero code: 255
Any suggestions what could be wrong? Is it even possible to achieve that?
RUN /usr/sbin/sshd -d &
That will run a process in the background using a shell. As soon as the shell that started the process returns from running the background command, it exits with no more input, and the container used for that RUN command terminates. The only thing saved from a RUN is the change to the filesystem. You do not save running processes, environment variables, or shell state.
Something like this may work, but you may also need a sleep command to give sshd time to finish starting.
RUN /usr/sbin/sshd -d & \
ssh -tt root@127.0.0.1
I'd personally look for another way to do this without sshd during the build. This feels very kludgy and error prone.
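One such alternative (my own sketch, not part of the answer above): since Ansible already runs inside the image being built, it can target localhost over a local connection and skip sshd entirely. Assuming Ansible is installed in the image and a playbook.yml has been copied into the build context, a RUN step invoking something like this would do:
ansible-playbook -i "localhost," -c local playbook.yml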
There are multiple problems in that Dockerfile.
First of all, you can't start a background process in one RUN statement and expect it to still be running in another RUN. Each statement of a Dockerfile runs in a different container, so processes don't persist between them.
Another issue is that 127.0.0.1 is not in known_hosts.
And finally, you must give sshd some time to start.
Here is a working Dockerfile:
FROM phusion/baseimage
CMD ["/sbin/my_init"]
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
RUN cat ~/.ssh/id_rsa.pub | tee -a ~/.ssh/authorized_keys
RUN printf "Host 127.0.0.1\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
RUN sed -i "s/#PermitRootLogin no/PermitRootLogin yes/" /etc/ssh/sshd_config && \
exec ssh-agent bash && \
ssh-add ~/.ssh/id_rsa
RUN /usr/sbin/sshd & sleep 5 && ssh -tt root@127.0.0.1 'ls -al'
CMD ["/bin/bash"]
Anyway, I would rather find another solution than provisioning your image with Ansible in the Dockerfile. Check out ansible-container.

execute a command within docker swarm service

Initialize swarm mode:
root#ip-172-31-44-207:/home/ubuntu# docker swarm init --advertise-addr 172.31.44.207
Swarm initialized: current node (4mj61oxcc8ulbwd7zedxnz6ce) is now a manager.
To add a worker to this swarm, run the following command:
Join the second node:
docker swarm join \
--token SWMTKN-1-4xvddif3wf8tpzcg23tem3zlncth8460srbm7qtyx5qk3ton55-6g05kuek1jhs170d8fub83vs5 \
172.31.44.207:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# start 2 services
docker service create continuumio/miniconda3
docker service create --name redis redis:3.0.6
root#ip-172-31-44-207:/home/ubuntu# docker service ls
ID            NAME        REPLICAS  IMAGE                   COMMAND
2yc1xjmita67  miniconda3  0/1       continuumio/miniconda3
c3ptcf2q9zv2  redis       1/1       redis:3.0.6
As shown above, redis has its replica while miniconda does not seem to be replicated.
I usually log in to the miniconda container to type these commands:
/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser
The problem is that the docker exec -it XXX bash command does not work in swarm mode.
You can execute commands by filtering on the container name, so you don't need to pass the entire swarm container hash; just use the service name, like this:
docker exec $(docker ps -q -f name=servicename) ls
Here is a one-liner for accessing the corresponding instance of the service on localhost:
docker exec -ti stack_myservice.1.$(docker service ps -f 'name=stack_myservice.1' stack_myservice -q --no-trunc | head -n1) /bin/bash
It is tested in PowerShell, but bash should be the same. The one-liner accesses the first instance; replace '1' in both places with the number of the instance you want to access.
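For example, to get into the second replica (the index appears in both places; stack_myservice is kept from the example above):
docker exec -ti stack_myservice.2.$(docker service ps -f 'name=stack_myservice.2' stack_myservice -q --no-trunc | head -n1) /bin/bash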
A more complex example for the distributed case:
#! /bin/bash
set -e

exec_task=$1
exec_instance=$2

# index of the first occurrence of $2 in $1, or -1 if not found
strindex() {
    x="${1%%$2*}"
    [[ "$x" = "$1" ]] && echo -1 || echo "${#x}"
}

# parse the header of `docker service ps` to find the column offsets,
# then extract the task name/ID and the node it is running on
parse_node() {
    read title
    id_start=0
    name_start=`strindex "$title" NAME`
    image_start=`strindex "$title" IMAGE`
    node_start=`strindex "$title" NODE`
    dstate_start=`strindex "$title" DESIRED`
    id_length=name_start
    name_length=`expr $image_start - $name_start`
    node_length=`expr $dstate_start - $node_start`
    read line
    id=${line:$id_start:$id_length}
    name=${line:$name_start:$name_length}
    name=$(echo $name)
    node=${line:$node_start:$node_length}
    echo $name.$id
    echo $node
}

if true; then
    read fn
    docker_fullname=$fn
    read nn
    docker_node=$nn
fi < <( docker service ps -f name=$exec_task.$exec_instance --no-trunc -f desired-state=running $exec_task | parse_node )

echo "Executing in $docker_node $docker_fullname"

# point the docker client at the node running the task and exec into it
eval `docker-machine env $docker_node`
docker exec -ti $docker_fullname /bin/bash
This script can then be used as:
swarm_bash stack_task 1
It just executes bash on the required node.
EDIT 2017-10-06:
Nowadays you can create the overlay network with the --attachable flag to enable any container to join the network. This is a great feature, as it allows a lot of flexibility.
E.g.
$ docker network create --attachable --driver overlay my-network
$ docker service create --network my-network --name web --publish 80:80 nginx
$ docker run --network=my-network -ti alpine sh
(in alpine container) $ wget -qO- web
<!DOCTYPE html>
<html>
<head>
....
You are right, you cannot run docker exec on a docker swarm mode service. But you can still find out which node is running the container and then run exec directly on the container. E.g.
docker service ps miniconda3 # find out, which node is running the container
eval `docker-machine env <node name here>`
docker ps # find out the container id of miniconda
docker exec -it <container id here> sh
In your case you first have to find out why the service cannot get the miniconda container up. Maybe running docker service ps miniconda3 shows some helpful error messages?
Using the Docker API
Right now, Docker does not provide an API like docker service exec or docker stack exec for this. However, there already exist two issues dealing with this functionality:
github.com - moby/moby - Docker service exec
github.com - docker/swarmkit - Support for executing into a task
(Regarding the first issue, it was not immediately clear to me that it deals with exactly this kind of functionality, but Exec for Swarm was closed and marked as a duplicate of the Docker service exec issue.)
Using Docker daemon over HTTP
As mentioned by BMitch on run docker exec from swarm manager, you could also configure the Docker daemon to use HTTP and then connect to every node without the need for ssh. But you should protect this using the TLS authentication that is already integrated into Docker. Afterwards you would be able to execute docker exec like this:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
-H=$HOST:2376 exec $containerId $cmd
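To fill in $containerId you can query the remote daemon the same way; for example (myservice is a placeholder for your service name):
containerId=$(docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H=$HOST:2376 ps -q -f name=myservice | head -n1)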
Using skopos-plugin-swarm-exec
There is a GitHub project which claims to solve the problem and provide the desired functionality by binding the docker daemon:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
datagridsys/skopos-plugin-swarm-exec \
task-exec <taskID> <command> [<arguments>...]
As far as I can see, this works by creating another container on the same node as the container that docker exec should be executed on. On that node, the helper container mounts the docker daemon socket so it can run docker exec locally.
For more information have a look at: skopos-plugin-swarm-exec
Using docker swarm helpers
There is also another project called docker swarm helpers which seems to be more or less a wrapper around ssh and docker exec.
Reference:
https://github.com/docker/swarmkit/issues/1895#issuecomment-302147604
https://github.com/docker/swarmkit/issues/1895#issuecomment-358925313
You can jump onto a Swarm node and list the running docker containers using:
docker container ls
That will give you the container name in a format similar to: containername.1.q5k89uctyx27zmntkcfooh68f
You can then use the regular exec option to run commands on it:
docker container exec -it containername.1.q5k89uctyx27zmntkcfooh68f bash
I created a small script for our docker swarm cluster.
This script takes 3 params: the first is the service you want to connect to, the second is the task you want to run (this can be /bin/bash or any other process you want to run), and the third is optional and fills the -c option for bash or sh.
-n is optional to force it to connect to a specific node.
It retrieves the node that runs the service and runs the command there.
#! /bin/bash
set -e
task=${1}
service=$2
bash=$3
serviceID=$(sudo docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(sudo docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
sudo docker -H $node exec -it $service".1."$serviceID $bash -c "$task"
Note: this requires the docker nodes to accept TCP connections, by exposing the docker daemon on port 2375 on the worker nodes.
For those who have multiple replicas and just want to run a command within any of them, here is another shortcut:
docker exec -it $(docker ps -q -f name=SERVICE_NAME | head -1) bash
I wrote a script to exec a command in docker swarm by service name; it can be used in cron, for example. You can also use bash pipelines, and it passes all params to the docker exec command. But it only works on the same node where the service started. I hope it helps someone.
#!/bin/bash
# swarm-exec.sh
set -e

# find the first argument that is not an option flag and treat it as the
# service name; everything before it is passed to docker exec as options,
# everything after it becomes the command to run
for ((i=1;i<=$#;i++)); do
    val=${!i}
    if [ ${val:0:1} != "-" ]; then
        service_id=$(docker ps -q -f "name=$val");
        if [[ $service_id == "" ]]; then
            echo "Container $val not found!";
            exit 1;
        fi
        docker exec ${@:1:$i-1} $service_id ${@:$i+1:$#};
        exit 0;
    fi
done

echo "Usage: $0 [OPTIONS] SERVICE_NAME COMMAND [ARG...]";
exit 1;
Examples of usage:
./swarm-exec.sh app_postgres pg_dump -Z 9 -F p -U postgres app > /backups/app.sql.gz
echo ls | ./swarm-exec.sh -i app /bin/bash
./swarm-exec.sh -it some_app /bin/bash
The simplest command I found to docker exec into a swarm node (with a swarm manager at $SWARM_MANAGER_HOST) running the service $SERVICE_NAME (for example mystack_myservice) is the following:
SERVICE_JSON=$(ssh $SWARM_MANAGER_HOST "docker service ps $SERVICE_NAME --no-trunc --format '{{ json . }}' -f desired-state=running")
ssh -t $(echo $SERVICE_JSON | jq -r '.Node') "docker exec -it $(echo $SERVICE_JSON | jq -r '.Name').$(echo $SERVICE_JSON | jq -r '.ID') bash"
This assumes that you have ssh access to $SWARM_MANAGER_HOST as well as to the swarm node currently running the service task.
It also assumes that you have jq installed (apt install jq); if you can't or don't want to install it and you have Python installed, you can create the following alias (based on this answer):
alias jq="python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[2].partition(\".\")[-1]])'"
See addendum 2...
Example of a one-liner for entering the database my_db on node master:
DB_NODE_ID=master && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
In case you want to configure, say max_connections:
DB_NODE_ID=master && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
This approach allows you to enter all database nodes (e.g. slaves) just by setting the DB_NODE_ID variable accordingly.
Example for slave s2:
DB_NODE_ID=s2 && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
or
DB_NODE_ID=s2 && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
Put this into your KiTTY or PuTTY configuration for master / s2 under Data/Command and you are set.
As an addendum:
The old, non-swarm-mode version reads simply
docker exec -it master mysql my_db
or, respectively:
DB_ID=master && $(docker exec -it $DB_ID mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $DB_ID mysql tmp
Addendum 2:
As it turned out, the expression docker ps -q -f name=$DB_NODE_ID may return wrong values under certain conditions.
The following approach works correctly:
docker ps -a | grep "_$DB_NODE_ID." | awk '{print $1}'
You may substitute the examples above accordingly.
Addendum 3:
Well, these expressions look awful and are certainly painful to type, so you may want to ease your work. On Linux, everybody knows how to do this. On Windows, you may want to use AHK (AutoHotkey).
This is the AHK hotstring I use:
:*:ii::DB_NODE_ID=$(docker ps -a | grep "_." | awk '{{}print $1{}}') && docker exec -it $id ash{Left 49}
So when I type ii -- which is as simple as it can get -- I get the desired term with the cursor in place and just have to fill in the container name.
I edited the script Brian van Rooijen added above. Because my reputation is too low, I could not add it there.
#! /bin/bash
set -e
service=${1}
shift
task="$*"
echo $task
serviceID=$(docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
serviceName=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Name}}"| head -n1 )
docker -H $node exec -it $serviceName"."$serviceID $task
I had the issue that the container didn't exist with the hard-coded .1. in the execution.
Take a look at my solution: https://github.com/binbrayer/swarmServiceExec.
This approach is based on Docker Machine. I also created a prototype of the script to call containers asynchronously and, as a result, simultaneously.
