GNU Parallel fundamentals: how can I feed some parameter that is not in the last position?

How can I parallelize ping operations like these below using GNU Parallel?
ping -c 5 -S ${AdapterIP[1]} 8.8.8.8
ping -c 5 -S ${AdapterIP[2]} 8.8.8.8
ping -c 5 -S ${AdapterIP[3]} 8.8.8.8
Problem: I am using FreeBSD, where the host argument to ping must come last, so this variant is not allowed:
ping -c 5 8.8.8.8 -S ${AdapterIP[1]}
On Linux systems I just do:
parallel ping -c 5 8.8.8.8 -S ::: "${AdapterIP[@]}"
I have tested this method on FreeBSD:
luis@Balanceador:~$ parallel ping -c 5 -S ::: "${AdapterIP[@]}" 8.8.8.8
usage: ping [-AaDdfnoQqRrv] [-c count] [-G sweepmaxsize] [-g sweepminsize]
[-h sweepincrsize] [-i wait] [-l preload] [-M mask | time] [-m ttl]
[-P policy] [-p pattern] [-S src_addr] [-s packetsize] [-t timeout]
[-W waittime] [-z tos] host
ping [-AaDdfLnoQqRrv] [-c count] [-I iface] [-i wait] [-l preload]
[-M mask | time] [-m ttl] [-P policy] [-p pattern] [-S src_addr]
[-s packetsize] [-T ttl] [-t timeout] [-W waittime]
[-z tos] mcast-group
[... the same usage message is printed two more times, once per remaining input address ...]
... with no luck, as can be seen.
This is probably a dumb question, but I am new to GNU Parallel. Some help, please?
For those interested, this is a possible value of the array that will feed GNU Parallel:
luis@Balanceador:~$ echo ${AdapterIP[@]}
192.168.1.254 192.168.2.254 192.168.3.254
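For completeness, the array can be populated like this in bash (a sketch; the question indexes it from 1, so the assignments start at index 1 explicitly):

```shell
#!/usr/bin/env bash
# Hypothetical setup for the AdapterIP array used above.
# Bash arrays are 0-indexed by default; the question uses indices 1..3,
# so the first slot is given index 1 explicitly.
AdapterIP=([1]=192.168.1.254 [2]=192.168.2.254 [3]=192.168.3.254)
echo "${AdapterIP[@]}"    # prints all three addresses
```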

This is the way I have found to supply a parameter that is not in the last position (the --dry-run flag makes parallel print the commands instead of executing them):
luis@Balanceador:~$ parallel --dry-run sudo ping -c 5 -S {1} 8.8.8.8 ::: "${AdapterIP[@]}"
sudo ping -c 5 -S 192.168.1.254 8.8.8.8
sudo ping -c 5 -S 192.168.2.254 8.8.8.8
sudo ping -c 5 -S 192.168.3.254 8.8.8.8
Note the {1}.
It can be used with multiple input sources as well, each with its own replacement string. Example with {2}:
luis@Balanceador:~$ parallel --dry-run sudo ping -c 5 -S {1} {2} ::: "${AdapterIP[@]}" ::: 8.8.8.8 8.8.4.4
sudo ping -c 5 -S 192.168.1.254 8.8.8.8
sudo ping -c 5 -S 192.168.1.254 8.8.4.4
sudo ping -c 5 -S 192.168.2.254 8.8.8.8
sudo ping -c 5 -S 192.168.2.254 8.8.4.4
sudo ping -c 5 -S 192.168.3.254 8.8.8.8
sudo ping -c 5 -S 192.168.3.254 8.8.4.4
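For reference, the same fan-out can also be done with plain shell job control, which needs no GNU Parallel and works on FreeBSD too (a sketch, assuming the AdapterIP array is set as above):

```shell
#!/usr/bin/env bash
# Sketch: background one ping per source address, then wait for all of them.
# Assumes the AdapterIP array is populated as shown earlier.
for ip in "${AdapterIP[@]}"; do
    ping -c 5 -S "$ip" 8.8.8.8 &
done
wait    # block until every background ping has finished
```

Unlike GNU Parallel, which by default holds each job's output until the job finishes, this interleaves the output of the concurrent pings.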

Related

mpicc takes long time in alpine container

I have a Docker container running alpine:latest in which I installed build-base, openmpi, openmpi-dev, ... and basically everything works fine, except when I run
mpicc -v -time=time_out -o /root/cloud/test /root/cloud/mpi_hello_world.c
The preprocessing stage (-E) takes ~90 sec the first time; a second run takes less than a second. I attached the output of mpicc -v below. Please note that the produced executable runs fine and fast with all my nodes/slots.
To investigate this issue, I looked at the verbose output of mpicc -v [...], and between
...
End of search list.
<---- Between these two lines we spend ~85sec estimated ---->
GNU C17 (Alpine 10.3.1_git20211027) version 10.3.1 20211027 (x86_64-alpine-linux-musl)
...
we lose time. I have a hunch that gcc searches for something which it eventually finds, but I don't know what it is.
Can someone please help me identify the missing element?
Please see the output of the mpicc -v [...] command:
bash-5.1# mpicc -v -time=time_out -o /root/cloud/test /root/cloud/mpi_hello_world.c | tee /root/myFiles/mpicc_verbose
Using built-in specs.
COLLECT_GCC=/usr/bin/gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/lto-wrapper
Target: x86_64-alpine-linux-musl
Configured with: /home/buildozer/aports/main/gcc/src/gcc-10.3.1_git20211027/configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --target=x86_64-alpine-linux-musl --with-pkgversion='Alpine 10.3.1_git20211027' --enable-checking=release --disable-fixed-point --disable-libstdcxx-pch --disable-multilib --disable-nls --disable-werror --disable-symvers --enable-__cxa_atexit --enable-default-pie --enable-default-ssp --enable-cloog-backend --enable-languages=c,c++,d,objc,go,fortran,ada --disable-libssp --disable-libmpx --disable-libmudflap --disable-libsanitizer --enable-shared --enable-threads --enable-tls --with-system-zlib --with-linker-hash-style=gnu
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.3.1 20211027 (Alpine 10.3.1_git20211027)
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/cc1 -quiet -v /root/cloud/mpi_hello_world.c -quiet -dumpbase mpi_hello_world.c -mtune=generic -march=x86-64 -auxbase mpi_hello_world -version -o /tmp/ccdhIMIE.s
GNU C17 (Alpine 10.3.1_git20211027) version 10.3.1 20211027 (x86_64-alpine-linux-musl)
compiled by GNU C version 10.3.1 20211027, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.22-GMP
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
ignoring nonexistent directory "/usr/local/include"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include/fortify
/usr/include
/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/include
End of search list.
GNU C17 (Alpine 10.3.1_git20211027) version 10.3.1 20211027 (x86_64-alpine-linux-musl)
compiled by GNU C version 10.3.1 20211027, GMP version 6.2.1, MPFR version 4.1.0, MPC version 1.2.1, isl version isl-0.22-GMP
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
Compiler executable checksum: 3193578801129247e8be66bd6dd0fe05
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/as -v --64 -o /tmp/ccFKfcmE.o /tmp/ccdhIMIE.s
GNU assembler version 2.37 (x86_64-alpine-linux-musl) using BFD version (GNU Binutils) 2.37
COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/collect2 -plugin /usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccmkCMLh.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --eh-frame-hdr --hash-style=gnu -m elf_x86_64 --as-needed -dynamic-linker /lib/ld-musl-x86_64.so.1 -pie -z relro -z now -o /root/cloud/test /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/Scrt1.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crti.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtbeginS.o -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1 -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../.. /tmp/ccFKfcmE.o -rpath /usr/lib --enable-new-dtags -lmpi -lssp_nonshared -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtendS.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crtn.o
COLLECT_GCC_OPTIONS='-v' '-o' '/root/cloud/test' '-mtune=generic' '-march=x86-64'
Here is also my time_out file:
0.030072 0.006682 cc1 -quiet -v /root/cloud/mpi_hello_world.c -quiet -dumpbase mpi_hello_world.c -mtune=generic -march=x86-64 -auxbase mpi_hello_world -version -o /tmp/ccdhIMIE.s
0.002234 0.0017 as -v --64 -o /tmp/ccFKfcmE.o /tmp/ccdhIMIE.s
0.009905 0.011814 collect2 -plugin /usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/liblto_plugin.so -plugin-opt=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/lto-wrapper -plugin-opt=-fresolution=/tmp/ccmkCMLh.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --eh-frame-hdr --hash-style=gnu -m elf_x86_64 --as-needed -dynamic-linker /lib/ld-musl-x86_64.so.1 -pie -z relro -z now -o /root/cloud/test /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/Scrt1.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crti.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtbeginS.o -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1 -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib -L/lib/../lib -L/usr/lib/../lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib -L/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../.. /tmp/ccFKfcmE.o -rpath /usr/lib --enable-new-dtags -lmpi -lssp_nonshared -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/crtendS.o /usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/crtn.o
There doesn't seem to be a problem in time_out; everything recorded there is fast.
Code is from here: mpi-hello-world/code
Thank you <3
Edit: Please see the Dockerfile
FROM amd64/alpine@sha256:a777c9c66ba177ccfea23f2a216ff6721e78a662cd17019488c417135299cd89 as node
ARG USER=mpiuser
ARG SSH_PATH=/etc/ssh
RUN ping -c 2 8.8.8.8
RUN apk add --no-cache \
    bash \
    build-base \
    libc6-compat \
    openmpi openmpi-dev \
    openssh \
    openrc \
    nfs-utils \
    neovim \
    tini
RUN rm -rf /var/cache/apk
#https://wiki.alpinelinux.org/wiki/Setting_up_a_nfs-server
#https://wiki.alpinelinux.org/wiki/Setting_up_a_SSH_server
RUN adduser -S ${USER} -g "MPI Test User" -s /bin/ash -D ${USER} \
    && echo "${USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \
    && echo ${USER}:* | chpasswd \
    && echo root:* | chpasswd
RUN mkdir ~/.ssh \
    # && rc-update add sshd \
    # && rc-status \
    # touch softlevel because system was initialized without openrc
    && echo "PermitRootLogin yes" >> ${SSH_PATH}/sshd_config \
    && echo "PubkeyAuthentication yes" >> ${SSH_PATH}/sshd_config \
    && echo "StrictHostKeyChecking no" >> ${SSH_PATH}/ssh_config \
    && rm /etc/motd
COPY --chmod=770 ./node_script/helper_node.sh /root/
RUN mkdir ~/cloud
# Using tini - All Tini does is spawn a single child (Tini is meant to be run in a container), and wait for it to exit all the while reaping zombies and performing signal forwarding.
# Docu: https://github.com/krallin/tini
ENTRYPOINT ["/sbin/tini", "-g", "-e 143" ,"-e 137", "--", "/root/helper_node.sh"]
And also /root/helper_node.sh:
# Start sshd, i.e. the ssh server, but gracefully make it shut up
/usr/sbin/sshd -D -d -h /root/.ssh/id_rsa -f /etc/ssh/sshd_config > /dev/null 2>&1
Launch with docker-compose: docker-compose rm -fsv; docker-compose build && docker compose up --scale node=4
Edit 2 - The behaviour reproduces with mpicc -E, mpicc -S and mpicc -c (full commands omitted for readability).
A funny observation, though: mpicc -v -E [...] gives:
mpicc -v -E -o test.i /root/cloud/mpi_hello_world.c
Using built-in specs.
COLLECT_GCC=/usr/bin/gcc
Target: x86_64-alpine-linux-musl
Configured with: /home/buildozer/aports/main/gcc/src/gcc-10.3.1_git20211027/configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --target=x86_64-alpine-linux-musl --with-pkgversion='Alpine 10.3.1_git20211027' --enable-checking=release --disable-fixed-point --disable-libstdcxx-pch --disable-multilib --disable-nls --disable-werror --disable-symvers --enable-__cxa_atexit --enable-default-pie --enable-default-ssp --enable-cloog-backend --enable-languages=c,c++,d,objc,go,fortran,ada --disable-libssp --disable-libmpx --disable-libmudflap --disable-libsanitizer --enable-shared --enable-threads --enable-tls --with-system-zlib --with-linker-hash-style=gnu
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 10.3.1 20211027 (Alpine 10.3.1_git20211027)
COLLECT_GCC_OPTIONS='-v' '-E' '-o' 'test.i' '-mtune=generic' '-march=x86-64'
/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/cc1 -E -quiet -v /root/cloud/mpi_hello_world.c -o test.i -mtune=generic -march=x86-64
ignoring nonexistent directory "/usr/local/include"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/include"
#include "..." search starts here:
#include <...> search starts here:
/usr/include/fortify
/usr/include
/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/include
End of search list.
<------------------ Wait time here ------------------->
COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
COLLECT_GCC_OPTIONS='-v' '-E' '-o' 'test.i' '-mtune=generic' '-march=x86-64'
Temporary fix - If I add
export COMPILER_PATH=/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/libexec/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/bin/
export LIBRARY_PATH=/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../../x86_64-alpine-linux-musl/lib/:/usr/lib/gcc/x86_64-alpine-linux-musl/10.3.1/../../../:/lib/:/usr/lib/
to /etc/profile and then source /etc/profile, everything works as well as one could wish :)
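Instead of hardcoding the version-specific 10.3.1 directories, the same two variables can be derived from gcc itself, so the workaround survives toolchain upgrades (a sketch; -print-prog-name and -print-libgcc-file-name are standard gcc flags, but the resulting paths depend on the installed toolchain):

```shell
#!/bin/sh
# Sketch: derive the directories gcc is slow to search from gcc itself
# rather than hardcoding the Alpine 10.3.1 paths.
export COMPILER_PATH="$(dirname "$(gcc -print-prog-name=cc1)")"
export LIBRARY_PATH="$(dirname "$(gcc -print-libgcc-file-name)")"
```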

same playbook between jenkins and ansible server doesn't work

I am trying to manage my Ansible server with a Jenkins job, and I observe two different results for two similar actions.
This is my playbook:
- hosts: lpdepmld2
  gather_facts: no
  tasks:
    - shell: whoami; hostname; pwd
      register: test
    - debug:
        msg: "{{ test.stdout_lines }}"
Locally on the Ansible server, I execute:
cd /etc/ansible
whoami; hostname; pwd
ansible-playbook /etc/ansible/playbooks/test.yml --private-key /home/ansible/.ssh/id_rsa -u ansible -vvv
And it works as expected. Result:
root
lpansmld1
/etc/ansible
ansible-playbook 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/etc/ansible/library']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible-playbook
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: test.yml **********************************************************************************************************************************************************************************************************************
1 plays in /etc/ansible/playbooks/test.yml
PLAY [lpdepmld2] ************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [shell] ****************************************************************************************************************************************************************************************************************************
task path: /etc/ansible/playbooks/test.yml:6
Tuesday 29 December 2020 16:35:05 +0100 (0:00:00.111) 0:00:00.112 ******
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '/home/ansible\n', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057 `" && echo ansible-tmp-1609256105.11-16196748238057="` echo /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057 `" ) && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, 'ansible-tmp-1609256105.11-16196748238057=/home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057\n', '')
<lpdepmld2> Attempting python interpreter discovery
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, 'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/bin/python\nENDFOUND\n', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '{"osrelease_content": "NAME=\\"Red Hat Enterprise Linux Server\\"\\nVERSION=\\"7.5 (Maipo)\\"\\nID=\\"rhel\\"\\nID_LIKE=\\"fedora\\"\\nVARIANT=\\"Server\\"\\nVARIANT_ID=\\"server\\"\\nVERSION_ID=\\"7.5\\"\\nPRETTY_NAME=\\"Red Hat Enterprise Linux Server 7.5 (Maipo)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:redhat:enterprise_linux:7.5:GA:server\\"\\nHOME_URL=\\"https://www.redhat.com/\\"\\nBUG_REPORT_URL=\\"https://bugzilla.redhat.com/\\"\\n\\nREDHAT_BUGZILLA_PRODUCT=\\"Red Hat Enterprise Linux 7\\"\\nREDHAT_BUGZILLA_PRODUCT_VERSION=7.5\\nREDHAT_SUPPORT_PRODUCT=\\"Red Hat Enterprise Linux\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7.5\\"\\n", "platform_dist_result": ["redhat", "7.5", "Maipo"]}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<lpdepmld2.uem.lan> PUT /root/.ansible/tmp/ansible-local-102513iMMnYg/tmpzX9hsf TO /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057/AnsiballZ_command.py
<lpdepmld2.uem.lan> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee '[lpdepmld2.uem.lan]'
<lpdepmld2.uem.lan> (0, 'sftp> put /root/.ansible/tmp/ansible-local-102513iMMnYg/tmpzX9hsf /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057/AnsiballZ_command.py\n', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057/ /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057/AnsiballZ_command.py && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee -tt lpdepmld2.uem.lan '/bin/sh -c '"'"'/usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057/AnsiballZ_command.py && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '\r\n{"changed": true, "end": "2020-12-29 16:35:06.054473", "stdout": "ansible\\nlpdepmld2\\n/home/ansible", "cmd": "whoami; hostname; pwd", "rc": 0, "start": "2020-12-29 16:35:06.047227", "stderr": "", "delta": "0:00:00.007246", "invocation": {"module_args": {"creates": null, "executable": null, "_uses_shell": true, "strip_empty_ends": true, "_raw_params": "whoami; hostname; pwd", "removes": null, "argv": null, "warn": true, "chdir": null, "stdin_add_newline": true, "stdin": null}}}\r\n', 'Shared connection to lpdepmld2.uem.lan closed.\r\n')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1609256105.11-16196748238057/ > /dev/null 2>&1 && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '', '')
changed: [lpdepmld2] => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"cmd": "whoami; hostname; pwd",
"delta": "0:00:00.007246",
"end": "2020-12-29 16:35:06.054473",
"invocation": {
"module_args": {
"_raw_params": "whoami; hostname; pwd",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"rc": 0,
"start": "2020-12-29 16:35:06.047227",
"stderr": "",
"stderr_lines": [],
"stdout": "ansible\nlpdepmld2\n/home/ansible",
"stdout_lines": [
"ansible",
"lpdepmld2",
"/home/ansible"
]
}
TASK [debug] ****************************************************************************************************************************************************************************************************************************
task path: /etc/ansible/playbooks/test.yml:9
Tuesday 29 December 2020 16:35:06 +0100 (0:00:01.067) 0:00:01.179 ******
ok: [lpdepmld2] => {
"msg": [
"ansible",
"lpdepmld2",
"/home/ansible"
]
}
META: ran handlers
META: ran handlers
PLAY RECAP ******************************************************************************************************************************************************************************************************************************
lpdepmld2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Tuesday 29 December 2020 16:35:06 +0100 (0:00:00.034) 0:00:01.214 ******
===============================================================================
shell ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.07s
/etc/ansible/playbooks/test.yml:6 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
debug ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.04s
/etc/ansible/playbooks/test.yml:9 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Playbook run took 0 days, 0 hours, 0 minutes, 1 seconds
And the /var/log/secure log on the remote server at that moment:
Dec 29 16:35:05 lpdepmld2 sshd[61126]: Accepted publickey for ansible from 192.168.210.101 port 55946 ssh2: RSA SHA256:iZKO/9tfS6am2YAk8JRKDalRRwDNDubC5FAm+UUA9qw
Dec 29 16:35:05 lpdepmld2 sshd[61126]: pam_unix(sshd:session): session opened for user ansible by (uid=0)
So now I'm doing the same thing from Jenkins, through this job:
#!/bin/bash
cd /etc/ansible
whoami; hostname; pwd
ansible-playbook /etc/ansible/playbooks/test.yml --private-key /home/ansible/.ssh/id_rsa -u ansible -vvv
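To compare the two contexts, I can dump and diff their environments, since Jenkins jobs typically run with a reduced environment (a sketch; the file paths are illustrative):

```shell
#!/bin/sh
# Sketch: capture the environment in each context, then compare.
env | sort > /tmp/env_local.txt      # run this in the interactive shell
env | sort > /tmp/env_jenkins.txt    # run this inside the Jenkins job
diff /tmp/env_local.txt /tmp/env_jenkins.txt
```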
The Jenkins result:
Started by user adminlocal
Running as SYSTEM
Building remotely on lpansmld1 in workspace /data/jenkins_agent/workspace/test/test
[test] $ /bin/bash /tmp/jenkins1557636937643197894.sh
root
lpansmld1
/etc/ansible
ansible-playbook 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/etc/ansible/library']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
Parsed /etc/ansible/hosts inventory source with ini plugin
PLAYBOOK: test.yml *************************************************************
1 plays in /etc/ansible/playbooks/test.yml
PLAY [lpdepmld2] ***************************************************************
META: ran handlers
TASK [shell] *******************************************************************
task path: /etc/ansible/playbooks/test.yml:6
Tuesday 29 December 2020 16:38:53 +0100 (0:00:00.106) 0:00:00.106 ******
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '/home/ansible\n', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394 `" && echo ansible-tmp-1609256333.17-248021594072394="` echo /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394 `" ) && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, 'ansible-tmp-1609256333.17-248021594072394=/home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394\n', '')
<lpdepmld2> Attempting python interpreter discovery
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, 'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python2.7\n/usr/bin/python\nENDFOUND\n', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '{"osrelease_content": "NAME=\\"Red Hat Enterprise Linux Server\\"\\nVERSION=\\"7.5 (Maipo)\\"\\nID=\\"rhel\\"\\nID_LIKE=\\"fedora\\"\\nVARIANT=\\"Server\\"\\nVARIANT_ID=\\"server\\"\\nVERSION_ID=\\"7.5\\"\\nPRETTY_NAME=\\"Red Hat Enterprise Linux Server 7.5 (Maipo)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:redhat:enterprise_linux:7.5:GA:server\\"\\nHOME_URL=\\"https://www.redhat.com/\\"\\nBUG_REPORT_URL=\\"https://bugzilla.redhat.com/\\"\\n\\nREDHAT_BUGZILLA_PRODUCT=\\"Red Hat Enterprise Linux 7\\"\\nREDHAT_BUGZILLA_PRODUCT_VERSION=7.5\\nREDHAT_SUPPORT_PRODUCT=\\"Red Hat Enterprise Linux\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7.5\\"\\n", "platform_dist_result": ["redhat", "7.5", "Maipo"]}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<lpdepmld2.uem.lan> PUT /root/.ansible/tmp/ansible-local-105179U75Grh/tmp7Lwygf TO /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/AnsiballZ_command.py
<lpdepmld2.uem.lan> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee '[lpdepmld2.uem.lan]'
<lpdepmld2.uem.lan> (0, 'sftp> put /root/.ansible/tmp/ansible-local-105179U75Grh/tmp7Lwygf /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/AnsiballZ_command.py\n', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/ /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/AnsiballZ_command.py && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '', '')
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee -tt lpdepmld2.uem.lan '/bin/sh -c '"'"'/usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/AnsiballZ_command.py && sleep 0'"'"''
<lpdepmld2.uem.lan> (2, "/usr/bin/python: can't open file '/home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/AnsiballZ_command.py': [Errno 13] Permission denied\r\n", 'Shared connection to lpdepmld2.uem.lan closed.\r\n')
<lpdepmld2.uem.lan> Failed to connect to the host via ssh: Shared connection to lpdepmld2.uem.lan closed.
<lpdepmld2.uem.lan> ESTABLISH SSH CONNECTION FOR USER: ansible
<lpdepmld2.uem.lan> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/ansible/.ssh/id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/a35139d2ee lpdepmld2.uem.lan '/bin/sh -c '"'"'rm -f -r /home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/ > /dev/null 2>&1 && sleep 0'"'"''
<lpdepmld2.uem.lan> (0, '', '')
fatal: [lpdepmld2]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to lpdepmld2.uem.lan closed.\r\n",
"module_stdout": "/usr/bin/python: can't open file '/home/ansible/.ansible/tmp/ansible-tmp-1609256333.17-248021594072394/AnsiballZ_command.py': [Errno 13] Permission denied\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 2
}
PLAY RECAP *********************************************************************
lpdepmld2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Tuesday 29 December 2020 16:38:54 +0100 (0:00:00.956) 0:00:01.063 ******
===============================================================================
shell ------------------------------------------------------------------- 0.96s
/etc/ansible/playbooks/test.yml:6 ---------------------------------------------
Playbook run took 0 days, 0 hours, 0 minutes, 1 seconds
Build step 'Execute shell' marked build as failure
Finished: FAILURE
And the /var/log/secure log on the remote server at that moment:
Dec 29 16:38:53 lpdepmld2 sshd[64613]: Accepted publickey for ansible from 192.168.210.101 port 56150 ssh2: RSA SHA256:iZKO/9tfS6am2YAk8JRKDalRRwDNDubC5FAm+UUA9qw
Dec 29 16:38:53 lpdepmld2 sshd[64613]: pam_unix(sshd:session): session opened for user ansible by (uid=0)
In both cases I can see on the remote server that the connection is made correctly with the private key and the "ansible" user. That is why I don't understand the Jenkins error.
I already tried setting something like this in ansible.cfg:
remote_tmp = /tmp/.ansible-${USER}/tmp
But that doesn't work either.
Does anybody know what the problem is?
Thanks.
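One way to narrow down an "[Errno 13] Permission denied" like the one above is to check the permissions of every directory component of the failing path: a missing execute (search) bit anywhere in the chain denies access to files below it. This is a hypothetical diagnostic helper (`check_path_modes` is not part of the original playbook), assuming GNU coreutils `stat`:

```shell
# Print the mode and owner of each component of a path; a directory
# without the execute (search) bit for the connecting user produces
# exactly the "[Errno 13] Permission denied" seen in the task output.
check_path_modes() {
    p=
    old_ifs=$IFS
    IFS=/
    for comp in $1; do
        [ -n "$comp" ] || continue
        p="$p/$comp"
        [ -e "$p" ] && stat -c '%A %U %n' "$p"
    done
    IFS=$old_ifs
}

# On the remote host, run it against the failing temp path:
check_path_modes /home/ansible/.ansible/tmp
```

Running this over SSH as the `ansible` user on the remote host should show which directory blocks the traversal.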

Docker containers are not available by port

I couldn’t access nginx on port 8080. In general, I can’t access any container.
CentOS Linux 7
Docker version 19.03.13, build 4484c46d9d
containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0ce058600e6 nginx "/docker-entrypoint.…" 9 hours ago Up 9 hours 0.0.0.0:8080->80/tcp web
1fc09982cb16 portainer/portainer-ce "/portainer" 10 hours ago Up About a minute 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp portainer
error
curl http://localhost:8080
curl: (56) Recv failure: Connection reset by peer
nginx container:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of
/etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in
/etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
same with portainer
iptables
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-e080a2054aa8 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-e080a2054aa8 -j DOCKER
-A FORWARD -i br-e080a2054aa8 ! -o br-e080a2054aa8 -j ACCEPT
-A FORWARD -i br-e080a2054aa8 -o br-e080a2054aa8 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9000 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8000 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-e080a2054aa8 ! -o br-e080a2054aa8 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-e080a2054aa8 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
But it does not work.
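Note that the dump above only shows the filter table; with published ports, the DNAT rules live in the nat table, and those are a common casualty when firewalld and Docker interact. A small sketch (`has_dnat_for_port` is a hypothetical helper) that checks an `iptables -t nat -S` dump, read from stdin, for the DNAT rule of a published port:

```shell
# Fails (returns non-zero) unless the ruleset on stdin contains a DNAT
# rule for the given published port.
has_dnat_for_port() {
    grep -Eq -- "--dport $1 .*-j DNAT"
}

# Example against a healthy rule; on the affected host you would pipe in
# the real output: iptables -t nat -S DOCKER | has_dnat_for_port 8080
if has_dnat_for_port 8080 <<'EOF'
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.3:80
EOF
then
    echo "DNAT rule for 8080 present"
else
    echo "DNAT rule for 8080 missing"
fi
```

If the rule is missing there, restarting the Docker daemon recreates it.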
Update: everything works now. The problem was probably in firewalld.

Blocking YouTube videos with iptables

I'm trying to find a way to block YouTube video playback on my kid's Ubuntu computer. I created a shell script that gets YouTube IPs and adds them to iptables so that incoming packets from them are dropped. To do so, I grab the IPs with whois -h whois.radb.net -- '-i origin AS15169'
The problem is that I get not only YouTube IPs but all Google IPs. Blocking them therefore also blocks access to other Google services: Google Search, Google Drive, Gmail, and so on.
I also added a few exceptions with a domain whitelist, but this is still not enough.
Here is the shell script:
#!/bin/bash
IPTABLES=/sbin/iptables
IP6TABLES=/sbin/ip6tables
function block_ips() {
    for THIS_IP in $1; do
        # IPv4 range
        if [[ $THIS_IP =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\/[0-9]+$ ]]; then
            echo "Blocking $THIS_IP"
            $IPTABLES -A funban -s $THIS_IP -j fundrop
        fi
        # IPv4
        if [[ $THIS_IP =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo "Blocking $THIS_IP"
            $IPTABLES -A funban -s $THIS_IP -j fundrop
        fi
        # IPv6 range
        if [[ $THIS_IP =~ ^([0-9A-Fa-f]{0,4}:){0,7}[0-9A-Fa-f]{0,4}\/[0-9]{1,3}$ ]]; then
            echo "Blocking $THIS_IP"
            $IP6TABLES -A funban -s $THIS_IP -j fundrop
        fi
        # IPv6
        if [[ $THIS_IP =~ ^([0-9A-Fa-f]{0,4}:){0,7}[0-9A-Fa-f]{0,4}$ ]]; then
            echo "Blocking $THIS_IP"
            $IP6TABLES -A funban -s $THIS_IP -j fundrop
        fi
    done
}
function accept_ips() {
    for THIS_IP in $1; do
        # IPv4 range
        if [[ $THIS_IP =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\/[0-9]+$ ]]; then
            echo "Allowing $THIS_IP"
            errormessage=$(${IPTABLES} -C funban -s $THIS_IP -j ACCEPT 2>&1)
            if [[ $errormessage =~ 'Bad rule' ]]; then
                echo " Added $THIS_IP"
                $IPTABLES -I funban -s $THIS_IP -j ACCEPT
            fi
        fi
        # IPv4
        if [[ $THIS_IP =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            errormessage=$(${IPTABLES} -C funban -s $THIS_IP -j ACCEPT 2>&1)
            if [[ $errormessage =~ 'Bad rule' ]]; then
                echo " Added $THIS_IP"
                $IPTABLES -I funban -s $THIS_IP -j ACCEPT
            fi
        fi
        # IPv6 range
        if [[ $THIS_IP =~ ^([0-9A-Fa-f]{0,4}:){0,7}[0-9A-Fa-f]{0,4}\/[0-9]{1,3}$ ]]; then
            errormessage=$(${IP6TABLES} -C funban -s $THIS_IP -j ACCEPT 2>&1)
            if [[ $errormessage =~ 'Bad rule' ]]; then
                echo " Added $THIS_IP"
                $IP6TABLES -I funban -s $THIS_IP -j ACCEPT
            fi
        fi
        # IPv6
        if [[ $THIS_IP =~ ^[0-9A-Fa-f]{0,4}:([0-9A-Fa-f]{0,4}:){0,6}[0-9A-Fa-f]{0,4}$ ]]; then
            errormessage=$(${IP6TABLES} -C funban -s $THIS_IP -j ACCEPT 2>&1)
            if [[ $errormessage =~ 'Bad rule' ]]; then
                echo " Added $THIS_IP"
                $IP6TABLES -I funban -s $THIS_IP -j ACCEPT
            fi
        fi
    done
}
function get_ip4() {
    echo "$(dig ${1} A | grep -E '^[^;]' | grep -o -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+')"
}
function get_ip6() {
    echo "$(dig ${1} AAAA | grep -E '^[^;]' | grep -o -E '[0-9A-Fa-f]{0,4}:([0-9A-Fa-f]{0,4}:){0,6}[0-9A-Fa-f]{0,4}')"
}
errormessage=$(${IPTABLES} -n -L funban 2>&1)
if [[ $errormessage =~ 'No chain/target/match by that name' ]]; then
    echo "Create funban (IPv4)"
    $IPTABLES -N funban
fi
errormessage=$(${IP6TABLES} -n -L funban 2>&1)
if [[ $errormessage =~ 'No chain/target/match by that name' ]]; then
    echo "Create funban (IPv6)"
    $IP6TABLES -N funban
fi
errormessage=$(${IPTABLES} -L fundrop 2>&1)
if [[ $errormessage =~ 'No chain/target/match by that name' ]]; then
    echo "Create fundrop (IPv4)"
    $IPTABLES -N fundrop
    $IPTABLES -A fundrop -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
    $IPTABLES -A fundrop -j DROP
fi
errormessage=$(${IP6TABLES} -L fundrop 2>&1)
if [[ $errormessage =~ 'No chain/target/match by that name' ]]; then
    echo "Create fundrop (IPv6)"
    $IP6TABLES -N fundrop
    $IP6TABLES -A fundrop -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
    $IP6TABLES -A fundrop -j DROP
fi
errormessage=$(${IPTABLES} -C INPUT -j funban 2>&1)
if [[ $errormessage =~ 'No chain/target/match by that name' ]]; then
    echo "Filter IPv4"
    $IPTABLES -A INPUT -j funban
fi
errormessage=$(${IP6TABLES} -C INPUT -j funban 2>&1)
if [[ $errormessage =~ 'No chain/target/match by that name' ]]; then
    echo "Filter IPv6"
    $IP6TABLES -A INPUT -j funban
fi
# Flush funban chain
$IPTABLES -F funban
$IP6TABLES -F funban
# Block all Google-related IPs. The AS number "AS15169" is taken from
# http://networktools.nl/asinfo/google.com
block_ips "$(whois -h whois.radb.net -- '-i origin AS15169' | grep -E '^route6?\:')"
# Whitelist the domains needed to keep Google Search etc. working
# (NOTE: the whitelist alone is not sufficient).
while read domain; do
    echo "Whitelisting $domain"
    accept_ips $(get_ip4 $domain)
    accept_ips $(get_ip6 $domain)
done <whitelist.txt
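The address-classification regexes used in block_ips/accept_ips can be sanity-checked in isolation. Below is a small self-test sketch; `classify` is a hypothetical helper mirroring those patterns (with the bare-IPv6 pattern tightened to require at least one colon, so a plain hex string is not misclassified):

```shell
# Classify an address string the same way block_ips/accept_ips route it
# to iptables or ip6tables.
classify() {
    if echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+$'; then
        echo ipv4-range
    elif echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'; then
        echo ipv4
    elif echo "$1" | grep -Eq '^([0-9A-Fa-f]{0,4}:){0,7}[0-9A-Fa-f]{0,4}/[0-9]{1,3}$'; then
        echo ipv6-range
    elif echo "$1" | grep -Eq '^([0-9A-Fa-f]{0,4}:){1,7}[0-9A-Fa-f]{0,4}$'; then
        echo ipv6
    else
        echo unknown
    fi
}

classify 8.8.8.8            # → ipv4
classify 172.217.0.0/16     # → ipv4-range
classify 2a00:1450:4001::   # → ipv6
classify example.com        # → unknown
```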
I am trying to find another, more robust solution based on iptables (my kid is clever enough to circumvent hosts-file blocking, for example).
I thought about the nDPI netfilter module, but it seems it is no longer available as an iptables match on Ubuntu 20.04.
$ iptables -mndpi --help
iptables v1.8.4 (legacy): Couldn't load match `ndip':No such file or directory
Any idea?

rpm scriptlet error does not send proper exit code during install

I have Jenkins build an RPM with post-install scriptlets that perform several tasks during install.
One of these may fail and print an error like
warning: %post(test-1-1.noarch) scriptlet failed, exit status 1
Unfortunately this seems not to result in a proper exit status from the rpm call.
# echo $?
0
Actually I need the rpm install to exit with a status > 0.
Is there some hidden rpm option that I am missing or is there something missing in my scriptlet?
This is what my scriptlets look like (created by fpm, the "effing package manager", via Jenkins, with the variables <%= installPath %> and <%= webUser %> substituted at build time):
#!/usr/bin/env bash
INSTALLPATH=<%= installPath %>
WEBUSER=<%= webUser %>
HTTPDUSER=`ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
mkdir -p $INSTALLPATH/app/cache
setfacl -R -m u:"$HTTPDUSER":rwX -m u:"$WEBUSER":rwX $INSTALLPATH/app/cache
setfacl -dR -m u:"$HTTPDUSER":rwX -m u:"$WEBUSER":rwX $INSTALLPATH/app/cache
mkdir -p $INSTALLPATH/app/logs
setfacl -R -m u:"$HTTPDUSER":rwX -m u:"$WEBUSER":rwX $INSTALLPATH/app/logs
setfacl -dR -m u:"$HTTPDUSER":rwX -m u:"$WEBUSER":rwX $INSTALLPATH/app/logs
mkdir -p $INSTALLPATH/web/cache
setfacl -R -m u:"$HTTPDUSER":rwX -m u:"$WEBUSER":rwX $INSTALLPATH/web/cache
setfacl -dR -m u:"$HTTPDUSER":rwX -m u:"$WEBUSER":rwX $INSTALLPATH/web/cache
sudo su $WEBUSER -c "php $INSTALLPATH/vendor/sensio/distribution-bundle/Resources/bin/build_bootstrap.php"
sudo su $WEBUSER -c "php $INSTALLPATH/app/console cache:warmup"
sudo su $WEBUSER -c "php $INSTALLPATH/app/console assets:install $INSTALLPATH/web"
sudo su $WEBUSER -c "php $INSTALLPATH/app/console doctrine:database:drop --if-exists --no-interaction --force"
sudo su $WEBUSER -c "php $INSTALLPATH/app/console doctrine:database:create --if-not-exists"
sudo su $WEBUSER -c "php $INSTALLPATH/app/console doctrine:schema:create"
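Note that the scriptlet above only propagates the status of its very last command; a failing setfacl or console command in the middle is silently ignored. Adding `set -e` at the top makes the scriptlet abort (with a non-zero status) on the first failure, so the scriptlet itself at least reports failure, whatever rpm then does with it. A minimal demonstration of the difference:

```shell
# Without set -e, a failing command mid-script is ignored and the
# script's status is that of its last command.
sh -c 'false; true'
echo "without set -e: exit $?"     # → exit 0

# With set -e, the script aborts at the first failing command.
sh -c 'set -e; false; true'
echo "with set -e: exit $?"        # → exit 1
```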
The rpm install is called by
#!/bin/sh
#
# allows jenkins to install rpm as privileged user
#
# add the following line to /etc/sudoers:
# jenkins ALL = NOPASSWD: /usr/local/sbin/jenkins-rpm-install
#
artifact=$1
rpm -vv --install --force $artifact
err_code=$?
if [ "$err_code" -gt 0 ]; then exit 1; fi
Any recommendations are welcome.
(This question is a follow up from jenkins shall fail on errors during rpm install job)
rpm (from rpm.org, not rpm5.org) chose to treat errors from %post as non-fatal (i.e. the exit code is zero) quite some years ago. There is no option I am aware of to re-enable the old behaviour. You can, however, detect the warning: message instead of checking the exit code.
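Following that suggestion, the Jenkins wrapper can capture rpm's output and treat the warning as fatal. A sketch (`rpm_output_ok` is a hypothetical helper; the warning text is the one from the question):

```shell
# Fails (returns non-zero) if the rpm output read on stdin contains a
# scriptlet-failure warning.
rpm_output_ok() {
    ! grep -q 'scriptlet failed'
}

# Demo with the warning line from the question; in the wrapper you would
# use: rpm -vv --install --force "$artifact" 2>&1 | tee rpm.log
#      rpm_output_ok <rpm.log || exit 1
if rpm_output_ok <<'EOF'
warning: %post(test-1-1.noarch) scriptlet failed, exit status 1
EOF
then
    echo "install ok"
else
    echo "scriptlet failure detected"
fi
```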
