I'm using Jenkins to launch a simple Ansible pipeline (to create a folder on localhost), as follows:
---
- name: Play1
  hosts: localhost
  become: true
  remote_user: ec2-user
  tasks:
    - name: Create directory
      file:
        path: /home/ec2-user/Newfolder
        state: directory
        group: ec2-user
        owner: ec2-user
        mode: 0700
But when I build the pipeline I get this error message:
TASK [Gathering Facts] *********************************************************
fatal: [localhost]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"failed": true, "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"}
Do you have any idea about this error, please?
I think you are not providing the sudo password, and the ansible_user is not configured for passwordless sudo. This is the relevant part of the error from the snippet you provided:
{"failed": true, "module_stderr": "sudo: a password is required\n",
In the playbook, you have provided become: true and remote_user: ec2-user
become: true
remote_user: ec2-user
So your ec2-user does not have permission to become the root user without a password being supplied. You can run the sudo -l command to check what a user can do with their current sudo configuration.
If you do not want to change the sudoers file, then simply pass the -K flag when executing the playbook to get prompted for the sudo password.
-K, --ask-become-pass
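For example, assuming the playbook is saved as play1.yml (a hypothetical file name), the invocation and prompt would look like this:

ansible-playbook play1.yml -K
BECOME password: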
However, the user ec2-user must be in the sudoers file; without that you will get an error like the one below regardless of supplying the ec2-user password via -K:
ec2-user is not in the sudoers file
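If instead you want ec2-user to have passwordless sudo, a typical sudoers entry (added via visudo) looks like the following; this is only a sketch, and granting NOPASSWD on ALL should be narrowed to the commands you actually need:

ec2-user ALL=(ALL) NOPASSWD: ALL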
I am trying to use molecule to test a very basic role.
(venv) [red@jumphost docker-ops]$ cat roles/fake_role/tasks/main.yml
---
# tasks file for fake_role
- name: fake_role | debug remote_tmp
  debug:
    msg: "remote_tmp is {{ remote_tmp | default('not_set') }}"

- name: who am i
  shell:
    cmd: whoami
  register: whoami_output

- name: debug who am i
  debug:
    msg: "{{ whoami_output }}"
This is my molecule.yml:
(venv) [red@jumphost docker-ops]$ cat roles/fake_role/molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
# platforms:
#   - name: instance
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    privileged: true
    volumes:
      - "/sys/fs/cgroup:/sys/fs/cgroup:rw"
    command: "/usr/sbin/init"
provisioner:
  name: ansible
verifier:
  name: ansible
And when I run ansible --version I can see my ansible.cfg is /etc/ansible/ansible.cfg, and I have set remote_tmp in it.
(venv) [red@jumphost fake_role]$ ansible --version
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
ansible [core 2.11.12]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/red/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/red/GIT/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/red/.ansible/collections:/usr/share/ansible/collections
executable location = /home/russell.cecala/GIT/venv/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.0.3
libyaml = True
(venv) [red@ajumphost fake_role]$ grep remote_tmp /etc/ansible/ansible.cfg
#remote_tmp = ~/.ansible/tmp
remote_tmp = /tmp
When I run ...
(venv) [red@jumphost docker-ops]$ cd roles/fake_role/
(venv) [russell.cecala@jumphost fake_role]$ molecule test
... I get this output ...
... lots of output ...
PLAY [Converge] ****************************************************************
TASK [Include red.fake_role] *****************************************
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
TASK [red.fake_role : fake_role | debug remote_tmp] ******************
ok: [instance] => {
"msg": "remote_tmp is not_set"
}
TASK [red.fake_role : who am i] **************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.
In some cases, you may have been able to authenticate and did not have permissions on the
target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted
in \"/tmp\", for more error information use -vvv. Failed command was:
( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1668100608.7567627-2234645-21513917172593 `\" && echo ansible-tmp-1668100608.7567627-2234645-21513917172593=\"` echo ~/.ansible/tmp/ansible-tmp-1668100608.7567627-2234645-21513917172593 `\" ), exited with result 1",
"unreachable": true}
PLAY RECAP *********************************************************************
instance : ok=1 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
... a lot more output ...
Why wasn't remote_tmp set to /tmp?
UPDATE:
Here is my new molecule.yml:
(venv) [red@ap-jumphost fake_role]$ cat molecule/default/molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    privileged: true
    volumes:
      - "/sys/fs/cgroup:/sys/fs/cgroup:rw"
    command: "/usr/sbin/init"
provisioner:
  name: ansible
  config_options:
    defaults:
      remote_tmp: /tmp
verifier:
  name: ansible
But I am still getting the same error:
(venv) [red@ap-jumphost fake_role]$ molecule test
...
INFO Running default > prepare
WARNING Skipping, prepare playbook not configured.
INFO Running default > converge
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
controller starting with Ansible 2.12. Current version: 3.6.8 (default, Oct 19
2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]. This feature will be
removed from ansible-core in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
PLAY [Converge] ****************************************************************
TASK [Include red.fake_role] *****************************************
/home/red/GIT/venv/lib64/python3.6/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
TASK [red.fake_role : fake_role | debug remote_tmp] ******************
ok: [instance] => {
"msg": "remote_tmp is not_set"
}
TASK [red.fake_role : fake_role | debug ansible_remote_tmp] **********
ok: [instance] => {
"msg": "ansible_remote_tmp is not_set"
}
TASK [red.fake_role : who am i] **************************************
fatal: [instance]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" && echo ansible-tmp-1668192366.5684752-2515263-14400147623756=\"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" ), exited with result 1", "unreachable": true}
PLAY RECAP *********************************************************************
instance : ok=2 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
WARNING Retrying execution failure 4 of: ansible-playbook --inventory /home/red/.cache/molecule/fake_role/default/inventory --skip-tags molecule-notest,notest /home/red/GIT/docker-ops/roles/fake_role/molecule/default/converge.yml
CRITICAL Ansible return code was 4, command was: ['ansible-playbook', '--inventory', '/home/red/.cache/molecule/fake_role/default/inventory', '--skip-tags', 'molecule-notest,notest', '/home/red/GIT/docker-ops/roles/fake_role/molecule/default/converge.yml']
Easier to read error message:
fatal: [instance]: UNREACHABLE! =>
{"changed": false,
"msg": "Failed to create temporary directory.In some cases, you may have been able to
authenticate and did not have permissions on the target directory. Consider
changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\",
for more error information use -vvv.
Failed command was: ( umask 77 && mkdir -p \"` echo /tmp `\"&& mkdir \"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" && echo ansible-tmp-1668192366.5684752-2515263-14400147623756=\"` echo /tmp/ansible-tmp-1668192366.5684752-2515263-14400147623756 `\" ), exited with result 1", "unreachable": true}
I did happen to notice that the
~/.cache/molecule/fake_role/default/ansible.cfg file does have remote_tmp set.
[defaults]
ansible_managed = Ansible managed: Do NOT edit this file manually!
display_failed_stderr = True
forks = 50
retry_files_enabled = False
host_key_checking = False
nocows = 1
interpreter_python = auto_silent
remote_tmp = /tmp
[ssh_connection]
scp_if_ssh = True
control_path = %(directory)s/%%h-%%p-%%r
Molecule produces its own ansible.cfg for its own test use, which does not take into account any existing global or local config file.
Depending on your version/configuration, this file is created in either:
molecule/<scenario>/.molecule/ansible.cfg
/home/<user>/.cache/molecule/<role>/<scenario>/ansible.cfg
The easiest way to see where that file is generated and used on your own platform is to run molecule in --debug mode and inspect the output for the ANSIBLE_CONFIG variable currently in use.
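For example, something along these lines (exact output varies between molecule versions):

molecule --debug test 2>&1 | grep ANSIBLE_CONFIG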
Now, don't try to modify that file, as it will be overwritten at some point anyway. Instead, you have to modify your provisioner environment in molecule.yml.
Below is an example adapted from the documentation for your particular case.
provisioner:
  name: ansible
  config_options:
    defaults:
      remote_tmp: /tmp
You can force regeneration of the cached ansible.cfg file (and other molecule cached/temporary resources) for your scenario by running molecule reset.
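For example, from the role directory:

molecule reset
molecule test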
Please note the warning in the documentation link: some ansible.cfg variables are blacklisted to guarantee molecule works correctly, and those will not be taken into account.
I am using Jenkins to run some Ansible playbooks. One of the simple tests I did was to have the playbook cat the fstab file on a remote server.
The playbook looks like this:
---
- hosts: "test-1-server"
  tasks:
    - name: display /etc/fstab
      shell: cat /etc/fstab
      register: fstab_reg

    - debug: msg="{{ fstab_reg.stdout }}"
In Jenkins, I have a freestyle project; it uses Invoke Ansible Playbook to call the above playbook, and the project credentials were set up with a different user: ansible-user. This is different from the default user-jenkins that runs Jenkins. User ansible-user can ssh to all my servers. I have ansible-user set up in Jenkins Credentials with its private key and passphrase. But when I run the project, I get an error:
[update_fstab] $ /usr/bin/ansible-playbook google/ansible/test-scripts/test/sub_book.yml -i /etc/ansible/hosts -f 5 --private-key /tmp/ssh14117407503194058572.key -u ansible-user
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
fatal: [test-1-server]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ansible-user@test-1-server: Permission denied (publickey).", "unreachable": true}
I am not quite sure what exactly the error is saying, as I have set up the private key and passphrase in ansible-user's credentials. What do the group names in the message mean? Because this runs through Jenkins, I am not sure how to add -vvvv as it suggests.
How can I make Jenkins pass the private key and passphrase to the Ansible playbook?
Thanks!
I think I have found the "issue". After I switched to a different user other than ansible-user, the playbook worked. The interesting thing is that when I created the key pair for ansible-user, I used "-m PEM", which should be fine for Jenkins.
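For reference, a PEM-format key pair of that kind would be generated roughly like this (file name hypothetical); Jenkins credentials generally work best with the traditional PEM format rather than the newer OpenSSH private key format:

ssh-keygen -m PEM -t rsa -b 4096 -f ansible-user_id_rsa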
It looks like this question has been asked before, but I have done what other people suggested and still get the error.
The user I am running Jenkins as is called jenkinsuser.
docker is installed with version: Docker version 20.10.4, build d3cb89e
jenkinsuser is already in the docker group:
$> grep docker /etc/group
docker:x:497:jenkinsuser
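One thing worth checking here: group membership is only picked up at login, so a Jenkins process that was already running when jenkinsuser was added to the docker group keeps its old groups until Jenkins is restarted. A quick check from a fresh login session:

su - jenkinsuser -c 'id -Gn'     # should include "docker"
su - jenkinsuser -c 'docker ps'  # should succeed without sudo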
My ansible script looks like this:
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: "get the username running the deploy"
      local_action: command whoami
      register: username_on_the_host

    - debug: var=username_on_the_host

    - name: "Download tensorflow/serving image"
      shell: docker pull tensorflow/serving
      become: false
and when I invoke it using Jenkins it errors with:
TASK [get the username running the deploy] *************************************
changed: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"username_on_the_host": {
"changed": true,
"cmd": [
"whoami"
],
"delta": "0:00:00.014707",
"end": "2021-03-05 16:29:34.138218",
"failed": false,
"rc": 0,
"start": "2021-03-05 16:29:34.123511",
"stderr": "",
"stderr_lines": [],
"stdout": "jenkinsuser",
"stdout_lines": [
"jenkinsuser"
]
}
}
TASK [Download tensorflow/serving image] ***************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "docker pull tensorflow/serving", "delta": "0:00:00.120564", "end": "2021-03-05 16:29:50.688169", "msg": "non-zero return code", "rc": 1, "start": "2021-03-05 16:29:50.567605", "stderr": "Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=tensorflow%2Fserving&tag=latest: dial unix /var/run/docker.sock: connect: permission denied", "stderr_lines": ["Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=tensorflow%2Fserving&tag=latest: dial unix /var/run/docker.sock: connect: permission denied"], "stdout": "Using default tag: latest", "stdout_lines": ["Using default tag: latest"]}
Am I missing something?
Also, I cannot run as root on the Jenkins server, so running the playbook as root is not an option.
$> stat /var/run/docker.sock
File: ‘/var/run/docker.sock’
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 14h/20d Inode: 558480959 Links: 1
Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 497/ docker)
Access: 2021-03-05 20:01:04.712848585 +0000
Modify: 2021-03-02 22:00:01.367880977 +0000
Change: 2021-03-02 22:00:01.376880979 +0000
Birth: -
I get the following error when I try to ping another docker container I set up as a remote:
"changed": false,
"msg": "Failed to connect to the host via ssh: bind: File name too long\r\nunix_listener: cannot bind to path: /var/jenkins_home/.ansible/cp/jenkins_remote-22-remote_user.15sibyvAohxbTCvh",
"unreachable": true
}
However, when I run the same command using the root user, it works.
I have tried to add the following line to my ansible.cfg file, but it still fails.
control_path = %(directory)s/%%h-%%p-%%r
What could be the issue, please?
I had the same issue: it worked with the root user and printed the same error otherwise. What did help was to add the following:
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
control_path = /dev/shm/cp%%h-%%p-%%r
to the /etc/ansible/ansible.cfg file (create it if it doesn't exist).
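The likely reason this helps: the ControlPath is a unix domain socket, and the kernel caps the socket path length at roughly 104-108 characters, so a long path under /var/jenkins_home/.ansible/cp/... overflows it while /dev/shm/cp... stays short. You can confirm which config file Ansible actually reads with:

ansible --version | grep 'config file'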
I use an ansible script to load & start the https://hub.docker.com/r/rastasheep/ubuntu-sshd/ container.
It starts fine, of course:
bash-4.4$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8bedbd3b7d88 rastasheep/ubuntu-sshd "/usr/sbin/sshd -D" 37 minutes ago Up 36 minutes 0.0.0.0:49154->22/tcp test
bash-4.4$
So after Ansible failed on ssh access to it, I tested manually from a shell; this also works:
bash-4.4$ ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:YtTfuoRRR5qStSVA5UuznGamA/dvf+djbIT6Y48IYD0.
ECDSA key fingerprint is MD5:43:3f:41:e9:89:45:06:6f:f6:42:c4:6a:70:37:f8:1d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
root@8bedbd3b7d88:~# logout
Connection to 172.17.0.2 closed.
bash-4.4$
So the step that fails is reaching it from the Ansible script to set up ssh access with ssh-copy-id.
The Ansible error message is:
fatal: [172.17.0.2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n", "unreachable": true}
---
- hosts: 127.0.0.1
  tasks:
    - name: start docker service
      service:
        name: docker
        state: started

    - name: load and start the container we wanna use
      docker_container:
        name: test
        image: rastasheep/ubuntu-sshd
        state: started
        ports:
          - "49154:22"

    - name: Wait maximum of 300 seconds for ports to be available
      wait_for:
        host: 0.0.0.0
        port: 49154
        state: started

- hosts: 172.17.0.2
  vars:
    passwordadmin: $6$pbE6yznA$AeFIdI.....K0
    passwordroot: $6$TMrxQUxT$I8.JIzR.....TV1
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
  tasks:
    - name: Build test container root user rsa ssh-key
      shell: docker exec test ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
So I cannot even run the step needed to set up ssh. How should this be done?
1st step (ansible task): load the docker container.
2nd step (ansible task, on 172.17.0.2 only): connect to it and set it up.
There will be a 3rd step to run the application on it after that.
The problem occurs only when starting the 2nd step.
OK, after many tries on a second container, my conclusion is that my procedure was bad. What I did to solve it:
build a directory tree separating ./ ./inventory ./includes
build 1 yaml file per host (local, docker, labo)
build 1 main yaml file in ./
build 1 new host file in ./inventory
connect to the docker container with sshpass using the default password (see the sketch below)
changed it
add the host key to the authorized keys of a login dedicated to this usage
installed python (needed for the host to answer ansible; otherwise it randomly produces module errors or refused connections depending on the current action)
set up an ssh login user in sudoers
Then I can run the docker.yaml actions, and only after that can I run the labo.yaml actions.
Thanks for the help; now I'm able to build the missing tools.
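A minimal sketch of that sshpass bootstrap step, assuming the rastasheep/ubuntu-sshd image's documented default root password ("root") and the container IP used above:

sshpass -p root ssh-copy-id -o StrictHostKeyChecking=no root@172.17.0.2
ssh -t root@172.17.0.2 passwd   # change the default password right away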