Ansible ping fails when I am not using root privileges - Docker

I get the following error when I try to ping another Docker container I set up as a remote host:
"changed": false,
"msg": "Failed to connect to the host via ssh: bind: File name too long\r\nunix_listener: cannot bind to path: /var/jenkins_home/.ansible/cp/jenkins_remote-22-remote_user.15sibyvAohxbTCvh",
"unreachable": true
}
However, when I run the same command using the root user, it works.
I have tried adding the following line to my ansible.cfg file, but it still fails.
control_path = %(directory)s/%%h-%%p-%%r
What could be the issue?

I had the same issue: it worked with the root user and printed the same error otherwise. What helped was adding the following:
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
control_path = /dev/shm/cp%%h-%%p-%%r
to the /etc/ansible/ansible.cfg file (create it if it doesn't exist).
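The usual cause of "unix_listener: cannot bind to path ... File name too long" is that the ControlMaster socket path, built from the non-root user's long home directory plus host, port, and remote user, exceeds the Unix socket path length limit; pointing control_path at a short directory such as /dev/shm keeps it under that limit. After adding the section above you can verify with an ad-hoc ping (the inventory file and host alias here are illustrative):
ansible -i hosts jenkins_remote -m ping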

Related

Failed to connect to the host via ssh: Host key verification failed

I am facing an issue while executing ansible-playbook from Jenkins, like this:
PLAY [centos-slave-02] *********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [centos-slave-02]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
PLAY RECAP *********************************************************************
centos-slave-02 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
but I am able to get a ping-pong response, and each time it asks me to confirm the matching host key in /var/jenkins_home/.ssh/known_hosts:5:
jenkins@c11582cb5024:~/jenkins-ansible$ ansible -i hosts -m ping centos-slave-02
Warning: the ECDSA host key for 'centos-slave-02' differs from the key for the IP address '172.19.0.3'
Offending key for IP in /var/jenkins_home/.ssh/known_hosts:2
Matching host key in /var/jenkins_home/.ssh/known_hosts:5
Are you sure you want to continue connecting (yes/no)? yes
centos-slave-02 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Could anyone please help fix this issue? Thanks in advance.
Your known_hosts file on the jenkins-ansible host already has an entry for the host centos-slave-02. Now that the centos-slave-02 host's identity has changed, a new entry needs to be added, but the existing entry in the file triggers this warning:
Warning: the ECDSA host key for 'centos-slave-02' differs from the key for the IP address '172.19.0.3'
Offending key for IP in /var/jenkins_home/.ssh/known_hosts:2
Matching host key in /var/jenkins_home/.ssh/known_hosts:5
You can either manually edit the /var/jenkins_home/.ssh/known_hosts file to remove the key for the centos-slave-02 host, or run the command below:
ssh-keygen -R centos-slave-02
The workaround in Ansible would be to add this line to ansible.cfg under the [defaults] section:
[defaults]
host_key_checking = False
This will disable HostKeyChecking when making SSH connections.
Make sure you don't use sudo in your Jenkins build Exec command.
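As a quick cleanup for the exact situation in the question, you can remove the stale entries for both the hostname and the IP reported in the warning, or disable host key checking for a single run through the environment instead of editing ansible.cfg:
# remove the stale known_hosts entries for the hostname and the offending IP
ssh-keygen -R centos-slave-02
ssh-keygen -R 172.19.0.3
# or skip host key checking for one run only
ANSIBLE_HOST_KEY_CHECKING=False ansible -i hosts -m ping centos-slave-02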

How to ensure the remote-exec provisioner environment is the same as running ssh directly?

I'm able to run a command in my instance using ssh directly:
ssh -o StrictHostKeyChecking=no -i myid_rsa centos#x.x.x.x sudo docker exec samdom samba-tool user create myuser Passw0rd
Warning: Permanently added 'x.x.x.x' (ECDSA) to the list of known hosts.
User 'myuser' created successfully
I would like to do the same thing using a remote-exec provisioner:
provisioner "remote-exec" {
connection {
type = "ssh"
user = "centos"
...
}
inline = [
...
"sudo docker exec samdom samba-tool user create myuser Passw0rd",
...
However, I get an error:
aws_instance.ad_server[0] (remote-exec): ERROR(<type 'exceptions.ValueError'>): Failed to add user 'myuser': - unable to parse dn string
aws_instance.ad_server[0] (remote-exec): File "/usr/lib/python2.7/dist-packages/samba/netcmd/user.py", line 197, in run
aws_instance.ad_server[0] (remote-exec): gecos=gecos, loginshell=login_shell)
aws_instance.ad_server[0] (remote-exec): File "/usr/lib/python2.7/dist-packages/samba/samdb.py", line 356, in newuser
aws_instance.ad_server[0] (remote-exec): dnsdomain = ldb.Dn(self, self.domain_dn()).canonical_str().replace("/", "")
I'm assuming it's just that the environments are different for each approach, but I'm not sure how to correct this for the Terraform provisioner.
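There is no confirmed fix in this thread, but one way to narrow the gap between the two environments, sketched under the assumption that the difference comes from the shell and environment remote-exec uses, is to run the command through a login shell and reuse exactly the same key and user as the manual ssh call:
provisioner "remote-exec" {
  connection {
    type        = "ssh"
    user        = "centos"
    private_key = file("myid_rsa")   # assumption: same key used in the manual ssh command
    host        = self.public_ip     # assumption: connecting to the instance's public IP
  }
  inline = [
    # run via a login shell so PATH and locale match a direct ssh invocation (assumption)
    "bash -lc 'sudo docker exec samdom samba-tool user create myuser Passw0rd'",
  ]
}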

Running 'docker-compose up' throws permission denied when trying the official Docker sample

I am using Docker 1.13 community edition on a CentOS 7 x64 machine. When I was following a Docker Compose sample from the official Docker tutorial, everything was OK until I added these lines to the docker-compose.yml file:
volumes:
- .:/code
After adding it, I faced the following error:
can't open file 'app.py': [Errno 13] Permission denied. It seems that the problem is due to an SELinux restriction. Following this post, I ran the following command:
su -c "setenforce 0"
to solve the problem temporarily, but running this command:
chcon -Rt svirt_sandbox_file_t /path/to/volume
did not help me.
Finally, I found the correct rule to add to SELinux:
# ausearch -c 'python' --raw | audit2allow -M my-python
# semodule -i my-python.pp
I found it when I opened the SELinux Alert Browser and clicked the 'Details' button on the row related to this error. The more detailed information from SELinux:
SELinux is preventing /usr/local/bin/python3.4 from read access on the
file app.py.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that python3.4 should be allowed read access on the
app.py file by default. Then you should report this as a bug. You can
generate a local policy module to allow this access. Do allow this
access for now by executing:
ausearch -c 'python' --raw | audit2allow -M my-python
semodule -i my-python.pp
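An alternative that avoids a custom policy module, not mentioned in the original answer but standard Docker behaviour on SELinux hosts, is to let Docker relabel the bind-mounted directory itself by appending :z (shared label) or :Z (private label) to the volume entry in docker-compose.yml:
volumes:
  - .:/code:z   # ':z' tells Docker to apply an SELinux label that containers are allowed to read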

Unable to load Ansible playbook to a Docker host (host unreachable)

I'm trying to run: ansible-playbook install_docker.yml
and I keep getting the following error:
TASK [setup] *******************************************************************
fatal: [172.17.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue", "unreachable": true}
My playbook looks like this:
---
- hosts: all
  vars:
    docker_opts: >
      -H unix:///var/run/docker.sock
      -H tcp://0.0.0.0:2375
  remote_user: root
  roles:
    - angstwad.docker.ubuntu
I'm providing the Docker host IP by copying the IP reported by:
docker inspect apacheweb1 | grep IPAddress
How can I reach the Docker host?
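The error message itself suggests the first step; it also helps to confirm that the container actually runs an SSH daemon and accepts root logins at the inspected address before pointing Ansible at it (the inventory file name below is illustrative):
# re-run with verbose SSH debugging, as the error message recommends
ansible-playbook -vvvv -i hosts install_docker.yml
# check that the container accepts SSH as root at the inspected address
ssh root@172.17.0.2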

Error while executing the following commands

When I run the following commands, I get the output below:
sudo docker run ubuntu /bin/echo hello world
WARNING: WARNING: Local (127.0.0.1) DNS resolver found in resolv.conf and containers can't use it. Using default external servers : [8.8.8.8 8.8.4.4]
And when I run docker version, the output is:
mkdir /var/lib/docker/containers: permission denied[/var/lib/docker|a0f30ece] -job initserver() = ERR (1)
2014/03/03 21:49:51 initserver: mkdir /var/lib/docker/containers: permission denied
What is the problem?
My problem was solved by the following: modify the /etc/default/docker file and un-comment the DOCKER_OPTS line:
# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="-dns 8.8.8.8"
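For completeness, a minimal sketch of the change (the -dns spelling matches the old daemon flag shown in the file; newer Docker versions use --dns), followed by a daemon restart so the option takes effect:
# /etc/default/docker
DOCKER_OPTS="-dns 8.8.8.8 -dns 8.8.4.4"

# restart the daemon to pick up the new options
sudo service docker restart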
