I am not sure how to specify my host name in the Ansible hosts file, and because of this I am not able to ping my remote machine.
Jenkins slave node name: agent007
In /etc/ansible/hosts, list the host machines under a group:
[localhost]
<IP address of the local host>
You should have a passwordless SSH connection: ssh-keygen -t rsa generates a public key (id_rsa.pub by default) which should be copied into the authorized_keys file on the host machine.
Then you can run the playbook.
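As a minimal sketch of that key setup (the user and address below are placeholders, not from the original post):
ssh-keygen -t rsa                 # accept the defaults; creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-copy-id user@192.0.2.10       # appends the public key to the remote ~/.ssh/authorized_keys
ssh user@192.0.2.10               # should now log in without a password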
Please make sure you have mapped the host name in local DNS if you are trying to ping by hostname; otherwise, try the IP. You should also set up password-less authentication for this:
use ssh-keygen and copy the public key into the authorized_keys file on the remote server.
The remote host you are trying to reach needs to be defined in your ansible inventory file, usually named hosts:
agent007 ansible_ssh_host=<ip addr>
Then you can explicitly use this inventory file with the -i option:
ansible-playbook -i hosts firstplaybook.yml
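To sanity-check the inventory entry before running a playbook, you can use Ansible's ad-hoc ping module (assuming key-based SSH to agent007 already works):
ansible -i hosts agent007 -m ping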
I have two Docker containers, one running Jenkins and one acting as a remote container. I want to run a Jenkins job on the remote container.
For that I have supplied the private key in the Jenkins credentials, but the connection is still not successful.
I am able to ping the remote container from the Jenkins container, and I am also able to ssh from the Jenkins container to the remote container.
How are you connecting to the remote host? Can you please share the ssh command?
Can you please share the output of the command below from your Jenkins container:
cat /etc/hosts
There is a possibility that the Jenkins container can connect using the IP but not using the host name. Try updating the /etc/hosts file with the remote container's host name and then connect, e.g.:
172.0.0.1 remote_host local_host
Also, if you have used port forwarding, you can simply connect with the base server's IP and the different ports assigned to those containers.
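For the port-forwarding route, a rough sketch (the image name, key path, and port numbers are illustrative assumptions):
docker run -d -p 2222:22 --name remote_container my_ssh_image   # publish the container's SSH port 22 on host port 2222
ssh -i /path/to/private_key -p 2222 user@<base_server_ip>       # connect via the base server's IP and the mapped port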
TL;DR: how do I get a client in my container to make an HTTPS connection to a service on the host?
I've got a service running on a VM on my local dev machine (macOS) that's serving HTTPS on port 8443; it's got a certificate for dev.mycoolproject.com and dev.mycoolproject.com has an A record pointing to 127.0.0.1. So, if I run my client on my local machine and point it to https://dev.mycoolproject.com:8443 it makes a secure connection to my local service.
I want to run my client inside a docker container and still have it connect to that local server on the host. But obviously dev.mycoolproject.com pointing at 127.0.0.1 won't work, and I can't just use /etc/hosts to redirect it because the host's IP is dynamic. I can reach the local server at host.docker.internal:8443, but I'll get TLS errors because the hostname doesn't match.
Is there any way I can get docker's DNS to map dev.mycoolproject.com to the host IP? I looked into running dnsmasq locally in the container but I had trouble getting it to work.
In a container where you might not have access to tools like dig or nslookup, and don't want to install another 55 MB package (like Debian's dnsutils) just to get the host.docker.internal IP, it may be better to use getent instead of dig:
getent hosts host.docker.internal | awk '{ print $1 }'
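That one-liner drops in wherever dig is used in the /etc/hosts workaround below, e.g. (dev.mycoolproject.com as in the question):
echo "$(getent hosts host.docker.internal | awk '{ print $1 }') dev.mycoolproject.com" >> /etc/hosts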
I ran into a similar issue yesterday and came up with a workaround that adds an entry to /etc/hosts resolving the name to the host IP.
You'll need dig or another DNS tool to query for the IP.
If you are running as root you can use:
echo "$(dig +short host.docker.internal) dev.mycoolproject.com" >> /etc/hosts
If you have sudo you can run:
echo "$(dig +short host.docker.internal) dev.mycoolproject.com" | sudo tee -a /etc/hosts
Initially I was hoping the --add-host run option would allow for special Docker entries in the host IP argument (like host.docker.internal), but unfortunately it doesn't.
I wanted to avoid more container configuration, so I went with this. Setting up dnsmasq would be a more stable solution.
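One way to bake the workaround in is an entrypoint wrapper that writes the mapping at container start; a minimal sketch, assuming a glibc-based image (the script name and final exec are illustrative):
#!/bin/sh
# entrypoint.sh: map dev.mycoolproject.com to the host's IP, then run the real command
HOST_IP="$(getent hosts host.docker.internal | awk '{ print $1 }')"
echo "$HOST_IP dev.mycoolproject.com" >> /etc/hosts
exec "$@"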
I have a machine with ssh running on it. Now I wanted to run GitLab inside a Docker container, so I followed the instructions here: https://docs.gitlab.com/omnibus/docker/. The instructions say to bind the container's SSH port 22 to the host machine's SSH port (22). I was unable to do this because that port was already bound to the OpenSSH server on the host machine, so I bound the container's SSH port to some other port, say 222. GitLab got set up, but when I try to clone a project over SSH I am not able to.
Is there a way to fix this issue? What could be the reason? I suspect it's the port mapping. I want to keep ssh running on my host machine, run GitLab inside the container, and still be able to use SSH for code commit, clone, and push.
Docker port mapping is one thing, but you also need to adapt the GitLab Rails configuration in gitlab.rb to specify the custom SSH port:
gitlab_rails['gitlab_shell_ssh_port'] = 222
and restart the container.
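Put together, a sketch of the full setup (the image tag and hostname are illustrative):
docker run -d -p 222:22 -p 80:80 --name gitlab gitlab/gitlab-ce:latest   # container SSH port 22 published on host port 222
docker restart gitlab                                                    # after setting gitlab_shell_ssh_port = 222 in gitlab.rb
git clone ssh://git@gitlab.example.com:222/group/project.git             # clone URLs must name the custom port explicitly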
I am able to install Kubernetes successfully using the kubeadm method. My environment is behind a proxy. I applied the proxy settings to the system and to Docker, and I am able to pull images from Docker Hub without any issues. But at the last step, where we have to install the pod network (like Weave or Flannel), it is not able to connect via the proxy and gives a timeout error. I am just checking whether there is any equivalent of curl -x http://... for kubectl apply -f. Until I perform this step, the master says NotReady.
When you work with a proxy for internet access, do not forget to configure the NO_PROXY environment variable in addition to HTTP(S)_PROXY; a shell sketch follows the list below.
NO_PROXY accepts a comma-separated list of hosts, IP addresses, or IP ranges in CIDR format. For example:
For master hosts:
- Node host name
- Master IP or host name
For node hosts:
- Master IP or host name
For the Docker service:
- Registry service IP and host name
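A hedged sketch of the resulting environment (all addresses and CIDR ranges below are placeholders for your cluster's actual values):
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# keep cluster-internal traffic off the proxy: loopback, master/node IPs, pod and service CIDRs
export NO_PROXY=localhost,127.0.0.1,192.168.1.10,10.96.0.0/12,10.244.0.0/16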
See also, for instance, weaveworks/scope issue 2246.
I'm trying to access Docker remote API from within a container because I need to start other containers.
The host address is 172.19.0.1, so I'm using http://172.19.0.1:2375/images/json to get the list of images (from the host, http://localhost:2375/images/json works as expected).
The connection is refused, I guess because Docker (for Windows) listens on 127.0.0.1 and not on 0.0.0.0.
I've tried to change the configuration (both from the UI and in daemon.json), adding the entry:
"hosts": ["tcp://0.0.0.0:2375"]
but the daemon fails to start. How can I access the api?
In Windows, Docker runs inside a VM, so you have to ssh into the VM and make the changes there. You can set DOCKER_OPTS as below and try:
DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock'
Check if it works for you.
Update: to ssh into the VM (assuming default is the name of the VM you created with Docker Toolbox), enter the following command in the Docker Quickstart Terminal:
docker-machine ssh default
You can find more details here.
You could mount the host's /var/run/docker.sock into the container where you need it. This way, you don't expose the Docker Remote API via an open port.
Be aware that it still provides root-like access to Docker.
-v /var/run/docker.sock:/var/run/docker.sock
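A minimal sketch of that approach (the alpine image is just an example; curl can talk to a Unix socket directly):
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine sh
# inside the container, after: apk add curl
curl --unix-socket /var/run/docker.sock http://localhost/images/json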
You should use tcp://host.docker.internal:2375 to connect to the host machine from a container. Please make sure that you can ping the host.docker.internal address.
https://github.com/docker/for-win/issues/1976
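For example, once the daemon is actually exposed on port 2375 (the sticking point in the question), a quick check from inside a container might be:
ping -c 1 host.docker.internal                        # verify the special name resolves
curl http://host.docker.internal:2375/images/json     # query the host daemon's images endpoint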