Is it possible to configure the DNS for the Concourse build container?
I know there is a build_args: argument with the docker-image-resource, but I am unable to get it to replicate the following docker build parameter: --dns=IP_ADDRESS...
Has anyone done something similar in their pipeline.yml?
It's unlikely you will be able to set this via Concourse due to lack of support in Docker.
The --dns=IP_ADDRESS option you reference is a docker run argument.
The docker build command doesn't allow you to change the DNS settings for the build containers that run under it.
This recent GitHub issue links to a bunch of the related issues:
#1916 (comment)
#2267
#3851
#5779
#7966
#10171
#10324
#24928
Workarounds
Set Container DNS for a RUN step
You can modify the local /etc/resolv.conf during a build step in a Dockerfile:
FROM busybox:latest
RUN set -uex; \
    echo "nameserver 8.8.8.8" > /etc/resolv.conf; \
    cat /etc/resolv.conf; \
    ping -c 4 google.com
RUN cat /etc/resolv.conf
It will be back to normal for the next RUN step though.
Set Daemon DNS
You can configure the Docker daemon with a custom DNS server for all containers that don't override the DNS settings:
dockerd --dns 8.8.8.8
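The same setting can be made persistent in the daemon configuration file instead of a command-line flag; a minimal sketch, assuming the default config location on Linux:

```shell
# Persist the daemon-wide DNS setting in /etc/docker/daemon.json,
# then restart the daemon so it takes effect for new containers.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "dns": ["8.8.8.8"]
}
EOF
sudo systemctl restart docker
```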
It's possible to run a specific "build" instance of Docker with custom DNS if you need the builds to be different from what your regular containers run with.
Set Host DNS
Edit /etc/resolv.conf on the host to point at your DNS server. This obviously affects everything running on the host.
It's possible to run a local caching DNS server that forwards your required requests to a local DNS server and everything else to your normal DNS servers (similar to what Docker does locally for container DNS).
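For example, a dnsmasq forwarder along those lines might be configured like this (the domain and server addresses are placeholders, not values from any real setup):

```shell
# Sketch of /etc/dnsmasq.conf on the host (hypothetical addresses):
# queries for *.corp.example go to a local DNS server,
# everything else is forwarded to the normal upstream resolver.
server=/corp.example/10.0.0.53
server=8.8.8.8
cache-size=1000
```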
Related
I need to access a domain name during my docker build, say example.com, which is currently running on my host machine (Mac), through the hosts file:
[/etc/hosts]
127.0.0.1 example.com
and exposed on port 8888. If I try to use this ip in the docker build process, it fails to establish the connection:
docker build --add-host=example.com:127.0.0.1 .
Using my host's local ip address using ipconfig getifaddr en0 also fails:
docker build --add-host=example.com:$(ipconfig getifaddr en0) .
Presumably this is because my host system is not allowing incoming connections on this port; I am also fine with this, ideally I would not need to open the port up externally to make this work.
Indeed, I can't even access the resource in the host terminal with this IP; for example, curl $(ipconfig getifaddr en0):8888 fails. However, ping $(ipconfig getifaddr en0) does work, verifying the IP address is correct.
Since Docker 20.10, the "host-gateway" token has been available in the CLI. This means you can now access the host using host.docker.internal (yes, even on Linux) with the command
docker build --add-host=host.docker.internal:host-gateway .
However I need to access the host by resolving the local domain example.com and the following does not work for me:
docker build --add-host=example.com:host-gateway .
During the build process we can get the IP address of the host by resolving host.docker.internal; however, updating the /etc/hosts file in the image does not work, as detailed in this answer. The accepted solution is to pass --add-host to the docker build ... command that kicks off the build process.
Before the build process, however, when we are able to configure example.com to point to an IP address with docker build --add-host, I don't know how to see where host.docker.internal would point to.
Currently, the solution I have is to run the docker build with the directive
RUN apt-get -y install iputils-ping && ping host.docker.internal
Then I quit the build, copy the IP address output from the directive (192.168.65.2) into the command
docker build --add-host=example.com:192.168.65.2 .
Now, this does work, but it's a brittle, cumbersome and fairly desperate hack. What would be the best way to achieve this result?
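One way to automate that hack: since --add-host writes the mapping straight into the container's /etc/hosts, you can read the gateway IP back out of a throwaway container and feed it to the real build. A sketch, assuming Docker 20.10+ and that example.com is the only name you need:

```shell
# Resolve the host-gateway token once in a disposable container
# by reading the entry --add-host wrote into its /etc/hosts...
HOST_IP=$(docker run --rm --add-host=host.docker.internal:host-gateway \
  busybox awk '/host\.docker\.internal/ {print $1; exit}' /etc/hosts)

# ...then reuse the concrete IP for the custom domain during the build.
docker build --add-host=example.com:"$HOST_IP" .
```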
Within a Docker container, I would like to connect to a MySQL database that resides on the local network. However, I get errors because it cannot find the host name, so my current hot fix is to hardcode the IP (which is bound to change at some point).
Hence: is it possible to forward a hostname from the host machine to the Docker container at docker run?
Yes, it is possible. Just inject the hostname as an environment variable when you run the docker run command:
$ hostname
np-laptop
$ docker run -ti -e HOSTNAME=$(hostname) alpine:3.7
/ # env
HOSTNAME=np-laptop
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
Update:
I think you can do two things with docker run for your particular case:
1. Bind mount the /etc/hosts file from the host into the container.
2. Define any DNS server you want inside the container with the --dns flag.
So, finally the command is:
docker run -ti -v /etc/hosts:/etc/hosts --dns=<IP_of_DNS> alpine:3.7
Docker containers by default have access to the host network, and they're able to resolve DNS names using the DNS servers configured on the host, so it should work out of the box.
I remember having a similar problem in my corporate network. I solved it by referencing the remote server in the app by FQDN - our-database.mycompany.com instead of just our-database.
Hope this helps.
People have asked similar questions and got good answers:
How do I pass environment variables to Docker containers?
Alternatively you can configure the DHCP/DNS server that serves the docker machines to resolve the hostnames properly. DDNS is another option that can simplify configuration as well.
Using this docker image from Docker Hub, I'm trying to run an ansible playbook that would configure the machine on which the container is running.
As an example, I run this:
docker run --net="host" -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
With these options, I can ping localhost, and the inventory and playbook are both accessible.
The inventory is configured to use a local connection:
[executors]
127.0.0.1
[executors:vars]
ansible_connection=local
ansible_user=<my_user_in_docker_host>
ansible_become=True
The group executors is the one referenced from the playbook.
I see that the playbook is trying to connect as root (what I get by default when I attach to the container). Specifying -u when running the container doesn't seem to get along with Ansible.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
... followed by errors complaining that commands are not available, after a successful local connection. That makes no sense to me, given that both root and non-root users can execute them.
Any idea?
This image is designed to serve as a base for other images, and to take advantage of Ansible as a way of provisioning the requirements of the image rather than using the Dockerfile only.
This is stated in the documentation of the docker image:
Used mostly as a base image for configuring other software stack on
some specified Linux distribution(s).
Think of it as a base image to perform CI tasks in a lighter way than using other options (VMs, Vagrant...).
Take into account that the good thing about Docker is that it isolates the host from the containers, so you cannot reach the host's files from the containers (except through whatever volumes you bind). Otherwise, it would be a security problem. See here.
I was able to use ansible to configure the host from within a docker container. However, I didn't use a docker host network, but a docker bridge network.
When you start an ansible playbook in a container, then localhost will be the localhost of the container itself. This is just fine, because local_action(s) in ansible run in the container itself and remote actions on the host.
This is the modified version of your docker run example:
docker run -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml williamyeh/ansible:ubuntu16.04 ansible-playbook -vvvv -i /inventory /playbook.yml
You shouldn't configure the inventory to use localhost or a local connection, but to use the host (machine) and connect via SSH. This is an example:
[executors]
<my_host_ip>
[executors:vars]
ansible_connection=ssh
ansible_user=<my_host_user>
ansible_become=True
Assuming your docker container is running on the default bridge, you can find <my_host_ip> with the following command:
ip addr show docker0
The container will connect with ssh to the docker interface on the host.
Some additional hints:
The SSH daemon needs to listen on the docker0 interface
iptables/nftables needs to provide ssh access from the ansible container to the docker0 interface
Ansible uses keys to connect via ssh by default. By using the -k and/or the -K parameters of the ansible-playbook command, you can provide a password instead.
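Putting those hints together, a password-based run might look like this (paths as in the example above; -k prompts for the SSH password, -K for the become password):

```shell
# Run the playbook from the container against the host over SSH,
# prompting for passwords instead of relying on SSH keys.
docker run -ti -v <path_inventory>:/inventory -v <path_playbook>:/playbook.yml \
  williamyeh/ansible:ubuntu16.04 \
  ansible-playbook -vvvv -i /inventory -k -K /playbook.yml
```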
I'm trying to set up Docker with two containers. One is a web app and the second is a dnsmasq DHCP server.
Docker should update the dnsmasq container and the DHCP IP list from an event from the web app. The only option I have so far is to generate the DHCP hosts file and restart the dnsmasq container, but it needs to be done manually on the Docker host, outside the web app container.
Is there a way to restart the service from another container?
The only way to restart a container from another container would be to mount /var/run/docker.sock and use the API. But I wouldn't do that from a webapp for obvious security reasons.
I would share the DHCP hosts file between the containers (with the -v option) and have a script running in the dnsmasq container that checks for changes in this file and restarts the dnsmasq service in the container. There's no need to restart the container. You could use Supervisord to start dnsmasq and this script. I would use the --init flag to avoid zombie processes.
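A sketch of such a watcher script, assuming the shared file is mounted at /data/dhcp-hosts and inotify-tools is installed in the dnsmasq image (both are assumptions, not part of the original setup):

```shell
#!/bin/sh
# Watch the shared hosts file and tell dnsmasq to reload when it changes.
HOSTS_FILE=/data/dhcp-hosts

while inotifywait -e close_write,moved_to "$HOSTS_FILE"; do
  # SIGHUP makes dnsmasq re-read its hosts files without dropping leases
  kill -HUP "$(pidof dnsmasq)"
done
```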
From your host:
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock --name=xxx ubuntu bash
docker cp /usr/bin/docker xxx:/usr/bin/docker
Go inside the container and check for unresolved libs:
ldd /usr/bin/docker
Manually copy missing libs from the host into the container and set up symlinks as required. In my case I had to:
docker cp /usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1 xxx:/usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1
And then inside the container I had to:
ln -sf /usr/lib/x86_64-linux-gnu/libltdl.so.7.3.1 /usr/lib/x86_64-linux-gnu/libltdl.so.7
Inside the container, check again with ldd /usr/bin/docker. If all is well, you can now run docker inside the container.
Note that docker-compose ran right away when I copied it from host to container. Only for docker did I have to copy the extra library and set up the symlinks.
I have two docker containers - one running jenkins and one running docker registry. I want to build/push images from jenkins to docker registry. How do I achieve this in an easy and secure way (meaning no hacks)?
The easiest would be to make sure the jenkins container and registry container are on the same host. Then you can mount the docker socket onto the jenkins container and use the dockerd from the host machine to push the image to the registry. /var/run/docker.sock is the unix socket that dockerd is listening on.
By mounting the docker socket, any docker command you run from that container executes as if it were run on the host.
$ docker run -dti --name jenkins -v /var/run/docker.sock:/var/run/docker.sock jenkins:latest
If you use pipelines, you can install this Docker Plugin https://plugins.jenkins.io/docker-workflow,
create a credentials resource on Jenkins to access the Docker registry, and do this in your pipeline:
stage("Build Docker image") {
    steps {
        script {
            docker_image = docker.build("myregistry/mynode:latest")
        }
    }
}
stage("Push images") {
    steps {
        script {
            withDockerRegistry(credentialsId: 'registrycredentials', url: "https://myregistry") {
                docker_image.push("latest")
            }
        }
    }
}
Full example at: https://pillsfromtheweb.blogspot.com/2020/06/build-and-push-docker-images-with.html
I use this type of workflow in a Jenkins docker container, and the good news is that it doesn't require any hackery to accomplish. Some people use "docker in docker" to accomplish this, but I can't help you if that is the route you want to go as I don't have experience doing that. What I will outline here is how to use the existing docker service (the one that is running the jenkins container) to do the builds.
I will make some assumptions since you didn't specify what your setup looks like:
you are running both containers on the same host
you are not using docker-compose
you are not running docker swarm (or swarm mode)
you are using docker on Linux
This can easily be modified if any of the above conditions are not true, but I needed a baseline to start with.
You will need the following:
access from the Jenkins container to docker running on the host
access from the Jenkins container to the registry container
Prerequisites/Setup
Setting that up is pretty straightforward. In the case of getting Jenkins access to the running docker service on the host, you can do it one of two ways: 1) over TCP, or 2) via the docker unix socket. If you already have docker listening on TCP, you would simply take note of the host's IP address and the default docker TCP port number (2375 or 2376, depending on whether or not you use TLS), along with any TLS configuration you may have.
If you prefer not to enable the docker TCP service it's slightly more involved, but you can use the UNIX socket at /var/run/docker.sock. This requires you to bind mount the socket to the Jenkins container. You do this by adding the following to your run command when you run jenkins:
-v /var/run/docker.sock:/var/run/docker.sock
You will also need to create a jenkins user on the host system with the same UID as the jenkins user in the container and then add that user to the docker group.
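For example, assuming the jenkins user in the container has UID 1000 (you can confirm with docker run --rm jenkins:latest id -u jenkins):

```shell
# Create a host user with the same UID as the container's jenkins user,
# then grant it access to the docker socket via the docker group.
sudo useradd --uid 1000 --no-create-home jenkins
sudo usermod -aG docker jenkins
```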
Jenkins
You'll now need a Docker build/publish plugin like the CloudBees Docker Build and Publish plugin or some other plugin depending on your needs. You'll want to note the following configuration items:
Docker URI/URL will be something like tcp://<HOST_IP>:2375 or unix:///var/run/docker.sock depending on how we did the above setup. If you use TCP and TLS for the docker service you will need to upload the TLS client certificates for your Jenkins instance as "Docker Host Certificate Authentication" to your usual credentials section in Jenkins.
Docker Registry URL will be the URL to the registry container, NOT localhost. It might be something like http://<HOST_IP>:32768 or similar depending on your configuration. You could also link the containers, but that doesn't easily scale if you move the containers to separate hosts later. You'll also want to add the credentials for logging in to your registry as a username/password pair in the appropriate credentials section.
I've done this exact setup so I'll give you a "tl;dr" version of it, as getting into depth here is way outside of the scope of something for Stack Overflow:
Install PID1 handler files in container (e.g. tini). You need this to handle signaling and process reaping. This will be your entrypoint.
Install some process control service (e.g. supervisord) packages. Generally running multiple services in containers is not recommended, but in this particular case your options are very limited.
Install Java/Jenkins package or base your image from their DockerHub image.
Add a dind (Docker-in-Docker) wrapper script. This is the one I based my config on.
Create the configuration for the process control service to start Jenkins (as jenkins user) and the dind wrapper (as root).
Add jenkins user to docker group in Dockerfile
Run docker container with --privileged flag (DinD requires it).
You're done!
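A hypothetical supervisord config for the step that starts Jenkins and the dind wrapper might look like this (the paths and wrapper location are assumptions, not the actual layout of any image):

```ini
; Sketch of /etc/supervisor/conf.d/jenkins-dind.conf
[program:dind]
command=/usr/local/bin/dind dockerd
user=root

[program:jenkins]
command=/usr/bin/java -jar /usr/share/jenkins/jenkins.war
user=jenkins
```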
Thanks for your input! I came up with this after some experimentation.
docker run -d \
-p 8080:8080 \
-p 50000:50000 \
--name jenkins \
-v $(pwd)/data/jenkins:/var/jenkins_home \
-v /Users/.../.docker/machine/machines/docker:/Users/.../.docker/machine/machines/docker \
-e DOCKER_TLS_VERIFY="1" \
-e DOCKER_HOST="tcp://192.168.99.100:2376" \
-e DOCKER_CERT_PATH="/Users/.../.docker/machine/machines/docker" \
-e DOCKER_MACHINE_NAME="docker" \
johannesw/jenkins-docker-cli