I have built a Docker image to run a Jenkins server in, and after creating a container from this image I find that the container immediately exits and never starts, even when I attempt to start it from the UI.
Here are the steps I have taken; perhaps I am missing something?
docker pull jenkins/jenkins
sudo mkdir /var/jenkins_home
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home jenkins/jenkins
I already have Java running on port 8080; maybe this is impacting the container status?
java 2968 user 45u IPv6 0xbf254983f0051d87 0t0 TCP *:http-alt (LISTEN)
I'm not sure why it's running on this port; I have attempted to kill the PID, but the process comes back.
Following the comments:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fc880ccd31ed jenkins/jenkins "/usr/bin/tini -- /u…" 3 seconds ago Exited (1) 2 seconds ago vigorous_lewin
docker logs vigorous_lewin
touch: setting times of '/var/jenkins_home/copy_reference_file.log': No such file or directory
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
The docs say
NOTE: Avoid using a bind mount from a folder on the host machine into
/var/jenkins_home, as this might result in file permission issues (the
user used inside the container might not have rights to the folder on
the host machine). If you really need to bind mount jenkins_home,
ensure that the directory on the host is accessible by the jenkins
user inside the container (jenkins user - uid 1000) or use -u
some_other_user parameter with docker run.
So they recommend using a Docker volume rather than a bind mount like yours. If you have to use a bind mount, you need to ensure that UID 1000 can read and write the host directory.
The easiest solution is to run the container as root by adding -u root to your docker run command, like this:
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home -u root jenkins/jenkins
That's not as secure though, so depending on what environment you're running your container in, that might not be a good idea.
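If you'd rather not run as root, here is a sketch of the two safer options the docs point to (the volume name jenkins_home is just an example):
# Option 1: give UID 1000 (the jenkins user) ownership of the bind-mounted host directory
sudo chown -R 1000:1000 /var/jenkins_home
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home jenkins/jenkins
# Option 2: let Docker manage the storage with a named volume instead of a bind mount
docker run -p 9080:8080 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins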
How can I access the host ARP records from within a Docker container?
I tried to mount a volume (in a docker-compose file) as /proc/net/arp:/proc/net/arp, but found out that I can't create a volume under /proc. Then I tried to mount it elsewhere, like /proc/net/arp:/root/arp, but if I cat /root/arp from within the container, the table comes out empty.
docker run -v /proc/net/arp:/root/arp alpine cat /root/arp <-- returns empty table
Ideas?
You should be good if you add privileged mode and make sure you're in host networking mode. This worked for me:
$ docker run --net host --privileged -v /proc/net/arp:/host/arp alpine cat /host/arp
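Since host networking puts the container in the host's network namespace, the container's own /proc/net/arp should already show the host's ARP table, so the bind mount may not even be needed; a quick check worth trying in your setup:
$ docker run --net host alpine cat /proc/net/arp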
I was running a docker container process with:
host$ docker run -it <image> /etc/bootstrap.sh -bash
Then inside of the container I created a file /a.txt:
container$ echo "abc" > /a.txt
container$ cat a.txt
abc
I noticed the filesystem type for / is none:
container$ df -h
Filesystem Size Used Avail Use% Mounted on
none 355G 19G 318G 6% /
tmpfs 1.9G 0 1.9G 0% /dev
...
The inspect command shows that Volumes is null.
host$ docker inspect <image>
...
"Volumes": null,
...
After I exited the container process and restarted, the file disappeared. I wanted to understand:
1) what the root filesystem of the container process actually is;
2) how I can persist the newly created file?
Q: After I exited the container process and restarted, the file disappeared.
A: Changes you make inside a container live in its thin writable layer. They survive a restart of the same container, but they are gone once that container is removed or you start a fresh container from the image (which is effectively what happens when you run docker run again).
Q: What is the root filesystem of the container process actually?
A: It is the image's read-only layers plus that writable layer, presented as a single union (overlay) filesystem. That is why df reports the mount backing / as none rather than a regular block device.
Q: How can I persist the newly created file?
A: If you intend to keep the changes even after the container is removed, you will need to use a Docker data volume.
See:
https://docs.docker.com/engine/tutorials/dockervolumes/
Essentially, when you start the container you can pass the -v option to map a directory from the host's file system into a directory inside the container.
That is, with the example command below,
$ docker run -d -P --name web -v $(pwd):/root <image>
you map your current working directory to the container's /root directory. Everything written to the container's /root area is then reflected in your host's file system and persists.
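Alternatively, a named volume keeps the data managed by Docker without depending on a host path; a minimal sketch, where mydata and <image> are placeholders:
$ docker volume create mydata
$ docker run -d -P --name web -v mydata:/root <image>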
How can I add a new nameserver to /etc/resolv.conf (Dockerfile)?
In my Dockerfile I use:
FROM ubuntu:14.04
RUN echo "nameserver 10.111.122.1" >> /etc/resolv.conf
On my test I use:
docker run --rm 746cb98d6c9b cat /etc/resolv.conf
I didn't get my change (the new nameserver)... So I tried adding it manually with
docker run --rm 746cb98d6c9b echo "nameserver 10.111.122.1" >> /etc/resolv.conf
and I get
zsh: permission denied: /etc/resolv.conf
How can I change the permissions of this file, use a root user, or use chmod in the Dockerfile? My real task is to add a DNS server for the build of this Dockerfile.
I'm using Linux Mint.
I get a correct result with a ping test when running docker run with --dns.
One of the ways you can add DNS information to your container's build process is by adding startup options to your Docker daemon. The documentation for that process shows that the option you'll use is --dns. The location of the daemon configuration file depends on your specific distro; on my Linux Mint machine it is /etc/default/docker. Look for the DOCKER_OPTS= line and add the appropriate --dns=x.x.x.x entries to that line.
For example, if you want to use Google's DNS, you should change that line to look like this:
DOCKER_OPTS="--dns=8.8.4.4 --dns=8.8.8.8"
Additionally, in the absence of --dns or --dns-search startup options, Docker will use the /etc/resolv.conf of the host it's running on instead.
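On newer installations managed through systemd, the daemon-level DNS settings usually go in /etc/docker/daemon.json instead of DOCKER_OPTS; a sketch (substitute your own resolver addresses and restart the daemon afterwards):
{
  "dns": ["8.8.4.4", "8.8.8.8"]
}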
The DNS configuration of a Docker container may be adjusted during the creation of the container and does not need to be hard-coded in the Docker image itself.
Passing a single DNS server to the container works by providing the --dns parameter:
$ docker run --rm --dns=8.8.8.8 <image>
You're free to provide more than one DNS server, and you can also define other DNS-related options such as the DNS search domain or common DNS options:
$ docker run --rm --dns=8.8.8.8 --dns=8.8.4.4 --dns-search=your.search.domain --dns-opt=timeout:50 <image>
If you pass cat /etc/resolv.conf as command to your container, you can easily verify that the passed DNS configuration options made it into the container's DNS configuration:
$ docker run --rm --dns=8.8.4.4 --dns=8.8.8.8 --dns-search=your.domain.name --dns-opt=timeout:50 alpine cat /etc/resolv.conf
search your.domain.name
nameserver 8.8.4.4
nameserver 8.8.8.8
options timeout:50
Please also refer to the docker run reference at https://docs.docker.com/engine/reference/commandline/run/
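If you use docker-compose, the same per-container DNS settings can be declared in the compose file; a sketch, assuming a service named app and a placeholder image:
services:
  app:
    image: <image>
    dns:
      - 8.8.8.8
      - 8.8.4.4
    dns_search:
      - your.search.domain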
What are the ways to get the Docker host's hostname from inside a container running on that host, besides using environment variables? I know I can pass the hostname as an environment variable to the container at container creation time. I'm wondering how I can look it up at run time.
foo.example.com (docker host)
bar (docker container)
Is there a way for container bar running in docker host foo.example.com to get "foo.example.com"?
Edit to add use case:
The container will create an SRV record for service discovery of the form
_service._proto.name. TTL class SRV priority weight port target.
-----------------------------------------------------------------
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo.example.com.
where 20003 is a dynamically allocated port on the docker host for a service listening on some fixed port in bar (docker handles the mapping from host port to container port).
My container will run a health check to make sure it has successfully created that SRV record as there will be many other bar containers on other docker hosts that also create their own SRV records.
_service._proto.name. TTL class SRV priority weight port target.
-----------------------------------------------------------------
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo.example.com. <--
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo2.example.com.
_bar._http.example.com 60 IN SRV 5000 5000 20003 foo3.example.com.
The health check will loop through the SRV records looking for the first one above and thus needs to know its hostname.
aside
I'm using Helios and just found out it adds an env var for me from which I can get the hostname. But I was just curious in case I was using docker without Helios.
You can easily pass it as an environment variable
docker run .. -e HOST_HOSTNAME=`hostname` ..
using
-e HOST_HOSTNAME=`hostname`
will run the hostname command and use its output as the value of an environment variable called HOST_HOSTNAME; of course, you can name the variable however you like.
Note that this works in a bash shell; if you're using a different shell, you may need its alternative to backticks. For example, the fish shell alternative would be
docker run .. -e HOST_HOSTNAME=(hostname) ..
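A quick end-to-end check (POSIX shells also accept $(...) in place of backticks; alpine is just an example image):
docker run --rm -e HOST_HOSTNAME=$(hostname) alpine sh -c 'echo "running on $HOST_HOSTNAME"'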
I'm adding this because it's not mentioned in any of the other answers. You can give a container a specific hostname at runtime with the -h directive.
docker run -h=my.docker.container.example.com ubuntu:latest
You can use backticks (or whatever equivalent your shell uses) to get the output of hostname into the -h argument.
docker run -h=`hostname` ubuntu:latest
There is a caveat: the value of hostname is taken from the machine you run the command on. So if your containers actually run inside a virtual machine and you execute docker commands from the host machine against that VM, hostname will give you the host machine's name rather than the VM's.
You can pass in the hostname as an environment variable. You could also mount /etc so you can cat /etc/hostname. But I agree with Vitaly, this isn't the intended use case for containers IMO.
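If you do go the mount route, a minimal sketch that mounts only the single file, read-only, rather than all of /etc:
docker run --rm -v /etc/hostname:/etc/host_hostname:ro alpine cat /etc/host_hostname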
Another option that worked for me was to bind the network namespace of the host to the container.
By adding:
docker run --net host
In Swarm mode you can pass it as an environment variable like this, using a service template. Node here is the host the task runs on, and a node's hostname defaults to the machine's host name when the node is created.
docker service create -e 'FOO={{.Node.Hostname}}' nginx
Then you can do docker ps to get the container ID and look at the environment:
$ docker exec -it c81640b6d1f1 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c81640b6d1f1
TERM=xterm
FOO=docker-desktop
NGINX_VERSION=1.17.4
NJS_VERSION=0.3.5
PKG_RELEASE=1~buster
HOME=/root
An example of its usage would be with Metricbeat, so you know which node is having system issues; I use this in https://github.com/trajano/elk-swarm:
metricbeat:
  image: docker.elastic.co/beats/metricbeat:7.4.0
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
    - /proc:/hostfs/proc:ro
    - /:/hostfs:ro
  user: root
  hostname: "{{.Node.Hostname}}"
  command:
    - -E
    - |
      metricbeat.modules=[
        {
          module:docker,
          hosts:[unix:///var/run/docker.sock],
          period:10s,
          enabled:true
        }
      ]
    - -E
    - processors={1:{add_docker_metadata:{host:unix:///var/run/docker.sock}}}
    - -E
    - output.elasticsearch.enabled=false
    - -E
    - output.logstash.enabled=true
    - -E
    - output.logstash.hosts=["logstash:5044"]
  deploy:
    mode: global
I think the reason I had the same issue is a bug in the latest Docker for Mac beta, but buried in the comments for that bug I was able to find a solution that worked for me and my team. We're using this for local development, where we need our containerized services to talk to a monolith as we work to replace it. This is probably not a production-viable solution.
On the host machine, alias a known available IP address to the loopback interface:
$ sudo ifconfig lo0 alias 10.200.10.1/24
Then add that IP with a hostname to your docker config. In my case, I'm using docker-compose, so I added this to my docker-compose.yml:
extra_hosts:
# configure your host to alias 10.200.10.1 to the loopback interface:
# sudo ifconfig lo0 alias 10.200.10.1/24
- "relevant_hostname:10.200.10.1"
I then verified that the desired host service (a web server) was available from inside the container by attaching to a bash session, and using wget to request a page from the host's web server:
$ docker exec -it container_name /bin/bash
$ wget relevant_hostname/index.html
$ cat index.html
OK, this isn't the hostname (as the OP was asking), but this will resolve to your Docker host from inside your container for connectivity purposes:
host.docker.internal
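A quick check from inside a container (the name is built in on Docker Desktop; on Linux you may need to add --add-host=host.docker.internal:host-gateway):
docker run --rm alpine ping -c 1 host.docker.internal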
I was redirected here when googling for this.
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1 | sed -n 1p`
docker run --add-host=myhost:${HOSTIP} --rm -it debian
Now you can access the host under the alias "myhost".
The first line won't run on Cygwin, but you can figure out some other way to obtain the local IP address, for example with ipconfig.
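On newer Docker versions (20.10 and later) you can also skip the IP discovery and let Docker fill in the host's gateway address for you:
docker run --add-host=myhost:host-gateway --rm -it debian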
You can run:
docker run --network="host" <image>
so that the container shares the host's network namespace; running hostname inside the container will then return the Docker host's hostname.
I ran
docker info | grep Name: | xargs | cut -d' ' -f2
inside my container. Note that this only works if the Docker CLI is available inside the container and it can reach the Docker daemon (for example via a mounted /var/run/docker.sock).
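For instance, a minimal sketch using the official Docker CLI image (docker:cli here is an assumption; any image containing the Docker CLI works):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker info --format '{{.Name}}'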
I know it's an old question, but I needed this too, and I came up with another solution.
I used an entrypoint.sh to execute the following line, and define a variable with the actual hostname for that instance:
HOST=`hostname --fqdn`
Then, I used it across my entrypoint script:
echo "Value: $HOST"
Hope this helps