Why is /etc/hosts file empty in my docker container? - docker

I created a minimal Docker container, following https://github.com/snoyberg/haskell-scratch, containing a single Haskell application. When run, the application works fine except that it cannot resolve hosts from /etc/hosts because the file is empty, which implies linking does not work correctly (or at least that I need to use numeric addresses, which is impractical...).
I can see that the file pointed at by HostsPath in the container config is correctly populated, but it seems it gets overwritten at some point when the container starts.
Docker version is 1.6.2 on Mac OS X Yosemite.
The container is built in several stages. The first stage builds a container with a specially populated filesystem:
FROM ubuntu:trusty
MAINTAINER arnaud@capital-match.com
RUN apt-get install -qqy libgmp-dev netbase
ADD . /
RUN chmod +x /create_rootfs.sh
RUN /create_rootfs.sh
The create_rootfs.sh file contains the following:
#!/bin/sh
ROOTFS=/rootfs
echo "Creating directories"
mkdir -p /rootfs/bin
mkdir -p /rootfs/lib
mkdir /rootfs/lib/x86_64-linux-gnu
mkdir /rootfs/lib64
mkdir -p /rootfs/usr/lib/x86_64-linux-gnu/gconv
# mkdir -p /rootfs/etc
echo "Copying library files"
cp -L /bin/sh /rootfs/bin/
#cp -L /etc/protocols /rootfs/etc
#cp -L /etc/services /rootfs/etc
cp -L /lib/x86_64-linux-gnu/libc.so.6 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libdl.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libm.so.6 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libpthread.so.0 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libutil.so.1 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/librt.so.1 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libz.so.1 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libnss_files.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libnss_dns.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib/x86_64-linux-gnu/libresolv.so.2 /rootfs/lib/x86_64-linux-gnu/
cp -L /lib64/ld-linux-x86-64.so.2 /rootfs/lib64/
cp -L /usr/lib/x86_64-linux-gnu/gconv/UTF-16.so /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/UTF-7.so /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/gconv-modules /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache /rootfs/usr/lib/x86_64-linux-gnu/gconv/
cp -L /usr/lib/x86_64-linux-gnu/libgmp.so.10 /rootfs/usr/lib/x86_64-linux-gnu/
Then I export the content of this filesystem, building a base image:
docker run capitalmatch/tinybuilder tar -cC /rootfs . | docker import - capitalmatch/tiny
The final container is built from "tiny", adding some .tar.gz files. It is then run as:
docker run --link stunnel:monitor capitalmatch/app
The stunnel container is run as:
docker run --name=stunnel -p 5555:5555 -v $(pwd)/stunnel:/etc/stunnel capitalmatch/stunnel
I expect /etc/hosts to contain an entry for monitor, which is indeed the case before it is mounted. When I run another container built in a more "classical" way, e.g. based on ubuntu:trusty, I find the /etc/hosts file to be correctly populated and everything works fine, so I suspect it is the way the container is built that gets in the way.

/etc/hosts is regenerated every time, based on how you run your container.
Moreover, if you put something into this file in the Dockerfile, it will last until the end of the build process of all layers, but it will be wiped out when the container is started.
Editing networking config files
Starting with Docker v.1.2.0, you can now edit /etc/hosts, /etc/hostname and /etc/resolv.conf in a running container. This is useful if you need to install bind or other services that might override one of those files.
Note, however, that changes to these files will not be saved by docker commit, nor will they be saved during docker run. That means they won't be saved in the image, nor will they persist when a container is restarted; they will only "stick" in a running container.
source: https://docs.docker.com/articles/networking/#editing-networking-config-files
If the /etc/hosts file in your container doesn't contain the expected entries, it probably means that you are not initializing the container properly.
Please provide information on how you actually run your containers or, to simplify things, just a prepared docker-compose.yml file.
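For example, if you only need the monitor name to resolve inside the app container, you could inject the entry at run time instead of relying on a hand-edited hosts file surviving; a rough sketch, where the IP is a placeholder looked up from the running stunnel container:
# look up the stunnel container's IP (the value below is just an example)
docker inspect -f '{{ .NetworkSettings.IPAddress }}' stunnel
# --add-host makes Docker write the entry into the generated /etc/hosts
docker run --add-host monitor:172.17.0.5 capitalmatch/app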

I don't have an answer but I have a workaround: use
FROM busybox
...
and everything works ok.
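For reference, a minimal sketch of such a Dockerfile, assuming the application binary and the shared libraries otherwise assembled by create_rootfs.sh are copied in (the rootfs/ directory and /app path are placeholders):
FROM busybox
# rootfs/ is a placeholder for the tree assembled by create_rootfs.sh
# (the application binary plus the shared libraries it needs)
ADD rootfs/ /
CMD ["/app"]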

Related

How to migrate volume data from docker-for-mac to colima

How do I move volumes from docker-for-mac into colima?
The following will copy all the volumes from docker-for-mac and move them to colima.
Note: there will be a lot of volumes you may not want to copy over since they're temporary ones; you can skip them by simply adding a | grep "YOUR FILTER" to the for loop, either before or after the awk (see the example after the script below).
The following code makes 2 assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need, now copy-and-paste this into your terminal. No need to touch anything.
(
# set -x # uncomment to debug
set -e
# ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
tmpconfig=$(mktemp);
# Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
(limactl show-ssh --format config colima | grep -v "^ ControlPath\| ^User"; echo " ForwardAgent=yes") > $tmpconfig;
# Setup root account
ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
# Loop over each volume inside docker-for-mac
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}'); do
echo $volume_name;
# Make the volume backup
DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox sleep infinity;
DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
# Restore the backup inside colima
DOCKER_CONTEXT=colima docker volume create $volume_name;
ssh -F $tmpconfig root@lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
scp -r -F $tmpconfig /tmp/$volume_name.tar root@lima-colima:/tmp/$volume_name.tar;
ssh -F $tmpconfig root@lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
done
)
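As noted above, to skip temporary volumes you can filter the loop; for example, the for line of the script would become something like this (the filter string is a placeholder):
# only copy volumes whose name matches the filter
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}' | grep "myproject"); do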

Extracting tar from host to docker container

I cannot understand from the documentation how to extract a tar into a container with a single command. Let's say:
tar xf ./tarfile.tar.gz | docker cp - $CONTAINERID:/var/www/html
Pass the tar archive itself as stdin
gzip -cd tarfile.tar.gz | docker cp - $CONTAINERID:/var/www/html
gunzip tarfile.tar.gz
docker cp - $CONTAINERID:/var/www/html < tarfile.tar
Remember that it's extremely routine to delete Docker containers, and when you do, their entire filesystem is lost. It's usually better to use the docker run -v option to inject some part of the host filesystem into a container, and not try to use imperative commands like docker cp (or docker exec) that you'll have to repeat when you recreate the container.
mkdir html
(cd html && tar xf ../tarfile.tar.gz)
docker run -v "$PWD/html:/var/www/html" ...

Can't Delete file created via Docker

I used a docker image to run a program on our school's server using this command.
docker run -t -i -v /target/new_directory 990210oliver/mycc.docker:v1 /bin/bash
After I ran it, it created a directory in my account called new_directory. Now I don't have permission to delete or modify the files.
How do I remove this directory?
I also had this problem.
After:
docker run --name jenkins -p 8080:8080 -v $HOME/jenkins:/var/jenkins_home jenkins jenkins
I couldn't remove files in $HOME/jenkins.
Ricardo Branco's answer didn't work for me because chown gave me:
chown: changing ownership of '/var/jenkins_home': Operation not permitted
Solution:
exec /bin/bash into the container as the root user:
docker exec -it --privileged --user root container_id /bin/bash
then:
cd /var/jenkins_home/ && rm -r * .*
I made @siulkilulki's answer into one line:
docker exec --privileged --user root <CONTAINER_ID> chown -R "$(id -u):$(id -g)" <TARGET_DIR>
Note that here the CONTAINER must be up.
Change the owner of all the files in the directory to your user ID from within the container running as root, then exit the container and remove the directory.
docker run --rm -v /target/new_directory 990210oliver/mycc.docker:v1 chown -R $(id -un):$(id -un) /target/new_directory
exit
rm -rf $HOME/new_directory
I had the same problem. I am using Ubuntu 18.04. I ran the following commands and was then able to delete the files locally. (I have an app dir inside the docker project dir.)
cd to your docker project dir
sudo chown -R $(whoami):$(whoami) app/
docker run -v {absolute path to dir with the file}:/to_delete -it ubuntu /bin/bash
Then just:
$ cd to_delete
$ rm -rf <file/dir>
Here is a solution that does not require --privileged.
Game Plan
Determine UIDs of all offending files created by previous docker runs. Use docker to find them, since in-container UID is not the same as host UID. An offending file is any file not owned by container user root which maps to the current user running docker.
Run a container using each discovered UID and delete the offending files (or chown them).
Code
# Assumes that the current dir is the volume
# find files owned by docker internal UIDs (not root) on the mounted volume:
BAD_FILE_UIDS=$(docker run --rm -v $(pwd):/build alpine sh -c 'find /build -mindepth 1 -not -user root | xargs stat -c "%u" | sort -u')
if [ -n "${BAD_FILE_UIDS}" ] ; then
for uuid in $BAD_FILE_UIDS ; do
echo "Cleaning up files owned by $uuid using docker"
docker run --rm -v $(pwd):/build --user $uuid:0 alpine find /build -mindepth 1 -user $uuid -delete
done
fi
You can change the -delete to -exec chown SOME_USER {} \; to chown.
The above works well for use in CI as post-build cleanup.
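For example, the chown variant could look like this sketch, run as root inside the container so chown is permitted (1000:1000 is a placeholder for your host UID:GID):
# chown offending files to the host user instead of deleting them
docker run --rm -v $(pwd):/build alpine find /build -mindepth 1 -not -user root -exec chown 1000:1000 {} \;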
Try this:
docker stop $CONTAINER_NAME
docker rm -v $CONTAINER_NAME
I guess this should remove the mounted dir. If it doesn't, do this explicitly:
sudo rm -rf /target/new_directory

How to port data-only volumes from one host to another?

As described in the Docker documentation on Working with Volumes there is the concept of so-called data-only containers, which provide a volume that can be mounted into multiple other containers, no matter whether the data-only container is actually running or not.
Basically, this sounds awesome. But there is one thing I do not understand.
These volumes (which do not explicitly map to a folder on the host for portability reasons, as the documentation states) are created and managed by Docker in some internal folder on the host (/var/lib/docker/volumes/…).
Suppose I use such a volume, and then I need to migrate it from one host to another - how do I port the volume? AFAICS it has a unique ID - can I just go and copy the volume and its corresponding data-only container to a new host? How do I find out which files to copy? Or is there some support built into Docker that I did not discover yet?
The official answer is available in the section "Back up, restore, or migrate data volumes":
BACKUP:
sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
--rm: remove the container when it exits
--volumes-from DATA: attach to the volumes shared by the DATA container
-v $(pwd):/backup: bind mount the current directory into the container, to write the tar file to
busybox: a small, simple image - good for quick maintenance
tar cvf /backup/backup.tar /data: creates an uncompressed tar file of all the files in the /data directory
RESTORE:
# create a new data container
$ sudo docker create -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
Extending the official answer from the Docker docs and the top answer here, you can have the following functions in your .bashrc or .zshrc:
# backup files from a docker volume into /tmp/backup.tar.gz
function docker-volume-backup-compressed() {
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -czvf /backup/backup.tar.gz "${#:2}"
}
# restore files from /tmp/backup.tar.gz into a docker volume
function docker-volume-restore-compressed() {
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -xzvf /backup/backup.tar.gz "${#:2}"
echo "Double checking files..."
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie ls -lh "${#:2}"
}
# backup files from a docker volume into /tmp/backup.tar
function docker-volume-backup() {
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -cvf /backup/backup.tar "${#:2}"
}
# restore files from /tmp/backup.tar into a docker volume
function docker-volume-restore() {
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -xvf /backup/backup.tar "${#:2}"
echo "Double checking files..."
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox ls -lh "${#:2}"
}
Note that the backup is saved into /tmp, so you can move the backup file saved there between docker hosts.
There are also two pairs of backup/restore functions: one using compression and debian:jessie, and the other with no compression but with busybox. Favor compression if the files to back up are big.
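Usage then looks like this (the container name and path are just examples):
# back up /var/lib/mysql from the container "mydb" into /tmp/backup.tar.gz
docker-volume-backup-compressed mydb /var/lib/mysql
# copy /tmp/backup.tar.gz to the other docker host, then restore there
docker-volume-restore-compressed mydb /var/lib/mysql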
You can export the volume to a tar archive and transfer it to another machine, then import the data with tar on the second machine. This does not rely on implementation details of the volumes.
# you can list shared directories of the data container
docker inspect <data container> | grep "/vfs/dir/"
# you can export data container directory to tgz
docker run --cidfile=id.tmp --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
# clean up: remove exited container used for export and temporary file
docker rm `cat id.tmp` && rm -f id.tmp
I'll add another recent tool here from IBM which is actually made for volume migration from one container host to another. This is currently an ongoing project, so you may find a different version with additional features in the future.
Cargo was developed to migrate containers from one host to another host along with their data with minimal downtime. Cargo uses data federation capabilities of union filesystem to create a unified view of data (mainly the root file system) across the source and target hosts. This allows Cargo to start up a container almost immediately (within milliseconds) on the target host as the data from source root file system gets copied to target hosts either on-demand (using a copy-on-write (COW) partition) or lazily in the background (using rsync).
Important points are:
- a centralized server handles the migration process
The link to the project is given here:
https://github.com/nadgowdas/cargo
In case your machines are in different VPCs or you want to copy from/to a local machine (like in my case), you can use dvsync, which I created. It's basically ngrok combined with rsync over SSH, packaged into two small (~25 MB each) images. First, you start the dvsync-server on the machine you want to copy data from (you'll need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard):
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_VOLUME,target=/data,readonly \
quay.io/suda/dvsync-server
Then you can start the dvsync-client on the machine you want to copy the files to, passing the DVSYNC_TOKEN shown by the server:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
Once the copying is done, the client will exit. This works with the Docker CLI, Compose, Swarm and Kubernetes as well.
Here's a one-liner in case an SSH connection can be established between the machines:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c "cd /from ; tar -cf - . " | ssh <TARGET_HOST> 'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - " '
Credits go to Guido Diepen's post.
I just wrote the docker-volume-snapshot command for a similar use case. This command is based on tommasop's answer.
With the command:
Create snapshot
docker-volume-snapshot create <volume-name> snapshot.tar
Move snapshot.tar to another host
Restore snapshot
docker-volume-snapshot restore snapshot.tar <volume-name>
Adding an answer here as I don't have the reputation to comment. While all the above answers have helped me, I imagine there may be others like me who are also looking to copy the contents of a backup.tar file into a named docker volume on a collaborator's machine. I don't see this discussed specifically above or in the docker volumes documentation.
Why would you want to copy the backup.tar file into a named docker volume?
This could be helpful in a scenario where a named docker volume has been specified inside an existing docker-compose.yml file to be used by some of the containers.
Copying contents of backup.tar into a named docker volume
On host machine, follow the steps in accepted answer or docker volumes documentation to create a backup.tar file and push it to some repository.
Pull backup.tar into collaborator's machine from repository.
On collaborator's machine, create a temporary container and a named docker volume.
docker run -v named_docker_volume:/dbdata --name temp_db_container ubuntu /bin/bash
--name temp_db_container: create a container called temp_db_container.
ubuntu /bin/bash: use an ubuntu image to build temp_db_container with a starting command of /bin/bash.
-v named_docker_volume:/dbdata: mount the /dbdata folder of temp_db_container into a docker volume called named_docker_volume. We use this specifically named volume named_docker_volume to match the volume name specified in our docker-compose.yml file.
On the collaborator's machine, copy the contents of backup.tar into the named docker volume.
docker run --rm --volumes-from temp_db_container -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
--volumes-from temp_db_container : temp_db_container container's /dbdata folder was mapped to named_docker_volume volume in previous step. So any file that gets stored in /dbdata folder will immediately get copied over to named_docker_volume docker volume.
-v $(pwd):/backup : map the local machine's present working directory to the /backup folder located inside temp_db_container
ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1" : Untar the backup.tar file and store the untarred contents inside /dbdata folder.
On the collaborator's machine, remove the temporary container temp_db_container:
docker rm temp_db_container
Adapted from the accepted answer, but gives more flexibility in that you can use it in a bash pipeline:
#!/bin/bash
if [ $# != 2 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup
exit 1
fi
if [ -t 1 ]; then
echo The output of the cmd is binary data "(tar)", \
and it should be redirected instead of printed to terminal
exit 1
fi
volume="$1"
path="$2"
exec docker run --rm --mount type=volume,src="$volume",dst=/mnt/volume/ alpine tar cf - . -C /mnt/volume/"$path"
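Saved as, say, docker-volume-tar.sh (a placeholder name), it can then be used in a pipeline, for example:
# stream the whole volume ("." = everything) into a compressed archive
./docker-volume-tar.sh grafana_data . | gzip > grafana_data.tar.gz
# or send it straight to another host over ssh without a local copy
./docker-volume-tar.sh grafana_data . | ssh user@backupserver 'cat > grafana_data.tar'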
If you want to backup the volume periodically and incrementally, then you can use the following script:
#!/bin/bash
if [ $# != 3 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup /path/to/put/backup
exit 1
fi
volume="$1"
volume_path="$2"
path="$3"
if [[ "$path" =~ ^.*/$ ]]; then
echo "The 3rd argument shouldn't end in '/', otherwise rsync would not behave as expected"
exit 1
fi
container_name="docker-backup-rsync-service-$RANDOM"
docker run --rm --name="$container_name" -d -p 8738:873 \
--mount type=volume,src="$volume",dst=/mnt/volume/ \
nobodyxu/rsyncd
echo -e '\nStarting syncing...'
rsync --info=progress2,stats,symsafe -aHAX --delete \
"rsync://localhost:8738/root/mnt/volume/$volume_path/" "$path"
exit_status=$?
echo -e '\nStopping the rsyncd docker...'
docker stop -t 1 "$container_name"
exit $exit_status
It utilizes rsync's server and client functionality to directly sync the directory between the volume and your host directory.
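Saved as, say, docker-volume-rsync-backup.sh (a placeholder name), a periodic run might look like this; it assumes rsync is installed on the host, and note the destination path has no trailing slash:
# incrementally sync the "data" subdirectory of the grafana_data volume into ~/backups/grafana_data
./docker-volume-rsync-backup.sh grafana_data data "$HOME/backups/grafana_data"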
I was dissatisfied with the answer using tar. I decided to take matters into my own hands. As I am going to be syncing the data often, and it's going to be big, I wanted specifically to use rsync. Using tar to send all the data every time would be just a waste of time and transfer.
After days spent figuring out how to solve the problem of communicating between two remote docker containers, I finally got a solution using socat.
Run two docker containers, one on the source host and the other on the destination host, each with one volume mounted: the source volume and the destination volume.
Run rsync --daemon in one of the containers; it will stream/load data from the volume.
Run docker exec source_container socat - TCP:localhost:rsync and docker exec destination_container socat TCP-LISTEN:rsync -, and connect the stdin and stdout of both together. So one socat connects to the rsync --daemon and redirects data from/to stdout/stdin, while the other socat listens on the rsync port (port 873) and redirects to/from stdin/stdout. Then connect them together, so basically we pipe data from one container port to the other.
Then, on the container holding the other volume, run an rsync client that connects to localhost:rsync, effectively connecting via the "socat pipe" to the rsync --daemon.
Basically, it works like this:
log "Running both destination and source containers"
src_did=$(
env DOCKER_HOST=$src_docker_host docker run --rm -d -i -v \
"$src_volume":/data:ro -w /data alpine_with_rsync_and_socat\
sleep infinity
)
dst_did=$(
env DOCKER_HOST=$dst_docker_host docker run --rm -d -i -v \
"$dst_volume":/data:rw -w /data alpine_with_rsync_and_socat \
sleep infinity
)
log "Running rsyncd on destination container"
env DOCKER_HOST=$dst_docker_host docker exec "$dst_did" sh -c "
cat <<EOF > /etc/rsyncd.conf &&
uid = root
gid = root
use chroot = no
max connections = 1
numeric ids = yes
reverse lookup = no
[data]
path = /data/
read only = no
EOF
rsync --daemon
"
log "Setup rsync socat forwarding between containers"
{
coproc { env DOCKER_HOST=$dst_docker_host docker exec -i "$dst_did" \
socat -T 10 - TCP:localhost:rsync,forever; }
env DOCKER_HOST=$src_docker_host docker exec -i "$src_did" \
socat -T 10 TCP-LISTEN:rsync,forever,reuseaddr - <&"${COPROC[0]}" >&"${COPROC[1]}"
} &
log "Running rsync on source that will connect to destination"
env DOCKER_HOST=$src_docker_host docker exec -e RSYNC_PASSWORD="$g_password" -w /data "$src_did" \
rsync -aivxsAHSX --progress /data/ rsync://root@localhost/data
Another really nice thing about that approach is that you can copy data between two remote hosts without ever storing the data locally. I also share the script docker-rsync-volumes that I've written around this idea. With that script, copying a volume between two remote hosts is as simple as docker-rsync-volumes --delete -f ssh://user@productionserver grafana_data -t ssh://user@backupserver grafana_data_backup.
This copies your volume from one server to another over ssh:
docker run --rm -v $VOLUME:/$VOLUME alpine tar -czv --to-stdout -C /$VOLUME . | ssh $REMOTEHOST "docker run --rm -i -v $VOLUME:/$VOLUME alpine tar xzf - -C /$VOLUME"
If you want to copy more than one volume that matches a filter:
REMOTEHOST=root@123.123.123.123
Volumes=($(docker volume ls --filter "name=mailcow*" --format="{{.Name}}"))
for VOLUME in ${Volumes[@]}; do
docker run --rm -v $VOLUME:/$VOLUME alpine tar -czv --to-stdout -C /$VOLUME . | ssh $REMOTEHOST "docker run --rm -i -v $VOLUME:/$VOLUME alpine tar xzf - -C /$VOLUME"
done

Difference between tar and cp

I have a microSD card to which I am writing a linux kernel and root filesystem.
If I create the root filesystem using tar then my board has no problem booting from the microSD card.
If I create the root filesystem using cp then the system hangs halfway during boot. The kernel boots OK but the system hangs when trying to start openssh.
TAR command:
sudo tar xfp ./debian-7.1-minimal-armhf-2013-08-25/arm*-rootfs-*.tar -C /media/rootfs/
sync
CP command:
sudo tar xfp ./debian-7.1-minimal-armhf-2013-08-25/arm*-rootfs-*.tar -C ./fileSystem/debian/
sudo cp -p -r ./fileSystem/debian/* /media/rootfs
sync
I don't think cp is copying your symbolic links properly. Add the -P (upper case) option to your cp command. That will ensure your symbolic links get copied properly. If on a Mac, use -a instead of -P.
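Applied to the commands in the question, the copy step would become something like this sketch:
# -P copies symbolic links as links instead of following them;
# -p and -r keep the permission-preserving and recursive behaviour of the original command
sudo cp -P -p -r ./fileSystem/debian/* /media/rootfs
sync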
