Docker data volume container: I can't seem to back it up

Reading these links:
https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
Backing up data volume containers off machine
My understanding is that I can take a data volume container and archive a backup of it. However, following the first link, I can't seem to get it to work.
docker create -v /sonatype-work --name sonatype-work sonatype/nexus /bin/true
I launch the sonatype/nexus image in a container using:
--volumes-from sonatype-work
All good: after running Nexus, I inspect the data volume and can see the innards created; I can stop and remove the nexus container and start it again, and all changes are saved.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f84abb054d2e sonatype/nexus "/bin/sh -c 'java -" 22 seconds ago Up 21 seconds 0.0.0.0:8081->8081/tcp nexus
1aea2674e482 sonatype/nexus "/bin/true" 25 seconds ago Created sonatype-work
I now want to back up sonatype-work, but have had no luck.
[root@ansible22 ~]# pwd
/root
[root@ansible22 ~]# docker run --volumes-from sonatype-work -v $(pwd):/backup ubuntu tar cvf /backup/sonatype-work-backup.tar /sonatype-work
tar: /backup/sonatype-work-backup.tar: Cannot open: Permission denied
tar: Error is not recoverable: exiting now
I have tried running as -u root, and I also tried /root/sonatype-work-backup.tar as the target path. When doing so, I can see it tarring files, but the tar file never appears. Based on the example and my understanding, I don't think that's right anyway.
Can anyone see what I'm doing wrong?
EDIT: Linux Version Info
Fedora release 22 (Twenty Two)
NAME=Fedora
VERSION="22 (Twenty Two)"
ID=fedora
VERSION_ID=22
PRETTY_NAME="Fedora 22 (Twenty Two)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:22"
HOME_URL="https://fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=22
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=22
PRIVACY_POLICY_URL=https://fedoraproject.org/wiki/Legal:PrivacyPolicy
VARIANT="Server Edition"
VARIANT_ID=server

The reason for this is SELinux labelling. There are a couple of good Project Atomic pages on this:
Docker and SELinux
The default type for a confined container process is svirt_lxc_net_t. This type is permitted to read and execute all files types under /usr and most types under /etc. svirt_lxc_net_t is permitted to use the network but is not permitted to read content under /var, /home, /root, /mnt … svirt_lxc_net_t is permitted to write only to files labeled svirt_sandbox_file_t and docker_var_lib_t. All files in a container are labeled by default as svirt_sandbox_file_t.
Then in Using Volumes with Docker can Cause Problems with SELinux:
This will label the content inside the container with the exact MCS label that the container will run with, basically it runs chcon -Rt svirt_sandbox_file_t -l s0:c1,c2 /var/db where s0:c1,c2 differs for each container.
(In this case not /var/db but /root)
If you volume mount an image with -v /SOURCE:/DESTINATION:z, Docker will automatically relabel the content for you to s0. If you volume mount with Z, then the label will be specific to the container, and not be able to be shared between containers.
So either z or Z is suitable in this case, but one would usually prefer Z for the isolation it gives.
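Applied to the command from the question, a minimal sketch of the fix (assuming the same container names and paths) would be to add the label flag to the backup mount:

docker run --rm --volumes-from sonatype-work -v $(pwd):/backup:Z ubuntu tar cvf /backup/sonatype-work-backup.tar /sonatype-work

The :Z suffix makes Docker relabel the bind-mounted host directory with the container's own MCS label, so the confined tar process is allowed to write the archive there; --rm just cleans up the throwaway container afterwards.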

The reason I'm getting permission denied is SELinux. I am not sure exactly why yet, but I will edit this answer when/if I find out. After disabling SELinux and restarting, I was able to take a backup.
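For reference (an addition, not part of the original answer): SELinux can be switched to permissive mode at runtime, without a reboot, which is useful for confirming that it is the culprit before reaching for the :z/:Z labels above. Note that this affects the whole host, so prefer the labels as the real fix:

getenforce      # prints Enforcing, Permissive, or Disabled
setenforce 0    # permissive until reboot (run as root)
setenforce 1    # back to enforcing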

Related

Docker container cannot start

I have built a Docker image to run a Jenkins server in, and after creating a container for this image, I find that the container remains in Exited status and never starts, even when I attempt to start the container from the UI.
Here are the steps I have taken; perhaps I am missing something?
docker pull jenkins/jenkins
sudo mkdir /var/jenkins_home
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home jenkins/jenkins
I already have Java running on port 8080; maybe this is impacting the container status?
java 2968 user 45u IPv6 0xbf254983f0051d87 0t0 TCP *:http-alt (LISTEN)
Not sure why it's running on this port; I have attempted to kill the PID, but it recreates itself.
Following the comments:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fc880ccd31ed jenkins/jenkins "/usr/bin/tini -- /u…" 3 seconds ago Exited (1) 2 seconds ago vigorous_lewin
docker logs vigorous_lewin
touch: setting times of '/var/jenkins_home/copy_reference_file.log': No such file or directory
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
The docs say
NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission issues (the user used inside the container might not have rights to the folder on the host machine). If you really need to bind mount jenkins_home, ensure that the directory on the host is accessible by the jenkins user inside the container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run.
So they recommend using a Docker volume rather than a bind mount like you are using. If you have to use a bind mount, you need to ensure that UID 1000 is able to read and write the host directory.
The easiest solution is to run the container as root by adding -u root to your docker run command, like this
docker run -p 9080:8080 -d -v /var/jenkins_home:/var/jenkins_home -u root jenkins/jenkins
That's not as secure though, so depending on what environment you're running your container in, that might not be a good idea.
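For completeness, a minimal sketch of the named-volume approach the docs recommend (the volume name jenkins_home is arbitrary; Docker manages its storage, and the image's jenkins user can write to it):

docker volume create jenkins_home
docker run -p 9080:8080 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins

If you do want to keep the bind mount instead, giving UID 1000 ownership of the host directory should also work:

sudo chown -R 1000:1000 /var/jenkins_home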

My changes were lost in new Docker container

Steps to reproduce:
Download and run postgres:9.6.24:
docker run --name my_container --restart=always -d -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=pgmypass postgres:9.6.24
Here is the result:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
879883bfc84a postgres:9.6.24 "docker-entrypoint.s…" 26 seconds ago Up 25 seconds 127.0.0.1:5432->5432/tcp my_container
OK.
Open the file /var/lib/postgresql/data/pg_hba.conf inside the container:
docker exec -it my_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
# IPv4 local connections:
host all all 127.0.0.1/32 trust
Replace the file /var/lib/postgresql/data/pg_hba.conf inside the container with my file, copying from the host and overwriting:
tar --overwrite -c pg_hba.conf | docker exec -i my_container /bin/tar -C /var/lib/postgresql/data/ -x
Make sure the file has been modified: go inside the container and open the changed file.
docker exec -it my_container bash
root@879883bfc84a:/# cat /var/lib/postgresql/data/pg_hba.conf
# IPv4 local connections:
host all all 0.0.0.0/0 trust
As you can see, the content of the file has changed.
Create a new image from the container:
docker commit my_container
See result:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> ee57ad4bc6b4 3 seconds ago 200MB
postgres 9.6.24 027ccf656dc1 12 months ago 200MB
Now tag the new image:
docker tag ee57ad4bc6b4 my_new_image:1.0.0
See result:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
my_new_image 1.0.0 ee57ad4bc6b4 About a minute ago 200MB
postgres 9.6.24 027ccf656dc1 12 months ago 200MB
OK.
Stop and delete the old container:
docker stop my_container
docker rm my_container
See result:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
As you can see, no containers are left. OK.
Create a new container from the new image:
docker run --name my_new_container --restart=always -d -p 127.0.0.1:5432:5432 -e POSTGRES_PASSWORD=pg1210 my_new_image:1.0.0
See result:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a965dbbd991 my_new_image:1.0.0 "docker-entrypoint.s…" 7 seconds ago Up 6 seconds 127.0.0.1:5432->5432/tcp my_new_container
Open the file /var/lib/postgresql/data/pg_hba.conf inside the container:
docker exec -it my_new_container bash
root@3a965dbbd991:/# cat /var/lib/postgresql/data/pg_hba.conf
# IPv4 local connections:
host all all 127.0.0.1/32 trust
As you can see, my changes to the file are lost; the content is the original, not my version.
P.S. This problem occurs only with the file pg_hba.conf. For example, if I create a folder and file in the container, such as /Downloads/myfile.txt, that file is not lost in my new container my_new_container.
Editing files inside a container with docker exec will, in general, cause you to lose work. You mention docker commit, but that's almost never a best practice. (If this was successful, but then you discovered that PostgreSQL 9.6.24 had some critical bug and you had to upgrade, could you recreate the exact same image?)
In the case of the postgres image, the files in /var/lib/postgresql/data are always stored in a Docker volume or mount point. In your case you didn't use a docker run -v option, but the image is configured to create an anonymous volume in that directory. The volume is not included in docker commit, which is why you're not seeing it on the rebuilt container. (Also see docker postgres with initial data is not persisted over commits.)
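You can confirm that the data lives in an anonymous volume rather than in the container's writable layer with docker inspect; a quick check, assuming the container name from the question:

docker inspect -f '{{ json .Mounts }}' my_container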
For editing a configuration file, the easiest thing to do is to store the data on the host system. Create a directory to hold it, and extract the configuration file from the image. (Since the data directory is created by the image's startup script, you need a slightly longer path to get it out.)
mkdir pgdata
docker run -d --name pgtmp postgres:9.6.24
docker cp pgtmp:/var/lib/postgresql/data/pg_hba.conf ./pgdata
docker stop pgtmp
docker rm pgtmp
$EDITOR pgdata/pg_hba.conf
Now when you run the container, provide this data directory as a bind mount. That will inject the configuration file, but also cause the database data to persist over container exits.
docker run -v "$PWD/pgdata:/var/lib/postgresql/data" -u $(id -u) ... postgres:9.6.24
Note that this sequence doesn't use docker exec or "go inside" containers at all, and you haven't created an image without corresponding source. Everything is run with commands from the host. If you do need to reset the database data, in this setup, it's just files, and you can rm -rf pgdata, maybe saving the modified configuration file along the way.
(If I'm reading this configuration change correctly, you're trying to globally disable passwords and instead allow trust authentication for all inbound connections. That's not usually a good idea, especially since username/password authentication is standard in every database library I've encountered. You probably still want the volume to persist data, but I might not make this change to pg_hba.conf.)
A Docker container's image layers are read-only; changes land in a thin writable layer that disappears with the container. So if you create a file in a container, remove the container, and re-create it, the file will not be there.
What you want to do is one of two things:
Map your container to a local directory (volume)
Create a Dockerfile based on the postgres image, and make these modifications in a script that your Dockerfile runs (see the sketch after the links below)
docker volume usages
Dockerfile Reference
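To make the second option concrete: the official postgres image executes any script dropped into /docker-entrypoint-initdb.d when the data directory is first initialized, so a hedged sketch of baking the change into an image (the script name init-pg-hba.sh is made up for illustration) could look like:

Dockerfile:
FROM postgres:9.6.24
COPY init-pg-hba.sh /docker-entrypoint-initdb.d/

init-pg-hba.sh (runs once, when the data directory is first initialized):
#!/bin/bash
echo "host all all 0.0.0.0/0 trust" >> "$PGDATA/pg_hba.conf"

Keep in mind that init scripts only run against an empty data directory, and, as the other answer notes, opening trust authentication to the world is rarely a good idea.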

Centos docker container crashes with 6 Segmentation fault - where's the core dump

I'm running a CentOS 7.1.1503 Docker container; after adding a few lines of code (Node.js), it crashes with the error:
/bin/sh: line 1: 6 Segmentation fault (core dumped) node --inspect server.js
the file /proc/sys/kernel/core_pattern contains the following:
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
There's no /var/spool/abrt directory within the container. The /var/spool/abrt directory on the server running the containers doesn't get anything.
I can't change the /proc/sys/kernel/core_pattern to point to another directory/program because of the read-only fs thing. Can't run the container in privileged, either :-(
I've read through tonnes of docker/stackexchange and other docs and can't figure out where/how to get the core dump?
In the olden days I'd play with the settings and wreck a replica of the machine, but this is a production container and I'm very limited in what I can do and when/how many times I can crash it :-(
Host is RHEL 7.1, docker version is 1.7
EDIT: On my laptop, running the same container (with docker 1.12 though), I sometimes get core dumps on the host /var/spool/abrt by running sleep 60 & in the container, then running (still in the container) kill -ABRT <pid of the sleep 60> . By "sometimes" I mean that trying again doesn't always work... I'm not sure why, but about 2 out of 3 tries succeed. I figure this might have to do with a privileged run or something..? I run the container with docker run -it centos bash. If I can understand this I might replicate this behavior in the production env.
Execute the following command to get a report of the paths of the upper layer of the filesystem of all the centos containers you may have launched:
docker ps -a | grep centos | awk '{print $1}' | xargs docker inspect | grep UpperDir | cut -d\" -f4
Bear in mind that you will have to become root to be able to access them (run sudo su before cd'ing).
The command above does the following:
Get a report of all the containers existing in your host
Select only the ones that have centos in their line
Take the first column of that report (the container IDs)
Get the inspect of every one of those containers
Look for the UpperDir parameter (upper layer of your container filesystem, and the one you tinkered with when your process crashed)
Cut the UpperDir string for improved presentation
After that, you are on your own. I am afraid I am of no help with the crash itself, but if you are still stuck, write me some lines and I'll do my best to help.
I hope this helps you!
I ended up skipping abrt and changing the core_pattern file to write to a directory mounted from the host. Here are my two bytes on getting a core dump out of a crashing Docker instance:
On the host:
docker run --privileged -it -v /tmp:/core image-name bash
(you can do this with docker exec, but my machine didn't have the flags available to exec)
--privileged = required to be able to edit the /proc/sys/kernel/core_pattern file
-v = to mount the /tmp directory of the host in a /core directory in the container
In the instance:
set the location of the core dumps to /core (which is a mount of the /tmp dir in the host machine):
echo "/core/core-%e-%s-%u-%g-%p-%t" > /proc/sys/kernel/core_pattern
test it:
sleep 60 &
kill -SEGV <pid of that sleep process>
Should be able to see a core file in the /tmp dir on the host. When my instance crashed, I finally got the dump in the host machine.
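One more thing worth checking if no core file shows up (an addition from me, not part of the answer above): the core size limit must be nonzero inside the container. You can inspect and raise it per shell, or at container start:

ulimit -c             # inside the container; 0 means dumps are disabled
ulimit -c unlimited   # raise it for the current shell
docker run --ulimit core=-1 ...   # or set it at container start, on Docker versions with the --ulimit flag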

How to mount the root directory of docker container as a NFS mount point

I'm new to Docker, and I'm trying to mount the root directory of a Docker container as an NFS mount point.
For example, I have an NFS mount point test:/home/user/3243, and I'm trying:
docker run -it -v "test:/home/user/3243":/ centos7 /bin/bash
Unsurprisingly, it failed. So I tried this:
mount -t nfs test:/home/user/3243 /mnt/nfs/3243
docker run -it -v /mnt/nfs/3243:/ centos7 /bin/bash
but it failed again. So how can I do this? Can it be made to work?
A couple of issues here:
You cannot mount to the root directory of a container. So docker run -v /foo:/ will never work.
With the syntax of your first attempt, -v test:/foo:bar, Docker would see this as wanting to create a "named" volume called "test".
You should be able to first do the NFS mount, then do docker run -v /mnt/nfs/3243:/foo to have the nfs path mounted to /foo.
But again, you can't mount to /.
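Putting that together, a minimal working sketch (the container-side path /data is just an assumed example; any non-root target directory works, and centos:7 stands in for your centos7 image):

mount -t nfs test:/home/user/3243 /mnt/nfs/3243
docker run -it -v /mnt/nfs/3243:/data centos:7 /bin/bash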
That is currently discussed (since mid 2014) in issue 4213.
One recent workaround by Jeroen van Bemmel (jbemmel) was:
It appears that NFS functionality depends on the underlying storage driver ( aufs, devicemapper, etc. ), as well as the sharing of file handles between processes ( see blog post "docker: devicemapper fix for “device or resource busy” (EBUSY)") i.e. 'unshare' may have an impact on NFS mounts.
I've moved away from using the 'MOUNTPOINT=/vm/nfs' as I am not sure if that event is even emitted.
Instead I created an upstart file like this:
cat > /etc/init/ecdn.conf << EOF
description "eCDN container"
author "Jeroen van Bemmel"
# mounted MOUNTPOINT=/vm/nfs doesn't seem to work, at least not the first time
start on started docker and virtual-filesystems
stop on starting rc RUNLEVEL=[016]
respawn
script
exec /usr/bin/docker start -a ecdn
end script
pre-stop script
/usr/bin/docker stop ecdn
# dont /usr/bin/docker rm ecdn
end script
EOF
and then create the container like this:
script -c "docker create -it --name='ecdn' --volume /vm:/usr/share/nginx/html/vm:ro image/name"

Port data out of docker container

I use the method below to port data out of a container.
docker run --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
But there is one problem with it: every time it performs a backup, a zombie (exited) container is left behind. What is a good way to get that container's ID and remove it afterward?
Thanks
Apparently, you just want to be able to export volume data. To do that, you just need to start your initial container with a volume pointing to a directory on the host, using the -v option. Then you can tar on the host without creating a container for it at all. Your current tactic seems a bit over-engineered ;)
The easy way to remove the container after executing the command is to use the --rm option.
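Applied to your command, that looks like this (same placeholders as in your question):

docker run --rm --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz

With --rm, Docker removes the container automatically as soon as tar exits, so no stopped container is left behind.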
However, if you feel that the container you are creating will have data that you will need to (1) update in real time, and (2) access after the container has been created, then you may also mount a host directory as a container volume and access the contents of that directory from the host.
If you start a container using the -v option, you can also look up the directory Docker created for it on the host:
$ docker run -v /volume_directory ubuntu
$ container=$(docker ps -n=1 -q)
$ docker inspect -f '{{.Volumes}}' $container
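Note that {{.Volumes}} only exists on older Docker versions; on current versions the same information lives under Mounts, so the equivalent (hedged for whichever Docker you run) is:

$ docker inspect -f '{{ json .Mounts }}' $container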
