docker cp does not copy recursively - docker

I have the following containers:
» sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
753f4d01ae32 postgres:9 /usr/src/postgres/do 9 minutes ago Up 9 minutes 5432/tcp postgres
edfbfb3b0837 tutum/couchdb:latest /run.sh 18 minutes ago Up 18 minutes 0.0.0.0:5985->5984/tcp couchdb2
4aa61e51d86f tutum/couchdb:latest /run.sh 18 minutes ago Up 18 minutes 0.0.0.0:5984->5984/tcp couchdb
I am working with the postgres container:
CID=753f4d01ae32
PID=$(sudo docker inspect --format {{.State.Pid}} $CID)
echo $PID
13949
» sudo nsenter --target $PID --mount --uts --ipc --net --pid /bin/bash
root@753f4d01ae32:/# ls -l /var/lib/postgresql/data/
total 100
-rw------- 1 postgres postgres 4 Aug 1 05:30 PG_VERSION
drwx------ 5 postgres postgres 4096 Aug 1 05:30 base
drwx------ 2 postgres postgres 4096 Aug 1 05:30 global
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_clog
-rw------- 1 postgres postgres 4506 Aug 1 05:30 pg_hba.conf
-rw------- 1 postgres postgres 1636 Aug 1 05:30 pg_ident.conf
drwx------ 4 postgres postgres 4096 Aug 1 05:30 pg_multixact
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_notify
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_serial
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_snapshots
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_stat
drwx------ 2 postgres postgres 4096 Aug 1 05:41 pg_stat_tmp
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_subtrans
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_tblspc
drwx------ 2 postgres postgres 4096 Aug 1 05:30 pg_twophase
drwx------ 3 postgres postgres 4096 Aug 1 05:30 pg_xlog
-rw------- 1 postgres postgres 20494 Aug 1 05:30 postgresql.conf
-rw------- 1 postgres postgres 30 Aug 1 05:30 postmaster.opts
-rw------- 1 postgres postgres 70 Aug 1 05:30 postmaster.pid
Cool, so I have the data in /var/lib/postgresql/data. I want a copy of that data on the host, to check the config and so on:
» sudo docker cp $CID:/var/lib/postgresql/data .
» tree data/
data/
0 directories, 0 files
So, docker cp has not copied anything! Why?
I am using:
» docker --version
Docker version 0.9.1, build 3600720

Old versions of Docker cannot docker cp data out of a volume. Update to 1.4.0 or newer and it works!
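If upgrading the daemon is not an option, a common workaround is to copy the volume's contents out through a throwaway container instead of docker cp. A minimal sketch, assuming the container name postgres from the docker ps output above and a writable ./backup directory on the host:
# mount the postgres container's volumes plus a host directory, then tar the
# data across; nothing here depends on docker cp
mkdir -p backup
sudo docker run --rm --volumes-from postgres -v "$(pwd)/backup":/backup busybox \
    tar -C /var/lib/postgresql/data -cf /backup/pgdata.tar .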

This is an old question, but it's relevant again for Docker 1.12: files are copied fine, but not into directories that already exist on volumes in the container.
Created an issue here:
https://github.com/docker/docker/issues/25755
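Until that issue is fixed, one workaround on 1.12 is to bypass docker cp and stream a tar through docker exec instead. A rough sketch, reusing the $CID variable from the question, for getting local files into a volume-backed directory inside the container:
# pipe a tar of the local ./data directory into the container and unpack it
# inside the volume-backed path
tar -cf - -C ./data . | sudo docker exec -i $CID tar -xf - -C /var/lib/postgresql/data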

Related

how to migrate data from old docker gitlab container to new container

I have a very old Docker GitLab container that was running fine until I tried to upgrade it, and the rest is history. It never worked again.
So now my plan is to set up a new container on a new server and then just copy the git repo data from the old Docker host to the new Docker host running the new container.
Here are the volumes I exposed to the host for both the old and new containers:
--volume /srv/gitlab/config:/etc/gitlab
--volume /srv/gitlab/logs:/var/log/gitlab
--volume /srv/gitlab/data:/var/opt/gitlab
What I have tried is checking the GitLab data directories to see where the git repos are stored on the old Docker host, but I can't find anything, as the directory shows up empty:
root@gitlab:~# ls -lha /srv/gitlab/data/git-data/repositories/
total 48K
drwxrws--- 9 998 998 4.0K Aug 9 2021 .
drwx------ 3 998 root 4.0K Jun 8 2016 ..
drwxrwx--- 2 998 998 12K Oct 24 2020 devops
drwxr-sr-x 3 998 998 4.0K Aug 1 2021 +gitaly
-rw------- 1 998 998 64 Jun 18 2020 .gitaly-metadata
drwxr-s--- 91 998 998 4.0K Oct 24 2020 @hashed
drwxr-s--- 3 998 998 4.0K Oct 25 2020 @snippets
root@gitlab:~# ls -lha /srv/gitlab/data/git-data/repositories/devops/
total 16K
drwxrwx--- 2 998 998 12K Oct 24 2020 .
drwxrws--- 9 998 998 4.0K Aug 9 2021 ..
So, does anyone know where the git repo data is stored on a Docker host running GitLab?
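For what it's worth, with the bind mounts shown above, the repository data (including GitLab's hashed-storage layout under @hashed) lives in /srv/gitlab/data on the old host, so one way to carry out the migration is to stop GitLab and copy that tree across. A sketch, assuming the old container is named gitlab, new-host is a placeholder, and rsync plus SSH access are available:
# stop the old container so the repositories are quiescent, then copy the
# bind-mounted directories to the new docker host
docker stop gitlab
rsync -aH /srv/gitlab/config /srv/gitlab/logs /srv/gitlab/data new-host:/srv/gitlab/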

Docker - committed MySQL image not starting (data directory is unusable)

I used docker commit and now the container won't run properly. Should I:
specify a new data folder within the container, so it will get deleted when I delete the container
delete the host folder contents under /var/lib/mysql?
What I did was I started a Docker container:
docker run -p 33069:3306 --name some-mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0.26 mysqld --default-authentication-plugin=mysql_native_password
I configured some things like remote root login and inserted some data into it. Then I wanted to back it up, so I did:
docker commit -p 6b836bfdf062 backup-mysql8
Which went OK:
root@server:/home/user# docker images | grep mysql
backup-mysql8 latest 1effec593a03 45 minutes ago 514MB
Then I stopped and removed the old container. And tried to start a new one from the backup:
docker run -p 33069:3306 --name some-mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0.26 mysqld --default-authentication-plugin=mysql_native_password -d backup-mysql8
After a few seconds, it would just die.
root@server:/var/lib/mysql# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
13b17d3af8f7 mysql:8.0.26 "docker-entrypoint.s…" 21 minutes ago Exited (1) 21 minutes ago some-mysql
I looked at the logs:
docker logs 13b17d3af8f7
And found this:
2021-09-10T15:15:37.074480Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
I used inspect and saw that this new container is using my host folder /var/lib/mysql; is that what this means?
docker inspect 13b17d3af8f7
The problem is that that folder on my host machine is already in use, and I don't think it was used by the previous container.
root@server:/var/lib/mysql# ls -l
total 110652
-rw-r----- 1 mysql mysql 56 feb 13 2020 auto.cnf
-rw------- 1 mysql mysql 1676 feb 13 2020 ca-key.pem
-rw-r--r-- 1 mysql mysql 1112 feb 13 2020 ca.pem
-rw-r--r-- 1 mysql mysql 1112 feb 13 2020 client-cert.pem
-rw------- 1 mysql mysql 1680 feb 13 2020 client-key.pem
-rw-r--r-- 1 mysql mysql 0 iul 28 06:01 debian-5.7.flag
-rw-r----- 1 mysql mysql 291 feb 13 2020 ib_buffer_pool
-rw-r----- 1 mysql mysql 12582912 feb 13 2020 ibdata1
-rw-r----- 1 mysql mysql 50331648 feb 13 2020 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 feb 13 2020 ib_logfile1
drwxr-x--- 2 mysql mysql 4096 feb 13 2020 mysql
drwxr-x--- 2 mysql mysql 4096 feb 13 2020 performance_schema
-rw------- 1 mysql mysql 1680 feb 13 2020 private_key.pem
-rw-r--r-- 1 mysql mysql 452 feb 13 2020 public_key.pem
-rw-r--r-- 1 mysql mysql 1112 feb 13 2020 server-cert.pem
-rw------- 1 mysql mysql 1676 feb 13 2020 server-key.pem
drwxr-x--- 2 mysql mysql 12288 feb 13 2020 sys
What should I do, and how?
If you need the data to persist, you should map /var/lib/mysql to a host folder instead.
e.g.
docker run -p 33069:3306 --name some-mysql -v ./mydata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0.26 mysqld --default-authentication-plugin=mysql_native_password
Update:
The docker inspect output just reflects the VOLUME ["/var/lib/mysql"] declaration in the image's Dockerfile.
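Because the mysql image declares VOLUME /var/lib/mysql, docker commit never captures the database files themselves, which is why the committed image did not behave like a backup. A dump-based backup is usually the safer route; a minimal sketch, assuming the original container is still running as some-mysql and some-new-mysql is a placeholder for the restore target:
# dump all databases out of the running container to a file on the host
docker exec some-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql
# later, restore the dump into a fresh container
docker exec -i some-new-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < all-databases.sql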

docker-compose mounted volume is empty, but other volumes created during Docker image build are populated

Starting with an empty directory, I created this docker-compose.yml:
version: '3.9'
services:
  neo4j:
    image: neo4j:3.2
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./conf:/conf
      - ./data:/data
      - ./import:/import
      - ./logs:/logs
      - ./plugins:/plugins
    environment:
      # Raise memory limits
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms.memory.heap.initial_size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
Then I add the import directory, which contains data files I intend to work with in the container.
At this point, my directory looks like this:
0 drwxr-xr-x 9 cc staff 288 Dec 11 18:57 .
0 drwxr-xr-x 5 cc staff 160 Dec 11 18:15 ..
8 -rw-r--r-- 1 cc staff 458 Dec 11 18:45 docker-compose.yml
0 drwxr-xr-x 20 cc staff 640 Dec 11 18:57 import
I run docker-compose up -d --build, and the container is built. Now the local directory looks like this:
0 drwxr-xr-x 9 cc staff 288 Dec 11 18:57 .
0 drwxr-xr-x 5 cc staff 160 Dec 11 18:15 ..
0 drwxr-xr-x 2 cc staff 64 Dec 11 13:59 conf
0 drwxrwxrwx# 4 cc staff 128 Dec 11 18:08 data
8 -rw-r--r-- 1 cc staff 458 Dec 11 18:45 docker-compose.yml
0 drwxr-xr-x 20 cc staff 640 Dec 11 18:57 import
0 drwxrwxrwx# 3 cc staff 96 Dec 11 13:59 logs
0 drwxr-xr-x 3 cc staff 96 Dec 11 15:32 plugins
The conf, data, logs, and plugins directories are created.
data and logs are populated from the build of the Neo4j image, and conf and plugins are empty, as expected.
I use docker exec to look at the directory structures on the container:
8 drwx------ 1 neo4j neo4j 4096 Dec 11 23:46 .
8 drwxr-xr-x 1 root root 4096 May 11 2019 ..
36 -rwxrwxrwx 1 neo4j neo4j 36005 Feb 18 2019 LICENSE.txt
128 -rwxrwxrwx 1 neo4j neo4j 130044 Feb 18 2019 LICENSES.txt
12 -rwxrwxrwx 1 neo4j neo4j 8493 Feb 18 2019 NOTICE.txt
4 -rwxrwxrwx 1 neo4j neo4j 1594 Feb 18 2019 README.txt
4 -rwxrwxrwx 1 neo4j neo4j 96 Feb 18 2019 UPGRADE.txt
8 drwx------ 1 neo4j neo4j 4096 May 11 2019 bin
4 drwxr-xr-x 2 neo4j neo4j 4096 Dec 11 23:46 certificates
8 drwx------ 1 neo4j neo4j 4096 Dec 11 23:46 conf
0 lrwxrwxrwx 1 root root 5 May 11 2019 data -> /data
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 import
8 drwx------ 1 neo4j neo4j 4096 May 11 2019 lib
0 lrwxrwxrwx 1 root root 5 May 11 2019 logs -> /logs
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 plugins
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 run
My problem is that the import directory in the container is empty. The data and logs directories are not empty though.
The data and logs directories on my local machine have extended attributes which conf and plugins do not:
xattr -l data
com.docker.grpcfuse.ownership: {"UID":100,"GID":101}
The only difference I can identify is that those are the directories that had data created by docker-compose when it grabbed the Neo4j image.
Does anyone understand what is happening here, and can you tell me how I can get this to work? I'm using Mac OS X 10.15 and docker-compose version 1.27.4, build 40524192.
Thanks.
TL;DR: your current setup probably works fine.
To walk through the specific behavior you're observing:
On container startup, Docker will create empty directories on the host if they don't exist, along with the mount-point directories inside the container. (That is why those directories appear.)
Docker never copies data from an image into a bind mount. This behavior only happens for named volumes (and only the very first time you use them, not on later runs; and only on native Docker, not on Kubernetes).
But, the standard database images generally know how to initialize an empty data directory. In the case of the neo4j image, its Dockerfile ends with an ENTRYPOINT directive that runs at container startup; that docker-entrypoint.sh knows how to do various sorts of first-time setup. That's how data gets into ./data.
The image also declares a WORKDIR /var/lib/neo4j (via an intermediate environment variable). That explains, in your ls -l listing, why there are symlinks like data -> /data. Your bind mount is to /import, but if you docker-compose exec neo4j ls import, it will look relative to that WORKDIR, which is why the directory looks empty.
But, the entrypoint script specifically looks for a /import directory inside the container, and if it exists and is readable, it sets an environment variable NEO4J_dbms_directories_import=/import.
This all suggests to me that your setup is correct, and if you try to execute an import, it will work correctly and see your host data. You are looking at a /var/lib/neo4j/import directory from the image, and it's empty, but the image startup knows to also look for /import in the container, and your mount points there.
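A quick way to see the WORKDIR effect described above is to compare a relative and an absolute path (a small sketch, assuming the compose service is named neo4j as in the file above):
# relative path: resolved against WORKDIR /var/lib/neo4j, so this shows the image's own (empty) import directory
docker-compose exec neo4j ls import
# absolute path: this shows the bind-mounted host directory with your data files
docker-compose exec neo4j ls /import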

How can I access Docker data volumes on a Windows machine?

I have docker-compose.yml like this:
version: '3'
services:
  mysql:
    image: mysql
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$ROOT_PASSWORD
volumes:
  data:
My mount point looks like /var/lib/docker/volumes/some_app/_data, and I want to access the data from that mount point, but I'm not sure how to do that on a Windows machine. Maybe I can create some additional container which can pass data from the Docker virtual machine to my directory?
When I mount a folder like this to use my local directory:
volumes:
  - ./data:/var/lib/mysql
I had no success because of a permissions issue, and I read that the "right way" is to use Docker volumes.
UPD: The MySQL container is just an example. I want the same behaviour for my codebase and to use Docker for local development.
For Linux containers under Windows, Docker actually runs inside a Linux virtual machine, so your named volume is a mapping of a local directory in that VM to a directory in the container.
So what you got as /var/lib/docker/volumes/some_app/_data is a directory inside that VM. To inspect it, you can run:
docker run --rm -it -v /:/vm-root alpine:edge ls -l /vm-root/var/lib/docker/volumes/some_app/_data
total 188476
-rw-r----- 1 999 ping 56 Jun 4 04:49 auto.cnf
-rw------- 1 999 ping 1675 Jun 4 04:49 ca-key.pem
-rw-r--r-- 1 999 ping 1074 Jun 4 04:49 ca.pem
-rw-r--r-- 1 999 ping 1078 Jun 4 04:49 client-cert.pem
-rw------- 1 999 ping 1679 Jun 4 04:49 client-key.pem
-rw-r----- 1 999 ping 1321 Jun 4 04:50 ib_buffer_pool
-rw-r----- 1 999 ping 50331648 Jun 4 04:50 ib_logfile0
-rw-r----- 1 999 ping 50331648 Jun 4 04:49 ib_logfile1
-rw-r----- 1 999 ping 79691776 Jun 4 04:50 ibdata1
-rw-r----- 1 999 ping 12582912 Jun 4 04:50 ibtmp1
drwxr-x--- 2 999 ping 4096 Jun 4 04:49 mysql
drwxr-x--- 2 999 ping 4096 Jun 4 04:49 performance_schema
-rw------- 1 999 ping 1679 Jun 4 04:49 private_key.pem
-rw-r--r-- 1 999 ping 451 Jun 4 04:49 public_key.pem
-rw-r--r-- 1 999 ping 1078 Jun 4 04:49 server-cert.pem
-rw------- 1 999 ping 1675 Jun 4 04:49 server-key.pem
drwxr-x--- 2 999 ping 12288 Jun 4 04:49 sys
That runs an auxiliary container which has the whole root filesystem of that VM (/) mounted into the container directory /vm-root.
To copy a file out, run the container with a command that keeps it alive in the background (tail -f /dev/null in my case); then you can use docker cp:
docker run --name volume-holder -d -it -v /:/vm-root alpine:edge tail -f /dev/null
docker cp volume-holder:/vm-root/var/lib/docker/volumes/volumes_data/_data/public_key.pem .
If you want transparent SSH access to that VM, it seems that is not supported yet as of June 2017; a Docker staff member said as much here.

DSE upgrade deleted datastax-agent conf folder

I'm running a 3-node cluster in AWS. Yesterday, I upgraded my cluster from DSE 4.7.3 to 4.8.0.
After the upgrade, the datastax-agent service is no longer registered and the /usr/share/datastax-agent/conf folder has been removed.
PRE-UPGRADE:
$ ls -alr
total 24836
drwxrwxr-x 3 cassandra cassandra 4096 Aug 10 14:57 tmp
drwxrwxr-x 2 cassandra cassandra 4096 Aug 10 14:56 ssl
drwxrwxr-x 2 cassandra cassandra 4096 Sep 28 15:14 doc
-rw-r--r-- 1 cassandra cassandra 25402305 Jul 14 18:55 datastax-agent-5.2.0-standalone.jar
drwxrwxr-x 2 cassandra cassandra 4096 Sep 28 18:23 conf
drwxrwxr-x 3 cassandra cassandra 4096 Sep 28 18:13 bin
drwxr-xr-x 118 root root 4096 Oct 2 18:02 ..
drwxrwxr-x 7 cassandra cassandra 4096 Oct 7 19:03 .
POST-UPGRADE:
$ ls -al
total 24976
drwxr-xr-x 3 cassandra cassandra 4096 Oct 5 20:45 .
drwxr-xr-x 114 root root 4096 Oct 5 18:23 ..
drwxr-xr-x 3 cassandra cassandra 4096 Oct 5 20:45 bin
-rw-r--r-- 1 cassandra cassandra 25562841 Sep 10 20:43 datastax-agent-5.2.1-standalone.jar
Also, the /etc/init.d/datastax-agent file has been deleted. I don't know how I'm supposed to start/stop the service now.
Can I restore the files from the rollback directory? What effect will that have?
In this particular case what happened was that the dpkg install found a preexisting /etc/init.d/datastax-agent file and only put /etc/init.d/datastax-agent.fpk.bak into place. A "sudo dpkg -P datastax-agent" followed by a "sudo dpkg -i /usr/share/dse/datastax-agent/datastax-agent_5.2.1_all.deb" fixed the issue. We had to first kill the already running agent processes and then do a service restart.
Will investigate how that could have happened... that's still a little bit of a mystery to me.
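Condensed into commands, the fix described above looks roughly like this (a sketch; the way you kill the old agent processes and the exact .deb path may differ on your nodes):
# stop any agent processes still running from the old install
sudo pkill -f datastax-agent
# purge the broken package, reinstall the bundled .deb, and restart the service
sudo dpkg -P datastax-agent
sudo dpkg -i /usr/share/dse/datastax-agent/datastax-agent_5.2.1_all.deb
sudo service datastax-agent restart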
