Delete files in offline docker container - docker

I am working with the following docker files: https://github.com/zanata/zanata-docker-files
After I ran ./zanata-server/runapp.sh, it started two Docker containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
654580794e7c zanata/server:latest "/opt/jboss/wildfl..." 18 seconds ago Up 17 seconds 0.0.0.0:8080->8080/tcp zanata
311f3379635e mariadb:10.1 "docker-entrypoint..." 2 weeks ago Up 2 weeks 3306/tcp zanatadb
After a power outage, the Zanata server container broke: it left some Lucene lock files behind, and I cannot start it again:
org.zanata.exception.ZanataInitializationException: Lucene lock files found. Check if Zanata is already running. Otherwise, Zanata was not shut down cleanly: delete the lock files: [/var/lib/zanata/indexes/org.zanata.model.HTextFlowTarget/write.lock, /var/lib/zanata/indexes/org.zanata.model.HProjectIteration/write.lock, /var/lib/zanata/indexes/org.zanata.model.HProject/write.lock]
How can I delete the lock files?

Okay, I thought I needed to delete the files while the container was offline, but in fact I needed the container running; then I could connect to it and run commands as if it were a normal server.
The main solution:
sudo docker exec -it 654580794e7c bash
This lets me execute commands inside the container:
[jboss@654580794e7c ~]$ ls
wildfly
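docker exec also accepts the container name instead of the ID, so given the name shown by docker ps above, this should be equivalent:
sudo docker exec -it zanata bash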
The whole process, if you would like to see:
zanata@zanata:~/docker/zanata-docker-files-platform-4.1.1/zanata-server$ sudo docker ps
[sudo] password for zanata:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
654580794e7c zanata/server:latest "/opt/jboss/wildfl..." 17 minutes ago Up 17 minutes 0.0.0.0:8080->8080/tcp zanata
311f3379635e mariadb:10.1 "docker-entrypoint..." 2 weeks ago Up 2 weeks 3306/tcp zanatadb
zanata@zanata:~/docker/zanata-docker-files-platform-4.1.1/zanata-server$ sudo docker exec -it 654580794e7c bash
[jboss@654580794e7c ~]$ ls
wildfly
[jboss@654580794e7c ~]$ cd /var/lib
[jboss@654580794e7c lib]$ ls
alternatives games machines rpm systemd zanata
dbus initramfs misc rpm-state yum
[jboss@654580794e7c lib]$ cd zanata/indexes
[jboss@654580794e7c indexes]$ ls -lh
total 28K
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.HAccount
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.HGlossaryEntry
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.HGlossaryTerm
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:30 org.zanata.model.HProject
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:30 org.zanata.model.HProjectIteration
drwxr-xr-x 2 jboss jboss 4.0K Mar 3 07:23 org.zanata.model.HTextFlowTarget
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.tm.TransMemoryUnit
[jboss@654580794e7c indexes]$ cd org.zanata.model.HTextFlowTarget/
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ ls
_0.cfe _0.cfs _0.si segments_2 write.lock
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ rm write.lock
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ ls
_0.cfe _0.cfs _0.si segments_2
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ cd .
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ cd ..
[jboss@654580794e7c indexes]$ cd org.zanata.model.HProject
[jboss@654580794e7c org.zanata.model.HProject]$ ls
_0.cfe _0.cfs _0.si segments_2 write.lock
[jboss@654580794e7c org.zanata.model.HProject]$ rm write.lock
[jboss@654580794e7c org.zanata.model.HProject]$ cd ..
[jboss@654580794e7c indexes]$ cd org.zanata.model.HProjectIteration/
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ ls
_0.cfe _0.cfs _0.si segments_2 write.lock
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ rm write.lock
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ ^C
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ exit
zanata@zanata:~/docker/zanata-docker-files-platform-4.1.1/zanata-server$
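For reference, the same cleanup can be done without opening an interactive shell, as long as the container is running. A minimal sketch, assuming the container ID and lock-file paths from the error message above:
# Remove the three Lucene write.lock files in one non-interactive command
# (container ID and paths are the ones reported in the exception message)
sudo docker exec 654580794e7c bash -c \
  'rm -f /var/lib/zanata/indexes/org.zanata.model.{HTextFlowTarget,HProjectIteration,HProject}/write.lock'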

Related

Docker Volume Persistence not working for more than one run of a container

Docker version 20.10.2
I'm just starting out with Docker and following training guides, but something hasn't been mentioned so far (that I have discovered): when I run a container that writes some data to a Docker volume, and then run that container again attached to the same volume, why is the newly named file not added to it?
Here is my rather basic Dockerfile
FROM ubuntu
RUN mkdir applocal
RUN touch applocal/applocalfile."$(date --iso-8601=seconds)"
RUN ls -la applocal
I run this sequence of commands...
docker build -f Dockerfile -t mine/applocal-persist .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM ubuntu
---> c29284518f49
Step 2/4 : RUN mkdir applocal
---> Running in 9f796f4d988a
Removing intermediate container 9f796f4d988a
---> 99005a7ffed1
Step 3/4 : RUN touch applocal/applocalfile."$(date --iso-8601=seconds)"
---> Running in ffbf2f4c636a
Removing intermediate container ffbf2f4c636a
---> 199bc706dcc6
Step 4/4 : RUN ls -la applocal
---> Running in 7da02faa9fba
total 8
drwxr-xr-x 1 root root 4096 Jul 16 13:52 .
drwxr-xr-x 1 root root 4096 Jul 16 13:52 ..
-rw-r--r-- 1 root root 0 Jul 16 13:52 applocalfile.2021-07-16T13:52:00+00:00
Removing intermediate container 7da02faa9fba
---> 7387c521d82b
Successfully built 7387c521d82b
Successfully tagged mine/applocal-persist:latest
Then run the command...
docker run -v applocalsaved:/applocal mine/applocal-persist
Looking at the Volume data it has worked
ls -la /var/lib/docker/volumes/applocalsaved/_data/
total 8
drwxr-xr-x 2 root root 4096 Jul 16 14:55 .
drwxr-xr-x 3 root root 4096 Jul 16 14:55 ..
-rw-r--r-- 1 root root 0 Jul 16 14:52 applocalfile.2021-07-16T13:52:00+00:00
If I wait a few minutes and re-run docker run -v applocalsaved:/applocal mine/applocal-persist, then check the volume data again, no new file exists:
ls -la /var/lib/docker/volumes/applocalsaved/_data/
total 8
drwxr-xr-x 2 root root 4096 Jul 16 14:55 .
drwxr-xr-x 3 root root 4096 Jul 16 14:55 ..
-rw-r--r-- 1 root root 0 Jul 16 14:52 applocalfile.2021-07-16T13:52:00+00:00
Run history...
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d16e9aa495e mine/applocal-persist "bash" 57 seconds ago Exited (0) 55 seconds ago distracted_cohen
69ff06d9c886 mine/applocal-persist "bash" 2 minutes ago Exited (0) 2 minutes ago affectionate_lehmann
I've listed the Volume Inspect here...
docker volume inspect applocalsaved
[
{
"CreatedAt": "2021-07-16T14:55:24+01:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/applocalsaved/_data",
"Name": "applocalsaved",
"Options": null,
"Scope": "local"
}
]
I'm obviously missing a trick here - or misunderstanding what is going on or the design around this.
Thanks in advance
For info: I'm on Windows, running Ubuntu 21.04 as a VirtualBox VM.
Those commands run once when the image is built.
If you want something to run on container startup, you can use CMD or ENTRYPOINT
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
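For example, a minimal sketch of the same Dockerfile with the file creation moved from build time (RUN) to container start time (CMD), so that each docker run against the volume adds a new timestamped file:
FROM ubuntu
RUN mkdir applocal
# CMD runs at container start, not at image build time
CMD touch applocal/applocalfile."$(date --iso-8601=seconds)" && ls -la applocal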
The commands in the Dockerfile only run once, when the image is originally built. You can verify this for example by just running the image without a volume mount:
docker build -t mine/applocal-persist .
docker run --rm mine/applocal-persist \
  ls -l ./applocal
sleep 60
docker run --rm mine/applocal-persist \
  ls -l ./applocal
If you start the container with a named volume mounted, the image's contents are copied into the volume, but only if it is a Docker named volume and only if the volume is empty. (This doesn't happen with Docker bind mounts, Kubernetes volumes, or if the image has changed; I would not rely on this for any sort of data sharing since it works in so few contexts.)
Conversely, if you start the container with any sort of volume mounted, whatever content is in the volume completely replaces what's in the image. You can see this with some more experimentation:
# Build the image
docker build -t mine/applocal-persist .
# Start the container with a new named volume mounted; see what's there.
docker volume rm applocalsaved
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
  ls -l /applocal
# Edit a file in the volume and see that it gets persisted across restarts
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
  touch /applocal/foo
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
  ls -l /applocal
# But it is not included in the image without the volume mount
docker run --rm mine/applocal-persist \
  ls -l /applocal
sleep 60
# Rebuild the image
docker build -t mine/applocal-persist .
# In the base image, you will see the updated timestamp
docker run --rm mine/applocal-persist \
  ls -l /applocal
# But if you mount the volume, the old volume contents replace the
# image contents and you will only see the old timestamp
docker run --rm -v applocalsaved:/applocal mine/applocal-persist \
  ls -l /applocal

prevent changing of permissions in mounts with rootless container

In rootful containers, the solution to this problem is to run with --user "$(id -u):$(id -g)"; however, this does not work for rootless container systems (rootless Docker, or in my case Podman):
$ mkdir x
$ podman run --user "$(id -u):$(id -g)" -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'echo hi >> /x/test'
bash: /x/test: Permission denied
so for rootless container systems I should remove --user since the root user is automatically mapped to the calling user:
$ podman run -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'echo hi >> /x/test'
$ ls -al x
total 12
drwxr-xr-x 2 asottile asottile 4096 Sep 3 10:02 .
drwxrwxrwt 18 root root 4096 Sep 3 10:01 ..
-rw-r--r-- 1 asottile asottile 3 Sep 3 10:02 test
but, because the process now runs as root inside the container, it can change ownership to UIDs that cannot be deleted from outside the container:
$ podman run -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'mkdir -p /x/1/2/3 && chown -R nobody /x/1'
$ ls -al x/
total 16
drwxr-xr-x 3 asottile asottile 4096 Sep 3 10:03 .
drwxrwxrwt 18 root root 4096 Sep 3 10:01 ..
drwxr-xr-x 3 165533 asottile 4096 Sep 3 10:03 1
-rw-r--r-- 1 asottile asottile 3 Sep 3 10:02 test
$ rm -rf x/
rm: cannot remove 'x/1/2/3': Permission denied
so my question is: how do I allow writes to a mount, but prevent changing ownership for rootless containers?
I think --user $(id -u):$(id -g) --userns=keep-id will get what you want.
$ id -un
erik
$ id -gn
erik
$ mkdir x
$ podman run -v "$PWD/x:/x:Z" --user $(id -u):$(id -g) --userns=keep-id docker.io/library/ubuntu:focal bash -c 'mkdir -p /x/1/2/3 && chown -R nobody /x/1'
chown: changing ownership of '/x/1/2/3': Operation not permitted
chown: changing ownership of '/x/1/2': Operation not permitted
chown: changing ownership of '/x/1': Operation not permitted
$ ls x
1
$ ls -l x
total 0
drwxr-xr-x. 3 erik erik 15 Sep 6 19:34 1
$ ls -l x/1
total 0
drwxr-xr-x. 3 erik erik 15 Sep 6 19:34 2
$ ls -l x/1/2
total 0
drwxr-xr-x. 2 erik erik 6 Sep 6 19:34 3
$
Regarding deleting files and directories that are not owned by your normal UID and GID (but by UIDs from the extra ranges in /etc/subuid and /etc/subgid), you can use podman unshare rm filepath and podman unshare rm -rf directorypath.
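For example, applied to the directory from the question above, a hedged sketch:
# podman unshare runs the command inside the user namespace, where the
# subuid-owned files map back to UIDs your user is allowed to remove
podman unshare rm -rf x/1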

Understanding Docker volumes

I am trying to learn Docker volumes, and I am using centos:latest as my base image. When I try to run a Docker command, I am unable to access the attached volume inside the container:
Command:
sudo docker run -it --name test -v /home/user/Myhostdir:/mydata centos:latest /bin/bash
Error:
[user@0bd1bb78b1a5 mydata]$ ls
ls: cannot open directory .: Permission denied
When I run ls -l to check the folder's ownership, it shows UID 1001. What's happening, and how can I solve this?
drwxrwxr-x. 2 1001 1001 38 Jun 2 23:12 mydata
My local machine:
[user@xxx07012 Myhostdir]$ pwd
/home/user/Myhostdir
[user@swathi07012 Myhostdir]$ ls -al
total 12
drwxrwxr-x. 2 user user 38 Jun 2 23:12 .
drwx------. 18 user user 4096 Jun 2 23:11 ..
-rw-rw-r--. 1 user user 15 Jun 2 23:12 text.2.txt
-rw-rw-r--. 1 user user 25 Jun 2 23:12 text.txt
This is partially a Docker issue, but mostly an SELinux issue. I am assuming you are running an old 1.x version of Docker.
You have a couple of options. First, you could take a look at this blog post to understand the issue a bit more and possibly use the fix mentioned there.
Or you could just upgrade to a newer version of Docker. I tested mounting a simple volume on Docker version 18.03.1-ce:
docker run -it --name test -v /home/chris/test:/mydata centos:latest /bin/bash
[root@bfec7af20b99 /]# cd mydata/
[root@bfec7af20b99 mydata]# ls
test.txt.txt
[root@bfec7af20b99 mydata]# ls -l
total 0
-rwxr-xr-x 1 root root 0 Jun 3 00:40 test.txt.txt
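If SELinux labeling is indeed the culprit, the usual fix is to add the :z or :Z option to the volume mount so Docker relabels the host directory. A hedged sketch using the same paths as the question:
# :Z applies a private SELinux label to the host directory;
# use :z instead if several containers need to share it
sudo docker run -it --name test -v /home/user/Myhostdir:/mydata:Z centos:latest /bin/bash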

Permission denied when accessing contents generated by a Docker container on the local filesystem

I use the following command to run a container:
docker run -it -v /home/:/usr/ ubuntu64 /bin/bash
Then I run a program in the container. The program generates some files in /usr/, which also appear in /home/, but outside the container I cannot access the generated files: I get a Permission denied error.
I think this is because the files are generated by root inside the container, while outside the container my user has no root privileges. How can I solve this?
What I want is to access, from outside the container, the files generated by the program installed in the container.
You need to use the -u flag
docker run -it -v $PWD:/data -w /data alpine touch nouser.txt
docker run -u `id -u` -it -v $PWD:/data -w /data alpine touch onlyuser.txt
docker run -u `id -u`:`id -g` -it -v $PWD:/data -w /data alpine touch usergroup.txt
Now if you do ls -alh on the host system
$ ls -alh
total 8.0K
drwxrwxr-x 2 vagrant vagrant 4.0K Sep 9 05:22 .
drwxrwxr-x 30 vagrant vagrant 4.0K Sep 9 05:19 ..
-rw-r--r-- 1 root root 0 Sep 9 05:21 nouser.txt
-rw-r--r-- 1 vagrant root 0 Sep 9 05:21 onlyuser.txt
-rw-r--r-- 1 vagrant vagrant 0 Sep 9 05:22 usergroup.txt
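As a follow-up, for files that were already created as root in the mounted directory, ownership can be fixed afterwards from the host. A hedged sketch (the path below is a hypothetical placeholder, not taken from the question):
# Reclaim ownership of container-created files on the host;
# /home/generated-output is a placeholder for wherever the files ended up
sudo chown -R "$(id -u):$(id -g)" /home/generated-output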

How to read and write to mounted volume without running as root?

When mounting a volume with the following command:
docker run -t -i --volumes-from FOO BAR
the volumes from FOO are mounted with root as owner. I can't read and write to that without running as root as far as I know. Must I run as root or is there some other way?
I have tried creating the folder with a different owner before mounting, but the mount seems to overwrite that.
Edit: A chown would work if it could be done automatically after the mounting somehow.
I'm not sure why you aren't able to change your folder permissions in your source image. This works without issue in my lab:
$ cat df.vf-uid
FROM busybox
RUN mkdir -p /data && echo "hello world" > /data/hello && chown -R 1000 /data
$ docker build -f df.vf-uid -t test-vf-uid .
...
Successfully built 41390b132940
$ docker create --name test-vf-uid -v /data test-vf-uid
e12df8f84a3b1f113ad5440b62552b40c4fd86f99eec44698af9163a7b960727
$ docker run --volumes-from test-vf-uid -u 1000 -it --rm busybox /bin/sh
/ $ ls -al /data
total 12
drwxr-xr-x 2 1000 root 4096 Aug 22 11:44 .
drwxr-xr-x 19 root root 4096 Aug 22 11:45 ..
-rw-r--r-- 1 1000 root 12 Aug 22 11:43 hello
/ $ echo "success" >/data/world
/ $ ls -al /data
total 16
drwxr-xr-x 2 1000 root 4096 Aug 22 11:46 .
drwxr-xr-x 19 root root 4096 Aug 22 11:45 ..
-rw-r--r-- 1 1000 root 12 Aug 22 11:43 hello
-rw-r--r-- 1 1000 root 8 Aug 22 11:46 world
/ $ cat /data/hello /data/world
hello world
success
/ $ exit
So, what I ended up doing was mounting the volume into another container and changing the owner (using the UID I wanted in the final setup) from that container. Apparently UIDs are UIDs regardless of which container they are seen from. This means I can run without being root in the final container. Perhaps there are easier ways to do it, but this seems to work at least. Something like this (untested code clip from my final solution):
docker run -v /opt/app --name Foo ubuntu /bin/bash
docker run --rm --volumes-from Foo -v $(pwd):/FOO ubuntu bash -c "chown -R 9999 /opt/app"
docker run -t -i --volumes-from FOO BAR
