How to use AppArmor inside an LXC container?

I have an LXC container 'foo', created with the Ubuntu template, in /var/lib/lxc/foo/.
I have a file a.out in /var/lib/lxc/foo/rootfs/home/ubuntu/test/ (or /home/ubuntu/test/ as seen from inside the container).
I would like to use AppArmor to prevent a.out from writing to the 'test' folder. Is this possible, and if so, how should I configure AppArmor/LXC?
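For illustration, the confinement rule itself could look roughly like the sketch below. This is a minimal, untested sketch of AppArmor profile syntax only; the profile file name is hypothetical, the paths are written as seen from inside the container, and whether a host-loaded profile actually confines processes inside an LXC container depends on your kernel's and LXC's AppArmor-nesting support.

# hypothetical profile, e.g. /etc/apparmor.d/home.ubuntu.test.a.out
# paths are written as seen from inside the container
/home/ubuntu/test/a.out {
  #include <abstractions/base>
  /home/ubuntu/test/** r,       # reading the test folder stays allowed
  deny /home/ubuntu/test/** w,  # writes to the test folder are denied
}

A profile like this would be loaded with sudo apparmor_parser -r /etc/apparmor.d/home.ubuntu.test.a.out.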

Related

Change mountpoint of docker volume to a custom directory

I would like to have a Docker volume that mounts into a container. This volume would need to be somewhere other than the default location of volumes, preferably somewhere on the Desktop. This is because I am running a web server and would like some directories to be editable by something like VSCode, so I don't always have to go inside the container to edit a file. I will not be using Docker Compose; instead I will be using a Dockerfile for the container. The functionality I'm going for is the equivalent of the following Docker Compose snippet, but in a Dockerfile or through docker run, whichever is easiest to accomplish:
volumes:
  - <local-dir>:<container-dir>
This directory will need to be editable live, and the Dockerfile ADD command will not suffice, because after building, the image is put into a tar archive and its contents cannot be edited after that.
With this solution you can even move a live container to a new partition:
Add a configuration file to tell the Docker daemon what the new location of the data directory is. Using your preferred text editor, add a file named daemon.json under the directory /etc/docker. The file should have this content:
{
  "data-root": "/path/to/your/docker"
}
Stop the Docker daemon (sudo service docker stop), then copy the current data directory to the new one
sudo rsync -aP /var/lib/docker/ /path/to/your/docker
Rename the old docker directory
sudo mv /var/lib/docker /var/lib/docker.old
Start the Docker daemon again
sudo service docker start
resource: https://www.guguweb.com/2019/02/07/how-to-move-docker-data-directory-to-another-location-on-ubuntu/
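To verify that the daemon picked up the new location, docker info reports the data directory (the exact label can vary slightly across Docker versions):

docker info | grep "Docker Root Dir"
# expected output along the lines of: Docker Root Dir: /path/to/your/docker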
You can mount a directory from your host inside your container when you launch the docker container, using -v or --volume
docker run -v /path/to/desktop/some-dir:/container-dir/path <docker-image>
Volumes specified in the Dockerfile, as in your example, will automatically be created under /var/lib/docker/volumes/ every time a container is launched from that image, but it is NOT recommended to have these volumes altered by non-Docker processes.
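For contrast, here is a minimal sketch of what the VOLUME instruction in a Dockerfile actually does (the base image is just a placeholder): it only declares a mount point inside the image; the host-side location is an anonymous volume that Docker picks at run time, which is exactly why a fixed host path cannot be baked in.

FROM alpine                  # placeholder base image
VOLUME /container-dir/path   # declares the mount point only; the host path is not configurable here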

runc and ctr commands do not show docker images and containers

I have multiple Docker images and containers running on a VM, but commands like runc list don't show any of them.
How can I make runc/containerd aware of my existing Docker images?
The runtime (runc) uses a so-called runtime root directory to store and obtain information about containers. Under this root directory, runc places sub-directories (one per container), each of which contains a state.json file where the container state description resides.
The default location for the runtime root directory is either /run/runc (for non-rootless containers) or $XDG_RUNTIME_DIR/runc (for rootless containers); the latter also usually points somewhere under /run (e.g. /run/user/$UID/runc).
When a container engine invokes runc, it may override the default runtime root directory and specify a custom one (the --root option of runc). Docker uses this possibility; e.g. on my box, it specifies /run/docker/runtime-runc/moby as the runtime root.
So, to make runc list see your Docker containers, you have to point it at Docker's runtime root directory via the --root option. Also, given that Docker containers are not rootless by default, you will need the appropriate privileges to access the runtime root (e.g. sudo).
Here is how this looks in practice:
$ docker run -d alpine sleep 1000
4acd4af5ba8da324b7a902618aeb3fd0b8fce39db5285546e1f80169f157fc69
$ sudo runc --root /run/docker/runtime-runc/moby/ list
ID PID STATUS BUNDLE CREATED OWNER
4acd4af5ba8da324b7a902618aeb3fd0b8fce39db5285546e1f80169f157fc69 18372 running /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/4acd4af5ba8da324b7a902618aeb3fd0b8fce39db5285546e1f80169f157fc69 2019-07-12T17:33:23.401746168Z root
As for images, you cannot make runc see them, as it has no notion of an image at all; instead, it operates on bundles. Creating the bundle (e.g. based on an image) is the responsibility of the caller (in your case, containerd).
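Along the same lines, Docker's containers can also be listed through containerd's ctr tool by pointing it at the moby namespace. A hedged sketch; the socket path varies by setup (a standalone containerd usually listens on /run/containerd/containerd.sock, while a Docker-managed containerd may use /run/docker/containerd/containerd.sock), and Docker versions that keep their own image store will still show no images here:

sudo ctr --address /run/containerd/containerd.sock --namespace moby containers list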

Dockerfile - How to define mounting of host file system to the container

I want to mount a folder of the host system into the container, and it needs to be defined in the Dockerfile so that the user doesn't have to do it manually by passing an argument on the command line when running the container. How can I achieve this?
This simply cannot be done. Docker images are designed to be portable, while host mounts are host-specific: if you could specify a host mount at build time, the image would no longer be portable across machines that don't have that folder. That is why this option is not available.
You can use Docker Compose so the user doesn't have to choose the mount folder. Take a look at How do I mount a host directory as a volume in docker compose.
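For example, a minimal docker-compose.yml along these lines (the image name and paths are placeholders, matching the docker run example below) pins the mount so the user only has to run docker-compose up:

version: "3"
services:
  app:
    image: imagename
    volumes:
      - /host-folder:/root/containerfolder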
A Dockerfile is for creating images, not containers.
You cannot define a host mount in an image. The mount must be defined at execution time, when the container is created:
docker run -v /host-folder:/root/containerfolder -i -t imagename

Change file permissions in mounted folder inside docker container on Windows Host

Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use Docker with Linux containers, I highly recommend not using Docker for Windows at all and instead starting the entire Docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine, which starts a boot2docker instance inside a VirtualBox VM with a shared folder on /c/Users, from which you can mount volumes into your containers. The permissions of said volumes are what this question is about. The VMs are stored under /c/Users/tom/.docker/.
I chose the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the hypervisors.
Original question
I am currently trying to set up phpMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found Cannot call chown inside Docker container (Docker for Windows) and thought it might be related, but it appears to apply only to MongoDB.
This is my docker-compose.yml
version: "3"
services:
pma:
image: (secrect company registry)/phpmyadmin
ports:
- 9090:80
volumes:
- /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and cd into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently:
root@22a4bag43245:# ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245:# chmod 655 config.inc.php
root@22a4bag43245:# ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my user home, but then VBox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
I had the same problem of not being able to change ownership even after using chown, and as I researched, it was because of NTFS volumes being mounted inside an ext filesystem. So I used another approach.
Volumes internal to Docker are free from these problems, so you can mount your file on an internal Docker volume and then create a hard link to that file inside your local folder, wherever you want:
sudo ln $(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>) <absolute_path_of_destination>
This way you can have your files in the desired place, inside Docker and without any permission issues, and you will be able to modify the contents of the file as with a normal volume mount, thanks to the hard link.
Here is a working implementation of this process which mounts and links a directory. In case you want to know about the details, see the possible fix section in the issue.
EDIT
Steps to implement this approach:
1. Mount the concerned file in an internal Docker volume (also known as a named volume).
2. Before making the hard link, make sure the volume and the concerned file are present. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run which creates the required file and exits:
docker run --rm -itd \
    -v "<project_name>_<volume_name>:/absolute/path" \
    <image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required file. Here, <project_name> is the project name (by default, the name of the folder in which the project is present) and <volume_name> is the same one we want to use in our original container. <image> can be the same image that is already being used in your original containers.
3. Create a hard link in your OS to the actual file location on your system. You can find that location by running docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name> and appending /<my_file> to the result. Linux users can use ln in the terminal, and Windows users can use mklink in the command prompt.
In step 3 we have not used /absolute/path, since the volume's mountpoint already refers to that location and we just need to refer to the file.
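Putting the steps together, an end-to-end sketch with hypothetical names (project myproject, volume config, file config.inc.php); note that hard links only work within a single filesystem, so the link target must live on the same filesystem as /var/lib/docker:

# step 2: create the named volume and the file inside it, then exit
docker run --rm -itd \
    -v "myproject_config:/var/www/public" \
    alpine sh -c "touch /var/www/public/config.inc.php"
# step 3: hard-link the file from the volume's mountpoint to a convenient location
sudo ln "$(docker volume inspect --format '{{ .Mountpoint }}' myproject_config)/config.inc.php" \
    ~/projects/myproject/config.inc.php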
Try one of the following:
If you can rebuild the image (secret company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
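The accompanying rebuild-and-push step could look like this; <registry> is a placeholder for the redacted company registry:

docker build -t <registry>/docker-stretchimal-apache2-php7-pma .
docker push <registry>/docker-stretchimal-apache2-php7-pma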
Try to exec into the container as the root user explicitly:
docker exec -it -u root [container] bash

Share folder from docker container to host

Is there a way to share a folder from a Docker container to the host?
For example, I have Tomcat inside a Docker container and I want it to be visible from the outside.
If I do
volumes:
  - /opt/tomcat:/opt/tomcat
I receive an error in the container:
"No such file or directory /opt/tomcat/bin/catalina.sh"
I don't think Docker allows you to do that. That command will mount your host folder in the container, so the files already in the container are not visible anymore.
Two options:
You can access the container files using this trick (GitHub issue):
sudo ls /proc/$(docker inspect --format {{.State.Pid}} YOUR_CONTAINER_NAME)/root. To access them you will need root privileges, or you can use bindfs to map the root user to your user name (see the same thread).
Create a new volume, copy the files that need to be accessible into it, and mount it inside the container in the right place.
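A sketch of the second option with hypothetical names (volume tomcat-data, the official tomcat image, whose files live under /usr/local/tomcat): when an empty named volume is mounted over a non-empty directory of the image, Docker copies the image's files into the volume on first use, after which they are visible on the host under the volume's mountpoint (root-owned):

docker volume create tomcat-data
# the first run populates the empty volume from the image's /usr/local/tomcat
docker run --rm -v tomcat-data:/usr/local/tomcat tomcat true
# find and list where the files live on the host
sudo ls "$(docker volume inspect --format '{{ .Mountpoint }}' tomcat-data)"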
