Docker: How to avoid "Operation not permitted" in a Docker container?

I created a docker image of a SLES 12 machine by taking a backup of all the necessary filesystems and packing them into one tar file. To create the docker image I ran the following command:
cat fullbackup.tar | docker import - sles_image
After that I ran the docker image in a container using the command below:
docker run --net network1 -i -t sles_image /bin/bash
Note: I have already set up networking in this docker container (with the IP address that I want).
Now in my docker container some applications are already configured, because those applications were available on the SLES 12 machine from which I created this docker image. These custom applications internally run some low-level kernel commands like modprobe.
But when I start my application, I'm facing this error:
Operation not permitted
How can I grant the correct permissions so that this error no longer occurs?

You might try running the Docker container with runtime privileges and Linux capabilities:
docker run --privileged
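Since modprobe loads kernel modules, a narrower alternative to full --privileged mode is granting just the capability it needs. A hedged sketch, reusing the run command from the question (SYS_MODULE is the capability governing module loading; whether modprobe then succeeds still depends on the host kernel and the module files present in the image):
# Broad: grant all capabilities and device access (use with care)
docker run --privileged --net network1 -i -t sles_image /bin/bash
# Narrower: grant only the capability modprobe needs
docker run --cap-add SYS_MODULE --net network1 -i -t sles_image /bin/bash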

If you are on a Mac, resolve the issue by giving Docker permission to the files and folders; the other workaround is to manually copy the files into the container instead of mounting them.
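For the copy workaround, note that docker cp also works from host to container. A minimal sketch with placeholder names (mycontainer and ./myfiles are illustrative, not from the question):
docker cp ./myfiles mycontainer:/app/myfiles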

Related

docker run not syncing local folder in windows

I want to sync my local folder with that of a docker container. I am using a Windows system with the WSL 2 backend. I tried running the following command as per the instructions of a docker course instructor, but it didn't seem to sync:
docker run -v ${pwd}:\app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image
I faced a similar issue when I started syncing local folders with a docker container on my Windows system. The solution was actually quite simple: instead of -v ${pwd}:\app:ro, your first volume should be -v ${pwd}:/app:ro. Notice the / instead of \. Since your docker container is a Linux container, the path must use /.
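So the full corrected command from the question becomes:
docker run -v ${pwd}:/app:ro --env-file ./.env -d -p 3000:4000 --name node-app node-app-image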
As @Sysix pointed out, docker will always overwrite the folder in the container with the one on the host (no matter whether it already existed or not). Only those files will be in that folder/volume that were created either on the host, or in the container during runtime.
Learn more about bind mounts and volumes in the Docker documentation.

Share windows directory to Linux docker container

I've been trying the whole day to accomplish a simple example of sharing a Windows directory with a Linux container running on a Windows Docker host.
I have read all the guidelines and ran the following:
docker run -it --rm -p 5002:80 --name mount-test --mount type=bind,source=D:\DockerArea\PortScanner,target=/app/PortScannerWorkingDirectory barebonewebapi:latest
The original PortScanner directory on the host machine has a text file in it. The container is created successfully.
The issue is that when I run
docker exec -it mount-test /bin/bash
and then list the mounted directory in the container (PortScannerWorkingDirectory), it shows up as empty. Nor can the C# code read the contents of the host file in the mapped directory.
Am I missing something simple here? I feel stuck and can't share files on the host Windows machine with the Linux container.
After several days of dealing with the issue I found a fairly apparent answer. Although I already had the C and D drives shared with Docker in the Docker settings, I experimented and re-shared both drives (there's a special Reset Credentials button for that purpose in the Docker for Windows settings). After that the issue was resolved. I'm saving it here in the hope that it may help someone else, since this seems to be a glitch with permissions or similar.
The issue is quite hard to diagnose: when it occurs, the Docker container just silently writes into its own writable layer and no error pops up.
Go to the Docker settings -> Shared Drives -> Reset credentials,
then tick the drive and click the Apply button,
then execute the following command, as suggested by Docker:
docker run --rm -v c:/Users:/data alpine ls /data

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the container's command prompt. Is there any way to run simple commands like ls /home/ that will list the files on the host?
In other words, is there any image that will mount the file system of the host server "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach as far as you need to. Experiment with it. Learn about the different mount options.
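For instance, to get at the host's /home (the original goal), one option on a Linux host is a read-only bind mount of the host root; on Docker Desktop this would show the VM's filesystem instead, so treat this as a sketch rather than a recommendation:
$ docker run --rm -it -v /:/host:ro alpine:latest /bin/sh
/ # ls /host/home/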
I know the Docker app on macOS provides a way to set up default volume mounts. Portainer also claims to provide a volume management screen, though I am yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll actually need to know if bash is installed, or try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit out of the container.
First of all: You never ever want to do so.
Volumes mounted to containers are used to persist the container's data, because containers are designed to be volatile (the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts). Think of the volume as the database where all the data (the state of the container) is stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system: such a container would have read/write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entirety of the host file system.
I would propose plain ssh (or remote desktop) into your host if you need to run commands or tasks on it.
Or, if your container requires access to a specific folder for some reason, then you should consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
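A hedged sketch of that copy step, assuming a Linux host and a placeholder path (/path/on/host is illustrative; myvol2 matches the mount example above):
# Create a docker-managed volume, copy the host folder's content into it,
# then bind the container to the volume instead of the host folder
docker volume create myvol2
docker run --rm -v /path/on/host:/from -v myvol2:/to alpine cp -a /from/. /to/
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest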

Copy files from within a docker container to local machine

Is it possible to copy files to a local machine by running a command inside of a docker container? I am aware of docker cp <containerId>:container/file/path /host/file/path. However, my understanding is that this has to be run from outside of the docker container. Is there a way to do it, or something similar, from within?
For some context, I have a python script that is run inside of a docker container with something like the following command: docker run -ti --rm --net=host buildServer:5000/myProgram /myProgram.py -h. I would like to retrieve the files that are generated by this program so they can be edited. I could run the docker container in detached mode, docker cp the desired file and then shut down the container. However, I would like to abstract this away from the user.
Docker containers by design don't have any access to the host filesystem unless you provide it explicitly via volume mounts. So, in your example, you could do something like:
docker run -ti -v /tmp/data:/data -rm --net=host buildServer:5000/myProgram /myProgram.py -h
And within the container, the /data directory would be mapped to /tmp/data on your host. You could then copy files into /data to get at them on your host.
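Assuming the script writes its generated files under /data inside the container (an assumption on my part; the question doesn't say where they land), they then show up on the host directly:
$ ls /tmp/data    # files written to /data in the container appear here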
This assumes that you're running Docker on Linux. If you are using Windows or OS X there may be additional steps, since in those environments Docker is actually running on a Linux virtual machine and volume access may or may not behave as expected (I don't use those platforms so I can't comment authoritatively).
For more information:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume

Docker exec command not using the mounted directory for /

I am new to docker containers and am trying to solve a problem I am facing right now.
This is my understanding, based on limited knowledge:
When we create a docker container, Docker creates a local mount and uses it as the root file system for the docker container.
Now, if I run any commands in the container from the host server using docker exec, docker is not using the mounted partition as the / file system for the container. I mean, it still picks up the binaries and env variables from the host server. Is there any option/alternate solution for making docker use the original mounted directory for docker exec too?
If I access/start the container with docker attach or docker run -i -t /bin/bash, I get the mounted directory as my / file system, which gives me an entirely independent environment from my host system. But this doesn't happen with the docker exec command.
Please help!
You are operating under a misconception. The docker image only contains what was installed in it. This is usually a very cut down version of an operating system for efficiency reasons.
The docker container is started from an image - and that's a running version, which can change and store state - but may be discarded.
docker run starts a container from an image. You can run the same image multiple times to create completely different containers (which happen to have the same starting point for their content).
docker exec attaches to one of those containers to run a command. So you will only see the things inside it that were inside the image, or added post-start (like log files). It has no vision of the host filesystem, and may not be the same OS - the only requirement is that it shares elements of the kernel - although it usually has a selection of the commonly used binaries.
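A quick way to see this for yourself (the names are illustrative, and this assumes the image ships sleep and ls):
$ docker run -d --name acontainer some_basic_image sleep 1000
$ docker exec acontainer ls /    # lists the container's root, not the host's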
And when you run an image to create a container, you can specify a mount. One of the options when you do this is passing through a host filesystem, e.g. -v /path/on/host:/path_in/container. But you don't have to - you can use data containers or a docker volume mount instead. E.g. docker run -v /mount creates a mount point within the container, using the docker filesystem, which isn't part of the parent host. This can be used to make a data container with:
docker create -v /path/to/data --name data_for_acontainer some_basic_image
And then mount volumes from that data container on a new one:
docker run -d --volumes-from data_for_acontainer some_app_image
This will attach that data container onto the /path/to/data mount. But in neither case is the 'host' filesystem touched directly - this is the whole point of dockerising things.
