Code changes in the Docker container as a root user

I'm experimenting with Docker for C++ development. I wrote a Dockerfile that includes instructions for installing all of the necessary libraries and tools. CMake is used to build the C++ code and install the binaries. Because of a specific use case, the binaries must be executed as the root user. I'm running the Docker container with the following command:
docker run -it --rm -u 0 -v $(pwd):/mnt buildmcu:focal
The issue is that I am the root user inside the Docker container, so if I make any code changes or create a new file inside the container, I get a permission error on the host machine when I try to access it. I need to run sudo chmod ... to change the permissions. Is there any way to allow source modification in both the Docker container and the host machine without permission errors?
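One workaround (a sketch; it assumes only the built binaries need root, while the sources can stay owned by the host user) is to restore ownership of the bind mount before leaving the container. The bind-mounted directory itself keeps its host UID/GID, so it can serve as the reference:
# inside the container, before exiting: give everything in /mnt back to the host user
chown -R "$(stat -c '%u:%g' /mnt)" /mnt
Alternatively, if root is only needed at run time, start the container as your host user with docker run -u "$(id -u):$(id -g)" ... so new files are created with your ownership in the first place.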

Related

Docker Issue: Removing a Bind-Mounted Volume

I have been unable to find anything online about this simple mistake I made, so I'm looking for some help. I am using a server to run a Docker image in a container, and I mistyped and have caused an annoyance for myself. I ran the command
docker run --rm -v typo:location docker_name
and since I had a typo in the directory to mount, it created a directory on the host machine, and when the container ended the directory remained. I tried to remove it, but I just get this error:
rm -rf typo/
rm: cannot remove 'typo': Permission denied
I know now that I should have used --mount instead of -v for safety, but the damage is done; how can I remove this directory without having access to the container that created it?
I apologize in advance, my knowledge of docker is quite limited. I have mostly learned it only to use a particular image and I do a bunch of Google searches to do the rest.
The first rule of Docker security is, if you can run any docker command at all, you can get unrestricted root access over the entire host.
So you can fix this issue by running a container as root, bind-mounting the parent directory, and deleting the directory in question from inside the container:
docker run \
--rm \
-v "$PWD:/pwd" \
busybox \
rm -rf /pwd/typo
I do not have sudo permissions
You can fix that
docker run --rm -v /:/host busybox vi /host/etc/sudoers
(This has implications in a lot of places. Don't indiscriminately add users to a docker group, particularly on a multi-user system, since they can trivially get root access. Be careful publishing the host's Docker socket into a container, since the container can then root the host; perhaps redesign your application to avoid needing it. Definitely do not expose the Docker socket over the network.)
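The risky pattern that the warning about the Docker socket refers to looks like this (some-image is a placeholder, shown only to illustrate what to avoid):
# anything inside this container can drive the host's Docker daemon as root
docker run -v /var/run/docker.sock:/var/run/docker.sock some-image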

Install package in running docker container

I've been using a Docker container to build the Chromium browser (building for Android on Debian 10). I've already created a Dockerfile that contains most of the packages I need.
Now, after building and running the container, I followed the instructions, which asked me to execute an install script (./build/install-build-deps-android.sh). In this script multiple apt install commands are executed.
My question now is: is there a way to install these packages without rebuilding the container? Downloading and building took rather long, and rebuilding the container each time a new package is required seems suboptimal. The error I get when executing the install script is:
./build/install-build-deps-android.sh: line 21: lsb_release: command not found
(I guess there will be multiple missing packages). And using apt will give:
root@677e294147dd:/android-build/chromium/src# apt install nginx
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nginx
(nginx is just an example install.)
I'm thankful for any hints, as I could only find guides that use the Dockerfile to install packages.
You can use docker commit:
Start your container: sudo docker run IMAGE_NAME
Access your container using bash: sudo docker exec -it CONTAINER_ID bash
Install whatever you need inside the container
Exit container's bash
Commit your changes: sudo docker commit CONTAINER_ID NEW_IMAGE_NAME
If you now run docker images, you will see NEW_IMAGE_NAME listed under your local images.
Next time, when starting the docker container, use the new docker image you just created:
sudo docker run NEW_IMAGE_NAME - this one will include your additional installations.
Answer based on the following tutorial: How to commit changes to docker image
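Concretely, the flow might look like this (a sketch; IMAGE_NAME and the committed image name are placeholders):
sudo docker run -d IMAGE_NAME sleep infinity   # keep the container alive in the background
sudo docker ps                                 # note the CONTAINER_ID
sudo docker exec -it CONTAINER_ID bash         # install what you need inside, then exit
sudo docker commit CONTAINER_ID my-image:with-deps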
Thanks to @adnanmuttaleb and @David Maze (unfortunately, they only replied, so I cannot accept their answers).
What I did was edit the Dockerfile for any later rebuilds (which already happened), and use the exec command to install the needed dependencies from outside the container. Also remember to run
apt update
first; otherwise apt cannot find anything...
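For example, from the host (a sketch; CONTAINER_ID is a placeholder, and lsb-release is the package that provides the missing lsb_release command):
docker exec CONTAINER_ID bash -c 'apt-get update && apt-get install -y lsb-release'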
A slight variation of the steps suggested by Arye that worked better for me:
Create a container from the image and access it in interactive mode: docker run -it IMAGE_NAME /bin/bash
Modify container as desired
Leave container: exit
List launched containers: docker ps -a and copy the ID of the container just modified
Save to a new image: docker commit CONTAINER_ID NEW_IMAGE_NAME
If you haven't followed the Post-installation steps for Linux, you might have to prefix Docker commands with sudo.
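On Linux, the relevant post-installation step is adding your user to the docker group (keeping in mind the security caveat quoted earlier):
sudo usermod -aG docker $USER
newgrp docker   # or log out and back in for the group change to take effect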

Permission denied when trying to copy a file from a location to a container

I'm following the steps here to set up a distributed JMeter test, but when copying my local JMeter test into the master container I get a permission denied error, specifically:
sh: 2: /jmeter/apache-jmeter-3.3/bin/: Permission denied
I'm not clear on what you're trying to do.
If you're trying to copy a file from your host to a Docker container, why not just mount the file/directory into the container at runtime using --mount or -v? For example: docker run -v <local path>:<dst path on docker container> <ImageName>
Edit: This works between multiple containers as well. You can use shared volumes to share storage between two or more containers. Read more here: https://docs.docker.com/storage/volumes/
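A minimal sketch of sharing a named volume between two containers (the volume and file names are arbitrary):
docker volume create shared-data
docker run --rm -v shared-data:/data busybox sh -c 'echo hello > /data/greeting'
docker run --rm -v shared-data:/data busybox cat /data/greeting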
Execute the following commands:
docker exec -t master chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
docker exec -t slave01 chmod +x /jmeter/apache-jmeter-3.3/bin/jmeter.sh
etc.
This will make the jmeter.sh script executable via the chmod command.
Also be aware that, according to JMeter Best Practices, you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.1 (or whatever the latest version available at the JMeter Downloads page is) at the next available opportunity.

Docker on Windows getting "Could not locate Gemfile"

I'm trying to learn Docker using Windows as the host OS to create a container using Rails image from Docker Hub.
I've created a Dockerfile with the content below and an empty Gemfile; however, I'm still getting the error "Could not locate Gemfile".
Dockerfile
FROM rails:4.2.6
The commands I used are the following (though I don't understand what they actually do):
ju.oliveira#br-54 MINGW64 /d/Juliano/ddoc
$ docker build -t ddoc .
Sending build context to Docker daemon 4.608 kB
Step 1 : FROM rails:4.2.6
---> 3fc52e59c752
Step 2 : MAINTAINER Juliano Nunes
---> Using cache
---> d3ab93260f0f
Successfully built d3ab93260f0f
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
$ docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install
Unable to find image 'ruby:2.1' locally
2.1: Pulling from library/ruby
fdd5d7827f33: Already exists
a3ed95caeb02: Pull complete
0f35d0fe50cc: Already exists
627b6479c8f7: Already exists
67c44324f4e3: Already exists
1429c50af3b7: Already exists
f4f9e6a0d68b: Pull complete
eada5eb51f5d: Pull complete
19aeb2fc6eae: Pull complete
Digest: sha256:efc655def76e69e7443aa0629846c2dd650a953298134a6f35ec32ecee444688
Status: Downloaded newer image for ruby:2.1
Could not locate Gemfile
So, my questions are:
Why can't it find the Gemfile if it's in the same directory as the Dockerfile?
What does the command docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:2.1 bundle install do?
How do I set a folder in my host file system to be synced to the container (I'm trying to create a development environment for Rails projects using Docker on Windows)?
I don't know if this makes any difference, but I'm running this command from the bash executed via "Docker Quickstart Terminal" shortcut. I think all it does is run these commands in a default VM, though I could create a new one (but I don't know if I should do this).
Thank you, and sorry for all these questions, but right now Docker seems very confusing to me.
You must mount a host directory that is somewhere inside your home directory (e.g. c:/Users/john/*).
$PWD will give you a Unix-like path. If your shell is Cygwin-like, it will look like /cygdrive/c/Users/... or something similar. However, Docker and VirtualBox are Windows executables, so they expect a plain Windows path. Yet it seems Docker cannot accept a Windows path on the -v command line, so it is converted to /c/Users/.... The other people may be right; you may not be able to access a directory outside your home for some reason (but I wouldn't know why). To solve your problem, create a junction within your home that points to the path you want, then mount that path in your home.
>mklink /j \users\chloe\workspace\juliano \temp
Junction created for \users\chloe\workspace\juliano <<===>> \temp
>docker run -v /c/Users/Chloe/workspace/juliano:/app IMAGE-NAME ls
007.jpg
...
In your case that would be
mklink /j C:\Users\Juliano\project D:\Juliano\ddoc
docker run -v /c/Users/Juliano/project:/usr/src/app -w /usr/src/app ruby:2.1 bundle install
--rm removes the container when it exits. -w sets the working directory inside the container. -v sets the volume mount, mapping the host path to the container path. ruby:2.1 uses the standard Docker Ruby 2.1 image, and bundle install runs Bundler!
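Annotated, the corrected command reads:
# --rm : remove the container when it exits
# -v   : mount the junction into the container at /usr/src/app
# -w   : run the command from that directory inside the container
docker run --rm -v /c/Users/Juliano/project:/usr/src/app -w /usr/src/app ruby:2.1 bundle install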

Docker within Docker, POST HTTP error

I am trying to run Docker within Docker. The purpose is purely experimental; I am by no means trying to implement anything functional, I just want to check how Docker performs when it is run from another Docker container.
I start Docker through boot2docker on my Mac and then spin up a simple Ubuntu image.
$ docker run -t -i ubuntu /bin/bash
I then go ahead and install docker as well as python.
root@aa9263c874e4: apt-get update
root@aa9263c874e4: apt-get install -y docker.io python2.7
It is able to connect to the internet, because this apt-get succeeds. I then get the following error when I try to start a Docker instance from within Docker:
root@aa9263c874e4: sudo docker run -t -i ubuntu /bin/bash
2015/01/09 08:59:09 Post http:///var/run/docker.sock/v1.12/containers/create: dial unix /var/run/docker.sock: no such file or directory
Any idea what I missed? It seems weird that I get a POST error, because the container was clearly able to reach the internet via apt-get before.
I answered a similar question before on how to run a Docker container inside Docker.
Running Docker inside Docker is definitely possible. The main thing is to run the outer container with extra privileges (starting it with --privileged=true) and then install Docker in that container.
Check this blog post for more info: Docker-in-Docker.
An excellent use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
So I believe that your POST problem has nothing to do with connecting to the internet, since the container is trying to talk to the local Docker socket. To solve the problem, simply add the --privileged=true flag to the outer container when starting it, like this:
docker run --privileged=true ...
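Putting it together with the commands from the question (a sketch; the daemon binary is dockerd on current releases, while the old docker.io package used docker -d):
# on the host: start the outer container with extended privileges
docker run --privileged=true -t -i ubuntu /bin/bash
# inside the outer container: install Docker and start its daemon
apt-get update && apt-get install -y docker.io
dockerd > /var/log/dockerd.log 2>&1 &
# /var/run/docker.sock now exists, so the inner run succeeds
docker run -t -i ubuntu /bin/bash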
