Building Docker images on a system with no root access - docker

I'm trying to build an image from a Dockerfile. I need to do this on a system running openSUSE that is fairly locked down -- I have no root access, so I can't install docker or run a docker daemon to use the usual docker build method.
I have looked into various ways of doing this, but they all seem to require root access at some point despite claiming to run unprivileged.
img seemed promising, but running the binary results in the error "failed to use newuidmap", and fixing this seems to require modifying a root-owned file.
Buildah also seemed promising, but I run into similar uid issues that require root to fix.

You can build container images with buildah without root access, but only once UID mapping has been set up, and setting that up requires root access (editing /etc/subuid):
$ cat /etc/subuid
username:100000:65536
$ ll /etc/subuid
-rw-r--r--. 1 root root 34 Dec 3 14:17 /etc/subuid
Unless your distribution ships /etc/subuid pre-filled with an entry that works for your unprivileged user, there is nothing you can do on your own given the current Linux kernel implementation (< 4.20).
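If you can get an administrator to add that mapping once, rootless builds do become possible afterwards. A rough sketch, assuming a recent shadow-utils (which provides the --add-subuids/--add-subgids options) and buildah; the UID/GID range and image name are placeholders:
# One-time step performed by root: give the unprivileged user a range
# of subordinate UIDs/GIDs to map inside its user namespace
sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 username
# After that, the unprivileged user can build without root
buildah bud -f Dockerfile -t myimage .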

If you are asking about running Docker without using root or sudo, you can do this.
Type:
$ sudo gpasswd -a $USER docker
Hit Enter, then type:
$ newgrp docker
Now you can run docker without using root.

Related

configuring docker to put files [duplicate]

I was just getting started with docker, and I ran this:
docker pull redis
and I got a permission denied error. It turns out docker writes to /var/* directories, which require write permission, and many other docker commands also require something like:
sudo docker ***
Now, I don't really like the notion of adding root privileges to every docker command (it might be because I just don't know docker well yet, but that's true of every program). Is this a requirement of docker?
If it is not required, how do I configure it to behave like other programs, which only ask for privileges when they need them, so that all the pull and run commands just read from and write to my normal directories rather than a system directory?
EDIT: my concern was that if docker is allowed access to system files and contains some embedded script that could harm the computer, that script would execute when I run docker. Since I give it root privileges, the script could do anything. Would adding my user to the docker group instead of using sudo fix that?
By default Docker runs an always-on daemon on your system, which requires root privileges (experimental rootless Docker support exists, though).
The common approach is to add your user to the docker group, which allows you to run docker without having to sudo: https://docs.docker.com/engine/install/linux-postinstall/
sudo usermod -aG docker $USER
newgrp docker
If you are interested in non-root Docker, the following might be interesting:
https://podman.io/
https://docs.docker.com/engine/security/rootless/
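For rootless Docker specifically, the setup described in the second link looks roughly like this; treat it as a sketch, since it still assumes your user already has entries in /etc/subuid and /etc/subgid and that the install script and paths match the current docs:
# Install the rootless daemon into ~/bin (no root needed)
curl -fsSL https://get.docker.com/rootless | sh
# Start it and point the client at the per-user socket
systemctl --user start docker
export PATH=$HOME/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info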
You are probably not part of the docker group. You could try the post-installation steps mentioned here.
Create the docker group:
sudo groupadd docker
Add your user to the group:
sudo usermod -aG docker $USER
Reload changes:
newgrp docker
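After that, a quick way to verify the group change took effect (if it still fails, log out and back in so the new group membership is picked up):
docker run --rm hello-world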

Switch to Docker Root User

I am working with Docker, and before I execute any command on the Docker CLI I need to switch to the root user using the command
sudo su - root
Can anyone please tell me why we need to switch to the root user to perform any operation on the Docker Engine?
You don't need to switch to root for docker CLI commands; it is common to add your user to the docker group instead:
sudo groupadd docker
sudo usermod -aG docker $USER
see: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
The reason the Docker daemon runs as root:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
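You can see this on the socket itself; on a typical installation it is owned by root with group docker and mode srw-rw----, which is exactly why adding yourself to that group is enough (ownership and mode may differ per distro):
# Check who may talk to the daemon
ls -l /var/run/docker.sock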
Using docker commands, you can trivially get root-level access to any part of the host filesystem. The very most basic example is
docker run --rm -v /:/host busybox cat /host/etc/shadow
which will get you a file of encrypted passwords that you can crack offline at your leisure; but if I wanted to actually take over the machine I'd just write my own line into /host/etc/passwd and /host/etc/shadow creating an alternate uid-0 user with no password and go to town.
Docker doesn't really have any way to limit what docker commands you can run or what files or volumes you can mount. So if you can run any docker command at all, you have unrestricted root access to the host. Putting it behind sudo is appropriate.
The other important corollary to this is that using the dockerd -H option to make the Docker socket network-accessible is asking for your system to get remotely rooted. Google "Docker cryptojacking" for some more details and prominent real-life examples.
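For illustration only, the kind of daemon invocation that warning is about looks like this; 2375 is just the conventional plain-TCP port, and exposing it without TLS gives anyone who can reach it root on the host:
# DON'T do this on a reachable interface
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375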

Why am I not able to execute minikube like other executables, even when it is installed and ready?

Learning kubernetes, I am trying to spin up a minikube cluster in an alpine container running on docker. Regardless of whether this is possible or not, I don't understand why the kernel is unable to see that "minikube" exists as an executable file in /usr/local/bin. I am able to execute "kubectl", which exists at the same path.
I have already tried to execute "minikube", "./minikube" from root and /usr/local/bin paths. I've also looked up a similar problem, but the solution didn't help.
Below is what I see on my screen. Both "kubectl" and "minikube" appear green in color.
/usr/local/bin # ls -l
total 96540
-rwxr-xr-x 1 root root 42985504 Aug 18 11:31 kubectl
-rwxr-xr-x 1 root root 55869264 Aug 18 11:36 minikube
/usr/local/bin # minikube
/bin/sh: minikube: not found
/usr/local/bin # ./minikube
/bin/sh: ./minikube: not found
/usr/local/bin # minikube --help
/bin/sh: minikube: not found
/usr/local/bin #
I expect "minikube" to execute and print a help or error message. However, what I am seeing is an error saying that no executable with that name can be found.
One of the things that makes the Alpine base image small is that it uses a reduced version of core system libraries that can be incompatible with some binaries, apparently including the minikube binary. Either of these works for me:
# The hard way
/lib/ld-musl-x86_64.so.1 ./minikube-linux-amd64
# The slightly easier way
apk add libc6-compat
./minikube-linux-amd64
This having been said, it still won't work, because Minikube launches a single-node Kubernetes cluster inside a virtual machine, and you can't launch a VM from inside a Docker container. You need to run this command directly on the host.
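If you want to confirm that this is the cause, the "not found" message actually refers to the binary's missing glibc loader rather than to the binary itself. A quick check (the file package is not in the Alpine base image, so installing it is an extra assumption):
apk add file
file ./minikube
# look for "interpreter /lib64/ld-linux-x86-64.so.2" in the output
ls /lib64/ld-linux-x86-64.so.2
# fails on stock Alpine, which is why the shell reports "not found"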

Warning when trying to run tensorflow with Docker on Windows

I cannot start tensorflow with the image downloaded from tensorflow.
I am using docker on Windows 10, and the error output said this:
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
I tried searching for the problem on Google but could not find anything; my experience with docker is almost nonexistent.
This is a warning that you may need sudo to access or change files created in the mounted directory, and that you may not be able to change such files as a non-sudo user, since your docker container created them with root permissions.
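Concretely, the fix the warning suggests is just passing your own UID/GID to docker run. A hedged example with the tensorflow image (assumes a Unix-like shell such as WSL, and the mount path is a placeholder):
# Run the container as your host user so files written to the mounted
# volume are owned by you rather than root
docker run --rm -it -u $(id -u):$(id -g) \
    -v "$PWD":/tmp/work -w /tmp/work \
    tensorflow/tensorflow bash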
A quick search shows that there are many blog references available; check these:
Docker creates files as root in mounted volume
Running a Docker container as a non-root user
Setup Docker for windows using windows subsystem linux
https://jtreminio.com/blog/running-docker-containers-as-current-host-user/
https://medium.com/better-programming/running-a-container-with-a-non-root-user-e35830d1f42a
https://docs.docker.com/install/linux/linux-postinstall/

chown: changing ownership of '/var/lib/mysql/': Operation not permitted

I am trying to deploy a mariadb image on OpenShift Origin. I am using mariadb:10.2.12 in my Dockerfile. It works fine locally, but I get the following error when I try to deploy on OpenShift Origin.
Initializing database
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
Cannot change ownership of the database directories to the 'mysql'
user. Check that you have the necessary permissions and try again.
The chown command comes from the mariadb:10.2.12 Dockerfile.
Initially I had the issue of the root user, which is not allowed on OpenShift Origin, so now I am using
USER mysql
in the Dockerfile. Now I don't get the warning about running as root, but OpenShift Origin still doesn't like the chown. Remember that I am not the admin of the Origin cluster, only a user. My Dockerfile is as follows:
FROM mariadb:10.2.12
ENV MYSQL_DATABASE="db_profile"
COPY ./my.cnf /etc/mysql/my.cnf
COPY ./db_profile.sql /docker-entrypoint-initdb.d/
USER mysql
EXPOSE 3306
and on local I run it as follows:
docker build . -t laeeq/ligandprofiledb:0.0.1
docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=mypassword -d laeeq/ligandprofiledb:0.0.1
Is there a workaround to solve this chown problem?
The MariaDB images on Docker Hub don't follow the good practice of being runnable as a non-root user.
You should instead use the MariaDB images provided by OpenShift. Eg:
centos/mariadb-102-centos7
See:
https://github.com/sclorg/mariadb-container
You should be able to select MariaDB from the service catalog browser in the OpenShift web console, or use the mariadb template from the command line.
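A hedged sketch of the command-line route with that image; the environment variable names follow the sclorg/mariadb-container README, and the values are placeholders:
oc new-app centos/mariadb-102-centos7 \
  --name=ligandprofiledb \
  -e MYSQL_USER=profile_user \
  -e MYSQL_PASSWORD=mypassword \
  -e MYSQL_DATABASE=db_profile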
$ ls -ld /var/lib
drwxr-xr-x 79 root root 4096 Oct 7 20:58 /var/lib
So, to change anything in that directory, including /var/lib/mysql/, you need to be root.
You should change the ownership before the USER mysql line in the Dockerfile, or, if you need to run the container as root, you should define a service account and make it privileged for your deployment. You can follow this: https://github.com/openshift/origin/issues/9131#issuecomment-231952259
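If you do want to keep building on the Docker Hub image, a minimal, untested sketch of that ownership change; the group-0 permissions follow the usual OpenShift guideline for images that run under an arbitrary non-root UID, and the exact directories MariaDB needs may vary:
FROM mariadb:10.2.12
ENV MYSQL_DATABASE="db_profile"
COPY ./my.cnf /etc/mysql/my.cnf
COPY ./db_profile.sql /docker-entrypoint-initdb.d/
# Make the data and socket directories writable by the arbitrary UID
# OpenShift assigns (that UID is always a member of the root group)
RUN chown -R mysql:0 /var/lib/mysql /var/run/mysqld \
 && chmod -R g+rwX /var/lib/mysql /var/run/mysqld
USER mysql
EXPOSE 3306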

Resources