I am trying to deploy a MariaDB image on OpenShift Origin. I am using mariadb:10.2.12 in my Dockerfile. It works fine locally, but I get the following error when I try to deploy on OpenShift Origin:
Initializing database
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
Cannot change ownership of the database directories to the 'mysql'
user. Check that you have the necessary permissions and try again.
The chown command comes from the mariadb:10.2.12 Dockerfile.
Initially I had the problem that running as root is not allowed on OpenShift Origin, so now I am using
USER mysql
in the Dockerfile. Now I no longer get the warning about running as root, but OpenShift Origin still doesn't like the chown. Note that I am not an admin of the Origin cluster, only a user. My Dockerfile is as follows:
FROM mariadb:10.2.12
ENV MYSQL_DATABASE="db_profile"
COPY ./my.cnf /etc/mysql/my.cnf
COPY ./db_profile.sql /docker-entrypoint-initdb.d/
USER mysql
EXPOSE 3306
and locally I run it as follows:
docker build . -t laeeq/ligandprofiledb:0.0.1
docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=mypassword -d laeeq/ligandprofiledb:0.0.1
Is there a workaround to solve this chown problem?
The MariaDB images on Docker Hub don't follow the good practice of being runnable as a non-root user.
You should instead use the MariaDB images provided by OpenShift, e.g.:
centos/mariadb-102-centos7
See:
https://github.com/sclorg/mariadb-container
You should be able to select MariaDB from the service catalog browser in the OpenShift web console, or use the mariadb template from the command line.
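From the command line, a minimal invocation might look like this (assuming the stock mariadb-persistent template is available in your cluster; the parameter names below match the standard template, but double-check them with oc process --parameters):

oc new-app --template=mariadb-persistent \
  -p MYSQL_DATABASE=db_profile \
  -p MYSQL_USER=dbuser \
  -p MYSQL_PASSWORD=mypassword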
$ ls -ld /var/lib
drwxr-xr-x 79 root root 4096 Oct 7 20:58 /var/lib
So, to change anything in that directory, including /var/lib/mysql/, you need to be root.
You should change the ownership before USER mysql in the Dockerfile, or, if you need to run the container as root, you should define a service account and make it privileged for your deployment. You can follow this: https://github.com/openshift/origin/issues/9131#issuecomment-231952259
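A rough sketch of the first suggestion (untested; note that /var/lib/mysql is declared as a VOLUME in the base image, and OpenShift may still run the container under an arbitrary UID, so this is not guaranteed to be enough on its own):

FROM mariadb:10.2.12
ENV MYSQL_DATABASE="db_profile"
COPY ./my.cnf /etc/mysql/my.cnf
COPY ./db_profile.sql /docker-entrypoint-initdb.d/
# still root at this point, so chown is allowed at build time
RUN chown -R mysql:mysql /etc/mysql /docker-entrypoint-initdb.d /var/lib/mysql
USER mysql
EXPOSE 3306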
Related
I am working on Docker, and before I execute any command on the Docker CLI I need to switch to the root user using the command
sudo su - root
Can anyone please tell me why we need to switch to the root user to perform any operation on the Docker Engine?
You don't need to switch to root for docker CLI commands; it is common to add your user to the docker group instead:
sudo groupadd docker
sudo usermod -aG docker $USER
see: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
The reason why the Docker daemon itself runs as root:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
Using docker commands, you can trivially get root-level access to any part of the host filesystem. The most basic example is
docker run --rm -v /:/host busybox cat /host/etc/shadow
which will get you a file of encrypted passwords that you can crack offline at your leisure. But if I wanted to actually take over the machine, I'd just write my own lines into /host/etc/passwd and /host/etc/shadow, creating an alternate uid-0 user with no password, and go to town.
Docker doesn't really have any way to limit what docker commands you can run or what files or volumes you can mount. So if you can run any docker command at all, you have unrestricted root access to the host. Putting it behind sudo is appropriate.
The other important corollary to this is that using the dockerd -H option to make the Docker socket network-accessible is asking for your system to get remotely rooted. Google "Docker cryptojacking" for some more details and prominent real-life examples.
I'm trying to build a Dockerfile into an image. I'm on a system running openSUSE that is fairly locked down: I have no root access, so I can't install Docker or run a Docker daemon to use the usual docker build method.
I have looked into various ways of doing this, but they all seem to require root access at some point despite claiming to be unprivileged.
img seemed promising, but running the binary results in the error "failed to use newuidmap", and fixing this seems to require modifying a root-owned file.
Buildah also seemed promising, but I run into similar uid issues that require root to fix.
You can build container images using buildah without root access. In order to do that, you need to set up UID mapping, which requires root access once, to edit /etc/subuid:
$ cat /etc/subuid
username:100000:65536
$ ll /etc/subuid
-rw-r--r--. 1 root root 34 Dec 3 14:17 /etc/subuid
Unless your distribution provides /etc/subuid pre-filled, which works with your unprivileged user, there is nothing you can do given the current linux kernel implementation (<4.20).
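If someone with root access can make that one-time change for you, the setup is just appending a subordinate id range for your user (hypothetical username and range shown; the same entry usually goes into /etc/subgid as well):

$ echo "username:100000:65536" | sudo tee -a /etc/subuid
$ echo "username:100000:65536" | sudo tee -a /etc/subgid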
If you are asking how to run Docker without using root or sudo, you can do this.
Type:
$ sudo gpasswd -a $USER docker
Hit Enter, then type:
$ newgrp docker
Now you can run docker without using root.
I am launching a Jenkins docker container for CI work, and the host OS I am using is CoreOS. Inside the Jenkins container I also installed the docker CLI in order to run builds in Docker containers on the host system. To do that, I use the configuration below to mount the Docker socket into the Jenkins container:
volumes:
- /jenkins/data:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock:rw
When I launch the container and run a docker command, I get the error below:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.29/containers/json: dial unix /var/run/docker.sock: connect: permission denied
/var/run is owned by root, but my user is jenkins. How can I solve the permission issue so that the jenkins user can use the docker command through the mounted socket?
I have tried the command below, but the container doesn't allow me to run sudo:
$ sudo usermod -a -G docker jenkins
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified
There's nothing magical about permissions in Docker: they work just like permissions outside of Docker. That is, if you want a user to have access to a file (like /var/run/docker.sock), then either that file needs to be owned by the user, or they need to be a member of the appropriate group, or the permissions on the file need to permit access to anybody.
Exposing /var/run/docker.sock to a non-root user is a little tricky, because typical solutions (just chown/chmod things from inside the container) will potentially break things on your host.
I suspect the best solution may be:
Ensure that /var/run/docker.sock on your host is group-writable (e.g., create a docker group on your host and make sure that users in that group can use Docker).
Pass the numeric group id of your docker group into the container as an environment variable (see below for one way to look it up).
Have an ENTRYPOINT script in your container that runs as root and that (a) creates a group with a matching numeric gid, (b) modifies the jenkins user to be a member of that group, and then (c) execs your Docker CMD as the jenkins user.
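On the host, you can look up that numeric group id with, for example:

stat -c '%g' /var/run/docker.sock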
So, your entrypoint script might look something like this (assuming that you have passed in a value for $DOCKER_GROUP_ID in your docker-compose.yml):
#!/bin/sh
# Create a group whose gid matches the host's docker group, add the
# jenkins user to it, then drop privileges and run the container command.
groupadd -g "$DOCKER_GROUP_ID" docker
usermod -a -G docker jenkins
exec runuser -u jenkins -- "$@"
You would need to copy this into your image and add the appropriate ENTRYPOINT directive to your Dockerfile.
You may not have the runuser command. You can accomplish something similar using sudo or su or other similar commands.
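As a rough sketch of the wiring (file names and the gid value are just examples), the Dockerfile additions might be:

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

and in docker-compose.yml you would pass the gid through, e.g.:

environment:
  - DOCKER_GROUP_ID=999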
I'm a newbie with Docker, and I tried to create a Dockerfile to run a website written in Rails with PostgreSQL on Apache + Passenger.
The Dockerfile builds and runs successfully, but then I hit a permission denied problem. I found that the web folder must belong to the apache user, so I changed the owner of the web source (inside the container) to apache, and it ran OK.
But now, every time I modify a file locally, it asks for a password when I save the file.
And when I checked the permissions of the source locally, the ownership had all changed to a strange user.
How can I solve this problem?
This is my Dockerfile.
And I used these two commands to run it:
docker build -t wics .
docker run -v /home/khanhpn/Project/wics:/home/abc -p 80:80 -it wics /bin/bash
After some time, I found a solution to this problem.
I just added this line to the Dockerfile, and the problem was solved:
RUN usermod -u 1000 apache
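Presumably this works because 1000 is the UID of the local user who owns the mounted source tree, so the apache user inside the container and your user on the host end up with the same numeric UID. You can check your own UID on the host with:

id -u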
I am starting a PostgreSQL image with the following volume:
/Users/me/Desktop/volumes/postgresql/data:/var/lib/postgresql/data
According to the docker docs it should work, since Docker should have access to the /Users directory on macOS. After creating and running the container I can see
that the empty directory /Users/me/Desktop/volumes/postgresql/data is created; however, Postgres does not start and shows these lines in the log:
could not create directory "/var/lib/postgresql/data/global": Permission denied
What am I doing wrong?
Your directory belongs to a different user than the one that executes the process inside the container.
For a start, try changing the permissions on your directory like this:
chmod 777 /Users/me/Desktop/volumes/postgresql/data
If you can start your container with this setting, then the missing permissions are the root cause.
You could then try to start your container with
docker run -u <uid> ...
and specify the user id of your user on macOS.
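A hedged example of that, substituting your own UID via id -u (this assumes the postgres image tolerates being started as that user):

docker run -u $(id -u) -v /Users/me/Desktop/volumes/postgresql/data:/var/lib/postgresql/data -d postgres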
You have to create the user in boot2docker too, i.e.
boot2docker ssh
sudo sh
adduser -u <your uid> <anyusername>
I had the same problem.
What I did as a workaround was to create the folder inside the VirtualBox VM.
Change the postgresql folder in your docker-compose.yml from:
/Users/me/Desktop/volumes/postgresql/data
to
/root/data
Then enter the Docker VM in VirtualBox and create the data folder.
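In docker-compose.yml that would look roughly like this (the paths are the ones from this answer, adapt them to your setup):

volumes:
  - /root/data:/var/lib/postgresql/data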