I'm experimenting with Ruby on Rails and Docker, following this tutorial. In the Build the Project section you can see that the Rails scaffold is run with
docker-compose run web rails new . --force --database=postgresql --skip-bundle
And immediately after that:
sudo chown -R $USER:$USER .
This is needed to get access to the generated files, because Docker creates them as root.
How can I avoid changing the permissions every time? Let's say I want to create a simple migration file using:
docker-compose run web rails g migration create_users
It seems impractical to modify the ownership after every simple command like this, but no tutorial or source I've found talks about it.
I've experienced the same issue, and the main problem for me was that Sublime Text couldn't edit the files owned by root. So my solution was to run Sublime Text as root so I could comfortably edit any project files. To do this, execute in your shell:
gksu subl
That solved my problem. Hope it solved yours.
UPDATED
OK, I found a solution that doesn't depend on any Ruby editor or IDE. What you need is to add your current Ubuntu user to the docker group so you can run docker commands as that user, not as root.
First, you may need to add the docker group if it doesn't exist (on current installations it usually does). In your shell, execute:
sudo groupadd docker
Add your current user to the docker group:
sudo gpasswd -a ${USER} docker
Restart the Docker daemon:
sudo service docker restart
or
sudo service docker.io restart
(depends on your system)
Activate the group changes. To do this, either log out and log back in, or run in your shell:
newgrp docker
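At this point you can check that the change took effect and that docker works without sudo (hello-world is just a throwaway test image used as an example):
groups
docker run --rm hello-world
The first command should now list docker among your groups, and the second should run without a permission error.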
Then add to the end of your Dockerfile:
USER <your_user_name>
or
USER <your_user_id>
Now you can execute commands like
docker-compose run web rails g migration SomeMigration
without sudo, as your current user, and the files created by those commands will be owned by your current user, not root.
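For example, if you also want the container's default user to match, the end of the Dockerfile could look something like this (a minimal sketch; the user name is made up, the UID of 1000 is an assumption you would replace with whatever id -u prints on your host, and it assumes a Debian-based image where useradd is available):
# ...gem installation and other setup above runs as root...
ARG UID=1000
RUN useradd -m -u ${UID} railsuser
USER railsuser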
This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests. Basically, just a clean database where it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but fail to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already switched to its own user (UID 1000), which doesn't have permission to start the daemon; only root can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
    && apt-get install -y git mariadb-client mariadb-server wget \
    && apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and also chmod a+x'd, that's not the problem):
#! /bin/sh
service mysql start
exec "$#"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described before: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error is "eaten up" and not shown, but the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon and/or set this up somehow completely differently (external database?), but can't figure it out. I'm practically new to Docker and especially to Jenkins.
My suggestion is:
Run the docker run command without -u (so the container starts as root)
Create a jenkins user inside the container (via the Dockerfile)
At the end of entrypoint.sh, switch to the jenkins user with su - jenkins (see the sketch below)
One disadvantage is that every time you enter the container you will be the root user.
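A minimal sketch of what that could look like (the jenkins user name and the UID of 1000 are assumptions chosen to match the UID Jenkins passes; adjust them to your setup):
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
    && apt-get install -y git mariadb-client mariadb-server wget \
    && apt-get clean
# Create a user matching the UID Jenkins uses on the host (assumed here to be 1000)
RUN useradd -m -u 1000 jenkins
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
And the entrypoint:
#! /bin/sh
# Runs as root because no -u was passed, so starting the daemon works
service mysql start
# Drop privileges for the actual command; "$*" flattens the arguments into
# one string, which is fine for simple commands like `cat`
exec su jenkins -c "$*"
For commands with more complex quoting, a helper such as gosu preserves the argument list more faithfully than su -c "$*".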
I want to run a specific docker-compose file without entering the sudo password, and without adding the user who runs the command to the docker group, for security reasons.
I thought about using the NOPASSWD inside sudoers file and run a bash script called "bash-dockercompose-up.sh" that simply runs docker-compose up -d.
However, it still needs the sudo command before docker-compose up -d to connect to the Docker host.
This is my /etc/sudoers file:
exampleuser ALL=(root) NOPASSWD:/usr/bin/bash-dockercompose-up.sh
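For reference, the wrapper script itself would be trivial, something like this (a sketch; the project path is a placeholder for wherever your docker-compose.yml lives):
#!/bin/bash
cd /path/to/compose/project
docker-compose up -d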
OK, I was able to do it using the official Python SDK library.
https://docs.docker.com/engine/api/sdk/
I created a python script called "service-up.py"
service-up.py
import docker

# Connect to the Docker daemon using the environment's configuration
client = docker.from_env()
# Look up the existing container by id or name and start it
container = client.containers.get('id or name here')
container.start()
Then compile it into a binary so you can set the setuid bit on it, allowing a non-root user to run it:
pyinstaller service-up.py
Move into the dist folder where the file is located and run:
chown root:root service-up
chmod 4755 service-up
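A non-root user should then be able to bring the container up just by invoking the binary, for example:
./dist/service-up
Because of the setuid bit it effectively runs as root, so keep the binary owned by root and not writable by others (as the chown/chmod above already ensure).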
I am trying to understand how to properly add non-root users in Docker and give them sudo privileges. Let's say my current Ubuntu 18.04 system has janedoe as a sudo user. I want to create a Docker image where janedoe is added as a non-root user who can have sudo privileges when needed. Since I'm new to Linux as well as Docker, I would appreciate someone explaining how to do this through an example.
The thing that I understand is that whenever I issue the command "USER janedoe" in the Dockerfile, many commands after that line cannot be executed with janedoe's privileges. I would assume we have to add janedoe to a sudo "group" when building the container, similar to what an admin does when adding a new user to the system.
I have been trying to look for some demo Dockerfile explaining the example but couldn't find it.
Generally you should think of a Docker container as a wrapper around a single process. If you ask this question about other processes, it doesn't really make sense. (How do I add a user to my PostgreSQL server with sudo privileges? How do I add a user to my Web browser?)
In Docker you almost never need sudo, for three reasons: it's trivial to switch users in most contexts; you don't typically get interactive shells in containers (how do I get a directory listing from inside the cron daemon?); and if you can run any docker command at all you can very easily root the whole host. sudo is also hard to script, and it's very hard to usefully maintain a user password in Docker (writing a root-equivalent password in a plain-text file that can be easily retrieved isn't a security best practice).
In the context of your question, if you've already switched to some non-root user, and you need to run some administrative command, use USER to switch back to root.
USER janedoe
...
USER root
RUN apt-get update && apt-get install -y some-package
USER janedoe
Since your containers have some isolation from the host system, you don't generally need containers to have the same user names or user IDs as the host system. The exception is when sharing files with the host using bind mounts, but there it's better to specify this detail when you start the container.
The typical practice I'm used to works like this:
In your Dockerfile, create some non-root user. It can have any name. It does not need a password, login shell, home directory, or any other details. Treating it as a "system" user is fine.
FROM ubuntu:18.04
RUN adduser --system --group --no-create-home appuser
Still in your Dockerfile, do almost everything as root. This includes installing your application.
RUN apt-get update && apt-get install ...
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
When you describe the default way to run the container, only then switch to the non-root user.
EXPOSE 8000
USER appuser
CMD ["./main.py"]
Ideally that's the end of the story: your code is built into your image and it stores all of its data somewhere external like a database, so it doesn't care about the host user space at all (by default there shouldn't be any docker run -v or Docker Compose volumes: options).
If file permissions really matter, you can specify the numeric host user ID to use when you launch the container. The user doesn't specifically need to exist in the container's /etc/passwd file.
docker run \
  --name myapp \
  -d \
  -p 8000:8000 \
  -v $PWD:/data \
  -u $(id -u) \
  myimage
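If you start the container with Compose instead, the equivalent would look roughly like this (a sketch; the service and image names mirror the docker run command above, and the UID is written out because Compose will not expand $(id -u) for you):
services:
  myapp:
    image: myimage
    ports:
      - "8000:8000"
    volumes:
      - .:/data
    user: "1000"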
I think you are looking for the answer in this question:
How to add users to a docker container
RUN useradd -ms /bin/bash janedoe <-- this command adds the user
RUN usermod -aG sudo janedoe <-- this command puts the user janedoe inside the sudo group
Then, if you want to switch to that user for the remainder of the script, use:
USER janedoe <-- all lines after this now use the janedoe user to execute them
WORKDIR /home/janedoe <-- this tells your script from this line on to use paths relative to janedoe's home folder
Since the container itself runs Linux, most (if not all) Linux commands should work inside your container as well. If you have static users (i.e. it's predictable which users you need), you should be able to create them inside the Dockerfile used to create the image. Then every time you run a container from that image you should get the janedoe user in there.
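Putting those pieces together, a Dockerfile fragment might look like this (a sketch; note that many base images do not ship the sudo binary, so it is installed explicitly here, and the passwordless-sudo line is an assumption you may not want in production):
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y sudo
RUN useradd -ms /bin/bash janedoe \
    && usermod -aG sudo janedoe \
    && echo 'janedoe ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/janedoe
USER janedoe
WORKDIR /home/janedoe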
I want to build a production-ready image for clients to use, and I am wondering if there is a way to prevent access to my code within the image.
My current approach is storing my code in /root/ and creating a "customer" user that only has a startup script in their home dir.
My Dockerfile looks like this
FROM node:8.11.3-alpine
# Tools
RUN apk update && apk add alpine-sdk
# Create customer user
RUN adduser -s /bin/ash -D customer
# Add code
COPY ./code /root/code
COPY ./start.sh /home/customer/
# Set execution permissions
RUN chown root:root /home/customer/start.sh
RUN chmod 4755 /home/customer/start.sh
# Allow customer to execute start.sh
RUN echo 'customer ALL=(ALL) NOPASSWD: /home/customer/start.sh' | EDITOR='tee -a' visudo
# Default to use customer
USER customer
ENTRYPOINT ["sudo","/home/customer/start.sh"]
This approach works as expected: if I enter the container I can't see the codebase, but I can still start up services.
The final step in my Dockerfile would be to either, set a password for the root user or remove it entirely.
I am wondering if this is a correct production flow, or am I attempting to use Docker for something it is not meant for?
If this is correct, what other things should I lock down?
any tips appreciated!
Anybody who has your image can always do
docker run -u root imagename sh
Anybody who can run Docker commands at all has root access to their system (or can trivially give it to themselves via docker run -v /etc:/hostetc ...) and so can freely poke around in /var/lib/docker to see what's there. It will have all of the contents of all of the images, albeit scattered across directories in a system-specific way.
If your source code is actually secret, you should make sure you're using a compiled language (C, Go, Java kind of) and that your build process doesn't accidentally leak the source code into the built image, and it will be as secure as anything else where you're distributing binaries to end users. If you're using a scripting language (Python, JavaScript, Ruby) then intrinsically the end user has to have the code to be able to run the program.
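For example, with a compiled language a multi-stage build keeps the source only in the builder stage, so the image you ship contains just the binary (a sketch for a hypothetical Go program; adapt the toolchain to your language):
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final image: only the compiled binary, no source code or shell
FROM gcr.io/distroless/base-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]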
Something else to consider is the use of docker container export. This would allow anyone to export the container's file system, and therefore have access to the code files.
I believe this bypasses removing sh/bash and any user-permission changes, as others have mentioned.
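For example, anyone who can reach the Docker daemon can do something like this (the container name is hypothetical):
docker export -o filesystem.tar customer-app
tar -tf filesystem.tar | grep 'root/code'
The tar file then contains the full filesystem of the container, including anything under /root.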
You can protect your source code, even if it can't have a compiled build stage, by removing bash and sh from your base image.
With this approach you restrict users from getting a shell inside your container or image through commands such as:
docker (exec or run) -it (container id) bash or sh.
To set this up, add the following command at the end of your build stage, after all your other build steps:
RUN rm -rf /bin/bash /bin/sh
You can also read up on Google's distroless images, which follow the same approach.
You can remove the users from the docker group and create sudoers entries for the docker start and docker stop commands.
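One possible shape for those sudoers entries (the container name and docker path are placeholders; keep the commands as specific as possible rather than using wildcards):
exampleuser ALL=(root) NOPASSWD: /usr/bin/docker start mycontainer, /usr/bin/docker stop mycontainer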
I'm deploying a Rails application using Google App Engine, and it takes a lot of time to reinstall libraries like rbenv, Ruby, ...
Is there any way to prevent this? I just want to install new libraries only.
Yeah... we're actively working on making this faster. In the interim, here's how you can make it faster. At the end of the day - all we're really doing with App Engine Flex is creating a Dockerfile for you, and then doing a docker build. With Ruby, we try to play some fancy tricks like letting you tell us what version of rbenv or ruby you want to run. If you're fine hard coding all of that, you can just use our base image.
To do that, first open the terminal and cd into the dir with your code. Then run:
gcloud beta app gen-config --custom
Follow along with the prompts. This is going to create a Dockerfile in your CWD. Go ahead and edit that file, and check out what it's doing. In the simplest form, you can delete most of it and end up with something like this:
FROM gcr.io/google_appengine/ruby
COPY . /app/
RUN bundle install --deployment && rbenv rehash;
ENV RACK_ENV=production \
    RAILS_ENV=production \
    RAILS_SERVE_STATIC_FILES=true
RUN if test -d app/assets -a -f config/application.rb; then \
      bundle exec rake assets:precompile; \
    fi
ENTRYPOINT []
CMD bundle exec rackup -p $PORT
Most of the heavy lifting is already done in gcr.io/google_appengine/ruby, so you can just essentially add your code, perform any gem installs you need, and then set the entrypoint. You could also fork our base docker image and create your own. After you have this file, you should do a build to test it:
docker build -t myapp .
Now go ahead and run it, just to make sure:
docker run -it -p 8080:8080 myapp
Visit http://localhost:8080 to make sure it's all looking good. Now when you run gcloud app deploy the next time, we're going to use this Dockerfile. Should be much, much faster.
Hope this helps!