Running a Docker container as a specific user

I have created this Dockerfile to run a Python script in a Docker container.
I create a user in it, and I want that user to be the one running the container started from the image.
FROM centos:7
MAINTAINER "Vijendra Kulhade" <xxxxxx@xxxxxx.com>
RUN yum makecache fast
RUN yum -y update
RUN yum -y install gcc
RUN yum -y install zlib-devel
RUN yum -y install openssl-devel
RUN yum -y install python-setuptools python-setuptools-devel
RUN yum -y install libyaml
RUN useradd newuser -d /home/newuser
RUN chown -R newuser:newuser /usr/bin/
RUN chown -R newuser:newuser /usr/lib64/
RUN chown -R newuser:newuser /usr/lib/
ENV https_proxy=http://proxy.xxxx.com:8080
RUN easy_install pip
RUN pip -V
RUN pip install --upgrade pip
RUN pip install --upgrade --force-reinstall setuptools
I use this command to create the image:
docker build -t python-container .
And I am using
docker run --security-opt label=user:newuser -i -t python-container:latest /bin/bash
to run a container from the image. I was expecting this to start the container and log me in as newuser@xxxxxxxx, but that is not happening.
Please let me know how I can achieve that.

There are two ways to run Docker containers as a user other than root.
First possibility: Create user in Dockerfile
In your example Dockerfile, you create user newuser with the useradd command. You can add the instruction
USER newuser
in the Dockerfile. All subsequent commands will be executed as user newuser. This applies to all following RUN instructions as well as to docker run commands against the image.
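A minimal sketch of this pattern (the base image and user name here are placeholders, not from the original post):
FROM ubuntu:16.04
RUN useradd -m -d /home/newuser newuser
USER newuser
# every following RUN, and the container's main process, runs as newuser
RUN whoami    # prints "newuser"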
Second possibility: option --user (overrides any USER instruction in the image)
You can use the docker run option --user. It can be used to specify either a UID without a name:
docker run --user 1000
Or specify UID and GID without a name:
docker run --user 1000:100
or specify a name only without knowing which UID the user will get:
docker run --user newuser
You can combine both ways: create a user in the Dockerfile with an explicitly specified UID and GID and add it to all desired groups, then use a matching docker run --user UID:GID. Your container user will then have all the attributes you gave it in the Dockerfile. A sketch follows below.
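Here is what that could look like; the value 1234 for the UID/GID is an arbitrary example:
# in the Dockerfile: create the user with fixed, known IDs
RUN groupadd -g 1234 newuser && useradd -u 1234 -g 1234 -m newuser
# at run time, pass the matching IDs
docker run --user 1234:1234 python-container:latest id
# -> uid=1234(newuser) gid=1234(newuser) ...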
(I do not understand your approach with --security-opt label=user:newuser; either it is wrong, or it is something I know nothing about.)

Related

Rust compiler in jupyter lab docker instance

I am trying to install the Rust compiler within a Jupyter Docker image. Here is the Dockerfile:
FROM jupyter/scipy-notebook:python-3.10.5 as base
RUN pip install nb_black
USER root
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y build-essential
RUN apt-get install -y curl
USER jovyan
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
RUN pip install maturin
COPY ./docker_helpers /rust_inst
RUN chmod a+x /rust_inst/setup_rust.sh
RUN /rust_inst/setup_rust.sh
FROM base as prod
CMD ["jupyter", "lab", "--ip", "0.0.0.0"]
and the setup_rust.sh contains just an export statement:
#!/bin/bash
export PATH="$HOME/.cargo/bin:$PATH"
I need to use the root user initially because of some permission-denied errors, but after that the jovyan user is able to install everything necessary; at least I do not get any errors from Docker at build time.
Does the Jupyter Docker image structure mask the PATH variable, or make anything outside the jovyan home unavailable?
How can I have the rust compiler available from a terminal within jupyter?
I realised that the home directory is set to /home/jovyan itself, which I had overwritten in docker compose with a volume in order to have dynamic code. Once I moved the volume, I found the rust compiler in scope for the jovyan user.
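Two hedged notes on why this happens: an export inside a RUN step (or a script it calls) only lasts for that one step, since each RUN starts a fresh shell; and a volume mounted over /home/jovyan hides whatever the image put there, including ~/.cargo. To make cargo visible in later layers and in Jupyter terminals, an ENV instruction is the usual alternative to the export script:
ENV PATH="/home/jovyan/.cargo/bin:${PATH}"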

Docker image cannot be built

Hello, I created a base image; however, whenever I run docker build ., I don't see the "Successfully built" message.
My docker file
FROM centos:7
ARG user=john
ARG home=/home/$user
RUN yum update -y
RUN yum install openssh-server -y
RUN yum install openssh-clients -y
RUN useradd -d $home -p "$(openssl passwd $user)" $user
CMD ["hostnamectl"]
I tried running but, I get this
Your CMD invokes hostnamectl, which requires systemd, and systemd will not run under these circumstances: a plain container does not boot an init system.
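Since the image already installs openssh-server, one workable alternative (a sketch, assuming the goal is simply a long-running container, not specifically hostnamectl) is to run sshd in the foreground instead:
RUN ssh-keygen -A                # generate host keys; sshd will not start without them
CMD ["/usr/sbin/sshd", "-D"]     # -D keeps sshd in the foreground so the container stays up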

Dockerfile - Hide --build-args from showing up at build time

I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
git \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG password
ARG username
ENV password $password
ENV username $username
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
I use the following commands to build the image from this Dockerfile:
docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .
However, in the build log the username and password that I pass as --build-arg are visible.
Step 8/8 : RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
---> Running in 650d9423b549
Collecting git+http://someuser:somepassword@org.bitbucket.com/scm/do/repo.git
How to hide them? Or is there a different way of passing the credentials in the Dockerfile?
Update
You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.
Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:
RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
And now you can use that key when installing your Python project:
RUN pip install git+ssh://git@bitbucket.org/you/yourproject.repo
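Depending on the base image, the clone may still stop at host key verification on the first connection; adding the repository host to known_hosts first is a common companion step (a sketch):
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts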
(Original answer follows)
You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.
Credentials are more generally provided at runtime via one of various mechanisms:
Environment variables: you can place your credentials in a file, e.g.:
USERNAME=myname
PASSWORD=secret
And then include that on the docker run command line:
docker run --env-file myenvfile.env ...
The USERNAME and PASSWORD environment variables will be available to processes in your container.
Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:
docker run -v /path/to/myfile:/path/inside/container ...
This would expose the file as /path/inside/container inside your container.
Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
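A minimal sketch of that flow (the secret and service names here are placeholders, not from the question):
echo "somepassword" | docker secret create repo_password -    # create the secret from stdin
docker service create --name myapp --secret repo_password myimage:v1
# inside the container, the value is readable at /run/secrets/repo_password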
It's worse than that: they're in docker history in perpetuity.
I've done two things here in the past that work:
You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker, you download the package from the private repository, supplying the credentials there:
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
Then the Dockerfile only needs to COPY in the resulting .whl files and install them, with no credentials in sight:
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl
and you build with a plain docker build . afterwards.
The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like
FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
python2.7
COPY --from=build /usr/local/lib/python2.7/dist-packages/ /usr/local/lib/python2.7/dist-packages/
COPY ...
CMD ["./app.py"]
It's worth double-checking in the second case that nothing has gotten leaked into your final image, because the ARG values are still available to the second stage.
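One quick way to make that check (a sketch; grep for whatever value you actually passed in):
docker history --no-trunc myimage:v1 | grep -i somepassword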
For me, I created a bash file called set-up-cred.sh.
Inside set-up-cred.sh:
echo $CRED > cred.txt;
Then, in the Dockerfile:
RUN bash set-up-cred.sh;
...
RUN rm cred.txt;
This keeps the credential values from being echoed during the build.
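On Docker versions with BuildKit, build secrets are a different technique that sidesteps the problem entirely: the value never enters a layer or the build log. A hedged sketch, where the gitcreds id and file are placeholders and gitcreds.txt would hold username:password:
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=gitcreds \
    pip install "git+http://$(cat /run/secrets/gitcreds)@org.bitbucket.com/scm/do/repo.git"
# at build time:
DOCKER_BUILDKIT=1 docker build --secret id=gitcreds,src=./gitcreds.txt -t myimage:v1 .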

Build and run container but there is no container

I have a dockerfile look like this
FROM ubuntu
MAINTAINER abc <abc.yur@gmail.com>
RUN apt-get update
RUN apt-get install -y nano
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:longsleep/golang-backports
RUN apt-get update
RUN apt-get -y install golang-go git
RUN mkdir /work
ENV GOPATH=/work
RUN go get github.com/abc/golang
RUN go build github.com/abc/golang
CMD /golang -addr $ADDR -workers $WORKERS
So I want to build and run the container, but after building it (docker build .) I cannot run it. When I run docker ps -a or docker ps, there is no container.
docker build .
This creates an image, not a container. You need to use
docker images
To get the list of images.
docker ps will only show a container once you have started one, using something like below
docker run -d <image>
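A minimal end-to-end sketch; the tag and the ADDR/WORKERS values here are placeholders for whatever your application expects:
docker build -t golang-app .                           # build and tag the image
docker run -d -e ADDR=:8080 -e WORKERS=4 golang-app    # start a detached container from it
docker ps                                              # the running container now shows up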

Why won't my docker container run unless I use -i -t?

If I run my Dockerfile with the following command, the docker container starts running and all is well.
docker run --name test1 -i -t 660c93c32a
However, if I run the command without -i -t, detached with -d instead, the container does not appear to be running, as docker ps returns nothing:
docker run -d --name test1 660c93c32a
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
All I'm trying to do is run the container and then be able to attach and/or open a shell in the container later.
Not sure if the issue is in my Dockerfile or not, so I have pasted the Dockerfile below.
############################################################
# Dockerfile to build Ubuntu/Ansible/Django
############################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# File Author / Maintainer
MAINTAINER David
# Install Ansible and Related Deps #
RUN apt-get -y update && \
apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
# Update the repository sources list
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip -y
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME davidswebsite
CMD django-admin startproject $PROJECTNAME
When you run your container, the command after CMD or ENTRYPOINT becomes the PID 1 process of your container. If that process exits, your container dies with it. Here, django-admin startproject $PROJECTNAME finishes in a moment, so the container stops as soon as it completes, which is why docker ps shows nothing.
So, check the container logs using: docker logs <container id>
and recheck your command in CMD django-admin startproject $PROJECTNAME
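A hedged sketch of a CMD that keeps PID 1 alive after creating the project (the runserver address is an example, not from the question):
CMD django-admin startproject $PROJECTNAME && \
    python $PROJECTNAME/manage.py runserver 0.0.0.0:8000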
