I am trying to install the Rust compiler within a Jupyter Docker image. Here is the Dockerfile:
FROM jupyter/scipy-notebook:python-3.10.5 as base
RUN pip install nb_black
USER root
RUN apt update && apt upgrade -y
RUN apt install build-essential -y
RUN apt install curl -y
USER jovyan
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
RUN pip install maturin
COPY ./docker_helpers /rust_inst
RUN chmod a+x /rust_inst/setup_rust.sh
RUN /rust_inst/setup_rust.sh
FROM base as prod
CMD ["jupyter", "lab", "--ip", "0.0.0.0"]
and the setup_rust.sh contains just an export statement:
#!/bin/bash
export PATH="$HOME/.cargo/bin:$PATH"
I need to use the root user initially because of some permission-denied errors, but after that the jovyan user is able to install everything necessary; at least I do not get any errors from Docker at build time.
Does the jupyter docker structure mask the path variable, or make unavailable anything outside jovyan?
How can I have the rust compiler available from a terminal within jupyter?
I realised that the home directory is set to /home/jovyan, which in docker compose I was overwriting with a volume in order to have live-editable code. Once I moved the volume elsewhere, I found the rust compiler in the scope of the jovyan user.
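For anyone hitting the same thing: the export in setup_rust.sh only affects the shell of that single RUN step, so nothing persists into the image. A minimal sketch of persisting the PATH instead, via ENV in the Dockerfile (the path assumes rustup's default install location for the jovyan user):
# Make cargo visible in all later build steps and at runtime;
# rustup installs to ~/.cargo by default
ENV PATH="/home/jovyan/.cargo/bin:${PATH}"
Keep in mind that a volume mounted over /home/jovyan will still shadow ~/.cargo itself, which is what happened here.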
I have the following Dockerfile, which installs the "n" Node.js version manager.
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the terminal via RUN . /root/.bashrc, I can't seem to call "n" directly and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the same sequence of commands, I am able to call n directly, as in n 6.9.0, with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs in a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV instruction to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
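Applied to the Dockerfile in the question, a minimal sketch (assuming n-install's default prefix of /root/n, which matches the binary path the question already uses):
# Put n's bin directory on the PATH for every subsequent step
ENV PATH=/root/n/bin:$PATH
# Now the bare command resolves
RUN n 6.9.0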
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl
RUN cd /usr/local \
&& curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
&& tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
&& rm node-v${NODE_VERSION}-linux-x64.tar.xz \
&& for f in node npm npx; do \
ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
done
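A quick optional check at the end of that stage (a sketch) fails the build early if the install or symlinks are broken:
# Verify the symlinked binaries resolve and run
RUN node --version && npm --version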
How can I add the tcpdump package in a Dockerfile if the base image is node:10.0.0?
Dockerfile:
FROM node:10.0.0
EXPOSE $SERVICE_PORT
USER node
RUN mkdir -p /home/node/
WORKDIR /home/node/
COPY package.json /home/node/
RUN npm install
COPY . /home/node/
CMD ["npm", "run", "staging"]
I want to trace the traffic in this container.
It is unnecessary to modify your image to access the network of the container. You can run a second container in the same network namespace:
docker run -it --net container:${container_to_debug} nicolaka/netshoot
From there, you can run tcpdump and a variety of other network debugging tools and see the traffic going to your other container. To see all the tools included in netshoot, see the github repo: https://github.com/nicolaka/netshoot
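Once inside the netshoot container you can run tcpdump as usual; for example (the interface name and port are illustrative):
# Capture traffic to/from the debugged container's network namespace
tcpdump -i eth0 -n port 8080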
Your base image is Debian-based, so use apt-get as your package manager. Add the following instructions to your Dockerfile:
USER root
RUN apt-get update -y; exit 0
RUN apt-get install tcpdump -y
Explanation:
USER root - apt-get requires root permissions.
RUN apt-get update -y; exit 0 - the exit 0 tells Docker to keep the build going even if apt-get could not fetch all of its mirror files.
RUN apt-get install tcpdump -y - installs the package.
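Put together with the Dockerfile from the question, the relevant portion might look like this sketch; switching back to USER node afterwards keeps the application steps unprivileged:
FROM node:10.0.0
USER root
# exit 0 keeps the build going even if some mirrors fail
RUN apt-get update -y; exit 0
RUN apt-get install tcpdump -y
# Drop back to the unprivileged user for the rest of the build
USER node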
I am attempting to build a simple app with Jenkins in a docker container. I have the following Dockerfile:
FROM ubuntu:trusty
# Install dependencies for Flask app.
RUN sudo apt-get update
RUN sudo apt-get install -y vim
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y python3-pip
RUN pip3 install flask
# Install dependencies for Jenkins (Java).
# Install Java 1.8.
RUN sudo apt-get install -y python-software-properties debconf-utils
RUN sudo apt-get install -y software-properties-common
RUN sudo add-apt-repository -y ppa:webupd8team/java
RUN sudo apt-get update
RUN echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
RUN sudo apt-get install -y oracle-java8-installer
# Install, start Jenkins.
RUN sudo apt-get install -y wget
RUN wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | apt-key add -
RUN echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list
RUN sudo apt-get update
RUN sudo apt-get install -y jenkins
RUN sudo /etc/init.d/jenkins start
COPY ./app /app
CMD ["python3","/app/main.py"]
I run this container with the following:
docker build -t jenkins_test .
docker run --name jenkins_test_container -tid -p 5000:5000 -p 8080:8080 jenkins_test:latest
I am able to start Flask and install Jenkins; however, when the container is running, Jenkins is not. curl localhost:8080 is not successful.
In the log output, I am able to see:
Correct java version found
* Starting Jenkins Automation Server jenkins [ OK ]
However, it's still not running.
I can open a shell in the container and manually run sudo /etc/init.d/jenkins start to start it, but I want it to start on docker run or docker build.
I have also tried putting sudo /etc/init.d/jenkins start in the CMD portion of the Docker file:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
With this, I am able to curl Flask, but still not Jenkins.
How can I get Jenkins to start automatically?
There are a few points you need to be aware of:
There is no need to use sudo, as the default user is already root.
In order to run multiple services in the same container you need some kind of process manager such as Supervisord; see the sketch after this list. Jenkins is not running because the CMD is the main entry point for your container, so only Flask is started. See the Docker documentation on running multiple services in a container.
RUN is executed only during the build process, unlike CMD, which is executed each time you start a container from that image.
Combine RUN lines wherever possible in order to minimize build layers, which leads to a smaller image.
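A minimal sketch of the Supervisord approach; the program definitions and the jenkins.war path are assumptions and may differ on your install:
; supervisord.conf (sketch)
[supervisord]
nodaemon=true

[program:flask]
command=python3 /app/main.py

[program:jenkins]
; the war location is an assumption; check where your package puts it
command=/usr/bin/java -jar /usr/share/jenkins/jenkins.war
The matching Dockerfile additions:
RUN apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]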
Regarding the usage of this:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
It does not work because python3 /app/main.py does not run as a background process, so sudo /etc/init.d/jenkins start won't run until the previous command exits.
I was only able to get this to work by starting Jenkins in the CMD portion, but I needed to start Jenkins before Flask, since Flask runs continuously and the next command would never execute:
Did not work:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
This did work:
CMD sudo /etc/init.d/jenkins start; python3 /app/main.py
EDIT:
I believe putting it in the RUN portion would not work because the container would build but would not save any running services. I'm not sure whether containers can be saved and loaded with running processes like that, but I might be wrong; I would appreciate clarification if so.
It seems like something that should go in RUN, so if anyone knows why that didn't work, or knows the relevant best practices, I would also appreciate the info.
I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
git \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG password
ARG username
ENV password $password
ENV username $username
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
I use the following commands to build the image from this Dockerfile:
docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .
However, in the build log the username and password that I pass as --build-arg are visible.
Step 8/8 : RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
---> Running in 650d9423b549
Collecting git+http://someuser:somepassword@org.bitbucket.com/scm/do/repo.git
How to hide them? Or is there a different way of passing the credentials in the Dockerfile?
Update
You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.
Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:
RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
And now you can use that key when installing your Python project:
RUN pip install git+ssh://git@bitbucket.org/you/yourproject.git
(Original answer follows)
You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.
Credentials are more generally provided at runtime via one of various mechanisms:
Environment variables: you can place your credentials in a file, e.g.:
USERNAME=myname
PASSWORD=secret
And then include that on the docker run command line:
docker run --env-file myenvfile.env ...
The USERNAME and PASSWORD environment variables will be available to processes in your container.
Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:
docker run -v /path/to/myfile:/path/inside/container ...
This would expose the file as /path/inside/container inside your container.
Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
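For the secrets option, a minimal sketch (swarm mode only; the secret and service names are illustrative):
# Create a secret from stdin, then attach it to a service.
# Inside the container it appears as the file /run/secrets/repo_password.
echo 'somepassword' | docker secret create repo_password -
docker service create --name myapp --secret repo_password myimage:v1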
It's worse than that: they're in docker history in perpetuity.
I've done two things here in the past that work:
You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker you can download the package from the private repository, giving the credentials there, and then you can COPY in the resulting .whl file.
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
docker build .
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl
The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like
FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
python2.7
COPY --from=build /usr/lib/python2.7/site-packages/ /usr/lib/python2.7/site-packages/
COPY ...
CMD ["./app.py"]
It's worth double-checking in the second case that nothing has gotten leaked into your final image, because the ARG values are still available to the second stage.
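One way to spot-check for leaks (a sketch; the grep pattern is illustrative):
# RUN commands and ARG values from the final stage show up in the layer history
docker history --no-trunc myimage:v1 | grep -i password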
For me, I created a bash file called set-up-cred.sh.
Inside set-up-cred.sh
echo $CRED > cred.txt;
Then, in Dockerfile,
RUN bash set-up-cred.sh;
...
RUN rm cred.txt;
This keeps the credential values from being echoed into the build log.
I have created this Dockerfile to run a Python script in a Docker container.
I am creating a user here, and I want this user to run the container from the Docker image.
FROM centos:7
MAINTAINER "Vijendra Kulhade" <xxxxxx@xxxxxx.com>
RUN yum makecache fast
RUN yum -y update
RUN yum -y install gcc
RUN yum -y install zlib-devel
RUN yum -y install openssl-devel
RUN yum -y install python-setuptools python-setuptools-devel
RUN yum -y install libyaml
RUN useradd newuser -d /home/newuser
RUN chown -R newuser.newuser /usr/bin/
RUN chown -R newuser.newuser /usr/lib64/
RUN chown -R newuser.newuser /usr/lib/
ENV https_proxy=http://proxy.xxxx.com:8080
RUN easy_install pip
RUN pip -V
RUN pip install --upgrade pip
RUN pip install --upgrade --force-reinstall setuptools
I use this command to create the image
docker build -t python-container .
And I am using
docker run --security-opt label=user:newuser -i -t python-container:latest /bin/bash to run a container from the image. I was expecting this to start the container and log me in as newuser@xxxxxxxx, but that is not happening.
Please let me know how I can achieve that.
There are two ways to run docker containers as a user other than root.
First possibility: Create user in Dockerfile
In your example Dockerfile, you create the user newuser with the useradd command. You can write the instruction
USER newuser
in the Dockerfile. All following commands will be executed as user newuser. This applies to all subsequent RUN instructions as well as to docker run commands.
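A minimal sketch of that ordering (useradd's -m flag creates the home directory):
RUN useradd -m -d /home/newuser newuser
# Everything below this line, including later RUN steps, the CMD,
# and the shell you get from docker run, executes as newuser
USER newuser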
Second possibility: the docker run option --user (overrides any USER instruction in the image)
You can use the docker run option --user. It can be used to specify either a UID without a name:
docker run --user 1000
Or specify UID and GID without a name:
docker run --user 1000:100
or specify a name only without knowing which UID the user will get:
docker run --user newuser
You can combine both approaches: create a user in the Dockerfile with a specified (!) UID and GID and add it to all desired groups. Then use a matching docker run --user UID:GID, and your container user will have all the attributes you gave it in the Dockerfile.
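A sketch of that combination (the UID/GID values 1000:1000 are arbitrary examples):
# Dockerfile: create the user with fixed, known IDs
RUN groupadd -g 1000 newgroup && useradd -m -u 1000 -g 1000 newuser
Then match them at runtime:
docker run --user 1000:1000 python-container:latest /bin/bash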
(I do not understand your approach with --security-opt label=user:newuser; either it is wrong, or it is something I know nothing about.)