My Dockerfile in the .devcontainer folder at the root of my Python workspace looks like this:
FROM mcr.microsoft.com/vscode/devcontainers/miniconda:0-3
RUN apt-get update
RUN export DEBIAN_FRONTEND=noninteractive
RUN apt-get -y install --no-install-recommends make
RUN apt-get -y install gcc git-lfs python3-dev build-essential
# Copy environment.yml (if found) to a temp location so we update the environment. Also
# copy "noop.txt" so the COPY instruction does not fail if no environment.yml exists.
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then /opt/conda/bin/conda env update --name base --file /tmp/conda-tmp/environment.yml; fi
RUN rm -rf /tmp/conda-tmp
What I am hoping this will achieve is that:
1. conda itself would be updated, and
2. the conda environment in my environment.yml file would be created.
In fact, neither of these happens.
When I let VS Code build the Docker image and reopen my workspace in the resulting container, there is only one conda environment listed (base), and it has not been updated; i.e. I still have to run the commands:
conda update --name base conda --channel anaconda
conda env create --file environment.yml
Why hasn't the conda setup in my Dockerfile persisted into the container? Or is there something wrong with what I am doing in the Dockerfile (there's no error that I can see in the logs)?
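For reference, the Dockerfile equivalent of those two manual commands would, I assume, be something like this (using the environment.yml copied to /tmp/conda-tmp as above; I haven't verified this is the right approach):
RUN /opt/conda/bin/conda update --name base conda --channel anaconda
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then /opt/conda/bin/conda env create --file /tmp/conda-tmp/environment.yml; fi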
Related
I want to use a Docker container with VS Code. I added the configuration files for an Anaconda container and built it. VS Code created a Dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.231.6/containers/python-3-anaconda/.devcontainer/base.Dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/anaconda:0-3
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="none"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# Copy environment.yml (if found) to a temp location so we update the environment. Also
# copy "noop.txt" so the COPY instruction does not fail if no environment.yml exists.
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then umask 0002 && /opt/conda/bin/conda env update -n base -f /tmp/conda-tmp/environment.yml; fi \
&& rm -rf /tmp/conda-tmp
RUN apt-get update
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
But when I add a command such as RUN apt-get, or any other command (for example RUN pip3 install opencv-python), at the end of this Dockerfile, I get an error: "An error occurred setting up the container."
It is a very strange problem. The commands are correct, but I can't change the original Dockerfile; after any change to it I get this error. How can I solve this? I would also appreciate some examples of correct Dockerfiles that install pip3 packages properly.
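For reference, what I tried adding follows the commented section at the bottom of the generated Dockerfile, roughly like this (libgl1 is just an example extra OS package, not something the template requires):
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
    && apt-get -y install --no-install-recommends libgl1
RUN pip3 install opencv-python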
I also want to ask: where does VS Code keep the Docker images it creates?
I want to create a shareable image which can be docker loaded into another docker installation directly from docker build
e.g.
Dockerfile:
FROM my-app/env as builder
COPY /my-app /my-app
RUN /my-app/build.sh
FROM ubuntu:20.04
COPY --from=builder /my-app/build/my-app.exe /bin
RUN DEBIAN_FRONTEND=noninteractive apt-get update \
&& apt-get install -y --no-install-recommends \
***some stuff required at runtime*** \
&& apt-get autoclean \
&& apt-get autoremove \
&& ldconfig
ENTRYPOINT ["my-app.exe"]
CMD ["--help"]
Running something like:
/my-app$ docker build -t my-app/exe .
will produce an image which I can use with
$ docker run my-app/exe
If I want to now share this, I need to do:
$ docker save -o my-app.tar my-app/exe
which creates the archive my-app.tar, which can be shared with another system running the same arch/OS and loaded there with:
/my-othersystem/my-app$ docker load < my-app.tar
And
$ docker run my-app/exe
now works on the other system.
HOWEVER
This requires building the image into the build system's local Docker image store and then saving the image to a file. I don't plan to run the executable on the build system, so I don't want it taking up space there.
You can do:
/my-app$ DOCKER_BUILDKIT=1 docker build --output type=tar,dest=out.tar .
But this exports a filesystem, not an image. I want it exported directly as a Docker image compatible with docker load; is this possible?
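From reading the BuildKit exporter docs, I assume buildx's image exporter can write such a tarball directly, something like the following, but I have not verified it:
docker buildx build --output type=docker,dest=my-app.tar -t my-app/exe .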
I have the following Dockerfile, which installs the "n" Node.js version manager.
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the shell via RUN . /root/.bashrc, I can't seem to call "n" directly, and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the same sequence of commands, I am able to call "n" directly (n 6.9.0) with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV command to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
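Applied to the Dockerfile in the question, a minimal sketch (assuming n-install puts the binary under /root/n/bin, as the question's explicit path suggests):
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
RUN apt-get update && apt-get install -y build-essential curl git
RUN curl -L https://git.io/n-install | bash -s -- -y
# Persist what the .bashrc snippet would have exported, instead of sourcing it in a RUN step
ENV N_PREFIX=/root/n
ENV PATH=/root/n/bin:$PATH
RUN n 6.9.0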
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
ca-certificates curl xz-utils
RUN cd /usr/local \
&& curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
&& tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
&& rm node-v${NODE_VERSION}-linux-x64.tar.xz \
&& for f in node npm npx; do \
ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
done
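To build a different version, override the ARG at build time; for example (the tag name here is arbitrary):
docker build --build-arg NODE_VERSION=6.10.0 -t mynode .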
I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
git \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG password
ARG username
ENV password $password
ENV username $username
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
I use the following commands to build the image from this Dockerfile:
docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .
However, in the build log the username and password that I pass as --build-arg are visible.
Step 8/8 : RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
---> Running in 650d9423b549
Collecting git+http://someuser:somepassword@org.bitbucket.com/scm/do/repo.git
How can I hide them? Or is there a different way of passing the credentials in the Dockerfile?
Update
You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.
Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:
RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
And now you can use that key when installing your Python project:
RUN pip install git+ssh://git@bitbucket.org/you/yourproject.repo
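Depending on the base image you may also need to pre-populate known_hosts so the non-interactive clone doesn't stop at host-key verification; one way to do that (my addition, not required in every setup):
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts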
(Original answer follows)
You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.
Credentials are more generally provided at runtime via one of various mechanisms:
Environment variables: you can place your credentials in a file, e.g.:
USERNAME=myname
PASSWORD=secret
And then include that on the docker run command line:
docker run --env-file myenvfile.env ...
The USERNAME and PASSWORD environment variables will be available to processes in your container.
Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:
docker run -v /path/to/myfile:/path/inside/container ...
This would expose the file as /path/inside/container inside your container.
Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
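A minimal sketch of the swarm-mode variant (the names repo_password and myapp are just examples):
printf '%s' 'somepassword' | docker secret create repo_password -
docker service create --name myapp --secret repo_password myimage:v1
Inside the running container the value is available as the file /run/secrets/repo_password.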
It's worse than that: they're in docker history in perpetuity.
I've done two things here in the past that work:
You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker you can download the package from the private repository, giving the credentials there, and then you can COPY in the resulting .whl file.
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
docker build .
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl
The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like
FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
python2.7
COPY --from=build /usr/lib/python2.7/site-packages/ /usr/lib/python2.7/site-packages/
COPY ...
CMD ["./app.py"]
It's worth double-checking in the second case that nothing has gotten leaked into your final image, because the ARG values are still available to the second stage.
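A quick way to sanity-check that (not exhaustive, but it catches the obvious case):
docker history --no-trunc myimage:v1 | grep -i password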
For me, I created a bash file called set-up-cred.sh.
Inside set-up-cred.sh
echo $CRED > cred.txt;
Then, in Dockerfile,
RUN bash set-up-cred.sh;
...
RUN rm cred.txt;
This avoids echoing the credential values in the build output.
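Presumably $CRED is supplied as a build argument, i.e. an ARG CRED line in the Dockerfile plus something like the following at build time (my assumption; that wiring isn't shown above). Note that build arguments passed this way still show up in docker history, as mentioned in the other answers:
docker build --build-arg CRED="..." .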
If I run my Dockerfile with the following command, the docker container starts running and all is well.
docker run --name test1 -i -t 660c93c32a
However, if I run it without -it, the container does not appear to be running, as docker ps returns nothing:
docker run -d --name test1 660c93c32a
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
All I'm trying to do is run the container and then be able to attach and/or open a shell in the container later.
Not sure if the issue is in my dockerfile or not, so have pasted the dockerfile below.
############################################################
# Dockerfile to build Ubuntu/Ansible/Django
############################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# File Author / Maintainer
MAINTAINER David
# Install Ansible and Related Deps #
RUN apt-get -y update && \
apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
# Update the repository sources list
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME davidswebsite
CMD django-admin startproject $PROJECTNAME
When you run your container, the command after CMD or ENTRYPOINT becomes the PID 1 process of your container; when that process exits, the container stops. A command like django-admin startproject finishes quickly, so the container exits as soon as it is done.
So, check container logs using: docker logs <container id>
and recheck your command in CMD django-admin startproject $PROJECTNAME
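For example, assuming the container is named test1 as in the question, the exited container (and its status) is visible with:
docker ps -a
and its output, if any, with docker logs test1.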