I'm trying to install Miniconda in a Docker image as a first step. Right now this is what I have:
FROM ubuntu:14.04
RUN apt-get update && apt-get install wget
RUN wget *miniconda download URL* && bash file_downloaded.sh
When I try to build the image, it goes well until it starts printing the following message continuously:
>>> Please answer 'yes' or 'no'
At that point I need to stop docker build. How can I fix this? Should I include something in the Dockerfile?
You can't attach an interactive tty during an image build. If the prompt appears during package installation (installing wget, in your case), replace that line with RUN apt-get update -qq && apt-get install -y wget. If it is bash file_downloaded.sh that is prompting, check whether file_downloaded.sh accepts 'yes' or 'no' as a command line argument.
If file_downloaded.sh doesn't have that option, create a container from the ubuntu:14.04 image, install wget, and run your commands manually there. Then you can make an image of the container by committing your changes: docker commit <container_id> <image_name>.
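A minimal sketch of that manual workflow (the container and image names here are just examples):
docker run -it --name miniconda-build ubuntu:14.04 /bin/bash
# inside the container: install wget, download the installer, run it and
# answer its prompts interactively, then exit
docker commit miniconda-build my-miniconda-image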
I believe you can pass the -b flag to the Miniconda shell script to avoid the manual answering:
Installs Miniconda3 4.0.5
-b run install in batch mode (without manual intervention),
it is expected the license terms are agreed upon
-f no error if install prefix already exists
-h print this help message and exit
-p PREFIX install prefix, defaults to $PREFIX
Something like this:
RUN wget http://......-x86_64.sh -O miniconda.sh
RUN chmod +x miniconda.sh \
&& bash ./miniconda.sh -b
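Putting the pieces together, a minimal sketch (the download URL remains a placeholder, and the /opt/miniconda prefix is just an example):
FROM ubuntu:14.04
RUN apt-get update -qq && apt-get install -y wget
RUN wget *miniconda download URL* -O miniconda.sh \
 && bash ./miniconda.sh -b -p /opt/miniconda \
 && rm miniconda.sh
# Put conda on the PATH for later build steps and at runtime
ENV PATH=/opt/miniconda/bin:$PATH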
I have the following Dockerfile. (The "n" package is a Node.js version manager.)
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the terminal via RUN . /root/.bashrc, I can't seem to call "n" directly and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the same sequence of commands by hand, I am able to call "n" directly, like n 6.9.0, with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs in a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV instruction to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
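Applied to the Dockerfile in the question, a minimal sketch (assuming n-install's default location of /root/n/bin, the same path referenced above):
ENV PATH=/root/n/bin:$PATH
RUN n 6.9.0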
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl
RUN cd /usr/local \
&& curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
&& tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
&& rm node-v${NODE_VERSION}-linux-x64.tar.xz \
&& for f in node npm npx; do \
ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
done
I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
git \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG password
ARG username
ENV password $password
ENV username $username
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
I use the following commands to build the image from this Dockerfile:
docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .
However, in the build log the username and password that I pass as --build-arg are visible.
Step 8/8 : RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
---> Running in 650d9423b549
Collecting git+http://someuser:somepassword@org.bitbucket.com/scm/do/repo.git
How can I hide them? Or is there a different way of passing credentials in the Dockerfile?
Update
You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.
Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:
RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
And now you can use that key when installing your Python project:
RUN pip install git+ssh://git@bitbucket.org/you/yourproject.git
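Depending on your base image, you may also need openssh-client installed and Bitbucket's host key trusted, or the clone will fail host verification; a minimal sketch:
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts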
(Original answer follows)
You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.
Credentials are more generally provided at runtime via one of various mechanisms:
Environment variables: you can place your credentials in a file, e.g.:
USERNAME=myname
PASSWORD=secret
And then include that on the docker run command line:
docker run --env-file myenvfile.env ...
The USERNAME and PASSWORD environment variables will be available to processes in your container.
Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:
docker run -v /path/to/myfile:/path/inside/container ...
This would expose the file as /path/inside/container inside your container.
Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
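For example, a minimal swarm-mode sketch (the secret and service names here are made up):
echo "somepassword" | docker secret create repo_password -
docker service create --name myapp --secret repo_password myimage:v1
The secret is then readable inside the container at /run/secrets/repo_password.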
It's worse than that: they're in docker history in perpetuity.
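You can see this for yourself with docker history, which records the command (including the expanded build args) that created each layer:
docker history --no-trunc myimage:v1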
I've done two things here in the past that work:
You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker you can download the package from the private repository, giving the credentials there, and then you can COPY in the resulting .whl file.
On the host, before building:
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
Then, in the Dockerfile:
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl
and build as usual with docker build .
The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like
FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
python2.7
COPY --from=build /usr/lib/python2.7/site-packages/ /usr/lib/python2.7/site-packages/
COPY ...
CMD ["./app.py"]
It's worth double-checking in the second case that nothing has leaked into your final image, since the ARG values were still available to the build stage.
For me, I created a bash file called set-up-cred.sh.
Inside set-up-cred.sh
echo $CRED > cred.txt;
Then, in Dockerfile,
RUN bash set-up-cred.sh;
...
RUN rm cred.txt;
This keeps the credential variables from being echoed into the build log.
I am trying to build a docker image with a Perl installation.
Dockerfile:
FROM amazonlinux
WORKDIR /shared
RUN yum -y install gcc
ADD http://www.cpan.org/src/5.0/perl-5.22.1.tar.gz /shared
RUN tar -xzf perl-5.22.1.tar.gz
WORKDIR /shared/perl-5.22.1
RUN ./Configure -des -Dprefix=/opt/perl-5.22.1/localperl
RUN make
RUN make test
RUN make install
All these steps are executed; I can see it running the make, make test and make install commands. But when I do:
docker run -it testsh /bin/bash
Error:
when I check perl -v it says command not found,
and I need to go to the perl directory
(cd perl-5.22.1) and run make install again; then perl -v works.
But I want the Perl installation to work when I build the docker image. Can anyone tell me what is going wrong here?
Perl was indeed installed; it just wasn't added to the PATH.
export PATH=$PATH:/shared/perl-5.22.1 should do it -- but of course, you'd want to add a PATH update in the Dockerfile.
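In Dockerfile form, that update would look something like this (mirroring the export above):
ENV PATH="$PATH:/shared/perl-5.22.1"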
At first glance I thought that running make install a second time added Perl's bin directory to the PATH environment variable, but when I compared the output of env before and after make install, the PATH content was identical.
The reason perl -v works after you run make install in the running container is that make install puts the perl binary at /usr/bin/perl. I don't know why it works that way, but it does. Also, it's almost useless to store the sources inside your image.
Anyway, I agree with @belwood's suggestion about adding your Perl's bin directory to the PATH environment variable. I just want to correct the path: /opt/perl-5.22.1/localperl/bin
You need to add it in your Dockerfile. For example (I've basically rewritten your file to make it produce a more efficient image):
FROM amazonlinux
RUN mkdir -p /shared/perl-5.22.1
WORKDIR /shared/perl-5.22.1
RUN yum -y install gcc \
&& curl -SL http://www.cpan.org/src/5.0/perl-5.22.1.tar.gz -o perl-5.22.1.tar.gz \
&& tar --strip-components=1 -xzf perl-5.22.1.tar.gz \
&& rm perl-5.22.1.tar.gz \
&& ./Configure -des -Dprefix=/opt/perl-5.22.1/localperl \
&& make -j $(nproc) \
&& make -j $(nproc) test \
&& make install \
&& rm -fr /shared/perl-5.22.1 /tmp/*
ENV PATH="/opt/perl-5.22.1/localperl/bin:$PATH"
WORKDIR /root
CMD ["perl","-de0"]
When you simply run a container from this image, you'll immediately get an interactive Perl shell. If you need bash instead, use docker run -it --rm amazon-perl /bin/bash
It would also be good to look at the Environment replacement section in the Dockerfile reference documentation, just to figure out how things work. For example, it isn't best practice to have that many RUN lines in your Dockerfile, because each RUN instruction executes its commands in a new layer on top of the current image and commits the result, so you end up with many unnecessary layers.
I am new to Docker and I am trying to build a docker image with a Perl installation, but I'm not sure how to fix this error.
Dockerfile:
FROM amazonlinux
RUN mkdir /shared
RUN cd /shared
RUN yum -y install sudo
RUN cd /shared
RUN echo "Installing Perl."
RUN sudo yum -y update; yum -y install gcc
RUN yum -y install wget
RUN wget http://www.cpan.org/src/5.0/perl-5.22.1.tar.gz
RUN tar -xzf perl-5.22.1.tar.gz
RUN cd perl-5.22.1
RUN /shared/perl-5.22.1/Configure -des -Dprefix=/opt/perl-5.22.1/localperl
RUN make
RUN make test
RUN make install
RUN echo "Perl installation complete."
Instead of /shared/perl-5.22.1/Configure I tried ./configure as well, but I get the same error: No such file or directory
Error:
/bin/sh: /shared/perl-5.22.1/Configure: No such file or directory
The command '/bin/sh -c /shared/perl-5.22.1/Configure -des -Dprefix=/opt/perl-5.22.1/localperl' returned a non-zero code: 127
Can anyone tell me how to fix this issue?
Each Dockerfile RUN command runs in its own shell. So, when you do something like RUN cd /shared, the subsequent RUN commands will not be run inside that working directory.
What you want to use in this case is the WORKDIR instruction (https://docs.docker.com/engine/reference/builder/#workdir). You can also combine and shorten some things by taking advantage of the ADD instruction. A more concise Dockerfile to do what you are after might be:
FROM amazonlinux
WORKDIR /shared
RUN yum -y install gcc
ADD http://www.cpan.org/src/5.0/perl-5.22.1.tar.gz /shared
RUN tar -xzf perl-5.22.1.tar.gz
RUN /shared/perl-5.22.1/Configure -des -Dprefix=/opt/perl-5.22.1/localperl
RUN make -C /shared/perl-5.22.1
RUN make -C /shared/perl-5.22.1 test
RUN make -C /shared/perl-5.22.1 install
For example, the build is already running as root, so there is no need for sudo. With ADD we can add files directly from URLs, so no wget is required. And the make utility has a -C option to specify the working directory for make.
This should get you closer to what you are after. But the build still fails for other reasons (which you should probably open another question for if you are stuck).
So I am trying to make a basic Dockerfile, but when I run this it says
The command /bin/sh -c sudo apt-get install git python-yaml python-jinja2 returned a non-zero code: 1
My question is: what am I doing wrong here, and is it even possible to run commands like 'cd' and 'source' from a Dockerfile?
FROM Ubuntu
MAINTAINER example
#install and source ansible
RUN sudo apt-get update
RUN sudo apt-get install git python-yaml python-jinja2 python-pycurl
RUN sudo git clone https://github.com/ansible/ansible.git
RUN sudo cd ansible
RUN sudo source ./hacking/env-setup
Couple of pointers / comments here:
It's ubuntu, not Ubuntu (image names are lowercase)
From the base ubuntu image (and, unfortunately, a lot of other images) you don't need to use sudo; the default user is root (and in ubuntu, sudo is not included anyway)
You want your apt-get update and apt-get install to be RUN as part of the same command, to prevent issues with Docker's layer cache
You need to use the -y flag to apt-get install as the Docker build process runs non-interactively
A few other points could be made about cleaning up your apt cache and other unneeded artifacts after running the commands, but this should be enough to get going on
New Dockerfile (taking into account the above) would look something like:
FROM ubuntu
MAINTAINER example
#install and source ansible
RUN apt-get update && apt-get install -y \
git \
python-yaml \
python-jinja2 \
python-pycurl
RUN git clone https://github.com/ansible/ansible.git
WORKDIR ansible/hacking
RUN chmod +x env-setup; sync \
&& ./env-setup
You might also find it useful to read the Dockerfile best practices.
Edit: Larsks' answer also makes some useful points about the state of the container not persisting between layers, so you should go upvote him too!
When building an image you're already running as root. You don't need sudo and there's a good chance it's not installed.
Along similar lines, this will never work:
RUN sudo cd ansible
The cd command only affects the current process; this would run cd and then exit, leaving you in the same directory you started in. The Docker WORKDIR directive can be used to persistently change the working directory:
WORKDIR ansible
You can also pass a series of shell commands to the RUN directive, like this:
RUN cd ansible; source ./hacking/env-setup
But even that probably won't do what you want, because like the sudo cd ... command earlier, that would modify your environment...and then exit, leaving the current environment unchanged in any subsequent commands.
If you want to run Ansible in a container, you should probably either install it, or plan to run the env-setup script manually after starting a container from the image.
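A minimal sketch of the "install it" option, assuming the Ansible package in your distribution's repositories is recent enough for your needs:
RUN apt-get update && apt-get install -y ansible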