I have this Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
RUN export EOSIO_LOCATION=~/eosio/eos \
export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install \
mkdir -p $EOSIO_INSTALL_LOCATION
RUN git clone https://github.com/EOSIO/eos.git $EOSIO_LOCATION \
cd $EOSIO_LOCATION && git submodule update --init --recursive
ENTRYPOINT ["/bin/bash"]
The error is: /bin/sh: 1: export: -p: bad variable name
How can I fix it?
You currently don't have any separation between the export and mkdir commands in the RUN statement.
You probably want to chain the commands with &&. This ensures that each command runs only if the command before it succeeded. You may also use ; to separate commands, i.e.
RUN export EOSIO_LOCATION=~/eosio/eos && \
export EOSIO_INSTALL_LOCATION=$EOSIO_LOCATION/../install && \
mkdir -p $EOSIO_INSTALL_LOCATION
NOTE: You probably don't need to export these variables at all and could simply write:
EOSIO_LOCATION=... && EOSIO_INSTALL_LOCATION=... && mkdir ...
There's also the Dockerfile ENV instruction, which may be preferable:
ENV EOSIO_LOCATION=/root/eosio/eos
ENV EOSIO_INSTALL_LOCATION=${EOSIO_LOCATION}/../install
RUN mkdir -p ${EOSIO_INSTALL_LOCATION}
Personal preference is to wrap variables in ${...} as it feels more explicit. Note that ENV values are expanded by Docker, not by a shell, so ~ (and $PWD) won't resolve there; an explicit absolute path such as /root/eosio/eos (root's home in this image) is the most predictable choice.
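Putting it together, a minimal sketch of a corrected Dockerfile (note the git clone RUN in your file needs the same && treatment as the mkdir one):
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
ENV EOSIO_LOCATION=/root/eosio/eos
ENV EOSIO_INSTALL_LOCATION=${EOSIO_LOCATION}/../install
RUN mkdir -p ${EOSIO_INSTALL_LOCATION}
RUN git clone https://github.com/EOSIO/eos.git ${EOSIO_LOCATION} \
 && cd ${EOSIO_LOCATION} \
 && git submodule update --init --recursive
ENTRYPOINT ["/bin/bash"]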
Related
I have a base Docker image:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& cp /root/.bashrc /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
and derive from it another one:
ARG BASE
FROM $BASE
RUN source /opt/conda/bashrc && micromamba activate \
&& micromamba create --file environment.yaml -p /env
While building the second image I get the following error for the RUN step: micromamba: command not found.
If I run the first (base) image manually, I can launch micromamba and it runs correctly.
If I run the temporary image created while building the second image, micromamba is available on the CLI and runs correctly.
If I inherit from debian:buster or alpine, for example, the build works perfectly.
What is the problem with Ubuntu? Why can't it see micromamba while building the second Docker image?
PS: I'm using a scaffold for the build, so it correctly understands where $BASE is and what it is.
The ubuntu:21.04 image comes with a /root/.bashrc file that begins with:
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When the second Dockerfile executes RUN source /opt/conda/bashrc, PS1 is not set and thus the remainder of the bashrc file does not execute. The remainder of the bashrc file is where micromamba initialization occurs, including the setup of the micromamba bash function that is used to activate a micromamba environment.
The debian:buster image has a smaller /root/.bashrc that does not have a line similar to [ -z "$PS1" ] && return and therefore the micromamba function gets loaded.
The alpine image does not come with a /root/.bashrc so it also does not contain the code to exit the file early.
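You can confirm this directly from the base images, e.g.:
docker run --rm ubuntu:21.04 head -n 8 /root/.bashrc   # shows the PS1 guard quoted above
docker run --rm debian:buster cat /root/.bashrc        # much shorter, no early return
docker run --rm alpine ls /root/.bashrc                # No such file or directory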
If you want to use the ubuntu:21.04 image, you could modify your first Dockerfile like this:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& grep -v '[ -z "\$PS1" ] && return' /root/.bashrc > /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
The grep line (which replaces the cp line from the original Dockerfile) strips out the one line that causes the early termination.
Alternatively, you could make use of the existing mambaorg/micromamba Docker image. mambaorg/micromamba:latest is based on debian:slim, but mambaorg/micromamba:jammy will get you an Ubuntu-based image (disclosure: I maintain this image).
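If you go that route, a minimal sketch of the derived image, assuming the install-from-file pattern documented for that image and reusing your environment.yaml:
FROM mambaorg/micromamba:jammy
COPY environment.yaml /tmp/environment.yaml
RUN micromamba install -y -n base -f /tmp/environment.yaml \
    && micromamba clean --all --yes
This installs your dependencies into the image's base environment rather than a separate /env prefix.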
I'm trying to build the following Dockerfile:
FROM ubuntu:focal
RUN ln -snf /usr/share/zoneinfo/Europe/Berlin /etc/localtime && echo Europe/Berlin > /etc/timezone \
&& apt-get update \
&& apt-get install -y git default-jdk-headless ant libcommons-lang3-java libbcprov-java \
&& git clone https://gitlab.com/pdftk-java/pdftk.git \
&& cd pdftk \
&& mkdir lib \
&& ln -st lib /usr/share/java/{commons-lang3,bcprov}.jar \
&& ant jar
CMD ["java", "-jar", "/pdftk/build/jar/pdftk.jar"]
When building the image, it fails upon the ant step with several errors like this:
[javac] symbol: class ASN1Sequence
[javac] location: class PdfPKCS7
[javac] /pdftk/java/pdftk/com/lowagie/text/pdf/PdfPKCS7.java:282: error: cannot find symbol
[javac] BigInteger serialNumber = ((ASN1Integer)issuerAndSerialNumber.getObjectAt(1)).getValue();
However, when starting a container manually (docker run -it --rm ubuntu:focal) and executing exactly the same commands (definitely no typo; I copy/pasted the whole block several times), the build succeeds.
Any idea what might be different between docker build and a manually started container?
Wow, this one is a tricky one. 🔎
When you build the image, the program that executes your instructions is /bin/sh (dash on Ubuntu), whereas when you run docker run -it --rm ubuntu:focal, your commands run in bash (/bin/bash).
Basically, you manually ran all your instructions in bash.
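The instruction that most likely breaks under /bin/sh is the brace expansion in the ln line: {commons-lang3,bcprov}.jar is a bash feature, so dash passes it through literally, ln creates a single dangling symlink, and ant later cannot find the Bouncy Castle classes. You can see the difference like this:
docker run --rm ubuntu:focal sh -c 'echo /usr/share/java/{commons-lang3,bcprov}.jar'
# /usr/share/java/{commons-lang3,bcprov}.jar
docker run --rm ubuntu:focal bash -c 'echo /usr/share/java/{commons-lang3,bcprov}.jar'
# /usr/share/java/commons-lang3.jar /usr/share/java/bcprov.jar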
The easiest solution would be to use bash to run your instruction set because it already works as you tested.
You can simply instruct Docker to run all your instructions in bash by adding this instruction at the top:
SHELL ["/bin/bash", "-c"]
The changed Dockerfile will be:
FROM ubuntu:focal
SHELL ["/bin/bash", "-c"]
RUN ps -p $$
RUN ln -snf /usr/share/zoneinfo/Europe/Berlin /etc/localtime && echo Europe/Berlin > /etc/timezone \
&& apt-get update \
&& apt-get install -y git default-jdk-headless ant libcommons-lang3-java libbcprov-java \
&& git clone https://gitlab.com/pdftk-java/pdftk.git \
&& cd pdftk \
&& mkdir lib \
&& ln -st lib /usr/share/java/{commons-lang3,bcprov}.jar \
&& ant jar
CMD ["java", "-jar", "/pdftk/build/jar/pdftk.jar"]
Hope this helps you. Cheers 🍻 !!!
Premise · What I want to realize
I'm trying to clone a public git repository in a Dockerfile RUN instruction, but it's not going well...
Testing environment
macOS Mojave 10.14.6
Docker 19.03.8
Python 3.6.10
bash 3.2.57
What I did
1. Make a Dockerfile
FROM python:3.6
LABEL maintainer="aaa"
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/local/src/
RUN git clone https://path/to/target_repository.git \
&& chmod -R 755 ./target_repository \
&& cd ./target_repository \
&& pip install -r requirements.txt \
&& mkdir -p ./data/hojin/zip \
&& mv ../13_tokyo_all_20200529.zip ./data/hojin/zip/ \
&& sh scripts/download.sh \
&& pip install IPython seqeval \
&& sh scripts/generate_alias.sh \
&& python tools/dataset_converter.py \
&& python tools/dataset_preprocess.py
EXPOSE 80
CMD ["/sbin/init"]
Problems occurring · Error messages
...
Cloning into 'target-repository'...
chmod: cannot access './target-repository': No such file or directory
...
that's all
I got these errors. What should I do?
Could you lend me a hand?
I changed your Dockerfile a bit to test with my repo, and it works well.
FROM python:3.6
LABEL maintainer="aaa"
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/local/src/
RUN git clone https://path/to/my_repository.git
RUN chmod -R 755 ./my_repository
RUN cd ./my_repository
You can use separate RUN commands like this to make things clearer, and make sure that you type the folder name exactly as git creates it.
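Note also that the clone output in your error ("Cloning into 'target-repository'...") shows a hyphen, while the Dockerfile refers to ./target_repository with an underscore; git names the directory after the repository unless you tell it otherwise. A minimal sketch that pins the directory name explicitly (and uses WORKDIR, since a cd in its own RUN does not persist to later instructions):
FROM python:3.6
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/local/src/
# clone into an explicitly named directory so later steps can't mismatch it
RUN git clone https://path/to/target_repository.git target_repository
RUN chmod -R 755 ./target_repository
WORKDIR /usr/local/src/target_repository
RUN pip install -r requirements.txt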
I'm just testing out Docker so this might be a pretty simple question but I cannot seem to find out why it's not doing what I expect.
I created a pretty simple Dockerfile for testing, just to build a simple image that installs some packages, clones a git repo, and installs its requirements:
FROM ubuntu:18.04
ENV PYTHONEXEC=python3 \
PIPEXEC=pip \
VIRTUALENVEXEC=virtualenv \
GITREPO=https://github.com/test/test.git \
REPODIR=test
RUN apt-get update && apt-get install -y git \
python3 \
python3-dev \
python3-virtualenv \
python-virtualenv \
qt5-default \
libcurl4-openssl-dev \
libxml2 \
libxml2-dev \
libxslt1-dev \
libssl-dev \
virt-viewer
RUN mkdir -p /app
WORKDIR /app
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
CMD ["sleep", "1000000"]
Then I build the image with:
docker build -t gitapp:latest .
This works so far. However, if I specify a -e parameter on the docker container run command, it doesn't seem to be used by the last RUN command.
So if I run docker container run -d -e "REPODIR=blah" gitapp, I expect it to be cloned in /app/blah, but it's still cloned in the /app/test directory.
When I run a docker container exec -it -e "REPODIR=blah" <container-id> env I get:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=2f6ba38341d6
TERM=xterm
REPODIR=blah
PYTHONEXEC=python3
PIPEXEC=pip
VIRTUALENVEXEC=virtualenv
GITREPO=https://github.com/test/test.git
HOME=/root
So it seems that the variable is indeed passed to the container. Then why isn't it used by the last RUN command so that the repo is cloned into the right directory? Am I missing something basic here?
When you execute docker run, you are instructing a container to execute the Dockerfile's CMD or ENTRYPOINT command. The Dockerfile instructions above the entrypoint have already been executed during the build and are not executed again at runtime.
That's exactly why your GitHub repo is cloned into the directory defined in the Dockerfile and not into the one passed to the run command with the -e flag.
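As an aside: if the directory only needs to change at build time (not per container), replacing the REPODIR entry in the ENV block with a build argument would be enough. A rough sketch:
ARG REPODIR=test
# ...the RUN git clone $GITREPO $REPODIR ... chain stays as it is...
# docker build --build-arg REPODIR=blah -t gitapp:latest .
For a value that should take effect when the container is started with -e, the clone has to happen at container start instead.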
A workaround would be to alter your image's entrypoint. You may transfer this part
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
to a bash script (let's call it myscript.sh) that will be executed as the image's entrypoint. Copy this file into a preferred location during the build, make sure it is executable, and edit your Dockerfile's entrypoint accordingly:
CMD ["/path_to_script/myscript.sh" ]
This, however, has the caveat that the script is executed every time the container starts, in contrast with your current setup, which may add startup delay depending on the content of myscript.sh.
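A minimal sketch of such a script, reusing the variable names from the Dockerfile above (the final exec mirrors the original sleep CMD):
#!/bin/sh
set -e
# REPODIR, GITREPO, etc. are read at container start,
# so `docker run -e REPODIR=blah ...` now takes effect
git clone "$GITREPO" "$REPODIR"
$VIRTUALENVEXEC -p "$PYTHONEXEC" venv
. venv/bin/activate
cd "$REPODIR"
$PIPEXEC install -r requirements.txt
exec sleep 1000000
and in the Dockerfile:
COPY myscript.sh /path_to_script/myscript.sh
RUN chmod +x /path_to_script/myscript.sh
CMD ["/path_to_script/myscript.sh"]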
I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above -- I can't get your dockerfile to build an image so I can't confirm it; you can always run find . -name context.pbtxt from root to find it), and exit the container (ctrl-d or exit).
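For example, inside that shell:
find / -name context.pbtxt 2>/dev/null                           # locate the spec file
cp "$(find / -name context.pbtxt 2>/dev/null | head -n 1)" /tmp  # copy it out through the mount
exit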
You now have the file on your host's hard drive ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
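For example, to bind-mount just the edited file over its original location (replace the container path with the one find reported), while running the demo command from the original CMD:
docker run -it --rm \
  -v "$(pwd)/context.pbtxt":/path/reported/by/find/context.pbtxt \
  syntaxnet sh -c "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh"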