I have the following code
RUN apt-get update
RUN apt-get install -y wget #install wget lib
RUN mkdir -p example && cd example #create folder and cd to folder
RUN WGET -r https://host/file.tar && tar -xvf *.tar # download tar file to example folder and untar it in same folder
RUN rm -r example/*.tar # remove the tar file
RUN MV example/foo example/bar # rename untar directory from foo to bar
But I get the following errors:
/bin/sh: 1: WGET: not found
tar: example/*.tar: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
I am a newbie in Docker.
First, shell commands are case-sensitive, so WGET and MV must be written as wget and mv; that is what causes the WGET: not found error. Second, each RUN command in your Dockerfile starts a fresh shell in the context of the / directory (or whatever WORKDIR is set), so your 'cd to the folder' means nothing for subsequent RUN commands: your .tar file is not in the example/ directory, it actually ends up in /. Instead of doing cd example, rather do WORKDIR example before running wget, e.g.:
RUN apt-get update
RUN apt-get install -y wget # install wget
RUN mkdir -p example # create the folder
WORKDIR example/ # change the working directory for subsequent commands
RUN wget -r https://host/file.tar && tar -xvf *.tar # download tar file to example folder and untar it in same folder
RUN rm -r *.tar # remove the tar file (we are already inside example/)
RUN mv foo bar # rename the untarred directory from foo to bar
Or alternatively, add cd example && ... before any command you'd like to execute within the example directory.
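For example (a minimal sketch of the download step without WORKDIR):
RUN cd example && wget -r https://host/file.tar && tar -xvf *.tar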
As Ntokozo stated, each RUN command is a separate "session" in the build process. For that reason, Docker best practice is to pack as many commands as possible into a single RUN, which gives a smaller overall image and fewer layers. So the whole thing could be written like so:
RUN apt-get update && \
apt-get install -y wget && \
mkdir -p example && \
cd example/ && \
wget -r https://host/file.tar && \
tar -xvf *.tar && \
rm -r *.tar && \
mv foo bar
I'm trying to deploy Atlantis on a Cloud Run Gen2 service with a GCS bucket mounted to it via gcsfuse.
Most seems to work fine, the atlantis server starts and can handle requests properly. Files are also written to the GCS bucket through gcsfuse.
But, when Atlantis tries to clone a git repository (as part of the atlantis plan command) it returns the following error:
running git clone --branch f/gcsfuse-cloudrun --depth=1 --single-branch https://xxxxxxxx:<redacted>@github.com/xxxxxxxx/xxxxxxxx.git /app/atlantis/repos/xxxxxxxx/xxxxxxxx/29/default: Cloning into '/app/atlantis/repos/xxxxxxxx/xxxxxxxx/29/default'...
error: chmod on /app/atlantis/repos/xxxxxxxx/xxxxxxxx/29/default/.git/config.lock failed: Operation not permitted
fatal: could not set 'core.filemode' to 'false'
: exit status 128
I believe that I'm very close but I'm not too knowledgeable on Linux file system permissions.
My Dockerfile is as following:
FROM ghcr.io/runatlantis/atlantis:v0.21.1-pre.20221213-debian
USER root
# Install Python
ENV PYTHONUNBUFFERED=1
RUN apt-get update -y
RUN apt-get install -y python3 python3-pip
# Install system dependencies
RUN set -e; \
apt-get update -y && apt-get install -y \
tini \
lsb-release; \
gcsFuseRepo=gcsfuse-`lsb_release -c -s`; \
echo "deb http://packages.cloud.google.com/apt $gcsFuseRepo main" | \
tee /etc/apt/sources.list.d/gcsfuse.list; \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
apt-key add -; \
apt-get update; \
apt-get install -y gcsfuse \
&& apt-get clean
# Set fallback mount directory
ENV MNT_DIR /app/atlantis
# Create mount directory for service
RUN mkdir -p ${MNT_DIR}
RUN chown -R atlantis /app/atlantis/
RUN chmod -R 777 /app/atlantis/
WORKDIR $MNT_DIR
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY gcsfuse_run.sh ./
# Make the script an executable
RUN chmod +x /app/gcsfuse_run.sh
ENTRYPOINT ["/app/gcsfuse_run.sh"]
The entrypoint script ^, is as following:
#!/usr/bin/env bash
set -eo pipefail
echo "Mounting GCS Fuse to $MNT_DIR"
gcsfuse -o allow_other -file-mode=777 -dir-mode=777 --implicit-dirs --debug_gcs --debug_fuse $BUCKET $MNT_DIR
echo "Mounting completed."
# This is an Atlantis-provided script that comes from the base image
/usr/local/bin/docker-entrypoint.sh server
Help is highly appreciated!
We simulated the exact steps but didn't face the issue.
We also found the same type of issue reported in several places, and for those the solutions below worked:
Run the server with sudo permission.
Restart the system.
git config --global --replace-all core.fileMode false
The chmod operation is not supported by gcsfuse. As such, the suggestion by @tulsi-shah (git config --global --replace-all core.fileMode false) provides a workaround.
https://github.com/googlecloudplatform/gcsfuse/blob/master/docs/semantics.md#inodes
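A minimal sketch of where that workaround could live, assuming you set it in gcsfuse_run.sh before starting the server (if Atlantis runs as a different user than the entrypoint, set it for that user instead):

#!/usr/bin/env bash
set -eo pipefail

echo "Mounting GCS Fuse to $MNT_DIR"
gcsfuse -o allow_other -file-mode=777 -dir-mode=777 --implicit-dirs --debug_gcs --debug_fuse $BUCKET $MNT_DIR
echo "Mounting completed."

# gcsfuse does not support chmod, so tell git to stop calling it
git config --global --replace-all core.fileMode false

/usr/local/bin/docker-entrypoint.sh server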
I have a base Docker image:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& cp /root/.bashrc /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
and derive from it another one:
ARG BASE
FROM $BASE
RUN source /opt/conda/bashrc && micromamba activate \
&& micromamba create --file environment.yaml -p /env
While building the second image I get the following error for the RUN section: micromamba: command not found.
If I run the first (base) image manually, I can launch micromamba and it runs correctly.
If I run the temporary image created while building the second image, micromamba is available via the CLI and runs correctly.
If I inherit from debian:buster or alpine, for example, it builds perfectly.
What is the problem with Ubuntu? Why can't it see micromamba while building the second Docker image?
PS: I am using Skaffold for the build, so it correctly resolves where $BASE comes from and what it is.
The ubuntu:21.04 image comes with a /root/.bashrc file that begins with:
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When the second Dockerfile executes RUN source /opt/conda/bashrc, PS1 is not set and thus the remainder of the bashrc file does not execute. The remainder of the bashrc file is where micromamba initialization occurs, including the setup of the micromamba bash function that is used to activate a micromamba environment.
The debian:buster image has a smaller /root/.bashrc that does not have a line similar to [ -z "$PS1" ] && return and therefore the micromamba function gets loaded.
The alpine image does not come with a /root/.bashrc so it also does not contain the code to exit the file early.
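You can see the difference yourself by inspecting each base image (a quick check, assuming a local Docker daemon):

docker run --rm ubuntu:21.04 head -n 6 /root/.bashrc   # shows the [ -z "$PS1" ] && return guard
docker run --rm debian:buster cat /root/.bashrc        # no early-return guard
docker run --rm alpine ls /root/.bashrc                # fails: the file does not exist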
If you want to use the ubuntu:21.04 image, you could modify your first Dockerfile like this, replacing the cp /root/.bashrc line with a grep that filters out the early-return guard:
FROM ubuntu:21.04
WORKDIR /app
RUN apt-get update && apt-get install -y wget bzip2 \
&& wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba \
&& touch /root/.bashrc \
&& ./bin/micromamba shell init -s bash -p /opt/conda \
&& grep -v '\[ -z "\$PS1" \] && return' /root/.bashrc > /opt/conda/bashrc \
&& apt-get clean autoremove --yes \
&& rm -rf /var/lib/{apt,dpkg,cache,log}
SHELL ["bash", "-l" ,"-c"]
This will strip out the one line that causes the early termination.
Alternatively, you could make use of the existing mambaorg/micromamba Docker image. mambaorg/micromamba:latest is based on debian:slim, but mambaorg/micromamba:jammy will get you an Ubuntu-based image (disclosure: I maintain this image).
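With that image, the derived Dockerfile could shrink to something like this (a sketch, assuming environment.yaml is in the build context and the build user can write to /env; micromamba is already on PATH there):

FROM mambaorg/micromamba:jammy
COPY environment.yaml /tmp/environment.yaml
RUN micromamba create --yes --file /tmp/environment.yaml -p /env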
I'm writing a Dockerfile in order to create an image for a web server (a Shiny server, more precisely). It works well, but it depends on a huge database folder (db/) that is not distributed with the package, so I want to do all this preprocessing while creating the image, by running the corresponding script from the Dockerfile.
I expected this to be simple, but I'm struggling to figure out where my files are located within the image.
This repo has the following structure:
Dockerfile
preprocessing_files
configuration_files
app/
application_files
db/
processed_files
So that app/db/ does not exist, but is created and filled with files when preprocessing_files are run.
The Dockerfile is the following:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxml2-dev \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
RUN R -e "install.packages(c('shiny', 'flexdashboard','rmarkdown','tidyverse','plotly','DT','drc','gridExtra','fitdistrplus'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
COPY /app/db /srv/shiny-server/app/
# Make the ShinyApp available at port 80
EXPOSE 80
CMD ["/usr/bin/shiny-server"]
The above file works well if preprocessing_files are run in advance, so that app/application_files can successfully read app/db/processed_files. How could this script be run from the Dockerfile? To me the intuitive solution would be simply to write:
RUN bash -c "preprocessing.sh"
Before the COPY instructions, but then preprocessing_files are not found. If the above instruction is placed after COPY, together with WORKDIR app/, the same error happens. I cannot understand why.
You cannot execute code on the host machine from a Dockerfile; the RUN command executes inside the container being built. You can either:
Copy preprocessing_files into the Docker image and run preprocessing.sh inside the container during the build (this increases the image size; see the sketch after this list), or
Create a makefile/build.sh script which launches preprocessing.sh on the host before executing docker build.
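A minimal sketch of the first option, assuming the script is named preprocessing.sh, sits at the root of the build context, and populates app/db/; these lines would go before the COPY /app instruction:

COPY preprocessing.sh /preprocessing/
COPY configuration_files/ /preprocessing/configuration_files/
WORKDIR /preprocessing
RUN bash preprocessing.sh   # assumed to create and fill app/db/ inside the image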
I'm trying to learn SyntaxNet. I have it running through Docker, but I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says:
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v $(pwd):/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is, given the info you've provided above; I can't get your Dockerfile to build an image, so I can't confirm it. You can always run find . -name context.pbtxt from the root directory to find it), then exit the container (Ctrl-D or exit).
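Inside that session, the copy would look something like this (using the guessed path):

cp /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt /tmp/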
You now have the file on your host's hard drive, ready to edit, but you really want it back in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
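For example, to mount your edited file directly over the guessed path (a sketch; adjust the container path to wherever find located the file):

docker run -it --rm \
  -v $(pwd)/context.pbtxt:/opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt \
  syntaxnet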
I am trying to mount the current working directory onto a Docker container, but it isn't working. Here is my Dockerfile:
FROM ubuntu:14.04.3
MAINTAINER Upendra Devisetty
RUN apt-get update && apt-get install -y g++ \
make \
git \
zlib1g-dev \
python \
wget \
curl \
python-matplotlib
ENV BINPATH /usr/bin
ENV HISAT2GIT https://upendra_35@bitbucket.org/upendra_35/evolinc.git
RUN git clone "$HISAT2GIT"
RUN chmod +x evolinc/evolinc-part-I.sh && cp evolinc/evolinc-part-I.sh $BINPATH
RUN wget -O- http://cole-trapnell-lab.github.io/cufflinks/assets/downloads/cufflinks-2.2.1.Linux_x86_64.tar.gz | tar xzvf -
RUN wget -O- https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz | tar xzvf -
RUN wget -O- http://seq.cs.iastate.edu/CAP3/cap3.linux.x86_64.tar | tar vfx -
RUN curl ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ncbi-blast-2.2.31+-x64-linux.tar.gz > ncbi-blast-2.2.31+-x64-linux.tar.gz
RUN tar xvf ncbi-blast-2.2.31+-x64-linux.tar.gz
RUN wget -O- http://ftp.mirrorservice.org/sites/download.sourceforge.net/pub/sourceforge/q/qu/quast/quast-3.0.tar.gz | tar zxvf -
RUN curl -L http://cpanmin.us | perl - App::cpanminus
RUN cpanm URI/Escape.pm
ENV PATH /CAP3/:$PATH
ENV PATH /ncbi-blast-2.2.31+/bin/:$PATH
ENV PATH /quast-3.0/:$PATH
ENV PATH /cufflinks-2.2.1.Linux_x86_64/:$PATH
ENV PATH /TransDecoder-2.0.1/:$PATH
ENTRYPOINT ["/usr/bin/evolinc-part-I.sh"]
CMD ["-h"]
When I run the following to mount the current working directory, to make sure everything is working OK, what I see is that all those dependencies are getting installed in the current working directory:
docker run --rm -v $(pwd):/working-dir -w /working-dir ubuntu/evolinc:2.0 -c cuffcompare_out_annot_no_annot.combined.gtf -g Brassica_rapa_v1.2_genome.fa -r Brassica_rapa_v1.2_cds.fa -b TE_RNA_transcripts.fa
I thought they would only be installed in the container, and only the output would be generated in the current working directory. Sorry, I am very new to Docker and I need some help with this.
Mounting a volume in Docker (-v) allows a container to share directories/volumes with the host, so when you change the mounted volume you are in fact changing the host directory. If you want to copy files into the image, rather than point at them on the host, you need to build your own image and use the COPY or ADD instructions.
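For example, the difference looks like this (a sketch with a hypothetical image name and file names):

# Bind mount: whatever the container writes to /working-dir appears in $(pwd) on the host
docker run --rm -v $(pwd):/working-dir -w /working-dir my-image

# COPY in the Dockerfile: the files are baked into the image at build time instead
COPY input_files/ /data/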