Copy folder from Dockerfile to host [duplicate]

This question already has answers here:
Docker: Copying files from Docker container to host
(27 answers)
Closed 1 year ago.
I have a Dockerfile:
FROM ubuntu:20.04
################################
### INSTALL Ubuntu build tools and prerequisites
################################
# Install build base
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    subversion \
    sharutils \
    vim \
    asciidoc \
    binutils \
    bison \
    flex \
    texinfo \
    gawk \
    help2man \
    intltool \
    libelf-dev \
    zlib1g-dev \
    libncurses5-dev \
    ncurses-term \
    libssl-dev \
    python2.7-dev \
    unzip \
    wget \
    rsync \
    gettext \
    xsltproc && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
ARG FORCE_UNSAFE_CONFIGURE=1
RUN git clone https://git.openwrt.org/openwrt/openwrt.git
WORKDIR /openwrt
RUN ./scripts/feeds update -a && ./scripts/feeds install -a
COPY .config /openwrt/.config
# create the files/etc/uci-defaults tree under /openwrt in one step
RUN mkdir -p files/etc/uci-defaults
COPY xx_custom /openwrt/files/etc/uci-defaults/xx_custom
WORKDIR /openwrt
RUN make -j 4
RUN ls /openwrt/bin/targets/ramips/mt76x8
WORKDIR /root
CMD ["bash"]
I want to copy all the files inside the folder mt76x8 to the host. I want to do that inside the Dockerfile, so that when I run the build the generated files end up on my host.
How can I do that?

You can use a volume mount to access the Docker-generated artifacts on the host machine.
You can also run docker cp to copy the files to the host machine.
If you don't want to use a docker command, as mentioned, the only option is to use a volume.
You can also use docker create once the image is built: it creates the writable container layer without starting the container, so you can copy the data out of it.
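A minimal sketch of the docker create route (the image tag openwrt-build is an assumption; use whatever tag you gave docker build -t):
# create a stopped container from the built image; nothing is executed
docker create --name extract openwrt-build
# copy the build artifacts out of the container onto the host
docker cp extract:/openwrt/bin/targets/ramips/mt76x8 ./mt76x8
# remove the temporary container
docker rm extract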

You have two choices.
Use Docker volumes to map the /openwrt/bin/targets/ramips/mt76x8 folder when you are running the container, i.e. docker run -v {VolumeName}:/openwrt/bin/targets/ramips/mt76x8. All of the files in the mt76x8 folder will then be available in the volume. If you are using Linux, you will find the Docker volumes under /var/lib/docker/volumes/.
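For instance, with a named volume (the volume name openwrt-bin and image tag openwrt-build are assumptions for illustration; an empty named volume is populated from the image's content at the mount path on first use):
docker volume create openwrt-bin
docker run --rm -v openwrt-bin:/openwrt/bin/targets/ramips/mt76x8 openwrt-build true
# on Linux, the files are now visible under the volume's data directory
sudo ls /var/lib/docker/volumes/openwrt-bin/_data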
You can use the docker cp command to copy data from the container to the host machine.
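For example (the container name openwrt-container is an assumed placeholder; substitute the name or ID shown by docker ps):
docker cp openwrt-container:/openwrt/bin/targets/ramips/mt76x8 ./mt76x8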

Related

Why Container does not run when I create a VOLUME?

I'm new to Docker and I'm trying to create a volume between a Windows directory and a Debian container.
Although the Dockerfile does not have a WORKDIR, I decided to use a default path inside the container, for example /home/ or /, or any other folder or path.
Is it necessary to create a WORKDIR to use a VOLUME? I think not! What do I need to do?
docker run --rm -v %echo%:/home/ --name rust_container rust
Dockerfile:
FROM debian:buster-slim
ENV RUSTUP_HOME=/usr/local/rustup \
    CARGO_HOME=/usr/local/cargo \
    PATH=/usr/local/cargo/bin:$PATH
RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        gcc \
        libc6-dev \
        wget \
        ; \
    \
    url="https://static.rust-lang.org/rustup/dist/x86_64-unknown-linux-gnu/rustup-init"; \
    wget "$url"; \
    chmod +x rustup-init; \
    ./rustup-init -y --no-modify-path --default-toolchain nightly; \
    rm rustup-init; \
    chmod -R a+w $RUSTUP_HOME $CARGO_HOME; \
    rustup --version; \
    cargo --version; \
    rustc --version; \
    \
    apt-get remove -y --auto-remove \
        wget \
        ; \
    rm -rf /var/lib/apt/lists/*;
You don't need to set the workdir to use a volume; the workdir defaults to the root folder. The directory you reference in the -v argument does need to exist in the container's folder structure, though, because you are mounting a volume: if /home/ doesn't exist, there is nothing for the volume to mount onto. If you include a host path, as in /foo/bar:/home, you create a bind mount rather than a volume, which mounts the host file-system directory inside the container.
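A minimal illustration of the two forms, reusing the rust image and the paths from this thread (the volume name mydata is made up):
# named volume: Docker manages the storage, on Linux under /var/lib/docker/volumes/
docker run --rm -v mydata:/home rust ls /home
# bind mount: the host directory /foo/bar is mounted over /home instead
docker run --rm -v /foo/bar:/home rust ls /home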

Cannot mount local directory into container from docker-compose file

I want to mount the local directory of a project into the Docker container. Previously I used the COPY command, but when I make changes I have to rebuild the parts which involve some installation from bash scripts.
This is my docker-compose file
version "3.7"
services
tesseract:
container_name: tesseract
build:
context: ./app/services/tesseract/
dockerfile: Dockerfile
volumes:
- ./app/services/tesseract:/tesseract/
I don't get any errors when building, but my WORKDIR /tesseract is empty when I run the container.
This is my Dockerfile
FROM ubuntu:19.10
ENV DEBIAN_FRONTEND=noninteractive
ENV TESSERACT=/usr/share/tesseract
WORKDIR /tesseract
RUN apt-get update && apt-get install -y \
    build-essential \
    software-properties-common \
    python3.7 \
    python3-pip \
    cmake \
    autoconf \
    automake \
    libtool \
    pkg-config \
    libpng-dev \
    tesseract-ocr \
    libtesseract-dev \
    libpango1.0-dev \
    libicu-dev \
    libcairo2-dev \
    libjpeg8-dev \
    zlib1g-dev \
    libtiff5-dev \
    wget \
    git \
    g++ \
    vim
RUN git clone https://github.com/tesseract-ocr/tesseract $TESSERACT
COPY . /tesseract/
RUN chmod +x scripts/*
RUN scripts/compile_tesseract.sh
RUN scripts/langdata_lstm.sh scripts/start.sh
RUN pip3 install -r requirements.txt
ENV TESSDATA_PREFIX=/usr/share/tesseract/tessdata
The main objective of a Docker volume is:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
which means volumes are used to persist data outside the lifecycle of a container. If you want to copy a file or a directory into a container, use the COPY instruction.
If you’re copying in local files to your Docker image, always use COPY because it’s more explicit.
With Docker Compose, you can use a bind mount:
https://docs.docker.com/compose/gettingstarted/#step-5-edit-the-compose-file-to-add-a-bind-mount
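To make the persistence point concrete, here is a quick sketch (the volume name demo-data is made up; ubuntu:19.10 is just the base image already used above):
docker volume create demo-data
docker run --rm -v demo-data:/data ubuntu:19.10 sh -c 'echo hello > /data/greeting'
# a second, separate container still sees the file
docker run --rm -v demo-data:/data ubuntu:19.10 cat /data/greeting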

Dockerfile cannot find executable script (no such file or directory)

I'm writing a Dockerfile in order to create an image for a web server (a Shiny server, more precisely). It works well, but it depends on a huge database folder (db/) that is not distributed with the package, so I want to do all this preprocessing while creating the image, by running the corresponding script in the Dockerfile.
I expected this to be simple, but I'm struggling to figure out where my files are located within the image.
This repo has the following structure:
Dockerfile
preprocessing_files
configuration_files
app/
    application_files
    db/
        processed_files
That is, app/db/ does not exist initially, but is created and filled with files when preprocessing_files are run.
The Dockerfile is the following:
# Install R version 3.6
FROM r-base:3.6.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
    sudo \
    gdebi-core \
    pandoc \
    pandoc-citeproc \
    libcurl4-gnutls-dev \
    libcairo2-dev/unstable \
    libxml2-dev \
    libxt-dev \
    libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
    VERSION=$(cat version.txt) && \
    wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
    gdebi -n ss-latest.deb && \
    rm -f version.txt ss-latest.deb
# Install R packages that are required
RUN R -e "install.packages(c('shiny', 'flexdashboard','rmarkdown','tidyverse','plotly','DT','drc','gridExtra','fitdistrplus'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
COPY /app/db /srv/shiny-server/app/
# Make the ShinyApp available at port 80
EXPOSE 80
CMD ["/usr/bin/shiny-server"]
The file above works well if preprocessing_files are run in advance, so that app/application_files can successfully read app/db/processed_files. How could this script be run from the Dockerfile? To me the intuitive solution would be simply to write:
RUN bash -c "preprocessing.sh"
before the COPY instructions, but then preprocessing_files are not found. If the instruction is written after COPY, together with WORKDIR app/, the same error happens. I cannot understand why.
You cannot execute code on the host machine from a Dockerfile; a RUN command executes inside the container being built. You can:
Copy preprocessing_files into the Docker container and run preprocessing.sh inside the container (this would increase the size of the image); see the sketch after this list.
Create a makefile/build.sh script which launches preprocessing.sh before executing docker build.
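A minimal sketch of the first option, assuming preprocessing.sh sits among preprocessing_files in the build context and can run inside the image (the /build path is an arbitrary choice):
# copy the whole build context, including preprocessing_files, into the image
COPY . /build
WORKDIR /build
# run the preprocessing inside the image; this should create app/db/
RUN bash preprocessing.sh
# then copy the now-complete app/ into place
RUN cp -r app /srv/shiny-server/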

How to handle PHP project code in docker container

I ran into a kind of chicken-and-egg problem with my Docker setup. In my Dockerfile I install nginx, PHP and the needed configurations. I also install Composer there:
FROM ubuntu
RUN apt-get update && apt-get install -y \
    curl \
    nginx \
    nodejs \
    php7.0-fpm \
    php-intl \
    php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
    echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin && \
    chown -R www-data:www-data /var/www/
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Now, the next step would be to actually install all dependencies in the project directory via Composer. And this is where the trouble starts: as this is my development machine, I don't want to copy my local project files over to the Docker container. Instead, I mounted the directory in my docker-compose.yml:
version: '3'
services:
  web:
    ...
    volumes:
      - "./crm-application:/var/www/orocrm/"
I cannot put composer install in the Dockerfile, as the mounting of the directory (in my docker-compose file) takes place after the Dockerfile is run.
What is the best solution here? Another option that comes to mind is initially copying the files into the container and later on using a file watcher to scp the changed files into the container. Not a nice solution, though.
UPDATE I would like to emphasize what my actual problem is: I am on my development machine and I want to continuously update the code and have the changes mirrored instantly without building the image once again. Therefore, COPY is not an option.
My suggestion is to copy your content into your container using the COPY command, like this:
FROM ubuntu
COPY ./crm-application /var/www/orocrm/
RUN apt-get update && apt-get install -y \
    curl \
    nginx \
    nodejs \
    php7.0-fpm \
    php-intl \
    php-pgsql
RUN rm -rf /var/lib/apt/lists/* && \
    echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin && \
    chown -R www-data:www-data /var/www/ && \
    composer install
COPY orocrm /etc/nginx/sites-available/
RUN ln -s /etc/nginx/sites-available/orocrm /etc/nginx/sites-enabled/orocrm
CMD nginx
Why? This way you don't need docker-compose or another system; you are able to run your single container on its own.
Even if you want to use docker-compose, you're using a volume that allows you to update the code inside your container.
Notice that I've added composer install to the Dockerfile because you already have the code inside the container at the moment of the build.
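If you do keep the bind mount for development, you can also run Composer inside the already-running container (a sketch, assuming the service is named web as in the compose file above and that composer is on the PATH as installed above):
docker-compose exec web composer install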

Syntaxnet spec file and Docker?

I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either program, SyntaxNet or Docker. On the GitHub SyntaxNet page it says:
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file in order to edit it?
I compiled SyntaxNet in a Docker container using these instructions:
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
    && cd $SYNTAXNETDIR \
    && apt-get update \
    && apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
    && pip install --upgrade pip \
    && pip install -U protobuf==3.0.0b2 \
    && pip install asciitree \
    && pip install numpy \
    && wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
    && chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
    && ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
    && git clone --recursive https://github.com/tensorflow/models.git \
    && cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
    && echo "\n\n\n" | ./configure \
    && apt-get autoremove -y \
    && apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
    && bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a host directory into the container:
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above; I can't get your Dockerfile to build an image, so I can't confirm it; you can always run find . -name context.pbtxt from the root directory to find it), and exit the container (Ctrl-D or exit).
You now have the file on your host's hard drive, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
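Putting it together, a sketch of the round trip (the image tag syntaxnet comes from the build commands in the question; the context.pbtxt path is the guess above):
# inside the bash session started above: copy the spec file to the mounted dir
cp /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt /tmp/
exit
# back on the host: edit context.pbtxt, then mount the edited file over the original
docker run -it --rm -v "$(pwd)/context.pbtxt":/opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt syntaxnet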
