I have to copy a file from a GCS location to a specific directory in the Docker image. I am using ubuntu:bionic as the parent image.
After installing Python and pip, I tried the following:
RUN pip install gsutil \
&& gsutil cp gs:<some location> /home/${USER}/<some other location>
When I build the Docker image, I get the following error:
13 19.84 /bin/sh: 1: gsutil: not found
Please let me know what mistake I am making.
The best solution for your issue depends on whether you need gsutil for other purposes inside your container or just to copy the file.
If you just need to copy the file with gsutil, it is a good idea to use a multi-stage build in Docker so that your final image does not have extra tools installed (the Cloud SDK in this case) and is much lighter. The Dockerfile would be:
FROM google/cloud-sdk:latest
RUN gsutil cp <src_location> <intermediate_location>
FROM ubuntu:bionic
COPY --from=0 <intermediate_location> <dst_location>
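As an optional readability improvement (supported since Docker 17.05), you can name the first stage instead of referring to it by index:
FROM google/cloud-sdk:latest AS fetcher
RUN gsutil cp <src_location> <intermediate_location>
FROM ubuntu:bionic
COPY --from=fetcher <intermediate_location> <dst_location>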
If you need gsutil for further actions in your container, the following Dockerfile installs it in Ubuntu:
FROM ubuntu:bionic
RUN apt-get update && \
apt-get install -y curl gnupg && \
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
RUN gsutil cp <src_location> <dst_location>
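One caveat about a detail your question doesn't show: gsutil can only read non-public buckets if credentials are available at build time. A rough sketch (key.json is a hypothetical service account key file; baking keys into an image is generally discouraged, so prefer build secrets or downloading on the host):
COPY key.json /tmp/key.json
RUN gcloud auth activate-service-account --key-file=/tmp/key.json \
&& gsutil cp <src_location> <dst_location>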
A tip that would help you a lot: open a shell inside the container with docker exec -it <container name> /bin/bash, run each command one at a time, and add a command to the Dockerfile only once it has succeeded.
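For example, to dry-run the install steps above before committing them to the Dockerfile:
docker run -it --rm ubuntu:bionic /bin/bash
# inside the container, try one command at a time:
apt-get update
apt-get install -y curl gnupg
# ...and so on; move each command into the Dockerfile once it works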
I've created a Dockerfile that is based off jenkins/jenkins:lts-jdk11.
I'm trying to install Docker + Docker Compose so that Jenkins will have access to them when I create my pipeline for CI/CD.
Here is my Dockerfile:
FROM jenkins/jenkins:lts-jdk11 AS jenkins
WORKDIR /home/jenkins
RUN chown -R 1000:1000 /var/jenkins_home
USER root
# Install aws cli version 2
RUN apt-get update && apt-get install -y unzip curl vim bash sudo
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
#Install docker cli command
RUN sudo apt-get update
RUN sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
RUN echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN sudo apt-get update
RUN sudo apt-get install -y docker-ce docker-ce-cli containerd.io
##Install docker compose
RUN mkdir -p /usr/local/lib/docker/cli-plugins
RUN curl -SL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o /usr/local/lib/docker/cli-plugins/docker-compose
RUN chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
RUN sudo usermod -a -G docker jenkins
The docker commands work well within the container, but as soon as I start to build an image it displays this error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If I try to start the Docker service with service docker start, I get the following error:
mkdir: cannot create directory ‘cpuset’: Read-only file system
I'm not sure how to solve this one.
TIA
The container does not use an init system, so the Docker service cannot be started inside it.
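A common workaround (a suggestion, not part of the original answer): instead of running a Docker daemon inside the Jenkins container, mount the host's Docker socket so the docker CLI inside the container talks to the host's daemon. The image name below is a placeholder for whatever you built from your Dockerfile:
docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 -p 50000:50000 \
my-jenkins-image
Note that the jenkins user needs permission on the socket; the usermod -a -G docker jenkins line only helps if the container's docker group GID matches the group owning the socket on the host.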
Our Dockerfile has the following lines:
# Installing google cloud SDK for gsutil
RUN apt-get update && \
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
When we launch a docker container locally from this image, and docker exec -it containerID bash into the container, we get:
airflow@containerID:~$ gsutil --version
gsutil version: 4.65
When we launch a docker container on our GCP compute engine from this image, and docker exec -it containerID bash into the container, we get:
airflow@containerID:~$ gsutil --version
bash: gsutil: command not found
I thought the whole point of Docker and Dockerfiles was to avoid exactly this issue of something working locally but not in production... We're at a loss for how to even debug this.
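One place to start debugging (a suggestion, not from the thread): confirm that both hosts are actually running the same image digest, and compare the PATH of the airflow user in both containers, since gsutil may be installed but simply not on that user's PATH:
docker images --digests | grep <your_image_name>
docker exec -it containerID bash -c 'echo $PATH; which gsutil'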
I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting that directory into the container:
docker run -it --rm -v $(pwd):/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above; I can't get your Dockerfile to build an image, so I can't confirm it, but you can always run find . -name context.pbtxt from the root to find it), then exit the container (Ctrl-D or exit).
You now have the file on your host's hard drive, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, you can simply mount your host directory at that path in the container. If it contains other things, you can use a so-called bootstrap script to move the file from your mounted directory (/tmp in the example above) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
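For example, if the spec file really is at the path guessed above and it is the only file you need to override, a single-file bind mount would look like this (the container path is still a guess):
docker run -it --rm \
-v $(pwd)/context.pbtxt:/opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt \
syntaxnet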
I am trying to mount the current working directory onto a Docker container, but it isn't working. Here is my Dockerfile:
FROM ubuntu:14.04.3
MAINTAINER Upendra Devisetty
RUN apt-get update && apt-get install -y g++ \
make \
git \
zlib1g-dev \
python \
wget \
curl \
python-matplotlib
ENV BINPATH /usr/bin
ENV HISAT2GIT https://upendra_35#bitbucket.org/upendra_35/evolinc.git
RUN git clone "$HISAT2GIT"
RUN chmod +x evolinc/evolinc-part-I.sh && cp evolinc/evolinc-part-I.sh $BINPATH
RUN wget -O- http://cole-trapnell-lab.github.io/cufflinks/assets/downloads/cufflinks-2.2.1.Linux_x86_64.tar.gz | tar xzvf -
RUN wget -O- https://github.com/TransDecoder/TransDecoder/archive/2.0.1.tar.gz | tar xzvf -
RUN wget -O- http://seq.cs.iastate.edu/CAP3/cap3.linux.x86_64.tar | tar vfx -
RUN curl ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST/ncbi-blast-2.2.31+-x64-linux.tar.gz > ncbi-blast-2.2.31+-x64-linux.tar.gz
RUN tar xvf ncbi-blast-2.2.31+-x64-linux.tar.gz
RUN wget -O- http://ftp.mirrorservice.org/sites/download.sourceforge.net/pub/sourceforge/q/qu/quast/quast-3.0.tar.gz | tar zxvf -
RUN curl -L http://cpanmin.us | perl - App::cpanminus
RUN cpanm URI/Escape.pm
ENV PATH /CAP3/:$PATH
ENV PATH /ncbi-blast-2.2.31+/bin/:$PATH
ENV PATH /quast-3.0/:$PATH
ENV PATH /cufflinks-2.2.1.Linux_x86_64/:$PATH
ENV PATH /TransDecoder-2.0.1/:$PATH
ENTRYPOINT ["/usr/bin/evolinc-part-I.sh"]
CMD ["-h"]
When I run the following to mount the current working directory, to make sure everything is working OK, what I see is that all those dependencies are getting installed in the current working directory.
docker run --rm -v $(pwd):/working-dir -w /working-dir ubuntu/evolinc:2.0 -c cuffcompare_out_annot_no_annot.combined.gtf -g Brassica_rapa_v1.2_genome.fa -r Brassica_rapa_v1.2_cds.fa -b TE_RNA_transcripts.fa
I thought they should only be installed in the container, and only the output should be generated in the current working directory. Sorry, I am very new to Docker and I need some help with this.
Mounting a volume in Docker (-v) lets a container share directories/volumes with the host, so when you change files in the mounted volume you are in fact changing them in the host directory. If you want to copy files into the image rather than point at them on the host, build your own image and use the COPY or ADD instructions.
See also: What is the difference between the `COPY` and `ADD` commands in a Dockerfile?
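A minimal sketch of the difference (image name and paths are illustrative):
# Bind mount at run time: the container sees, and can modify, the host directory
docker run --rm -v $(pwd)/data:/data myimage ls /data
# COPY at build time (in the Dockerfile): the files are baked into the image,
# and the host copy is untouched afterwards
COPY data/ /data/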
I'm trying to create a VM with Docker and boot2docker. I've made the following Dockerfile, which I'm trying to run through the command line:
docker run Dockerfile
Immediately it says exactly this:
Unable to find image 'Dockerfile:latest' locally
FATA[0000] Invalid repository name <Dockerfile>, only [a-z0-9_.] are allowed
Dockerfile:
FROM ubuntu:latest
#Oracle Java7 install
RUN apt-get install software-properties-common -y
RUN apt-get update
RUN add-apt-repository -y ppa:webupd8team/java
RUN apt-get update
RUN echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
RUN apt-get install -y oracle-java7-installer
#Jenkins install
RUN wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
RUN sudo echo "deb http://pkg.jenkins-ci.org/debian binary/" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install --force-yes -y jenkins
RUN sudo service jenkins start
#Zip support install
RUN apt-get update
RUN apt-get -y install zip
#Unzip hang.zip
RUN unzip -o /var/jenkins/hang.zip -d /var/lib/jenkins/
RUN chown -R jenkins:jenkins /vaR/lib/jenkins
RUN service jenkins restart
EXEC tail -f /etc/passwd
EXPOSE 8080
I am in the directory where the Dockerfile is when trying to run this command.
Ignore the zip part, as that's for later use
You should run docker build first (which actually uses your Dockerfile):
docker build --tag=imagename .
Or
docker build --tag=imagename -f yourDockerfile .
Then you would use that image tag to docker run it:
docker run imagename
There are tools that can provide this type of feature. We achieved it using Docker Compose (https://docs.docker.com/compose/overview/):
docker-compose up
As a workaround, you can also chain the two commands yourself:
$ docker build -t foo . && docker run foo
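For reference, a minimal docker-compose.yml sketch (the service name is illustrative) that lets docker-compose up build the image from the local Dockerfile and run it in one command:
version: "3"
services:
  app:
    build: .
    image: foo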