I'm providing both a staging and a temp location via the gcloud dataflow flex-template run CLI. These are both valid optional flags (per the docs), and there's no mention that you should or shouldn't provide both.
Why, then, would this error come up?
ERROR: (gcloud.dataflow.flex-template.run) unrecognized arguments:
--temp-location (did you mean '--staging-location'?)
gs://gcs-bucket-name
Edit: adding context as requested
The process executes within a Buildkite CI/CD pipeline: generally speaking, a Buildkite agent/step calls a gcloud Docker container, which runs a bash script. I'm also able to run this command and container combo locally just fine -- the error only appears when running in the pipeline.
Dockerfile
FROM gcr.io/google.com/cloudsdktool/cloud-sdk
COPY docker/scripts/gcloud-deploy-flex-template.sh /app/gcloud-deploy-flex-template.sh
WORKDIR /app
# RUN sudo apt-get install google-cloud-sdk <-- threw error
ENTRYPOINT "/app/gcloud-deploy-flex-template.sh"
gcloud-deploy-flex-template.sh
gcloud dataflow flex-template run ${JOB_NAME} \
--template-file-gcs-location ${TEMPLATE_PATH}.json \
--region us-central1 \
--staging-location ${GCS_PATH}/staging/${JOB_NAME} \
--temp-location ${GCS_PATH}/temp \
--parameters requirements_file=requirements.txt \
--parameters input_subscription=${INPUT_SUBSCRIPTION} \
--parameters output_table=${OUTPUT_TABLE} \
--parameters subject=${SUBJECT} \
--parameters schema_registry_url=${SCHEMA_REGISTRY_URL} \
--subnetwork=${SUBNETWORK} \
--service-account-email=${SERVICE_ACCOUNT_EMAIL}
As I proposed in my comment, you can use the official Google Cloud SDK image pinned to an explicit release (412.0.0 was the latest at the time of writing):
FROM google/cloud-sdk:412.0.0
COPY docker/scripts/gcloud-deploy-flex-template.sh /app/gcloud-deploy-flex-template.sh
WORKDIR /app
# RUN sudo apt-get install google-cloud-sdk <-- threw error
ENTRYPOINT "/app/gcloud-deploy-flex-template.sh"
or
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:408.0.1
COPY docker/scripts/gcloud-deploy-flex-template.sh /app/gcloud-deploy-flex-template.sh
WORKDIR /app
# RUN sudo apt-get install google-cloud-sdk <-- threw error
ENTRYPOINT "/app/gcloud-deploy-flex-template.sh"
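Either way, you can sanity-check that the pinned image's gcloud recognizes the flag before wiring it into the pipeline. A quick local check (assuming Docker is available; no output from grep would mean that release doesn't support --temp-location):
docker run --rm gcr.io/google.com/cloudsdktool/cloud-sdk:408.0.1 \
  bash -c "gcloud dataflow flex-template run --help | grep -- --temp-location"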
Related
I am trying to build an image from a Dockerfile and I am getting the error below:
E: Unsupported file /tmp given on commandline
This is my Dockerfile:
FROM python:3.7-slim-stretch
LABEL version="0.1"
ENV DAEMON_RUN=true
ENV SPARK_VERSION=2.4.4
ENV HADOOP_VERSION=2.7
ENV SCALA_VERSION=2.12.4
ENV SCALA_HOME=/usr/share/scala
ENV SPARK_HOME=/spark
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends \
wget \
tar \
bash \
vim \
less \
RUN cd "/tmp"
But when I run the line below, I get the aforementioned error:
docker build --rm -t test/docker-airflow-spark -f Dockerfile-Spark .
If I remove the last command, RUN cd "/tmp", the build works. And if I connect to the container, the /tmp folder does exist.
Any ideas?
You need to edit the last line of the apt-get command: change less \ to less.
Because of that trailing backslash, Docker treats RUN cd "/tmp" as a continuation of the apt-get command, i.e. as more arguments for apt-get.
In any case, you should use WORKDIR rather than RUN cd if you want /tmp to be the working directory for further steps.
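A minimal sketch of the corrected tail of the Dockerfile:
RUN apt-get install -yqq --no-install-recommends \
    wget \
    tar \
    bash \
    vim \
    less
WORKDIR /tmp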
I'm just testing out Docker, so this might be a pretty simple question, but I cannot seem to figure out why it's not doing what I expect.
I created a pretty simple Dockerfile for testing, just to build an image that installs some packages, clones a git repo, and installs its requirements:
FROM ubuntu:18.04
ENV PYTHONEXEC=python3 \
PIPEXEC=pip \
VIRTUALENVEXEC=virtualenv \
GITREPO=https://github.com/test/test.git \
REPODIR=test
RUN apt-get update && apt-get install -y git \
python3 \
python3-dev \
python3-virtualenv \
python-virtualenv \
qt5-default \
libcurl4-openssl-dev \
libxml2 \
libxml2-dev \
libxslt1-dev \
libssl-dev \
virt-viewer
RUN mkdir -p /app
WORKDIR /app
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
CMD ["sleep", "1000000"]
Then I build the image with:
docker build -t gitapp:latest .
This works so far. However, if I specify a -e parameter on the docker container run command, the value doesn't seem to reach the last RUN command.
So if I run docker container run -d -e "REPODIR=blah" gitapp, I expect it to be cloned in /app/blah, but it's still cloned in the /app/test directory.
When I run docker container exec -it -e "REPODIR=blah" <container-id> env, I get:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=2f6ba38341d6
TERM=xterm
REPODIR=blah
PYTHONEXEC=python3
PIPEXEC=pip
VIRTUALENVEXEC=virtualenv
GITREPO=https://github.com/test/test.git
HOME=/root
So it seems that the variable is indeed passed to the container. Then why isn't it passed to the last RUN command, so that it clones the repo into the right directory? Am I missing something basic here?
When you execute docker run, you are instructing the container to execute the Dockerfile's CMD or ENTRYPOINT command. The Dockerfile instructions above the entrypoint have already been executed during the build and are not executed again at runtime.
That's exactly why your GitHub repo is cloned into the directory defined in the Dockerfile at build time, and not into the one passed to the run command with the -e flag.
A workaround would be to alter your image's entrypoint. You may transfer this part
RUN git clone $GITREPO $REPODIR \
&& $VIRTUALENVEXEC -p $PYTHONEXEC venv \
&& . venv/bin/activate \
&& cd $REPODIR \
&& $PIPEXEC install -r requirements.txt
to a bash script (let's call it myscript.sh) that will be executed as the image's entrypoint. Copy the file into a preferred location during the build, make sure it carries the executable flag, and edit your Dockerfile's CMD accordingly:
CMD ["/path_to_script/myscript.sh"]
This however has the caveat that the script will be executed each time the container starts, in contrast with your current setup, possibly adding startup delay depending on what myscript.sh does.
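A minimal sketch of such a script, reusing the ENV values already baked into the image (the trailing sleep stands in for your current CMD):
#!/bin/bash
set -e
# these variables resolve at container start, so docker run -e overrides now apply
git clone "$GITREPO" "$REPODIR"
"$VIRTUALENVEXEC" -p "$PYTHONEXEC" venv
. venv/bin/activate
cd "$REPODIR"
"$PIPEXEC" install -r requirements.txt
sleep 1000000
and in the Dockerfile:
COPY myscript.sh /app/myscript.sh
RUN chmod +x /app/myscript.sh
CMD ["/app/myscript.sh"]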
We are trying to host a TensorFlow object-detection model on GCP.
We maintain the directory structure below before running "gcloud app deploy".
For your convenience I am attaching the configuration files with the question.
We are getting the deployment error mentioned below. Please suggest a solution.
+root
+object_detection/
+slim/
+env
+app.yaml
+Dockerfile
+requirement.txt
+index.html
+test.py
Dockerfile
FROM gcr.io/google-appengine/python
LABEL python_version=python2.7
RUN virtualenv --no-download /env -p python2.7
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Various Python and C/build deps
RUN apt-get update && apt-get install -y \
wget \
build-essential \
cmake \
git \
unzip \
pkg-config \
python-dev \
python-opencv \
libopencv-dev \
libav-tools \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libgtk2.0-dev \
python-numpy \
python-pycurl \
libatlas-base-dev \
gfortran \
webp \
python-opencv \
qt5-default \
libvtk6-dev \
zlib1g-dev \
protobuf-compiler \
python-pil python-lxml \
python-tk
# Install Open CV - Warning, this takes absolutely forever
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
RUN export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/app/slim
CMD exec gunicorn -b :$PORT UploadTest:app
requirement.txt
Flask==0.12.2
gunicorn==19.7.1
numpy==1.13.1
requests==0.11.1
bs4==0.0.1
nltk==3.2.1
pymysql==0.7.2
xlsxwriter==0.8.5
Pillow==4.2.1
pytesseract==0.1
opencv-python>=3.0
matplotlib==2.0.2
tensorflow==1.3.0
lxml==4.0.0
app.yaml
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT UploadTest:app
threadsafe: true
runtime_config:
python_version: 2
After all this, I am setting up the Google Cloud environment with gcloud init
and then starting the deployment with gcloud app deploy.
I am getting the error below while deploying the solution.
Error:
Step 10/12 : RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
---> Running in 9b3ec9c43c2d
/app/object_detection/protos/anchor_generator.proto: File does not reside within any path specified using --proto_path (or -I). You must specify a --proto_path which encompasses this file. Note that the proto_path must be an exact prefix of the .proto file names -- protoc is too dumb to figure out when two paths (e.g. absolute and relative) are equivalent (it's harder than you think).
The command '/bin/sh -c protoc /app/object_detection/protos/*.proto --python_out=/app/.' returned a non-zero code: 1
ERROR
ERROR: build step "gcr.io/cloud-builders/docker#sha256:a4a83be9b2fb61452e864ecf1bcfca99d1845499ef9500ae2905cea0ea593769" failed: exit status 1
----------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.app.deploy) Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/4dba3217-b7d6-4341-b28e-09a9dad45c18?
The directory "object_detection/protos" is present and all the necessary files are in it, yet I still get the deployment error. Please suggest what to change in the Dockerfile to deploy successfully.
My assumption: GCP is not able to figure out the path for protoc. Maybe I have to alter something in the Dockerfile, but I am not able to figure out the solution. Please answer.
NB: This setup runs fine on my local machine, but not on GCP.
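Judging purely from the protoc message, one untested sketch of a fix is to pass an explicit --proto_path that is an exact prefix of the .proto file paths, for example:
RUN protoc --proto_path=/app /app/object_detection/protos/*.proto --python_out=/app
With --proto_path=/app, the generated _pb2.py files land next to the .proto files under /app/object_detection/protos.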
I'm slowly making my way through the Riot Taking Control of your Docker Image tutorial http://engineering.riotgames.com/news/taking-control-your-docker-image. This tutorial is a little old, so there are some definite changes to how the end file looks. After hitting several walls I decided to work in the opposite order of the tutorial. I successfully folded the official jenkinsci image into my personal Dockerfile, starting with FROM openjdk:8-jdk. But when I try to fold the openjdk:8-jdk file into my personal image I receive the following error:
E: Version '8u102-b14.1-1~bpo8+1' for 'openjdk-8-jdk' was not found
ERROR: Service 'jenkinsmaster' failed to build: The command '/bin/sh
-c set -x && apt-get update && apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION"
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" && rm -rf
/var/lib/apt/lists/* && [ "$JAVA_HOME" = "$(docker-java-home)" ]'
returned a non-zero code: 100
I'm receiving this error even after I gave up and directly copied and pasted the openjdk:8-jdk Dockerfile into my own. My end goal is to bring my personal Dockerfile down to the point that it starts FROM debian:jessie. Any help would be appreciated.
My Dockerfile:
FROM buildpack-deps:jessie-scm
# A few problems with compiling Java from source:
# 1. Oracle. Licensing prevents us from redistributing the official JDK.
# 2. Compiling OpenJDK also requires the JDK to be installed, and it gets
# really hairy.
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list
# Default to UTF-8 file.encoding
ENV LANG C.UTF-8
# add a simple script that can auto-detect the appropriate JAVA_HOME value
# based on whether the JDK or only the JRE is installed
RUN { \
echo '#!/bin/sh'; \
echo 'set -e'; \
echo; \
echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
} > /usr/local/bin/docker-java-home \
&& chmod +x /usr/local/bin/docker-java-home
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_VERSION 8u102
ENV JAVA_DEBIAN_VERSION 8u102-b14.1-1~bpo8+1
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN set -x \
&& apt-get update \
&& apt-get install -y \
openjdk-8-jdk="$JAVA_DEBIAN_VERSION" \
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" \
&& rm -rf /var/lib/apt/lists/* \
&& [ "$JAVA_HOME" = "$(docker-java-home)" ]
# see CA_CERTIFICATES_JAVA_VERSION notes above
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Jenkins Specifics
# install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# Set Jenkins Environmental Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=dc28b91e553c1cd42cc30bd75d0f651671e6de0b
ENV JENKINS_UC https://updates.jenkins.io
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000. If you bind mount a volume from the host or a data
# container, ensure you use the same uid.
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins. Could use ADD but this one does not check Last-Modified header neither does it
# allow to control checksum. see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
# Prep Jenkins Directories
USER root
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R ${group}:${user} /var/log/jenkins
RUN chown -R ${group}:${user} /var/cache/jenkins
# Expose ports for web (8080) & node (50000) agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
# NOTE : Just set pluginID to download latest version of plugin.
# NOTE : All plugins need to be listed as there is no transitive dependency resolution.
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup
# /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Switch to the jenkins user
USER ${user}
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Try a JAVA_DEBIAN_VERSION of 8u111-b14-2~bpo8+1
Here's what happens: when you build the Dockerfile, Docker tries to execute all the lines in it. One of those is this apt command: apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION". This command says "Install OpenJDK version $JAVA_DEBIAN_VERSION, exactly. Nothing else." That version is no longer available in the Debian repositories, so it can't be apt-get installed! I believe this happens with all packages in official mirrors: once a new version of a package is released, the older version is no longer around to be installed.
If you want to access older Debian packages, you can use something like http://snapshot.debian.org/. The older OpenJDK package has known security vulnerabilities. I recommend using the latest version.
You can use the latest version by leaving out the explicit version in the apt-get command. On the other hand, this will make your image less reproducible: building the image today may get you u111, building it tomorrow may get you u112.
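For example, the pinned install above would become something like this (a sketch; it trades reproducibility for availability):
RUN set -x \
    && apt-get update \
    && apt-get install -y openjdk-8-jdk ca-certificates-java \
    && rm -rf /var/lib/apt/lists/* \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]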
As for why the instructions worked in the other Dockerfile, I think the reason is that at the time the other Dockerfile was built, the package was available. So docker could apt-get install it. Docker then built the image containing the (older) OpenJDK. That image is a binary, so you can install it, or use it in FROM without any issues. But you can't reproduce the image: if you were to try and build the same image yourself, you would run into the same errors.
This also brings up an issue about security updates: since docker images are effectively static binaries (built once, bundle in all dependencies), they don't get security updates once built. You need to keep track of any security updates affecting your docker images and rebuild any affected docker images.
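When you do rebuild, forcing a fresh pull of the base image and skipping cached layers is one way to actually pick up those fixes (the tag here is just a placeholder):
docker build --pull --no-cache -t my-jenkins-image .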
I am trying to build a Docker image using the following Dockerfile.
FROM ubuntu:latest
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update packages
RUN apt-get -y update && apt-get install -y \
curl \
build-essential \
libssl-dev \
git \
&& rm -rf /var/lib/apt/lists/*
ENV APP_NAME testapp
ENV NODE_VERSION 5.10
ENV SERVE_PORT 8080
ENV LIVE_RELOAD_PORT 8888
# Install nvm, node, and angular
RUN (curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash -) \
&& source /root/.nvm/nvm.sh \
&& nvm install $NODE_VERSION \
&& npm install -g angular-cli \
&& ng new $APP_NAME \
&& cd $APP_NAME \
&& npm run postinstall
EXPOSE $SERVE_PORT $LIVE_RELOAD_PORT
WORKDIR $APP_NAME
EXPOSE 8080
CMD ["node", "-v"]
But I keep getting an error when trying to run it:
docker: Error response from daemon: Container command 'node' not found or does not exist..
I know node is being properly installed, because if I rebuild the image with the CMD line commented out of the Dockerfile:
#CMD ["node", "-v"]
and then start a shell session:
docker run -it testimage
I can see that all my dependencies are there and return proper results:
node -v
v5.10.1
.....
ng -v
angular-cli: 1.0.0-beta.5
node: 5.10.1
os: linux x64
So my question is: why is the CMD in the Dockerfile not able to run these, and how can I fix it?
When you RUN node via nvm in the shell, you have sourced the nvm.sh file, so that shell's environment has a $PATH set that lets it find the nvm-managed executables.
When you run commands via docker run, only a default PATH is injected, and nothing sources nvm.sh for you:
docker run <your-ubuntu-image> sh -c 'echo $PATH'
docker run <your-ubuntu-image> which node
docker run <your-ubuntu-image> bash -c 'source /root/.nvm/nvm.sh && nvm which node'
Specifying a CMD as an array execs the binary directly, with no shell; only the default $PATH above is searched, and it does not include nvm's bin directory.
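For contrast, the two CMD forms (the shell form wraps the command in /bin/sh -c, but a non-interactive sh still won't source nvm.sh, so it doesn't help here either):
# exec form: the binary is executed directly, no shell involved
CMD ["node", "-v"]
# shell form: runs /bin/sh -c 'node -v', still without nvm's PATH setup
CMD node -v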
Provide the full path to your node binary. With nvm, the binary lives under the nvm directory rather than /bin, typically something like:
CMD ["/root/.nvm/versions/node/v5.10.1/bin/node", "-v"]
It's better to point at the node binary directly rather than the nvm helper scripts, due to the way Docker's signal processing works. It might also be simpler to install node from an apt package in Docker rather than via nvm.
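To confirm the exact path inside your image (the version directory depends on what nvm resolved 5.10 to):
docker run -it testimage bash -c 'source /root/.nvm/nvm.sh && which node'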