How to edit mounted files in a devcontainer?

I'm struggling to set up a dev container for my project. Initially, the setup was fine. However, once I edited files from inside the container, I could no longer modify them outside of the container, because the user inside the container was root.
So then, I modified my Dockerfile to build my project inside a user's home directory. The issue now is that I can't edit the mounted source code, because I lack the necessary permissions inside the container.
Here is my (new) Dockerfile:
# We build everything from the perspective of the user
# This lets us edit mounted files from inside the container using
# the VsCode Remote Container extension w/o setting their permissions to root
FROM nvidia/cuda:11.6.1-devel-ubuntu20.04
# Prevents apt from giving prompts
# Set as ARG so it does not persist after build
# https://serverfault.com/questions/618994/when-building-from-dockerfile-debian-ubuntu-package-install-debconf-noninteract
ARG DEBIAN_FRONTEND=noninteractive
# Docker docs: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
# In addition, when you clean up the apt cache by removing /var/lib/apt/lists it reduces the image size, since the apt cache is not stored in a layer.
# Since the RUN statement starts with apt-get update, the package cache is always refreshed prior to apt-get install.
# Install all apt dependencies before we switch to non-root
RUN apt update && apt install curl \
# for building torchvision
git ninja-build -y && rm -rf /var/lib/apt/lists/*
# These commands are for the VsCode Docker extension
RUN adduser --system --group user
USER user
WORKDIR /home/user
# Install miniconda.sh
ENV PATH="/home/user/miniconda3/bin:${PATH}"
RUN curl \
https://repo.anaconda.com/miniconda/Miniconda3-py38_4.12.0-Linux-x86_64.sh -o miniconda.sh \
&& mkdir /home/user/.conda \
&& bash miniconda.sh -b \
&& rm miniconda.sh
COPY ./environment.yml ./environment.yml
RUN conda env create -f environment.yml && rm ./environment.yml
# pip throws a warning "don't run pip as root", but it's not actually running as root -- so ignore it
# (you can check by attaching to the container & running `pip list`)
COPY ./requirements.txt ./requirements.txt
RUN conda run --no-capture-output -n video-rec pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu116 && rm ./requirements.txt
# This is a hack, will figure it out later
# RUN conda run --no-capture-output -n video-rec pip install 'mosaicml[all]'
# TODO: Pillow SIMD for fast image augmentations
# Uncomment + remove FFMPEG from environment.yml if using GPU decoding
# Compile nv-codec
# RUN git clone --depth 1 --branch n11.1.5.1 https://git.videolan.org/git/ffmpeg/nv-codec-headers.git && \
# cd nv-codec-headers && \
# make install -j 100
# Build FFMPEG with Nvidia, torch requires FFMPEG 4.2 (I think)
# RUN git clone --depth 1 --branch n4.2.7 https://git.ffmpeg.org/ffmpeg.git ffmpeg/ \
# && cd ffmpeg && \
# ./configure --enable-nonfree --enable-shared --enable-cuda-nvcc --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64 \
# # I need this, I believe this generates code that works for A100s
# # See: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
# # SM80 or SM_80, compute_80 => NVIDIA A100
# SM86 => Tesla GA10x cards, RTX Ampere – RTX 3080, GA102 – RTX 3090, RTX A2000, A3000, RTX A4000, A5000, A6000, NVIDIA A40, GA106 – RTX 3060, GA104 – RTX 3070, GA107 – RTX 3050, RTX A10, RTX A16, RTX A40, A2 Tensor Core GPU
# --nvccflags="-gencode arch=compute_80,code=sm_80 -O2" \
# && make -j 100 \
# && make install
# torchvision (FFMPEG dependency is installed through conda)
RUN git clone --depth 1 --branch v0.13.1 https://github.com/pytorch/vision.git vision \
&& cd vision \
# remove the existing torchvision
&& conda run --no-capture-output -n video-rec pip uninstall --yes torchvision \
&& conda run --no-capture-output -n video-rec python3 setup.py install
# Set a default shell for the Dev container extension
ENV SHELL /bin/bash
and here is my devcontainer.json:
{
  "dockerComposeFile": "../commands/develop/docker-compose.yml",
  "service": "develop",
  "workspaceFolder": "/home/user/app",
  "shutdownAction": "stopCompose",
  "updateRemoteUserUID": true
}
Lastly, here is my compose file:
version: "3.8"
services:
develop:
build: ../..
stdin_open: true
volumes:
- ../..:/home/user/app
- ${DATA_DIR}:/data
environment:
WANDB_API_KEY: ${WANDB_API_KEY}
user: user
# This removes a certain memory restriction ??
# w/o the container crashes
ipc: host
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
What can I do so that I can edit files on the host machine from inside my dev container without ruining their permissions?
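The pattern I keep seeing suggested is to create the container user with the host user's UID/GID at build time, so files written through the bind mount carry the same numeric owner on both sides. A minimal sketch of that pattern (the 1000:1000 values and arg names are assumptions, not from my project — check yours with id -u and id -g):
FROM nvidia/cuda:11.6.1-devel-ubuntu20.04
ARG UID=1000
ARG GID=1000
# Create a user whose numeric IDs match the host user that owns the checkout
RUN groupadd --gid "${GID}" user \
&& useradd --uid "${UID}" --gid "${GID}" --create-home --shell /bin/bash user
USER user
WORKDIR /home/user
with the IDs passed through the compose file:
    build:
      context: ../..
      args:
        UID: "1000"
        GID: "1000"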

Related

docker-compose up failed - reading directory failed

I have a docker image which I wanted to bring up to run tests automatically; the scripts are located at /opt/robotframework/tests.
An error occurred: docker cannot read the directory:
$ docker-compose up
Creating network "docker-robot-framework_default" with the default driver
Creating robot-runner ... done
Attaching to robot-runner
robot-runner | [ ERROR ] Reading directory '/opt/robotframework/tests' failed: PermissionError: [Errno 13] Permission denied: '/opt/robotframework/tests'
robot-runner |
robot-runner | Try --help for usage information.
robot-runner exited with code 252
docker-compose.yml
version: '3'
services:
  robot-runner:
    build:
      context: .
      dockerfile: /Dockerfile
    container_name: robot-runner
    image: ppodgorsek/robot-framework:latest
    volumes:
      - ./test:/opt/robotframework/tests
      - ./test-audios:/opt/robotframework/test-audios
      - ./output-local:/opt/robotframework/reports
    environment:
      PYTHONWARNINGS: "ignore:Unverified HTTPS request"
Dockerfile:
FROM fedora:36
MAINTAINER Paul Podgorsek <ppodgorsek@users.noreply.github.com>
LABEL description="Robot Framework in Docker."
# Set the reports directory environment variable
ENV ROBOT_REPORTS_DIR /opt/robotframework/reports
# Set the tests directory environment variable
ENV ROBOT_TESTS_DIR /opt/robotframework/tests
# ENV ROBOT_TEST_AUDIOS_DIR /opt/robotframework/test-audios
# Set the working directory environment variable
ENV ROBOT_WORK_DIR /opt/robotframework/temp
# Setup X Window Virtual Framebuffer
ENV SCREEN_COLOUR_DEPTH 24
ENV SCREEN_HEIGHT 1080
ENV SCREEN_WIDTH 1920
# Setup the timezone to use, defaults to UTC
ENV TZ UTC
# Set number of threads for parallel execution
# By default, no parallelisation
ENV ROBOT_THREADS 1
# Define the default user who'll run the tests
ENV ROBOT_UID 1000
ENV ROBOT_GID 1000
# Dependency versions
ENV ALPINE_GLIBC 2.35-r0
ENV AWS_CLI_VERSION 1.22.87
ENV AXE_SELENIUM_LIBRARY_VERSION 2.1.6
ENV BROWSER_LIBRARY_VERSION 12.2.0
ENV CHROMIUM_VERSION 99.0
ENV DATABASE_LIBRARY_VERSION 1.2.4
ENV DATADRIVER_VERSION 1.6.0
ENV DATETIMETZ_VERSION 1.0.6
ENV FAKER_VERSION 5.0.0
ENV FIREFOX_VERSION 98.0
ENV FTP_LIBRARY_VERSION 1.9
ENV GECKO_DRIVER_VERSION v0.30.0
ENV IMAP_LIBRARY_VERSION 0.4.2
ENV PABOT_VERSION 2.5.2
ENV REQUESTS_VERSION 0.9.2
ENV ROBOT_FRAMEWORK_VERSION 5.0
ENV SELENIUM_LIBRARY_VERSION 6.0.0
ENV SSH_LIBRARY_VERSION 3.8.0
ENV XVFB_VERSION 1.20
# By default, no reports are uploaded to AWS S3
ENV AWS_UPLOAD_TO_S3 false
# Prepare binaries to be executed
COPY bin/chromedriver.sh /opt/robotframework/bin/chromedriver
COPY bin/chromium-browser.sh /opt/robotframework/bin/chromium-browser
COPY bin/run-tests-in-virtual-screen.sh /opt/robotframework/bin/
# COPY bin/mml_4_apr_2018_b_session3_2.wav /opt/robotframework/test-audios
# COPY bin/mml_4_apr_2018_b_session3_2.stm /opt/robotframework/test-audios
# Install system dependencies
RUN dnf upgrade -y --refresh \
&& dnf install -y \
chromedriver-${CHROMIUM_VERSION}* \
chromium-${CHROMIUM_VERSION}* \
firefox-${FIREFOX_VERSION}* \
npm \
nodejs \
python3-pip \
tzdata \
xorg-x11-server-Xvfb-${XVFB_VERSION}* \
&& dnf clean all
# FIXME: below is a workaround, as the path is ignored
RUN mv /usr/lib64/chromium-browser/chromium-browser /usr/lib64/chromium-browser/chromium-browser-original \
&& ln -sfv /opt/robotframework/bin/chromium-browser /usr/lib64/chromium-browser/chromium-browser
# Install Robot Framework and associated libraries
RUN pip3 install \
--no-cache-dir \
robotframework==$ROBOT_FRAMEWORK_VERSION \
robotframework-browser==$BROWSER_LIBRARY_VERSION \
robotframework-databaselibrary==$DATABASE_LIBRARY_VERSION \
robotframework-datadriver==$DATADRIVER_VERSION \
robotframework-datadriver[XLS] \
robotframework-datetime-tz==$DATETIMETZ_VERSION \
robotframework-faker==$FAKER_VERSION \
robotframework-ftplibrary==$FTP_LIBRARY_VERSION \
robotframework-imaplibrary2==$IMAP_LIBRARY_VERSION \
robotframework-pabot==$PABOT_VERSION \
robotframework-requests==$REQUESTS_VERSION \
robotframework-seleniumlibrary==$SELENIUM_LIBRARY_VERSION \
robotframework-sshlibrary==$SSH_LIBRARY_VERSION \
axe-selenium-python==$AXE_SELENIUM_LIBRARY_VERSION \
PyYAML \
# Install awscli to be able to upload test reports to AWS S3
awscli==$AWS_CLI_VERSION
# Gecko drivers
RUN dnf install -y \
wget \
# Download Gecko drivers directly from the GitHub repository
&& wget -q "https://github.com/mozilla/geckodriver/releases/download/$GECKO_DRIVER_VERSION/geckodriver-$GECKO_DRIVER_VERSION-linux64.tar.gz" \
&& tar xzf geckodriver-$GECKO_DRIVER_VERSION-linux64.tar.gz \
&& mkdir -p /opt/robotframework/drivers/ \
&& mv geckodriver /opt/robotframework/drivers/geckodriver \
&& rm geckodriver-$GECKO_DRIVER_VERSION-linux64.tar.gz \
&& dnf remove -y \
wget \
&& dnf clean all
# Install the Node dependencies for the Browser library
# FIXME: Playright currently doesn't support relying on system browsers, which is why the `--skip-browsers` parameter cannot be used here.
RUN rfbrowser init \
&& ln -sf /usr/lib64/libstdc++.so.6 /usr/local/lib/python3.10/site-packages/Browser/wrapper/node_modules/playwright-core/.local-browsers/firefox-1316/firefox/libstdc++.so.6
# Create the default report and work folders with the default user to avoid runtime issues
# These folders are writeable by anyone, to ensure the user can be changed on the command line.
RUN mkdir -p ${ROBOT_REPORTS_DIR} \
&& mkdir -p ${ROBOT_WORK_DIR} \
&& chown ${ROBOT_UID}:${ROBOT_GID} ${ROBOT_REPORTS_DIR} \
&& chown ${ROBOT_UID}:${ROBOT_GID} ${ROBOT_WORK_DIR} \
&& chmod ugo+w ${ROBOT_REPORTS_DIR} ${ROBOT_WORK_DIR}
# Allow any user to write logs
RUN chmod ugo+w /var/log \
&& chown ${ROBOT_UID}:${ROBOT_GID} /var/log
# Update system path
ENV PATH=/opt/robotframework/bin:/opt/robotframework/drivers:$PATH
# Set up a volume for the generated reports
VOLUME ${ROBOT_REPORTS_DIR}
USER ${ROBOT_UID}:${ROBOT_GID}
# A dedicated work folder to allow for the creation of temporary files
WORKDIR ${ROBOT_WORK_DIR}
# Execute all robot tests
CMD ["run-tests-in-virtual-screen.sh"]
local directories: [screenshot of the host folder listing omitted]
Basically, the USER specified in the Dockerfile (USER ${ROBOT_UID}:${ROBOT_GID}) is used in the container and has no access rights to the folder on your host. While you could use root in the container to "solve" the problem, your container may then get root on the host. You should NEVER use root in a docker container.
To avoid the problem, give the user (in your case 1000:1000) appropriate rights on the host folder (./test) with setfacl. If the user is not present on the host, just add one with the same UID/GID:
sudo addgroup robot --gid 1000
sudo adduser robot --ingroup robot --uid 1000
setfacl -R -m u:robot:rwx test
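A quick way to verify the ACL took effect (getfacl ships in the same acl package as setfacl; the output below is illustrative):
getfacl test
# file: test
# owner: youruser
# group: youruser
user::rwx
user:robot:rwx
group::r-x
mask::rwx
other::r-x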
Alternatively, it was worked around by adding user: root in docker-compose.yml; that user is granted full access rights to the path (despite the warning above):
version: '3'
services:
  robot-runner:
    build:
      context: .
      dockerfile: /Dockerfile
    container_name: robot-runner
    # image: ppodgorsek/robot-framework:latest
    image: robot-runner:latest
    user: root
    volumes:
      - ./BrowserTests:/opt/robotframework/tests
      - ./output-local:/opt/robotframework/reports
    environment:
      PYTHONWARNINGS: "ignore:Unverified HTTPS request"
    extra_hosts:
      - "speech.sts:172.17.0.1"
      - "speech.srs:172.17.0.1"
    networks:
      - sts_sts_network
networks:
  sts_sts_network:
    external: true

Visual studio code docker container error

I want to use a docker container with VS Code. I added the configuration files for the Anaconda container and built it. VS Code created a dockerfile:
# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.231.6/containers/python-3-anaconda/.devcontainer/base.Dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/anaconda:0-3
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="none"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
# Copy environment.yml (if found) to a temp location so we update the environment. Also
# copy "noop.txt" so the COPY instruction does not fail if no environment.yml exists.
COPY environment.yml* .devcontainer/noop.txt /tmp/conda-tmp/
RUN if [ -f "/tmp/conda-tmp/environment.yml" ]; then umask 0002 && /opt/conda/bin/conda env update -n base -f /tmp/conda-tmp/environment.yml; fi \
&& rm -rf /tmp/conda-tmp
RUN apt-get update
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
But when I add a command like RUN apt-get (or any other command, for example RUN pip3 install opencv-python) at the end of this dockerfile, I get the error: "An error occurred setting up the container."
It is a very strange problem. These commands are correct, but I can't change the original dockerfile: after any change to it I get this error. What is the way to solve this problem? I would like some examples of correct Dockerfiles with proper pip3 package installation.
Also, where does VS Code keep the docker images it creates?
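For what it's worth, the commented-out section at the bottom of the generated dockerfile already shows the intended pattern for extra OS packages; a sketch of what appended installs might look like (libgl1 is an assumption here — a system library opencv-python commonly needs):
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends libgl1 \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 install opencv-python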

Sagemaker with windows

I am trying to use AWS SageMaker on Windows, using Docker. Here is the docker file:
# Build an image that can do training and inference in SageMaker
# This is a Python 2 image that uses the nginx, gunicorn, flask stack
# for serving inferences in a stable way.
FROM ubuntu:16.04
MAINTAINER Amazon AI <sage-learner@amazon.com>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
wget \
python3.5 \
nginx \
libgcc-5-dev \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Here we get all python packages.
# There's substantial overlap between scipy and numpy that we eliminate by
# linking them together. Likewise, pip leaves the install caches populated which uses
# a significant amount of space. These optimizations save a fair amount of space in the
# image, which reduces start up time.
RUN wget https://bootstrap.pypa.io/3.3/get-pip.py && python3.5 get-pip.py && \
pip3 install numpy==1.14.3 scipy scikit-learn==0.19.1 xgboost==0.72.1 pandas==0.22.0 flask gevent gunicorn && \
(cd /usr/local/lib/python3.5/dist-packages/scipy/.libs; rm *; ln ../../numpy/.libs/* .) && \
rm -rf /root/.cache
# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
# Set up the program in the image
COPY xgboost /opt/program
WORKDIR /opt/program
My question is: since I work under Windows 7, should I change these paths?
Thank you
Are you talking about the ENV PATH?
That sets the PATH env within the docker container, which uses linux file system (ubuntu:16.04), so you shouldn't have to change anything.
https://docs.docker.com/engine/reference/builder/#environment-replacement
EDIT:
I reread your question. None of the paths within your Dockerfile have to change, as they refer to locations inside the docker container itself.
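A quick way to confirm that from the host, assuming the image builds (the tag is hypothetical):
docker build -t sagemaker-test .
docker run --rm sagemaker-test printenv PATH
# should print the container-side path, e.g. /opt/program:/usr/local/sbin:/usr/local/bin:...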

Can't build openjdk:8-jdk image directly

I'm slowly making my way through the Riot Taking Control of your Docker Image tutorial http://engineering.riotgames.com/news/taking-control-your-docker-image. This tutorial is a little old, so there are some definite changes to how the end file looks. After hitting several walls I decided to work in the opposite order of the tutorial. I successfully folded the official jenkinsci image into my personal Dockerfile, starting with FROM openjdk:8-jdk. But when I try to fold the openjdk:8-jdk file into my personal image I receive the following error
E: Version '8u102-b14.1-1~bpo8+1' for 'openjdk-8-jdk' was not found
ERROR: Service 'jenkinsmaster' failed to build: The command '/bin/sh
-c set -x && apt-get update && apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION"
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" && rm -rf
/var/lib/apt/lists/* && [ "$JAVA_HOME" = "$(docker-java-home)" ]'
returned a non-zero code: 100
I'm receiving this error even when I gave up and directly copied and pasted the openjdk:8-jdk Dockerfile into my own. My end goal is to bring my personal Dockerfile down to the point that it starts FROM debian-jessie. Any help would be appreciated.
My Dockerfile:
FROM buildpack-deps:jessie-scm
# A few problems with compiling Java from source:
# 1. Oracle. Licensing prevents us from redistributing the official JDK.
# 2. Compiling OpenJDK also requires the JDK to be installed, and it gets
# really hairy.
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list
# Default to UTF-8 file.encoding
ENV LANG C.UTF-8
# add a simple script that can auto-detect the appropriate JAVA_HOME value
# based on whether the JDK or only the JRE is installed
RUN { \
echo '#!/bin/sh'; \
echo 'set -e'; \
echo; \
echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
} > /usr/local/bin/docker-java-home \
&& chmod +x /usr/local/bin/docker-java-home
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_VERSION 8u102
ENV JAVA_DEBIAN_VERSION 8u102-b14.1-1~bpo8+1
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN set -x \
&& apt-get update \
&& apt-get install -y \
openjdk-8-jdk="$JAVA_DEBIAN_VERSION" \
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" \
&& rm -rf /var/lib/apt/lists/* \
&& [ "$JAVA_HOME" = "$(docker-java-home)" ]
# see CA_CERTIFICATES_JAVA_VERSION notes above
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Jenkins Specifics
# install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# Set Jenkins Environmental Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=dc28b91e553c1cd42cc30bd75d0f651671e6de0b
ENV JENKINS_UC https://updates.jenkins.io
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000. If you bind mount a volume from the host or a data
# container, ensure you use the same uid.
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins. Could use ADD but this one does not check Last-Modified header neither does it
# allow to control checksum. see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
# Prep Jenkins Directories
USER root
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R ${group}:${user} /var/log/jenkins
RUN chown -R ${group}:${user} /var/cache/jenkins
# Expose ports for web (8080) & node (50000) agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
# NOTE : Just set pluginID to download latest version of plugin.
# NOTE : All plugins need to be listed as there is no transitive dependency resolution.
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup
# /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Switch to the jenkins user
USER ${user}
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Try a JAVA_DEBIAN_VERSION of 8u111-b14-2~bpo8+1
Here's what happens: when you build the docker file, docker tries to execute all the lines in the dockerfile. One of those is this apt command: apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION". This command says "Install OpenJDK version $JAVA_DEBIAN_VERSION, exactly. Nothing else." That version is no longer available in the Debian repositories, so it can't be apt-get installed! I believe this happens with all packages in official mirrors: if a new version of a package is released, the older version is no longer around to be installed.
If you want to access older Debian packages, you can use something like http://snapshot.debian.org/. The older OpenJDK package has known security vulnerabilities. I recommend using the latest version.
You can use the latest version by leaving out the explicit version in the apt-get command. On the other hand, this will make your image less reproducible: building the image today may get you u111, building it tomorrow may get you u112.
As for why the instructions worked in the other Dockerfile, I think the reason is that at the time the other Dockerfile was built, the package was available. So docker could apt-get install it. Docker then built the image containing the (older) OpenJDK. That image is a binary, so you can install it, or use it in FROM without any issues. But you can't reproduce the image: if you were to try and build the same image yourself, you would run into the same errors.
This also brings up an issue about security updates: since docker images are effectively static binaries (built once, bundle in all dependencies), they don't get security updates once built. You need to keep track of any security updates affecting your docker images and rebuild any affected docker images.
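A sketch of both options against the Dockerfile above (the snapshot date is a placeholder — pick one at which the pinned version still existed):
# Option 1: drop the pin and take whatever is current (less reproducible):
RUN set -x \
&& apt-get update \
&& apt-get install -y openjdk-8-jdk ca-certificates-java \
&& rm -rf /var/lib/apt/lists/*
# Option 2: keep the pin but point apt at snapshot.debian.org; the archived
# Release files are expired, so the validity check has to be relaxed:
RUN echo 'deb http://snapshot.debian.org/archive/debian/20161001T000000Z jessie-backports main' \
> /etc/apt/sources.list.d/jessie-backports.list \
&& apt-get -o Acquire::Check-Valid-Until=false update \
&& apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION"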

Syntaxnet spec file and Docker?

I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above -- I can't get your dockerfile to build an image so I can't confirm it; you can always run find . -name context.pbtxt from root to find it), and exit the container (ctrl-d or exit).
You now have the file on your host's hard drive, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, that's /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
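Putting the round-trip together (the paths follow the guess above, and the image tag comes from the build comments in the question's dockerfile):
mkdir spec && cd spec
docker run -it --rm -v "$(pwd)":/tmp syntaxnet bash
# inside the container:
cp /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt /tmp/
exit
# edit context.pbtxt on the host, then mount the edited copy back over the original:
docker run -it --rm -v "$(pwd)/context.pbtxt":/opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt syntaxnet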
