Why do I get "unzip: short read" when I try to build an image from Dockerfile? - docker

From the Spring Microservices in Action book: I am trying to use the Docker Maven Plugin to build a Docker image for deploying a Java microservice as a Docker container to the cloud.
Dockerfile:
FROM openjdk:8-jdk-alpine
RUN mkdir -p /usr/local/configserver
ADD jce_policy-8.zip /tmp/
RUN unzip /tmp/jce_policy-8.zip && \
rm /tmp/jce_policy-8.zip && \
yes | cp -v /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
ADD @project.build.finalName@.jar /usr/local/configserver/
ADD run.sh run.sh
RUN chmod +x run.sh
CMD ./run.sh
Output related to step 4 in Dockerfile:
...
---> Using cache
---> dd33d4c12d29
Step 4/8 : RUN unzip /tmp/jce_policy-8.zip && rm /tmp/jce_policy-8.zip && yes | cp -v /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
---> Running in 1071273ceee5
Archive: /tmp/jce_policy-8.zip
unzip: short read
Why do I get unzip: short read when I try to build the image?

Somehow, curl on the Alpine Linux distro doesn't send the cookie header correctly while downloading the JCE zip file. It appears to download a zip file, but it is actually an HTML error page; if you view the file you can see it is HTML. I used wget instead of curl and it downloaded the file successfully. The unzip step then worked as expected.
FROM openjdk:8-jdk-alpine
RUN apk update && apk upgrade && apk add netcat-openbsd
RUN mkdir -p /usr/local/configserver
RUN cd /tmp/ && \
wget 'http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip' --header "Cookie: oraclelicense=accept-securebackup-cookie" && \
unzip jce_policy-8.zip && \
rm jce_policy-8.zip && \
yes | cp -v /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
ADD @project.build.finalName@.jar /usr/local/configserver/
ADD run.sh run.sh
RUN chmod +x run.sh
CMD ./run.sh
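If you want to confirm whether a suspect download really is a zip before extracting it, a quick check inside the image (a sketch using the file path from this example) is:
# prints readable HTML if the download is actually an error page
head -c 200 /tmp/jce_policy-8.zip
# lists the archive contents only if it is a valid zip
unzip -l /tmp/jce_policy-8.zip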

It's possible your jce_policy-8.zip archive is being recognized as an archive and unpacked by the ADD instruction (ADD auto-extracts recognized local tar archives, including compressed ones). If so, you can skip the unzip on the next line. Or switch to the COPY instruction, which does no special processing of local archives.
In general, I recommend always using the COPY instruction to bring files and directories in from the build context. Only use ADD when you specifically want the extra unpacking behaviour.
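For example, a minimal sketch of the relevant lines using COPY instead of ADD (paths follow the question's Dockerfile; -d /tmp/ keeps the extracted folder under /tmp to match the cp path; the rest of the Dockerfile stays the same):
COPY jce_policy-8.zip /tmp/
RUN unzip /tmp/jce_policy-8.zip -d /tmp/ && \
rm /tmp/jce_policy-8.zip && \
yes | cp -v /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/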

I found a working solution:
FROM openjdk:8-jdk-alpine
RUN apk update && apk upgrade && apk add netcat-openbsd && apk add curl
RUN mkdir -p /usr/local/configserver
RUN cd /tmp/ && \
curl -L -b "oraclelicense=a" http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -O && \
unzip jce_policy-8.zip && \
rm jce_policy-8.zip && \
yes | cp -v /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/
ADD @project.build.finalName@.jar /usr/local/configserver/
ADD run.sh run.sh
RUN chmod +x run.sh
CMD ./run.sh

Maybe it is related to the fact that the unzip command in Alpine is provided by BusyBox and not the standard unzip tool.
BusyBox has known bugs related to this error:
https://bugs.busybox.net/show_bug.cgi?id=8821
Here is a related issue with more details:
https://github.com/wahern/luaossl/issues/103
As a workaround, installing the standard unzip package should work.
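For example, a minimal sketch of that workaround at the top of the question's Dockerfile (the Info-ZIP unzip package replaces the BusyBox applet; the remaining steps stay unchanged):
FROM openjdk:8-jdk-alpine
# install the full Info-ZIP unzip so the BusyBox applet is not used
RUN apk add --no-cache unzip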

Related

upload the ssh folder with keys to docker

I need to get the .ssh folder with the keys into the Docker image.
Dockerfile:
FROM python:3.6-alpine3.12
RUN mkdir /code && mkdir /data
ADD . /code
WORKDIR /code
RUN pip3 install -r requirement && apk add git
RUN mkdir /root/.ssh && -v ~/.ssh:/root/.ssh
RUN apk add -y wget
Error when building:
/bin/sh: illegal option -
The command '/bin/sh -c -v ~/.ssh:/root/.ssh' returned a non-zero code: 2
The shell does not recognize the command -v ~/.ssh:/root/.ssh
Try this:
FROM python:3.6-alpine3.12
ADD . /code
WORKDIR /code
RUN pip3 install -r requirement && \
apk add git wget && \
mkdir /data
# the .ssh folder must be inside the build context; COPY cannot read paths outside it
COPY .ssh /root/.ssh
PS: I added some Dockerfile optimizations for you.
EDIT:
Copying sensitive data into your image is not a good idea unless you really know what you are doing.
If your application needs to connect to a remote server you own, it is better to generate new keys specifically for it and distribute the public key to that server.
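If the keys are only needed at run time, a common alternative is to mount them into the container instead of baking them into the image; a sketch (read-only mount, image name is just a placeholder):
docker run --rm -it -v ~/.ssh:/root/.ssh:ro my-image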

Run Python scripts on command line running Docker images

I built a Docker image using a Dockerfile with Python and some libraries inside (my project code is not in it). In my local working directory there are some scripts to be run inside the container. So here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns,
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
Run pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
Try making the file executable before running it.
As John mentioned, do it in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/main.py  # <--- just add this (COPY places the file at /usr/local/share/main.py)
CMD ["/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
You can run a Python script in Docker by adding this to your Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["src/main.py", "--config=test --results-dir=/home/me/Results"]

Can't clone a GitHub public repository in a Dockerfile RUN instruction

Premise / What I want to achieve
I'm trying to clone a public git repository in a Dockerfile RUN instruction, but it's not going well...
testing environment
MacOS Mojave
10.14.6
Docker
19.03.8
python
3.6.10
bash
3.2.57
What I did
1. Make a Dockerfile
FROM python:3.6
LABEL maintainer="aaa"
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/local/src/
RUN git clone https://path/to/target_repository.git \
&& chmod -R 755 ./target_repository \
&& cd ./target_repository \
&& pip install -r requirements.txt \
&& mkdir -p ./data/hojin/zip \
&& mv ../13_tokyo_all_20200529.zip ./data/hojin/zip/ \
&& sh scripts/download.sh \
&& pip install IPython seqeval \
&& sh scripts/generate_alias.sh \
&& python tools/dataset_converter.py \
&& python tools/dataset_preprocess.py
EXPOSE 80
CMD ["/sbin/init"]
Problem / Error messages
...
Cloning into 'target-repository'...
chmod: cannot access './target-repository': No such file or directory
...
That's all.
I get the errors above. What should I do?
Could you lend me a hand?
I changed your Dockerfile a bit to test with my repo and it works well.
FROM python:3.6
LABEL maintainer="aaa"
SHELL ["/bin/bash", "-c"]
WORKDIR /usr/local/src/
RUN git clone https://path/to/my_repository.git
RUN chmod -R 755 ./my_repository
RUN cd ./my_repository
You can use separate RUN commands like this to be clearer, and make sure that you type exactly the name of the folder that git creates when cloning.
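Also note that a cd in its own RUN step does not carry over to the next instruction, since each RUN starts a fresh shell. If later steps need to run inside the cloned repository, set WORKDIR instead; a sketch (repository and requirements file names follow the example above):
WORKDIR /usr/local/src/my_repository
RUN pip install -r requirements.txt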

Docker : Dockerfile can't execute COPY

I have to build a Docker image.
Inside my repository I have these files:
Dockerfile
docker-prompt
My Dockerfile is:
FROM fortio/fortio:1.3.1 as fortiobuild
FROM docker:stable-dind
RUN apk add --no-cache tcpdump apache2-utils lynx git tmux py2-pip apache2-utils vim build-base gettext-dev curl bash-completion bash util-linux jq openssh openssl tree python python-dev py-pip libffi-dev openssl-dev libgcc nfs-utils
ENV COMPOSE_VERSION=1.24.1
RUN pip install docker-compose==${COMPOSE_VERSION}
RUN mkdir /etc/bash_completion.d \
&& curl https://raw.githubusercontent.com/docker/cli/master/contrib/completion/bash/docker -o /etc/bash_completion.d/docker \
&& sed -i "s/ash/bash/" /etc/passwd
RUN rm /sbin/modprobe && echo '#!/bin/true' >/sbin/modprobe && chmod +x /sbin/modprobe
COPY ["docker-prompt", "sudo", "/usr/local/bin/"]
I ran this command:
docker build -t "myImage"
but it fails with this:
Step 8/8 : COPY ["docker-prompt", "sudo", "/usr/local/bin/"]
COPY failed: stat /var/lib/docker/tmp/docker-builder273771066/sudo: no such file or directory
Since it's not clear to me what the problem is, any suggestions?
COPY treats every argument except the last one as a source path, so here "sudo" is interpreted as a file to copy from the build context; since no such file exists there, the stat fails. If you wanted to set ownership of the copied file, use --chown instead; otherwise COPY will keep treating "sudo" as a source path.
COPY
COPY has two forms:
COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>",... "<dest>"] (this form is required for paths containing whitespace)
Note:
The --chown feature is only supported on Dockerfiles used to build
Linux containers, and will not work on Windows containers. Since user
and group ownership concepts do not translate between Linux and
Windows, the use of /etc/passwd and /etc/group for translating user
and group names to IDs restricts this feature to only be viable for
Linux OS-based containers.
I assume you are looking for something like:
COPY --chown=root:root docker-prompt /usr/local/bin/
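If the goal was simply to install the prompt script, copying only the file that actually exists in the build context avoids the stat error; a sketch:
COPY docker-prompt /usr/local/bin/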

Setting up NSCA in Docker Alpine image for passive nagios check

On the Alpine Linux package site https://pkgs.alpinelinux.org/packages,
the NSCA packages have not been added yet. Is there an alternative way to set up NSCA in Alpine Linux for passive checks?
If there is no package for it, you can always build it yourself.
FROM alpine AS builder
ARG NSCA_VERSION=2.9.2
RUN apk update && apk add build-base gcc wget git
RUN wget http://prdownloads.sourceforge.net/nagios/nsca-$NSCA_VERSION.tar.gz
RUN tar xzf nsca-$NSCA_VERSION.tar.gz
RUN cd nsca-$NSCA_VERSION && ./configure && make all
RUN ls -lah nsca-$NSCA_VERSION/src
RUN mkdir -p /dist/bin && cp nsca-$NSCA_VERSION/src/nsca /dist/bin
RUN mkdir -p /dist/etc && cp nsca-$NSCA_VERSION/sample-config/nsca.cfg /dist/etc
FROM alpine
COPY --from=builder /dist/bin/nsca /bin/
COPY --from=builder /dist/etc/nsca.cfg /etc/
Since this is using multiple stages, your resulting image will not contain development files and will still be small.
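To build it, something like the following should work (the image tag is just an example), and you can then confirm the binary and sample config landed where expected:
docker build -t nsca-alpine .
docker run --rm nsca-alpine ls -l /bin/nsca /etc/nsca.cfg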
