I'm trying to build a Docker image for my GitLab CI pipeline containing the Docker client and gcloud, along with the following gcloud components:
kubectl
docker-credential-gcr
This is my Dockerfile:
FROM docker:git
RUN mkdir /opt \
&& cd /opt \
&& wget -q https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& tar -xzf google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& rm google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& ln -s /opt/google-cloud-sdk/bin/gcloud /usr/bin/gcloud \
&& apk -q update \
&& apk -q add python \
&& apk add --update libintl \
&& apk add --virtual build_deps gettext \
&& cp /usr/bin/envsubst /usr/local/bin/envsubst \
&& apk del build_deps \
&& rm -rf /var/cache/apk/* \
&& echo "y" | gcloud components install kubectl docker-credential-gcr \
&& ln -s /opt/google-cloud-sdk/bin/kubectl /usr/bin/kubectl \
&& ln -s /opt/google-cloud-sdk/bin/docker-credential-gcr /usr/bin/docker-credential-gcr
Inside my CI flow, I need to run docker-credential-gcr (because of this issue).
The docker-credential-gcr executable is correctly installed inside /opt/google-cloud-sdk/bin, as shown by running docker run -i -t gitlabci-test ls /opt/google-cloud-sdk/bin.
It is also correctly symlinked inside /usr/bin, as shown by docker run -i -t gitlabci-test ls -la /usr/bin.
And yet, trying to call it with any of the methods below fails miserably:
docker run -i -t gitlabci-test docker-credential-gcr
docker run -i -t gitlabci-test /usr/bin/docker-credential-gcr
docker run -i -t gitlabci-test /opt/google-cloud-sdk/bin/docker-credential-gcr
Error message:
/usr/local/bin/docker-entrypoint.sh: exec: line 20: docker-credential-gcr: not found
On the other hand, running the kubectl component works fine:
docker run -i -t gitlabci-test kubectl version
Any idea how I can fix this issue so that I can run docker-credential-gcr within the container?
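For what it's worth, on an Alpine-based image like docker:git, an "exec: ...: not found" error for a file that clearly exists often means the binary's dynamic loader (glibc) is missing rather than the file itself, which would also be consistent with kubectl working if kubectl is statically linked. A diagnostic sketch, assuming the gitlabci-test tag from above (file and musl-utils are installed on the fly just for the check):
docker run -i -t gitlabci-test sh -c \
  "apk add --no-cache file musl-utils >/dev/null \
  && file /opt/google-cloud-sdk/bin/docker-credential-gcr \
  && ldd /opt/google-cloud-sdk/bin/docker-credential-gcr"
If ldd reports an interpreter such as /lib64/ld-linux-x86-64.so.2 that is not present in the image, adding the libc6-compat package or using a statically linked build of docker-credential-gcr is the usual way out.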
Related
How can I get act to create containers with a defined volume mounted?
I have created a local instance of a Docker runner, and I'm looking to optimise running https://github.com/marketplace/actions/setup-miniconda. So far I:
have created a Docker image and a Docker registry,
have switched to using conda-lock to remove the need to resolve environment.yaml.
The final step is to share Conda package downloads across the containers created by act, using a volume mount in the container. Equivalent of:
docker run -it -v /home/vagrant/miniconda/pkgs/:/root/miniconda3/pkgs localhost:5000/my-act /bin/bash
I have tried patching docker_run.go:
func (cr *containerReference) Create(capAdd []string, capDrop []string) common.Executor {
    cr.input.Mounts["/root/miniconda3/pkgs"] = "/home/vagrant/miniconda/pkgs"
    return common.
        NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd).
        Then(
            common.NewPipelineExecutor(
                cr.connect(),
                cr.find(),
                cr.create(capAdd, capDrop),
            ).IfNot(common.Dryrun),
        )
}
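To check whether the patched build of act actually attaches the mount, you can inspect the container it creates while a job is running; a rough sketch (the act- name filter is an assumption about how act names its job containers):
docker ps --filter "name=act-" --format "{{.ID}} {{.Names}}"
docker inspect --format '{{json .Mounts}}' <container-id-from-above>
If the patch works, the Mounts output should list /root/miniconda3/pkgs.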
.actrc
-P ubuntu-latest=localhost:5000/my-act
Script to create a local Docker registry:
actimg="localhost:5000/my-act"
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker build --no-cache -t act-custom:v1 .
docker tag act-custom:v1 $actimg
docker push $actimg
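Once the push completes, the registry HTTP API can confirm the image really is in the local registry (a quick sanity check, not required for act itself):
curl -s http://localhost:5000/v2/_catalog
curl -s http://localhost:5000/v2/my-act/tags/list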
Dockerfile
FROM catthehacker/ubuntu:act-latest
ARG CONDA=/root/miniconda
ENV CONDA=$CONDA
ENV INSTALL_MINICONDA=$CONDA
ARG NODE=/opt/hostedtoolcache/node/16.18.1/x64/bin/node
ENV RUNNER_TOOL_CACHE=/opt/hostedtoolcache
ENV RUNNER_TEMP=/tmp
RUN dl=$( curl -s https://api.github.com/repos/conda-incubator/setup-miniconda/releases/latest | jq -r '.zipball_url' ) \
&& wget -q -O dl2.zip $dl \
&& unzip -q dl2.zip -d dl2 \
&& env INPUT_ARCHITECTURE=x64 \
INPUT_AUTO-ACTIVATE-BASE=true \
INPUT_AUTO-UPDATE-CONDA=false \
INPUT_CLEAN-PATCHED-ENVIRONMENT-FILE=true \
INPUT_MINIFORGE-VARIANT=mambaforge \
INPUT_MINIFORGE-VERSION=latest \
INPUT_REMOVE-PROFILES=false \
INPUT_RUN-POST=true \
INPUT_USE-MAMBA=true \
${NODE} $( find dl2/ -wholename "*/dist/setup/index.js" ) \
&& source ${CONDA}3/etc/profile.d/conda.sh \
&& conda config --set default_threads 4 \
&& ${NODE} $( find dl2/ -wholename "*/dist/delete/index.js" ) \
&& mamba install conda-lock \
&& mamba clean --all \
&& rm -r /opt/hostedtoolcache/mambaforge/ \
&& rm -rf * \
&& mv /usr/local/bin/ /usr/local/bin-old \
&& ln -s ${CONDA}3/bin /usr/local/bin
ENV CONDA=${CONDA}3
ENV PATH=${CONDA}/bin:${PATH}
VOLUME /root/miniconda3/pkgs
COPY /home/vagrant/geopandas/ci/lock /root/lock
CMD [ "/usr/bin/tail -f /dev/null" ]
I'm trying to build a custom image of OpenTSDB to run as a non-root user. Our k8s clusters have security policies that don't allow containers to run as root. I'm using an existing Dockerfile from here: https://hub.docker.com/r/petergrace/opentsdb-docker/dockerfile
Below is my Dockerfile, where I have added an extra step to create a new user 'opentsdb' and, at the end, run as USER 'opentsdb':
FROM alpine:latest
ENV TINI_VERSION v0.18.0
ENV TSDB_VERSION 2.4.0
ENV HBASE_VERSION 1.4.4
ENV GNUPLOT_VERSION 5.2.4
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/bin/
ENV ALPINE_PACKAGES "rsyslog bash openjdk8 make wget libgd libpng libjpeg libwebp libjpeg-turbo cairo pango lua"
ENV BUILD_PACKAGES "build-base autoconf automake git python3-dev cairo-dev pango-dev gd-dev lua-dev readline-dev libpng-dev libjpeg-turbo-dev libwebp-dev sed"
ENV HBASE_OPTS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
ENV JVMARGS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -enableassertions -enablesystemassertions"
RUN addgroup opentsdb && adduser -D -u 100 -G opentsdb opentsdb
# Tini is a tiny init that helps when a container is being culled to stop things nicely
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-amd64 /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
# Add the base packages we'll need
RUN apk --update add apk-tools \
&& apk add ${ALPINE_PACKAGES} \
# repo required for gnuplot \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.0/testing/ \
&& mkdir -p /opt/opentsdb
WORKDIR /opt/opentsdb/
# Add build deps, build opentsdb, and clean up afterwards.
RUN set -ex && apk add --virtual builddeps ${BUILD_PACKAGES}
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN wget --no-check-certificate \
-O v${TSDB_VERSION}.zip \
https://github.com/OpenTSDB/opentsdb/archive/v${TSDB_VERSION}.zip \
&& unzip v${TSDB_VERSION}.zip \
&& rm v${TSDB_VERSION}.zip \
&& cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& echo "tsd.http.request.enable_chunked = true" >> src/opentsdb.conf \
&& echo "tsd.http.request.max_chunk = 1000000" >> src/opentsdb.conf
RUN cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& find . | xargs grep -s central.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/central/https:\/\/repo1/g" \
&& find . | xargs grep -s repo1.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/repo1/https:\/\/repo1/g" \
&& ./build.sh \
&& cp build-aux/install-sh build/build-aux \
&& cd build \
&& make install \
&& cd / \
&& rm -rf /opt/opentsdb/opentsdb-${TSDB_VERSION}
RUN cd /tmp && \
wget --no-check-certificate https://sourceforge.net/projects/gnuplot/files/gnuplot/${GNUPLOT_VERSION}/gnuplot-${GNUPLOT_VERSION}.tar.gz && \
tar xzf gnuplot-${GNUPLOT_VERSION}.tar.gz && \
cd gnuplot-${GNUPLOT_VERSION} && \
./configure && \
make install && \
cd /tmp && rm -rf /tmp/gnuplot-${GNUPLOT_VERSION} && rm /tmp/gnuplot-${GNUPLOT_VERSION}.tar.gz
RUN apk del builddeps && rm -rf /var/cache/apk/*
#Install HBase and scripts
RUN mkdir -p /data/hbase /root/.profile.d /opt/downloads
WORKDIR /opt/downloads
RUN wget -O hbase-${HBASE_VERSION}.bin.tar.gz http://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
&& tar xzvf hbase-${HBASE_VERSION}.bin.tar.gz \
&& mv hbase-${HBASE_VERSION} /opt/hbase \
&& rm -r /opt/hbase/docs \
&& rm hbase-${HBASE_VERSION}.bin.tar.gz
# Add misc startup files
RUN ln -s /usr/local/share/opentsdb/etc/opentsdb /etc/opentsdb \
&& rm /etc/opentsdb/opentsdb.conf \
&& mkdir /opentsdb-plugins
ADD files/opentsdb.conf /etc/opentsdb/opentsdb.conf.sample
ADD files/hbase-site.xml /opt/hbase/conf/hbase-site.xml.sample
ADD files/start_opentsdb.sh /opt/bin/
ADD files/create_tsdb_tables.sh /opt/bin/
ADD files/start_hbase.sh /opt/bin/
ADD files/entrypoint.sh /entrypoint.sh
# Fix ENV variables in installed scripts
RUN for i in /opt/bin/start_hbase.sh /opt/bin/start_opentsdb.sh /opt/bin/create_tsdb_tables.sh; \
do \
sed -i "s#::JAVA_HOME::#$JAVA_HOME#g; s#::PATH::#$PATH#g; s#::TSDB_VERSION::#$TSDB_VERSION#g;" $i; \
done
RUN echo "export HBASE_OPTS=\"${HBASE_OPTS}\"" >> /opt/hbase/conf/hbase-env.sh
#4242 is tsdb, rest are hbase ports
EXPOSE 60000 60010 60030 4242 16010 16070
USER opentsdb
#HBase is configured to store data in /data/hbase, vol-mount it to persist your data.
VOLUME ["/data/hbase", "/tmp", "/opentsdb-plugins"]
CMD ["/entrypoint.sh"]
However, the newly built image throws an error saying permission denied for the files in /opt/bin/, and OpenTSDB does not get deployed correctly.
Locally, using Docker Desktop, everything works fine as root when I run the command below:
docker run -dp 4242:4242 petergrace/opentsdb-docker
Do I need to use any chown commands too?
Could you help me figure out how to get OpenTSDB deployed correctly using UID 100? Thanks in advance!
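For reference, everything in the Dockerfile above is created as root, so before the USER opentsdb line the non-root user generally needs ownership of (or at least read/execute access to) the paths it uses at runtime; a minimal sketch of the kind of adjustment meant here (paths taken from the Dockerfile above; the exact set and modes are assumptions):
# just before USER opentsdb
RUN chmod +x /opt/bin/*.sh /entrypoint.sh \
    && chown -R opentsdb:opentsdb /opt/bin /opt/hbase /data/hbase /opt/downloads /opentsdb-plugins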
I am asking for a massive favor. I have been stuck on the issue below for the last couple of days; if someone can help, that would be great. Back to the issue: I have built a Docker image and container using the following code (Docker + Apache Spark).
Dockerfile:
FROM debian:stretch
MAINTAINER Getty Images "https://github.com/gettyimages"
RUN apt-get update \
&& apt-get install -y locales \
&& dpkg-reconfigure -f noninteractive locales \
&& locale-gen C.UTF-8 \
&& /usr/sbin/update-locale LANG=C.UTF-8 \
&& echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen \
&& locale-gen \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Users with other locales should set this in their derivative image
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
RUN apt-get update \
&& apt-get install -y curl unzip \
python3 python3-setuptools \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& easy_install3 pip py4j \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# http://blog.stuart.axelbrooke.com/python-3-on-spark-return-of-the-pythonhashseed
ENV PYTHONHASHSEED 0
ENV PYTHONIOENCODING UTF-8
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
# JAVA
RUN apt-get update \
&& apt-get install -y openjdk-8-jre \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# HADOOP
ENV HADOOP_VERSION 3.0.0
ENV HADOOP_HOME /usr/hadoop-$HADOOP_VERSION
ENV HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
ENV PATH $PATH:$HADOOP_HOME/bin
RUN curl -sL --retry 3 \
"http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz" \
| gunzip \
| tar -x -C /usr/ \
&& rm -rf $HADOOP_HOME/share/doc \
&& chown -R root:root $HADOOP_HOME
# SPARK
ENV SPARK_VERSION 2.4.1
ENV SPARK_PACKAGE spark-${SPARK_VERSION}-bin-without-hadoop
ENV SPARK_HOME /usr/spark-${SPARK_VERSION}
ENV SPARK_DIST_CLASSPATH="$HADOOP_HOME/etc/hadoop/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/yarn/lib/*:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/tools/lib/*"
ENV PATH $PATH:${SPARK_HOME}/bin
RUN curl -sL --retry 3 \
"https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${SPARK_PACKAGE}.tgz" \
| gunzip \
| tar x -C /usr/ \
&& mv /usr/$SPARK_PACKAGE $SPARK_HOME \
&& chown -R root:root $SPARK_HOME
WORKDIR $SPARK_HOME
CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master"]
Command:
ubuntu@ip-123.43.11.136:~$ sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py
I got the error below.
Error: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/home/ubuntu\": permission denied": unknown.
Can someone help me figure out what the issue could be? I have gone through several links, but no luck; I am still not able to resolve the issue.
ERRO[0001] error waiting for the container: context cancelled
Everything passed after the image name sparkimage is treated as an argument to the Docker ENTRYPOINT.
For example
Entrypoint ["node"]
so when you start
docker run -it my_image app.js
Here app.js becomes the argument to node, which will start app.js; Docker treats it like node app.js.
So you are passing an invalid option to the image in your docker run command: since there is no ENTRYPOINT in your Dockerfile, the command effectively becomes
CMD ["/home/ubuntu", "bin/spark-submit", "./count.py"]
and Docker tries to execute /home/ubuntu, which is why it throws the permission denied error for /home/ubuntu.
You can try these two combinations.
ENTRYPOINT ["bin/spark-class", "org.apache.spark.deploy.master.Master"]
with the run command:
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage /home/ubuntu bin/spark-submit ./count.py
OR
CMD ["bin/spark-class", "org.apache.spark.deploy.master.Master","/home/ubuntu bin/spark-submit ./count.py"]
with the docker run command:
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage
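Another option that leaves the Dockerfile untouched is to override the command with a valid executable and pass the script by its mounted path; a sketch, assuming count.py sits in the directory mounted at /home/ubuntu (spark-submit resolves because the WORKDIR is $SPARK_HOME):
sudo docker run -it --rm -v $(pwd):/home/ubuntu sparkimage \
  bin/spark-submit /home/ubuntu/count.py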
The issue has been resolved. I corrected the mount path, executed the command again, and it worked fine without any issue.
I have been trying to build an ISO image for Alpine Linux inside a Docker container, following the standard instructions here; however, I seem to be unable to actually write the .iso back into the mounted volume because of libburn:
>>> mkimage-x86_64: Creating alpine-standard-edge-x86_64.iso
xorriso 1.4.8 : RockRidge filesystem manipulator, libburnia project.
libburn : SORRY : Failed to open device (a pseudo-drive) : Permission denied
libburn : FATAL : Burn run failed
xorriso : FATAL : -abort_on 'FAILURE' encountered 'FATAL' during image writing
libisofs: MISHAP : Image write cancelled
xorriso : FAILURE : libburn indicates failure with writing.
This is the standard result of trying to run the downloaded script from the tutorial:
sh aports/scripts/mkimage.sh --tag edge --outdir /build2/ --arch x86_64 --repository http://dl-cdn.alpinelinux.org/alpine/edge/main --profile standard
The Docker image I'm using:
FROM alpine:latest
RUN addgroup root abuild
RUN apk add --update \
alpine-sdk \
# build-base \
apk-tools \
alpine-conf \
busybox \
git \
fakeroot \
syslinux \
xorriso \
squashfs-tools \
mtools \
dosfstools \
grub-efi \
&& rm -rf /var/cache/apk/*
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mkdir /usr/src/app/build
RUN touch /usr/src/app/build/worked.txt
RUN adduser -G abuild -g "Alpine Package Builder" -s /bin/sh -u 12345 -D builder
RUN echo "builder:newpass"|chpasswd
RUN chgrp -R abuild /usr/local; \
find /usr/local -type d | xargs chmod g+w; \
echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/builder; \
chmod 0440 /etc/sudoers.d/builder
WORKDIR /build2/
RUN git clone git://git.alpinelinux.org/aports
RUN chmod +x aports/scripts/mkimage.sh
RUN abuild-keygen -i -a
USER builder
I have looked over the official forum; only one post mentioned something similar, but it did not allude to any actual resolution.
Failing a solution to this, can anyone recommend a good alternative minimal distro whose ISO can be built via script for x86, x64 and the Raspberry Pi?
You can easily create your own Alpine Linux image using the alpine-make-vm-image script.
Example:
sudo ./alpine-make-vm-image \
--image-format qcow2 \
--image-size 5G \
--packages "ca-certificates git ssl_client" \
--script-chroot \
alpine-$(date +%Y-%m-%d).qcow2 -- ./configure.sh
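The script itself is a standalone shell script, so it can be fetched and made executable before the invocation above (the URL is an assumption based on the project living under the alpinelinux GitHub organisation):
wget https://raw.githubusercontent.com/alpinelinux/alpine-make-vm-image/master/alpine-make-vm-image
chmod +x alpine-make-vm-image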
You're getting a permission denied error because the user you created can't access the pseudo device needed by xorriso. I removed all the user creation parts and just ran the whole thing as root and it works.
Here's the Dockerfile I used:
FROM alpine:latest
RUN apk add --no-cache \
alpine-conf \
alpine-sdk \
apk-tools \
dosfstools \
grub-efi \
mtools \
squashfs-tools \
syslinux \
xorriso
WORKDIR /src
RUN git clone git://git.alpinelinux.org/aports
RUN chmod +x aports/scripts/mkimage.sh
RUN addgroup root abuild
RUN abuild-keygen -i -a -n
WORKDIR /build
ENTRYPOINT ["/src/aports/scripts/mkimage.sh"]
CMD ["--tag", "edge", "--arch", "x86_64", "--repository", "http://dl-cdn.alpinelinux.org/alpine/edge/main", "--profile", "standard"]
Then build and run:
docker build -t alpine-iso .
docker run -v "$(pwd):/build" -it alpine-iso
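If the run completes, the generated image should appear in the directory you mounted at /build (the filename below matches the log output in the question):
ls -lh alpine-standard-*.iso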
I built a Docker image to run helm in a container.
FROM alpine:edge
ARG HELM_VERSION=2.9.0
ENV URL="https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz"
RUN apk add --no-cache curl && \
curl -L ${URL} |tar xvz && \
cp linux-amd64/helm /usr/bin/helm && \
chmod +x /usr/bin/helm && \
rm -rf linux-amd64 && \
apk del curl
WORKDIR /apps
ENTRYPOINT ["helm"]
CMD ["--help"]
I have put the kubeconfig at the default path ~/.kube/config, and I can run helm commands locally without problems.
But when I run a helm command within the container, I get the error below:
$ docker run -ti --rm -v ~/.kube:/root/.kube -v $(pwd):/apps helm list
Error: Get https://xxxx.yl4.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp 10.xxx.7.xxx:443: i/o timeout
Anything I can check?
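A couple of quick checks can help separate a kubeconfig problem from a plain network problem; a sketch, with <api-server-host> standing in for the EKS endpoint hostname from the error message (not filled in here):
docker run --rm -v ~/.kube:/root/.kube --entrypoint sh helm -c \
  "ls /root/.kube && wget -T 5 -O- https://<api-server-host>/version"
A timeout on the wget call points at the container's network path to the (private) EKS endpoint rather than at helm or the kubeconfig.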
Updates
I tried again the next day, and it works without any changes.