How can I get act to create containers with a defined volume mounted?
I have created a local instance of a Docker runner. I'm looking to optimise running https://github.com/marketplace/actions/setup-miniconda.
I have created a Docker image and a local Docker registry.
I have switched to conda-lock to remove the need to resolve environment.yaml.
The final step is to share Conda package downloads across the containers that act creates, via a volume mount in the container. The equivalent of:
docker run -it -v /home/vagrant/miniconda/pkgs/:/root/miniconda3/pkgs localhost:5000/my-act /bin/bash
I have tried patching docker_run.go:
func (cr *containerReference) Create(capAdd []string, capDrop []string) common.Executor {
    cr.input.Mounts["/root/miniconda3/pkgs"] = "/home/vagrant/miniconda/pkgs"
    return common.
        NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd).
        Then(
            common.NewPipelineExecutor(
                cr.connect(),
                cr.find(),
                cr.create(capAdd, capDrop),
            ).IfNot(common.Dryrun),
        )
}
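For reference, here is a variant of the same hard-coded patch that I think maps more directly onto the docker run -v line above. It is only a sketch: it assumes the act version in use also has a Binds field on the container input (plain host:container strings handed to Docker's HostConfig.Binds), which I have not verified against every release.
func (cr *containerReference) Create(capAdd []string, capDrop []string) common.Executor {
    // sketch: bind-mount the host pkgs cache instead of adding a Mounts entry
    cr.input.Binds = append(cr.input.Binds, "/home/vagrant/miniconda/pkgs:/root/miniconda3/pkgs")
    return common.
        NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd).
        Then(
            common.NewPipelineExecutor(
                cr.connect(),
                cr.find(),
                cr.create(capAdd, capDrop),
            ).IfNot(common.Dryrun),
        )
}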
.actrc
-P ubuntu-latest=localhost:5000/my-act
Script to create the local Docker registry:
actimg="localhost:5000/my-act"
docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker build --no-cache -t act-custom:v1 .
docker tag act-custom:v1 $actimg
docker push $actimg
Dockerfile
FROM catthehacker/ubuntu:act-latest
ARG CONDA=/root/miniconda
ENV CONDA=$CONDA
ENV INSTALL_MINICONDA=$CONDA
ARG NODE=/opt/hostedtoolcache/node/16.18.1/x64/bin/node
ENV RUNNER_TOOL_CACHE=/opt/hostedtoolcache
ENV RUNNER_TEMP=/tmp
RUN dl=$( curl -s https://api.github.com/repos/conda-incubator/setup-miniconda/releases/latest | jq -r '.zipball_url' ) \
&& wget -q -O dl2.zip $dl \
&& unzip -q dl2.zip -d dl2 \
&& env INPUT_ARCHITECTURE=x64 \
INPUT_AUTO-ACTIVATE-BASE=true \
INPUT_AUTO-UPDATE-CONDA=false \
INPUT_CLEAN-PATCHED-ENVIRONMENT-FILE=true \
INPUT_MINIFORGE-VARIANT=mambaforge \
INPUT_MINIFORGE-VERSION=latest \
INPUT_REMOVE-PROFILES=false \
INPUT_RUN-POST=true \
INPUT_USE-MAMBA=true \
${NODE} $( find dl2/ -wholename "*/dist/setup/index.js" ) \
&& source ${CONDA}3/etc/profile.d/conda.sh \
&& conda config --set default_threads 4 \
&& ${NODE} $( find dl2/ -wholename "*/dist/delete/index.js" ) \
&& mamba install conda-lock \
&& mamba clean --all \
&& rm -r /opt/hostedtoolcache/mambaforge/ \
&& rm -rf * \
&& mv /usr/local/bin/ /usr/local/bin-old \
&& ln -s ${CONDA}3/bin /usr/local/bin
ENV CONDA=${CONDA}3
ENV PATH=${CONDA}/bin:${PATH}
VOLUME /root/miniconda3/pkgs
COPY /home/vagrant/geopandas/ci/lock /root/lock
CMD [ "/usr/bin/tail -f /dev/null" ]
Related
When I use the official Elasticsearch Docker image, the ELASTIC_PASSWORD environment variable works fine:
docker run -dti -e ELASTIC_PASSWORD=my_own_password -e discovery.type=single-node elasticsearch:7.8.0
But when I build my own customised Docker image, ELASTIC_PASSWORD does not work. Can you please help me with this?
Here is my Dockerfile:
FROM ubuntu:18.04
ENV \
REFRESHED_AT=2020-06-20
###############################################################################
# INSTALLATION
###############################################################################
### install prerequisites (cURL, gosu, tzdata, JDK for Logstash)
RUN set -x \
&& apt update -qq \
&& apt install -qqy --no-install-recommends ca-certificates curl gosu tzdata openjdk-11-jdk-headless \
&& apt clean \
&& rm -rf /var/lib/apt/lists/* \
&& gosu nobody true \
&& set +x
### set current package version
ARG ELK_VERSION=7.8.0
### install Elasticsearch
# predefine env vars, as you can't define an env var that references another one in the same block
ENV \
ES_VERSION=${ELK_VERSION} \
ES_HOME=/opt/elasticsearch
ENV \
ES_PACKAGE=elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz \
ES_GID=991 \
ES_UID=991 \
ES_PATH_CONF=/etc/elasticsearch \
ES_PATH_BACKUP=/var/backups \
KIBANA_VERSION=${ELK_VERSION}
RUN DEBIAN_FRONTEND=noninteractive \
&& mkdir ${ES_HOME} \
&& curl -O https://artifacts.elastic.co/downloads/elasticsearch/${ES_PACKAGE} \
&& tar xzf ${ES_PACKAGE} -C ${ES_HOME} --strip-components=1 \
&& rm -f ${ES_PACKAGE} \
&& groupadd -r elasticsearch -g ${ES_GID} \
&& useradd -r -s /usr/sbin/nologin -M -c "Elasticsearch service user" -u ${ES_UID} -g elasticsearch elasticsearch \
&& mkdir -p /var/log/elasticsearch ${ES_PATH_CONF} ${ES_PATH_CONF}/scripts /var/lib/elasticsearch ${ES_PATH_BACKUP}
I think that in order to achieve this functionality (set ELASTIC_PASSWORD from the command line and have it work) in your own container, you need to re-create what the Elasticsearch startup script does. It's not a trivial task.
For example, here is the docker-entrypoint.sh from the official Docker image:
https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/bin/docker-entrypoint.sh
You can see that the script does all the 'hidden' work that lets us run Elasticsearch with a single command.
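To illustrate the kind of work involved, here is a minimal sketch (not a drop-in replacement for the official script) of the part of such an entrypoint that makes ELASTIC_PASSWORD take effect, assuming the layout from your Dockerfile (ES_HOME, ES_PATH_CONF, gosu installed):
#!/bin/bash
set -e

# Seed the bootstrap password from the environment, the way the official
# entrypoint does, so that a password passed with -e ELASTIC_PASSWORD=... matters.
if [[ -n "$ELASTIC_PASSWORD" ]]; then
    # create the keystore on first start if it does not exist yet
    [[ -f "${ES_PATH_CONF}/elasticsearch.keystore" ]] || "${ES_HOME}/bin/elasticsearch-keystore" create
    # store the password under bootstrap.password (-x reads the value from stdin)
    echo "$ELASTIC_PASSWORD" | "${ES_HOME}/bin/elasticsearch-keystore" add -x 'bootstrap.password'
fi

# drop privileges and hand control to the Elasticsearch process itself
exec gosu elasticsearch "${ES_HOME}/bin/elasticsearch" "$@"
Note that bootstrap.password only has an effect when X-Pack security is enabled (xpack.security.enabled=true), and the official script also handles ownership fixes, ulimits and config handling that this sketch leaves out.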
I'm trying to run RStudio Server in a docker container. Users will connect to this docker container and use RStudio via the internet.
The built-in mechanism for uploading and downloading files in RStudio is very slow, so I'd also like to run an SFTP server in a separate container.
I'm trying to link the two containers using Docker volumes but I'm having some trouble. Here is how I'm trying to run the two images.
I'm running the SFTP server using:
docker run -p 2222:22 -v /home/rstudio --name ftpserver -d atmoz/sftp rstudio:foo:1001
Then I'm trying to connect to the same directory in RStudio by doing:
docker run -d -p 8787:8787 -e PASSWORD=foo --volumes-from ftpserver --name rstudio r-studio-bio:Dockerfile
This causes RStudio to give an error:
RStudio Initialization Error. Unable to connect to service.
Likewise, I'm unable to upload to the SFTP server because it says I lack the proper permissions.
The SFTP server image is here: https://hub.docker.com/r/atmoz/sftp/
The RStudio-Server Dockerfile is:
# See the following for more info:
# https://hub.docker.com/r/pgensler/sandboxr/
# https://www.rocker-project.org/images/
# https://hub.docker.com/r/rocker/rstudio
FROM rocker/tidyverse
LABEL maintainer="Alex"
#
RUN mkdir -p $HOME/.R
RUN mkdir $HOME/Rlibs
ENV R_LIBS $HOME/Rlibs
# COPY R/Makevars /root/.R/Makevars
RUN apt-get update -qq \
&& apt-get -y --no-install-recommends install \
curl \
clang \
ccache \
default-jdk \
default-jre \
wget \
systemd \
# openssh-server \
&& R CMD javareconf \
# && systemctl ssh start \
# && systemctl enable ssh \
&& rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
# # Install additional R packages
RUN Rscript -e "BiocManager::install()"
RUN Rscript -e "BiocManager::install('multtest')"
RUN Rscript -e "install.packages('Seurat')"
How can I use a VOLUME in my Dockerfile to copy the JMeter results to my local machine?
I need to display the results locally, so how can I copy them out of the container with the help of a VOLUME?
For example: I am saving the JMeter HTML report in my container, but afterwards the container automatically stops, so someone suggested I use the Docker VOLUME instruction to make the HTML report available.
FROM alpine
ARG JMETER_VERSION="4.0"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
ENV JMETER_PLUGINS_DOWNLOAD_URL http://repo1.maven.org/maven2/kg/apc/jmeter-plugins-functions/2.0/jmeter-plugins-functions-2.0.jar
ENV JMETER_PLUGINS_FOLDER ${JMETER_HOME}/lib/ext/
# Change TimeZone TODO: TZ still is not set!
ARG TZ="Australia/Melbourne"
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& rm -rf /var/cache/apk/* \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-dummy/0.2/jmeter-plugins-dummy-0.2.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-dummy-0.2.jar
RUN curl -L --silent ${JMETER_PLUGINS_DOWNLOAD_URL}/jmeter-plugins-cmn-jmeter/0.5/jmeter-plugins-cmn-jmeter-0.5.jar -o ${JMETER_PLUGINS_FOLDER}/jmeter-plugins-cmn-jmeter-0.5.jar
# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
ENV URL_PATH=${URL}
WORKDIR ${JMETER_HOME}
#RUN export DATETIME=$(date +%Y%m%d)
RUN mkdir -p /var/www/html/"$(date +%Y%m%d)"
VOLUME /var/www/html/
#Copy the *.jmx file jmeter bin file
COPY Get_Ping_Node_API.jmx ./bin
CMD ./bin/jmeter -n -t ./bin/Get_Ping_Node_API.jmx -l ./bin/result.jtl -e -o ./bin/result_html
You can mount a Docker volume when you run the image.
Add the --volume / -v flag to your docker run command:
docker run -v "HOST DIR":"CONTAINER DIR"
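For instance, with the Dockerfile above (which declares VOLUME /var/www/html/), something along these lines should land the HTML report on the host; the image tag and host path are placeholders, and the command overrides the image's CMD so the report is written into the mounted directory:
docker run -v "$(pwd)/jmeter-results:/var/www/html" my-jmeter-image \
  ./bin/jmeter -n -t ./bin/Get_Ping_Node_API.jmx \
  -l /var/www/html/result.jtl -e -o /var/www/html/report
After the container exits, the report survives in ./jmeter-results/report on the host even once the container is removed.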
I'm trying to build a Docker image for my Gitlab CI pipeline containing docker client + gcloud along with the following gcloud components:
kubectl
docker-credential-gcr
This is my Dockerfile:
FROM docker:git
RUN mkdir /opt \
&& cd /opt \
&& wget -q https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& tar -xzf google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& rm google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& ln -s /opt/google-cloud-sdk/bin/gcloud /usr/bin/gcloud \
&& apk -q update \
&& apk -q add python \
&& apk add --update libintl \
&& apk add --virtual build_deps gettext \
&& cp /usr/bin/envsubst /usr/local/bin/envsubst \
&& apk del build_deps \
&& rm -rf /var/cache/apk/* \
&& echo "y" | gcloud components install kubectl docker-credential-gcr \
&& ln -s /opt/google-cloud-sdk/bin/kubectl /usr/bin/kubectl \
&& ln -s /opt/google-cloud-sdk/bin/docker-credential-gcr /usr/bin/docker-credential-gcr
Inside my CI flow, I need to run docker-credential-gcr (because of this issue).
The docker-credential-gcr executable is correctly installed inside /opt/google-cloud-sdk/bin, as shown by running docker run -i -t gitlabci-test ls /opt/google-cloud-sdk/bin.
It is also correctly symlinked inside /usr/bin, as shown by docker run -i -t gitlabci-test ls -la /usr/bin.
And yet, trying to call it with any of the methods below fails miserably:
docker run -i -t gitlabci-test docker-credential-gcr
docker run -i -t gitlabci-test /usr/bin/docker-credential-gcr
docker run -i -t gitlabci-test /opt/google-cloud-sdk/bin/docker-credential-gcr
Error message:
/usr/local/bin/docker-entrypoint.sh: exec: line 20: docker-credential-gcr: not found
On the other hand, running the kubectl component works fine:
docker run -i -t gitlabci-test kubectl version
Any idea how I can fix this issue so that I can run docker-credential-gcr in the container?
I have the following Dockerfile:
FROM php:5.6-apache
MAINTAINER pc_magas#openmailbox.org
EXPOSE 80
RUN apt-get update && apt-get install -y \
libjpeg-dev \
libfreetype6-dev \
libgeoip-dev \
libpng12-dev \
libldap2-dev \
zip \
mysql-client \
&& rm -rf /var/lib/apt/lists/*
RUN docker-php-ext-configure gd --with-freetype-dir=/usr --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
&& docker-php-ext-install -j$(nproc) gd mbstring mysql pdo_mysql zip ldap opcache
RUN pecl install APCu geoip
ENV PIWIK_VERSION 3.0.1
RUN curl -fsSL -o piwik.tar.gz \
"https://builds.piwik.org/piwik-${PIWIK_VERSION}.tar.gz" \
&& curl -fsSL -o piwik.tar.gz.asc \
"https://builds.piwik.org/piwik-${PIWIK_VERSION}.tar.gz.asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 814E346FA01A20DBB04B6807B5DBD5925590A237 \
&& gpg --batch --verify piwik.tar.gz.asc piwik.tar.gz \
&& rm -r "$GNUPGHOME" piwik.tar.gz.asc \
&& tar -xzf piwik.tar.gz -C /usr/src/ \
&& rm piwik.tar.gz
COPY php.ini /usr/local/etc/php/php.ini
RUN curl -fsSL -o /usr/src/piwik/misc/GeoIPCity.dat.gz http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz \
&& gunzip /usr/src/piwik/misc/GeoIPCity.dat.gz
COPY docker-entrypoint.sh /entrypoint.sh
# WORKDIR is /var/www/html (inherited via "FROM php")
# "/entrypoint.sh" will populate it at container startup from /usr/src/piwik
VOLUME /var/www/html
ENV PIWIK_DB_HOST ""
ENV PIWIK_DB_PORT ""
ENV PIWIK_DB_USER ""
ENV PIWIK_DB_PASSWORD ""
ENV PIWIK_DB_NAME ""
#Create backup and restore folders
RUN mkdir /var/backup && \
chmod 665 /var/backup && \
mkdir /var/restore && \
chmod 665 /var/restore
#Export Backup Folder
VOLUME /var/backup
#Export restore folder
VOLUME /var/restore
COPY backup.php /tmp/backup.php
RUN cp /tmp/backup.php /usr/local/bin/piwik_backup && \
chown root:root /usr/local/bin/piwik_backup && \
chmod 733 /usr/local/bin/piwik_backup && \
rm -rf /tmp/backup.php
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
That uses the following script as entrypoint:
#!/bin/bash
if [ ! -e piwik.php ]; then
    cp -R /usr/src/piwik/* /var/www/html
    chown -R www-data:www-data .
fi
: ${PIWIK_DB_HOST:=$DB_PORT_3306_TCP_ADDR}
echo "Mariadb Addr:"$DB_PORT_3306_TCP_ADDR
: ${PIWIK_DB_PORT:=${DB_PORT_3306_TCP_PORT}}
COUNTER=0
echo "Waiting for mysql to start at ${PIWIK_DB_HOST} using port ${PIWIK_DB_PORT}..."
while ! mysqladmin ping -h"$PIWIK_DB_HOST" -P $PIWIK_DB_PORT --silent; do
    if [ $COUNTER -gt 10 ] ; then
        exit 1
    fi
    echo "Connecting to ${PIWIK_DB_HOST} Failed"
    COUNTER=$[COUNTER+1]
    sleep 1
done
echo "Setting up the database connection info"
: ${PIWIK_DB_USER:=${DB_ENV_MYSQL_USER:-root}}
: ${PIWIK_DB_NAME:=${DB_ENV_MYSQL_DATABASE:-'piwik'}}
if [ "$PIWIK_DB_USER" = 'root' ]; then
: ${PIWIK_DB_PASSWORD:=$DB_ENV_MYSQL_ROOT_PASSWORD}
else
: ${PIWIK_DB_PASSWORD:=$DB_ENV_MYSQL_PASSWORD}
fi
if ! mysql -h"$PIWIK_DB_HOST" -P $PIWIK_DB_PORT -u ${PIWIK_DB_USER} -p${PIWIK_DB_PASSWORD} -e ";" ; then
    echo "The user does not exist on the mysql server: ${PIWIK_DB_HOST}"
    exit 1
fi
php console config:set --section="database" --key="host" --value=${PIWIK_DB_HOST}
php console config:set --section="database" --key="port" --value=${PIWIK_DB_PORT}
php console config:set --section="database" --key="username" --value=${PIWIK_DB_USER}
php console config:set --section="database" --key="password" --value=${PIWIK_DB_PASSWORD}
php console config:set --section="database" --key="tables_prefix" --value="piwik_"
php index.php
exec "$#"
But for some reason the entrypoint script cannot find the environment variables provided by the mariadb container, such as DB_PORT_3306_TCP_ADDR, which provides the connection to the mariadb server.
I use the following commands to run the images in containers:
docker run --name piwikdb --volume $(pwd)/volumes/db:/var/lib/db \
-e MYSQL_ROOT_PASSWORD=123 -d mariadb
docker run --volume $(pwd)/volumes/piwik:/var/www/data --link piwikdb:mysql \
-p 8081:80 -t ^hash of the freshly built image^
I tried to troubleshoot it, but I cannot figure out why that happens.
This is not how you want to do linking.
The correct, supported way is one of the following.
Use docker-compose
If you use docker-compose, you would name your database service (say, db), and then your other containers can be told to connect to db as if it were a hostname.
You can use env_file in docker-compose.yml to specify a file with parameters such as database name, mariadb port, authentication info, and so on. Each container can load the same env_file.
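A minimal sketch of such a docker-compose.yml, assuming your Piwik image is built from the Dockerfile in the current directory (service names, host paths and the password are illustrative):
version: "3"
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: "123"
    volumes:
      # mariadb keeps its data under /var/lib/mysql
      - ./volumes/db:/var/lib/mysql
  piwik:
    build: .
    ports:
      - "8081:80"
    environment:
      # the service name "db" doubles as the hostname on the compose network
      PIWIK_DB_HOST: db
      PIWIK_DB_PORT: "3306"
      PIWIK_DB_USER: root
      PIWIK_DB_PASSWORD: "123"
    depends_on:
      - db
These variables match the PIWIK_DB_* names your entrypoint already reads; they could equally live in a shared file referenced via env_file.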
Use a docker network
If you prefer to run containers without using compose, just make sure they are on the same network, like this:
docker network create myapp
docker run --name piwikdb --volume $(pwd)/volumes/db:/var/lib/db \
-e MYSQL_ROOT_PASSWORD=123 -d --network myapp mariadb
docker run --volume $(pwd)/volumes/piwik:/var/www/data \
--network myapp -p 8081:80 -t ^hash of the freshly built image^
If all containers are on the same network, then as with docker-compose, you can just tell your piwik container to use "piwikdb" as the server (i.e. the container name of your other container).
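Concretely, with the network created above you can drop --link and point the entrypoint's own variables at the other container by name (the image hash placeholder is kept from your command):
docker run --volume $(pwd)/volumes/piwik:/var/www/data \
  --network myapp -p 8081:80 \
  -e PIWIK_DB_HOST=piwikdb -e PIWIK_DB_PORT=3306 \
  -e PIWIK_DB_USER=root -e PIWIK_DB_PASSWORD=123 \
  -t ^hash of the freshly built image^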