Permission denied error from Docker container in Snakemake - docker

I had built a Docker container from this Dockerfile previously and it worked fine:
FROM perl:5.32
MAINTAINER Matthew Jordan Oldach, moldach686@gmail.com
WORKDIR /usr/local/bin
# Install cpan modules
RUN cpanm install --force Cwd Getopt::Long POSIX File::Basename List::Util Bio::DB::Fasta Bio::Seq Bio::SeqUtils Bio::SeqIO Set::IntervalTree Set::IntSpan
RUN apt-get install tar
# Download CooVar-v0.07
RUN wget http://genome.sfu.ca/projects/coovar/CooVar-0.07.tar.gz
RUN tar xvf CooVar-0.07.tar.gz
RUN cd coovar-0.07; chmod +x scripts/* coovar.pl
# Set WORKDIR to /data -- predefined mount location.
RUN mkdir /data
WORKDIR /data
# Set Entrypoint
ENTRYPOINT ["perl", "/usr/local/bin/coovar-0.07/coovar.pl"]
The only issue was that I found a slight difference between what is on the repo and the coovar-0.07 on our server (the extract-cdna.pl script differs slightly).
In order to reproduce our pipeline I'll need to COPY the local copy of CooVar into the container (rather than wget it).
I've therefore tried the following Dockerfile:
FROM perl:5.32
MAINTAINER Matthew Jordan Oldach, moldach686@gmail.com
WORKDIR /usr/local/bin
# Install cpan modules
RUN cpanm install --force Cwd Getopt::Long POSIX File::Basename List::Util Bio::DB::Fasta Bio::Seq Bio::SeqUtils Bio::SeqIO Set::IntervalTree Set::IntSpan
# Download CooVar-v0.07
COPY coovar-0.07 /usr/local/bin/coovar-0.07
RUN cd coovar-0.07; chmod +x scripts/* coovar.pl
# Set WORKDIR to /data -- predefined mount location.
RUN mkdir /data
WORKDIR /data
# Set Entrypoint
ENTRYPOINT ["perl", "/usr/local/bin/coovar-0.07/coovar.pl"]
It appears I could run the main script (coovar.pl) inside the container without a Permission denied error:
# pull the container
$ sudo docker pull moldach686/coovar-v0.07:latest
# force entry point of `moldach686/coovar-v0.07` to /bin/bash
## in order to investigate file system
$ sudo docker run -it --entrypoint /bin/bash moldach686/coovar-v0.07
root@c7459dbe216a:/data# perl /usr/local/bin/coovar-0.07/coovar.pl
USAGE: ./coovar.pl -e EXONS_GFF -r REFERENCE_FASTA (-t GVS_TAB_FORMAT | -v GVS_VCF_FORMAT) [-o OUTPUT_DIRECTORY] [--circos] [--feature_source] [--feature_type]
Program parameter details provided in file README.
However, when I tried to incorporate this into my Snakemake workflow I got the following Permission denied error:
Workflow defines that rule get_vep_cache is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /cvmfs/soft.computecanada.ca/nix/var/nix/profiles/16.09/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 coovar
1
[Tue Nov 3 21:56:51 2020]
rule coovar:
input: variant_calling/varscan/MTG470.vcf, refs/c_elegans.PRJNA13758.WS265.genomic.fa
output: annotation/coovar/varscan/MTG470/categorized-gvs.gvf, annotation/coovar/varscan/MTG470.annotated.vcf, annotation/coovar/varscan/filtration/MTG470_keep.tsv, annotation/coovar/varscan/filtration/MTG470_exclude.tsv
jobid: 0
wildcards: sample=MTG470
resources: mem=4000, time=10
Activating singularity image /scratch/moldach/COOVAR/cbc22e3a26af1c31fb0e4fcae240baf8.simg
Can't open perl script "/usr/local/bin/coovar-0.07/coovar.pl": Permission denied

The solution I found to work was adding the following line to the Dockerfile:
RUN echo "user ALL=NOPASSWD: ALL" >> /etc/sudoers
This adds the user to the sudoers file, granting it passwordless sudo:
FROM perl:5.32
MAINTAINER Matthew Jordan Oldach, moldach686@gmail.com
USER root
WORKDIR /usr/local/bin
# Install cpan modules
RUN cpanm install --force Cwd Getopt::Long POSIX File::Basename List::Util Bio::DB::Fasta Bio::Seq Bio::SeqUtils Bio::SeqIO Set::IntervalTree Set::IntSpan
RUN echo "user ALL=NOPASSWD: ALL" >> /etc/sudoers
# Download CooVar-v0.07
COPY coovar-0.07 /usr/local/bin/coovar-0.07
RUN cd coovar-0.07; chmod a+rwx scripts/* coovar.pl
# Download Bedtools 2.27.1
ENV VERSION 2.27.1
ENV NAME bedtools2
ENV URL "https://github.com/arq5x/bedtools2/releases/download/v${VERSION}/bedtools-${VERSION}.tar.gz"
WORKDIR /tmp
RUN wget -q -O - $URL | tar -zxv && \
cd ${NAME} && \
make -j 4 && \
cd .. && \
cp ./${NAME}/bin/bedtools /usr/local/bin/ && \
strip /usr/local/bin/*; true && \
rm -rf ./${NAME}/
# Set WORKDIR to /data -- predefined mount location.
RUN mkdir /data
WORKDIR /data
# Set Entrypoint
ENTRYPOINT ["perl", "/usr/local/bin/coovar-0.07/coovar.pl"]

Related

local uaa docker image container not starting in windows docker

I have built a local uaa docker image and tried to run it locally, but I am getting an error when I try to start the docker image.
I built the docker image with the command below, and the build is successful:
docker build -t uaa-local --build-arg uaa_yml_name=local.yml .
When I try to run the local uaa docker image, I get the error below. What am I doing wrong?
Content of the Dockerfile:
FROM openjdk:11-jre
ARG uaa_yml_name=local.yml
ENV UAA_CONFIG_PATH /uaa
ENV CATALINA_HOME /tomcat
ADD run.sh /tmp/
ADD conf/$uaa_yml_name /uaa/uaa.yml
RUN chmod +x /tmp/run.sh
RUN wget -q https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.57/bin/apache-tomcat-8.5.57.tar.gz
RUN tar zxf apache-tomcat-8.5.57.tar.gz
RUN rm apache-tomcat-8.5.57.tar.gz
RUN mkdir /tomcat
RUN mv apache-tomcat-8.5.57/* /tomcat
RUN rm -rf /tomcat/webapps/*
ADD dist/cloudfoundry-identity-uaa-74.22.0.war /tomcat/webapps/
RUN mv /tomcat/webapps/cloudfoundry-identity-uaa-74.22.0.war /tomcat/webapps/ROOT.war
RUN mkdir -p /tomcat/webapps/ROOT && cd /tomcat/webapps/ROOT && unzip ../ROOT.war
ADD conf/log4j2.properties /tomcat/webapps/ROOT/WEB-INF/classes/log4j2.properties
RUN rm -rf /tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["/tmp/run.sh"]
On further investigation, I think it is looking for the run.sh file in the /tmp/ folder, which is added on line 5 of the Dockerfile. But when I checked for the file in /tmp/, it is not there. Is that the cause, and how do I resolve it? I already have run.sh in my current folder.
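One quick way to confirm whether run.sh actually made it into the image (a sketch; uaa-local is the tag used in the build command above):
# list /tmp inside the built image without starting Tomcat
docker run --rm --entrypoint ls uaa-local -l /tmp
# if run.sh is missing here, check that it sits in the build context
# (the directory passed to `docker build`) and is not excluded by a .dockerignore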

My custom beat cant find custombeat.yml when I try to run it from a container

So, I have built a beat with mage GenerateCustomBeat and it runs okay, except that now I'm trying to containerize it. When I run the image I built, it complains that no customBeat.yml was found.
I have confirmed that the file exists in the folder by adding a RUN ls . line at the end of my Dockerfile.
The beat name is coletorbeat, so this name appears multiple times inside the Dockerfile.
Upon executing sudo docker run coletorbeat I have the following error message:
Exiting: error loading config file: stat coletorbeat.yml: no such file or directory
If there were a way to specify the coletorbeat.yml file location when I execute the beat in CMD, I think that would solve it, but I have not found how to do so yet.
I'll post the Dockerfile below. I know the code inside the beater folder works fine; I'm guessing I'm making some mistake in the containerization.
Dockerfile:
FROM ubuntu
MAINTAINER myNameHere
ARG ${ip:-"333.333.333.333"}
ARG ${porta:-"4343"}
ARG ${dataInicio:-"2020-01-07"}
ARG ${dataFim:-"2020-01-07"}
ARG ${tipoEquipamento:-"type"}
ARG ${versao:-"2"}
ARG ${nivel:-"0"}
ARG ${instituicao:-"RJ"}
ADD . .
RUN mkdir /etc/coletorbeat
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
RUN apt-get update && \
apt-get install -y wget git
RUN wget https://storage.googleapis.com/golang/go1.14.4.linux-amd64.tar.gz
RUN tar -zxvf go1.14.*.linux-amd64.tar.gz -C /usr/local
RUN mkdir /go
ENV GOROOT /usr/local/go
ENV GOPATH $HOME/go
ENV PATH $PATH:$GOROOT/bin:$GOPATH/bin
RUN echo $PATH
RUN go get -u -d github.com/magefile/mage
RUN cd $GOPATH/src/github.com/magefile/mage && \
go run bootstrap.go
RUN apt-get install -y python3-venv
RUN apt-get install -y build-essential
RUN cd /coletorbeat && chmod go-w coletorbeat.yml && ./coletorbeat setup
RUN cd /coletorbeat && ./coletorbeat test config -c /coletorbeat/coletorbeat.yml && ls .
CMD ./coletorbeat/coletorbeat -E 'coletorbeat.ip=${ip}'
You are adding the yml file into the /etc directory:
COPY /coletorbeat/coletorbeat.yml /etc/coletorbeat/coletorbeat.yml
But you are then running commands against /coletorbeat, not the /etc path.
On the CMD line in the Dockerfile I added a cd /mybeatfolder before the executable, and it worked. Libbeat searches the current working directory for the config file by default, so moving to the right directory before executing my beat solved it.
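For illustration, here is that fix applied to the Dockerfile above (a sketch; /mybeatfolder is a placeholder, replaced here with this beat's actual folder, /coletorbeat):
# change into the beat's folder first so libbeat finds coletorbeat.yml
# in the current working directory
CMD cd /coletorbeat && ./coletorbeat -E 'coletorbeat.ip=${ip}'
# setting `WORKDIR /coletorbeat` before the CMD would achieve the same thing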

Docker COPY is not copying script

Docker COPY is not copying over the bash script
FROM alpine:latest
#Install Go and Tini - These remain.
RUN apk add --no-cache go build-base gcc go
RUN apk add --no-cache --update ca-certificates redis git && update-ca-certificates
# Set Env Variables for Go and add Go to Path.
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN go get github.com/rakyll/hey
RUN echo GOLANG VERSION `go version`
COPY ./bench.sh /root/bench.sh
RUN chmod +x /root/bench.sh
ENTRYPOINT /root/bench.sh
Here is the script -
#!/bin/bash
set -e;
echo "entered";
hey;
I try running the above Dockerfile with
$ docker build -t test-bench .
$ docker run -it test-bench
But I get the error
/bin/sh: /root/bench.sh: not found
The file does exist -
$ docker run --rm -it test-bench sh
/ # ls
bin dev etc go home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd root
~ # ls
bench.sh
~ #
Is your docker build successful? When I tried to simulate this, I found the following error:
---> Running in 96468658cebd
go: missing Git command. See https://golang.org/s/gogetcmd
package github.com/rakyll/hey: exec: "git": executable file not found in $PATH
The command '/bin/sh -c go get github.com/rakyll/hey' returned a non-zero code: 1
Try installing git in the Dockerfile, e.g. RUN apk add --no-cache go build-base gcc go git, and run the build again.
The COPY operation here seems correct; make sure bench.sh is present in the directory from which docker build is executed.
Okay, the script uses /bin/bash, but the bash binary is not available in the Alpine image. Either bash has to be installed or a /bin/sh shebang should be used.
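For example, either of the following changes should resolve the "not found" error (a sketch, assuming the script needs no other bash-specific features):
# option 1: install bash in the Alpine image so the #!/bin/bash shebang resolves
RUN apk add --no-cache bash
# option 2: leave the image as-is and change the first line of bench.sh
# from #!/bin/bash to #!/bin/sh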

How can I run a searchguard set up script after elasticsearch is up and running in docker?

I have been trying to make the Search Guard setup script init_sg.sh run automatically after Elasticsearch starts. I don't want to do it manually with docker exec. Here's what I have tried.
entrypoint.sh:
#! /bin/sh
elasticsearch
source init_sg.sh
Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
# Search Guard plugin
# https://github.com/floragunncom/search-guard/wiki
RUN elasticsearch-plugin install --batch com.floragunn:search-guard-6:6.1.0-20.1 \
&& chmod +x \
plugins/search-guard-6/tools/hash.sh \
plugins/search-guard-6/tools/sgadmin.sh \
&& chown -R elasticsearch config/sg/ \
&& chmod -R go= config/sg/
# This custom entrypoint script is used instead of
# the original's /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["bash","-c","entrypoint.sh"]
However, it throws a "can not run elasticsearch as root" error:
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
So I guess I cannot run elasticsearch directly in entrypoint.sh, which is confusing because there's no problem when the Dockerfile is like this:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
....
CMD ["elasticsearch"]
This thread's accepted answer doesn't work. There's no "/run/entrypoint.sh" in the container.
Solution:
Finally, I've managed to get it done. Here's my custom entrypoint script, which runs the Search Guard setup script automatically:
# run the Search Guard setup once; it will fail while Elasticsearch is still starting
source init_sg.sh
# keep retrying in the background until it succeeds
while [ $? -ne 0 ]; do
    sleep 10
    source init_sg.sh
done &
# then start Elasticsearch via the image's original entrypoint
/bin/bash -c "source /usr/local/bin/docker-entrypoint.sh;"
If you have any alternative solutions, please feel free to answer.
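One possible alternative (a sketch only, not tested against this image; it assumes bash is available and that Elasticsearch listens on port 9200 inside the container) is to wait for the HTTP port to open before sourcing the setup script, instead of retrying on failure:
#!/bin/bash
# background task: wait until the Elasticsearch port accepts connections,
# then run the Search Guard setup script once
(
  until (exec 3<>/dev/tcp/localhost/9200) 2>/dev/null; do
    sleep 10
  done
  source init_sg.sh
) &
# hand control to the image's original entrypoint so Elasticsearch starts
# exactly as it would in the stock image
exec /usr/local/bin/docker-entrypoint.sh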

How to fix permissions for an Alpine image writing files using Cron as non root user into accessible volume

I'm trying to create a multi-stage Docker build that simply runs a non-root crontab which writes to a volume accessible from outside the container. I have two permission problems: one with external access to the volume and one with cron:
the first stage in the Dockerfile creates a non-root user image with an entrypoint and su-exec, useful to fix permissions on the volume;
the second stage in the same Dockerfile uses the first image to run a crond process which is supposed to write to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
  scrap_service:
    build: .
    container_name: "flight_scrap"
    volumes:
      - /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by Denis Bertovic to an Alpine image.
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER; value hard-coded to 1000, my current uid
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permissions is:
#!/usr/bin/env bash
chown -R scrapy .
exec su-exec scrapy "$@"
The second stage (2) runs the cron service, which is supposed to write into the /backup folder mounted as a volume.
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file, which should simply create a test file in the /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase:
Logging into my image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information
But here is the first error: cron doesn't run correctly; when I cat /var/log/cron.log I get a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also have a second error when I try to write directly into the /backup folder with touch /backup/testFile: the /backup volume folder is still only accessible with root permissions, and I don't know why.
crond (or cron) needs to be run as root, as described in this answer.
Check out aptible/supercronic instead, a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created.
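For example, a minimal sketch of the second stage using supercronic instead of busybox crond (not the poster's exact build; the release URL and version below are assumptions, so check the project's releases page for the current binary):
FROM alpine:3.12
# same non-root user as in the poster's stage 1
RUN addgroup scrapy && adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
# download the supercronic binary (version/URL assumed, verify before use)
ADD https://github.com/aptible/supercronic/releases/download/v0.1.12/supercronic-linux-amd64 /usr/local/bin/supercronic
RUN chmod +x /usr/local/bin/supercronic
# crontab readable by the non-root user; /backup pre-created and owned by it
COPY crontab /home/scrapy/crontab
RUN mkdir /backup && chown scrapy:scrapy /backup
USER scrapy
WORKDIR /home/scrapy
VOLUME /backup
CMD ["supercronic", "/home/scrapy/crontab"]
Note that with a bind mount such as /home/rey/Volumes/mongo/backup:/backup, the permissions seen inside the container are those of the host directory, so it must also be writable by uid 1000 on the host; a chown done at image build time does not apply to a bind-mounted path.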
