Connecting RStudio and SFTP docker containers directly - docker

I'm trying to run RStudio Server in a Docker container. Users will connect to this container and use RStudio via the internet.
The built-in mechanism for uploading and downloading files in RStudio is very slow, so I'd also like to run an SFTP server in a separate container.
I'm trying to link the two containers using Docker volumes, but I'm having some trouble. Here is how I'm trying to run the two images.
I'm running the SFTP server using:
docker run -p 2222:22 -v /home/rstudio --name ftpserver -d atmoz/sftp rstudio:foo:1001
Then I'm trying to connect to the same directory in RStudio by doing:
docker run -d -p 8787:8787 -e PASSWORD=foo --volumes-from ftpserver --name rstudio r-studio-bio:Dockerfile
This causes RStudio to give the error:
RStudio Initialization Error. Unable to connect to service.
Likewise, I'm unable to upload to the SFTP server because it says I lack the proper permissions.
The SFTP server image is here: https://hub.docker.com/r/atmoz/sftp/
The RStudio-Server Dockerfile is:
# See the following for more info:
# https://hub.docker.com/r/pgensler/sandboxr/
# https://www.rocker-project.org/images/
# https://hub.docker.com/r/rocker/rstudio
FROM rocker/tidyverse
LABEL maintainer="Alex"
#
RUN mkdir -p $HOME/.R
RUN mkdir $HOME/Rlibs
ENV R_LIBS $HOME/Rlibs
# COPY R/Makevars /root/.R/Makevars
RUN apt-get update -qq \
    && apt-get -y --no-install-recommends install \
        curl \
        clang \
        ccache \
        default-jdk \
        default-jre \
        wget \
        systemd \
        # openssh-server \
    && R CMD javareconf \
    # && systemctl ssh start \
    # && systemctl enable ssh \
    && rm -rf /var/lib/apt/lists/*
RUN wget \
        https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && mkdir /root/.conda \
    && bash Miniconda3-latest-Linux-x86_64.sh -b \
    && rm -f Miniconda3-latest-Linux-x86_64.sh
# Install additional R packages
RUN Rscript -e "BiocManager::install()"
RUN Rscript -e "BiocManager::install('multtest')"
RUN Rscript -e "install.packages('Seurat')"

Related

How to connect to squid proxy docker container to use ftp-proxy?

I am using a Squid proxy server in a Docker container to act as an ftp-proxy, as follows: https://hub.docker.com/r/sameersbn/squid/
My modified Dockerfile:
FROM ubuntu:bionic-20190612
ENV SQUID_VERSION=3.5.27 \
    SQUID_CACHE_DIR=/var/spool/squid \
    SQUID_LOG_DIR=/var/log/squid \
    SQUID_USER=proxy
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y squid=${SQUID_VERSION}* \
    && rm -rf /var/lib/apt/lists/*
ENV http_proxy=test.rebex.net \
    https_proxy=test.rebex.net \
    ftp_proxy=test.rebex.net:21
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
EXPOSE 3128/tcp
ENTRYPOINT ["/sbin/entrypoint.sh"]
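The squid-proxy image name used in the run command below would presumably be built from this Dockerfile first:

docker build -t squid-proxy .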
Running the container with:
docker run --name squid \
  --publish 3128:3128 \
  squid-proxy
I want to connect to the FTP server test.rebex.net with FileZilla through the running container. When I point my FileZilla FTP client at localhost:3128, the connection fails. How do I connect to the FTP server through the Squid ftp-proxy in Docker?
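One way to isolate the problem from FileZilla: by default Squid is an HTTP proxy that fetches ftp:// URLs on behalf of HTTP clients on its http_port, but relaying the native FTP protocol (what an FTP client speaks when pointed at localhost:3128) requires Squid 3.5's separate ftp_port feature to be configured. To test whether the proxy forwards FTP at all, an HTTP-style request through it should work (readme.txt is the sample file published on Rebex's public test server):

curl -x http://localhost:3128 ftp://test.rebex.net/readme.txt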

How to deploy to a (local) Kubernetes cluster using Jenkins

This question is somewhat related to one of my previous questions, in that it gives a clearer idea of what I am trying to achieve. This question is about an issue I ran into while trying to achieve the task in that previous question.
I am trying to test if my kubectl works from within the Jenkins container. When I start up my Jenkins container I use the following command:
# bind the docker host's kubectl binary to the same path inside the container,
# and mount the host's ~/.kube directory (the kubeconfig) into the container
docker run \
  -v /home/student/Desktop/jenkins_home:/var/jenkins_home \
  -v $(which kubectl):/usr/local/bin/kubectl \
  -v ~/.kube:/home/jenkins/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -v ~/.kube:/home/root/.kube \
  --group-add 998 \
  -p 8080:8080 -p 50000:50000 \
  -d --name jenkins jenkins/jenkins:lts
The container starts up and I can login/create jobs/run pipeline scripts all no issue.
I created a pipeline script just to check if I can access my cluster like this:
pipeline {
    agent any
    stages {
        stage('Kubernetes test') {
            steps {
                sh "kubectl cluster-info"
            }
        }
    }
}
When running this job, it fails with the following error:
+ kubectl cluster-info // this is the step
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
Thanks!
I'm not getting why there is:
-v $(which kubectl):/usr/local/bin/kubectl -v ~/.kube:/home/jenkins/.kube
/usr/local/bin/kubectl is the kubectl binary, and ~/.kube:/home/jenkins/.kube should be the location where the kubectl binary looks for the cluster context file, i.e. the kubeconfig.
First, make sure the kubeconfig is mounted into the container at /home/jenkins/.kube and is accessible to the kubectl binary. With the appropriate volume mounts in place, you can verify by opening a session in the Jenkins container with docker container exec -it jenkins /bin/bash and testing with kubectl get svc. Make sure the KUBECONFIG env var is set in the session with:
export KUBECONFIG=/home/jenkins/.kube/kubeconfig
before you run the verification test, and use
withEnv(["KUBECONFIG=$HOME/.kube/kubeconfig"]) {
    // Your stuff here
}
in your pipeline code. If it works in the exec session, it should work in the pipeline as well.
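Putting these together, a minimal sketch of the test pipeline with the kubeconfig wired in (assuming the kubeconfig is mounted at /home/jenkins/.kube/kubeconfig as described above):

pipeline {
    agent any
    stages {
        stage('Kubernetes test') {
            steps {
                // point kubectl at the mounted kubeconfig for this step
                withEnv(["KUBECONFIG=/home/jenkins/.kube/kubeconfig"]) {
                    sh "kubectl cluster-info"
                }
            }
        }
    }
}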
I would personally recommend creating a custom Docker image for Jenkins that contains the kubectl binary and the other utilities needed for working with a Kubernetes cluster (such as aws-iam-authenticator for AWS EKS IAM-based authentication). This isolates your Jenkins binaries from your host system binaries.
Below is the Dockerfile I'm using, which contains helm, kubectl and aws-iam-authenticator.
# This Dockerfile contains Helm, Docker client-only, aws-iam-authenticator, kubectl with Jenkins LTS.
FROM jenkins/jenkins:lts
USER root
ENV VERSION v2.9.1
ENV FILENAME helm-${VERSION}-linux-amd64.tar.gz
ENV HELM_URL https://storage.googleapis.com/kubernetes-helm/${FILENAME}
ENV KUBE_LATEST_VERSION="v1.11.0"
# Install the latest Docker CE binaries
RUN apt-get update && \
    apt-get -y install apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
        $(lsb_release -cs) \
        stable" && \
    apt-get update && \
    apt-get -y install docker-ce \
    && curl -o /tmp/$FILENAME ${HELM_URL} \
    && tar -zxvf /tmp/${FILENAME} -C /tmp \
    && mv /tmp/linux-amd64/helm /bin/helm \
    && rm -rf /tmp/linux-amd64 \
    && curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \
    && chmod +x /usr/local/bin/kubectl \
    && curl -L https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator -o /usr/local/bin/aws-iam-authenticator \
    && chmod +x /usr/local/bin/aws-iam-authenticator
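A hypothetical build-and-run of this image (the jenkins-k8s tag is illustrative), reusing the mounts from the question minus the kubectl bind-mount, which the image now provides itself:

docker build -t jenkins-k8s .
docker run -d --name jenkins \
  -v /home/student/Desktop/jenkins_home:/var/jenkins_home \
  -v ~/.kube:/home/jenkins/.kube \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 -p 50000:50000 \
  jenkins-k8s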
See also Kubernetes fails inside Jenkins pipeline; this was my solution for Jenkins installed locally on a Windows machine.

Docker Is supposed to be listening but it doesn't

I deployed my first Scala project on Docker, but I have a problem: Docker says that the server has been started, yet surprisingly it doesn't listen to any request, even though I exposed the port to the host. When I try a GET request, it says the connection is refused; I also tried to telnet to the port and it seems there are no listeners on port 9000, nor on 3200 or 3000. Please find below what I have written in the Dockerfile:
FROM jelastic/sbt
# Env variables
ENV SCALA_VERSION 2.12.4
ENV SBT_VERSION 1.1.0
# Scala expects this file
RUN touch /usr/lib/jvm/java-8-openjdk-amd64/release
# Install Scala
## Piping curl directly in tar
RUN \
    curl -fsL https://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz | tar xfz - -C /root/ && \
    echo >> /root/.bashrc && \
    echo "export PATH=~/scala-$SCALA_VERSION/bin:$PATH" >> /root/.bashrc
# Install sbt
RUN \
    curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
    dpkg -i sbt-$SBT_VERSION.deb && \
    rm sbt-$SBT_VERSION.deb && \
    apt-get update && \
    apt-get install sbt && \
    sbt sbtVersion
WORKDIR /
ADD play /
RUN tree /
EXPOSE 9000
CMD sbt run
and my run command was:
docker run -p 9000:9000 -t bee
where bee is my image name.
As you can see, the server starts properly. Please find the attached screenshots below (the server output and the docker ps listing) for more clarity.
If you look at your screenshot, it clearly states that the Docker machine is located at 192.168.99.100, so that is the address you need to use.
Open http://192.168.99.100:9000 and it should work.
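If you're unsure of that address, the docker-machine CLI can print it (assuming the default machine name used by Docker Toolbox):

docker-machine ip default

On setups where Docker runs natively rather than inside a VM, http://localhost:9000 would be the address instead.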

Inside a Docker Container: "Error: cannot open display: localhost:11.0"

I am trying to use programs with a graphical interface in a Docker container over SSH.
Currently I am connected over SSH to an external machine where Docker and the containers are running. On the host I can start programs like Firefox, which are displayed correctly. The connection is established with:
ssh -Y root@host
When I try the same in a docker container, with the firefox image (see below):
docker run -it --privileged --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /root/.Xauthority:/root/.Xauthority:rw \
  firefox
I just get:
Error: cannot open display: localhost:11.0
I already tried to set xhost + on the host, but it is still not working.
The host runs Scientific Linux release 7.2 and the docker image is created with the
Dockerfile from http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y firefox
# Replace 1000 with your user / group id
RUN export uid=1000 gid=1000 && \
    mkdir -p /home/developer && \
    echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
    echo "developer:x:${uid}:" >> /etc/group && \
    echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
    chmod 0440 /etc/sudoers.d/developer && \
    chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
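For completeness, the firefox image name used in the run command above would presumably be built from this Dockerfile with:

docker build -t firefox .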
Adding --net=host to docker run solved the problem.
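For reference, a sketch of the same run command with that flag added:

docker run -it --privileged --rm \
  --net=host \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /root/.Xauthority:/root/.Xauthority:rw \
  firefox

With --net=host the container shares the host's network namespace, so the localhost TCP port that sshd opened for X11 forwarding on the host (display localhost:11.0 corresponds to port 6011) becomes reachable from inside the container.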

How do I connect a database to the Dockerfile example for Drupal 7?

I'm brand new to Docker and am trying to set up a Drupal 7 installation.
I ran this example
# from https://www.drupal.org/requirements/php#drupalversions
FROM php:5.6-apache
RUN a2enmod rewrite
# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev libpq-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd mbstring pdo pdo_mysql pdo_pgsql zip
WORKDIR /var/www/html
# https://www.drupal.org/node/3060/release
ENV DRUPAL_VERSION 7.41
ENV DRUPAL_MD5 7636e75e8be213455b4ac7911ce5801f
RUN curl -fSL "http://ftp.drupal.org/files/projects/drupal-${DRUPAL_VERSION}.tar.gz" -o drupal.tar.gz \
    && echo "${DRUPAL_MD5} *drupal.tar.gz" | md5sum -c - \
    && tar -xz --strip-components=1 -f drupal.tar.gz \
    && rm drupal.tar.gz \
    && chown -R www-data:www-data sites
but I get this error when trying to connect to a database.
Failed to connect to your database server. The server reports the
following message: SQLSTATE[HY000] [2002] No such file or directory.
Do I need to run a MySQL container as well? I don't fully understand how containers "talk to one another", i.e. if I used the MySQL example, how would I tell my Drupal container to use that database?
It is best to run the drupal image directly, instead of building it from its Dockerfile.
See the Drupal Full Description.
$ docker run --name some-drupal -p 8080:80 -d drupal
Then, access it via http://localhost:8080 or http://host-ip:8080 in a browser.
To use it with a database, you need to run a database container first, e.g. mysql:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Then you can link it to a drupal container:
$ docker run --name some-drupal --link some-mysql:mysql -d drupal
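Since --link is a legacy Docker feature, an alternative sketch using a user-defined network (the drupal-net network name and the MYSQL_DATABASE value are illustrative); in Drupal's installation screen you would then enter some-mysql as the database host:

$ docker network create drupal-net
$ docker run --name some-mysql --network drupal-net \
    -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_DATABASE=drupal -d mysql:5.7
$ docker run --name some-drupal --network drupal-net -p 8080:80 -d drupal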
