How to run a Docker image on a DigitalOcean droplet?

I have an Ubuntu-based droplet on DigitalOcean with Docker installed, to which I uploaded my Docker image.tar file from my desktop. I put this image.tar file in the /home/newuser/app directory. Next, I loaded image.tar using the following command:
sudo docker load -i image.tar
The image has been loaded; I checked.
But when I run either of the following commands, I can't reach my app at the public IP of my droplet:
sudo docker run image
or
sudo docker run -p 80:80 image
How do you guys go about this?
Here is the Dockerfile:
FROM r-base:3.5.0
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Add shiny user
RUN groupadd shiny \
&& useradd --gid shiny --shell /bin/bash --create-home shiny
# Download and install ShinyServer
RUN wget --no-verbose https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb && \
gdebi shiny-server-1.5.7.907-amd64.deb
# Install R packages that are required
RUN R -e "install.packages(c('Benchmarking', 'plotly', 'DT'), repos='http://cran.rstudio.com/')"
RUN R -e "install.packages('shiny', repos='https://cloud.r-project.org/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
The code for shiny-server.conf:
# Define the user we should use when spawning R Shiny processes
run_as shiny;
# Define a top-level server which will listen on a port
server {
# Instruct this server to listen on port 80. (An app on a platform like dokku-alt needs to expose port 80, 5000, etc.; see its docs.)
listen 80;
# Define the location available at the base URL
location / {
# Run this location in 'site_dir' mode, which hosts the entire directory
# tree at '/srv/shiny-server'
site_dir /srv/shiny-server;
# Define where we should put the log files for this location
log_dir /var/log/shiny-server;
# Should we list the contents of a (non-Shiny-App) directory when the user
# visits the corresponding URL?
directory_index on;
}
}
And the code for shiny-server.sh:
#!/bin/sh
# Make sure the directory for individual app logs exists
mkdir -p /var/log/shiny-server
chown shiny.shiny /var/log/shiny-server
exec shiny-server >> /var/log/shiny-server.log 2>&1

There's really no need to EXPOSE port 80 in the Dockerfile when you run the container with -p 80:80, except maybe as a hint to others: https://forums.docker.com/t/what-is-the-use-of-expose-in-docker-file/37726/2
You should probably post your shiny-server.conf, but I betcha that you either specified no port (in which case shiny-server starts on port 3838) or a port other than 80. Make sure you modify this line in the config file:
listen 3838
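Alternatively, instead of editing the config, you can map the droplet's port 80 to whatever port shiny-server is listening on inside the container (3838 by default). A sketch, reusing the image name from the question:
sudo docker run -d -p 80:3838 image
# Host port 80 -> container port 3838, where shiny-server listens by default.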

Related

Mounting docker volume on specific path

I am trying to deploy photo-stream (https://github.com/maxvoltar/photo-stream) using a docker container. Photo-stream is a picture publishing site meant for self-hosting. It expects its pictures in a path called 'photos/original/', relative to where it's installed. It will create other directories under 'photos/' to cache thumbnails and such.
When I populate that directory with some pictures and start the application natively (without docker) from its build directory using:
$ bundle exec jekyll serve --host 0.0.0.0
it shows me the pictures I put in that directory. When running the application inside a docker container, I need it to mount a volume that contains a path 'photos/original' so that I can keep my pictures there. I have created this path on a disk mounted at /mnt/data/.
In order to do that, I have added a volume line to the existing Dockerfile:
FROM ruby:latest
ENV VIPSVER 8.9.1
RUN apt update && apt -y upgrade && apt install -y build-essential
RUN wget -O ./vips-$VIPSVER.tar.gz https://github.com/libvips/libvips/releases/download/v$VIPSVER/vips-$VIPSVER.tar.gz
RUN tar -xvzf ./vips-$VIPSVER.tar.gz && cd vips-$VIPSVER && ./configure && make && make install
COPY ./ /photo-stream
WORKDIR /photo-stream
RUN ruby -v && gem install bundler jekyll && bundle install
VOLUME /photo-stream/photos
EXPOSE 4000
ENTRYPOINT bundle exec jekyll serve --host 0.0.0.0
I build the container this way:
$ docker build --tag photo-stream:1.0 .
I run the container this way:
$ docker run -d -p 4000:4000 -v /mnt/data/photos/:/photos/ --name ps photo-stream:1.0
I was expecting the content of the directory /mnt/data/photos to be shown. Instead, nothing is shown. However, a volume '/var/lib/docker/volumes/e5ff426ced2a5e786ced6b47b67d7dee59160c60f59f481516b638805b731902/_data' is created, and when that is populated with pictures, those are shown.
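Given the VOLUME /photo-stream/photos line in the Dockerfile, the likely cause is that the bind mount targets /photos/ rather than the path the application actually reads, so Docker creates an anonymous volume for /photo-stream/photos instead. A sketch of the adjusted run command:
docker run -d -p 4000:4000 -v /mnt/data/photos/:/photo-stream/photos/ --name ps photo-stream:1.0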

What permissions should a user have to expose a docker container port?

I'm setting up Puppeteer in a Docker container. I tried to do it according to their troubleshooting page: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md#running-on-alpine.
But after creating and switching to the new user (pptruser), the server can't bind port 80 because the user lacks permission: Error: listen EACCES 0.0.0.0:80.
I can't find clear documentation about which permissions this user needs so that it can listen on the exposed $PORT.
I tried adding the user to sudo and that didn't work; even if it had worked, I assume that would be a mistake because it's a security risk.
I also tried exposing the port before switching to the new user; that failed too.
Dockerfile
FROM node
# Installs latest Chromium (72) package.
RUN apk update && apk upgrade && \
echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories && \
echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories && \
apk add --no-cache \
chromium@edge \
nss@edge \
freetype@edge \
harfbuzz@edge \
ttf-freefont@edge \
udev
RUN mkdir -p /app
WORKDIR /app
# Tell Puppeteer to skip installing Chrome. We'll be using the installed package.
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
# Add user so we don't need --no-sandbox.
RUN addgroup -S pptruser && adduser -S -g pptruser pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /app
ENV PORT 80
ENV HTTP_PORT $PORT
ENV HTTPS_PORT 443
EXPOSE $HTTP_PORT
EXPOSE $HTTPS_PORT
USER pptruser
CMD [ "run.sh" ]
run.sh
#!/bin/sh
PORT="${HTTP_PORT:-80}"
node "app/bin/server.js"
Error from the logs:
Error: listen EACCES 0.0.0.0:80
at Object.exports._errnoException (util.js:1020:11)
at exports._exceptionWithHostPort (util.js:1043:20)
at Server._listen2 (net.js:1258:19)
at listen (net.js:1307:10)
at Server.listen (net.js:1403:5)
at appServer.app.then.then.then (/app/bin/server.js:69:12)
Any help appreciated
Traditionally [1], only the root user can bind ports under 1024. However, I think you'll find that doesn't matter for your particular use case. The port on which a service is listening inside a container doesn't have any relationship to the port on which remote clients connect to it. The mapping of "ports exposed on the host" to "ports listening inside a container" is controlled via Docker's port publishing mechanism.
For your example, you would configure your service to listen on something other than port 80...for the purposes of this example, let's say you configure it to listen on port 8080.
When you run your container, map port 80 on your host to port 8080 inside your container:
docker run -p 80:8080 ...
Now you can access your service at port 80 on your host. You can handle port 443 the same way.
Note also that the EXPOSE keyword in your Dockerfile is largely unnecessary. In a typical Docker environment it is a no-op and is only informational. You can publish ports regardless of whether or not they have been EXPOSEd.
[1] The situation is actually a little more nuanced under Linux.
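For instance, a minimal sketch, assuming the image is tagged my-node-image (a placeholder) and that server.js reads the PORT environment variable as run.sh suggests:
docker run -d -e PORT=8080 -e HTTP_PORT=8080 -p 80:8080 my-node-image
# The process binds the unprivileged port 8080 inside the container; Docker publishes host port 80 to it.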

Shiny app docker container not loading in browser

I containerised a Shiny app and attempted to deploy it on GCP using Kubernetes, but each time I obtain the external IP address and load it in the browser, I get a "this site cannot be reached: connection refused" error. So I attempted to run the container on my localhost to troubleshoot, and now I get a "127.0.0.1 didn't send any data. ERR_EMPTY_RESPONSE" error. I have searched tirelessly online for a solution but nothing seems to work for me, and none of the solutions I found are for a Shiny app Docker container. Many of the fixes mention the port, but I am still stuck. I have XAMPP installed on my Mac, by the way. Is it possible that XAMPP and my Docker container are attempting to share the same port, or is there a problem with my Dockerfile? Pardon me, but I am new to containers and have only been following the documentation up till now. Below is my Dockerfile:
Dockerfile
# Install R version 3.5.1
FROM r-base:3.5.1
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
VERSION=$(cat version.txt) && \
wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
gdebi -n ss-latest.deb && \
rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further package if you need!
RUN R -e "install.packages(c('shiny','shinyjs','tools','foreign','XLConnect'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I would appreciate it if someone could assist me.
You don't need shiny-server.
Add
app <- shinyApp(ui = ui, server = server)
runApp(app, host ="0.0.0.0", port = 80, launch.browser = FALSE)
to your R script and
EXPOSE 80
CMD ["R", "-e", "library(shiny); source('/root/pathToYourScript/script.R')"]
to your Dockerfile.
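Then build and run the image, publishing port 80 (the tag myshinyapp is just a placeholder):
docker build -t myshinyapp .
docker run -d -p 80:80 myshinyapp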
I had to first create an Rprofile.site file and place it in the same directory as the Dockerfile and the Shiny app. Then I created my own base image with all the necessary libraries for the app and called it from my Dockerfile. Here is the final code:
Rprofile.site
local({
options(shiny.port = 3838, shiny.host = "0.0.0.0")
})
Dockerfile
FROM bimage_rpackages
# Copy the app to the image
RUN mkdir /root/shinyapp
COPY app/shinyapp /root/shinyapp
COPY app/Rprofile.site /usr/lib/R/etc/
# Make the ShinyApp available at port 3838
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/root/shinyapp')"]

How to view GUI apps from inside a docker container

When I try to run a GUI app, xclock for example, I get the error:
Error: Can't open display:
I'm trying to use Docker to run a ROS container, and I need to see the GUI applications that run inside of it.
I did this once just using a Vagrant VM and was able to use X11 to get it done.
So far I've tried putting ways #1 and #2 into a Dockerfile based on the info here:
http://wiki.ros.org/docker/Tutorials/GUI
Then I tried copying most of the Dockerfile here:
https://hub.docker.com/r/mjenz/ros-indigo-gui/~/dockerfile/
Here's my current Dockerfile:
# Set the base image to use to ros:kinetic
FROM ros:kinetic
# Set the file maintainer (your name - the file's author)
MAINTAINER me
# Set ENV for x11 display
ENV DISPLAY $DISPLAY
ENV QT_X11_NO_MITSHM 1
# Install an x11 app like xclock to test this
RUN apt-get update
RUN apt-get install x11-apps --assume-yes
# Stuff I copied to make a ros user
ARG uid=1000
ARG gid=1000
RUN export uid=${uid} gid=${gid} && \
groupadd -g ${gid} ros && \
useradd -m -u ${uid} -g ros -s /bin/bash ros && \
passwd -d ros && \
usermod -aG sudo ros
USER ros
WORKDIR /home/ros
# Sourcing this before .bashrc runs breaks ROS completions
RUN echo "\nsource /opt/ros/kinetic/setup.bash" >> /home/ros/.bashrc
# Copy entrypoint script into the image, this currently echos hello world
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
My personal preference is to inject the display variable and share the unix socket or X windows with something like:
docker run -it --rm -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /etc/localtime:/etc/localtime:ro \
my-gui-image
Sharing the localtime just allows the timezone to match up as well; I've been using this for email apps.
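If the container still reports "Can't open display", the host's X server may be rejecting the connection; a common, if permissive, workaround is to relax X access control for local clients while the container runs:
xhost +local:
# ... run the container as above ...
xhost -local:
The second xhost call restores access control once you're done.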
The other option is to spin up a VNC server, run your app on that server, and then connect to the container with a VNC client. I'm less a fan of that one since you end up with two processes running inside the container, making signal handling and logs a challenge. It does have the advantage that the app is better isolated, so if hacked, it doesn't have access to your X display.

Docker fedora hbase JAVA_HOME issue

My Dockerfile (on Fedora 22):
FROM java:latest
ENV HBASE_VERSION=1.1.0.1
RUN groupadd -r hbase && useradd -m -r -g hbase hbase
USER hbase
ENV HOME=/home/hbase
# Download'n extract hbase
RUN cd /home/hbase && \
wget -O - -q \
http://apache.mesi.com.ar/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
| tar --strip-components=1 -zxf -
# Upload local configuration
ADD ./conf/ /home/hbase/conf/
USER root
RUN chown -R hbase:hbase /home/hbase/conf
USER hbase
# Prepare data volumes
RUN mkdir /home/hbase/data
RUN mkdir /home/hbase/logs
VOLUME /home/hbase/data
VOLUME /home/hbase/logs
# zookeeper
EXPOSE 2181
# HBase Master API port
EXPOSE 60000
# HBase Master Web UI
EXPOSE 60010
# Regionserver API port
EXPOSE 60020
# HBase Regionserver web UI
EXPOSE 60030
WORKDIR /home/hbase
CMD /home/hbase/bin/hbase master start
As I understand it, when I set "FROM java:latest", my Dockerfile is layered on top of that image, so JAVA_HOME should be set as it is in java:latest. Am I right? The Dockerfile builds, but when I "docker run" the image, it shows a "JAVA_HOME not found" error. How can I set it properly?
Use the ENV directive, something like ENV JAVA_HOME /abc/def. See the docs: https://docs.docker.com/reference/builder/#env
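For example, a sketch; the path below is an assumption, so first check where the JDK actually lives in your base image (e.g. docker run --rm java:latest bash -c 'readlink -f $(which java)'):
# Set JAVA_HOME explicitly and put the JDK on the PATH (example path only)
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV PATH $JAVA_HOME/bin:$PATH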
Add to ~/.bashrc (or, for all users, to /etc/bashrc):
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
