Installing packages for RStudio Docker

I'm trying to use RStudio on a DigitalOcean server using the RStudio Docker image. Since my experience with Linux servers is limited, it's been a bit of a challenge for me.
I'm able to get RStudio up and running with:
docker run -dp 8787:8787 -v /root:/home/rstudio/ -e ROOT=TRUE rocker/hadleyverse
However, I'd like to be able to shut down the server and save it to a snapshot when I'm not using it, but not have to re-install packages each time I do so.
Using the Docker documentation on updating an image, I am able to create a container, install packages on that container, and then commit the changes:
docker run -t -i rocker/hadleyverse /bin/bash
install.r randomForest
exit
docker commit <CONTAINER_ID> michael91/ms:v1
However, once I make the commit, I am unable to run the updated image properly. I try and run it as follows:
docker run -dp 8787:8787 -v /root:/home/rstudio/ -e ROOT=TRUE michael91/ms:v1
When I do so, RStudio Server is not activated, as it is when I run the original rocker/hadleyverse version. I've tried making commits with and without installing packages; either way it doesn't seem to work. Obviously I'm doing something incorrectly, but I'm not sure what. If anyone could offer me some guidance, I'd really appreciate it.
Edit: Thanks a lot VonC; that did the trick.

It could be because the new committed image has lost its CMD directive that was present in rocker-org/rocker/rstudio/Dockerfile#L58.
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
Try and create a new Dockerfile:
FROM michael91/ms:v1
## Add RStudio binaries to PATH
ENV PATH /usr/lib/rstudio-server/bin/:$PATH
ENV LANG en_US.UTF-8
EXPOSE 8787
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
And build it as michael91/ms:v2.
Then see if v2 works better than v1 when it comes to activating RStudio:
docker run -dp 8787:8787 -v /root:/home/rstudio/ -e ROOT=TRUE michael91/ms:v2
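A quick way to confirm the CMD directive survived the rebuild (a sanity check, not part of the original answer):
docker inspect --format '{{json .Config.Cmd}}' michael91/ms:v2
If it prints null, the image has no default command and RStudio Server will not start on its own.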


Docker can't find OpenJDK

I am experimenting with Docker for the first time, and am trying to get a Spring Boot web app to run inside a Docker container. I am building the app (which packages up into a self-contained jar) and then adding it to the Docker image (which is what I want).
I am following the instructions from the OpenJDK Docker base image here. You can find my SSCCE at this Bootup repo on GitHub, whose README has all the instructions to reproduce what I'm seeing. But basically:
I build the web app into a jar
Run docker build -t bootup . which succeeds
Run docker run -it --rm --name bootup bootup which gives me the error below and then exits
The error:
/bin/sh: 1: /bin/sh: [java,: not found
According to the Google Gods, this used to be a problem with the Oracle JDK images, but should not be happening with OpenJDK images.
Looking at my Dockerfile (which is also up in that GitHub repo), can anyone spot where I'm going awry:
FROM openjdk:8
RUN mkdir /opt/bootup
ADD build/libs/bootup.jar /opt/bootup
WORKDIR /opt/bootup
ENTRYPOINT ['java', '-jar', 'bootup.jar']
CMD ['']
Thanks in advance!
Update:
Output of docker ps:
CONTAINER ID        IMAGE     COMMAND                  CREATED      STATUS          PORTS                    NAMES
16bde964ff6b        bootup    "/bin/sh -c 'java -ja"   2 days ago   Up 14 seconds   0.0.0.0:8080->8080/tcp   bootup
I had it working fine using this Dockerfile:
FROM openjdk:8
RUN mkdir /opt/bootup
ADD build/libs/bootup.jar /opt/bootup
WORKDIR /opt/bootup
EXPOSE 8080
ENTRYPOINT java -jar bootup.jar
It runs just fine with this command:
docker run -it -p 8080:8080 --name bootup bootup
I am no Java developer and I don't know why it ignores your configuration that requires it to start on port 9200; your app starts on port 8080 instead. From a Docker point of view, though, everything is working with my config, and I can connect to the app from my browser on localhost:8080.
Also, since you posted your GitHub repo, I suggest you modify the README so that users can run the Gradle build from Docker, without needing a Java environment on the host machine, using this one-time command:
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp openjdk:8 /usr/src/myapp/gradlew clean build
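For the record, the /bin/sh: 1: /bin/sh: [java,: not found error almost certainly comes from the quoting: the exec form of ENTRYPOINT/CMD must be a valid JSON array, and JSON requires double quotes. With single quotes Docker silently falls back to the shell form, and /bin/sh tries to execute the literal word [java,. A minimal fix of the original Dockerfile, keeping its structure:
FROM openjdk:8
RUN mkdir /opt/bootup
ADD build/libs/bootup.jar /opt/bootup
WORKDIR /opt/bootup
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "bootup.jar"]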

Docker build project from Docker Hub

I'd like to set up OpenProject using Docker. There are several decent options on the Hub, and so far this one looks like the best. I'd like to clone it, change the default database password (because I find the default unsafe), and then build it and run it. How should I proceed?
I've tried docker build -t myrepo/openproject dockerfile_location. Then I get an error that git does not exist. I know that I could add RUN apt-get install git, but afterwards I encounter an error checking for pg_config... no. In order to fix that, I need to install postgres, but this means that I have to put the code and data in the same container. This is the situation that I'm trying to avoid.
How can I solve the problem?
You don't have to put the Postgres binaries and data in the same container as the app. pg_config merely reports your PostgreSQL build configuration; it comes from the client development files, not from the server itself.
pg_config is in postgresql-devel (libpq-dev in Debian/Ubuntu).
In essence:
# container where your data lives
docker run -d --name openproject-postgres-data -v /data busybox true
# container where postgres runs
docker run -d --name openproject-postgres --volumes-from openproject-postgres-data -e USER=super -e PASS=password paintedfox/postgresql
# container that actually runs your application and links to your db container
docker run -d --name openproject --link openproject-postgres:postgres -p 8080:80 abevoelker/openproject
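Once the three containers are up, a quick sanity check (container names as in the example; docker exec requires Docker 1.3+):
docker logs openproject
docker exec openproject env | grep -i postgres
The legacy --link mechanism injects POSTGRES_* environment variables into the app container, so the grep should show the database address and port.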

Why is Docker Tomcat failing to start?

I am trying to build a Tomcat image from a Dockerfile. This is what my Dockerfile looks like:
FROM dockerfile/java
RUN sudo apt-get update
RUN sudo apt-get install tomcat7
EXPOSE 8086
CMD sudo service tomcat7 start && tail -f /var/log/tomcat7/catalina.out
but when I build an image from this and run the image with
$ docker run tomcat7-test
it gives the following:
Starting Tomcat servlet engine tomcat7 …fail!
I don’t know what is causing the problem. How can I check the logs of this Docker Tomcat? Can anybody tell me what commands I should use in the Dockerfile to run Tomcat?
There is an official Tomcat image you can use. There are links to its Dockerfiles there, where you can check out how Tomcat is installed.
If you want to inspect what is going on when you build your Dockerfile, just perform the same steps (apt-getting tomcat7 and starting the service) manually after starting an interactive shell inside the dockerfile/java container with this command:
docker run -it dockerfile/java bash
There you will be able to check the logs and see what could be going on.
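If you go with the official image instead, a minimal smoke test looks like this (the tomcat:7 tag is an assumption; use whatever tag the Hub page currently lists):
docker run --rm -p 8080:8080 tomcat:7
Then browse to http://localhost:8080 (or your boot2docker IP) to see the Tomcat welcome page.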
I installed the Tomcat server in the Docker container myself instead of using the official Tomcat image.
When I start the server I get the fail response, but I was still able to curl the Tomcat index page.
Also, instead of exiting the container, you can detach from it back to your terminal by typing:
Ctrl-p then Ctrl-q
You can access your webapps from a browser using the URL below:
http://<< boot2docker_ip >>:8080
Try running the image with following command
docker run -dt --cap-add SYS_PTRACE -p 8082:8080 tomcat7-test

How to make sure docker's time syncs with that of the host?

I have Docker containers running on Linode servers. At times, I see that the time is not right in the containers. Currently I have changed the run script in every container to include the following lines:
yum install -y ntp
service ntpd stop
ntpdate pool.ntp.org
What I would ideally like to do however is that the docker should sync time with the host. Is there a way to do this?
The source for this answer is the comment to the answer at: Will docker container auto sync time with the host machine?
After looking at that answer, I realized that clock drift cannot occur in the Docker container itself: the container uses the same clock as the host and cannot change it. It means that doing an ntpdate inside the container does not work.
The correct thing to do is to update the host time using ntpdate.
As far as syncing timezones is concerned, -v /etc/localtime:/etc/localtime:ro works.
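A one-liner to see the effect of that mount, using alpine as a throwaway image:
docker run --rm alpine date
docker run --rm -v /etc/localtime:/etc/localtime:ro alpine date
The first command prints the container's default (usually UTC) time; the second should match the host's local time.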
You can add your local files (/etc/timezone and /etc/localtime) as volume in your Docker container.
Update your docker-compose.yml with the following lines.
volumes:
  - "/etc/timezone:/etc/timezone:ro"
  - "/etc/localtime:/etc/localtime:ro"
Now the container time is the same as on your host.
This will reset the time in the docker server:
docker run --rm --privileged alpine hwclock -s
Next time you create a container the clock should be correct.
Source: https://github.com/docker/for-mac/issues/2076#issuecomment-353749995
If you are using boot2docker and ntp doesn't work inside the docker VM (you are behind a proxy which does not forward ntp packets) but your host is time-synced, you can run the following from your host:
docker-machine ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
This way you are sending your machine's current time (in UTC timezone) as a string to set the docker VM time using date (again in UTC timezone).
NOTE: in Windows, inside a bash shell (from the msys git), use:
docker-machine.exe ssh default "sudo date -u $(date -u +%m%d%H%M%Y)"
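To verify that the VM picked up the new time (same machine name as above):
docker-machine ssh default date -u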
This is what worked for me with a Fedora 20 host. I ran a container using:
docker run -v /etc/localtime:/etc/localtime:ro -i -t mattdm/fedora /bin/bash
Initially /etc/localtime was a soft link to /usr/share/zoneinfo/Asia/Kolkata, which is Indian Standard Time. Executing date inside the container showed me the same time as on the host. I exited from the shell and stopped the container using docker stop <container-id>.
Next, I removed this file and linked it to /usr/share/zoneinfo/Singapore for testing purposes, so the host time was set to the Singapore time zone. Then I did docker start <container-id>, accessed its shell again using nsenter, and found that the time was now set to the Singapore time zone.
docker start <container-id>
docker inspect -f {{.State.Pid}} <container-id>
nsenter -m -u -i -n -p -t <PID> /bin/bash
So the key here is to use -v /etc/localtime:/etc/localtime:ro when you run the container the first time. I found it on this link.
Hope it helps.
This easy solution fixed our time sync issue for the docker kernel in WSL2.
Open up PowerShell in Windows and run this command to resync the clock.
wsl -d docker-desktop -e /sbin/hwclock -s
You can then test it using
docker run -it alpine date
Reference: https://github.com/docker/for-win/issues/10347#issuecomment-776580765
I have the following in the compose file
volumes:
  - "/etc/timezone:/etc/timezone:ro"
  - "/etc/localtime:/etc/localtime:ro"
Then all is good in the Gerrit container, with its replication_log showing the correct timestamp.
If you're using docker-machine, the virtual machine's clock can drift. To update the clock on the virtual machine without restarting, run:
docker-machine ssh <machine-name|default>
sudo ntpclient -s -h pool.ntp.org
This will update the clock on the virtual machine using NTP and then all the containers launched will have the correct date.
I was facing a time offset of -1 hour and 4 minutes.
Restarting Docker itself fixed the issue for me.
To set the timezone in general:
Open a shell inside your container:
docker exec -it my_website_name bash
run dpkg-reconfigure tzdata
run date
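The same can be done non-interactively on Debian/Ubuntu-based images; the Europe/London zone here is just an example, substitute your own:
docker exec my_website_name bash -c 'ln -snf /usr/share/zoneinfo/Europe/London /etc/localtime && echo Europe/London > /etc/timezone'
docker exec my_website_name date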
docker-compose usage:
Add /etc/localtime:/etc/localtime:ro to the volumes attribute:
version: '3'
services:
  a-service:
    image: service-name
    container_name: container-name
    volumes:
      - /etc/localtime:/etc/localtime:ro
It appears there can be time drift if you're using Docker Machine, due to VirtualBox, as this response suggests: https://stackoverflow.com/a/26454059/105562
Quick and easy fix is to just restart your VM:
docker-machine restart default
For docker on macOS, you can use docker-time-sync-agent. It works for me.
With Docker for Windows I had to tick
MobyLinuxVM > Settings > Integration Services > Time synchronization
in Hyper-V Manager, and it worked.
Windows users:
The solution is very simple. Simply open a PowerShell prompt and enter:
docker run --privileged --rm alpine date -s "$(Get-Date ([datetime]::UtcNow) -UFormat "+%Y-%m-%d %H:%M:%S")"
To check that it works, run the command:
docker run --rm -it alpine date
My solution is inspired by something I found in a Docker forum thread. Anyway, it was the only solution that worked for me on Docker Desktop, except for restarting my machine (which also works). Here's a link to the original thread: https://forums.docker.com/t/syncing-clock-with-host/10432/23
The difference between the thread answer and mine is that mine converts the time to UTC time, which is necessary for e.g. AWS. Otherwise, the original answer from the forum looks like this:
docker run --privileged --rm alpine date -s "$(date -u "+%Y-%m-%d %H:%M:%S")"
Although this is not a general solution for every host, someone may find it useful. If you know where you are based (UK for me) then look at tianon's answer here.
FROM alpine:3.6
RUN apk add --no-cache tzdata
ENV TZ Europe/London
That is what I added to my Dockerfile (above), and the timezone problem was fixed.
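Building and running it shows the zone took effect (the tz-test tag is made up):
docker build -t tz-test .
docker run --rm tz-test date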
Docker Usage
Here's a complete example which builds a Docker image for a Go app in a multi-stage build. It shows how to include the timezone data in your image.
FROM golang:latest as builder
WORKDIR /app
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main
### Certs
FROM alpine:latest as locals
RUN apk --update --no-cache add ca-certificates
RUN apk add --no-cache tzdata
### App
FROM scratch
WORKDIR /root/
COPY --from=locals /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder app/main .
COPY --from=builder app/templates ./templates
COPY --from=locals /usr/share/zoneinfo /usr/share/zoneinfo
ENV TZ=Asia/Singapore
EXPOSE 8000
CMD ["./main"]
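A typical build-and-run for that Dockerfile (image tag and port mapping assumed):
docker build -t myapp .
docker run --rm -p 8000:8000 myapp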
For me, restarting Docker Desktop did not help. Shutting down Windows 10 and starting it again did help.
I saw this on Windows, launching Prometheus from docker-compose: I had a 15-hour time drift.
If you are running Docker Desktop on WSL, you can try running wsl --shutdown from a PowerShell prompt.
Docker Desktop should restart, and you can then try running your Docker container again.
This worked for me, and I didn't have to reboot the machine.
I've discovered that if your computer goes to sleep, the Docker container's clock goes out of sync:
https://forums.docker.com/t/time-in-container-is-out-of-sync/16566
I have made a post about it here:
Certificate always expires 5 days ago in Docker
Enabling Hyper-V in Windows Features solved the problem.
For whatever reason, none of these answers solved my problem.
It had nothing to do with the date/time of the Docker images I was creating; it had to do with my local WSL date/time.
Once I ran sudo ntpdate-debian everything worked.
If you don't have ntp, just install it and run the command. If you aren't using Debian you probably won't have the ntpdate-debian shell script, but you can use ntpd -gq as well. Basically, just update the date for your main WSL distro.
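For example, from inside a Debian/Ubuntu-based distro (assuming ntpdate is installed):
sudo ntpdate pool.ntp.org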
This command worked for me:
docker run -e TZ="$(cat /etc/timezone)" myimage

I lose my data when the container exits

Despite Docker's interactive tutorial and FAQ, I lose my data when the container exits.
I have installed Docker as described here: http://docs.docker.io/en/latest/installation/ubuntulinux
without any problem on Ubuntu 13.04, but it loses all data when the container exits.
iman@test:~$ sudo docker version
Client version: 0.6.4
Go version (client): go1.1.2
Git commit (client): 2f74b1c
Server version: 0.6.4
Git commit (server): 2f74b1c
Go version (server): go1.1.2
Last stable version: 0.6.4
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:05:47 Unable to locate ping
iman@test:~$ sudo docker run ubuntu apt-get install ping
Reading package lists...
Building dependency tree...
The following NEW packages will be installed:
iputils-ping
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 56.1 kB of archives.
After this operation, 143 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main iputils-ping amd64 3:20101006-1ubuntu1 [56.1 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 56.1 kB in 0s (195 kB/s)
Selecting previously unselected package iputils-ping.
(Reading database ... 7545 files and directories currently installed.)
Unpacking iputils-ping (from .../iputils-ping_3%3a20101006-1ubuntu1_amd64.deb) ...
Setting up iputils-ping (3:20101006-1ubuntu1) ...
iman@test:~$ sudo docker run ubuntu ping
2013/10/25 08:06:11 Unable to locate ping
iman@test:~$ sudo docker run ubuntu touch /home/test
iman@test:~$ sudo docker run ubuntu ls /home/test
ls: cannot access /home/test: No such file or directory
I also tested it with interactive sessions with the same result. Did I forget something?
EDIT: IMPORTANT FOR NEW DOCKER USERS
As @mohammed-noureldin and others said, this is actually NOT a container exiting and losing data. Every time, docker run just creates a new container.
You need to commit the changes you make to the container and then run it. Try this:
sudo docker pull ubuntu
sudo docker run ubuntu apt-get install -y ping
Then get the container id using this command:
sudo docker ps -l
Commit changes to the container:
sudo docker commit <container_id> iman/ping
Then run the container:
sudo docker run iman/ping ping www.google.com
This should work.
When you use docker run to start a container, it actually creates a new container based on the image you have specified.
Besides the other useful answers here, note that you can restart an existing container after it has exited, and your changes are still there.
docker start f357e2faab77 # restart it in the background
docker attach f357e2faab77 # reattach the terminal & stdin
There are the following ways to persist container data:
Docker volumes
Docker commit
a) Create a container from the ubuntu image and run a bash terminal.
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal:
# exit
d) Take note of your container id by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
In addition to Unferth's answer, it is recommended to create a Dockerfile.
In an empty directory, create a file called "Dockerfile" with the following contents.
FROM ubuntu
RUN apt-get update && apt-get install -y ping
ENTRYPOINT ["ping"]
Create an image using the Dockerfile. Let's use a tag so we don't need to remember the hexadecimal image number.
$ docker build -t iman/ping .
And then run the image in a container.
$ docker run iman/ping stackoverflow.com
There are really great answers above to the question. There might be no need for another answer, but I still want to give my personal opinion on the topic in the simplest words possible.
Here are some points about containers & images that will help us for a conclusion:
A docker image can be:
created from a given container
deleted
used to create any number of containers
A docker container can be:
created from an image
started
stopped
restarted
deleted
used to create any number of images
A docker run command does this:
Downloads an image or uses a cached image
Creates a new container out of it
Starts the container
When a Dockerfile is used to create an image:
It is already well known that the image will eventually be used to run a docker container.
After the docker build command is issued, Docker behind the scenes creates a running container with a base file system and follows the steps inside the Dockerfile to configure that container as per the developer's needs.
Once the container is configured according to the Dockerfile's specs, it is committed as an image.
The image gets ready to rock & roll!
Conclusion:
As we can see, a docker container is independent of a docker image.
A container can be restarted provided the unique ID of that container [use docker ps --all to get the id].
Any operation like making a new directory, creating files, installing tools, etc. can be done inside the container while it is running. Once the container is stopped, it persists all the changes. Stopping and restarting a container is like rebooting a computer system.
An already created container is always available for a restart, but when we issue the docker run command, a new container is created out of an image, and hence it is like a brand-new computer system. The changes made inside the old container, as we can now understand, are not available in this new container.
A final note:
I guess it's now obvious why the data seems to be lost, yet it is always there: in a different (old) container. So, take good note of the difference between the docker start and docker run commands, and never confuse the two.
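A quick way to see this with your own eyes (names are arbitrary):
docker run --name demo ubuntu bash -c 'date > /stamp'
docker diff demo
docker run --rm ubuntu ls /stamp
docker diff on the stopped container still shows A /stamp (the file persists there), while the third command runs a brand-new container and fails with "No such file or directory".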
I have got a much simpler answer to your question: run the following two commands.
sudo docker run -t -d --name mycontainername ubuntu /bin/bash
sudo docker ps -a
The ps -a command above returns a list of all containers. Take the name of the container that references the image name 'ubuntu'; Docker auto-generates names for containers (for example 'lightlyxuyzx') if you don't use the --name option.
The -t and -d options are important: the created container runs detached and can be reattached, as shown below.
With the --name option you can name your container, in my case 'mycontainername'.
sudo docker exec -ti mycontainername bash
The command above logs you into the container with a bash shell. From this point on, any changes you make in the container are automatically saved by Docker.
For example: apt-get install curl inside the container.
You can exit the container without any issues; Docker auto-saves the changes.
On the next usage, all you have to do is run these two commands every time you want to work with this container.
The command below will start the stopped container:
sudo docker start mycontainername
sudo docker exec -ti mycontainername bash
Another example, with ports and a shared volume, is given below:
docker run -t -d --name mycontainername -p 5000:5000 -v ~/PROJECTS/SPACE:/PROJECTSPACE 7efe2989e877 /bin/bash
In my case, 7efe2989e877 is the image id from a previously running container, which I obtained using:
docker ps -a
You might want to look at Docker volumes if you want to persist the data in your container. Visit https://docs.docker.com/engine/tutorials/dockervolumes/. The Docker documentation is a very good place to start.
My suggestion is to manage your containers with Docker Compose. It is an easy way to manage all of your project's containers; you can pin versions and link different containers to work together.
The docs are very simple to understand, better than Docker's own docs.
Docker-Compose Docs
A similar problem (and no way a Dockerfile alone could fix it) brought me to this page.
stage 0:
For all those hoping a Dockerfile could fix it: until --dns and --dns-search appear in Dockerfile support, there is no way to integrate intranet-based resources at build time.
stage 1:
After building the image using a Dockerfile (by the way, it's a serious glitch that the Dockerfile must be in the current folder), deploy the intranet-based content by running a docker run script. Example:
docker run -d \
  --dns=${DNSLOCAL} \
  --dns=${DNSGLOBAL} \
  --dns-search=intranet \
  --name packbsp-cont \
  -t pack/bsp \
  bash -c " \
    wget -r --no-parent http://intranet/intranet-content.tar.gz && \
    tar -xvf intranet-content.tar.gz && \
    sudo -u ${USERNAME} bash --norc"
stage 2:
Apply the docker run script in daemon mode, providing local DNS records, so the container can download and deploy the intranet content.
Important point: the run script should end with something like /usr/bin/sudo -u ${USERNAME} bash --norc to keep the container running even after the installation script finishes.
No, it's not possible to run the container in interactive mode for full automation purposes, as it would sit at an internal shell prompt until Ctrl-p Ctrl-q is pressed.
No, if an interactive bash is not executed at the end of the installation script, the container will terminate immediately after the script finishes, losing all installation results.
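A common pattern to keep a non-interactive container alive after its setup finishes (instead of the interactive bash --norc used above) is to end the command with something that blocks forever; install-stuff.sh is a placeholder here:
docker run -d --name keepalive-demo ubuntu bash -c './install-stuff.sh; tail -f /dev/null'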
stage 3:
The container is still running in the background, but it's unclear whether it has finished the installation procedure yet. Use the following block to determine when execution has finished:
while ! docker container top ${CONTNAME} | grep "00[[:space:]]\{12\}bash \--norc" -
do
echo "."
sleep 5
done
The script will proceed further only after the installation has completed, and this is the right moment to call commit, providing the current container id as well as the destination image name (it may be the same as in the build/run procedure, but with a tag appended for the local-installation purpose). Example: docker commit containerID pack/bsp:toolchained
See this link on how to get the proper containerID.
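Alternatively, one scriptable way to grab the container id, using the container name from the example above:
CONTID=$(docker ps -q --filter "name=packbsp-cont")
docker commit "${CONTID}" pack/bsp:toolchained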
stage 4:
The container has been updated with the local installs and committed into the newly assigned image (the one with the purpose tag added). It's now safe to stop the running container. Example: docker stop packbsp-cont
stage 5:
Any time the container with the local installs needs to run, start it with the previously saved image.
Example: docker run -d -t pack/bsp:toolchained
A brilliant answer here: How to continue a docker which is exited, from user kgs:
docker start $(docker ps -a -q --filter "status=exited")
(or in this case just docker start $(docker ps -ql), since you don't want to start all of them)
docker exec -it <container-id> /bin/bash
That second line is crucial: exec is used in place of run, and not on an image but on a container id. And you do it after the container has been started.
None of the answers address the point of this design choice. I think Docker works this way to prevent these two errors:
Repeated restart
Partial error
