I am trying to run a Shiny app on a remote server (here DigitalOcean) using Docker.
First, I created a package for my app as a .tar.gz file. Then:
Create the following Dockerfile:
FROM thinkr/rfull
COPY myapp_*.tar.gz /myapp.tar.gz
RUN R -e "install.packages('/myapp.tar.gz', repos = NULL, type = 'source')"
COPY Rprofile.site /usr/local/lib/R/etc
EXPOSE 3838
CMD ["R", "-e myapp::run()"]
Create the following Rprofile.site:
local({
  options(shiny.port = 3838, shiny.host = "0.0.0.0")
})
Then I build the image using
docker build -t myapp .
I push the image to DockerHub using
docker tag myapp myrepo/myapp:latest
docker push myrepo/myapp
I connect to my droplet on DigitalOcean
eval $(docker-machine env mydroplet)
I create a container from my image on Dockerhub
docker run -d -p 3838:3838 myrepo/myapp
So far it seems to work fine: no error messages, and I get the expected messages when I run docker logs mycontainer.
The problem is that I do not know how to actually access the running container. When I connect to the droplet IP, I get nothing ("This site can’t be reached"). If I use
docker inspect --format '{{ .NetworkSettings.IPAddress }}' mycontainer
I get an IP, but it seems to be a local one ('172.17.0.2').
When I run docker ps, here is what I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d4195XXXX myrepo/myapp "R '-e myapp::ru…" 10 days ago Up 10 days 0.0.0.0:3838->3838/tcp, 8787/tcp determined_brown
So the question is: how can I run my dockerized shiny app on my droplet IP address?
Check whether you have added a firewall rule to allow connections to port 3838:
https://www.digitalocean.com/docs/networking/firewalls/resources/troubleshooting/
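For example, if your droplet uses ufw (an assumption; many DigitalOcean droplets do, but you may be using Cloud Firewalls instead), a minimal sketch would be:
sudo ufw allow 3838/tcp
sudo ufw status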
First, you need to publish the port, which is what you already do.
Second, you need to access the IP address of the host machine where the port is published.
The easiest way is probably to check the output of docker-machine env mydroplet and use this IP, together with your published port.
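For example (a sketch; the IP shown is a placeholder, use whatever docker-machine reports for your droplet):
docker-machine ip mydroplet
# e.g. 203.0.113.10
curl http://203.0.113.10:3838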
I am following this blog on how to connect to a docker instance: https://phoenixnap.com/kb/how-to-ssh-into-docker-container. It mentions using docker attach <name>.
Trying this on my EC2 instance gives:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
849844c1e3a5 6501862...us-east-618356524 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:32788->8401/tcp ecs-prod-clia-lab-5-Applicationprodclia-lab-8c88d2e0bc83cfb1230
Now let's try `docker attach <container-name>`:
$ docker attach ecs-prod-clia-lab-5-Applicationprodclia-lab-8c88d2e0bc83cfb1230
Error: No such container: ecs-prod-clia-lab-5-Applicationprodclia-lab-8c88d2e0bc83cfb1230
So that actually does not work? What is the correct way to do this?
To get a shell in a running container, do this:
$ docker exec -it <container-id> /bin/sh
The attach sub-command gives you access to a running container's stdout. That's not what you want here.
However, if your container is meant to provide SSH as a service, you'll need to run it in such a way that its SSH port is exposed on the host, on some available port (like 2222).
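For example (a sketch, assuming a hypothetical image my-sshd-image that runs an SSH daemon on port 22):
$ docker run -d -p 2222:22 my-sshd-image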
Then you'd simply "SSH in" like this:
$ ssh 127.0.0.1 -p 2222
I am trying to access a named Postgres server from inside a docker container. I can access the server via its IP address, but not via its name. I've tried the --net=host and -p ServerName:5432:5432 options on the docker run command.
I will demonstrate the issue:
# on the host
$ ping ServerName
# This works
$ ping 10.1.1.25
# Works
# Then enter container with:
$ winpty docker exec -it containerName bash
$ ping 10.1.1.25
# Works
$ ping ServerName
# Does NOT work
I would guess that I need to give docker some kind of mapping from the host's knowledge of the network to the container. I presume that would be through the network functionality, but I can't find any instructions that I understand.
And before anyone suggests it, the postgres instance cannot be moved, including being moved into a docker container of its own.
Output of docker ps is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15c7903f7ccd imageName "tail -f /dev/null" 47 minutes ago Up 47 minutes 0.0.0.0:5432->5432/tcp, 0.0.0.0:8000->8888/tcp containerName
Solved: I needed to use the --dns option in the docker run command. Then, inside the container, I needed to use the fully qualified name, i.e. serverName.companyName.com, instead of just serverName.
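For example (a sketch; 10.1.1.1 is a placeholder for your company's DNS server address):
$ docker run --dns 10.1.1.1 -it imageName bash
# then, inside the container:
$ ping serverName.companyName.com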
Thanks to @Dupinder Singh for pointing me to some useful DNS articles (see comments on the question).
I'm new to Docker and I want to copy files to/from my local machine directly to a docker container that's on a remote machine, without having to scp files from my local machine to the remote machine and then use docker cp to copy those files into the container. My container does not have an SSH server installed on it, nor do I want to rebuild my image to include one.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding, so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
According to your comment in reply to David's answer, here is an explanation of how to bind-mount the directory for your visualization files into your container:
On the host system create a directory, e.g. mkdir /home/sarah/viz/. Then, mount it to your docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place the files in the directory /data/viz, which then lands in /home/sarah/viz/ on the host system, where you can download them to your local computer with scp or rsync, or however you can connect to the remote machine.
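For example, from your local computer (a sketch; the username and hostname are placeholders):
scp -r sarah@remote_host:/home/sarah/viz/ ./viz/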
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just do docker-compose up -d and everything acts according to the config in the compose-file.
I have a docker container in a virtual machine where I am hosting my Postgres database, but when I pull that image to my local machine, the container does not show up. I have tried import/export and save/load, but still I can't get the container to show on my local machine. Any help will be much appreciated!
You have to create containers from the images after pulling them.
The simplest way is
docker create --name my_container bf141206f773
where bf141206f773 is the image's hash. You can also use its full name:tag.
To start your new container:
docker start my_container
To enter your new container:
docker exec -it my_container /bin/bash
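Note that docker run combines the create and start steps, so the first two commands above can also be collapsed into one (a sketch using the same image hash):
docker run -d --name my_container bf141206f773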
If you want to see how I do this in a deployment-environment, check out my Laravel QuickStart project's docker files: https://github.com/phpexpertsinc/laravel_quickstart
Hi, I'm very new to Docker and I'm trying to get familiar with it by following the tutorial on the official site. Now I'm stuck at part 2 of the tutorial (you can check the link here: https://docs.docker.com/get-started/part2/#run-the-app).
I have the sample application code, Dockerfile, and requirements.txt, exactly the same as in the official tutorial:
$ ls
app.py Dockerfile requriements.txt
My Dockerfile looks like this
FROM python:2.7-slim
WORKDIR /app
ADD . /app
RUN pip install -r requriements.txt
EXPOSE 80
ENV NAME World
CMD ["python", "app.py"]
All 3 files have content exactly the same as the tutorial as well. I was able to build the image with this command:
$ docker build -t friendlyhello .
Everything looks great. Now I have the sample project image:
$ docker images
REPOSITORY TAG IMAGE ID CREATED
friendlyhello latest 82b8a0b52e91 39 minutes ago
python 2.7-slim 1c7128a655f6 5 days ago
hello-world latest 48b5124b2768 4 months ago
I then ran the app according to the official tutorial with this command
$ docker run -d -p 4000:80 friendlyhello
c1893f7eea9f1b708f639653b8eba20733d8a45d3812b442bc295b43c6c7dd5c
Edit: This is my container after running the above command:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
c1893f7eea9f friendlyhello "python app.py" 15 minutes ago Up 16 minutes
And the official tutorial guides readers to have a look at http://localhost:4000 as they have already mapped machine port 4000 to container port 80
Unfortunately, I couldn't get any response from that URL.
$ curl http://localhost:4000
curl: (7) Failed to connect to localhost port 4000: Connection refused
I'm a total newbie and I have no idea what to do... How can I get it to work?
Thanks in advance for any response.
Edit: I did as @johnharris85 suggested. Below is the output:
$ curl http://$(echo docker-machine ip default):4000
curl: (6) Couldn't resolve host 'docker-machine'
curl: (6) Couldn't resolve host 'ip'
curl: (6) Couldn't resolve host 'default'
It seems like it doesn't work either.
Edit: @johnharris85 corrected his suggestion and @user8023051 clarified where this command comes from and what is going on under the hood. It is working now :) Thanks!
$ curl http://$(docker-machine ip default):4000
<h3>Hello World!</h3><b>Hostname:</b> c1893f7eea9f<br/><b>Visits:</b> <i>cannot connect to Redis, counter disabled</i>
I'm not very familiar with docker, but it sounds like your setup is such that your docker instance is running in a virtual machine, and you're trying to access an application bound to localhost (the VM) from your Windows machine. The reason you get a connection refused from curl here is that nothing is actually listening on port 4000 on the host (Windows).
Try to find the IP that your docker instance is using by:
$ docker-machine ip default
Now that you know the IP address, try curl again. You can even have it evaluated within the command like so:
$ curl http://$(docker-machine ip default):4000
If you are running Docker on Mac and trying to connect to your dependencies (postgres etc.) using localhost, replace localhost with docker.for.mac.localhost.
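For example (a sketch with placeholder credentials and database name):
$ psql -h docker.for.mac.localhost -p 5432 -U myuser mydb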
You are using port 9089 in your docker config, but your program or server is running on a different port. To check the port for XAMPP, open the XAMPP control panel and look at the port number listed there; in my case, the port number is 80.
The docker instance is not running on your local machine, but rather on the docker-machine VM. You can log in with the docker-machine ssh command and then run curl "localhost:6666" on the docker machine.
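For example (a sketch; "default" is the usual docker-machine name, yours may differ):
$ docker-machine ssh default
$ curl "localhost:6666"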