Rails app docker container not accessible from Windows host - docker

I am trying to make a simple docker container that runs the Rails app from the directory that I launch it in.
Everything appears to be fine, except that when I run the container and try to access it from my Windows host at the IP address Docker Machine gives me, I get a connection refused error.
I even used the Nginx Dockerfile as a reference, because that one actually builds a container I can reach.
Here is my Dockerfile so far:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -p 80
EXPOSE 80
I build the image using this command
docker build -t rails_server .
I then run it using this command
docker run -d -p 80:80 rails_server
And here is how I try to access the webpage:
curl $(docker-machine ip)
And this is what I get back:
curl: (7) Failed to connect to 192.168.99.100 port 80: Connection refused

The problem here seems to be that the app is listening on 127.0.0.1:80, so the service will not accept connections from outside the container. Could you check whether changing the Rails server to listen on 0.0.0.0 solves the issue?
You can do that using the -b flag of rails s:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -b 0.0.0.0 -p 80
EXPOSE 80
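With that change, rebuilding and re-running with the same commands as in the question should return a response instead of a connection refused error:
docker build -t rails_server .
docker run -d -p 80:80 rails_server
curl $(docker-machine ip)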

The port is only exposed to the VM that runs Docker, not to your Windows host. You still have to forward port 80 of that VM to your local machine so it can connect. I think the best approach is to have your container listen on another port, such as 7070, and then use a simple nginx proxy_pass to serve the content to the outside on port 80.
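As a rough sketch of that idea (the upstream address 127.0.0.1:7070 is an assumption pointing at wherever the Rails container's port is published; it is not taken from the question), the nginx site configuration could look like:
server {
    listen 80;
    location / {
        # pass every request through to the Rails container published on port 7070 (assumed)
        proxy_pass http://127.0.0.1:7070;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}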

Related

linux curl can't find docker webserver

I have been Googling this for 2 days and the only workaround I have found is using --net=host instead of -p 8081:80 to connect to the web server. I have created a basic web server on a normal, non-web-server RHEL 7 box. The exposed port is 80. I build and start the container and 2 web pages are copied in. Inside the container, "curl http://localhost/index.html" writes out "The Web Server is Running". Outside, curl fails with "curl: (7) Failed connect to localhost:80; Connection refused". All the posts say it should work, but it doesn't.
The container was created as follows:
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data -p 8081:80 --name=d_webserver webserver
I have run docker inspect d_webserver and I can see "Gateway": "172.17.0.1" and "IPAddress": "172.17.0.2". curl http://localhost:8081/index.html and curl http://172.17.0.2:8081/index.html both fail. Only if I use
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data --net=host --name=d_webserver webserver
does it work as expected. From everything I have read, -p 8081:80 should allow me to see the web page, but it just doesn't work. There is no firewall up, so that's not the problem. Does anyone know why port 8081 is not letting me connect to the web server? What step am I missing?
Also, I would like to use Chrome on my PC to open http://xxx.xxx.xxx.xxx:8081/index.html on the Linux box instead of running a browser on the Linux box itself. Chrome on the PC says the Linux IP can't be reached. There is a gateway box in between, so that is probably the problem. Is there some way to make Chrome on the PC reach the Linux Docker web server through the gateway box, or must I always start a browser on the Linux box? That rather defeats the point of making the Docker web server in the first place if people have to ssh into the box and start up a browser.
We are using local repositories because of security. These are the RPMs I saved in the rpms directory for the install.
rpms $ ls
deltarpm-3.6-3.el7.x86_64.rpm
httpd-2.4.6-97.el7_9.4.x86_64.rpm
yum-utils-1.1.31-54.el7_8.noarch.rpm
Dockerfile:
# Using RHEL 7 base image and Apache Web server
# Version 1
# Pull the rhel image from the local registry
FROM rhel7_base:latest
USER root
MAINTAINER "Group"
# Copy X dependencies that are not available on the local repository to /home/
COPY rpms/*.rpm /home/
# Update image
# Install all the RPMs using yum. First add the pixie repo to grab the rest of the dependencies
# the subscription and signature checking will be turned off for install
RUN cd /home/ && \
yum-config-manager --add-repo http://xxx.xxx.xxx.xxx/repos/rhel7/yumreposd/redhat.repo && \
cat /etc/yum.repos.d/redhat.repo && \
yum update --disableplugin=subscription-manager --nogpgcheck -y && rm -rf /var/cache/yum && \
yum install --disableplugin=subscription-manager --nogpgcheck *.rpm -y && rm -rf /var/cache/yum && \
rm *.rpm
# Copy test web page directory into container at /var/www/html.
COPY unit_test/ /var/www/html/unit_test/
# Add default Web page and expose port for testing
RUN echo "The Web Server is Running" > /var/www/html/index.html
EXPOSE 80
# Start the service
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["/usr/sbin/httpd"]
I built it this way and then started it; I should have been able to connect on port 8081, but curl fails.
docker build -t="webserver" .
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data -p 8081:80 --name=d_webserver webserver
curl http://localhost/index.html (time out)
curl http://localhost:8081/index.html (time out)
curl http://172.17.0.2:8081/index.html (time out)
curl http://172.17.0.2/index.html (time out)
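For what it's worth, a minimal way to sanity-check the port mapping from the RHEL host itself (assuming the d_webserver container above is running) would be:
docker port d_webserver              # should print something like 80/tcp -> 0.0.0.0:8081
docker logs d_webserver              # confirm httpd actually started in the foreground
curl -v http://localhost:8081/index.html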

TCP/Telnet from inside docker container

I have a 'mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim' Docker container, with 2 things on it:
- a custom SDK running on it
- a dotnet5 application that connects to the SDK with TCP
If I connect to the container's bash, I can use telnet localhost 54321 to connect to my SDK successfully.
If I run the Windows version of the SDK on my development computer and run my application under IIS Express instead of Docker, I can successfully connect with a telnet library (host 'localhost', port '54321'); this works.
However, I want to run both the SDK and my dotnet application in a Docker container, and when I try to connect from inside the container (the same thing that works in the IIS-hosted version), it does not work. By running 'telnet localhost 54321' from the Docker container's command line I can confirm that the SDK is running. What am I doing wrong?
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
RUN apt-get update && apt-get install -y telnet
RUN apt-get update && apt-get install -y libssl1.1
RUN apt-get update && apt-get install -y libpulse0
RUN apt-get update && apt-get install -y libasound2
RUN apt-get update && apt-get install -y libicu63
RUN apt-get update && apt-get install -y libpcre2-16-0
RUN apt-get update && apt-get install -y libdouble-conversion1
RUN apt-get update && apt-get install -y libglib2.0-0
RUN mkdir /sdk
COPY ["Server/Sdk/SomeSDK*", "sdk/"]
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Server/MyProject.Server.csproj", "Server/"]
COPY ["Shared/MyProject.Shared.csproj", "Shared/"]
COPY ["Client/MyProject.Client.csproj", "Client/"]
RUN dotnet restore "Server/MyProject.Server.csproj"
COPY . .
WORKDIR "/src/Server"
RUN dotnet build "MyProject.Server.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyProject.Server.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyProject.Server.dll"]
Code (working on IIS, when SDK is running on Windows, but not when both SDK and code are running inside the same Docker container):
var telnetClient = new TelnetClient("localhost", 54321, TimeSpan.FromSeconds(1), new CancellationToken());
await telnetClient.Connect();
Thread.Sleep(2000);
await telnetClient.Send("init");
Command line (this works BOTH from the Windows CLI and from the Docker bash, so it works even when the code above does not):
$telnet localhost 54321
$init
The issue might be the following, but I'm not sure, as I've received this result from using the direct command line 'telnet localhost 54321' from within dotnet: 'telnet: connection refused by remote host'
Make sure your docker run command also uses --network=host if you want to share the host's network and reach out of the container, or create a bridge with --network=bridge (for example to talk to another container).
By default the Docker container is spawned on a separate, dedicated and private subnet on your machine (usually 172.17.0.0/16), which is different from your machine's loopback subnet (127.0.0.0/8).
For connecting into the host's subnet (in this case 127.0.0.0/8) you need --network=host. For communication within the same container, though, it is not necessary and works out of the box.
For accessing the service in the container from the outside, you need to make sure the application's port is published, either with --publish HOST_PORT:DOCKER_PORT or with --publish-all (random host ports are then assigned; check with docker ps).
Host to container (normal)
# host
telnet <container's IP> 8000 # connects, typing + return shows in netcat stdout
# container
docker run --rm -it --publish 8000:8000 alpine nc -v -l -p 8000
Container to host (normal)
# host
nc -v -l -p 8000
# container, docker run -it alpine
apk add busybox-extras
telnet localhost 8000 # hangs, is dead
Container to host (on host network)
# host
nc -v -l -p 8000
# container, docker run -it --network=host alpine
apk add busybox-extras
telnet localhost 8000 # connects, typing + return shows in netcat stdout
Within container
# start container
docker run -it --name alpine alpine
apk add busybox-extras
# exec into container to create service on 4000
docker exec -it alpine nc -v -l -p 4000
# exec into container to create service on 5000
docker exec -it alpine nc -v -l -p 5000
# telneting from the original terminal (the one with apk add)
telnet localhost 4000 # connects, works
telnet localhost 5000 # connects, works

Docker Toolbox refused to connect on the browser - Tried different solutions - Windows 7

I have installed Docker Toolbox on Windows 7.
Everything has been installed correctly.
Now I try to build and run a Dockerfile.
Dockerfile
FROM debian:9
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq \
&& apt-get clean -y
ADD . /app/
WORKDIR /app
RUN npm install
VOLUME /app/logs
CMD npm run start
After successfully running docker build -t test . and docker run -it -d -p 3306:3306 test, I try to access it via my browser by going to:
http://192.168.99.100:3306
which corresponds to http://[docker-machine-ip]:port
But it refuses to connect.
After searching on the internet, I tried several solutions:
1. Use the container IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' [id]
http://[containerIP]:port
2. Add port forwarding on Oracle VM defaut machine
VirtualBox -> Machine settings -> Network -> Adapter 1 (NAT) -> Advanced, Port Forwarding
name : test
Host ip : 127.0.0.1
Host port : 3306
Guest port : 3306
I even tried setting the Guest IP to 192.168.99.100 and leaving the Host IP empty.
3. Try different ports
I tried different ports to see whether the problem was caused by a port that was already in use.
I even tried the --publish-all (-P) option, but then no ports show up at all in docker ps -a:
docker run -it -d -P test
4. Deactivate the windows firewall
Both public and private.
None of those solutions worked for me and I don't know what to do next.
Any help would be appreciated. Thank you.

Can't connect to docker container with Web Browser [duplicate]

I want to run Django in a simple Docker container.
First I built my image from a Dockerfile. There wasn't anything special in it (only FROM, RUN and COPY commands).
Then I ran my container with the command:
docker run -tid -p 8000:8000 --name <container_name> <image>
Entered my container:
docker exec -it <container_name> bash
Ran Django server:
python manage.py runserver
Got:
Starting development server at http://127.0.0.1:8000/
But when I go to 127.0.0.1:8000 I see nothing:
The 127.0.0.1 page isn’t working
There is no Nginx or other server in front of it.
What am I doing wrong?
Update 1 (Dockerfile)
FROM ubuntu:16.04
MAINTAINER Max Malyshev <user>
COPY . /root
WORKDIR /root
RUN apt-get update
RUN apt-get install python-pip -y
RUN apt-get install postgresql -y
RUN apt-get install rabbitmq-server -y
RUN apt-get install libpq-dev python-dev -y
RUN apt-get install npm -y
RUN apt-get install mongodb -y
RUN pip install -r requirements.txt
The problem is that you're binding the development server to 127.0.0.1 inside your Docker container, so it is not reachable from the host OS.
If you open another console into your container and make an HTTP request to 127.0.0.1:8000, it will work.
The key is to make sure the development server inside the container listens on all IPv4 addresses; you can do this by using 0.0.0.0 instead of 127.0.0.1.
Try running the following command to start your Django development server instead:
python manage.py runserver 0.0.0.0:8000
Also, for further inspiration, you can check out this working Dockerfile for hosting a Django application with the built-in development server: https://github.com/Niklas9/django-unixdatetimefield/blob/master/Dockerfile.
You need to expose port 8000 in your Dockerfile and run a WSGI server like gunicorn. If you follow the steps here you should be good... https://semaphoreci.com/community/tutorials/dockerizing-a-python-django-web-application
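For illustration, the relevant Dockerfile additions might look like the lines below (assuming gunicorn is listed in requirements.txt and the project's WSGI module is called mysite.wsgi, neither of which is shown in the question):
# hypothetical example: expose the app port and serve it with gunicorn on all interfaces
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "mysite.wsgi:application"]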
I agree with Niklas9's comments. If I could suggest an enhancement, try:
python manage.py runserver [::]:8000
The difference is that [::] also supports IPv6 addresses.
I also noticed some packages for MongoDB. If you want to test and develop locally, you can create Docker containers and use Docker Compose to test your app on your machine before deploying to a dev/stage/prod environment.
You can find out more about how to set up a Django app linked to a database backend in Docker in this tutorial: http://programmathics.com/programming/docker/docker-compose-for-django/ (Disclaimer: I am the creator of that website.)
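As a rough sketch of that setup (service names, image tag and ports here are illustrative, not taken from the question), a docker-compose.yml could look like:
version: "3"
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4
    ports:
      - "27017:27017"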

Can't access docker containers from host

I have a simple image for a Rails service with the following Dockerfile:
FROM ruby:2.4.4
MAINTAINER sadzid.suljic#gmail.com
RUN apt-get update && apt-get install -y \
build-essential \
nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install
COPY . ./
EXPOSE 3000
CMD ["rails", "s", "-p", "3000"]
I built the image and ran the container with these commands:
docker build -t chat/users .
docker run -P --name users_service chat/users
I have this output on the host:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
23716591e656 chat/users "rails s -p 3000" 6 minutes ago Up 5 minutes 0.0.0.0:32774->3000/tcp users_service
$ lsof -n -i :32774 | grep LISTEN
com.docke 32891 ssuljic 18u IPv4 0x41c034d5d4627f5f 0t0 TCP *:filenet-re (LISTEN)
com.docke 32891 ssuljic 19u IPv6 0x41c034d5d3beb9b7 0t0 TCP [::1]:filenet-re (LISTEN)
$ curl localhost:32774
curl: (52) Empty reply from server
When I run curl localhost:3000 inside the container I get the proper response from my API.
Does anyone know why I can't access the container from my host?
I'm using Docker for Mac with this version:
$ docker -v
Docker version 18.03.1-ce, build 9ee9f40
Some versions of Rails bind to localhost by default, which explains why you can access it from within the container but not from the host (the host is seen as a different machine).
Adding -b 0.0.0.0 to the CMD instruction should solve the problem.
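With the Dockerfile from the question, that would be, for example:
CMD ["rails", "s", "-b", "0.0.0.0", "-p", "3000"]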
Try:
docker run -p 3000:3000 --name users_service chat/users
This will map host port 3000 to container port 3000, so the app is reachable at:
http://localhost:3000
Looking into this more, I think this was either a Chrome issue or a network issue, as I was having the same problem.
Here is how I resolved it:
Make sure your /etc/hosts file has 127.0.0.1 localhost (more than likely it's already there)
Cleared Cookies and Cached files
Cleared host cache
Go to chrome://net-internals/#dns and click Clear Host Cache
Restarted chrome
Reset Network Adapter
Note: this was unintentional, so I'm not sure whether it was part of the fix, but I wanted to include it just in case.
Unfortunately, I'm not sure which step fixed the problem.
