Can't access docker containers from host - ruby-on-rails

I have a simple image for a Rails service with the following Dockerfile:
FROM ruby:2.4.4
MAINTAINER sadzid.suljic#gmail.com
RUN apt-get update && apt-get install -y \
build-essential \
nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install
COPY . ./
EXPOSE 3000
CMD ["rails", "s", "-p", "3000"]
I built the image and ran the container with these commands:
docker build -t chat/users .
docker run -P --name users_service chat/users
I have this output on the host:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
23716591e656 chat/users "rails s -p 3000" 6 minutes ago Up 5 minutes 0.0.0.0:32774->3000/tcp users_service
$ lsof -n -i :32774 | grep LISTEN
com.docke 32891 ssuljic 18u IPv4 0x41c034d5d4627f5f 0t0 TCP *:filenet-re (LISTEN)
com.docke 32891 ssuljic 19u IPv6 0x41c034d5d3beb9b7 0t0 TCP [::1]:filenet-re (LISTEN)
$ curl localhost:32774
curl: (52) Empty reply from server
When I run curl localhost:3000 inside the container I get the proper response from my API.
Does anyone know why I can't access the container from my host?
I'm using Docker for Mac with this version:
$ docker -v
Docker version 18.03.1-ce, build 9ee9f40

Some versions of Rails bind to localhost by default, which explains why you can reach the app from inside the container but not from the host (to the server, the host is effectively a different machine).
Adding -b 0.0.0.0 to the CMD instruction should solve the problem.
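For the Dockerfile above, that change would look something like this (keeping the exec-form CMD):
CMD ["rails", "s", "-b", "0.0.0.0", "-p", "3000"]
With the server bound to all interfaces, the port published by Docker becomes reachable from the Mac.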

Try:
docker run -p 3000:3000 --name users_service chat/users
This maps host port 3000 to container port 3000, so the app should be reachable at:
http://localhost:3000
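If you keep -P instead, Docker picks the host port for you; you can look it up with docker port (the 0.0.0.0:32774->3000/tcp mapping in the docker ps output above is exactly that):
docker port users_service 3000
# prints something like 0.0.0.0:32774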
Looking into this more, I think this was either a Chrome issue or a network issue, as I was having the same problem.
Here is how I resolved it:
Make sure your /etc/hosts file has 127.0.0.1 localhost (more than likely it's already there)
Cleared cookies and cached files
Cleared the host cache
Go to chrome://net-internals/#dns and click Clear host cache
Restarted Chrome
Reset the network adapter
Note: this was unintentional, so I'm not sure whether it was part of the fix, but I wanted to include it just in case.
Unfortunately I'm not sure which step fixed the problem.

Related

Port 8983 is already being used by another process

I was able to run SOLR 7.x on my Apple M1 Mac without any issues.
We recently moved from SOLR 7.x to SOLR 8.x.
Now the command below throws this error consistently.
Command:
docker run --name solr bitnami/solr:latest
Error:
Port 8983 is already being used by another process
Info for you:
-----------
I create a Solr Docker image based on docker.io/bitnami/solr:8.11.1.
The commands below give empty output in my Mac terminal:
lsof -i tcp:8983
lsof -i :8983
Command:
telnet localhost 8983
Output:
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
Dockerfile:
------------
FROM docker.io/bitnami/solr:8.11.1
.
.
.
RUN mkdir -p $SOLR_HOME/home/primary/conf
RUN mkdir -p $SOLR_HOME/home/reindex/conf
RUN mkdir -p $SOLR_HOME/home/primary/data
RUN mkdir -p $SOLR_HOME/home/reindex/data
COPY ./primary/core.properties $SOLR_HOME/home/primary/
COPY ./primary/custom.properties $SOLR_HOME/home/primary/
COPY ./conf/* $SOLR_HOME/home/primary/conf/
COPY ./reindex/core.properties $SOLR_HOME/home/reindex/
COPY ./reindex/custom.properties $SOLR_HOME/home/reindex/
COPY ./conf/* $SOLR_HOME/home/reindex/conf/
USER root
RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*
.
.
.
EXPOSE 8983
This error with the Apple M1 chip, Docker, and SOLR 8.11.1 combination is very strange. Any help is greatly appreciated.
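Since lsof on the Mac shows nothing on 8983, one way to check whether the clash is inside the container itself is to inspect the container's own network namespace, e.g. with the nicolaka/netshoot image used for the Vue.js question further down (assuming the container is named solr, as in the docker run command above):
docker run -it --rm --net container:solr nicolaka/netshoot ss -lnt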

Running docker container not accessible from host (localhost:8081)

Using Ubuntu.
Based on this guide:
https://www.freecodecamp.org/news/how-to-use-routing-in-vue-js-to-create-a-better-user-experience-98d225bbcdd9/
I have created a minimal Vue.js project with the project structure below:
https://github.com/dev-samples/samples/tree/master/vuejs-001
frontend-router/
build/
config/
src/
static/
test/
build.sh
Dockerfile.dev
package-lock.json
package.json
Where:
Dockerfile.dev
FROM node:10
RUN apt install curl
RUN mkdir /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
# make the 'app' folder the current working directory before running npm install
WORKDIR /app
RUN npm install
CMD [ "npm", "run", "dev" ]
I am building the image and running the container from that image with:
docker build -t frontend-router-image -f Dockerfile.dev .
docker rm -f frontend-router-container
docker run -it -p 8081:8080 -v ${PWD}:/app/ -v /app/node_modules --name frontend-router-container frontend-router-image
which gives:
DONE Compiled successfully in 1738ms 3:49:45 PM
I Your application is running here: http://localhost:8080
Since I add -p 8081:8080 to the docker run command, I would expect to be able to access the application from my host browser at:
http://localhost:8081/
but it just gives an error in the browser.
It works fine when I run it with vanilla npm from my host. But why can't I access the application when it's run inside a Docker container?
Source code here:
https://github.com/dev-samples/samples/tree/master/vuejs-001
As suggested below I have tried:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e011fb9e39e8 frontend-router-image "docker-entrypoint.s…" 12 seconds ago Up 9 seconds 0.0.0.0:8081->8080/tcp frontend-router-container
$ docker run -it --rm --net container:frontend-router-container nicolaka/netshoot ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:8080 0.0.0.0:*
For comparison this project works fine:
https://github.com/dev-samples/samples/tree/master/vuejs-002
Meaning that when I run a container from it, I can access the web application in my host browser at localhost:8081.
Based on this:
https://github.com/webpack/webpack-dev-server/issues/547
and:
https://dev.to/azawakh/don-t-forget-to-give-host-0-0-0-0-to-the-startup-option-of-webpack-dev-server-using-docker-1483
https://pythonspeed.com/articles/docker-connection-refused/
It works if I change:
host: 'localhost', // can be overwritten by process.env.HOST
to:
host: '0.0.0.0', // can be overwritten by process.env.HOST
in the file: /frontend-router/config/index.js
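Since the comment in config/index.js says the value can be overwritten by process.env.HOST, an alternative (a sketch, not something tested here) is to leave the file alone and pass the host through the container's environment instead:
docker run -it -p 8081:8080 -e HOST=0.0.0.0 -v ${PWD}:/app/ -v /app/node_modules --name frontend-router-container frontend-router-image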
When you get a connection reset, it usually means that nothing is listening on the port.
It seems you are listening on localhost; inside Docker you must
listen on 0.0.0.0.
In your file config/index.js the host is localhost; you should remove or override that host directive.
If you listen on 127.0.0.1 or localhost, you are listening only on the loopback interface, so
inside the container the web server can be reached only by local processes.
Another possible source of problems is connecting to the wrong port:
if you run with docker run -it -p 8081:8080 you must access http://localhost:8081/
See
Publish or expose port (-p, --expose)
from https://docs.docker.com/engine/reference/commandline/run/
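After switching the dev server to 0.0.0.0, the same netshoot check from the question should show it bound to all interfaces rather than to 127.0.0.1 (a sketch of what to expect, not captured output):
docker run -it --rm --net container:frontend-router-container nicolaka/netshoot ss -lnt
# expect a line like: LISTEN 0 128 0.0.0.0:8080 0.0.0.0:*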

Rails app docker container not accessible from Windows host

I am trying to make a simple docker container that runs the Rails app from the directory that I launch it in.
Everything appears to be fine, except that when I run the container and try to access it from my Windows host at the IP address Docker Machine gives me, I get a connection refused error.
I even used the Nginx Dockerfile as a reference, because the Nginx Dockerfile actually builds a container that is accessible for me.
Here is my Dockerfile so far:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -p 80
EXPOSE 80
I build the image using this command
docker build -t rails_server .
I then run it using this command
docker run -d -p 80:80 rails_server
And here is how I try to access the webpage:
curl $(docker-machine ip)
And this is what I get back:
curl: (7) Failed to connect to 192.168.99.100 port 80: Connection refused
The problem here seems to be that the app is listening on 127.0.0.1:80, so the service will not accept connections from outside the container. Could you check whether making the Rails server listen on 0.0.0.0 solves the issue?
You can do that using the -b flag of rails s:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -b 0.0.0.0 -p 80
EXPOSE 80
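With that change, rebuilding and re-running exactly as in the question should make the curl check return the app's response (assuming nothing else on the VM is holding port 80):
docker build -t rails_server .
docker run -d -p 80:80 rails_server
curl $(docker-machine ip)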
The port is only exposed to the VM that Docker runs inside. You still have to expose port 80 of your VM to your local machine so it can connect to it. I think the best approach is to make your container listen on another port, such as 7070, and then use a simple nginx proxy_pass to serve the content to the outside (listening on port 80).
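If you are using Docker Machine with VirtualBox, forwarding the VM's port to the Windows host can also be done with a NAT rule, along the lines of the boot2docker steps in the next question (the VM name default and the ports here are assumptions):
docker-machine stop default
VBoxManage modifyvm "default" --natpf1 "forwardHostPort80ToDockerVM,tcp,,80,,80"
docker-machine start default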

Can't map ports between host and a Docker container on OSX

I have tried to follow some instructions on GitHub
to set up the port forwarding, but I had no luck. Would you please help? I built the container following an example in a book, and here is the Dockerfile:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Hi, I am in your container' > /usr/share/nginx/html/index.html
EXPOSE 8000
Steps taken:
$ boot2docker stop
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 delete forwardHostPort8000ToDockerVM
$ VBoxManage modifyvm "boot2docker-vm" --natpf1 "forwardHostPort8000ToDockerVM,tcp,,8000,,8000"
$ boot2docker start
$ docker run -d -p 127.0.0.1:8000:8000 --name static_web static_web nginx -g "daemon off;"
122ba8949685ce91b84890656c399b19028cb2e8a7e8be3d4a19122eba9ab592
This is the result:
$ curl 127.0.0.1:8000
curl: (52) Empty reply from server
$ telnet 127.0.0.1 8000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host
Note:
If I don't bind it to the 127.0.0.1 interface (i.e., only -p 8000:8000), then it will work with my VM's IP:
$ curl 192.168.59.103:8000
Hi, I am in your container
Env:
$ boot2docker -v
Boot2Docker-cli version: v1.3.1
Git commit: 57ccdb8
$ docker -v
Docker version 1.3.1, build 4e9bbfa
I'm using VirtualBox version 4.3.20 running under OS X 10.10.1.
It was actually a bug on my side. I exposed port 8000 in the Dockerfile; however, I hadn't changed the corresponding setting in nginx. So I changed the mapping back to port 80 and started the container as follows, and it worked. Now I can hit it with a browser on another machine.
docker run -d -p 8000:80 --name static_web static_web nginx -g "daemon off;"
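For completeness, a sketch of the Dockerfile with the EXPOSE line matching nginx's default port (the rest unchanged from the question):
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx
RUN echo 'Hi, I am in your container' > /usr/share/nginx/html/index.html
EXPOSE 80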

How to use docker container as apache server?

I just started using docker and followed following tutorial: https://docs.docker.com/engine/admin/using_supervisord/
FROM ubuntu:14.04
RUN apt-get update && apt-get upgrade
RUN apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
and
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Build and run:
sudo docker build -t <yourname>/supervisord .
sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
My question is: when Docker runs on my server with IP http://88.xxx.x.xxx/, how can I access the Apache server running inside the Docker container from the browser on my computer? I would like to use a Docker container as a web server.
You will have to use port forwarding to be able to access your docker container from the outside world.
From the Docker docs:
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers.
But if you want containers to accept incoming connections, you will need to provide special options when invoking docker run.
So, what does this mean? You will have to specify a port on your host machine (typically port 80) and forward all connections on that port to the docker container. Since you are running Apache in your docker container you probably want to forward the connection to port 80 on the docker container as well.
This is best done via the -p option for the docker run command.
sudo docker run -p 80:80 -t -i <yourname>/supervisord
The part of the command that says -p 80:80 means that you forward port 80 from the host to port 80 on the container.
When this is set up correctly, you can point a browser at http://88.x.x.x and the connection will be forwarded to the container as intended.
The Docker docs describes the -p option thoroughly. There are a few ways of specifying the flag:
# Maps the provided host_port to the container_port but only
# binds to the specific external interface
-p IP:host_port:container_port
# Maps the provided host_port to the container_port for all
# external interfaces (all IPs)
-p host_port:container_port
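For example, a hypothetical invocation of the interface-specific form (the loopback IP and host port 8080 here are chosen purely for illustration):
# bind only to the host's loopback interface, host port 8080 -> container port 80
docker run -p 127.0.0.1:8080:80 -t -i <yourname>/supervisord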
Edit: When this question was originally posted there was no official Docker image for the Apache web server. Now one exists.
The simplest way to get Apache up and running is to use the official Docker container. You can start it by using the following command:
$ docker run -p 80:80 -dit --name my-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This way you simply mount a folder on your file system so that it is available in the docker container and your host port is forwarded to the container port as described above.
There is an official image for Apache. The image documentation contains instructions on how you can use this official image as a base for a custom image.
To see how it's done take a peek at the Dockerfile used by the official image:
https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile
Example
Ensure files are accessible to root
sudo chown -R root:root /path/to/html_files
Host these files using the official Docker image
docker run -d -p 80:80 --name apache -v /path/to/html_files:/usr/local/apache2/htdocs/ httpd:2.4
The files are now accessible on port 80.
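A quick way to check from the host that Apache is answering (a sketch; the exact headers will vary):
curl -I http://localhost/
# expect an HTTP response with a Server header from httpd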
