linux curl can't find docker webserver - docker

I have been Googling this for two days, and the only workaround I have found is using --net=host instead of -p 8081:80 to connect to the web server. I have created a basic web server on a normal non-web-server RHEL 7 box. The exposed port is 80. I build and start the container, and two web pages are copied in. Inside the container, "curl http://localhost/index.html" writes out "The Web Server is Running". Outside, curl fails with "curl: (7) Failed connect to localhost:80; Connection refused". All the posts say it should work, but it doesn't.
The container was created as follows:
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data -p 8081:80 --name=d_webserver webserver
I have done docker inspect d_webserver and see "Gateway": "172.17.0.1" and "IPAddress": "172.17.0.2". Both curl http://localhost:8081/index.html and curl http://172.17.0.2:8081/index.html fail. Only if I use
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data --net=host --name=d_webserver webserver
does it work as expected. From everything I have read, -p 8081:80 should let me see the web page, but it just doesn't work. No firewall is up, so that's not the problem. Does anyone know why port 8081 is not letting me connect to the web server? What step am I missing?
Also, I would like to use Chrome on my PC to open http://xxx.xxx.xxx.xxx:8081/index.html on the Linux box instead of running the browser on the Linux box. PC Chrome says the Linux IP can't be reached. There is a gateway box in between, so that is probably the problem. Is there some way to let PC Chrome reach the Linux docker web server via the gateway box, or must I start Chrome on the Linux box every time? That rather defeats the point of making the docker web server in the first place if people have to ssh into the box and start up a browser.
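A hedged aside: if you can ssh through the gateway to the Linux host, a local SSH tunnel is one way to browse the published port from the PC; the user and host names below are placeholders.
# Forward the PC's local port 8081 through the gateway to the Docker host
ssh -L 8081:localhost:8081 user@linux-docker-host
# Then browse on the PC to http://localhost:8081/index.html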
We are using local repositories for security reasons. These are the RPMs I saved in the rpms directory for the install.
rpms $ ls
deltarpm-3.6-3.el7.x86_64.rpm
httpd-2.4.6-97.el7_9.4.x86_64.rpm
yum-utils-1.1.31-54.el7_8.noarch.rpm
Dockerfile:
# Using RHEL 7 base image and Apache Web server
# Version 1
# Pull the rhel image from the local registry
FROM rhel7_base:latest
USER root
MAINTAINER "Group"
# Copy X dependencies that are not available on the local repository to /home/
COPY rpms/*.rpm /home/
# Update image
# Install all the RPMs using yum. First add the pixie repo to grab the rest of the dependencies
# the subscription and signature checking will be turned off for install
RUN cd /home/ && \
yum-config-manager --add-repo http://xxx.xxx.xxx.xxx/repos/rhel7/yumreposd/redhat.repo && \
cat /etc/yum.repos.d/redhat.repo && \
yum update --disableplugin=subscription-manager --nogpgcheck -y && rm -rf /var/cache/yum && \
yum install --disableplugin=subscription-manager --nogpgcheck *.rpm -y && rm -rf /var/cache/yum && \
rm *.rpm
# Copy test web page directory into container at /var/www/html.
COPY unit_test/ /var/www/html/unit_test/
# Add default Web page and expose port for testing
RUN echo "The Web Server is Running" > /var/www/html/index.html
EXPOSE 80
# Start the service
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-D", "FOREGROUND"]
I built it this way and started it; I should have been able to connect on port 8081, but curl fails.
docker build -t="webserver" .
docker run -d -v /data/docker_webpage/unit_test/data:/var/www/html/unit_test/data -w /var/www/html/unit_test/data -p 8081:80 --name=d_webserver webserver
curl http://localhost/index.html (time out)
curl http://localhost:8081/index.html (time out)
curl http://172.17.0.2:8081/index.html (time out)
curl http://172.17.0.2/index.html (time out)
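For reference, with -p 8081:80 only two of those four URLs can ever work: the host's port 8081 forwards to the container's port 80, and the container IP serves on port 80 directly, so localhost:80 and 172.17.0.2:8081 will always fail. A minimal diagnostic sketch using standard docker and iproute2 commands, assuming the container name above:
docker port d_webserver                 # should print: 80/tcp -> 0.0.0.0:8081
docker logs d_webserver                 # confirm httpd actually started
ss -tlnp | grep 8081                    # confirm something is listening on the host
curl http://localhost:8081/index.html   # host port -> container port 80
curl http://172.17.0.2/index.html       # container IP serves on 80, not 8081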

Related

GUI application via Docker - X11 - "Unable to init server"

I'm trying to run Firefox in a Debian docker image but can't connect to the X11 server.
I'm using the method described here, but changed the base image to the latest Debian. I also changed the user creation method.
Dockerfile
FROM debian:latest
RUN apt-get update && apt-get install -y firefox-esr
RUN useradd --shell /bin/bash --create-home developer && \
usermod -aG sudo developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
Building the container
docker build -t firefox .
Command to start the container
docker run -ti --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
firefox
ERROR
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0
Operating system
OpenSUSE Leap 15.2
Context
I'm doing the above to understand how to run a GUI app via docker. The aim is to run the latest version of FreeCAD (v19), which is currently broken on OpenSUSE.
docker run --rm \
--net=host \
--env="DISPLAY" \
--volume="$HOME/.Xauthority:/home/developer/.Xauthority:rw" \
firefox
This should work with your Dockerfile!
A couple of points:
The .Xauthority file also needs to be shared, since it holds the cookies and auth sessions for the X server; hence it has to be mounted read/write too.
If you don't want to use --net=host, you can listen on a TCP port bound to the X11 unix socket and forward that into the container.
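A hedged sketch of that TCP alternative using socat; it assumes display :0 and the default docker0 gateway address 172.17.0.1, and it temporarily disables X access control, which is acceptable only for a quick test.
# On the host: relay TCP port 6000 (display :0) to the X11 unix socket
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:/tmp/.X11-unix/X0 &
# Loosen X access control so the relayed connection is accepted (coarse!)
xhost +
# In the container, DISPLAY points at the docker0 gateway's TCP listener
docker run --rm -e DISPLAY=172.17.0.1:0 firefox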

Docker Toolbox refused to connect on the browser - Tried different solutions - Windows 7

I have installed Docker Toolbox on Windows 7.
Everything has been installed correctly.
Now I try to build and run a DockerFile.
Dockerfile
FROM debian:9
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq \
&& apt-get clean -y
ADD . /app/
WORKDIR /app
RUN npm install
VOLUME /app/logs
CMD npm run start
After successfully running docker build -t test . and docker run -it -d -p 3306:3306 test, I try to access it via my browser by opening:
http://192.168.99.100:3306
which corresponds to http://[docker-machine-ip]:port
But it refuses to connect.
After searching on the internet, I tried several solutions:
1. Use the container IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' [id]
http://[containerIP]:port
2. Add port forwarding on Oracle VM defaut machine
VirtualBox -> Machine settings -> Network -> Adapter 1 (NAT) -> Advanced, Port Forwarding
name : test
Host ip : 127.0.0.1
Host port : 3306
Guest port : 3306
I even tried setting the Guest IP to 192.168.99.100 and leaving the Host IP empty.
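The same rule can also be added from the command line; a sketch assuming the Toolbox VM is named "default":
VBoxManage controlvm default natpf1 "test,tcp,127.0.0.1,3306,,3306"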
3. Try different ports
I tried different ports to see whether the problem was a port that was already in use.
I even tried the --publish-all (-P) option, but then no ports show up at all in docker ps -a:
docker run -it -d -P test
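One detail worth noting: -P only publishes ports declared with EXPOSE, and the Dockerfile above declares none, which would explain the empty PORTS column. A sketch of the addition, assuming the Node app really listens on 3306; the app must also bind to 0.0.0.0 rather than 127.0.0.1, or no published port will reach it.
EXPOSE 3306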
4. Deactivate the windows firewall
Both public and private.
None of those solutions worked for me, and I don't know what to do next.
Any help would be appreciated. Thank you.

Run Omnet++ inside docker with x11 forwarding on windows. SSH not working

Cannot SSH into a container running on a Windows host machine.
For a university project I built a docker image containing Omnet++ to provide a consistent development environment.
The image uses phusion's baseimage and sets up X11 forwarding via SSH the way rogaha did in his docker-desktop image.
The image works perfectly fine on a Linux host system, but on Windows and OS X I was unable to ssh into the container from the host machine.
I reckon this is due to the different implementation of Docker on Windows and OS X. As explained in this article by Microsoft, Docker uses a NAT network for containers by default to separate the host and container networks.
My problem is that I don't know how to reach the running container via ssh.
I already tried the following:
Changing the container network to a transparent network as described in the Microsoft article. The following error occurs on both Windows and OS X:
docker network create -d transparent MyTransparentNetwork
Error response from daemon: legacy plugin: plugin not found
On Windows, running Docker in VirtualBox instead of Hyper-V.
Explicitly publishing port 22 like this:
docker run -p 52022:22 containerName
ssh -p 52022 root@ContainerIP
Dockerfile
FROM phusion/baseimage:latest
MAINTAINER Robin Finkbeiner
LABEL Description="Docker image for Nesting Stupro University of Stuttgart containing full omnet 5.1.1"
# Install dependencies
RUN apt-get update && apt-get install -y \
xpra \
rox-filer \
openssh-server \
pwgen \
xserver-xephyr \
xdm \
fluxbox \
sudo \
git \
xvfb \
wget \
build-essential \
gcc \
g++ \
bison \
flex \
perl \
qt5-default \
tcl-dev \
tk-dev \
libxml2-dev \
zlib1g-dev \
default-jre \
doxygen \
graphviz \
libwebkitgtk-3.0-0 \
libqt4-opengl-dev \
openscenegraph-plugin-osgearth \
libosgearth-dev \
openmpi-bin \
libopenmpi-dev
# Set the env variable DEBIAN_FRONTEND to noninteractive
ENV DEBIAN_FRONTEND noninteractive
#Enabling SSH -- from phusion baseimage documentation
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Copied command from https://github.com/rogaha/docker-desktop/blob/master/Dockerfile
# Configuring xdm to allow connections from any IP address and ssh to allow X11 Forwarding.
RUN sed -i 's/DisplayManager.requestPort/!DisplayManager.requestPort/g' /etc/X11/xdm/xdm-config
RUN sed -i '/#any host/c\*' /etc/X11/xdm/Xaccess
RUN ln -s /usr/bin/Xorg
RUN echo X11Forwarding yes >> /etc/ssh/ssh_config
# OMnet++ 5.1.1
# Create working directory
RUN mkdir -p /usr/omnetpp
WORKDIR /usr/omnetpp
# Fetch Omnet++ source
RUN wget https:******omnetpp-5.1.1-src-linux.tgz
RUN tar -xf omnetpp-5.1.1-src-linux.tgz
# Path
ENV PATH $PATH:/usr/omnetpp/omnetpp-5.1.1/bin
# Configure and compile
RUN cd omnetpp-5.1.1 && \
xvfb-run ./configure && \
make
# Cleanup
RUN apt-get clean && \
rm -rf /var/lib/apt && \
rm /usr/omnetpp/omnetpp-5.1.1-src-linux.tgz
Solution that worked for me
First of all, the linked Microsoft article is only valid for Windows containers.
This article explains very well how docker networks work.
To simplify the explanation, I drew a simple diagram: simple SSH into a docker network.
To reach a container on a bridged network, the necessary ports have to be published explicitly.
Publish the port
docker run -p 22 ${imageName}
Find the port mapping on the host machine
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a2ec2bd2b53b renderfehler/omnet_ide_baseimage "/sbin/my_init" 17 hours ago Up 17 hours 0.0.0.0:32773->22/tcp tender_newton
SSH onto the container using the mapped port (32773 in the listing above)
ssh -p 32773 root@0.0.0.0
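Rather than reading the PORTS column, docker port reports the mapping directly; a small sketch using the container name from the listing above:
docker port tender_newton 22
# prints something like: 0.0.0.0:32773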

Rails app docker container not accessible from Windows host

I am trying to make a simple docker container that runs the Rails app from the directory that I launch it in.
Everything appears to be fine except when I run the container and try to access it from my Windows host at the IP address that Docker Machine gives me, it responds with a connection refused error message.
I even used the Nginx Dockerfile as a reference, because the Nginx Dockerfile actually builds a container that is accessible for me.
Here is my Dockerfile so far:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -p 80
EXPOSE 80
I build the image using this command
docker build -t rails_server .
I then run it using this command
docker run -d -p 80:80 rails_server
And here is how I try to access the webpage:
curl $(docker-machine ip)
And this is what I get back:
curl: (7) Failed to connect to 192.168.99.100 port 80: Connection refused
The problem here seems to be that the app is listening on 127.0.0.1:80, so the service will not accept connections from outside the container. Could you check whether making the Rails server listen on 0.0.0.0 solves the issue?
You can do that using the -b flag of rails s:
FROM ruby:2.3.1
RUN gem install rails && \
apt-get update -y && \
apt-get install -y nodejs
VOLUME ["/web_app"]
ADD . /web_app
WORKDIR /web_app
RUN bundle install
CMD rails s -b 0.0.0.0 -p 80
EXPOSE 80
The port is only exposed to the VM that Docker runs inside. You still have to expose port 80 of your VM to your local machine so it can connect to it. I think the best approach is making your container listen on another port, say 7070, and then using a simple nginx proxy_pass to serve the content to the outside (listening on port 80).
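A hedged sketch of that nginx idea; the upstream address assumes the container's port 7070 is published on the same machine nginx runs on, and the file path is hypothetical.
# /etc/nginx/conf.d/rails.conf
server {
    listen 80;
    location / {
        # forward to the Rails container published on port 7070
        proxy_pass http://127.0.0.1:7070;
        proxy_set_header Host $host;
    }
}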

Running 2 services

I'm building an image for GitHub's Linkurious project, based on an image already on the hub for the neo4j database. The neo4j image automatically runs the server on port 7474 and my image runs on port 8000.
When I run my image I publish both ports (could I do this with EXPOSE?):
docker run -d --publish=7474:7474 --publish=8000:8000 linkurious
but only my server seems to run. If I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?
* Edit I *
here's my Dockerfile:
FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start
* Edit II *
To perhaps help explain my quandary, I've asked a different question.
EXPOSE is there to allow inter-container communication (within the same docker daemon), with the docker run --link option.
Port mapping is there to map EXPOSEd ports to the host, to allow client-to-container communication. So you need --publish.
See also "Difference between “expose” and “publish” in docker".
See also an example with "Advanced Usecase with Docker: Connecting Containers"
Make sure though that the ip is the right one ($(docker-machine ip default)).
If you are using a VM (meaning, you are not using docker directly on a Linux host, but on a Linux VM with VirtualBox), make sure the mapped ports 7474 and 8000 are port forwarded from the host to the VM.
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "name,tcp,,8000,,8000"
In the OP's case, this is using neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["neo4j"]
It is not meant to be used for installing another service (like nodejs), where the CMD cd linkurious.js && npm start would completely override the neo4j base image CMD (meaning neo4j would never start).
It is meant to be run on its own:
# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j
And then used by another image, with a --link neo4j:neo4j directive.
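A hedged sketch of that last step, assuming the Linkurious app is rebuilt as its own image named linkurious, without the neo4j base:
# start neo4j first (as above), then link it into the app container
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious
# inside the app container, neo4j is then reachable at http://neo4j:7474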
