Docker port not being exposed

Using Windows and I have pulled the Jenkins image successfully via
docker pull jenkins
I am running a new container with the following command and it seems to start fine. But when I try to access the Jenkins page in my browser, I just get the error message below; I was expecting to see the Jenkins login page. I hit the same issue with other images such as Redis, Couchbase and JBoss/WildFly. What am I doing wrong? I'm new to Docker and following tutorials that describe the command below to expose ports; some answers here and the docs show the same. Please advise. Thanks.
docker run -tid -p 127.0.0.1:8097:8097 --name jen1 --rm jenkins
In the browser, I just get a standard 'Problem loading page' error:
The site could be temporarily unavailable or too busy.

First, it looks a little strange to use -tid. Since you're trying to run it detached, it'd be better to use just -d, and use -ti, for example, to access the container via a shell: docker exec -ti jen1 bash.
Second, the container's localhost is not the same as the host's localhost, so I'd run the container without binding to 127.0.0.1. If you do want to use it, you may specify --net=host, which makes 127.0.0.1 the same inside and outside Docker.
Third, try accessing port 8080 first to get the initial admin password.
So, in summary:
docker run -d -p 8097:8080 --name jen1 --rm jenkins
Then,
http://172.17.0.2:8080/
Finally, unlock Jenkins by setting the admin password. You can have a look at the startup logs with docker logs jen1.
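For example, the initial admin password can be read straight from the container (assuming the image's default JENKINS_HOME of /var/jenkins_home):
docker exec jen1 cat /var/jenkins_home/secrets/initialAdminPassword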

Take a look at the Jenkins Dockerfile from the official image:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
.....
.....
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
As you can see, port 8080 is exposed, not 8097.
Change your command to
docker run -tid -p 8097:8080 --name jen1 --rm jenkins
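If you are ever unsure which ports an image exposes, a quick way to check is to inspect its metadata, for example:
docker image inspect --format '{{.Config.ExposedPorts}}' jenkins
# prints something like map[50000/tcp:{} 8080/tcp:{}]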

What your command does is connect host port 8097 to port 8097 in the Jenkins container, but how do you know that the image exposes/uses port 8097? (Spoiler: it doesn't.)
This image uses port 8080, so you want to map your local 8097 to that port.
Change the command to this:
docker run -tid -p 127.0.0.1:8097:8080 --name jen1 --rm jenkins
I just tested your command with this small fix, and it works locally for me.
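A quick way to verify the mapping once the container is up:
docker port jen1
# 8080/tcp -> 127.0.0.1:8097
curl http://127.0.0.1:8097/
# should return an HTTP response from Jenkins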

Related

How to access a docker container via SSH using its IP address?

I'm using NVIDIA Docker in a Linux machine (Ubuntu 20.04). I've created a container named user1 using nvidia/cuda:11.0-base image as follows:
docker run --gpus all --name user1 -dit nvidia/cuda:11.0-base /bin/bash
And, here is what I see if I run docker ps -a:
admin#my_desktop:~$ docker ps -a
CONTAINER ID   IMAGE                   COMMAND       CREATED         STATUS         PORTS     NAMES
a365362840de   nvidia/cuda:11.0-base   "/bin/bash"   3 seconds ago   Up 2 seconds             user1
I want to access that container via ssh from a totally different machine (other than my_desktop, which is the host), using its unique IP address. First of all, is it possible to grant each container a unique IP address? If so, how can I do it? Thanks in advance.
In case you want to access your container with ssh from an external VM, you need to do the following:
Install the ssh daemon for your container
Run the container and expose its ssh port
I would propose the following Dockerfile, which builds from nvidia/cuda:11.0-base and creates an image with the ssh daemon inside
Dockerfile
# Instruction for Dockerfile to create a new image on top of the base image (nvidia/cuda:11.0-base)
FROM nvidia/cuda:11.0-base
ARG root_password
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# Set the root password from the build argument
RUN echo "root:${root_password}" | chpasswd
# Allow root login over ssh with password authentication
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image from the Dockerfile
docker image build --build-arg root_password=password --tag nvidia/cuda:11.0-base-ssh .
Create the container
docker container run -d -P --name ssh nvidia/cuda:11.0-base-ssh
Run docker ps to see the container port
Finally, access the container
ssh -p 49157 root@<VM_IP>
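If you are unsure which host port -P assigned, you can look it up (assuming the container is named ssh as above):
docker port ssh 22
# e.g. 0.0.0.0:49157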
EDIT: As David Maze correctly pointed out, you should be aware that the root password will be visible in the image history. Also, this approach overrides the original container process.
If this process is to be adopted, it needs to be modified for production use; it serves as a starting point for someone who wishes to add ssh to their container.
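For example, you can see the password leak for yourself (a quick check, assuming the image tag used above):
docker history --no-trunc nvidia/cuda:11.0-base-ssh | grep root_password
# the layer that ran chpasswd shows the build argument in clear text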

How to get --add-host parameter working for a docker build?

I'm building a simple docker image based on a Dockerfile, and I'd like to add an alias to the hosts file to allow me to access an application on my local machine rather than out on the internet.
When I run the following...
> docker build --add-host=example.com:172.17.0.1 -f ./Dockerfile -t my-image .
> docker run --name=my-container --network=my-bridge --publish 8080:8080 my-image
> docker exec -it my-container cat /etc/hosts
I don't see example.com 172.17.0.1 like I'd expect. Where does the host get added? Or is it not working? The documentation is very sparse, but it looks like I'm using the param correctly.
My Dockerfile is doing very little - just specifying a base image, installing a few things, and setting some environment variables. It looks somewhat like this:
FROM tomcat:9.0.40-jdk8-adoptopenjdk-openj9
RUN apt update
RUN apt --assume-yes install iputils-ping
# ... a few more installs ...
COPY ./conf /usr/local/tomcat/conf
COPY ./lib /usr/local/tomcat/lib
COPY ./webapps /usr/local/tomcat/webapps
ENV SOME_VAR="some value"
# ... more env variables ...
EXPOSE 8080
When the image is created and the container is run my web app works fine, but I'd like to have certain communications (to example.com) redirected to an app running on my local machine.
When you run the container you can pass --add-host:
docker run --add-host=example.com:172.17.0.1 --name=my-container --network=my-bridge --publish 8080:8080 my-image
The --add-host feature during build is designed to allow overriding a host during the build, but not to persist that configuration in the image.
See also this question about the docker build --add-host command.
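You can then verify the entry (the IP shown is whatever you passed to --add-host):
docker exec -it my-container cat /etc/hosts
# the output should now contain a line like: 172.17.0.1  example.com
On Docker 20.10 and later you could also pass --add-host=example.com:host-gateway to refer to the host without hard-coding the bridge IP.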

Expose an application that runs inside a docker container

I'm trying to expose a nodejs application that runs inside a docker container:
docker run -p 3005:3005 -p 5858:5858 -i -t -v /usuarios centos-nodejs:1.0 /bin/bash
After that command, I start my application:
cd usuarios
node index
and then the application is running inside the docker container.
How can I expose a port so that I can access something like localhost:5858/my_api_here in my browser?
It seems the nodejs application is bound to localhost:5858 only inside the container. That's why you cannot access it via 127.0.0.1:5858 from the host. You need to make it bind to 0.0.0.0:5858; after that you can access it on 127.0.0.1:5858 from the host.
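A quick way to check the binding (a sketch, assuming ss is available inside the image):
docker exec -it <container_id> ss -tln
# 127.0.0.1:5858 means localhost-only; 0.0.0.0:5858 or *:5858 means reachable through the published port
In an Express app, for example, that usually means passing '0.0.0.0' (or no host at all) to app.listen rather than '127.0.0.1'.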
With the command below, it works:
docker run -p 3005:3005 -p 5858:5858 -i -t -v C:\Users\lgermano\Documents\Repositorios:/opt/rede/workspace centos-nodejs:1.0 /bin/bash

DNS resolution within the container

I have a docker image which is built from the following file.
FROM java:7
MAINTAINER Tushar Gandhi
ARG version
ENV version=$version
ARG port
ENV port=$port
RUN mkdir -p /cacheDir/services/live/prediction/p$port/$version/logs
RUN ls -tlr /cacheDir/services/live/prediction/p$port/
RUN mkdir -p /cacheDir/services/releases/prediction/p$port/$version/
RUN mkdir -p /cacheDir/services/predictionmodel
ADD target/predictionDependencies/* /cacheDir/services/predictionmodel/
ADD /target/prediction-0.0.13-SNAPSHOT.jar /cacheDir/services/releases/prediction/p$port/$version/prediction-0.0.13-SNAPSHOT.jar
ADD /target/instance.properties /cacheDir/services/releases/prediction/p$port/$version/instance.properties
ADD /target/logback.xml /cacheDir/services/releases/prediction/p$port/$version/logback.xml
RUN ls -ltr /cacheDir/services/live/prediction/p$port/$version/
RUN ls -ltr /cacheDir/services/releases/prediction/p$port/$version/
RUN ls -ltr /cacheDir/services/predictionmodel
ENTRYPOINT ["sh","-c","java -server -Xmx2g -Xloggc:/cacheDir/services/live/prediction/p${port}/${version}/logs/gc.log -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/cacheDir/services/live/prediction/p${port}/${version}/oom.dump -Dlogback.configurationFile=/cacheDir/services/releases/prediction/p${port}/${version}/logback.xml -Dlog.home=/cacheDir/services/live/prediction/p${port}/${version}/logs -Dlogback.debug=true -Dbroker.l^Ct=sv-kafka6.pv.sv.nextag.com:9092,sv-kafka7.pv.sv.nextag.com:9092,sv-kafka8.pv.sv.nextag.com:9092,sv-kafka9.pv.sv.nextag.com:9092 -jar /cacheDir/services/releases/prediction/p${port}/${version}/prediction-0.0.13-SNAPSHOT.jar $port /cacheDir/services/releases/prediction/p${port}/${version}/instance.properties /com/abc/services/$ZK_PATH"]
I'm using the following build command to build the image.
docker build --build-arg version=test1 --build-arg port=3001 -f Dockerfile -t prediction:test1 .
The image builds successfully and the container comes up fine. The run command used:
sudo docker run -p 7105:3001 -v ~/PredictionVolume/logs/:/cacheDir/services/live/prediction/p5030/Testing1/logs/ -e ZK_PATH=qa -t prediction:test
Now, the problem is that when my application runs in a docker container, it tries to access the URL qa-zk1.com:2181. This URL is accessible from my system but not from the docker container. Can anyone please suggest a way to make the URL accessible from the container?
[Edit] I have been trying different methods and found that I was able to ping google.com, which showed me that the internet connection is working. If the internet is working, then that URL should also be accessible, but it isn't, so it seems to be a DNS resolution problem. I tried with the IP address and was able to hit the service properly; now I need to find out how to reach the service using the hostname rather than the IP address.
If you can reach the site by IP, it means that inside the container you are pointing to a DNS server which does not know the "qa-zk1.com" name.
You have 2 options:
Add the IP to the container's hosts file (/etc/hosts)
Update the container's DNS configuration
See Configure container DNS for more details.
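For example, either flag can be added to the run command from the question (a sketch with placeholder IPs):
sudo docker run --add-host qa-zk1.com:<zk-host-ip> -e ZK_PATH=qa -t prediction:test
sudo docker run --dns <internal-dns-ip> -e ZK_PATH=qa -t prediction:test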

How to use a docker container as an apache server?

I just started using docker and followed following tutorial: https://docs.docker.com/engine/admin/using_supervisord/
FROM ubuntu:14.04
RUN apt-get update && apt-get upgrade
RUN apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
and
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Build and run:
sudo docker build -t <yourname>/supervisord .
sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
My question is: when docker runs on my server with IP http://88.xxx.x.xxx/, how can I access the apache server running inside the docker container from the browser on my computer? I would like to use a docker container as a web server.
You will have to use port forwarding to be able to access your docker container from the outside world.
From the Docker docs:
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers.
But if you want containers to accept incoming connections, you will need to provide special options when invoking docker run.
So, what does this mean? You will have to specify a port on your host machine (typically port 80) and forward all connections on that port to the docker container. Since you are running Apache in your docker container you probably want to forward the connection to port 80 on the docker container as well.
This is best done via the -p option for the docker run command.
sudo docker run -p 80:80 -t -i <yourname>/supervisord
The part of the command that says -p 80:80 means that you forward port 80 from the host to port 80 on the container.
When this is set up correctly you can point a browser at http://88.x.x.x and the connection will be forwarded to the container as intended.
The Docker docs describe the -p option thoroughly. There are a few ways of specifying the flag:
# Maps the provided host_port to the container_port but only
# binds to the specific external interface
-p IP:host_port:container_port
# Maps the provided host_port to the container_port for all
# external interfaces (all IPs)
-p host_port:container_port
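Once the container is running you can double-check the mapping and reachability (a quick check, assuming port 80 is open in the server's firewall):
docker ps
# the PORTS column should show 0.0.0.0:80->80/tcp
curl -I http://88.x.x.x/
# should return Apache's response headers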
Edit: When this question was originally posted there was no official docker image for the Apache web server. Now there is one.
The simplest way to get Apache up and running is to use the official Docker container. You can start it by using the following command:
$ docker run -p 80:80 -dit --name my-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This way you simply mount a folder on your file system so that it is available in the docker container and your host port is forwarded to the container port as described above.
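A quick check that the mount works (assuming the my-app container above is running and $PWD is the mounted directory):
echo '<h1>It works</h1>' > index.html
curl http://localhost/index.html
# should return the page you just created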
There is an official image for apache. The image documentation contains instructions on how you can use this official image as a base for a custom image.
To see how it's done take a peek at the Dockerfile used by the official image:
https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile
Example
Ensure files are accessible to root
sudo chown -R root:root /path/to/html_files
Host these files using official docker image
docker run -d -p 80:80 --name apache -v /path/to/html_files:/usr/local/apache2/htdocs/ httpd:2.4
Files are accessible on port 80.
