How to access a remote telnet server from within a Docker container?

I would like to watch the Star Wars ASCII movie served by the telnet host "towel.blinkenlights.nl" from within a Docker container.
Given this Dockerfile (based on nerdalert):
FROM alpine:latest
RUN apk add busybox-extras
ENTRYPOINT ["/usr/bin/telnet", "towel.blinkenlights.nl"]
and these build and run commands:
docker build . -t starwars
docker run --rm -i -P starwars
I receive the following error message:
telnet: can't connect to remote host (213.136.8.188): Connection refused
I also tried this run command, with the same error:
docker run --rm --network host -P starwars
and changed the Dockerfile base image to bitnami/minideb:stretch, with no success.
How should I change the Dockerfile or the docker run command so that I can reach this (or any) remote telnet server?
Without Docker, from my Windows host system, I can reach the telnet server towel.blinkenlights.nl without problems.
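A first diagnostic step is to separate Docker from the network itself: if TCP port 23 on the target host is not reachable at all, no Dockerfile change will help (towel.blinkenlights.nl has in fact been offline for long stretches). As a sketch, such a reachability check can be written as a small Python helper (`port_open` is a made-up name, not a Docker or BusyBox tool):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the telnet port of the target host.
# port_open("towel.blinkenlights.nl", 23)
```

Running the same check both on the host and inside a throwaway container (for example via docker run --rm python:3-alpine) shows whether the problem is the container's networking or the remote server.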

Related

Docker port not being exposed

I am using Windows and have pulled the Jenkins image successfully via
docker pull jenkins
I am running a new container with the following command, and it seems to start fine. But when I try to access the Jenkins page in my browser, I just get the error message below instead of the expected Jenkins login page. The same happens with other images such as Redis, Couchbase and JBoss/WildFly. What am I doing wrong? I am new to Docker and following tutorials that describe this command for exposing ports (the same is given in some answers here and in the docs). Please advise. Thanks.
docker run -tid -p 127.0.0.1:8097:8097 --name jen1 --rm jenkins
In the browser, I just get a generic 'Problem loading page' error:
The site could be temporarily unavailable or too busy.
First, using -tid looks a little strange. Since you're trying to run it detached, plain -d would be better; use -ti when you want an interactive shell, for example docker exec -ti jen1 bash.
Second, localhost inside the container is not the same as localhost on the host, so I'd run the container without binding to 127.0.0.1. If you do want that behaviour, you can specify --net=host, which makes 127.0.0.1 the same inside and outside Docker.
Third, access port 8080 first to enter the initial admin password.
So, in summary:
docker run -d -p 8097:8080 --name jen1 --rm jenkins
Then,
http://172.17.0.2:8080/
Finally, unlock Jenkins by setting the admin password. You can take a look at the startup logs with: docker logs jen1
Take a look at the Jenkins Dockerfile here:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
.....
.....
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
As you can see port 8080 is being exposed and not 8097.
Change your command to
docker run -tid -p 8097:8080 --name jen1 --rm jenkins
What your command does is connect host port 8097 to port 8097 in the container, but how do you know that the image exposes/uses port 8097? (Spoiler: it doesn't.)
This image uses port 8080, so you want to map your local port 8097 to that one.
Change the command to this:
docker run -tid -p 127.0.0.1:8097:8080 --name jen1 --rm jenkins
I just tested your command with this small fix, and it works for me locally.

Google Compute Engine Container Port Closed

I added a firewall rule to open port 8080. If I click the SSH button in the GCE console and run the following in the host shell:
nc -l -p 8080 127.0.0.1
I can detect the opened port. If I then go to the container's shell with:
docker run --rm -i -t <image> /bin/sh
and run the same netcat command, I can't detect the open port.
I went down this troubleshooting route because I couldn't connect to a node:alpine container running the ws npm package for a demo websocket server. Here is my Dockerfile:
# specify the node base image with your desired version node:<version>
FROM node:alpine
# replace this with your application's default port
EXPOSE 8080
WORKDIR /app
RUN apk --update add git
The fix was to publish the port when starting the container:
docker run --rm -i -t -p 8080:8080 <image> /bin/sh

How to connect to docker-machine via its ip adress (hyper-v)

I have a backend that I want to run in a Docker container so that I can connect to it from another computer or device.
I created an external virtual switch in Hyper-V and then created a new virtual machine connected to this switch with the command:
docker-machine create -d hyperv --hyperv-virtual-switch <NameOfVirtualSwitch> <nameOfNode>
In the VM's network settings, I connect it to this external virtual switch.
These are the commands I use to run the container:
docker container prune
docker image prune
docker build -t nestjsdocker:latest .
docker run -it -p 3001:3001 --name {container-name} nestjsdocker:latest
Here is my Dockerfile:
FROM node:10-alpine
WORKDIR /src/app
COPY . .
RUN npm install
EXPOSE 3001
CMD ["npm","start"]
When I type docker-machine ip {name of my vm} I get 10.10.0.242, but when I open http://10.10.0.242:3001/ in the browser, I get the error 'This site can’t be reached'.
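Before suspecting the Hyper-V switch, it is worth checking whether anything is listening on 10.10.0.242:3001 at all: npm start takes a while to come up, and the container may have exited. A polling helper along these lines (a sketch, not part of docker-machine or Docker) makes that check scriptable:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to (host, port) succeeds, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            socket.create_connection((host, port), timeout=1.0).close()
            return True
        except OSError:
            time.sleep(0.5)
    return False

# Example: wait_for_port("10.10.0.242", 3001, timeout=60)
```

If this never succeeds from inside the VM itself (docker-machine ssh <nameOfNode>), the container or the app is the problem; if it succeeds there but not from another machine, look at the virtual switch and the Windows firewall.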

docker exposing multiple ports - not working

I have a Docker image that, when run, executes the following script:
java -jar myjar.jar & disown
python2.7 manage.py runserver 0.0.0.0:9999
The myjar.jar program exposes a web service and listens on port 11111.
In my Dockerfile I expose these two ports like so:
EXPOSE 9999 11111
This is how I run the image:
docker run --rm -p 9999:9999 -p 11111:11111 myimage
I can access the Python web process at the URL localhost:9999/admin/.
When I try to access the Java web service with curl localhost:11111/myservice?wsdl, I get connection refused.
When I enter the container with a terminal using
docker exec -i -t <container_id> bash
and run curl localhost:11111/myservice?wsdl, I get the WSDL content.
Where is my port binding going wrong? (Or the port exposing? Or maybe the way I am running the jar file?)
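No answer is recorded here, but the symptoms (published port refused from the host while curl works inside the container) match one classic cause: the Java service binding only to 127.0.0.1 inside the container. Note that the Django server, the one that works, is explicitly started on 0.0.0.0:9999. Whether the jar really does this is an assumption, but the effect of the bind address can be sketched without Docker at all; on Linux, a listener bound to 127.0.0.1 refuses connections addressed to any other local address:

```python
import socket

# A server bound to 127.0.0.1 accepts only connections addressed to that
# exact interface. Inside a container, traffic forwarded by -p arrives on
# the container's eth0 address, so a loopback-only listener refuses it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# Reachable through the address it is bound to:
socket.create_connection(("127.0.0.1", port), timeout=1.0).close()

# On Linux, 127.0.0.2 also loops back, but the listener is not bound to it:
try:
    socket.create_connection(("127.0.0.2", port), timeout=1.0).close()
    refused = False
except ConnectionRefusedError:
    refused = True
print(refused)  # True on Linux
```

If that is the cause here, the fix is to make the jar listen on 0.0.0.0 (the exact flag depends on the library serving the WSDL), not to change the docker run command.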

How to use docker container as apache server?

I just started using Docker and followed this tutorial: https://docs.docker.com/engine/admin/using_supervisord/
FROM ubuntu:14.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
and
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
Build and run:
sudo docker build -t <yourname>/supervisord .
sudo docker run -p 22 -p 80 -t -i <yourname>/supervisord
My question is: when Docker runs on my server with IP 88.xxx.x.xxx, how can I access the Apache instance running inside the Docker container from the browser on my computer? I would like to use a Docker container as a web server.
You will have to use port forwarding to be able to access your docker container from the outside world.
From the Docker docs:
By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers.
But if you want containers to accept incoming connections, you will need to provide special options when invoking docker run.
So, what does this mean? You will have to specify a port on your host machine (typically port 80) and forward all connections on that port to the docker container. Since you are running Apache in your docker container you probably want to forward the connection to port 80 on the docker container as well.
This is best done via the -p option for the docker run command.
sudo docker run -p 80:80 -t -i <yourname>/supervisord
The part of the command that says -p 80:80 means that you forward port 80 from the host to port 80 on the container.
When this is set up correctly you can point a browser at http://88.x.x.x and the connection will be forwarded to the container as intended.
The Docker docs describe the -p option thoroughly. There are a few ways of specifying the flag:
# Maps the provided host_port to the container_port but only
# binds to the specific external interface
-p IP:host_port:container_port
# Maps the provided host_port to the container_port for all
# external interfaces (all IPs)
-p host_port:container_port
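Conceptually, what -p host_port:container_port sets up is a relay from a port on the host to a port in the container. As an illustration only (Docker really uses iptables NAT rules and/or a docker-proxy process, not code like this), a single-connection relay can be sketched as:

```python
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    # Shovel bytes one way until the sending side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def forward_once(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept one connection on listen_port and relay it to the target,
    roughly what publishing a port arranges for every connection."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
    backend = socket.create_connection((target_host, target_port))
    t = threading.Thread(target=_pipe, args=(backend, client), daemon=True)
    t.start()
    _pipe(client, backend)
    t.join()
    client.close()
    backend.close()
```

The point of the sketch is that the listener and the target are separate sockets on separate ports, which is why the host port and the container port in -p do not need to match.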
Edit: when this question was originally posted there was no official Docker image for the Apache web server. One now exists.
The simplest way to get Apache up and running is to use the official Docker container. You can start it by using the following command:
$ docker run -p 80:80 -dit --name my-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This way you simply mount a folder from your file system into the Docker container, and your host port is forwarded to the container port as described above.
There is an official image for Apache httpd. The image documentation contains instructions on how to use this official image as a base for a custom image.
To see how it's done take a peek at the Dockerfile used by the official image:
https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile
Example
Ensure the files are accessible to root:
sudo chown -R root:root /path/to/html_files
Host these files using the official Docker image:
docker run -d -p 80:80 --name apache -v /path/to/html_files:/usr/local/apache2/htdocs/ httpd:2.4
The files are now accessible on port 80.
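The pattern above, serving a directory from the host on a published port, can be sketched without Docker using Python's built-in HTTP server (the file name and content here are invented for the example; the httpd:2.4 container does the equivalent for the mounted htdocs/ folder):

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# A throwaway document root standing in for the mounted htdocs/ folder.
docroot = tempfile.mkdtemp()
pathlib.Path(docroot, "index.html").write_text("<h1>hello</h1>")

# Serve the directory on an ephemeral port, like publishing a container port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=docroot)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
print(body)  # <h1>hello</h1>
server.shutdown()
```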
