Application not loading on host port - Docker

Docker service is unable to load on exposed port
I have created a simple Dockerfile and built the image "sample-image" from it inside a Docker swarm, but whenever I try to run a container by creating a Docker service and publishing it on the respective port, it fails to load in my browser. Kindly help.
First I initialised the Docker swarm and built the image "sample-image" from the Dockerfile. Then I created an overlay network "sample-network" inside the swarm and created a service to run the container on it: "docker service create --name sample-service -p 8010:8001 --network sample-network sample-image". The service gets created, but it doesn't load in a browser, even though I have opened port 8010 in ufw.
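For clarity, here are the steps described above as commands (a reconstruction of what the question describes; the image is built from the Dockerfile below):
docker swarm init
docker build -t sample-image .
docker network create --driver overlay sample-network
docker service create --name sample-service -p 8010:8001 --network sample-network sample-image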
Dockerfile:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8001
CMD [ "npm", "start" ]
server.js:
'use strict';
const express = require('express');
const PORT = 8001;
const HOST = '0.0.0.0';
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I expect my web browser to display "Hello world" on the exposed port number.

Did you check the Docker logs? That should show you what is going wrong.
$ docker logs <containerid>

Could you try "curl -v http://127.0.0.1:8010" and paste the output here, please? Also, did you manipulate ufw after the service or dockerd started?

Note that -p takes host:container, so -p 8010:8001 publishes host port 8010 and forwards it to port 8001 inside the container, which is the port your app listens on. Make sure you are browsing to port 8010 on the host. If you would rather use the same number on both sides, try -p 8001:8001 and browse to port 8001.
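A quick way to check what the swarm routing mesh has actually published is docker service inspect, then hit the published port directly (a sketch using the names from the question; the exact JSON will vary):
docker service inspect --format '{{json .Endpoint.Ports}}' sample-service
# should show something like [{"Protocol":"tcp","TargetPort":8001,"PublishedPort":8010,...}]
curl -v http://127.0.0.1:8010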

Related

Cannot connect to docker container from within WSL2

I am a Docker newbie and this is a newbie question :)
I am running Docker Engine on Ubuntu 18.04 in WSL2. When I run my container, I cannot connect to it from a WSL2 bash.
Here is my Dockerfile
FROM node:18.4-alpine3.15
WORKDIR /usr/src/app
# see https://nodejs.org/en/docs/guides/nodejs-docker-webapp/#creating-a-dockerfile
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD [ "node", "./src/main.js" ]
This is how I (successfully) build my image from it
docker build . -t gb/test
This is how I run my container
docker run --rm -p 3000:3000 --name gbtest gb/test
# tried also this one with no luck
#docker run --rm -P --name gbtest gb/test
My server starts successfully, but I cannot understand how to reach it via curl. I tried all of the following with no success:
curl http://localhost:3000
curl http://127.0.0.1:3000
curl http://172.17.0.1:3000 <- gathered with ifconfig
I also tried what was suggested by this answer, but it didn't work.
# start container with --add-host
docker run --rm -p 3000:3000 --add-host host.docker.internal:host-gateway --name gbtest gb/test
# try to connect with that host
curl http://host.docker.internal:3000
Note that the Node.js server I am developing is listening on the right port:
its output states that it is listening on http://127.0.0.1:3000
when I run it directly, without Docker, I can connect with just curl http://localhost:3000
Should you need any further info, just let me know.
EDIT 1
Here is my main.js file, basically the base example from Fastify.
import Fastify from "fastify";
// Require the framework and instantiate it
const fastify = Fastify({ logger: true });
// Declare a route
fastify.get("/", async (request, reply) => {
return { hello: "world" };
});
// Run the server!
const start = async () => {
  try {
    await fastify.listen({ port: 3000 });
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};
start();
Here are my container logs
$ docker logs gbtest
{"level":30,"time":1655812753498,"pid":1,"hostname":"8429b2ed314d","msg":"Server listening at http://127.0.0.1:3000"}
I also tried using the host network, as shown below. It works, but I am not sure it is the way to go. How am I supposed to expose a container to the internet? Is it OK to share the whole host network? Does it come with any security concerns?
docker run --rm --network host --name gbtest gb/test
Changing await fastify.listen({ port: 3000 }); to await fastify.listen({ host: "0.0.0.0", port: 3000 }); did the trick.
As you pointed out, I was binding to the container-private localhost by default, whereas I want my container to answer external connections.
The new output of docker logs gbtest is
{"level":30,"time":1655813850993,"pid":1,"hostname":"8e2460ac00c5","msg":"Server listening at http://0.0.0.0:3000"}

Why is Docker not running my Flask app on 0.0.0.0?

I am trying to dockerize my Flask app. I understand that Docker will host it on 0.0.0.0, but I am not getting that. My Dockerfile:
#!/bin/bash
FROM python:3.8
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
#ENTRYPOINT [ "./main_class.py" ]
#CMD [ "flask", "run", "-h", "0.0.0.0", "-p", "5000" ]
CMD [ "python", "main_class.py", "flask", "run", "-h", "0.0.0.0", "-p", "5000" ]
When I build it, it builds successfully. When I run it with docker run -p 5000:5000 iris, it runs successfully and says Running on http://172.17.0.2:5000/ (Press CTRL+C to quit).
However, when I open 172.17.0.2:5000 in the browser, it does not work, but 127.0.0.1:5000 does. How can I get it to serve on 0.0.0.0:5000? In main_class.py I am using
if __name__ == '__main__':
    app.run(debug=True, threaded=True, host="0.0.0.0")
When I type docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90a8a1ee3625 iris "python main_class.p…" 23 minutes ago Up 21 minutes 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp thirsty_lalande
Is there anything wrong with what I am doing in the Dockerfile?
Moreover, when I use ENTRYPOINT it shows me the error standard_init_linux.go:228: exec user process caused: exec format error; that's why I am not using ENTRYPOINT.
flask run -h 0.0.0.0 tells your Flask process to bind to every local IP address, as mentioned in the comments. That is also what lets Flask answer the traffic Docker forwards from the published port (5000->5000), i.e. incoming traffic from the Docker bridge.
On the Docker side your effective port forwarding is 0.0.0.0:5000->5000/tcp, as shown by docker ps, so on your host system port 5000 on any local IP address is forwarded to your container's Flask process.
Please keep in mind that 0.0.0.0 isn't a real IP address; it is a placeholder meaning "bind to every local IP address".
You can access your Flask application on any IP that routes to your host system, port 5000.
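For example, with the container running as above, any of the following should work from the host (a sketch; 192.168.1.20 stands in for whatever LAN IP your host actually has):
curl http://127.0.0.1:5000/
curl http://localhost:5000/
curl http://192.168.1.20:5000/   # substitute your host's own IP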

Docker Go build not serving HTTP server

Trying to create a Docker container for my Go application - it creates an HTTP server on port 8080:
package main
import (
"net/http"
)
func main() {
	http.HandleFunc("/", doX)
	if err := http.ListenAndServe("localhost:8080", nil); err != nil {
		panic(err)
	}
}
func doX(w http.ResponseWriter, r *http.Request) {
	x
}
When I run go build and then go to localhost:8080 it works; however, it's unresponsive when I build and run it in Docker:
# Start from the latest golang base image
FROM golang:1.14.3-alpine
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy the source from the current directory to the Working Directory inside the container
COPY /src .
# Build the Go app
RUN go build -o main .
# Expose port 8080 to the outside world
EXPOSE 8080
# Command to run the executable
CMD ["./main"]
The commands I'm running are:
docker build -t src .
docker run -d -p 8080:8080 src
All my .go files are in a directory called 'src'. Help appreciated - quite new to Docker, thanks!
Change the host from localhost to 0.0.0.0 and it will be fine, i.e.:
func main() {
	http.HandleFunc("/", doX)
	if err := http.ListenAndServe("0.0.0.0:8080", nil); err != nil {
		panic(err)
	}
}
The main idea of a loopback address is serving resources on the SAME host. Your original code binds only to the container's own loopback interface, so traffic forwarded through the published port finds nothing listening and you get a connection reset by peer kind of error.
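You can see the difference directly (a sketch; substitute your container's ID, and note the alpine base image ships busybox wget):
docker exec <containerid> wget -qO- http://localhost:8080   # works from inside the container
curl http://localhost:8080                                   # from the host: connection reset / refused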
Alternatively, you may wish to create a network if your Go application needs to communicate with other containers from different images.
So first a new network needs to be created.
docker network create <network_name>
Then, while starting this Go container along with the other containers (if required), pass the network flag so that they are all reachable on the same network.
docker run --network <network_name> -d -p 8080:8080 src
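For instance, once the server binds to 0.0.0.0 as shown above, containers on the same user-defined network can reach it by name (a sketch; app-net and goapp are placeholder names, and curlimages/curl is just a small image whose entrypoint is curl):
docker network create app-net
docker run -d --network app-net --name goapp -p 8080:8080 src
docker run --rm --network app-net curlimages/curl http://goapp:8080/   # containers on a user-defined network resolve each other by name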
If you don't want to do that, run the container on Docker's host network (note: it's not standard practice, but it is good for debugging). If you run your src image on the host network with localhost in http.ListenAndServe, you will notice it is instantly accessible, while publishing the port in Docker's default bridge network mode is not; that tells you the error is in the application setup, which leaves it invisible to the outside network.
docker run -d --network host src
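After starting it that way, a quick check from the host confirms it (a sketch):
curl http://localhost:8080   # reachable even with the localhost bind, because the container shares the host's network namespace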
Thanks :)

Can't connect to a docker running flask app when network=host

I have a Flask app running in Docker. The host is 0.0.0.0 and the port is 5001. If I run Docker without network=host I'm able to connect to 0.0.0.0 on port 5001, but with network=host I can't. I need network=host because I need to connect to a DB outside of the Docker network cluster (which I'm able to connect to).
How can I do that? I tried running Docker with a proxy, but that didn't help.
Steps to reproduce:
Install Python and Flask (I'm using 3.8.6 and 1.1.2). In your main, create a Flask app and expose one REST endpoint (the URL doesn't really matter).
This is my flask_main file:
from flask import Flask, request
app = Flask(__name__)
@app.route('/')
def route():
    return "got to route"
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
Dockerfile:
FROM python:3.8.6
WORKDIR /
ADD . /.
RUN pip --proxy ***my_proxy*** install -r requirements.txt
EXPOSE 5001
CMD ["python", "flask_main.py"]
requirements.txt:
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
psycopg2==2.8.6
Werkzeug==1.0.1
click==7.1.2
I tried running the Dockerfile with and without the proxy, and tried removing host=0.0.0.0, but nothing helps.
I managed to solve the problem by removing network=host and specifying my company's DNS server instead.
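For reference, a rough sketch of what that can look like on the command line (the image name and DNS address are placeholders, not taken from the question):
docker run -p 5001:5001 --dns 10.0.0.53 my-flask-image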

Problem accessing server in a Docker container

I am trying to build and run the Docker image from the example on this site: https://kubernetes.io/docs/tutorials/hello-minikube/
//server.js
var http = require('http');
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(8080);
//Dockerfile
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js
I use commands
docker build -t nsj .
docker run nsj
They run without error, but I cannot access the server on localhost:8080.
What is wrong?
Seems like at least two things are wrong:
You need to map the port from your docker host
You need to bind your server to 0.0.0.0
So, probably these changes (untested):
In your code:
www.listen(8080, "0.0.0.0");
In your docker command:
docker run -p 8080:8080 nsj
Note that having EXPOSE 8080 in your Dockerfile does not actually expose anything. It just "marks" this port in the docker engine's metadata and is intended for both documentation (so people reading the Dockerfile know what it does) and for tools that inspect the docker engine.
To quote from the reference:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
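If you do want Docker to act on that metadata, -P publishes every EXPOSEd port to a random high port on the host (a rough sketch, reusing the image name from the question):
docker run -d -P nsj
docker port <containerid>   # e.g. 8080/tcp -> 0.0.0.0:49153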
