I am a Docker newbie and this is a newbie question :)
I am running Docker Engine on Ubuntu 18.04 in WSL2. When I run my container, I cannot connect to it from a WSL2 bash.
Here is my Dockerfile
FROM node:18.4-alpine3.15
WORKDIR /usr/src/app
# see https://nodejs.org/en/docs/guides/nodejs-docker-webapp/#creating-a-dockerfile
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD [ "node", "./src/main.js" ]
This is how I (successfully) build my image out of it
docker build . -t gb/test
This is how I run my container
docker run --rm -p 3000:3000 --name gbtest gb/test
# tried also this one with no luck
#docker run --rm -P --name gbtest gb/test
My server starts successfully, but I cannot understand how to reach it via curl. I tried all of the following with no success
curl http://localhost:3000
curl http://127.0.0.1:3000
curl http://172.17.0.1:3000 <- gathered with ifconfig
I also tried what was suggested by this answer, but it didn't work.
# start container with --add-host
docker run --rm -p 3000:3000 --add-host host.docker.internal:host-gateway --name gbtest gb/test
# try to connect with that host
curl http://host.docker.internal:3000
Note that the Node.js server I am developing is listening on the right port:
it states in its output that it is listening at http://127.0.0.1:3000
when I run it directly, without Docker, I can connect with just curl http://localhost:3000
Should you need any further info, just let me know.
EDIT 1
Here is my main.js file, basically the base example from Fastify.
import Fastify from "fastify";

// Import the framework and instantiate it
const fastify = Fastify({ logger: true });

// Declare a route
fastify.get("/", async (request, reply) => {
  return { hello: "world" };
});

// Run the server!
const start = async () => {
  try {
    await fastify.listen({ port: 3000 });
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};

start();
Here are my container logs
$ docker logs gbtest
{"level":30,"time":1655812753498,"pid":1,"hostname":"8429b2ed314d","msg":"Server listening at http://127.0.0.1:3000"}
I also tried using the host network, as below. It works, but I am not sure it is the way to go. How am I supposed to expose a container to the internet? Is it OK to share the whole host network? Does it come with any security concerns?
docker run --rm --network host --name gbtest gb/test
Changing await fastify.listen({ port: 3000 }); to await fastify.listen({ host: "0.0.0.0", port: 3000 }); did the trick.
As you guys pointed out, I was binding to container-private localhost by default, while I want my container to respond to outer connections.
New output of docker logs gbtest is
{"level":30,"time":1655813850993,"pid":1,"hostname":"8e2460ac00c5","msg":"Server listening at http://0.0.0.0:3000"}
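For anyone new to this, the difference between the two bind addresses can be sketched with a few lines of stdlib-only Python (an illustration I'm adding, not part of the original app):

```python
import socket

# Binding to 127.0.0.1 (Fastify's default host) attaches the listener
# to the loopback interface only -- inside a container, that means only
# processes in the container's own network namespace can connect.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))
lb_host = loopback.getsockname()[0]
print(lb_host)  # 127.0.0.1

# Binding to 0.0.0.0 (INADDR_ANY) attaches the listener to every
# interface, including the bridge interface that docker's published
# port forwards traffic to.
any_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
any_sock.bind(("0.0.0.0", 0))
any_host = any_sock.getsockname()[0]
print(any_host)  # 0.0.0.0

loopback.close()
any_sock.close()
```

The same distinction applies to any server framework: the host you pass to listen() decides which interfaces can reach it, and `-p` only forwards traffic to the container's non-loopback interfaces.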
I'm hosting a Flask application in a Docker container on AWS EC2; it displays the contents of a file uploaded to it.
I'm facing a strange issue: the application is not exposed to the external world, even though it works on localhost. When I try to hit the machine's public IP, it shows "refused to connect" (note: security groups are fine).
The docker ps command shows my container running fine on the expected port number. Can you please let me know how I can resolve this? Thanks in advance
Docker version 20.10.7, build f0df350
Here is my Dockerfile,
FROM ubuntu:latest
RUN mkdir /testing
WORKDIR /testing
ADD . /testing
RUN apt-get update && apt-get install -y pip
RUN pip install -r requirements.txt
CMD flask run
In my requirements.txt file, I'm having flask to be installed in docker.
Here is my flask code and html file,
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def file():
    return render_template("index.html")

@app.route("/upload", methods=['POST', 'GET'])
def test():
    print("test")
    if request.method == 'POST':
        print("test3")
        file = request.files['File'].read()
        return file

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
index.html:
<html>
<body>
<form action = "http://<publicIPofmachine>:5000/upload" method = "POST" enctype = "multipart/form-data">
<input type = "file" name = "File" />
<input type = "submit" value = "Submit" />
</form>
</body>
</html>
docker logs:
Serving Flask app 'app' (lazy loading)
Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
Debug mode: on
Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
Running on http://172.17.0.2:5000/ (Press CTRL+C to quit)
Here are my docker commands that is used,
docker build -t <name> .
docker run -d -t -p 5000:80 <imagename>
You have the host and container ports swapped in the docker run command:
Background
That is what docs say:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a
range of ports. When specifying ranges for both, the
number of container ports in the range must match the
number of host ports in the range, for example:
-p 1234-1236:1234-1236/tcp
So you are publishing the port in the format hostPort:containerPort.
Solution
Instead of your code:
docker run -d -t -p 5000:80 <imagename>
You should do
docker run -d -t -p 80:5000 <imagename>
But it's good practice to define an EXPOSE layer in your image :)
EXPOSE has two roles:
It is a kind of documentation: whoever reads your Dockerfile knows which port should be published.
You can use the --publish-all | -P flag to publish all exposed ports.
Trying to create a Docker container for my Go application - it creates a HTTP server on port 8080:
package main
import (
    "net/http"
)

func main() {
    http.HandleFunc("/", doX)
    if err := http.ListenAndServe("localhost:8080", nil); err != nil {
        panic(err)
    }
}

func doX(w http.ResponseWriter, r *http.Request) {
    x
}
When I run go build and then go to localhost:8080 it works; however, it's unresponsive when I try building and running it in Docker:
# Start from the latest golang base image
FROM golang:1.14.3-alpine
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy the source from the current directory to the Working Directory inside the container
COPY /src .
# Build the Go app
RUN go build -o main .
# Expose port 8080 to the outside world
EXPOSE 8080
# Command to run the executable
CMD ["./main"]
The commands I'm running are:
docker build -t src .
docker run -d -p 8080:8080 src
All my .go files are in a directory called 'src'. Help appreciated - quite new to Docker, thanks!
Change the host from localhost to 0.0.0.0 and it will be fine, i.e.
func main() {
    http.HandleFunc("/", doX)
    if err := http.ListenAndServe("0.0.0.0:8080", nil); err != nil {
        panic(err)
    }
}
The main idea of loopback is serving resources on the SAME HOST. When you publish a port, connections arrive from outside the container, so a server bound only to loopback rejects them with a "connection refused" or "connection reset by peer" kind of error.
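To make that concrete, here is a small self-contained Python sketch (my illustration, not from the answer above) of what "bound to loopback only" means for a client connecting from any other address:

```python
import socket

# A listener bound to 127.0.0.1 only, like ListenAndServe("localhost:8080").
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# Connecting via the bound loopback address succeeds...
socket.create_connection(("127.0.0.1", port)).close()

# ...but the same port on a different local address (127.0.0.2 is also
# loopback on Linux) is refused -- just as docker's bridge address is
# refused when the app binds only to the container's loopback.
try:
    socket.create_connection(("127.0.0.2", port), timeout=1).close()
    result = "connected"
except OSError:
    result = "refused"
print(result)  # refused

srv.close()
```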
Alternatively, you may wish to create a network if your golang application needs to communicate between other containers of different images.
So first a new network needs to be created.
docker network create <network_name>
Then while booting this go container along with different containers(if at all required) pass the network flag to make them available under the same subnet.
docker run --network <network_name> -d -p 8080:8080 src
If you don't want to do that, run the container on Docker's host network (PS: it's not standard practice, but it is good for debugging). If you run your src image on the host network with localhost in http.ListenAndServe, you will instantly notice that it is accessible, while in Docker's bridge network mode with the port exposed it is not. So the error must be in the application setup, which is invisible to the outside network.
docker run -d --network host src
Thanks :)
I am trying to build and run docker image from example on this site: https://kubernetes.io/docs/tutorials/hello-minikube/
//server.js
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(8080);
//Dockerfile
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js
I use commands
docker build -t nsj .
docker run nsj
They run without error, but I cannot access the server on localhost:8080.
What is wrong?
Seems like at least two things are wrong:
You need to map the port from your docker host
You need to bind your server to 0.0.0.0
So, probably these changes (untested):
In your code:
www.listen(8080, "0.0.0.0");
In your docker command:
docker run -p 8080:8080 nsj
Note that having EXPOSE 8080 in your Dockerfile does not actually expose anything. It just "marks" this port in the docker engine's metadata and is intended for both documentation (so people reading the Dockerfile know what it does) and for tools that inspect the docker engine.
To quote from the reference:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published
I am trying to start an ASP.NET Core container hosting a website.
It does not expose the ports when using the following command line
docker run my-image-name -d -p --expose 80
or
docker run my-image-name -d -p 80
Upon startup, the log will show :
Now listening on: http://[::]:80
So I assume the application is not bound to a specific address.
But does work when using the following docker compose file
version: '0.1'
services:
  website:
    container_name: "aspnetcore-website"
    image: aspnetcoredocker
    ports:
      - '80:80'
    expose:
      - '80'
You need to make sure to pass all options (-d -p 80) to the docker command before naming the image as described in the docker run docs. The notation is:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
So please try the following:
docker run -d -p 80 my-image-name
Otherwise the parameters are used as the command/args inside the container: docker runs your image's entrypoint with the additional params -d -p 80 instead of passing them to the docker command itself. In your example the docker daemon simply never receives -d and -p 80, and thus never maps the port to the host. You can also notice that, by not receiving -d, the command runs in the foreground and you see the logs in your terminal.
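The ordering rule can be mimicked with a toy parser (a hypothetical Python sketch, not how docker itself is implemented): everything after the first positional argument stops being parsed as options.

```python
import argparse

# Toy model of `docker run [OPTIONS] IMAGE [COMMAND] [ARG...]`:
# once the positional IMAGE is seen, the remainder becomes the
# container command instead of being parsed as options.
parser = argparse.ArgumentParser()
parser.add_argument("-d", action="store_true")
parser.add_argument("-p")
parser.add_argument("image")
parser.add_argument("command", nargs=argparse.REMAINDER)

# Options before the image are parsed by the "daemon"...
ok = parser.parse_args(["-d", "-p", "80", "my-image-name"])
print(ok.d, ok.p, ok.command)

# ...but after the image they are swallowed as the container command.
bad = parser.parse_args(["my-image-name", "-d", "-p", "80"])
print(bad.d, bad.p, bad.command)
```

In the second call, -d stays False and -p stays unset while ['-d', '-p', '80'] lands in the command list, which is exactly what happens to the misplaced docker flags.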
Docker service is unable to load on exposed port
I have created a simple Dockerfile and built the image "sample-image" from it inside a Docker swarm, but whenever I try to run a container by creating a Docker service and exposing it on the respective port, it fails to load in my browser. Kindly help.
First I initialised Docker swarm and created the image "sample-image" from the Dockerfile. After that I created the overlay network "sample-network" inside the swarm and created a service to run a container on it: "docker service create --name sample-service -p 8010:8001 --network sample-network sample-image". The service gets created, but it doesn't load in a browser even though I have opened port 8010 in ufw.
DOCKERFILE:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8001
CMD [ "npm", "start" ]
server.js:
'use strict';
const express = require('express');
const PORT = 8001;
const HOST = '0.0.0.0';
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I expect my web browser to display "Hello world" on exposed port number.
Did you check the docker logs? That should tell you what is going wrong.
$ docker logs <containerid>
Could you try "curl -v http://127.0.0.1:8010" and paste the output here please? Also, did you manipulate ufw after the service or dockerd started?
I think your listening port inside Docker is 8001, and outside Docker it is 8010, so try curl on port 8010. If that doesn't work, try changing the mapping to -p 8001:8001.