I am trying to build and run a Docker image from the example on this site: https://kubernetes.io/docs/tutorials/hello-minikube/
//server.js
var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);
www.listen(8080);
//Dockerfile
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js
I use the commands
docker build -t nsj .
docker run nsj
They run without error but I cannot access the server on localhost:8080.
What is wrong?
Seems like at least two things are wrong:
You need to map the port from your docker host
You need to bind your server to 0.0.0.0
So, probably these changes (untested):
In your code:
www.listen(8080, "0.0.0.0");
In your docker command:
docker run -p 8080:8080 nsj
(The -p option must come before the image name; anything placed after the image name is passed to the container as arguments.)
Note that having EXPOSE 8080 in your Dockerfile does not actually publish anything. It just "marks" this port in the image metadata and is intended both as documentation (so people reading the Dockerfile know which port the app uses) and for tools that inspect the image.
To quote from the reference:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
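Putting both fixes together, the end-to-end flow would look like this (untested, reusing the nsj tag from the question):
docker build -t nsj .
docker run -p 8080:8080 nsj
# then, from another terminal on the host:
curl http://localhost:8080
# expected output: Hello World!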
Related
I am a docker newbie and this is a newbie question :)
I am running Docker Engine from Ubuntu 18.04 in WSL2. When I run my container, I cannot connect to it from a WSL2 bash.
Here is my Dockerfile
FROM node:18.4-alpine3.15
WORKDIR /usr/src/app
# see https://nodejs.org/en/docs/guides/nodejs-docker-webapp/#creating-a-dockerfile
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD [ "node", "./src/main.js" ]
This is how I (successfully) build my image out of it
docker build . -t gb/test
This is how I run my container
docker run --rm -p 3000:3000 --name gbtest gb/test
# tried also this one with no luck
#docker run --rm -P --name gbtest gb/test
My server starts successfully, but I cannot understand how to reach it via curl. I tried all of the following with no success
curl http://localhost:3000
curl http://127.0.0.1:3000
curl http://172.17.0.1:3000 <- gathered with ifconfig
I also tried what was suggested by this answer, but it didn't work.
# start container with --add-host
docker run --rm -p 3000:3000 --add-host host.docker.internal:host-gateway --name gbtest gb/test
# try to connect with that host
curl http://host.docker.internal:3000
Note that the Node.js server I am developing is bound to the right port:
it states in its output that it is listening at http://127.0.0.1:3000
when I run it directly, i.e. without Docker, I can connect with just curl http://localhost:3000
Should you need any further info, just let me know.
EDIT 1
Here is my main.js file, basically the base example from Fastify.
import Fastify from "fastify";

// Require the framework and instantiate it
const fastify = Fastify({ logger: true });

// Declare a route
fastify.get("/", async (request, reply) => {
  return { hello: "world" };
});

// Run the server!
const start = async () => {
  try {
    await fastify.listen({ port: 3000 });
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};
start();
Here are my container logs
$ docker logs gbtest
{"level":30,"time":1655812753498,"pid":1,"hostname":"8429b2ed314d","msg":"Server listening at http://127.0.0.1:3000"}
I also tried to use the host network, as shown below. It works, but I am not sure it is the way to go. How am I supposed to expose a container to the internet? Is it OK to share the whole host network? Does it come with any security concerns?
docker run --rm --network host --name gbtest gb/test
Changing await fastify.listen({ port: 3000 }); to await fastify.listen({ host: "0.0.0.0", port: 3000 }); did the trick.
As you guys pointed out, I was binding to the container-private localhost by default, while I want my container to respond to outside connections.
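For completeness, the relevant part of main.js now reads:
const start = async () => {
  try {
    // bind to all interfaces so the published port is reachable from outside the container
    await fastify.listen({ host: "0.0.0.0", port: 3000 });
  } catch (err) {
    fastify.log.error(err);
    process.exit(1);
  }
};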
New output of docker logs gbtest is
{"level":30,"time":1655813850993,"pid":1,"hostname":"8e2460ac00c5","msg":"Server listening at http://0.0.0.0:3000"}
Trying to create a Docker container for my Go application - it creates an HTTP server on port 8080:
package main

import (
    "net/http"
)

func main() {
    http.HandleFunc("/", doX)
    if err := http.ListenAndServe("localhost:8080", nil); err != nil {
        panic(err)
    }
}

func doX(w http.ResponseWriter, r *http.Request) {
    x
}
When I run go build and then go to localhost:8080 it works; however, it's unresponsive when I build and run it in Docker:
# Start from the latest golang base image
FROM golang:1.14.3-alpine
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy the source from the current directory to the Working Directory inside the container
COPY /src .
# Build the Go app
RUN go build -o main .
# Expose port 8080 to the outside world
EXPOSE 8080
# Command to run the executable
CMD ["./main"]
The commands I'm running are:
docker build -t src .
docker run -d -p 8080:8080 src
All my .go files are in a directory called 'src'. Help appreciated - quite new to Docker, thanks!
Change the host from localhost to 0.0.0.0 and it will be fine, i.e.:
func main() {
    http.HandleFunc("/", doX)
    if err := http.ListenAndServe("0.0.0.0:8080", nil); err != nil {
        panic(err)
    }
}
The main idea of loopback is serving resources on the SAME host. When you publish a port, the requests no longer come from that same host, so you get a "connection reset by peer" kind of error.
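A quick way to see the difference (the container id is a placeholder, and this assumes wget is available in the image, as it is in Alpine-based ones):
# inside the container, localhost is the container's own loopback, so this answers
docker exec <container-id> wget -qO- http://localhost:8080/
# from the host, traffic to the published port arrives from outside the container,
# so a server bound only to localhost never accepts it
curl http://localhost:8080/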
Alternatively, you may wish to create a network if your golang application needs to communicate with other containers from different images.
So first a new network needs to be created.
docker network create <network_name>
Then, while starting this Go container along with the other containers (if required at all), pass the network flag to make them available on the same subnet.
docker run --network <network_name> -d -p 8080:8080 src
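As an illustration (the network and container names here are made up, and this still relies on the 0.0.0.0 fix above), containers on the same user-defined network can reach each other by container name without publishing any ports:
docker network create mynet
docker run -d --network mynet --name goapp src
# a second container on the same network resolves "goapp" via Docker's built-in DNS
docker run --rm --network mynet curlimages/curl http://goapp:8080/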
If you don't want to do that, run the container on Docker's host network (not the standard practice, but useful for debugging). If you run your src image on the host network with localhost in http.ListenAndServe, you will notice it is instantly accessible, whereas publishing the port in Docker's bridge network mode leaves it unreachable. That points to something in the application setup making it invisible to the outside network.
docker run -d --network host src
Thanks :)
I'm very new to Docker. I've been reading documentation and doing some experiments, but there are a few things I'm not getting.
The case is I have two applications: one is a .NET Core web application and the other is a .NET Core Web API. I'm running the web application inside a container. Below is the Dockerfile:
# build stage (assumption: sdk:3.1, matching the aspnet:3.1 runtime below)
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /source
COPY . ./DockerTest/
WORKDIR /source/DockerTest
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
EXPOSE 50/TCP
EXPOSE 50/UDP
COPY --from=build /app ./
ENTRYPOINT ["dotnet", "DockerTest.dll","--environment=development"]
and the command I execute to run this image is: docker run -d -p 90:50 myapp
Now here I'm trying to map port 50, on which my .NET Core application should be running, to port 90 on my host machine. But unfortunately, whatever port I give in EXPOSE, my application always runs on port 80 inside the container. I want to know why that is and how I can change it. The second thing is that from inside the container I'm trying to access my Web API, which is running on the host machine:
public async Task<string> GetData()
{
    var data = "";
    var request = new HttpRequestMessage(HttpMethod.Get,
        "http://localhost:51468/weatherforecast");
    using (var context = ClientFactory.CreateClient())
    {
        var response = await context.GetAsync(request.RequestUri);
        if (response.IsSuccessStatusCode)
        {
            data = await response.Content.ReadAsStringAsync();
        }
        else
        {
            data = "error happen";
        }
    }
    return data;
}
This is how I'm trying to send a request to my API, which is outside the container, but it gives this error: HttpRequestException: Cannot assign requested address.
Now I'm blocked and I need help and suggestions here.
First, about your Docker setup:
The EXPOSE 50 instruction is only known to Docker; dotnet knows nothing about it. So in your DockerTest.dll you must also configure the listening port.
Don't use port 50; it is too low. Anything below 1024 is a well-known or system port and should not be used. dotnet normally listens on port 5000 when it is not on 80/443.
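As a sketch of that first point (ASPNETCORE_URLS and port 5000 are assumptions, not taken from the question), the listening port can be set in the runtime stage of the Dockerfile and then mapped at run time:
# runtime stage: tell Kestrel to listen on port 5000 inside the container
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000
Then the run command maps the host port to it, docker run -d -p 90:5000 myapp, and http://localhost:90 on the host reaches the app.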
Second, about your access to the host:
When using localhost inside the Docker container, it will not reach the host but only the container itself. So you have to use the host's LAN IP, i.e. 192.168.x.x or similar.
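For the second point, one option (host.docker.internal resolves out of the box on Docker Desktop; on plain Linux it needs the --add-host host-gateway flag shown earlier in this thread) is to point the client at the host instead of the container's own loopback:
var request = new HttpRequestMessage(HttpMethod.Get,
    "http://host.docker.internal:51468/weatherforecast");
and the container would be started with something like docker run -d -p 90:50 --add-host host.docker.internal:host-gateway myapp.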
I want to run two containers inside a k8s pod.
tomcat exporter (which runs on port 8080)
tomcat application (which also runs on port 8080)
As multiple containers running inside a pod can't share the same port, I am looking to build a custom Tomcat image with a different port, say 9090 (the default Tomcat port is 8080).
This is the Dockerfile I have used.
cat Dockerfile
FROM tomcat:9.0.34
RUN sed -i 's/8080/9090/' /usr/local/tomcat/conf/server.xml
EXPOSE 9090
After building that image and running a container, I see that port 9090 has been assigned, but I also see that 8080 still shows up.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b66e1e9c3db8 chakilams3/tomcatchangedport:v1 "catalina.sh run" 3 seconds ago Up 2 seconds 8080/tcp, 0.0.0.0:9090->9090/tcp test
I am wondering where this 8080/tcp port comes from, even after I have changed all references to 8080 to 9090 in the server.xml file.
Any thoughts are appreciated.
With a lot of effort, I found the solution for changing the internal port of the Tomcat container.
my Dockerfile is
FROM tomcat:7.0.107
RUN sed -i 's/port="8080"/port="4287"/' ${CATALINA_HOME}/conf/server.xml
ADD ./tomcat-cas/war/ ${CATALINA_HOME}/webapps/
CMD ["catalina.sh", "run"]
Here, the ADD ./tomcat-cas/war/ ${CATALINA_HOME}/webapps/ part is not necessary unless you want to deploy some WAR files initially. Also, I don't add EXPOSE 4287, because when I did, the Tomcat server would not bind to port 4287 and kept binding to the default port 8080.
Just build the image and run
docker build -f Dockerfile -t test/tomcat-test:1.0 .
docker run -d -p 4287:4287 --name tomcat-test test/tomcat-test:1.0
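To verify the new binding once the container is up, something like this should work (Tomcat may answer 404 if no webapps are deployed, but any HTTP response confirms the port mapping):
docker port tomcat-test
# expected: 4287/tcp -> 0.0.0.0:4287
curl -I http://localhost:4287/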
Checking the tomcat:9.0.34 Dockerfile on Docker Hub, we can see that it exposes port 8080. When you use this image as your parent image, you inherit this EXPOSE instruction from it.
Searching through the documentation, there does not seem to be an "unexpose" instruction in the Dockerfile to undo the EXPOSE 8080 instruction of the parent image.
This should not cause any issue, but if you would like to eliminate it, you could fork the tomcat Dockerfile, remove the EXPOSE instruction and build your own tomcat image.
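To confirm that the inherited EXPOSE is only metadata, you can inspect the image, for example:
docker inspect --format '{{json .Config.ExposedPorts}}' chakilams3/tomcatchangedport:v1
# typically prints something like {"8080/tcp":{},"9090/tcp":{}}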
Docker service is unable to load on exposed port
I have created a simple Dockerfile and built the image "sample-image" from it inside a Docker swarm, but when I try to run a container by creating a Docker service and publishing it on the respective port, it's unable to load in my browser. Kindly help.
First I initialised the Docker swarm and created the image "sample-image" from the Dockerfile. After that I created the overlay network "sample-network" inside the swarm and created a service to run a container on it: docker service create --name sample-service -p 8010:8001 --network sample-network sample-image. The service gets created, but it doesn't load in a browser, even though I have also allowed port 8010 through ufw.
DOCKERFILE:
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8001
CMD [ "npm", "start" ]
server.js:
'use strict';
const express = require('express');
const PORT = 8001;
const HOST = '0.0.0.0';
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I expect my web browser to display "Hello world" on the published port.
Did you check the Docker logs? That should show you what is going wrong.
$ docker logs <containerid>
Could you try "curl -v http://127.0.0.1:8081" and paste the output here please? Also, did you manipulate ufw after the service or dockerd started?
I think your listening port inside Docker is 8001, and outside Docker it is 8010. Can you try changing it to -p 8001:8001? If that works, try -p 8001:8010.
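For reference, in -p 8010:8001 the first number is the published (host) port and the second is the container port, so a quick check from one of the swarm nodes would look something like this (a sketch):
# the app listens on 8001 inside the container; 8010 is the published port on the host
curl -v http://localhost:8010/
# the same mapping written with the long --publish form, which makes the roles explicit
docker service create --name sample-service \
  --publish published=8010,target=8001 \
  --network sample-network sample-image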