I have the simplest possible Web API app in .NET Core (with the default api/values controller you get upon creation).
I've enabled HTTPS, and when debugging it works; Kestrel reports:
Hosting environment: Development
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
When I run the app in Docker (using the Microsoft-provided Dockerfile), Kestrel reports that it only listens on port 80:
Hosting environment: Production
Now listening on: http://[::]:80
How do I configure the app to listen on HTTPS as well in Docker?
After making sure that your app's Dockerfile has EXPOSE 80 and EXPOSE 443 (the ports Kestrel will listen on inside the container), use this command to start your app:
sudo docker run -it -p 5000:80 -p 5001:443 \
  -e ASPNETCORE_URLS="https://+:443;http://+:80" \
  -e ASPNETCORE_HTTPS_PORT=5001 \
  -e ASPNETCORE_Kestrel__Certificates__Default__Password="{YOUR_CERTS_PASSWORD}" \
  -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/{YOUR_CERT}.pfx \
  -v ${HOME}/.aspnet/https:/https/ \
  --restart=always \
  -d {YOUR_DOCKER_ID}/{YOUR_IMAGE_NAME}
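If there is no certificate in ${HOME}/.aspnet/https yet, you can generate a self-signed development certificate with the dotnet dev-certs tool (the file name and password below are placeholders matching the command above):
dotnet dev-certs https -ep ${HOME}/.aspnet/https/{YOUR_CERT}.pfx -p {YOUR_CERTS_PASSWORD}
dotnet dev-certs https --trust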
UPDATE:
Just use a self-signed certificate for debugging. Here's an example for Kestrel:
// Requires: using System.Net; using Microsoft.AspNetCore; using Microsoft.AspNetCore.Hosting;
WebHost.CreateDefaultBuilder(args)
    .UseKestrel(options =>
    {
        options.Listen(IPAddress.Loopback, 5000); // http://localhost:5000
        options.Listen(IPAddress.Any, 80);        // http://*:80
        options.Listen(IPAddress.Loopback, 443, listenOptions =>
        {
            // https://localhost:443 with a self-signed certificate
            listenOptions.UseHttps("certificate.pfx", "password");
        });
    })
    .UseStartup<Startup>()
    .Build();
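Note that with this approach the certificate.pfx file has to exist inside the container. A minimal sketch, assuming the app is published to /app as in the default Microsoft Dockerfile, is to copy it in your Dockerfile:
COPY certificate.pfx /app/certificate.pfx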
Related
I am trying to set up Varnish for a small Node.js server.
(index.js)
const port = 80;
require("http").createServer((req, res) => {
res.write(new Date().toISOString());
res.end();
}).listen(port, () => {
console.log(`http://127.0.0.1:${port}/`);
})
(default.vcl)
vcl 4.1;
backend default {
.host = "127.0.0.1";
.port = "80";
}
(CMD)
Now I run Docker with the following commands:
docker run --name varnish -p 8080:80 -e VARNISH_SIZE=2G varnish:stable
docker cp default.vcl varnish:/etc/varnish
(followed by restarting the container)
But all I see is the following error:
Error 503 Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: 31
Varnish cache server
You have a problem in your varnish configuration. You have set:
backend default {
.host = "127.0.0.1";
.port = "80";
}
But 127.0.0.1 (or localhost) means "this container", and your backend is not running inside the same container as Varnish. If your node.js server is running on your host, you probably want to do something like this:
vcl 4.1;
backend default {
.host = "host.docker.internal";
.port = "80";
}
And then start the container like this:
docker run --name varnish -p 8080:80 --add-host=host.docker.internal:host-gateway -e VARNISH_SIZE=2G varnish:stable
This maps the hostname host.docker.internal to mean "the host on which Docker is running".
If your node.js server is running in another container, the solution looks a little different; a sketch follows.
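A minimal sketch for that case, assuming your Node.js app has been built into an image called my-node-app (the image, container, and network names are placeholders): put both containers on the same user-defined network and point the backend at the Node container's name, which Docker's embedded DNS resolves for you.
docker network create varnish-net
docker run --name node-app --network varnish-net -d my-node-app
docker run --name varnish --network varnish-net -p 8080:80 -e VARNISH_SIZE=2G varnish:stable
with default.vcl pointing at the container name instead of 127.0.0.1:
vcl 4.1;
backend default {
    .host = "node-app";
    .port = "80";
}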
My Docker configuration needs to map ports for external access, but when trying to install the Data Hub Central WAR file, mlDeploy and mlRedeploy run into problems because the ports are unavailable:
Task :mlDeployApp
Creating custom rewriters for staging and job app servers
Loading REST options for staging server
Initializing ExecutorService
Loading default query options from file default.xml
Shutting down ExecutorService
Loading REST options for jobs server
Initializing ExecutorService
Loading traces query options from file traces.xml
Shutting down ExecutorService
Writing traces query options to MarkLogic; port: 8013
Error occurred while loading modules; host: localhost; port: 8013;
cause: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:8013
...
What went wrong:
Execution failed for task ':mlDeployApp'.
Error occurred while loading REST modules: Error occurred while loading modules; host: localhost; port: 8013; cause: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:8013
Dockerfile contents:
FROM store/marklogicdb/marklogic-server:10.0-7-dev-centos
WORKDIR /tmp
EXPOSE 7997-8040
EXPOSE 8080
EXPOSE 9000
CMD /etc/init.d/MarkLogic start && tail -f /dev/null
Original docker run command:
docker run -d --name=marklogic10.0-7_local -p 7997-8040:7997-8040 -p 8080:8080 -p 9000:9000 marklogic-initial-install:10.0-7-dev-centos
Revised docker run command:
docker run -d --name=marklogic10.0-7_local -p 7997-8012:7997-8012 -p 8014-8040:8014-8040 -p 8043:8013 -p 8090:8080 -p 9000:9000 marklogic-initial-install:10.0-7-dev-centos
Note: I originally had the same problem with port 8080 but mapped it to port 8090 which fixed the problem. Doing the same for port 8013 did not work.
The problem was with the installation steps and not the ports.
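For anyone debugging similar symptoms, a quick way to rule out the port mapping itself (container name taken from the run commands above) is:
docker port marklogic10.0-7_local
curl -s http://localhost:8013/ || echo "port 8013 not reachable from the host"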
I'm following the tutorial at https://docs.docker.com/get-started/part2/.
I start my docker container with docker run -p 4000:80 friendlyhello
and see
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:8088/ (Press CTRL+C to quit)
But it's inaccessible from the expected path of localhost:4000.
$ curl http://localhost:4000/
curl: (7) Failed to connect to localhost port 4000: Connection refused
$ curl http://127.0.0.1:4000/
curl: (7) Failed to connect to 127.0.0.1 port 4000: Connection refused
Okay, so maybe it's not on my localhost. After getting the container ID, I retrieve the IP with
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 7e5bace5f69c
and it returns 172.17.0.2 but no luck! curl continues to give the same responses. I can confirm something is running on 4000....
lsof -i :4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 94812 travis 18u IPv6 0x7516cbae76f408b5 0t0 TCP *:terabase (LISTEN)
I'm pulling my hair out on this. I've read through the troubleshooting guide and can confirm
* not on a proxy
* don't use a custom dns
* I'm having issues connecting to docker, not docker connecting to my pip server.
Running app.py directly with python app.py, the server starts and I'm able to hit it. What am I missing?
Did you accidentally put port=8088 at the bottom of your app.py file? When you run it, the last line of your output says that your Python app is exposed on port 8088, not 80.
To confirm, you can either modify the app.py file and rebuild the image, or run docker run -p 4000:8088 friendlyhello, which maps your local port 4000 to port 8088 in the container.
Try to run it using:
docker run -p 4000:8088 friendlyhello
As you can see from the logs, your app starts on port 8088, but you mapped host port 4000 to container port 80, where nothing is actually listening.
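A minimal sketch of the two options (assuming, as in the tutorial, that app.py ends with an app.run(host="0.0.0.0", port=...) call):
# Option 1: map host port 4000 to the port the app actually listens on
docker run -p 4000:8088 friendlyhello
# Option 2: set port=80 in app.run(...), rebuild, and keep the original mapping
docker build -t friendlyhello .
docker run -p 4000:80 friendlyhello
# Either way, verify from the host:
curl http://localhost:4000/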
I'm getting this error while curling the application IP:
curl (56) Recv failure: Connection reset by peer - when hitting docker container
Do a small check by running:
docker run --network host -d <image>
If curl works with this setting, make sure that:
You're mapping the host port to the container's port correctly:
docker run -p host_port:container_port <image>
Your service application (running in the container) is listening on 0.0.0.0 (all interfaces) and not only on 127.0.0.1 / localhost (see the sketch below).
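For example, a minimal Node.js sketch of the difference (the port number is just an illustration):
// Binding to 0.0.0.0 makes the server reachable through Docker's port mapping;
// binding to 127.0.0.1 would only accept connections from inside the container.
const port = 8080;
require("http").createServer((req, res) => {
    res.end("ok");
}).listen(port, "0.0.0.0", () => {
    console.log(`listening on 0.0.0.0:${port}`);
});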
I got the same error:
umesh#ubuntu:~/projects1$ curl -i localhost:49161
curl: (56) Recv failure: Connection reset by peer
In my case it was due to a wrong port number.
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 3000
--------|package.json
Then I was running:
docker run -p 49160:8080 -d umesh1/node-web-app1
Since the application was actually listening on port 3000 (as defined in index.js), Docker was mapping to a port nothing was listening on, and I got the same error you are getting.
To solve the problem, I deleted the last container/image that was created with the wrong port and changed the port number in index.js:
|---MY Projects--my working folder
--------|Dockerfile ---port defined 8080
--------|index.js-----port defined 8080
--------|package.json
Then I built the new image:
docker build -t umesh1/node-web-app1 .
and ran the image in detached mode with the port exposed:
docker run -p 49160:8080 -d umesh1/node-web-app1
Thus my application was running without any error, listening on port 49161.
I had the same error when binding to a port that no service inside the container was listening on.
So check the -p option:
-p 9200:9265
-p <port on the host OS>:<port inside the container>
I have a web application built with Elixir that uses a Postgres database in a docker container (https://hub.docker.com/_/postgres/).
I need to expose the web interface (running on port 4000) and the database in the docker container.
I tried adding this to my ngrok configuration file:
tunnels:
api:
addr: 4000
proto: http
db:
addr: 5432
proto: tcp
Then in my Elixir config/dev.exs I add this under the database configuration:
...
hostname: "TCP_URL_GIVEN_BY_NGROK"
When I attempt to start the application, it reports a failure to connect to the database.
The docker command that I used is:
docker run --name phoenix-pg -e POSTRGRES_PASSWORD=postgres -d postgres
What am I doing wrong?