Varnish Docker image not working for Node.js on localhost

I am trying to set up Varnish in front of a small Node.js server
(index.js)
const port = 80;

require("http").createServer((req, res) => {
  res.write(new Date().toISOString());
  res.end();
}).listen(port, () => {
  console.log(`http://127.0.0.1:${port}/`);
});
(default.vcl)
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "80";
}
(CMD)
# now I run Docker with the following commands
docker run --name varnish -p 8080:80 -e VARNISH_SIZE=2G varnish:stable
docker cp default.vcl varnish:/etc/varnish
(followed by a container restart)
But all I see is the following error:
Error 503 Backend fetch failed
Backend fetch failed
Guru Meditation:
XID: 31
Varnish cache server

You have a problem in your Varnish configuration. You have set:
backend default {
.host = "127.0.0.1";
.port = "80";
}
But 127.0.0.1 (or localhost) means "this container", and your backend is not running inside the same container as Varnish. If your Node.js server is running on your host, you probably want to do something like this:
vcl 4.1;

backend default {
    .host = "host.docker.internal";
    .port = "80";
}
And then start the container like this:
docker run --name varnish -p 8080:80 --add-host=host.docker.internal:host-gateway -e VARNISH_SIZE=2G varnish:stable
This maps the hostname host.docker.internal to mean "the host on which Docker is running".
If your Node.js server is running in another container, the solution is going to look a little different; a sketch follows.
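For that case, a minimal sketch (all names here are hypothetical: my-node-image for the Node.js image, node-backend for its container) is to put both containers on a user-defined bridge network, which gives you DNS resolution by container name:

# Hypothetical image/container names; adjust to your own
docker network create varnish-net
docker run -d --name node-backend --network varnish-net my-node-image
docker run -d --name varnish --network varnish-net -p 8080:80 -v "$(pwd)/default.vcl:/etc/varnish/default.vcl:ro" -e VARNISH_SIZE=2G varnish:stable

and in default.vcl point the backend at the container name instead of 127.0.0.1:

vcl 4.1;

backend default {
    # resolved by Docker's embedded DNS on the user-defined network
    .host = "node-backend";
    .port = "80";
}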

Related

Connection refused: when uwsgi and nginx in different containers

I am trying to set up two Docker containers (yes, separate, without docker-compose): one with nginx and one with uWSGI running a basic Flask app.
I run the containers in the same Docker network.
My nginx config for the site, added/linked to sites-enabled (everything else is default):
server {
    listen 80;
    server_name 127.0.0.1;

    location / {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:8080;
    }
}
My uwsgi.ini
[uwsgi]
module = app:app
master = true
processes = 2
socket = 0.0.0.0:8080
The uWSGI entry point in Docker looks like:
.local/bin/uwsgi --ini uwsgi.ini
The containers run fine on their own: uWSGI receives requests on 8080 and nginx receives the expected requests. However, when I try to access 127.0.0.1 I get a 502 status code and nginx logs this error:
1 connect() failed (111: Connection refused) while connecting to
upstream, client: 192.168.4.1, server: 127.0.0.1, request: "GET /
HTTP/1.1", upstream: "uwsgi://0.0.0.0:8080", host: "127.0.0.1"
By googling I found suggestions to instead use a single container with a some_socket.sock file, or to use docker-compose. Apparently it is a problem with permissions, but I do not know how to diagnose or solve it.
I launch containers with these commands:
docker run --network app_network --name nginx --rm -p 80:80 my_nginx
docker run --network app_network --name flaskapp --rm -p 8080:8080 my_uwsgi
EDIT
You can simply use the hostname of the Docker container in the uwsgi_pass directive, since both containers are attached to the same user-defined Docker network (app_network), which provides name resolution between containers.
location / {
    include uwsgi_params;
    uwsgi_pass flaskapp:8080;
}
0.0.0.0 isn't the IP address of the server; it essentially tells the server to listen on every IP address the device has allocated.
To connect to it from nginx, you will need to use the IP address of the container instead.
You can find the IP address of the container running uWsgi with the following command:
docker inspect CONTAINER_ID
Where CONTAINER_ID is the ID of the container you started uwsgi in.
From here you can update the nginx config as follows:
uwsgi_pass IP_ADDRESS:8080;
Where IP_ADDRESS is the one you found from the command above
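As a sketch (using the flaskapp container name from the run commands above), you can narrow that inspect output down to just the IP address with a Go template:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' flaskapp

Bear in mind that container IPs change when containers are recreated, which is why the hostname-based uwsgi_pass shown in the edit above is usually the more robust option.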
You can also set the IP address of the container when you start it with the following option:
--ip <ip>
Be careful, however, to ensure that the IP address you set is in the same subnet as the IPs Docker normally assigns.

How to reach two other containers running cpprest from a dockerised nginx

I have 2 Docker containers with cpprest, run with:
docker run -it -p 0.0.0.0:8081:8080
and
docker run -it -p 0.0.0.0:8082:8080
I have nginx in a Docker container, run with: docker run -it -p 0.0.0.0:8083:8080
my nginx.conf file is:
http {
    upstream backend {
        server 0.0.0.0:8081;
        server 0.0.0.0:8082;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;

        location / {
            proxy_pass http://backend/;
        }
    }
}
When nginx isn't in a container everything works fine, but when I try with dockerised nginx and open 0.0.0.0:8083 it gives me:
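No answer is shown here, but judging by the other answers on this page the cause is likely the same: inside the nginx container, 0.0.0.0:8081 and 0.0.0.0:8082 refer to the nginx container itself, not to the cpprest containers. A rough sketch of a fix (assuming hypothetical container names cpprest1, cpprest2 and nginx, all attached to the same user-defined network) would be:

docker network create cpprest-net
docker run -it --network cpprest-net --name cpprest1 -p 0.0.0.0:8081:8080 ...
docker run -it --network cpprest-net --name cpprest2 -p 0.0.0.0:8082:8080 ...
docker run -it --network cpprest-net --name nginx -p 0.0.0.0:8083:8080 ...

with the upstream pointing at the container names and the container-internal port:

upstream backend {
    server cpprest1:8080;
    server cpprest2:8080;
}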

Nginx reverse proxy not finding other internal Docker container using hostname

I have two Docker containers. One runs Kestrel (172.17.0.3), the other runs Nginx (172.17.0.4) as a reverse proxy to Kestrel. Nginx connects fine when I use the internal Docker IP of the Kestrel container, but when I try to connect to Kestrel using the container's hostname (kestrel) in nginx.conf, I get the following error:
2020/06/30 00:23:03 [emerg] 58#58: host not found in upstream "kestrel" in /etc/nginx/nginx.conf:7
nginx: [emerg] host not found in upstream "kestrel" in /etc/nginx/nginx.conf:7
I launched containers with these two lines
docker run -d --name kestrel --restart always -h kestrel mykestrelimage
docker run -d --name nginx --restart always -p 80:80 -h nginx mynginximage
My nginx.conf file is below.
http {
    # I've tried with and without the line below, which I found on Stack Overflow
    resolver 127.0.0.11 ipv6=off;

    server {
        listen 80;

        location / {
            # lines below don't work
            # proxy_pass http//kestrel:80;
            # proxy_pass http//kestrel
            # proxy_pass http//kestrel:80/;
            # proxy_pass http//kestrel/;

            # when I put the internal docker IP of the Kestrel server it works fine
            proxy_pass http://172.17.0.3:80/;
        }
    }
}

events {
}
I figured out the solution to my problem. There were two issues.
First problem: by default, Docker creates containers on the default bridge network, and the default bridge network does not provide DNS resolution between containers. You have to create a custom bridge network and then specify that network when creating the containers. The commands below allowed me to ping between containers using their hostnames:
docker network create --driver=bridge mycustomnetwork
docker run -d --name=kestrel --restart=always -h kestrel.local --network=mycustomnetwork mykestrelimage
docker run -d --name=nginx --restart always -p 80:80 -h nginx.local --network=mycustomnetwork mynginximage
Second problem: even though there was only one Kestrel server, for some reason Nginx required that I set up an upstream section in /etc/nginx/nginx.conf:
http {
    upstream backendservers {
        server kestrel;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backendservers/;
        }
    }
}

events {
}
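As an alternative sketch for the second problem (not from the answer above): when the containers share a custom network, you can also skip the upstream block and make nginx resolve the name at request time, by putting the hostname in a variable and pointing the resolver at Docker's embedded DNS ($kestrel_host is just an illustrative variable name):

location / {
    resolver 127.0.0.11 valid=30s;
    set $kestrel_host kestrel;
    proxy_pass http://$kestrel_host:80;
}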

How to expose a Docker container over the internet using Windows

I have configured my router to expose HTTP port 80 on my local machine's IP address, i.e. '192.168.0.79', with both inbound and outbound rules, including allowing it through the firewall. For the purpose of this example let's say the public address is "200.200.200.200".
I have a Node server running locally on this same IP address, and I can see 'Hello World!' when I visit my exposed IP address, e.g. 200.200.200.200, in my web browser. This works.
import yargs from 'yargs';
import express from 'express';

const app = express();
const argv = yargs.argv;
const host = argv.host;
const port = argv.port;

app.get('/', (req, res) => res.send('Hello World!'));

app.listen(port, host, function() {
  console.log('listening on ', host, ':', port);
});
When I stop the Node server and instead run a Docker container on the same IP address as follows:
docker run -p 192.168.0.79:80:8080 -p 50000:50000 --name myjenkins -v %cd%/jenkins:/var/jenkins_home jenkins/jenkins
I can see this locally on my machine, but when trying to access it from an external web browser, e.g. "200.200.200.200", it simply returns HTTP ERROR 504.
Is there something else I need to expose via the Docker container to make this visible online?
I'm having the same issue with an nginx image, so I'm convinced there is something missing in my Docker arguments.
Dockerfile
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY dist /usr/share/nginx/html/dist
COPY nginx/default.conf /etc/nginx/conf.d/
docker build -t nginx_image .
docker run -p 192.168.0.79:80:8080 nginx_image
Sounds like a return-route issue. Log onto your Docker container and see if you can ping 8.8.8.8. Also run netstat -r and see what the default route is; it should be the internal IP address of your firewall.
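A sketch of running those checks from the host, assuming the container from the question is named myjenkins as above and that ping and netstat are actually installed in the image (they may not be in every base image):

docker exec -it myjenkins ping -c 3 8.8.8.8
docker exec -it myjenkins netstat -rn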
OK, so after much exhaustive research it seems there might be a problem with Windows exposing these containers, or it might be something more advanced regarding proxying this container to the outside.
My solution: create a Node server that proxies to localhost on my machine.
Step 1 - get the IP address of this particular desktop computer on the ethernet
start > cmd
ipconfig
Ethernet adapter Ethernet 4 (yours will be different; whichever is connected to the internet):
...
IPv4 Address. . . . . . . . . . . : 192.168.0.79
Step 2 - configure the router (Sky or other) to expose this IP to the internet
Visit 192.168.0.2
user: admin
pass: sky
Advanced > LAN IP Setup > LAN TCP/IP Setup
IP Address: 192.168.0.1
IP Subnet Mask: 255.255.255.0
TICK - Use Router as DHCP Server
Starting IP Address: 192.168.0.2
Ending IP Address: 192.168.0.254
Address Reservation > Add
IP address: 192.168.0.79
MAC address: (this will look something like 4c:a2:e0:... and can be found by running ipconfig /all in cmd and reading the "Physical Address" line)
Device Name: (right-click My Computer > Properties) MYCOMPUTERNAME
Security > Firewall Rules > Outbound Services > Edit
Service: HTTP (TCP 80)
Action: allow always
Access from: any (0.0.0.0)
Security > Firewall Rules > Inbound Services > Edit
Service: HTTP (TCP 80)
Action: allow always
Destination IPv4 LAN address: 192.168.0.79
Access from: any
Step 3 - create a Docker container (e.g. Jenkins) that will default to localhost, and expose it on a port other than 80, e.g. 81. (We need 80 to be exposed via our router.)
Create docker container on localhost:81
docker run -p 81:8080 -p 50000:50000 --name myjenkins -v %cd%/jenkins:/var/jenkins_home jenkins/jenkins
Step 4 - create a Node server or equivalent that will proxy the exposed IP address to this localhost
Create a proxy server that redirects 192.168.0.79 to localhost:81
import express from 'express';
import httpProxy from 'http-proxy';

const app = express();
const host = '192.168.0.79';
const port = '80';
const apiProxy = httpProxy.createProxyServer();

app.all('/*', (req, res) => {
  console.log('redirecting to docker container - http://localhost:81');
  apiProxy.web(req, res, {target: 'http://localhost:81'});
});

app.listen(port, host, function() {
  console.log('listening on ', host, ':', port);
});
Step 5 - type "what's my IP" into a web browser.
The IPv4 address will be something like 30.132.323.11.
Now type this into a web browser and you should see your Docker container exposed via the Node server proxy.

.NET Core Web API app with HTTPS in Docker

I have the simplest possible Web API app in .NET Core (with the default api/values API you get upon creation).
I've enabled HTTPS, so in debug it works, and Kestrel reports:
Hosting environment: Development
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
When I run the app in Docker (using the MS-provided Dockerfile), Kestrel reports that it only listens on port 80:
Hosting environment: Production
Now listening on: http://[::]:80
How do I configure the app to listen on HTTPS as well in Docker?
After making sure that you have EXPOSE 5001 in your app's Dockerfile, use this command to start your app:
sudo docker run -it -p 5000:5000 -p 5001:5001 \
    -e ASPNETCORE_URLS="https://+:5001;http://+:5000" \
    -e ASPNETCORE_HTTPS_PORT=5001 \
    -e ASPNETCORE_Kestrel__Certificates__Default__Password="{YOUR_CERTS_PASSWORD}" \
    -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/{YOUR_CERT}.pfx \
    -v ${HOME}/.aspnet/https:/https/ \
    --restart=always \
    -d {YOUR_DOCKER_ID}/{YOUR_IMAGE_NAME}
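If there is no certificate in ${HOME}/.aspnet/https yet, one way to create a development certificate that matches the placeholders above is the dotnet dev-certs tool (a sketch for local debugging only):

dotnet dev-certs https -ep ${HOME}/.aspnet/https/{YOUR_CERT}.pfx -p {YOUR_CERTS_PASSWORD}
dotnet dev-certs https --trust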
UPDATE:
Just use a self-signed certificate for debugging; here's an example for Kestrel:
WebHost.CreateDefaultBuilder(args)
    .UseKestrel(options =>
    {
        options.Listen(IPAddress.Loopback, 5000); // http://localhost:5000
        options.Listen(IPAddress.Any, 80);        // http://*:80
        options.Listen(IPAddress.Loopback, 443, listenOptions =>
        {
            listenOptions.UseHttps("certificate.pfx", "password");
        });
    })
    .UseStartup<Startup>()
    .Build();
