I maintain a system consisting of two Docker containers, frontend and backend. The frontend container hosts an nginx web server serving a Flutter web app, while the backend provides a REST interface that the frontend uses. Frontend and backend are separate projects and both work fine independently. Both are in the same Docker network ("my-network"), which I create externally:
docker network create my-network
Here is the docker-compose.yml of the backend:
version: '3'
services:
  my-backend:
    image: my-backend
    ports:
      - "8080:8080"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
    external: true
And here is the one for the frontend:
version: '3'
services:
  my-frontend:
    image: my-frontend
    ports:
      - "4200:80"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
    external: true
Everything works fine; I can, e.g., use curl http://my-backend:8080/somepath from within the my-frontend container.
My nginx.conf is as follows:
events {}

http {
    include /etc/nginx/conf/mime.types;

    server {
        listen 80 default_server;
        root '/usr/share/nginx/html';
        index index.html index.htm;

        location / {
            try_files $uri $uri/ /index.html;
        }

        location /backend {
            proxy_pass http://my-backend:8080;
        }

        location /example {
            proxy_pass https://example.com;
        }
    }
}
As I said, from the frontend's container I can resolve my-backend:8080 using curl. I can also access the frontend from a browser (http://localhost:4200) and the example forward (http://localhost:4200/example), yet http://localhost:4200/backend does not reach its destination (while http://localhost:8080 works perfectly fine). After some timeout, nginx returns a 404. Could someone please explain what the issue is?
The Docker images I use are the latest nginx and the latest openjdk.
EDIT:
I changed the nginx.conf file slightly by adding
location /backend {
    resolver 127.0.0.11; # added this line also to the http and server sections
    set $backend http://my-backend:8080;
    proxy_pass $backend;
}
Furthermore, I now use nginx-debug. In the debug output I can see that the IP address is resolved correctly (excerpts from the debug output):
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http request line: "GET /backend/deployment-info HTTP/1.1"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http uri: "/backend/deployment-info"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 test location: "backend"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 using configuration "/backend"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 rewrite phase: 3
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script value: "http://my-backend:8080"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script set $backend
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script var: "http://my-backend:8080"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script copy: "Host"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script var: "my-backend:8080"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script copy: "Connection"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http script copy: "close"
my-frontend_1 | "GET /backend/deployment-info HTTP/1.0
my-frontend_1 | Host: my-backend:8080
my-frontend_1 | Connection: close
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http upstream resolve: "/backend/deployment-info?"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 name was resolved to 172.19.0.2
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 connect to 172.19.0.2:8080, fd:13 #7
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http upstream connect: -2
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http upstream request: "/backend/deployment-info?"
my-frontend_1 | 2022/11/08 12:56:02 [debug] 30#30: *5 http proxy status 404 "404 Not Found"
I verified that curl 172.19.0.2:8080/deployment-info does in fact reach my-backend.
The solution was rather simple: the backend is always called with a path and query parameters, and this was not reflected by the nginx rule. The fix was to use a regex location and pass the captured path on in proxy_pass. The resolver was still needed, though; otherwise I got a 502 Bad Gateway.
location ~ ^/backend/(.*) {
    resolver 127.0.0.11;
    proxy_pass http://my-backend:8080/$1$is_args$args;
}
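As an aside, nginx can also do this prefix stripping without a regex: when proxy_pass is given a URI part (even just a trailing slash) and contains no variables, the matched location prefix is replaced by that URI. A sketch of an equivalent rule under that assumption — note that with a static hostname the name is resolved once when the config is loaded, so the backend must be resolvable at startup:

```nginx
location /backend/ {
    # a request for /backend/deployment-info is forwarded
    # to my-backend as /deployment-info
    proxy_pass http://my-backend:8080/;
}
```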
Related
I'm trying to convert logs and ingest them into SignalFx as metrics. I'm using the fluent-bit built-in http plugin to send to a locally installed SignalFx agent. Installation of both fluent-bit and the SignalFx agent is successful, and I'm able to see the docker containers.
But when sending the logs from fluent-bit to the exposed SignalFx endpoint on port 8095, it is not able to find the host.
Error:
[net] could not connect to localhost:8095
signalfx-processor | [2022/09/01 11:47:59] [debug] [net] could not connect to localhost:8095
signalfx-processor | [2022/09/01 11:47:59] [debug] [task] destroy task=0x7fa02603d690 (task_id=2)
signalfx-processor | [2022/09/01 11:47:59] [debug] [upstream] connection #-1 failed to localhost:8095
signalfx-processor | [2022/09/01 11:47:59] [debug] [upstream] connection #-1 failed to localhost:8095
signalfx-processor | [2022/09/01 11:47:59] [debug] [task] task_id=1 reached retry-attempts limit 1/1
signalfx-processor | [2022/09/01 11:47:59] [error] [output:http:http.0] no upstream connections available to localhost:8095
signalfx-processor | [2022/09/01 11:47:59] [error] [output:http:http.0] no upstream connections available to localhost:8095
docker compose file
version: '3.7'
services:
  signalfx-agent:
    image: quay.io/signalfx/signalfx-agent:5
    ports:
      - "8095:8095"
      - "9080:9080"
    volumes:
      - /:/hostfs:ro
      - ./etc/signalfx:/etc/signalfx:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: always
  fluent-bit:
    image: fluent/fluent-bit:1.9-debug
    user: root
    depends_on:
      - signalfx-agent
    volumes:
      - $PWD/fluentd/nginx.log:/nginx-access.log
      - $PWD/fluentd:/fluent-bit/etc
    container_name: signalfx-processor
    restart: always
I suspect this has something to do with the Docker network setup. Any help would be really appreciated.
Thanks in advance.
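Within a compose project, each container has its own network namespace, so localhost inside the fluent-bit container is fluent-bit itself, not the agent; containers on the same compose network reach each other by service name. A hedged sketch of what the http output section would look like under that assumption (the Match pattern and other keys are placeholders):

```
[OUTPUT]
    Name   http
    Match  *
    Host   signalfx-agent   # compose service name, not localhost
    Port   8095
```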
I have Docker containers with NGINX and a Docker Registry based on the tutorial at https://docs.docker.com/registry/recipes/nginx/
version: "3.5"
services:
  nginx-auth:
    hostname: reverse-proxy
    image: nginx:alpine
    restart: always
    ports:
      - 5044:443
    links:
      - registry:registry
    volumes:
      - ./auth:/etc/nginx/conf.d
      - ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - registry-services
  registry:
    restart: always
    image: registry:2
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
    volumes:
      - ./data:/var/lib/registry
    networks:
      - registry-services
networks:
  registry-services:
    external: true
When I attempt to log in using the credentials in my htpasswd file, I get an error 500
$ sudo docker login -u=testuser -p=testpassword my-docker-registry.com:5044
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: login attempt to https://my-docker-registry.com:5044/v2/ failed with status: 500 Internal Server Error
The NGINX container reports permission denied when trying to access the htpasswd file.
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
10.148.100.212 - - [03/Aug/2022:21:31:20 +0000] "GET /v2/ HTTP/1.1" 401 179 "-" "docker/19.03.6 go/go1.12.17 git-commit/369ce74a3c kernel/5.4.0-64-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.6 \x5C(linux\x5C))"
2022/08/03 21:31:20 [crit] 21#21: *2 open() "/etc/nginx/conf.d/nginx.htpasswd" failed (13: Permission denied), client: 10.148.100.212, server: my-docker-registry.com, request: "GET /v2/ HTTP/1.1", host: "my-docker-registry.com:5044"
10.148.100.212 - testuser [03/Aug/2022:21:31:20 +0000] "GET /v2/ HTTP/1.1" 500 177 "-" "docker/19.03.6 go/go1.12.17 git-commit/369ce74a3c kernel/5.4.0-64-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.6 \x5C(linux\x5C))"
But I can open an sh shell in the NGINX container and verify that the file is there and accessible from within the container:
* Executing task: docker exec -it 40412ee4cda03507e180e7dfadbbe9a060dcc90172088968a299167291a34407 sh
/ # ls -la /etc/nginx/conf.d/nginx.htpasswd
-rw-rw---- 1 675254 8000 70 Aug 3 20:57 /etc/nginx/conf.d/nginx.htpasswd
/ # cat /etc/nginx/conf.d/nginx.htpasswd
testuser:$2y$05$648qqQTyWDvUgk1G3D.o6.CG1bCYI7uu5/jqmQaWGGmEX1js4CuE6
/ #
What do I need to do to allow NGINX to read the htpasswd file?
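A likely host-side fix, assuming the bind-mounted file is ./auth/nginx.htpasswd (the path is an assumption): the nginx master process runs as root, but the worker that actually serves /v2/ runs as the unprivileged nginx user, and the mode -rw-rw---- from the listing gives "other" no read access. Granting world read, reproduced here on a scratch file:

```shell
# Stand-in for ./auth/nginx.htpasswd; the real command on the host
# would simply be: chmod o+r ./auth/nginx.htpasswd
f=$(mktemp)
chmod 660 "$f"     # reproduce the restrictive rw-rw---- mode
chmod o+r "$f"     # grant world read so the nginx worker can open it
stat -c '%a' "$f"  # prints 664
```

Alternatively, chown the file so its group matches the gid the worker runs as; either way, re-check with the ls -la command from the question.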
I'm building my project, a Vue.js app that uses a trusted third-party API. I'm in the middle of writing the Dockerfile and docker-compose.yml, using HAProxy to allow all HTTP methods access to the API. But after running docker-compose up --build, my first service, theApp, stops immediately, and it keeps stopping even after a restart. Here are my files.
Dockerfile
FROM node:18.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "serve"]
docker-compose.yml
version: "3.7"
services:
  theApp:
    container_name: theApp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./src:/app/src
    ports:
      - "9990:9990"
  haproxy:
    image: haproxy:2.3
    expose:
      - "7000"
      - "8080"
    ports:
      - "8080:8080"
    volumes:
      - ./haproxy:/usr/local/etc/haproxy
    restart: "always"
    depends_on:
      - theApp
haproxy.cfg
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout tunnel 1h # timeout to use with WebSocket and CONNECT

# enable resolving through the docker dns and avoid crashing
# if a service is down while the proxy is starting
resolvers docker_resolver
    nameserver dns 127.0.0.11:53

frontend stats
    bind *:7000
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 10s
    stats auth admin:admin

frontend project_frontend
    bind *:8080
    acl is_options method OPTIONS
    use_backend cors_backend if is_options
    default_backend project_backend

backend project_backend
    # START CORS
    http-response add-header Access-Control-Allow-Origin "*"
    http-response add-header Access-Control-Allow-Headers "*"
    http-response add-header Access-Control-Max-Age 3600
    http-response add-header Access-Control-Allow-Methods "GET, DELETE, OPTIONS, POST, PUT, PATCH"
    # END CORS
    server pbe1 theApp:8080 check inter 5s

backend cors_backend
    http-after-response set-header Access-Control-Allow-Origin "*"
    http-after-response set-header Access-Control-Allow-Headers "*"
    http-after-response set-header Access-Control-Max-Age "31536000"
    http-request return status 200
Here is the error output from the command:
[NOTICE] 150/164342 (1) : New worker #1 (8) forked
haproxy_1 | [WARNING] 150/164342 (8) : Server project_backend/pbe1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1 | [NOTICE] 150/164342 (8) : haproxy version is 2.3.20-2c8082e
haproxy_1 | [NOTICE] 150/164342 (8) : path to executable is /usr/local/sbin/haproxy
haproxy_1 | [ALERT] 150/164342 (8) : backend 'project_backend' has no server available!
trisaic |
trisaic | > trisaic@0.1.0 serve
trisaic | > vue-cli-service serve
trisaic |
trisaic | INFO Starting development server...
trisaic | ERROR Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | at checkResourceSource (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:167:11)
trisaic | at Function.normalizeRule (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:198:4)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:110:20
trisaic | at Array.map (<anonymous>)
trisaic | at Function.normalizeRules (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:109:17)
trisaic | at new RuleSet (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:104:24)
trisaic | at new NormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/NormalModuleFactory.js:115:18)
trisaic | at Compiler.createNormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:636:31)
trisaic | at Compiler.newCompilationParams (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:653:30)
trisaic | at Compiler.compile (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:661:23)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:77:18
trisaic | at AsyncSeriesHook.eval [as callAsync] (eval at create (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:24:1)
trisaic | at AsyncSeriesHook.lazyCompileHook (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/Hook.js:154:20)
trisaic | at Watching._go (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:41:32)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:33:9
trisaic | at Compiler.readRecords (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:529:11)
trisaic exited with code 1
I have already tried everything I could google but got stuck. Am I missing something here?
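Separate from the webpack crash, one mismatch worth checking once theApp starts: the haproxy backend forwards to theApp:8080, but the compose file publishes the app on 9990, which suggests the dev server listens on 9990 inside the container. A hedged sketch of the corrected server line, assuming 9990 really is the container port, and attaching the resolvers section that the config already defines so haproxy retries DNS instead of failing at startup:

```
backend project_backend
    server pbe1 theApp:9990 check inter 5s resolvers docker_resolver init-addr libc,none
```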
I've been trying to teach myself Nginx. Naturally, I figured I should use Docker. I'm trying to do this with Docker for Windows; I would eventually move to a Linux server. I feel like I'm so close, but I'm stuck on this last issue.
reverseproxy_1 | 2021/07/14 22:37:31 [error] 31#31: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.18.0.2:5000/favicon.ico", host: "localhost:4000", referrer: "http://localhost:4000/"
Does anyone have any suggestions? I'm new to this, so it's probably something simple. I've gone through several tutorials and I really feel like this should work.
version: '3.7'
services:
  web:
    image: 'anatomy-lab2'
    container_name: 'AnatomyLabWeb'
    ports:
      - "5000:80"
    restart: always
  reverseproxy:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - '4000:4000'
    depends_on:
      - web
    restart: always
user nginx;

events {
    worker_connections 1000;
}

http {
    upstream web-api {
        server web:5000;
    }

    server {
        listen 4000;

        location / {
            proxy_pass http://web-api;
        }
    }
}
λ docker-compose up
Starting AnatomyLabWeb ... done
Starting anatomy-lab_reverseproxy_1 ... done
Attaching to AnatomyLabWeb, anatomy-lab_reverseproxy_1
reverseproxy_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
reverseproxy_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
reverseproxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
reverseproxy_1 | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
reverseproxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
reverseproxy_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
reverseproxy_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
AnatomyLabWeb | [04:56:26 WRN] Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
AnatomyLabWeb | [04:56:26 INF] User profile is available. Using '/root/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
AnatomyLabWeb | Hosting environment: Production
AnatomyLabWeb | Content root path: /app
AnatomyLabWeb | Now listening on: http://[::]:80
AnatomyLabWeb | Application started. Press Ctrl+C to shut down.
reverseproxy_1 | 2021/07/15 04:56:33 [error] 23#23: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://172.18.0.2:5000/", host: "localhost:4000"
reverseproxy_1 | 172.18.0.1 - - [15/Jul/2021:04:56:33 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
reverseproxy_1 | 2021/07/15 04:56:33 [error] 23#23: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.18.0.2:5000/favicon.ico", host: "localhost:4000", referrer: "http://localhost:4000/"
reverseproxy_1 | 172.18.0.1 - - [15/Jul/2021:04:56:33 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "http://localhost:4000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
I can get the web app to work just fine by itself (ASP.NET/Kestrel), but I can't seem to hook it up to Nginx.
Any thoughts on this would be great; I've been stuck for quite a bit of time.
The problem came from

upstream web-api {
    server web:5000;
}

In the dockerized environment the web container listens on port 80 (its container port, not the published host port), so you need to change the config to

upstream web-api {
    server web:80;
}
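In general, a published port mapping only affects the host side; container-to-container traffic always uses the container port. Annotated, the mapping from the compose file above reads:

```yaml
services:
  web:
    ports:
      - "5000:80"   # host:container — the host reaches the app on 5000,
                    # but other containers (like nginx) must use web:80
```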
I've been trying to set up a Consul server and connect an agent to it for two or three days now. I'm using docker-compose.
But after performing a join operation, the agent gets the message "Agent not live or unreachable".
Here are the logs:
root#e33a6127103f:/app# consul agent -join 10.1.30.91 -data-dir=/tmp/consul
==> Starting Consul agent...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Version: 'v1.0.1'
Node ID: '0e1adf74-462d-45a4-1927-95ed123f1526'
Node name: 'e33a6127103f'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: e33a6127103f 172.17.0.2
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 10:44:43 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 10:44:43 [INFO] agent: (LAN) joining: [10.1.30.91]
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: consul1 172.19.0.2
2017/12/06 10:44:43 [INFO] consul: adding server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:44:43 [INFO] agent: (LAN) joined: 1 Err: <nil>
2017/12/06 10:44:43 [INFO] agent: started state syncer
2017/12/06 10:44:43 [WARN] manager: No servers available
2017/12/06 10:44:43 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:44:54 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:44:55 [ERR] consul: "Catalog.NodeServices" RPC failed to server 172.19.0.2:8300: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:55 [ERR] agent: failed to sync remote state: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:58 [INFO] memberlist: Marking consul1 as failed, suspect timeout reached (0 peer confirmations)
2017/12/06 10:44:58 [INFO] serf: EventMemberFailed: consul1 172.19.0.2
2017/12/06 10:44:58 [INFO] consul: removing server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:45:05 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:45:06 [WARN] manager: No servers available
2017/12/06 10:45:06 [ERR] agent: Coordinate update error: No known Consul servers
2017/12/06 10:45:12 [WARN] manager: No servers available
2017/12/06 10:45:12 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:13 [INFO] serf: attempting reconnect to consul1 172.19.0.2:8301
2017/12/06 10:45:28 [WARN] manager: No servers available
2017/12/06 10:45:28 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:32 [WARN] manager: No servers available
My settings are as follows. The docker-compose service for the server:
consul1:
  image: "consul.1.0.1"
  container_name: "consul1"
  hostname: "consul1"
  volumes:
    - ./consul/config:/config/
  ports:
    - "8400:8400"
    - "8500:8500"
    - "8600:53"
    - "8300:8300"
    - "8301:8301"
  command: "agent -config-dir=/config -ui -server -bootstrap-expect 1"
Please help me solve this problem.
I think you're using the wrong IP address for the consul server:

consul agent -join 10.1.30.91 -data-dir=/tmp/consul

10.1.30.91 is not the Docker container's IP; it is probably your host/VirtualBox address. Get the consul container's IP and use that in the consul agent join command.
For more info about how consul and its agents work, follow this link:
https://dzone.com/articles/service-discovery-with-docker-and-consul-part-1
Try to get the right IP address by executing this command:

docker inspect <container id> | grep "IPAddress"

where <container id> is the container ID of the consul server.
Then use the obtained address instead of "10.1.30.91" in the command:

consul agent -join <IP ADDRESS CONSUL SERVER> -data-dir=/tmp/consul
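Alternatively, running the agent in the same compose file (and thus on the same Docker network) as the server lets it join by service name, so no IP lookup is needed at all. A minimal sketch under that assumption (the image tag follows the question; -client 0.0.0.0 makes the server's HTTP/DNS endpoints reachable from outside its own container):

```yaml
services:
  consul1:
    image: "consul.1.0.1"
    command: "agent -config-dir=/config -ui -server -bootstrap-expect 1 -client 0.0.0.0"
  consul-agent:
    image: "consul.1.0.1"
    command: "agent -retry-join consul1 -data-dir=/tmp/consul"
    depends_on:
      - consul1
```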