GET http://api:1337/games net::ERR_NAME_NOT_RESOLVED for nuxt.js pages using asyncData - docker

I have a somewhat complicated setup with Docker. Everything works as expected except for this one weird problem.
Visiting the index page or /pages/_id pages produces no errors, but when I try to open /other-page it crashes. All pages use the same API URL.
Error found in the console when opening /other-page:
GET http://api:1337/games net::ERR_NAME_NOT_RESOLVED
Not sure what to do, any suggestions?
nuxt.config.js
axios: {
  baseURL: 'http://api:1337'
},
docker-compose.yml
version: '3'
services:
  api:
    build: .
    image: strapi/strapi
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=strapi
      - HOST=api
      - NODE_ENV=production
    ports:
      - 1337:1337
    volumes:
      - ./strapi-app:/usr/src/api/strapi-app
      #- /usr/src/api/strapi-app/node_modules
    depends_on:
      - db
    restart: always
    links:
      - db
  nuxt:
    # build: ./app/
    image: "registry.gitlab.com/username/package:latest"
    container_name: nuxt
    restart: always
    ports:
      - "3000:3000"
    links:
      - api:api
    command:
      "npm run start"
  nginx:
    image: nginx:1.14.2
    expose:
      - 80
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - nuxt
    links:
      - nuxt
index.vue
...
async asyncData({ store, $axios }) {
  const games = await $axios.$get('/games')
  store.commit('games/emptyList')
  games.forEach(game => {
    store.commit('games/add', {
      id: game.id || game._id,
      ...game
    })
  })
  return { games }
},
...
page.vue
...
async asyncData({ store, $axios }) {
  const games = await $axios.$get('/games')
  store.commit('games/emptyList')
  games.forEach(game => {
    store.commit('games/add', {
      id: game.id || game._id,
      ...game
    })
  })
  return { games }
},
...
Nginx conf
upstream webserver {
    ip_hash;
    server nuxt:3000;
}

server {
    listen 80;
    access_log off;
    connection_pool_size 512k;
    large_client_header_buffers 4 512k;

    location / {
        proxy_pass http://webserver;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_max_temp_file_size 0;
    }
}
UPDATE:
I tried what Thomasleveil suggested. Now I'm receiving the following error:
nuxt | [2:09:35 PM] Error: connect ECONNREFUSED 127.0.0.1:80
So it seems like /api is now being forwarded to 127.0.0.1:80. Not sure why.
nuxt.config.js
axios: {
  baseURL: '/api'
},
server: {
  proxyTable: {
    '/api': {
      target: 'http://localhost:1337',
      changeOrigin: true,
      pathRewrite: {
        "^/api": ""
      }
    }
  }
}
docker-compose.yml
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80" # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # listen to the Docker events
    networks:
      - mynet
  api:
    build: .
    image: strapi/strapi
    container_name: api
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=strapi
      - HOST=api
      - NODE_ENV=development
    ports:
      - 1337:1337
    volumes:
      - ./strapi-app:/usr/src/api/strapi-app
      #- /usr/src/api/strapi-app/node_modules
    depends_on:
      - db
    restart: always
    networks:
      - mynet
    labels:
      - "traefik.backend=api"
      - "traefik.docker.network=mynet"
      - "traefik.frontend.rule=Host:example.com;PathPrefixStrip:/api"
      - "traefik.port=1337"
  db:
    image: mongo
    environment:
      - MONGO_INITDB_DATABASE=strapi
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    restart: always
    networks:
      - mynet
  nuxt:
    # build: ./app/
    image: "registry.gitlab.com/username/package:latest"
    container_name: nuxt
    restart: always
    ports:
      - "3000:3000"
    command:
      "npm run start"
    networks:
      - mynet
    labels:
      - "traefik.backend=nuxt"
      - "traefik.frontend.rule=Host:example.com;PathPrefixStrip:/"
      - "traefik.docker.network=web"
      - "traefik.port=3000"
networks:
  mynet:
    external: true

Visiting index page or /pages/_id pages I have no errors. But when I try to open /other-page it crashes.
To reformulate:
- I have a main page at / that shows links targeting pages at /pages/_id (where _id is a valid game id)
- When I open / or /pages/_id directly, the content shows up
- But if I click a link from page / targeting /pages/xxx (where xxx is a valid id), I get an error
- Furthermore, if I refresh that page, I then see the content and not the error
- The content for those pages comes from an api server. The pages are supposed to fetch the content by calling the api server and then render with the response.
What's happening here?
AsyncData
The way asyncData works in nuxt.js is the following:
on first page load
- the user enters the url http://yourserver/pages/123 in their browser
- the nuxt web server handles the request, resolves the route and mounts the vue component for that page
- the asyncData method of the vue component is called on the nuxt.js server side
- the nuxt.js server (not the user's browser) then fetches the content by calling http://api:1337/games/123, receives the response and renders the content
when the user clicks a link for another page
Something a bit different happens now.
- The user is still on the page http://yourserver/pages/123, which has a link to the main page listing all the games (http://yourserver/), and clicks it.
- the browser does not load a new page. Instead, the user's browser makes an ajax call to http://api:1337/games to try to fetch the new content, and fails with a name resolution error.
Why?
This is a feature of nuxt.js that speeds up page content loading. From the documentation, the important bit of information is:
asyncData is called every time before loading the page component. It will be called server-side once (on the first request to the Nuxt app) and client-side when navigating to further routes.
server-side means the call is made from the nuxt server to the api server
client-side means the call is made from the user browser to the api server
Now the fun part:
the nuxt server is running in a first container
the api server is running in a second container and is listening on port 1337
from the nuxt container, the url for calling the api server is http://api:1337/, and this works fine
from the user's browser, calling http://api:1337 fails (net::ERR_NAME_NOT_RESOLVED) because the user's computer does not know how to translate the domain name api to an IP address. And even if it could, that IP address would be unreachable anyway.
What can be done?
You need to set up a reverse proxy that forwards requests made by user browsers to URLs starting with http://yourserver/api/ on to the api container on port 1337.
You also need to configure nuxt.js so that calls to the api made client-side (from the user's browser) use the URL http://yourserver/api instead of http://api:1337/.
And you need to configure nuxt.js so that it keeps calling http://api:1337 for calls made server-side.
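The gist of the split can be sketched in a couple of lines (a hypothetical helper; in the real setup below, the Axios and Proxy modules do this for you):

```javascript
// Which base URL to use for api calls, depending on where the code runs.
// The values match this setup; the function itself is purely illustrative.
function apiBaseURL(isServerSide) {
  // server-side: the nuxt container can resolve the docker service name "api"
  // client-side: the user's browser can only reach the public reverse proxy
  return isServerSide ? 'http://api:1337' : '/api';
}

console.log(apiBaseURL(true));  // http://api:1337
console.log(apiBaseURL(false)); // /api
```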
adding a reverse proxy for calls made from nuxt (server-side)
Since you are using the nuxt.js Axios module to make calls to the api container, you are halfway there.
The Axios module has a proxy option that can be set to true in nuxt.config.js.
Note that the documentation states that the proxy option is incompatible with the baseURL option; the prefix option must be used instead.
Your nuxt.config.js should then look like this:
axios: {
  prefix: '/api',
  proxy: true
},
proxy: {
  '/api/': {
    target: 'http://localhost:1337',
    pathRewrite: {
      '^/api/': ''
    }
  }
},
This works fine from your development computer, where strapi is running and responding at http://localhost:1337. But it won't work in a container, because there we need to replace http://localhost:1337 with http://api:1337.
To do so, we can introduce an environment variable (STRAPI_URL):
axios: {
  prefix: '/api',
  proxy: true
},
proxy: {
  '/api/': {
    target: process.env.STRAPI_URL || 'http://localhost:1337',
    pathRewrite: {
      '^/api/': ''
    }
  }
},
We will later set the STRAPI_URL in the docker-compose.yml file.
adding a reverse proxy for calls made from the user browser (client-side)
Since I gave up on implementing reverse proxies with nginx when using docker, here's an example with Traefik:
docker-compose.yml:
version: '3'
services:
  reverseproxy: # see https://docs.traefik.io/#the-traefik-quickstart-using-docker
    image: traefik:1.7
    command: --docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  api:
    image: strapi/strapi
    environment:
      - ...
    expose:
      - 1337
    labels:
      traefik.frontend.rule: PathPrefixStrip:/api
      traefik.port: 1337
  nuxt:
    image: ...
    expose:
      - 3000
    command:
      "npm run start"
    labels:
      traefik.frontend.rule: PathPrefixStrip:/
      traefik.port: 3000
Now all HTTP requests made by the user browser to http://yourserver will be handled by the Traefik reverse proxy.
Traefik will configure forwarding rules by looking at labels starting with traefik. on the nuxt and api containers.
What changed?
You now have 2 reverse proxies:
one for server-side requests (the nuxt.js Proxy module)
one for client-side requests (Traefik)
It's not done yet
We now need to instruct the nuxt.js Proxy module to forward requests to http://api:1337/. We use the STRAPI_URL environment variable for that.
And we need to instruct the nuxt.js Axios module that the user's browser must call the api at http://yourserver/api. This is done with the API_URL_BROWSER environment variable.
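Roughly, the two variables feed the two halves of the setup (an illustration only; the defaults match the config below):

```javascript
// STRAPI_URL: forwarding target used by the nuxt server's proxy (server-side).
// API_URL_BROWSER: base URL the user's browser uses for api calls (client-side).
const serverSideTarget = process.env.STRAPI_URL || 'http://localhost:1337';
const browserBase = process.env.API_URL_BROWSER || '/api';

console.log(serverSideTarget, browserBase);
```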
All together
nuxt.config.js
axios: {
  prefix: '/api',
  proxy: true
},
proxy: {
  '/api/': {
    target: process.env.STRAPI_URL || 'http://localhost:1337',
    pathRewrite: {
      '^/api/': ''
    }
  }
},
docker-compose.yml
version: '3'
services:
  reverseproxy: # see https://docs.traefik.io/#the-traefik-quickstart-using-docker
    image: traefik:1.7
    command: --docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  api:
    image: strapi/strapi
    environment:
      - ...
    expose:
      - 1337
    labels:
      traefik.frontend.rule: PathPrefixStrip:/api
      traefik.port: 1337
  nuxt:
    image: ...
    expose:
      - 3000
    command:
      "npm run start"
    environment:
      NUXT_HOST: 0.0.0.0
      STRAPI_URL: http://api:1337/
      API_URL_BROWSER: /api
    labels:
      traefik.frontend.rule: PathPrefixStrip:/
      traefik.port: 3000

Related

Api-platform docker deployment behind nginx-proxy

I'm trying to deploy a quick demo of api-platform.
My apologies in advance if I missed something in a discussion or the documentation; I'm not used to working on deployments and may not be looking in the right places.
I use a server where I already have some docker containers running, with the nginxproxy/nginx-proxy docker container as a reverse proxy.
I followed the api-platform documentation on deploying with docker-compose (https://api-platform.com/docs/deployment/docker-compose/#deploying), but since I started working on this I keep alternating between "502 bad gateway" and "The page is not redirected correctly" errors.
Currently I've got this docker-compose.yml:
version: "3.4"
services:
  php:
    build:
      context: ./api
      target: api_platform_php
    depends_on:
      - database
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
    healthcheck:
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s
    networks:
      - 'cloud'
  caddy:
    build:
      context: api/
      target: api_platform_caddy
    depends_on:
      - php
    environment:
      PWA_UPSTREAM: pwa:3000
      SERVER_NAME: ${SERVER_NAME:-localhost, caddy:80}
      MERCURE_PUBLISHER_JWT_KEY: ${MERCURE_PUBLISHER_JWT_KEY:-!ChangeMe!}
      MERCURE_SUBSCRIBER_JWT_KEY: ${MERCURE_SUBSCRIBER_JWT_KEY:-!ChangeMe!}
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
      - caddy_data:/data
      - caddy_config:/config
    ports:
      # HTTP
      - target: 80
        published: 7000
        protocol: tcp
      # HTTPS
      - target: 443
        published: 7001
        protocol: tcp
      # HTTP/3
      - target: 443
        published: 7001
        protocol: udp
    networks:
      - 'cloud'
  database:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=api
      - POSTGRES_PASSWORD=!ChangeMe!
      - POSTGRES_USER=api-platform
    volumes:
      - db_data:/var/lib/postgresql/data:rw
      # you may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./api/docker/db/data:/var/lib/postgresql/data:rw
    networks:
      - 'cloud'
volumes:
  php_socket:
  db_data:
  caddy_data:
  caddy_config:
networks:
  cloud:
    external: true
and this docker-compose.preprod.yml file :
version: "3.4"

# Preproduction environment override
services:
  php:
    environment:
      APP_ENV: prod
      APP_SECRET: ${APP_SECRET}
  caddy:
    environment:
      MERCURE_PUBLISHER_JWT_KEY: ${MERCURE_PUBLISHER_JWT_KEY:-!ChangeMe!}
      MERCURE_SUBSCRIBER_JWT_KEY: ${MERCURE_SUBSCRIBER_JWT_KEY:-!ChangeMe!}
      VIRTUAL_HOST: api-preprod.melofeel.com
      VIRTUAL_PORT: 80
      LETSENCRYPT_HOST: api-preprod.melofeel.com
I'm deploying it with gitlab-ci and launching it with this command :
SERVER_NAME=******.*****.com APP_SECRET=testdeploy POSTGRES_PASSWORD=testdeploy CADDY_MERCURE_JWT_SECRET=testdeploy docker-compose -f api_preprod/docker-compose.yml -f api_preprod/docker-compose.preprod.yml up -d
I've tried running it with and without Caddy; without it, I always get "502 bad gateway".
The 3 containers are running, but when I look at the Caddy logs I see this message:
{"level":"error","ts":1648201680.3190682,"logger":"tls.issuance.acme.acme_client","msg":"challenge failed","identifier":"*****.*****.com","challenge_type":"http-01","problem":{"type":"urn:ietf:params:acme:error:unauthorized","title":"","detail":"Invalid response from http://*****.*****.com/.well-known/acme-challenge/O9zJRdytI8vlf7yZLRcV9pzUlmI73ysCqQJTHg8XWTw [188.165.218.39]: 404","instance":"","subproblems":[]}}
I've tried to deactivate Caddy's automatic HTTPS, since nginx-proxy is already responsible for it, but that doesn't seem to work.
My Caddyfile :
{
    # Debug
    {$DEBUG}
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
    auto_https disable_redirects
}

{$SERVER_NAME}

log

# Matches requests for HTML documents, for static files and for Next.js files,
# except for known API paths and paths with extensions handled by API Platform
@pwa expression `(
        {header.Accept}.matches("\\btext/html\\b")
        && !{path}.matches("(?i)(?:^/docs|^/graphql|^/bundles/|^/_profiler|^/_wdt|\\.(?:json|html$|csv$|ya?ml$|xml$))")
    )
    || {path} == "/favicon.ico"
    || {path} == "/manifest.json"
    || {path} == "/robots.txt"
    || {path}.startsWith("/_next")
    || {path}.startsWith("/sitemap")`

route {
    root * /srv/api/public
    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt:///data/mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Allow anonymous subscribers (double-check that it's what you want)
        anonymous
        # Enable the subscription API (double-check that it's what you want)
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    vulcain
    push
    # Add links to the API docs and to the Mercure Hub if not set explicitly (e.g. the PWA)
    header ?Link `</docs.jsonld>; rel="http://www.w3.org/ns/hydra/core#apiDocumentation", </.well-known/mercure>; rel="mercure"`
    # Comment the following line if you don't want Next.js to catch requests for HTML documents.
    # In this case, they will be handled by the PHP app.
    reverse_proxy @pwa http://{$PWA_UPSTREAM}
    php_fastcgi unix//var/run/php/php-fpm.sock
    encode zstd gzip
    file_server
}
Thanks in advance for any help or explanation that allows me to understand what the problem is.
I managed to get it working by changing the caddy server name to
SERVER_NAME: ${SERVER_NAME:-localhost, caddy}:80
Then in nginx-proxy I redirect it to the IP over HTTP on port 80, with an HTTPS certificate and so on, but I've found this breaks Vulcain's preloading of fetch requests with the following error:
A preload for '<URL>' is found, but is not used because the request credentials mode does not match. Consider taking a look at crossorigin attribute.
I haven't managed to fix that yet. I can confirm my setup works when I point my DNS record directly at the server running Docker, as opposed to nginx-proxy.

How to connect two docker containers together

I have a React front-end application and a simple Python Flask API, and I am using a docker-compose.yml to spin up both containers, like this:
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    ports:
      - 80:80
    links:
      - "backend:backend"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
    ports:
      - 8083:8083
I have used links so the frontend service can talk to the backend service using axios, as below:
axios.get("http://backend:8083/monitors").then(res => {
  this.setState({
    status: res.data
  });
});
I used docker-compose up --build -d to build and start the two containers, and they started without any issue and run fine.
But now the frontend cannot talk to the backend.
I am using an AWS EC2 instance. When the page loads, I checked the console for errors and I get this:
VM167:1 GET http://backend:8083/monitors net::ERR_NAME_NOT_RESOLVED
Can someone please help me?
The backend service is up and running.
You can use nginx as a reverse proxy for both.
The compose file
version: "3.2"
services:
  frontend:
    build: .
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - backend
  backend:
    build: ./api
    # volumes:
    #   - ./api:/usr/src/app
    environment:
      # CHOKIDAR_USEPOLLING: "true"
      FLASK_APP: /usr/src/app/server.py
      FLASK_DEBUG: 1
  proxy:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/example.conf
    ports:
      - 80:80
minimal nginx config (nginx.conf):
server {
    server_name example.com;
    server_tokens off;

    location / {
        proxy_pass http://frontend:80;
    }
}

server {
    server_name api.example.com;
    server_tokens off;

    location / {
        proxy_pass http://backend:8083;
    }
}
The request hits the nginx container and is routed, according to the domain, to the right container.
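As a toy sketch of that routing decision (a hypothetical helper; the hostnames and upstreams mirror the config above):

```javascript
// Pick an upstream from the request's Host header, mirroring the two
// nginx server blocks above (illustrative only, not nginx's actual logic).
function upstreamFor(host) {
  const routes = {
    'example.com': 'http://frontend:80',
    'api.example.com': 'http://backend:8083',
  };
  return routes[host] || null; // nginx would use its default server instead
}

console.log(upstreamFor('api.example.com')); // http://backend:8083
```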
To use example.com and api.example.com you need to edit your hosts file:
Linux: /etc/hosts
Windows: c:\windows\system32\drivers\etc\hosts
Mac: /private/etc/hosts
127.0.0.1 example.com api.example.com

Problem with nginx reverse proxy and docker. Only one proxy_path is working

I am completely lost. I have two proxy_paths in my nginx conf: '/' and '/api'. '/' redirects to my frontend and works perfectly, but the '/api' path is not working at all. The proxy server logs requests to '/api/' but does not forward them to my actual api server. I'm missing something. Is the '/' proxy_path some sort of catch-all that overrides the other paths? Any assistance would be invaluable! Thanks! Here are my configs:
nginx reverse proxy conf:
server {
    listen 80;
    server_name proxy;

    location / {
        proxy_pass http://frontend_prod:3000/;
    }

    location /api {
        proxy_pass http://api_prod:3333/;
    }
}
docker-compose:
version: '3.1'
services:
  proxy:
    build: ./proxy/
    ports:
      - '9000:80'
    restart: always
    depends_on:
      - frontend_prod
      - api_prod
  frontend_prod:
    build: ./frontend/nginx/
    ports:
      - '3000'
    depends_on:
      - api_prod
    restart: always
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: hunter2
  api_prod:
    build: ./backend/api/
    command: npm run production
    ports:
      - '3333'
    depends_on:
      - db

nginx unable to forward request to a service running in a different container

I want to use an nginx reverse proxy as an API gateway in my microservice architecture.
Problem: nginx is unable to proxy_pass to my payment_service running in a different container. However, when I curl payment_service:3000 from inside the nginx container, it works, so the network is OK.
docker-compose.yml
version: '3'
services:
  payment_service:
    container_name: payment_service
    build: ./payment
    ports:
      - "3000:3000"
    volumes:
      - ./payment:/usr/app
    networks:
      - microservice-network
  api_gateway:
    image: nginx:latest
    container_name: api_gateway
    restart: always
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8080:8080
      - 443:443
    depends_on:
      - payment_service
    networks:
      - microservice-network
networks:
  microservice-network:
    driver: bridge
default.conf
upstream payment_server {
    server payment_service:3000 max_fails=10;
}

server {
    listen 8080;

    location /api/v1/payment {
        proxy_pass http://payment_server;
    }
}
The payment service works fine when I access it directly at http://localhost:3000, but not via http://localhost:8080/api/v1/payment.
According to the docs
If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI...
I don't know what your payment service is expecting, but I'm assuming you're trying to hit the root path. You need to add a trailing slash to the proxy_pass:
location /api/v1/payment {
    proxy_pass http://payment_server/;
}
otherwise the request will be made as
http://payment_service:3000/api/v1/payment
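A rough sketch of that substitution rule (a hypothetical helper, not nginx's actual code):

```javascript
// When proxy_pass carries a URI (here "/"), nginx replaces the part of the
// request URI that matches the location prefix with that URI (illustrative).
function upstreamUri(requestUri, locationPrefix, proxyUri) {
  return proxyUri + requestUri.slice(locationPrefix.length);
}

// With the trailing slash, a request for /api/v1/payment reaches the
// upstream as "/":
console.log(upstreamUri('/api/v1/payment', '/api/v1/payment', '/')); // "/"
```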

Can not update or save settings on SonarQube behind Nginx reverse proxy

I have an instance of SonarQube and nginx, both in Docker.
SonarQube is behind nginx and works fine; I can access it, BUT I cannot update anything in the SonarQube UI, such as installing plugins from the Marketplace.
When I press install, or update my account details, it fails.
It looks like it fails to pass cookies and the authentication token when making Ajax calls. When pressing the install button in the plugin marketplace, it makes a POST call to: http://localhost:8089/sonarqube/api/plugins/install
Have I missed anything in the nginx config?
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    server {
        location /sonarqube {
            proxy_pass http://sonarqube:9000;
            proxy_read_timeout 90s;
            proxy_redirect http://sonarqube:9000 http://localhost:8089;
        }
    }
}
version: "3.5"
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "8089:8080"
    networks:
      - sonarnet
  sonarqube:
    build:
      context: .
      dockerfile: Dockerfile.sonarqube
    expose:
      - "9000"
    networks:
      - sonarnet
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
      - sonar.forceAuthentication=true
      - sonar.web.context=/sonarqube
      ########### The following line fixed the problem: ###########
      - sonar.core.serverBaseUrl=localhost:8089/sonarqube
      #############################################################
    volumes:
      - sonarqubeConf:/opt/sonarqube/conf
      - sonarqubeLogs:/opt/sonarqube/logs
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresqlData:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
After spending a whole day on it, I found the solution. Since Sonar is behind the reverse proxy and accessible at localhost:8089/sonarqube, we need to specify sonar.core.serverBaseUrl as well. Adding the following environment variable solved the problem: - sonar.core.serverBaseUrl=localhost:8089/sonarqube
