Keycloak LDAP authentication from a dockerized NuxtJS app - docker

I am facing a problem with my authentication with Keycloak. Everything works fine when my Nuxt app is running locally (npm run dev), but when it is inside a Docker container, something goes wrong.
Windows 10
Docker 20.10.11
Docker-compose 1.29.2
nuxt: ^2.15.7
@nuxtjs/auth-next: ^5.0.0-1637745161.ea53f98
@nuxtjs/axios: ^5.13.6
I have a Docker setup containing Keycloak and LDAP: keycloak:8180 and myad:10389. My Nuxt app runs on port 3000.
On the front end, here is my configuration, which works great when I launch the app locally with npm run dev:
server: {
  port: 3000,
  host: '0.0.0.0'
},
...
auth: {
  strategies: {
    local: false,
    keycloak: {
      scheme: 'oauth2',
      endpoints: {
        authorization: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/auth',
        token: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/token',
        userInfo: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/userinfo',
        logout: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/logout?redirect_uri=' + encodeURIComponent('http://localhost:3000')
      },
      token: {
        property: 'access_token',
        type: 'Bearer',
        name: 'Authorization',
        maxAge: 300
      },
      refreshToken: {
        property: 'refresh_token',
        maxAge: 60 * 60 * 24 * 30
      },
      responseType: 'code',
      grantType: 'authorization_code',
      clientId: '<client_id>',
      scope: ['openid'],
      codeChallengeMethod: 'S256'
    }
  },
  redirect: {
    login: '/',
    logout: '/',
    home: '/home'
  }
},
router: {
  middleware: ['auth']
}
}
And here are my Keycloak and Nuxt docker-compose configurations:
keycloak:
  image: quay.io/keycloak/keycloak:latest
  container_name: keycloak
  hostname: keycloak
  environment:
    - DB_VENDOR=***
    - DB_ADDR=***
    - DB_DATABASE=***
    - DB_USER=***
    - DB_SCHEMA=***
    - DB_PASSWORD=***
    - KEYCLOAK_USER=***
    - KEYCLOAK_PASSWORD=***
    - PROXY_ADDRESS_FORWARDING=true
  ports:
    - "8180:8080"
  networks:
    - ext_sd_bridge
networks:
  ext_sd_bridge:
    external:
      name: sd_bridge

client_ui:
  image: ***
  container_name: client_ui
  hostname: client_ui
  ports:
    - "3000:3000"
  networks:
    - sd_bridge
networks:
  sd_bridge:
    name: sd_bridge
When my Nuxt app is inside its container, the authentication itself seems to work, but the redirects act strangely: the browser's network tab shows that after the redirect to "/home" I am always sent back to my login page ("/").
Am I missing something or is there something I am doing wrong?

I figured out what my problem was.
Basically, my nuxt.config.js was wrong for use inside a Docker container. I had to change the auth endpoints to relative URLs:
endpoints: {
  authorization: '/auth/realms/<realm>/protocol/openid-connect/auth',
  token: '/auth/realms/<realm>/protocol/openid-connect/token',
  userInfo: '/auth/realms/<realm>/protocol/openid-connect/userinfo',
  logout: '/auth/realms/<realm>/protocol/openid-connect/logout?redirect_uri=' + encodeURIComponent('http://localhost:3000')
}
And proxy the "/auth" requests to the hostname of my Keycloak Docker container (note that the Keycloak and Nuxt containers are on the same network in my docker-compose files):
proxy: {
  '/auth': 'http://keycloak:8180'
}
At this point, every request worked fine except the "/authenticate" one, because "keycloak:8180/authenticate" ends up in the browser's address bar, and the browser, of course, cannot resolve "keycloak".
To make this work, I added this environment variable to my Keycloak service in docker-compose:
KEYCLOAK_FRONTEND_URL=http://localhost:8180/auth
With this variable, the full authentication/redirection process works like a charm, with Keycloak and Nuxt each in their containers :)
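One way to avoid maintaining two endpoint sets (absolute URLs for local dev, relative ones for Docker) is to derive them from a single base URL. This is only a sketch; the helper name and the KEYCLOAK_BASE variable are my own inventions, not part of the original config:

```javascript
// Build the Keycloak endpoints from a base URL: '' inside Docker (relative
// URLs go through the '/auth' proxy), 'http://localhost:8180' in local dev.
// keycloakEndpoints and KEYCLOAK_BASE are hypothetical names for this sketch.
function keycloakEndpoints (base, realm, appUrl) {
  const root = `${base}/auth/realms/${realm}/protocol/openid-connect`;
  return {
    authorization: `${root}/auth`,
    token: `${root}/token`,
    userInfo: `${root}/userinfo`,
    logout: `${root}/logout?redirect_uri=` + encodeURIComponent(appUrl)
  };
}

// In nuxt.config.js, something like:
// endpoints: keycloakEndpoints(process.env.KEYCLOAK_BASE || '', '<realm>', 'http://localhost:3000')
```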

Related

Connect to docker container using hostname on local environment in kubernetes

I have a docker-compose setup that contains:
frontend - a web app running on port 80
backend - a Node server for the API, running on port 80
database - MongoDB
Ideally, I would like to access the frontend via a hostname such as http://frontend:80, and for the browser to be able to access the backend via a hostname such as http://backend:80, which is required by the web app on the client side.
How can I make my containers accessible via those hostnames in my localhost environment (Windows)?
docker-compose.yml
version: "3.8"
services:
frontend:
build: frontend
hostname: framework
ports:
- "80:80"
- "443:443"
- "33440:33440"
backend:
build: backend
hostname: backend
database:
image: 'mongo'
environment:
- MONGO_INITDB_DATABASE=framework-database
volumes:
- ./mongo/mongo-volume:/data/database
- ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
ports:
- '27017-27019:27017-27019'
I was able to figure it out: using docker-compose aliases and networks, I connected every container to the same development network.
There were three main components:
container-mapping Node DNS server - grabs the aliases via docker ps and creates a DNS server that redirects those requests to 127.0.0.1 (localhost)
nginx reverse-proxy container - maps the hosts to the containers via their aliases on the virtual network
projects - each project is a docker-compose.yml that may have an unlimited number of containers running on port 80
docker-compose.yml for clientA
version: "3.8"
services:
frontend:
build: frontend
container_name: clienta-frontend
networks:
default:
aliases:
- clienta.test
backend:
build: backend
container_name: clienta-backend
networks:
default:
aliases:
- api.clienta.test
networks:
default:
external: true # connect to external network (see below for more)
name: 'development' # name of external network
nginx proxy docker-compose.yml
version: '3'
services:
  parent:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80" # map port 80 to localhost
    networks:
      - development
networks:
  development: # create network called development
    name: 'development'
    driver: bridge
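The mounted nginx.conf is not shown in the answer; a minimal sketch of what it might contain, assuming the aliases from the project compose files (clienta.test, api.clienta.test) resolve through Docker's embedded DNS on the shared development network:

```nginx
# Hypothetical nginx.conf sketch -- not the author's actual file.
# nginx sits on the "development" network, so Docker's embedded DNS
# resolves each alias to the container that declared it.
events {}

http {
  server {
    listen 80;
    server_name clienta.test api.clienta.test;

    location / {
      resolver 127.0.0.11;          # Docker's embedded DNS resolver
      proxy_pass http://$host;      # the alias maps to the right container
      proxy_set_header Host $host;
    }
  }
}
```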
DNS Server
import dns from 'native-dns'
import { exec } from 'child_process'

const { createServer, Request } = dns
const authority = { address: '8.8.8.8', port: 53, type: 'udp' }
const hosts = {}
let server = createServer()

function command (cmd) {
  return new Promise((resolve, reject) => {
    exec(cmd, (err, stdout, stderr) => stdout ? resolve(stdout) : reject(stderr ?? err))
  })
}

async function getDockerHostnames () {
  let containersText = await command('docker ps --format "{{.ID}}"')
  let containers = containersText.split('\n')
  containers.pop()
  await Promise.all(containers.map(async containerID => {
    let json = JSON.parse(await command(`docker inspect ${containerID}`))?.[0]
    let aliases = json?.NetworkSettings?.Networks?.development?.Aliases || []
    aliases.map(alias => hosts[alias] = {
      domain: `^${alias}*`,
      records: [
        { type: 'A', address: '127.0.0.1', ttl: 100 }
      ]
    })
  }))
}

await getDockerHostnames()
setInterval(getDockerHostnames, 8000)

function proxy (question, response, cb) {
  var request = Request({
    question: question, // forwarding the question
    server: authority,  // this is the DNS server we are asking
    timeout: 1000
  })
  // when we get answers, append them to the response
  request.on('message', (err, msg) => {
    msg.answer.map(a => response.answer.push(a))
  })
  request.on('end', cb)
  request.send()
}

server.on('close', () => console.log('server closed', server.address()))
server.on('error', (err, buff, req, res) => console.error(err.stack))
server.on('socketError', (err, socket) => console.error(err))

server.on('request', async function handleRequest (request, response) {
  await Promise.all(request.question.map(question => {
    console.log(question.name)
    let entry = Object.values(hosts).find(r => new RegExp(r.domain, 'i').test(question.name))
    if (entry) {
      entry.records.map(record => {
        record.name = question.name
        record.ttl = record.ttl ?? 600
        return response.answer.push(dns[record.type](record))
      })
    } else {
      return new Promise(resolve => proxy(question, response, resolve))
    }
  }))
  response.send()
})

server.serve(53, '127.0.0.1')
Don't forget to update your computer's network settings to use 127.0.0.1 as the DNS server.
Git repository for dns server + nginx proxy in case you want to see the implementation: https://github.com/framework-tools/dockerdnsproxy

Nuxt.js SSR w/ Nest API deployed to AWS in a Docker container

I've tried roughly 5 million variations on the theme here, as well as spent a lot of time poring through the Nuxt docs, and I cannot get Nuxt SSR with a Nest backend working when deployed in a Docker container to AWS. Below is my current setup. Please let me know if I've left anything out.
Here are the errors I'm getting:
https://www.noticeeverythingcreative.com/contact
This route makes a POST request for page meta to https://www.noticeeverythingcreative.com/api/contact/meta in the component's asyncData method. This produces a big old error from Axios. Below is the part I think is relevant, but let me know if you need more.
{
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: 'xxx.xx.x.x', // IP address of the docker container
  port: 443,
  config: {
    url: 'https://www.noticeeverythingcreative.com/api/contact/meta',
    method: 'post',
    headers: {
      Accept: 'application/json, text/plain, */*',
      connection: 'close',
      'x-real-ip': 'xx.xxx.xxx.xxx', // My IP
      'x-forwarded-for': 'xx.xxx.xxx.xxx', // My IP
      'x-forwarded-proto': 'https',
      'x-forwarded-ssl': 'on',
      'x-forwarded-port': '443',
      pragma: 'no-cache',
      'cache-control': 'no-cache',
      'upgrade-insecure-requests': '1',
      'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
      'sec-fetch-user': '?1',
      'sec-fetch-site': 'same-origin',
      'sec-fetch-mode': 'navigate',
      'accept-encoding': 'gzip, deflate',
      'accept-language': 'en-US,en;q=0.9',
      'Content-Type': 'application/json'
    },
    baseURL: 'https://www.noticeeverythingcreative.com'
  }
}
Here's the relevant part of my nuxt.config.js:
mode: 'universal',
srcDir: './src',
rootDir: './',
modules: ['@nuxtjs/axios'],
// NOTE: I get the same errors if I leave this block out
server: {
  host: '0.0.0.0',
  port: 3002
},
When I deploy I use a Dockerfile that copies all the needed files from my project directory into the container, runs yarn install, exposes port 3002, runs yarn build.prod, and ends with CMD ["yarn", "start"] (see below for relevant package.json scripts).
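The Dockerfile itself isn't shown; a sketch of what that description amounts to (the node base image and tag are an assumption; the script names come from the package.json that follows):

```dockerfile
# Hypothetical sketch of the Dockerfile described above -- the base image
# tag is an assumption; the yarn scripts match the package.json below.
FROM node:12-alpine

WORKDIR /app
COPY . .

RUN yarn install
RUN yarn build.prod

EXPOSE 3002
CMD ["yarn", "start"]
```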
"scripts": {
"clean.nuxt": "rimraf .nuxt",
"build.client": "nuxt build",
"build.server": "tsc -p tsconfig.server.json", // Transpile TypeScript from `src/server` into `.nuxt/api`
"build.prod": "run-s clean.nuxt build.client build.server",
"start": "cross-env NODE_ENV=production node .nuxt/api/index.js",
}
The docker image is built locally and pushed to an ECR repo. I then SSH into my server and run docker-compose up -d with this compose file:
version: '3.2'
services:
  my_service:
    image: link/to/my/image:${TAG:-prod}
    container_name: my_container
    hostname: www.noticeeverythingcreative.com
    restart: unless-stopped
    ports:
      # Http Port
      - 3002:3002
    networks:
      - web-network # External (the actual compose file also has the corresponding networks block at the bottom)
    environment:
      - NODE_ENV=production
      - API_URL=https://www.noticeeverythingcreative.com
      - HOST=www.noticeeverythingcreative.com
      - PORT=3002
      - VIRTUAL_PORT=3002
Here's my server-side controller that handles Nuxt rendering:
src/server/app/nuxt.controller.ts
import { Controller, Get, Request, Response } from '@nestjs/common';
import { join, resolve } from 'path';
import * as config from 'config';

const { Builder, Nuxt } = require('nuxt');
const nuxtConfig = require(join(resolve(), 'nuxt.config.js'));

@Controller()
export class NuxtController
{
    nuxt: any;

    constructor()
    {
        this.nuxt = new Nuxt(nuxtConfig);
        const Env = config as any;
        // Build only in dev mode
        if (Env.name === 'development')
        {
            const builder = new Builder(this.nuxt);
            builder.build();
        }
        else
        {
            this.nuxt.ready();
        }
    }

    @Get('*')
    async root(@Request() req: any, @Response() res: any)
    {
        if (this.nuxt)
        {
            return await this.nuxt.render(req, res);
        }
        else
        {
            res.send('Nuxt is disabled.');
        }
    }
}
Here are the client-side contact component's asyncData and head implementations:
async asyncData (ctx: any)
{
    // fetch page meta from API
    try
    {
        const meta = await ctx.$axios(<any>{
            method: 'post',
            url: `${ ctx.env.apiHost }/contact/meta`,
            headers: { 'Content-Type': 'application/json' }
        });
        return { meta: meta.data };
    }
    catch (error)
    {
        // Redirect to error page or 404 depending on server response
        console.log('ERR: ', error);
    }
}

head ()
{
    return this.$data.meta;
}
The issues I'm having only occur in the production environment on the production host. Locally I can run yarn build.prod && cross-env NODE_ENV=development node .nuxt/api/index.js and the app runs and renders without error.
Update
If I allow the Nuxt app to actually run on localhost inside the docker container, I end up with the opposite problem. For example, if I change my nuxt.config.js server and axios blocks to
server: {
  port: 3002 // default: 3000
},
axios: {
  baseURL: 'http://localhost:3002'
}
And change the request to:
const meta = await ctx.$axios(<any>{
  method: 'post',
  // NOTE: relative path here instead of the absolute path above
  url: `/api/contact/meta`,
  headers: { 'Content-Type': 'application/json' }
});
return { meta: meta.data };
A fresh load of https://www.noticeeverythingcreative.com/contact renders fine. This can be confirmed by viewing the page source and seeing that the title has been updated and that there are no console errors. However, if you load the home page (https://www.noticeeverythingcreative.com) and click the contact link in the nav, you'll see POST http://localhost:3002/api/contact/meta net::ERR_CONNECTION_REFUSED.
NOTE: this is the version that is deployed as of the last edit of this question.
I've come up with a solution, but I don't love it, so if anyone has anything better, please post.
I got it working by allowing the Nuxt app to run on localhost inside the docker container, but making the http requests to the actual host (e.g., https://www.noticeeverythingcreative.com/whatever).
So, in nuxt.config.js:
// The server and axios blocks simply serve to set the port as something other than the default 3000
server: {
  port: 3002 // default: 3000
},
axios: {
  baseURL: 'http://localhost:3002'
},
env: {
  apiHost: process.env.NODE_ENV === 'production' ?
    'https://www.noticeeverythingcreative.com/api' :
    'http://localhost:3002/api'
}
In docker-compose.yml I removed anything that would make the host anything but localhost, as well as any env variables that nuxt is counting on (mostly because I can't quite figure out how those work in Nuxt, except that it's not the way I would expect):
version: '3.2'
services:
  my_service:
    image: link/to/my/image:${TAG:-prod}
    container_name: my_container
    # REMOVED
    # hostname: www.noticeeverythingcreative.com
    restart: unless-stopped
    ports:
      # Http Port
      - 3002:3002
    networks:
      - web-network # External (the actual compose file also has the corresponding networks block at the bottom)
    environment:
      # REMOVED
      # - API_URL=https://www.noticeeverythingcreative.com
      # - HOST=www.noticeeverythingcreative.com
      - NODE_ENV=production
      - PORT=3002
      - VIRTUAL_PORT=3002
And when making api requests:
// NOTE: ctx.env.apiHost is https://www.noticeeverythingcreative.com/api
const meta = await ctx.$axios(<any>{
  method: 'post',
  url: `${ ctx.env.apiHost }/contact/meta`,
  headers: { 'Content-Type': 'application/json' }
});
return { meta: meta.data };

web-component-tester cannot bind to port

I have a Docker setup with the following containers:
selenium-hub
selenium-firefox
selenium-chrome
spring boot app
node/java service for wct tests
All these containers are defined via docker-compose.
The node/java service is created as follows (extract from docker-compose):
wct:
  build:
    context: ./app/src/main/webapp
    args:
      ARTIFACTORY: ${DOCKER_REGISTRY}
  image: wct
  container_name: wct
  depends_on:
    - selenium-hub
    - selenium-chrome
    - selenium-firefox
    - webapp
The wct tests are run using:
docker-compose run -d --name wct-run wct npm run test
And wct.conf.js looks like the following:
const seleniumGridAddress = process.env.bamboo_selenium_grid_address || 'http://selenium-hub:4444/wd/hub';
const hostname = process.env.FQDN || 'wct';

module.exports = {
  activeBrowsers: [{
    browserName: "chrome",
    url: seleniumGridAddress
  }, {
    browserName: "firefox",
    url: seleniumGridAddress
  }],
  webserver: {
    hostname: hostname
  },
  plugins: {
    local: false,
    sauce: false
  }
}
The test run fails with this stack trace:
ERROR: Server failed to start: Error: No available ports. Ports tried: [8081,8000,8001,8003,8031,2000,2001,2020,2109,2222,2310,3000,3001,3030,3210,3333,4000,4001,4040,4321,4502,4503,4567,5000,5001,5050,5432,6000,6001,6060,6666,6543,7000,7070,7774,7777,8765,8777,8888,9000,9001,9080,9090,9876,9877,9999,49221,55001]
    at /app/node_modules/polymer-cli/node_modules/polyserve/lib/start_server.js:384:15
    at Generator.next (<anonymous>)
    at fulfilled (/app/node_modules/polymer-cli/node_modules/polyserve/lib/start_server.js:17:58)
I tried the fix from polyserve cannot serve the app, but without success.
I also tried setting hostname to wct, as this is the known hostname for the container inside the Docker network, but it shows the same error.
I really do not know what to do next.
Any help is appreciated.
The problem was that the hostname was incorrect: WCT cannot bind to a hostname that is unknown inside the container.

axios ECONNREFUSED when requesting in a docker-compose service behind nginx reverse proxy

I have two docker-compose setups; the main one is an SPA stack containing:
an nginx proxy for a WordPress site on port 80
WordPress + MySQL
an Express.js server serving a React app on port 6000
This runs behind another docker-compose setup, which is basically an nginx reverse proxy.
The SPA manages to serve the website and connect to the backend API via the reverse proxy just fine. However, when I try to make a separate HTTPS request to the backend API from server.js, I get this message:
{ Error: connect ECONNREFUSED 127.0.0.1:443
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1121:14)
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 443 }
And it's not just axios; a plain wget to the backend URL gives me connection refused as well.
A sample of said request:
axios.put('/wc/v3/orders/934', {
  status: "completed"
}, {
  withCredentials: true,
  auth: {
    username: process.env.REACT_APP_WC_ADMIN_CK_KEY,
    password: process.env.REACT_APP_WC_ADMIN_CS_KEY
  }
}).then(function (response) {
  console.log(`ok`);
}).catch(function (error) {
  console.log(error);
});
Does anyone know what might be the problem here?
If you have multiple docker-compose environments, each brings up its own network by default. You want to share a network between the two, to allow the services in one environment to communicate with the other.
# spa/docker-compose.yml
version: '2'
services:
  spa:
    ...
    networks:
      - app-net
networks:
  app-net:
    driver: bridge

# express/docker-compose.yml
version: '2'
services:
  api:
    ...
    networks:
      - spa_app-net
networks:
  spa_app-net:
    external: true
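With the network shared, server-side code can address the other project's services by their compose service names instead of 127.0.0.1. A sketch; the service name nginx-proxy and the public domain are placeholders, not names from the question:

```javascript
// Choose the backend base URL for server-side requests. Inside the shared
// Docker network the reverse proxy is reachable by its service name;
// 'nginx-proxy' and 'https://my-site.example' are placeholder values.
function backendBaseURL (insideDocker) {
  return insideDocker
    ? 'http://nginx-proxy:80'     // Docker DNS resolves the service name
    : 'https://my-site.example';  // public URL from outside the network
}

// e.g. axios.put(backendBaseURL(true) + '/wc/v3/orders/934', ...)
```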

webpack-dev-server proxy to docker container

I have two Docker containers managed with docker-compose and can't seem to get webpack to properly proxy some requests to the backend API.
docker-compose.yml:
version: '2'
services:
  web:
    build:
      context: ./frontend
    ports:
      - "80:8080"
    volumes:
      - ./frontend:/16AGR/frontend:rw
    links:
      - back:back
  back:
    build:
      context: ./backend
    expose:
      - "8080"
    ports:
      - "8081:8080"
    volumes:
      - ./backend:/16AGR/backend:rw
The service web is a simple React application served by webpack-dev-server.
The service back is a Node application.
I have no problem accessing either app from my host:
$ curl localhost
> index.html
$ curl localhost:8081
> Hello World
I can also ping and curl the back service from the web container:
$ docker ps
CONTAINER ID   IMAGE        COMMAND           CREATED        STATUS          PORTS                    NAMES
73ebfef9b250   16agr_web    "npm run start"   37 hours ago   Up 13 seconds   0.0.0.0:80->8080/tcp     16agr_web_1
a421fc24f8d9   16agr_back   "npm start"       37 hours ago   Up 15 seconds   0.0.0.0:8081->8080/tcp   16agr_back_1
$ docker exec -it 73e bash
root@73ebfef9b250:/16AGR/frontend# curl back:8080
Hello world
However, I have a problem with the proxy.
Webpack is started with
webpack-dev-server --display-reasons --display-error-details --history-api-fallback --progress --colors -d --hot --inline --host=0.0.0.0 --port 8080
and the config file is
frontend/webpack.config.js :
var webpack = require('webpack');
var config = module.exports = {
  ...
  devServer: {
    // redirect api calls to backend server
    proxy: {
      '/api': {
        target: "back:8080",
        secure: false
      }
    }
  }
  ...
}
When I try to request /api/test with a link in my app, for example, I get a generic error; the link and Google did not help much :(
[HPM] Error occurred while trying to proxy request /api/test from localhost to back:8080 (EINVAL) (https://nodejs.org/api/errors.html#errors_common_system_errors)
I suspect something weird is going on because the proxy runs in the container while the request is made against localhost, but I don't really have an idea how to solve this.
I think I managed to tackle the problem.
I just had to change the webpack configuration to the following:
devServer: {
  // redirect api calls to backend server
  proxy: {
    '/api': {
      target: {
        host: "back",
        protocol: 'http:',
        port: 8080
      },
      ignorePath: true,
      changeOrigin: true,
      secure: false
    }
  }
}
