Can I use fluent-logger with otel fluentforwardreceiver? - docker

I am using the fluentforward receiver in the OTEL Collector, configured as shown below:

receivers:
  fluentforward:
    endpoint: localhost:8006
When I send logs to it from the Docker container below, it works fine:
test_agent_log_generate:
  image: httpd
  ports:
    - "803:80"
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:8006
      tag: httpd.access
  command: /bin/bash -c "while sleep 2; do echo \"T1111esting a log message\"; done"
But when I use fluent-logger to do the same, I am not getting any logs. The code is below:
const FluentClient = require("@fluent-org/logger").FluentClient;

const logger = new FluentClient("tag_prefix", {
  socket: {
    host: "localhost",
    port: 8006,
    timeout: 3000, // 3 seconds
  }
});

// send an event record with 'tag.label'
do {
  console.log("working....")
  logger.emit('label', {record: 'this is a log'});
} while (true);
As far as I understand, logger.emit also follows the "forward" protocol, so I am expecting the logs to be received by my OTEL Collector.
What may have gone wrong?
Thanks in advance!
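One thing worth checking, offered as an observation about the snippet above rather than a confirmed diagnosis: the synchronous do/while loop never yields to the Node event loop, so the client's socket may never get the chance to connect or flush its buffer. A minimal sketch that emits on a timer and surfaces send errors instead:

// sketch: emit periodically so the event loop can service the socket
const { FluentClient } = require("@fluent-org/logger");

const logger = new FluentClient("tag_prefix", {
  socket: { host: "localhost", port: 8006, timeout: 3000 },
});

setInterval(() => {
  // emit() returns a promise; a rejection here would surface connection problems
  logger
    .emit("label", { record: "this is a log" })
    .then(() => console.log("sent"))
    .catch((err) => console.error("emit failed:", err));
}, 2000);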

Related

Keycloak LDAP authentication from a dockerized NuxtJS app

I am facing a problem with my authentication with Keycloak. Everything works fine when my Nuxt app is running locally (npm run dev), but when it is inside a Docker container, something goes wrong.
Windows 10
Docker 20.10.11
Docker-compose 1.29.2
nuxt: ^2.15.7
@nuxtjs/auth-next: ^5.0.0-1637745161.ea53f98
@nuxtjs/axios: ^5.13.6
I have a Docker service containing Keycloak and LDAP: keycloak:8180 and myad:10389. My Nuxt app runs on port 3000.
On the front-end side, here is my configuration, which works great when I launch the app locally with npm run dev:
server: {
  port: 3000,
  host: '0.0.0.0'
},
...
auth: {
  strategies: {
    local: false,
    keycloak: {
      scheme: 'oauth2',
      endpoints: {
        authorization: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/auth',
        token: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/token',
        userInfo: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/userinfo',
        logout: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/logout?redirect_uri=' + encodeURIComponent('http://localhost:3000')
      },
      token: {
        property: 'access_token',
        type: 'Bearer',
        name: 'Authorization',
        maxAge: 300
      },
      refreshToken: {
        property: 'refresh_token',
        maxAge: 60 * 60 * 24 * 30
      },
      responseType: 'code',
      grantType: 'authorization_code',
      clientId: '<client_id>',
      scope: ['openid'],
      codeChallengeMethod: 'S256'
    }
  },
  redirect: {
    login: '/',
    logout: '/',
    home: '/home'
  }
},
router: {
  middleware: ['auth']
}
}
And here are my Keycloak and Nuxt docker-compose configurations:
keycloak:
  image: quay.io/keycloak/keycloak:latest
  container_name: keycloak
  hostname: keycloak
  environment:
    - DB_VENDOR=***
    - DB_ADDR=***
    - DB_DATABASE=***
    - DB_USER=***
    - DB_SCHEMA=***
    - DB_PASSWORD=***
    - KEYCLOAK_USER=***
    - KEYCLOAK_PASSWORD=***
    - PROXY_ADDRESS_FORWARDING=true
  ports:
    - "8180:8080"
  networks:
    - ext_sd_bridge

networks:
  ext_sd_bridge:
    external:
      name: sd_bridge

client_ui:
  image: ***
  container_name: client_ui
  hostname: client_ui
  ports:
    - "3000:3000"
  networks:
    - sd_bridge

networks:
  sd_bridge:
    name: sd_bridge
When my Nuxt app is inside its container, the authentication seems to work, but the redirects act strangely: I am always redirected back to my login page ("/") right after the redirect to "/home", as the browser's network tab shows.
Am I missing something or is there something I am doing wrong?
I figured out what my problem was.
So basically, my nuxt.config.js was wrong for use inside a Docker container. I had to change the auth endpoints to:
endpoints: {
  authorization: '/auth/realms/<realm>/protocol/openid-connect/auth',
  token: '/auth/realms/<realm>/protocol/openid-connect/token',
  userInfo: '/auth/realms/<realm>/protocol/openid-connect/userinfo',
  logout: '/auth/realms/<realm>/protocol/openid-connect/logout?redirect_uri=' + encodeURIComponent('http://localhost:3000')
}
And proxy the "/auth" requests to the hostname of my Keycloak Docker container (note that my Keycloak and Nuxt containers are in the same network in my docker-compose files):
proxy: {
  '/auth': 'http://keycloak:8180'
}
At this point, every request was working fine except the "/authenticate" one, because "keycloak:8180/authenticate" ends up in the browser URL and, of course, the browser doesn't know "keycloak".
For this to work, I added this environment variable to my Keycloak docker-compose:

KEYCLOAK_FRONTEND_URL=http://localhost:8180/auth
With this variable, the full authentication/redirection process works like a charm, with Keycloak and Nuxt in their containers :)
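For completeness, a minimal sketch of how the two halves fit together in nuxt.config.js (assuming the proxy block is handled by @nuxtjs/proxy or the axios proxy integration; <realm> stays a placeholder):

// nuxt.config.js (sketch, not the full file)
export default {
  auth: {
    strategies: {
      keycloak: {
        endpoints: {
          // relative paths: the browser only ever talks to the Nuxt origin
          authorization: '/auth/realms/<realm>/protocol/openid-connect/auth',
          token: '/auth/realms/<realm>/protocol/openid-connect/token'
        }
      }
    }
  },
  // the server-side proxy resolves "keycloak" on the shared Docker network
  proxy: {
    '/auth': 'http://keycloak:8180'
  }
}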

Docker container can't reach another container using container name

I have two Docker containers running in the same network, and I want one of them to call the other via Spring WebClient.
I'm sure they are in the same network: docker network inspect <network_ID> proves this.
AFAIK I can check that one container can talk to the other by pinging it: docker exec -ti attachment-loader-prim ping attachment-loader-sec
If I run this, I see responses from attachment-loader-sec like 64 bytes from 172.21.0.5: seq=0 ttl=64 time=0.220 ms, which means they can communicate.
When I send a Postman request to attachment-loader-prim on its published port (localhost:8085), I expect it to call attachment-loader-sec via WebClient after some business logic, but at that step I get a 500 error with this message:
"finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80; nested exception is
io.netty.channel.AbstractChannel$AnnotatedConnectException:
finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80"
Both attachment-loader-prim and attachment-loader-sec can be accessed separately via Postman, and both send a response, no problem.
This is my docker-compose:
version: '3'
services:
  attachment-loader-prim:
    container_name: attachment-loader-prim
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8085
    networks:
      - loader_network
    expose:
      - 8085
    ports:
      - 8005:8005
      - 8085:8085
  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8086
    networks:
      - loader_network
    expose:
      - 8086
    ports:
      - 8006:8005
      - 8086:8086
networks:
  loader_network:
    driver: bridge
And this is a Webclient which makes a call:
class RemoteServiceCaller(private val fetcherWebClientBuilder: WebClient.Builder) {

    suspend fun getAttachmentsFromRemote(id: String, params: List<Param>, username: String): Result? {
        val client = fetcherWebClientBuilder.build()
        val awaitExchange = client.post()
            .uri("/{id}/attachment", id)
            .contentType(MediaType.APPLICATION_JSON)
            .bodyValue(params)
            .header(usernameHeader, username)
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .awaitExchange {
                if (it.statusCode().is2xxSuccessful) {
                    handleSuccessCode(it)
                } else it.createExceptionAndAwait().run {
                    LOG.error(this.responseBodyAsString, this)
                    throw ProcessingException(this)
                }
            }
        return awaitExchange
    }

    private suspend fun handleSuccessCode(response: ClientResponse) {
        // some not important logic
    }
}
P.S. The base URI for the WebClient is defined as a config bean, like http://attachment-loader-sec/list
All my investigation pointed to the usual causes:
calling a container using localhost instead of the container name
containers not being in the same network
Neither seems relevant in my case.
Any ideas would be really appreciated.
The problem was calling the service without its port. The URL is now http://attachment-loader-sec:8086/list, which is correct. (In my case I now get a 404, which means the URL path is not quite right, but that is outside the scope of this question.)
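The fix above also matches the clue in the original error message: attachment-loader-sec/172.21.0.5:80 shows the WebClient defaulted to port 80, because the base URI http://attachment-loader-sec/list has no explicit port. As a general rule, container-to-container traffic on a user-defined bridge network must target the port the application listens on inside the container (8086 here); ports: mappings only affect the host. A quick way to sanity-check this from inside the calling container, assuming Node is available in the image (curl or wget work just as well):

// check.js — hypothetical connectivity probe, run inside attachment-loader-prim
const http = require('http');

http.get('http://attachment-loader-sec:8086/list', (res) => {
  console.log('reached the service, status:', res.statusCode);
}).on('error', (err) => {
  // ECONNREFUSED here means nothing is listening on that host:port
  console.error('connection failed:', err.message);
});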

Connect to docker container using hostname on local environment in kubernetes

I have a docker-compose setup (for a local Kubernetes-style environment) that contains:
frontend - a web app running on port 80
backend - a Node server for the API, running on port 80
database - MongoDB
Ideally, I would like to access the frontend via a hostname such as http://frontend:80, and have the browser access the backend via a hostname such as http://backend:80, which the web app requires on the client side.
How can I make my containers accessible via those hostnames in my localhost environment (Windows)?
docker-compose.yml
version: "3.8"
services:
frontend:
build: frontend
hostname: framework
ports:
- "80:80"
- "443:443"
- "33440:33440"
backend:
build: backend
hostname: backend
database:
image: 'mongo'
environment:
- MONGO_INITDB_DATABASE=framework-database
volumes:
- ./mongo/mongo-volume:/data/database
- ./mongo/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
ports:
- '27017-27019:27017-27019'
I was able to figure it out. Using docker-compose aliases and networks, I connected every container to the same development network.
There are three main components:
container-mapping Node DNS server - grabs the aliases via docker ps/docker inspect and creates a DNS server that resolves those names to 127.0.0.1 (localhost)
nginx reverse proxy container - maps the hosts to the containers via their aliases on the virtual network
projects - each project is a docker-compose.yml that may have an unlimited number of containers running on port 80
docker-compose.yml for clientA
version: "3.8"
services:
frontend:
build: frontend
container_name: clienta-frontend
networks:
default:
aliases:
- clienta.test
backend:
build: backend
container_name: clienta-backend
networks:
default:
aliases:
- api.clienta.test
networks:
default:
external: true # connect to external network (see below for more)
name: 'development' # name of external network
nginx proxy docker-compose.yml
version: '3'
services:
  parent:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80" # map port 80 to localhost
    networks:
      - development
networks:
  development: # create network called development
    name: 'development'
    driver: bridge
DNS Server
import dns from 'native-dns'
import { exec } from 'child_process'

const { createServer, Request } = dns
const authority = { address: '8.8.8.8', port: 53, type: 'udp' }
const hosts = {}

let server = createServer()

function command (cmd) {
  return new Promise((resolve, reject) => {
    exec(cmd, (err, stdout, stderr) => stdout ? resolve(stdout) : reject(stderr ?? err))
  })
}

async function getDockerHostnames () {
  let containersText = await command('docker ps --format "{{.ID}}"')
  let containers = containersText.split('\n')
  containers.pop()
  await Promise.all(containers.map(async containerID => {
    let json = JSON.parse(await command(`docker inspect ${containerID}`))?.[0]
    let aliases = json?.NetworkSettings?.Networks?.development?.Aliases || []
    aliases.map(alias => hosts[alias] = {
      domain: `^${alias}*`,
      records: [
        { type: 'A', address: '127.0.0.1', ttl: 100 }
      ]
    })
  }))
}

await getDockerHostnames()
setInterval(getDockerHostnames, 8000)

function proxy (question, response, cb) {
  var request = Request({
    question: question, // forwarding the question
    server: authority,  // this is the DNS server we are asking
    timeout: 1000
  })
  // when we get answers, append them to the response
  request.on('message', (err, msg) => {
    msg.answer.map(a => response.answer.push(a))
  })
  request.on('end', cb)
  request.send()
}

server.on('close', () => console.log('server closed', server.address()))
server.on('error', (err, buff, req, res) => console.error(err.stack))
server.on('socketError', (err, socket) => console.error(err))

server.on('request', async function handleRequest (request, response) {
  await Promise.all(request.question.map(question => {
    console.log(question.name)
    let entry = Object.values(hosts).find(r => new RegExp(r.domain, 'i').test(question.name))
    if (entry) {
      entry.records.map(record => {
        record.name = question.name
        record.ttl = record.ttl ?? 600
        return response.answer.push(dns[record.type](record))
      })
    } else {
      return new Promise(resolve => proxy(question, response, resolve))
    }
  }))
  response.send()
})

server.serve(53, '127.0.0.1')
Don't forget to update your computer's network settings to use 127.0.0.1 as the DNS server.
Git repository for dns server + nginx proxy in case you want to see the implementation: https://github.com/framework-tools/dockerdnsproxy
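A note on running the DNS server above (assumptions about the runtime, not stated in the original): it uses ES module imports and top-level await, so it needs a reasonably recent Node, an .mjs extension or "type": "module" in package.json, the native-dns package installed, and elevated privileges to bind port 53. Once it is running, a quick check that an alias resolves (clienta.test is assumed to be a live container alias):

// verify-dns.mjs — ask the local DNS server for a container alias
import { Resolver } from 'node:dns/promises'

const resolver = new Resolver()
resolver.setServers(['127.0.0.1'])

// should print [ '127.0.0.1' ] if the alias was picked up from docker inspect
console.log(await resolver.resolve4('clienta.test'))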

kafka: cannot produce messages to kafka inside docker

docker-compose.yml (https://github.com/wurstmeister/kafka-docker)
version: "2.1"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: localhost
KAFKA_ADVERTISED_PORT: 9092
KAFKA_CREATE_TOPICS: "test:3:1"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Errors when trying to produce messages following https://kafka.apache.org/quickstart:
~/kafka_2.11-1.0.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>gh
>[2018-01-19 17:28:15,385] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1566 ms has passed since batch creation plus linger time
Listing topics works:
~/kafka_2.11-1.0.0$ bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test
Why does this happen? Thanks.
UPDATE
How do I set KAFKA_ADVERTISED_HOST_NAME (or the network) so that my Python/Java program or kafka-console-producer.sh, running outside the Docker container, can produce messages to Kafka at localhost:9092?
UPDATE
It seems that the following docker-compose.yml works fine:

version: "2"
services:
  zookeeper:
    image: "wurstmeister/zookeeper:latest"
    network_mode: "host"
    ports:
      - 2181:2181
  kafkaserver:
    image: "wurstmeister/kafka:latest"
    network_mode: "host"
    ports:
      - 9092:9092
    environment:
      KAFKA_CREATE_TOPICS: "test:3:1"
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
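For context, this is general Kafka client behavior rather than anything specific to these images: a client first contacts the bootstrap address, the broker replies with its advertised listener address in the metadata, and all subsequent produce/fetch traffic goes to that advertised address. So the advertised host must be resolvable and reachable from wherever the client runs. With the host-networking compose file above, a producer on the same machine can reach the broker directly; a minimal kafka-node sketch (assumes kafka-node is installed and the test topic exists):

// produce-from-host.js — run outside Docker, on the same machine as the broker
const kafka = require('kafka-node');

// KafkaClient (the newer kafka-node API) bootstraps directly against the broker
const client = new kafka.KafkaClient({ kafkaHost: 'localhost:9092' });
const producer = new kafka.Producer(client);

producer.on('ready', () => {
  producer.send([{ topic: 'test', messages: 'hello from the host' }], (err, result) => {
    if (err) console.error('send failed:', err);
    else console.log(result); // shape: { topic: { partition: baseOffset } }
    process.exit(0);
  });
});

producer.on('error', (err) => console.error('producer error:', err));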
I had the same issue. The suggested syntax in the kafka-docker README does not match the provided docker-compose.yml, which does not work as is. I finally found this post, and a variation of BEA's updated docker-compose.yml file worked for me. Thank you!
Here are the details.
I am running wurstmeister/kafka-docker on a Ubuntu 16.04 virtual image I set up as described at https://bertrandszoghy.wordpress.com/2018/05/03/building-the-hyperledger-fabric-vm-and-docker-images-version-1-1-from-scratch/
My docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: "wurstmeister/zookeeper:latest"
    network_mode: "host"
    ports:
      - "2181:2181"
  kafka:
    image: "wurstmeister/kafka:latest"
    network_mode: "host"
    ports:
      - 9092:9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.17.0.1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "BertTopic:3:1"
On the same VM I installed NodeJs with:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
cd
mkdir nodecode
cd nodecode
sudo npm install -g node-pre-gyp
sudo npm install kafka-node
Then I ran the following program to produce a couple of messages:
var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    KeyedMessage = kafka.KeyedMessage,
    client = new kafka.Client(),
    producer = new Producer(client),
    km = new KeyedMessage('key', 'message'),
    payloads = [
        { topic: 'BertTopic', messages: 'first test message', partition: 0 },
        { topic: 'BertTopic', messages: 'second test message', partition: 0 }
    ];

producer.on('ready', function () {
    producer.send(payloads, function (err, data) {
        console.log(data);
        process.exit(0);
    });
});

producer.on('error', function (err) {
    console.log('ERROR: ' + err.toString());
});
Which returned:
{ BertTopic: { '0': 0 } }
And I ran this second NodeJs program to consume the (last) messages:
var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(
        client,
        [
            { topic: 'BertTopic', partition: 0 }
        ],
        // options belong in a single object (the original nested them in an array)
        {
            autoCommit: false,
            fromOffset: 'latest'
        }
    );

// print each message as it arrives
consumer.on('message', function (message) {
    console.log(message);
});
Which returned:
{ topic: 'BertTopic',
value: 'first test message',
offset: 0,
partition: 0,
highWaterOffset: 2,
key: null }
{ topic: 'BertTopic',
value: 'second test message',
offset: 1,
partition: 0,
highWaterOffset: 2,
key: null }
I also have a third NodeJs program that shows all the historical messages in the topic; it is listed in my blog post https://bertrandszoghy.wordpress.com/2017/06/27/nodejs-querying-messages-in-apache-kafka/
Hope this helps someone out.

webpack-dev-server proxy to docker container

I have two Docker containers managed with docker-compose and can't seem to get webpack to properly proxy some requests to the backend API.
docker-compose.yml:
version: '2'
services:
  web:
    build:
      context: ./frontend
    ports:
      - "80:8080"
    volumes:
      - ./frontend:/16AGR/frontend:rw
    links:
      - back:back
  back:
    build:
      context: ./backend
    expose:
      - "8080"
    ports:
      - "8081:8080"
    volumes:
      - ./backend:/16AGR/backend:rw
The web service is a simple React application served by webpack-dev-server; the back service is a Node application.
I have no problem accessing either app from my host:
$ curl localhost
> index.html
$ curl localhost:8081
> Hello World
I can also ping and curl the back service from the web container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
73ebfef9b250 16agr_web "npm run start" 37 hours ago Up 13 seconds 0.0.0.0:80->8080/tcp 16agr_web_1
a421fc24f8d9 16agr_back "npm start" 37 hours ago Up 15 seconds 0.0.0.0:8081->8080/tcp 16agr_back_1
$ docker exec -it 73e bash
root@73ebfef9b250:/16AGR/frontend# curl back:8080
Hello world
However, I have a problem with the proxy.
webpack-dev-server is started with:

webpack-dev-server --display-reasons --display-error-details --history-api-fallback --progress --colors -d --hot --inline --host=0.0.0.0 --port 8080

and the config file is frontend/webpack.config.js:
var webpack = require('webpack');
var config = module.exports = {
  ...
  devServer: {
    // redirect api calls to backend server
    proxy: {
      '/api': {
        target: "back:8080",
        secure: false
      }
    }
  }
  ...
}
When I try to request /api/test with a link in my app, for example, I get a generic error; the link in it and Google did not help much :(

[HPM] Error occurred while trying to proxy request /api/test from localhost to back:8080 (EINVAL) (https://nodejs.org/api/errors.html#errors_common_system_errors)

I suspect something weird is going on because the proxy is in the container while the request comes from localhost, but I don't really have an idea how to solve this.
I think I managed to tackle the problem. I just had to change the webpack configuration to the following:
devServer: {
  // redirect api calls to backend server
  proxy: {
    '/api': {
      target: {
        host: "back",
        protocol: 'http:',
        port: 8080
      },
      ignorePath: true,
      changeOrigin: true,
      secure: false
    }
  }
}
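A likely explanation for the original failure, offered as an assumption based on the EINVAL error rather than anything confirmed in this thread: the string target "back:8080" has no scheme, so the proxy cannot parse it into a valid URL. If that is the cause, a string target with an explicit scheme should work as well as the object form above (note that ignorePath: true drops the request path entirely, so keep it only if that is intended):

devServer: {
  // redirect api calls to backend server
  proxy: {
    '/api': {
      target: 'http://back:8080', // explicit scheme so the URL parses
      changeOrigin: true,
      secure: false
    }
  }
}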
