fluent/fluentd-kubernetes-daemonset:v1.14.6-debian-kafka2-1.1 not working when Kafka is configured with mTLS - fluentd

fluent/fluentd-kubernetes-daemonset:v1.14.6-debian-kafka2-1.1 works fine with plaintext Kafka, but when I try to send fluentd logs to a Kafka cluster secured with mTLS I get a connection error.
Has anyone managed to send logs to an mTLS-secured Kafka using the above fluentd image?
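No answer was posted, but for anyone hitting the same wall: the kafka2 variant of the image ships fluent-plugin-kafka, whose out_kafka2 output takes client-certificate options. Below is a minimal match stanza that has worked in comparable mTLS setups; the broker address, topic, and certificate paths are assumptions about your own mount points.

<match **>
  @type kafka2
  brokers kafka-broker:9093          # assumed mTLS listener address
  default_topic fluentd-logs         # assumed topic name

  # Client certificate, key, and CA mounted into the pod (paths are assumptions)
  ssl_ca_cert /fluentd/certs/ca.crt
  ssl_client_cert /fluentd/certs/client.crt
  ssl_client_cert_key /fluentd/certs/client.key

  <format>
    @type json
  </format>
  <buffer topic>
    flush_interval 3s
  </buffer>
</match>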

Related

Setup ELK and golang locally with Docker

I just want to know if you have a tutorial for getting a Golang app to send logs to Elasticsearch with Docker.
I want to send my logs over a TCP connection (with Logstash or Filebeat).
I would be very happy with a recommendation. Thanks!
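No tutorial was posted in the thread, but the usual wiring is: the Go app writes JSON lines to a TCP socket (e.g. net.Dial("tcp", "logstash:5000")), Logstash listens with its tcp input, and forwards to Elasticsearch. A minimal pipeline sketch; the port, hostname, and index name are assumptions:

input {
  tcp {
    port  => 5000            # assumed port the Go app dials
    codec => json_lines      # one JSON log event per line
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "golang-logs-%{+YYYY.MM.dd}"
  }
}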

Logstash output to SSL Elasticsearch

I am getting data from a Kafka topic through Logstash and want to connect to an SSL-protected Elasticsearch (all of these run in Docker images). I have tested the connection with a non-SSL Elasticsearch and it worked fine, but with the SSL-enabled Elasticsearch it does not.
Do you have any suggestions? For example:
Do I have to change my Logstash output configuration to connect to https://elasticsearch:9200?
Do I have to install X-Pack in my Logstash?
Thanks in advance!
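No answer was posted, but in comparable setups the fix was exactly the two things asked about: point the output at https and hand Logstash the CA certificate to trust (X-Pack has been bundled with the default Logstash distribution since 6.3, so no separate install should be needed). A sketch; the paths and credentials are assumptions, and option names vary slightly across Logstash versions:

output {
  elasticsearch {
    hosts    => ["https://elasticsearch:9200"]
    ssl      => true
    cacert   => "/usr/share/logstash/config/certs/ca.crt"  # CA that signed the ES cert (assumed path)
    user     => "logstash_writer"                          # assumed credentials
    password => "changeme"
  }
}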

Source client having trouble connecting to serverless Icecast server on Cloud Run

Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio with Icecast on Google's serverless Cloud Run platform. I've put this Docker image in Container Registry and then created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL: through it I can get to the default Icecast and admin pages.
The problem is connecting to the server with a source client (I tried mixxx and butt). I think the problem is with ports, since setting the port to 8000 in mixxx gives a "Socket is busy" error while butt simply doesn't connect. Setting the port to 443 in mixxx gives a "Socket error", while butt reports "connect: server answered with 411!"
I tried to do the same thing with Compute Engine, but installing Icecast directly instead of using a Docker image, and everything works as intended. As I understand it, Cloud Run provides a URL for the container (https://example.app) with the port given at setup (8000 for Icecast), but the source client tries to connect to that URL on its own configured port (http://example.app:SOURCE_CLIENT_PORT). So I'm not sure whether there's a problem with HTTPS or whether I just need to configure the ports differently.
With Cloud Run you can expose only one port externally. By default it's port 8080, but you can override this when you deploy your revision, as sketched below.
This port is wrapped behind a front layer of Google Cloud infrastructure named Google Front End, and exposed under a DNS name (*.run.app) on port 443 (HTTPS).
Thus, you can reach your service only on the exposed port, via that port-443 wrapping. Any other port will fail.
With Compute Engine you don't have this limitation, which is why you had no issues there. Simply open the correct port with firewall rules and enjoy.
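For example, overriding the container port at deploy time looks like this (the service and image names are placeholders):

gcloud run deploy icecast \
  --image gcr.io/PROJECT_ID/icecast \
  --port 8000 \
  --allow-unauthenticated

Even then, clients must reach the service on port 443; --port only tells Cloud Run which container port the Google Front End should route traffic to.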

Why is my Docker container exposed as TCP instead of HTTP on Docker Cloud

I'm creating a Docker service using Docker Cloud. I created the service using the Docker Cloud website, but my container is exposed as a TCP endpoint and not an HTTP endpoint.
Container endpoint: tcp://hadoop-cff9a38e-1.67ae8643.cont.dockerapp.io:32773
According to the Docker Cloud tutorial, it is possible to have an HTTP endpoint: this is shown in the example for the dockercloud/hello-world Docker Cloud service (See Link here...)
Does anyone know why Docker Cloud services are exposed as TCP instead of HTTP, or how I can access my service using a browser?
It's because the assigned port is in the range of 3000.
If you expose your container on port 80 or 8000, you will get HTTP instead of TCP (see the sketch below).
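For instance, a Docker Cloud stackfile fragment publishing the container on port 80, so the endpoint comes up as HTTP rather than TCP (Docker Cloud stackfiles are compose-like YAML; dockercloud/hello-world stands in for your image):

web:
  image: dockercloud/hello-world   # replace with your own image
  ports:
    - "80:80"                      # publish on a port that maps to HTTP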

Connect Docker Containers: Frontend to GraphQL Backend via Docker Compose on the same Host

Suppose I'm on a host machine with docker-compose running 2 containers/services:
backend graphql (ports: 8000:8000)
frontend react (ports: 8081:8081)
In the frontend container, where my react + apollo code lives, I need to set this const:
// frontend container code
export const APOLLO = {
  uri: 'http://0.0.0.0:8000/graphql' // << not working, what to use here?
};
However, the uri value is not able to connect successfully to the backend graphql endpoint. I'm receiving errors such as Error Network error: request to http://0.0.0.0:8000/graphql failed, reason: connect ECONNREFUSED 0.0.0.0:8000
The containers work fine on their own. I am able to navigate to http://0.0.0.0:8000, http://0.0.0.0:8000/graphql, and http://0.0.0.0:8081 to interact with them individually. I am also able to enter each container and reach the other via its service name with ping backend or ping frontend.
However, when I set uri: 'http://backend:8000/graphql' or uri: 'http://backend/graphql' in my code, I get the error Error Network error: only absolute urls are supported.
From docker inspect backend, I get the backend container's IP address: '172.18.0.5'. I tried plugging that into the uri as uri: 'http://172.18.0.5/graphql', but I get Error Network error: Network request failed with status 403 - "Forbidden".
How should I connect the backend Docker container to the frontend within the code, given these scenarios?
Thanks!
Fixed it by running the servers locally instead of in Docker, and found that the backend was rejecting the frontend's requests because CORS headers were not set. Whitelisting the frontend's origin made it work; a sketch of that fix follows. Tested again in Docker containers with the backend IP http://172.18.0.5/graphql, and the connection was perfect.
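A sketch assuming an Express-based GraphQL server with the cors middleware; the question never states the backend framework, so treat this as illustrative:

// backend: whitelist the frontend's origin (assumes Express + the cors package)
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors({ origin: 'http://localhost:8081' })); // the React frontend's origin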
Hope this helps!
Edit: Referring to the container name in the URL hostname, i.e. http://backend/graphql, also works thanks to the Docker network bridge set up by Docker Compose, as in the compose sketch below. This is a better solution than hardcoding the Docker container IP above.
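For reference, a minimal docker-compose.yml matching the setup above (the build contexts are assumptions). Compose attaches both services to one default bridge network, on which the service names backend and frontend resolve as hostnames:

version: "3"
services:
  backend:
    build: ./backend       # assumed build context
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend      # assumed build context
    ports:
      - "8081:8081"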
This is an issue that occurs when node-fetch does not have access to a protocol or hostname:
https://github.com/bitinn/node-fetch/blob/e2603d31c767cd5111df2ff6e2977577840656a4/src/request.js#L125
if (!parsedURL.protocol || !parsedURL.hostname) {
  throw new TypeError('Only absolute URLs are supported');
}
Depending on how your GraphQL backend processes queries, it is a good idea to log the URL of each of your service endpoints and ensure it contains a host AND a protocol, or the fetch will fail.
For me, the error occurred because the host variable for my service endpoints was coming back from the ENV as undefined.
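A small sketch of that suggestion: validate each endpoint before handing it to fetch, so a missing ENV value fails loudly instead of surfacing as node-fetch's "only absolute URLs" TypeError (the variable name GRAPHQL_URI is illustrative):

// Fail fast if an endpoint from the environment is not an absolute URL.
function assertAbsoluteUrl(name, value) {
  let parsed;
  try {
    parsed = new URL(value); // throws on undefined or relative values
  } catch (err) {
    throw new Error(name + ' is not an absolute URL: ' + value);
  }
  console.log(name + ' -> ' + parsed.href);
  return parsed.href;
}

const uri = assertAbsoluteUrl('GRAPHQL_URI', process.env.GRAPHQL_URI);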
