Fluentd copy output plugin to https server - fluentd

I am trying to copy events to an HTTPS server. I have a list of IPs and ports, and I need Fluentd to try sending each event to the first IP it succeeds with, and to send it only to that IP.
What should the type inside <store> be when copying to an HTTPS server?
<match analytics>
  type copy
  <store>
    type https ?
How should I add the logic that sends the data to only one IP from the list?
Thanks!
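One possibility, sketched under the assumption that the bundled `http` output plugin (Fluentd v1.7+) is available: it can POST events to an HTTPS endpoint inside a copy store. The endpoint address below is illustrative, and note that this output has no built-in "first IP that answers" selection, so that logic would have to live in a custom plugin or in whatever generates the config:

```
<match analytics>
  @type copy
  <store>
    @type http
    endpoint https://203.0.113.10:9443/events   # illustrative address
    open_timeout 2
    <format>
      @type json
    </format>
  </store>
</match>
```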

Related

TFTP timeout inside Docker container

I have a Docker container with tftp client on Host#1 with IP 10.10.10.10.
I have a file and a tftp server on Host#2 with IP 11.11.11.11
I want to be able to download file from Host#2 with this tftp client inside Docker container.
The main problem is that tftp uses port 69 only as a control port and sends data on ephemeral ports. Thus, the tftp client is able to ask the server to send the file, but then can't receive the file itself and times out.
So how do I download this file with tftp?
I know two solutions for now.
First, using --net=host; I don't want to do that for security (and other) reasons. Second, publishing the ephemeral port range 49152-65535; that is hard to make work if anything else on the host uses any of those ports.
Also, everything is fine with the firewall!
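A third option, offered as a hedged sketch: Linux ships connection-tracking and NAT helper modules for TFTP that teach the host's NAT to associate the server's ephemeral-port replies with the client's session, which is exactly what breaks under Docker's default bridge networking. The module names are real, but whether your kernel loads and assigns helpers automatically varies by distro; newer kernels may additionally require assigning the helper explicitly via an iptables CT rule.

```shell
# On the Docker host, not inside the container:
sudo modprobe nf_conntrack_tftp   # track the TFTP control/data association
sudo modprobe nf_nat_tftp         # fix up NAT for the data transfer
```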

Source client having trouble connecting to serverless Icecast server on Cloud Run

Is it possible to make a serverless Icecast server?
I'm trying to make an internet radio with Icecast on Google's serverless Cloud Run platform. I've put this docker image in Container Registry and then created a Cloud Run service with the default Icecast port 8000. It all seems to work when visiting Cloud Run's provided URL; using it I can get to the default Icecast and admin pages.
The problem is trying to connect to the server with a source client (I tried mixxx and butt). I think the problem is with ports, since setting the port to 8000 in mixxx gives a Socket is busy error while butt simply doesn't connect. Setting the port to 443 in mixxx gives Socket error, while butt reports connect: server answered with 411!
Tried to do the same thing with Compute Engine but just installing Icecast and not a docker image and everything works as intended. As I understand Cloud Run provides a URL for the container (https://example.app) with given port on setup (for Icecast 8000) but source client tries to connect to that URL with its provided port (http://example.app:SOURCE_CLIENT_PORT). So not sure if there's a problem with HTTPS or just need to configure the ports differently.
With Cloud Run you can expose only 1 port externally. By default it's the 8080 port but you can override this when you deploy your revision.
This port is wrapped and behind a front layer on Google Cloud infrastructure, named Google Front End, and exposed with a DNS (*.run.app) on the port 443 (HTTPS).
Thus, you can reach your service only through that port-443 wrapper; connecting on any other port will fail.
With Compute Engine you don't have this limitation, which is why everything works there. Simply open the correct port with a firewall rule and enjoy.
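For completeness, a sketch of how the exposed container port is overridden at deploy time (the --port flag is part of the gcloud CLI; the service and image names are illustrative):

```shell
gcloud run deploy icecast \
  --image gcr.io/PROJECT_ID/icecast \
  --port 8000   # Cloud Run forwards port-443 traffic to this container port
```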

Docker container writing to host rsyslog server using localhost

I have a docker container called A that runs on Server B. Server B is running rsyslog server.
I have a script running on A that generates logs, and these logs are sent via a python facility that forwards these logs to either a SIEM or a syslog server.
I would like to send this data to port 514 on Server B so that rsyslog server can receive it.
I can do this if I specify the server name in the Python script as serverB.fqdn; however, it doesn't work when I try to use localhost or 127.0.0.1.
I assume this is expected behaviour, because container A presumably treats localhost or 127.0.0.1 as itself, hence failing to send. Is there a way to send logs to the Server B it sits on without going over the network (which I assume happens when it connects to the FQDN), so the network overhead is reduced?
Thanks J
You could use a Unix socket for this.
Here's an article on how to Use Unix sockets with Docker
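To sketch what the application side looks like, assuming the host's syslog socket is bind-mounted into the container (e.g. `docker run -v /dev/log:/dev/log …`): Python's standard `logging.handlers.SysLogHandler` accepts a socket path instead of a `(host, port)` pair, so no TCP/UDP networking is involved. The snippet below simulates `/dev/log` with a throwaway socket so it runs anywhere; in the real container you would pass `address='/dev/log'`.

```python
import logging
import logging.handlers
import os
import socket
import tempfile

# Stand-in for the host's /dev/log so the example is self-contained.
sock_path = os.path.join(tempfile.mkdtemp(), "log")
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(sock_path)

# Point SysLogHandler at a Unix socket path instead of a (host, port) pair.
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=sock_path))

logger.info("hello from the container")
datagram = server.recv(1024)
print(datagram)  # syslog-framed message, e.g. b'<14>hello from the container\x00'
```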

Is it possible to get any server logs in a container's var/log?

On the client server, I configured rsyslog.conf to send log messages to the remote host over UDP:
#docker_host_IP:514
Then restarted rsyslog:
service rsyslog restart
On the container server, I configured rsyslog.conf to receive the remote logs by uncommenting the following lines, which make the server listen on UDP:
$ModLoad imudp
$UDPServerRun 514
And passed the IP of the docker host while running the container.
But I am still not able to get the server's logs inside the container. What is my mistake, and what needs to be done?
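One thing the steps above don't mention, offered as a hedged guess at the missing piece: for UDP traffic from another host to reach rsyslog inside the container, the container's port 514 has to be published when it is started, and the UDP protocol must be stated explicitly (Docker maps only TCP otherwise; the image name is illustrative):

```shell
docker run -d -p 514:514/udp my-rsyslog-image
```

The client-side forwarding line also needs a selector and a single @ (single @ means UDP, @@ means TCP), e.g. `*.* @docker_host_IP:514`, and it must not be commented out.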

remove port number from the url

I have two instances of Tomcat on a single machine, both accepting secure requests. Suppose one has its connector port configured as 8080 and its redirect port as 443, while the other has connector port 8083 and redirect port 444. So if the first Tomcat receives a request such as
http://localhost:8080/abc/index.html
it then redirects to https://localhost/abc/index.html
and if the second Tomcat receives a request such as
http://localhost:8083/abc/index.html
it then redirects to https://localhost:444/abc/index.html
Now my problem is that I want to remove the port number 444 from the URL. Is there any way to remove or hide it? I can't use the same port 443 for both instances.
thanks
No you can't do that. The web browser will only connect on port 443 for HTTPS if you don't specify a port.
Bind an additional static IP address to your machine and configure the second Tomcat to use 443 on that address. Add an entry to your hosts file so you can use a non-numeric name:
192.168.1.99 localhost2
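A sketch of the second instance's server.xml under that setup (the address attribute binds a connector to the extra IP; the keystore details and protocol attribute are illustrative placeholders, and newer Tomcat versions configure TLS via SSLHostConfig instead):

```xml
<!-- Second Tomcat: HTTPS connector bound to the additional address -->
<Connector port="443" address="192.168.1.99"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/keystore.jks" keystorePass="changeit" />

<!-- Plain connector now redirects to 443 instead of 444 -->
<Connector port="8083" protocol="HTTP/1.1" address="192.168.1.99"
           redirectPort="443" />
```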
