I have a docker container called A that runs on Server B. Server B is running an rsyslog server.
I have a script running on A that generates logs, and these logs are sent via a Python facility that forwards them to either a SIEM or a syslog server.
I would like to send this data to port 514 on Server B so that rsyslog server can receive it.
I can do this if I specify the server name in the Python script as serverB.fqdn; however, it doesn't work when I try to use localhost or 127.0.0.1.
I assume this is expected behaviour, because container A presumably treats localhost or 127.0.0.1 as itself, hence the failure to send. Is there a way for me to send logs to the Server B it sits on without having to go over the network (which I assume it does when it connects to the FQDN), so the network overhead can be reduced?
Thanks J
You could use a Unix socket for this.
Here's an article on how to Use Unix sockets with Docker
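A minimal sketch of that approach, assuming the host's rsyslog loads the local-socket input module (imuxsock) and the host's /dev/log socket is bind-mounted into the container; the container name and image below are placeholders:
# On Server B: make sure /etc/rsyslog.conf loads the local socket input, e.g.
#   module(load="imuxsock")
sudo service rsyslog restart
# Start container A with the host's syslog socket mounted inside it, so the
# script's syslog messages go straight to the host's rsyslog via the socket
# instead of over UDP/TCP to port 514:
docker run -d --name A -v /dev/log:/dev/log my-logging-image
Inside the container, pointing Python's logging.handlers.SysLogHandler at address='/dev/log' (rather than at serverB.fqdn:514) then writes to that socket directly.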
I am currently working on a project where I am attempting to use MinIO with a data-moving program developed by my company. This broker software only allows devices using port 80 to successfully complete a job; however, any avid user of MinIO knows that MinIO serves on port 9000 by default. So my question is: is there a way to change the port on which the MinIO server is hosted? I've tried looking through the config.json file for an address variable to assign a port number to, but each of the address variables I attempted to change had no effect on the endpoint port number. For reference, I am hosting MinIO on a Windows 10 virtual machine during the test phase of the project and will be moving it onto a dedicated server (also Windows 10) upon successful completion of testing.
Add --address :80 when you start your minio.
You can refer to this: https://docs.min.io/docs/multi-tenant-minio-deployment-guide.html
When you start the MinIO server, use the following command…
minio server --address :[port you want to use] [path to data directory]
for example…
minio server --address :8000 [path to data directory]
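If you want to confirm the change took effect, one quick check (a sketch; /minio/health/live is MinIO's standard liveness endpoint) is:
curl -I http://localhost:80/minio/health/live
A 200 OK response means the server is answering on the new port.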
I have a client running in a docker container that subscribes to an MQTT broker and then writes the data into a database.
To connect to the MQTT broker, I have to set up port forwarding.
While developing the client on my local machine the following worked fine:
ssh -L <local-port, e.g. 9000>:localhost:<mqtt-port, e.g. 1883> <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create a docker network "testNetwork"
Run an ssh_tunnel container within "testNetwork" and implement port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker via the bridge network (e.g. "ssh_tunnel.testNetwork:<port>").
(I want 2 separate containers for this because the IP address will have to be altered quite often and I don't want to re-build the client container all the time.)
But all of my attempts have failed so far. The forwarding seems to work (I can reach the remote server's shell from inside the ssh container), but I haven't found a way to actually subscribe to the MQTT broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours by now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the Gateway Forwarding Option to allow remote hosts (the database client) to connect to the forwarded ports.
ssh -g -L 0.0.0.0:<hostport>:localhost:<mqtt-port/remote port> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
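A sketch of the full two-container setup under those assumptions (image names and ports are placeholders, with the broker's MQTT port assumed to be 1883):
# shared bridge network for both containers
docker network create testNetwork
# tunnel container: -g (gateway ports) lets other hosts, i.e. the client
# container, connect to the forwarded port; -N means run no remote command
docker run -d --name ssh_tunnel --network testNetwork my-ssh-image \
    ssh -N -g -L 0.0.0.0:9000:localhost:1883 user@remote-ip
# client container: subscribes to the broker via ssh_tunnel:9000
docker run -d --name database_client --network testNetwork my-client-image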
I'm a bit confused. Trying to run both an HTTP server listening on port 8080 and an SSH server listening on port 22 inside a Docker container, I managed to accomplish the latter but, strangely, not the former.
Here is what I want to achieve and how I tried it:
I want to access services running inside a Docker container using the IP address assigned to the container:
ssh user#172.17.0.2
curl http://172.17.0.2:8080
Note: I know this is not how you would configure a real web server but I want the container to mimic an embedded device which runs both services and which I don't have available all the time. (So it's really just a local non-production thing with no security requirements).
I didn't expect integrating the SSH server to be easy, but to my surprise I just installed and started it and had to do nothing else to be able to connect to the machine via ssh (no EXPOSE 22 or --publish).
Now I wanted to access the container via HTTP on port 8080 and fiddled with --publish and EXPOSE but only managed to make the HTTP server available through localhost/127.0.0.1 on the host. So now I can access it via
curl http://127.0.0.1:8080/
but I want to access both services via the same IP address which is NOT localhost (e.g. the address the container got randomly assigned is totally OK for me).
Unfortunately
curl http://172.17.0.2:8080/
waits until it times out every time I tried it.
I tried docker run together with -p 8080, -p 127.0.0.1:8080:8080, -p 172.17.0.2:8080:8080 and many more combinations, with or without EXPOSE 8080 in the Dockerfile, but without success.
Why can I access the container via port 22 without having exposed anything?
And how do I make it accessible via the container's IP address?
Update: looks like I'm experiencing exactly what's described here.
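Two checks that can help narrow this kind of thing down (the container name is a placeholder): find the container's bridge IP, and confirm the HTTP server inside it is bound to 0.0.0.0 rather than 127.0.0.1:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
docker exec mycontainer ss -tlnp    # look for 0.0.0.0:8080, not 127.0.0.1:8080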
On the client server, I configured rsyslog.conf to send remote log messages using UDP:
*.* @docker_host_IP:514
Then restarted rsyslog:
service rsyslog restart
On the container server, I configured rsyslog.conf to receive the remote logs by uncommenting the following lines, which make the server listen on UDP:
$ModLoad imudp
$UDPServerRun 514
And I passed the IP of the Docker host while running the container.
But I'm still not able to get the server's logs inside the container. What is my mistake? And what needs to be done?
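For reference, a minimal sketch of the piece that often gets missed in this kind of setup, assuming the container's UDP port needs to be published so traffic sent to docker_host_IP:514 actually reaches the rsyslog inside it (the image name is a placeholder):
docker run -d --name rsyslog-server -p 514:514/udp my-rsyslog-image
# then, from the client server, send a test message over UDP and check the
# container's logs for it:
logger -d -n docker_host_IP -P 514 "test message"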
I have a running graylog2 docker container on a remote machine with ports 3000 and 12900 exposed (3000 routes to port 9000 within docker), and I can open the Graylog web UI on that port. So that works as expected. But for some reason I can't add logs from outside the container. Running this from the CLI WORKS from INSIDE the container, but DOESN'T WORK from OUTSIDE:
curl -XPOST http://localhost:3000/gelf -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}'
Running this command from outside the docker container I get:
{"type":"ApiError","message":"HTTP 404 Not Found"}
Edit: I found some information that this could possibly be solved by setting GRAYLOG_REST_TRANSPORT_URI to a public IP when running the docker container. Unfortunately, when I start it like that, I run into another problem: I can't start any inputs to receive logs (bind address: 0.0.0.0, port: 3000). It throws:
Request to start input 'project' failed. Check your Graylog logs for more information.
Edit2: I moved my testing environment to a local machine to rule out possible server misconfigurations. I'm getting the same errors and the same problems.
Edit3: I decided to test the graylog1 docker image, and with that one everything actually works as expected right off the bat! So as a backup I could use an old version, but I'd rather avoid that if possible.
You have to start a GELF HTTP input to be able to receive GELF messages via HTTP.
The Graylog REST API does not provide this type of input.
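As a sketch, once a GELF HTTP input is running: the default port for a GELF HTTP input is 12201 (adjust if you mapped it differently in docker), and the same test message can then be posted against that input instead of the REST API port:
curl -XPOST http://localhost:12201/gelf -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}'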