The Docker Splunk logging driver is used in my application. Here is the configuration:
splunk-url: "https://splunk-server:8088"
splunk-token: "token-uuid"
splunk-index: "my_index"
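For context, this is roughly how those options sit in a docker-compose logging section (a minimal sketch; the service name and image are placeholders, and the same options can also be passed to docker run with --log-driver splunk and repeated --log-opt flags):

services:
  my-app:
    image: my-app:latest
    logging:
      driver: splunk
      options:
        splunk-url: "https://splunk-server:8088"
        splunk-token: "token-uuid"
        splunk-index: "my_index"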
My Splunk token has indexer acknowledgement enabled, so the HTTP Event Collector (HEC) requires an X-Splunk-Request-Channel header.
I have verified that events can be sent to HEC with that header via an HTTP client such as Postman, but I cannot find a configuration option in the Docker Splunk driver to set it.
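For reference, the kind of direct HEC request that works outside of Docker looks something like this (the token and channel UUID below are placeholders):

curl -k https://splunk-server:8088/services/collector/event \
  -H "Authorization: Splunk <token-uuid>" \
  -H "X-Splunk-Request-Channel: <channel-uuid>" \
  -d '{"event": "test message"}'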
Since indexer acknowledgement is required by my organisation, is there any workaround?
cheers
I am running Elasticsearch and Kibana inside Docker. They are up, but when I run docker compose logs for Elasticsearch I get this warning:
"WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.29.0.2:9200, remoteAddress=/172.29.0.4:54642}"`
The logs then do not get through and are not displayed in the Elasticsearch web interface, even though the same repo runs correctly on another server. I will attach my setup files if someone can help or at least try them on their side. Thanks a lot.
You can find my code in the following repo
I also have the same error; I tried replacing all http with https, but the result is the same:
Elasticsearch | {"#timestamp":"2023-01-04T14:04:50.865Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.29.0.2:9200, remoteAddress=/172.29.0.6:55870}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[ad8fe576ac58][transport_worker][T#2]","log.logger":"org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport","elasticsearch.cluster.uuid":"QDf3uC44Trqpc3FHqBuXtA","elasticsearch.node.id":"3NryPkn_R1q0n9vi7WozQw","elasticsearch.node.name":"ad8fe576ac58","elasticsearch.cluster.name":"docker-cluster"}
#IDev have you visited the link in your elasticsearch.yml file for the xpack security settings? Some properties are disabled by default but are important in order to establish a connection with Elasticsearch.
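The warning above usually means a client is sending plain http to a port where Elasticsearch has TLS enabled. As a rough sketch only (the values below are assumptions for a single-node setup, not taken from the repo above), these are the relevant elasticsearch.yml settings; either keep HTTP TLS on and point Kibana/Logstash/Filebeat at https://elasticsearch:9200, or disable it for local testing:

# elasticsearch.yml (sketch)
xpack.security.enabled: true
# HTTP-layer TLS; when true, clients must connect with https://
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12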
Protocol: mqtt
Version: 3.1.1
Gateway model: CloudGate Ethernet CG0102
I'm publishing JSON messages from my gateway, which is connected to the public EMQX broker (broker.emqx.io), port 1883, for a test. I tried to consume the messages by connecting to it with MQTTX, providing the following information: Name, Client_ID, Host, Port, Username and Password, and then subscribing to my topic, my_topic.
The problem is that nothing appears in MQTTX, even though the broker details are correct and match those configured in my gateway. Why?
I would also like, in the future, to run my own MQTT broker on my laptop. Are there any simple references on where to start? I already use MQTT with Python to consume messages from a remote broker, but I have never set up a broker of my own to receive messages from my remote gateway.
I'm working on an Ubuntu Bionic VM.
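For the local-broker part, a minimal sketch using Mosquitto on Ubuntu (the config file path is just a common convention, and whether you need the explicit listener/allow_anonymous lines depends on the Mosquitto version):

sudo apt-get install mosquitto mosquitto-clients
# Depending on the Mosquitto version, you may need to allow connections
# explicitly, e.g. in /etc/mosquitto/conf.d/local.conf:
#   listener 1883
#   allow_anonymous true   # fine for a local test, not for production
sudo systemctl restart mosquitto
# quick sanity check from the same machine:
mosquitto_sub -h localhost -p 1883 -t my_topic -v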
Client_ID needs to be unique for every client, so you cannot reuse the same Client_ID between clients.
The MQTT spec says that the broker should kick the existing client off when a new client connects with the same Client_ID. This normally leads to a fight between the two clients as they both try to reconnect, kicking each other off.
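To illustrate with the Mosquitto command-line clients (the same principle applies to MQTTX and the gateway), give every connection its own ID; the IDs below are arbitrary examples:

# the gateway-side publisher uses one client ID ...
mosquitto_pub -h broker.emqx.io -p 1883 -i gateway-cg0102 -t my_topic -m '{"temp": 21.5}'
# ... and the subscriber uses a different one
mosquitto_sub -h broker.emqx.io -p 1883 -i laptop-mqttx -t my_topic -v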
I have a Docker container that sends its logs to Graylog via udp.
Previously I just used it to output raw messages, but now I've come up with a solution that logs in GELF format.
However, Docker just puts the entire payload into the "message" field, as seen in the Graylog web interface:
Or in plain text:
{
  "version": "1.1",
  "host": "1eefd38079fa",
  "short_message": "Content root path: /app",
  "full_message": "Content root path: /app",
  "timestamp": 1633754884.93817,
  "level": 6,
  "_contentRoot": "/app",
  "_LoggerName": "Microsoft.Hosting.Lifetime",
  "_threadid": "1",
  "_date": "09-10-2021 04:48:04,938",
  "_level": "INFO",
  "_callsite": "Microsoft.Extensions.Hosting.Internal.ConsoleLifetime.OnApplicationStarted"
}
The GELF driver is configured in the docker-compose file:
logging:
  driver: "gelf"
  options:
    gelf-address: "udp://sample-ip:port"
How can I make Docker forward these already-formatted logs as-is?
Is there any way to process these logs and append them as custom fields to the Docker log entries?
The perfect solution would be to somehow enable the GELF log driver but disable its pre-processing/formatting, since the logs are already GELF.
P.S. For logging I'm using the NLog library with C# on .NET 5 and its NuGet package https://github.com/farzadpanahi/NLog.GelfLayout
In my case, there was no need to use NLog at all. It was just a logging framework that no one had really dug into.
So a better alternative is to use GELF logger provider for Microsoft.Extensions.Logging: Gelf.Extensions.Logging - https://github.com/mattwcole/gelf-extensions-logging
Don't forget to disable the GELF Docker log driver for the container if it is enabled.
It supports additional fields and parameterization of the formatted string (parameters in curly braces {} become Graylog fields), and it is easily configured via appsettings.json.
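A rough sketch of that appsettings.json section (key names as I recall them from the project's README, so double-check against the linked repo; the host, port and additional field below are placeholders):

{
  "Logging": {
    "GELF": {
      "Host": "graylog.example.com",
      "Port": 12201,
      "LogSource": "my-app",
      "AdditionalFields": {
        "environment": "development"
      }
    }
  }
}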
Some might consider this not to be an answer since I was using NLog, but for me this is a neat way to send customized logs without much trouble. As for NLog, I could not come up with a solution.
I am using the Dataflow template (I've tried both the latest and 2020-11-02-00_RC00 of Cloud_PubSub_to_Splunk) that streams data from a Pub/Sub topic to Splunk. I have followed all the steps from the documentation.
My job arguments were:
JOB_NAME=pubsub-to-splunk-$USER-`date +"%Y%m%d-%H%M%S%z"`
gcloud dataflow jobs run $JOB_NAME \
--subnetwork=https://www.googleapis.com/compute/v1/projects/<PROJECT>/regions/us-central1/subnetworks/<NAME> \
--gcs-location gs://dataflow-templates/2020-11-02-00_RC00/Cloud_PubSub_to_Splunk \
--max-workers 2 \
--parameters=inputSubscription="projects/<PROJECT>/subscriptions/logs-export-subscription",token="<TOKEN>",url="https://<URL>:8088/services/collector/event",outputDeadletterTopic="projects/<PROJECT>/topics/splunk-pubsub-deadletter",batchCount="10",parallelism="8",disableCertificateValidation=true
I can successfully start the Dataflow job, streaming begins, and I can see the unacked message count of my logs-export-subscription going down; however, the job fails when writing to Splunk with the following error:
Error writing to Splunk. StatusCode: 404, content: {"text":"The requested URL was not found on this server.","code":404}, StatusMessage: Not Found
When troubleshooting, I can successfully send a request to the Splunk endpoint from the same subnetwork that the Dataflow workers are running in.
curl -k https://<URL>:8088/services/collector/event -H "Authorization: Splunk <HEC TOKEN>" -d '{"event": {"field1": "hello", "field2": "world"}}'
{"text":"Success","code":0}
So I don't think it is a connection or URL issue, as the error message suggests.
I can reproduce the failure with curl when I remove the -d key and value:
curl -k https://<IP>:8088/services/collector/event -H "Authorization: Splunk <TOKEN>"
{"text":"The requested URL was not found on this server.","code":404}
Any idea what may be causing this issue?
The Splunk HEC URL you supply should only be https://[IP]:8088, NOT the full path https://[IP]:8088/services/collector/event, as the path is appended by the Google library.
Thanks for reporting this. We've updated the docs with an example to clarify that parameter. Specifically, the Splunk HEC URL template parameter is as follows:
<protocol>://<host>:<port>
For example: https://splunk-hec.example.com:8088.
Host is the FQDN (or IP) of either the Splunk instance running HEC (in the case of a single HEC instance) or the HTTP(S) load balancer in front of the HEC tier (in the case of a distributed HEC setup).
You do not specify the full HEC endpoint path. The Splunk Dataflow template currently only supports the HEC JSON event endpoint (i.e. services/collector/event), and it appends that path automatically to outgoing HTTP requests.
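Applied to the job arguments from the question, that simply means dropping the path from the url parameter (placeholders as before):

gcloud dataflow jobs run $JOB_NAME \
  --subnetwork=https://www.googleapis.com/compute/v1/projects/<PROJECT>/regions/us-central1/subnetworks/<NAME> \
  --gcs-location gs://dataflow-templates/2020-11-02-00_RC00/Cloud_PubSub_to_Splunk \
  --max-workers 2 \
  --parameters=inputSubscription="projects/<PROJECT>/subscriptions/logs-export-subscription",token="<TOKEN>",url="https://<URL>:8088",outputDeadletterTopic="projects/<PROJECT>/topics/splunk-pubsub-deadletter",batchCount="10",parallelism="8",disableCertificateValidation=true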
Also, for a deeper dive, be sure to check out these new resources:
Deploying production-ready log exports to Splunk using Dataflow: this tutorial incorporates security best practices, plus guidance on how to capacity-plan the Splunk Dataflow pipeline and how to handle potential delivery failures to avoid data loss.
Dataflow product documentation: for the latest Splunk Dataflow template parameter details.
Is there a way to pick up log messages that are written to a log file when using Docker's syslog log driver?
Whatever I write to stdout gets picked up by rsyslog, but anything logged to a file does not. I don't see any option among the syslog driver options that would let me point it at a log file to be picked up.
Thanks
Docker's logging interface is defined as stdout and stderr, so the best approach is to modify the log settings of your process to send any log data to stdout and stderr.
Some applications can configure logging to go directly to syslog. Java processes using log4j are a good example of this.
If logging to a file is the only option available, then scripts, Logstash, Fluentd, rsyslog, and syslog-ng can all ingest text files and output syslog. This can be done either inside the container with an additional service, or by using a shared, standardised logging area on each Docker host and running the ingestion from there.
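For example, a minimal rsyslog sketch of the file-ingestion approach (the file path, tag, and target host below are hypothetical placeholders):

# load the text-file input module
module(load="imfile")

# tail the application log file
input(type="imfile"
      File="/var/log/myapp/app.log"
      Tag="myapp:"
      Severity="info"
      Facility="local0")

# forward everything on the local0 facility to the central syslog server over UDP
local0.* @syslog.example.com:514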