Graylog: which tag triggers the message collection?

I'm running a graylog collector sidecar (https://github.com/digiapulssi/graylog-sidecar) to send logs to my Graylog server. The collector is running successfully and I'm able to see the logs in my web interface.
I'm collecting logs from several sources and have tagged them accordingly (syslog, apache, kafka, etc.).
Question: is there any way to know which tag triggered the collection of a given message? For every message I see information like
tags
["apache","kafka","syslog"]
I'm really new to graylog...

Related

Logging in Dask

I am using a SLURM cluster and want to be able to add custom logs inside my tasks that appear in the logs on the dashboard when inspecting a particular worker.
Alternatively, I would like to extract the name of the worker so I can use the log_event function and include that name in the log in a way that matches the name shown on the dashboard.
The reason is so that I can see the logs of any long-running workers that seem to be hanging or having issues.
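One way to get both things at once is a rough sketch along these lines, assuming dask.distributed and a reasonably recent version of it (Worker.log_event and Client.get_events are the pieces relied on here); the topic name "task-logs" is an arbitrary choice for the example:

# Minimal sketch (assumes dask.distributed): log from inside a task and
# tag the entry with the worker's name as it appears on the dashboard.
from dask.distributed import Client, get_worker

def my_task(x):
    worker = get_worker()  # only valid when called inside a running task
    # "task-logs" is an arbitrary topic name chosen for this example
    worker.log_event("task-logs", {"worker": worker.name, "msg": f"processing {x}"})
    return x * 2

if __name__ == "__main__":
    client = Client()                        # or Client(cluster) for a SLURM cluster
    client.submit(my_task, 1).result()
    print(client.get_events("task-logs"))    # read the events back on the client

worker.name here should match the worker name shown on the dashboard, but treat the exact attribute as an assumption and check it against your distributed version.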

How would you display a docker output on a web page?

I have a webserver running on one host, and a Docker container running on another one.
I am trying to display the container's output on a webpage, and I was wondering what would be the best way to:
Draw the console on the web page: very likely HTML/CSS and JavaScript; I had a look at xterm.js, which seems to be a good library for drawing and managing consoles on web pages.
Send the container output from its host to the webserver host, so I can display it on the user's session webpage: I thought about using RabbitMQ, storing each line (or group of lines) in a message, and expecting an answer if the console needs interaction (which would mean publishing a reply message from the webserver, and so on).
But that sounds overly complex; maybe something simpler, such as a direct SSH connection between the user page and the Docker host, would work instead of a message queue?
What do you think?
Thanks.
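For a sense of what the lighter-weight option could look like, here is a rough sketch (not the asker's plan, just one possibility) of streaming a container's output from the Docker host over a WebSocket that an xterm.js terminal could subscribe to; it assumes the Python docker and websockets packages, Python 3.9+, and "my-container" is a placeholder name:

# Rough sketch: stream a container's output to browsers over a WebSocket.
# Assumes the `docker` and `websockets` packages; "my-container" is a
# placeholder. An xterm.js terminal on the page would consume this feed.
import asyncio
import docker
import websockets

docker_client = docker.from_env()

async def stream_logs(websocket):   # handler signature for recent websockets versions
    container = docker_client.containers.get("my-container")
    log_stream = container.logs(stream=True, follow=True)
    # the docker SDK stream is blocking, so pull each line in a thread
    while True:
        line = await asyncio.to_thread(next, log_stream, None)
        if line is None:
            break
        await websocket.send(line.decode(errors="replace"))

async def main():
    async with websockets.serve(stream_logs, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())

Interaction (stdin back to the container) would still need a separate channel, so this only covers the output half of the problem.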

Is it possible to execute some code like logging and writing result metrics to GCS at the end of a batch Dataflow job?

I am using apache beam 2.22.0 (java sdk) and want to log metrics and write them to a GCS bucket after a batch pipeline finishes execution.
I have tried using result.waitUntilFinish() followed by the intended code:
DirectRunner: the GCS object is created as expected and the logs appear on the console.
DataflowRunner: the GCS object is created, but the post-pipeline logs don't appear in Stackdriver.
Problem: when a template is created for the same pipeline and staged in GCS, neither the GCS object is created nor do the logs appear when running from the template.
What you are doing is the correct way of getting a signal for when the pipeline is done. There is no direct API in Apache Beam that allows getting that signal within the running pipeline, aside from waiting on the result (waitUntilFinish() in the Java SDK, wait_until_finish() in Python).
For your logging problem, you need to use the Cloud Logging API in your code. This is because the pipeline is submitted to the Dataflow service and runs in GCE VMs, which log to Cloud Logging, whereas the code outside of your pipeline runs locally.
See Perform action after Dataflow pipeline has processed all data for a little more information.
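The question uses the Java SDK, but the idea is the same in any SDK: after the blocking wait returns, write through the Cloud Logging and GCS client libraries instead of relying on the runner's log capture. A rough Python sketch for illustration, where the bucket name, log name, and metric values are all placeholders:

# Rough sketch (Python for brevity; the question uses the Java SDK).
# Assumes google-cloud-logging and google-cloud-storage are installed;
# "my-metrics-bucket" and "pipeline-summary" are placeholder names.
from google.cloud import logging as cloud_logging
from google.cloud import storage

def report_results(metrics: dict):
    # Write an entry through the Cloud Logging API so it is visible even
    # though this code runs outside the Dataflow workers.
    cloud_logging.Client().logger("pipeline-summary").log_struct(metrics)

    # Persist the same metrics as an object in GCS.
    bucket = storage.Client().bucket("my-metrics-bucket")
    bucket.blob("runs/latest-metrics.json").upload_from_string(str(metrics))

# result = pipeline.run()
# result.wait_until_finish()   # blocks until the batch job completes
# report_results({"status": "done", "elements_processed": 123})

As for the template case: with classic templates the main program only runs at template construction time, which is likely why code placed after waitUntilFinish() never executes when the job is launched from the template.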
It is possible to export the logs from your Dataflow job to Google Cloud Storage, BigQuery, or Pub/Sub. To do that, you can use the Cloud Logging Console, the Cloud Logging API, or gcloud logging to export the desired log entries to a specific sink.
In summary, to use the log export:
Create a sink, selecting Google Cloud Storage as the sink service (or one of the other options).
Within the sink, create a query to filter your logs (optional).
Set the export destination.
Afterwards, every time Cloud Logging receives new entries it will add them to the sink; only entries created after the sink exists are exported.
While you did not mention whether you are using custom metrics, I should point out that you need to follow the metrics naming rules here; otherwise, they won't show up in Stackdriver.
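For reference, the same sink can be created programmatically. A hedged sketch with the Python client library, where the sink name, filter, and bucket are placeholders, and the sink's writer identity still needs write access to the bucket:

# Rough sketch: create a Cloud Logging sink that exports matching entries
# to a GCS bucket. Assumes google-cloud-logging; names below are placeholders.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
sink = client.sink(
    "dataflow-metrics-sink",                           # placeholder sink name
    filter_='resource.type="dataflow_step"',           # optional filter query
    destination="storage.googleapis.com/my-metrics-bucket",
)
if not sink.exists():
    sink.create()  # entries matching the filter are exported from now on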

Send Docker Entrypoint logs to APP in realtime

I'm looking for ideas on how to send the Docker logs of each run to my application in real time. What are the ways this can be done?
Let me know if you have done this already or know how it can be achieved. I want to build a feature similar to Netlify or Vercel, where all the build logs are shown in the UI in real time, but for my Node application.
You can achieve this with Vercel and Log Drains.
Log Drains make it easy to collect logs from your deployments and forward them to archival, search, and alerting services by sending them via HTTPS, HTTP, TLS, and TCP once a new log line is created.
At the time of writing, we currently support 3 types of Log Drains:
JSON
NDJSON
Syslog
Along with Log Drains, we are introducing two new open-source integrations with logging services for you to start using them today: LogDNA and Datadog.
Install the integration: https://vercel.com/integrations?category=logging
See the announcement blog post: https://vercel.com/blog/log-drains
Note that Vercel does not allow Docker deployments, but does support Serverless Functions.
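On the receiving side, a log drain is just an HTTP endpoint you control that the platform posts new log lines to. A minimal sketch of an NDJSON receiver, where the framework, route, port, and payload field names are all assumptions rather than Vercel specifics:

# Minimal sketch of an endpoint that accepts NDJSON log-drain payloads.
# Assumes Flask 2+; the /logs route and port are arbitrary choices.
import json
from flask import Flask, request

app = Flask(__name__)

@app.post("/logs")
def receive_logs():
    # NDJSON: one JSON object per line in the request body
    for line in request.get_data(as_text=True).splitlines():
        if line.strip():
            entry = json.loads(line)
            print(entry.get("message", entry))  # forward to your UI / storage here
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)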

How to generate the URL which shows the live logs in Mesos for a job

I am developing a UI in which I need to show the live logs (stdout and stderr) of jobs running on a Mesos slave. I am trying to find a way to generate a URL that points to the Mesos logs for the job. Is there a way to do this? Basically, I need to know the slave id, executor id, master id, etc. to generate the URL. Is there a way to find this information?
The sandbox URL is of the form http://$slave_url:5050/read.json?$work_dir/work/slaves/$slave_id/frameworks/$framework_id/executors/$executor_id/runs/$container_id/stdout, and you can even use the browse.json endpoint to browse around within the sandbox.
Alternatively, you can use the mesos tail $task_id CLI command to access these logs.
For more details, see the following mailing list thread: http://search-hadoop.com/m/RFt15skyLE/Accessing+stdout%252Fstderr+of+a+task+programmattically
How about the reverse approach? You need to present live logs from stderr and stdout; how about storing them outside the Mesos slave, e.g. in Elasticsearch? You will get near-live updates, old logs available afterwards, and nice search options.
Since version 0.27.0, Mesos supports ContainerLogger. You can write your own ContainerLogger implementation that pushes logs to a central log repository (Graylog, Logstash, etc.) and then expose them in your UI.
Mesos offers a REST interface where you can get the information you want. Visit http://<MESOS_MASTER_IP>:5050/help in your browser (using the default port) to check the endpoints you can query (for example, you can get the information you need from http://<MESOS_MASTER_IP>:5050/master/state.json). Check this link to see an example of using it.
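Putting the two answers together, a hedged sketch that pulls the ids for a task out of the master's state.json and prints them so the sandbox read.json URL above can be assembled; the master address and task id are placeholders, and the field names follow the older state.json layout, so they may differ between Mesos versions:

# Rough sketch: look up the ids for a task in the master's state.json.
# Assumes the `requests` package; field names may vary by Mesos version.
import requests

MASTER = "http://MESOS_MASTER_IP:5050"   # placeholder master address
TASK_ID = "my-task-id"                   # placeholder task id

state = requests.get(f"{MASTER}/master/state.json").json()

for framework in state.get("frameworks", []):
    for task in framework.get("tasks", []):
        if task["id"] == TASK_ID:
            # executors often reuse the task id when none is set explicitly
            executor_id = task.get("executor_id") or TASK_ID
            print("slave:", task["slave_id"],
                  "framework:", framework["id"],
                  "executor:", executor_id)
            # These ids can then be substituted into the read.json/browse.json
            # sandbox URL described in the first answer above.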
