I am sending multiple logs to the ELK stack, and they are visible in the Kibana UI, also in graphical format,
but the fields of all the logs are not visible. Only the Windows event log fields are shown.
Can someone tell me what I have to do?
If you are talking about the field list on the table ("Documents", for example, with the sample dashboards), that is normal. It shows only the fields of the logs currently in that table (by default, 500 logs within the picked time range).
You can modify the paging parameters of this panel to show more logs (I tried 100 logs * 1000 pages without a problem).
Edit:
Looking at Kibana's documentation, the option you are looking for seems to exist. Maybe by exporting the schema and adding this option? I will try it later if I have time.
Edit 2:
I tried this and it works fine for me using some_dashboard.json on my server logs. If you don't know how to use JSON dashboards with Kibana, look at this documentation.
If you're using Kibana 4, it sounds like this issue.
I have a question about how to set up an email alert when a certain error is found in the logs.
So basically, I have this sort of error:
org.postgresql.util.PSQLException: ERROR: missing FROM-clause entry for table "something".
Now, when this or a similar error comes in, I would love for Graylog to alert certain people so they can react to it. However, I have only managed to find information on how to set up alerts when there are too many messages coming through, or something like that. If anyone has experience with this sort of search-and-notify alert, it would be much appreciated.
What version of Graylog are you using?
Have you tried the docs here:
https://docs.graylog.org/en/4.0/pages/alerts.html?highlight=alerts
Set your query, stream, and intervals, then select "Filter has results"; it will generate an event each time the query comes up with a match. You can then use a notification to send emails to the relevant users.
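For example, a query along these lines should match the error from your post (the exact field name depends on how your logs are ingested, so treat it as a sketch):

message:"org.postgresql.util.PSQLException" AND message:"missing FROM-clause entry"

Any message matching that query within the configured search interval will then trigger the event and its email notification.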
When you set up an alarm, in the "conditions" section you can see there are two options.
One option is to aggregate the occurrences and then trigger the alarm when they reach a threshold; the other option is to trigger the alarm as soon as there is a single occurrence (which you need to define).
I personally used this second option and it works fine. I've attached a screenshot of what I see in my Graylog, but if you need additional information, this is the page from the Graylog documentation. I think it's very well explained :)
https://docs.graylog.org/en/4.0/pages/alerts.html
Recently, my job logs in the job details view are full of entries such as:
"Worker configuration: [machine-type] in [zone]."
The jobs themselves seem to work fine, but these entries didn't show up before, and I am worried I won't be able to spot meaningful log entries because of them.
Is this something I should be worried about? Do you know how to get rid of them?
Yes, those logs are spammy and are not to be worried about. I have submitted an internal bug to reduce these spammy logs (with this being the first). While it is being fixed, you can familiarize yourself with the Stackdriver Logs Exclusion feature. This allows you to create filters to exclude logs based on a user-defined query.
Here are the steps to exclude specific Dataflow logs:
Navigate to the logs ingestion page
Find the "Dataflow Step" row
Click the right-most button on the same row
Select the "Create Exclusion Filter..." option from the drop-down
Write the query to select which logs you want to exclude
(in your case: resource.type="dataflow_step" "Worker configuration")
Name your filter
Select the percentage of logs to exclude (excluding 100% of the selected logs is the default)
Click the "Create Exclusion" button
You can view your created exclusion filter in the "Exclusions" tab in the logs ingestion page
And you should not see such logs spamming newly scheduled jobs now! We've added logic to prevent excessive logging of this kind of message.
This question already has answers here:
Can I get console logs from my iPhone app testers?
Closed 3 years ago.
I am looking for a way to programmatically pull all of my application logs that are shown in the console.
I DO NOT WANT to just be able to see them, so using Xcode as a viewer will not work for me.
What I want is for my users to be able to send me those logs along with feedback at any time, since the app is in the beta phase and plain user explanations are not good enough for proper debugging on my end.
So, what I DO WANT is an iOS analog of Android's logcat command, which is used roughly like this:
final Process process = Runtime.getRuntime().exec("logcat -d");
final InputStream inputStream = process.getInputStream();
... and then you manipulate the stream into whatever you need; in my case, a String object that I would pass on to my log service.
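For reference, the complete Android-side snippet is roughly this (plain java.io; it needs the BufferedReader/InputStreamReader imports and an IOException handler):

final Process process = Runtime.getRuntime().exec("logcat -d");
final BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
final StringBuilder log = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    log.append(line).append('\n');  // accumulate each logcat line
}
final String logText = log.toString();  // this is what gets sent to the log service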
SO FAR in my investigation I have only been able to find this option, but I would appreciate it if something easier to integrate into a Swift app were available.
Also, the os.log module is used for logging, but I wasn't able to find an option that allows reading the logs back. Or maybe my understanding of the following explanation, found HERE, is not good enough:
Use the Console app or the log command-line tool to view and filter log messages.
EDIT:
THE END USER SHOULD NOT INTERACT WITH THE LOGS in any way other than clicking the submit button while the switch for debug logs is ON.
So @matt, NO - this is not a duplicate of the linked question.
End users should NOT have to download something else as well in order to send me my own app's logs.
That is a classic killer of user experience and should not even have been approved as a solution on the linked post either.
To your last point: that explanation is telling you to connect your phone to a Mac and use the "Console" application (or the log command-line tool) to extract the console logs, e.g. https://support.apple.com/en-ie/guide/console/welcome/mac
The first link you have provided (using ASL) is the only solution I'm aware of to do this natively. Two other options would be:
Use a server/cloud-based analytics/event-tracking system to log messages for each user. If someone reports an issue, you can search the logs to see what happened. (Be careful about storing personal information, though.)
You could write your own class or function that takes in a string, logs it to the console, and writes it to a text file. During development you will see it in the console, and in the wild it will be stored. When you want the details, you can just read the text file and upload it. You need to be careful about the file growing too big, and again, depending on where it's stored, personal data could be an issue. A rough sketch of this approach is below.
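A minimal sketch of option 2 in Swift, assuming a simple append-to-file logger (the class name, file name, and method names are just placeholders; a real implementation would also cap the file size):

import Foundation

final class FileLogger {
    static let shared = FileLogger()

    private let fileURL: URL = {
        let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        return docs.appendingPathComponent("app.log")
    }()

    func log(_ message: String) {
        let line = "\(Date()): \(message)\n"
        print(line, terminator: "")                      // visible in the Xcode console during development
        guard let data = line.data(using: .utf8) else { return }
        if let handle = try? FileHandle(forWritingTo: fileURL) {
            handle.seekToEndOfFile()                     // append to the existing log file
            handle.write(data)
            handle.closeFile()
        } else {
            try? data.write(to: fileURL)                 // first write creates the file
        }
    }

    // Everything logged so far, ready to attach to a feedback submission.
    func collectedLogs() -> String {
        (try? String(contentsOf: fileURL, encoding: .utf8)) ?? ""
    }
}

Calling FileLogger.shared.log(...) wherever you currently print, and attaching collectedLogs() to the feedback payload when the user taps submit, matches the flow described in the question.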
I've always used option 1. There are many free services that do this, and lots of solutions for releasing apps to test users provide it out of the box too; Microsoft's App Center is one, and I believe it's free unless you want CI/CD.
I am using Revulytics SDK to track feature usage and came across the below problem.
I am sending feature usage after properly setting up the SDK configuration, etc., using the EventTrack() method like this:
GenericReturn grTest = telemetryObj.EventTrack("FeatureUsage", textBoxName.Text.ToString(), null, false);
This returns OK, and usually I can see the usage data in the dashboard. However, after multiple tests, the data I am sending does not show up on the dashboard.
Can anyone give me a hint on how to debug this? Thanks for any help!
I hit a similar issue when first working with this SDK.
I was able to address this as soon as I understood the following:
There are event quotas for the incoming events;
Event names are used for making the distinction.
So when I was sending dummy test data, it made it there, but when I sent some demo data for stakeholders, it was not showing up.
I think the same thing is happening here. You're getting the event name from textBoxName.Text... pretty sure that varies every time you run the code.
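In other words, send a fixed, whitelisted event name instead of whatever happens to be in the textbox. A minimal sketch, reusing the telemetryObj and EventTrack() call from your snippet (the event name here is just illustrative):

// A constant name stays the same across runs, unlike free-form UI text such as textBoxName.Text.
const string featureCategory = "FeatureUsage";
const string featureEvent = "ExportReportClicked"; // illustrative; pick one fixed name per feature

GenericReturn result = telemetryObj.EventTrack(featureCategory, featureEvent, null, false);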
Here are the things to keep in mind when testing your code:
the server has a mechanism to discard or consider events;
implicitly, it allows the first xx events, depending on the quota;
if you are sending more than xx events, they will not show up in reports.
So, you must control which events to discard and which to consider (there are a couple of levels you can configure, and based on them you can get the events into various types of reports).
Find the "Tracked Events Whitelist Management" page. You will be able to control these things from there.
This blog helped me (it is not SDK documentation): https://www.revulytics.com/blog/getting-started-with-usage-intelligence-part2-event-tracking
Good luck!
I have a running copy of the Getting Started Guide. It syncs perfectly with a CouchDB server (at couchappy.com). So far, so good.
I need the sync to happen only on a user action (i.e. when a user hits a button). So I added a button to the markup and wired its click event to the same "sync()" function provided in the Getting Started Guide. Lastly, I changed the two "live" options from "true" to "false".
Whether I change values on the client, on the server, or nowhere at all, when I click the button it calls the sync function and I get an error for both replicate.to and replicate.from. I must be missing a basic concept in PouchDB. Can someone help me understand how to get non-"live" replication working?
Thanks in advance.
It depends on what your error is, but most likely you need to enable CORS. Check this tutorial and search for "CORS".
If that doesn't work, then post your error here along with the version of PouchDB that you're using.
Also, live is false by default, so you don't need to mention it at all. :)
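For reference, a one-shot (non-live) sync wired to a button click can be as simple as this (the database names, element id, and remote URL are placeholders for whatever your app uses):

var localDB = new PouchDB('todos');
var remoteDB = new PouchDB('https://couchappy.com/todos'); // placeholder remote URL

document.getElementById('sync-button').addEventListener('click', function () {
  // With live: false (the default), sync runs to completion and then stops.
  localDB.sync(remoteDB)
    .on('complete', function (info) {
      console.log('Sync finished', info);
    })
    .on('error', function (err) {
      // CORS problems on the CouchDB side usually surface here.
      console.error('Sync failed', err);
    });
});

If the error callback fires, the browser console and network tab will usually show whether it's a CORS rejection or something else.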