Geneos data extraction - monitoring

We are planning to use the Geneos reporting service for extracting data from the application DB. The problem is that the Geneos Active Console must remain open for report extraction to take place, i.e. somebody needs to keep it running 24/7. Is there any solution or other option for data extraction using Geneos?

If you are using Geneos, the purpose should be monitoring. It is not clear what you mean by 'planning to use the Geneos reporting service for extracting data'. What is the purpose of extracting the data? Is it monitoring? If so, you have two options for monitoring: the Active Console and the dashboard. You do not have to watch either one 24/7; there are several alerting options available. Please elaborate on your requirements so we can suggest further options.

Have you tried switching on the Database Logging section in the GSE?
Extracting historical data can be useful for post-incident analysis - for example, CPU spikes or memory usage issues.
Online info:
https://resources.itrsgroup.com/ActiveConsole2/user_guide/index.html?highlight=database%20logging#database-logging
But surely, if you are looking at historical data from a database, couldn't you query the database directly?

Related

Getting VLC SAP broadcast dump

I am receiving SAP broadcasts, which I can normally use and play using the standalone vlc application.
I have been asked to provide a dump of the same. I have two questions:
I don't clearly understand what exactly a dump is.
How can I obtain one?
There are multiple types of dumps, so you should first find out what kind of dump is meant. It could be a database dump, which is similar to a backup, but usually it's a memory dump.
A memory dump or crash dump is a copy of the application including its memory at a specific point in time. Usually you want to create a dump exactly at the time an application is crashing or hanging. The dump will then be helpful to find the cause of the problem.
There are many ways to obtain a dump. First, Windows might do it for you when it asks "Send information to Microsoft". Second, you can create one using Task Manager: right-click a process and choose "Create dump file". Third, there are many tools out there, e.g. Process Explorer or ProcDump, which all have pros and cons and serve different purposes.
To suggest a tool for your specific case, we would need more information. Exact wording might matter in this situation.
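If it does turn out to be a memory dump that is wanted, ProcDump can also be driven from a script. A minimal sketch in Python; the process name and output path are placeholders, and the flags are standard ProcDump options:
# Capture a full memory dump of a named process with Sysinternals ProcDump.
# -ma = full dump, -e = dump on an unhandled exception, -w = wait for the
# process to start. Process name and dump path are placeholders.
import subprocess

subprocess.run(
    ["procdump", "-accepteula", "-ma", "-e", "-w",
     "myapp.exe", r"C:\dumps\myapp.dmp"],
    check=True,
)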
Update
In your particular case it looks like SAP means Service Advertising Protocol, which is related to the network. A broadcast is a message which is sent to everybody.
You could capture that with Wireshark, but you would need a lot of network knowledge to get the filters set up. In this case the term "dump" probably refers to something similar to a database dump, because SAP uses tables to store lists of services.

Which is the best way to get real-time data from an Avaya CMS server?

I am sorry if this question goes off topic, but I am forced to ask here as there are very limited resources on this available on the net.
I am looking to implement a system to get real-time data from an Avaya CMS server. I did a lot of R&D on JTAPI, but it has some limitations: it does not give all the events and data stored in the CMS database. I also tried connecting to the CMS database using Java, but with no success, because it only gives historical data with a delay of about 30 minutes.
Is it technically possible to get this using JTAPI, TAPI, or anything else? Or has anyone used a paid tool from Avaya that is affordable and can serve this purpose?
I saw clint but don't intend to use it. Please let me know the ways if anyone has done this.
Your CMS may provide a feature known to me as the realtime socket. It is a service that pushes data about skills/splits, VDNs and vectors over a network socket.
It is virtually the same as what you'll find in hsplit and so on, but in real time.
The pushed data can be configured by your CMS admin.
If you are looking for call data, you may take a look at the *call_rec* table in CMS.
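CMS keeps its data in an Informix database, so the call_rec table mentioned above can be queried directly. A minimal sketch using pyodbc, assuming an ODBC DSN named "cms" and valid credentials (DSN, user and password are placeholders):
# Pull a few rows from the call_rec table over ODBC.
import pyodbc

conn = pyodbc.connect("DSN=cms;UID=report_user;PWD=secret")  # placeholders
cur = conn.cursor()
cur.execute("SELECT FIRST 10 * FROM call_rec")  # FIRST n is Informix's LIMIT
for row in cur.fetchall():
    print(row)
conn.close()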
You can use clintSVR, which is a high-level tool based on CMS CLINT. By using clintSVR, you can use its CGI, OCX and C++ interfaces to get real-time data from CMS.
As others have said, you can get this from realtime reports. You'll need to scrape them.
RT socket is just a set of wrappers around clint for running reports. It takes the realtime report data and sends it to a socket.
You can roll your own realtime reports with clint and feed them to whatever needs to ship the data. A sample realtime report can be run from the command line like this:
/cms/toolsbin/clint -u your_user <<EXECUTE_DONE
do menu 0 "cu:rea:Meas"
do "Run"
do "Exit"
EXECUTE_DONE
Here is an example of running a custom report directly:
/cms/toolsbin/clint -u ini <<EXECUTE_DONE
clear
run gem "r_custom/cr_r_3"
do "Run"
do "Exit"
EXECUTE_DONE
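If you go the RT socket route mentioned above, something has to listen on the receiving end. A minimal sketch of a listener in Python; the port, and the assumption that report rows arrive as newline-delimited text, are mine, so confirm the actual feed configuration with your CMS admin:
# Accept one connection and append every pushed line to a file.
import socket

HOST, PORT = "0.0.0.0", 7003  # hypothetical port configured on the CMS side

with socket.create_server((HOST, PORT)) as srv:
    conn, addr = srv.accept()
    with conn, open("cms_realtime.log", "a") as out:
        for line in conn.makefile():
            out.write(line)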

Best tool to record CPU and memory usage with Grinder?

I am using Grinder to generate reports for the performance tests of my application. But I noticed that it does not generate any report on CPU and memory usage. On further investigation, I found that Grinder does not provide this information. Now, my question is: is there any tool that can be hooked up with Grinder to record the CPU and memory usage details?
As you have discovered, this is not supported directly in The Grinder itself. You will need to use a collection of tools to accomplish this.
I use a combination of Quickstatd, Graphite, and Grinder to Graphite to get all my results in the same place where I can see them. If you need to support Windows, you can probably use collectd (with ssc-serv and the Graphite plugin) instead of Quickstatd, which is based on bash scripts.
You can also pull in server side metrics (like DB lookups per second, etc.) with tools like jmxtrans, statsd, and metrics.
Having all that information in the same place is really powerful, and can give you some good insights.
If you grind a Java server, you can get the data via JMX from OperatingSystemMXBean and MemoryMXBean.
Then add the data to a Grinder user statistic, and it will end up in the -data.log:
grinder.statistics.registerDataLogExpression("Load", "userDouble0")
..
grinder.statistics.forCurrentTest.setDouble("userDouble0", systemLoadAverage)
The -data.log can be fed directly into gnuplot:
gnuplot> plot 'client-0-data.log' using 2:7 title "System Load"
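Pulling those pieces together, here is a minimal sketch of a Grinder (Jython) script that polls a remote server's OperatingSystemMXBean over JMX and records the value in a user statistic. The JMX host/port are assumptions, and the actual work the test does is elided:
from java.lang.management import ManagementFactory, OperatingSystemMXBean
from javax.management.remote import JMXConnectorFactory, JMXServiceURL
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder

# Expose userDouble0 as a "Load" column in the -data.log.
grinder.statistics.registerDataLogExpression("Load", "userDouble0")

# Remote JMX endpoint of the server under test (hypothetical host/port).
url = JMXServiceURL("service:jmx:rmi:///jndi/rmi://appserver:9999/jmxrmi")
mbeans = JMXConnectorFactory.connect(url).getMBeanServerConnection()
osBean = ManagementFactory.newPlatformMXBeanProxy(
    mbeans, "java.lang:type=OperatingSystem", OperatingSystemMXBean)

def hit():
    # ... drive the application under test here ...
    # Record the server's load average alongside this test's timings.
    grinder.statistics.forCurrentTest.setDouble(
        "userDouble0", osBean.getSystemLoadAverage())

hitTest = Test(1, "Hit with load sample")
hitTest.record(hit)  # Grinder 3.7+ instrumentation; older versions use wrap()

class TestRunner:
    def __call__(self):
        hit()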

Process monitoring

Is there an application that is capable of monitoring AND logging information (to a file) about another process (in particular IIS's aspnet_wp.exe), at regular intervals, for example:
- memory usage of the process
- CPU usage
Or maybe there is another way to monitor an IIS process?
Thanks, Pawel
You can check Process Monitor from Microsoft.
Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon, and adds an extensive list of enhancements including rich and non-destructive filtering, comprehensive event properties such as session IDs and user names, reliable process information, full thread stacks with integrated symbol support for each operation, simultaneous logging to a file, and much more.
One choice would be Process Explorer: http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
You might want to look at the ANTS Performance and Memory Profilers from Red Gate. I'm using their memory profiler to track down some memory issues as I write this.
Pretty much any monitoring system or framework that allows custom checks is capable of this.
You write your check and just put an extra line in it to post/print/write something to a file of your choice.
I, for example, use Sensu, for which I write custom checks in Ruby.
So I can easily do something like:
if system('ping -c 1 myhost')
  # the extra line: append a note to a log file of my choice
  File.open('/var/log/my_check.log', 'a') { |f| f.puts 'all is well' }
  ok 'all is well'   # 'ok' is the Sensu check status helper
end
You can do something similar with most other monitoring systems, like Nagios etc.
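If all you need is to sample one process's CPU and memory into a file at intervals (as in the original question), a small script may already be enough. A minimal sketch using the third-party psutil package (my suggestion, not something mentioned above); the process name, file name and interval are placeholders:
import time
import psutil

def find_process(name):
    # Return the first process whose executable name matches.
    for p in psutil.process_iter(["name"]):
        if p.info["name"] == name:
            return p
    return None

proc = find_process("aspnet_wp.exe")
with open("aspnet_wp_usage.csv", "a") as log:
    while proc and proc.is_running():
        cpu = proc.cpu_percent(interval=1.0)              # % CPU over one second
        rss_mb = proc.memory_info().rss // (1024 * 1024)  # resident memory, MB
        log.write("%s,%.1f,%d\n" % (time.strftime("%H:%M:%S"), cpu, rss_mb))
        log.flush()
        time.sleep(59)  # roughly one sample per minute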

What are the requirements for an application health monitoring system?

What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (MS Exchange, Apache, etc.) sufficient, or do individual user applications, web sites, and databases also need to be monitored?
If the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual CPU/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, web services, file servers, etc.).
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionalities, so you know where optimizations are most beneficial.
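As an illustration of the list above, here is a minimal sketch of how an application could expose that information itself as a JSON health endpoint, using only the Python standard library; the path, module names and checks are illustrative, not any particular product's API:
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database():
    # Replace with a real connectivity check against your database.
    return "ok"

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        status = {
            "running": True,
            "unhandled_exceptions": 0,   # incremented by a global error handler
            "modules": {"scheduler": "ok", "importer": "ok"},
            "external": {"database": check_database()},
            "pending_background_tasks": 0,
        }
        body = json.dumps(status).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), HealthHandler).serve_forever()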
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and, in my opinion, they are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be best
visualizations - it's good to know what is going on and to draw some knowledge from the data
Because we didn't find a suitable solution, we started to write our own. We ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
This is such an open-ended question, but I would start with physical measurements:
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
Once one lays out the physical monitoring of the systems, one can address the checks specific to a system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous, and can be very system-specific. They also can usually be derived reactively when responding to physical measurements. A hard drive fills up; maybe the web server logs got filled up because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is the way many a site sets up a monitoring system.
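Checks 1 and 4 from the physical list above are simple enough to script directly. A minimal sketch; the host names, mount point and threshold are placeholders, and the ping flag shown is for Unix (Windows uses -n):
import shutil
import subprocess

HOSTS = ["web1.example.com", "db1.example.com"]  # placeholders

for host in HOSTS:
    up = subprocess.call(["ping", "-c", "1", host],
                         stdout=subprocess.DEVNULL) == 0
    print("%s ping: %s" % (host, "ok" if up else "FAILED"))

usage = shutil.disk_usage("/")  # check each relevant mount point / drive
free_pct = 100.0 * usage.free / usage.total
print("disk free: %.1f%%%s" % (free_pct, "" if free_pct > 10 else " -- LOW"))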
Minimum: make sure it is running :)
However, some other information would be very useful: for example, the CPU load, RAM usage and (on multi-user systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computers) it would be nice to be able to see the window title of the app - maybe check every 2-3 minutes whether it has changed and save it. A list of files open by the application could also be very useful, but it is not a must.
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. What defines "healthy" is subjective: is it that the computers are up, that the needed resources exist, that data is flowing through the system, that the data is producing correct results, and so on?
In my project we monitor most of this and then some. It really comes down to what is the highest level you can use to verify that everything is working. In our case we need to go all the way down to the data output. If you only need to know whether the machines are up, that saves you from trying to explain to an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are not digging too deeply into the data itself. I particularly liked Nagios when I was looking around, but we needed more than it could easily show, so I wrote our own monitoring system. Basically we also watch for "peculiarities" in the system: memory/CPU spikes, etc.
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
infrastructure monitoring covers servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring covers user machines and the specific programs that they use to do their jobs, and/or servers plus the data-moving/backend applications that they run to keep the data flowing
Sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure".
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at the major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've used JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and then creates application health and performance charts from that raw data.
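A synthetic transaction does not have to involve a specific product; at its simplest it is a script that drives one user-visible operation, times it, and emits a metric for whatever monitoring tool you feed it into. A generic sketch (the URL and the output format are placeholders):
import time
import urllib.request

start = time.time()
try:
    with urllib.request.urlopen("https://app.example.com/login", timeout=10) as r:
        healthy = r.status == 200 and len(r.read()) > 0
except Exception:
    healthy = False
elapsed_ms = (time.time() - start) * 1000

# One line per run, easy for a script adapter or log scraper to ingest.
print("synthetic.login status=%d duration_ms=%.0f" % (1 if healthy else 0, elapsed_ms))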
