Accounting and statistics in IIB (IBM Integration Bus) - messagebroker

What is the command in IIB to generate accounting and statistics data for message flows and their respective applications and integration servers? And how do I generate the output file for this data, and where is that file stored?
I have searched a lot and found the command 'mqsichangeflowstats' in the IBM Knowledge Center. I was able to execute this command successfully, but I don't know where the file is generated, and I am not sure whether this is the correct approach.

'mqsichangeflowstats' is right. The data is published to the topic $SYS/Broker/integrationNodeName/StatisticsAccounting/recordType/integrationServerName/messageFlowLabel, and you can subscribe to that topic.
FYI
https://www.ibm.com/support/knowledgecenter/en/SSMKHH_10.0.0/com.ibm.etools.mft.doc/bc28540_.htm
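To consume these publications it helps to see how the topic string is assembled. A minimal sketch in Python; the node name MYNODE and server name default are placeholders for illustration, and recordType is 'SnapShot' or 'Archive':

```python
def stats_topic(node, record_type, server, flow):
    """Assemble the statistics/accounting topic string described above.
    record_type is 'SnapShot' or 'Archive'."""
    return ("$SYS/Broker/%s/StatisticsAccounting/%s/%s/%s"
            % (node, record_type, server, flow))

# Subscription string for snapshot stats of every flow in server 'default'
# ('#' is the MQ multi-level topic wildcard):
topic = stats_topic("MYNODE", "SnapShot", "default", "#")
print(topic)
```

Subscribing to that string with an MQ client then delivers the XML statistics records as messages, so there is no output file to look for unless you write one yourself.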

Publish rostopic from external program

This question is somewhat a follow up question to my previous post: ROS: Publish topic without 3 second latching
Again, as a premise, I'm very inexperienced with ROS, so this question might be missing some data or be a little confusing.
Let's say I have a node that publishes to a topic, and an external C program that outputs data at x Hz. How can I publish to a rostopic from that external program every time it produces data?
That is my base question; I'm not sure how understandable that is, so I'll be very specific about what I want to do.
I have a C program that dumps ADS-B flight data. For every data dump I convert the data into a mavros/ADSBVehicle rosmsg. I was previously trying to publish that message via the CLI, which I found out was not the correct way of doing it. Now I have created a ROS node that would do the publishing of that message. My question is: how can I, from that external program, publish to a rostopic through that node?
Does that make sense?
Thank you.
I have used roslibpy - https://roslibpy.readthedocs.io/en/latest/ - for sending data from my Raspberry Pi to the computer where I am running ROS. You have to use rosbridge in addition to this.
Install and run rosbridge - http://wiki.ros.org/rosbridge_suite
Use the sample code in roslibpy to run it.
I know you are using C, but there are similar libraries in other languages that work much the same way; for example, in Rust there is rosrust - https://github.com/adnanademovic/rosrust/blob/master/examples/examples/publisher_latch_demo.rs
I hope this points you in the right direction.
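Since roslibpy is itself a Python library, a minimal publisher along these lines may make the flow clearer. Everything here is a sketch: the topic name /adsb/vehicle, the field subset, and how read_next_record bridges to the C program's output are all assumptions:

```python
def make_adsb_msg(icao, callsign, lat, lon, alt):
    """Build a dict matching a subset of the mavros_msgs/ADSBVehicle
    fields (field names are from the mavros message definition)."""
    return {
        'ICAO_address': icao,
        'callsign': callsign,
        'latitude': lat,
        'longitude': lon,
        'altitude': alt,
    }

def publish_loop(read_next_record):
    """Connect through rosbridge and publish one message per record.
    read_next_record is your bridge to the C program (e.g. a pipe or
    socket it writes to); it should return None when the dump ends."""
    import roslibpy  # third-party: pip install roslibpy
    client = roslibpy.Ros(host='localhost', port=9090)  # rosbridge default port
    client.run()
    topic = roslibpy.Topic(client, '/adsb/vehicle', 'mavros_msgs/ADSBVehicle')
    record = read_next_record()
    while record is not None:
        topic.publish(roslibpy.Message(make_adsb_msg(*record)))
        record = read_next_record()
    topic.unadvertise()
    client.terminate()
```

With this shape the C program does not need to be a ROS node at all; it only needs some way to hand its records to the Python process.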

How to download influxdb2.0 data in csv format?

I am not familiar with the InfluxDB command line, especially in InfluxDB 2.0, so I chose to use the InfluxDB frontend on port 8086. But I found that when downloading a .csv through the frontend, too much data makes the browser collapse, which ultimately makes the download fail.
I have read the InfluxDB 2.0 documentation and found no answer. Must I use the command line, and if so, which commands? Thanks a lot in advance.
I have the same issue using Flux in a browser session.
If you need a lot of data, use the Influx API and catch the result in a file. See the 'example query request' using curl on the referenced page. I find this to be very quick and haven't had it overload when returning large data sets.
(If the amount of data is enormous you can also ask Influx to gzip it before download, but of course this may load up the machine it's running on.)
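As a rough illustration of the API route, here is a sketch in Python of streaming the query result straight to a file instead of through the browser; the host, org, token and Flux query are placeholders:

```python
def csv_query(org, token, flux, host="http://localhost:8086"):
    """Build the pieces of an InfluxDB 2.x /api/v2/query request that
    asks for CSV output."""
    url = f"{host}/api/v2/query?org={org}"
    headers = {
        "Authorization": f"Token {token}",
        "Accept": "application/csv",
        "Content-Type": "application/vnd.flux",
    }
    return url, headers, flux

def download_csv(org, token, flux, out_path):
    import requests  # third-party: pip install requests
    url, headers, body = csv_query(org, token, flux)
    # stream=True writes the response out in chunks instead of buffering
    # the whole result in memory -- which is what crashes the browser UI
    with requests.post(url, headers=headers, data=body, stream=True) as r:
        r.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=65536):
                f.write(chunk)
```

Adding "Accept-Encoding: gzip" to the headers asks the server to compress the transfer, as mentioned above.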

Encrypt and compress a FirebirdSQL 2.5 backup on-the-fly from Delphi7 securely

We need to protect customer data, and we are using FirebirdSQL 2.5(.8) with Delphi 7.
It is also essential to make regular backups to a "secondary" PC, or to pen drives, in case the "master" fails.
For that we used this method: calling gbak.exe and 7z.exe with stdin/stdout.
We realized that was a bad idea, because it is very easy to see the parameters (passwords) passed on the command line while the process runs, even with a simple Task Manager.
Is there a more secure way to do it?
(Using standard InterBase components OR UIB.)
Upgrade to Firebird 3, which added a database encryption capability. If you don't want to, or cannot, you could run the GBAK tool from your application with the STDOUT option, but instead of piping it to 7-Zip for compression, read that output in your application and encrypt it on the fly with an encryption library.
You may find many examples of how to run an application and read its standard output (here is something related to start with), so the rest is about finding a way to do on-the-fly stream encryption - or just capturing STDOUT in one stream and encrypting it in another.
Firebird developers on the SQL.ru forum say that it is actually possible to use the Services API to get the backup stream remotely.
That does not mean that IBX or UIB or any other library readily supports it, though. Maybe it does, maybe not.
They suggested reading the Release Notes for Firebird 2.5.2, or Part 4 of the doc\README.services_extension.txt file in a Firebird 2.5.2+ installation.
Below is a small excerpt from the latter:
The simplest way to use this feature is fbsvcmgr. To backup database
run approximately the following:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_backup -dbname some.fdb -bkp_file stdout >some.fbk
and to restore it:
fbsvcmgr remotehost:service_mgr -user sysdba -password XXX action_restore -dbname some.fdb -bkp_file stdin <some.fbk
Please notice - you can't use "verbose" switch when performing backup
because data channel from server to client is used to deliver blocks
of fbk files. You will get appropriate error message if you try to do
it. When restoring database verbose mode may be used without
limitations.
If you want to perform backup/restore from your own program, you
should use services API for it. Backup is very simple - just pass
"stdout" as backup file name to server and use isc_info_svc_to_eof in
isc_service_query() call. Data, returned by repeating calls to
isc_service_query() (certainly with isc_info_svc_to_eof tag) is a
stream, representing image of backup file.
Restore is a bit more tricky. Client sends new spb parameter
isc_info_svc_stdin to server in
isc_service_query(). If service needs some data in stdin, it returns
isc_info_svc_stdin in query results, followed by 4-bytes value -
number of bytes server is ready to accept from client. (0 value means
no more data is needed right now.) The main trick is that client
should NOT send more data than requested by server - this causes an
error "Size of data is more than requested". The data is sent in next
isc_service_query() call in the send_items block, using
isc_info_svc_line tag in traditional form: isc_info_svc_line, 2 bytes
length, data. When the server needs next portion, it once more returns
non-zero isc_info_svc_stdin value from isc_service_query().
A sample of how services API should be used for remote backup and
restore can be found in source code of fbsvcmgr.
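The restore-side handshake described in that excerpt boils down to one loop; this Python sketch simulates it with a toy stand-in for the services API, just to show the "never send more than requested" rule:

```python
class FakeServer:
    """Toy stand-in for the services API side of the conversation."""
    def __init__(self, want=7):
        self.want = want       # size announced via isc_info_svc_stdin
        self.data = b""
    def bytes_wanted(self):
        return self.want
    def receive(self, chunk):
        # the real server errors with "Size of data is more than requested"
        assert len(chunk) <= self.want
        self.data += chunk

def send_restore_stream(backup, server):
    """Feed backup bytes to the server, never exceeding the size it
    announced in its last isc_info_svc_stdin reply."""
    pos = 0
    while pos < len(backup):
        wanted = server.bytes_wanted()   # the 4-byte value from the reply
        if wanted == 0:                  # server busy right now; poll again
            continue
        chunk = backup[pos:pos + wanted]  # sent as an isc_info_svc_line block
        server.receive(chunk)
        pos += len(chunk)
```

The real client would make the same loop around isc_service_query() calls; fbsvcmgr's source shows the genuine article.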

Which is the best way to get real time data from avaya cms server?

I am sorry if this question goes off topic, but I am forced to ask here as there are very limited resources on this over the net.
I am looking to implement a system to get real-time data from an Avaya CMS server. I did a lot of R&D on JTAPI, but it has some limitations: it does not deliver all the events and data that are stored in the CMS database. I also tried connecting to the CMS database using Java, but with no success, because that only gives historical data with a delay of 30 minutes.
Is it possible to get this data using JTAPI, TAPI, or anything similar? Or has anyone used a paid tool from Avaya that is affordable and can serve this purpose?
I saw clint but don't intend to use it. Please let me know the options if anyone has done this.
Your CMS may provide a feature known to me as the realtime socket. It is a service that pushes data about skills/splits, VDNs and vectors over a network socket.
It is virtually the same as what you'll find in hsplit and similar tables, but in real time.
The pushed data can be configured by your CMS admin.
If you are looking for call data, you may take a look at the *call_rec* table in CMS.
You can use clintSVR, which is a high-level tool based on CMS CLINT. Using clintSVR, you can use CGI, OCX and C++ interfaces to get real-time data from CMS.
As others have said, you can get this from realtime reports; you'll need to scrape them.
RT socket is just a set of wrappers around clint for running reports. It takes the realtime report data and sends it to a socket.
You can roll your own realtime reports with clint and feed that to whatever needs to ship the data. A sample realtime report can be run from the command line like:
/cms/toolsbin/clint -u your_user <<EXECUTE_DONE
do menu 0 "cu:rea:Meas"
do "Run"
do "Exit"
EXECUTE_DONE
Here is an example of running a custom report directly:
/cms/toolsbin/clint -u ini <<EXECUTE_DONE
clear
run gem "r_custom/cr_r_3"
do "Run"
do "Exit"
EXECUTE_DONE
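On the receiving side, whether the data comes from the RT socket or from scraped report output, you end up with a TCP listener splitting records into fields. A Python sketch; the '|' delimiter and the field names are assumptions, since the record layout depends entirely on how the session is configured:

```python
import socket

def parse_record(line, field_names):
    """Split one pushed record into named fields.  The delimiter and the
    field list here are assumptions -- both depend on your CMS config."""
    return dict(zip(field_names, line.rstrip("\n").split("|")))

def listen(port, field_names, handle):
    """Accept the pushing connection and hand each record to handle()."""
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    conn, _addr = srv.accept()
    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:            # one record per line (assumed)
            raw, buf = buf.split(b"\n", 1)
            handle(parse_record(raw.decode(), field_names))
    conn.close()
```

From there, handle() can write to a database, a message queue, or whatever needs the real-time feed.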

Printing from one Client to another Client via the Server

I don't know if it sounds crazy, but here's the scenario:
I need to print a document over the internet. My PC (ClientX) initiates the process, using the web browser to access ServerY on the internet, and the printer is connected to ClientZ (which may be yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server, etc.
3. I have the specific details of ClientZ: IP, port, etc.
4. It'll be a completely server-side application (with no client-side piece on ClientZ), built with ASP.NET and C#.
So, is it possible? If yes, please give me some clue. Thanks in advance.
This is kind of too big a question for SO, but basically what you need to do is:
1. upload files to the server -- trivial
2. do some stuff to figure out if they are allowed to print the document -- trivial to hard depending on scope
3. add items to a queue for printing and associate them with a user/session -- easy
4. render and print the document -- trivial to hard depending on scope
5. notify the user that the document has been printed
6. handle errors
The big unknowns here are scope: if this is a school project, you probably don't have to worry about billing or queue priority in step two. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty in step 4 depends directly on which formats you are going to support, as many formats require document-specific libraries or applications. There are also security considerations here if this is a commercial product, since it isn't safe to try to render all types of files.
Notifications can be easy or hard depending on how you want to do them. You can post back to the HTML page, but depending on how long a job takes to complete, it might be nice to have an email option as well.
You also need to think about errors. What happens when paper or toner runs out, or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
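The queue-plus-daemon shape can be sketched in a language-agnostic way (Python here, though the answer assumes ASP.NET/C# with WCF and MEF); render_and_print and notify stand in for the per-format plugin components:

```python
import queue
import threading
from dataclasses import dataclass

@dataclass
class PrintJob:
    user: str
    document: str              # path of the uploaded file on ServerY
    status: str = "queued"

jobs: "queue.Queue[PrintJob]" = queue.Queue()

def daemon_loop(render_and_print, notify):
    """Skeleton of the print daemon: pull jobs, print each via a
    format-specific plugin, then notify the owner (web page or email)."""
    while True:
        job = jobs.get()
        if job is None:        # sentinel: shut the daemon down
            break
        try:
            render_and_print(job.document)
            job.status = "printed"
        except Exception as exc:   # paper out, unsupported format, ...
            job.status = f"error: {exc}"
        notify(job)
        jobs.task_done()
```

The web tier only enqueues PrintJob items tagged with the user/session (step 3 above); the daemon owns everything printer-facing, which keeps failures in step 4 and 6 out of the request path.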
Sorry this is so general, but you are asking a pretty general and difficult-to-answer question.
