Serilog Memory Sink

Is there a Serilog sink that just writes to a buffer in memory? What I am thinking about is a sink that will store X lines and then I could access those X lines and show them on a web page via an api controller. This would be more for viewing recent errors that occurred in the application.
I looked on the GitHub sink page (https://github.com/serilog/serilog/wiki/Provided-Sinks) but did not see one and just wondered if there was something I was missing.

Serilog doesn't have a built-in Sink that writes to memory, but you could easily write one just for that. Take a look, for example, at the DelegatingSink that is used in Serilog's unit tests, which is 80% of what you would need... You'd just have to store the events in an in-memory data structure.
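For illustration, here is a minimal sketch of such a sink; the class name InMemorySink and its capacity parameter are made up for this example. It keeps the most recent events in a bounded queue that an API controller could later read and render:

using System.Collections.Concurrent;
using System.Collections.Generic;
using Serilog.Core;
using Serilog.Events;

public class InMemorySink : ILogEventSink
{
    private readonly int _capacity;
    private readonly ConcurrentQueue<LogEvent> _events = new ConcurrentQueue<LogEvent>();

    public InMemorySink(int capacity = 100)
    {
        _capacity = capacity;
    }

    public void Emit(LogEvent logEvent)
    {
        _events.Enqueue(logEvent);

        // Drop the oldest events once the buffer exceeds its capacity.
        while (_events.Count > _capacity && _events.TryDequeue(out _))
        {
        }
    }

    // Snapshot of the buffered events, e.g. for an API controller to return.
    public IReadOnlyList<LogEvent> RecentEvents => _events.ToArray();
}

You would register it with WriteTo.Sink(new InMemorySink(100)) when building the LoggerConfiguration, and keep a reference to the instance (for example via your DI container) so the controller can read RecentEvents.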
Another option would be to use the mssqlserver sink, write the events to a simple table, and display in your web app.
A third option (which would be my recommendation) would be to install Seq, which is free for development and single-user deployment, and write the logs to Seq through their sink. That will save you from having to write the web app, and will give you search and filtering out of the box.

Related

ONLINE VSAM FILE

I have an idea and I don't know whether it is doable in COBOL or not. I want to use an online VSAM file in an online program. My VSAM file has many records, and I want my online program to detect when a new record is added to the file and do some processing. Is this doable? Please give me some hints.
What you're describing is basically a trigger based on an event. You described COBOL as the language, but in order to achieve what you want you also need to choose a runtime environment: something like CICS, IMS, Db2, WebSphere (Java), MQ, etc.
VSAM itself does not provide a triggering mechanism. An approach that would start to achieve what you want would be to have programs put the records to be written on an MQ queue; a process reading that queue could then write the record to the VSAM file and take any additional action. MQ cuts across all the runtimes listed above and is probably the most reliable option.
Another option is to look at using Db2, where you could create a trigger or user-defined function that might achieve what you're looking for. Here is a reference article that describes many methods.
Here is a list of some of the articles in the link mentioned above:
Utilizing Triggers within DB2 by Aleksey Shevchenko
Using Stored Procedures as Communication Mechanism to a Mainframe by Robert Catterall
Workload Manager Implementation and Exploitation
Stored Procedures, UDFs and Triggers-Common Logic or Common Problem?
If you are simply looking to process records written to VSAM from any source, there are really no inherent capabilities to achieve that in Access Method Services, where VSAM datasets are defined.
You need to consider your runtime environment, capabilities and goals as you continue your design.
If this is a high-volume application you could consider IBM's "Change Data Capture" product. On every update to a chosen VSAM file it will dump a before-and-after image of the record into a message queue, which can then be processed by the language and platform of your choice.
Also worth considering: if by "online" you mean a CICS application, then the VSAM file will be exclusively owned by a single CICS region, and all updates will be processed by programs running in that region. You may be able to tweak the application to kick off some post-processing (as simple as adding "EXEC CICS START yourtransaction ..." to the existing program(s)).
Check out CICS Events. You can set an event for when the VSAM file is written to and action it with a COBOL program. There are several event adapters, you will probably be interested in the one that writes to a TS queue.

Getting vlc SAP Broadcast dump

I am receiving SAP broadcasts, which I can normally use and play using the standalone vlc application.
I have been asked to provide a dump of it. I have two questions:
1. I don't clearly understand what exactly a dump is.
2. How can I obtain one?
There are multiple types of dumps, so you should first find out what kind of dump is meant. It could be a database dump, which is similar to a backup, but usually it's a memory dump.
A memory dump or crash dump is a copy of the application including its memory at a specific point in time. Usually you want to create a dump exactly at the time an application is crashing or hanging. The dump will then be helpful to find the cause of the problem.
There are many ways to obtain a dump. First, Windows might do that for you, when it asks "Send information to Microsoft". Second, you can create it using Task Manager. Right click a process and choose "Create dump file". Third, there are many tools out there, e.g. Process Explorer or ProcDump, which all have pros and cons and serve different purposes.
To suggest a tool for your specific case, we would need more information. Exact wording might matter in this situation.
Update
In your particular case it looks like SAP means Service Advertising Protocol, which is related to the network. A broadcast is a message which is sent to everybody.
You could capture that one with Wireshark, but you would need a lot of network knowledge to get the filters set up. In this case the term "dump" probably refers to something similar to a database dump, because SAP uses tables to store lists of services.

Best tool to record CPU and memory usage with Grinder?

I am using Grinder to generate reports for the performance tests of my application. But I noticed that it does not generate any report on CPU and memory usage. On further investigation, I found that Grinder does not provide this information. Now, my question is: is there any tool that can be hooked up with Grinder to record the CPU and memory usage details?
As you have discovered, this is not supported directly in The Grinder itself. You will need to use a collection of tools to accomplish this.
I use a combination of Quickstatd, Graphite, and Grinder to Graphite to get all my results in the same place where I can see them. If you need to support Windows, you can probably use collectd (with ssc-serv and the Graphite plugin) instead of Quickstatd, which is based on bash scripts.
You can also pull in server side metrics (like DB lookups per second, etc.) with tools like jmxtrans, statsd, and metrics.
Having all that information in the same place is really powerful, and can give you some good insights.
If you grind a Java server, you can get data via JMX from OperatingSystemMXBean and MemoryMXBean.
Then add the data to a Grinder user Statistic and the data will end up in the -data.log
grinder.statistics.registerDataLogExpression("Load", "userDouble0")
..
# e.g. read the load from the platform OperatingSystemMXBean (local JVM; a remote server would need a JMX connector)
from java.lang.management import ManagementFactory
systemLoadAverage = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage()
grinder.statistics.forCurrentTest.setDouble("userDouble0", systemLoadAverage)
The -data.log can be fed directly into Gnuplot:
gnuplot> plot 'client-0-data.log' using 2:7 title "System Load"

Printing from one Client to another Client via the Server

I don't know if it sounds crazy, but here's the scenario -
I need to print a document over the internet. My PC, ClientX, initiates the process using the web browser to access ServerY on the internet, and the printer is connected to ClientZ (maybe yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server, etc.
3. I have the specific details of ClientZ: IP, port, etc.
4. It'll be a completely server-side application (with no client-side piece on ClientZ) in ASP.NET & C#.
- So, is it possible? If yes, please give some clues. Thanks in advance.
This is kind of too big a question for SO, but basically what you need to do is:
upload files to the server -- trivial
do some stuff to figure out if they are allowed to print the document -- trivial to hard depending on scope
add items to a queue for printing and associate them with a user/session -- easy
render and print the document -- trivial to hard depending on scope
notify the user that the document has been printed
handle errors
The big unknown here is scope. If this is for a school project you probably don't have to worry about billing or queue priority in step two. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty in step 4 depends directly on what formats you are going to support, as many formats require document-specific libraries or applications. There are also security considerations here if this is a commercial product, since it isn't safe to try to render all types of files.
Notifications can be easy or hard depending on how you want to do them. You can post back to the HTML page, but depending on how long it's going to take for a job to complete, it might be nice to have an email option as well.
You also need to think about errors. What is going to happen when paper or toner runs out or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
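As a rough illustration of that plugin idea (the interface and type names below are hypothetical, not from any existing product), each plugin assembly could export an implementation of a shared contract, and the daemon could compose them from a plugins folder:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// Hypothetical contract that each printing plugin implements.
public interface IDocumentPrinter
{
    bool CanPrint(string fileExtension);
    void Print(string filePath, string printerName);
}

// One assembly per document type (or cluster of types); each exports the contract.
[Export(typeof(IDocumentPrinter))]
public class PdfPrinter : IDocumentPrinter
{
    public bool CanPrint(string fileExtension)
    {
        return fileExtension.Equals(".pdf", System.StringComparison.OrdinalIgnoreCase);
    }

    public void Print(string filePath, string printerName)
    {
        // Rendering and printing logic for PDF documents would go here.
    }
}

public class PrintDaemon
{
    [ImportMany]
    public IEnumerable<IDocumentPrinter> Printers { get; set; }

    public void LoadPlugins(string pluginDirectory)
    {
        // MEF discovers every exported IDocumentPrinter in the directory.
        var catalog = new DirectoryCatalog(pluginDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}

The web-facing piece would then only enqueue jobs, and the daemon would pick whichever printer's CanPrint matches the uploaded file's type.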
Sorry this is so general, but you are asking a pretty general and difficult-to-answer question.

How does a spider in a search engine work?

How does a crawler or spider in a search engine work?
Specifically, you need at least some of the following components (a rough interface sketch follows this list):
Configuration: Needed to tell the crawler how, when and where to connect to documents; and how to connect to the underlying database/indexing system.
Connector: This will create the connections to a web page or a disk share or anything, really.
Memory: The pages already visited must be known to the crawler. This is usually stored in the index, but it depends on the implementation and the needs. The content is also hashed for de-duplication and update-validation purposes.
Parser/Converter: Needed to be able to understand the content of a document and extract meta-data. Will convert the extracted data to a format usable by the underlying database system.
Indexer: Will push the data and meta-data to a database/indexing system.
Scheduler: Will plan runs of the crawler. Might need to handle a large number of running crawlers at the same time and take into consideration what is currently being done.
Connection algorithm: When the parser finds links to other documents, the crawler needs to analyse when, how, and where the next connections must be made. Also, some indexing algorithms take the page connection graph into consideration, so it might be necessary to store and sort information related to that.
Policy Management: Some sites require crawlers to respect certain policies (robots.txt, for example).
Security/User Management: The crawler might need to be able to login in some system to access data.
Content compilation/execution: The crawler might need to execute certain things to be able to access what's inside, like applets/plugins.
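To make those moving parts a bit more concrete, here is a rough sketch of how a few of them could be expressed as interfaces; all names are illustrative and not taken from any particular crawler framework:

using System;
using System.Collections.Generic;

// Illustrative component contracts; real crawlers vary widely in how they slice these up.
public interface IConnector
{
    // Fetches the raw content of a document (a web page, a file on a share, ...).
    string Fetch(Uri location);
}

public interface IParser
{
    // Extracts outgoing links and meta-data from fetched content.
    IEnumerable<Uri> ExtractLinks(string content, Uri baseUri);
    IDictionary<string, string> ExtractMetadata(string content);
}

public interface IIndexer
{
    // Pushes content and meta-data to the underlying database/indexing system.
    void Index(Uri location, string content, IDictionary<string, string> metadata);
}

public interface IScheduler
{
    // Decides when and where the next connection is made (politeness, robots.txt, priorities).
    void Enqueue(Uri discovered);
    Uri Next();
}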
Crawlers need to be efficient at working together from different starting points, and at managing speed, memory usage, and a high number of threads/processes. I/O is key.
The World Wide Web is basically a connected, directed graph of web documents, images, multimedia files, etc. Each node of the graph is a component of a web page; for example, a web page consists of images, text, video, etc., all of which are linked. A crawler traverses this graph using breadth-first search, following the links in web pages.
A crawler initially starts with one (or more) seed points.
It scans the webpage and explores the links in that page.
This process continues until the whole graph is explored (a predefined constraint can be used to limit the search depth).
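A minimal sketch of that breadth-first traversal (using a deliberately naive regex for link extraction, which a real crawler would replace with a proper HTML parser, plus a visited set for de-duplication) might look like this:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;

public static class SimpleCrawler
{
    public static void Crawl(Uri seed, int maxDepth)
    {
        var client = new HttpClient();
        var visited = new HashSet<Uri>();                  // pages already seen (de-duplication)
        var frontier = new Queue<(Uri Page, int Depth)>(); // BFS frontier

        visited.Add(seed);
        frontier.Enqueue((seed, 0));

        while (frontier.Count > 0)
        {
            var (page, depth) = frontier.Dequeue();

            string html;
            try
            {
                html = client.GetStringAsync(page).Result;
            }
            catch (Exception)
            {
                continue; // unreachable or non-text page; skip it
            }

            // Index/process the page content here.

            if (depth >= maxDepth)
                continue; // predefined constraint that limits the search depth

            // Very naive link extraction, for illustration only.
            foreach (Match m in Regex.Matches(html, "href=\"(http[^\"]+)\""))
            {
                if (Uri.TryCreate(m.Groups[1].Value, UriKind.Absolute, out var link) && visited.Add(link))
                {
                    frontier.Enqueue((link, depth + 1));
                }
            }
        }
    }
}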
From How Stuff Works
How does any spider start its travels over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
