How to configure Serilog for WCF

I am upgrading our old application to Serilog... One piece of existing functionality is ... when the log level is ERROR, it logs to a local file and sends a WCF request to the remote server, and the remote server updates a database...
Basically it logs to multiple sources (local file, remote database via a WCF request) if the level is ERROR.
I understand I can use the rolling file sink to log to a local file.
However, I do not know how to configure a WCF service call for Serilog... is there a 'WCF sink' that can help me achieve this?

As of this writing there's no generic sink that makes "WCF" calls... You'd have to build your own sink, implementing the calls you need.
You can see a list of documented sinks on the "Provided Sinks" page of Serilog's wiki, and you can also find available sinks on NuGet.org.
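A custom sink only needs to implement Serilog.Core.ILogEventSink. Below is a minimal sketch, assuming a hypothetical IErrorLogService WCF contract and endpoint address (both placeholders for whatever your remote server actually exposes); the rolling-file and Sink(...) registration calls are standard Serilog configuration:

```csharp
using System;
using System.ServiceModel;
using Serilog;
using Serilog.Core;
using Serilog.Events;

// Hypothetical WCF contract for the remote logging service; replace it with
// the contract your remote server actually exposes.
[ServiceContract]
public interface IErrorLogService
{
    [OperationContract]
    void RecordError(string message, DateTime timestampUtc);
}

// Minimal custom sink that forwards each event to the WCF service.
public class WcfSink : ILogEventSink
{
    private readonly ChannelFactory<IErrorLogService> _factory;

    public WcfSink(string endpointAddress)
    {
        _factory = new ChannelFactory<IErrorLogService>(
            new BasicHttpBinding(), new EndpointAddress(endpointAddress));
    }

    public void Emit(LogEvent logEvent)
    {
        var channel = _factory.CreateChannel();
        try
        {
            channel.RecordError(logEvent.RenderMessage(), logEvent.Timestamp.UtcDateTime);
        }
        finally
        {
            ((IClientChannel)channel).Close();
        }
    }
}

// Wire the sink up so only ERROR (and above) events reach the WCF service,
// while everything still goes to the rolling local file.
public static class LoggerSetup
{
    public static void Configure()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.File("logs/app-.log", rollingInterval: RollingInterval.Day)
            .WriteTo.Sink(new WcfSink("http://remote-server/ErrorLogService.svc"),
                          restrictedToMinimumLevel: LogEventLevel.Error)
            .CreateLogger();
    }
}
```

The restrictedToMinimumLevel argument is what reproduces the "only on ERROR" behaviour from the old setup; you may also want to buffer or batch the WCF calls so a slow remote server cannot block logging.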

Related

How to upload rdf-file to GraphDB over API

I have an RDF file stored on my server. The file, or at least the file content, should be uploaded to a remote GraphDB over its API.
In the documentation there are two ways to do this. The first one is uploading it to the server files and then loading it into GraphDB. The problem here is that I am not the owner of the server GraphDB is running on, so I can't upload it to the server files. Or is there maybe another API for that?
The other way is providing a public API on my server and then triggering GraphDB to download the file from my server. But my API must be protected with credentials or a JWT, and I don't know how to set the credentials in the API call.
Isn't there a way to upload a simple graph to a repository?
There is a browser-based user interface in GraphDB that allows you to import from local files. If this is allowed on the server you are connecting to, and you only need to do this once then I think this would be the quickest route to go.
If you want to upload a local file to GraphDB using dotNetRDF, then I would advise you to use the SPARQL 1.1 graph store protocol API via the VDS.RDF.Storage.SparqlHttpProtocolConnector as described here. The base URL you need to use will depend on the configuration of the server and possibly also on the version of GraphDB that it is running, but for the latest version (9.4) the pattern is: <RDF4J_URL>/repositories/<repo_id>/rdf-graphs/service
The connector supports HTTP Basic Authentication (which is one of the options offered by GraphDB), so if you have a user name and password you could use the SetCredentials method on the connector to specify those credentials and, if necessary, force the use of HTTP Basic Authentication by setting the global options property VDS.RDF.Options.ForceHttpBasicAuth to true.
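Putting those pieces together, here is a minimal sketch using dotNetRDF; the repository URL, graph URI, file name and credentials are placeholders, and the exact API surface may vary slightly between dotNetRDF versions:

```csharp
using System;
using VDS.RDF;
using VDS.RDF.Parsing;
using VDS.RDF.Storage;

// Sketch: upload a local RDF file to a GraphDB repository via the
// SPARQL 1.1 Graph Store Protocol endpoint.
class GraphDbUpload
{
    static void Main()
    {
        var graph = new Graph();
        FileLoader.Load(graph, "data.rdf");                      // parse the local file
        graph.BaseUri = new Uri("http://example.org/my-graph");  // target named graph

        var connector = new SparqlHttpProtocolConnector(
            "http://your-graphdb-host:7200/repositories/my-repo/rdf-graphs/service");

        connector.SetCredentials("username", "password");
        Options.ForceHttpBasicAuth = true;  // only if pre-emptive Basic auth is required

        connector.SaveGraph(graph);  // PUTs the graph into the repository
    }
}
```

SaveGraph uses the graph's BaseUri as the target named graph on the server, which is why it is set explicitly before the upload.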

Getting data from a Kaa server without using a Log Appender

I am using the Kaa client to pump data to the Kaa server.
I want to fetch this data in order to show it in a client application. With the use of Log Appenders, I am able to do so.
However, is it possible to do the same without adding any external DB? I read in the Kaa documentation that, by default, Kaa stores data in MySQL (MariaDB / PostgreSQL).
However, when I tried to access MySQL (which is part of the Kaa Sandbox), I was unable to do so.
Can anyone tell how can we do this?
Yes, Kaa should be configured to append the gathered telemetry to some Log Appender (one can also create a Custom Log Appender with specific functionality if required) or even a set of Log Appenders - depending on the use case.
The easiest way is to configure one of the existing Log Appenders to log the data to e.g. Cassandra and then retrieve the data from there.
Should you need real time triggering of some actions depending on the data received from client(s), you would probably need to try developing a Custom Log Appender for that.
We need to have an external Log Appender in order to log the data. The internal database takes care of logging the schema, event class families, client/server profile info, and notification info.

Spring Cloud DataFlow Rabbit Source: how to intercept and enrich messages in a Source

I have been successfully evaluating Spring Cloud DataFlow with a typically simple flow: source | processor | sink.
For deployment there will be multiple sources feeding into this pipeline which I can do using data flow labels. All well and good.
Each source is a different rabbitmq instance and because the processor needs to know where the message came from (because it has to call back to the source system to get further information), the strategy I'd thought of was to enrich each message with header details about the source system which is then transparently passed along to the processor.
Now, I'm well-versed in Spring, Spring Boot and Spring Integration but I cannot find out how to enrich each message in a dataflow source component.
The source component is bound to an org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration. The source uses the default Source.OUTPUT channel. How do I get hold of each message in the source to enrich it?
My processor component uses some Spring Integration DSL to do some of what it needs to do but then this processor component has both an INPUT and OUTPUT channel by definition. Not so with the RabbitSourceConfiguration source.
So, can this be done?
I think you need a custom MessageListener on the MessageListenerContainer in RabbitSourceConfiguration.
In the RabbitSourceConfiguration you can set a custom ChannelAwareMessageListener (you can possibly extend MessagingMessageListenerAdapter as well) on the MessageListenerContainer that does what you intend to do.
In the end what worked was subclassing org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration to:
override public SimpleMessageListenerContainer container() so that I could insert a custom health check before calling super.container(). My business logic enriches each message (see next bullet) with details of where the message came from (note, this is the publisher of the messages and not the rabbit queue). There's a health check needed to validate the additional enriching information (which is provided via configuration) to ensure that messages aren't consumed from the queue and enriched with the wrong information. If the validation fails, the source component fails to start and hence no messages are consumed.
override the creation of the AmqpInboundChannelAdapter bean so that a custom subclass of DefaultAmqpHeaderMapper can be set on the adapter. This custom mapper adds the enriched headers in public Map<String, Object> toHeadersFromRequest(final MessageProperties source).
For me, the inability of stream/dataflow to intercept and modify messages in Source components is problematic. I really shouldn't have to fiddle around with the underlying message broker API in the ways I did. I should be able to do it with e.g. Spring Integration. Indeed I can register a global message interceptor but I cannot change the headers of the message.
This ability would go on my WIBNI (wouldn't it be nice if) list. Perhaps I'll raise a request for this.

Unstable log4net emails from a Windows Service on Windows Server 2003 R2 using Exchange Server

I am running a Windows Service on a Windows Server 2003 R2. We are using Exchange Server to send out the emails.
I am using log4net.dll 1.2.11.0.
I have a situation where log4net sometimes sends emails and sometimes doesn't, even though no changes have been made to the setup.
All the other log4net logging works fine. And, as said, sometimes the application sends out emails and sometimes it doesn't, having made no change to the application.
All my methods are in try-catch clauses, but I don't get any errors.
When I run the Windows Service on my local machine, the log4net email always works; as said, on the remote server sometimes the log4net email works and sometimes it doesn't, having made no changes to the setup.
I am using log4net.Internal.Debug and have a System.Diagnostics.TextWriterTraceListener file where stuff is written to.
Scanning through this file I haven't noticed anything in particular, but I don't know what specifically to look for.
Any ideas about what the problem is or what to do?
If the SMTP appender cannot send an email it would log an exception with this text: Error occurred while sending e-mail notification. This would be visible if internal debugging is enabled. Maybe you have filters or something like that configured that prevents log4net from sending an email.
You could download the source code of log4net and add extra logging to the SMTP appender to find out whether the "SendMail" method of the appender gets called at all. If it does and no email arrives and no error is shown, then we need to assume that the Exchange server somehow swallows the emails. If the appender is not triggered, then you need to review your filter / buffer / threshold configuration.
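For reference, here is a sketch of the SmtpAppender settings that most often explain "sometimes it sends, sometimes it doesn't" behaviour, shown programmatically rather than in XML; the host and addresses are placeholders, and the same properties map to attributes in the XML configuration:

```csharp
using log4net;
using log4net.Appender;
using log4net.Core;
using log4net.Layout;
using log4net.Repository.Hierarchy;

public static class SmtpAppenderSetup
{
    public static void Configure()
    {
        var smtp = new SmtpAppender
        {
            SmtpHost = "exchange.example.local",   // placeholder Exchange host
            From = "service@example.local",
            To = "ops@example.local",
            Subject = "Windows Service error",
            // With Lossy = true, buffered events are discarded whenever the
            // Evaluator does not fire, which can look like a missing email.
            Lossy = false,
            BufferSize = 1,
            Evaluator = new LevelEvaluator(Level.Error),
            Threshold = Level.Error,
            Layout = new PatternLayout("%date %-5level %logger - %message%newline")
        };
        smtp.ActivateOptions();

        var hierarchy = (Hierarchy)LogManager.GetRepository();
        hierarchy.Root.AddAppender(smtp);
        hierarchy.Configured = true;
    }
}
```

If the buffer, evaluator and threshold are already this permissive and emails still disappear, that points back at the mail server rather than the appender.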
Alternatively you could try to use the SmtpPickupDirAppender.

Elmah XML Logging on Load Balanced Environment

We're implementing Elmah for an internal application. For development and testing we use a single server instance but on the production environment the app is delivered using a load balanced environment.
Everything works like a charm using Elmah, except for the fact that the logs are kept independently on each server. What I mean by this is that if an error happens on Server1 the XML file is stored physically on that server, and the same goes for Server2, since I'm storing the files in App_Data.
When I access the .axd location to see the error list, I just see the ones from the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them in a shared folder? Having a shared folder would force us to give the user that runs the application on the servers access to that separate folder, and the folder would live on only one of the servers instead of both.
I cannot use In-Memory or Database logging since FileLog is the only one allowed.
You might consider using ElmahR for this case, since you are not able to implement In-Memory or Database logging. ElmahR will provide you with a central location for the two load-balanced servers to send errors to (in addition to logging them locally) via an HTTP post. Then you can access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages in a SQL Server CE database, so it can persist the error messages it receives.
Keep in mind that if the ElmahR Dashboard app design does not meet your initial needs/desires, it could be modified as needed given that it is an open source project.
Hope this might be a viable option.
