How to log user activity with time spent and application name using c#.net 2.0?

I am creating a desktop application in which I want to track user activity on the system: for example, that the user opened Microsoft Excel with a particular file name and worked on it for a certain amount of time.
I want to create an XML file to maintain that log.
Please provide me some help with that.

This feels like one of those questions where you have to figure out what is meant by the question itself. Taken at face value, it sounds like you want to monitor how long a user spends in any process running in their session; however, it may be that you only really want to know whether, and for how long, a user spends time in a specific subset of all running processes.
Since I'm not sure which of these is the correct assumption to make, I will address both as best I can.
Regardless of whether you are monitoring one or all processes, you need to know what processes are running when you start up, and you need to be notified when a new process is created. The first of these requirements can be met using the GetProcesses() method of the System.Diagnostics.Process class; the second is a tad more tricky.
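To illustrate the first of these, a minimal sketch that snapshots the running processes (the output format is just a choice for the example):

using System;
using System.Diagnostics;

class ProcessSnapshot
{
    static void Main()
    {
        // Snapshot of everything running when the logger starts.
        foreach (Process p in Process.GetProcesses())
        {
            try
            {
                // MainWindowTitle often contains the open document, e.g. "Report.xlsx - Microsoft Excel".
                Console.WriteLine("{0} (PID {1}): {2}", p.ProcessName, p.Id, p.MainWindowTitle);
            }
            catch (Exception)
            {
                // Some system processes deny access to their details; skip them.
            }
        }
    }
}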
One option for checking whether new processes exist is to call GetProcesses after a specified interval (polling) and determine whether the list of processes has changed. While you can do this, it may be very expensive in terms of system resources, especially if done too frequently.
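If you do go the polling route, here is a rough sketch, assuming a System.Threading.Timer and an interval you would tune yourself:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class ProcessPoller
{
    static Dictionary<int, string> known = new Dictionary<int, string>();

    static void Main()
    {
        // The 5-second interval is an arbitrary choice; shorter means more overhead.
        Timer timer = new Timer(Poll, null, 0, 5000);
        Console.ReadLine();
        GC.KeepAlive(timer);
    }

    static void Poll(object state)
    {
        Dictionary<int, string> current = new Dictionary<int, string>();
        foreach (Process p in Process.GetProcesses())
            current[p.Id] = p.ProcessName;

        // Anything in the new snapshot that we haven't seen before is a new process.
        foreach (KeyValuePair<int, string> entry in current)
            if (!known.ContainsKey(entry.Key))
                Console.WriteLine("New process: {0} (PID {1})", entry.Value, entry.Key);

        known = current;
    }
}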
Another option is to look for some mechanism that allows you to register to be notified of the creation of a new process asynchronously. There is no dedicated event for this in the core .NET Framework 2.0 class library itself, but WMI event queries, which you can subscribe to through the System.Management namespace, can deliver such notifications.
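A sketch of such a WMI subscription; it needs a reference to System.Management.dll, the WITHIN 2 polling interval is an arbitrary choice, and depending on the machine the query may require elevated rights:

using System;
using System.Management;

class ProcessWatch
{
    static void Main()
    {
        // Fires when a new Win32_Process instance appears; WITHIN 2 is the WMI polling interval in seconds.
        WqlEventQuery query = new WqlEventQuery(
            "SELECT * FROM __InstanceCreationEvent WITHIN 2 WHERE TargetInstance ISA 'Win32_Process'");
        ManagementEventWatcher watcher = new ManagementEventWatcher(query);
        watcher.EventArrived += delegate(object sender, EventArrivedEventArgs e)
        {
            ManagementBaseObject proc = (ManagementBaseObject)e.NewEvent["TargetInstance"];
            Console.WriteLine("Started: {0} (PID {1})", proc["Name"], proc["ProcessId"]);
        };
        watcher.Start();
        Console.ReadLine(); // keep the subscription alive
        watcher.Stop();
    }
}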
Finally, however you do it, I recommend being as specific as you can about the notifications you choose to subscribe for: the fewer there are, the fewer resources are used generating and processing them.
Once you know what processes are running and which you are interested in, you will need to determine when focus changes to a new process of interest so that you can time how long the user spends actually using the application. For this you can use the GetForegroundWindow function to get the window handle of the currently focused window.
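A sketch of the P/Invoke side, pairing GetForegroundWindow with GetWindowThreadProcessId to map the focused window back to a process; you would call something like this periodically from your own timing logic:

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class ForegroundTracker
{
    [DllImport("user32.dll")]
    static extern IntPtr GetForegroundWindow();

    [DllImport("user32.dll")]
    static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

    static void Main()
    {
        IntPtr hwnd = GetForegroundWindow();
        if (hwnd != IntPtr.Zero)
        {
            uint pid;
            GetWindowThreadProcessId(hwnd, out pid);
            Process p = Process.GetProcessById((int)pid);
            Console.WriteLine("Focused: {0} - {1}", p.ProcessName, p.MainWindowTitle);
        }
    }
}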
As far as logging to an XML file, you can either use an external library such as log4net, as suggested by pranay's answer, or you can build the log file using the XmlTextWriter or XmlDocument classes in the System.Xml namespace.
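For example, a minimal sketch using XmlTextWriter; the element and attribute names here are invented for illustration:

using System.Xml;

class ActivityLog
{
    static void Main()
    {
        XmlTextWriter writer = new XmlTextWriter("activity-log.xml", System.Text.Encoding.UTF8);
        writer.Formatting = Formatting.Indented;
        writer.WriteStartDocument();
        writer.WriteStartElement("activityLog");

        // One entry per focus session; the values would come from your tracking logic.
        writer.WriteStartElement("entry");
        writer.WriteAttributeString("application", "EXCEL");
        writer.WriteAttributeString("windowTitle", "Report.xlsx - Microsoft Excel");
        writer.WriteAttributeString("secondsInFocus", "342");
        writer.WriteEndElement();

        writer.WriteEndElement();
        writer.WriteEndDocument();
        writer.Close();
    }
}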

Related

How to handle SAP Kapsel Offline app OData conflicts properly?

I built an app that is able to store OData offline by using the SAP Kapsel plugins.
More or less it's the same as what is generated by SAP Web IDE, or similar to the apps in this example: https://blogs.sap.com/2017/01/24/getting-started-with-kapsel-part-10-offline-odatasp13/
Now I am at the point of checking the error resolution potential. I created a sync conflict (changing data on the server after the offline database was stored, then changing something in the app and starting a flush).
As mentioned in the documentation, I can see the error in the ErrorArchive and could also see some details. But what I am missing is the "current" data in the server database.
In the error details I can just see the data on the device but not the data changed on the server.
For example:
Device is loading some names into offline store
Device is offline
User A is changing some names
User B is changing one of these names directly online
User A is online again and starts a sync
User A is now informed about the entity that was changed, BUT:
not about the content user B entered;
I just see the "offline" data.
Is there a solution to see the "current" and the "offline" one in a kind of compare view?
Please also note that the server communication is done by the Kapsel plugin and not with normal AJAX calls. Those could be an alternative, but I am wondering if there is a smarter way supported by the API?
Meanwhile I figured out how to load the online data (manually).
This can be done by switching the HTTP handler back to the normal one:
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
Anyhow, this does not look like a proper solution, and I also have an issue with the conflict log itself: it must be deleted before any refresh can be applied.
I could not find any proper documentation for that. ETag handling is also hardly described in the SAPUI5 and SAP Kapsel documentation.
This question is a really tricky one, due to its implications. I understand that you are simulating a synchronization error due to concurrent modification, and want to know if there is a way for the client to obtain the "current" server state in order to give the user a means to compare the local and server state.
First, let me give you the short answer: No, there is no way for the client to see the current server state "for reference" via the Offline APIs when there are synchronization errors. Doing an online query as outlined above might work, but it certainly is a bad idea.
Now for the longer answer, which explains why this is not necessarily a defect and why I said there are quite some implications to the answer.
Types of Synchronization Errors
We distinguish a number of types of synchronization errors, and in this context we are clearly dealing with business-related issues. There are two subtypes here: those that the user can correct, e.g. validation errors, and those that are issues in the business process itself.
If the user violates the input range, e.g. by putting a negative price for a product, the server would reply with the corresponding message: "-1 is not a valid input value for 'Price'". You, as a developer, can display such messages to the user from the error archive, and the ensuing fix is indeed a very easy one.
Now when we talk about concurrent modification, things get really, really nasty. In fact, I like to say that in this case there is an issue with the business process, because on one hand, we allow data to get out of sync, while on the other hand, the process allows multiple users to manipulate the same piece of information. How all relevant users should now be notified and synchronize is no longer just a technical detail, but in fact a new business process. There just is no way to generically decide how to handle this case. In most cases, it would involve back-office experts who need to decide how the changes should be merged.
A Better Solution
Angstrom pointed out that there is no way to manipulate ETags on the client side, and you should in fact not even think about it. ETags work like version numbers in optimistic locking scenarios, and changing the ETag basically means "Just overwrite what's on the server". This is a no-go in serious scenarios.
An acceptable workaround would be the following:
Make sure the server returns verbose error messages so that the user can see what happened and what caused the conflict.
If that does not help, refresh the data. This will get you an updated ETag, and merge the local changes into the "current" server state, but only locally. "Merging" really means that local changes always overwrite remote changes.
The user now has another opportunity to review the data and can submit it again.
A Good Solution
Better is not necessarily good, so here is what you should really do: never let concurrent modification happen, because it is really expensive to handle. This implies that it is not the developer who should address this issue; rather, the business needs to change the process.
The right question to ask is, "When you replicate data in a distributed system, why do you allow it to be modified concurrently at all?" Typically stakeholders will not like this kind of question, and the appropriate reaction is to work out a conflict resolution process together with them. Only then will they realize how expensive fixing that kind of desynchronization is, and more often than not they will see that adjusting the process is far cheaper than insisting on yet another back-office process to fix the issues it causes. Even if they insist that there is a need for this concurrent modification, they will now understand that it is not your task to sort this out and that they need to invest in a conflict resolution process.
TL;DR
There is no way to compare the client state with the current server state on the client, but you can do a refresh to retain the local changes and get an updated ETag. The real solution, however, is to rework the business process, because this is no longer a purely technical issue.
The default solution is that SMP or HCPms detects errors by ETags. On the client side there is no API to manipulate ETags in case of conflicts. A potential solution to implement a kind of diff view on the device would work like this:
Show the errors
Cache the errors (maybe only in memory?)
Delete the errors
Do a refresh of the database
Build a diff view from the current data and the cached errors
The idea with
sap.OData.removeHttpClient();
sap.OData.applyHttpClient();
could also work, but it could be very tricky and may introduce side effects; maybe some requests would be triggered against the "wrong" backend.

opening and closing streaming clients for specific durations

I'd like to infrequently open a Twitter streaming connection with TweetStream and listen for new statuses for about an hour.
How should I go about opening the connection, keeping it open for an hour, and then closing it gracefully?
Normally for background processes I would use Resque or Sidekiq, but from my understanding those are for completing tasks as quickly as possible, not chilling and keeping a connection open.
I thought about using a global variable like $twitter_client but that wouldn't horizontally scale.
I also thought about building a second application that runs on one box to handle this functionality, but that seems excessive if it can be integrated into the main app somehow.
To clarify, I have no trouble starting a process, capturing tweets, and using them appropriately. I'm just not sure what I should be starting. A new app? A daemon of some sort?
I've never encountered a problem like this, and am completely lost. Any direction would be much appreciated!
Although not a direct fix, this is what I would look at:
Time
You're working with time, so I'd look at what time-centric processes could be used to keep the connection open for an hour.
Specifically, I'd look at running some sort of job on the server, which you could fire at specific times (programmatically if required), to open and close the connection. I only have experience with Resque, but as you say, it's probably not up to the job. If I find any better solutions, I'll certainly update the answer.
Storage
Once you've connected to TweetStream, you'll want to look at how you can capture the tweets for that time period. It seems a waste to create a data table just for the job, so I'd be inclined to use something like Redis to store the tweets that you need
This can then be used to output the tweets you need, allowing you to simulate storing/capturing them, and then delete them after the hour window has passed.
Delivery
I don't know what context you're using this feature in, so I'll just give you as generic a process idea as possible.
To display the tweets, I'd personally create some sort of record in the DB to show the time you're pinging TweetStream that day (if it changes; if it's constant, just set a constant in an initializer), and then include some logic to try to get the tweets from Redis. If you're able to collect them, show them as you wish; otherwise don't print anything.
Hope that gives you a broader spectrum of ideas?

Specflow Feature File Best Practice

Thanks in advance for the help.
My question pertains to best practices inside a SpecFlow feature file.
Question:
Is using a wait command inside of the feature file considered bad practice?
Example:
And I click on the username
And wait 5 seconds
And I input a new value into the last name
The wait command forces a 5 second wait. I am doing this to make sure the page is loaded to prevent "element not found" errors or other errors. Basically to make sure I have a clean page to manipulate.
Would a better practice be to use a wait inside of the Step file itself?
// using Fluent Automation
I.WaitUntil(() => I.Assert.Exists(".last-name")); // retried until the assertion passes or the timeout elapses; the selector is illustrative
// or
I.Wait(TimeSpan.FromSeconds(5)); // unconditional wait for a fixed timespan
My reasoning for not using the Fluent Automation wait is:
By utilizing the Fluent Automation method you are dependent on the default timeout in the Settings object. The default timeout in some cases may not be long enough, or may be too long. It seems very verbose to me to continually change/reset the Settings object with the only benefit being to remove wait commands from the feature file.
So what is really the best practice?
Thanks,
-n
I think the best practice is to keep the feature file for your scenarios, and free of the implementation details.
Since we are following a BDD process (http://dannorth.net/introducing-bdd), the feature file is the output of the conversation between you and the process expert, and the scenario represents the steps that you are going to take to prove that your functionality works for that example. You could hope that those steps define the business process and could be performed by any system, not just the one we might be developing now. Ideally this logic captures our intent and can be reused on any future systems that might replace the current one.
So I just don't see you saying that you need to wait
....
Although you might want to say
When the page has loaded
and that maps quite nicely onto Fluent Automation.
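As a sketch, the step definition behind that line might look like the following; the #main-content selector is a hypothetical element that only appears once the page is ready:

using FluentAutomation;
using TechTalk.SpecFlow;

[Binding]
public class PageSteps : FluentTest
{
    [When(@"the page has loaded")]
    public void WhenThePageHasLoaded()
    {
        // Retried until the assertion stops failing or the configured timeout elapses.
        I.WaitUntil(() => I.Assert.Exists("#main-content"));
    }
}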

Background service that checks for changes in a file

I haven't done any coding of this kind and would like some pointers on how to start. The service will eventually do several things, and perhaps someone has already thought of it and made it happen.
The big picture is this: detect if a PowerPoint presentation has been updated on the server. If it has, extract the slides, save them as individual JPEGs, and upload them to a specific image list in SharePoint. All this has to happen without human intervention.
I assume this would be a Windows service project, right? Then some kind of file watcher that deals with changes to the file?
As far as dissecting .pptx/.ppsx files and getting the slides converted, is there an API or some DLL class for that?
What about uploading files to a library list on SharePoint automatically?
Thanks,
Risho
I've done this in Topshelf http://topshelf-project.com/, a windows service host for .NET.
https://github.com/Topshelf/Topshelf/blob/master/src/Topshelf/FileSystem/FileSystemEventProducer.cs
Windows has an event pump issue if event handling takes too long, and the FileSystemWatcher gets disconnected during those times, so we also implemented polling on top of this.
https://github.com/Topshelf/Topshelf/blob/master/src/Topshelf/FileSystem/PollingFileSystemEventProducer.cs
Now, these producers are supposed to be tied to actors, so they might seem a bit overly complicated for just checking on file system events. It's up to you whether that model is useful or you just want the core part. Remember that you can often receive events even if the file is locked or not done yet, so handle those exceptions.
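If you only want the core part without Topshelf's actor model, a plain FileSystemWatcher sketch could look like this; the path and filter are placeholders, and the probe-for-lock trick is one way to handle the locked-file caveat above:

using System;
using System.IO;

class PptxWatcher
{
    static void Main()
    {
        FileSystemWatcher watcher = new FileSystemWatcher(@"\\server\presentations", "*.pptx");
        watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName;
        watcher.Changed += OnChanged;
        watcher.Created += OnChanged;
        watcher.EnableRaisingEvents = true;
        Console.ReadLine(); // in a real Windows service this lifetime is handled by OnStart/OnStop
    }

    static void OnChanged(object sender, FileSystemEventArgs e)
    {
        try
        {
            // The file may still be locked by PowerPoint or mid-copy; probe before processing.
            using (File.Open(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.None)) { }
            Console.WriteLine("Ready to process: {0}", e.FullPath);
        }
        catch (IOException)
        {
            // Still locked or incomplete; retry later, e.g. from a polling fallback.
        }
    }
}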
SharePoint has what is called a timer service for just these types of situations. Andrew Connell has an article regarding creating your own timer jobs.
http://www.andrewconnell.com/blog/archive/2007/01/10/5704.aspx

Self Updating

What's the best way to terminate a program and then run additional code from the program that's being terminated? For example, what would be the best way for a program to update itself?
You have a couple options:
You could use another application .exe to do the auto update. This is probably the best method.
You can also rename a program's .exe while it is running. This allows you to fetch the new file from some update server and put it in the original location. On the program's next startup it will be using the new .exe, and you can then delete the renamed file on startup.
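A hedged sketch of that rename trick; the update URL and file layout are placeholders, not a definitive implementation:

using System.IO;
using System.Net;
using System.Reflection;

class SelfRenameUpdate
{
    static void ApplyUpdate(string updateUrl)
    {
        string exePath = Assembly.GetEntryAssembly().Location;
        string oldPath = exePath + ".old";

        // Clean up a leftover from a previous update, then move the running exe aside.
        if (File.Exists(oldPath)) File.Delete(oldPath);
        File.Move(exePath, oldPath); // Windows allows renaming an exe that is running

        // Download the new version into the original location.
        using (WebClient client = new WebClient())
            client.DownloadFile(updateUrl, exePath);
        // The new exe is picked up on the next start; delete the .old file then.
    }
}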
It'd be really helpful to know what language we're talking about here. I'm sure I could give you some really great tips for doing this in PowerBuilder or COBOL, but that might not really be what you're after! If you're talking Java, however, then you could use a shutdown hook - works great for me.
Another thing to consider is that most of the "major" apps I've been using (FileZilla, Paint.NET, etc.) have their updaters uninstall the previous version of the app and then do a fresh install of the new version.
I understand this won't work for really large applications, but this does seem to be a "preferred" process for the small to medium size applications.
I don't know of a way to do it without a second program that the primary program launches prior to shutting down. Program 2 downloads and installs the changes and then relaunches the primary program.
We did something like this in our previous app. We captured the termination of the program (in .NET 2.0) from either the X or the close button, and then kicked off a background update process that the user didn't see. It would check the server (client-server app) for an update, and if there was one available, it would download in the background using BITS. Then the next time the application opened, it would realize that there was a new version (we set a flag) and popped up a message alerting the user to the new version, and a button to click if they wanted to view the new features added to this version.
It makes it easier if you have a secondary app that runs to do the updates. You would execute the "updater" app and then, inside of it, wait for the other process to exit. If you need access to the regular app's DLLs and such, but they also need updating, you can run the updater from a secondary location with already-updated DLLs so that they are not in use in the original location.
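A sketch of such an updater, assuming the main app passes its own process ID on the command line; all paths are placeholders:

using System;
using System.Diagnostics;
using System.IO;

class Updater
{
    static void Main(string[] args)
    {
        // The main app starts us with Process.GetCurrentProcess().Id as the argument, then exits.
        int mainAppPid = int.Parse(args[0]);
        try
        {
            Process.GetProcessById(mainAppPid).WaitForExit();
        }
        catch (ArgumentException)
        {
            // Already exited; nothing to wait for.
        }

        // Copy the already-downloaded files over the install folder.
        foreach (string file in Directory.GetFiles(@"C:\MyApp\update"))
            File.Copy(file, Path.Combine(@"C:\MyApp", Path.GetFileName(file)), true);

        Process.Start(@"C:\MyApp\MyApp.exe"); // relaunch the updated app
    }
}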
If you're writing a .NET application, you might consider using ClickOnce. If you need quite a bit of customization, though, you might look elsewhere.
We have an external process that performs updating for us. When it finds an update, it downloads it to a secondary folder and then waits for the main application to exit. On exit, it replaces all of the current files. The primary process just kicks the update process off every 4 hours. Because the update process will wait for the exit of the primary app, the primary app doesn't have to do any special processing other than start the update application.
This is a side issue, but if you're considering writing your own update process, I would encourage you to look into using compression of some sort to (1) save on download and (2) provide one file to pull from an update server.
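As a minimal sketch of the one-file idea (ZipFile is .NET 4.5+, so this wouldn't apply to the 2.0-era scenario above; paths are placeholders):

using System.IO.Compression; // requires a reference to System.IO.Compression.FileSystem

class UpdatePackage
{
    static void Main()
    {
        // One file to host on the update server...
        ZipFile.CreateFromDirectory(@"C:\MyApp\build", @"C:\MyApp\update.zip");
        // ...and one call on the client to unpack it after download.
        ZipFile.ExtractToDirectory(@"C:\MyApp\update.zip", @"C:\MyApp\update");
    }
}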
Hope that makes sense!
