Best practice for writing a self-updating Windows service [closed] - windows-services

We need to create a Windows service that has the ability to self-update.
Three options spring to mind:
A second service that manages the retrieval, uninstallation and installation of the first service.
Use of some third-party framework (suggestions welcome; I believe .NET supports automatic updating for Windows Forms apps, but not Windows services).
Use of a plugin model, whereby the service is merely a shell containing the updating and running logic, and the business logic of the service is contained in a DLL that can be swapped out.
Can anyone shed some light on the solution to this problem?
Thanks

Google have an open-source framework called Omaha which does exactly what your point 1 describes. It runs as a scheduled Windows task in the background, outside the applications it manages. Google use Omaha to auto-update their Windows applications, including Chrome. Because it comes from Google, and because it is installed on every Windows machine that runs Chrome, Omaha is extremely powerful.
There is an article online that explains in more detail how Omaha can be used to update Windows Services. It argues that Omaha is an especially good fit for Services (vs., say, GUI applications) because of its asynchronous nature.
So you could cover your points 1 and 2 by using Omaha. I'm afraid I don't know how you would do 3.

Just some thoughts I had.
Option 1 seems problematic because you end up back at the situation you're trying to resolve: at some point the updater itself will need updating.
Option 3 sounds good, but if by "swapped out" you mean using some fancy reflection to load the DLL at run time, I'm not sure whether performance will become an issue.
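For what it's worth, a minimal sketch of that plugin-style load in C# might look like the following; the IServicePlugin interface and the "BusinessLogic.dll" file name are illustrative names, not anything from the question.

    // Hypothetical sketch of the plugin model (option 3): the service shell loads its
    // business logic from a separate assembly at run time so the DLL can be replaced
    // on upgrade. IServicePlugin and "BusinessLogic.dll" are illustrative names.
    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public interface IServicePlugin
    {
        void Run();
        void Stop();
    }

    public static class PluginLoader
    {
        public static IServicePlugin Load(string pluginDirectory)
        {
            var path = Path.Combine(pluginDirectory, "BusinessLogic.dll");
            var assembly = Assembly.LoadFrom(path);

            // Pick the first concrete type in the assembly that implements the interface.
            var pluginType = assembly.GetTypes()
                .First(t => typeof(IServicePlugin).IsAssignableFrom(t) && !t.IsAbstract);

            return (IServicePlugin)Activator.CreateInstance(pluginType);
        }
    }

Note that on the .NET Framework an assembly loaded this way cannot be unloaded without tearing down its AppDomain, so "swapping out" the DLL in practice usually means loading it into a separate AppDomain (or an AssemblyLoadContext on .NET Core and later), or simply restarting the shell service after copying the new DLL.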
There is a fourth option, where the service spawns an update process; that allows it to refresh the updater executable first, if necessary, before running it. From there it's a simple matter of writing an installation app which the service spawns just before shutting down.
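As a rough illustration of that fourth option (a sketch rather than a reference implementation), the service could refresh the updater binary from the downloaded package and launch it just before stopping itself; the "Updater.exe" name and the "--apply" argument are made up for the example.

    // Hypothetical sketch of option 4: refresh the updater from the downloaded
    // package (so the updater itself can be upgraded), launch it detached, then
    // let the service stop so the updater can replace the service files.
    using System.Diagnostics;
    using System.IO;

    public static class UpdateLauncher
    {
        public static void LaunchUpdaterAndStop(string packageDirectory, string workingDirectory)
        {
            var updaterPath = Path.Combine(workingDirectory, "Updater.exe");

            // Refresh the updater executable from the downloaded package first,
            // so the updater itself gets updated before it runs.
            File.Copy(Path.Combine(packageDirectory, "Updater.exe"), updaterPath, overwrite: true);

            // Launch the updater detached from the service process.
            Process.Start(new ProcessStartInfo
            {
                FileName = updaterPath,
                Arguments = "--apply \"" + packageDirectory + "\"",
                UseShellExecute = false
            });

            // The service then stops itself (e.g. via ServiceBase.Stop()); the updater
            // waits for the service process to exit before overwriting its binaries.
        }
    }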

I use option 1. The updater process gets updated very rarely these days. It uses an XML file containing the details of where to get the files from (currently supports SVN; working on adding NuGet support) and where to put them. It also specifies which projects are services and which are websites, and the name of the service to use for each project.
The process polls the source; if there is a new version available, it copies it down to a fresh version-numbered directory and then updates the service. It also keeps five copies of each update, making it easy to roll back if there is a problem.
Here's the core piece of code for the updater, which stops the existing service, copies the files over, and then restarts it.
if (isService)
{
    log.Debug("Stopping service " + project.ServiceName);
    var service = GetService(project);
    if (service != null)
    {
        if (service.Status != System.ServiceProcess.ServiceControllerStatus.Stopped &&
            service.Status != System.ServiceProcess.ServiceControllerStatus.StopPending)
        {
            service.Stop();
        }
        // Wait up to one minute for the service to reach the Stopped state.
        service.WaitForStatus(System.ServiceProcess.ServiceControllerStatus.Stopped, new TimeSpan(0, 1, 0));
        if (service.Status == System.ServiceProcess.ServiceControllerStatus.Stopped)
            log.Debug("Service stopped");
        else
            log.Error("ERROR: Expected Stopped but service is " + service.Status);
    }
}

log.Debug("Copying files over");
CopyFolder(checkoutDirectory, destinationDirectory);

if (isService)
{
    log.Debug("Starting service");
    var service = GetService(project);
    // Currently it doesn't create services, you need to do that manually
    if (service != null)
    {
        service.Start();
        // Wait up to one minute for the service to reach the Running state.
        service.WaitForStatus(System.ServiceProcess.ServiceControllerStatus.Running, new TimeSpan(0, 1, 0));
        if (service.Status == System.ServiceProcess.ServiceControllerStatus.Running)
            log.Debug("Service running");
        else
            log.Error("Service " + service.Status);
    }
}
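The roll-back side isn't shown above, but the retention rule described earlier (a fresh version-numbered directory per deployment, keeping the five most recent) could look roughly like this; the method names and the CreationTimeUtc ordering are my own assumptions, not part of the original updater.

    // Hypothetical sketch of the retention logic: deploy into a fresh
    // version-numbered directory and keep only the newest few copies
    // so a roll-back is always available.
    using System.IO;
    using System.Linq;

    public static class VersionRetention
    {
        public static string CreateVersionDirectory(string deployRoot, string version)
        {
            var target = Path.Combine(deployRoot, version);
            Directory.CreateDirectory(target);
            return target;
        }

        public static void PruneOldVersions(string deployRoot, int keep = 5)
        {
            var stale = new DirectoryInfo(deployRoot)
                .GetDirectories()
                .OrderByDescending(d => d.CreationTimeUtc)
                .Skip(keep);

            foreach (var dir in stale)
                dir.Delete(recursive: true);
        }
    }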

Related

Erlang/Elixir on Docker and Hot Code Swap

One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and start new ones from new images holding the new code. This essentially seems to be what everyone does.
That being said, I also know that it is possible to use one hidden node to distribute updates to all other nodes over the network. Of course, just like that it sounds like asking for trouble, but...
My question would be the following: has anyone tried, with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the dos, don'ts and caveats?
The story
Imagine a system to handle mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call or the data access session (I will call it the session going forward). Those servers have an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF, Policy Decision Function). Both systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF, and it drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event that the user needs to be charged for (the next data transfer, another minute of the call) the gateway notifies the PDF about the fact and the PDF subtracts a specific amount of money from the user account. When the user account is empty the PDF notifies the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). But we need to move the customers somehow from the old PDF to the new version. So we tell the gateway node to create any new connections to the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to upgrade one version of the software to another in case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create the relup file with the new version of the software, test it properly and deploy it to the PDF nodes. Each node is upgraded in place: the internal state of the application is converted and the node runs the new version of the software. Most importantly, the TCP connection with the gateway server has not been dropped. So customers happily continue their calls, or keep downloading the latest Windows ISO, while we are upgrading the system. All is done in 10 seconds rather than 2-3 days.
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's release handling are orthogonal technologies. You can use either or both; it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches properly, and enough patience to teach your Ops team so that they can deploy the system using either method? What if the testing facility costs millions of pounds (because of the required hardware) and can use only one of those two methods at a time (because the test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially using Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to be available during the upgrade (unlike the example PDF system), you might just opt for always deploying new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker if you need quick and reliable updates on the fly, with Docker used only for the initial deployment. I hope that helps.

Please suggest a good Monitoring and Alerting tool for applications hosted in cloud [closed]

I am looking for a monitoring and alerting tool for my application hosted in the cloud. My application is hosted across multiple servers and I want to monitor all of them. I am interested in monitoring the following:
1. Service monitoring:
Check if the service is up. This requires:
trying to sign up a new user
logging in to the application with a given username/password and performing certain steps like search, etc.
Monitoring QoS: how much time searches and some other operations are taking
2. Resource monitoring
Monitoring the following parameters in each server:
CPU utilization
load average
Memory usage
Disk usage
IOPS
3. Process monitoring
Monitor whether a set of processes is running. If not, try restarting them.
Ex: php-fpm, my application binaries, mysql, nginx, smtp etc.
4. Monitoring log files
Error logs of my application
mysql error log
MySQL slow query log
etc.
Also I should be able to extend its usage by executing shell commands or writing my own shell scripts.
I should be able to set an alert if any monitored item is found problematic, and I should be able to receive alerts through:
email
Mobile SMS
The monitoring system should maintain history for the period I want, so that after receiving an alert I can log in to the system, view past data (say, the past 2 weeks) and investigate problems.
Most important:
The tool should have a very good way of managing its own configuration.
The configuration should not be scattered across multiple places; all configuration should be stored in a centralized place. Say, in future, the path of a monitored log file changes: I would like to search and replace all occurrences of that file in my configuration.
I should be able to version control my configurations.
Instead of going to the web interface and setting the configuration manually, I would like to set up a script which automatically loads all the configuration and starts monitoring.
I am exploring Zabbix but don't see a satisfactory way of configuration management. Should I try Nagios? Any other tool?
Two newer cloud-type monitoring solutions that may be of interest to you are http://logicmonitor.com/ and http://copperegg.com/.
LogicMonitor covers many of your requirements out of the box, and it allows a bit of customization for your own alerting.
CopperEgg / RevealCloud is more base system-level monitoring (CPU, memory, disk, and network throughput). It has a nice polished interface that is much more straightforward than LogicMonitor. But that is about it.
Well, since you've tagged this with Zabbix, I assume you're considering it as an option.
We use Zabbix to monitor our Amazon EC2 instances as well as instances in our private OpenStack cloud. It's as simple as "apt-get install zabbix-agent", really.
Zabbix is especially useful for monitoring our OpenStack private cloud. We have the server scan an IP range and automatically set up checks, alerts, etc., based solely on the hostname of the machine found.
Nagios is one of the standard ways of monitoring and can support all the use cases you brought up (plus, plugins have probably already been written for all of them).

CITRIX and disabled "Copy/Paste" [closed]

I must use several Citrix desktops, where "COPY/PASTE" from the local machine to the server is disabled. Are there workarounds or tricks to bypass this limitation?
I've encountered the same problem and have a partial, somewhat contrived solution. It allows me to get a little more than 1 KB of text out of a sandboxed Internet Explorer instance.
I use http://goqr.me/ to create a QR code from the text. Create it at the greatest possible resolution and open it. Take a screenshot of the window onto the clipboard by pressing Alt-PrtScr. Then I use a small utility (see https://github.com/thoraage/qrscanner) to extract the text from the picture on the clipboard.
It is a sick world!
The earlier suggestions and "work-arounds" were useful, but in 2020, there is a better way :)
Microsoft developed a "Relay Service" called Azure Relay. This same service is what's used behind the scenes to power what Microsoft refers to as "Live Code Sharing".
This service runs as an extension with several products, but for developers, this would likely be their IDE and code editor: Visual Studio and VS Code.
The extension is Live Share and it works flawlessly (at least on my machine 😉)
Like other suggestions, this isn't going to let you copy/paste from one machine to another, but in a way it allows for much more. Instead, this alternative will let you host a project/workspace/notes etc. on your local machine, start a Live Share session, then join that session from the remote machine.
Whether you work from the local or the remote, the changes persist and are shared on each machine.
Thanks to the other commenters for their suggestions. I may not have thought of this as an option without the prior suggestions to spark the idea.
The best solution for this, in my case: I just open the OneNote app on the local machine.
Open Citrix and restore (resize) the window.
Snip the entire text as an image and paste it into OneNote.
Right-click on the image and copy the text.
Paste it into a TXT doc and you've got it.
I just open two Gmail accounts and send the info through chat.
Example:
On the local computer, open Gmail 1.
On the remote Citrix computer, open Gmail 2.
Copy from the local computer and paste into a Google Hangouts chat with Gmail 2.
Send.
Done! It will be ready to copy from Gmail 2 on the remote Citrix computer.
Cheers
I was running into a similar situation, but in my case I was trying to copy files from the remote (Windows) machine to the local one.
To solve that issue I killed rdpclip.exe on the remote machine and started it again.
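If you want to script that restart, a minimal sketch (run inside the remote session, and assuming the account is allowed to kill and start rdpclip.exe) could be:

    // A minimal sketch of the rdpclip.exe restart described above. Killing and
    // relaunching the clipboard process sometimes re-establishes clipboard redirection.
    using System.Diagnostics;

    public static class ClipboardReset
    {
        public static void RestartRdpClip()
        {
            foreach (var process in Process.GetProcessesByName("rdpclip"))
            {
                process.Kill();
                process.WaitForExit();
            }

            // rdpclip.exe lives in %SystemRoot%\System32, so starting it by name is usually enough.
            Process.Start("rdpclip.exe");
        }
    }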
You would need to define it in the Citrix policy, to restrict or not restrict based on certain conditions.
The answer also depends on the direction you are coming from: as a user trying to circumvent the system, or as a tech trying to get a select group of users approved to do so.
I'm not aware of any tricks to circumvent it.
If it is just about a picture/screenshot, I suggest the following workaround:
1. Open the picture/file in Citrix.
2. Switch to your local machine and open the Snipping Tool (Windows).
3. Take a screenshot of the Citrix content.
Solution for this problem:
Open Internet Explorer, go to Internet Options, open the Security tab, then open Trusted Sites and add the Citrix website you want to access.
Restore advanced settings on the Advanced tab.
Clear your temporary files.
Download Citrix Receiver, then check whether copy/paste works.

Database sync solutions for Delphi [closed]

I am looking for some starting points integrating a Win32 Delphi application's data with a remote database for a web application.
Problem(s) this project intends to solve:
1) The desktop app does not perform well over VPNs. Users in remote offices could use the web app instead.
2) Some companies prefer a web app to the desktop app
3) Mobile devices could hit the web app as a front end.
Issues I've identified:
The web application will run on a Unix-based system, probably Linux. The desktop application uses NexusDB, while the web application will likely use Postgres. Dissimilar platforms and databases.
Using Delphi, it appears the Microsoft Sync Framework is not available for this project.
My first thought was to give the web app a standard REST API and have the desktop app hit the API as a client every n minutes from the local database server. I can already see tons of issues with this!
Richard, I have been down this path before and all I can say is DON'T DO IT! I used to work for a company that had a large Delphi desktop application (over 250 forms) running on DBISAM (very similar to what you have). Clients wanted a "Web" interface so people could work remotely, and then have the web app and desktop app synch changes. Well, a few years later the application was horrible: data issues and user workflow were terrible, because managing the same data in two different places is a nightmare.
I would recommend moving your database to something like MySQL (which both the Delphi and web clients would hit) and using one database between the two interfaces. The reason the Delphi client is not working well over the VPN is that desktop databases like NexusDB and DBISAM copy way too much data over the pipe when they run queries (they pull back all the data and then filter/order it, etc.); they are not truly client/server like SQL Server or MySQL, where all the heavy lifting is done on the server and only the results come back. Of course, moving the Delphi app to a DB like MySQL could alleviate the speed issues altogether, but you don't solve #2 and #3 with that.
Another option is to move the entire application to the web and only have 1 application to support. Of course, a good UI developer in a tool like Delphi can always make a superior user interface to a web app - especially in data-entry heavy applications - so that may not be an option for you.
I would be very wary of "syncing data".
My 2 cents worth.
Mike
If you use a REST-based ORM, you could have both AJAX and Delphi client applications calling the same Delphi server, using JSON as the transmission format, HTTP/1.1 as the remote connection layer, and Delphi and JavaScript objects to access the data.
For instance, if you type http://localhost:8080/root/SampleRecord in your browser, you'll receive something like:
[{"ID":1},{"ID":2},{"ID":3},{"ID":4}]
And if you ask for http://localhost:8080/root/SampleRecord/1 you'll get:
{"ID":1,"Time":"2010-02-08T11:07:09","Name":"AB","Question":"To be or not to be"}
This can be consumed by any AJAX application, if you know a bit about JavaScript.
And the same HTTP/1.1 RESTful requests (GET/POST/PUT/DELETE/LOCK/UNLOCK...) are already available to any HTTP/1.1 client application. The framework implements the server using the very fast kernel-mode http.sys (faster than any other HTTP server on Windows), and the fast HTTP API for the client. You can even use HTTPS to handle a secure connection.
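Purely as an illustration of the consumption side (not part of the framework itself), any HTTP/1.1 client can issue the same GET requests against the URLs shown above; here is a small C# sketch using HttpClient, with the SampleRecord URLs taken from the examples:

    // Sketch of a client consuming the RESTful endpoints shown above.
    // The URLs and JSON shapes match the example responses; any HTTP/1.1
    // client (Delphi, JavaScript, curl) would issue the same requests.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class SampleRecordClient
    {
        private static readonly HttpClient Http = new HttpClient();

        public static async Task Main()
        {
            // GET the collection: [{"ID":1},{"ID":2},...]
            string list = await Http.GetStringAsync("http://localhost:8080/root/SampleRecord");
            Console.WriteLine(list);

            // GET a single record: {"ID":1,"Time":"2010-02-08T11:07:09",...}
            string record = await Http.GetStringAsync("http://localhost:8080/root/SampleRecord/1");
            Console.WriteLine(record);
        }
    }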
IMHO, using such an ORM is better than using only a database connection, because:
It will follow more strictly the n-Tier principle: the business rules are written ONCE in the Delphi server, and you consume only services and RESTful operations with business objects;
It will use HTTP/1.1 for the connection, which is faster and more standard across the Internet than any direct database connection, and can be strongly secured via HTTPS;
JSON and RESTful over HTTP are de-facto standard for AJAX applications (even Microsoft uses it for WCF);
The data will be transmitted using JSON, which is a very nice format for multiple front-end;
The Stateless approach makes it very strong, even in unconnected mode;
Using a small local replica of the database (we encourage SQLite for this) allows you to have client access in unconnected mode (for Delphi clients, or for HTML 5 clients).
I recommend you have one database and two front ends (a web UI that calls SOAP methods for its back-end work, and a SOAP-based rich client in Delphi), with a SOAP server tier that implements the SOAP-accessible methods and contains your business logic.
From what you're describing, you think replication will merely speed you up, but what it will do instead is slow you down and cause replication, coherence, and relational integrity problems that must be sorted out by hand (by you).
Take a look at this:
CopyCat is a database replication engine, written as a component set for Embarcadero Delphi. CopyCat has been in production use since 2004, and is very stable. It is relied upon daily by a number of small to large businesses for applications ranging from inter-site synchronization, itinerant work, database backup and more. We are confident that it can fulfill your needs as well. Read on...

Windows Service can't access network share [closed]

I have a Windows service running on my local machine. It's configured to run as NT AUTHORITY\NETWORK SERVICE. The program accesses a network shared drive on a computer in the same subnet. That shared directory has Everyone set to Full Control.
I'm getting False from File.Exists, but the file exists. I'm certain this is a permissions issue. Am I forgetting anything? Note: the computer with the shared drive is not on a domain.
The solution was found here:
https://serverfault.com/questions/177139/windows-service-cant-access-network-share
"The fact that the machine with the shared drive is not on a domain is where your main problem is. In order to get this to work you will have to configure the Windows service to run as a specific user, and then you'll have to create an identical user on the remote system with the same password. It might work then.
The problem stems from the fact that in order to log in to a machine not in a domain, you have to log into that machine using an account that exists on that machine. The machine account for something else definitely won't exist on that local machine. By creating an identical user with an identical password, you might be able to get the login to work."
-sysadmin1138
I created identical accounts on both machines and the service account was able to access the shared drive. Having the servers on the same domain is a better solution, so I'm working towards that, but this will work in the meantime.
Brian T was correct, but I would like to add something. We had this problem even though the service was running as the same DOMAIN\User. Our service was trying to write a file to a shared folder/drive, and it was configured in the config.xml like so:
I:/path/to/the/file/to/write.
But when we changed the config to use the IP address of the network share instead of the drive letter, we managed to fix the issue. However, the syntax changed a bit:
\\xxx.xxx.xx.xx\path\to\the\folder\to\write
Hope this helps anyone who still hasn't solved the problem.
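To illustrate the same point in code (placeholder host and paths, not from the answer): a service checking the file via a mapped drive letter will typically fail, while the UNC form works as long as the service account can authenticate against the share.

    // Illustrative only. A drive letter mapped in an interactive session does not
    // exist for a service account, so the UNC form of the path is the one to use.
    using System;
    using System.IO;

    public static class SharePathCheck
    {
        public static void Main()
        {
            // Typically fails from a service: the I: mapping belongs to a user session.
            Console.WriteLine(File.Exists(@"I:\path\to\the\file\to\write"));

            // Works if the service account can authenticate against the share.
            // 192.0.2.10 and "share" are placeholders for the real server and share name.
            Console.WriteLine(File.Exists(@"\\192.0.2.10\share\path\to\the\file\to\write"));
        }
    }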
Setting the share permissions is not enough; you also have to set the NTFS permissions adequately, then it'll work. Everyone Full Control on the share means everyone can get through the network to the root of the share, but from then on NTFS rights are used to determine what is allowed and what is not.
