Do server-less functions install modules every time they are called?
I am trying to understand how serverless functions really work. I understand that "serverful" basically means a computer that executes the server code and runs it 24/7 unless it is stopped for some reason. On the other hand, I understand that serverless code only runs when it is called. Where is this code stored? When I call a function in a serverless application, does it install the modules (for example from npm) every time I call the function? Is this what causes cold-start delays?
My Questions:
Do server-less functions install modules every time they are called?
If there is no server, where is this code stored in serverless?
I understand that serverful is like my computer running code. How can I describe serverless using the same analogy?
No, the dependencies are a part of the deployment artifact (e.g. a ZIP file or container image in the case of AWS Lambda), so they do not have to be installed on each invocation.
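To make that concrete, here is a minimal sketch of a .NET Lambda handler (assuming the Amazon.Lambda.Core and Amazon.Lambda.Serialization.SystemTextJson NuGet packages; the class and handler names are arbitrary). Everything in the static initializer, and every assembly bundled in the deployment package, is loaded once when a new execution environment spins up (the cold start) and then reused for subsequent warm invocations:

using System.Net.Http;
using Amazon.Lambda.Core;

// Tells the Lambda runtime how to (de)serialize the JSON payload.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

public class Function
{
    // Static initializers run once per cold start, when Lambda creates a new
    // execution environment; the assemblies they pull in ship inside the
    // deployment package and stay loaded across warm invocations.
    private static readonly HttpClient SharedClient = new HttpClient();

    // Runs on every invocation; nothing is downloaded or installed here.
    public string Handler(string input, ILambdaContext context)
    {
        context.Logger.LogLine($"Handling: {input}");
        return input.ToUpperInvariant();
    }
}

The cold-start delay you mention is mostly the time it takes to create that execution environment and run the initialization code, not any per-call package installation.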
I understand that serverful is like my computer running code. How can I describe serverless using the same analogy?
That's not going to be a perfect explanation, but hopefully it fits your analogy. Imagine that your computer is asleep, but there is another computer that can receive requests and wake yours up whenever a new one arrives, so that your code can run on it. After the code finishes running, your computer goes back to sleep. But instead of a single computer, there are many of them, and they can be woken up in a matter of milliseconds. Hope that makes sense.
Related
This is a more general question about which types of payloads to host in a container. In our case we will use Service Fabric guest executables. For this post I will only use the word "container" to refer to both; they have similar properties, and I think more people understand a container than an SF guest executable.
WebAPIs/services that need to scale are a good fit for containers, but this question is about what we call a "batch" job. The nomenclature comes from the old .bat files, but in our case we are using .NET Framework or Core .exe console apps.
Currently, Windows Task Scheduler kicks off the batch, running under a service account on a VM. We want the processing to happen at a certain time of day or day of the week, not before or after. There is no real scaling here: there is one instance, which may or may not be multithreaded, and on average the jobs run between 2 and 15 minutes and then stop; some run longer, some shorter. I understand there are limitations to this approach, but this is the type of payload I'm discussing here.
As we modernize the technology stack we are looking to use the orchestrator as much as possible. As a technologist I've always tried to understand the different tools in our tool belt and to use the correct tool for the task, rather than whichever one I used last.
We started out by not writing any more .NET console apps. Instead we put the business logic of these "batches" into WebAPIs and had the task scheduler call the API when it needed to perform its action. If I put this into Service Fabric and host it, my concern is that system resources are consumed for the 23 hours and 45 minutes a day when they are not being used. That seems to be the opposite of what you would expect when using a container.
Now, if I could spin up a Service Fabric guest exe/container on demand and destroy the instance of the app after it finishes, that could fit the need. I would get the benefits of the orchestrator without the detriment of having it consume resources all the time. I would hope to retire the batch server (VM), since its hardware usage is not optimized, and instead add resources to the cluster.
UPDATE
Looking at Vaclav's scalability documentation, I think there might be a use case in here: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-concepts-scalability He uses a "Workload Manager Service" combined with CreateServiceAsync to spin up an instance of the service on demand. I guess I would deploy the app to the image store but not create an instance of it until needed. Then I need to figure out how to end it; is it as simple as changing the infinite loop in Program.cs? The thing is, it doesn't look like there is a Program.cs in a guest executable.
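A rough sketch of what that on-demand pattern could look like from a workload manager service, assuming the application type is already provisioned in the image store (all names here are placeholders, and the wait/cleanup logic is elided):

using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class WorkloadManager
{
    // Creates a named instance of the batch service when a request comes in,
    // then deletes it again once the batch has finished.
    static async Task RunBatchAsync()
    {
        var fabric = new FabricClient();
        var serviceName = new Uri("fabric:/BatchApp/BatchWorker" + Guid.NewGuid().ToString("N"));

        var description = new StatelessServiceDescription
        {
            ApplicationName = new Uri("fabric:/BatchApp"),
            ServiceName = serviceName,
            ServiceTypeName = "BatchWorkerType",
            InstanceCount = 1,
            PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
        };

        await fabric.ServiceManager.CreateServiceAsync(description);

        // ... wait for the batch to report completion (e.g. via a queue or a status table) ...

        await fabric.ServiceManager.DeleteServiceAsync(new DeleteServiceDescription(serviceName));
    }
}

Because each request gets its own service name, repeated or concurrent executions simply become separate short-lived service instances.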
This looks like a way to run a package until completion, which was released as part of 7.1. But how do we start a second execution of the service? I want to execute based on a request coming in.
https://learn.microsoft.com/en-us/azure/service-fabric/run-to-completion
Thoughts?
I can't quite explain the problem, because I myself do not understand it. I'd appreciate getting help with defining/locating/dealing with the issue.
The Setup
I have a Win10 VM that tests are run against, and a Jenkins VM (Windows Server 2008) that runs those tests.
I am using a testing app called JSystem. Sadly, it does not officially support Windows 10, as it uses Telnet (which was removed from Windows 10) to communicate with target SUTs, so I had to create a way to use WinRM to communicate with that type of VM instead.
The Problem
The gist of it is that at some point in time, the test on Jenkins just 'freezes'. The connection is still in the 'established' state, and the VMs (host and client) are still working. It does not happen every time, and it might happen a few minutes after the testing started or a couple of hours in. The test that causes it is almost never the same, but naturally it happens when there is some form of communication between the SUT and the testing VM. It can be a file transfer or a simple command like "dir", and it can happen while sending the request or while sending the result back.
More Information
I did gather some more information that might help.
I did not see it happen when I ran the tests from my own development environment (that is, not using Jenkins as a medium). However, that might just be because I was unlucky and did not try enough times. My own environment is also Windows 10, but not a VM.
Looking at the event viewer on the SUT, there was a "Time-Service" warning (event ID 50, an NtpClient time-sync issue) one minute after the freeze happened. The Jenkins VM, however, had no events at all. That said, the event repeats itself a lot on the SUT and does not always freeze the test, but it is possible it causes interference if it happens during a communication attempt between the VMs.
I can still connect to the SUT with WinRM just fine from other sources, and it responds as well.
Rather than frozen, it is more like the SUT is waiting for a request from Jenkins while Jenkins is waiting for a response from the SUT. The weird thing, however, is that these tests normally have a timeout of 30-60 seconds; they should not wait longer than that (unless configured otherwise in the test, of course) before failing the test step.
I can't be sure if this has anything to do with it, but I do have time sync issues between VMs. I've asked in another question about how to solve it, so if that's the issue in your opinion, please let me know, especially if you have a solution.
What is a good way to approach this?
I have written several services in Delphi, and I want to add the facility of auto-updating the service from either a LAN UNC path or an HTTP server. I have been pondering this and I am interested to hear people's ideas. I can create a thread that will check for the update periodically, but how do I go about stopping the service and uninstalling/installing it automatically? My initial thought was to write a console app to do this and start it using CreateProcess, then let the service stop while the console app does the work, starting the new version of the service before it exits. Is this a good strategy, or should I consider something else? Thanks in advance.
I do as you suggest. A thread checks occasionally for an update. If it is present, it downloads it and puts it into an appropriate place. It then verifies that it is wholesome (don't want it to be broken!). Finally, the thread then launches another app with parameters to tell it what to do, specifically, the name of the service, the location of the file to replace, and the file to replace it with. Then the service just waits.
When the updater app starts, it pauses a moment to make sure that the service is all stable, and then it uses the service control API to stop the service. It then monitors it until it is gone. Finally, it pauses a little to ensure that Windows has really finished with the file. Then it starts the process of renaming the old file to move it out of the way (if still in use, it retries a few times), and then copying the new file into place. And finally, it starts the service up again. Then the updater quits.
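For what it is worth, the stop/replace/start sequence above maps fairly directly onto the service control API. Here is a rough sketch of the updater's core, written in C# rather than Delphi (so treat it purely as an illustration of the sequence; the argument layout and retry counts are made up):

using System;
using System.IO;
using System.ServiceProcess;
using System.Threading;

class Updater
{
    static void Main(string[] args)
    {
        // args: service name, file to replace, replacement file (hypothetical layout)
        string serviceName = args[0], target = args[1], replacement = args[2];

        Thread.Sleep(2000); // give the service a moment to settle after launching us

        using (var svc = new ServiceController(serviceName))
        {
            svc.Stop();
            svc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(60));
        }

        Thread.Sleep(2000); // let Windows really release its hold on the file

        // Move the old binary out of the way, retrying if it is still in use.
        string backup = target + ".old";
        for (int attempt = 0; attempt < 5; attempt++)
        {
            try
            {
                if (File.Exists(backup)) File.Delete(backup);
                File.Move(target, backup);
                break;
            }
            catch (IOException) { Thread.Sleep(1000); }
        }

        File.Copy(replacement, target, overwrite: true);

        using (var svc = new ServiceController(serviceName))
        {
            svc.Start();
        }
    }
}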
This has worked quite reliably for my services, and also standalone apps too (with different parameters for the updater app to know which mode). And if you are careful, you can update the updater using the exact same system, which is nice to watch.
I would have the service be a shell that only updates another executable or DLL file where the real code lives.
Have some communication method between the shell and the child process to force a shutdown and then have the shell perform the upgrade and relaunch the child.
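One possible form of that shell-to-child communication, sketched in C# with a named event (the paths and event name are invented; a pipe or control file would work just as well):

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

// Minimal sketch of the "shell" side: it launches the real worker process and,
// when an update has been downloaded and verified, signals it to exit via a
// named event, swaps the executable, and relaunches it.
class ShellUpdater
{
    const string ChildExe = @"C:\MyService\Worker.exe";        // placeholder path
    const string PendingUpdate = @"C:\MyService\Worker.exe.new"; // placeholder path

    static void Main()
    {
        using var shutdownSignal = new EventWaitHandle(
            false, EventResetMode.ManualReset, @"Global\MyServiceWorkerShutdown");

        var child = Process.Start(ChildExe);

        // ... later, when an update has been verified and is waiting on disk ...
        shutdownSignal.Set();              // the child waits on this handle and exits cleanly
        child.WaitForExit(30000);
        if (!child.HasExited) child.Kill();

        File.Copy(PendingUpdate, ChildExe, overwrite: true);
        child = Process.Start(ChildExe);   // relaunch the updated worker
    }
}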
As a side note, this also makes debugging the service much easier, since you can run the child process directly without having to worry about the extra effort required to debug Windows services.
Your idea seems very good to me; however, take this into consideration as well:
- Add a module (the main core) to the service that can be unloaded and that loads the updated module (*.dll file) when an update is available; during that time the service should put its "tasks" in a queue or something similar.
Additionally, you can use plugins and/or scripts such as Pascal Script or DWScript.
Recent versions of Windows (I think since Windows 10) do not allow a service to start other programs. So you will need another program to run the update; it could be another service.
Windows Services cannot start additional applications because they are not running in the context of any particular user. Unlike regular Windows applications, services are now run in an isolated session and are prohibited from interacting with a user or the desktop.
This probably sounds crazy, but it's a real problem: I have an ISV-provided Windows service that I cannot change. There's a bug in the service where it doesn't "clean up" some data that it should upon startup.
As a workaround, until the vendor can fix the bug, I would like to cause another process or script to always run just before this problem service starts.
For example: I could create a second "monitor" service that is tied to the problem service with a service dependency. The second service would perform this workaround/cleanup before the problem service is allowed to start. But that seems like a sledgehammer of a solution to a simple problem. Anyone else have ideas for a simpler solution?
The workaround code is trivial and could live, for example, in a PowerShell script.
Create a new service that does what you need, then force a dependency on it.
You should check out our Service Protector application which can run a pre-startup script before starting another service. It too may be overkill, but sometimes it is better to purchase a targeted utility rather than investing your programming time in a one-off/throw-away solution.
In any case, your solution of writing another service and enforcing a dependency should do the trick, provided that your new service does not declare itself as "Started" until after it has completed its cleanup work. Otherwise, Windows may start your real service too soon.
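For example, a minimal cleanup service might look like the C# sketch below (the names are placeholders). Because the SCM only reports the service as Running once OnStart returns, the dependent vendor service cannot be started until the cleanup has finished; the dependency itself can be added with something like sc config VendorService depend= PreStartCleanup.

using System.ServiceProcess;

public class CleanupService : ServiceBase
{
    public CleanupService()
    {
        ServiceName = "PreStartCleanup"; // placeholder name
    }

    protected override void OnStart(string[] args)
    {
        // Ask the SCM for extra time in case the cleanup is slow, then do the
        // work synchronously. The service is not reported as Running (and the
        // dependent service cannot start) until this method returns.
        RequestAdditionalTime(60000);
        CleanUpStaleData();
    }

    private void CleanUpStaleData()
    {
        // ... delete/reset whatever the vendor service fails to clean up ...
    }

    public static void Main()
    {
        ServiceBase.Run(new CleanupService());
    }
}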
Good luck.
I'm writing a simple Windows service that sends out emails to all employees every month. My question is: how does it stop itself when it's done? I'm a newbie in this field, so please help me out. Really appreciated.
It will be deployed on the server and run monthly. I did not start this project; the code was given to me as is. It is written in VB.NET, and I'm now asked to change a few things around it. I noticed that there is only a 'Sub OnStart' and wondered when the service would stop. After the main sub is done, what is the status of this service? Is it stopped or just hanging there? Sorry, as I said, I am really new to this.
If you have a task that recurs monthly, you may be better off writing a console app and using Windows Task Scheduler to run it monthly. A service should be used for processes that need to run for a long time or constantly, with or without a user logged on.
As every other answer has noted, it sounds like this should be an executable or script that you run as a scheduled task.
However, if you are obligated for some reason to run as a Windows Service and you're working in .NET, you just have to call the Stop() method inherited from ServiceBase once your service completes its work. From the MSDN documentation for the method:
The Stop method sets the service state to indicate a stop is pending and calls the OnStop method. After the application is stopped, the service state is set to stopped. If the application is a hosted service, the application domain is unloaded.
There's one important caveat here: the user account under which the service is running must have permission to stop services (which is a topic for ServerFault).
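In code, that just means kicking the work off from OnStart and calling Stop when it finishes. A minimal C# sketch (the original project is VB.NET and the names here are invented; the permission caveat above still applies):

using System.ServiceProcess;
using System.Threading.Tasks;

public class MonthlyMailService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Do the work on a background task so OnStart returns promptly,
        // then stop the service once the work is finished.
        Task.Run(() =>
        {
            SendMonthlyEmails();
            Stop(); // sets the state to stop-pending and calls OnStop
        });
    }

    private void SendMonthlyEmails()
    {
        // ... send the emails to all employees ...
    }

    protected override void OnStop() { }

    public static void Main() => ServiceBase.Run(new MonthlyMailService());
}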
Once a service's OnStart method completes, it will continue running (doing nothing) until something tells it to stop in one of the following ways:
- Programmatically, by calling Stop within the service itself or from an external process using the method Colin Gravill describes in his answer.
- Via the command line.
- Through the Windows Computer Management console's "Services" panel.
If this is a Win32 service (i.e. written in C or C++), then you simply call SetServiceStatus(SERVICE_STOPPED) and return from ServiceMain.
On the other hand, if you're just sending emails once a month, why are you using a service at all? Use the Windows Task Scheduler and run a normal application or script.
Running net stop [service_name] on the command line will do it too.
But, I agree with everyone else; it seems that Windows Task Scheduler will meet your needs better.
It might be better to write this as a scheduled task, it would certainly be easier to develop initially. Then it would naturally terminate and wouldn't be consuming resources for the rest of the month.
To answer the original question, you can get a list of the currently running services in C#:

var services = System.ServiceProcess.ServiceController.GetServices();

Then look for the one you want (the name below is just a placeholder) and stop it:

var locatedService = services.First(s => s.ServiceName == "MyMonthlyMailer");
if (locatedService.Status != ServiceControllerStatus.Stopped)
    locatedService.Stop();

There is a full example on MSDN.
Is there a reason it has to be a Windows service? If not, then follow Macros' solution. However, if it does, then why stop the service? If you stop it, it will just have to be restarted when the emails need to be sent. Based on your description, it doesn't sound like it would require a lot of resources, so I'd suggest just installing it and letting it run, firing once a month to send the emails.
Here's what I did in a similar situation. A Windows service runs 24/7 and processes work units; it gets the work units through a database view.
table Message
    ProcessingStartTime
    CompletionDTE
    ...
The database view only pulls records that are marked not-complete and that have a ProcessingStartTime in the past. After the service confirms the transaction, it executes a stored procedure that updates the database record. For this system, end users upload Excel files to an ASP.NET WebForm that imports them into the database.
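Roughly, the polling side of that service might look like the C# sketch below (the connection string, view, stored-procedure, and column names are invented; the real schema is the Message table sketched above):

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Rough sketch of the polling loop described above.
class WorkUnitProcessor
{
    const string ConnectionString = "Server=.;Database=Messages;Integrated Security=true";

    static void ProcessPendingWork()
    {
        using var conn = new SqlConnection(ConnectionString);
        conn.Open();

        // The view returns only rows that are not complete and whose
        // ProcessingStartTime is already in the past.
        using var select = new SqlCommand("SELECT Id FROM dbo.PendingMessages", conn);
        using var reader = select.ExecuteReader();
        var ids = new List<int>();
        while (reader.Read()) ids.Add(reader.GetInt32(0));
        reader.Close();

        foreach (var id in ids)
        {
            // ... do the actual work for this unit ...

            // Mark the record complete via a stored procedure once the work is confirmed.
            using var complete = new SqlCommand("dbo.MarkMessageComplete", conn)
            {
                CommandType = CommandType.StoredProcedure
            };
            complete.Parameters.AddWithValue("@Id", id);
            complete.ExecuteNonQuery();
        }
    }
}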