Need Help - Windows service or other - windows-services

I have to develop an application in .NET that calls an external service; the service returns some data, which will be stored on the server.
This application should run on a weekly or monthly basis and should also log the success or failure of the batch. For logging I don't have to use a database.
Please suggest an approach.

I'd say that having an application that is run on a schedule by the Windows Task Scheduler would be more resource-effective than having your own service that hangs around for weeks just waiting for the time to do its job.
More info:
About Task Scheduler
Task scheduler API
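
For illustration, a minimal sketch of such a scheduled job (shown in Java rather than .NET, since the shape is the same; the service URL, output file, and log file name are placeholders): a plain console program that calls the external service, stores the returned data, and appends the outcome of each batch to a text log.

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.time.LocalDateTime;

// Minimal batch job: fetch data from an external service, store it on the
// server, and log success/failure to a plain text file (no database needed).
public class WeeklyBatchJob {
    public static void main(String[] args) {
        Path logFile = Paths.get("batch.log");
        String outcome;
        try {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/api/data")).build(); // placeholder URL
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            Files.writeString(Paths.get("data.json"), response.body());  // store the payload
            outcome = LocalDateTime.now() + " SUCCESS status=" + response.statusCode();
        } catch (Exception e) {
            outcome = LocalDateTime.now() + " FAILURE " + e.getMessage();
        }
        try {
            Files.writeString(logFile, outcome + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            e.printStackTrace(); // last resort if even the log file is unwritable
        }
    }
}

Register the executable with Task Scheduler on a weekly or monthly trigger; nothing stays resident between runs, and the log file records each batch's success or failure without a database.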

Related

Enabling Scheduler for Spring Cloud Data Flow server in PCF

We are using PCF to run our applications. To build data pipelines, we thought of leveraging the Spring Cloud Data Flow server, which is offered as a service inside PCF.
We created a Data Flow server by providing the SQL Server and Maven repo details; for the scheduler, we didn't provide any extra parameters while creating the service, so it is disabled by default.
I got some info here on how to enable the scheduler: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_enabling_scheduling
So I tried updating the existing Data Flow service with the command below:
cf update-service my-service -c '{"spring.cloud.dataflow.features.schedules-enabled":true}'
The Data Flow server restarted, but the scheduler is still not enabled to schedule jobs.
When I check the GET /about endpoint of the Data Flow server, the response body still contains
"schedulesEnabled": false
I am not sure why the SCDF service isn't updated with the schedules-enabled property even after you update the service (it is expected to be enabled then).
Irrespective of that, you can try setting the following as an environment property on the SCDF service instance as well:
SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED: true
Once scheduling is enabled, you need to make sure that the following properties are set correctly as well:
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: <all-the-services-for-tasks-along-with-the-scheduler-service-instance>
SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL: <scheduler-url>
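
As a sketch (the service instance name and values are placeholders, and the exact keys accepted depend on your SCDF-for-PCF version), all three settings can also be passed together in a single update-service call:

cf update-service my-service -c '{"spring.cloud.dataflow.features.schedules-enabled": true, "spring.cloud.deployer.cloudfoundry.task-services": "my-sql,my-scheduler", "spring.cloud.scheduler.cloudfoundry.scheduler-url": "https://scheduler.sys.example.com"}'

Run cf services afterwards to confirm the update has finished before re-checking GET /about.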

How to debug/log errors on a production service worker installation

We have been using a service worker on our mobile web app for some time now.
We use Sentry as our event logging tool.
We are getting a lot of errors of the type:
Cannot update a null/nonexistent service worker registration
Error: AbortError: Failed to update a ServiceWorker for scope ('https://www.some.production.domain/') with script ('https://www.some.production.domain/sw.js'): Timed out while trying to start the Service Worker.
And so:
Is there a standard way to know why these errors happen and whether we should be worried about them?
Or even to get more details, to try to figure out why they occur in an apparently random way?

Performance monitoring of a production site using shell script and Selenium WebDriver

I will briefly explain what I am trying to do here. I need to periodically check the response time of my site by logging into the system and noting the time it takes to load the welcome page.
I am doing this using Selenium WebDriver and Java. I currently measure the response time with org.apache.commons.lang3.time.StopWatch, which starts when the user hits the login button and stops when the welcome page has rendered completely. I check whether this response time is above a threshold level and send a mail alerting the admin in case of a slow response.
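
A minimal sketch of that measurement (assuming Selenium 4 and commons-lang3; the URL, element locators, and threshold are placeholders), where "rendered completely" is approximated by waiting for an element unique to the welcome page:

import java.time.Duration;
import org.apache.commons.lang3.time.StopWatch;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class LoginResponseTimeCheck {
    private static final long THRESHOLD_MS = 5000; // alert threshold (placeholder)

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com/login");                  // placeholder URL
            driver.findElement(By.id("username")).sendKeys("user");   // placeholder locators
            driver.findElement(By.id("password")).sendKeys("secret");

            StopWatch watch = StopWatch.createStarted();
            driver.findElement(By.id("loginButton")).click();
            // Stop timing once an element unique to the welcome page is visible.
            new WebDriverWait(driver, Duration.ofSeconds(60)).until(
                    ExpectedConditions.visibilityOfElementLocated(By.id("welcomeBanner")));
            watch.stop();

            long elapsedMs = watch.getTime();
            System.out.println("Welcome page loaded in " + elapsedMs + " ms");
            if (elapsedMs > THRESHOLD_MS) {
                // send the alert mail to the admin here
            }
        } finally {
            driver.quit();
        }
    }
}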
Currently, I have created an executable JAR file which opens the web browser using Selenium WebDriver and checks the response time. I have also created a job in Jenkins, using DOS commands, which runs periodically on a cron schedule. I'm doing this on my Windows 7 PC, and Jenkins is installed on my localhost. The scheduled job builds periodically on Jenkins, but I can't see any activity such as the browser opening and the further steps explained above. It runs perfectly when I use the Windows scheduler to execute the batch file. My ultimate goal is to run the Selenium WebDriver tests on a Linux system via Jenkins, with the Jenkins server installed on a Linux machine.
Any help would be great! Also let me know if anybody wants to see the code.

Worker "dyno" in AWS Elastic Beanstalk

Amazon Web Services now has a worker tier in Elastic Beanstalk. But it nevertheless confuses those of us who come from the days of the worker dyno.
As a comparison: in Heroku, one can configure two dynos (something like a processor?), one each for web and worker. The web dyno serves any request, and normally times out at 30 seconds. Thus, if you have a request that lasts longer than that, it will simply time out, although not be terminated per se. In that case, you should use a worker, and your web dyno should poll an endpoint several times per minute (maybe) to check whether there is any result to bring back to the user. To make either a worker or a web dyno, all you need to do is move a slider and you are good to go. Sometimes you may need a Procfile. But there is nothing fancy, really difficult, or confusing about it.
In AWS EB (Elastic Beanstalk), from day 1, when you run eb init you are asked whether the environment is Standard or Worker. Once you pick Standard, there seems to be no way to make it act as a worker as well.
In our situation, the worker and the standard web code live under one application. So how can we use an EB instance both as a worker and as a standard web server? Our worker uses Sidekiq and Redis. Please point us to any guidance or help us with this matter.
AWS Elastic Beanstalk has two types of environments: web tier and worker tier.
Web tier environments are meant for web applications, i.e. HTTP/HTTPS request processing. You get one or more EC2 instances behind a load balancer. You can add other resources, like a database, per your requirements. You can choose whichever platform you wish, e.g. Ruby, Python, Java, Node.js, PHP, Docker.
Worker environments are meant for asynchronous message processing. When you create a worker environment you do not get a load balancer; all your EC2 instances are in an Auto Scaling group, and each instance runs a daemon that polls a single SQS queue for messages. When the daemon pulls a message from the SQS queue, it sends an HTTP POST request to localhost:80. You can configure the port, but the important thing is that the daemon posts the message as an HTTP request on localhost. Your worker application is actually a web application that receives the POST request and processes the message. After the message is successfully processed, the worker daemon expects your application running on localhost to return an HTTP 200 OK response; the daemon then deletes the message from the SQS queue. You can write your worker application for any platform, just like standard web server applications: Ruby, Python, Java, Node.js, PHP, Docker.
Based on my understanding of your use case, I would recommend creating two Elastic Beanstalk environments: one standard web server environment and one worker environment. The web server receives HTTP requests and processes them synchronously, putting the relevant data in an SQS queue. The second environment is a worker, and the daemon running on it polls this SQS queue for messages. Your second environment is a web application that is NOT open to the internet; the worker daemon posts the messages as HTTP requests to it. Thus you can process long-running workloads asynchronously in this second worker environment.
With worker environments you can use your own queue or let Elastic Beanstalk generate one for you. You can configure parameters like the message visibility timeout and HTTP connections based on your requirements, or use the defaults.
Below are some links that may be useful for you:
http://aws.amazon.com/blogs/aws/background-task-handling-for-aws-elastic-beanstalk/
http://blogs.aws.amazon.com/application-management/post/Tx1Y8QSQRL1KQZC/Elastic-Beanstalk-Video-Tutorial-Worker-Tier
https://stackoverflow.com/a/23942498/161628
Does this meet your requirements? Please let me know if you have further questions.
Update
You need to upload your source code in two places: once for the worker environment and once for the web server environment. Someone starting from scratch might have two separate code bases, but in your case it should be perfectly fine to have a single code base shared between the two environments. Suppose your web request arrives at '/register'; the register() method in your application can post a message to an SQS queue and be done with the HTTP request. Your worker environment then polls the SQS queue and posts messages over HTTP on localhost to the URL '/async_register', which invokes a method async_register() in your application and does the asynchronous processing. These two methods can live in the same source bundle, shared by both environments; the code paths differ in that web server environments invoke register() and worker environments invoke async_register().
Another caveat: HTTP requests sent by the worker daemon on localhost contain the HTTP header "User-Agent": "aws-sqsd/1.1". Read more here. So in your web application you can have a single listener for POST requests on "/register" and, depending on whether this header is present, invoke register() or async_register() internally.
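A sketch of that single-listener idea (using the JDK's built-in HTTP server only for brevity; register() and asyncRegister() are illustrative stubs, and in your case the real application would be your Rails/Sidekiq app):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class RegisterEndpoint {
    public static void main(String[] args) throws Exception {
        // Port 80 is the worker daemon's default target; it is configurable.
        HttpServer server = HttpServer.create(new InetSocketAddress(80), 0);
        server.createContext("/register", exchange -> {
            String userAgent = exchange.getRequestHeaders().getFirst("User-Agent");
            String body;
            if (userAgent != null && userAgent.startsWith("aws-sqsd")) {
                body = asyncRegister(); // worker environment: process the SQS message
            } else {
                body = register();      // web environment: enqueue and return quickly
            }
            byte[] bytes = body.getBytes();
            // A 200 OK tells the worker daemon to delete the message from the queue.
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }

    static String register()      { return "queued"; }    // would post to SQS
    static String asyncRegister() { return "processed"; } // would do the long-running work
}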
Also, if you want to share the code base between the two environments, you can upload it in only one place. Environments are logically grouped into applications, so you can have a single application. You upload your source code to this application using the "CreateApplicationVersion" API call. Suppose you upload an application version with the label "v1". You can now create a worker environment and a web server environment under the same application; when you create an environment you provide the version to deploy, and in this case you can deploy "v1" to both. So you will be sharing the same source code between both environments. When you have a new version, "v2", you upload it and then update both environments to version "v2".
The same version of the source code can be deployed to both environments. They will run on different EC2 instances, because one environment is dedicated to responding to web requests and the other to asynchronous requests from the worker daemon.

How can I connect to a remote server for CPU process time monitoring?

I want to connect to a remote server to monitor the CPU process time while I run a stress test.
But it always fails. What can I do to successfully connect to the remote server?
If you are using Linux, you can SSH into the remote server, knowing its hostname and IP, as explained here.
You also need to know the root password of the remote server.
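For example (the user and hostname are placeholders), you can take a one-off CPU snapshot over SSH without an interactive session:

ssh root@remote-host 'top -b -n 1 | head -n 20'

Running this repeatedly during the stress test gives you a rough timeline of per-process CPU usage.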
To check the CPU process time, memory, etc. during the stress test, you can use SeaLion.
It allows you to monitor the output of commands like top and free -m on a graphical interface, so you don't have to connect to the remote server every time you run your test.
There is also New Relic, which is extremely feature-rich and provides much functionality, such as graphing.
