How to add bitten-slave as a Windows service

As part of setting up continuous integration with Bitten, I would like to set up some bitten-slaves on Windows. However, the Bitten documentation lacks instructions on how to register a Bitten slave as a service.
Following Microsoft's documentation on how to create a Windows service by using Sc.exe, I've tried the following:
sc create bitten-slave binPath= "C:\Python26\Scripts\bitten-slave.exe --verbose
--log=C:\dev\bitten.log http://svn/cgi-bin/trac.cgi/builds"
The service was indeed created. But trying to start it, I get the following error:
The bitten-slave service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
What am I doing wrong?

An arbitrary program can't simply be run as a Windows service; the application needs to be written specifically to talk to the service controller.
In particular, a program that wants to run as a service must handle the start, stop, and pause messages sent by the Service Control Manager.
However, Microsoft does provide a generic service wrapper, SRVANY, which can be used to run an arbitrary program as a service. I use SRVANY to run several Python scripts as services, so it should work for bitten-slave as well.
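As a rough sketch (the srvany.exe path is an assumption; the tool ships with the Windows Server 2003 Resource Kit Tools, and you'd first run sc delete bitten-slave to remove the earlier attempt), wrapping bitten-slave with SRVANY would look something like this:
sc create bitten-slave binPath= "C:\Tools\srvany.exe" start= auto
reg add "HKLM\SYSTEM\CurrentControlSet\Services\bitten-slave\Parameters" /v Application /t REG_SZ /d "C:\Python26\Scripts\bitten-slave.exe"
reg add "HKLM\SYSTEM\CurrentControlSet\Services\bitten-slave\Parameters" /v AppParameters /t REG_SZ /d "--verbose --log=C:\dev\bitten.log http://svn/cgi-bin/trac.cgi/builds"
sc start bitten-slave
SRVANY itself implements the service protocol and simply launches whatever the Application value under the service's Parameters key points at. Be aware that SRVANY does not monitor the child process, so if bitten-slave crashes the service will still report itself as running.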

This page on the Bitten wiki describes a simple Python script that can be configured as a scheduled task to ensure the Bitten slave is kept running.
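As an illustrative sketch (the script path and interval are placeholders, not taken from the Bitten wiki), such a watchdog script could be registered with the built-in schtasks tool so that it runs every few minutes:
schtasks /create /tn "bitten-slave-watchdog" /tr "C:\Python26\python.exe C:\scripts\bitten_slave_watchdog.py" /sc minute /mo 5
The script then only needs to check whether a bitten-slave process is running and start one if it is not.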

Related

How to use TestCafe-Cucumber Node.js project in DevOps deployments

I have a test framework running locally (and in Git) that is based on the TestCafe-Cucumber (Node.js) example https://github.com/rquellh/testcafe-cucumber, and it works really well.
Now I am trying to use this framework in the deployment (post-deployment) cycle by hosting it as a service or creating a Docker container.
The framework executes through the CLI command (npm test) with a few parameters.
I know the easiest way is to call the Git repo directly as and when required by adding a Jenkins step; however, that is not the solution I am looking for.
So far, I have successfully built the Docker image, and the container now runs on my localhost on port 8085 as http://0.0.0.0:8085 (although I only get a DNS/server error in the browser, since it's not a web app - please correct me if I am wrong here).
The concern here is: how can I host it like an app so that, once the deployment completes, Jenkins/Octopus can call it as a service through the URL (http://0.0.0.0:8085), passing the few parameters the framework uses to execute the test cases?
Any suggestions from anyone who has solved this would be much appreciated.
I guess there is no production-ready application or service to solve this task.
However, you can use a REST framework to handle network requests and subprocesses to start test sessions. If you like Node.js, you can start with the Express framework and the execa module.
This way you can build a basic service that can start your tests. If you need a more flexible solution, you can take a look at gherkin-testcafe, which provides access to TestCafe's API. You can use it instead of starting TestCafe as a subprocess, since this way you will have more options for managing your test sessions.
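To make the idea concrete, once such a wrapper service is listening on port 8085, a Jenkins or Octopus step could trigger a run with a plain HTTP call. The endpoint name and JSON fields below are hypothetical; they would be whatever you define in your Express routes:
curl -X POST "http://your-test-host:8085/run-tests" -H "Content-Type: application/json" -d "{\"tags\": \"@smoke\", \"browser\": \"chrome:headless\"}"
Inside the handler you would map those fields onto the arguments you currently pass to npm test and start the run as a subprocess (for example with execa).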

Access rails console of an app deployed in Google Cloud Run

We deployed a Rails app in Google Cloud Run using their managed platform. The app is working fine and is able to serve requests.
Now we want to get access to the Rails console of the deployed app. Can anyone suggest a way to achieve this?
I'm aware that, currently, Cloud Run supports only HTTP requests. If there is no other way, I'll have to consider something like the Rails web console.
I think you cannot.
I'm familiar with Cloud Run but I'm not familiar with rails.
I assume you'd need to be able to shell into a container in order to be able to run IRB. Generally, you'd do this by asking the runtime (Docker Engine, Kubernetes, Cloud Run) to connect you to the container so that you could do this.
Cloud Run does not appear to permit this. I think it's a potentially useful feature request for the service. For those containers that include a shell, this would be the equivalent of GCE's gcloud compute ssh.
Importantly, your app may be serviced by multiple, load-balanced containers and so you'd want to be able to console into any of these.
However, you may wish to consider alternatives mechanisms for managing your app: monitoring, logging, trace etc. These mechanisms should provide you with sufficient insight into your app's state. Errant container instances should be terminated.
This follows the concept of "pets vs. cattle": instead of nurturing individual containers (is this one failing?), you look after the containers holistically (is the service comprising many containers failing?).
For completeness, if you think that there's an issue with a container image that you're unable to resolve through other means, you could run the image elsewhere (e.g. locally) where you can use IRB. Since the same container image will behave consistently wherever it's run, you should be able to observe the issue using IRB locally too.
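For example (the image name, tag, and environment variables are placeholders, you need any credentials the app expects available locally, and the image's entrypoint must let you override the command), you could run the same image on your workstation and open the console there:
docker run -it --rm -e RAILS_ENV=production gcr.io/your-project/your-app:latest bin/rails console
Since it is the same image Cloud Run deploys, the code and gems you inspect locally match what is running in the service.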

How to interact with an already running instance via terminal in MongooseIM?

I am using MongooseIM 3.2.0, built from source, on an Ubuntu server. My concerns are:
What is the best way to run MongooseIM as a service so that it automatically restarts if MongooseIM crashes or the system restarts?
How do I interact via the terminal with an already running MongooseIM instance on the Ubuntu server, similar to "mongooseimctl live"? My guess is that running "mongooseimctl live" will try to create another instance. I just want to see the live logs and interactions, and I don't want to keep scrolling through long log files for this.
I apologize if the answer to the above is obvious; I just want to follow the best guidance.
mongooseimctl live or mongooseimctl foreground is mostly useful for development or smoke testing a deployment (unless you're running inside a container). For real world use cases you should start the server in the background with mongooseimctl start.
Back to containers: the best approach for containerised applications is to run them in the foreground, so in a container startup script use mongooseimctl foreground.
Once the server is running (no matter how it was started), you can attach a shell to troubleshoot issues with mongooseimctl debug. This is the command to use when you get the Protocol 'inet_tcp': the name mongooseim@localhost seems to be in use by another Erlang node error. Be careful if it's a production environment - you can easily take the server down with access to this shell.
If you're just interested in watching logs, with no interactive access to the server internals that the shell offers, a simple tail -f /your-configured-mongooseim-log-dir/* should be enough.
Ubuntu nowadays uses systemd for managing its services' lifetimes. A systemd .service file can be found at https://github.com/esl/MongooseIM/blob/master/tools/pkg/platforms/debian_stretch/files/build/mongooseim.service - we use it for packaging into Debian/Ubuntu .deb packages.
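As a sketch, on a source install you could copy that unit file into place and let systemd handle crashes and reboots (paths inside the unit, e.g. ExecStart, may need adjusting to match where you built MongooseIM):
sudo cp mongooseim.service /etc/systemd/system/mongooseim.service
sudo systemctl daemon-reload
sudo systemctl enable --now mongooseim
With a Restart= directive in the unit (add one such as Restart=on-failure if the packaged file lacks it), systemd restarts the server when it crashes, and enabling the unit makes it start on boot.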

Docker - inter-container script execution

Currently my web application is running on a server where all the services (nginx, php, etc.) are installed directly on the host system. Now I want to use Docker to separate these services into dedicated containers. Nginx and php-fpm are working fine. But the web application can also generate PDFs, which is done using wkhtmltopdf, and since I want to follow the single-service-per-container pattern, I want to add an additional container that houses wkhtmltopdf and takes care of this specific task.
The problem is: how can I do that? How can I call the wkhtmltopdf binary from the php-fpm container?
One solution is to share the Docker socket, but that is a big security flaw, so I'd really rather not do it.
So, is there any other way to achieve this? And isn't this kind of "microservice separation" one of the main purposes/goals of Docker?
Thanks for your help!
You can't directly call a binary in one container from another. ("Filesystem isolation" is also a main goal of Docker.)
In this particular case, you might consider "generate a PDF" to be an action your service performs rather than a separate service in itself, and so executing the binary as a subprocess is a means to an end. This doesn't even raise any complications: presumably wkhtmltopdf isn't a long-running process, so you'll launch it once per request and not respond until the subprocess runs to completion. I'd install or include it in the Dockerfile that packages your PHP application, and be architecturally content with that.
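As a minimal sketch, assuming a Debian-based php-fpm image, that could be a single extra line in its Dockerfile (some setups prefer the patched-Qt build from wkhtmltopdf.org over the distro package, but the idea is the same):
RUN apt-get update && apt-get install -y --no-install-recommends wkhtmltopdf && rm -rf /var/lib/apt/lists/*
Your PHP code then shells out to wkhtmltopdf exactly as it did on the old host.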
Otherwise the main communication between containers is via network I/O and so you'd have to wrap this process in a simple network protocol, probably a minimal HTTP service in your choice of language/framework. That's probably not worth it for this, but it's how you'd turn this binary into "a separate service" that you'd package and run as a separate container.

Easier way to start and stop windows services in Windows XP

I occasionally find myself starting and stopping multiple windows services. The only tool I'm aware of for stopping and starting windows services is the "Services" program under "Administrative Tools" (%SystemRoot%\system32\services.msc /s). This program seems to only allow you to manipulate one service at a time, often pausing while it waits for the service to stop. There is a "Close" button available, but I'd prefer to just select all the services I want to stop or start, and perform a single command on all of them at one time.
Is there an easier way to start and stop multiple windows services for Windows XP?
Use the "net start" and "net stop" commands in your cmd.exe to start and stop a service:
net start "Service name with space"
net stop ServiceNameWithoutSpaces
Be aware that you will need quotes if the service name has spaces.
It is possible to start/stop Windows services using command-line tools such as net start, net stop, and sc.exe, but as far as I know none of them lets you operate on more than one service at once.
The easiest solution is to invoke the command-line tool multiple times by specifying different service names in a batch file.
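For example, a stop-all.cmd batch file might simply be (the service names are placeholders):
net stop "Service One"
net stop "Service Two"
net stop ThirdServiceName
A matching start-all.cmd with net start lines does the reverse.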
Also note that the delay between issuing a stop command to a Windows service and the moment the process actually exits is because the Windows Service Control Manager waits up to 30 seconds for services to shut down properly. If a service doesn't exit by that time, a message will inform you that "the service did not respond in a timely fashion". More details can be found here.
You could use PowerShell.
Something like:
get-service -displayname "*SQL*" | stop-service
This stops all services with "SQL" in their display name (the wildcards are needed for a partial match).
http://www.microsoft.com/technet/scriptcenter/topics/msh/cmdlets/stop-service.mspx
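The matching Start-Service cmdlet works with the same kind of pipeline, for example:
get-service -displayname "*SQL*" | start-service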
What about the command line?
The net start and net stop commands are what you're looking for...
Try msconfig (go to the "Run" dialog, type "msconfig"). Choose the "services" tab.
You could write a command/batch script that uses the command-line service controller, sc.exe.
Alternatively, you could check out the SysInternals psservice.exe command-line tool.
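PsService can also control services on remote machines; as a sketch (the computer and service names are placeholders):
psservice \\buildserver stop spooler
psservice \\buildserver start spooler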
