How to run a single Windows service as multiple instances on the same server - windows-services

I have a Windows service, and I created different compilation symbols for it so it can run under multiple names. Now I want to install that same service on a single machine 3 times.
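For illustration only: rather than baking each name in with compilation symbols, a common pattern is to make the service name an install-time parameter, so one build can be registered several times under different names. A minimal C# sketch of such a ProjectInstaller, where the /servicename parameter and the default names are hypothetical:

using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    private readonly ServiceProcessInstaller processInstaller;
    private readonly ServiceInstaller serviceInstaller;

    public ProjectInstaller()
    {
        processInstaller = new ServiceProcessInstaller { Account = ServiceAccount.LocalSystem };
        serviceInstaller = new ServiceInstaller { ServiceName = "MyService" }; // default name
        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }

    public override void Install(IDictionary stateSaver)
    {
        // Hypothetical parameter, e.g.: installutil /servicename=MyService1 MyService.exe
        string name = Context.Parameters["servicename"];
        if (!string.IsNullOrEmpty(name))
        {
            serviceInstaller.ServiceName = name;
            serviceInstaller.DisplayName = name;
        }
        base.Install(stateSaver);
    }
}

Repeating the installutil call with three different names would register the same executable three times on the machine.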

Related

Several docker stacks with the same compose file but different ports

I would like to run several instances of a multi-container application at the same time using the same compose file. One of the containers in the application accepts websockets on a certain port.
I have an nginx proxy to forward different domains or locations to different instances of the application. The instances are actually different tenants using the application.
I would like to simply be able to run:
docker stack deploy -c docker-stack.yml tenant1
docker stack deploy -c docker-stack.yml tenant2
And somehow get different ports to the apps, which I then can use in the proxy to forward different websocket connections to different application instances, either using locations or virtual hosts.
So either:
ws://tenant1.mydomain.com
or
ws://mydomain.com/tenant1
How to configure the proxy to do this can surely be figured out. I've started to read a bit about https://github.com/jwilder/nginx-proxy, which seems nice. However, it requires that I set the virtual host name as an environment variable for each app instance, and I can't seem to find a way to pass arguments with my docker stack deploy command.
Ideally I would like to not care about the exact ports; they could even be random. But they need to somehow be known to the nginx proxy so it can forward to them. I want to easily be able to spin up a new app-instance (tenant) stack and just set up the proxy for that name (or, even better, have the proxy handle that automatically based on the app's name).
Bonus if both examples above work (both virtual host and location), since that would make it possible to test and develop without creating subdomains / new domains.
Suggestions?
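One sketch of how the stack file could be parameterized per tenant, assuming a Docker CLI that substitutes ${...} variables in the Compose file at deploy time (the service, image, and variable names here are hypothetical):

# docker-stack.yml (sketch)
version: "3.3"
services:
  app:
    image: myorg/myapp:latest      # hypothetical image
    ports:
      - "${WS_PORT}:3000"          # host port varies per tenant; container port stays fixed

Each tenant would then be deployed with its own port, e.g. WS_PORT=8001 docker stack deploy -c docker-stack.yml tenant1 and WS_PORT=8002 docker stack deploy -c docker-stack.yml tenant2. If the CLI in use does not substitute variables, the file can be pre-rendered with docker-compose config first.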

Linking many containers in Docker

Let's say I have a Java EE application which requires a database, and I would also like to use Apache.
Now, is it better to make a single image containing all three pieces, or three containers, one for each, connected via Docker networking (linking is deprecated, right?)?
You can also use the built-in Docker swarm mode. This gives you built-in encryption for passing your secrets around, such as the database login. There's an official Docker sample app that shows a Java Spring Boot app connecting to a database, with each service separated.
Docker is a lightweight solution for isolating applications. So if you have 3 different applications, you will almost always run those in 3 separate containers. Some of the advantages that gives you are:
The ability to independently scale each component
The ability to run components on different hosts
The ability to independently upgrade one component without impacting the others
The only time I merge application components into a single container is when they cannot communicate through a networking API, and they really need filesystem- and process-level integration between the parts.
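As a rough illustration of that three-container split for the Java EE + database + Apache case, a docker-compose.yml could look like the following sketch (image names, ports, and the choice of PostgreSQL are placeholders only):

version: "3"
services:
  web:                                # Apache as the front end / reverse proxy
    image: httpd:2.4
    ports:
      - "80:80"
    depends_on:
      - app
  app:                                # the Java EE application
    image: myorg/java-ee-app:latest   # hypothetical image
    environment:
      DB_HOST: db
    depends_on:
      - db
  db:                                 # the database (PostgreSQL shown only as an example)
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example

Each piece can then be scaled, upgraded, or relocated independently, which is exactly the advantage described above.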

How to deploy many instances of the same Docker image with unique database connection string

I am researching cloud deployments utilizing Docker containers. Our application will be utilizing Apache Tomcat and a PostgreSQL database.
My question is regarding best practices for configuring and maintaining images for multiple clients when deploying to clusters in the cloud.
We would like to use a single base image for many customers rather than maintaining an image per customer. This means a new context.xml (defines the database connection string for the Tomcat application) for each deployment of the image, as each customer will need to connect to their own database.
I know I can manually copy the context.xml file to the deployed container which is fine until we start to run these containers in a cluster with many replicas. This would require us to copy this connection string to each replica we create and repeat this process every time we update the container with a new version.
Is there a better solution to this problem of many containers running the same image but each group of containers requiring its own database connection string? Or is there a way to leverage container orchestration to update the context.xml file in all of the containers running the same instance at once?
PS. I have looked into using environment variables but the context.xml file is static and will not load values from these variables. At least that is my understanding.
How do you plan to keep track of the database connection strings? They need to reside somewhere...
Services are what allow you to communicate between Pods, and Services target Pods by selector: you can therefore have a Service targeting a cluster of the same Pod type (i.e. your DB).
If you have a DB per customer, you need a Service per customer.
Now the problem is to point each of the App Pods to the right Service.
When you deploy the Pod, you need to know where to point it to. You can pass extra environment variables to your Pod at launch time with the --env= flag
You can also add labels to Pods; if you want to really automate this, you will need some sort of DB or key-value store from which to retrieve the service name for a Pod: run a script at startup that looks up the Pod's label and fetches the DB connection string for it.
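A minimal sketch of that wiring in Kubernetes terms, where all names, labels, and the connection string are hypothetical: one Service per customer database, and the connection string handed to the app Pods as an environment variable that an entrypoint script could write into context.xml at startup.

apiVersion: v1
kind: Service
metadata:
  name: customer1-db
spec:
  selector:
    app: postgres
    customer: customer1          # label selecting that customer's DB Pods
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer1-app
spec:
  replicas: 3
  selector:
    matchLabels: { app: tomcat, customer: customer1 }
  template:
    metadata:
      labels: { app: tomcat, customer: customer1 }
    spec:
      containers:
        - name: app
          image: myorg/tomcat-app:latest                          # hypothetical image
          env:
            - name: DB_URL                                        # written into context.xml
              value: "jdbc:postgresql://customer1-db:5432/appdb"  # by a startup script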

How to setup Docker for a polyglot microservice-based application?

Working on a larger-than-usual project of mine, I am building a web application that will talk to several APIs of mine, each written in its own language. I use two databases: MariaDB and Dgraph (a graph database).
Here is my local directory structure:
services - all my services
    api - contains all my APIs
        auth - contains my user auth/signup API
            v1 - contains my current (only) API version
        trial - contains an API of mine called trial
        etc...
    application - contains the app users will interact with
    daemon - contains my programs that will run as daemons
    tools - contains tools (import data, scrapers, etc)
databases - to contain my two configs (MariaDB and Dgraph)
Because some components are written in PHP7-NGINX while others are in PYTHON-FLASK-NGINX, how can I do a proper Docker setup with that in mind? Each service, API, daemon, and tool is independent, and they all talk through their own REST endpoints.
Each has its own private github repository, and I want to be able to take each one and deploy it to its own server when needed.
I am new to Docker and all the reading I do confuses me: should I create a docker-compose.yml for each service or one for the entire project? But each service is deployed separately so how does docker-compose.yml know that?
Any pointers to a clean solution? Should I create a container for each service and in that container put NGINX, PHP or PYTHON, etc?
The usual approach is to put every independent component into a separate container. The general Docker idea is 1 container = 1 logical task. 1 task is not exactly 1 process; it's just the smallest independent unit.
So you would need to find 4 basic images (probably existing ones from Docker registry should fit):
PHP7-NGINX
PYTHON-FLASK-NGINX
MariaDB
Dgraph
You can use https://hub.docker.com/search/ to search for appropriate images.
Then create a custom Dockerfile for every component (taking either PHP7-NGINX or PYTHON-FLASK-NGINX as the parent image).
You probably would not need a custom Dockerfile for the databases. Typically database images just require mounting a config file into the container using the --volume option, or passing environment arguments (see the description of the base image for details).
After that, you can just write docker-compose.yml and define here how your images are linked and other parameters. That would look like https://github.com/wodby/docker4drupal/blob/master/docker-compose.yml .
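For this particular layout, a docker-compose.yml sketch might look roughly like this (service names, images, and build paths are illustrative, mirroring the directory structure above; ports and commands are omitted for brevity):

version: "3"
services:
  auth-api:                        # PHP7 + NGINX component
    build: ./services/api/auth
    depends_on:
      - mariadb
  trial-api:                       # Python + Flask + NGINX component
    build: ./services/api/trial
    depends_on:
      - dgraph
  mariadb:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./databases/mariadb:/etc/mysql/conf.d
  dgraph:
    image: dgraph/dgraph:latest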
By the way, GitHub is full of good examples of docker-compose.yml files.
If you are going to run services on different servers, then you can create a Swarm cluster, and run your docker-compose.yml against it: https://docs.docker.com/compose/swarm/ . After that, you can scale easily by deploying as many instances of each microservice as you need (that's why it's more useful to have separate images for every microservice).
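For example, with Swarm mode the same file can be deployed to the cluster and each microservice scaled on its own (the stack and service names below are hypothetical):

docker stack deploy -c docker-compose.yml mystack      # deploy all services to the Swarm
docker service scale mystack_trial-api=5               # scale one microservice independently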

Running multiple executables from a windows service

I would like to achieve the following. I have a C# server application which is run by a Windows Service. The service currently requires that the server application is located in a specific directory.
Is it possible to create a Windows Service that takes a directory at start and run the application in that directory? How do you do that?
Can such a "configurable" service be used to start multiple applications (executables with the same name but located in different directories)? This would be used to run different versions of a server application in parallel. Or do you need one service per running instance?
Yes, simply set the context to reflect the desired environment. To do this, use Environment.SetEnvironmentVariable.
A single service can start many applications, each with its own environment. Use a configuration file or persistent data in the registry.
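A minimal C# sketch of that idea, where a single service starts the same executable from several configured directories; the ServerDirectories setting and the Server.exe name are hypothetical:

using System.Configuration;      // requires a reference to System.Configuration
using System.Diagnostics;
using System.IO;
using System.ServiceProcess;

public class LauncherService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Hypothetical setting: a semicolon-separated list of directories,
        // each holding its own copy (or version) of Server.exe.
        string dirs = ConfigurationManager.AppSettings["ServerDirectories"];

        foreach (string dir in dirs.Split(';'))
        {
            var psi = new ProcessStartInfo
            {
                FileName = Path.Combine(dir, "Server.exe"), // same executable name in each directory
                WorkingDirectory = dir,                     // run each copy in its own directory
                UseShellExecute = false
            };
            Process.Start(psi);
        }
    }
}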
