One VPS, multiple services, different projects/domains - Docker

This is my first VPS, so I am pretty new to administering my own box. I already have experience with a managed web server, registrars, DNS settings, etc. - the basics. Now I'd like to take it a step further and manage my own VPS to run multiple services for different business and private projects.
So far I got a VPS from Contabo, updated the system, set up a new user with sudo rights, secured the root user, configured UFW, installed Nginx with server blocks for two domains and created SSL certificates for one domain using Certbot.
Before I go on with setting up my VPS, I'd like to verify that my approach for hosting multiple services for multiple domains makes sense and is a good way to go.
My goal is to host the following services on my VPS. Some of them will be used by all projects, some only by a single one:
static website hosting
dynamic website hosting with a lightweight CMS
send and receive emails
Nextcloud/Owncloud
Ghost blog
My current approach is to run all services except Nginx and the mail server in Docker, using Nginx as a reverse proxy to the services encapsulated in Docker containers.
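For example, this is the kind of server block I have in mind for one service, assuming a Ghost container published on a local port (domain and port are placeholders):
```nginx
# /etc/nginx/sites-available/blog.example.com (hypothetical names)
server {
    listen 80;
    server_name blog.example.com;

    location / {
        # Ghost container published on the host as 127.0.0.1:2368
        proxy_pass http://127.0.0.1:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```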
Is this overkill, or a valid way to go forward in order to keep the system nice and clean? Since I am new to all of this, I am unsure whether I could also run all of the services without Docker and still serve the different projects on different domains without messing up the system.
Furthermore, I'd like to make sure that access to the services and the stored data is properly separated between the different tenants (projects). And ideally, administration of the services should remain manageable.

Related

Should I add DB, API and FE in one docker-compose?

I have a project with a FE, a BE and a DB.
All the tutorials I've found put the three in one file.
Should the DB be in one docker-compose file and the BE and FE in another?
Or should it be one file per project with DB, FE and BE?
[UPDATE]
The stack I'm using is Spring Boot, Postgres and Angular.
Logically your application has two parts. The front-end runs in the browser, and it makes HTTP requests to the back-end. The database is an implementation detail of the back-end and not something you need to manage separately.
So I'd consider two possible Compose layouts:
1. This is "one application", and there is one docker-compose.yml that manages all of it.
2. The front- and back-end are managed separately, since they are two separate components with a network API. You have a frontend/docker-compose.yml that manages the front-end, and a backend/docker-compose.yml that manages the back-end and its associated database.
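As a sketch of the first layout for the Spring Boot/Postgres/Angular stack from the update (image tags, build paths and ports are assumptions):
```yaml
# docker-compose.yml - one file managing the whole application
version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # use a proper secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data
  backend:
    build: ./backend               # Spring Boot service
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/postgres
      SPRING_DATASOURCE_USERNAME: postgres
      SPRING_DATASOURCE_PASSWORD: example
    depends_on:
      - db
    ports:
      - "8080:8080"
  frontend:
    build: ./frontend              # Angular app served as static files
    ports:
      - "80:80"
    depends_on:
      - backend
volumes:
  db-data:
```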
Typical container style is not to have a single shared database. Since it's easy to launch an isolated database in a container, you'd generally have a separate database per application (or per microservice if you're using that style). Of the options you suggest, a separate Compose file only launching a standalone database is the one I'd least consider.
You haven't described your particular tech stack here, but another consideration is that you may not need a container for the front-end. If it's a plain React application, the "classic" development flow of compiling it to static files and publishing them via any convenient HTTP service still works with a Docker-based backend. You get some advantages from this path like Webpack's hashed build files, so you can avoid disrupting running clients when you push out a new build. If you separate the front- and back-ends initially, then you can change the way the front-end is deployed without affecting the back-end at all.

Active Directory accounts inside a Windows container (Server 2016 TP5)

So I have Windows Server 2016 TP5 and I'm playing around with containers. I am able to do basic Docker tasks fine. I'm trying to figure out how to containerize some of our IIS-hosted web applications.
Thing is, we usually use integrated authentication for the DB and use domain service accounts for the app pool. I currently don't have a test VM (that is in a domain) so I can't test if this will work inside a container.
If the host is joined to an AD domain, are its containers also part of the domain? Can I still run processes using domain accounts?
EDIT:
Also, if I specify the "USER" in the dockerfile, does this mean that my app pool will run using that (instead of the app pool identity)?
There are at least some scenarios where AD integration in a Docker container actually works:
You need to access network resources with AD credentials.
Run cmdkey /add:<network-resource-uri>[:port] /user:<ad-user> /pass:<pass> under the local identity that needs this access.
To apply the same trick to IIS apps without modifying the AppPoolIdentity, you'll need a simple .ashx wrapper around cmdkey (note: you'll have to call this wrapper at run time, e.g. during ENTRYPOINT, otherwise the network credentials will be mapped to a different local identity).
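For example, a minimal entrypoint sketch of this idea (server, account and password handling are placeholders; ServiceMonitor is from the microsoft/iis base images):
```powershell
# entrypoint.ps1 - map AD credentials for the current local identity,
# then start the real service. All names here are placeholders.
cmdkey /add:fileserver.corp.example.com /user:CORP\svc-app /pass:$env:SVC_PASSWORD

# Hand off to IIS (as in the microsoft/iis base images)
C:\ServiceMonitor.exe w3svc
```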
You need to run code under an AD user.
Impersonate using the ADVAPI32 function LogonUser with LOGON32_LOGON_NEW_CREDENTIALS (per the docs, this logon type requires the LOGON32_PROVIDER_WINNT50 provider) as suggested.
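A minimal C# sketch of that impersonation, with placeholder credentials:
```csharp
// Hypothetical sketch: impersonate an AD user for outbound network access
// only (LOGON32_LOGON_NEW_CREDENTIALS). Code keeps running locally as the
// container identity; remote calls authenticate with the AD credentials.
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Security.Principal;
using Microsoft.Win32.SafeHandles;

class AdImpersonation
{
    const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
    const int LOGON32_PROVIDER_WINNT50 = 3; // required by NEW_CREDENTIALS

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out SafeAccessTokenHandle token);

    static void Main()
    {
        // Placeholder credentials - inject via a secret store in practice.
        if (!LogonUser("svc-app", "CORP",
                Environment.GetEnvironmentVariable("SVC_PASSWORD"),
                LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50,
                out SafeAccessTokenHandle token))
            throw new Win32Exception(); // surfaces the Win32 error code

        using (token)
        {
            WindowsIdentity.RunImpersonated(token, () =>
            {
                // Network calls made here go out as CORP\svc-app.
            });
        }
    }
}
```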
You need transport-layer network security, e.g. when making RPC calls (such as MSDTC) to AD-based resources.
Set up a gMSA using any guide that suits you best. Note, however, that gMSA requires the Docker host to be in the domain.
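On releases where gMSA works, the flow is roughly this (module, account, file and image names are assumptions):
```powershell
# On the domain-joined Docker host (Microsoft's CredentialSpec module)
Install-Module CredentialSpec
New-CredentialSpec -AccountName svc-app
# -> writes e.g. C:\ProgramData\Docker\CredentialSpecs\svc-app.json

# Start the container with the credential spec; processes running as
# Local System / Network Service then authenticate as the gMSA.
docker run --security-opt "credentialspec=file://svc-app.json" `
  --hostname svc-app mcr.microsoft.com/windows/servercore:ltsc2019
```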
Update: this answer is no longer relevant - it was written for Server 2016 TP5. AD support has been added in later releases.
Original answer
Quick answer - no, containers are not supported as part of AD, so you can't use AD accounts to run processes within a container or authenticate with it.
This used to be mentioned on the MS Containers site but the original link now redirects.
Original wording (CTP 3 or 4?):
"Containers cannot join Active Directory domains, and cannot run services or applications as domain users, service accounts, or machine accounts."
I don't know if that will change in a later release.
Someone tried to hack around it but with no joy.
You can't join containers to a domain, but if your app needs to authenticate then you can use managed service accounts. This saves you the hassle of having to deal with packaging passwords.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/management/manage_serviceaccounts

Which Microsoft Azure service should I use?

My company is about to provide an e-commerce website service and is looking for the best deployment solution.
Our application is a multi-tenant e-commerce application. Once they purchase, each customer gets a website. The newly created website can be reached by default via a subdomain of our domain (invishop.com), e.g. mikeshop.invishop.com. The website can also be reached via the customer's own custom domain, which requires a CNAME record pointing to our domain (invishop.com). Our multi-tenant application is developed on the ASP.NET MVC platform.
The simplest solution we considered is Windows hosting with a dedicated IP, but this does not meet the requirements of a large user base. Our application will be used by up to 1,000 users or more at the same time. If we deploy our service on shared ASP.NET hosting, it frequently becomes unreachable. We want to run our application on highly available infrastructure, but we do not want to focus on building and managing that infrastructure. We only want to focus on developing our application and business.
So which service should we use?
You can use Azure virtual machines for hosting. VMs are scalable, and you can put them in an availability set with other VMs, so if one goes down (for updates etc.), the second keeps working. You can create a virtual network and assign VMs to it (web server and database server). You can also use backup services for each VM.
You can create a VM with a dedicated IP - be careful, it is possible to configure this ONLY before you create the VM, and you must use PowerShell:
Create the dedicated IP via PowerShell (see the commands below).
Create a virtual network (not necessary).
Create a VM for the web server and assign it to the IP via PowerShell.
Create a VM for the SQL server.
Configure the web server for IIS - it is possible to host multiple websites on one VM via host names (I use this). Add A records for your domain (xyz.com and www.xyz.com pointing to your dedicated IP). A sketch of these steps follows below.
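Rough sketch of steps 1, 3 and 5 using the classic (Service Management) Azure cmdlets of that era; all names, sizes and locations are placeholders:
```powershell
# Step 1: reserve a dedicated public IP (must exist before the VM)
New-AzureReservedIP -ReservedIPName "invishop-ip" -Location "West Europe"

# Step 3: create the web-server VM bound to the reserved IP
New-AzureVMConfig -Name "web01" -InstanceSize Small -ImageName $windowsImageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername $adminUser -Password $adminPassword |
    New-AzureVM -ServiceName "invishop-web" -Location "West Europe" -ReservedIPName "invishop-ip"

# Step 5, on the VM: one IIS site per tenant host name, all sharing port 80
Import-Module WebAdministration
New-Website -Name "mikeshop" -Port 80 -HostHeader "mikeshop.invishop.com" `
    -PhysicalPath "C:\inetpub\mikeshop"
```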

heavy RoR app horizontally scaled on AWS needs efficient SSL

I am running a Rails app on AWS infrastructure using several EC2 instances, an RDS DB, a round-robin session-sticky load balancer and Route 53.
The application is serving pages for several domain names (same app looks and functions different depending on domain name).
The Rails code is hosted on an NFS share on a staging instance where the web server runs in development mode, while the other boxes load the Apache config and application code via NFS and run in production mode.
What I'd like to do is SSL-enable the whole thing, as we're starting to process payments and whatnot. Due to the nature of the application and the heavy Apache/Passenger optimization in place, I can't set up a vhost for each domain, but rather use a wildcard for www.* to load pretty much the same code, and the app does the rest internally.
Haven't really been able to figure out an ideal way to resolve this. Would anyone have an idea?
Thanks!
After a bit of discussion in the comments we came to this conclusion:
The application is currently hosted in one single <VirtualHost> on Apache, where the application does the differentiation between hostnames for the different layouts.
The problem is supporting SSL without having to set up each domain with its own certificate and a different vhost, as that would mean running the Rails app multiple times where it's unnecessary.
By using a Multiple Domain Certificate (MDC) this problem can be solved with only one vhost and one certificate, but MDCs are more expensive than normal certificates. So depending on the number of domains you need to support, it may be cheaper/easier to just do it manually with multiple certs, or to pay for the more expensive MDC but save time and maintenance cost.
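For illustration, the single SSL vhost might look roughly like this (domains, file paths and the Passenger setup are assumptions):
```apache
# Hypothetical single SSL vhost; assumes mod_ssl and mod_passenger are
# loaded and the certificate's SANs cover all served domains.
<VirtualHost *:443>
    ServerName  www.example1.com
    ServerAlias www.example2.com www.example3.com

    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/mdc.crt      # the multi-domain cert
    SSLCertificateKeyFile   /etc/ssl/private/mdc.key
    SSLCertificateChainFile /etc/ssl/certs/mdc-chain.crt

    # Passenger picks up the single Rails app; the app itself switches
    # layout/behaviour based on the Host header.
    DocumentRoot /var/www/app/public
</VirtualHost>
```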
While I was at it, I found this nice Wikipedia comparison of certificate authorities and their trust levels in different browsers:
http://en.wikipedia.org/wiki/Comparison_of_SSL_certificates_for_web_servers

Multiple Web Roles in an Azure Compute Instance [Deployment]

There is an option for us to have two or more web roles in a single deployment. But each deployment can either be staging or production, i.e., by extension, we get only one URL to access that deployment.
Given this, how do we access the different web roles, and what will the URLs for those be? Also, what is the use of having multiple web roles in a single deployment?
Why multiple web roles in a single deployment? Consider an application with a public-facing (customer-oriented) website, as well as an administrative website (maybe on port 8000). There are two basic ways to handle this:
1. Place both sites in the same web role. This means they now share the VM instances, network cards, memory, etc. It also means that, should you need to scale to handle traffic, both sites are scaled together as a single unit.
2. Place each site in its own role. Now they're in their own VM instances and may be scaled separately.
Option #1 is more cost-effective because you can get by with only two role instances (the minimum needed for the SLA). Option #2 is better for independent scaling. For instance: if you get a huge spike in customer traffic, this could cause trouble when you try to access the administrative website, whereas if your admin website is in its own role, it won't be affected by customer traffic.
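As a sketch, option #1 looks like this in ServiceDefinition.csdef, with two sites in one web role (names, paths and the port-8000 admin endpoint are assumptions based on the example above):
```xml
<!-- Hypothetical ServiceDefinition.csdef: two sites in one web role -->
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="Small">
    <Sites>
      <Site name="Customer" physicalDirectory="..\CustomerSite">
        <Bindings>
          <Binding name="HttpIn" endpointName="HttpIn" />
        </Bindings>
      </Site>
      <Site name="Admin" physicalDirectory="..\AdminSite">
        <Bindings>
          <Binding name="AdminIn" endpointName="AdminIn" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
      <InputEndpoint name="AdminIn" protocol="http" port="8000" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```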
In both cases, you get one IP address and one *.cloudapp.net name (and you can map a custom domain name to it with a CNAME).
Staging vs. Production: Your entire deployment may be published to either Staging or Production (or both, as two separate publishes). Staging is not meant for external users - it's really meant for a pre-live area, where you can verify that a new deployment works as expected. You can then perform a virtual IP swap with your currently-running system in Production, which effectively swaps your staging and production deployments. This results in a near-instant upgrade of your software with no customer downtime.
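The swap can be done from the portal or, as a sketch, with the classic Service Management cmdlet (the service name is a placeholder):
```powershell
# Swaps the staging and production deployments of a cloud service
Move-AzureDeployment -ServiceName "myapp"
```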
Keep in mind: Every role in a deployment must stay together - you can't deploy one role to one service and the other role to another service. If you want to do this: Separate your roles into separate deployments. Then you can publish them to different URLs.
In a production deployment, your web role can be accessed via the URL prefix you defined previously, for example myapp.cloudapp.net; web roles in a staging deployment, on the other hand, are accessed via an automatically generated URL, for example 205521014d8c440a83852b62e0df9db5.cloudapp.net.
I am afraid there is no way to access a web role instance directly, bypassing the AppFabric router. Why would you ever need to do that anyway?
If you need access from one web role instance to another, consider using a queue or a distributed cache instead of direct communication.
