Can an Akka.net node hosted within a container participate in a cluster outside of the container host? - docker

I'm fairly new to Akka.NET and I'm a total noob when it comes to containers, so please forgive me if this is too simple (but I kind of hope it is).
I'm trying to build a web app cluster using Azure App Services. I want the Lighthouse to be hosted in an Azure Container Instance. I've been successful putting the cluster together on my local box (without Docker). I've tried standing up a local Docker container with port forwarding, but I haven't been able to get it to work.
Thanks in advance for your help.

You can definitely do this, but since you're using Azure App Services I'd recommend taking a look at Akka.Management and Akka.Discovery.Azure instead.
This eliminates the need to use Lighthouse at all - your nodes can instead form a cluster on Azure App Service by querying a shared Azure Table Storage table.
There's a complete Azure App Services demo that shows how to do this here: https://github.com/petabridge/azure-app-service-akkadotnet
And the relevant code is here: https://github.com/petabridge/azure-app-service-akkadotnet/blob/dev/src/Akka.ShoppingCart/Startup.cs
NOTE: this uses the Akka.Hosting methods, which eliminate 99% of HOCON configuration and tie into Microsoft.Extensions for configuration, hosting, and DI. Akka.Hosting is a relatively new package and just hit stable at the end of 2022. You should definitely use it - all of the documentation and examples will be reworked to incorporate it once Akka.NET v1.5 ships at the end of February 2023.
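A rough idea of what that Akka.Hosting wiring looks like (a minimal sketch, not the exact code from the linked repo; the system name, host, port, and role are placeholders, and the discovery call is left as a comment because the exact options are best taken from the linked Startup.cs):

    // Minimal Akka.Hosting sketch; assumes the Akka.Hosting, Akka.Remote.Hosting
    // and Akka.Cluster.Hosting packages. Names and ports are placeholders.
    using Akka.Cluster.Hosting;
    using Akka.Hosting;
    using Akka.Remote.Hosting;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddAkka("ShoppingCart", (akka, provider) =>
    {
        akka.WithRemoting(hostname: "localhost", port: 8110)
            .WithClustering(new ClusterOptions { Roles = new[] { "web" } });
        // Instead of static seed nodes or Lighthouse, the linked sample configures
        // Akka.Management's cluster bootstrap plus Akka.Discovery.Azure here, so
        // nodes discover each other through a shared Azure Table Storage table.
        // See the Startup.cs linked above for the exact extension methods.
    });

    var app = builder.Build();
    app.Run();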

Related

Routing a clients connection to a specific instance of a SignalR backend within a Kubernetes cluster

While trying to create a web application for shared drawing I got stuck on a problem regarding Kubernetes and scaling. The application uses an ASP.NET Core backend with SignalR for sharing the drawing data across its users. For scaling out the application I am using a deployment for each microservice of the system. For the SignalR part though, additional configuration is required.
After some research I found out about the possibility of syncing all instances of the SignalR backend, either through Azure's SignalR Service or through a Redis backplane. The latter I have gotten to work in my local minikube environment. I am not really happy with this solution for the following reasons:
My main concern is that this creates a hard bottleneck in the system. Unlike in a chat application, where data is sent only once in a while, messages are sent for every few points drawn by any client in the shared drawing experience. Simply put, a lot of traffic can occur, and all of it has to pass through the single Redis backplane.
Additionally, it seems unnecessary to me to make all instances of the SignalR backend talk to each other. In this application, shared drawing only occurs in small groups of, let's say, up to 10 clients. Groups of this size can easily be hosted on a single instance.
So, without syncing all instances of the SignalR backend, I would have to route the client's connection, based on the SignalR group name, to the right instance of the SignalR backend when the client tries to join a group.
I have found out about StatefulSets, which give me a persistent address for each backend pod in the cluster. I could then associate the SignalR group IDs with the pod addresses they are running on, in, let's say, another lookup microservice. The problem with this is that the client needs to be able to reach the right pod from outside the cluster, where that cluster-internal address does not really help.
I am also wondering if there isn't an altogether better approach to the problem, since I am very new to the world of Kubernetes. I would be very grateful for your thoughts on this issue and any hint towards a (better) solution.
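For reference, the Redis backplane variant I currently have running boils down to something like this (a minimal sketch, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package and .NET 6+ minimal hosting; the hub and the Redis connection string are placeholders):

    // Every backend pod registers the same Redis backplane, so group messages
    // reach clients regardless of which pod they are connected to.
    using Microsoft.AspNetCore.SignalR;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddSignalR()
        .AddStackExchangeRedis("my-redis:6379"); // placeholder connection string

    var app = builder.Build();
    app.MapHub<DrawingHub>("/draw");
    app.Run();

    // Placeholder hub: clients join a drawing group and broadcast point batches to it.
    public class DrawingHub : Hub
    {
        public Task JoinGroup(string groupName) =>
            Groups.AddToGroupAsync(Context.ConnectionId, groupName);

        public Task SendPoints(string groupName, string payload) =>
            Clients.Group(groupName).SendAsync("points", payload);
    }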

Pulling a Google Container Registry container into Google Kubernetes Engine from another GCP project

I am looking to pull a container from Google Container Registry that exists in one Google Cloud Platform project into a Google Kubernetes Engine cluster that exists in a separate GCP project.
There's a good resource on this here: https://medium.com/hackernoon/today-i-learned-pull-docker-image-from-gcr-google-container-registry-in-any-non-gcp-kubernetes-5f8298f28969 but it includes the complexity of a non-GCP project. My guess is that there's an easier approach since everything here resides in Google Cloud Platform.
Thanks,
https://medium.com/google-cloud/using-single-docker-repository-with-multiple-gke-projects-1672689f780c
This Medium post from a while back seems to describe what you are trying to do. In short: you need to give the "Storage Object Viewer" IAM role to the service account of the cluster that wants to pull images from the other project's registry. The name of the role isn't exactly intuitive, but it sort of makes sense when you consider that the images are stored in Cloud Storage.
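For illustration, that grant is a single gcloud command (the project ID and service account e-mail below are placeholders; by default GKE nodes run as the Compute Engine default service account):

    # Grant the GKE node service account read access to the registry storage
    # bucket in the other project (replace the placeholder IDs with your own).
    gcloud projects add-iam-policy-binding registry-project-id \
      --member="serviceAccount:123456789012-compute@developer.gserviceaccount.com" \
      --role="roles/storage.objectViewer"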

Active directory accounts inside a windows container (server 2016 TP5)

So I have Windows Server 2016 TP5 and I'm playing around with the containers. I am able to do basic docker tasks fine. I'm trying to figure out how to containerize some of our IIS-hosted web applications.
Thing is, we usually use integrated authentication for the DB and use domain service accounts for the app pool. I currently don't have a test VM (that is in a domain) so I can't test if this will work inside a container.
If the host is joined to an AD domain, are its containers also part of the domain? Can I still run processes using domain accounts?
EDIT:
Also, if I specify the "USER" in the dockerfile, does this mean that my app pool will run using that (instead of the app pool identity)?
There are at least some scenarios where AD integration in a Docker container actually works:
1. You need to access network resources with AD credentials. Run cmdkey /add:<network-resource-uri>[:port] /user:<ad-user> /pass:<pass> under the local identity that needs this access. To apply the same trick to IIS apps without modifying the AppPool identity you'll need a simple .ashx wrapper around cmdkey. (Note: you'll have to call this wrapper at run time, e.g. during ENTRYPOINT, otherwise the network credentials will be mapped to a different local identity.)
2. You need to run code under an AD user. Impersonate using the ADVAPI32 function LogonUser with LOGON32_LOGON_NEW_CREDENTIALS and LOGON32_PROVIDER_DEFAULT, as sketched after this list.
3. You need transport-layer network security, e.g. when making RPC calls (such as MSDTC) to AD-based resources. Set up a gMSA using any guide that suits you best. Note, however, that gMSA requires the Docker host to be in the domain.
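Here's roughly what that impersonation looks like in C# (a minimal sketch; the domain, user name and password are placeholders, and error handling is reduced to a single exception):

    // Sketch of LOGON32_LOGON_NEW_CREDENTIALS impersonation from inside a
    // container. Only outbound network access uses the AD credentials;
    // local operations keep running under the container identity.
    using System;
    using System.Runtime.InteropServices;
    using System.Security.Principal;
    using Microsoft.Win32.SafeHandles;

    class AdNetworkCredentials
    {
        const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
        const int LOGON32_PROVIDER_DEFAULT = 0;

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(
            string user, string domain, string password,
            int logonType, int logonProvider, out SafeAccessTokenHandle token);

        static void Main()
        {
            // Placeholder credentials - in practice read them from a secret store.
            if (!LogonUser("svc-app", "CORP", "p@ssw0rd",
                           LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_DEFAULT,
                           out var token))
                throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

            using (token)
            {
                WindowsIdentity.RunImpersonated(token, () =>
                {
                    // Access domain-protected resources (UNC shares, SQL Server
                    // with integrated security, etc.) here.
                    Console.WriteLine("Running with AD network credentials.");
                });
            }
        }
    }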
Update: this answer is no longer relevant - it was written for 2016 TP5. AD support has been added in later releases.
Original answer
Quick answer - no, containers are not supported as part of AD so you can't use AD accounts to run processes within a container or authenticate with it
This used to be mentioned on the MS Containers site but the original link now redirects.
Original wording (CTP 3 or 4?):
"Containers cannot join Active Directory domains, and cannot run services or applications as domain users, service accounts, or machine accounts."
I don't know if that will change in a later release.
Someone tried to hack around it but with no joy.
You can't join containers to a domain but if your app needs to authenticate then you can use managed service accounts. Saves you the hassle of having to deal with packaging passwords.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/management/manage_serviceaccounts

AzureWorkerHost get the uri after startup for Neo4jClient

I am trying to create an ASP.NET project with Neo4jClient, to be hosted on Azure, and am kind of unable to grasp how to do the following:
Get hold of the Neo4j REST endpoint address once the worker role has started. I think I am seeing a different address each time the emulator spins up an instance of the worker role. I believe I'll need this to create a client, somewhat like this:
neo4jClient = new GraphClient(new Uri("http://localhost:7474/db/data"));
So, any thoughts on how to get hold of the URI after Neo4j is deployed by AzureWorkerHost?
Also, how is the graph database persisted on the blob store? In the example it always deploys a new instance of a pristine DB from the zip and updates it, which is probably not correct. I am unable to understand where to configure this.
BTW, I am using Neo4j 2.0 M06, and when it runs in the emulator I get an endpoint somewhat like http://127.255.0.1:20000 in the emulator log, but I am unable to access it from my base machine.
Any clue what might be going on here?
Thanks,
Kiran
AzureWorkerHost was a proof of concept that hasn't been touched in a year.
The GitHub readme says:
Just past alpha. Some known deficiencies still. Not quite beta.
You likely don't want to use it.
The preferred way of hosting on Azure these days seems to be IaaS approach inside a VM. (There's a preconfigured one in VM Depot, but that's a little old now too.)
Or, you could use a hosted endpoint from somebody like GrapheneDB.
To answer your question generally though, Azure manages all the endpoints. The worker role says "hey, I need an endpoint to bind to!" and Azure works that out for it.
Then, you query this from the web role by interrogating Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.Roles.
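Something along these lines (a minimal sketch; the role name "Neo4jWorker" and the endpoint name "Neo4j" are placeholders for whatever is declared in your ServiceDefinition.csdef):

    // Resolve the endpoint the worker role was assigned, then point Neo4jClient at it.
    using System;
    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Neo4jClient;

    var instance = RoleEnvironment.Roles["Neo4jWorker"].Instances.First();
    var endpoint = instance.InstanceEndpoints["Neo4j"].IPEndpoint;

    var neo4jClient = new GraphClient(
        new Uri(string.Format("http://{0}:{1}/db/data", endpoint.Address, endpoint.Port)));
    neo4jClient.Connect();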
You'll likely not want to use the AzureWorkerHost for a production scenario, as the instances in the deployed configuration will destroy your data when they are re-imaged.
Please review these slides that illustrate step-by-step deployment of a Windows Azure Virtual Machine image of Neo4j community edition.
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
A Neo4j 2.0 Community Virtual Machine image will be released with the official release build of Neo4j 2.0. If you plan to use more than 30GB of data storage, please be aware that the currently supported VM image in Windows Azure's image depot must be configured from console through remote SSH to Linux.
Continue with your development using http://localhost:7474/ and then setup the VM when you are ready for a staging or production build to be deployed.
Also, you can use Heroku's free Neo4j database deployment, but you must configure basic authentication for your GraphClient connection in Neo4jClient.
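For what it's worth, Neo4jClient's GraphClient accepts basic-auth credentials directly (the URI, user name, and password below are placeholders for whatever your hosted instance gives you):

    // Placeholder endpoint and credentials for a hosted Neo4j instance.
    var client = new GraphClient(
        new Uri("http://your-instance.example.com:7474/db/data"),
        "neo4j",
        "s3cr3t");
    client.Connect();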

my domain name to cloudfoundry instance

I have just deployed my Grails app on public Cloud Foundry (myApp.cloudfoundry.me) and I need my domain to point to it. How is this accomplished? Or what are the alternatives?
Problem: deploy a Grails app via Cloud Foundry with my own domain name instead of something.cloudfoundry.me.
Resources: I have an Ubuntu virtual server with a static public IP available.
Goal: have a way to deploy many of my apps, each with its own domain name.
If you don't mind sharing how you do it today and, perhaps, can reference a tutorial, that would be very helpful.
Thank You,
Cloud Foundry does not currently support custom domain mapping. However, this feature is high on the priority list and development is currently under way. If you do a search at Cloud Foundry Support you will find a series of postings regarding this issue and some short-term workarounds that could be helpful for you and your particular situation.
Thank you eightyoctan! I accepted your reply as the answer; however, I wanted to share what I ended up doing to have my domain point to the Cloud Foundry-hosted app.
Option 1. I used GoDaddy's forwarding with masking: the app is pushed to myapp.cloudfoundry.com, and forwarding with masking on GoDaddy makes mydomain.com point to the app on Cloud Foundry. I'm sure I'm penalized from an SEO standpoint to some extent, but it works so far.
Option 2. I also believe the same goal - having my custom domain point to the Cloud Foundry app - can be achieved via an EC2 Elastic IP, as described in the following blog:
http://www.cloudsoftcorp.com/blog/first-steps-with-cloud-foundry-on-amazon-ec2/
Or use Stackato on EC2, which runs on top of Cloud Foundry from what I can tell. For more:
http://docs.stackato.com/server/ec2.html#vm-ec2
Either way, I hope Cloud Foundry gets this feature soon so we don't have to take extra steps to accomplish this.
