Azure Equivalent of Resource Group Local Host - asp.net-mvc

I've had a dig through the Azure documentation but couldn't find a definitive answer.
I have an App Service and an Azure database sitting in the same resource group, and I am finding that, only in the hosted environment, the site takes a long time to connect to and get responses back from the database.
Is it possible to specify a localhost equivalent, since they are in the same resource group, and would this make things any quicker?

A resource group does not have any impact on the connectivity or latency between the application and the database. It is just a way to group Azure resources together by project or environment.
There is no localhost equivalent for a resource group, or even for an App Service; that concept only applies if you run your application yourself in IIS or another server.
If you really want to see what is causing the connectivity issue, I recommend monitoring the requests and responses using Azure Monitor.
I think you need to understand the cloud concepts before trying anything out.

Related

Make the Web App on Azure portal only Available few hours in a Week

I am building an ASP.NET Core Web App which I am trying to host via the Azure portal. We have a requirement that these applications can be accessed only at certain times in a week, and those times are stored in the Azure database. Is it possible to make the app available/accessible to users based on what the database says?
For example, the setting might be that the application should be available only between 14:00 and 16:30 on TUESDAY. When I researched this I found that we can schedule tasks/workflows in the portal, but I couldn't find what I am looking for. All I want to know is whether this requirement is possible; if so, please share the idea. I am new to Web App development and Azure deployment, so any help is greatly appreciated.
This feature is not available in Azure out of the box. This is something you will have to handle yourself.
One obvious way to implement this would be to check, on every request, whether the application should be available. If the request's day and time fall within the available times set in the database, you show your users the website content; otherwise, you show them some kind of "not available" message.
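A minimal sketch of that per-request check as ASP.NET Core middleware; the IAvailabilityStore service and its IsAvailableAsync method are hypothetical stand-ins for whatever data access you use to read the allowed windows from the database:

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    // Hypothetical abstraction over the table that stores the allowed time windows.
    public interface IAvailabilityStore
    {
        Task<bool> IsAvailableAsync(DateTimeOffset now);
    }

    public class AvailabilityMiddleware
    {
        private readonly RequestDelegate _next;

        public AvailabilityMiddleware(RequestDelegate next) => _next = next;

        public async Task InvokeAsync(HttpContext context, IAvailabilityStore store)
        {
            if (await store.IsAvailableAsync(DateTimeOffset.UtcNow))
            {
                await _next(context); // inside an allowed window: serve the site
            }
            else
            {
                context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
                await context.Response.WriteAsync("This application is currently unavailable.");
            }
        }
    }

    // Registration in Startup.Configure (or Program.cs), before MVC/endpoints:
    // app.UseMiddleware<AvailabilityMiddleware>();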
A more complicated way would be to make use of an App_offline.htm file to take your site offline. You can dynamically add/delete the App_offline.htm file in your Web App based on the day/time when you want your site to be offline/online.
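A minimal sketch of that approach; note that it has to run from outside the site itself (e.g. a scheduled WebJob or another process with access to the site's wwwroot), because once App_offline.htm takes the site down, the site can no longer bring itself back. The path and the schedule check are placeholders:

    using System;
    using System.IO;

    class AppOfflineToggle
    {
        static void Main()
        {
            // Hypothetical path to the Web App's content root; from a WebJob on
            // Azure App Service this is typically %HOME%\site\wwwroot.
            var appOffline = Path.Combine(@"D:\home\site\wwwroot", "App_offline.htm");

            bool shouldBeOffline = DecideFromDatabase(); // stub for the DB lookup

            if (shouldBeOffline && !File.Exists(appOffline))
                File.WriteAllText(appOffline,
                    "<html><body>The site is currently unavailable.</body></html>");
            else if (!shouldBeOffline && File.Exists(appOffline))
                File.Delete(appOffline);
        }

        static bool DecideFromDatabase() => false; // query the availability table here
    }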
However, please note that while your site is offline you will still be charged for the Web App, as the resources remain provisioned.
You can also use the Azure Automation service to orchestrate processes like this. You would create a runbook (a script in Python or PowerShell) that queries the DB and figures out the times when the Web App hosting your application should be started or stopped.
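Azure Automation runbooks are written in PowerShell or Python; as a C# equivalent, the same orchestration could run from any scheduled .NET job. A minimal sketch, assuming the Azure.ResourceManager.AppService SDK (where WebSiteResource exposes Start/Stop); the subscription, resource group, and site names are hypothetical placeholders:

    using System;
    using System.Threading.Tasks;
    using Azure.Identity;
    using Azure.ResourceManager;
    using Azure.ResourceManager.AppService;

    class SiteScheduler
    {
        static async Task Main()
        {
            var arm = new ArmClient(new DefaultAzureCredential());

            // Placeholder identifiers; substitute your own subscription,
            // resource group, and Web App names.
            var siteId = WebSiteResource.CreateResourceIdentifier(
                "00000000-0000-0000-0000-000000000000",
                "my-resource-group",
                "my-web-app");
            var site = arm.GetWebSiteResource(siteId);

            // Stub for the database lookup of the allowed time windows.
            bool shouldBeRunning = await ShouldBeRunningAsync(DateTimeOffset.UtcNow);

            if (shouldBeRunning)
                await site.StartAsync();
            else
                await site.StopAsync();
        }

        static Task<bool> ShouldBeRunningAsync(DateTimeOffset now) =>
            Task.FromResult(true); // query the availability table here
    }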

Routing a clients connection to a specific instance of a SignalR backend within a Kubernetes cluster

While trying to create a web application for shared drawing, I got stuck on a problem regarding Kubernetes and scaling. The application uses an ASP.NET Core backend with SignalR for sharing the drawing data across its users. To scale the application out, I am using a deployment for each microservice of the system. For the SignalR part, though, additional configuration is required.
After some research I found out about the possibility of syncing all instances of the SignalR backend, either through Azure's SignalR Service or through a Redis backplane. I have gotten the latter to work in my local minikube environment. I am not really happy with this solution, for the following reasons:
My main concern is that this creates a hard bottleneck in the system. Unlike in a chat application, where data is sent only once in a while, messages are sent for every few points drawn in the shared drawing experience by any client. Simply put, a lot of traffic can occur, and all of it has to pass through the single Redis backplane.
Additionally, it seems unnecessary to me to make all instances of the SignalR backend talk to each other. In this application, shared drawing only occurs in small groups of up to, let's say, 10 clients. Groups of this size can easily be hosted on a single instance.
So, without syncing all instances of the SignalR backend, I would have to route the client's connection, based on the SignalR group name, to the right instance of the SignalR backend when the client is trying to join a group.
I have found out about StatefulSets, which allow me to have a persistent address for each backend pod in the cluster. I could then associate the SignalR group IDs with the pod addresses they are running on in, let's say, another lookup microservice; a sketch of that idea follows. The problem with this is that the client needs to be able to access the right pod from outside the cluster, where that cluster-internal address does not really help.
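For what it's worth, a minimal sketch of such a lookup endpoint; the IGroupDirectory service, the StatefulSet naming, and the external-address translation are all hypothetical placeholders for however the mapping would actually be stored and exposed:

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    // Hypothetical directory that records which StatefulSet pod hosts each group
    // (e.g. "signalr-3.signalr.default.svc.cluster.local") and knows how to
    // translate that into an address reachable from outside the cluster
    // (an ingress host, a node port, etc.).
    public interface IGroupDirectory
    {
        Task<string> ResolveExternalUrlAsync(string groupName);
    }

    [ApiController]
    [Route("api/lookup")]
    public class GroupLookupController : ControllerBase
    {
        private readonly IGroupDirectory _directory;

        public GroupLookupController(IGroupDirectory directory) => _directory = directory;

        // The client calls this before joining a group, then opens its SignalR
        // connection against the returned URL.
        [HttpGet("{groupName}")]
        public async Task<ActionResult<string>> Get(string groupName)
        {
            var externalUrl = await _directory.ResolveExternalUrlAsync(groupName);
            return externalUrl is null ? NotFound() : Ok(externalUrl);
        }
    }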
I am also wondering whether there isn't an altogether better approach to the problem, since I am very new to the world of Kubernetes. I would be very grateful for your thoughts on this issue and any hint towards a (better) solution.

Active directory accounts inside a windows container (server 2016 TP5)

So I have Windows Server 2016 TP5 and I'm playing around with the containers. I am able to do basic docker tasks fine. I'm trying to figure out how to containerize some of our IIS-hosted web applications.
Thing is, we usually use integrated authentication for the DB and use domain service accounts for the app pool. I currently don't have a test VM (that is in a domain) so I can't test if this will work inside a container.
If the host is joined to an AD domain, are its containers also part of the domain? Can I still run processes using domain accounts?
EDIT:
Also, if I specify the "USER" in the dockerfile, does this mean that my app pool will run using that (instead of the app pool identity)?
There are at least some scenarios where AD integration in a Docker container actually works:
You need to access network resources with AD credentials.
Run cmdkey /add:<network-resource-uri>[:port] /user:<ad-user> /pass:<pass> under the local identity that needs this access.
To apply the same trick to IIS apps without modifying the AppPool identity, you'll need a simple .ashx wrapper around cmdkey, as sketched after this list. (Note: you'll have to call this wrapper at run time, e.g. during ENTRYPOINT; otherwise the network credentials will be mapped to a different local identity.)
You need to run code under an AD user.
Impersonate using the ADVAPI32 function LogonUser with LOGON32_LOGON_NEW_CREDENTIALS and LOGON32_PROVIDER_DEFAULT, as suggested; a sketch also follows this list.
You need transport-layer network security, for example when making RPC calls (e.g. MSDTC) to AD-based resources.
Set up a gMSA using any guide that suits you best. Note, however, that gMSA requires the Docker host to be in the domain.
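A minimal sketch of the .ashx wrapper idea from the first scenario: an IHttpHandler (the .ashx code-behind) that shells out to cmdkey under the app pool identity. The resource, user, and password are hypothetical placeholders:

    using System.Diagnostics;
    using System.Web;

    // Call this handler once at container start-up (e.g. from the ENTRYPOINT via
    // a local HTTP request) so the credential is registered for the app pool identity.
    public class CmdKeyHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            var psi = new ProcessStartInfo("cmdkey",
                @"/add:fileserver.mydomain.local /user:MYDOMAIN\svc-app /pass:p@ssw0rd")
            {
                UseShellExecute = false
            };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }
            context.Response.Write("credential registered");
        }

        public bool IsReusable => false;
    }

And a minimal sketch of the LogonUser impersonation from the second scenario; the domain, user, and password are again placeholders that should come from a secure store:

    using System;
    using System.Runtime.InteropServices;
    using System.Security.Principal;
    using Microsoft.Win32.SafeHandles;

    static class AdImpersonation
    {
        // Use the current token locally, but the supplied credentials for
        // remote (network) access.
        const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
        const int LOGON32_PROVIDER_DEFAULT = 0;

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out SafeAccessTokenHandle token);

        public static void RunAsDomainUser(Action action)
        {
            if (!LogonUser("svc-app", "MYDOMAIN", "p@ssw0rd",
                    LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_DEFAULT,
                    out var token))
                throw new System.ComponentModel.Win32Exception();

            using (token)
            {
                // Outbound network calls inside this delegate use the AD credentials,
                // e.g. reading \\fileserver\share or an integrated-auth SQL connection.
                WindowsIdentity.RunImpersonated(token, action);
            }
        }
    }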
Update: this answer is no longer relevant; it was for 2016 TP5. AD support has been added in later releases.
Original answer
Quick answer: no, containers are not supported as part of AD, so you can't use AD accounts to run processes within a container or to authenticate with it.
This used to be mentioned on the MS Containers site but the original link now redirects.
Original wording (CTP 3 or 4?):
"Containers cannot join Active Directory domains, and cannot run services or applications as domain users, service accounts, or machine accounts."
I don't know if that will change in a later release.
Someone tried to hack around it but with no joy.
You can't join containers to a domain, but if your app needs to authenticate then you can use managed service accounts, which saves you the hassle of dealing with packaging passwords.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/management/manage_serviceaccounts

Failover IP when server on DNS supplied IP fails iOS

My iOS app uses a single hard-coded URL, api.xyz.com, to find our REST service. At the moment there are just two servers running this service, and we use Amazon Route 53 for DNS. But I've found that the DNS timeout of an hour (or more) is too long in case one of our servers fails; I don't want to leave users in the dark that long.
The alternative would be to implement a failover mechanism in the app. To be honest, I don't like the idea of pulling this low-level DNS-related logic into the app, but I don't see another solution at the moment.
So my question is: how do I implement such a failover mechanism on iOS? I'm using AFNetworking for my REST API.
Or are there better alternatives on the server side? At the moment the servers are individually rented ones, so no Amazon, Google, or other cloud service.

AzureWorkerHost get the uri after startup for Neo4jClient

I am trying to create an ASP.NET project with Neo4jClient, to be hosted on Azure, and am somewhat unable to grasp how to do the following:
Get hold of the Neo4j REST endpoint address once the worker role has started. I think I am seeing a different address each time the emulator spins up an instance of the worker role. I believe I'll need this to create a client, somewhat like this:
neo4jClient = new GraphClient(new Uri("http://localhost:7474/db/data"));
So, any thoughts on how to get hold of the URI after Neo4j is deployed by AzureWorkerHost?
Also, how is the graph database persisted on the blob store? In the example it always deploys a new instance of a pristine DB from the zip and updates it, which is probably not correct. I am unable to understand where to configure this.
By the way, I am using Neo4j 2.0 M06, and when it runs in the emulator I get an endpoint somewhat like http://127.255.0.1:20000 in the emulator log, but I am unable to access it from my base machine.
Any clue what might be going on here?
Thanks,
Kiran
AzureWorkerHost was a proof of concept that hasn't been touched in a year.
The GitHub readme says:
Just past alpha. Some known deficiencies still. Not quite beta.
You likely don't want to use it.
The preferred way of hosting Neo4j on Azure these days seems to be an IaaS approach inside a VM. (There's a preconfigured one in VM Depot, but that's a little old now too.)
Or, you could use a hosted endpoint from somebody like GrapheneDB.
To answer your question more generally though: Azure manages all the endpoints. The worker role says "hey, I need an endpoint to bind to!" and Azure works that out for it.
Then you query this from the web role by interrogating Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.Roles, as sketched below.
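A minimal sketch of that lookup, assuming a worker role named "Neo4jWorker" with an endpoint declared as "Neo4j"; both names are hypothetical and must match whatever your ServiceDefinition.csdef declares:

    using System;
    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Neo4jClient;

    class ClientFactory
    {
        public static GraphClient CreateNeo4jClient()
        {
            // Ask the Azure runtime where the worker role's endpoint ended up,
            // instead of hard-coding localhost:7474.
            var instance = RoleEnvironment.Roles["Neo4jWorker"].Instances.First();
            var endpoint = instance.InstanceEndpoints["Neo4j"].IPEndpoint;

            var client = new GraphClient(new Uri(string.Format(
                "http://{0}:{1}/db/data", endpoint.Address, endpoint.Port)));
            client.Connect();
            return client;
        }
    }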
You'll likely not want to use the AzureWorkerHost for a production scenario, as the instances in the deployed configuration will destroy your data when they are re-imaged.
Please review these slides that illustrate step-by-step deployment of a Windows Azure Virtual Machine image of Neo4j community edition.
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
A Neo4j 2.0 Community virtual machine image will be released with the official release build of Neo4j 2.0. If you plan to use more than 30 GB of data storage, please be aware that the currently supported VM image in Windows Azure's image depot must be configured from the console through remote SSH to the Linux VM.
Continue with your development using http://localhost:7474/ and then setup the VM when you are ready for a staging or production build to be deployed.
You can also use Heroku's free Neo4j database deployment, but you must configure basic authentication for your GraphClient connection in Neo4jClient; see the sketch below.
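A minimal sketch, assuming Neo4jClient's GraphClient(Uri, string, string) overload; the endpoint and credentials are hypothetical placeholders taken from the hosting provider's connection settings:

    using System;
    using Neo4jClient;

    var client = new GraphClient(
        new Uri("http://myinstance.example-neo4j-host.com:7474/db/data"),
        "username",   // basic-auth user supplied by the provider
        "password");  // basic-auth password supplied by the provider
    client.Connect();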
