Connect Azure Functions to a local SignalR instance in Docker

I am currently writing tests for an existing project built on Azure Functions. The project uses SignalR to send live update messages to its clients.
For my tests I am currently using a SignalR instance running in the cloud, but I need to replace it with a "local" instance on the system that runs the tests, so I can be 100% sure that the SignalR message is coming from my test session.
Does anybody have an idea how to get a SignalR server running in a Docker container for my tests (I need a connection string I can provide to the Azure Functions app)?
I could not find anything online. I am sure I am not the only one who wants to test whether SignalR messages are sent correctly, and I would prefer not to implement the SignalR server myself.

The bindings available in Azure Functions are for the Azure SignalR Service, not SignalR itself, so unfortunately there is no way to test this locally.
Instead, you could simply create a test Azure SignalR Service instance and use that.
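For context, the kind of function under test sends its messages through the SignalR output binding, which only knows how to talk to the Azure SignalR Service REST API - which is why a plain self-hosted SignalR server won't do. A minimal sketch (hub name and target are placeholders):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class SendUpdate
{
    // The [SignalR] output binding posts messages to the Azure SignalR
    // Service REST API; its connection string comes from the app setting
    // "AzureSignalRConnectionString" by default.
    [FunctionName("SendUpdate")]
    public static Task Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [SignalR(HubName = "updates")] IAsyncCollector<SignalRMessage> messages)
    {
        return messages.AddAsync(new SignalRMessage
        {
            Target = "statusChanged",          // client-side method to invoke
            Arguments = new object[] { "ok" }  // payload delivered to clients
        });
    }
}
```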

I didn't find any way to achieve this, so I created a repo with a small mock service for the SignalR Service. I hope I am allowed to post such stuff here.
My repo and my Docker image
Feel free to use / fork it. I am not sure I will ever find the time to maintain it.
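If you go the mock route, the Functions app can be pointed at it through the binding's default connection string setting in local.settings.json. A sketch, assuming the mock container listens on localhost:8888 - the port and access key are placeholders, and whether the key is actually validated depends on the mock:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "Endpoint=http://localhost:8888;AccessKey=dGVzdC1rZXktZm9yLWxvY2FsLXRlc3Rpbmc=;Version=1.0;"
  }
}
```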

Related

Can an Akka.NET node hosted within a container participate in a cluster outside of the container host?

I'm fairly new to Akka.NET and I'm a total noob when it comes to containers, so please forgive me if this is too simple (but I kind of hope it is).
I'm trying to build a web app cluster using Azure App Services. I want the Lighthouse to be hosted in an Azure container instance. I've been successful putting the cluster together on my local box (without Docker). I've tried standing up a local Docker container with port forwarding, but I haven't been able to get it to work.
Thanks in advance for your help.
You can definitely do this, but since you're using Azure App Services I'd recommend taking a look at Akka.Management and Akka.Discovery.Azure instead.
This will eliminate the need to use Lighthouse at all - your nodes can instead form a cluster on Azure App Service by querying a shared Azure Table Storage table.
There's a complete Azure App Services demo that shows how to do this here: https://github.com/petabridge/azure-app-service-akkadotnet
And the relevant code is here: https://github.com/petabridge/azure-app-service-akkadotnet/blob/dev/src/Akka.ShoppingCart/Startup.cs
NOTE: this uses the Akka.Hosting methods, which eliminate 99% of HOCON configuration and tie into Microsoft.Extensions for configuration, hosting, and DI. Akka.Hosting is a relatively new package that just hit stable at the end of 2022. You should definitely use it - all of the documentation and examples will be reworked to incorporate it once Akka.NET v1.5 ships at the end of February, 2023.
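A rough sketch of what that wiring looks like with Akka.Hosting. The exact option and method names below are recalled from the Akka.Management / Akka.Discovery.Azure hosting extensions and should be treated as assumptions - check the linked demo for the authoritative version:

```csharp
using Akka.Cluster.Hosting;
using Akka.Discovery.Azure;
using Akka.Hosting;
using Akka.Management.Cluster.Bootstrap;
using Akka.Remote.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddAkka("ClusterSystem", builder =>
        {
            builder
                .WithRemoting(hostname: "0.0.0.0", port: 8081)
                .WithClustering()
                // Cluster.Bootstrap forms the cluster from discovered nodes,
                // replacing dedicated Lighthouse seed nodes.
                .WithClusterBootstrap(options =>
                {
                    options.ContactPointDiscovery.ServiceName = "my-app"; // placeholder
                })
                // Each node registers itself in a shared Azure Table Storage
                // table, which is what the other nodes query to find peers.
                .WithAzureDiscovery(options =>
                {
                    options.ConnectionString = "<storage-account-connection-string>";
                });
        });
    })
    .Build();

await host.RunAsync();
```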

Routing a client's connection to a specific instance of a SignalR backend within a Kubernetes cluster

While trying to create a web application for shared drawing I got stuck on a problem regarding Kubernetes and scaling. The application uses an ASP.NET Core backend with SignalR for sharing the drawing data across its users. For scaling out the application I am using a deployment for each microservice of the system. The SignalR part, though, requires additional configuration.
After some research I found out that all instances of the SignalR backend can be kept in sync either through Azure's SignalR Service or through a Redis backplane. I have gotten the latter to work in my local minikube environment (see the sketch after this question), but I am not happy with this solution for the following reasons:
My main concern is that this creates a hard bottleneck in the system. Unlike in a chat application, where data is sent only once in a while, messages are sent for every few points drawn in the shared drawing experience by any client. Simply put, a lot of traffic can occur, and all of it has to pass through the single Redis backplane.
Additionally, it seems unnecessary to me to make all instances of the SignalR backend talk to each other. In this application, shared drawing only occurs in small groups of up to, let's say, 10 clients. Groups of this size can easily be hosted on a single instance.
So, without syncing all instances of the SignalR backend, I would have to route the client's connection to the right instance of the SignalR backend based on the SignalR group name when the client tries to join a group.
I have found out about StatefulSets, which give me a persistent address for each backend pod in the cluster. I could then associate the SignalR group IDs with the pod addresses they run on, in, let's say, another lookup microservice. The problem with this is that the client needs to be able to reach the right pod from outside the cluster, where that cluster-internal address does not really help.
I am also wondering whether there isn't an altogether better approach to the problem, since I am very new to the world of Kubernetes. I would be very grateful for your thoughts on this issue and any hint towards a (better) solution.
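For reference, the Redis backplane mentioned above is a one-line registration in ASP.NET Core; everything here except AddStackExchangeRedis (the hub, the route, the redis-service host name) is a hypothetical stand-in for the real app:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Every backend instance publishes to and subscribes from the same Redis
// server, so a message sent through one pod reaches clients on all pods.
builder.Services.AddSignalR()
    .AddStackExchangeRedis("redis-service:6379");

var app = builder.Build();
app.MapHub<DrawHub>("/draw");
app.Run();

// Hypothetical hub for the shared-drawing scenario (group management omitted).
public class DrawHub : Hub
{
    public Task SendPoints(string group, byte[] points) =>
        Clients.OthersInGroup(group).SendAsync("receivePoints", points);
}
```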

SignalR on non-Azure Web Farm

I have implemented SignalR support for a web application. It works great. The problem I'm dealing with now is making it work in a non-Azure web farm environment. SignalR supports Windows Azure Service Bus and Redis out of the box. There is also a RabbitMQ implementation on GitHub. All these solutions implement the IMessageBus interface.
Based on our current situation we can't use Redis or RabbitMQ. So I have few questions:
1) Is there any alternative solution that uses SQL Server or MSMQ?
2) Is it difficult (possible) to implement your own solution for SQL Server or MSMQ? David's post on SignalR 0.5 (http://weblogs.asp.net/davidfowler/archive/2012/05/02/signalr-0-5.aspx) says they are going to support SQL Server QNS or Service Broker (not the SQL Server DB itself), so maybe this is the wrong way to go altogether?
3) Is there a way to work around this until that support is implemented? For example, it sounds like we need to share the state of the connection list between servers. If we know the number of nodes and their IPs, we could share this information between servers via web service calls instead. Does that make any sense?
Damian Edwards appears to have just started working on the SQL scaleout implementation. You can find the details of that implementation here on GitHub and the issue tracking this work can be followed here.
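For anyone finding this later: the SQL Server scaleout did ship. With Microsoft.AspNet.SignalR.SqlServer (SignalR 2.x) the wiring looks roughly like this - the connection string is a placeholder, and the target database must exist and be empty:

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // All web farm nodes relay messages through shared tables in this
        // database; SignalR uses Service Broker for low-latency notifications
        // when it is enabled, and falls back to polling otherwise.
        var connectionString =
            "Data Source=myserver;Initial Catalog=SignalR;Integrated Security=True";
        GlobalHost.DependencyResolver.UseSqlServer(connectionString);
        app.MapSignalR();
    }
}
```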

Multi-tenancy app to deploy on Azure at a later stage

I am currently developing an MVC app using ASP.NET. My final aim is to deploy the SaaS on Azure.
But would it be feasible to do that at a later stage, or should I incorporate it into my development now?
When it comes to Azure authentication etc., I will require that because the app is multi-tenant.
Just wanted to know people's thoughts on this.
Cheers
It would be better if you could provide more information. Do you want to know, if you ignore Azure for the moment, how much effort it would take to deploy the application to Azure later? In general it would not take too much effort, unless you want to use Azure services such as storage, ACS, and so on. Deploying an ASP.NET application to an Azure web site is just like deploying to a remote IIS. Deploying to a web role requires you to create an additional cloud service project. Deploying to a virtual machine usually does not require any modifications to the project, but requires you to set up the entire environment yourself.
In addition, please note there are still some differences between Azure and a local environment. For example, we usually use the Azure SQL Service instead of connecting to a local SQL Server.
Best Regards,
Ming Xu.
I'm doing something similar, but without developing on Azure right now. I have prepared for it though by making sure I use interfaces as much as possible. For instance, I don't write to a file system using File and Directory, but to interfaces IFile and IDirectory.
If you can avoid assuming anything based on your current localised, Windows Server environment then you can at least write implementations to satisfy requirements that do work in Azure. I'm planning to deploy to Azure and local Web servers and use Dependency Injection to satisfy the concrete implementation of the interfaces. I could just as easily use the same codebase entirely and have it detect the environment before injecting the implementations.
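A minimal sketch of that interface-first approach - IFile here is an illustrative reduction, not the poster's actual interface:

```csharp
using System.IO;
using System.Threading.Tasks;

// The application code depends only on this abstraction.
public interface IFile
{
    Task WriteAllTextAsync(string path, string contents);
}

// Local / on-premises implementation that delegates to System.IO.File.
public class LocalFile : IFile
{
    public Task WriteAllTextAsync(string path, string contents) =>
        File.WriteAllTextAsync(path, contents);
}

// An Azure implementation would wrap Blob Storage behind the same interface.
// The DI container then picks the implementation per environment, e.g.:
//   services.AddSingleton<IFile, LocalFile>();  // local web server
//   services.AddSingleton<IFile, BlobFile>();   // Azure (hypothetical type)
```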

Receiving REST response on local machine

I use a web service to convert files. The service returns the converted file via an HTTP POST, along with identifier data. My app receives the response, updates its database, and saves the file to the appropriate location.
At least that's the idea, but how do I develop and test this on a local machine? Since my machine isn't publicly facing, I can't provide a callback URL the service can reach. What's the best way to handle this? I want to keep the process as clean as possible, and the only ideas I can come up with have seemed excessively kludgey.
Given how common REST API development is, I assume there are well-established best practices for this. Any help appreciated.
The solution will change a bit depending on which server you're using.
But the generally accepted method is to use the loopback address, 127.0.0.1, in place of a fully qualified domain name. Your server may need to be reconfigured to listen on this address, but that's usually a trivial fix.
example: http://127.0.0.1/path/to/resource.html
You can use curl, or even your browser if your application has a proper frontend. There are many other similar tools for testing this from the command line, and each language has libraries for establishing HTTP connections and transferring data over them.
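To simulate the converter's callback against the loopback address, a plain curl invocation is enough - the port, path, and field names here are made up, so match them to your actual endpoint:

```
curl -X POST http://127.0.0.1:3000/conversions/callback \
     -F "id=12345" \
     -F "file=@converted.pdf"
```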
If your machine isn't accessible to the service you are using, then your only real option is to build a local stand-in for the service that exercises your API. A rake task that sends the POST with the file and the identifier data would be a nice way to do this: start your Rails app locally, then kick off the task with some params to run your application through its paces.
This is the case any time you are trying to develop a system that can't connect to a required resource during development. You need to build a development harness of sorts so that you can exercise all the different types of actions the external service will call on your application.
This certainly won't be easy or straightforward, especially if your interface to this external service is complicated. Be sure to have your test cases send bad POSTs to your application, so that you are sure you handle both what you expect and what you don't.
Also make sure that you do some integration testing with the actual service before you go live with the application. Hopefully you can deploy to an external server that the web service will be able to access in order to test. Amazon's EC2 hosting environment would let you set up a server very quickly, run your tests, and then shut down without much cost at all.
You have 2 options:
Set up dynamic DNS and expose your app to the outside world. This only works if you have full control over your network.
Use something like Webrat to fake the POSTs to your app. Since it's only 1 request, this seems pretty trivial.
Considering that you should be writing automated tests for this, I'd go with #2. I used to do #1 when developing Facebook apps, since there were far too many requests to mock them all out with Webrat.
If your question is about testing, why don't you use mocks to fake the server? It's more elegant than using Webrat, and easier to deploy (you only have one app instead of an app and a test environment).
More info about mocks: http://blog.floehopper.org/presentations/lrug-mock-objects-2007-07-09/
You've got some info about mocks with RSpec here: http://rspec.info/documentation/mocks/
