Failover IP when the server at a DNS-supplied IP fails (iOS)

My iOS app uses a single hard-coded URL, api.xyz.com, to find our REST service. At the moment there are just two servers running this service, and we use Amazon Route 53 DNS. But I've found that the DNS timeout of an hour (or more) is too long in case one of our servers fails; we don't want to leave users in the dark for that long.
The alternative would be to implement a failover mechanism in the app. To be honest, I don't like the idea of pulling this low-level, DNS-related logic into the app, but I don't see another solution at the moment.
So my question is: How do I implement such a failover mechanism on iOS? I'm using AFNetworking for my REST API.
Or are there better alternatives on the server side? At the moment the servers are individually rented, so no Amazon, Google, or similar cloud services.
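To make the question concrete, here is the kind of failover logic I imagine. This is a minimal sketch using URLSession and a hypothetical backup hostname (api-backup.xyz.com); with AFNetworking the same retry would live in the request's failure block:

    import Foundation

    // Hypothetical host list: the DNS name plus a hard-coded backup host.
    let hosts = ["api.xyz.com", "api-backup.xyz.com"]

    func fetch(path: String, hostIndex: Int = 0,
               completion: @escaping (Result<Data, Error>) -> Void) {
        guard hostIndex < hosts.count else {
            // Every host failed; surface the error to the caller.
            completion(.failure(URLError(.cannotConnectToHost)))
            return
        }
        var request = URLRequest(url: URL(string: "https://\(hosts[hostIndex])\(path)")!)
        request.timeoutInterval = 5  // fail fast so the fallback kicks in quickly

        URLSession.shared.dataTask(with: request) { data, _, error in
            if let data = data, error == nil {
                completion(.success(data))
            } else {
                // Timeout or connection failure: retry against the next host.
                fetch(path: path, hostIndex: hostIndex + 1, completion: completion)
            }
        }.resume()
    }

The short timeout matters as much as the retry itself: without it, each attempt waits out the default 60 seconds before the fallback host is even tried.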

Related

Routing a client's connection to a specific instance of a SignalR backend within a Kubernetes cluster

While trying to create a web application for shared drawing, I got stuck on a problem regarding Kubernetes and scaling. The application uses an ASP.NET Core backend with SignalR for sharing the drawing data across its users. For scaling out the application, I am using a deployment for each microservice of the system. For the SignalR part, though, additional configuration is required.
After some research I found out about the possibility of syncing all instances of the SignalR backend, either through Azure's SignalR Service or through a Redis backplane. The latter I have gotten to work in my local minikube environment. I am not really happy with this solution, for the following reasons:
My main concern is that this creates a hard bottleneck in the system. Unlike a chat application, where data is sent only once in a while, messages are sent for every few points drawn by any client in the shared drawing experience. Simply put, a lot of traffic can occur, and all of it has to pass through the single Redis backplane.
Additionally, it seems unnecessary to me to make all instances of the SignalR backend talk to each other. In this application, shared drawing only occurs in small groups of, let's say, up to 10 clients. Groups of this size can easily be hosted on a single instance.
So without syncing all instances of the SignalR backend, I would have to route the client's connection, based on the SignalR group name, to the right instance of the SignalR backend when the client tries to join a group.
I have found out about StatefulSets, which allow me to have a persistent address for each backend pod in the cluster. I could then associate the SignalR group IDs with the pod addresses they live on, in, let's say, another lookup microservice (sketched below). The problem with this is that the client needs to be able to reach the right pod from outside the cluster, where that cluster-internal address does not really help.
I am also wondering if there isn't an altogether better approach to the problem, since I am very new to the world of Kubernetes. I would be very grateful for your thoughts on this issue and any hint towards a (better) solution.
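To illustrate the lookup idea, here is a language-neutral sketch of the mapping I have in mind (written in Swift only for illustration; the real backend is ASP.NET Core, and all names are hypothetical):

    // Core logic of the hypothetical lookup microservice: each SignalR group
    // is owned by one backend instance, assigned on first join and remembered.
    final class GroupRegistry {
        private var owners: [String: String] = [:]  // group ID -> pod address
        private let pods: [String]                   // stable StatefulSet addresses

        init(pods: [String]) { self.pods = pods }

        func owner(of group: String) -> String {
            if let pod = owners[group] { return pod }
            let pod = pods[abs(group.hashValue % pods.count)]  // naive assignment
            owners[group] = pod
            return pod
        }
    }

The open problem remains the last hop: whatever sits at the cluster edge (an ingress or gateway) would have to honor this mapping when routing a client's connection from outside.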

Concerns with gRPC architecture (gRPC, nginx, docker)

I'm currently trying to create a tracing tool for fun (one which supports gRPC tracing) and was confused as to whether or not I was thinking about this architecture properly. A tracing tool keeps track of the entire workflow/journey of a request (from the moment a user clicks the button, to when the request goes to the API gateway, between microservices, and back).
Let's say the application is a bookstore, and it is broken up into 2 microservices, say accounts and books. Let's say there is a user interface, and when you click a button, it allows a user to favorite a book. I'm only using 2 microservices to keep this example simple.
**Different parts of the fake/mock-up application**
UI ->
nginx -> I wanted to use this as an API gateway.
microservice 1 -> (contains data for all users of the bookstore)
microservice 2 -> (contains data for all the books)
**So my goal is to figure out a way to trace that request.** We can imagine the request going to nginx first.
Concern #1: When the request comes in to nginx, it is HTTP. Cool, but when the request is sent on to the microservice, it is a gRPC call (i.e. over HTTP/2). Can nginx receive an HTTP request and then send that request over HTTP/2? Not sure if I'm wording this correctly or not. I know NGINX Plus supports HTTP/2. I also know that gRPC has grpc-gateway too.
Concern #2: Containerization. Do I have to containerize both microservices individually, or should the whole application go into a single container? Is it simple to link nginx and Docker?
Concern #3: When tracing gRPC requests (finding out how long it takes for a request to be fulfilled), I'm considering using a middleware logger or a tracing API (OpenTracing, Jaeger, etc.) to do this. How else could I figure out how long gRPC requests take? One idea is sketched after this question.
I was wondering if it is possible to address these concerns, whether my thought process is correct, and whether this architecture is feasible.
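For Concern #3, even without a tracing API, wrapping each call in a timing helper gives rough numbers. Here is a minimal sketch (Swift, with a hypothetical client call; gRPC client interceptors apply the same measurement automatically to every RPC, and a real tracer attaches the duration to a span instead of printing it):

    import Foundation

    // Minimal timing wrapper: measures how long any async call takes.
    func timed<T>(_ label: String,
                  _ operation: () async throws -> T) async rethrows -> T {
        let start = DispatchTime.now()
        defer {
            let ns = DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds
            print("\(label) took \(Double(ns) / 1_000_000) ms")
        }
        return try await operation()
    }

    // Hypothetical usage: time the "favorite a book" RPC.
    // let result = try await timed("favoriteBook") { try await client.favoriteBook(id: 42) }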
Most solutions in the industry are implemented on top of a container orchestration solution (Kubernetes, Docker Swarm, etc).
It is usually not a good idea to "containerize" and manage a reverse proxy yourself.
The reverse proxy should be aware of the status of all containers (by hooking into the orchestrator) and dynamically update its configuration when a container is created, crashes, or is relocated (because a machine goes out of service).
Kubernetes handles gRPC using mesh networking; take a look at Kubernetes service meshes.
If you decide to use Traefik and Docker Swarm, check out Traefik's h2c support.
In conclusion, consider more modern alternatives to nginx when you want to load balance gRPC.

How can I implement a sub-api gateway that can be replicated?

Preface
I am currently trying to learn how microservices work and how to implement container replication and API gateways. I've hit a block, though.
My Application
I have three main services for my application.
API Gateway
Crawler Manager
User
I will be focusing on the API Gateway and Crawler Manager services for this question.
API Gateway
This is a Docker container running a Go server. The communication is all done with GraphQL.
I am using an API Gateway because I expect to have different services in my application, each with its own specialized API. The gateway is there to unify everything.
All it does is proxy requests to their appropriate service and return the response back to the client.
Crawler Manager
This is another Docker container running a Go server. The communication is done with GraphQL.
More or less, this behaves similarly to another API gateway. Let me explain.
This service expects the client to send a request like this:
    {
      # In production 'url' will be encoded in base64
      example(url: "https://apple.example/") {
        test
      }
    }
The url can only link to one of these three sites:
https://apple.example/
https://peach.example/
https://mango.example/
Any other site is strictly prohibited.
Once the Crawler Manager service receives a request and the link is one of those three, it decides which other service should fulfill the request. So in that way it behaves much like another API gateway, but a specialized one (see the sketch below).
Each URL domain gets its own dedicated service for processing it. Why? Because each site varies quite a bit in markup, and each site needs to be crawled for information. Because their markup varies, I'd like a service for each of them, so that if a site is updated the whole Crawler Manager service doesn't go down.
As far as querying goes, each site will return a response formatted identically to the other sites'.
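The routing step itself can stay very small. A sketch (in Swift purely for illustration; the real service is Go, and the crawler URLs are hypothetical):

    import Foundation

    // Allow list mapping each permitted domain to its dedicated crawler service.
    let crawlers = [
        "apple.example": "http://apple-crawler/graphql",
        "peach.example": "http://peach-crawler/graphql",
        "mango.example": "http://mango-crawler/graphql",
    ]

    // Returns the service that should fulfill the request, or nil if the
    // domain is outside the allow list (any other site is strictly prohibited).
    func route(_ url: URL) -> String? {
        guard let host = url.host else { return nil }
        return crawlers[host]
    }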
Visual Outline
Problem
Now that we have a bit of an idea of how my application works I want to discuss my actual issues here.
Is having a sort of secondary API gateway standard and good practice? Is there a better way?
How can I replicate this system and have multiple instances of the Crawler Manager service family?
I'm really confused about how I'd actually create this setup. I looked at clusters in Docker Swarm / Kubernetes, but with the way I have it set up, it seems like I'd need to make clusters of clusters. That makes me question my design overall. Maybe I shouldn't think about keeping them so structured?
At a very generic level, if service A calls service B, which has multiple replicas B1, B2, B3, ..., then it needs to know how to reach them. The two basic options are to have some sort of service registry that can return all of the replicas, from which you pick one, or to put a load balancer in front of the second service and reach that directly. Setting up the load balancer is usually a little easier: the service call can be a plain HTTP (GraphQL) call, and in a development environment you can just omit the load balancer and have one service call the other directly. (A code sketch of the registry option follows the diagram below.)
                                     /-> service-1-a
    Crawler Manager --> Service 1 LB --> service-1-b
                                     \-> service-1-c
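For illustration, the service-registry option boils down to client-side selection code like this (a sketch in Swift; the replica addresses are hypothetical, and a load balancer removes the need for this logic entirely):

    // Round-robin over known replica addresses: the simplest registry client.
    final class ReplicaPicker {
        private let replicas: [String]
        private var next = 0

        init(replicas: [String]) { self.replicas = replicas }

        func pick() -> String {
            defer { next = (next + 1) % replicas.count }
            return replicas[next]
        }
    }

    let service1 = ReplicaPicker(replicas: ["service-1-a:8080",
                                            "service-1-b:8080",
                                            "service-1-c:8080"])
    let target = service1.pick()  // e.g. "service-1-a:8080"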
If you're willing to commit to Kubernetes, it essentially has built-in support for this pattern. A Deployment is some number of replicas of identical pods (containers), so it would manage the service-1-a, -b, -c in my diagram. A Service provides the load balancer (its default ClusterIP type provides a load balancer accessible only within the cluster) and also a DNS name. You'd configure your crawler-manager pods with perhaps an environment variable SERVICE_1_URL=http://service-1.default.svc.cluster.local/graphql to connect everything together.
(In your original diagram, each "box" that has multiple replicas of some service would be a Deployment, and the point at the top of the box where inbound connections are received would be a Service.)
In plain Docker you'd have to do a bit more work to replicate this, including manually launching the replicas and load balancers.
Architecturally what you've shown seems fine. The big "if" to me is that you've designed it so that each site you're crawling potentially gets multiple independent crawling containers and a different code base. If that's really justified in your scenario, then splitting up the services this way makes sense, and having a "second routing service" isn't really a problem.

How do I create an AWS server that can receive data continuously from an iOS (Swift) app?

I am trying to code an iOS application and already have an EC2 server designated for the app. I want to know how the app could send image data to the server. The EC2 server would receive incoming image data continuously from all the users of the app, and would then process the data. It would be similar to what applications such as Instagram do, though of course not at such a large scale.
I am a beginner at client-server communication and want to know how to implement this in my app. I also do not use Stack Overflow too frequently, so please tell me if I am doing something wrong or if you need more information.
To be more specific: a user would post an image in the app. I have already set up an EC2 server to possibly receive that image. I want all of the images that users post to be delivered, processed, then stored on the EC2 server. Is there some way to handle the actual delivery of the data? The question is a little broad because I want to know where to look. Would I have to write a script that is constantly running in the background, receiving data at some port? Is there another service I could use that handles this?
Briefly, you'll have issues running this on EC2 if you have many users sending images at the same time.
Look into setting up API Gateway <-> Lambda <-> DynamoDB or S3 on AWS. Then your client can POST images/data to your gateway with an HTTP request.
First you must decide whether your data is streaming (continuously pushed from the server) or stored (pulled from the server as needed). The Instagram example you provided suggests that you have no need for real-time streaming data.
A streaming solution is more complicated and will typically require a technology like WebSockets (or AWS IoT). A storage solution will be much simpler.
For storage you have the choice between creating and managing server(s) on a platform like EC2 (you'll need more than one server to scale to many users), or using a managed 'serverless' technology like Lambda, where you only need to provide the code. The tradeoff for this convenience is usually price.
For image storage, a typical pattern is creating database records that contain an S3 URL for the underlying image (as well as any metadata). You can create this database record and upload your file using whatever server technology you choose; Lambda may require an API Gateway in front of it, but remember that the AWS SDK can invoke Lambda functions directly.
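As a sketch of that pattern from the client side (Swift, assuming the backend has already handed the app a presigned S3 PUT URL; all names are illustrative):

    import Foundation

    // Uploads image bytes directly to S3 via a presigned PUT URL obtained
    // from the backend (for example, a Lambda behind API Gateway).
    func upload(image: Data, to presignedURL: URL) {
        var request = URLRequest(url: presignedURL)
        request.httpMethod = "PUT"
        request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")

        URLSession.shared.uploadTask(with: request, from: image) { _, response, error in
            if let http = response as? HTTPURLResponse, (200..<300).contains(http.statusCode) {
                print("upload complete")
            } else {
                print("upload failed: \(error?.localizedDescription ?? "server error")")
            }
        }.resume()
    }

Uploading straight to S3 keeps large image bodies off your compute tier; the EC2 instance (or Lambda) then only handles the small metadata record.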

Server side requirements for a SIP VoIP app

I'm at the beginning of trying to understand the requirements for developing a VoIP app. From what I've learned so far, frameworks that allow for communication using SIP/TCP are the best option (I don't intend to implement SIP myself).
However, although SIP can be peer-to-peer, it's recommended to use a SIP server. I'm finding it difficult to locate information about which SIP services are appropriate for an iOS application, and what is required from me in terms of server setup, so that I can concentrate on client-side development.
Any advice would be much appreciated.
You need to figure out your use cases to decide. A SIP server is like an HTTP server: it analyses the request URI, the request headers, and whatever hints it can see, then executes some resource at the backend. Think about whether you plan to have a user database and authentication. Do you want presence? Do you want voicemail, call transfers, PBX features? Do you want video, audio, IM? Do you want to support arbitrary endpoints? Encryption, NAT traversal, HA? Only then can you think about actual servers and hosting. Many "minimal" configurations will include at least one SIP/media front-end (for NAT/SBC), a SIP/media server (to act on requests), a database server (to store persistent state), and an HTTP server (for config/admin UIs). While there are products that combine some of these into a single server, they are generally at least reasonably isolated modules.
