Is there a recommended value for the request timeout of the Apigee Edge management API when called from DevConnect?
It depends on two factors:
User experience for the particular use case
Network latency between the frontend (DevConnect server) and the backend (Management Server)
My personal favorite is about 20 seconds for a workable web interface.
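As an illustration of enforcing such a budget explicitly (a Python sketch, not DevConnect's actual code; the URL is whatever management endpoint you call), a hard timeout lets the UI show an error instead of hanging:

```python
import urllib.request
import urllib.error

# 20 seconds is the suggestion above; tune it to your own latency.
TOTAL_BUDGET = 20  # seconds: keeps the web UI responsive

def call_management_api(url, timeout=TOTAL_BUDGET):
    """Return (status, body), or (None, reason) on a connection failure,
    so the frontend can render a friendly error instead of blocking."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.read()
    except urllib.error.URLError as err:
        return None, str(err.reason)
```

The same idea applies in whatever HTTP client DevConnect actually uses: set an explicit per-request timeout rather than relying on the library default.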
Related
A former employee planned a microservice architecture that is now being implemented. I have a few questions regarding the design and would highly appreciate your feedback.
Explanation
The Dematerialized UI has a matching Dematerialized API.
The Dematerialized API validates the user and generates a token via the SSO library.
The Flight API does the I/O validation and validates the request via the Validate Request microservice.
The Flight API calls the Booking API to get some bookings based on the UserId.
The Flight API calls the Print Booking API to generate messages using the Generate Message microservice.
The Print Booking API must call the Data Access API to get data and then call the Generate PDF microservice.
The Data Access API calls the database for data.
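The flow above can be sketched end to end with every service stubbed as a plain local function (all names and return values here are stand-ins for illustration; the real calls would be HTTP or messaging between services):

```python
# Stubs standing in for the individual services.
def validate_request(token):      return token.startswith("tok-")
def booking_api(user_id):         return [{"id": 1, "user": user_id}]
def generate_message(b):          return f"Booking {b['id']} confirmed"
def data_access(db, bookings):    return [db.get(b["id"]) for b in bookings]
def generate_pdf(messages, data): return {"pages": messages, "data": data}

class SSO:  # stand-in for the SSO library
    def issue_token(self, user):  return f"tok-{user}"

def handle_print_booking(user, sso, db):
    token = sso.issue_token(user)                        # Dematerialized API + SSO
    if not validate_request(token):                      # Validate Request microservice
        raise PermissionError("invalid token")
    bookings = booking_api(user_id=user)                 # Booking API
    messages = [generate_message(b) for b in bookings]   # Generate Message microservice
    data = data_access(db, bookings)                     # Data Access API
    return generate_pdf(messages, data)                  # Generate PDF microservice
```

Writing the chain out like this also makes the coupling visible: every step is a synchronous call, which is one of the concerns raised in the answers below.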
My Project Structure
FlightBookingsMicroserice.V1 //solution
ApiGatways //folder
DMZ.API/DMZ.API.csproj //Folder/project
BuildingBlocks
EventBus/EventBus.csproj
EventBus/EventBusRabbitMQ
Services
SSO
SSO.API/SSO.csproj
SSO.UnitTests
Flight
Flight.API/Flight.API.csproj
Flight.UnitTets
//Similar for all
ValidationRequest
Booking
PrintBooking
PrintBooking.API.csproj
DataAccess
DataAccess.API.csproj
GeneratePDF
GenerateMessage
UI
UI
Docker-compose
Questions
Should I be using Ocelot in DMZ.API.csproj, the Flight API, and the Print Booking API?
Is my project structure a microservice way of development?
Should I continue to use ASP.NET Core Web API with .NET 6 for the Dematerialized API in orange, the Function APIs in blue, and the microservice projects in purple?
For validation, since the SSO token is passed from the Dematerialized UI, what happens if the token expires while CRUD operations have already been performed for some stages? (Rolling back changes is a hassle.)
Should each API access an identity server, validate the user passed to it, and generate its own token for its services in purple?
Thank you in advance.
The core question is whether you really need all those services and whether you are perhaps making things too complicated. The important thing is to really consider, and make sure you can justify, why you want to go down this route.
If you do synchronous API calls between the services, that creates coupling and, in the long run, a distributed monolith.
For question #4, you typically use one access token for the user to access the public service, and then you use a different set of internal tokens (machine-to-machine, also called client credentials in OpenID Connect parlance) between services, with a totally different lifetime.
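As a sketch of what that internal machine-to-machine token request looks like on the wire (the form body follows the OAuth 2.0 client-credentials grant, RFC 6749 §4.4; the client id, secret, and scope here are hypothetical):

```python
from urllib.parse import urlencode

def client_credentials_body(client_id, client_secret, scope):
    """Form-encoded body a service POSTs to the identity server's token
    endpoint to obtain its own internal token, separate from the user's
    access token and with its own lifetime."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
```

Because this token belongs to the calling service rather than to the user, its expiry is independent of the user's session, which is exactly what you want for long-running internal operations.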
Q1: Ocelot is an API gateway, which is the entry point for your requests. It should be the first layer/service a user request meets in front of your services, and it forwards the request to the right service according to its configuration. So it sits in front of all the services you have. Some architectures provide additional API gateways for different reasons, for example a dedicated gateway for mobile requests.
Q2: Looking at the separate services (I can't tell what the Function APIs are, but I assume they are services too), yes. But microservices development is not just about separating things; it's about design and identifying the services from the business context (Domain-Driven Design). It is very challenging to identify the services, their size, and the way they communicate with each other (asynchronous vs. synchronous communication).
Q3: Microservices are not about languages and frameworks. One of the benefits of a microservices architecture is that it is not language- or framework-dependent; multiple languages may be used across the microservices. Choosing a language depends on organization policy or your own reasons. If you are a .NET developer, then go for .NET.
Q4: All the services are registered with the identity server and validate a given token against it. The identity server generates tokens (there may be multiple tokens) with scopes. Requests from identified users always carry the token in the headers, and the services validate the incoming token by referring to the identity server. These tokens have a lifetime, and the identity server also generates refresh tokens for when the current token expires. Please look at the OAuth docs and RFCs. This series may also help (you can skip the basic topics; I learned a lot from it): https://www.youtube.com/watch?v=Fhfvbl_KbWo&list=PLOeFnOV9YBa7dnrjpOG6lMpcyd7Wn7E8V
I am currently implementing an SSO-OBO flow for my add-in. In this flow, I request a bootstrap token from the Azure identity platform using the Office.Auth.getAccessToken() API and then exchange it for a Microsoft Graph token using the auth-code flow. It works correctly.
This entire process of getting the Bootstrap token, and exchanging it for a Microsoft GRAPH token takes around 2.52 seconds. (Average time over 10 trials.)
I have two questions related to this:
Is 2.52 seconds normal for this type of transaction?
If it is not normal, are there any documents or techniques that can help me increase the speed of this 'exchange'?
Please note that I used a free-tier AWS Lambda/API Gateway account for the tests I executed. I am not sure whether that affects latency. Also, the tests were done over a 50 Mbps connection.
Thank you all in advance.
I see similar numbers (a bit faster though, ~800 ms; probably related to the Azure datacenter location vs. your app's location). Bandwidth does not matter, since the amount of data transferred is tiny, but ping does (I would at least make sure that your AWS app and the Azure datacenter you are making requests to are located in the same country / part of the world).
You can cache the token (in local storage, for example) and use the cached copy until it expires. The expiration time is written in the token itself (the "exp" field; you can inspect the token's contents using https://jwt.io, for example). Or you could use a library that does the caching for you.
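A minimal sketch of that caching idea, reading the "exp" claim without verifying the signature (fine for a cache decision on the client; actual verification stays on the server). The cache key and skew value here are illustrative choices, not part of any Office or Azure API:

```python
import base64
import json
import time

_cache = {}  # token cache keyed by resource; localStorage would play this role in an add-in

def _expiry(token):
    """Read the 'exp' claim from a JWT payload (no signature check)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))["exp"]

def get_token(resource, fetch, skew=60):
    """Return the cached token while it is still valid (minus a safety
    skew); otherwise call `fetch()` to do the real 2.5-second exchange."""
    tok = _cache.get(resource)
    if tok is None or _expiry(tok) - skew <= time.time():
        tok = _cache[resource] = fetch()
    return tok
```

With this in place, the expensive bootstrap-plus-exchange round trip happens roughly once per token lifetime instead of once per call.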
I currently have my web application hosted on AWS, and I use two ELB instances, one to load balance the frontend requests to the app servers, and a second to load balance the backend requests FROM the app servers TO the API servers, like so (sorry for the crappy ascii diagram):
            /-->APP1--\      /-->API1
User-->ELB1            ELB2
            \-->APP2--/      \-->API2
In other words, the API requests that the APP servers make are load balanced evenly across the two backend API servers.
But, because I'm caching responses on the API servers, and use a cache invalidation mechanism which is NOT shared between the API servers, I'd like for a user's session to be stuck to one backend API server.
I already have the user's session stuck to one APP server, using the normal ELB load balancer-generated cookie stickiness, but is there any way to get the backend ELB stuck to a session? Of course, those requests are not coming from a browser, so there's nothing to manage cookies, and it seems that ELBs can only manage stickiness with cookies. Can I emulate the necessary cookies in my backend requests?
To close off this question: yes, this is fairly easy to achieve by simply capturing the 'Set-Cookie' response header from the ELB and then passing the cookie back in subsequent requests. But see my caveat below.
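For illustration, the cookie capture can be as small as this (a hedged Python sketch; AWSELB is the name ELB uses for its generated stickiness cookie, and the parsing here is deliberately naive):

```python
def sticky_cookie(set_cookie_header):
    """Extract the ELB-generated stickiness cookie (named AWSELB) from a
    Set-Cookie header, dropping attributes such as Path and Expires.
    Send the returned 'name=value' pair back in a Cookie header on
    subsequent requests to stay pinned to the same backend."""
    for part in (set_cookie_header or "").split(","):
        pair = part.strip().split(";")[0]
        if pair.startswith("AWSELB="):
            return pair
    return None
```

Your app-server HTTP client just needs to store this value per user session and attach it to every request it makes to ELB2.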
I don't believe it would be possible to achieve stickiness between your App servers and API servers without doing a whole load of messy work. I could be wrong, and am very open to correction but I don't believe there is an easy solution, unless the language you're using for your App Server logic has something to offer.
Regardless, the best solution here would be to decouple your App Servers and your cache. It would make more sense to have a single cache, hosted on separate servers, shared between the API servers. This will increase your infrastructure's fault tolerance and give you better-quality data in your cache (especially as you scale up). You could use the ElastiCache service to do this for you and avoid any heavy lifting.
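The shared-cache idea can be sketched like this (a plain dict stands in for an ElastiCache/Redis instance, so this shows only the shape of the logic, not a real client):

```python
shared_cache = {}  # stand-in for one ElastiCache/Redis instance

def get_or_fetch(key, fetch):
    """Read-through cache: any API server may populate the shared store,
    and every other server then sees the same entry."""
    if key not in shared_cache:
        shared_cache[key] = fetch()
    return shared_cache[key]

def invalidate(key):
    """Because the cache is shared, invalidation is immediately visible
    to all API servers, so session stickiness is no longer needed."""
    shared_cache.pop(key, None)
```

The point is that once invalidation goes through one shared store, it no longer matters which API server a given request lands on.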
I have created a simple ASP.NET Web API (self-hosted with OWIN) that has the endpoint http://MyIp:80/process/file. It can accept only five simultaneous requests and takes about 30 seconds to process each one; if that number is exceeded, the REST API returns an HTTP error. To handle more requests, I am planning to host the ASP.NET Web API application on a second server as well, but in that case the REST API will have a different IP address. I know that there are load-balancing solutions, but I can't find a good source on how to use them with ASP.NET Web API. Any advice would be appreciated!
The easiest way for you would be to use the cloud. For Microsoft technologies in particular, I would recommend Microsoft Azure. You will get scaling and load balancing out of the box, as well as many other benefits of the cloud.
This is a poor man's load balancing... bear with me.
You can declare one server as primary and the second one as secondary.
If a request hits the primary but it is already processing those 5 requests, it can issue an HTTP 307 Temporary Redirect to the secondary one. If the secondary is also overloaded, then return an error.
With a 307 temporary redirect, the caller knows it can retry the same HTTP verb against the new location. So if the caller was making a POST call to the primary and gets an HTTP 307 pointing to the secondary, it will reissue the POST request to the secondary.
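The decision logic of this scheme can be sketched as follows (the limit of five matches the question; the secondary's address is hypothetical, and a real server would track in-flight requests with a counter or semaphore):

```python
MAX_INFLIGHT = 5  # the question's limit of five simultaneous requests
SECONDARY = "http://secondary.example.com/process/file"  # hypothetical

def respond(inflight, is_primary=True):
    """Decide the response for an incoming request given how many
    requests this server is already processing."""
    if inflight < MAX_INFLIGHT:
        return 200, {}                        # handle it here
    if is_primary:
        # 307 preserves the verb: a POST is re-sent as a POST.
        return 307, {"Location": SECONDARY}
    return 503, {}                            # both servers are full
```

The asymmetry (primary redirects, secondary errors) is what prevents two saturated servers from bouncing a request back and forth forever.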
My iOS app uses a single hard-coded URL, api.xyz.com, to find our REST service. At the moment there are just two servers running this service, and we use Amazon Route 53 DNS. But I've found that the timeout of an hour (or more) is too long in case one of our servers fails; I don't want to leave users in the dark that long.
The alternative would be to implement a failover mechanism in the app. To be honest, I don't like the idea of pulling this low-level, DNS-related logic into the app, but I don't see another solution at the moment.
So my question is: How do I implement such a failover mechanism on iOS? I'm using AFNetworking for my REST API.
Or, are there better alternatives on server side? At the moment the servers are individually rented ones, so no Amazon, Google, ... cloud service.
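As a language-agnostic sketch of the in-app failover idea (Python standing in for the AFNetworking client; both hostnames are hypothetical, and a real app would also want backoff and a "remember the working host" optimization):

```python
import urllib.request
import urllib.error

# Hypothetical fallback hosts baked into the client, tried in order.
BASE_URLS = ["https://api.xyz.com", "https://api-backup.xyz.com"]

def fetch_with_failover(path, bases=BASE_URLS, timeout=5):
    """Try each host with a short timeout; raise only if all of them fail.
    This keeps the wait per dead host to seconds instead of the DNS TTL."""
    last_err = None
    for base in bases:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as r:
                return r.read()
        except (urllib.error.URLError, OSError) as err:
            last_err = err  # move on to the next host
    raise last_err
```

In AFNetworking terms this would be a failure block that retries the same request against the next base URL.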