I use IdentityServer4 (with ASP.NET Core Identity) as a centralized auth point for multiple applications.
In one of the applications I want to set up a scheduled job that sends e-mail notifications to multiple users. The job will not execute in the context of a single user request, so it will have no access to user claims and therefore no place to read a user's e-mail address from. This means I will have to duplicate e-mail addresses in the application DB.
But how do I keep the e-mail addresses in sync between the app and IdentityServer when a user changes theirs?
A good approach would be to implement integration events in your system. This is a mechanism that raises an event, 'this special thing happened', and allows other parts of your system to be notified.
You can use RabbitMQ or Azure Service Bus, for example, to send the messages. Every system subscribed to that kind of message will receive it.
So in your case, you would create an event called, for example, UserChangedEmailAddressIntegrationEvent. Then, in your emailing system, you subscribe to exactly this event. Once it's raised, your emailing system will receive the message and be able to handle it.
The UserChangedEmailAddressIntegrationEvent could in fact be a class containing (for example) two properties, OldEmail and NewEmail, so the emailing system knows which value to change.
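For illustration, here's a minimal sketch of what that event and a handler on the emailing side could look like. The class shape, the extra UserId property, and the AppDbContext are my own assumptions, not part of any particular library:

```csharp
using System.Threading.Tasks;

// Hypothetical integration event published by the identity side
// whenever a user changes their e-mail address.
public class UserChangedEmailAddressIntegrationEvent
{
    public UserChangedEmailAddressIntegrationEvent(string userId, string oldEmail, string newEmail)
    {
        UserId = userId;       // added here so the handler can locate the right row
        OldEmail = oldEmail;
        NewEmail = newEmail;
    }

    public string UserId { get; }
    public string OldEmail { get; }
    public string NewEmail { get; }
}

// Hypothetical handler in the application that keeps the duplicated e-mail column.
// AppDbContext stands in for whatever data access you already use.
public class UserChangedEmailAddressHandler
{
    private readonly AppDbContext _db;

    public UserChangedEmailAddressHandler(AppDbContext db) => _db = db;

    public async Task HandleAsync(UserChangedEmailAddressIntegrationEvent evt)
    {
        var user = await _db.Users.FindAsync(evt.UserId);
        if (user != null)
        {
            user.Email = evt.NewEmail;
            await _db.SaveChangesAsync();
        }
    }
}
```

Wiring the handler to RabbitMQ or Service Bus is then a matter of whichever bus/event-bus library you choose.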
See the eShopOnContainers example project, which has this exact technique implemented
https://github.com/dotnet-architecture/eShopOnContainers
We faced a similar problem: we had Hangfire jobs running that would extract reporting data from our system and then e-mail the reports to the set of users configured when the scheduled job was created.
We also used IdentityServer 4 with ASP.NET Identity and ended up storing the user ids on the scheduled job info. We then created an API endpoint on our ASP.NET Identity side that serves back the required user info for a given user id (or a list of user ids). Lastly, we used the client_credentials grant and created a client to consume that API during job execution, retrieving the appropriate user info at that point in time from our ASP.NET Identity API.
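For reference, the job-side call ends up looking roughly like this with a plain HttpClient. The authority URL, client id/secret, scope, and the /api/users endpoint below are placeholders for our own setup, not fixed names:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public class UserInfoClient
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task<string> GetUsersJsonAsync(IEnumerable<string> userIds)
    {
        // 1. Get an access token from IdentityServer using the client_credentials grant.
        var tokenResponse = await Http.PostAsync("https://identity.example.com/connect/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = "reporting-job",
                ["client_secret"] = "<secret>",
                ["scope"] = "identity-api"
            }));
        tokenResponse.EnsureSuccessStatusCode();

        using var tokenJson = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
        var accessToken = tokenJson.RootElement.GetProperty("access_token").GetString();

        // 2. Call the (hypothetical) user-info endpoint exposed next to ASP.NET Identity.
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://identity.example.com/api/users?ids=" + string.Join(",", userIds));
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```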
This approach has worked well for us so far, and it removed the pain of having to think about how to keep the data in sync altogether.
I'm mid-way through a task to migrate a legacy .NET MVC app to use Single Sign-On (SSO) to make integration with a to-be-developed mobile app possible. I'm planning on using Azure AD B2C to facilitate this and, based on my research, I need to use custom policies to achieve the required functionality.
Work on this migration is proceeding very slowly. I'm finding the custom policy XML very clunky to work with. It's going to take quite some time to achieve parity with the existing system given the current velocity. I'm wondering whether it would be wise to sidestep a lot of the migration headaches by using the Microsoft Graph API in place of custom policies.
Take registration, for example. It appears common to redirect the user to a SignUp.xml custom policy (or the integrated SignUpOrSignIn.xml) to handle adding the user record in the AD B2C data store. Part of this policy would involve calling a REST API to create a corresponding record for this user in the app's database (which stores email settings and such). Instead of using these custom policies, my plan would be to take the existing registration process and simply add a step which creates the user record on the B2C side using the Microsoft Graph API.
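For concreteness, the kind of Graph call I have in mind is roughly the following (the tenant name and field values are placeholders, and I've left out the app-only token acquisition; I may well be missing something here):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class B2CUserCreation
{
    public static async Task CreateUserAsync(HttpClient http, string appAccessToken)
    {
        // JSON body for a B2C "local account" user; the issuer is the B2C tenant domain.
        var body = @"{
          ""displayName"": ""Jane Doe"",
          ""identities"": [
            {
              ""signInType"": ""emailAddress"",
              ""issuer"": ""yourtenant.onmicrosoft.com"",
              ""issuerAssignedId"": ""jane@example.com""
            }
          ],
          ""passwordProfile"": {
            ""password"": ""<initial password>"",
            ""forceChangePasswordNextSignIn"": false
          },
          ""passwordPolicies"": ""DisablePasswordExpiration""
        }";

        var request = new HttpRequestMessage(HttpMethod.Post, "https://graph.microsoft.com/v1.0/users")
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        // Token obtained elsewhere via client_credentials for an app registration
        // that has been granted the User.ReadWrite.All application permission.
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", appAccessToken);

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}
```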
It appears that most things I need can be achieved using the Microsoft Graph API. The things I'd need that I can see are not available are:
logging in to a user account, and
sending verification emails
Are there any other common authentication-related tasks I'm likely to need that couldn't be achieved using the Graph API?
As for downsides, the fact that I'd be handling user passwords (even if it was just to create the user and nothing else) is an obvious concern, but perhaps an acceptable one. The main thing I'm after is a simple SSO solution that generates secure access tokens (incl. handling reset tokens, etc.). I hope, then, that this could be a feasible option.
You will miss out on password reset, profile edit, SSO, token expiration, etc.
A better way may be to use the base custom policies and achieve a lot of what you need by having the policies call REST APIs.
What is your use case?
I am stumped trying to find anything similar on how we can achieve this.
We are currently using a model-driven Power App / Dataverse to house school applications. Once an application is submitted, our representatives begin updating the application record's "Application Status" custom field as they go through the various steps.
Our partner wants to create an external website (just simple HTML/CSS/JavaScript) to display an application lookup where applicants can type in their application ID, or to send applicants direct links to view the status of their application (example: domain.com/application-status/?appid=1234).
This external, public website would have to connect to our Dataverse/Power App via the Web API to make the request and display the result to the applicant searching/viewing the website.
How can this be achieved? All I have read is that the user looking up data would need to have a Microsoft account and be authenticated in our environment to view the data.
Can someone point me in the right direction on how to get this done (an article or existing thread)? Your help is highly appreciated.
This is normally handled by using a Power Platform Portal.
Portals are designed to allow B2B/B2C interactions.
They do, however, come with a hefty price tag.
Another way is to have your website make REST (Web API) calls to your Dataverse tables.
To enable these, you need to create a client application registration in Azure AD and add it to your environment as an application user. Once registered, assign the appropriate rights (Sysadmin, Syscustomizer, whatever you want) and you can access your environment in two steps, sketched below:
Generate an access token based on the scope of your environment, the client id, and the client secret.
Use the access token issued to your application user to do your CRUD operations.
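A rough server-side sketch of those two steps follows. The tenant id, client id/secret, environment URL, and the table/column names are placeholders for your own environment, and note that the client secret has to stay server-side (in an API or function the website calls), never in the public site's JavaScript:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public class DataverseStatusClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Placeholders: your tenant id, app registration, and environment URL.
    private const string TokenUrl = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token";
    private const string EnvironmentUrl = "https://yourorg.crm.dynamics.com";

    public async Task<string> GetApplicationStatusAsync(string appId)
    {
        // Step 1: client_credentials token scoped to the Dataverse environment.
        var tokenResponse = await Http.PostAsync(TokenUrl, new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "<client-id>",
            ["client_secret"] = "<client-secret>",
            ["scope"] = EnvironmentUrl + "/.default"
        }));
        tokenResponse.EnsureSuccessStatusCode();
        using var tokenJson = JsonDocument.Parse(await tokenResponse.Content.ReadAsStringAsync());
        var accessToken = tokenJson.RootElement.GetProperty("access_token").GetString();

        // Step 2: query the Web API. Table ("new_applications") and column names are hypothetical.
        var request = new HttpRequestMessage(HttpMethod.Get,
            $"{EnvironmentUrl}/api/data/v9.2/new_applications?$select=new_applicationstatus&$filter=new_appnumber eq '{appId}'");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        request.Headers.Add("OData-MaxVersion", "4.0");
        request.Headers.Add("OData-Version", "4.0");

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```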
I'm developing a Mac app that uses CloudKit as its back-end. Some of my users are requesting the ability to ingest and extract data via an automation/integration service such as Zapier. For this, I need to introduce a web API.
I am planning to use CloudKit Web Services to access the app's data. This data is user-specific and hence, resides in a private database. As a result, CloudKit requires user authentication as described here.
Essentially the user needs to be redirected to an Apple-hosted authentication page. After successful authentication, an authentication token is provided that can be used for data operations. This is similar to how OAuth2 works, but different enough not to work with Zapier's (or probably any other similar service's) supported authentication schemes.
Who has done something similar? What are my options? I want to keep things as simple as possible and make my web API's implementation as thin as possible.
This is definitely doable and you are on track with your thinking. Here's how I envision it working:
You could do all of this with a front-end web app (no server-side app needed). I personally prefer Vue.js but you probably have something in mind already.
Your app will need to authenticate the user to CloudKit using the flow you mentioned. I highly recommend you use the Web Services API and not try to wrestle Apple's neglected CloudKit JS API. For this, you are going to need to generate an API token in the CloudKit Dashboard.
Your app would then prompt the user to authenticate to Zapier.
You should now have user credentials for both CloudKit and Zapier in place in the user's browser cache (you can save, for example, the CloudKit token to sessionStorage and likewise with Zapier).
Make API calls to Zapier, pull down the data, and then save it to CloudKit all within your JS app. It's all API transactions at this point. I'm a fan of Axios for making the HTTP requests.
If you are downloading files, transacting huge amounts of data, or doing processor-intensive stuff, you might consider using a server for that work. But if you just need a place to pull and push reasonable chunks of data, I see no reason why you can't do it all in a front-end app.
Alternatively, if you don't want a web app at all, and want to only have the user work in the Mac app, that can be done, too. Just make API calls directly to Zapier from within your Cocoa app. Whether or not this is feasible depends some on how you want it to work.
If you have more specific questions or need help with any of the implementation details, feel free to add a follow-up comment or ask a new question.
Good luck!
I think the other answer is mostly correct. I don't know much about CloudKit, but we can talk through what you'd need for it to work.
Let's say you had a simple iOS app that stored contacts. On the iOS side, Apple presumably abstracts the upload and download operations.
If you wanted to make a web viewer for synced contacts using CloudKit, you'd need an endpoint to fetch all rows belonging to the authenticated user (each of which would have a UUID, name, and a phone number). I believe that's possible with CloudKit code Apple provides (but let me know if I'm off base).
Now, we want to integrate with Zapier. Say, a "New Contact" trigger. You make some sort of authenticated HTTP request from Zapier to Apple on behalf of an authenticated user. It gives back a list of contacts and Zapier can trigger on the ones it hasn't seen before. To do that, Zapier needs some sort of user token.
That's where the little front-end page the other answerer mentioned comes into play. If you've got a web page that can surface a user's token to them, they can paste it into Zapier and all of the above becomes possible. I'm not sure what the lifespan of the token is, but hopefully it can be automatically refreshed as needed (rather than the user needing to take any manual action).
I'm not sure if what I've described is possible, but do let me know if it is. It would be huge if it were possible to integrate Zapier and the iOS ecosystem!
Edit to respond to comments:
"Zapier won't be able to interact with CloudKit in a way sufficient for me (some minor business logic is needed)"
I'm not sure what that entails for you, but it's common to put logic in the Zapier integration to structure data in a way Zapier expects. There's a full Node.js execution environment, so the sky's the limit here.
"I don't think Zapier can authenticate against CloudKit as it uses a non-standard authentication scheme"
Once you've got a user's token (obtained as described above, which is the unusual part), you will almost certainly be able to use it in requests to CloudKit. Zapier provides a "custom" authentication scheme which lets you do basically anything you want. So unless Apple uses something that fetch can't handle (unlikely), it should be fine once you get the token.
"I would like to push data directly from my app into Zapier and have it do whatever magic the user has configured"
This is also probably possible. Zapier ingests data in two ways:
polling, where Zapier frequently makes a web request, stores the IDs of items it has seen before, and acts on the new ones. You can read more about that here. Assuming you can work your business logic into the integration, this doesn't require an external server besides Apple's
webhooks, where Zapier registers a subscription with you and you send new data, on demand, to that address. This would probably require a web server on your end. It's optional though - you can do polling instead.
Hopefully this cleared it up a bit. Feel free to reach out to partners#zapier.com and reference this question to talk more about it.
I have been working on an Angular/Ionic application and am using the OAuth.io plugin to handle a Facebook login to obtain a user's information. From that I derive a simple database name based on the user's first name and their Facebook ID number.
What I want to do is sync this local PouchDB instance to an online CouchDB instance (currently using http://iriscouch.com) for replication to a desktop app, or something similar. The piece I am missing is how to handle user authentication/authorization so that each user can only read and write their own database and no one else's, as all of the code currently lives on the client side with no app server to handle any login aside from the OAuth.io plugin.
Is this possible to handle without adding an app server layer, and without manual intervention to create a user on the CouchDB instance?
Currently you can only do per-user read-write permissions in CouchDB by having an additional process on the server side (details), which would be troublesome for you since you're using IrisCouch, so you'd need a separate server somewhere to host this process.
A few alternative options are available to you right now:
Couchbase, which has per-user databases
Janus, which works using Mozilla Persona rather than Facebook ID, and isn't ready yet, but should be unveiled shortly
I'm developing an application that has various types of Notifications. Examples of notifications:
Message Created
Listing Submitted
Listing Approved
I'd like to tie all of these up to SignalR so that any connected clients get updates in real-time.
As far as architecture goes - right now the application is entirely within a single solution hosted on an Azure Website. The triggers for each of these notification types live within this application.
When a trigger is hit, I'd like to tell SignalR, "Hey, send this message to the following clients" along with a list of userIds. I'm assuming that it's possible to identify connected clients based on userId... and I'm assuming that the process of sending messages to clients should be executed outside of the web application, so as to not slow down the MVC app or risk losing data in a broken async call. First question - are these assumptions correct?
Assuming so, this means that I'll need something like a dedicated web/worker role to be sending messages to clients. I could pass messages from my web application directly to this process, but what happens if the process dies? The resiliency concerns lead me to believe that the proper way to pass messages would be via a queue of some sort. Second question - is this a valid train of thought?
Assuming so, this means I could use a good ol' Azure SQL database as a queue, but it seems like there are some specialized (and maybe cheaper) services to handle message queueing, such as this:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/queue-service/
Third question: Should this be used as a queueing mechanism for SignalR? I'm interested in using Redis for caching in the future... would Redis be better or worse than the queue service?
Final Question:
I've attempted to illustrate my proposed architecture here:
What I'm most unclear on here is how the MVC app will know when to queue, or how the SignalR processes will know when to broadcast. Should the MVC app queue blindly, without caring about connected clients? This seems to introduce a lot of wasted space on the queue, and wasted cycles in the worker roles, since a very small percentage of clients will ever be connected.
The only other approach I can think of is to somehow give the MVC app visibility into the SignalR processes to see if the client is connected... and if they are, then Enqueue. This makes me uncomfortable though because it means I have to hit that red line on the diagram for every trigger that gets hit, which - even if done async - gets me worrying about performance and reliability.
What is the recommended architecture for scalable, performant SignalR message broadcasting? Performance is top priority, followed closely by cost.
Bonus question:
What if some messages are of higher priority than others? Should two queues be used, one of which always gets checked before the other?
If you want to target specific users, you'll have to come up with a mechanism. Off the top of my head, one example: if a user hits a page, you can create a group for that page and push to all users in that group (i.e. on that page).
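A minimal sketch of that group-per-page idea with classic ASP.NET SignalR 2.x (the hub, method, and group names here are made up for illustration):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Clients join a group named after the page they are viewing.
public class PageGroupHub : Hub
{
    // Called from the client when a page loads, e.g. hub.server.joinPage("listings").
    public Task JoinPage(string pageName)
    {
        return Groups.Add(Context.ConnectionId, pageName);
    }

    public Task LeavePage(string pageName)
    {
        return Groups.Remove(Context.ConnectionId, pageName);
    }
}

// Anywhere in the web app (e.g. when "Listing Approved" fires) you can push to that group.
public static class Notifier
{
    public static void NotifyPage(string pageName, object payload)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<PageGroupHub>();
        hub.Clients.Group(pageName).notify(payload);
    }
}
```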
It's not clear to me why you need the queues. Usually users subscribe to some events when hitting a page, or by some action like joining a chat room, and the server pushes data using those events/functions when appropriate.
For scalability, you can run SignalR on multiple servers, in which case you should use SQL Server, Service Bus, or Redis as a backplane.
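For reference, wiring up a backplane in a classic SignalR 2.x OWIN startup looks roughly like this; the connection strings are placeholders and you'd pick just one of the options (each lives in its own NuGet package):

```csharp
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // SQL Server backplane (Microsoft.AspNet.SignalR.SqlServer package):
        GlobalHost.DependencyResolver.UseSqlServer("<your-connection-string>");

        // Or Azure Service Bus (Microsoft.AspNet.SignalR.ServiceBus package):
        // GlobalHost.DependencyResolver.UseServiceBus("<service-bus-connection-string>", "signalr");

        // Or Redis (Microsoft.AspNet.SignalR.Redis package):
        // GlobalHost.DependencyResolver.UseRedis("<host>", 6379, "<password>", "signalr");

        app.MapSignalR();
    }
}
```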
Firstly you need to create a SignalR server to which all the users can connect. This SignalR server can be created in either the web role or the worker role. If you have a huge user base, it's better to create the SignalR server in a separate role.
Then, wherever the trigger is hit and you want to send messages to users, you create a SignalR client (.NET or JavaScript) and connect to the SignalR server. You can then send the message to the SignalR server, which in turn will broadcast it to all the other connected users. After that you can disconnect from the SignalR server. This way you don't have to use queues to communicate with the SignalR role.
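A rough sketch of that sender side using the Microsoft.AspNet.SignalR.Client package (the hub name "NotificationHub" and method "SendToUser" are examples, not required names):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;   // Microsoft.AspNet.SignalR.Client NuGet package

// Used from wherever the trigger fires (web role, worker role, etc.).
public static class SignalRSender
{
    public static async Task SendAsync(string serverUrl, string userId, string message)
    {
        var connection = new HubConnection(serverUrl);
        var hub = connection.CreateHubProxy("NotificationHub");

        await connection.Start();

        // Ask the server-side hub to forward the message to the right user.
        await hub.Invoke("SendToUser", userId, message);

        connection.Stop();
    }
}
```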
Also, to send messages to specific users, you can store the connection (socket) ids along with their user ids in a table (Azure Table Storage should do) when they connect to the SignalR server. Then, using the connection id, you can send messages to a specific user.
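Sketching that last part: the hub below keeps a user-id-to-connection-id map and forwards messages to a single connection. I've used a static dictionary as a stand-in for the Azure Table Storage lookup described above; for multiple server instances you'd swap in the table query:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class NotificationHub : Hub
{
    // Stand-in for the user-id -> connection-id table described above.
    private static readonly ConcurrentDictionary<string, string> ConnectionsByUser =
        new ConcurrentDictionary<string, string>();

    public override Task OnConnected()
    {
        // Assumes the client passes its user id as a query string value when connecting.
        var userId = Context.QueryString["userId"];
        if (!string.IsNullOrEmpty(userId))
            ConnectionsByUser[userId] = Context.ConnectionId;
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        var userId = Context.QueryString["userId"];
        if (!string.IsNullOrEmpty(userId))
            ConnectionsByUser.TryRemove(userId, out _);
        return base.OnDisconnected(stopCalled);
    }

    // Called by the sending client (see the previous sketch); forwards to one connection.
    public void SendToUser(string userId, string message)
    {
        if (ConnectionsByUser.TryGetValue(userId, out var connectionId))
            Clients.Client(connectionId).notify(message);
    }
}
```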