I have a question related to Apache Storm. We currently run Storm on several servers, and our application needs Facebook/Twitter tokens.
We want to design it like this: each token belongs to a specific server. When a bolt receives a tuple, it requests a token that is specific to that running bolt instance; this prevents a token from being blocked when different servers use the same token within a short time.
Does anyone know how to achieve this? Is there any way to know which server a running bolt instance is on? Thanks a lot.
If you want one token per bolt instance, then add an instance variable to your bolt class to hold that token, and initialize/clean up that token at the appropriate points in the bolt lifecycle.
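For example, a minimal sketch (Storm 2.x API; `TokenService` is a made-up placeholder for however you allocate tokens per instance):

```java
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class TokenAwareBolt extends BaseRichBolt {

    private OutputCollector collector;
    private transient String token; // per-instance token; acquired in prepare(), never serialized

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Acquire a token tied to this running instance, e.g. keyed by task id.
        this.token = TokenService.acquire(context.getThisTaskId());
    }

    @Override
    public void execute(Tuple input) {
        // ... call the Facebook/Twitter API with `token` here ...
        collector.ack(input);
    }

    @Override
    public void cleanup() {
        // Note: cleanup() is not guaranteed to run when a topology is killed,
        // so tokens should also expire server-side.
        TokenService.release(token);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams in this sketch
    }

    /** Placeholder token store; replace with your real per-server allocation. */
    static class TokenService {
        static String acquire(int taskId) { return "token-for-task-" + taskId; }
        static void release(String token) { /* no-op in this sketch */ }
    }
}
```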
If you want one token per machine, you can create a singleton bean that holds one token for the entire JVM. Note that if you run more than one worker on a single machine, you either have to accept multiple tokens per machine (one per JVM on the machine), or build a stand-alone middleware server that owns the token and handles requests from the multiple JVMs on the machine. Even if multiple tokens are acceptable, you'll still need to work out how to make all of the bolt instances in a single JVM/worker share that JVM's token.
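A sketch of that per-JVM variant, with a lazily initialized holder; `fetchTokenForThisMachine()` is a placeholder for however the machine's token is assigned:

```java
/** One lazily initialized token shared by everything in this JVM/worker. */
public final class JvmTokenHolder {

    private static volatile String token;

    private JvmTokenHolder() {}

    public static String get() {
        if (token == null) {
            synchronized (JvmTokenHolder.class) {
                if (token == null) {
                    token = fetchTokenForThisMachine();
                }
            }
        }
        return token;
    }

    private static String fetchTokenForThisMachine() {
        // placeholder: look up the token assigned to this server,
        // e.g. from a config file or an environment variable
        return System.getenv("API_TOKEN");
    }
}
```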
For a Flutter web app, I'm using a package that creates its own HTTP client instance, communicating with some gateway.
Additionally I have my own server for the app.
Should I modify the package so that both connections use the same instance?
Or should I have an HTTP client for every connection?
The benefit of reusing an HttpClient instance is that Dart can then keep a connection open for later reuse, in case you make another request against the same server.
So I would in general recommend reusing an HttpClient instance where possible, but there is no issue in having multiple instances, especially if the two instances are used to connect to different servers.
There can also be good reasons for having multiple instances, e.g. when you want different connection settings (timeout, user-agent, certificate handling) for different endpoints.
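The reasoning is language-agnostic; as a minimal sketch (Java's java.net.http.HttpClient here, purely for illustration; the URL is made up): one client reused for ordinary requests, and a second one that exists only because its endpoint needs different settings.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ClientReuse {

    // One shared client: keeps connections open for reuse against the same host.
    static final HttpClient shared = HttpClient.newHttpClient();

    // A second client only because this endpoint needs different settings.
    static final HttpClient slowGateway = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(30))
            .build();

    public static void main(String[] args) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create("https://example.com/api")).build();
        HttpResponse<String> resp = shared.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode());
    }
}
```

The same shape applies in Dart: keep one HttpClient per logical configuration and reuse it across requests.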
I'm building a small microservice-based webapp using JHipster with JWT authorization. The Architecture is simple, one gateway and two services with repositories. The problem that I had for the last few hours is the communication between the two backend-services.
At first, I tried to find a token on the services themselves, but couldn't find one. If I just missed it in all the docs (quite overwhelming when beginning with the full stack :P), I would be happy to revert my changes and use the predefined token.
My second approach was that each service authorizes itself with the gateway in a @PostConstruct method and saves the token in memory, to be used for each API call. It works without a problem, but I find it hard to believe that this functionality isn't already built into JHipster.
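A rough sketch of that approach (requestTokenFromGateway() is a made-up helper; /api/authenticate is JHipster's login endpoint):

```java
import javax.annotation.PostConstruct;

import org.springframework.stereotype.Component;

/** Authenticate once at startup, cache the JWT, reuse it for every call. */
@Component
public class GatewayTokenHolder {

    private volatile String jwt; // cached after startup

    @PostConstruct
    void authenticate() {
        // POST this service's credentials to the gateway (JHipster's /api/authenticate)
        // and keep the returned JWT in memory.
        this.jwt = requestTokenFromGateway();
    }

    public String bearerHeader() {
        return "Bearer " + jwt;
    }

    private String requestTokenFromGateway() {
        return ""; // placeholder: perform the real HTTP call to the gateway here
    }
}
```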
So my question is: is my approach usual? If neither approach is right and there are best practices for this, I'm also interested in them.
It depends on the use case.
For user requests, a common approach is that the calling service forwards the token it received to the other service, without going through the gateway, using @AuthorizedFeignClient.
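For illustration, a sketch of such a client (the @AuthorizedFeignClient annotation is generated into JHipster microservice projects; "otherservice" and ItemDTO are made-up names):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// @AuthorizedFeignClient propagates the calling user's token on each request.
@AuthorizedFeignClient(name = "otherservice")
public interface OtherServiceClient {

    // Calls otherservice directly, not via the gateway.
    @GetMapping("/api/items/{id}")
    ItemDTO getItem(@PathVariable("id") Long id);
}
```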
For background tasks like scheduled jobs, your approach can work, or you could issue long-lived tokens, as long as they have limited permissions through their roles. That way you don't have to go through the gateway.
Keycloak's offline tokens approach could also inspire you.
I'm developing an API which only needs to be accessed by servers, as opposed to specific, human users. I've been using the client credentials grant, which, if I'm not mistaken, is appropriate for this use case.
So the remote websites/apps, after registering their corresponding OAuth2 clients, simply request an access token using their client ID/secret combination, via an SSL POST request + HTTP Basic authentication.
Now I was wondering if it would be a good idea, during said access token request, to check the remote IP to make sure it actually belongs to the client that was registered (you'd state one or several IPs when declaring your app, and they would then be checked against the remote IP of the server making the POST /token request).
I feel like this would be an easy way to make sure that, even if the client ID/secret are somehow stolen, they wouldn't be just usable from anywhere.
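The check itself would be trivial; a sketch with made-up names, keyed on the client ID issued at registration:

```java
import java.util.Map;
import java.util.Set;

/** Sketch of an IP allowlist consulted during the token request. */
public class TokenEndpointIpCheck {

    // client ID -> IPs declared when the app was registered
    static final Map<String, Set<String>> ALLOWED_IPS = Map.of(
            "client-abc", Set.of("203.0.113.10", "203.0.113.11"));

    /** Reject the POST /token request unless the caller's IP was declared for this client. */
    static boolean isAllowed(String clientId, String remoteIp) {
        Set<String> allowed = ALLOWED_IPS.get(clientId);
        return allowed != null && allowed.contains(remoteIp);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("client-abc", "203.0.113.10")); // true
        System.out.println(isAllowed("client-abc", "198.51.100.7")); // false
    }
}
```

(One practical caveat: behind a reverse proxy the remote address is the proxy's, so the client IP would have to come from X-Forwarded-For, which can only be trusted if the proxy sets it.)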
Being fairly new to the OAuth2 protocol, I need some input as to whether this is a valid approach. Is there a more clever way to do this, or is it straight up unnecessary (in which case, for what reasons)?
Thanks in advance
That's certainly a valid approach, but it binds the token tightly to the network layer and the deployment, which may make it difficult to change the network architecture later. The way OAuth addresses your concern is with the so-called Proof-of-Possession extensions: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-pop-architecture.
It may be worth considering implementing that: even though it is not a finalized specification yet, it binds the token to the client instead of to an IP address, which survives network changes and is more future-proof.
I'm designing an API to enable remote clients to execute PowerShell scripts against a remote server.
To execute the commands effectively, the application needs to create a unique runspace for the remote client (so it can initialise the runspace with an appropriate host and command set for that client). Every time the client makes a request, the API will need to ensure the request is executed within the correct runspace.
An (over-simplified) view of the flow might look like this:
Client connects to Web API, POSTs credentials for the backend application
Web API passes these credentials through to the backend app, which uses them to create a RunSpace uniquely configured for that client
Web API and app "agree" on a linked session-runspace ID
Web API either informs client of session-runspace ID or holds it in memory
Client makes request: e.g. "GET http://myapiserver/api/backup-status/"
Web API passes request through to backend app function
Backend app returns results: e.g. "JSON {this is the current status of backup for user/client x}"
Web API passes these results through to remote client
Either timeout or logout request ends 'session' and RunSpace is disposed
(In reality, the PowerShell App might just be a custom controller/model within the Web API, or it could be an IIS snap-in or similar - I'm open to design suggestions here...).
My concern is, in order to create a unique RunSpace for each remote client, I need to give that client a unique "session" ID so the API can pass requests through to the app correctly. This feels like I'm breaking the stateless rule.
In truth, the API is still stateless, just the back-end app is not, but it does need to create a session (RunSpace) for each client and then dispose of the RunSpace after a timeout/end-session request.
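Whatever hosting shape is chosen, the core of this is just a registry that maps the session-runspace ID to the live runspace and disposes it on logout/timeout. A rough sketch (Java here purely for illustration; RunspaceHandle stands in for whatever wraps the PowerShell runspace):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class RunspaceRegistry {

    /** Stand-in for whatever object wraps the client's PowerShell runspace. */
    interface RunspaceHandle extends AutoCloseable {
        @Override
        void close(); // disposes the underlying runspace
    }

    private final ConcurrentMap<String, RunspaceHandle> sessions = new ConcurrentHashMap<>();

    /** Called after the backend app creates a runspace for freshly POSTed credentials. */
    public String register(RunspaceHandle runspace) {
        String sessionId = UUID.randomUUID().toString(); // the "session-runspace ID"
        sessions.put(sessionId, runspace);
        return sessionId;
    }

    /** Every subsequent request is routed to the runspace behind its session ID. */
    public RunspaceHandle lookup(String sessionId) {
        return sessions.get(sessionId);
    }

    /** Logout or timeout: remove the entry and dispose the runspace. */
    public void end(String sessionId) {
        RunspaceHandle handle = sessions.remove(sessionId);
        if (handle != null) {
            handle.close();
        }
    }
}
```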
QUESTIONS
Should I hack into the Authentication mechanism in ASP.NET MVC to spin up the RunSpace?
Should I admit defeat and just hack up a session variable?
Is there a better SOA that I should consider? (Web API feels very neat and tidy for this though - particularly if I want to have web, mobile and what-have-you clients)
This feels like I'm breaking the stateless rule.
Your application is stateful; there is no way around it. You have to maintain a process for each client, that process has to run on one box, and the client must always connect to the same box. So if you have a single server, no problem. If you have multiple, you have to use sticky sessions so a client always comes back to the same server (load balancers can do that for you).
Should I hack into the Authentication mechanism in ASP.NET MVC to spin up the RunSpace?
If you need authentication.
Should I admit defeat and just hack up a session variable?
No variable; just use a plain in-memory session. If you have more than one server, use sticky sessions as explained above.
Is there a better SOA that I should consider? (Web API feels very neat and tidy for this though - particularly if I want to have web, mobile and what-have-you clients)
SOA does not come into this. You have a single service.
I'm currently building a mobile application (iOS at first), which needs a backend web service to communicate with.
Since this service will expose data that I only want accessed by my mobile clients, I would like to restrict access to the service.
However, I'm in a bit of doubt as to how this should be implemented. Since my app doesn't require authentication, I can't just authenticate against the service with user credentials. Somehow I need to be able to tell whether a request is coming from a trusted client (i.e. my app), which of course leads to the thought that one could just use certificates. But couldn't such a certificate simply be extracted from the app and misused?
Currently my app is based on iOS, but later on android and WP will come as well.
I'm expecting to develop the web service in Node.js, though this is not a final decision; it will, however, be a RESTful service.
Any advice on best practice is appreciated!
Simple answer: you cannot prevent just anybody from accessing your web service from a non-mobile client. You can, however, make it harder.
Easy:
Send a nonstandard HTTP header (see the sketch after this list)
Set some unique query parameter
Send an interesting (or subtly non-interesting) User Agent string
(you can probably think of a few more)
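For example, the nonstandard-header check from the first item, sketched as a servlet filter (Java here for illustration, since the backend stack is still open; the header name and value are made up):

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Rejects requests missing the app's private header. */
public class AppHeaderFilter implements Filter { // Servlet 4.0+: init()/destroy() have defaults

    private static final String HEADER = "X-My-App-Key";
    private static final String EXPECTED = "some-obscure-value"; // shipped inside the app binary

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        if (!EXPECTED.equals(request.getHeader(HEADER))) {
            // anyone sniffing the traffic can replay this header -- it's a hurdle, not security
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }
}
```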
Difficult:
Implement a challenge/response protocol to identify your client (sketched after this list)
(Ab)use HTTP as a transport for your own encrypted content
(you can probably think of a few more)
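And a minimal challenge/response sketch under the same caveat (Java for illustration; the client proves itself with an HMAC over a server nonce, using a secret embedded in the app):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

/** Server issues a random nonce; the client answers with HMAC(secret, nonce). */
public class ChallengeResponse {

    // Embedded in the app, so extractable by a determined attacker -- a hurdle, not a wall.
    private static final byte[] SHARED_SECRET =
            "embedded-in-the-app".getBytes(StandardCharsets.UTF_8);

    /** Server side: issue a fresh random challenge. */
    static String newChallenge() {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);
        return Base64.getEncoder().encodeToString(nonce);
    }

    /** Client side: answer the challenge with an HMAC over it. */
    static String respond(String challenge) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_SECRET, "HmacSHA256"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(challenge.getBytes(StandardCharsets.UTF_8)));
    }

    /** Server side: recompute and compare in constant time. */
    static boolean verify(String challenge, String response) throws Exception {
        return MessageDigest.isEqual(
                Base64.getDecoder().decode(respond(challenge)),
                Base64.getDecoder().decode(response));
    }

    public static void main(String[] args) throws Exception {
        String challenge = newChallenge();
        System.out.println(verify(challenge, respond(challenge))); // true
    }
}
```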
Of course anybody could extract the data, decompile your code, replay your HTTP requests, and whatnot. But at some point, being able to access a free Web application wouldn't be worth the effort that'd be required to reverse-engineer your app.
There's a more basic question here, however: what would be the harm of accessing your site with some other client? You haven't said, and without that information it's basically impossible to recommend an appropriate solution.