Authentication issues with cloud scheduler - google-cloud-run

Can't figure out what could possibly be wrong. I've deployed a service and set the trigger to require authentication.
Created a new service account for Cloud Scheduler: scheduler-invoker@<REDACTED>.iam.gserviceaccount.com
Went to the Cloud Run service's permissions and added that account as a Cloud Run Invoker (although I had already set up that role during creation).
On Cloud Scheduler, I add this account as the service account, and the audience is set to the URL of the service.
But invocations are failing with a 403 error. I can't figure this one out; I followed every step outlined at https://cloud.google.com/run/docs/triggering/using-scheduler, and I'm pretty sure I've done this in the past with no issues.
Any ideas?
I saw a few posts here on SO, but even after reading them I'm still in the same spot.
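For reference, the setup described above maps roughly to the following gcloud commands (service, project, and URL values are placeholders):

# dedicated invoker service account
gcloud iam service-accounts create scheduler-invoker

# allow it to invoke the Cloud Run service
gcloud run services add-iam-policy-binding my-service \
  --region=us-central1 \
  --member=serviceAccount:scheduler-invoker@my-project.iam.gserviceaccount.com \
  --role=roles/run.invoker

# scheduler job that sends an OIDC token for that account
gcloud scheduler jobs create http my-job \
  --schedule="*/10 * * * *" \
  --uri="https://my-service-xxxxx-uc.a.run.app" \
  --oidc-service-account-email=scheduler-invoker@my-project.iam.gserviceaccount.com \
  --oidc-token-audience="https://my-service-xxxxx-uc.a.run.app"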

I missed the fact that I was on a project where Cloud Scheduler was activated before 2019. Adding service-[project-number]@gcp-sa-cloudscheduler.iam.gserviceaccount.com seems to fix it.
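In case it helps anyone else, granting that service agent its role looks roughly like this (project ID and number are placeholders):

gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:service-123456789012@gcp-sa-cloudscheduler.iam.gserviceaccount.com \
  --role=roles/cloudscheduler.serviceAgent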

Related

Connect Azure Functions to local SignalR instance

I am currently writing tests for an existing project based on Azure Functions. The project uses SignalR to send live update messages to the clients.
For my tests I am currently using a SignalR instance running in the cloud, but I need to replace it with a "local" instance on the system that runs the tests, so I can be 100% sure that the SignalR message is coming from my test session.
Does anybody have an idea how to get a SignalR server running in a Docker container for my tests (I need a connection string I can provide to the Azure Functions app)?
I could not find anything online. I am sure I am not the only one who wants to test whether SignalR messages are sent correctly, and I would prefer not to implement the SignalR server myself.
The bindings available in Azure Functions are for the Azure SignalR Service, not SignalR itself, so unfortunately there is no way to test this locally.
You could instead create a test Azure SignalR Service instance and use that.
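If you go that route, a throwaway instance is quick to create with the Azure CLI; something like this should work (names are placeholders):

# create a free-tier test instance
az signalr create --name my-test-signalr --resource-group my-test-rg --sku Free_F1

# grab the connection string to hand to the Functions app
az signalr key list --name my-test-signalr --resource-group my-test-rg \
  --query primaryConnectionString -o tsv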
I didn't find any way to achieve this, so I created a repo with a small mock service for the SignalR Service. I hope I am allowed to post such stuff here.
My repo and my docker image
Feel free to use / fork it. I am not sure if I will ever find any time to maintain it.

Spring Cloud Data Flow stream deploy stuck on load

I am a beginner with Spring Cloud Data Flow and I am following the official doc. But when I deploy the stream from the Spring Cloud Data Flow dashboard, it just gets stuck loading and the stream is never deployed.
The DSL for the stream I want to deploy is:
http | log
I changed the ports for Skipper but nothing works.
I expect that when I click to deploy the stream it should show the status 'deploying', but instead it just keeps loading forever.
When reporting issues like this, it'd be great if you could share the versions in use and the logs for review.
Depending on the platform (Local, CF, or K8s), I'd recommend reviewing the troubleshooting steps included in the SCDF microsite.
If you followed those steps and still see issues, please update the description of the post with the relevant details, and we can review it then.
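As a starting point, assuming a default local install, you can pull the version information from the REST endpoints, and trying the same deploy from the SCDF shell can help isolate whether the hang is in the dashboard or the server:

# SCDF server and Skipper info endpoints (default local ports)
curl http://localhost:9393/about
curl http://localhost:7577/api/about

# from the SCDF shell, to rule out a UI-only issue
dataflow:> stream create --name httptest --definition "http | log"
dataflow:> stream deploy --name httptest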

Spring Cloud Data Flow Basic Authentication

Spring Cloud Data Flow Server (Local) does not have any dynamic way to set up users and roles, either through the dashboard UI or the shell; i.e., there is no way to add or delete users with roles while the server is running.
I have been able to get both single-user and file-based authentication and authorization working, but for both I had to set up the docker-compose.yml file like so:
spring.cloud.dataflow.security.authentication.file.enabled=true
spring.cloud.dataflow.security.authentication.file.users.bob=bobpass, ROLE_MANAGE
spring.cloud.dataflow.security.authentication.file.users.alice=alicepass, ROLE_VIEW, ROLE_CREATE
spring.cloud.dataflow.security.authentication.file.users.hare=harepass, ROLE_VIEW
However, if I have to add new users with roles, I will have to docker-compose down, edit the docker-compose.yml, and then docker-compose up for the new user's authentication and authorization to take effect.
Is there any way around this?
There isn't any other approach to dynamically add/update users and have the change reflected at runtime in SCDF.
However, in SCDF 2.0, we have redesigned/rewritten the security architecture. In this baseline, we rely on Cloud Foundry's UAA component, a standalone application that can work locally, on CF, or on K8s.
Here, you can interact with UAA directly, outside of SCDF. You can add, update, and delete users, too. Of course, you can also centrally manage OAuth token credentials, including remote renewals and revocations. Check out the end-to-end sample demonstration of the new design with SCDF + OAuth + LDAP, all in action.
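For example, assuming a locally running UAA, day-to-day user management with the uaac CLI looks roughly like this (URL, credentials, and group names are illustrative):

# point uaac at the UAA instance and authenticate as an admin client
uaac target http://localhost:8080/uaa
uaac token client get admin -s adminsecret

# add a user and grant a role via group membership
uaac user add bob --password bobpass --emails bob@example.com
uaac member add dataflow.view bob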
The recent 2.0 M1 release already includes this improvement - see the blog. Try it out and let us know if you have any questions/feedback.
UPDATE:
I also recently bumped into a UAA web UI from the community. Perhaps the UAA team could consider adding it to the official stack eventually.

Is there a Best Practice standard for providing OAuth2 security over a Kubernetes cluster?

I am starting to experiment with OAuth2 authorisation for a Kubernetes cluster.
I have found a good OAuth2 identity provider using UAA.
My original intention was to deploy this into a Kubernetes cluster and then let it provide authentication over that cluster. This would provide a single sign-on solution hosted in the cloud and enable that solution to manage Kubernetes access as well as access to the applications running on my cluster.
However, when thinking this solution through, there seem to be some edge cases where this kind of configuration could be catastrophic. For instance, if my cluster stops, I do not think I will be able to restart it, as the OAuth2 provider would not be running, and thus I could not authenticate to perform any restart operations.
Has anybody else encountered this conundrum?
Is this a real risk?
Is there a 'standard' approach to circumvent this issue?
Many thanks for taking the time to read this!
Kubernetes supports multiple authentication strategies (ref: https://kubernetes.io/docs/reference/access-authn-authz/authentication/).
You can enable several of them and log into the cluster using any of them (if they are enabled and configured correctly).
According to the Kubernetes documentation: when multiple authenticator modules are enabled, the first module to successfully authenticate the request short-circuits evaluation. The API server does not guarantee the order authenticators run in.
So if you enable multiple authentication modules, I think you are fine. I am using a Kubernetes cluster in which certificate authentication and webhook token authentication (using Guard) are enabled, and Guard itself runs inside that cluster.
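For illustration, enabling both of those modules comes down to API server flags along these lines (file paths are placeholders; unrelated flags omitted):

kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authentication-token-webhook-config-file=/etc/kubernetes/guard/webhook-config.yaml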
The use of UAA consists of two procedures, authentication and authorization, where the latter allows for performing certain actions within a cluster. They are used through the kubectl command-line tool.
One can use two existing authorization modules (ABAC and RBAC). Here you can find a side-by-side comparison of these two options, where the author vouched for the RBAC mode as it "doesn’t require the API server to be restarted every time the policy files get updated".
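As a small illustration of the RBAC mode, granting a user read-only access to pods can be done imperatively with kubectl (names are placeholders):

kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
kubectl create rolebinding read-pods --role=pod-reader --user=alice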
If I understood your question right, this article may be of help.

How to direct pf_auth.pf_authenticate request to on-premise Multi Factor Authentication Server

I've been beating my head for hours on this request.
I have an on-premise installation of an Azure Multi-Factor Authentication Server. I'm building a new ASP.NET MVC 5 application that will do an LDAP lookup for users in Active Directory (also on-premise) with no ADFS configured.
I've gone through the SDK for MFA Server and can easily enable SMS requests to be sent. I get the OTP code from calling pf_auth.pf_authenticate(authParams, out otp, out callStatus, out errorId);
This works for testing, but I need to direct this request to my on-site MFA Server, and I can't find anything that tells me where to set this value.
I know that if I log in to a machine on that domain, it automatically sends the SMS text to my phone, and I can enter it into the next screen to complete the login (the default user portal set up with MFA). I would assume this would also work when I call ValidateCredentials on my application's newly created PrincipalContext. But how do I submit the SMS code without some sort of RequestId to sync up the communication?
I'm sorry if this doesn't make much sense. It's just that all the examples I can find are for using MFA with a local ADFS; I only have Active Directory, which is why I'm doing the custom LDAP lookup.
Any help or direction is greatly appreciated.
OK, sorry for the delay in responding to this post. After getting no responses I moved on, but I recently noticed that there have been 45+ views since my post and thought I should update it for others who might be experiencing a similar issue.
Turns out that when using MFA on premise you can point multiple applications to a single MFA server, like Remote Access, VPN, etc.
However, if you are attempting to set up a web application hosted on IIS, you need to install a copy of the MFA Server on the IIS server hosting the application.
When installing, you can point to the existing MFA setup so that both machines share the same configuration. This local install also adds a custom IIS plugin that intercepts requests and directs them through the MFA pipeline. If everything looks good, the request is then forwarded to your web application as normal.
This is really pretty straightforward, but the documentation for MFA setup was sorely lacking. Hopefully in the future there will be a decent sample app from Microsoft that demos this process using local MFA and not just the Azure-hosted solution.
