getEffectiveOrgPolicy error on google cloud run secret mounted volume - google-cloud-run

Today, when I went to deploy a new revision of a Cloud Run application, I was unable to press the Deploy button: no error, nothing, just an unresponsive button.
I use Secret Manager, and I have narrowed the issue down to the step where you add the secret-mounted volume. When I do this, I see the following error in the browser's network inspector.
{
  "error": {
    "code": 404,
    "message": "Requested entity was not found.",
    "status": "NOT_FOUND"
  }
}
{"constraint":"constraints/gcp.SecretManagerFilesystemAccess"}
However, when I look up the constraints in the documentation, this constraint doesn't exist.
I do have some organization policies set, such as disabling service account creation, service account key creation, and key upload, but I have confirmed that my other organization has the same settings and has no trouble.
Does anyone from Google have any information regarding this issue?
EDIT:
Steps to reproduce the issue:
Open Google Cloud Platform.
Click "Cloud Run" in the navigation bar.
Select a service.
Click "Edit and Deploy New Revision".
Open the browser inspector, click Network, and clear the current items.
Select "Variables and Secrets".
Click "Reference a secret".
This produces the following error on the URL:
https://cloudresourcemanager.clients6.google.com/v1/projects/PROJECTID:getEffectiveOrgPolicy?key=
{
  "error": {
    "code": 404,
    "message": "Requested entity was not found.",
    "status": "NOT_FOUND"
  }
}
{"constraint":"constraints/gcp.SecretManagerFilesystemAccess"}
I can see that this also produces a form-validation error when the "Deploy" button is pressed, though the error itself is never shown, which explains the unresponsive button.

I was able to work around this by using the following command in the CLI.
gcloud beta run deploy nightpricer-api \
--image=gcr.io/io-nightpricer-prod/nightpricer-api@sha256:d74ac81ced1628929075d6c8e97b039ac705663bf3a988cbb57bfad77a30a6dd \
--platform=managed \
--region=us-central1 \
--project=io-nightpricer-prod \
--update-secrets=/config/secrets=APP_SECRETS:latest,/config1/gmail=GMAIL_APPLICATION_CREDENTIALS:latest \
--service-account=firebase-adminsdk-hbr00@io-nightpricer-prod.iam.gserviceaccount.com
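If you hit the same problem, you can verify that the secret volumes landed on the new revision with something like the following (a sketch; the --format projection is just one way to filter the output):
gcloud run services describe nightpricer-api \
--platform=managed \
--region=us-central1 \
--project=io-nightpricer-prod \
--format="yaml(spec.template.spec.volumes)"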

Related

Configure IoTEdge module to receive messages port 53000

I'm loosely following along with this article to develop and debug modules for IoTEdge:
https://learn.microsoft.com/en-us/azure/iot-edge/how-to-visual-studio-develop-module?view=iotedge-2020-11
The article leverages iotedgehubdev, which is presumably where the configuration to expose port 53000 lives.
My question is: without using the simulator or the iotedgehubdev tool, how do I configure the port to allow messages to be sent using this type of syntax?
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
// Register callback to be called when a message is received by the module
await ioTHubModuleClient.SetInputMessageHandlerAsync("input1", PipeMessage, ioTHubModuleClient);
static async Task<MessageResponse> PipeMessage(Message message, object userContext)
{
    ....
}
Target environment: Ubuntu, IoTEdge 1.1.4, published via IoTHub pulled from ACR
Development: Windows 11, Visual Studio 2022, debug via SSH to docker module on Ubuntu
Once the module is up and running, I want to send a POST request to it from the Ubuntu machine hosting the module. The module is published from IoTHub.
I've looked across many articles for clues on how port 53000 is setup and listening but haven't found anything that helps so far.
Appreciate the help.
Sending a message is easy once your code is running on the simulator: you can send messages by issuing a curl request to the endpoint you received when starting the simulator, e.g.:
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:53000/api/v1/messages
I also looked across many articles for clues on how to set up port 53000 without using the simulator or the iotedgehubdev tool, and found nothing. If you want to work without them, you can reach out to Azure Support or raise a GitHub issue.
You can also refer to the article "Azure IoT Edge Simulator — Easily run and test your IoT Edge application" by Xavier Geerinck on Medium.
You have to build a custom API module that listens on the port, doing just what the iotedgehubdev utility does, in whichever language you are writing it in:
Create a REST API.
Use the Azure Devices ModuleClient with IoT Edge enabled in the project file.
Create an output in your custom API module and send the message using the module client.
Create the route config in your deployment file and wire this module's output to the input of another module in the routes section (a sketch follows below).
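For that last step, a minimal routes sketch (the module names customApiModule and targetModule are hypothetical placeholders; the FROM ... INTO BrokeredEndpoint(...) syntax is the standard IoT Edge route format):
"routes": {
  "customApiToTarget": "FROM /messages/modules/customApiModule/outputs/output1 INTO BrokeredEndpoint(\"/modules/targetModule/inputs/input1\")"
}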
Edit: Do not forget the ExposedPorts and PortBindings sections in createOptions, for example:
"createOptions": {
"ExposedPorts": {
"9000/tcp": {}
},
"HostConfig": {
"PortBindings": {
"9000/tcp": [
{
"HostPort": "9000"
}
]
}
}
}
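With that binding in place, the module is reachable on host port 9000, so you can POST to it from the Ubuntu host, for example (the /api/v1/messages path is an assumption about your own custom API, mirroring the simulator's; it is not something IoT Edge provides):
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}' http://localhost:9000/api/v1/messages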

Pulling docker image in GKE

Apologies if this is a duplicate, I haven't found a solution in similar questions.
I'm trying to deploy a Docker image to Google Kubernetes Engine.
I've done this successfully before, but I can't seem to get it working this time around.
I have Google SDK set up locally with kubectl and my Google Account, which is project owner and has all required permissions.
When I use
kubectl create deployment hello-app --image=gcr.io/{project-id}/hello-app:v1
I see the deployment on my GKE console, consistently crashing as it "cannot pull the image from the repository. ErrImagePull: Cannot pull image '' from the registry".
It provides 4 recommendations, which I have by now triple checked:
Check for spelling mistakes in the image names.
Check for errors when pulling the images manually (all fine in Cloud Shell)
Check the image pull secret settings
So, based on https://blog.container-solutions.com/using-google-container-registry-with-kubernetes, I manually added a 'gcr-json-key' secret from a new service account with project viewer permissions, as well as a 'gcr-access-token' secret, to the default Kubernetes service account.
Check the firewall for the cluster to make sure the cluster can connect to the ''. AFAIK, this should not be an issue with a newly set-up cluster.
The pods themselves report the following error:
Failed to pull image "gcr.io/{project id}/hello-app:v1":
[rpc error: code = Unknown desc = Error response from daemon:
Get https://gcr.io/v2/{project id}/hello-app/manifests/v1: unknown: Unable to parse json key.,
rpc error: code = Unknown desc = Error response from daemon:
Get https://gcr.io/v2/{project id}/hello-app/manifests/v1:
unauthorized: Not Authorized., rpc error: code = Unknown desc = Error response from daemon:
pull access denied for gcr.io/{project id}/hello-app,
repository does not exist or may require 'docker login': denied:
Permission denied for "v1" from request "/v2/{project id}/hello-app/manifests/v1".]
My question now, what am I doing wrong or how can I find out why my pods can't pull my image?
Kubernetes default serviceaccount spec:
kubectl get serviceaccount -o json
{
  "apiVersion": "v1",
  "imagePullSecrets": [
    {
      "name": "gcr-json-key"
    },
    {
      "name": "gcr-access-token"
    }
  ],
  "kind": "ServiceAccount",
  "metadata": {
    "creationTimestamp": "2020-11-25T15:49:16Z",
    "name": "default",
    "namespace": "default",
    "resourceVersion": "6835",
    "selfLink": "/api/v1/namespaces/default/serviceaccounts/default",
    "uid": "436bf59a-dc6e-49ec-aab6-0dac253e2ced"
  },
  "secrets": [
    {
      "name": "default-token-5v5fb"
    }
  ]
}
It does take several steps, and the blog post you referenced appears to have them right, so I suspect your error is in one of the steps.
Couple of things:
The error message says Failed to pull image "gcr.io/{project id}/hello-app:v1". Did you edit the error message to remove your {project id}? If not, that's one problem.
My next concern is the second line: Unable to parse json key. This suggests that you created the secret incorrectly:
Create the service account and generate a key
Create the Secret exactly as shown: kubectl create secret docker-registry gcr-json-key... in the default namespace, unless --namespace=... differs (see the sketch after this list)
Update the Kubernetes spec with ImagePullSecrets
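A sketch of that secret creation, assuming the service account key was downloaded to key.json (the file name and email are placeholders):
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat key.json)" \
--docker-email=any@valid.email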
Because of the ImagePullSecrets requirement, I'm not aware of a kubectl run equivalent, but you can try accessing your image using Docker from your host:
See: https://cloud.google.com/container-registry/docs/advanced-authentication#json-key
And then try docker pull gcr.io/{project id}/hello-app:v1 ensuring that {project id} is replaced with the correct GCP Project ID.
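For example, assuming the same key.json as above (this is the JSON-key authentication flow from the linked doc):
cat key.json | docker login -u _json_key --password-stdin https://gcr.io
docker pull gcr.io/{project id}/hello-app:v1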
This proves:
The Service Account & Key are correct
The Container Image is correct
That leaves your creation of the Secret and your Kubernetes spec to test.
NOTE: The Project Viewer role for the Service Account is overly broad for GCR access; see the GCR permissions documentation.
Use Storage Object Viewer (roles/storage.objectViewer) if the Service Account only needs to pull images.
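A sketch of granting that narrower role (PROJECT_ID and the service account email are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
--member=serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role=roles/storage.objectViewer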

Deploying react app with nginx by using s2i strategy in OpenShift

I want to deploy my React app to OpenShift (version 3.11) using the s2i strategy, from a local directory that contains the build files.
First, I created a build config using the oc CLI tool:
oc new-build nginx:1.12 --name=s2i-frontend --binary=true
That worked. After successfully creating the builder image called nginx:1.12, I switched to the path containing the React app's build files (produced by the npm run build phase) and typed the following command:
oc start-build s2i-frontend --from-dir=build/ --wait --loglevel=10
but I came across the following error:
Response Body:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unable
to wait for build s2i-frontend-15 to run: timed out waiting for the
condition","reason":"BadRequest","code":400}
Uploading finished I0129 11:40:28.529232 31060 helpers.go:201]
server response object: [{ "metadata": {}, "status": "Failure",
"message": "unable to wait for build s2i-frontend-15 to run: timed out
waiting for the condition", "reason": "BadRequest", "code": 400 }]
F0129 11:40:28.530229 31060 helpers.go:119] Error from server
(BadRequest): unable to wait for build s2i-frontend-15 to run: timed
out waiting for the condition
I opened a GitHub issue for this problem and was pointed in the right direction: I checked my configuration resources and found that the problem was with the resource quota definitions. Once the quota configs were fixed, the build ran successfully.
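For anyone hitting the same timeout, the quota definitions can be inspected with the oc CLI before starting a build (the project name is a placeholder):
oc get resourcequota -n my-project
oc describe resourcequota -n my-project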

Create repo on Bitbucket programmatically

I used to do
curl -k -X POST --user john@outlook.com:doe13 "https://api.bitbucket.org/1.0/repositories" -d "name=logoApp"
and success.
Now I get this error:
{"type": "error", "error": {"message": "Resource removed", "detail": "This API is no longer supported.\n\nFor information about its removal, please refer to the deprecation notice at: https://developer.atlassian.com/cloud/bitbucket/deprecation-notice-v1-apis/"}}
Does anyone know a way to do this now?
There's a difference between a success from curl (200 OK) and an error from the service you're trying to use. The error here says that you're trying to use the Cloud REST API version 1, which was deprecated effective 30 June 2018.
Read this for more information.
I don't use Bitbucket Server (a local option), and I think that has more features for this sort of thing.
For the public Bitbucket, you can still do it but it isn't documented.
The v1.0 API has been removed, and the new v2.0 API doesn't document a POST to a /repositories. Instead, you have to hit an endpoint that includes the repo that doesn't yet exist: /repositories/workspace/repo_slug
The JSON payload needs to name the project for the repo: use the key of a project that already exists. Fill in the user/team and repo name in the URL. You can also create an app password so you aren't using your account password; the app password can limit the scope of what that access can do.
% curl -X POST --user 'user:app_pass' \
-H "Content-type: application/json" \
-d '{"project":{"key":"PROJ"}}' \
"https://api.bitbucket.org/2.0/repositories/USER/REPO"

Argo artifact passing can't save output

I am trying to run the artifact passing example on Argoproj. However, I am getting the following error:
failed to save outputs: verify serviceaccount platform:default has necessary privileges
This error is appearing in the first step (generate-artifact) itself.
Selecting the generate-artifact component and clicking YAML shows the failing line highlighted. Nothing appears on clicking LOGS.
I need to understand the correct sequence of steps for running the YAML file so that this error does not appear and the artifacts are passed. I could not find many resources on this issue other than this page, where the issue is discussed on the Argo repository.
All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName, or if omitted, the default service account of the workflow's namespace.
Here the default service account of that namespace doesn't seem to be given any roles by default.
Try granting a role to the “default” service account in a namespace:
kubectl create rolebinding argo-default-binding \
--clusterrole=cluster-admin \
--serviceaccount=platform:default \
--namespace=platform
Since the default service account now gets all access via the 'cluster-admin' role, the example should work now.
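As a quick sanity check before re-running the workflow, you can ask Kubernetes what the service account is now allowed to do (creating pods is just one example permission to test):
kubectl auth can-i create pods \
--as=system:serviceaccount:platform:default \
--namespace=platform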
