I know there are configuration arguments where you can specify the network and subnetwork. I tried doing that, but with a Shared VPC network it gives me this error.
Using a subnetwork in Cloud Dataflow requires specifying the subnetwork parameter when running the pipeline. However, for a subnetwork located in a Shared VPC network, you need to use the complete URL in the following format:
https://www.googleapis.com/compute/v1/projects/<HOST_PROJECT>/regions/<REGION>/subnetworks/<SUBNETWORK>
Additionally, verify that you have added the project's Dataflow service account to the Shared VPC host project's IAM policy and granted it the "Compute Network User" role, so that the service has the required access.
You can take a look at Google's official documentation on the subnetwork parameter, which contains detailed information about this.
Be sure to include the Project ID in the --subnetwork option:
/projects/<PROJECT_ID>/regions/<REGION>/subnetworks/<SUBNETWORK>
and grant the Dataflow service account the Compute Network User role in the host project, which is what I suspect is missing based on the error message.
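For reference, a launch with the full subnetwork URL could look like this (a sketch assuming a Beam Python pipeline; all values are placeholders):

python my_pipeline.py \
    --runner=DataflowRunner \
    --project=<SERVICE_PROJECT> \
    --region=<REGION> \
    --temp_location=gs://<BUCKET>/tmp \
    --subnetwork=https://www.googleapis.com/compute/v1/projects/<HOST_PROJECT>/regions/<REGION>/subnetworks/<SUBNETWORK>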
I am running a cloudbuild.yaml job in Google Cloud Platform that builds, pushes and tags a Docker image, and then creates a Compute Engine instance to run that image via gcr.io/cloud-builders/gcloud with create-with-container. I also specify a service account to be used in this step:
- id: "Create Compute Engine instance"
name: gcr.io/cloud-builders/gcloud
args: [
'compute',
'instances',
'create-with-container',
'${INSTANCE_NAME}',
'--container-image',
'eu.gcr.io/${PROJECT_ID}/${PROJECT_ID}-${REPO_NAME}',
'--zone',
'${ZONE}',
'--service-account',
'${SERVICE_ACCOUNT},
'--machine-type',
'n2-standard-4'
]
However, I am getting an error:
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.compute.instances.create-with-container) Could not fetch resource:
- Required 'compute.instances.create' permission for 'projects/...'
The service account in use does have the permissions for that, as it has been assigned "role": "roles/compute.instanceAdmin.v1", which includes compute.instances.* as per the documentation.
Has anyone experienced this or a similar situation and can give a hint on how to proceed? Am I missing something obvious? I have tried using other service accounts, including the project default compute account, and get the same error. One thing to note is that I do not specify a service account for the Docker steps (gcr.io/cloud-builders/docker).
Make sure that you are not mixing up the service accounts.
There is a special service account used by Cloud Build.
There is also the service account to "be used" by the VM/instance you are creating.
The "compute.instances.create" permission should be granted to the special Cloud Build account, not to the account for the instance.
The Cloud Build account has a name like 123123123@cloudbuild.gserviceaccount.com.
In the Cloud Console, go to Cloud Build -> Settings -> Service Accounts and check whether the correct permissions are granted.
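If the role is missing, it can also be granted from the command line; a quick sketch, with a placeholder project ID and project number:

gcloud projects add-iam-policy-binding MY_PROJECT_ID \
    --member="serviceAccount:123123123@cloudbuild.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"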
I'm trying to make a request in Go, client.AnnotateVideo(ctx, &annotateVideoRequest), to the Google Cloud Video Intelligence API using the package cloud.google.com/go/videointelligence/apiv1.
I noticed that if I'm on a Google VM, I don't need any credentials or environment variables, because the documentation says:
For API packages whose import path is starting with "cloud.google.com/go",
such as cloud.google.com/go/storage in this case, if there are no credentials
provided, the client library will look for credentials in the environment.
But I guess I can't authenticate because I'm running a Docker container inside the Google VM, and I don't know if I really need a credentials file in that container: I don't know whether the library automatically creates a credentials file, or whether it just checks for a $GOOGLE_APPLICATION_CREDENTIALS variable and uses that (but that makes no sense; I'm on a Google VM, and I'm supposed to have that permission).
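As an aside, a container running on a GCE VM can normally still reach the instance metadata server, which is what the client library uses for Application Default Credentials when no credentials file is provided; a quick way to check which identity and scopes it would pick up from inside the container (assuming curl is available):

curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"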
The error is:
PermissionDenied: The caller does not have permissions
Some links that might be helpful:
https://pkg.go.dev/cloud.google.com/go/storage
https://cloud.google.com/docs/authentication#environment-service-accounts
https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-go
https://cloud.google.com/video-intelligence/docs/common/auth#adc
Thanks in advance!
I have a Docker Swarm application (.NET) which uses the AuthConfig class for storing information [username, password, server address, tokens, etc.] for authenticating with the registries. I am trying to rewrite the same application for Kubernetes using KubernetesClient.
Can someone please let me know if there is an equivalent of the AuthConfig class in the Kubernetes K8s.Model client as well?
The comparable class for creating a connection to the k8s API server endpoint would be the following:
KubernetesClientConfiguration (in case you have a proper KUBECONFIG environment variable set, or at least a k8s config on disk)
More specific classes can be found in the folder:
csharp/src/KubernetesClient/KubeConfigModels/
Usage examples can be found here:
csharp/examples/
I would also recommend reading the following documentation pages:
Access Clusters Using the Kubernetes API
Configure Access to Multiple Clusters
My Dataflow job fails when it tries to access a secret:
"Exception in thread "main" com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Permission 'secretmanager.versions.access' denied for resource 'projects/REDACTED/secrets/REDACTED/versions/latest' (or it may not exist)."
I launch the job using gcloud dataflow flex-template run. I am able to view the secret in the console. The same code works when I run it on my laptop. As I understand it, when I submit a job with the above command, it runs under a service account that may have different permissions. How do I determine which service account the job runs under?
Since Dataflow creates workers, it creates Compute Engine instances for them. You can check this in Logging:
Open GCP console
Open Logging -> Logs Explorer (make sure you are not using the "Legacy Logs Viewer")
In the query builder, type protoPayload.serviceName="compute.googleapis.com"
Click Run Query
Expand the entry for v1.compute_instances.create or any other resource used by compute.googleapis.com
You should be able to see the service account used for creating the instance. This service account is used for anything related to running the Dataflow job.
Take note that I tested this using the official dataflow quick start.
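If you prefer the command line, roughly the same audit log entries can be pulled with gcloud logging read and inspected for the service account; a sketch (adjust the filter and limit as needed):

gcloud logging read 'protoPayload.serviceName="compute.googleapis.com"' \
    --limit=5 \
    --format=json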
By default, the Dataflow worker nodes run with the Compute Engine default service account (YOUR_PROJECT_NUMBER-compute@developer.gserviceaccount.com), which lacks the "Secret Manager Secret Accessor" role.
You either need to grant that role to the service account or specify a different service account in the pipeline options:
gcloud dataflow flex-template run ... --parameters service_account_email="your-service-account-name@YOUR_PROJECT_ID.iam.gserviceaccount.com"
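For the first option, granting the role to the default worker service account could look like this (a sketch; project ID and number are placeholders):

gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="serviceAccount:YOUR_PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"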
How do I configure my Cloud Dataflow job to run using internal IPs?
Our policy doesn't allow using external IPs to spawn the workers, so I am looking for an option that disables external IPs. I ran the job and got the error below.
Startup of the worker pool in zone XXX failed to bring up any of the desired 1 workers. Please check for errors in your job parameters, check quota, and retry later, or please try in a different zone/region.
Add instance projects to use external IP with it.
You can use the --usePublicIps=false flag. Here you can look at some examples.
Looks like they updated the flags; for Python it's now --no_use_public_ips or --use_public_ips.
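A sketch of a Python pipeline launch with internal IPs only (placeholder values; the worker subnetwork typically also needs Private Google Access enabled so workers can reach Google APIs):

python my_pipeline.py \
    --runner=DataflowRunner \
    --project=<PROJECT_ID> \
    --region=<REGION> \
    --temp_location=gs://<BUCKET>/tmp \
    --subnetwork=regions/<REGION>/subnetworks/<SUBNETWORK> \
    --no_use_public_ips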