Our company policy requires the organization policy constraint "compute.requireShieldedVm" to be enabled. However, when running a Cloud Dataflow job, it fails to create a worker with the error:
Constraint constraints/compute.requireShieldedVm violated for project projects/********. The boot disk's 'initialize_params.source_image' field specifies a non-Shielded image: projects/dataflow-service-producer-prod/global/images/dataflow-dataflow-owned-resource-20200216-22-rc00. See https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints for more information.
Is there any way, when running a Dataflow job, to request that Shielded VMs be used for the workers?
It is not possible to provide a custom image, as there is no such parameter that can be supplied during job submission, as can be seen here: Job Submission Parameters.
Alternatively, if you are running a Python-based Dataflow job, you can set up the environment through setup files. An example can be found here: Dataflow - Custom Python Package Environment.
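For illustration, a minimal sketch of launching a Python pipeline with a setup file (the script, project, and bucket names below are placeholders):
python my_pipeline.py --runner=DataflowRunner --project=my-project --region=us-central1 --temp_location=gs://my-bucket/temp --setup_file=./setup.py
The dependencies declared in setup.py are then installed on each worker when it starts.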
I am trying to write a script to automate the deployment of a Java Dataflow job. The script creates a template and then uses the command
gcloud dataflow jobs run my-job --gcs-location=gs://my_bucket/template
The issue is that I want to update the job if it already exists and is running. I can do the update if I run the job via Maven, but I need to do this via gcloud so I can have one service account for deployment and another one for running the job. I tried different things (adding --parameters update to the command line), but I always get an error. Is there a way to update a Dataflow job exclusively via gcloud dataflow jobs run?
Referring to the official documentation, which describes gcloud beta dataflow jobs (a group of subcommands for working with Dataflow jobs), there is no way to use gcloud to update a job.
As of now, the Apache Beam SDKs provide a way to update an ongoing streaming job on the Dataflow managed service with new pipeline code; you can find more information here. Another way of updating an existing Dataflow job is to use the REST API, for which you can find a Java example.
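For reference, a hedged sketch of what the SDK-side update could look like for a Java pipeline launched with Maven (the main class, project, bucket, and job name are placeholders; --update only succeeds if a streaming job with that name is currently running):
mvn compile exec:java -Dexec.mainClass=my.company.MyPipeline -Dexec.args="--runner=DataflowRunner --project=my-project --stagingLocation=gs://my-bucket/staging --tempLocation=gs://my-bucket/temp --jobName=my-job --update --streaming=true"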
Additionally, please follow the Feature Request regarding recreating a job with gcloud.
Has anyone else's Fargate ECS Jenkins cluster needed to handle the passRole/trust relationship? We normally do not pass special roles to our ECS instances; we configure a general role, as all the instances have identical requirements.
After upgrading Jenkins and then upgrading all of its plugins, I began to see errors. The Amazon Elastic Container Service plugin was upgraded from 1.16 to 1.20. First, I saw errors in the Jenkins log about iam:PassRole missing from my ECS-controlling policy. After adding it, I now get this error:
com.amazonaws.services.ecs.model.ClientException: ECS was unable to
assume the role 'arn:aws:iam::...:role/ecsTaskExecutionRole'
that was provided for this task.
Please verify that the role being passed has the proper trust
relationship and permissions and that your IAM user has
permissions to pass this role.
I will configure the passRole/trust relationship, but the fact that I need to do so is troubling, as we aren't passing a role, or at the very least do not intend to. It sure looks like we do not have a choice in the matter.
I found a solution. It seems that the upgrade from 1.16 to 1.20 resulted in taskRole and executionRole being set for each template in Jenkins's config.xml. I stopped Jenkins, manually deleted those keys from each template, and started it again. Now Jenkins can attempt to launch the container tasks.
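For anyone who does need to keep an execution role on the templates, a hedged sketch of inspecting its trust relationship with the AWS CLI (the role name is taken from the error above; the output should list ecs-tasks.amazonaws.com as a trusted principal, and the IAM principal Jenkins runs as additionally needs iam:PassRole on that role):
aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.AssumeRolePolicyDocument'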
When running a Dataflow job in Google Cloud, how can the Dataflow pipeline be configured to send an email upon failure (or successful completion)? Is there an easy option where this can be configured inside the pipeline program, or any other options?
One possible option from my experience is to use Apache Airflow (on GCP, it's Cloud Composer).
Cloud Composer/Apache Airflow can send you failure emails when a DAG (job) fails, so I host my jobs in Airflow and it sends emails upon failure.
You can check [1] to see if it satisfies your needs.
[1] https://cloud.google.com/composer/docs/quickstart
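As an illustration, a minimal sketch of an Airflow DAG that emails on failure, assuming the environment has email/SMTP configured and using a BashOperator to launch the job from a template (the email address, bucket, and job name are placeholders):
from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2020, 3, 1),
    'email': ['alerts@example.com'],   # placeholder address
    'email_on_failure': True,          # mail is sent when a task in the DAG fails
    'retries': 0,
}

with DAG('run_dataflow_job', default_args=default_args, schedule_interval=None) as dag:
    run_job = BashOperator(
        task_id='run_dataflow_job',
        # placeholder bucket/template path; the command mirrors the gcloud usage above
        bash_command='gcloud dataflow jobs run my-job --gcs-location=gs://my_bucket/template',
    )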
I am trying to set up a Dataflow job using the Google-provided template Pub/Sub to BigQuery. However, I am getting this error on startup:
Message: The resource 'projects/my-project/global/networks/default' was not found
I think the Google-provided template is hardcoded to use the default network. The error goes away if I create a default network in auto mode, but we can't have a default network in production.
The documentation here mentions a network parameter. I tried adding an additional parameter called network from the GCP console UI, passing in our custom network name, but I am getting this error:
The template parameters are invalid.
Is there any way I can tell the Google-provided Dataflow template to use my custom network (created in custom mode) instead of the default? What are my options here?
Appreciate all the help!
This is not currently supported for Dataflow pipelines created from a template. For now, you can either run the template in the default VPC network, or submit a Dataflow pipeline using the Java or Python SDK and specify the network pipeline option.
You can use the gcloud beta command gcloud beta dataflow jobs run, as explained in the gcloud beta dataflow jobs run reference.
It supports additional parameters such as [--network=NETWORK] and [--subnetwork=SUBNETWORK], which are useful for your use case.
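For example, a hedged sketch of launching the Google-provided Pub/Sub-to-BigQuery template on a custom network (the job, project, network, subnetwork, topic, and table names are placeholders; the template path and parameter names should be verified against the template documentation):
gcloud beta dataflow jobs run my-job --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery --region=us-central1 --network=my-custom-network --subnetwork=regions/us-central1/subnetworks/my-subnet --parameters=inputTopic=projects/my-project/topics/my-topic,outputTableSpec=my-project:my_dataset.my_table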
I have only one topic, which was created in the production project. I want to run my Dataflow job in the dev environment, and it needs to consume the production Pub/Sub topic. When I submit my Dataflow job in the dev project it does not work: it always shows as running in the Dataflow UI, but it does not read any elements from Pub/Sub. If I submit it to the production project it works perfectly.
Why is it not reading messages from the other project's topic? I'm using Java SDK 2.1 and the runner is DataflowRunner.
PCollection<String> StreamData = p.apply("Read pubsub message",PubsubIO.readStrings().fromSubscription(options.getInputPubSub()));
Using mvn to submit the Dataflow job:
mvn compile exec:java -Dexec.mainClass=dataflow.streaming.SampleStream -Dexec.args="--project=project-dev-1276 --stagingLocation=gs://project-dev/dataflow/staging --tempLocation=gs://project-dev/dataflow/bq_temp --zone=europe-west1-c --bigQueryDataset=stream_events --bigQueryTable=events_sample --inputPubSub=projects/project-prod/subscriptions/stream-events --streaming=true --runner=dataflowRunner"
Note: if I use DirectRunner, it works and consumes messages from the other project's Pub/Sub topic.
No elements are added to the queue and there is no estimated size.
You need to grant the Pub/Sub Subscriber permission in your production project to the service account that your job uses. By default, workers use your project's Compute Engine service account as the controller service account. This service account (<project-number>-compute@developer.gserviceaccount.com) should be given the Pub/Sub Subscriber permission.
Read more here https://cloud.google.com/dataflow/docs/concepts/security-and-permissions and here https://cloud.google.com/pubsub/docs/access-control
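For illustration, a hedged sketch of granting that role at the project level with gcloud (run against the production project; the service account number is a placeholder for the dev project's number, and the binding could instead be scoped to just the subscription):
gcloud projects add-iam-policy-binding project-prod --member="serviceAccount:123456789012-compute@developer.gserviceaccount.com" --role="roles/pubsub.subscriber"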