Is there a way to set an alert on com.google.cloud.dataflow.sdk.transforms.Aggregator? We can view the current counter in the Dataflow UI, but there is no way to get the current value from the aggregator instance or to read the current value from Stackdriver.
Not at the moment, but we are working on better integrating Dataflow with Stackdriver, and future enhancements to alerting in Dataflow will be made via Stackdriver.
I am looking into preventing attacks like DDoS, and I am not sure I have come across a solution.
Case 1
Every request increments a counter value in the Firestore database. After a certain value, such as 100000, a Cloud Function will trigger which will destroy/deactivate all my Cloud Functions.
Case 2
Is there any easy way of using if/else in Cloud Functions?
I am new to Firebase Cloud Functions.
I am writing Cloud Functions in Dart.
Is there any way to write security rules for calling a function?
Is there any way to limit invocations?
Can a CDN or another service integration help in this situation? I don't want a surprise bill.
First off, see this documentation on the guidelines you should follow to avoid security attacks in Firebase.
Every request increments a counter value in the Firestore database. After a certain value, such as 100000, a Cloud Function will trigger which will destroy/deactivate all my Cloud Functions.
Unfortunately, this is not how a managed service works. Cloud Functions can only be triggered when invoked or in response to an event. If there's no traffic, then the function is not running. It's not possible to deactivate them.
You can, however, list all your functions and delete them one by one using the Cloud Functions Client Library and the deleteFunction() method.
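As a rough sketch of that approach, assuming the Node.js @google-cloud/functions client (the project ID below is a placeholder), you could list every deployed function and delete them one by one like this:

```ts
import {CloudFunctionsServiceClient} from "@google-cloud/functions";

const client = new CloudFunctionsServiceClient();
// Placeholder project; "-" as the location lists functions in all regions.
const parent = "projects/my-project/locations/-";

async function deleteAllFunctions(): Promise<void> {
  // List every deployed function under the parent.
  const [functionsList] = await client.listFunctions({parent});
  for (const fn of functionsList) {
    if (!fn.name) continue;
    // deleteFunction returns a long-running operation; wait for it to finish.
    const [operation] = await client.deleteFunction({name: fn.name});
    await operation.promise();
    console.log(`Deleted ${fn.name}`);
  }
}

deleteAllFunctions().catch(console.error);
```

This is a sketch rather than a drop-in solution; in practice you would gate it behind whatever condition triggers the shutdown.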
Is there any easy way of using if/else in Cloud Functions?
For this question, are you referring to conditional statements or to how traffic is redirected?
I am new to Firebase Cloud Functions. I am writing Cloud Functions in Dart.
Currently, there is no official way to deploy a function running in the Dart runtime, though there are community-supported projects that allow you to run Dart functions in other environments.
Node is the only runtime supported in Cloud Functions for Firebase at the moment. See the documentation here.
Is there any way to write security rules for calling a function?
Firebase security rules are for Cloud Firestore, Realtime Database, and Cloud Storage. See this SO answer that shows how to protect HTTP functions using auth ID tokens and database rules.
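For reference, the core of that pattern looks roughly like this: a minimal TypeScript sketch using firebase-admin's verifyIdToken (the endpoint name is hypothetical):

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// HTTPS function that only does work for signed-in Firebase users.
export const protectedEndpoint = functions.https.onRequest(async (req, res) => {
  const header = req.headers.authorization ?? "";
  const idToken = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : null;
  if (!idToken) {
    res.status(401).send("Missing Authorization: Bearer <ID token> header");
    return;
  }
  try {
    // Verify the Firebase Auth ID token sent by the client.
    const decoded = await admin.auth().verifyIdToken(idToken);
    res.send(`Hello, ${decoded.uid}`);
  } catch (err) {
    res.status(403).send("Invalid or expired ID token");
  }
});
```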
Additionally, in this documentation, you can find how to set up security rules in your Firebase project. Sample scripts can be found here.
Is there any way to limit invocations?
You can find a similar SO question here on limiting invocations in Firebase Cloud Functions. Additional details regarding Quotas and Limits can be found here.
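One related knob in the Firebase SDK itself is the maxInstances runtime option, which caps how many instances can be spun up to serve a function. A minimal sketch (TypeScript, hypothetical function name):

```ts
import * as functions from "firebase-functions";

// Cap scaling so a traffic spike cannot spin up unbounded instances.
// Note: this limits concurrent instances, not the total number of invocations.
export const rateLimitedEndpoint = functions
  .runWith({maxInstances: 10})
  .https.onRequest((req, res) => {
    res.send("ok");
  });
```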
Can a CDN or another service integration help in this situation? I don't want a surprise bill.
CDNs can help you bring down costs due to caching behavior; however, they are not a complete solution for avoiding surprise bills. One way to avoid them is to set up budget alerts that send email notifications whenever your project exceeds (or is about to exceed) the set spend threshold. See the documentation on avoiding surprise bills here.
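Budget alerts can also publish notifications to a Pub/Sub topic, which a function could consume programmatically. A rough sketch, assuming a topic named budget-alerts and the costAmount/budgetAmount fields of the budget notification payload:

```ts
import * as functions from "firebase-functions";

// Triggered whenever Cloud Billing publishes a budget notification
// to the (assumed) "budget-alerts" topic.
export const onBudgetAlert = functions.pubsub
  .topic("budget-alerts")
  .onPublish((message) => {
    const data = message.json; // parsed JSON body of the notification
    const cost = data.costAmount;
    const budget = data.budgetAmount;
    if (cost > budget) {
      // Here you could notify your team, or go further and disable billing
      // via the Cloud Billing API (not shown in this sketch).
      console.warn(`Budget exceeded: spent ${cost} of ${budget}`);
    }
    return null;
  });
```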
I have a series of Jenkins pipeline jobs to move apps to Cloud Foundry. My client application needs to be able to listen to all the updates of a push, i.e. apart from getting text logs, I need other events like Git repo cloned, Cloud Foundry logged in, app pushed.
One crude way of doing this is to submit POST requests to an event server from a shell script (curl). However, I think it is unlikely that such functionality does not already exist in Jenkins (either through a plugin or something like that).
I need advice from a best-practices point of view.
Thanks.
1. As commented by mdabdullah. But this needs a person to set up Kibana or Splunk. (I did not try this.)
2. Statistics Gatherer plugin: https://plugins.jenkins.io/statistics-gatherer/
3. Jenkins Notification plugin: https://plugins.jenkins.io/notification/

Both 2 and 3 are plugins available in the Jenkins community. They need to be configured with server endpoints before use; a sketch of a receiving endpoint follows below.
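On the receiving side, the Notification plugin POSTs a JSON payload per job phase, so your client application only needs an HTTP endpoint. A minimal TypeScript/Express sketch (the payload field names are an assumption based on the plugin's typical output and may vary by version):

```ts
import express from "express";

const app = express();
app.use(express.json());

// The Jenkins Notification plugin POSTs one JSON payload per job phase
// (e.g. STARTED, COMPLETED, FINALIZED).
app.post("/jenkins-events", (req, res) => {
  const {name, build} = req.body ?? {};
  console.log(`Job ${name}: phase=${build?.phase} status=${build?.status}`);
  // React to specific pipeline stages here (e.g. push an update to the client app).
  res.sendStatus(204);
});

app.listen(8080, () => console.log("Listening for Jenkins notifications on :8080"));
```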
According to what I have read about Dataflow, the Pub/Sub data source only gives the message body to work with in the pipeline. We have a use case where we want to inspect the attributes of the message to make certain decisions. Is there any way of achieving this currently? I'm open to extending the Pub/Sub I/O to incorporate this if required.
Currently, there is no way to access the message attributes of your messages via the PubsubIO connector, but it would clearly be useful to do so. This is tracked in Apache Beam (incubating) as the issue BEAM-404.
I recommend following this issue to keep abreast of new developments.
I'm working on a POC to automate downstream processes in external systems based on JIRA processes and have hit a wall with the API. It appears to have great integration for pulling data about tickets out of JIRA and for externally generating tickets in JIRA.
However, I don't see how to trigger external calls as part of my workflows. For example, if a ticket should be prevented from being routed to the next stage of a workflow without first accessing a database to ensure availability of inventory, how could I do that in JIRA?
Based on attributes in the JIRA ticket, upon final completion of the workflow we'd like to send a JMS or REST message, or possibly update an external database. Is this possible?
Thanks all in advance for the help!
If you want to do a "before" check, use a Validator on the Workflow Transition.
I strongly suggest deploying the (free) Script Runner add-on. There you can implement a ton of things. For example, you'll get a new validator option, "Script Validator", where you can specify a Groovy script that decides whether to let the transition through or abort it.
I am trying to find a way to order a Control-M job via a message from an external application. We are using Control-M v8. We are able to send messages to the queue, but we have been unsuccessful in receiving messages that perform some sort of action in Control-M.
Erick, look at the documentation for the Control-M Business Process Integration Suite Manual. This suite provides the capability that you are looking for.
We have an application back end in Unix, and we use Control-M's built-in utilities to call jobs from Unix. The jobs should be created in the desktop client and uploaded to the Control-M database without any specific schedule. A utility called 'ctmorder' can be used to call these jobs as and when required.