I am trying to use Zapier Storage in conjunction with Conditionally Run. The task I am trying to achieve is as follows:
If value/child exists -> Complete Action A.
Else -> Complete Action B.
The issue I am running into is that the Storage app returns no value if none is found. That means there is no data for the 'Conditionally Run' app to act on, which stops it from running.
Is there a built-in workaround for when Zapier Storage finds no matching value?
Any direction would be appreciated.
Solved. The Zapier Paths and Zapier Filter options include a "Does not exist" condition. Selecting this allows the app to run conditionally when no value is returned from the Zapier Storage function.
Not sure if someone else has come across a similar issue before. When I create a Remote Config parameter as a boolean with no conditional values, Firebase can read it correctly in my iOS project. However, the problem starts when I use conditional values, e.g. a condition checking whether the user is in a particular location such as the UK.
Every time I access the “test_country” parameter, it shows me the default value of false rather than the expected “true”. I looked through different questions on Stack Overflow and set
configSettings.minimumFetchInterval = 60
in debug mode to fetch more quickly than the recommended 12 hours for production, and even tried another Stack Overflow recommendation of calling
remoteConfig.fetch(withExpirationDuration: 0)
to force a fetch from the remote backend, just to ensure it's not a fetching issue.
Any suggestions on what could be going wrong? I'm not sure what more information is needed to help in this case; please let me know.
I've looked through the following questions:
FirebaseRemoteConfig fetchAndActivate not updating new value
Firebase Remote Config fetch doesn't update values from the Cloud
And many more; I even posted the question on the Firebase Slack.
This issue has been resolved. With the information I originally gave, it would have been hard to deduce what went wrong. Apologies to anyone who read this question.
On to the answer. For my app we have a release build and a debug build, with only one Firebase project to manage both. So, for the debug build, we normally turn Analytics collection off with:
FIREBASE_ANALYTICS_COLLECTION_DEACTIVATED
For more information on this, see: Configure Analytics data collection and usage
Analytics collection needs to be turned on in order for Google Analytics to collect and use Analytics data, which I believe is required for conditional values, especially if the condition uses a custom definition.
I'm using the LightService gem for Rails. I have created a few services to collect data for me. In one case, I want this data to be collected based on previously updated values.
To do this, I created an Organizer service that first calls an Update service, which in turn updates some properties. Then the Organizer calls another service which gives me the data based on these updated properties.
The problem is that my LightService::Context now contains the promised values of both called services. That is correct in other cases, but in this specific case I only want the data of the second service in the LightService::Context, while still being able to call the success? method.
Is there a way to tell Rails or LightService that I only want the LightService::Context of the second service?
Many thanks in advance
I am setting up an Azure Function for housekeeping my relational database.
I want to be able to control which table can be cleared at which interval via Application Settings (i.e. an environment variable), so I am investigating the best way to store multiple values in one application setting.
I currently have 2 ideas:
Idea 1:
Use JSON, so the application setting will be something like this:
HOUSEKEEPING_VALUE={"table_a":3,"table_b":6}
After decoding the JSON, I will clear table_a at a 3-month interval and table_b at a 6-month interval.
Idea 2:
Use the same format as an Azure connection string (x1=y1;x2=y2;x3=y3;):
HOUSEKEEPING_VALUE=table_a=3;table_b=6;
I would like to ask the community: are there any more elegant ways to achieve my goal? Or is using JSON for a case like mine the norm? Thanks!
There is no especially elegant way to store multiple values in a single app setting in the Azure portal.
You should use one of the two solutions mentioned in your question and parse the value yourself.
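Either format is easy to parse yourself. A rough sketch in Python, assuming the setting is exposed to the Function as the HOUSEKEEPING_VALUE environment variable as in your examples:

import json
import os

raw = os.environ.get("HOUSEKEEPING_VALUE", "")

# Idea 1: JSON, e.g. {"table_a":3,"table_b":6}
def parse_json(value):
    return {table: int(months) for table, months in json.loads(value).items()}

# Idea 2: connection-string style, e.g. table_a=3;table_b=6;
def parse_pairs(value):
    pairs = (item.split("=", 1) for item in value.split(";") if item)
    return {table: int(months) for table, months in pairs}

intervals = parse_json(raw) if raw.lstrip().startswith("{") else parse_pairs(raw)
# intervals -> {"table_a": 3, "table_b": 6}: clear table_a every 3 months, table_b every 6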
The other (secure and centralized) option is to use an Azure App Configuration store and bootstrap it in your Azure Functions:
Quickstart for Azure App Configuration
How to leverage the JSON content type
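For instance, reading a JSON-typed key with the azure-appconfiguration Python SDK could look roughly like this; the connection-string setting name and the key below are placeholders, not anything App Configuration requires:

import json
import os
from azure.appconfiguration import AzureAppConfigurationClient

# Connect using a connection string kept in app settings (placeholder name).
client = AzureAppConfigurationClient.from_connection_string(
    os.environ["APP_CONFIG_CONNECTION_STRING"]
)

# A key stored with content type "application/json" can hold the whole mapping.
setting = client.get_configuration_setting(key="housekeeping:intervals")  # placeholder key
intervals = json.loads(setting.value)  # e.g. {"table_a": 3, "table_b": 6}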
My employers recently started using Google Cloud Platform for data storage/processing/analytics.
We're EU based so we want to restrict our Cloud Dataflow jobs to stay within that region.
I gather this can be done on a per job/per job template basis with --region and --zone, but wondered (given that all our work will use the same region) if there's a way of setting this in a more permanent way at a wider level (project or organisation)?
Thanks
Stephen
Update:
Having pursued this, it seems that Adla's answer is correct, though there is another workaround (which I will respond with). Further to this, there is now an open issue with Google about this, which can be found/followed at https://issuetracker.google.com/issues/113150550
I can provide a bit more information on things that don't work, in case that helps others:
Google support suggested changing where Dataprep-related folders were stored, as per How to change the region/zone where dataflow job of google dataprep is running - unfortunately this did not work for me, though some of those responding to that question suggest it has for them.
Someone at my workplace suggested restricting Dataflow's quotas for non-EU regions here: https://console.cloud.google.com/iam-admin/quotas to funnel it towards the appropriate region, but when tested, Dataprep continued to favour the US.
Cloud Dataflow uses us-central1 as the default region for each job, and if the desired regional endpoint differs from the default, the region needs to be specified for every Cloud Dataflow job launched in order for it to run there. Workers are automatically assigned to the best zone within the region, but you can also specify the zone with --zone.
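For example, with the Apache Beam Python SDK, the region (and optionally the zone) is passed per job, roughly like this; the project id and bucket below are placeholders:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Region/zone must be set on each job; there is no project-wide default.
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-project-id',             # placeholder project id
    region='europe-west1',               # regional endpoint for this job
    temp_location='gs://my-bucket/tmp',  # placeholder bucket
    # zone='europe-west1-b',             # optionally pin workers to a zone (--zone)
)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | 'Create' >> beam.Create(['a', 'b', 'c'])
     | 'Print' >> beam.Map(print))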
As of this moment it is not possible to force the region or zone used by Cloud Dataflow based on the project or organization settings.
I suggest you request a new Google Cloud Platform feature. Make sure to explain your use case and how this feature would be useful for you.
As a workaround, to restrict Dataflow job creation to a specific region and zone, you can write a script or application that only creates jobs with the region and zone you need. If you also want job creation to happen only through that script, you can remove your users' job creation permissions and grant that permission only to a service account used by the script.
A solution Google support supplied to me, which basically entails using Dataprep as a Dataflow job builder rather than a tool in and of itself:
1. Create the flow you want in Dataprep, but if there's data you can't send out of region, create a version of it (sample or full) where the sensitive data is obfuscated or blanked out and use that. In my case, setting the fields containing a user id to a single fake value was enough.
2. Run the flow.
3. After the job has been executed once, in the Dataprep web UI under “Jobs”, use the three dots on the far right of the desired job and click “Export results”.
4. The resulting pop-up window will have a path to the GCS bucket containing the template. Copy the full path.
5. Find the metadata file at the above path in GCS.
6. Change the inputs listed in the file to use your 'real' data instead of the obfuscated version.
7. In the Dataflow console page, in the menu to create a job using a custom template, enter the path copied in step 4 as the “Template GCS Path”.
8. From this menu, you can select a zone you would like to run your job in.
It's not straightforward, but it can be done. I am using a process like this, setting up a call to the REST API to trigger the job, in the absence of Dataflow having a scheduler of its own.
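For reference, a rough sketch of that REST call using the google-api-python-client library; the project, region, bucket path and job name here are placeholders rather than my actual values:

import google.auth
from googleapiclient.discovery import build

# Launch the exported template in a specific region and zone.
credentials, _ = google.auth.default()
dataflow = build('dataflow', 'v1b3', credentials=credentials)

response = dataflow.projects().locations().templates().launch(
    projectId='my-project-id',                       # placeholder
    location='europe-west1',                         # regional endpoint
    gcsPath='gs://my-bucket/templates/my-template',  # the “Template GCS Path” from step 4
    body={
        'jobName': 'dataprep-flow-run',              # placeholder job name
        'environment': {
            'zone': 'europe-west1-b',                # keep workers in an EU zone
            'tempLocation': 'gs://my-bucket/tmp',    # placeholder
        },
    },
).execute()

print(response)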
I want to ask the user some questions before he/she builds. The questions will be something like:
Are you sure you have included all the files? (Answers: Yes, No)
Have you created a ticket in JIRA related to this fix? (Answers: Yes, No)
Is there any way I can do this? Is there any plugin available for it?
A freestyle job can be configured to build with parameters. See:
https://wiki.jenkins.io/display/JENKINS/Parameterized+Build
You can configure the parameter type (string, boolean, drop-down, etc.), give a description of the parameter and a default value. The job parameters can even include more complex things like validation rules:
https://wiki.jenkins.io/display/JENKINS/Validating+String+Parameter+Plugin
Or groovy scripts:
https://wiki.jenkins.io/display/JENKINS/Dynamic+Parameter+Plug-in
Or values shown in one parameter list change depending on the value of another:
https://wiki.jenkins.io/display/JENKINS/Active+Choices+Plugin
Your user then has to start the job by building with parameters - in effect being shown the parameters and descriptions (a bit like being asked the questions).
Further validation can be done before initiating build steps using the 'Prepare an environment for the run' option from the:
https://wiki.jenkins.io/display/JENKINS/EnvInject+Plugin
Build steps can be made optional based on user responses using:
https://wiki.jenkins.io/display/JENKINS/Conditional+BuildStep+Plugin
or
https://wiki.jenkins.io/display/JENKINS/Groovy+plugin
I've used all of the above to refine the choices the user has and what gets done with/because of those choices. I'm using Jenkins 2.116 and am in the process of planning an upgrade to Pipeline.
You can use the input step in your Pipeline builds, with the questions you want to put to the user. You can read more about its usage in the official Jenkins documentation here: https://jenkins.io/doc/pipeline/steps/pipeline-input-step/