I am trying to use "retrieve email messages from outlook" in power automate and filter by "body contains", so I want to set a dynamic variable in the body contains which is today's date, how to achieve this ?
I tried using the "Run Python script" action:
from datetime import date
import sys
today = date.today()
sys.stdout.write(str(today))
This produces the variable "pythonscriptoutput" = 2022-09-07.
However, when I pass the variable into "Body Contains", it just doesn't filter correctly.
When I instead create an input variable "Todaydate", hardcode its default value to 2022-09-08, and use "Todaydate" in "Body Contains" to filter, it works. Please help, thanks!
At the moment, in Jenkins 2.204.1, it is not possible to retrieve the date of a build job using the JSON API. How can I add that field to the build info?
On the bottom right of the build page there is an API link. There I found this URL for querying the build timestamp:
https://JENKINSROOT/job/JOBNAME/BUILDNUM/buildTimestamp
Example output: 2/28/20 10:23 PM
This could be locale-dependent.
You can also specify a date format:
https://JENKINSROOT/job/JOBNAME/BUILDNUM/buildTimestamp?format=yyyy/MM/dd+HH:mm
Example output: 2020/02/28 22:23
To get the latest build, you can use the literal "lastBuild" in place of BUILDNUM, e.g.:
https://JENKINSROOT/job/JOBNAME/lastBuild/buildTimestamp
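If you need the value in a script, here's a quick sketch of calling that endpoint with Python's requests (host, job name, and credentials are placeholders; add whatever authentication your Jenkins requires):
import requests

# Placeholders: replace with your Jenkins root and job name.
JENKINS_ROOT = "https://JENKINSROOT"
JOB_NAME = "JOBNAME"

# Ask for the last build's timestamp in a fixed, locale-independent format.
url = f"{JENKINS_ROOT}/job/{JOB_NAME}/lastBuild/buildTimestamp"
resp = requests.get(url, params={"format": "yyyy/MM/dd HH:mm"}, auth=("user", "api-token"))
resp.raise_for_status()
print(resp.text)  # e.g. 2020/02/28 22:23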
I am creating an Oracle Report. The report is supposed to take a set of inputs and generate output, but it is not working for some specific cases.
The inputs are as follows:
1. account number
2. org_id
3. start_date
4. end_date
The report is supposed to generate output for the org_id (a constant), filtered by the account number and by dates between start_date and end_date.
When the account_number is not provided, it should return information for all accounts.
It works when I give a specific account number.
It also works for certain date ranges (10-Jan to 25-Jan, 20-Jan to 31-Jan) without any account number, i.e. it returns information about all account numbers for the given time period.
But it fails for 10-Jan to 31-Jan, and I cannot figure out why.
I have tried to get the XML, load it into the template, and create a preview, but the preview does not work and gives me the following error:
error: Conf File: C:\Template Builder for Word\config\xdo config.xml Font Dir: C:\Template Builder for Word\fonts Run XDO Start Template: C:\MyFiles\XML_Publisher\lATEST OUTPUT\XXONT_M193_CANCELLED_HOLDS .rtf RTFProcessor setLocale: en-us FOProcessor setData: C:\MyFiles\XML_Publisher\Test\errchk15jan7feb.xml FOProcessor setLocale: en-us
With the longer date range, the XML file might be too big for the output postprocessor. Did you check the size of the XML file?
If this is the problem, you can use our company's Blitz Report, which doesn't have size limitations: https://www.enginatics.com/faq/#how-does-blitz-report-compare-with-oracle-bi-publisher
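As a quick check of that theory, something like this prints the size of the generated data file (the path is the one that appears in the error message above):
import os

# Path of the generated XML data file, taken from the error message above.
xml_path = r"C:\MyFiles\XML_Publisher\Test\errchk15jan7feb.xml"
print("{:.1f} MB".format(os.path.getsize(xml_path) / (1024 * 1024)))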
I've exported a Cloud Dataflow template from Dataprep as outlined here:
https://cloud.google.com/dataprep/docs/html/Export-Basics_57344556
In Dataprep, the flow pulls in text files via wildcard from Google Cloud Storage, transforms the data, and appends it to an existing BigQuery table. All works as intended.
However, when trying to start a Dataflow job from the exported template, I can't seem to get the startup parameters right. The error messages aren't overly specific but it's clear that for one thing, I'm not getting the locations (input and output) right.
The only Google-provided template for this use case (found at https://cloud.google.com/dataflow/docs/guides/templates/provided-templates#cloud-storage-text-to-bigquery) doesn't apply, as it uses a UDF and also runs in batch mode, overwriting any existing BigQuery table rather than appending to it.
Inspecting the original Dataflow job details from Dataprep shows a number of parameters (found in the metadata file) but I haven't been able to get those to work within my code. Here's an example of one such failed configuration:
import time

from google.cloud import storage
from googleapiclient.discovery import build
from oauth2client.client import GoogleCredentials

PROJECT = "[project id]"  # placeholder; defined elsewhere in the original code

def dummy(event, context):
    pass

def process_data(event, context):
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)
    data = event
    gsclient = storage.Client()
    file_name = data['name']
    time_stamp = time.time()

    GCSPATH = "gs://[path to template]"
    BODY = {
        "jobName": "GCS2BigQuery_{tstamp}".format(tstamp=time_stamp),
        "parameters": {
            "inputLocations": '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name),
            "outputLocations": '{{\"location1\":\"[project]:[dataset].[table]\", [... other locations]"}}',
            "customGcsTempLocation": "gs://[my bucket]/dataflow"
        },
        "environment": {
            "zone": "us-east1-b"
        }
    }
    print(BODY["parameters"])

    request = service.projects().templates().launch(projectId=PROJECT, gcsPath=GCSPATH, body=BODY)
    response = request.execute()
    print(response)
The above example returns an error indicating an invalid field ("location1"), which I pulled from a completed Dataflow job. I know I need to specify the GCS location, the template location, and the BigQuery table, but I haven't found the correct syntax anywhere. As mentioned above, I found the field names and sample values in the job's generated metadata file.
I realize that this specific use case may not ring any bells but in general if anyone has had success determining and using the correct startup parameters for a Dataflow job exported from Dataprep, I'd be most grateful to learn more about that. Thx.
I think you need to review this document; it explains exactly the syntax required for passing the various pipeline options available, including the location parameters needed [1].
Specifically, in your code snippet the following does not follow the correct syntax:
"inputLocations" : '{{\"location1\":\"[my bucket]/{filename}\"}}'.format(filename=file_name)
In addition to that document [1], you should also review the available pipeline options and their correct syntax [2].
Please use the links; they are the official documentation links from Google. These links will never go stale or be removed, as they are actively monitored and maintained by a dedicated team.
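For what it's worth, a common source of malformed parameters here is hand-escaping the nested JSON. Below is a sketch that builds the same parameter values with json.dumps instead, keeping the parameter names found in the job's metadata file (those names and all bracketed values are placeholders taken from the question, not confirmed against the exported template):
import json

file_name = "some/input/file.csv"  # hypothetical object name from the GCS event

# Build the nested JSON values programmatically instead of escaping them by hand.
parameters = {
    "inputLocations": json.dumps({"location1": "gs://[my bucket]/" + file_name}),
    "outputLocations": json.dumps({"location1": "[project]:[dataset].[table]"}),
    "customGcsTempLocation": "gs://[my bucket]/dataflow",
}

body = {
    "jobName": "GCS2BigQuery_test",
    "parameters": parameters,
    "environment": {"zone": "us-east1-b"},
}

# This body can then be passed to service.projects().templates().launch(...) as in
# the snippet above; printing it first makes it easy to verify the JSON is valid.
print(json.dumps(body, indent=2))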
I would like to store the message of an alert in InfluxDB using influxDBOut. Is it possible?
Here is my TICK script:
batch
    |query('SELECT mean(value) as value FROM "metrics"."autogen"."__MEASUREMENT__"')
        .period(15m)
        .every(5s)
        .groupBy(*)
        .fill(0)
    |alert()
        .id('[METRICS] - {{ .Name }}')
        .message('{{ .ID }} changed state to {{ .Level}} [{{ .Time }}] => The metric {{ index .Fields "value" }} in the last 15m.')
        .info(lambda: TRUE)
        .warn(lambda: "value" < __WARN_THRESHOLD__)
        .crit(lambda: "value" < __CRIT_THRESHOLD__)
        .stateChangesOnly()
        .levelField('Severity')
    |influxDBOut()
        .database('alerts')
        .retentionPolicy('autogen')
        .measurement('__MEASUREMENT__')
        .tag('Condition', 'Low')
Thank you in advance.
Unfortunately there currently isn't a way to achieve a result like this. If this functionality is particularly important to you, I'd recommend opening up a feature request on Kapacitor detailing your use case.
Q:
I would like to store the message of an alert in InfluxDB using influxDBOut. Is it possible?
A:
Michael definitely knows way better than I do. Yes, there is no straightforward way at the moment. However, that doesn't mean it isn't doable.
What you're trying to do here is a typical software dev problem:
1. Open a file
2. Read its content
3. Format it
4. Write it somewhere else
You can handle this sort of problem in any scripting language that supports the points above. The only tricky part is probably #4, as not every scripting language has an InfluxDB database driver, but you can still perform the writes with curl commands.
What you could do is:
1. Modify your TICK script to output the alert to a file; see log() of the alert node.
2. Write a simple script to watch for any new files written by the log() functionality.
3. Parse the file.
4. Format the data so it can be inserted into a measurement.
5. Set up a scheduler like unix's cron to periodically run your script.
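A rough sketch of steps 2-4 in Python (it assumes the alert node writes one JSON document per line via something like .log('/tmp/alerts.log'); the path, measurement name, and database name below are assumptions):
import json
import requests

LOG_PATH = "/tmp/alerts.log"                      # assumed path given to alert().log(...)
INFLUX_WRITE_URL = "http://localhost:8086/write"  # InfluxDB 1.x HTTP write endpoint

def escape_tag(value):
    # Escape spaces and commas for InfluxDB line protocol tag values.
    return str(value).replace(" ", "\\ ").replace(",", "\\,")

points = []
with open(LOG_PATH) as f:
    for raw in f:
        alert = json.loads(raw)  # each logged alert is a JSON document
        # Carry the alert message as a string field on an "alert_messages" measurement.
        points.append(
            'alert_messages,id={id},level={level} message="{msg}"'.format(
                id=escape_tag(alert["id"]),
                level=escape_tag(alert["level"]),
                msg=alert["message"].replace('"', '\\"'),
            )
        )

# Write all points into the "alerts" database.
resp = requests.post(INFLUX_WRITE_URL, params={"db": "alerts"}, data="\n".join(points))
resp.raise_for_status()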
Hope it helps.
So, I'm trying to use the URLTrigger plugin: https://wiki.jenkins-ci.org/display/JENKINS/URLTrigger+Plugin
I want to trigger my Jenkins job when the text "Last build (#40), 17 hr ago" in the response of the provided URL changes (the build number will be different after each build).
So I made following configurations:
1. Build trigger: Set [URLTrigger] - Poll with a URL.
2. Specified URL to another Jenkins: http://mydomain:8080/job/MasterJobDoNothing/
3. Set Inspect URL content option
4. Set Monitor the contents of a TEXT response
5. Set following regular expression: ^Last build[.]*
6. Set Schedule every minute: * * * * *
7. Trigger the job on another Jenkins
Actual result: My job wasn't triggered.
Then I tried to deal with XML/XPath and specify
8. Set Monitor the contents of an XML response
9. Set XPath: //*[@id="side-panel"] (also tried with one "/")
Actual result: the same.
Please tell me what I'm doing wrong, and provide examples of RegEx or XPath if possible.
Thanks, Dima
I managed to trigger reliably with the regex setting.
The regex pattern is matched against each line of the input.
There is no need to use ^ or $; it always matches from line start to line end.
The plugin compares the contents of the matched lines and triggers if they differ.
It also compares the count of matched lines and triggers if the count differs.
The plugin uses the matches() method of java.util.regex.Matcher, so the regex pattern should conform to it (it's fairly normal regex).
As for your example,
Last build.*
may work.
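A quick way to sanity-check a pattern offline is Python's re.fullmatch, which, like Java's Matcher.matches(), requires the whole line to match (the sample line is the one from the question; note that [.]* matches only literal dots, which is why ^Last build[.]* never matched):
import re

line = "Last build (#40), 17 hr ago"

# fullmatch, like Matcher.matches(), only succeeds if the entire line matches.
print(bool(re.fullmatch(r"Last build.*", line)))     # True  -> lines like this are matched
print(bool(re.fullmatch(r"^Last build[.]*", line)))  # False -> [.]* only matches literal dots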
Refs:
Reference for the regex pattern: http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html
Reference for Matcher: http://docs.oracle.com/javase/7/docs/api/java/util/regex/Matcher.html#matches()
The regex trigger source code: https://github.com/jenkinsci/urltrigger-plugin/blame/master/src/main/java/org/jenkinsci/plugins/urltrigger/content/TEXTContentType.java
I'd recommend to use the "RSS for all" link as a trigger URL instead, and /feed/entry[1] as the XPath expression for the XML response content nature.
PS: I was using PathEnq to debug the XPath expression.
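To see what /feed/entry[1] actually selects, here is a small sketch that fetches the job's Atom feed and prints the first entry's title (the rssAll path is what the "RSS for all" link points to on the Jenkins versions I've used; treat the URL and the title format as assumptions):
import requests
import xml.etree.ElementTree as ET

# Placeholder job feed URL; "RSS for all" on the job page links to .../rssAll
feed_url = "https://JENKINSROOT/job/MasterJobDoNothing/rssAll"

root = ET.fromstring(requests.get(feed_url).content)

# The feed is Atom, so entries live in the Atom namespace.
ns = {"atom": "http://www.w3.org/2005/Atom"}
first_entry = root.find("atom:entry", ns)          # what /feed/entry[1] selects
print(first_entry.find("atom:title", ns).text)     # e.g. "MasterJobDoNothing #40 (stable)"
The first entry changes with each new build, which is what makes it a useful trigger target.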