What storage method to use for run history in an Azure Logic App?

I'm new to Azure Logic Apps, and I'm creating an app that needs the date, time, and status of the previous run.
I can see this information in the run history for the app, but is there a way I can retrieve it and use it in my logic app?
My first thought was to create a table in a SQL database, but that seems overengineered for a single table.
Is there a smarter way to solve this within Logic Apps?

If your requirement is to get the running information of one logic app in another logic app, you can refer to the solution below:
1. Create a "Log Analytics workspace" and add "Logic Apps Management" to it; you can refer to this tutorial.
2. Create a new logic app and configure its "Diagnostic settings"; please refer to the steps in this tutorial.
3. After completing the configuration above, wait a few minutes and then run your logic app several times as a test. (The logs arrive in the Log Analytics workspace with some delay; I waited more than 30 minutes, and only runs started after the "Diagnostic settings" deployment showed up in the logs.)
4. Go to your "Log Analytics workspace", click "Workspace summary" --> "Logs".
There are four sample queries for logic apps; you can also write your own queries for whatever logs you want.
I used the third sample query, which shows the distribution of logs by status; we can see two successes and one failure.
5. Then create another logic app to read the logs. First add the "Run query and list results (preview)" action and copy the sample query into the "Query" box.
6. Run the logic app; the log results come back in the format below:
{
    "value": [
        {
            "LogicAppName": "huryLogLogic",
            "NumberOfExecutions": 1,
            "RunStatus": "Failed",
            "Error": "ActionFailed"
        },
        {
            "LogicAppName": "huryLogLogic",
            "NumberOfExecutions": 2,
            "RunStatus": "Succeeded",
            "Error": ""
        }
    ]
}
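Once that JSON is back in the second logic app, the asker's original goal (getting the previous run's status) is just a matter of reading the rows. A minimal sketch of that extraction in Python, using a hypothetical payload in the same shape as the action output above:

```python
import json

# Hypothetical sample payload, shaped like the "Run query and
# list results" action output shown above.
payload = """
{
  "value": [
    {"LogicAppName": "huryLogLogic", "NumberOfExecutions": 1,
     "RunStatus": "Failed", "Error": "ActionFailed"},
    {"LogicAppName": "huryLogLogic", "NumberOfExecutions": 2,
     "RunStatus": "Succeeded", "Error": ""}
  ]
}
"""

def executions_by_status(raw: str) -> dict:
    """Map each RunStatus to its NumberOfExecutions."""
    rows = json.loads(raw)["value"]
    return {row["RunStatus"]: row["NumberOfExecutions"] for row in rows}

print(executions_by_status(payload))
# {'Failed': 1, 'Succeeded': 2}
```

Inside a logic app you would express the same lookup with the workflow expression language rather than Python, but the shape of the data is identical.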

Related

Zapier: How to select accounts dynamically/programmatically

A lot of Zapier steps have "Choose account" as the second configuration section ("Choose app & event" --> "Choose account" --> "Set up actions" --> "Test action").
Is there any way to select the account dynamically? We want to be able to use the account that corresponds to conditions determined earlier in the zap (i.e. using a value from the output of a step). Right now, we have to manually select the account from the list of connected accounts, and there seems to be no way to change it during the zap.
This means that for each new account we manage, we have to copy the entire zap just to change the selected account, leaving us with an ever-growing list of zaps to maintain (every change has to be repeated for every single account/zap).
Background: Our company manages a growing number of accounts -- let's use Twitter accounts as an example. We use Zapier to update these accounts (ex: send out a new tweet) based on some triggers & conditions. Imagine managing hundreds of accounts this way; it's not scalable.
And please don't tell me about Paths. That is not a scalable solution either, and it's limited to only 3 paths (it can be increased to 10, but that's no better).
We're also aware that Zapier has a limit of 2,000 app connections. That's a problem we'll deal with later.
EDIT: We're also considering using an external service to update the accounts (ex: send out the new tweets), so we'd be open to suggestions in that regard. Worst case scenario, we'll build a small custom API to perform the updates on any specified account, and Zapier will call that API instead of performing the updates directly on a specific account.
This is not currently possible: the params of all steps in a zap (selected account, field mappings, input field values, etc.) are locked in at zap creation time.
This is cool feedback though! Definitely worth reaching out to Zapier support about if you haven't.
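The fallback described in the question's edit, a small custom API that Zapier calls with an account identifier, is a workable pattern. A rough sketch of the core dispatch logic (all names and data here are hypothetical; a real service would sit behind an HTTP endpoint and read credentials from a secrets store):

```python
# Dynamic account selection: the caller (e.g. a Zapier webhook step)
# passes the account id at call time, and the service looks up the
# matching credentials instead of having them baked into the zap.

ACCOUNTS = {  # in practice: a secrets store, not an in-memory dict
    "acct-travel": {"token": "token-1"},
    "acct-food":   {"token": "token-2"},
}

def send_update(account_id: str, message: str) -> str:
    """Pick the right credentials at call time, then post the update."""
    creds = ACCOUNTS.get(account_id)
    if creds is None:
        raise KeyError(f"unknown account: {account_id}")
    # Real code would call the target service's API with creds["token"];
    # here we just return a confirmation string.
    return f"posted to {account_id}: {message}"

print(send_update("acct-food", "hello"))
# posted to acct-food: hello
```

With this in place, one zap can serve every account: the zap computes the account id from earlier step output and sends it to the API, so adding an account means adding a credentials entry, not cloning a zap.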

Strange Google Dataflow job log entries

Recently my job logs in the job details view are full of entries such as:
"Worker configuration: [machine-type] in [zone]."
The jobs themselves seem to work fine, but these entries didn't show up before, and I am worried I won't be able to spot meaningful log entries amid the noise.
Is it something I should be worried about? Do you know how to get rid of them?
Yes, those logs are spammy and nothing to worry about. I have submitted an internal bug to reduce them. While that is being fixed, you can familiarize yourself with the Stackdriver Logs Exclusion feature, which lets you create filters to exclude logs based on a user-defined query.
Here are the steps to exclude specific Dataflow logs:
Navigate to the logs ingestion page
Find the "Dataflow Step" row
Click the right-most button on the same row
Select the "Create Exclusion Filter..." option from the drop-down
Write the query to select which logs you want to exclude
(in your case: resource.type="dataflow_step" "Worker configuration")
Name your filter
Select the percentage of logs to exclude (100% of the selected logs is the default)
Click the "Create Exclusion" button
You can view your created exclusion filter in the "Exclusions" tab in the logs ingestion page
Newly scheduled jobs should no longer produce this log spam; we've added logic to prevent excessive logging of this kind of message.

How to stop Sumo Logic alerts

How can I (force) stop receiving the Sumo Logic alerts?
I have scheduled a Sumo Logic search, and started receiving the email alerts. However, after I unscheduled it (Run frequency = "Never") and even deleted it, I'm still receiving these alerts. It's been over 24 hours now.
I am looking at our org's "Library"; that's where I deleted the scheduled search. Is there anywhere else I can look to see why it's still running?
You may have multiple copies of the same query.
When you receive an alert email, it includes a link to the query. Open the link, click "Edit", and change the schedule to "Never".
With the help of Sumologic Support, I got to the bottom of this.
In short, I had accidentally saved my scheduled search elsewhere (duplicating it), and it was this other instance, which I was unaware of, that was sending the alerts.
Looking back, this is where it had gone wrong:
first, I created a scheduled search by running a Sumo search and clicking "Save As"; I saved it to a team folder, where it really belonged
some time later, I must have run the query again and clicked "Save As" again
this is wrong; after a query is saved once, it should be modified via the "Edit" link, not "Save As"
what's worse, the "Save As" dialog offers my personal folder as the default save location, and I must have overlooked it, thus producing a copy of my scheduled search
at this point, I had two identical searches scheduled: one in the team folder, and one in my personal folder (which I didn't know about); no matter how I modified the scheduled search in the team folder, even deleting it, I never stopped being alerted (because the other search was still active)
I recommend contacting Sumo Logic Support; they accessed my account, looked around, and quickly figured out what was wrong.

Jira ScriptRunner script that counts issues in 2 statuses together

I want to count how many issues are in status Open and in status Verify (our custom flow) per day.
If today 3 issues entered status Open and 3 entered status Verify, I would like the result field to say 6.
I'm not sure how the script should be written in ScriptRunner.
Thanks, guys =)
It would help if you could explain exactly what you want to solve. Jira lets you define filters, which always return a list of matching issues. You can then define/use reports and/or dashboard gadgets to display data based on those filters.
So a solution could be:
Define a filter that searches for the issues. Something like project = XYZ AND status in (open, verify).
Save that filter under a name (e.g. "Open and verified").
Then use that filter in a gadget that displays the issues as a chart, etc.
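A ScriptRunner script would do this in Groovy against the Jira API, but the counting itself is just a group-and-sum over the filter's results. A language-agnostic sketch in Python, using hypothetical issue records shaped like the results of the JQL above:

```python
from collections import Counter

# Hypothetical issue records, shaped like the results of the filter
# project = XYZ AND status in (open, verify).
issues = [
    {"key": "XYZ-1", "status": "open"},
    {"key": "XYZ-2", "status": "open"},
    {"key": "XYZ-3", "status": "open"},
    {"key": "XYZ-4", "status": "verify"},
    {"key": "XYZ-5", "status": "verify"},
    {"key": "XYZ-6", "status": "verify"},
]

# Count issues per status, then combine the two statuses of interest.
per_status = Counter(issue["status"] for issue in issues)
total = per_status["open"] + per_status["verify"]

print(per_status, total)
# Counter({'open': 3, 'verify': 3}) 6
```

This matches the example in the question: 3 issues in Open plus 3 in Verify gives a combined count of 6.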

Running automated test for 24 hours with robot framework and selenium2library on jenkins

I have a test case for account activation which has 2 scenarios.
The first TC is where a user creates an account, receives an activation
link by email, goes to the email, clicks the link, and the account
is activated.
The other TC is where I have to check that the link is
dead after 24 hours. I am using Selenium2Library and
Robot Framework for my test cases.
For the second TC, I cannot find a way to pause the test for 24 hours before fetching the expired link from the email. I tried changing the link's timestamp in the DB by -24 hours, but touching the DB while a TC is running is not a good option.
Does anyone know a workaround for this? Is there a Selenium2Library keyword, etc., to achieve this?
I would break your scenario down into a few different cases and determine the best way to approach each one.
TC1. Create a new account and get the activation link. Maybe Selenium? I would actually prefer, if possible, to do this via an API or database call, if that is where the real logic is. Then I would look to capture the URL before it is even sent out by email. It would be particularly helpful if this were a return value from an API or a value in the DB; otherwise you end up having to go through the hassle of logging into an email system and getting it from there. The verification for this TC is that the URL is generated.
TC2. Set up a new account, or use an existing account/URL pair that has been reset to a "pre-activation" state. If you use Selenium here, there is no real need for a mouse-click event; simply navigate to the URL (e.g. driver.get("myactivationURL")). You can then verify successful activation either in the UI or by querying the DB.
TC3. You could do an A and a B for this: one with an activated account, and one with a non-activated account whose timestamp is older than 24 hours. Verify that in both cases, if the activation link is followed, it gives the proper messaging and the values in the DB are still correct depending on the previous state of the account.
This would be faster and more reliable than trying to wait 24 hours during the running of one long test. It would also mean you could test discrete parts of the process.
For testing purposes, would it be possible to set the expiration time of the link to something more testable, such as five minutes? It should be reasonable to assume that if it is configured to expire in five minutes and it does, then if it is configured for twenty-four hours it will, too.
Of course, you would definitely want to follow this test up with a manual check at some point. Proper testing doesn't require 100% automation. Some tests are best left to humans at a keyboard.
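The configurable-expiration idea can be sketched as a single expiry check whose time-to-live is a parameter rather than a hard-coded 24 hours (all names here are hypothetical, and the `now` argument is injected so tests never have to wait):

```python
from datetime import datetime, timedelta

def is_link_expired(created_at: datetime, ttl: timedelta,
                    now: datetime) -> bool:
    """True once the activation link is older than its time-to-live."""
    return now - created_at > ttl

# Production TTL would be 24 hours; for a test run we shrink it to
# five minutes, as suggested above. The expiry logic is identical.
ttl = timedelta(minutes=5)
created = datetime(2024, 1, 1, 12, 0, 0)

print(is_link_expired(created, ttl, created + timedelta(minutes=4)))
# False
print(is_link_expired(created, ttl, created + timedelta(minutes=6)))
# True
```

A Robot Framework test could then exercise the short-TTL configuration end to end, keeping the suite minutes long instead of a day.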
