Request to Azure PowerShell Durable Activity Function denied SQL Access - azure-durable-functions

I am having an issue with an Azure PowerShell Durable Function and could use some ideas on where to look for a solution. One of the Activity Functions triggers an API on a separate Function App to execute a query against an Azure SQL DB, and that call returns an error. The error indicates the triggered function cannot reach the server, yet when the query is triggered directly from that Function App itself, it succeeds.
I have logs and diagnostics enabled but have yet to find a log entry anywhere explaining why the call from the Durable Activity Function fails while triggering the target function directly succeeds.
Note: The Durable Function also triggers another function on a different function app for a list of available DBs in the Azure SQL Elastic Pool. This call successfully executes via the Durable Function.

Related

Service worker will not intercept fetches

I am serving my service worker from /worker.js and want it to intercept fetches to /localzip/*, but the fetch event is never fired.
I register it like this:
navigator.serviceWorker.register(
  "worker.js",
  { scope: "/localzip/" }
);
And I claim all clients when it activates, so that I can start intercepting fetches from the current page immediately. I am sure that the service worker is activating and that clients.claim() is succeeding.
self.addEventListener("activate", (e) => {
// Important! Start processing fetches for all clients immediately.
//
// MDN: "When a service worker is initially registered, pages won't use it
// until they next load. The claim() method causes those pages to be
// controlled immediately."
e.waitUntil(clients.claim());
});
Chrome seems happy with it, and the scope appears correct.
My fetch event handler is very simple:
self.addEventListener("fetch", (e) => {
console.log("Trying to make fetch happen!");
});
From my application, after the worker is active, I try to make a request, e.g.,
const response = await fetch("/localzip/lol.jpg");
The fetch does not appear to trigger the above event handler, and the browser instead tries to make a request directly to the server and logs GET http://localhost:3000/localzip/lol.jpg 404 (Not Found).
I have tried:
Making sure the latest version of my worker code is running.
Disabling / clearing caches to make sure the fetch isn't being handled by the browser's cache.
Hosting from an HTTPS server. (Chrome is supposed to support service workers on plaintext localhost for development.)
What more does it want from me?
Live demo: https://rgov.github.io/service-worker-zip-experiment/
Note that the scope is slightly different, and the fetch is performed by creating an <img> tag.
First, let's confirm you are not using hard-reload while testing your code. If you use hard-reload, none of the requests will go through the service worker.
See https://web.dev/service-worker-lifecycle/#shift-reload
I also checked chrome://serviceworker-internals/ in Chrome, and your service worker does have a fetch handler.
Then, let's check the code in detail.
After trying your demo page, I found that a network request is handled by the service worker after clicking the "Display image from zip archive" button, since I can see this log:
Service Worker: Serving archive/maeby.jpg from zip archive
Then this error is thrown:
Failed to load ‘https://rgov.github.io/localzip/maeby.jpg’. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with ‘TypeError: db is undefined’.
This is caused by the db object not being initialized properly. It would be worth confirming whether you see the same DB-related issue in your demo; if not, the following explanation might not apply.
Let me explain some of the service worker mechanics alongside my understanding of your code:
Timing of install handler
Your DB open code runs in the install handler only. This means the db object is assigned only when the install handler is executed.
Note that the install handler runs only when necessary: if a service worker already exists and does not need updating, the install handler won't be called. Hence, the db object in your code might not always be available.
Stop/Start Status
When the service worker does not handle events for a while (how long that is depends on the browser), it goes into a stopped/idle state.
When the service worker has stopped (you can check its state in the devtools) and is started again, the global db object will be undefined.
In my understanding, this is why I see the error TypeError: db is undefined.
Whenever the service worker wakes up, the whole worker script is executed again, but the event handlers only run when their events actually arrive.
How to prevent stop/idle for debugging?
Keep the devtools open for the page; the browser will then keep the service worker alive.
Once you close the devtools, the service worker may go to the "stop" state again soon.
Why does the service worker stop?
The service worker is designed for handling requests. If no request or event needs to be handled by the service worker, there is no reason for its thread to keep running, so it is stopped to save resources.
Note that fetch and message events (among others) will wake the service worker.
See Prevent Service Worker from automatically stopping
Solution for the demo page
If the error is from the DB, the getFromZip function should open the DB itself whenever db is unavailable.
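A minimal sketch of that idea, assuming a hypothetical openDb() helper that wraps indexedDB.open() (the names below are illustrative, not taken from the demo):
let db = null;

// Lazily (re)open the database so a restarted worker can still use it.
async function getDb() {
  if (!db) {
    db = await openDb(); // hypothetical helper around indexedDB.open()
  }
  return db;
}

async function getFromZip(path) {
  const database = await getDb();
  // ... look up `path` in the stored zip archive via `database` ...
}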
By the way, even without any change, your demo code works well with the following steps:
As a user, I open the demo page for the first time. (This ensures that the install handler is called.)
I open the devtools as soon as I see the page content. (This prevents the service worker from going to the "stop" state.)
I click "Download zip archive to IndexedDB" button.
I click "Display image from zip archive" button.
Then I can see the image is shown properly.
Jake Archibald pointed out that I was confused about the meaning of the service worker's scope, explained here:
We call pages, workers, and shared workers clients. Your service worker can only control clients that are in-scope. Once a client is "controlled", its fetches go through the in-scope service worker.
In other words:
The scope affects which pages (clients) get placed under the service worker's control and not which fetched URLs get intercepted.
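So, to have requests from the demo page intercepted, the page itself has to be in scope. A rough sketch of that setup (the scope value and serveFromZip helper are assumptions for illustration, not code from the demo):
// In the page: register the worker so it controls the page making the requests.
// With the worker served from /worker.js, scope "/" covers the page at /.
navigator.serviceWorker.register("/worker.js", { scope: "/" });

// In the worker (worker.js): pick out only the URLs you care about.
self.addEventListener("fetch", (e) => {
  const url = new URL(e.request.url);
  if (url.pathname.startsWith("/localzip/")) {
    e.respondWith(serveFromZip(url.pathname)); // serveFromZip is hypothetical
  }
});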

Google Cloud Storage Transfer - Python method for getting status of TransferOperation

There does not appear to be any method in the Python client API for Google's storage transfer service that checks the status of an ongoing transfer operation. There is get_transfer_job, which shows the status of a transfer job itself (and gives the latest operation name). But I can't find any way of getting the status of an actual operation, which is critical.
I know other languages' client APIs (including at least Go and Node.js) have this functionality. It may be possible to use a raw REST API request, but we're running into authentication issues. Is there any other way that I'm missing? Is there any way to call the TransferOperation type directly (such as client.TransferOperation(<transfer_operation_name>))?
There is a method available for this, which you can use in the following manner.
GetTransferJobRequest(mapping=None, *, ignore_unknown_fields=False, **kwargs)
Reference Link - Class GetTransferJobRequest
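A minimal sketch of using it, assuming the google-cloud-storage-transfer package is installed and using placeholder project and job names:
from google.cloud import storage_transfer_v1

client = storage_transfer_v1.StorageTransferServiceClient()

# Placeholder names -- substitute your own project ID and transfer job name.
request = storage_transfer_v1.GetTransferJobRequest(
    job_name="transferJobs/1234567890",
    project_id="my-project",
)
job = client.get_transfer_job(request=request)

# The job resource reports its overall status and the name of its most
# recent operation, which is the starting point for checking operation state.
print(job.status)
print(job.latest_operation_name)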

Service worker not working in electron but works great in browser

In my React application I have a service worker working in the browser as expected. Unfortunately, when the application is built and packaged with Electron, the service worker does not work. It seems to register, but the service worker throws an error: Uncaught (in promise) TypeError: Failed to fetch.
The only real difference I can see is the origin of the worker. In Electron it reports a file path, whereas in the browser it reports the http://localhost path. This is using Electron 11. The application is written in React from create-react-app, which has not been ejected.
This error originates from CORS (Cross-Origin Resource Sharing). The thing is, if you delete your service worker content and leave maybe a simple console.log statement, the app will work. That is because in your service worker's fetch handler you are manually sending requests with the Fetch API.
Unfortunately, the CORS rules that are set inside your application are lost inside your service worker if you use the Fetch API.
If you really want to have that fetch handler in your service worker, you have two choices:
You can customize the fetch handler so it doesn't do anything for index.html (which is the most important file for you). In that case, if you don't do anything and you don't call event.respondWith, the request will not be intercepted by the service worker and your CORS headers will not be lost. A sketch of this option is shown after these choices.
This is the tougher option. You need to set your CORS headers manually in the service worker fetch handler.
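A minimal sketch of the first option, assuming the handler only needs to intervene for non-navigation requests (the check used here is illustrative):
self.addEventListener("fetch", (event) => {
  // Let navigation requests (e.g. for index.html) fall through untouched:
  // by not calling event.respondWith(), the browser performs the request
  // itself and the original CORS behaviour is preserved.
  if (event.request.mode === "navigate") {
    return;
  }

  // Handle everything else inside the worker.
  event.respondWith(fetch(event.request));
});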

Is it possible to execute some code like logging and writing result metrics to GCS at the end of a batch Dataflow job?

I am using apache beam 2.22.0 (java sdk) and want to log metrics and write them to a GCS bucket after a batch pipeline finishes execution.
I have tried using result.waitUntilFinish() followed by the intended code:
DirectRunner: the GCS object is created as expected and the logs appear on the console.
DataflowRunner: the GCS object is created, but the logs (written after the pipeline executes) don't appear in Stackdriver.
Problem: when a template is created for the same pipeline and staged on GCS, neither the GCS object is created nor do the logs appear when running from the template.
What you are doing is the correct way of getting a signal for when the pipeline is done. There is no direct API in Apache Beam that allows getting that signal within the running pipeline aside from waitUntilFinish().
For your logging problem, you need to use the Cloud Logging API in your code. This is because the pipeline is submitted to the Dataflow service and runs on GCE VMs, which log to Cloud Logging, whereas the code outside of your pipeline runs locally.
See Perform action after Dataflow pipeline has processed all data for a little more information.
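A minimal sketch of that pattern, assuming the google-cloud-logging Java client is on the classpath (the log name below is a placeholder):
import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import com.google.cloud.logging.Severity;
import java.util.Collections;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.PipelineResult;

public class PostRunLogging {
  public static void main(String[] args) throws Exception {
    Pipeline pipeline = Pipeline.create();
    // ... build the pipeline here ...

    PipelineResult result = pipeline.run();
    result.waitUntilFinish();

    // This code runs on the machine that launched the job, not on the
    // Dataflow workers, so write to Cloud Logging explicitly.
    try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
      LogEntry entry =
          LogEntry.newBuilder(StringPayload.of("Batch pipeline finished"))
              .setSeverity(Severity.INFO)
              .setLogName("post-pipeline-metrics") // placeholder log name
              .setResource(MonitoredResource.newBuilder("global").build())
              .build();
      logging.write(Collections.singleton(entry));
    }
  }
}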
It is possible to export the logs from your Dataflow job to Google Cloud Storage, BigQuery or Pub/Sub. To do that, you can use the Cloud Logging console, the Cloud Logging API or gcloud logging to export the desired entries to a specific sink.
In summary, to use the log export:
Create a sink, selecting Google Cloud Storage as the sink service (or one of the other options).
Within the sink, create a query to filter your logs (optional).
Choose the export destination.
Afterwards, every time Cloud Logging receives new entries it will add them to the sink, and only the new entries.
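For example, a sink that exports Dataflow job logs to a Cloud Storage bucket can be created with gcloud (the sink name, bucket and filter below are placeholders):
gcloud logging sinks create dataflow-logs-sink \
    storage.googleapis.com/my-dataflow-logs-bucket \
    --log-filter='resource.type="dataflow_step"'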
While you did not mention whether you are using custom metrics, I should point out that you need to follow the metrics naming rules, here. Otherwise, they won't show up in Stackdriver.

Is there a way to get notification when a transaction is ended in Hyperledger Fabric?

I would like to get a notification when my transaction (chaincode deploy or invoke) has ended. I use the REST API and I am trying to avoid errors like:
'Error when querying chaincode: Error:Failed to launch chaincode spec(Could not get deployment transaction for 97e1a9887ad9695f8ce5b0a8d0e6f250bb75ba19db49f2f610b4c450deba0233ee41d9d00a6c1142bfb021946ab36e506e454053ad5231414d43c9fba0a601c7 - ledger: resource not found)'.
Is there a way, or should I just poll the transaction at the http://vp:5000/transactions/{txuuid} endpoint and post the query message after it comes back with a proper result?
The closest thing to this is to have a block listener application that examines incoming blocks and parses them for criteria specified by your app.
Example block listener
