bazel new_http_archive retry supported?

We have an external dependency that we pull in with new_http_archive.
At one point the external download failed, and as a result one of our pre-submits failed. There does not seem to be a way to retry with new_http_archive.
It would be useful to retry so there is less churn from external connectivity hiccups, which do happen from time to time.
Any idea how to accomplish that? Is there a way to tell bazel to try again if the external URL download fails?
Any help is appreciated.

You can prefetch dependencies using the command bazel fetch before calling bazel build. If the fetch returns a non-zero exit code, you can re-run bazel fetch as many times as you want to retry the external URL download.
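For example, a minimal retry wrapper (the target pattern, attempt count, and sleep are placeholders, not anything Bazel prescribes):

#!/bin/bash
# Retry `bazel fetch` a few times before building; tune to taste.
for attempt in 1 2 3; do
    if bazel fetch //...; then
        break
    fi
    echo "bazel fetch failed (attempt $attempt), retrying..." >&2
    sleep 5
done
bazel build //...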

Retry support is already built into new_http_archive: it should attempt to download the file up to 8 times before giving up (unless it gets an error that suggests retrying would be fruitless, e.g., "403: permission denied").
You can also specify multiple URLs for Bazel to try, e.g.,
new_http_archive(
    name = "whatever",
    urls = [
        "https://mirror1.example.com/your_archive.zip",
        "https://mirror2.example.com/your_archive.zip",
    ],
    ...
)
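Pinning a sha256 on the archive should also help here: if I recall correctly, Bazel can then reuse a previously downloaded copy from its repository cache instead of hitting the network again.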

Related

dotnet-isolated azure function container loads 0 of 1 function from metadata and then gives http status 204 (no content)

I have a .NET 6 isolated-function Docker container that works locally but not in Azure. The Dockerfile copies the build output binaries to the home/site/wwwroot directory of the container, based on the image mcr.microsoft.com/azure-functions/dotnet-isolated:4-dotnet-isolated6.0.
When I look at the live log stream I can see it loads 0 of 1 function from metadata.
The configuration is set up correctly as far as I can see, but I don't have full access. It's set up as dotnet-isolated and Functions version 4, and I can see it's pointing at the right Docker image.
I'm not sure what else to check to troubleshoot why it doesn't start properly. Are the files in the correct location in the Dockerfile? Does it need anything else that I have missed?
Any advice will be greatly appreciated.
Thanks
Thanks, I should have mentioned that this is for a timer trigger only, so there are no HTTP triggers.
In Azure Functions:
For HTTP triggers, the response comes back as an HTTP status code.
For timer triggers, failures are surfaced by throwing exceptions, not by status codes.
I found an article on the dontcodetired site where the author mentions that you can write the status code manually; in some situations the Azure Functions runtime also returns one automatically.
One of those situations is a failed operation that does not throw: the function completes execution without a proper result, which is a kind of internal server problem. The request (of any trigger type) is processed, but without a proper response or operation result.
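As a rough sketch of what that means for the isolated model (the function name, schedule, and DoWork helper below are illustrative placeholders, not from the original post), a timer-triggered function reports failure by throwing, and the Functions runtime records the invocation as failed:

using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class NightlyJob
{
    [Function("NightlyJob")]
    public void Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer, FunctionContext context)
    {
        var logger = context.GetLogger("NightlyJob");
        logger.LogInformation("Timer fired at {Now}", DateTime.UtcNow);

        // There is no HTTP response to set on a timer trigger; to mark the
        // invocation as failed, throw and let the runtime log the failure.
        if (!DoWork())
        {
            throw new InvalidOperationException("Nightly job did not produce a result.");
        }
    }

    private static bool DoWork() => true; // illustrative placeholder
}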

How to catch the output of a process or a command

I'm trying to write a rule whose condition depends on the output of a script.sh. I have tried several approaches, but without success.
I searched your documentation but didn't find anything that helps me. I tried several evt and proc fields, but none of them gave me any info.
In fact, this is the rule I'm using while trying to find a workaround:
- rule: FIM Custom rule
  desc: Testing rule
  condition: access_log_files and (evt.type=close)
  output: Test result (proc_name=%proc.name command=%proc.cmdline evt_type=%evt.type evt.args =%evt.args syslog_.facility_str=%syslog.facility.str syslog_message=%syslog.message)
  priority: WARNING
Please note that I'm running Falco on Docker with the latest image.
When I run the command logger test on the Ubuntu host, I receive this message in the stdout of the Falco Docker container:
{"hostname":"dc95654c63c3","output":"01:21:29.759239580: Warning Test result (proc_name=python3 command=python3 /usr/lib/ubuntu-advantage/timer.py evt_type=close evt.args =res=0 syslog_.facility_str= syslog_message=)","priority":"Warning","rule":"FIM Custom rule","source":"syscall","tags":[],"time":"2022-12-17T01:21:29.759239580Z", "output_fields": {"evt.args":"res=0 ","evt.time":1671240089759239580,"evt.type":"close","proc.cmdline":"python3 /usr/lib/ubuntu-advantage/timer.py","proc.name":"python3","syslog.facility.str":null,"syslog.message":null}}
So please tell me what I can do.
Thanks
In order to feed Falco with external sources of events (those that are not kernel syscalls) you'd need to use a Falco plugin. There are plugins to obtain events from Kubernetes, AWS CloudTrail, or even from GitHub. However, there is no plugin that I know of to obtain information from the standard output of a program or from syslog.
Due to the open nature of the Falco project, anyone in the community can contribute such a plugin, so I invite you to join the Falco Slack channel and ask around, or even write your own plugin.
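For reference, a plugin (CloudTrail here, purely as an example) gets enabled in falco.yaml along these lines; the library path and parameters are illustrative:

plugins:
  - name: cloudtrail
    library_path: libcloudtrail.so
    init_config: ""
    open_params: ""
load_plugins: [cloudtrail]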

Cross site issue with Microsoft Graph Toolkit

I'm following this tutorial to create a simple web app with a Microsoft 365 login. I'm currently getting this error when debugging locally (http://localhost:8080):
Warning:
mgt-loader.js:61 A parser-blocking, cross site (i.e. different eTLD+1) script, https://unpkg.com/@microsoft/mgt/dist/bundle/wc/webcomponents-loader.js, is invoked via document.write. The network request for this script MAY be blocked by the browser in this or a future page load due to poor network connectivity. If blocked in this page load, it will be confirmed in a subsequent console message. See https://www.chromestatus.com/feature/5718547946799104 for more details.
In Azure, I have the Redirect URIs set up to match (http://localhost:8080).
After some googling, I tried adding async, but then I get this warning and the login button doesn't appear:
mgt-loader.js:61 Failed to execute 'write' on 'Document': It isn't possible to write into a document from an asynchronously-loaded external script unless it is explicitly opened.
What would be causing this warning and how can I fix it?
First, check out how document.write works: https://developer.mozilla.org/en-US/docs/Web/API/Document/write
You will understand why you cannot run document.write in an asynchronous context (try running document.write('Hello world!'); in the console on any page).
The warning tells you that parser-blocking (synchronous), cross-site (not coming from the same domain as the website) scripts may be blocked by Chrome in the future if someone has an unstable or bad internet connection.
If you want it to run synchronously without that warning, you have to bundle that JS code with your own, or just serve it from your own origin, the same as your website (e.g. localhost:8080). You can download the @microsoft/mgt npm package and, for bundling, use gulp, webpack, or another tool of your choice, as sketched below.
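For example, a minimal sketch of the bundled approach, assuming MGT v2 installed from npm (npm install @microsoft/mgt-element @microsoft/mgt-components @microsoft/mgt-msal2-provider) and any bundler you like; the client ID is a placeholder:

// index.js -- bundled with webpack/gulp and served from your own origin
import { Providers } from '@microsoft/mgt-element';
import { Msal2Provider } from '@microsoft/mgt-msal2-provider';
import '@microsoft/mgt-components'; // registers <mgt-login> and friends

Providers.globalProvider = new Msal2Provider({
  clientId: 'YOUR-CLIENT-ID', // placeholder: your Azure app registration
});

With everything served from your own origin, nothing is cross-site and no document.write is involved, so both warnings go away.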
https://unpkg.com/@microsoft/mgt@2.4.0/dist/bundle/wc/webcomponents-loader.js
This script tries to differentiate between async and sync contexts (line 175) and to use document.appendChild (instead of write) in the async context, but for some reason the check fails (readyState === 'loading').
https://developer.mozilla.org/en-US/docs/Web/API/Document/readyState
How to check if an Javascript script has been loaded Asynchronously (Async) or async attribute is present?
If you want to run this in a non-blocking manner, you could try to fix the script yourself.
There is a GitHub repo for that toolkit (https://www.npmjs.com/package/@microsoft/mgt), but there is no issue regarding async loading, nor regarding the warning that you have noticed, so maybe nobody else has noticed or thought about it yet.

how to get bazel cache items or fail

I know this is stupid, but I am trying to prove that bazel will do great things for us. We have a hairy, complex build system and it is going to be a huge lift to move it to bazel. I have been told we can't have the money/time to do this. So I am trying to do this bass-ackwards.
I want to make rules for our unit tests that don't use bazel for the build. My thinking is that when I run a test, it first looks for a marker file with the current hash tree. If it's not there, I run the test and gather stats about the time it took. Then I put that info in the marker file with a bazel rule. The next time for the same hash tree, I find the marker file, extract the info and generate a nice message that bazel just saved X time on this job. I can then scrape those messages and produce shiny management graphs demonstrating how great having hash dependency test control is. Hopefully, this will get us funded to do it right.
I am hoping you stop laughing at me long enough to help figure this out.
thanks,
jerry
Bazel does not write anything to the source directory, and it is hard to make it do so. Your solution is probably doable, but you would need to know how Bazel works under the hood, and it would be overkill for such a hack.
IMO the best way is to write a simple bash script that will run your tests:
sh_test(
    name = "test",
    srcs = ["test_wrapper.sh"],
    data = [all_files_required_by_tests_runner],
)
and you will get that pretty message about saved time for free
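The wrapper can be as thin as this sketch (the runner invocation is a placeholder for however your suite is launched today):

#!/bin/bash
# test_wrapper.sh -- just hand control to the existing test runner;
# Bazel hashes the data dependencies and caches the result itself.
set -euo pipefail
exec ./run_existing_tests.sh "$@"

Because the data files are inputs to the test action, an unchanged tree makes bazel test report the target as "(cached) PASSED" instead of re-running it, which is exactly the saved-time signal you want to scrape.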

ILOG - version 8.0.1

Sometimes when rules are deployed from Decision Center to RES, the recent changes are visible in the new archive on RES, but the execution results don't reflect them. It is as if the changes are not recognized at execution time. A second deployment, without any changes to the rules, fixes the situation. Can somebody explain why this is happening?
You can try a couple of things:
The XU MBean ruleset archive changed/modified notification might be failing. Check that you have the necessary access for this notification. You can try logging into the RES console and running Diagnostics -> Run Diagnostics to see if there are any errors. Also, check the ODM server logs for any errors when you deploy the ruleset.
Check if there is any caching issue.
