Where are cpu tags taken into account - bazel

https://bazel.googlesource.com/bazel/+show/master/CHANGELOG.md mentions that there are cpu tags. The question now is where else these tags are taken into account.

Posting the commit message here as I think it answers the question perfectly:
TLDR: You can increase the CPU reservation for tests by adding a "cpu:" (e.g. "cpu:4" for four cores) tag to their rule in a BUILD file. This can be used if tests would otherwise overwhelm your system if there's too much parallelism.
This lets users specify that their test needs a minimum number of CPU cores
to run and not be flaky. Example for a reservation of 4 CPUs:
sh_test(
    name = "test",
    size = "large",
    srcs = ["test.sh"],
    tags = ["cpu:4"],
)
This could also be used by remote execution strategies to tune their
resource adjustment.
As of 2017-06-21, the following alternative options are possible:
genrule: Set tags in the same way as for sh_test.
Example:
genrule(
    name = "foo",
    srcs = [],
    outs = ["foo.h"],
    cmd = "./$(location create_foo.pl) > \"$@\"",
    tools = ["create_foo.pl"],
    tags = ["cpu:4"],
)
Skylark rules: This works as long as you do NOT use workers.
For Skylark rules, cpu can be set manually for each created action individually. This is accomplished by setting execution_requirements.
Example:
ctx.action(
    execution_requirements = {
        "cpu:4": "",  # This is no mistake - you really encode the value in the dict key and put an empty string in the dict value
    },
)
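For completeness, here is a minimal sketch of how the same execution_requirements dict could be attached to an action inside a full Starlark rule, using the newer ctx.actions.run_shell spelling. The rule name, output file, and shell command are hypothetical placeholders; only the execution_requirements part reflects the mechanism described above.
def _heavy_gen_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run_shell(
        outputs = [out],
        command = "echo generated > {}".format(out.path),  # placeholder work
        execution_requirements = {
            "cpu:4": "",  # same key-encodes-the-value convention as above
        },
    )
    return [DefaultInfo(files = depset([out]))]

# Hypothetical rule wiring for the sketch above.
heavy_gen = rule(implementation = _heavy_gen_impl)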

Change request.comment-value?

I use different kinds of stop losses and would like to be notified (SendNotification()) about which kind of stop loss was hit upon trade exit.
Let's say I entered a trade by...
request.action = TRADE_ACTION_DEAL;
request.symbol = pSymbol;
request.type = pType;
request.sl = pStop;
request.tp = pProfit;
request.comment = pComment;
request.volume = pVolume;
request.price = (pType == ORDER_TYPE_BUY) ? SymbolInfoDouble(pSymbol, SYMBOL_ASK)   // ask price for buys
                                           : SymbolInfoDouble(pSymbol, SYMBOL_BID); // bid price for sells
OrderSend(request,result);
I would now like to have the request.comment changed by the last stop loss like so:
request.action = TRADE_ACTION_SLTP;
request.symbol = pSymbol;
request.sl = pStop;
request.tp = pProfit;
request.comment = "Fixed SL";
PositionSelect(_Symbol);
request.order = PositionGetInteger(POSITION_IDENTIFIER);
OrderSend(request,result);
Unfortunately, the second block of code does not change the original request.comment = pComment; (instead the new comment becomes [sl 1.19724]).
Is it possible to change the comment via TRADE_ACTION_SLTP? What am I doing wrong?
Thank you!
I would now like to have the request.comment changed
There was never a way to do this in the MQL4/5 trading platforms.
Sad, but true.
The core functionality has always been focused on engineering a fast, reliable soft-real-time platform (still providing just best-effort scheduling alongside the stream of externally injected FX-market events), so bear with the product as-is.
Plus, there has always been one more degree of uncertainty: the broker-side automation is almost free to modify the .comment part of a trade position. Even if your OrderSend() was explicit about what ought to be stored there, the result was never certain, and the broker side could change this field (immediately or at any later stage) outside of any control left on your side. The only semi-unique keys could be placed into .magic, and your local-side application code always had to do all the work via some key:value storage extension to the otherwise uncertain broker-side content.
Even the trade number (ID, ticket) is not always a persistent key and may change under some trade-management operations, so be very careful before deciding on your approach.
like to be notified ( SendNotification() ) about which kind of stop loss was hit upon trade exit.
Doable, yet one will need to build all the middleware logic on one's own.
The wish is clear and achievable: given a proper layer of middleware logic gets built, one can enjoy any such automation.
Having built things like augmented visual trading, remote AI/ML quant predictors, and real-time, fully adaptive, non-blocking GUI quant-tool augmentations (the trader gets live graphical aids inside the GUI, automatically overlaid on other EA and indicator tools on the GUI surface, fully click-and-modify interactive for fast, visually assisted discretionary adjustments of the traded asset management), I can say that only one's imagination and available resources are the limit here.
Yet one has to respect the published platform limits: just as OrderModify() does not provide any means for the wish above, any add-on, customer-specific reporting on position terminations has to be assembled on one's own initiative, as the platform does not provide (for the reasons noted above) any tools for such non-core activity.
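Below is a minimal sketch of one possible piece of such middleware logic in MQL5. It is only an illustration under two assumptions that must be verified against your broker: (a) each kind of stop loss is recorded locally by the EA (here in a naive parallel-array store keyed by the .magic number), and (b) the server fills the DEAL_REASON property of closing deals; where it does not, the lookup has to fall back entirely to the EA's own records.
// Hypothetical local key:value store: magic number -> human-readable SL kind.
long   g_magics[];
string g_slKinds[];

void RememberStopKind(const long magic, const string kind)
{
   int n = ArraySize(g_magics);
   ArrayResize(g_magics, n + 1);
   ArrayResize(g_slKinds, n + 1);
   g_magics[n]  = magic;
   g_slKinds[n] = kind;
}

string LookupStopKind(const long magic)
{
   for(int i = 0; i < ArraySize(g_magics); i++)
      if(g_magics[i] == magic)
         return(g_slKinds[i]);
   return("unknown");
}

// Called by the terminal on every trade transaction.
void OnTradeTransaction(const MqlTradeTransaction &trans,
                        const MqlTradeRequest     &request,
                        const MqlTradeResult      &result)
{
   if(trans.type != TRADE_TRANSACTION_DEAL_ADD) return;
   if(!HistoryDealSelect(trans.deal))           return;
   // Only deals that close (or reduce) a position are of interest here.
   if(HistoryDealGetInteger(trans.deal, DEAL_ENTRY) != DEAL_ENTRY_OUT) return;
   // DEAL_REASON tells whether the server closed the deal on a stop loss;
   // brokers are not guaranteed to fill it, hence the hedged assumption above.
   if(HistoryDealGetInteger(trans.deal, DEAL_REASON) == DEAL_REASON_SL)
   {
      long magic = HistoryDealGetInteger(trans.deal, DEAL_MAGIC);
      SendNotification("Position closed by " + LookupStopKind(magic)
                       + " stop loss on " + trans.symbol);
   }
}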

How does the parsing of variables in Yocto work?

There are some variables that I just use without knowing what they do. Could someone explain the logic behind all this parsing in Yocto?
What does the underscore do? What are the available arguments other than _append_pn?
PACKAGECONFIG_append_pn-packagename = " packagename"
PREFERRED_VERSION_linux-imx_mx6 = "3.10.17"
SRC_URI_append_toolchain-clang = " file://0004-Remove-clang-unsupported-compiler-flags.patch "
EXTRA_OECONF_append_arm = " --enable-fpm=arm"
How about this one? I know that appending this way is meant to single out a package, but how does it work?
LICENSE_FLAGS_WHITELIST_append = " commercial_packagename"
Someone also mentioned something weird with this that worked for them: bitbake: how to add package depending on MACHINE?
IMAGE_INSTALL_append_machine1 += " package1"
The documentation covers this pretty well: https://www.yoctoproject.org/docs/latest/bitbake-user-manual/bitbake-user-manual.html#basic-syntax
The longer version is that _ introduces an override, which is a way of saying "do something special" instead of just assigning.
Some are operations such as append and prepend.
FOO = "1"
FOO_append = "2"
FOO is now "12" as 2 was appended to 1.
(_prepend does what you'd expect)
_remove can be used to remove items from a whitespace-separated list.
FOO = "1 2 3"
FOO_remove = "2"
FOO is now "1 3".
pn-[recipename] is an override for a specific recipe name (historical naming: "pn" means package name, but it actually refers to the recipe). So your local.conf can do:
EXTRA_OEMAKE_pn-foo = "bar"
And you've just set EXTRA_OEMAKE for the foo recipe, and just the foo recipe.
There are other overrides. The architectures all have overrides, so _arm, _x86, _mips, etc. specify that an assignment is specific to those architectures.
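Putting those pieces together, here is a small illustrative sketch (the recipe name foo and the package names are made up) of how the override suffixes from the examples above combine; each line only takes effect when the corresponding override is active in OVERRIDES:
# Plain assignment: applies everywhere.
EXTRA_OECONF = "--disable-debug"

# Architecture override: only applied when building for an ARM target
# ("arm" is then present in OVERRIDES).
EXTRA_OECONF_append_arm = " --enable-fpm=arm"

# Recipe override: only applied while parsing the hypothetical recipe "foo".
EXTRA_OECONF_pn-foo = "--enable-extra-feature"

# Machine override: the current MACHINE is added to OVERRIDES automatically,
# so this append only happens when MACHINE provides the "machine1" override.
IMAGE_INSTALL_append_machine1 = " package1"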

Information passing between extra_actions

I have an action_listener:
action_listener(
    name = "foo_listen",
    mnemonics = [
        "Foo",  # Foo might usually take several minutes
    ],
    extra_actions = [
        "foo_action_pre",   # Start some processing
        "foo_action_post",  # Finish parts of processing that needs action output
    ],
)
In foo_action_pre, I set
out_templates = [
    "foo_action_pre_data",
],
in order to pass information to foo_action_post.
Now when I add $(location foo_action_pre_data) to the cmd of foo_action_post, Bazel complains that it is not a declared prerequisite.
No matter whether I add it to tools or data, it is never detected as a prerequisite. How can I declare the correct dependency?
You have to use $(output foo_action_pre_data) instead of $(location foo_action_pre_data).
See extra_action.cmd.
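For illustration, a minimal sketch of what the $(output ...) form looks like in an extra_action's cmd; the shell script name here is a hypothetical placeholder:
extra_action(
    name = "foo_action_pre",
    out_templates = ["foo_action_pre_data"],
    # $(output ...) refers to a file declared in out_templates and expands to
    # its path; $(location ...) only works for declared prerequisites such as
    # tools and data, which is why the original attempt was rejected.
    cmd = "start_processing.sh --proto=$(EXTRA_ACTION_FILE) --state=$(output foo_action_pre_data)",
)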

$(location) expansion in Bazel

I want to add $(location) expansion to rules_scala for the jvm_flags attribute, where I set the dependency in the data attribute, but that fails with:
label '//src/java/com/google/devtools/build/lib:worker' in $(location) expression is not a declared prerequisite of this rule.
I define a dependency in my target on that label in the data attribute like this:
scala_specs2_junit_test(
    ...
    data = ["//src/java/com/google/devtools/build/lib:worker"],
    jvm_flags = [
        "-XX:HeapDumpPath=/some/custom/path",
        "-Dlocation.expanded=$(location //src/java/com/google/devtools/build/lib:worker)",
    ],
)
I saw that expansion works when I add ctx.attr.data to the expand_location call, but I wasn't sure whether this is just a hack. Is data indeed a special case?
location_expanded_jvm_flags = []
for jvm_flag in jvm_flags:
    location_expanded_jvm_flags.append(ctx.expand_location(jvm_flag, ctx.attr.data))
I also tried looking at the java_* rules' sources to see how this works (since $(location) expansion there supports the data attribute), but couldn't find the relevant place.
Full target:
scala_specs2_junit_test(
    name = "Specs2Tests",
    srcs = ["src/main/scala/scala/test/junit/specs2/Specs2Tests.scala"],
    deps = [":JUnitCompileTimeDep"],
    size = "small",
    suffixes = ["Test"],
    data = ["//src/java/com/google/devtools/build/lib:worker"],
    jvm_flags = [
        "-XX:HeapDumpPath=/some/custom/path",
        "-Dlocation.expanded=$(location //src/java/com/google/devtools/build/lib:worker)",
    ],
)
You're doing it right.
I looked at the source code and you're right: srcs, deps, and tools (if defined on the rule) are added to the set of labels that expand_location understands. data is added only if the LocationExpander is created with allowDataAttributeEntriesInLabel=true, which it isn't here. That's why you must pass it explicitly via the targets argument of expand_location.
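So explicitly handing the data targets to expand_location is the intended pattern rather than a hack. A minimal Starlark sketch (the rule name and attribute wiring are hypothetical, mirroring the snippet above):
def _impl(ctx):
    # `data` is not in the default srcs/deps/tools set, so pass it explicitly.
    expanded_flags = [
        ctx.expand_location(flag, targets = ctx.attr.data)
        for flag in ctx.attr.jvm_flags
    ]
    print(expanded_flags)  # demo only; a real rule would feed these to the test launcher
    return [DefaultInfo()]

location_expansion_demo = rule(
    implementation = _impl,
    attrs = {
        "jvm_flags": attr.string_list(),
        "data": attr.label_list(allow_files = True),
    },
)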

How to merge AndroidManifest.xml in bazel

My Android project contains some AAR modules, which have their own AndroidManifest.xml. What should I do to have the AARs' manifests merged into the final AndroidManifest.xml?
Thanks very much for any help!
My android_binary rule:
android_binary(
    name = "apk",
    custom_package = "com.xtbc",
    manifest_merger = "android",
    manifest = "AndroidManifest.xml",
    resource_files = glob(["res/**"], exclude = ["res/.DS_Store"]),
    assets = glob(["assets/**"], exclude = ["assets/.DS_Store"]),
    assets_dir = "assets",
    multidex = "manual_main_dex",
    main_dex_list = "mainDexList.txt",
    dexopts = [
        "--force-jumbo",
    ],
    deps = [
        ":lib",
        ":base_lib",
        ":jni",
    ],
)
The :base_lib is a module (i.e., an android_library rule):
android_library(
    name = "base_lib",
    srcs = glob(["base/src/**/*.java"]),
    custom_package = "com.xtbc.base",
    manifest = "base/AndroidManifest.xml",
    resource_files = glob(["base/res/**"], exclude = ["base/res/.DS_Store"]),
    assets = glob(["base/assets/**"], exclude = ["base/assets/.DS_Store"]),
    assets_dir = "base/assets",
    deps = [
        "@androidsdk//com.android.support:support-annotations-23.0.1",
    ],
)
It has its own base/AndroidManifest.xml; what I want is for :base_lib's AndroidManifest.xml to be merged into the final AndroidManifest.xml (i.e., the :apk's AndroidManifest.xml).
I do not have enough stackoverflow reputation to respond to the comment chain, but it sounds like what you are after is the exports_manifest attribute of android_library.
The documentation at https://bazel.build/versions/master/docs/be/android.html#android_library.exports_manifest says that the default is 1; however, that documentation is based on source changes that have not made it into a Bazel release yet. For now, you will need to add exports_manifest = 1 to your android_library. In the next Bazel release this will no longer be necessary.
Also, regarding "AAR modules": if these are prebuilt .aar files, you will want to use the aar_import rule. It does not have an exports_manifest attribute, because it always exports its manifest by default. If these are Gradle Android library modules, then you can just use the android_library rule. If you were referring to the support libraries, @androidsdk//com.android.support:support-annotations-23.0.1 is actually a JAR, not an AAR.
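For concreteness, a hedged sketch of both suggestions; the prebuilt AAR path is a made-up placeholder, and the android_library mirrors the :base_lib target above with exports_manifest added:
# Hypothetical prebuilt AAR: aar_import always exports its manifest for merging.
aar_import(
    name = "some_prebuilt_lib",
    aar = "libs/some_prebuilt_lib.aar",
)

# Source library: export its manifest explicitly until the new default lands.
android_library(
    name = "base_lib",
    srcs = glob(["base/src/**/*.java"]),
    custom_package = "com.xtbc.base",
    manifest = "base/AndroidManifest.xml",
    exports_manifest = 1,  # merged into the android_binary's final manifest
    resource_files = glob(["base/res/**"], exclude = ["base/res/.DS_Store"]),
    assets = glob(["base/assets/**"], exclude = ["base/assets/.DS_Store"]),
    assets_dir = "base/assets",
    deps = ["@androidsdk//com.android.support:support-annotations-23.0.1"],
)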
