Automatically detect if using GUI or batch mode - path

I am using Stata both in the GUI and running scripts in batch mode on a Slurm cluster. The file paths need to be set differently in each case, but I would like to have a single .do file where all of the paths are defined.
Is there a way to write an if statement that evaluates to true when run from the GUI and false when run in batch mode?
Something akin to
glob using_gui = T
if $using_gui == T {
    glob dir "/mydir"
}
else {
    glob dir "D:/mydir"
}
But where $using_gui is automatically determined as T or F

As answered in the Statalist question linked above, this can be done with c(mode):
if "`c(mode)'" == "batch" {
glob dir "/mydir"
}
else {
glob dir "D:/mydir"
}
There are many ways to skin a cat; see help creturn for similar options.
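For instance, a couple of other creturn values that could also distinguish the two environments (a sketch; which values actually differ depends on your setup):
display "`c(mode)'"  // "batch" in batch mode, empty in the GUI
display "`c(os)'"    // e.g. "Unix" on the cluster vs. "Windows" locally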

Are you running the GUI on one machine and the batch job on a different machine? If so, you can use c(username) for this as the two machines will have different usernames.
For example:
if "`c(username)'" == "MyGUILaptop" {
glob dir "/mydir"
}
else if "`c(username)'" == "MySlurmCluster" {
glob dir "D:/mydir"
}
You can see the username of the computer you are on with display "`c(username)'". See more about this here (disclaimer: I wrote the book this links to).
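If the two machines happen to share a username, c(hostname) can be used in exactly the same way (a sketch; the hostname string is a placeholder):
if "`c(hostname)'" == "my-slurm-node" {
    glob dir "/mydir"
}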

Have all Bazel packages expose their documentation files (or any file with a given extension)

Bazel has been working great for me recently, but I've stumbled upon a question for which I have yet to find a satisfactory answer:
How can one collect all files bearing a certain extension from the workspace?
Another way of phrasing the question: how could one obtain the functional equivalent of doing a glob() across a complete Bazel workspace?
Background
The goal in this particular case is to collect all markdown files to run some checks and generate a static site from them.
At first glance, glob() sounds like a good idea, but it stops at package boundaries, i.e., as soon as it runs into a BUILD file.
Current Approaches
The current approach is to run the collection/generation logic outside of the sandbox, but this is a bit dirty, and I'm wondering if there is a way that is both "proper" and easy (i.e., not requiring that each BUILD file explicitly expose its markdown files).
Is there any way to specify, in the workspace, some default rules that will be added to all BUILD files?
You could write an aspect for this to aggregate markdown files in a bottom-up manner and create actions on those files. There is an example of a file_collector aspect here. I modified the aspect's extensions for your use case. This aspect aggregates all .md and .markdown files across targets on the deps attribute edges.
FileCollector = provider(
    fields = {"files": "collected files"},
)

def _file_collector_aspect_impl(target, ctx):
    # This function is executed for each dependency the aspect visits.

    # Collect files from the srcs.
    direct = [
        f
        for f in ctx.rule.files.srcs
        if ctx.attr.extension == f.extension
    ]

    # Combine direct files with the files from the dependencies.
    files = depset(
        direct = direct,
        transitive = [dep[FileCollector].files for dep in ctx.rule.attr.deps],
    )

    return [FileCollector(files = files)]

markdown_file_collector_aspect = aspect(
    implementation = _file_collector_aspect_impl,
    attr_aspects = ["deps"],
    attrs = {
        "extension": attr.string(values = ["md", "markdown"]),
    },
)
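For completeness, a minimal sketch of how this aspect could be driven from a rule so the collected files become buildable outputs (the collect_markdown rule below is my own addition, not part of the linked example):

def _collect_markdown_impl(ctx):
    # Merge the files gathered by the aspect from every dependency.
    files = depset(transitive = [dep[FileCollector].files for dep in ctx.attr.deps])
    return [DefaultInfo(files = files)]

collect_markdown = rule(
    implementation = _collect_markdown_impl,
    attrs = {
        # An aspect parameter must be mirrored as an attribute on the rule
        # that requests the aspect.
        "extension": attr.string(values = ["md", "markdown"]),
        # Attaching the aspect here makes it walk the deps subgraph.
        "deps": attr.label_list(aspects = [markdown_file_collector_aspect]),
    },
)

A BUILD file could then declare collect_markdown(name = "docs", extension = "md", deps = [...]), and bazel build //path:docs would produce the aggregated set.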
Another way is to do a query on file targets (input and output files known to the Bazel action graph), and process these files separately. Here's an example querying for .bzl files in the rules_jvm_external repo:
$ bazel query //...:* | grep -e ".bzl$"
//migration:maven_jar_migrator_deps.bzl
//third_party/bazel_json/lib:json_parser.bzl
//settings:stamp_manifest.bzl
//private/rules:jvm_import.bzl
//private/rules:jetifier_maven_map.bzl
//private/rules:jetifier.bzl
//:specs.bzl
//:private/versions.bzl
//:private/proxy.bzl
//:private/dependency_tree_parser.bzl
//:private/coursier_utilities.bzl
//:coursier.bzl
//:defs.bzl
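For the markdown use case, the same pattern could be adapted to filter on the .md extension (an untested sketch of the same query):
$ bazel query //...:* | grep -e "\.md$"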

How can I pass a pointer to a file in helm upgrade command?

I have a truststore file (a binary file) that I need to provide during helm upgrade. This file is different for each target env (dev, qa, staging, or prod), so I can only provide it at deployment time. helm upgrade --set-file does not take a binary file; this seems to be the issue described here: https://github.com/helm/helm/issues/3276. These truststore files are stored in the Jenkins credential store.
The command itself is described as follows:
--set-file stringArray: set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
It is also important to know the Format and Limitations of --set.
The error you see, Error: failed parsing --set-file data..., means that the file you are trying to use does not meet the requirements. See the example below:
--set-file key=filepath is another variant of --set. It reads the file and uses its content as a value. An example use case is injecting multi-line text into values without dealing with indentation in YAML. Say you want to create a brigade project with a certain value containing 5 lines of JavaScript code; you might write a values.yaml like:
defaultScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("hello default script")
  }
  events.on("run", run)
Being embedded in YAML makes it harder to use IDE features, testing frameworks, and other tooling that supports writing code. Instead, you can use --set-file defaultScript=brigade.js with brigade.js containing:
const { events, Job } = require("brigadier")
function run(e, project) {
  console.log("hello default script")
}
events.on("run", run)
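For a binary file specifically, one workaround (my suggestion, not part of the Helm docs quoted above; the file and key names are placeholders) is to base64-encode the truststore first, so that --set-file only ever handles plain text:
base64 truststore.jks > truststore.b64
helm upgrade myrelease ./mychart --set-file truststore=truststore.b64
In the chart, the value can then be placed directly into a Secret's data field, which expects base64 anyway, or decoded with the b64dec template function wherever the raw bytes are needed.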
I hope it helps.

Minifying import paths for modules in webpack

I've got a TypeScript project that uses Webpack successfully to yield ES6, which is run through babel-minify to tree-shake it and produce a significantly smaller bundle output file.
This file appears to contain all the logic from my own program as well as the logic for each of the pieces of the imported libraries I'm using (e.g. rxjs, lodash, etc.)
However, looking through the generated file, it appears that at the top we have some webpack bootstrap logic, then a map from each original import path to the function that implements it, and so on all the way down, with various portions pointing to dependencies by their path strings.
Now, given that everything is self-contained within this webpack bundle (no other chunks), the inclusion of all the source file names seems to take up a lot of space needlessly.
For example, I'm looking at one section in here for lodash's isBoolean script:
"./node_modules/lodash/isBoolean.js": function(e, t, o) {
var r = o("./node_modules/lodash/_baseGetTag.js"),
s = o("./node_modules/lodash/isObjectLike.js");
e.exports = function(e) {
return !0 === e || !1 === e || s(e) && "[object Boolean]" == r(e)
}
},
Now, it seems like there are a lot of characters being taken up to describe the source file. Since there's no actual dependency at this time on the source file, why can't each key just be replaced with a shorter string identifier throughout, as in the following example:
"a": function(e, t, o) {
var r = o("b"),
s = o("c");
e.exports = function(e) {
return !0 === e || !1 === e || s(e) && "[object Boolean]" == r(e)
}
},
where "a", "b", and "c" are all representative of each place where the original string values occur throughout the entire bundle. This shouldn't impact all strings, but rather just the import file path strings.
There appears to be someone asking a similar question at Webpack compress path names who didn't really get a satisfactory answer, in my opinion.
Is there some option or plugin I can use that could mangle the module path names?
Figured it out after reading through webpack's source code and seeing how it actually assembles the bundle.
I had the NamedModulesPlugin in my config (likely an artifact from one of the various quickstarts out there), and this plugin inserts all the module paths into the output.
Commenting it out (or just removing it) removes the paths from the output entirely (no mangling necessary).
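A minimal sketch of the change, assuming a webpack 3/4-style config where the plugin was added explicitly:
// webpack.config.js
const webpack = require("webpack");

module.exports = {
    // ...
    plugins: [
        // new webpack.NamedModulesPlugin(), // removed: this is what wrote the
        // full module paths into the bundle; without it webpack falls back to
        // short numeric module ids
    ],
};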

Grep a file and save the result to variable synchronously using Gulp

I have the following case: I need to grep a file for some regex, and the resulting string (or array of strings) needs to be saved to a variable for later use. And this has to be achieved using Gulp.
It should look like this in my idea:
var line;
gulp.task('grep', function(callback) {
    line = someCoolSyncFunction('/needle/', './haystack.txt');
    callback();
});
gulp.task('useIt', ['grep'], function() {
    console.log(line);
});
The important thing is that someCoolSyncFunction is synchronous and handles a file on the physical/virtual file system, not a Vinyl file.
Is there a way to do this using Gulp? Or any other approach to achieve a similar effect?
PS: to explain the reason: I need to extract the version number from the Debian package changelog and insert it into a configuration file inside the package during the build process.
Thanks a lot.
Vit
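As a minimal sketch, someCoolSyncFunction could be implemented with Node's built-in fs module alone (the signature follows the pseudocode above; treating needle as a RegExp rather than a string is my assumption):
var fs = require('fs');

// Synchronously read the file and return all lines matching the needle.
function someCoolSyncFunction(needle, haystackPath) {
    var content = fs.readFileSync(haystackPath, 'utf8');
    return content.split('\n').filter(function(line) {
        return needle.test(line);
    });
}

// e.g. inside the 'grep' task:
// line = someCoolSyncFunction(/^Version: /, './haystack.txt');
Because readFileSync blocks, the assignment completes before callback() runs, so the 'useIt' task sees the value.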

FitNesse Slim - root page references to .class files in MANY different projects

I am developing some fixtures in Java to use with FitNesse Slim. I run into problems (EXCEPTION: java.lang.NoClassDefFoundError) when I have to update my root page with paths like this:
!define TEST_SYSTEM {slim}
!path: C:\WORKSPACE\Projects\iperoom_67_workspace\acceptance_test_project\bin
!path: C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\common_util\target\classes
!path C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\dfc_util\target\classes
where a class in, e.g., ...BASE\dfc_util\target\classes has the following imports:
import no.joint.iperoom.test.AbstractDfcTest;
// ... rest of the class elided ...
Which gives the complete path in my local C drive workspace:
C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE\dfc_util\target\classes\no\joint\iperoom\test
My question is: could I say, on the root page:
classpath: C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE*, as in: take in all the .class files from here and up? Something more general?
And possibly import several paths to .class files on the FitNesse test page:
|import|
|dfc_util.target.classes.no.joint.iperoom.test.AbstractDfcTest|
Or is there any other, better way to solve this problem with a growing number of !path entries in my root page, due to calling one .class from another .class from another .class, and so forth?
Or maybe my fixture code is not good enough:
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;

public class SessionHelperTest /* extends AbstractDfcTest */ {
    public boolean testNewSession() {
        System.out.println("Hello Joint");
        IDfSession session = SessionRegistry.getSuperUserSession("eRoomPCI_v_1_1");
        try {
            String si = session.getSessionId();
            System.out.println("The sessionId is:\n" + si);
            return true;
        } catch (DfException e) {
            e.printStackTrace();
            return false;
        }
    }
}
Cheers
Magnus
I don't think !path is going to work the way you want it to. If you define it at too low a level, I'm pretty sure it won't find your classes.
The !path works fine when you do any of the following:
This will get all of the class files under build/classes if it is under the folder FitNesse starts in:
!path build/classes
This will handle multiple jar files:
!path lib/*.jar
Important to note is that you can leverage environment variables for this. Assuming you have an environment variable called WORKSPACE defined that points to the base of your project, you can do this:
!path ${WORKSPACE}/acceptance_test_project/bin
!path ${WORKSPACE}/acceptance_test_project/common_util/target/classes
!path ${WORKSPACE}/acceptance_test_project/dfc_util/target/classes
The reality is that if your files are scattered across multiple folders, you will have to use multiple entries, if only to make sure you control the order in which the path is processed. If you define these paths only on your FrontPage, everything below it will inherit the same path, so you only have to manage it in one location. While the list might be longer than you prefer, the maintenance is manageable.
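Along the same lines, the common path prefix can be pulled into a wiki variable on the root page so each entry stays short (the variable name BASE is illustrative):
!define BASE {C:\WORKSPACE\Projects\iperoom_67_workspace\iperoom\BASE}
!path ${BASE}\common_util\target\classes
!path ${BASE}\dfc_util\target\classes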
