I know this is stupid, but I am trying to prove that Bazel will do great things for us. We have a hairy, complex build system and it is going to be a huge lift to move it to Bazel. I have been told we can't have the money/time to do this. So I am trying to do this bass-ackwards.
I want to make rules for our unit tests that don't use bazel for the build. My thinking is that when I run a test, it first looks for a marker file with the current hash tree. If it's not there, I run the test and gather stats about the time it took. Then I put that info in the marker file with a bazel rule. The next time for the same hash tree, I find the marker file, extract the info and generate a nice message that bazel just saved X time on this job. I can then scrape those messages and produce shiny management graphs demonstrating how great having hash dependency test control is. Hopefully, this will get us funded to do it right.
I am hoping you stop laughing at me long enough to help figure this out.
thanks,
jerry
Bazel does not write anything to the source directory, and it is hard to make it do so. Your solution is probably doable, but you would need to know how Bazel works under the hood, and it would be overkill for such a hack.
IMO the best way is to write a simple bash script which will run your tests:
sh_test(
    name = "test",
    srcs = ["test_wrapper.sh"],
    data = [all_files_required_by_tests_runner],
)
and you will get that pretty message about saved time for free, because Bazel caches test results keyed on the declared inputs and reports cached runs in the test summary.
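For illustration, here is a minimal sketch of what test_wrapper.sh could look like; run_legacy_tests.sh is just a placeholder for whatever currently runs your suite outside Bazel:

#!/usr/bin/env bash
# Hypothetical wrapper: delegate to the existing, non-Bazel test runner.
# Bazel re-runs this script only when something listed in srcs/data changes;
# otherwise the test summary shows the previous result as "(cached) PASSED".
set -euo pipefail
exec ./run_legacy_tests.sh "$@"

You can then scrape the "(cached)" lines out of the bazel test output for your graphs instead of maintaining marker files yourself.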
I just started using Jenkins, and I am learning a lot.
I installed it on Windows 7.
One thing I can't figure out is the File Operations plugin.
I don't know how to set up a simple "copy, paste" from one location to another.
Yeah, maybe it's dumb, but I just can't figure it out; I've tried a lot of things and always get a broken build.
What do I need to set up for a simple test? Like I like to say: show me the door and I'll figure out the rest :)
The include file pattern wants an Ant-style filter (relative to your workspace as the root).
Include File Pattern: myWorkspaceSubFolder\*.txt
The target location also assumes the workspace root, though I guess you can redirect it to another drive.
I have a Lua project with Lua files spread across multiple directories, all under the same root folder, with some dependencies between them.
Occasionally I run into issues where a table being constructed at load time throws a nil error because it references a table that has not yet been initialised, like:
Customer =
{
    Type = CustomerTypes.Friendly
}
This causes a nil error for CustomerTypes, as CustomerTypes.lua has not yet been loaded.
My current solution is to simply have a global function call in these lua files to load the dependency scripts.
What I would like to do is pre-process my lua files to find all dependencies and at run time load them in that order without needing function calls or special syntax in my lua files to determine this (i.e. the pre-processor will procedurally work out dependencies).
Is this something which can be realistically achieved? Are there other solutions out there? (I've come across some but not sure if they're worth pursuing).
As usual with Lua, there are about 230891239122 ways to solve this. I'll name 3 off the top of my head, but I bet I could illustrate at least 101 of them and publish a coffee table book.
First of all, it must be said that the notion of 'dependencies' here is strictly up to your application. Lua has no sense of it. So this isn't anything like overcoming a deficiency in Lua; it's simply you creating a scripting environment in your application that makes you comfortable, and that's what Lua's all about.
Now, it seems to me you've jumped to the conclusion that preprocessing is required to solve the given problem. I don't think that's warranted. I feel somewhat comfortable saying a more conventional approach would be to set an __index metamethod on the global table which handles the "CustomerTypes doesn't exist yet" situation by consulting a list of scripts scanned out of the filesystem up front, finding one called CustomerTypes.lua, and running it.
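A minimal sketch of that idea, assuming every dependency lives in a file named <GlobalName>.lua under a scripts/ directory (the directory layout and the use of dofile here are assumptions, not something from your project):

-- Lazy-load unknown globals: reading CustomerTypes before it exists triggers
-- __index on the globals table, which looks for scripts/CustomerTypes.lua,
-- runs it, and returns whatever global that script defined.
setmetatable(_G, {
  __index = function(globals, name)
    local path = "scripts/" .. name .. ".lua"
    local file = io.open(path, "r")
    if file then
      file:close()
      dofile(path)                  -- expected to define the global 'name'
      return rawget(globals, name)
    end
    return nil
  end,
})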
But maybe you have some good reason for wanting it done strictly as preprocessing. In your case, I would start by considering 'dependencies' to be any name which matches a script found in your scripts filesystem. Then scan each script for the names of dependencies using the list you just created, and prepend a command that loads each dependency (dofile/require) to each of those scripts.
Since the concept of "runtime" or "preprocessing" is somewhat ambiguous in this context, you might mean at script-compile time. You could use the LuaMacros token filters system to effect a macro which replaces CustomerTypes with require("CustomerTypes") or something to that effect, after having discovered that CustomerTypes is a legal dependency name.
I'm trying to add a .mo file for en_US translations but I keep getting this error:
Updating the catalog failed. Click on 'Details >>' for details.
And the content is:
execvp(xgettext--force-po, -o, /tmp/poeditf0AcvR/0extrated.pot <...> ) failed with error 2!
You know what's a much better place to report problems with applications you use than SO? The application's own bug tracker: as a rule of thumb, its developer is best able and most qualified to help. It's a good idea to provide relevant details, too (this evergreen is worth every second spent reading it, please do: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html), such as the version of the app, your platform, and the specifics of what you're actually doing ("add a .mo" isn't quite as descriptive as it could be).
Seeing that Poedit is apparently trying to launch a program named xgettext--force-po, which quite obviously doesn't exist, my blind guess would be that you went to Poedit's preferences, messed with the settings for whatever extractor you use for this and accidentally removed a space after xgettext from the extraction command in there.
Remove the extractor, quit Poedit and let it recreate it.
I ran a dynamic simulation in Abaqus 6.11, and need a way to output the results in an efficient manner. I would like to report the velocity (among other quantities) of all the nodes at all time steps. In the GUI I could create a field output and select each step one at a time to report, but this approach is not practical. Does anyone know how to do this? In the end I'm hoping to get one/multiple rpt files containing the data I need. Then I can write a script in Matlab for reading/performing operations with the data.
Thanks
You should write a script to automate the process for you. Since Abaqus exposes an interface for writing Python scripts, you should try that out.
If you've never done something like that, then create a field report for one step/frame manually and open the abaqus.rpy file to see the code necessary to create that single output. Once you figure out how to do it for one step, write a script with a loop to do it for all steps.
When you open the abaqus.rpy file, there will probably be a lot of code, depending on how many commands you have previously issued. The line you need to look for looks something like
session.writeFieldReport(some parameters...)
The script you write can be run via 'File > Run script'.
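To give an idea of where that ends up, here is a rough sketch of such a loop for nodal velocities; 'Job-1.odb' and 'velocity.rpt' are placeholders, and the exact writeFieldReport arguments should be copied from your own abaqus.rpy:

# Rough sketch: report nodal velocity 'V' for every frame of every step.
from abaqus import *
from abaqusConstants import *

odb = session.openOdb(name='Job-1.odb')

for stepIndex, stepName in enumerate(odb.steps.keys()):
    step = odb.steps[stepName]
    for frameIndex in range(len(step.frames)):
        session.writeFieldReport(
            fileName='velocity.rpt', append=ON,
            sortItem='Node Label', odb=odb,
            step=stepIndex, frame=frameIndex,
            outputPosition=NODAL,
            variable=(('V', NODAL),))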
If you need actual help writing the script, maybe you should open a question with a specific problem.
Is there any way of persisting my F# session or serializing it into a file? I.e. so I can hand it to a friend and say "run this" and they will be at the same place I was? I know Forth had this ability but I can't find any way of doing this.
An alternative would be a log file or something of similar ilk, but ideally it would strip the output and just give me the code I wrote.
On the topic of user questions, is there a config file for F# so I can add some "always includes" or alter the defaults?
There is no way to serialize the F# Interactive session or create some log of commands automatically.
The typical user interaction is that you write all your code in an F# script file (.fsx extension) and evaluate code by selecting lines and sending them to F# Interactive using Alt+Enter. If you work like this, then the F# script file is a bit like a log of your work - and you can easily send it to other people.
The good thing about this approach is that you can edit the file - if you write something wrong, you can correct it and the wrong version will not appear in the log. The bad thing is that you need some additional effort to keep the source file correct.
Regarding automatic inclusions - you can specify options for fsi.exe in Visual Studio Options (F# Tools). The --load command line parameter can be used to load some F# source at startup.
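For example, a minimal sketch, assuming a file named load.fsx that holds your "always includes" (both the file name and its contents are placeholders):

// load.fsx - evaluated at every F# Interactive startup via --load
open System
open System.IO

let dump x = printfn "%A" x

Add --load:load.fsx to the F# Interactive options in Visual Studio (Tools > Options > F# Tools), or pass it directly on the command line as fsi.exe --load:load.fsx.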