I'm writing a simple AppleScript script for iTerm2 that generates a bunch of tabs and starts running services within them (I have a lot of microservices, and all need to be running to test things locally).
Things are mostly working; however, I'm experiencing some slightly odd behavior that I think is related to AppleScript sending commands early. Let's look at a concrete example:
create tab with default profile
tell the current session
    write text "cd services/myservice"
    write text "make build-docker"
    write text "make run-docker"
end tell
In theory, this chunk should
1) Create a new tab
2) Change into a new directory
3) Build a docker image and
4) Run that docker image.
This will occasionally work, but more frequently I run into problems on step 4. Specifically, I'll check the tab only to find out that 'make build-docker' was the last command run. This command takes some time, so I'm assuming "make run-docker" is sent while the build is running and is ignored. Is there a way to force AppleScript/iTerm2 to wait for this command to finish so that run is executed correctly?
Hopefully this is clear. Thanks for reading!
If you have iTerm's shell integration enabled, you can poll 'is at shell prompt' to determine whether the command has finished:
tell application "iTerm2"
    tell current window
        create tab with profile "BentoBox"
        tell current session
            write text "sleep 5"
            repeat while not (is at shell prompt)
                delay 0.5
            end repeat
            write text "sleep 5"
        end tell
    end tell
end tell
Otherwise, you will want to string all the commands together in one write text (use && instead of ; if the later commands should only run when the earlier ones succeed):
write text "cd services/myservice && make build-docker && make run-docker"
Or place them in a shell script and execute that:
write text "my_super_duper_shell_script.sh"
Or use AppleScript to write the commands to a tmp file and execute that; see: How can I create or replace a file using AppleScript?
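One way to sketch that last option (the temp path is illustrative, the commands are taken from the question, and the file is written via do shell script rather than AppleScript's own file-writing commands):
set tmpPath to "/tmp/run_myservice.sh"
-- Write one command per line into the temp script
do shell script "printf '%s\\n' 'cd services/myservice' 'make build-docker' 'make run-docker' > " & quoted form of tmpPath
tell application "iTerm2"
    tell current session of current window
        -- Running it as one script keeps the commands sequential
        write text "sh " & quoted form of tmpPath
    end tell
end tell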
I want to create some "build traceability" functionality, and include the actual bazel command that was run to produce one of my build artifacts. So if the user did this:
bazel run //foo/bar:baz --config=blah
I want to actually get the string "bazel run //foo/bar:baz --config=blah" and write that to a file during the build. Is this possible?
Stamping is the "correct" way to get information like that into a Bazel build. Note the implications around caching though. You could also just write a wrapper script that puts the command line into a file or environment variable. Details of each approach below.
I can think of three ways to get the information you want via stamping, each with differing tradeoffs.
First way: Hijack --embed_label. This shows up in the BUILD_EMBED_LABEL stamping key. You'd add a line like build:blah --embed_label blah in your .bazelrc. This is easy, but that label is often used for things like release_50, which you might want to preserve.
Second way: hijack the hostname or username. These show up in the BUILD_HOST and BUILD_USER stamping keys. On Linux, you can write a shell script at tools/bazel which will automatically be used to wrap bazel invocations. In that shell script, you can use unshare --uts --map-root-user, which will work if the machine is set up to enable bazel's sandboxing. Inside that new namespace, you can easily change the hostname and then exec the real bazel binary, like the default /usr/bin/bazel shell script does. That shell script has full access to the command line, so it can encode any information you want.
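A minimal sketch of such a wrapper, under the assumptions above (the INSIDE_UNSHARE guard and the hostname encoding are illustrative; BAZEL_REAL is the variable the standard wrapper convention uses to locate the real binary):
#!/bin/bash
# tools/bazel: re-exec inside a new UTS namespace so the hostname change stays local.
if [ -z "$INSIDE_UNSHARE" ]; then
    INSIDE_UNSHARE=1 exec unshare --uts --map-root-user "$0" "$@"
fi
# Encode the command line into the hostname; it surfaces as BUILD_HOST when stamping.
hostname "$(echo "bazel-$*" | tr -c 'a-zA-Z0-9\n-' '-' | cut -c1-60)"
exec "${BAZEL_REAL:-/usr/bin/bazel}" "$@"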
Third way: put it into an environment variable and have a custom --workspace_status_command that extracts it into a stamping key. Add a line like build:blah --action_env=MY_BUILD_STYLE=blah to your .bazelrc, and then do echo STABLE_MY_BUILD_STYLE ${MY_BUILD_STYLE} in your workspace status script. If you want the full command line, you could have a tools/bazel wrapper script put that into an environment variable, and then use build --action_env=MY_BUILD_STYLE to preserve the value and pass it to all the actions.
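As a sketch, the workspace status side of that wiring might look like this (the script path is illustrative; point --workspace_status_command at it in your .bazelrc):
#!/bin/bash
# tools/status.sh: turn the environment variable into a stamping key.
# The STABLE_ prefix routes it to stable-status.txt, so changes re-run stamped actions.
echo "STABLE_MY_BUILD_STYLE ${MY_BUILD_STYLE}"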
Once you pick a stamping key to use, src/test/shell/integration/stamping_test.sh in the Bazel source tree is a good example of writing stamp information to a file. Something like this:
genrule(
    name = "stamped",
    outs = ["stamped.txt"],
    cmd = "grep BUILD_EMBED_LABEL bazel-out/stable-status.txt | cut -d ' ' -f 2 > $@",
    stamp = True,
)
If you want to do it without stamping, just write the information to a file in the source tree from a tools/bazel wrapper. You'd want to put that file in your .gitignore, of course. echo "$@" > cli_args is all it takes to dump the arguments to a file, and then you can use that as a source file like normal in your build. This approach is the simplest, but it interacts the worst with Bazel's caching, because everything that depends on that file will be rebuilt every time, with no way to control it.
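Sketched as a tools/bazel wrapper (the file location relative to the workspace root is illustrative):
#!/bin/bash
# tools/bazel: record the full command line in the source tree, then delegate.
echo "bazel $*" > "$(dirname "$0")/../cli_args"
exec "${BAZEL_REAL:-/usr/bin/bazel}" "$@"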
Since Ranorex does not provide re-run functionality out of the box, I have to write my own. Before I start, I just want to ask for advice from people who've done it, or about possible existing solutions on the market.
Goal:
At the end of the run, re-run the failed test cases.
Requirements:
The number of re-run iterations should be configurable
If data binding is used, only the data-binding iterations that failed should be included
I would use Ranorex's command line argument possibilities to achieve this. The main thing would be to structure the suite so that each test case can be run separately.
During the test run I would log the failed test cases to a text file, a database, or any other store you can later read the data from (you can even parse them from the XML result if you want to).
From that data, you just pass each test case name as a command line argument when running the suite again:
testSuite.exe /testcase:TestCaseName
or
testSuite.exe /tc:TestCaseName
The full command line args reference can be found here:
https://www.ranorex.com/help/latest/lesson-4-ranorex-test-suite
Possible solutions:
1a. Based on the report XML: parse the report and collect info about all failed test cases.
Cons:
Parsing will be tricky
or:
1b. Or create the list of failed test cases at runtime: if a failure occurs during tear-down, add that iteration to the re-run list (could be a file or a DB table).
Using, for example:
string testCaseName = TestCaseNode.Current.Name; // name of the current (failed) test case
int testCaseIndex = TestSuite.Current.GetTestCase(testCaseName).DataContext.CurrentRowIndex; // data row that failed
then:
2a. Based on the list, run the executable with parameters, looping through each record (see the sketch after this list).
like this:
testSuite.exe /tc:testCaseName /tcdr:testCaseIndex
or:
2b. Or generate a new test suite file (.rxtst) and recompile the solution to create an updated executable.
and last part:
3a. Finally, repeat the process from a CI script that runs the executable while failedTestCases > 0 && currentRerunIterations < expectedRerunIterations
or:
3b. Or wrap the whole test suite in a re-run test module, do the same check there, and run Ranorex from the test module.
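For 2a, a minimal sketch of the re-run loop (the list file name, its "TestCaseName;RowIndex" format, and the executable path are illustrative):
using System.Diagnostics;
using System.IO;

class RerunFailed
{
    static void Main()
    {
        // One line per failed test case, logged during the original run.
        foreach (var line in File.ReadAllLines("failed_testcases.txt"))
        {
            var parts = line.Split(';');
            var info = new ProcessStartInfo
            {
                FileName = "testSuite.exe",
                Arguments = "/tc:" + parts[0] + " /tcdr:" + parts[1],
                UseShellExecute = false
            };
            using (var proc = Process.Start(info))
            {
                proc.WaitForExit(); // run the re-tries one at a time
            }
        }
    }
}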
Please let me know what you think about it.
I want to have a single script that either collects TensorBoard data or not, depending on how I run it. I am aware that I can pass flags to tell my script how I want it to be run. I could even hard-code it in the script and just manually change the script.
Either solution has a bigger problem: I find myself having to write an if statement everywhere in my script where the summary writer operations might run. For example, I find that I have to do something like:
if tb_sys_arg == 'tensorboard':
    merged = tf.merge_all_summaries()
and then, depending on the value of tb_sys_arg, run the summaries or not, as in:
if tb_sys_arg == 'tensorboard':
    merged = tf.merge_all_summaries()
else:
    train_writer = tf.train.SummaryWriter(tensorboard_data_dump_train, sess.graph)
This seems really silly to me; I'd rather not have to do that. Is this the right way to do it? I just don't want to collect statistics each time I run my main script, but I don't want to have two separate scripts either.
As an anecdote, a few months ago I started using TensorBoard, and it seems I have been running my main file as follows:
python main.py --logdir=/tmp/mdl_logs
so that it collects TensorBoard data. But I realized that I don't think I need that last flag to collect the data. It's been so long that now I forget whether I actually need it. I've been reading the documentation and tutorials, but it seems I don't need that flag (it's only needed to run the web app, as in tensorboard --logdir=path/to/log-directory, right?). Have I been doing this wrong all this time?
You can launch the Supervisor without the "summary" service, so it won't run the summary nodes; see the "Launching fewer services" section of the Supervisor docs: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.train.Supervisor.md#launching-fewer-services
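A minimal sketch of that (TF 0.x-era API to match the linked docs; logdir and train_op are placeholders for your own graph):
import tensorflow as tf

# summary_op=None tells the Supervisor not to launch the summary service,
# so the summary nodes are never run; checkpointing etc. still works.
sv = tf.train.Supervisor(logdir="/tmp/mdl_logs", summary_op=None)
with sv.managed_session() as sess:
    while not sv.should_stop():
        sess.run(train_op)  # train_op comes from your own graph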
I have a script that I want to run within some tests using RSpec, and I want to test its input and output.
For example,
I have a script 'script.rb' that I run using 'ruby script.rb'
The script then outputs to STDOUT and takes input via STDIN. I want to test this using RSpec and check that everything works appropriately (a feature test). How would I go about doing this? Just execute 'ruby script.rb' within RSpec and then test that the output is what I expect and that it takes input from STDIN?
Since you want to test your script as a black box (only STDIN/STDOUT), you can perfectly well do it via a system() call: just make sure to clean up any side effects your script leaves behind (for example, temporary files).
EDIT: Just create a file with the needed content, say stdin.txt.
After that you may run your command this way:
system('cat stdin.txt | ruby myscript.rb')
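Note that system() only gives you the exit status; if you want to assert on the output itself, capturing it with Open3 from the standard library is one option (a minimal sketch; the input and expected output strings are illustrative):
require "open3"

RSpec.describe "script.rb" do
  it "produces the expected output for a given STDIN" do
    # capture2 feeds stdin_data to the process and returns its STDOUT and exit status
    stdout, status = Open3.capture2("ruby script.rb", stdin_data: "some input\n")
    expect(status).to be_success
    expect(stdout).to include("expected output")
  end
end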
Is there a way to define a task in Ant that always gets executed at the end of every run? This SO question provides a way to do so at the start of every run, before any other targets have been executed, but I am looking at the opposite case.
My use case is to echo a message warning the user if a certain condition was discovered during the run but I want to make sure it's echoed at the very end so it gets noticed.
Use a BuildListener, e.g. the exec-listener, which provides a task container for each build result (BUILD SUCCESSFUL | BUILD FAILED) where you can put all the tasks you need; see:
https://stackoverflow.com/a/6391165/130683
for details.
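If you'd rather roll your own listener, a minimal sketch (class and property names are illustrative; register it with ant -listener WarnAtEndListener):
import org.apache.tools.ant.BuildEvent;
import org.apache.tools.ant.BuildListener;

// buildFinished runs after the build result is decided, so the warning is printed last.
public class WarnAtEndListener implements BuildListener {
    public void buildFinished(BuildEvent event) {
        String flag = event.getProject().getProperty("warn.condition");
        if (Boolean.parseBoolean(flag)) {
            System.out.println("WARNING: condition detected during the run");
        }
    }
    public void buildStarted(BuildEvent event) {}
    public void targetStarted(BuildEvent event) {}
    public void targetFinished(BuildEvent event) {}
    public void taskStarted(BuildEvent event) {}
    public void taskFinished(BuildEvent event) {}
    public void messageLogged(BuildEvent event) {}
}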
It's an interesting situation. Normally, I would say you can't do this in an automated way. You could wrap Ant in some shell script to do this, but Ant itself really isn't a full-fledged programming language.
The only thing I can think of is to add an <ant> call at the end of each target to echo out what you want. You could set it up so that if a variable isn't present, the echo won't happen. Of course, this means calling the same target a dozen or so times just to get that final <echo>.
I checked through AntXtras and Ant-Contrib for possible methods, but couldn't find any.
Sorry.
Wrap your calls in the sequential container.
http://ant.apache.org/manual/Tasks/sequential.html
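For example (target names are illustrative; note that tasks inside a target already run in order, and a failure earlier in the chain will skip the final echo, so this only guarantees the message on successful runs):
<target name="build-with-warning">
    <sequential>
        <antcall target="build"/>
        <antcall target="test"/>
        <echo message="WARNING: condition X was detected during this run"/>
    </sequential>
</target>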