Testing errorlevel in a batch file does not return the expected result

Windows 7, 64-bit
I'm writing a batch file that copies a file from one directory to another and then deletes the original. Sometimes the file I want to copy is still being written by another program when the batch file runs, which produces the error "The process cannot access the file because it is being used by another process." I was expecting to be able to test for this condition with an errorlevel check and, when it occurs, jump to a delay and then retry.

However, I am unable to get an IF ERRORLEVEL 1 type of test to give me a 1 condition so I can jump to my delay. I've tried all kinds of variations, including wrapping errorlevel in % signs, but I can't get it to produce a non-zero result even though the "The process cannot access the file..." error is being printed. I swear I had this working at one time, but I can't for the life of me figure out why it no longer works. I even tried the much simpler test below, and it doesn't see the failed delete command as a non-zero errorlevel either. How do I construct a proper errorlevel test that will pick up the failing delete command and allow me to jump to a delay?
:start
copy c:\users\pc\documents\recordings\*.mp3 c:\recordings
del c:\users\pc\documents\recordings\*.mp3
if errorlevel 1 goto delay
goto start
:delay
echo delay
ping 1.1.1.1 > nul
goto start

Well, I was able to come at this a slightly different way. After searching around I found some code that lets me do basically what I want. After integrating it into my much larger script, I am able to jump out to a delay and then retry until the file is no longer locked.
@echo off
2>nul (
>>test.txt echo off
) && (echo file is not locked) || (echo file is locked)
I am however perplexed as to why I was unable to get errorlevel checking to work. I swear I had this working previously but then was unable to get it working again. I've done a lot of searching and haven't really come up with a definitive answer. The errorlevel checking would have been much simpler had it worked.
---update---
I went searching, found the findstr command, and after playing around with some code came up with another method. I'm "locking" the file using the read-only attribute, which yields a different error message ("Access is denied"), but I use findstr to search the error output that is redirected into tmp: if the string is there, do one thing; if it's not, do another. It seems to work, so I might integrate this into my code to see how it goes.
@echo off
cls
:start
del c:\recording.mp3 2> tmp > nul
findstr "Access" tmp > nul
if %ERRORLEVEL% EQU 0 GOTO delay
ECHO NO DELAY
ping 1.1.1.1 > nul
goto start
:delay
echo DELAY
ping 1.1.1.1 -n 1 > nul
goto start

Related

Lua not running script from cmd line

Here's why I'm in this "trying to figure out what I did wrong" mess. As was mentioned before, I deleted everything and started over; this is a fresh install of Windows 11, about 4 days old now. I did add the folders to my system PATH. (And do not get the program from the #1 backup company that starts with an A and ends with an S: when their file goes corrupt, it's pretty much "oh well." Not much help.) But here we go; look at the photos. I am going to try and format this again. How on earth do you know what line this thing is complaining about? There are only 6 lines of code in this thing; the rest is my troubleshooting. ~~~
local x = math.pi
local r = 6
local Area2 = (x * r ^ 2)
local Area = Area2 - Area2 % 0.01  -- truncate to two decimal places
print("Area = " .. Area .. " Oh Yea")
~~~
* Executing task in folder MA_scripts: lua54 c:\Users\iSpeedyG\OneDrive\Documents\MA_scripts\LuaUdemy\4_Variables_Expressions\circleArea.lua
Area = 113.09 Oh Yea
* Terminal will be reused by tasks, press any key to close it.
**"DEFAULT COMMAND PROMPT above"** ALL THIS IN VSCODE
------------------------------------------------------------
* Executing task in folder MA_scripts: lua54 c:\Users\iSpeedyG\OneDrive\Documents\MA_scripts\LuaUdemy\4_Variables_Expressions\circleArea.lua
C:\lua\lua54.exe: cannot open c:UsersiSpeedyGOneDriveDocumentsMA_scriptsLuaUdemy4_Variables_ExpressionscircleArea.lua: No such file or directory
* The terminal process C:\Program Files\Git\bin\bash.exe --login, -c, lua54 c:\Users\iSpeedyG\OneDrive\Documents\MA_scripts\LuaUdemy\4_Variables_Expressions\circleArea.lua terminated with exit code: 1.
* Terminal will be reused by tasks, press any key to close it.
**"DEFAULT BASH PROMPT above"**
----------------------------------------------------------------
Next I added a file named justchecking.lua directly in C:\lua, with 1 line of code:
print("Trying to figure out why i cannot run a lua from cmd line. thinking it has someinthing to do with my docs folder now in one drive.")
* Executing task in folder MA_scripts: lua54 c:\lua\justchecking.lua
C:\lua\lua54.exe: cannot open c:luajustchecking.lua: No such file or directory
* The terminal process C:\Program Files\Git\bin\bash.exe --login, -c, lua54 c:\lua\justchecking.lua terminated with exit code: 1.
* Terminal will be reused by tasks, press any key to close it.
**"DEFAULT BASH PROMPT above"** ```FROM BEFORE DIDNT CHANGE IT```
-----------------------------------------------------------------
* Executing task in folder MA_scripts: lua54 c:\lua\justchecking.lua
```Trying to figure out why i cannot run a lua from cmd line. thinking it has someinthing to do with my docs folder now on one drive.```
* Terminal will be reused by tasks, press any key to close it.
**"DEFAULT COMMAND PROMPT above"** ```*"THESE FOUR above WERE WITH CTRL+SHIFT+B -->Terminal Menu--run, build, task"*```
---------------------------------------------------------------------
```"NOW I'M JUST GOING TO TYPE THE COMMANDS MYSELF, BOTH FROM THE PROMPT AND THEN FROM LUA54. NOTE I EVEN CHANGED DIRECTORY TO WHERE THE FILE IS; THE DEFAULT FOLDER IS ACTUALLY SET TO MA_SCRIPTS BY THE LOADED WORKSPACE"```
c:\lua>lua54
Lua 5.4.2 Copyright (C) 1994-2020 Lua.org, PUC-Rio
lua justchecking.lua
stdin:1: syntax error near 'justchecking'
justchecking.lua
stdin:1: attempt to index a nil value (global 'justchecking')
stack traceback:
stdin:1: in main chunk
[C]: in ?
os.exit()
c:\lua>
**"DEFAULT COMMAND PROMPT above"**
-------------------------------------------------------------------
iSpeedyG@iSpeedyG-PC MINGW64 ~/OneDrive/Documents/MA_scripts
$ cd c:lua
iSpeedyG@iSpeedyG-PC MINGW64 /c/lua
$ dir
justchecking.lua lua54.dll lua54.exe luac54.exe wlua54.exe
iSpeedyG@iSpeedyG-PC MINGW64 /c/lua
$ lua justchecking.lua
bash: lua: command not found
iSpeedyG@iSpeedyG-PC MINGW64 /c/lua
$ justchecking.lua
/c/lua/justchecking.lua: line 1: syntax error near unexpected token Trying to figure out why i cannot run a lua from cmd line. thinking it has someinthing to do with my docs folder now in one drive.
/c/lua/justchecking.lua: line 1: print(Trying to figure out why i cannot run a lua from cmd line. thinking it has someinthing to do with my docs folder now in one drive.)
iSpeedyG@iSpeedyG-PC MINGW64 /c/lua
$
**"DEFAULT BASH PROMPT above"**
-------------------------------------------------------------------
I have no understanding of what happened here. I don't know where I screwed up the path on install. I only created the justchecking test because C:\lua is where I placed the binaries in the beginning, right at C:\, while MA_scripts is in my Documents folder; notice that OneDrive politely just moved that folder to the cloud. I'm going to see if I can attach a photo of my environment variables; I tried to show as much information as I can. I'm running through an online course and tried to
run from the command line like the instructor does, and that's when I found this issue. I hope it is something simple. I am pretty new to this whole thing and have been on a roller-coaster for a few months, hence the reformat and fresh Windows the other day (that's a longer story). As an added issue, at first Ctrl+L used to select the line; now it doesn't, ever since I turned on sync with my GitHub account, and I don't even know where to look. Thanks in advance. (There are numbers in front of the code; they just don't want to copy for some reason.)
[Picture of all of this I have pasted][1]
[picture of my PATH][2]
[1]: https://i.stack.imgur.com/YWgZF.png
[2]: https://i.stack.imgur.com/fztGU.png
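For what it's worth, the "cannot open c:UsersiSpeedyGOneDrive..." messages in the bash runs above are consistent with the shell eating the backslashes: in a POSIX shell, an unquoted backslash is an escape character, so a Windows-style path loses its separators before lua54 ever sees it. A minimal sketch (the path is just an example):

```shell
# In bash, unquoted backslashes are escape characters, so a Windows path
# collapses before the program sees it; quoting (or forward slashes) keeps it intact.
printf '%s\n' c:\Users\iSpeedyG\justchecking.lua      # backslashes stripped by the shell
printf '%s\n' 'c:\Users\iSpeedyG\justchecking.lua'    # single quotes preserve them
printf '%s\n' c:/Users/iSpeedyG/justchecking.lua      # forward slashes also survive
```

This would explain why the same task works from the default command prompt but fails under the Git Bash terminal.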

GNU parallel: deleting line from joblog breaks parallel updating it

If you run GNU parallel with --joblog path/to/logfile and then delete a line from said logfile while parallel is running, GNU parallel is no longer able to append future completed jobs to it.
Execute this MWE:
#!/usr/bin/bash
parallel -j1 -n0 --joblog log sleep 1 ::: $(seq 10) &
sleep 5 && sed -i '$ d' log
If you tail -f log prior to execution, you can see that parallel keeps writing to this file. However, if you cat log after 10 seconds, you will see that nothing was written to the actual file now on disk after the third entry or so.
What's the reason behind this? Is there a way to delete something from the file and have GNU parallel be able to still write to it?
Some background as to why this happened:
Using GNU parallel, I started a few jobs on remote machines with --sshloginfile. I then needed to pkill a few jobs on one of the machines because a colleague needed to use it (and I subsequently removed the machine from the sshloginfile so that parallel wouldn't reuse it for new runs). If you pkill those processes started on the remote machine, they get an Exitval of 0 (it looks like they finished without issues; you can't tell that they were killed). I wanted to remove them immediately from the joblog so that when I restart parallel --resume later, parallel can have a look at the joblog and determine what's missing.
Turns out, this was a bad idea, as now my joblog is useless.
While @MarkSetchell is absolutely right in his comment, the root problem here is that man sed lies:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
sed -i does not edit files in place.
What it does is create a temporary file in the same directory, copy the input file to the temporary file while doing the editing, and finally rename the temporary file to the input file's name. Similar to this:
sed '$ d' log > sedXxO11P
mv sedXxO11P log
It is clear that the original log and sedXxO11P have different inodes - let us call them ino1 and ino2. GNU Parallel has ino1 open and really does not know about the existence of ino2. GNU Parallel will happily append to ino1 completely unaware that when it closes the file, the file will vanish because it has already been unlinked.
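A quick way to see this for yourself with GNU sed (a minimal sketch; the file name is arbitrary):

```shell
# Show that GNU sed -i creates a new file (new inode) rather than
# editing the original in place.
printf 'one\ntwo\nthree\n' > demo.log
before=$(ls -i demo.log | awk '{print $1}')
sed -i '$ d' demo.log                 # "in-place" delete of the last line
after=$(ls -i demo.log | awk '{print $1}')
echo "inode before: $before  after: $after"
[ "$before" != "$after" ] && echo "different inode: sed replaced the file"
```

Any process that opened the original file keeps a descriptor to the old inode and never sees the "edited" file.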
So you need to change the content of the file without changing the inode:
#!/usr/bin/bash
seq 10 | parallel -j1 -n0 --joblog log sleep 1 &
sleep 5
# Obvious race condition here:
# Anything appended to log before sed is done is lost.
# This can be avoided by suspending parallel while running this
tmp=$RANDOM$$
cp log $tmp
(rm $tmp; sed '$ d' >log) < $tmp
wait
cat log
This works right now. But do not expect this to be a supported feature - ever.
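To see that the trick really preserves the inode (and therefore any open file descriptor), here it is in isolation, outside of parallel (a minimal sketch with throwaway file names):

```shell
# Rewrite a file through an output redirection: '>log' truncates the existing
# file in place, so the inode survives and an open writer keeps appending to it.
printf '1\n2\n3\n' > log
before=$(ls -i log | awk '{print $1}')
tmp=backup.$$
cp log "$tmp"
(rm "$tmp"; sed '$ d' > log) < "$tmp"   # read from the copy, truncate-write the original
after=$(ls -i log | awk '{print $1}')
[ "$before" = "$after" ] && echo "same inode: open writers are unaffected"
cat log                                  # the last line is gone
```

The subshell keeps the copy open on stdin, so it can be unlinked immediately; sed then writes through the redirection into the original, unchanged inode.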

Used `tar -xz` without `f` and now program stuck

Strangely, I had assumed the -f option was for "force", not for "file".
I ran tar -xz because I wanted to see if any files would be overwritten. Now it has extracted all the files but has not returned control to me. Should I just kill the process? Is it waiting for input?
-f tells tar to read the archive from a file. Without it, tar tries to read the archive from stdin.
You can press Ctrl-C to kill it, or Ctrl-D (Ctrl-Z on Windows) to send it EOF (at which point it will probably complain about an incorrect archive format).
Without an -f option, tar will attempt to read from the tape device named by the TAPE environment variable, or from a default built into tar (usually something like /dev/st0, or stdin) if TAPE isn't set to anything.
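For reference, here is what the intended invocation looks like with -f naming the archive file (a minimal sketch with throwaway names):

```shell
# -f names the archive file; without it, tar falls back to $TAPE or a built-in default.
mkdir -p demo_dir && echo "hello" > demo_dir/hello.txt
tar -czf demo.tar.gz demo_dir     # create: -f writes the archive to demo.tar.gz
rm -r demo_dir
tar -xzf demo.tar.gz              # extract: -f reads the archive from demo.tar.gz
cat demo_dir/hello.txt            # prints: hello
```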

watching memory in PBS

I'm running a job on a cluster (using PBS) that runs out of memory. I'm trying to print the memory status of each node separately while my other job is running. I created a shell script and call it from inside my job submission script, but when I submit the job I get a permission-denied error on the line that calls the script, and I don't understand why.
Secondly, I was thinking I could put a 'watch free' or 'watch ps aux' in my script file, but now I wonder whether that would leave my submitted job stuck in the memory-watching script and never reach the line that calls my parallel program.
Ultimately, how can I log memory usage in PBS for the jobs I submit? My code is a C++ program using the MRMPI (MPI MapReduce) library.
To see how much memory is being used throughout the job, run qstat -f:
$ qstat -f | grep used
resources_used.cput = 00:02:51
resources_used.energy_used = 0
resources_used.mem = 6960kb
resources_used.vmem = 56428kb
resources_used.walltime = 00:01:26
To examine past jobs you can look in the accounting file. This is located in the server_priv/accounting directory, the default is /var/spool/torque/server_priv/accounting/.
The entries look like this:
09/14/2015 10:52:11;E;202.napali;user=dbeer group=company jobname=intense.sh queue=batch ctime=1442248534 qtime=1442248534 etime=1442248534 start=1442248536 owner=dbeer@napali exec_host=napali/0-2 Resource_List.neednodes=1:ppn=3 Resource_List.nodect=1 Resource_List.nodes=1:ppn=3 session=20415 total_execution_slots=3 unique_node_count=1 end=0 Exit_status=0 resources_used.cput=1989 resources_used.energy_used=0 resources_used.mem=9660kb resources_used.vmem=58500kb resources_used.walltime=995
NOTE: if your ssh access to computing nodes of the cluster is closed, this method won't work!
This is how I ended up doing it. It might not be the best way, but it works:
In summary, I added some short sleep periods between my map and reduce steps by calling the C++ sleep() function, and I also wrote a script that ssh's into the nodes my job is running on and records the memory status of those nodes to a file (using the 'free' or 'top' commands).
More detailed: in my PBS job script, somewhere before the call to my binary, I added this line:
#this goes in job script, before the call to the job binary:
cat $PBS_NODEFILE > /some/path/nodelist.log
This writes a list of the nodes that my job runs on, into a file.
I have a second script "watchmem.sh":
#!/bin/bash
for i in $(seq 60)
do
while read line;
do
ssh $line 'bash -s' < /some/path/remote.sh "$line"
done < /some/path/nodelist.log
sleep 10
done
This script reads the file nodelist.log that we generated before, performs an ssh into each node and calls a third (and last script), remote.sh, on each of those nodes.
remote.sh contains the commands we run on every node of our job. In this case it prints the current time and the output of 'free' into a separate file for each node:
#remote.sh
echo "Current time : $(date)" >> $1
free >> $1  # 'top -b -n 1' could be used here instead of 'free'
Comparing the timestamps in these files with the times printed by my binary lets me find out the memory consumption (alloc/dealloc) at each step.
The sleep periods in my job are there to make sure my scripts capture the memory status between steps. The 'sleep 10' in my script avoids unnecessary writes to the file; this period should be comparable to the sleep duration in the main job.

Jenkins post build task script aborting when result of `{cmd}` is empty in script

I got strange behavior from a Jenkins post-build task script.
Its purpose is to show build errors in Slack, like the following.
EDIT: our Jenkins runs on Mac OS X Yosemite (10.10.4) and uses Unity3d as the build tool.
SLACK_BOT_PATH=$WORKSPACE/tools/bot.rb
SLACK_BOT_NAME="cortana"
SLACK_BOT_TOKEN=`cat $WORKSPACE/../../sendchat_token`
ERRORS=`tail -5000 ~/Library/Logs/Unity/Editor${BUILD_NUMBER}.log | grep ": error"`
ruby $SLACK_BOT_PATH $SLACK_BOT_NAME $SLACK_BOT_TOKEN "build fails : $ERRORS"
And the strange behavior is that it aborts on the ERRORS= line when ERRORS has no contents (an empty string). The Jenkins console output looks like the following.
[workspace] $ /bin/sh -xe /var/folders/j3/8x825bdn2l9dm497yjs2144c0000gn/T/hudson7348609981772923445.sh
+ SLACK_BOT_PATH=*snip*
+ SLACK_BOT_NAME=cortana
++ cat *snip*/../../sendchat_token
+ SLACK_BOT_TOKEN=*snip*
++ tail -5000 ~/Library/Logs/Unity/Editor1710.log
++ grep ': error'
+ ERRORS=
POST BUILD TASK : FAILURE
After I change the grep filter so that ERRORS has some contents, the post-build script runs correctly again.
I want to report a general error message (e.g. "build fails") when no actual errors are found in the logs, but also report the detailed error messages when they are available.
Of course it would be easy to insert a line that sends a general message before grepping the error log, so that such a general message is sent to Slack every time, but I want to know why an empty ERRORS terminates the entire script.
Has anyone encountered the same issue? If so, did you ever find the cause of the problem? Thanks.
To be precise, Jenkins terminates your build when ERRORS is empty because grep exits with status 1 when it finds no matches, and the shell your script runs under treats any non-zero exit status as a failure, killing the build right at the ERRORS= line. You can debug this by printing the exit status of the last command: echo $? on Linux, or echo %ERRORLEVEL% on Windows. 0 stands for success; other codes stand for failure. If you want your post-build script to keep executing even when the grep output is empty, force a success status on that line, e.g. by appending || true to the grep:
ERRORS=`tail -5000 ~/Library/Logs/Unity/Editor${BUILD_NUMBER}.log | grep ": error" || true`
Try the above and see if that helps.
As @prudviraj explained, the issue is that your grep command returns without finding anything, and therefore returns with exit code 1.
Read detailed answer here: Jenkins Build Script exits after Google Test execution
In short: your script is launched with /bin/sh -xe, the -e means "fail immediately on any error", and your grep "errors out" when nothing is found.
The quick fix is to put set +e before the command. But the better way is proper error handling, as explained in the other answer I've linked.
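To make the failure mode and the fix concrete, here is a minimal sketch (the log excerpt and message format are made up):

```shell
set -e                                     # same effect as Jenkins launching /bin/sh -xe
log_excerpt='compile ok
link ok'
# grep exits with status 1 when nothing matches; '|| true' keeps -e from
# killing the script at the assignment.
ERRORS=$(printf '%s\n' "$log_excerpt" | grep ': error' || true)
if [ -z "$ERRORS" ]; then
  echo "build fails : (no detailed errors found in log)"
else
  echo "build fails : $ERRORS"
fi
```

Without the || true, the assignment line inherits grep's exit status 1 and set -e aborts the script before the if is ever reached.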
