Why is the instruction to swap stderr with stdout backward?

In numerous places, I've found people suggesting that you can swap stderr with stdout as follows:
command 3>&2 2>&1 1>&3
This looks backwards to me: we send 3 to 2 and then immediately send 2 to 1, which would appear to send both 3 and 2 to 1. I think there's something basic I don't understand about I/O redirection, but I can't find anything that clarifies it.

You will find a detailed explanation at http://www.catonmat.net/blog/bash-one-liners-explained-part-three/, section 21:
$ command 3>&1 1>&2 2>&3
Here we first duplicate file descriptor 3 to be a copy of stdout. Then we duplicate stdout to be a copy of stderr, and finally we duplicate stderr to be a copy of file descriptor 3, which is stdout. As a result we've swapped stdout and stderr.
There is more detail, and pictures, at the link given. The key insight is:
3>&1 means "3 points to where 1 is pointing". Then 1>&2 says "now 1 points to where 2 is pointing" (1 now points to stream 2's target, but 3 doesn't follow along), and finally 2>&3 says "now 2 points to where 3 is pointing" (which is the original stream 1).
Graphically (but see link - it's much better than my ascii-art):
0 --> /dev/tty0
1 --> /dev/tty1
2 --> /dev/tty2
After 3>&1:
0 --> /dev/tty0
1 --> /dev/tty1
2 --> /dev/tty2
3 --> /dev/tty1
After 1>&2:
0 --> /dev/tty0
1 --> /dev/tty2
2 --> /dev/tty2
3 --> /dev/tty1
After 2>&3:
0 --> /dev/tty0
1 --> /dev/tty2
2 --> /dev/tty1
3 --> /dev/tty1
As you can see, 1 and 2 have been swapped. The same link recommends closing the temporary descriptor 3 with 3>&-.
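You can convince yourself with a quick test. A minimal sketch in bash; the demo function is just a hypothetical stand-in for any command that writes to both streams:

# demo is a hypothetical stand-in: it writes one line to each stream
demo() {
    echo "out line"          # goes to stdout
    echo "err line" >&2      # goes to stderr
}

# Unswapped: only stdout flows down the pipe, so grep sees "out line".
demo 2>/dev/null | grep line

# Swapped: stderr now flows down the pipe instead, so grep sees "err line".
{ demo 3>&1 1>&2 2>&3 3>&-; } 2>/dev/null | grep line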

Huh, I've never seen that. Anyway, it seems that version is essentially saying: first send 3 to the address of 2, then send 2 to the address of 1, then send 1 to the address of 3, which from the first step we know is the original address of 2. So you've swapped them. I haven't tested that out myself, though.


Stapling in PostScript File on Ricoh, Kyocera, or Toshiba Printers

I am writing a PostScript file through code in VB.NET and pslibrary. My main purpose is tray switching between 3 different trays and stapling sets based on variable input. For example, I have a PostScript file of 100 pages: the first two pages are simplex and printed from two different trays. On the third page we switch to the third tray, and the 10 pages from the third tray onward should be stapled. The next 8 pages after page eleven should be stapled separately, and so it goes on.
Note: Ricoh Aficio / Gestetner / Toshiba printers are in use; the models being used are 2105-2090.
Tray switching and the file are working fine, except for stapling.
Stapling is not working through PS, although it works fine on the machine directly.
The following code is being used to do the work:
%%Page: 3 3
%%BeginPageSetup
<< /PageSize[595 841] /Duplex false /MediaColor (Red) /Jog 3 /Staple 3 /StapleDetails << /Type 1 /StapleLocation (SinglePortrait) >>>> setpagedevice
save
%%EndPageSetup
(InvoiceNo 50011287697) 72 755.28 /ArialMT 15 SF
%EndPage: 3
restore
showpage
<</PageSize [595 842]/MediaType (Red) /MediaColor (Red) /MediaWeight 75/Duplex false>> setpagedevice
%%Page: 4 4
%%BeginPageSetup
save
%%EndPageSetup
(InvoiceNo 50011287697) 72 755.28 /ArialMT 15 SF
%EndPage: 4
restore
showpage
<< /Jog 0 >> setpagedevice
<< /Staple 0 >> setpagedevice
But no stapling is done; printing starts coming out from the first page, and through the finisher at that. The printer is simply ignoring the Staple commands.
Things like tray selection and stapling are printer-specific. You'll need to extract the appropriate code fragments from the .PPD files for the printers in question.
Depending on the exact code fragments needed, it may be possible to combine the fragments into a single PostScript fragment that will work on all of these printers. But it's unlikely to make a fully general solution.
For example, the Ricoh Aficio 2105 PPD file has fragments like this:
<<
/Collate true /CollateDetails <</Type 6 /AlignSet true>>
/Staple 2 /StapleDetails << /Type 14 /Angle 0 /Position 0 >>
>> setpagedevice
The Position changes for different locations but is always a small integer for this printer.
Gestetner 2212 shows fragments that look the same to me as for the Ricoh.
The fragment for a Toshiba 2500C is completely different:
<</TSBPrivate (DSSC PRINT STAPLING=769) >> setpagedevice
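If you want to dig those fragments out yourself, here is a hedged sketch for a Unix-like system (the PPD path is a placeholder; in a PPD, the PostScript invocation is the quoted part of an option entry, though the exact keywords vary by vendor):

# /path/to/printer.ppd is a placeholder -- point this at the real PPD
grep -in 'staple' /path/to/printer.ppd
# print only the quoted PostScript invocation strings
awk -F'"' '/[Ss]taple/ && NF >= 3 { print $2 }' /path/to/printer.ppd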

Shift in the columns of spool file

I am using a shell script to extract the data from the 'extr' table. The extr table is a very big table with 410 columns and 61047 rows of data; the size of one record is around 5 KB.
The script is as follows:
#!/usr/bin/ksh
sqlplus -s \/ << rbb
set pages 0
set head on
set feed off
set num 20
set linesize 32767
set colsep |
set trimspool on
spool extr.csv
select * from extr;
/
spool off
rbb
#-------- END ---------
One fine day the extr.csv file had 2 records with an incorrect number of columns (one record with more columns and the other with fewer). Upon investigation I found that two duplicate records were repeated in the file. The primary key should be unique within the file, but in this case 2 records appeared twice. Also, the shift in the columns was abrupt.
Small example of the output file:
5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|200|F
5003|A3A|AAB|153.33|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|258|G
5006|A6A|ABB|147.89|154|H
5003|A7A|AAB|249.67|AAB|153.33|205|R
5004|A8A|269|F
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|215|F
Here the primary key records for 5003 and 5004 have reappeared in place of 5007 and 5008. The duplicate records have also shifted the records of 5007 and 5008 by appending/cutting down their columns.
I need your help analysing why this happened. Why were the 2 rows extracted multiple times? Why were the other 2 rows missing from the file? And why were the records shifted?
Note: This script has been working fine for the last two years and has never failed except for the one time mentioned above. It ran successfully during the next run. Recently we added one more program which accesses the extr table with a cursor (select only).
I reproduced a similar behaviour.
;-> cat input
5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|200|F
5003|A3A|AAB|153.33|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|258|G
5006|A6A|ABB|147.89|154|H
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|215|F
Think of the input file as your database.
Now I write a script that accesses "the database" and stalls at random points:
;-> cat writeout.sh
# Start this script twice
while IFS=\| read a b c d e f; do
# echo's \c could suppress the newline, but this time I strip it with tr instead
echo "$a|$b|$c|$d|" | tr -d "\n"
(( sleeptime = RANDOM % 5 ))
sleep ${sleeptime}
echo "$e|$f"
done < input >> output
EDIT: Removed cat input | in script above, replaced by < input
Start this script twice in the background
;-> ./writeout.sh &
;-> ./writeout.sh &
Wait until both jobs are finished and see the result
;-> cat output
5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|200|F
5003|A3A|AAB|153.33|5001|A1A|AAB|190.00|105|A
5002|A2A|ABB|180.00|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|200|F
5003|A3A|AAB|153.33|258|G
5006|A6A|ABB|147.89|154|H
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|205|R
5004|A4A|ABB|261.50|269|F
5005|A5A|AAB|243.00|258|G
5006|A6A|ABB|147.89|215|F
154|H
5009|A9A|AAB|368.00|358|S
5010|AAA|ABB|245.71|215|F
When I change the last line of writeout.sh to done > output I do not see the problem, but that might be due to buffering and the small amount of data.
I still don't know exactly what happened in your case, but it really looks like 2 programs writing simultaneously to the same file.
A job in TWS could have been restarted manually, 2 scripts in your master script might write to the same file, or something else.
You can prevent this in the future with some locking / checks (for example: when the output file already exists, quit and return an error code to TWS).
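A minimal locking sketch in the same ksh style (the lock directory name and the exit code are assumptions; mkdir is atomic, so it doubles as a test-and-set):

#!/usr/bin/ksh
LOCKDIR=/tmp/extr_spool.lock       # hypothetical lock location
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "extract already running (or stale lock: $LOCKDIR)" >&2
    exit 1                         # non-zero exit code signals TWS
fi
trap 'rmdir "$LOCKDIR"' EXIT       # release the lock on any exit
# ... the sqlplus spool block goes here ...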

Neo4j batch importer NotFoundException

I'm consistently running into a NotFoundException when using the batch importer to read large nodes and relationship files. I've used the importer successfully before with an even larger dataset, but I've rewritten the way I generate the two files, and I'm trying to figure out why it now throws an error.
The problem
It seems to read the nodes file and then throws an error near the start of the rels file, stating that it cannot find a node. I believe this is because it hasn't really imported all the nodes. It reports importing only half of the nodes in nodes.tsv (2.1m of 4.6m total).
Things I've checked:
The node numbers in nodes.tsv are sequential and continuous (0 to ~4.5m)
The node that throws the exception appears in both files (including as both source and target in rels.tsv)
I can successfully import a smaller subset of my data (~80k nodes) using the same tsv generator script
Even though the relationships are not sorted on target (only on source), the smaller subset does not throw this exception
The insert command:
./import.sh wiki.db nodes.tsv rels.tsv
Error message
Using Existing Configuration File
.....................
Importing 2129648 Nodes took 6400 seconds
Total import time: 6404 seconds
Exception in thread "main" org.neo4j.graphdb.NotFoundException: id=3608148
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.getNodeRecord(BatchInserterImpl.java:1215)
at org.neo4j.unsafe.batchinsert.BatchInserterImpl.createRelationship(BatchInserterImpl.java:777)
at org.neo4j.batchimport.Importer.importRelationships(Importer.java:154)
at org.neo4j.batchimport.Importer.doImport(Importer.java:232)
at org.neo4j.batchimport.Importer.main(Importer.java:83)
The files
nodes.tsv (4578730 lines)
node name l:label degrees
0 Stroud_railway_station Page 21
1 ATP–ADP_translocase Page 38
2 Pedro_Hernández_Martínez Page 12
3 Christopher_Lowther Page 4
4 Cloncurry_River Page 10
5 Neil_Kinnock Page 147
6 Free_agent_(business) Page 10
7 Christian_Hilt Page 27
8 2009_Riviera_di_Rimini_Challenger Page 27
rels.tsv (113322480 lines)
start end type
0 3608148 LINKS_TO
0 870126 LINKS_TO
0 1516248 LINKS_TO
0 3493391 LINKS_TO
0 3034096 LINKS_TO
0 1421544 LINKS_TO
0 2808745 LINKS_TO
0 1872783 LINKS_TO
0 1673612 LINKS_TO
Hmm, seems to be a problem with your CSV file; did you try to run CSVKit or something similar on it?
Perhaps you can narrow down the issue by bisecting nodes.tsv and finding the offending line?
Also try the opencsv parser by enabling quotes in your batch.properties:
https://github.com/jexp/batch-import/tree/20#csv-experimental
batch_import.csv.quotes=true
Or flip it to false. Perhaps you have stray single or double quotes in your text? If so, please quote your fields.
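As a quick sanity check along those lines, here is a sketch (assuming the first column of nodes.tsv is the node id and there is one header line, as in the sample above):

# data rows vs. highest node id -- for ids 0..N-1 these should print N and N-1
tail -n +2 nodes.tsv | wc -l
tail -n +2 nodes.tsv | cut -f1 | sort -n | tail -1
# first break in the id sequence, if any (ids are assumed to start at 0)
tail -n +2 nodes.tsv | cut -f1 | awk '$1 != NR - 1 { print "break at data line " NR ": id " $1; exit }'
# stray quote characters that could confuse the parser
grep -c '"' nodes.tsv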

searching multiple words from a .tr file

I need to calculate the received packets from a .tr file. The problem is that only one kind of line is relevant to me, but some unnecessary events are also being counted.
So I want a solution.
line1: r 0.500000000 1 RTR --- 0 cbr 210 [0 0 0 0] ------- [1:0 5:0 32 0] [0] 0 0
line2: r 0.501408175 3 RTR --- 0 AODV 48 [0 ffffffff 1 800] ------- [1:255 -1:255 30 0] [0x2 1 1 [5 0] [1 4]] (REQUEST)
I want only line 1, but as I am searching for '^r' only, both lines are returned. Please help me: how can I search for lines where 2 patterns must match?
You can use grep to match more than one expression at a time, or to return results if one of the expressions exist.
OR-like - if foo or bar matches:
grep -E "foo|bar" file
AND-like - if both foo and bar match:
grep "foo" file | grep "bar"
As you need 2 patterns, I would go for the latter. Still, I think you could improve your question by adding an example of what you expect your code to return, and the exact command you are currently using.
You may also write a grep command that finds the occurrences of foo that do not include bar. Maybe that is easier, depending on what you need.
grep "foo" file | grep -v "bar"

Erlang/OTP - Timing Applications

I am interested in benchmarking different parts of my program for speed. I have tried using info(statistics) and erlang:now().
I need to know the average speed down to the microsecond. I don't know why I am having trouble with a script I wrote.
It should be able to start anywhere and end anywhere. I ran into a problem when I tried starting it on a process that may be running up to four times in parallel.
Is there anyone who already has a solution to this issue?
EDIT:
Willing to give a bounty if someone can provide a script to do it. It needs to work across multiple spawned processes, though. I cannot accept a function like timer:tc, at least in the implementations I have seen: it only traverses one process, and even then some major editing is necessary for a full test of a whole program. Hope I made it clear enough.
Here's how to use eprof, likely the easiest solution for you:
First you need to start it, like most applications out there:
23> eprof:start().
{ok,<0.95.0>}
Eprof supports two profiling modes. You can call it and ask it to profile a certain function, but we can't use that because other processes will mess everything up. We need to start profiling manually and tell it when to stop (this is why you won't get an easy script, by the way).
24> eprof:start_profiling([self()]).
profiling
This tells eprof to profile everything that will be run and spawned from the shell. New processes will be included here. I will run some arbitrary multiprocessing function I have, which spawns about 4 processes communicating with each other for a few seconds:
25> trade_calls:main_ab().
Spawned Carl: <0.99.0>
Spawned Jim: <0.101.0>
<0.100.0>
Jim: asking user <0.99.0> for a trade
Carl: <0.101.0> asked for a trade negotiation
Carl: accepting negotiation
Jim: starting negotiation
... <snip> ...
We can now tell eprof to stop profiling once the function is done running.
26> eprof:stop_profiling().
profiling_stopped
And we want the logs. Eprof will print them to screen by default. You can ask it to also log to a file with eprof:log(File). Then you can tell it to analyze the results. We tell it to collapse the run time from all processes into a single table with the option total (see the manual for more options):
27> eprof:analyze(total).
FUNCTION               CALLS      %   TIME  [uS / CALLS]
--------               -----    ---   ----  [----------]
io:o_request/3            46   0.00      0  [      0.00]
io:columns/0               2   0.00      0  [      0.00]
io:columns/1               2   0.00      0  [      0.00]
io:format/1                4   0.00      0  [      0.00]
io:format/2               46   0.00      0  [      0.00]
io:request/2              48   0.00      0  [      0.00]
...
erlang:atom_to_list/1      5   0.00      0  [      0.00]
io:format/3               46  16.67   1000  [     21.74]
erl_eval:bindings/1        4  16.67   1000  [    250.00]
dict:store_bkt_val/3     400  16.67   1000  [      2.50]
dict:store/3             114  50.00   3000  [     26.32]
And you can see that most of the time (50%) is spent in dict:store/3, 16.67% is taken by outputting the result, and another 16.67% by erl_eval (this is what you get by running short functions in the shell -- parsing them takes longer than running them).
You can then start going from there. That's the basics of profiling run times with Erlang. Handle with care: eprof can be quite a load on a production system or with functions that run for too long.
You can use eprof or fprof.
The normal way to do this is with timer:tc. Here is a good explanation.
I can recommend this tool: https://github.com/virtan/eep
You will get something like this as a result: https://raw.github.com/virtan/eep/master/doc/sshot1.png
Step-by-step instructions for profiling all processes on a running system:
On target system:
1> eep:start_file_tracing("file_name"), timer:sleep(20000), eep:stop_tracing().
$ scp -C $PWD/file_name.trace desktop:
On desktop:
1> eep:convert_tracing("file_name").
$ kcachegrind callgrind.out.file_name
