I'm trying to exploit a program compiled from a .c file, and it asks for two inputs. I have to use hexadecimal entries to change the return address to whatever I want. How can I pipe input for both entries at the same time? Is that possible?
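A common approach, assuming the program reads both entries from stdin one after the other, is to put both answers on stdin separated by newlines and pipe that in. A minimal sketch, where ./vuln and the payload bytes are made-up placeholders:

# both entries arrive on stdin as if typed one after the other
python3 -c 'import sys; sys.stdout.buffer.write(b"first input\n" + b"\xde\xad\xbe\xef\n")' | ./vuln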
I'm learning Lua, and I want to know the difference between print() and io.write().
print is used for outputting text messages. It joins its arguments with a tab character and automatically appends a newline.
io.write is simpler. While it also accepts any number of arguments, it simply concatenates them (inserting nothing between them) and doesn't add a newline. Think of it as file:write applied to the standard output.
These lines are equivalent:
io.write("abc")
io.write("a", "b", "c")
io.write("a") io.write("b") io.write("c")
I'd recommend print for normal text messages or for debugging, and io.write when you want to print several strings without concatenating them explicitly (which saves memory), write parts of a text separately, or output binary data via strings.
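To make the difference concrete:

print("a", "b")     -- writes "a\tb\n": tab between arguments, newline at the end
io.write("a", "b")  -- writes "ab": nothing between arguments, no newline
io.write("\n")      -- add the newline yourself when you want one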
This short paragraph from "Programming in Lua" explains some differences:
21.1 The Simple I/O Model
Unlike print, write adds no extra characters to the output, such as tabs or newlines. Moreover, write uses the current output file, whereas print always uses the standard output. Finally, print automatically applies tostring to its arguments, so it can also show tables, functions, and nil.
There is also the following recommendation:
As a rule, you should use print for quick-and-dirty programs, or for debugging, and write when you need full control over your output.
Essentially, io.write calls the write method of the current output file, making io.write(x) equivalent to io.output():write(x).
And since print can only write to the standard output, its usage is obviously limited. At the same time, this guarantees that the message always goes to the standard output, so you can't accidentally mess up some file's content, which makes it a better choice for debug output.
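A quick sketch of that difference (the log.txt name is just for illustration):

local f = assert(io.open("log.txt", "w"))
io.output(f)                        -- redirect the current output file
io.write("this goes to log.txt\n")  -- io.write follows the redirection
print("this still goes to stdout")  -- print ignores io.output
io.output(io.stdout)                -- restore the default output
f:close()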
Another difference is in the return value: print returns nothing, while io.write returns the file handle. This allows you to chain writes like this:
io.write('Hello '):write('world\n')
I've built a PCollection in Cloud Dataflow that I will write to disk as-is. I would like to build another collection that references items in the first collection by their index, e.g.
PC1:
strings go here
some other string here
more strings
PC2:
0,1
1,1
0,2
I'm unsure how to get the indices in PC1 without running the whole pipeline and starting another, and even then I'm not sure how to keep track of the line/record number being read. Is it safe to simply use a static variable? I would assume not, given the generally parallel nature of the platform.
PCollections are inherently unordered, so there's no such thing as "the index of an item in a collection". However, you can include the line number in the element itself: make PC1 a PCollection<KV<Integer, String>> where the Integer is the line number; in other words, read lines from a text file paired with their line numbers.
We currently don't provide a built-in source that does this. Your best bet is to write a simple DoFn<String, KV<Integer, String>> that takes the filename as input, uses IOChannelFactory to open the file, reads it line by line, and emits the contents with line numbers to produce PC1.
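A rough sketch of the shape of such a DoFn, reading a local file with plain java.io to keep it short (class and variable names are mine; a real pipeline would open the file through IOChannelFactory as described, and the imports assume the Dataflow 1.x SDK):

import java.io.BufferedReader;
import java.io.FileReader;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.values.KV;

// Receives a filename element and emits each line keyed by its line number.
class ReadLinesWithNumbersFn extends DoFn<String, KV<Integer, String>> {
  @Override
  public void processElement(ProcessContext c) throws Exception {
    try (BufferedReader reader = new BufferedReader(new FileReader(c.element()))) {
      int lineNumber = 0;
      String line;
      while ((line = reader.readLine()) != null) {
        c.output(KV.of(lineNumber, line));
        lineNumber++;
      }
    }
  }
}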
Let's say I have a massive string of just a single character, say x, and I need to use Huffman encoding.
A Huffman code is represented by a full binary tree. So how does one create a Huffman code for just a single character, when we don't need two leaves at all?
jbr's answer is fine; this is just a longer version of it.
Huffman is meant to produce a minimal-length sequence of bits that contains all the information in the original sequence of symbols, assuming that the decoder already knows the set of symbols. If there's only one symbol, the input data contains no information except its length.
In Huffman-based data formats, length is usually encoded separately, not as part of the Huffman-encoded bit sequence itself. The decoder of a single-symbol Huffman code therefore has all the information it needs to reconstruct the input without reading anything from the Huffman-encoded bit sequence. It is logical, then, that the Huffman encoder's output should be 0 bits long.
If you don't have the length encoded separately, then you must have a symbol to represent end-of-sequence so the decoder knows when to stop reading. Then your Huffman tree will have two nodes, and you won't run into this special case.
If you only have one symbol, then you only need 1 bit per symbol. So you really don't have to do anything except count the number of bits and translate each into your symbol.
You could simply add an edge case to your code.
For example:
Check whether there is only one character in your hash table, in which case building the tree returns only the root without any leaves. In this case, you could assign a code such as 0 to this root node in your encoding function.
The decoding function should handle this edge case too.
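A minimal Python sketch of that edge case (function and variable names are mine):

import heapq
from collections import Counter
from itertools import count

def huffman_codes(text):
    freq = Counter(text)
    if len(freq) == 1:
        # Edge case: only one distinct symbol, so the "tree" is just a root.
        # Assign it the arbitrary one-bit code "0" so encoding still works.
        (symbol,) = freq
        return {symbol: "0"}
    # Normal case: repeatedly merge the two least frequent subtrees,
    # prefixing "0"/"1" onto the codes of each side.
    tiebreak = count()  # keeps the heap from ever comparing the dicts
    heap = [(n, next(tiebreak), {s: ""}) for s, n in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)
        n2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (n1 + n2, next(tiebreak), merged))
    return heap[0][2]

print(huffman_codes("xxxxxxxx"))  # {'x': '0'}
print(huffman_codes("aabbc"))     # a prefix-free code such as {'b': '0', 'c': '10', 'a': '11'}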
My requirement is to write binary records to a file. The binary records can be thought of as raw bytes in memory. I need a way to delimit each record, so that I can do something similar to binary search on the file: for example, start in the middle of the file, find the next record delimiter, and start the search from there.
My question is: can an ASCII marker such as "START-RECORD" be used to delimit the binary records?
START-RECORD, data-length, .......binary data...........START-RECORD, data-length, .......binary data...........
When starting from an arbitrary position within the file, I can simply search for the ASCII string "START-RECORD". Is this approach feasible?
Not directly: whether you read the file in binary mode or not, if you insert some string or other pattern as a "delimiter", you'll need to search for its binary representation while reading the file. Note too that the same byte sequence can occur inside the binary payload itself, so a match isn't guaranteed to be a real record boundary.
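A rough Python sketch of that scan, starting from an arbitrary offset (names are mine; note the tail handling for markers split across reads):

MARKER = b"START-RECORD"

def next_record_offset(path, start):
    # Scan forward from `start`; return the offset of the next marker, or -1.
    with open(path, "rb") as f:
        f.seek(start)
        buf = b""
        while True:
            chunk = f.read(64 * 1024)
            if not chunk:
                return -1
            buf += chunk
            pos = buf.find(MARKER)
            if pos != -1:
                return start + pos
            keep = len(MARKER) - 1    # keep a tail in case a marker
            start += len(buf) - keep  # straddles two reads
            buf = buf[-keep:]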
I'm using an OpenVMS system, and I believe it is running HP COBOL.
I have a data file with a lot of records (500 MB or more) of variable length. The records are comma-delimited. I would like to parse each record and extract the corresponding fields for processing. After that, I might want to sort by some particular fields. Is this possible in COBOL?
I've seen sorting with fixed-length records only.
Variable length is no problem. I'm not sure exactly how this is done in VMS COBOL, but the IBMese for this is:
FILE SECTION.
FD THE-FILE RECORD IS VARYING DEPENDING ON REC-LENGTH.
01 THE-RECORD PICTURE X(5000).
WORKING-STORAGE SECTION.
01 REC-LENGTH PICTURE 9(5) COMPUTATIONAL.
When you read the file, REC-LENGTH will contain the record length; when you write a record, it will write a record of length REC-LENGTH.
To handle the delimited record files you will probably need to use the "UNSTRING" verb to convert them into a fixed format. This is pretty verbose (but then, this is COBOL).
UNSTRING record DELIMITED BY ","
INTO field1, field2, field3, field4, field5 etc....
END-UNSTRING
Once the record is in fixed format you can use the SORT as normal.
The Cobol SORT verb will do what you need.
If the SD file contains variable-length records, all of the KEY data items must be contained within the first n character positions of the record, where n equals the minimum record size specified for the file. In other words, they have to be in the fixed part.
However, you can get around this easily by using an input procedure. This lets you create a virtual file that has its keys in the right place: in your input procedure, you reformat your variable-length, comma-delimited record into one that has its keys at the front, then RELEASE it to the sort.
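A rough COBOL-85 sketch of that approach, assuming just two comma-separated fields and made-up file names (your compiler's dialect may differ):

IDENTIFICATION DIVISION.
PROGRAM-ID. SORTDEMO.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT IN-FILE   ASSIGN TO "INPUT.DAT"
        ORGANIZATION IS LINE SEQUENTIAL.
    SELECT OUT-FILE  ASSIGN TO "SORTED.DAT"
        ORGANIZATION IS LINE SEQUENTIAL.
    SELECT SORT-FILE ASSIGN TO "SORTWORK".
DATA DIVISION.
FILE SECTION.
FD  IN-FILE.
01  IN-RECORD         PICTURE X(5000).
FD  OUT-FILE.
01  OUT-RECORD        PICTURE X(120).
SD  SORT-FILE.
01  SORT-RECORD.
    05  SORT-KEY      PICTURE X(20).
    05  SORT-REST     PICTURE X(100).
WORKING-STORAGE SECTION.
01  WS-EOF            PICTURE X VALUE "N".
01  WS-FIELD-1        PICTURE X(20).
01  WS-FIELD-2        PICTURE X(100).
PROCEDURE DIVISION.
MAIN-PARA.
    SORT SORT-FILE
        ON ASCENDING KEY SORT-KEY
        INPUT PROCEDURE IS REFORMAT-INPUT
        GIVING OUT-FILE
    STOP RUN.
*> Reformat each record so the key is at the front, then RELEASE it.
REFORMAT-INPUT SECTION.
    OPEN INPUT IN-FILE
    PERFORM UNTIL WS-EOF = "Y"
        READ IN-FILE
            AT END MOVE "Y" TO WS-EOF
            NOT AT END
                UNSTRING IN-RECORD DELIMITED BY ","
                    INTO WS-FIELD-1, WS-FIELD-2
                END-UNSTRING
                MOVE WS-FIELD-1 TO SORT-KEY
                MOVE WS-FIELD-2 TO SORT-REST
                RELEASE SORT-RECORD
        END-READ
    END-PERFORM
    CLOSE IN-FILE.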
If my memory is correct, VMS has a SORT/MERGE utility that you could use after you have processed the file into a fixed format (variable-length may also be possible). Typically a standalone SORT utility performs better than an in-line COBOL SORT, and it can be a better design if the sort criteria change in the future.
No need to write a solution in COBOL, at least not to sort the file. The UNIX sort utility should do it just fine: call sort -t ',' -n, perhaps with a couple of other options.
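For example, to sort numerically on the third comma-separated field (file names made up):

sort -t ',' -k 3,3n input.dat > sorted.dat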