I was reading through some code examples and came across & on Oracle's Bitwise and Bit Shift Operators page. In my opinion it didn't do a very good job of explaining the bitwise &. I understand that it performs an operation directly on the bits, but I am not sure what kind of operation that is. Here is a sample program from Oracle's website: http://docs.oracle.com/javase/tutorial/displayCode.html?code=http://docs.oracle.com/javase/tutorial/java/nutsandbolts/examples/BitDemo.java
An integer is represented as a sequence of bits in memory. For interaction with humans, the computer has to display it as decimal digits, but all the calculations are carried out as binary. 123 in decimal is stored as 1111011 in memory.
The & operator is a bitwise "And". The result is the bits that are turned on in both numbers. 1001 & 1100 = 1000, since only the leftmost bit is turned on in both.
The | operator is a bitwise "Or". The result is the bits that are turned on in either of the numbers. 1001 | 1100 = 1101, since only the second bit from the right is zero in both.
There are also the ^ and ~ operators, that are bitwise "Xor" and bitwise "Not", respectively. Finally there are the <<, >> and >>> shift operators.
Under the hood, 123 is stored as either 01111011 00000000 00000000 00000000 or 00000000 00000000 00000000 01111011 depending on the system. Using the bitwise operators, which representation is used does not matter, since both representations are treated as the logical number 00000000000000000000000001111011. Stripping away leading zeros leaves 1111011.
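If it helps to experiment, here is a minimal Java sketch of these operators (the values 9 and 12 are my own, chosen to match the 1001 and 1100 examples above):

public class BitwiseDemo {
    public static void main(String[] args) {
        int a = 0b1001;  // 9
        int b = 0b1100;  // 12
        System.out.println(Integer.toBinaryString(a & b));   // 1000   (bits on in both)
        System.out.println(Integer.toBinaryString(a | b));   // 1101   (bits on in either)
        System.out.println(Integer.toBinaryString(a ^ b));   // 101    (bits that differ)
        System.out.println(Integer.toBinaryString(~a));      // 32-bit complement of 1001
        System.out.println(Integer.toBinaryString(a << 2));  // 100100 (shifted left by 2)
        System.out.println(Integer.toBinaryString(a >> 2));  // 10     (shifted right by 2)
    }
}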
It's a bitwise AND operator. It performs the AND operation from Boolean logic, which is commonly applied to binary numbers in computing.
For example:
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
You can also perform this on multiple-bit numbers:
01 & 00 = 00
11 & 00 = 00
11 & 01 = 01
1111 & 0101 = 0101
11111111 & 01101101 = 01101101
...
If you look at two numbers represented in binary, a bitwise & creates a third number that has a 1 in each place that both numbers have a 1. (Everywhere else there are zeros).
Example:
0b10011011 &
0b10100010 =
0b10000010
Note that ones only appear in a place when both arguments have a one in that place.
Bitwise ands are useful when each bit of a number stores a specific piece of information.
You can also use them to delete/extract certain sections of numbers by using masks.
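For example, a common masking pattern (a small Java sketch with made-up values, not taken from the question) is to pull individual fields out of a packed integer:

int color = 0xAABBCC;              // hypothetical packed 24-bit RGB value
int blue  = color & 0xFF;          // keep only the low 8 bits         -> 0xCC
int green = (color >> 8) & 0xFF;   // shift the field down, then mask  -> 0xBB
int red   = (color >> 16) & 0xFF;  // shift the field down, then mask  -> 0xAA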
If you expand the two variables according to their hex code, these are:
bitmask : 0000 0000 0000 1111
val: 0010 0010 0010 0010
Now, a simple bitwise AND operation results in the number 0000 0000 0000 0010, which in decimal is 2. I'm assuming you already know about the fundamental Boolean operations and number systems.
It's a logical operation on the input values. To understand it, convert the values to binary; wherever both operands have a 1 in position n, the result has a 1 there. At the end, convert back.
For example with those example values:
0x2222 = 10001000100010
0x000F = 00000000001111
result = 00000000000010 => 0x0002 or just 2
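The same example in Java, if you want to run it:

int val = 0x2222, mask = 0x000F;
System.out.println(Integer.toBinaryString(val));         // 10001000100010
System.out.println(Integer.toBinaryString(val & mask));  // 10
System.out.println(val & mask);                          // 2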
Knowing how bitwise AND works is not enough; an important part of learning is applying what we have learned. Here is a use case for bitwise AND.
Example:
ANDing any even number with 1 gives zero, because every even number has its last bit (the least significant, rightmost bit) set to 0, and 1 has a 1 only in that position.
Suppose you are asked to write a function that takes a number and returns true if it is even, without using addition, multiplication, division, subtraction, or modulo, and without converting the number to a string.
This is a perfect use case for bitwise AND, as explained above. Want to see the code? Here is the Java code.
/**
 * <p>Helper function.</p>
 * @param number the number to test
 * @return 0 for even, otherwise 1
 */
private int isEven(int number) {
    return (number & 1);
}
It does the logical AND digit by digit, so for example 4 & 1 becomes
100 & 001 = 000 = 0
n & 1 is used for checking even numbers, since if a number is even the operation will always give 0.
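A quick usage sketch (my own test values) showing that n & 1 is 0 exactly for the even numbers:

for (int n : new int[] {0, 1, 2, 7, 10, -4}) {
    System.out.println(n + " -> " + (n & 1));  // prints 0 for even n, 1 for odd n
}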
import java.io.*;
import java.util.*;

public class Test {
    public static void main(String[] args) {
        int rmv, rmv1;
        // R.M.VIVEK's complete bitwise demo program for Java
        Scanner vivek = new Scanner(System.in);
        System.out.println("ENTER THE X value");
        rmv = vivek.nextInt();
        System.out.println("ENTER THE y value");
        rmv1 = vivek.nextInt();
        System.out.printf("AND table based\t(&) rmv=%d, vivek=%d => %d%n", rmv, rmv1, rmv & rmv1);  // 1&1=1, 1&0=0
        System.out.printf("OR table based\t(|) rmv=%d, vivek=%d => %d%n", rmv, rmv1, rmv | rmv1);   // 1|0=1, 0|0=0
        System.out.printf("XOR table based\t(^) rmv=%d, vivek=%d => %d%n", rmv, rmv1, rmv ^ rmv1);
        System.out.printf("LEFT SHIFT based to %d<<4=%d%n", rmv, rmv << 4);
        System.out.printf("RIGHT SHIFT based to %d>>2=%d%n", rmv, rmv >> 2);
        for (int v = 1; v <= 10; v++)
            System.out.printf("LEFT SHIFT (NEGATIVE VALUE) -1<<%d=%d%n", 1 + v, -1 << (1 + v));
    }
}
Starting with these frequencies:
A:7 F:6 H:1 M:2 N:4 U:5
at a later step I have 5 6 7 7, where one of the 7's is the "A". Which 7 branch I pick to be a 0 or a 1 is arbitrary.
So how do I get uniquely decodable code words?
You need to send the code to the receiver, not the frequencies. You can arbitrarily assign 0's and 1's to all of the branches, and then send the codes for each symbol before the coded symbols themselves. There are many possible Huffman codes from the same set of frequencies.
More commonly only the code lengths in bits for each symbol are sent. In this case those are A:2 F:2 H:4 M:4 N:3 U:2. Then a canonical code is used on both ends that depends only on the lengths. In this case, starting with 0's, the canonical code would be:
A: 00
F: 01
U: 10
N: 110
H: 1110
M: 1111
where codes of equal length are assigned to the symbols in lexicographical order. Note that the Huffman tree that was built is not needed. All that is needed is the number of bits for each symbol.
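To make that concrete, here is a small Java sketch (my own code, not from any particular library) that rebuilds the canonical code from just the bit lengths:

import java.util.*;

public class CanonicalHuffman {

    // Build canonical codes from symbol -> code-length pairs.
    // Assumes the lengths come from a valid Huffman tree (they satisfy Kraft's inequality).
    static Map<Character, String> canonicalCodes(Map<Character, Integer> lengths) {
        // Sort symbols by length, then lexicographically, so both ends agree on the order.
        List<Character> symbols = new ArrayList<>(lengths.keySet());
        symbols.sort(Comparator.comparingInt((Character c) -> lengths.get(c))
                               .thenComparing(Comparator.naturalOrder()));

        Map<Character, String> codes = new LinkedHashMap<>();
        int code = 0, prevLen = 0;
        for (char sym : symbols) {
            int len = lengths.get(sym);
            code <<= (len - prevLen);               // append zeros when the code length grows
            String bits = Integer.toBinaryString(code);
            while (bits.length() < len) bits = "0" + bits;
            codes.put(sym, bits);
            code++;                                 // next code of the same length
            prevLen = len;
        }
        return codes;
    }

    public static void main(String[] args) {
        Map<Character, Integer> lengths = new HashMap<>();
        lengths.put('A', 2); lengths.put('F', 2); lengths.put('U', 2);
        lengths.put('N', 3); lengths.put('H', 4); lengths.put('M', 4);
        System.out.println(canonicalCodes(lengths));  // {A=00, F=01, U=10, N=110, H=1110, M=1111}
    }
}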
I am trying to decode a bitstring into a decimal value. For example, I have bitstrings like
<<96,64,112,153,9:4>>. I want to convert them to decimal values by taking four bits per digit: 96 (01100000) --> 60 (the first four bits give 6, the next four give 0), 64 --> 40, and so on. The output would be 604070999. The final 9:4 means that value is only 4 bits wide.
Can anyone help with writing this function in Erlang?
If you have a binary rather than a bitstring (i.e., without the trailing 9:4 part), you can apply a hex conversion to each byte within a binary comprehension, then convert the resulting binary to an integer:
1> Bin = <<96,64,112,153>>.
<<96,64,112,153>>
2> binary_to_integer(<< <<(integer_to_binary(B,16))/binary>> || <<B:8>> <= Bin >>).
60407099
The same also works for your bitstring, taking 4 bits at a time instead of 8 in the comprehension:
3> Bits = <<96,64,112,153,9:4>>.
<<96,64,112,153,9:4>>
4> binary_to_integer(<< <<(integer_to_binary(B,16))/binary>> || <<B:4>> <= Bits >>).
604070999
But as @Hynek-Pichi-Vychodil points out in the comments, for the bitstring you don't need the integer_to_binary/2 call at all; instead you can convert each 4-bit digit to its corresponding character by adding $0, the literal for the character 0:
5> binary_to_integer(<< <<($0+B)>> || <<B:4>> <= Bits >>).
604070999
Hi, I am trying to understand whether it is possible to take instruction opcodes and 'poke' them into memory, or somehow convert them into a binary program. I have found an abandoned Lisp project here: http://common-lisp.net/viewvc/cl-x86-asm/cl-x86-asm/ which takes x86 asm instructions and converts them into opcodes (please see the example below). The project does not go as far as actually creating the binary executable, so I would need to do that 'manually'. Any ideas would help. Thanks.
;; assemble some code in it
(cl-x86-asm::assemble-forms
'((.Entry :PUSH :EAX)
(:SUB :EAX #XFFFEA)
(:MOV :EAX :EBX)
(:POP :EAX)
(:PUSH :EAX)
(.Exit :RET)))
Processing...
;; print the assembled segment
(cl-x86-asm::print-segment)
* Segment type DATA-SEGMENT
Segment size 0000000C bytes
50 81 05 00 0F FF EA 89
03 58 50 C3
Clozure Common Lisp for example has this built-in. This is usually called LAP, Lisp Assembly Program.
See defx86lapfunction.
Example:
(defx86lapfunction fast-mod ((number arg_y) (divisor arg_z))
(xorq (% imm1) (% imm1))
(mov (% number) (% imm0))
(div (% divisor))
(mov (% imm1) (% arg_z))
(single-value-return))
SBCL can do something similar with VOPs (Virtual Operations).
http://g000001.cddddr.org/2011-12-08
I learned that it can be done using CFFI/FFI. For example, the very simple asm code:
(:movl 12 :eax)
(:ret)
This will be converted to the following sequence of octets: #(184 12 0 0 0 195), which in hex is #(B8 C 0 0 0 C3). The next step is to write it to a location in memory as such:
(defparameter pointer (cffi:foreign-alloc :unsigned-char :initial-contents #(184 12 0 0 0 195)))
;; and then execute it as such to return the integer 12:
(cffi:foreign-funcall-pointer pointer () :int)
=> result: 12
Thanks to the experts in #lisp (freenode irc channel) for helping out with this solution.
Sorry to ask such a simple question but these things are hard to Google.
I have iOS code connected to a toggle that switches between Celsius and Fahrenheit, and I don't know what ^ 1 means. self.celsius is a Boolean.
Thanks
self.celsius = self.celsius ^ 1;
It's a C-language operator meaning "Bitwise Exclusive OR".
Wikipedia gives a good explanation:
XOR
A bitwise XOR takes two bit patterns of equal length and performs the
logical exclusive OR operation on each pair of corresponding bits. The
result in each position is 1 if only the first bit is 1 or only the
second bit is 1, but will be 0 if both are 0 or both are 1. In this we
perform the comparison of two bits, being 1 if the two bits are
different, and 0 if they are the same. For example:
0101 (decimal 5)
XOR 0011 (decimal 3)
= 0110 (decimal 6)
The bitwise XOR may be used to invert selected bits in a register
(also called toggle or flip). Any bit may be toggled by XORing it with
1. For example, given the bit pattern 0010 (decimal 2) the second and fourth bits may be toggled by a bitwise XOR with a bit pattern
containing 1 in the second and fourth positions:
0010 (decimal 2)
XOR 1010 (decimal 10)
= 1000 (decimal 8)
It's the bitwise XOR operator (see http://www.techotopia.com/index.php/Objective-C_Operators_and_Expressions#Bitwise_XOR).
What it's doing in this case is switching back and forth, because 0 ^ 1 is 1, and 1 ^ 1 is 0.
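The toggling behaviour is easy to see in a couple of lines (a Java sketch with a made-up flag; the Objective-C expression behaves the same way for a BOOL stored as 0 or 1):

int celsius = 0;                  // 0 = Fahrenheit, 1 = Celsius (hypothetical flag)
for (int i = 0; i < 4; i++) {
    celsius = celsius ^ 1;        // XOR with 1 flips the lowest bit
    System.out.println(celsius);  // prints 1, 0, 1, 0
}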
It's an exclusive OR operation.
I am looking for a checksum algorithm where for a large block of data the checksum is equal to the sum of checksums from all the smaller component blocks. Most of what I have found is from RFCs 1624/1141 which do provide this functionality. Does anyone have any experience with these checksumming techniques or a similar one?
If it's just a matter of quickly combining the checksums of the smaller blocks to get to the checksums of the larger message (not necessarily by a plain summation) you can do this with a CRC-type (or similar) algorithm.
The CRC-32 algorithm is as simple as this:
/* Feed one message bit into the CRC state; 0x04C11DB7 is the CRC-32 generator polynomial. */
uint32_t update(uint32_t state, unsigned bit)
{
    if (((state >> 31) ^ bit) & 1) state = (state << 1) ^ 0x04C11DB7;
    else state = (state << 1);
    return state;
}
Mathematically, the state represents a polynomial over the field GF(2) that is always reduced modulo the generator polynomial. Given a new bit b, the old state is transformed into the new state like this:
state --> (state * x^1 + b * x^32) mod G
where G is the generator polynomial and addition is done in GF(2) (xor). This checksum is linear in the sense that you can write the message M as a sum (xor) of messages A, B, C, ... like this:
10110010 00000000 00000000 = A = a        00000000 00000000
00000000 10010001 00000000 = B = 00000000 b        00000000
00000000 00000000 11000101 = C = 00000000 00000000 c
-----------------------------------------------------------
10110010 10010001 11000101 = M = a        b        c
with the following properties
M = A + B + C
checksum(M) = checksum(A) + checksum(B) + checksum(C)
Again, I mean the + in GF(2), which you can implement with a binary XOR.
Finally, it's possible to compute checksum(B) based on checksum(b) and the position of the subblock b relative to B. The simple part is leading zeros. Leading zeros don't affect the checksum at all. So checksum(0000xxxx) is the same as checksum(xxxx). If you want to compute the checksum of a zero-padded (to the right -> trailing zeros) message given the checksum of the non-padded message it is a bit more complicated. But not that complicated:
zero_pad(old_check_sum, number_of_zeros)
:= ( old_check_sum * x^{number_of_zeros} ) mod G
= ( old_check_sum * (x^{number_of_zeros} mod G) ) mod G
So, getting the checksum of a zero-padded message is just a matter of multiplying the "checksum polynomial" of the non-padded message with some other polynomial (x^{number_of_zeros} mod G) that only depends on the number of zeros you want to add. You could precompute this in a table or use the square-and-multiply algorithm to quickly compute this power.
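As a rough illustration of both properties, here is a Java sketch under the assumptions above (initial state 0, no final XOR, bit-at-a-time updates); the helper names are mine, not a library API:

public class CrcCombineSketch {
    static final int POLY = 0x04C11DB7;

    // Same bit-at-a-time update as the C snippet above.
    static int update(int state, int bit) {
        if ((((state >>> 31) ^ bit) & 1) != 0) return (state << 1) ^ POLY;
        return state << 1;
    }

    // CRC of a byte array, starting from a given state (0 here, no final XOR).
    static int checksum(int state, byte[] data) {
        for (byte b : data)
            for (int i = 7; i >= 0; i--)
                state = update(state, (b >> i) & 1);
        return state;
    }

    // Appending n zero bits multiplies the checksum polynomial by x^n mod G,
    // which with this formulation is just n update() steps with a 0 bit.
    static int zeroPad(int crc, int zeroBits) {
        for (int i = 0; i < zeroBits; i++) crc = update(crc, 0);
        return crc;
    }

    public static void main(String[] args) {
        byte[] a = {(byte) 0b10110010, 0, 0};                   // A = a 00000000 00000000
        byte[] b = {0, (byte) 0b10010001, 0};                   // B = 00000000 b 00000000
        byte[] c = {0, 0, (byte) 0b11000101};                   // C = 00000000 00000000 c
        byte[] m = {(byte) 0b10110010, (byte) 0b10010001, (byte) 0b11000101};  // M = A + B + C
        int crcM = checksum(0, m);
        System.out.println(crcM == (checksum(0, a) ^ checksum(0, b) ^ checksum(0, c)));  // true: linearity

        byte[] justA = {(byte) 0b10110010};
        // CRC of "a" padded with 16 zero bits equals the CRC of A = "a 00000000 00000000".
        System.out.println(zeroPad(checksum(0, justA), 16) == checksum(0, a));  // true
    }
}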
Suggested reading: Painless Guide to CRC Error Detection Algorithms
I have only used Adler/Fletcher checksums which work as you describe.
There is a nice comparison of crypto++ hash/checksum implementations here.
To answer Amigable Clark Kent's bounty question, for file identity purposes you probably want a cryptographic hash function, which tries to guarantee that any two given files have an extremely low probability of producing the same value, as opposed to a checksum which is generally used for error detection only and may provide the same value for two very different files.
Many cryptographic hash functions, such as MD5 and SHA-1, use the Merkle–Damgård construction, in which there is a computation to compress a block of data into a fixed size, and then combine that with a fixed size value from the previous block (or an initialization vector for the first block). Thus, they are able to work in a streaming mode, incrementally computing as they go along.