Converting a six-bit binary number to its corresponding two-digit BCD number? - digital

Here is the question that I tried hard to solve but couldn't.
I captured the question exactly as it appeared on the question paper. I couldn't solve it in the exam, and none of the other students could either.
You'll probably ask why we didn't ask our lecturer (a fair question, since this site isn't for solving exam questions or homework). We did, but she didn't solve it; she only told us that BCD weights the digits like this:
10^0 10^1 10^2 ...
Any help appreciated. Thanks.

It seems like what she wants is for you to specify the binary BCD output for each input. So, for example, 53 would output as 101 0011 (reading from D6 down to D0): 101 is the tens digit 5, and 0011 is the ones digit 3.

7.17: The 32 * 6 ROM, together with the 2^0 line, as shown in Fig. P7.17, converts a six-bit binary
number to its corresponding two-digit BCD number. For example, binary 100001 converts
to BCD 011 0011 (decimal 33). Specify the truth table for the ROM.
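For what it's worth, here is a minimal Python sketch of how the ROM contents can be enumerated. It is not the textbook's printed answer; it assumes, as the figure description implies, that the 2^0 input bypasses the ROM and directly drives BCD bit D0, so the ROM sees only the upper five input bits as its address:

# Sketch: enumerate the truth table of the 32 * 6 ROM.
# Assumption: the 2^0 line bypasses the ROM and becomes BCD bit D0,
# so the ROM address is the six-bit binary input shifted right by one.
for addr in range(32):                # 5-bit ROM address
    even_input = addr * 2             # the even six-bit input with this address
    tens, ones = divmod(even_input, 10)
    # ROM outputs D6..D1: 3-bit tens digit, then the top 3 bits of the ones digit
    print(f"address {addr:05b} -> D6..D1 = {tens:03b} {ones >> 1:03b}")

For the example input 100001 (decimal 33), the ROM address is 10000, which the loop maps to D6..D1 = 011 001; appending D0 = 1 from the bypassed 2^0 line gives BCD 011 0011, matching the example.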

Related

How to add zeros before number in Lua?

I need to add zeros in front, so the number always stays in the 3-digit range. If the number is 9 then I would need 009; same goes for 2 digits: if the number is 99 then I would need 099. I have no idea how to do it in Lua; if someone knows the answer I would be very grateful. Thanks.
EDIT: I used
string.format("%03d", 99)
and it worked.

Delphi base convert Binary to Decimal

I'm converting binary to decimal and decimal to binary. My problem is the length of the binary string. For example:
Convertx("001110",2,10) = 14
Convertx("14",10,2) = 1110
But the length of the binary is NOT constant, so how can I get the exact original binary with zeros in front of it? How can I get "001110" instead of "1110"?
I'm using this function in Delphi 7 -> How can I convert base of decimal string to another base?
The function you are using returns a string that is the shortest length required to express the value you have converted.
Any zeroes in front of that string are simply padding - they do not alter the binary value represented. If you need a string of a minimum length then you need to add this "padding" yourself. e.g. if you want a binary representation of a "byte" (i.e. 8 binary digits) then the minimum length you would need is 8:
binStr := Convertx('14', 10, 2);
// left-pad with '0' until the string is 8 characters long
while Length(binStr) < 8 do
  binStr := '0' + binStr;
If you need the exact number of zeroes that were included in the "padding" of some original binary value when converting from binary to decimal and then back to "original" binary again, then this is impossible unless you separately record how many padding zeroes there were, or the length of the original string including those zeroes.
i.e. in your example, the ConvertX function has no idea (and no way to figure out) that the number "14" it is asked to convert to binary was originally converted from a 6-digit binary string with 2 leading zeroes, rather than an 8-digit binary with 4 leading zeroes (or a 16-digit binary with 12 leading zeroes, etc.).
What you are hoping for is impossible. Consider
Convertx('001110', 2, 10)
and
Convertx('1110', 2, 10)
These both return the same output, 14. At that point there is no way to recover the length of the original input.
The way forward is therefore clear. You must remember the length of the original binary, as well as the equivalent decimal. However, once you have reached that conclusion then you might wonder whether there is an even simpler approach. Just remember the original binary value and save yourself having to convert back from decimal.
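To illustrate that conclusion in Python (a sketch of the idea, not the Delphi code above): store the original width alongside the converted value, then re-pad on the way back.

# Round-trip binary -> decimal -> binary, preserving leading zeroes
# by remembering the original string width.
def bin_to_dec(binstr):
    return int(binstr, 2), len(binstr)           # value plus original width

def dec_to_bin(value, width):
    return format(value, "0{}b".format(width))   # zero-pad back to that width

value, width = bin_to_dec("001110")              # (14, 6)
print(dec_to_bin(value, width))                  # prints '001110', not '1110'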

What is the difference between binary and binary coded decimal?

I'm studying encoders, which convert decimal to a code such as binary or binary coded decimal. What is binary coded decimal? Is it different from binary?
It's very late, but I saw this so I thought I might answer. Binary coded decimal is a way to write a number digit by digit, with each decimal digit represented by its own group of binary bits. For example:
1111(binary) = 15(decimal)
1111(binary) = 0001 0101(BCD)
So the BCD form of 1111 is two 4-bit numbers, where the first 4-bit number is 1 in decimal and the second is 5, giving us 15. The way to calculate this is through an algorithm called double dabble, sketched below.
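As an illustration, here is a small Python version of double dabble (the shift-and-add-3 algorithm); the digits parameter is my own choice for how many BCD digits to produce, not part of any standard API:

def double_dabble(n, digits=2):
    # Binary-to-BCD via double dabble (shift and add-3).
    bcd = 0
    for i in range(n.bit_length() - 1, -1, -1):
        # Before each shift, add 3 to every BCD digit that is 5 or more
        for d in range(digits):
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        # Shift the next binary bit into the BCD register
        bcd = (bcd << 1) | ((n >> i) & 1)
    return bcd

print(f"{double_dabble(0b1111):08b}")   # 00010101, i.e. BCD 0001 0101 = 15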
(B)inary (C)oded (D)ecimal data usage is found mostly in Assembler programs. On mainframes, it was mostly used to save half a byte, typically to allow an 8-digit date to be stored in a (four-byte) fullword as YYYYMMDD, avoiding a full binary conversion while also keeping the date in a more "eye-friendly" format (i.e. easier to read in a file dump).
IBM Mainframe Assembler provides a special instruction - MVO: (M)o(V)e with (O)ffset - enabling very easy conversion from Packed Decimal (i.e. COMP-3 in COBOL) to BCD format (and vice-versa) without using an algorithm.
Example: Assume a date of 31-DEC-2017 in YYYYMMDD (8-byte) format is to be converted to a 8-digit BCD format field (4-bytes).
(1) Use the PACK instruction to convert the 8-char DATE into 5-bytes PACKED
(2) Use the MVO instruction to move the 8 significant binary decimal digits to the BCD field
[Note the length override "...BCD(5)...": the sign X'F' from PACKED is shifted into the byte after the BCD field]
BCD now contains X'20171231'
SAMPLE CSECT
[...]
(1) PACK PACKED,DATE C'20171231' BECOMES X'020171231F' IN PACKED
(2) MVO BCD(5),PACKED X'020171231F' BECOMES X'20171231' IN BCD
[...]
BCD DS XL4
PACKED DS PL5
DATE DC CL8'20171231'
Likewise, to convert an 8-digit BCD date to an 8-char DATE is a simple sequence of 3 instructions:
(1) Insert sign into rightmost byte of a 5-byte packed decimal field
[think of this as restoring the sign shifted out in step 2 "MVO BCD(5),PACKED" in the first example, above]
(2) Use the MVO instruction to extract the 8 binary decimal digits into the 5-byte packed decimal field
(3) Use UNPK to convert the 5-byte packed decimal field to an 8-char date
DATE now contains C'20171231'
SAMPLE CSECT
[...]
(1) MVI PACKED+(L'PACKED-1),X'0F' INSERT SIGN (PACKED BECOMES X'........0F')
(2) MVO PACKED,BCD X'20171231' BECOMES X'020171231F' IN PACKED
(3) UNPK DATE,PACKED X'020171231F' BECOMES C'20171231' IN DATE
[...]
BCD DC XL4'20171231'
PACKED DS PL5
DATE DS CL8
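For readers without a mainframe handy, here is a rough Python analogue of what the PACK/MVO and MVO/UNPK sequences achieve: two decimal digits packed into each byte, and back again. The function names are mine for illustration, not part of any IBM API.

def date_to_bcd(date8):
    # Pack an 8-digit date string (YYYYMMDD) into 4 BCD bytes,
    # two decimal digits per byte.
    return bytes(int(date8[i]) << 4 | int(date8[i + 1]) for i in range(0, 8, 2))

def bcd_to_date(bcd):
    # Unpack 4 BCD bytes back into the 8-character date string.
    return "".join("{}{}".format(b >> 4, b & 0xF) for b in bcd)

print(date_to_bcd("20171231").hex())            # 20171231 -- the X'20171231' above
print(bcd_to_date(bytes.fromhex("20171231")))   # 20171231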

Why 255 is the limit

I've seen lots of places say:
The maximum number of characters is 255.
where characters are ASCII. Is there a technical reason for that?
EDIT: I know ASCII is represented by 8 bits, so there are 256 different characters. The question is why the maximum NUMBER of characters (with duplicates) is specified as 255.
I assume the limit you're referring to is on the length of a string of ASCII characters.
The limit occurs due to an optimization technique where smaller strings are stored with the first byte holding the length of the string. Since a byte can hold only 256 different values (0 to 255), and the first byte is reserved for storing the length, the maximum string length is 255.
Some older database systems and programming languages therefore had this restriction on their native string types.
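A quick sketch of that layout in Python (the helper is hypothetical, not any particular runtime's implementation):

def encode_length_prefixed(s):
    # Encode s as a "Pascal string": one length byte, then the ASCII bytes.
    data = s.encode("ascii")
    if len(data) > 255:
        raise ValueError("a single length byte can only count up to 255")
    return bytes([len(data)]) + data

print(encode_length_prefixed("hi"))   # b'\x02hi' -- length byte 2, then the text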
Extended ASCII is an 8-bit character set. (Original ASCII is 7-bit, but that's not relevant here.)
8 bit means that 2^8 different characters can be referenced.
2^8 equals 256, and as counting starts with 0, the maximum ASCII char code has the value 255.
Thus, the statement:
The maximum number of characters is 255.
is wrong; it should read:
The maximum number of characters is 256, and the highest possible character code is 255.
To understand better how characters are mapped to the numbers from 0 to 255, see the 8-bit ASCII table.
The limit is 255 because 9 + 36 + 84 + 126 = 255; the 256th character (which is really the first character) is zero.
Using the combination formula C(n,k) = n!/(k!(n-k)!) to find the number of non-repeating combinations for 1, 2, 3, 4, 5, 6, 7, 8 digits, you get this:

# of digits:       1   2   3    4    5   6   7   8
# of combinations: 9  36  84  126  126  84  36   9

It is unnecessary to include 5-8 digits since it's a symmetric group of M. In other words, a 4-element generator is a group operation for an octet, and its group action has 255 permutations.
Interestingly, it only requires 3 digits to "count" to 1000 (after 789 the rest of the numbers are repetitions of previous combinations).
The total number of characters in the ASCII table is 256 (0 to 255). Codes 0 to 31 (32 characters in total) are the ASCII control characters, codes 32 to 127 are the ASCII printable characters, and codes 128 to 255 are the extended ASCII codes.
The ASCII value of a-z = 97-122
The ASCII value of A-Z = 65-90
The ASCII value of 0-9 = 48-57
Is there a technical reason for that?
Yes, there is. The early ASCII encoding standard is 7 bits long, which can represent 2^7 = 128 (0 .. 127) different character codes.
What you are talking about here is a variant of ASCII encoding developed later, which is 8 bits long and can hold 2^8 = 256 (0 .. 255) character codes.
See Wikipedia for more information on the same.

Finding the correct formula for encoded hex value in decimal

I have a case here where I am trying to figure out how a hex number is converted into a decimal number.
I had a similar case before, where I found that if I reversed the hex string and swapped each pair of characters (little-endian byte order), converting it back to a decimal value gave me what I wanted; but this one is different.
Here are the values we received:
Value nr. 1 is
Dec: 1348916578
Hex: 0a66ab46
I just have this one decimal/hex for now but I am trying to get more values to compare results.
I hope any math genius out there will be able to see what formula might have been used here :)
Thanks.
1348916578
= 5 0 6 6 D 5 6 2 hex
= 0101 0000 0110 0110 1101 0101 0110 0010
0a66ab46
= 0 A 6 6 A B 4 6 hex
= 0000 1010 0110 0110 1010 1011 0100 0110
So, if a number is like this, in hex digits:
AB CD EF GH
Then a possible conversion is:
rev(B) rev(A) rev(D) rev(C) rev(F) rev(E) rev(H) rev(G)
where rev reverses the order of bits in the nibble; though I can see that the reversal could be done on a byte-wise basis also.
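Here is a quick Python sketch of that candidate transformation (the function name is mine). Note that swapping the two nibbles of a byte and bit-reversing each nibble is the same as bit-reversing the whole byte, which is the byte-wise variant mentioned above:

def decode(word):
    # Swap the nibbles of each byte and reverse the bit order inside
    # each nibble -- equivalently, bit-reverse every byte.
    def rev4(n):                      # reverse the 4 bits of a nibble
        return int("{:04b}".format(n)[::-1], 2)
    out = 0
    for shift in (24, 16, 8, 0):      # walk the four bytes, high to low
        byte = (word >> shift) & 0xFF
        out = (out << 8) | (rev4(byte & 0xF) << 4) | rev4(byte >> 4)
    return out

print(hex(decode(0x0A66AB46)))        # 0x5066d562 = 1348916578 decimal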
Interesting.... I expanded the decimal and hex into binary, and this is what you get, respectively:
1010000011001101101010101100010
1010011001101010101101000110
Slide the bottom one over by padding with some 0s, then split into 8-bit blocks.
10100000 1100110 11010101 01100010
10100 1100110 10101011 01000110
It seems to start to line up. Let's make the bottom look like the top.
Pad the first block with 0s and it's equal.
The second block is ok.
Switch the 3rd block around (reverse it) and 10101011 becomes 11010101.
10100000 1100110 11010101 01000110
Likewise with the 4th.
10100000 1100110 11010101 01100010
Now they're the same.
10100000 1100110 11010101 01100010
10100000 1100110 11010101 01100010
Will this work for all cases? Impossible to know.
The decimal value of 0x0a66ab46 is 174500678 or 1185637898 (depending on which endianness you use, with 8-, 16- or 32-bit access). There seems to be no direct connection between these values and 1348916578. Maybe you just have the pair wrong? It would help if you posted some code showing how you generate these value pairs.
BTW, Delphi has a fine little method for this: SysUtils.IntToHex
What we found was that our mini USB reader, which gave the 10-digit decimal format, was actually not showing the whole binary code. The hexadecimal reader reads the full binary code, so it is essentially possible to convert from the hexadecimal value to the 10-digit decimal by taking off 9 characters after binary conversion.
But this does not work the other way around (unless we strip 2 characters from the hexadecimal value, the 10-digit decimal code will only show part of the full binary code).
So, case closed.
