C# CRC-16-CCITT 0x8408 Polynomial. Help needed - crc16

I am new to communications programming. Basically, I need to get the hex equivalent of a CRC output. I have a hex string which is the parameter:
EE0000000015202020202020202020202020323134373030353935
This is a concatenation of two strings. The output I need is E6EB in hex, or 59115 as a ushort. I tried different approaches based on what I found on the web, but to no avail. The polynomial I should be using is 0x8408, which is CRC-16-CCITT (see http://en.wikipedia.org/wiki/Polynomial_representations_of_cyclic_redundancy_checks).
I tried this approach, CRC_CCITT Kermit 16 in C#, but the output is incorrect. I also tried the bitwise ~ operator, as some suggested for reverse computation, but that failed too.
Any help is very much appreciated.

RevEng reports:
% ./reveng -s -w 16 EE0000000015202020202020202020202020323134373030353935e6eb
width=16 poly=0x1021 init=0xffff refin=true refout=true xorout=0xffff check=0x906e name="X-25"
So there's your CRC. Note that the CRC is reflected, where 0x8408 is 0x1021 reflected.
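If you want to sanity-check those parameters, here is a minimal self-contained C++ sketch of the reflected algorithm (my own illustration, not part of the reveng output); it reproduces the check=0x906e value above for the ASCII string "123456789":

#include <cstdint>
#include <cstdio>
#include <cstring>

// CRC-16/X-25: width=16, poly=0x1021 (0x8408 when reflected), init=0xFFFF,
// refin=true, refout=true, xorout=0xFFFF.
static uint16_t crc16_x25(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;                       // init
    for (size_t i = 0; i < len; ++i)
    {
        crc ^= data[i];                          // reflected input: XOR into the low byte
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0x8408) : (uint16_t)(crc >> 1);
    }
    return (uint16_t)(crc ^ 0xFFFF);             // xorout
}

int main()
{
    const char *check = "123456789";
    // Prints 906e, the check value reveng reports for X-25.
    std::printf("%04x\n", crc16_x25((const uint8_t *)check, std::strlen(check)));
    return 0;
}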

I found a solution and I'll post it here in case someone else runs into the same problem.
private ushort CCITT_CRC16(string strInput)
{
    ushort crc = 0xFFFF;                                // init value
    byte[] bytes = GetBytesFromHexString(strInput);
    for (int j = 0; j < bytes.Length; j++)
    {
        crc = (ushort)(crc ^ bytes[j]);                 // reflected input: XOR into the low byte
        for (int i = 0; i < 8; i++)
        {
            if ((crc & 0x0001) == 1)
                crc = (ushort)((crc >> 1) ^ 0x8408);    // 0x8408 = 0x1021 reflected
            else
                crc >>= 1;
        }
    }
    crc = (ushort)~crc;                                 // final XOR with 0xFFFF
    ushort data = crc;
    // Swap the two bytes so the result is in transmit order (low byte first);
    // for the sample message above this yields 0xE6EB.
    crc = (ushort)((crc << 8) ^ (data >> 8 & 0xFF));
    return crc;
}
private byte[] GetBytesFromHexString(string strInput)
{
    byte[] bytArOutput = new byte[] { };
    if (!string.IsNullOrEmpty(strInput) && strInput.Length % 2 == 0)
    {
        // SoapHexBinary does the hex-string-to-byte-array conversion.
        SoapHexBinary hexBinary = SoapHexBinary.Parse(strInput);
        if (hexBinary != null)
        {
            bytArOutput = hexBinary.Value;
        }
    }
    return bytArOutput;
}
Add using System.Runtime.Remoting.Metadata.W3cXsd2001; for SoapHexBinary. Calling CCITT_CRC16 with the hex string above returns 0xE6EB (59115 as a ushort), as required.

Related

The base64-encoded output from Arduino HMAC-SHA1 does not match Java/Python/online tools

I am working on an Arduino project that requires authorized authentication based on OAuth 1.0 to connect to the cloud. This is similar to "Authorizing a request to the Twitter API", and I am stuck at the step of "Creating a signature". The whole process of creating a signature requires algorithms like URL-encode, base64-encode, and HMAC-SHA1. In my Arduino project I use the Cryptosuite library (link 3) for HMAC-SHA1 and the arduino-base64 library (link 4) for base64 encoding. Both of them work fine separately. However, I need to get a base64-formatted output of the HMAC-SHA1 result, so I tried this:
#include <avr/pgmspace.h>
#include <sha1.h>
#include <Base64.h>

uint8_t *in, out, i;
char b64[29];
static const char PROGMEM b64chars[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
char key[] = "testKey";
char basestring[] = "testing";

void printHash(uint8_t* hash) {
  int i;
  for (i = 0; i < 20; i++) {
    Serial.print("0123456789abcdef"[hash[i] >> 4]);
    Serial.print("0123456789abcdef"[hash[i] & 0xf]);
  }
  Serial.println();
}

void setup() {
  Serial.begin(115200);
  Serial.print("Result:");
  Sha1.initHmac((uint8_t*)key, strlen(key));
  Sha1.print(basestring);
  printHash(Sha1.resultHmac());
  Serial.println();

  // encoding
  char* input;
  input = (char*)(Sha1.resultHmac());
  int inputLen = strlen(input);
  int encodedLen = base64_enc_len(inputLen);
  char encoded[encodedLen];
  // note input is consumed in this step: it will be empty afterwards
  base64_encode(encoded, input, inputLen);
  Serial.print("base64 result: ");
  Serial.println(encoded);
}
void loop() {
}
The output of printHash that I got is 60d41271d43b875b791e2d54c34bf3f018a29763, which is exactly the same as what the online verification tool (link 5) gives.
However, I am supposed to get YNQScdQ7h1t5Hi1Uw0vz8Biil2M= for the base64 result, but I got L18B0HicKRhuxmB6SIFpZP+DpHxU, which seems wrong. I have also written a Java program and a Python program, and both say that the base64 result should be YNQScdQ7h1t5Hi1Uw0vz8Biil2M=.
I also found this post: Issues talking between Arduino SHA1-HMAC and base64 encoding and Python (link 6), and tried the tidy function it mentions from Adafruit-Tweet-Receipt (link 7).
// base64-encode SHA-1 hash output. This is NOT a general-purpose base64
// encoder! It's stripped down for the fixed-length hash -- always 20
// bytes input, always 27 chars output + '='.
for (in = Sha1.resultHmac(), out = 0; ; in += 3) { // octets to sextets
  b64[out++] = in[0] >> 2;
  b64[out++] = ((in[0] & 0x03) << 4) | (in[1] >> 4);
  if (out >= 26) break;
  b64[out++] = ((in[1] & 0x0f) << 2) | (in[2] >> 6);
  b64[out++] = in[2] & 0x3f;
}
b64[out] = (in[1] & 0x0f) << 2;
// Remap sextets to base64 ASCII chars
for (i = 0; i <= out; i++) b64[i] = pgm_read_byte(&b64chars[b64[i]]);
b64[i++] = '=';
b64[i++] = 0;
Is there any mistake I've made here?
Thanks!
The HMAC-SHA1 result is raw binary, not a NUL-terminated string, and it is always 20 bytes long, so that length has to be passed to the base64 encoder explicitly rather than measured with strlen(). So the full example will be:
#include <avr/pgmspace.h>
#include <sha1.h>
#include <Base64.h>

char key[] = "testKey";
char basestring[] = "testing";

void printHash(uint8_t* hash) {
  for (int i = 0; i < 20; i++) {
    Serial.print("0123456789abcdef"[hash[i] >> 4]);
    Serial.print("0123456789abcdef"[hash[i] & 0xf]);
  }
  Serial.println();
}

void setup() {
  Serial.begin(115200);
  Serial.print("Input: ");
  Serial.println(basestring);
  Serial.print("Key: ");
  Serial.println(key);

  Serial.print("Hmac-sha1 (hex): ");
  Sha1.initHmac((uint8_t*)key, strlen(key));
  Sha1.print(basestring);
  uint8_t *hash;
  hash = Sha1.resultHmac();
  printHash(hash);

  // base64 encoding: the digest is raw binary and always 20 bytes,
  // so pass the length explicitly instead of using strlen()
  char* input = (char*) hash;
  int inputLen = 20;
  int encodedLen = base64_enc_len(inputLen);
  char encoded[encodedLen];
  // note input is consumed in this step: it will be empty afterwards
  base64_encode(encoded, input, inputLen);
  Serial.print("Hmac-sha1 (base64): ");
  Serial.println(encoded);
}

void loop() { }
which outputs:
Input: testing
Key: testKey
Hmac-sha1 (hex): 60d41271d43b875b791e2d54c34bf3f018a29763
Hmac-sha1 (base64): YNQScdQ7h1t5Hi1Uw0vz8Biil2M=
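As a cross-check of my own (plain desktop C++, not part of the original Arduino answer): base64-encoding those 20 raw digest bytes directly, with the length passed explicitly, produces the expected string:

#include <cstdint>
#include <cstdio>
#include <string>

// Plain RFC 4648 base64 of a byte buffer, no line wrapping.
static std::string base64(const uint8_t *data, size_t len)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    for (; i + 3 <= len; i += 3)        // full 3-byte groups -> 4 chars
    {
        uint32_t n = ((uint32_t)data[i] << 16) | ((uint32_t)data[i + 1] << 8) | data[i + 2];
        out += tbl[(n >> 18) & 0x3F];
        out += tbl[(n >> 12) & 0x3F];
        out += tbl[(n >> 6) & 0x3F];
        out += tbl[n & 0x3F];
    }
    if (i + 2 == len)                   // 2 trailing bytes -> 3 chars + '='
    {
        uint32_t n = ((uint32_t)data[i] << 16) | ((uint32_t)data[i + 1] << 8);
        out += tbl[(n >> 18) & 0x3F];
        out += tbl[(n >> 12) & 0x3F];
        out += tbl[(n >> 6) & 0x3F];
        out += '=';
    }
    else if (i + 1 == len)              // 1 trailing byte -> 2 chars + "=="
    {
        uint32_t n = (uint32_t)data[i] << 16;
        out += tbl[(n >> 18) & 0x3F];
        out += tbl[(n >> 12) & 0x3F];
        out += "==";
    }
    return out;
}

int main()
{
    // The 20-byte HMAC-SHA1 digest 60d41271d43b875b791e2d54c34bf3f018a29763
    // from the sketch above, written out as raw bytes.
    const uint8_t hash[20] = {
        0x60, 0xd4, 0x12, 0x71, 0xd4, 0x3b, 0x87, 0x5b, 0x79, 0x1e,
        0x2d, 0x54, 0xc3, 0x4b, 0xf3, 0xf0, 0x18, 0xa2, 0x97, 0x63
    };
    // Should print YNQScdQ7h1t5Hi1Uw0vz8Biil2M=
    std::printf("%s\n", base64(hash, sizeof hash).c_str());
    return 0;
}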

dylib or executable export list

I am writing a plugin subsystem, and one idea is to iterate over the functions exported by a dylib (or at least by the current global scope). I know there are other ways; I just really want to give this one a try.
What I am wondering is: is there a way to get a list of the functions exported by a dylib, or available in the global scope, through the OS X and iOS APIs?
Thanks in advance!
You can use the nm command to get this information from a dynamic library; for example, running nm -g on the dylib lists its external (global) symbols. See the system manual page for nm on the Mac for details.
If you are looking to do that from code, you could use a method like this:
#include <mach-o/loader.h>
#include <sys/types.h>
#include <cstdint>
#include <cstdlib>
#include <string>
#include <vector>

// Helper (not shown in the original post) that reads the whole file into a
// malloc'd buffer; a sketch of it follows below.
uint8_t *load_bytes(const char *file_name);

std::vector<std::string> load_mach_o(std::string file_name)
{
    /*
        Parse the Mach-O structure to find all the exported symbols.
        Mach-O structure:
            mach_header_64
            cmd
            ...
            cmd
            data
            ...
            data
    */
    std::vector<std::string> methods;
    off_t offset = sizeof(struct mach_header_64);
    uint8_t *bytes = load_bytes(file_name.c_str());
    if (bytes == NULL)
    {
        return methods;
    }
    struct mach_header_64 *header = (struct mach_header_64 *)bytes;
    // Walk the load commands looking for the symbol table
    struct load_command *cmd = (struct load_command *)(bytes + offset);
    for (uint32_t i = 0U; i < header->ncmds; i++)
    {
        if (cmd->cmd == LC_SYMTAB)
        {
            struct symtab_command *symtab = (struct symtab_command *)cmd;
            off_t string_start = 0;
            const char *strings = (const char *)(bytes + symtab->stroff + 1);
            // The string table is a block of NUL-terminated names; copy each one
            for (uint32_t i = 0; i < symtab->strsize; i++)
            {
                if (strings[i] == '\0')
                {
                    i++;
                    size_t size = sizeof(char) * (i - string_start);
                    if (size == 1)
                    {
                        string_start = i + 1;
                        continue;
                    }
                    methods.push_back(std::string((const char *)(strings + string_start)));
                    string_start = i + 1;
                }
            }
        }
        offset += cmd->cmdsize;
        // load next command
        cmd = (struct load_command *)(bytes + offset);
    }
    free(bytes);
    return methods;
}
This function reads the file, walks the load commands until it reaches the Mach-O string table, then parses each NUL-terminated string and stores it in a vector containing all of the exposed symbols.
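For completeness, here is a rough usage sketch that goes with the function above; the load_bytes helper and the dylib name are my own assumptions and were not part of the original post:

#include <cstdio>
#include <cstdlib>
#include <cstdint>
#include <string>
#include <vector>

// Sketch of the load_bytes helper assumed above: read the entire file into a
// malloc'd buffer (the caller frees it). Error handling is kept minimal.
uint8_t *load_bytes(const char *file_name)
{
    FILE *f = fopen(file_name, "rb");
    if (f == NULL)
        return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    uint8_t *buffer = (uint8_t *)malloc(size);
    if (buffer != NULL && fread(buffer, 1, size, f) != (size_t)size)
    {
        free(buffer);
        buffer = NULL;
    }
    fclose(f);
    return buffer;
}

int main()
{
    // The path is illustrative only; point it at your own 64-bit dylib.
    std::vector<std::string> symbols = load_mach_o("MyPlugin.dylib");
    for (const std::string &name : symbols)
        printf("%s\n", name.c_str());
    return 0;
}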
Best regards.

zError function call in zlib impacting performance

When using zlib 1.2.5 in an iOS project, I've noticed in my profiler (Instruments) that the function zError is being called repeatedly and is occupying 50% of the overall inflate time.
Does anyone know why zError would be getting invoked like this? I don't call it anywhere in my own code, which is a pretty boilerplate inflate function, pasted below:
int UPNExtractorGZInflate(const void *src, int srcLen, void *dst, int dstLen) {
    z_stream strm = {0};
    strm.total_in = strm.avail_in = srcLen;
    strm.total_out = strm.avail_out = dstLen;
    strm.next_in = (Bytef *) src;
    strm.next_out = (Bytef *) dst;
    strm.zalloc = Z_NULL;
    strm.zfree = Z_NULL;
    strm.opaque = Z_NULL;

    int err = -1;
    int ret = -1;
    err = inflateInit2(&strm, (15 + 16)); // 15 window bits, and the +16 tells zlib to decode gzip
    if (err == Z_OK) {
        err = inflate(&strm, Z_FINISH);
        if (err == Z_STREAM_END) {
            ret = strm.total_out;
        }
        else {
            inflateEnd(&strm);
            return err;
        }
    }
    else {
        inflateEnd(&strm);
        return err;
    }
    inflateEnd(&strm);
    return ret;
}
The relevant profiler output (screenshot not reproduced here) showed zError taking 50% of the overall inflate time.
zError isn't called by any zlib function. If you're not calling it, then your profiler is misidentifying the function taking that time.

madvise() function not working

I am trying to use madvise() to mark allocated memory as mergeable, so that identical pages from two applications can be merged.
However, the madvise() call fails with "Invalid argument".
#include <stdio.h>
#include <stdlib.h>
#include <err.h>
#include <sys/mman.h>
#include <errno.h>

#define ADDR 0xf900f000

int main()
{
    int *var1 = NULL, *var2 = NULL;
    size_t size = 0;
    size = 1000 * sizeof(int);
    var1 = (int *)malloc(size);
    var2 = (int *)malloc(size);
    int i = 0;
    for (i = 0; i < 999; i++)
    {
        var1[i] = 1;
    }
    for (i = 0; i < 999; i++)
    {
        var2[i] = 1;
    }
    i = -1;
    while (i < 0)
    {
        i = madvise((void *)var1, size, MADV_MERGEABLE); /* to declare mergeable */
        printf("%d %p\n", i, var1);                      /* to print the output value */
        err(1, NULL);                                    /* to print the generated error */
        i = madvise((void *)var2, size, MADV_MERGEABLE); /* to declare mergeable */
        printf("%d\n", i);
    }
    return 0;
}
Error:
a.out: Invalid argument
Please help me.
Thank You.
You can only merge whole pages. You can't merge arbitrary chunks of data.
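For example (a minimal sketch of my own, assuming a Linux kernel built with CONFIG_KSM; the four-page size is arbitrary), allocate page-aligned memory with mmap and then mark it mergeable:

#include <cstdio>
#include <cstring>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    long page = sysconf(_SC_PAGESIZE);
    size_t size = 4 * (size_t)page;          // whole pages only
    // mmap() returns page-aligned memory; malloc() usually does not, which is
    // why the madvise() calls in the question fail with EINVAL.
    void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }
    memset(buf, 1, size);                    // fill the pages with identical content
    // Register the region with KSM; identical pages from processes that made
    // the same call become candidates for merging.
    if (madvise(buf, size, MADV_MERGEABLE) != 0)
        perror("madvise");
    else
        puts("madvise(MADV_MERGEABLE) succeeded");
    munmap(buf, size);
    return 0;
}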

BlackBerry - Problem with GZip decompression

There is a strange problem I've run into using the RIM compression API: I can't make it work as described in the documentation.
If I gzip a plain text file using a Windows gzip tool, add the .gz file to the resources of the BlackBerry project, and try to decompress it in the app, there is an infinite loop; gzis.read() never returns -1...
try
{
    InputStream inputStream = getClass().getResourceAsStream("test.gz");
    GZIPInputStream gzis = new GZIPInputStream(inputStream);
    StringBuffer sb = new StringBuffer();
    char c;
    while ((c = (char)gzis.read()) != -1)
    {
        sb.append(c);
    }
    String data = sb.toString();
    add(new RichTextField(data));
    gzis.close();
}
catch (IOException ioe)
{
}
After the compressed content there is a repetition of the value 65535 from gzis.read(). The only workaround I've found is the dumb
while ((c = (char)gzis.read()) != -1 && c != 65535)
But I'm curious what the reason is, what I'm doing wrong, and why 65535?
char is an unsigned, 16-bit data type. -1 cast to a char is 65535.
Change to:
int i;
while ((i = gzis.read()) != -1)
{
    sb.append((char)i);
}
And it should work. The example in RIM's API documentation can't possibly work, as no char will ever equal -1.
