CocoaAsyncSocket processing hex values - iOS
I'm using CocoaAsyncSocket to process a fixed-length header message. My Node.js implementation is as follows:
var output = "\xA5\xA5" + jsonStr;
socket.write(output);
I intend to read the header first (\xA5\xA5) and then process the body of the message.
The problem I have is that I've noticed \xA5 takes two bytes on the wire, so the header arrives as 4 bytes on the iOS side, and when I read 2 bytes I receive c2 a5.
CocoaAsync reading code
socket?.readDataToLength(2, withTimeout: -1, tag: 1)
What am I doing wrong here? I would like to keep a fixed-length header on my server side.
Any help is much appreciated.
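For what it's worth, the c2 a5 is not an iOS artifact: \xA5 in a JavaScript string is the code point U+00A5 (¥), and socket.write encodes strings as UTF-8 by default, which turns each header character into the two bytes 0xC2 0xA5. The fix is to write raw bytes from Node (e.g. a Buffer) rather than a string. A short Python sketch of the same effect (illustrative only; the Buffer-based Node fix is an assumption, not code from the question):

```python
# "\xA5" in a JavaScript (or Python) string is the code point U+00A5,
# not the raw byte 0xA5. Encoding the string as UTF-8 -- which is what
# a text-mode socket write does -- yields two bytes per header character.
header_as_text = "\xA5\xA5"
utf8_bytes = header_as_text.encode("utf-8")
print(utf8_bytes.hex())  # c2a5c2a5 -- a 2-byte read returns exactly half of this

# The fix is to send raw bytes, not text. In Python terms:
raw_header = bytes([0xA5, 0xA5])
json_body = '{"k":"v"}'.encode("utf-8")  # hypothetical body for illustration
packet = raw_header + json_body
print(packet[:2].hex())  # a5a5 -- the fixed-length header survives intact
```

In Node the equivalent would be building the packet with a Buffer, e.g. `socket.write(Buffer.concat([Buffer.from([0xA5, 0xA5]), Buffer.from(jsonStr)]))`, so no text encoding is applied.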
iOS SecItemCopyMatching RSA public key format?
I'm trying to extract a 1024-bit RSA public key from an already generated key pair (two SecKeyRefs), in order to send it over the wire. All I need is a plain (modulus, exponent) pair, which should take up exactly 131 bytes (128 for the modulus and 3 for the exponent). However, when I fetch the key info as an NSData object, I get 140 bytes instead of 131. Here's an example result:

<30818902 818100d7 514f320d eacf48e1 eb64d8f9 4d212f77 10dd3b48 ba38c5a6 ed6ba693 35bb97f5 a53163eb b403727b 91c34fc8 cba51239 3ab04f97 dab37736 0377cdc3 417f68eb 9e351239 47c1f98f f4274e05 0d5ce1e9 e2071d1b 69a7cac4 4e258765 6c249077 dba22ae6 fc55f0cf 834f260a 14ac2e9f 070d17aa 1edd8db1 0cd7fd4c c2f0d302 03010001>

After retrying the key generation a couple of times and comparing the resulting NSData objects, the bytes that remain the same across all keys are the first 7:

<30818902 818100>

The last three bytes look like the exponent (65537, a common value). There are also two bytes between the "modulus" and the exponent:

<0203>

Can someone with more crypto experience help me identify what encoding this is? DER? How do I properly decode the modulus and exponent? I tried manually stripping them out using

NSData* modulus = [keyBits subdataWithRange:(NSRange){ 7, 128 }];
NSData* exponent = [keyBits subdataWithRange:(NSRange){ 7 + 128 + 2, 3 }];

but I get errors when trying to decrypt data which the remote host encrypted using that "key".

EDIT: Here's a gist of the solution I ended up using to unpack the RSA blob: https://gist.github.com/vl4dimir/6079882
Assuming you want the solution to work under iOS, please have a look at this thread. The post confirms that the encoding is DER and shows how to extract the exponent and modulus from the NSData object you started with. There is another solution that won't work on iOS, but will work on Desktop systems (including MacOS X) that have OpenSSL installed in this thread. Even if you are looking for the iOS-only solution you can still use this to verify your code is working correctly.
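To make the DER structure concrete, here is a minimal Python sketch (illustrative only; the linked thread has the Objective-C equivalent) that walks the blob from the question as RSAPublicKey ::= SEQUENCE { modulus INTEGER, publicExponent INTEGER }:

```python
# The 140-byte blob from the question, as hex.
hex_blob = """
30818902 818100d7 514f320d eacf48e1 eb64d8f9 4d212f77 10dd3b48 ba38c5a6
ed6ba693 35bb97f5 a53163eb b403727b 91c34fc8 cba51239 3ab04f97 dab37736
0377cdc3 417f68eb 9e351239 47c1f98f f4274e05 0d5ce1e9 e2071d1b 69a7cac4
4e258765 6c249077 dba22ae6 fc55f0cf 834f260a 14ac2e9f 070d17aa 1edd8db1
0cd7fd4c c2f0d302 03010001
"""
der = bytes.fromhex("".join(hex_blob.split()))

def read_length(data, i):
    """Read a DER length field at offset i; return (length, next_offset)."""
    first = data[i]
    i += 1
    if first < 0x80:                 # short form: length fits in 7 bits
        return first, i
    nbytes = first & 0x7F            # long form: next nbytes bytes hold the length
    return int.from_bytes(data[i:i + nbytes], "big"), i + nbytes

assert der[0] == 0x30                # SEQUENCE tag (the 0x30 in "30 81 89")
_, i = read_length(der, 1)
fields = []
for _ in range(2):
    assert der[i] == 0x02            # INTEGER tag (the "02 03" the asker noticed)
    length, i = read_length(der, i + 1)
    fields.append(der[i:i + length])
    i += length

modulus, exponent = fields
if modulus[0] == 0x00:               # leading pad byte keeps the integer positive
    modulus = modulus[1:]
print(len(modulus), int.from_bytes(exponent, "big"))  # 128 65537
```

This also explains the 140 vs 131 mismatch: the SEQUENCE header (3 bytes), the two INTEGER headers (3 + 2 bytes), and the modulus' leading zero pad account for the extra 9 bytes.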
Erlang decode binary data from packet
I receive a UDP packet, like so:

<<83,65,77,80,188,64,171,138,30,120,105,0,0,0,10,0,4,0,0,0,84,101,115,116,15,0,0,0,82,101,122,111,110,101,32,82,111,108,101,80,108,97,121,11,0,0,0,83,97,110,32,65,110,100,114,101,97,115>>

I know that I can discard the first 11 bytes, and that bytes 12-13 contain the number of players online on the server (the field is 2 bytes wide). How can I decode the packet and extract this number?

UPDATE: Maybe I'm sending an incorrect packet... This is a SAMP query. I send:

<<$S,$A,$M,$P,188,64,172,136,7808:16,$i>>

for the server 188.64.172.136:7808, and I get back:

<<83,65,77,80,188,64,172,136,30,128,105,0,0,0,10,0,4,0,0,0,84,101,115,116,15,0,0,0,82,101,122,111,110,101,32,82,111,108,101,80,108,97,121,11,0,0,0,83,97,110,32,65,110,100,114,101,97,115>>
You can use the bit syntax and pattern matching to get the result:

<<_:11/bytes, NumberOfPlayers:16/integer-big, _/binary>> =
    <<83,65,77,80,188,64,171,138,30,120,105,0,0,0,10,0,4,0,0,0,84,101,115,116,
      15,0,0,0,82,101,122,111,110,101,32,82,111,108,101,80,108,97,121,11,0,0,0,
      83,97,110,32,65,110,100,114,101,97,115>>,
NumberOfPlayers.
If your packet binary is stored in P, you can do something like this (assuming big-endian):

<<NumberOfPlayersOnline:16/big>> = binary:part(P, 11, 2).

The result is in NumberOfPlayersOnline.
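For comparison, here is the same extraction as a Python sketch (illustrative only). Both answers above read the field big-endian; the string-length fields later in this very packet (e.g. 4,0,0,0 right before the 4-character hostname "Test") suggest the protocol is actually little-endian, but for this packet both bytes of the player field are zero, so either order gives the same count:

```python
import struct

# The first response packet from the question.
packet = bytes([83,65,77,80,188,64,171,138,30,120,105,0,0,0,10,0,4,0,0,0,
                84,101,115,116,15,0,0,0,82,101,122,111,110,101,32,82,111,108,
                101,80,108,97,121,11,0,0,0,83,97,110,32,65,110,100,114,101,97,115])

# Skip the 11-byte header, then read the 2-byte field, as in the Erlang answers.
(players_be,) = struct.unpack_from(">H", packet, 11)
(players_le,) = struct.unpack_from("<H", packet, 11)
print(players_be, players_le)  # 0 0 -- both orders agree here, the bytes are 0,0

# Evidence for little-endian: the 4-byte length 4,0,0,0 precedes "Test".
(hostname_len,) = struct.unpack_from("<I", packet, 16)
print(hostname_len, packet[20:20 + hostname_len])  # 4 b'Test'
```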
websocket client packet unframe/unmask
I am trying to implement the latest WebSocket spec, but I am unable to get through the unmasking step after a successful handshake. I receive the following WebSocket frame:

<<129,254,1,120,37,93,40,60,25,63,71,88,92,125,80,81,73,
51,91,1,2,53,92,72,85,103,7,19,79,60,74,94,64,47,6,83,
87,58,7,76,87,50,92,83,70,50,68,19,77,41,92,76,71,52,
70,88,2,125,90,85,65,96,15,14,20,107,31,14,28,100,27,9,
17,122,8,72,74,96,15,86,68,37,68,18,76,48,15,28,93,48,
68,6,73,60,70,91,24,122,77,82,2,125,80,81,85,45,18,74,
64,47,91,85,74,51,21,27,20,115,24,27,5,37,69,80,75,46,
18,68,72,45,88,1,2,40,90,82,31,37,69,76,85,103,80,94,
74,46,64,27,5,60,75,87,24,122,25,27,5,47,71,73,81,56,
21,27,93,48,88,76,31,57,77,74,11,55,73,68,73,115,65,81,
31,104,26,14,23,122,8,75,68,52,92,1,2,110,24,27,5,53,
71,80,65,96,15,13,2,125,75,83,75,41,77,82,81,96,15,72,
64,37,92,19,93,48,68,7,5,62,64,93,87,46,77,72,24,40,92,
90,8,101,15,28,83,56,90,1,2,108,6,13,21,122,8,82,64,42,
67,89,92,96,15,93,19,56,28,8,65,101,31,94,16,105,28,10,
20,56,30,14,65,56,27,93,71,106,16,11,17,63,25,4,17,57,
73,89,17,59,29,88,29,106,24,27,5,46,65,72,64,54,77,69,
24,122,66,93,93,49,5,12,8,109,15,28,76,59,90,93,72,56,
76,1,2,41,90,73,64,122,8,89,85,50,75,84,24,122,25,15,
23,105,25,5,19,106,26,14,20,111,25,27,5,53,77,85,66,53,
92,1,2,110,26,13,2,125,95,85,65,41,64,1,2,108,27,10,19,
122,7,2>>

Per the base framing protocol defined at https://datatracker.ietf.org/doc/html/draft-ietf-hybi-thewebsocketprotocol-17#section-5.2 I have: fin:1, rsv:0, opcode:1, mask:1, length:126.

The masked application/payload data comes out to be:

<<87,58,7,76,87,50,92,83,70,50,68,19,77,41,92,76,71,52,70,88,2,125,90,85,65,96,
15,14,20,107,31,14,28,100,27,9,17,122,8,72,74,96,15,86,68,37,68,18,76,48,15,
28,93,48,68,6,73,60,70,91,24,122,77,82,2,125,80,81,85,45,18,74,64,47,91,85,
74,51,21,27,20,115,24,27,5,37,69,80,75,46,18,68,72,45,88,1,2,40,90,82,31,37,
69,76,85,103,80,94,74,46,64,27,5,60,75,87,24,122,25,27,5,47,71,73,81,56,21,
27,93,48,88,76,31,57,77,74,11,55,73,68,73,115,65,81,31,104,26,14,23,122,8,75,
68,52,92,1,2,110,24,27,5,53,71,80,65,96,15,13,2,125,75,83,75,41,77,82,81,96,
15,72,64,37,92,19,93,48,68,7,5,62,64,93,87,46,77,72,24,40,92,90,8,101,15,28,
83,56,90,1,2,108,6,13,21,122,8,82,64,42,67,89,92,96,15,93,19,56,28,8,65,101,
31,94,16,105,28,10,20,56,30,14,65,56,27,93,71,106,16,11,17,63,25,4,17,57,73,
89,17,59,29,88,29,106,24,27,5,46,65,72,64,54,77,69,24,122,66,93,93,49,5,12,8,
109,15,28,76,59,90,93,72,56,76,1,2,41,90,73,64,122,8,89,85,50,75,84,24,122,
25,15,23,105,25,5,19,106,26,14,20,111,25,27,5,53,77,85,66,53,92,1,2,110,26,
13,2,125,95,85,65,41,64,1,2,108,27,10,19,122,7,2>>

while the 32-bit masking key is:

<<37,93,40,60,25,63,71,88,92,125,80,81,73,51,91,1,2,53,92,72,85,103,7,19,79,60,
74,94,64,47,6,83>>

As per the spec:

j = i MOD 4
transformed-octet-i = original-octet-i XOR masking-key-octet-j

However, I don't get back the original octets sent from the client side, which are basically an XML packet. Any direction, corrections, or suggestions are greatly appreciated.
I think you've misread the data framing section of the protocol spec. Your interpretation of the first byte (129) is correct: fin + opcode 1, the final (and first) fragment of a text message.

The next byte (254) implies that the body of the message is masked and that the following 2 bytes provide its length. (Payload lengths of 126 or 127 signal longer messages whose lengths can't be represented in 7 bits: 126 means the following 2 bytes hold the length; 127 means the following 8 bytes do.) The following 2 bytes, 1,120, therefore give a message length of 376 bytes. The following 4 bytes, 37,93,40,60, are your mask (only 4 bytes, not 32). The remaining data is your message, which, transformed as you describe, gives the message

<body xmlns='http://jabber.org/protocol/httpbind' rid='2167299354' to='jaxl.im' xml:lang='en' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh' ack='1' route='xmpp:dev.jaxl.im:5222' wait='30' hold='1' content='text/xml; charset=utf-8' ver='1.1 0' newkey='a6e44d87b54461e62de3ab7874b184dae4f5d870' sitekey='jaxl-0-0' iframed='true' epoch='1324196722121' height='321' width='1366'/>
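The header parsing and unmasking described above can be sketched as follows (Python, illustrative; only the first 14 bytes of the question's frame are used here):

```python
def parse_header(frame: bytes):
    """Parse fin/opcode/length/mask from the start of a client-to-server frame."""
    fin = frame[0] >> 7
    opcode = frame[0] & 0x0F
    masked = frame[1] >> 7
    length = frame[1] & 0x7F
    offset = 2
    if length == 126:                        # next 2 bytes hold the real length
        length = int.from_bytes(frame[2:4], "big")
        offset = 4
    elif length == 127:                      # next 8 bytes hold the real length
        length = int.from_bytes(frame[2:10], "big")
        offset = 10
    mask = frame[offset:offset + 4] if masked else b""
    return fin, opcode, length, mask, offset + len(mask)

def unmask(payload: bytes, mask: bytes) -> bytes:
    # transformed-octet-i = original-octet-i XOR masking-key-octet-(i MOD 4)
    return bytes(b ^ mask[i % 4] for i, b in enumerate(payload))

# First 14 bytes of the frame from the question:
frame = bytes([129, 254, 1, 120, 37, 93, 40, 60, 25, 63, 71, 88, 92, 125])
fin, opcode, length, mask, start = parse_header(frame)
print(fin, opcode, length, start)   # 1 1 376 8
print(unmask(frame[start:], mask))  # b'<body ' -- the XML payload begins to appear
```

XOR masking is its own inverse, so the same `unmask` function works in both directions.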
How to manually set a minimal value for the dynamic buffer in Netty 3.2.6? For example, 2048 bytes
I need to receive a full packet from another IP (a navigation device) over TCP/IP. The device sends, for example, 966 bytes periodically (about once a minute). In my case the first received buffer is 256 bytes long (the first piece of the packet), the second is 710 bytes (the last piece of the packet), and the third is a full packet (966 bytes). How can I manually set a minimal value for the first received buffer's length? This is a piece of my code:

Executor bossExecutors = Executors.newCachedThreadPool();
Executor workerExecutors = Executors.newCachedThreadPool();
NioServerSocketChannelFactory channelsFactory = new NioServerSocketChannelFactory(bossExecutors, workerExecutors);
ServerBootstrap bootstrap = new ServerBootstrap(channelsFactory);
ChannelPipelineFactory pipelineFactory = new NettyServerPipelineFactory(this.HWController);
bootstrap.setPipelineFactory(pipelineFactory);
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setOption("child.receiveBufferSizePredictorFactory",
    new FixedReceiveBufferSizePredictorFactory(2048));
bootstrap.bind(new InetSocketAddress(this.port));
No matter what receiveBufferSizePredictorFactory you specify, a message can still be split into multiple MessageEvents: TCP/IP is a stream-oriented protocol, not a message-oriented one. Please read the user guide, which explains how to write a proper decoder that deals with this common issue.
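In Netty 3 that decoder could simply be the built-in FixedLengthFrameDecoder, since the device always sends 966 bytes. The buffering it does for you can be sketched as follows (Python, illustrative only):

```python
class FixedLengthDecoder:
    """Accumulate stream chunks and emit complete fixed-length frames,
    mirroring what a fixed-length frame decoder does in the pipeline."""

    def __init__(self, frame_length: int):
        self.frame_length = frame_length
        self.buffer = b""

    def feed(self, chunk: bytes):
        """Add a received chunk; return any complete frames now available."""
        self.buffer += chunk
        frames = []
        while len(self.buffer) >= self.frame_length:
            frames.append(self.buffer[:self.frame_length])
            self.buffer = self.buffer[self.frame_length:]
        return frames

decoder = FixedLengthDecoder(966)
print(len(decoder.feed(b"\x00" * 256)))  # 0 -- first piece, frame incomplete
print(len(decoder.feed(b"\x00" * 710)))  # 1 -- 256 + 710 completes one frame
```

The point is that the decoder, not the receive-buffer size, is what turns the arbitrary 256/710-byte reads back into whole 966-byte messages.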
TCP/IP Client / Server commands data
I have a client/server architecture (C# .NET 4.0) that sends command packets of data as byte arrays. There is a variable number of parameters in any command, and each parameter is of variable length, so I use delimiters to mark the end of a parameter and the end of the command as a whole. The operand is always 2 bytes and both types of delimiter are 1 byte. The last parameter_delimiter is redundant, as command_delimiter provides the same functionality. The command structure is as follows:

FIELD                SIZE (BYTES)
operand              2
parameter1           x
parameter_delimiter  1
parameter2           x
parameter_delimiter  1
.............
parameterN           x
command_delimiter    1

Parameters are sourced from many different types (ints, strings, etc.), all encoded into byte arrays. The problem I have is that encoded parameters sometimes contain bytes with the same value as a delimiter. For example, command_delimiter = 255, and a parameter may contain that byte.

There are three ways I can think of to fix this:

1) Encode the parameters differently so that they can never contain a delimiter value (255 or 254). Modulus? This would make parameters larger, e.g. an Int16 would take more than 2 bytes.
2) Do not use delimiters at all; put count and length values at the start of the command structure.
3) Use something else.

To my knowledge, the way TCP/IP buffers work is that SOME SORT of delimiter has to be used to separate 'commands' or 'bundles of data', as a buffer may contain multiple commands, or a command may span multiple buffers. So a BinaryReader/Writer seems like an obvious candidate; the only issue is that a byte array may contain multiple commands (with parameters inside), so it would still have to be chopped up in order to feed into the BinaryReader.

Suggestions? Thanks.
The standard way to do this is to put the length of the message in the (fixed-size) first few bytes of the message. For example, use the first 4 bytes to hold the length, then read that many bytes for the content of the message. The next 4 bytes would be the length of the next message; a length of 0 could indicate the end of the messages. Or you could use a header with a message count.

Also, remember TCP is a byte stream, so don't expect a complete message to be available every time you read data from a socket. You could receive an arbitrary number of bytes at every read.
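A minimal sketch of that length-prefix scheme (Python for illustration; in C# the analogous calls would be BinaryWriter.Write(int) for the prefix followed by the payload bytes):

```python
import struct

def pack_message(payload: bytes) -> bytes:
    # 4-byte big-endian length prefix, then the content -- no delimiters needed.
    return struct.pack(">I", len(payload)) + payload

def unpack_messages(buffer: bytes):
    """Extract complete messages; return (messages, leftover partial bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack_from(">I", buffer, 0)
        if len(buffer) < 4 + length:
            break  # wait for more data from the socket
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer

stream = pack_message(b"cmd1|paramA|paramB") + pack_message(b"cmd2|x")
# Simulate TCP delivering the stream in arbitrary chunks:
msgs, rest = unpack_messages(stream[:10])
print(msgs, len(rest))              # [] 10 -- first message not complete yet
msgs, rest = unpack_messages(stream)
print([m.decode() for m in msgs])   # ['cmd1|paramA|paramB', 'cmd2|x']
```

Because every field inside the payload can carry its own length the same way, no byte value is ever reserved, which removes the delimiter-collision problem entirely.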