Here is the proof-of-concept code I had:
let text = "Hello, World"
let data = text.data(using: .utf8)! as CFData
let newData = Data(data as NSData)
print(newData.base64EncodedString())
return true
When I run this on a device running iOS 13 or 14, I get the following output:
SGVsbG8sIFdvcmxk
which is the correct base64 for "Hello, World". However, when I run the exact same code on iOS 12, I get the following:
SAAAAAAAAAAMAAAA
which is quite meaningless as text (UTF-8), but the hex for it is:
48 00 00 00 00 00 00 00 0c 00 00 00
So I'm not sure what is happening here, and even if it were just filling in zeroes, why is it not zeroes all the way through? I get that what I'm doing is wrong, and the following works perfectly on all versions:
print((data as Data).base64EncodedString())
I'm just curious as to why it acts differently across versions. The documentation does not seem to mention any particular behavioral caveat for older versions.
Please stop dealing with CFData and NSData.
In Swift, the recommended API to convert a String to Data is:
let text = "Hello, World"
let data = Data(text.utf8)
print(data.base64EncodedString())
It works reliably in all versions and it avoids the optional.
The problem is the unnecessary CF/NS bridging. It seems that a zero terminator is inserted after the 'H' somewhere along the way.
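If you want to see where the bytes go wrong, here is a small debugging sketch (not part of the fix) that dumps the raw bytes of both conversion paths so they can be compared on an affected OS version:
import Foundation

// Debugging aid: print the raw bytes of a Data value as hex.
func hexDump(_ data: Data) -> String {
    data.map { String(format: "%02x", $0) }.joined(separator: " ")
}

let text = "Hello, World"
let cfData = text.data(using: .utf8)! as CFData

// The problematic double bridge from the question (CFData -> NSData -> Data)
let bridged = Data(cfData as NSData)

// The recommended direct conversion
let direct = Data(text.utf8)

print("bridged:", hexDump(bridged))  // garbled on iOS 12 per the question
print("direct: ", hexDump(direct))   // 48 65 6c 6c 6f 2c 20 57 6f 72 6c 64 on all versions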
Related
I am developing a VPN (iOS Network Extension) and using C/C++ to read the file descriptor directly (instead of Swift). It currently captures the device's outgoing request packets successfully, but I don't know how to parse them; I could not even find out what network layer or protocol the packets are formatted in.
I converted the packets' binary into hex so I could decode them with online tools; below are samples of what I need to parse:
000000024500003B5461000040110C390A07000208080808FA2D0035002739B4DE790100000100000000000003777777056170706C6503636F6D0000010001
000000024500003CBAE200004011A5B60A07000208080808E48A0035002892DAE43B01000001000000000000037777770669636C6F756403636F6D0000010001
00000002450000375324000040110D7A0A07000208080808DD7F003500232BBA841801000001000000000000056170706C6503636F6D0000010001
But when I tried parsing them with online decoders, they fail, saying the packet is invalid.
What network layer or protocol are these?
Note that the above are 3 separate packet samples (not one packet split by me).
It's the tun-layer protocol with a 4-byte prefix:
1. This happens when we use C/C++ to read the file descriptor directly, obtained in NEPacketTunnelProvider like:
let tunFd = self.packetFlow.value(forKeyPath: "socket.fileDescriptor") as! Int32
// ... pass above to C/C++ backend
Instead of using the Swift API like:
self.packetFlow.readPackets { [weak self] (packets: [Data], protocols: [NSNumber]) in
// Handle packets here...
}
2. There are 4 additional bytes prefixed to each tun-layer packet (e.g. 00 00 00 02) every time we read packets.
3. To allow most online tools to understand the packet, remove those leading 4 bytes and instead prefix the packet with a MAC header in hex, like:
01 00 5E 00 00 09 C2 01 17 23 00 00 08 00
Note that by doing the above, we convert the initial tun-layer packet into a tap-layer packet.
Also, remember to put those 4 bytes back in front when writing a packet to the file descriptor (after removing the MAC header); a sketch of this byte juggling follows below.
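A minimal Swift sketch of steps 2 and 3 (just the byte juggling; the MAC header bytes are the ones shown above, and rawPacket is assumed to be a single packet as read from the tun file descriptor):
// Fixed MAC header from step 3: dst 01:00:5E:00:00:09, src C2:01:17:23:00:00, EtherType 0x0800 (IPv4)
let macHeader: [UInt8] = [0x01, 0x00, 0x5E, 0x00, 0x00, 0x09,
                          0xC2, 0x01, 0x17, 0x23, 0x00, 0x00,
                          0x08, 0x00]

// tun -> tap: drop the 4-byte prefix (e.g. 00 00 00 02) and prepend the MAC header
func tunToTap(_ rawPacket: [UInt8]) -> [UInt8] {
    guard rawPacket.count > 4 else { return rawPacket }
    return macHeader + rawPacket.dropFirst(4)
}

// tap -> tun: drop the MAC header and put the 4-byte prefix back before writing to the file descriptor
func tapToTun(_ tapPacket: [UInt8], prefix: [UInt8] = [0x00, 0x00, 0x00, 0x02]) -> [UInt8] {
    guard tapPacket.count > macHeader.count else { return tapPacket }
    return prefix + tapPacket.dropFirst(macHeader.count)
}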
Update 2021: Apple discourages accessing the file descriptor directly (and it may be removed in future iOS releases).
I'm currently using the ACR1255 NFC reader to read an NTAG213 tag with a text NDEF message on it, in an iOS app.
I'm currently issuing 2 read commands of 16 bytes each (because I know the size of my NDEF message), but I find it not very reliable.
While reading the NTAG213 documentation I found the FAST_READ command, but I can't seem to use it.
I'm fairly new to NFC and it's all a bit of a mess for me; I'm not sure whether I'm doing something wrong.
For the moment I'm sending these APDUs to read: "FF B0 00 04 10" and "FF B0 00 04 10", concatenating the results, and I get my NDEF message.
But for FAST_READ I try this: "FF 3A 04 00 00" and I receive "6A 81" (function not supported).
Is there a protocol compatibility issue I didn't see? Or does someone have an idea?
Seems like your end index is incorrect; it should be 4 or more (at least the start address).
Working example.
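For reference, a minimal sketch of a well-formed FAST_READ at the NTAG level (the pseudo-APDU wrapping for the ACR1255 depends on the reader SDK and is not shown; the point is only that the end address must not be below the start address):
// NTAG213 FAST_READ: 0x3A, start page, end page (returns 4 bytes per page, inclusive).
func fastReadCommand(startPage: UInt8, endPage: UInt8) -> [UInt8] {
    precondition(endPage >= startPage, "FAST_READ end address must be >= start address")
    return [0x3A, startPage, endPage]
}

// Example: pages 4 through 11, i.e. 8 pages * 4 bytes = 32 bytes in one shot.
let fastRead = fastReadCommand(startPage: 0x04, endPage: 0x0B)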
Hi, in my iOS app I'm trying to use the ACR1255U to write some data to an NTAG213 card. I found a way to do it with APDU commands. I found a list of APDU commands here; the update-binary command for the ACR1255U is the following:
FF D6 00 P1 Lc DATA (P1 --> starting block to update, Lc --> number of bytes)
So, for example, if I want to write "A003", I generate the following NSString:
FF D6 00 04 04 41 30 30 33
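In code, I build that string roughly like this (a simplified sketch of what I do; the helper name is mine):
import Foundation

// Builds "FF D6 00 P1 Lc DATA" as a hex string; the payload is plain ASCII (e.g. "A003" -> 41 30 30 33).
func updateBinaryCommand(page: UInt8, payload: String) -> String {
    let dataBytes = Array(payload.utf8)
    let bytes: [UInt8] = [0xFF, 0xD6, 0x00, page, UInt8(dataBytes.count)] + dataBytes
    return bytes.map { String(format: "%02X", $0) }.joined(separator: " ")
}

// updateBinaryCommand(page: 0x04, payload: "A003") == "FF D6 00 04 04 41 30 30 33"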
But sometimes when I try to write it to a card it works perfectly, and sometimes it fails, and I've no idea why... The command is correct, and I'm using the SDK included in the demo app that I downloaded from here.
Does anyone have an idea why it sometimes works and sometimes doesn't?
Thank you
It happens both in the demo app and in my app.
We are having issues with the PostScript code generated by the "Canon iR-ADV C5235/5240 PS3" printer driver. We print a test document (3 A4 pages of Lorem Ipsum text) and get the following PostScript output in the Windows spool directory:
http://files.etvdzs.info/original.ps
We have tried various PostScript viewers/converters, and they are unable to handle this file. The reason is that the file contains sections of binary data at the beginning and end, and also the following byte sequence at position 0x1060b:
cd ca 10 02 00 1a 00 01 82 6f ff ff 00 00 00 00 00 00 00 00 01
If we remove these three binary sequences, we get the resulting file, which works fine in most PostScript viewers/converters we have tried:
http://files.etvdzs.info/cleaned.ps
Has anybody else encountered similar issues with Canon printer drivers? Does anybody know what these binary sequences signify, or what format they are in?
The binary sequences are CPCA codes. One can download documentation about the data structures used in CPCA after registering here:
https://www.developersupport.canon.com/user/register
It is then reasonably straightforward to write a program that strips the CPCA codes out of the file. The file can then be loaded successfully by third-party PostScript viewers/converters.
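For what it's worth, here is a crude Swift sketch of that stripping approach (not a real CPCA parser; it assumes the PostScript job proper starts at the first "%!PS" and ends after the last "%%EOF", and that the only embedded CPCA sequence is the 21-byte one quoted in the question):
import Foundation

// The embedded CPCA byte sequence quoted in the question (at offset 0x1060b).
let embeddedCPCA = Data([0xcd, 0xca, 0x10, 0x02, 0x00, 0x1a, 0x00, 0x01,
                         0x82, 0x6f, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00,
                         0x00, 0x00, 0x00, 0x00, 0x01])

func stripCPCA(from spool: Data) -> Data? {
    // Drop the leading and trailing binary by keeping only %!PS ... %%EOF.
    guard let start = spool.range(of: Data("%!PS".utf8))?.lowerBound,
          let end = spool.range(of: Data("%%EOF".utf8), options: .backwards)?.upperBound,
          start < end
    else { return nil }
    var ps = spool.subdata(in: start..<end)
    // Remove any occurrence of the known embedded CPCA sequence.
    while let hit = ps.range(of: embeddedCPCA) {
        ps.removeSubrange(hit)
    }
    return ps
}

// Usage (paths are placeholders):
// let original = try Data(contentsOf: URL(fileURLWithPath: "original.ps"))
// try stripCPCA(from: original)?.write(to: URL(fileURLWithPath: "cleaned.ps"))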
I am using DeDe to create an API (Interface) I can compile to. (Strictly legit: while we wait for the vendor to deliver a D2010 version in two months, we can at least get our app compiling...)
We'll stub out all methods.
DeDe emits constant declarations like these:
LTIMGLISTCLASS =
00: ÿÿÿÿ....LEADIMGL|FF FF FF FF 0D 00 00 00 4C 45 41 44 49 4D 47 4C|
10: IST32. |49 53 54 33 32 00|;
DS_PREFIX =
0: ÿÿÿÿ....DICM.|FF FF FF FF 04 00 00 00 44 49 43 4D 00|;
How would I convert these into a compilable statement?
In theory, I don't care about the actual values, since I doubt they're used anywhere, but I'd like to get their size correct. Are these integers, LongInts, or something else?
Any other hints on using DeDe would be welcome.
Those are strings. The first four bytes are the reference count, which for string literals is always -1 ($ffffffff). The next four bytes are the character count. Then come the characters and a null terminator. (In the dump above, FF FF FF FF is the -1 reference count, 0D 00 00 00 is the little-endian length 13, followed by the 13 characters of 'LEADIMGLIST32' and a trailing 00.)
const
LTIMGLISTCLASS = 'LEADIMGLIST32'; // 13 = $0D characters
DS_PREFIX = 'DICM'; // 4 = $04 characters
You don't have to "doubt" whether those constants are used anywhere. You can confirm it empirically. Compile your project without those constants. If it compiles, then they're not used.
If your project doesn't compile, then those constants must be used somewhere in your code. Based on the context, you can provide your own declarations. If the constant is used like a string, then declare a string; if it's used like an integer, then declare an integer.
Another option is to load your project in a version of Delphi that's compatible with the DCUs you have. Use code completion to make the IDE display the constant and its type.