Not able to show image in iOS

I am getting JSON from the server in iOS, and it returns one of the values as datatype byte[]. I am receiving it like this:
NSData *imageData = [receivedData objectForKey:@"img1"];
If I print this in iOS using NSLog it shows something like the following:
(
"-1", "-40", "-1", "-32", 0, 16, 74, 70, 73, 70, 0, 1, 2, 1, 0, 72,
0, 72, 0, 0, "-1", "-31", 20, "-79", 69, 120, 105, 102, 0, 0, 77, 77,
0, 42, 0, 0, 0, 8, 0, 7, 1, 18, 0, 3, 0, 0, 0, 1,
0, 1, 0, 0, 1, 26, 0, 5, 0, 0, 0, 1, 0, 0, 0, 98,
1, 27, 0, 5, ....
)
As shown, some of the values appear with double quotes; is that the issue? This works fine in the web application. I just need to display this array as an image in iOS, using the byte array coming from the server.
Can anybody point out the issue? I am new to iOS.

Try this.
NSArray *byteArrayReceived = [receivedData objectForKey:@"img1"];
unsigned c = (unsigned)byteArrayReceived.count;
uint8_t *bytes = malloc(sizeof(*bytes) * c);
unsigned i;
for (i = 0; i < c; i++)
{
    NSString *str = [byteArrayReceived objectAtIndex:i];
    int byte = [str intValue];
    bytes[i] = (uint8_t)byte;
}
NSData *data = [NSData dataWithBytes:bytes length:c];
free(bytes); // dataWithBytes: copies the buffer, so it can be freed here
UIImage *image = [UIImage imageWithData:data];
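If the deserialized array contains NSNumber objects instead of strings, the same intValue message still works, since NSNumber responds to it too. A slightly more compact sketch of the same idea using NSMutableData, so there is no manual malloc to manage:
NSArray *byteArrayReceived = [receivedData objectForKey:@"img1"];
NSMutableData *imageData = [NSMutableData dataWithCapacity:byteArrayReceived.count];
for (id value in byteArrayReceived) {
    uint8_t byte = (uint8_t)[value intValue]; // -1 wraps to 0xFF, the first JPEG marker byte
    [imageData appendBytes:&byte length:1];
}
UIImage *image = [UIImage imageWithData:imageData];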

Add this code:
NSArray *array = [receivedData objectForKey:@"img1"];
unsigned c = (unsigned)array.count;
uint8_t *bytes = malloc(sizeof(*bytes) * c);
unsigned i;
for (i = 0; i < c; i++)
{
    NSString *str = [array objectAtIndex:i];
    int byte = [str intValue];
    bytes[i] = (uint8_t)byte;
}
NSData *imageData = [NSData dataWithBytesNoCopy:bytes length:c freeWhenDone:YES]; // your imageData
UIImage *img = [UIImage imageWithData:imageData]; // your image (the original passed an undeclared "data" here)
[yourImageView setImage:img]; // set to imageView

This assumes the server sends the image as a Base64-encoded string; you first need to decode that string back into raw image data.
Make a category on NSData and add this Base64 decoding code (base64DecodeLookup and xx below are defined in the full NSData+Base64 category this snippet comes from: the lookup table maps ASCII characters to 6-bit values, and xx marks invalid characters):
#define BINARY_UNIT_SIZE 3
#define BASE64_UNIT_SIZE 4
void *Base64Decode(
    const char *inputBuffer,
    size_t length,
    size_t *outputLength)
{
    if (length == -1) // -1 means treat inputBuffer as NUL-terminated
    {
        length = strlen(inputBuffer);
    }
    size_t outputBufferSize = ((length + BASE64_UNIT_SIZE - 1) / BASE64_UNIT_SIZE) * BINARY_UNIT_SIZE;
    unsigned char *outputBuffer = (unsigned char *)malloc(outputBufferSize);
    size_t i = 0;
    size_t j = 0;
    while (i < length)
    {
        // Accumulate up to 4 valid Base64 characters, skipping anything else
        unsigned char accumulated[BASE64_UNIT_SIZE];
        size_t accumulateIndex = 0;
        while (i < length)
        {
            unsigned char decode = base64DecodeLookup[inputBuffer[i++]];
            if (decode != xx)
            {
                accumulated[accumulateIndex] = decode;
                accumulateIndex++;
                if (accumulateIndex == BASE64_UNIT_SIZE)
                {
                    break;
                }
            }
        }
        // Repack the accumulated 6-bit values into 8-bit output bytes
        if (accumulateIndex >= 2)
            outputBuffer[j] = (accumulated[0] << 2) | (accumulated[1] >> 4);
        if (accumulateIndex >= 3)
            outputBuffer[j + 1] = (accumulated[1] << 4) | (accumulated[2] >> 2);
        if (accumulateIndex >= 4)
            outputBuffer[j + 2] = (accumulated[2] << 6) | accumulated[3];
        j += accumulateIndex - 1;
    }
    if (outputLength)
    {
        *outputLength = j;
    }
    return outputBuffer;
}
// Decode a Base64 string into NSData
+ (NSData *)dataFromBase64String:(NSString *)aString
{
    NSData *data = [aString dataUsingEncoding:NSASCIIStringEncoding];
    size_t outputLength;
    void *outputBuffer = Base64Decode([data bytes], [data length], &outputLength);
    NSData *result = [NSData dataWithBytes:outputBuffer length:outputLength];
    free(outputBuffer);
    return result;
}
// Now decode the value and convert the NSData to an image
NSData *data = [NSData dataFromBase64String:[receivedData objectForKey:@"img1"]];
UIImage *image = [UIImage imageWithData:data];
NSLog(@"Image --- %@", image);
Try this. Hopefully it helps you. Thanks.
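On iOS 7 and later the category above is unnecessary, because Foundation decodes Base64 directly. Assuming the server really does send "img1" as a Base64 string, a shorter version:
NSString *base64String = [receivedData objectForKey:@"img1"];
NSData *imageData = [[NSData alloc] initWithBase64EncodedString:base64String
                                                        options:NSDataBase64DecodingIgnoreUnknownCharacters];
UIImage *image = [UIImage imageWithData:imageData];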

Related

Convert YUV data to CVPixelBufferRef and play in AVSampleBufferDisplayLayer

I have a stream of video in IYUV (4:2:0) format and I am trying to convert it into a CVPixelBufferRef, then into a CMSampleBufferRef, and play it in an AVSampleBufferDisplayLayer (required for AVPictureInPictureController). I've tried several versions of the solution, but none actually works well; I hope someone with video processing experience can tell me what I've done wrong here.
Full function:
- (CMSampleBufferRef)makeSampleBufferFromTexturesWithY:(void *)yPtr U:(void *)uPtr V:(void *)vPtr yStride:(int)yStride uStride:(int)uStride vStride:(int)vStride width:(int)width height:(int)height doMirror:(BOOL)doMirror doMirrorVertical:(BOOL)doMirrorVertical
{
    NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey: @{}}; // For 1,2,3
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result;
    result = CVPixelBufferCreate(kCFAllocatorDefault,
                                 width,
                                 height,
                                 kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, // For 1,2,3
                                 // kCVPixelFormatType_32BGRA, // For 4.
                                 (__bridge CFDictionaryRef)(pixelAttributes),
                                 &pixelBuffer);
    if (result != kCVReturnSuccess) {
        NSLog(@"PIP: Unable to create cvpixelbuffer %d", result);
        return nil;
    }
    /// Converter code below...
    CMFormatDescriptionRef formatDesc;
    result = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &formatDesc);
    if (result != kCVReturnSuccess) {
        NSAssert(NO, @"PIP: Failed to create CMFormatDescription: %d", result);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        return nil;
    }
    CMTime now = CMTimeMakeWithSeconds(CACurrentMediaTime(), 1000);
    CMSampleTimingInfo timingInfo;
    timingInfo.duration = CMTimeMakeWithSeconds(1, 1000);
    timingInfo.presentationTimeStamp = now;
    timingInfo.decodeTimeStamp = now;
    @try {
        if (@available(iOS 13.0, *)) {
            CMSampleBufferRef sampleBuffer;
            CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault, pixelBuffer, formatDesc, &timingInfo, &sampleBuffer);
            // CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            CFRelease(formatDesc); // the sample buffer retains it; not releasing leaks the format description
            CVPixelBufferRelease(pixelBuffer);
            pixelBuffer = nil;
            // free(dest.data);
            // free(uvPlane);
            return sampleBuffer;
        } else {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            return nil;
        }
    } @catch (NSException *exception) {
        NSAssert(NO, @"PIP: Failed to create CVSampleBuffer: %@", exception);
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        return nil;
    }
}
Here are some solutions that I found:
1. Combine U and V, but the bottom half is green.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPtr, width * height);
size_t uPlaneSize = width * height / 4;
size_t vPlaneSize = width * height / 4;
size_t numberOfElementsForChroma = uPlaneSize + vPlaneSize;
// for simplicity and speed create a combined UV plane to hold the pixels
uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
memcpy(uvPlane, uPtr, uPlaneSize);
memcpy(uvPlane + uPlaneSize, vPtr, vPlaneSize); // the original cast of this offset to uint8_t truncated it
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
2. Interleave U and V, but the image is still distorted.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i ++) {
for (int j = 0; j < width; j ++) {
yDestPlane[k++] = ((unsigned char *)yPtr)[j + i * yStride];
}
}
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int row = 0, index = 0; row < height / 2; row++) {
for (int col = 0; col < width / 2; col++) {
uvDestPlane[index++] = ((unsigned char *)uPtr)[col + row * uStride];
uvDestPlane[index++] = ((unsigned char *)vPtr)[col + row * vStride];
}
}
3. Somewhat similar to 1.
int yPixels = yStride * height;
int uPixels = uStride * height/2;
int vPixels = vStride * height/2;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPtr, yPixels);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane , uPtr, uPixels);
memcpy(uvDestPlane + uPixels, vPtr, vPixels);
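Attempts 1-3 all assume the destination planes are tightly packed, but CVPixelBuffer planes usually carry row padding, so CVPixelBufferGetBytesPerRowOfPlane can differ from the width. A stride-aware, row-by-row copy (a sketch reusing the variable names above) may fix the green band and the distortion:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t yDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int row = 0; row < height; row++) {
    // copy one luma row, honoring both source and destination strides
    memcpy(yDest + row * yDestStride, (uint8_t *)yPtr + row * yStride, width);
}
uint8_t *uvDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t uvDestStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int row = 0; row < height / 2; row++) {
    uint8_t *dstRow = uvDest + row * uvDestStride;
    const uint8_t *uRow = (const uint8_t *)uPtr + row * uStride;
    const uint8_t *vRow = (const uint8_t *)vPtr + row * vStride;
    for (int col = 0; col < width / 2; col++) {
        dstRow[col * 2] = uRow[col];     // Cb
        dstRow[col * 2 + 1] = vRow[col]; // Cr
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);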
4. Use Accelerate to convert YUV to BGRA and then convert to a CVPixelBuffer; no error, but no video is rendered.
vImage_Buffer srcYp = {
.width = width,
.height = height,
.rowBytes = yStride,
.data = yPtr,
};
vImage_Buffer srcCb = {
.width = width / 2,
.height = height / 2,
.rowBytes = uStride,
.data = uPtr,
};
vImage_Buffer srcCr = {
.width = width / 2,
.height = height / 2,
.rowBytes = vStride,
.data = vPtr,
};
vImage_Buffer dest;
dest.data = NULL;
dest.width = width;
dest.height = height;
vImage_Error error = kvImageNoError;
error = vImageBuffer_Init(&dest, height, width, 32, kvImagePrintDiagnosticsToConsole);
// vImage_YpCbCrPixelRange pixelRange = (vImage_YpCbCrPixelRange){ 0, 128, 255, 255, 255, 1, 255, 0 };
vImage_YpCbCrPixelRange pixelRange = { 16, 128, 235, 240, 255, 0, 255, 0 };
vImage_YpCbCrToARGB info;
error = kvImageNoError;
error = vImageConvert_YpCbCrToARGB_GenerateConversion(kvImage_YpCbCrToARGBMatrix_ITU_R_601_4,
&pixelRange,
&info,
kvImage420Yp8_Cb8_Cr8,
kvImageARGB8888,
kvImagePrintDiagnosticsToConsole);
error = kvImageNoError;
uint8_t permuteMap[4] = {3, 2, 1, 0}; // BGRA (iOS only supports BGRA)
error = vImageConvert_420Yp8_Cb8_Cr8ToARGB8888(&srcYp,
&srcCb,
&srcCr,
&dest,
&info,
permuteMap, // must not be NULL on iOS (on Mac it can be NULL); iOS only supports BGRA
255,
kvImagePrintDiagnosticsToConsole);
if (error != kvImageNoError) {
NSAssert(NO, @"PIP: vImageConvert error %ld", error);
return nil;
}
// vImageBuffer_CopyToCVPixelBuffer will give out error destFormat bitsPerComponent = 0 is not supported
// vImage_CGImageFormat format = {
// .bitsPerComponent = 8,
// .bitsPerPixel = 32,
// .bitmapInfo = (CGBitmapInfo)kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
// .colorSpace = CGColorSpaceCreateDeviceRGB()
// };
// vImageCVImageFormatRef vformat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer);
//
// error = vImageBuffer_CopyToCVPixelBuffer(&dest, &format, pixelBuffer, vformat, 0, kvImagePrintDiagnosticsToConsole);
result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
width,
height,
kCVPixelFormatType_32BGRA,
dest.data,
dest.rowBytes,
NULL,
NULL,
(__bridge CFDictionaryRef)pixelAttributes,
&pixelBuffer);
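A possible reason attempt 4 renders nothing: CVPixelBufferCreateWithBytes wraps an existing pointer and does not produce an IOSurface-backed buffer, and AVSampleBufferDisplayLayer generally needs IOSurface backing. A hedged sketch: instead of the CVPixelBufferCreateWithBytes call above, copy the converted BGRA rows into the buffer created earlier with CVPixelBufferCreate (the kCVPixelFormatType_32BGRA variant, keeping the IOSurface properties):
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *bgraDest = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bgraStride = CVPixelBufferGetBytesPerRow(pixelBuffer);
for (int row = 0; row < height; row++) {
    // vImageBuffer_Init may pad dest.rowBytes, so copy row by row
    memcpy(bgraDest + row * bgraStride,
           (uint8_t *)dest.data + row * dest.rowBytes,
           width * 4);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
free(dest.data); // allocated by vImageBuffer_Init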
In the end I had to resort to a third-party library, OGVKit, to make it work with some minor tweaks. Its decoder, in the method (void)updatePixelBuffer420:pixelBuffer, works with very fast decoding times for YUV420 data.

How to convert a BGR type bitmap into UIImage

- (UIImage *)setImage:(UIImage *)img {
    CGSize size = img.size;
    int imageW = size.width;
    int imageH = size.height;
    unsigned char *cImage = [self convertUIImageToBitmapBGR:img];
    unsigned char *poutBGRImage = (unsigned char *)malloc(imageW * imageH * 3);
    if (!self.handle) {
        NSLog(@"init handle fail");
    }
    cv_result_t result = cv_imagesdk_dynamic_imagetone_picture(self.handle, cImage, CV_PIX_FMT_BGR888, imageW, imageH, imageW * 3, poutBGRImage, CV_PIX_FMT_BGR888, imageW, imageH, imageW * 3, 1.0, 1.0, 1.0);
    free(cImage);
    if (result == CV_OK) {
        UIImage *image = [UIImage imageWithData:[NSData dataWithBytes:poutBGRImage length:imageW * imageH * 3]];
        free(poutBGRImage);
        return image;
    } else {
        free(poutBGRImage);
        return [[UIImage alloc] init];
    }
}
I can convert the UIImage into a BGR bitmap successfully using the convertUIImageToBitmapBGR: function, and after running cv_imagesdk_dynamic_imagetone_picture I get poutBGRImage as an unsigned char buffer. I use [UIImage imageWithData:] now, but it doesn't work. Is there another function to turn the unsigned char data into a UIImage?
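[UIImage imageWithData:] only understands encoded image containers such as JPEG or PNG, not raw pixel bytes, so it returns nil here. A sketch of one way to wrap tightly packed 24-bit BGR output in a UIImage via CoreGraphics (UIImageFromBGR is a hypothetical helper name; the bytes are expanded to 32-bit BGRX first, because CoreGraphics has no 24-bit BGR layout):
static UIImage *UIImageFromBGR(const unsigned char *bgr, int width, int height) {
    size_t pixelCount = (size_t)width * height;
    unsigned char *bgrx = malloc(pixelCount * 4);
    if (!bgrx) return nil;
    for (size_t i = 0; i < pixelCount; i++) {
        bgrx[i * 4 + 0] = bgr[i * 3 + 0]; // B
        bgrx[i * 4 + 1] = bgr[i * 3 + 1]; // G
        bgrx[i * 4 + 2] = bgr[i * 3 + 2]; // R
        bgrx[i * 4 + 3] = 0;              // padding byte, alpha is skipped
    }
    // kCGBitmapByteOrder32Little + kCGImageAlphaNoneSkipFirst reads memory as B,G,R,x
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(bgrx, width, height, 8, width * 4, space,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
    CGImageRef cgImage = ctx ? CGBitmapContextCreateImage(ctx) : NULL;
    UIImage *image = cgImage ? [UIImage imageWithCGImage:cgImage] : nil;
    if (cgImage) CGImageRelease(cgImage);
    if (ctx) CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(bgrx);
    return image;
}
With this, the CV_OK branch above could return UIImageFromBGR(poutBGRImage, imageW, imageH) instead of going through [UIImage imageWithData:].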

How to capture the screen and save to BMP image file on iOS?

I want to capture the whole screen on iOS and save it to a BMP (using private APIs). I get the IOSurfaceRef via IOMobileFramebufferConnection first, then look for a way to save the surface bytes to a BMP file.
I tried two methods. Method screenshot0: takes the bytes from screenSurface directly and saves them to a BMP, but produces a fuzzy, dislocated image. Method screenshot1: uses IOSurfaceAcceleratorTransferSurface to transfer the surface bytes to a new IOSurfaceRef and saves that to a BMP file; the result is clear but mirrored and turned.
I want to know: why can't I use the bytes from the original IOSurfaceRef directly? Are the bytes in the IOSurfaceRef mirrored? How can I get the right BMP screenshot?
Thank you!
screenshot0: method image: [screenshot not shown]
screenshot1: method image: [screenshot not shown]
- (NSString *)getBmpSavePath:(NSString *)savePath
{
    NSString *path = savePath; // originally initialized to nil, which returned nil for paths already ending in .bmp
    if (![[[savePath pathExtension] lowercaseString] isEqualToString:@"bmp"]) {
        path = [savePath stringByDeletingPathExtension];
        path = [path stringByAppendingPathExtension:@"bmp"];
    }
    return path;
}
- (IOSurfaceRef)getScreenSurface
{
IOSurfaceRef screenSurface = NULL;
io_service_t framebufferService = NULL;
IOMobileFramebufferConnection framebufferConnection = NULL;
framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleH1CLCD"));
if(!framebufferService)
framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleM2CLCD"));
if(!framebufferService)
framebufferService = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("AppleCLCD"));
if (framebufferService) {
kern_return_t result;
result = IOMobileFramebufferOpen(framebufferService, mach_task_self(), 0, &framebufferConnection);
if (result == KERN_SUCCESS) {
IOMobileFramebufferGetLayerDefaultSurface(framebufferConnection, 0, &screenSurface);
}
}
return screenSurface;
}
- (void)screenshot0:(NSString *)savePath
{
IOSurfaceRef screenSurface = [self getScreenSurface];
if (screenSurface) {
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, NULL);
size_t width = IOSurfaceGetWidth(screenSurface);
size_t height = IOSurfaceGetHeight(screenSurface);
void *bytes = IOSurfaceGetBaseAddress(screenSurface);
NSString *path = [self getBmpSavePath:savePath];
bmp_write(bytes, width, height, [path UTF8String]);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, NULL);
}
}
- (void)screenshot1:(NSString *)savePath
{
IOSurfaceRef screenSurface = [self getScreenSurface];
if (screenSurface) {
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, NULL);
size_t width = IOSurfaceGetWidth(screenSurface);
size_t height = IOSurfaceGetHeight(screenSurface);
size_t bytesPerElement = IOSurfaceGetBytesPerElement(screenSurface);
OSType pixelFormat = IOSurfaceGetPixelFormat(screenSurface);
size_t bytesPerRow = bytesPerElement * width;
size_t allocSize = bytesPerRow * height;
//============== Why should I do this step? Why can't I use IOSurfaceGetBaseAddress directly from screenSurface like in method screenshot0:???
NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kIOSurfaceIsGlobal,
[NSNumber numberWithUnsignedLong:bytesPerElement], kIOSurfaceBytesPerElement,
[NSNumber numberWithUnsignedLong:bytesPerRow], kIOSurfaceBytesPerRow,
[NSNumber numberWithUnsignedLong:width], kIOSurfaceWidth,
[NSNumber numberWithUnsignedLong:height], kIOSurfaceHeight,
[NSNumber numberWithUnsignedInt:pixelFormat], kIOSurfacePixelFormat,
[NSNumber numberWithUnsignedLong:allocSize], kIOSurfaceAllocSize,
nil];
IOSurfaceRef destSurf = IOSurfaceCreate((__bridge CFDictionaryRef)(properties));
IOSurfaceAcceleratorRef outAcc;
IOSurfaceAcceleratorCreate(NULL, 0, &outAcc);
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, NULL);
IOSurfaceAcceleratorTransferSurface(outAcc, screenSurface, destSurf, (__bridge CFDictionaryRef)(properties), NULL);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, NULL);
CFRelease(outAcc);
//==============
void *bytes = IOSurfaceGetBaseAddress(destSurf);
NSString *path = [self getBmpSavePath:savePath];
bmp_write(bytes, width, height, [path UTF8String]);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, NULL);
}
}
int bmp_write(const void *image, size_t xsize, size_t ysize, const char *filename)
{
unsigned char header[54] = {
0x42, 0x4d, 0, 0, 0, 0, 0, 0, 0, 0,
54, 0, 0, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 32, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0
};
long file_size = (long)xsize * (long)ysize * 4 + 54;
header[2] = (unsigned char)(file_size &0x000000ff);
header[3] = (file_size >> 8) & 0x000000ff;
header[4] = (file_size >> 16) & 0x000000ff;
header[5] = (file_size >> 24) & 0x000000ff;
long width = xsize;
header[18] = width & 0x000000ff;
header[19] = (width >> 8) &0x000000ff;
header[20] = (width >> 16) &0x000000ff;
header[21] = (width >> 24) &0x000000ff;
long height = ysize;
header[22] = height &0x000000ff;
header[23] = (height >> 8) &0x000000ff;
header[24] = (height >> 16) &0x000000ff;
header[25] = (height >> 24) &0x000000ff;
char fname_bmp[128];
sprintf(fname_bmp, "%s", filename);
FILE *fp;
if (!(fp = fopen(fname_bmp, "wb")))
return -1;
fwrite(header, sizeof(unsigned char), 54, fp);
fwrite(image, sizeof(unsigned char), (size_t)(long)xsize * ysize * 4, fp);
fclose(fp);
return 0;
}
CGDisplayCreateImage(CGMainDisplayID())? I don't know if it works on iOS, but it works on macOS. Why are you using CGDisplayStream?
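A hedged guess at both symptoms, based on the code above: screenshot0: hands bmp_write rows it assumes are tightly packed at width * 4 bytes, but IOSurfaceGetBytesPerRow often reports a padded stride, which would explain the fuzzy, dislocated pixels; and bmp_write stores a positive biHeight, which BMP readers interpret as bottom-up row order, which would explain the vertical flip. A sketch (reusing screenSurface and savePath from screenshot0:) that repacks the rows and reverses their order before writing:
IOSurfaceLock(screenSurface, kIOSurfaceLockReadOnly, NULL);
size_t width = IOSurfaceGetWidth(screenSurface);
size_t height = IOSurfaceGetHeight(screenSurface);
size_t srcStride = IOSurfaceGetBytesPerRow(screenSurface); // may exceed width * 4
const uint8_t *src = IOSurfaceGetBaseAddress(screenSurface);
uint8_t *packed = malloc(width * height * 4);
for (size_t row = 0; row < height; row++) {
    // BMP rows are stored bottom-up, so write the last screen row first
    memcpy(packed + (height - 1 - row) * width * 4,
           src + row * srcStride,
           width * 4);
}
bmp_write(packed, width, height, [[self getBmpSavePath:savePath] UTF8String]);
free(packed);
IOSurfaceUnlock(screenSurface, kIOSurfaceLockReadOnly, NULL);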

NSData find sequence of bytes

I need to find a sequence of bytes in my image data. I have the following code in Java, but I need to do the same in Objective-C.
Java:
private static int searchInBuffer(byte[] pBuf, int iBufferLen) {
    for (int i = 0; i < iBufferLen - 7; i++) {
        if (pBuf[i] == 'l' && pBuf[i + 1] == 'i' && pBuf[i + 2] == 'n' && pBuf[i + 3] == 'k')
            return (int)pBuf[i + 4];
    }
    return -1;
}
public static int checkFlagInJpeg(String pFullFileName) {
    int iRes = -1;
    try {
        File f = new File(pFullFileName);
        FileInputStream is = new FileInputStream(f);
        int iBufferSize = 6 * 1024, iCount = 15;
        byte buf[] = new byte[iBufferSize];
        while ((is.available() > 0) && (iCount >= 0)) {
            int iRead = is.read(buf),
                iFlag = searchInBuffer(buf, iRead);
            if (iFlag > 0) {
                iRes = iFlag;
                break;
            }
            iCount--;
        }
        is.close();
    } catch (Exception e) {
        // exception handling and the return were cut off in the original snippet
    }
    return iRes;
}
Obj-C (my version):
UIImage *image = [UIImage imageWithCGImage:[[[self.assets objectAtIndex:indexPath.row] defaultRepresentation] fullScreenImage]];
NSData *imageData = UIImageJPEGRepresentation(image, 1.0f);
NSUInteger length = MIN(6 * 1024, [imageData length]);
Byte *buffer = (Byte *)malloc(length);
memcpy(buffer, [imageData bytes], length);
for (int i = 0; i + 4 < length; i++) { // bound fixed so buffer[i + 1]...buffer[i + 4] stay in range
    if (buffer[i] == 'l' && buffer[i + 1] == 'i' && buffer[i + 2] == 'n' && buffer[i + 3] == 'k')
        NSLog(@"%c", buffer[i + 4]);
}
free(buffer);
I'm still not sure that I understand all aspects of working with bytes, so I need some help.
UPDATE:
The problem was in getting the image data. With Martin R.'s help I combined the two solutions into one and got the following working code:
ALAssetRepresentation *repr = [[self.assets objectAtIndex:indexPath.row] defaultRepresentation];
NSUInteger size = (NSUInteger) repr.size;
NSMutableData *data = [NSMutableData dataWithLength:size];
NSError *error;
[repr getBytes:data.mutableBytes fromOffset:0 length:size error:&error];
NSData *pattern = [@"link" dataUsingEncoding:NSUTF8StringEncoding];
NSRange range = [data rangeOfData:pattern options:0 range:NSMakeRange(0, data.length)];
int iRes = -1;
if (range.location != NSNotFound) {
uint8_t flag;
[data getBytes:&flag range:NSMakeRange(range.location + range.length, 1)];
iRes = flag;
}
NSLog(@"%i", iRes);
It works perfectly! Thank you again!
NSData has a method rangeOfData:... which you can use to find the pattern:
NSData *pattern = [@"link" dataUsingEncoding:NSUTF8StringEncoding];
NSRange range = [imageData rangeOfData:pattern options:0 range:NSMakeRange(0, imageData.length)];
If the pattern was found, get the next byte:
int iRes = -1;
if (range.location != NSNotFound && range.location + range.length < imageData.length) {
uint8_t flag;
[imageData getBytes:&flag range:NSMakeRange(range.location + range.length, 1)];
iRes = flag;
}

UIImagePickerController Selected Image as Base64 String

I am trying to retrieve a Base64 string from the iOS UIImagePickerController selection event as follows.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
//on selected
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *imgData = UIImageJPEGRepresentation(image, 1.0);
NSString *imageString = [[NSString alloc] initWithBytes: [imgData bytes] length:[imgData length] encoding:NSUTF8StringEncoding];
//NSLog(@"Image Data: %@", imageString); it returns nothing except "Image Data: "
[picker dismissModalViewControllerAnimated:YES];
[picker.view removeFromSuperview];
[picker release];
}
+ (NSString*)base64forData:(NSData*)theData {
const uint8_t* input = (const uint8_t*)[theData bytes];
NSInteger length = [theData length];
static char table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
NSMutableData* data = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
uint8_t* output = (uint8_t*)data.mutableBytes;
NSInteger i;
for (i=0; i < length; i += 3) {
NSInteger value = 0;
NSInteger j;
for (j = i; j < (i + 3); j++) {
value <<= 8;
if (j < length) {
value |= (0xFF & input[j]);
}
}
NSInteger theIndex = (i / 3) * 4;
output[theIndex + 0] = table[(value >> 18) & 0x3F];
output[theIndex + 1] = table[(value >> 12) & 0x3F];
output[theIndex + 2] = (i + 1) < length ? table[(value >> 6) & 0x3F] : '=';
output[theIndex + 3] = (i + 2) < length ? table[(value >> 0) & 0x3F] : '=';
}
return [[[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding] autorelease];
}
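Note that initWithBytes:length:encoding: with NSUTF8StringEncoding fails for JPEG data, because arbitrary binary bytes are not valid UTF-8; that is why nothing useful prints after "Image Data: ". Call the base64forData: helper above on imgData instead, or, on iOS 7 and later, use Foundation's built-in encoder:
NSString *imageString = [imgData base64EncodedStringWithOptions:0];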
