How to add/use KiwiViewer in an iOS app to view a .ply file (3D model), or any other way - iOS

I am building an iOS app in Xcode that at some point has to display a 3D model to the user. After reading and searching, I found something called KiwiViewer. The 3D model is a .ply file, so I would like help with adding KiwiViewer to Xcode or using it inside the app, or with any other way to view 3D models in an iPhone app.
I searched and watched a YouTube video that had no sound, and did not get the method from it. Please help.
I am using iOS 5.1.
Thank you.

KiwiViewer does not support PLY files, so the above answer is not right. You can use VTP files in order to view models in 3D via KiwiViewer, so try to convert your file to VTP and use the same function mentioned above:
std::string dataset = [[[NSBundle mainBundle] pathForResource:@"skull" ofType:@"vtp"] UTF8String];
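If you have VTK available on the desktop, the conversion itself is a small C++ program; here is a minimal sketch, assuming skull.ply as an illustrative input file name (ParaView can also do the same conversion through File > Save Data):

#include <vtkSmartPointer.h>
#include <vtkPLYReader.h>
#include <vtkXMLPolyDataWriter.h>

int main()
{
    // Read the PLY model
    vtkSmartPointer<vtkPLYReader> reader = vtkSmartPointer<vtkPLYReader>::New();
    reader->SetFileName("skull.ply");

    // Write the same geometry back out as VTP (VTK's XML PolyData format)
    vtkSmartPointer<vtkXMLPolyDataWriter> writer = vtkSmartPointer<vtkXMLPolyDataWriter>::New();
    writer->SetInputConnection(reader->GetOutputPort());
    writer->SetFileName("skull.vtp");
    writer->Write();
    return 0;
}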

You can use this code in KiwiSimple (a lite version of KiwiViewer) to load PLY files:
- (void)initializeKiwiApp
{
    [EAGLContext setCurrentContext:self.context];

    // Path to the PLY model bundled with the app
    std::string dataset = [[[NSBundle mainBundle] pathForResource:@"skull" ofType:@"ply"] UTF8String];

    // Create the viewer app and initialize OpenGL ES
    self->mKiwiApp = vesKiwiViewerApp::Ptr(new vesKiwiViewerApp);
    self->mKiwiApp->initGL();
    [self resizeView];

    // Load the model and reset the camera to frame it
    self->mKiwiApp->loadDataset(dataset);
    self->mKiwiApp->resetView();
}

Related

How to record live streaming video?

I'm using the VideoCore library for streaming live video, which is working perfectly.
My requirement is to record that live-streamed video and store it in the documents directory, so can anyone tell me how I can do that?
How can I record live streaming video?
I checked the library, and it looks like the only way to get a record function is to create a custom output.
There is a Split class which allows pushing a buffer to multiple outputs. So you need to create a new IOutput implementation with a file-saving function and add it to the flow using that Split class, as sketched below.
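Here is a minimal sketch of such an output, assuming VideoCore's IOutput interface and its pushBuffer method; the FileOutput class name and the raw write-to-disk approach are illustrative assumptions, not part of the library (a real recorder would mux into MP4 instead):

#include <videocore/transforms/IOutput.hpp>
#include <cstdio>
#include <string>

// Hypothetical output that appends every buffer it receives to a file.
class FileOutput : public videocore::IOutput
{
public:
    FileOutput(const std::string& path) { m_file = fopen(path.c_str(), "wb"); }
    ~FileOutput() { if (m_file) fclose(m_file); }

    void pushBuffer(const uint8_t* const data, size_t size, videocore::IMetadata& metadata)
    {
        // Dump the raw buffer to disk.
        if (m_file) fwrite(data, 1, size, m_file);
    }

private:
    FILE* m_file;
};

You would then wrap an instance in a std::shared_ptr and register it on the Split alongside the existing outputs.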
Update #1
I found that there is a file-output example inside the library (in the VCSimpleSession.mm file):
{
    m_h264Packetizer = std::make_shared<videocore::rtmp::H264Packetizer>(ctsOffset);
    m_aacPacketizer = std::make_shared<videocore::rtmp::AACPacketizer>(self.audioSampleRate, self.audioChannelCount, ctsOffset);

    m_h264Split->setOutput(m_h264Packetizer);
    m_aacSplit->setOutput(m_aacPacketizer);
}
{
    /*m_muxer = std::make_shared<videocore::Apple::MP4Multiplexer>();
    videocore::Apple::MP4SessionParameters_t parms(0.);
    std::string file = [[[self applicationDocumentsDirectory] stringByAppendingString:@"/output.mp4"] UTF8String];
    parms.setData(file, self.fps, self.videoSize.width, self.videoSize.height);
    m_muxer->setSessionParameters(parms);
    m_aacSplit->setOutput(m_muxer);
    m_h264Split->setOutput(m_muxer);*/
}
m_h264Packetizer->setOutput(m_outputSession);
m_aacPacketizer->setOutput(m_outputSession);
Try uncommenting that block and check whether the MP4 appears in the documents directory.

OpenCV 3.0 CascadeClassifier.load() Assertion Fail (!empty)

I've just updated my iOS project to OpenCV 3.0, and whenever I try to load the haarcascade file I get an assertion failure.
The previous version of OpenCV works fine, and there is no change to how I get the path and load the file (see below); it just seems not to work with version 3.0:
NSString *faceCascadePath = [[NSBundle mainBundle] pathForResource:kFaceCascadeFilename ofType:@"xml"];
_faceCascade.load([faceCascadePath UTF8String]);
I've also attempted to amend the way I read the file (another example I found, below):
const CFIndex CASCADE_NAME_LEN = 2048;
char *CASCADE_NAME = (char *) malloc(CASCADE_NAME_LEN);
CFStringGetFileSystemRepresentation( (CFStringRef)faceCascadePath, CASCADE_NAME, CASCADE_NAME_LEN);
But again to no avail...
Any suggestions would be much appreciated.
C.
Okay, I figured it out: I was running detectMultiScale in a separate thread while loading the haarcascade file in the main thread's viewDidLoad.
Moving the load into the thread doing the actual detection fixed it.
Still not sure why previous versions were not affected, though.
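A minimal sketch of that fix, assuming a GCD background queue and the _faceCascade member from the question (grayImage stands in for a cv::Mat frame prepared elsewhere):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Load the cascade on the same thread that runs the detection.
    NSString *faceCascadePath = [[NSBundle mainBundle] pathForResource:kFaceCascadeFilename ofType:@"xml"];
    if (!_faceCascade.load([faceCascadePath UTF8String])) {
        NSLog(@"Failed to load cascade");
        return;
    }

    // grayImage: a cv::Mat prepared elsewhere (assumption for this sketch)
    std::vector<cv::Rect> faces;
    _faceCascade.detectMultiScale(grayImage, faces);
});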

How to open a PDF file in my iPad/iPhone using my iOS application?

How can I open a PDF file that is stored on my iPad/iPhone, using my own application?
You can use a UIWebView to load it. It is very simple. If you want more flexibility, you should use the Quartz framework classes.
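A minimal sketch of the UIWebView approach, assuming a bundled file named document.pdf (an illustrative name):

// Load a bundled PDF into a UIWebView
NSString *path = [[NSBundle mainBundle] pathForResource:@"document" ofType:@"pdf"];
NSURL *url = [NSURL fileURLWithPath:path];
[self.webView loadRequest:[NSURLRequest requestWithURL:url]];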
EDIT:
To view a downloaded PDF, you can provide open-in functionality in your app. This is how you add "open-in" to your app.
Look here for complete tutorial.
There is a good tutorial available here which demonstrates how to open .pdf and .xls files in your application.
The main class you have to refer to for this is QLPreviewController (documented here).
This is the QLPreviewControllerDataSource method you would have to implement for that:
- (id <QLPreviewItem>)previewController:(QLPreviewController *)controller previewItemAtIndex:(NSInteger)index
{
    // Break the path into its components (filename and extension)
    NSArray *fileComponents = [[arrayOfDocuments objectAtIndex:index] componentsSeparatedByString:@"."];

    // Use the filename (index 0) and the extension (index 1) to get the path
    NSString *path = [[NSBundle mainBundle] pathForResource:[fileComponents objectAtIndex:0] ofType:[fileComponents objectAtIndex:1]];
    return [NSURL fileURLWithPath:path];
}
Someone might also like to refer to this SO question: iPhone - Opening word and excel file without using UIWebview.
You can use Core Graphics for this task.
Look at the PDFView sample from Apple's resources.
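A minimal Core Graphics sketch, drawing the first page of a PDF inside a custom view's drawRect: (the file name is an illustrative assumption; the flip transform accounts for PDF's bottom-left origin):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Open the PDF document from the bundle
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"document" withExtension:@"pdf"];
    CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
    CGPDFPageRef page = CGPDFDocumentGetPage(pdf, 1); // page numbers are 1-based

    // Flip the coordinate system: PDF uses a bottom-left origin, UIKit a top-left one
    CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    CGContextDrawPDFPage(ctx, page);
    CGPDFDocumentRelease(pdf);
}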

QuickTime metadata APIs and iTunes

I'm trying to set some metadata in a .mov file with the QuickTime metadata APIs and have it show up in iTunes. I've got it working for most of the properties, but I can't get the description field to populate. Here is the code I'm using (shortened to show only what I think is the relevant portion):
const char *cString = [@"HELLO WORLD" cStringUsingEncoding:NSMacOSRomanStringEncoding];
QTMovie *qtMovie = [[QTMovie alloc] initWithFile:filename error:&error];
Movie movie = [qtMovie quickTimeMovie];

QTMetaDataRef metaDataRef = NULL;
OSStatus err = noErr;
err = QTCopyMovieMetaData(movie, &metaDataRef);

QTMetaDataItem outItem;
QTMetaDataAddItem(metaDataRef,
                  kQTMetaDataStorageFormatiTunes,
                  kQTMetaDataKeyFormatCommon,
                  (const UInt8 *)&key,
                  sizeof(key),
                  (const UInt8 *)cString,
                  strlen(cString),
                  kQTMetaDataTypeUTF8,
                  &outItem);
I found the following link, which states that for the information and description properties I should be using kQTMetaDataStorageFormatQuickTime, but that doesn't seem to make any difference. Has anyone else had success getting the description column to populate when importing metadata into iTunes videos?
http://lists.apple.com/archives/quicktime-api/2006/May/msg00115.html
I ended up using AtomicParsley (http://atomicparsley.sourceforge.net/) without any issues. It also has the benefit of supporting mp4 and m4v files rather than just mov files, which was something I needed as well. With it, the descriptions showed up fine, and it was much easier to use than the QTMetaData API.
Edit: Argh... I just found out that this app doesn't work with mov files. It works with mp4 and m4v files, but I guess the original question still stands, because I would like to support mov files as well.
Finally figured it out, with the help of this post and some deep debugging into the contents of my tagged media:
Retrieving the key name on AVMetadataItem for an AVAsset in iOS
I set the data format to kQTMetaDataStorageFormatiTunes and the key format to kQTMetaDataKeyFormatiTunesShortForm, and then used the encoded id3-style tags as in the post above. The common keys (kQTMetaDataCommonKeyArtist, kQTMetaDataCommonKeyComment) generally will not work if your goal is to view the data in iTunes. A couple of them do still seem to work, but in general they don't map properly to their id3 counterparts.
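Based on that, a sketch of the working call might look like the following; the "desc" short-form tag for the description field is my assumption here (verify it against your own media's atoms), while the QTMetaDataAddItem call mirrors the one in the question:

// Use the iTunes short-form key format instead of the common keys
const char *key = "desc"; // assumed iTunes tag for the description field
const char *value = [@"HELLO WORLD" cStringUsingEncoding:NSMacOSRomanStringEncoding];

QTMetaDataItem outItem;
QTMetaDataAddItem(metaDataRef,
                  kQTMetaDataStorageFormatiTunes,
                  kQTMetaDataKeyFormatiTunesShortForm,
                  (const UInt8 *)key,
                  strlen(key),
                  (const UInt8 *)value,
                  strlen(value),
                  kQTMetaDataTypeUTF8,
                  &outItem);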

OpenCV and iPhone

I am writing an application to create a movie file from a bunch of images on an iPhone, using OpenCV. I downloaded the OpenCV static libraries for ARM (the iPhone's native instruction architecture) and the libraries were generated just fine; there were no problems linking to them.
As a first step, I am trying to create an .avi file from a single image, to see if it works. But cvCreateVideoWriter always returns NULL. I did some searching, and I believe it is because the codec is not present. I am trying this on the iPhone simulator. This is what I do:
- (void)viewDidLoad {
    [super viewDidLoad];

    UIImage *anImage = [UIImage imageNamed:@"1.jpg"];
    IplImage *img_color = [self CreateIplImageFromUIImage:anImage];
    // The image gets created just fine

    CvVideoWriter *writer =
        cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'),
                            25, cvSize(320, 480), 1);
    // writer is always NULL

    int result = cvWriteFrame(writer, img_color);
    NSLog(@"\n%d", result);
    // hence this is also 0 all the time

    cvReleaseVideoWriter(&writer);
}
I am not sure about the second parameter. What sort of codec does it specify, and what exactly does it do?
I am a n00b at this. Any suggestions?
On *nix flavors, OpenCV uses ffmpeg under the covers to encode video files, so you need to make sure your static libraries are built with ffmpeg support. The second parameter, CV_FOURCC('P','I','M','1'), is the FOURCC code describing the video format/codec you are requesting, in this case the MPEG-1 codec. Check out fourcc.org for a complete listing (not all of which work in ffmpeg).
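If rebuilding with ffmpeg is not an option, one thing worth trying is a codec that some OpenCV builds can encode without it, such as Motion-JPEG; whether it is compiled in depends on your build flags, so treat this as a sketch:

// Request Motion-JPEG instead of MPEG-1, and check the result before writing
CvVideoWriter *writer = cvCreateVideoWriter("out.avi", CV_FOURCC('M','J','P','G'),
                                            25, cvSize(320, 480), 1);
if (writer == NULL) {
    NSLog(@"Video writer could not be created - codec not available in this build");
}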
