I am recording video in iOS using AVCaptureSession.
- (id)init
{
    if ((self = [super init]))
    {
        [self setCaptureSession:[[AVCaptureSession alloc] init]];
    }
    return self;
}
- (void)addVideoPreviewLayer
{
    [self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
    [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}
How can I create NSData of the recorded video simultaneously with recording?
Access the encoded frames? You cannot do that with the iOS SDK alone. If you really need to, you can record a short segment to a file, access the encoded frames in that file, record a new file, access more, and so on.
However, if you are trying to get the raw frames while also writing, that's fairly straightforward. Instead of capturing output to a file, use -captureOutput:didOutputSampleBuffer:fromConnection: on your AVCaptureVideoDataOutputSampleBufferDelegate (or AVCaptureAudioDataOutputSampleBufferDelegate for audio). Just make sure to also route the data to something that is encoding/writing the buffers, otherwise you will lose the "...simultaneously with recording" aspect.
What you get isn't an NSData but a CMSampleBufferRef, which, depending on whether the buffer is audio or video, can be converted to NSData in various ways.
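A rough sketch of that approach, assuming ARC and invented names (FrameGrabber, attachToSession:); the commented-out writer line is a hypothetical hook for whatever object is actually encoding/writing the buffers:

#import <AVFoundation/AVFoundation.h>

@interface FrameGrabber : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;
@end

@implementation FrameGrabber

- (void)attachToSession:(AVCaptureSession *)session
{
    self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    dispatch_queue_t queue = dispatch_queue_create("video.frames", DISPATCH_QUEUE_SERIAL);
    [self.videoDataOutput setSampleBufferDelegate:self queue:queue];
    if ([session canAddOutput:self.videoDataOutput]) {
        [session addOutput:self.videoDataOutput];
    }
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Convert the uncompressed pixel buffer of this video frame to NSData.
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) {
        return;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    const void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dataSize = CVPixelBufferGetDataSize(pixelBuffer);
    NSData *frameData = [NSData dataWithBytes:baseAddress length:dataSize];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // Hand the same sample buffer to whatever is encoding/writing it, e.g.
    // [self.assetWriterInput appendSampleBuffer:sampleBuffer]; // hypothetical writer
    NSLog(@"Got raw frame: %lu bytes", (unsigned long)frameData.length);
}

@end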
Maybe these links will help you solve the problem:
http://www.ios-developer.net/iphone-ipad-programmer/development/camera/record-video-with-avcapturesession-2
http://indieambitions.com/idevblogaday/raw-video-data-app-quick-dirty/
Related
I have implemented streaming in iOS using the MobileVLCKit framework, and I have an issue.
When I declare the player in the @interface, the streaming and audio work well.
#import <MobileVLCKit/MobileVLCKit.h>
@interface ViewController () <VLCMediaPlayerDelegate> {
    VLCMediaPlayer *vlcPlayer1;
}
@end
But when I declare the VLCMediaPlayer object inside a local method, the video preview is not displayed, although the audio is playing.
- (void)viewDidLoad {
    [super viewDidLoad];
    VLCMediaPlayer *vlcPlayer1 = [[VLCMediaPlayer alloc] initWithOptions:nil];
    vlcPlayer1.drawable = self.view;
    VLCMedia *media = [VLCMedia mediaWithURL:[NSURL URLWithString:UrlString]];
    [vlcPlayer1 setMedia:media];
    [vlcPlayer1 play];
}
How can I resolve this issue? I need to create the view dynamically.
Try this:
[vlcplayer.media addOptions:@{ @"network-caching" : @300 }];
If it doesn't work, replace 300 with a bigger value.
That may work.
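For illustration, this is roughly how that option could be applied to the player from the question's viewDidLoad (variable names reused from the question; the 300 ms value is just a starting point):

VLCMedia *media = [VLCMedia mediaWithURL:[NSURL URLWithString:UrlString]];
// Increase the network cache before playback starts.
[media addOptions:@{ @"network-caching" : @300 }];
[vlcPlayer1 setMedia:media];
[vlcPlayer1 play];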
So both of these questions/answers put me on the right path, but this is ultimately what worked for me.
NSURL *_uri = [NSURL URLWithString:uri];
NSArray *initOptions = [NSArray arrayWithObjects:@"--codec=avcodec", @"--network-caching=10000", nil];
self.player = [[VLCMediaPlayer alloc] initWithOptions:initOptions];
self.player.media = [VLCMedia mediaWithURL:_uri];
It looks like addOptions: is valid, but my particular use case wasn't picking it up, and instead I had to initialize the VLCMediaPlayer with the options from the start. That worked out nicely, because it fits much better with the other Java/Android/command-line VLC APIs.
I have an application which requires using the microphone to record the user's voice. I'm trying to build speech-to-text.
I'm working with SpeechKit.framework, and below is the code I use:
- (void)starRecording {
    self.voiceSearch = [[SKRecognizer alloc] initWithType:SKSearchRecognizerType
                                                detection:SKShortEndOfSpeechDetection
                                                 language:[[USER_DEFAULT valueForKey:LANGUAGE_SPEECH_DIC] valueForKey:@"record"]
                                                 delegate:self];
}
- (void)recognizer:(SKRecognizer *)recognizer didFinishWithResults:(SKRecognition *)results {
    long numOfResults = [results.results count];
    if (numOfResults > 0) {
        // update the text field with the best result from SpeechKit
        self.recordString = [results firstResult];
        [self sendChatWithMediaType:@"messageCall" MediaUrl:@"" ContactDetail:@"{}" LocationDetail:@"{}"];
        [self.voiceSearch stopRecording];
    }
    if (self.voiceSearch) {
        [self.voiceSearch cancel];
    }
    [self starRecording];
}
That keeps the SKRecognizer always open, which reduces the application's performance.
I want to start the SKRecognizer only when the microphone detects input audio.
Is there a method for that? A method that is called when the microphone picks up sound, or one that continuously returns the detected audio level?
Thank you!
You need to use the SpeechKit class to set up the audio.
Look here for details:
http://www.raywenderlich.com/60870/building-ios-app-like-siri
This project shows how to detect an audio threshold:
github.com/picciano/iOS-Audio-Recoginzer
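One way to detect whether the microphone is currently picking up sound, independent of SpeechKit, is to meter the input with AVAudioRecorder and only call starRecording once the level crosses a threshold. A rough sketch (the class name, the threshold value, and the speechController hook are assumptions, not part of SpeechKit):

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface LevelMonitor : NSObject
@property (nonatomic, strong) AVAudioRecorder *recorder;
@property (nonatomic, strong) NSTimer *levelTimer;
@end

@implementation LevelMonitor

- (void)startMonitoring
{
    NSURL *url = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"level.caf"]];
    NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatAppleIMA4),
                                AVSampleRateKey : @44100.0,
                                AVNumberOfChannelsKey : @1 };
    self.recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:nil];
    self.recorder.meteringEnabled = YES;
    [self.recorder record];
    self.levelTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                       target:self
                                                     selector:@selector(checkLevel)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)checkLevel
{
    [self.recorder updateMeters];
    // averagePowerForChannel: returns dBFS; 0 is full scale, silence is strongly negative.
    float power = [self.recorder averagePowerForChannel:0];
    if (power > -20.0f) {
        [self.levelTimer invalidate];
        [self.recorder stop];
        // Loud enough: kick off recognition here, e.g.
        // [self.speechController starRecording]; // hypothetical hook back to your code
        NSLog(@"Voice detected, start the SKRecognizer");
    }
}

@end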
I'm using UIImagePickerController to take pictures and videos from my app. Toggling between the two isn't too bad. If the user chooses to record a video, I first check this:
if (picker.cameraCaptureMode == UIImagePickerControllerCameraCaptureModeVideo)
{
    [self captureVideo];
}
else
{
    picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModeVideo;
    [self captureVideo];
}
This usually works totally fine. Here's the catch: I'm also using OpenTok by TokBox for video calls, and it seems like the captureMode assignment doesn't work after a video call. It seems completely crazy, but I made this modification to do some debugging:
if (picker.cameraCaptureMode == UIImagePickerControllerCameraCaptureModeVideo)
{
    [self captureVideo];
}
else
{
    picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModeVideo;
    if (picker.cameraCaptureMode != UIImagePickerControllerCameraCaptureModeVideo)
    {
        NSLog(@"Assignment unsuccessful???");
    }
    [self captureVideo];
}
And I get this "Assignment unsuccessful???" log every single time. UIImagePickerController must not be allowing the assignment, or something. I really can't figure it out. I've also made a forum post on OpenTok's site to ask whether they might not be releasing some camera resources, but I don't think it's their problem.
Any insight here?
Use:
+ (NSArray *)availableCaptureModesForCameraDevice:(UIImagePickerControllerCameraDevice)cameraDevice
to check which capture modes are available. Also, if you're running on the simulator, the assignment will never succeed.
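For example, a guarded assignment could look like this (a sketch only; captureVideo and picker are from the question):

NSArray *modes = [UIImagePickerController availableCaptureModesForCameraDevice:picker.cameraDevice];
if ([modes containsObject:@(UIImagePickerControllerCameraCaptureModeVideo)]) {
    picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModeVideo;
    [self captureVideo];
} else {
    // Video capture is not offered here (e.g. on the simulator, or while another
    // component still owns the camera), so the assignment would silently fail.
    NSLog(@"Video capture mode unavailable");
}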
Solved with a solution from the TokBox forum. I needed to change my audio session category before trying to access the microphone:
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setCategory:AVAudioSessionCategorySoloAmbient error:nil];
[self presentViewController:picker animated:YES completion:NULL];
I have the following code, which stops the running (video) session, deletes the last segment of video, stores the video to a directory, and starts a new recording. There is (naturally) a gap between these two video segments. Is there any way to optimize this code (perhaps asynchronously, if possible)? By optimization I mean eliminating the time gap between the two segments as much as possible. Thank you.
- (void)restartVideoRecording {
    [captureSession removeOutput:captureMovieOutput];
    [captureMovieOutput stopRecording];
    if (lastPathWasOne) {
        captureMoviePath = [[URLPathProvider getUrlPathProvider] videoTwoPathString];
        [URLPathProvider deleteFileAtStringPath:captureMoviePath];
        lastPathWasOne = NO;
    } else {
        captureMoviePath = [[URLPathProvider getUrlPathProvider] videoOnePathString];
        [URLPathProvider deleteFileAtStringPath:captureMoviePath];
        lastPathWasOne = YES;
    }
    captureMovieURL = [[NSURL alloc] initFileURLWithPath:captureMoviePath];
    [captureSession addOutput:captureMovieOutput];
    [captureMovieOutput startRecordingToOutputFileURL:captureMovieURL recordingDelegate:self];
}
[NSTimer scheduledTimerWithTimeInterval:loopDuration target:self selector:@selector(restartVideoRecording) userInfo:nil repeats:NO];
Thank you very much
Yes, you can. Store the captured segments in NSTemporaryDirectory and use AVMutableComposition to merge the assets at the end of the recording session.
There is also sample code available; please check that as well.
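A rough sketch of the AVMutableComposition approach described above, assuming two already-recorded segment URLs and a placeholder output URL:

#import <AVFoundation/AVFoundation.h>

- (void)mergeSegment:(NSURL *)firstURL withSegment:(NSURL *)secondURL toURL:(NSURL *)outputURL
{
    AVMutableComposition *composition = [AVMutableComposition composition];
    CMTime cursor = kCMTimeZero;
    for (NSURL *segmentURL in @[firstURL, secondURL]) {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:segmentURL options:nil];
        NSError *error = nil;
        // Append the whole segment at the current end of the composition.
        [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                             ofAsset:asset
                              atTime:cursor
                               error:&error];
        cursor = CMTimeAdd(cursor, asset.duration);
    }
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                      presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = outputURL;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        NSLog(@"Merge finished with status %ld", (long)exporter.status);
    }];
}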
My text fields and the images picked from the image picker all reset to blank if my app stops running or the device is turned off. How can I retain this information?
I've used a singleton (with help from a fellow member) and I can retain my image... that is, until the app is killed or the device is turned off. Then it's gone.
.m
- (void)viewDidLoad
{
    singletonObj = [Singleton sharedSingletonController];
    imageView.image = singletonObj.imagePicked;
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
}
- (void)viewDidUnload
{
    [self setImageView:nil];
    [super viewDidUnload];
    // Release any retained subviews of the main view.
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
}

#pragma mark - Action

- (IBAction)done:(id)sender
{
    [self.delegate flipsideViewControllerDidFinish:self];
}

- (IBAction)btn:(id)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    if ((UIButton *)sender == choosePhotoBtn) {
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
    }
    [self presentModalViewController:picker animated:YES];
}
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSData *dataImage = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 1);
    UIImage *img = [[UIImage alloc] initWithData:dataImage];
    singletonObj.imagePicked = img;
    imageView.image = img;
    [picker dismissModalViewControllerAnimated:YES];
}
@end
There are two types of memory: volatile (RAM) and permanent storage (i.e., hard drives and other storage).
Volatile memory is cleared and lost when a program/computer shuts down.
Using a singleton is fine but it's completely unrelated to keeping data from session to session (and by session I mean the time when the program is running: from launch to termination of an application).
You need to write the data you wish to keep from session to session to a file, using any method you want. Depending on the information you want to store, there are dedicated mechanisms for saving (such as NSUserDefaults for user preferences).
Core Data is a framework which defines a mechanism for structuring data and saving/reading it to a file (a.k.a. a persistent store).
You can also use serialization.
Or you can always manually manipulate files.
NSData has writeToFile:atomically:, which will create a file out of a data object. If you want to save an image, you must first obtain the UIImage's underlying data (e.g. via UIImagePNGRepresentation(...)).
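As a minimal illustration of the file-based route, something like this could persist the picked image across launches (the method and file names here are made up for the example):

- (NSString *)savedImagePath
{
    NSString *documents = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    return [documents stringByAppendingPathComponent:@"pickedImage.png"];
}

- (void)saveImage:(UIImage *)image
{
    // Write the PNG bytes to the Documents directory; this survives relaunch and reboot.
    NSData *data = UIImagePNGRepresentation(image);
    [data writeToFile:[self savedImagePath] atomically:YES];
}

- (UIImage *)loadSavedImage
{
    // Returns nil if nothing has been saved yet.
    return [UIImage imageWithContentsOfFile:[self savedImagePath]];
}

You could then call saveImage: from imagePickerController:didFinishPickingMediaWithInfo: and loadSavedImage in viewDidLoad, instead of (or in addition to) the singleton.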
You are going to have to use Core Data. It can accept NSData from a UIImage as well as NSStrings. The applicationWillTerminate: method in the app delegate will have to be used so that the information is stored just before the application is terminated.
This is going to require a decent amount of work to get working properly, but nothing too difficult. If you need help understanding Core Data, I recommend this link:
http://www.raywenderlich.com/934/core-data-on-ios-5-tutorial-getting-started