I'm using the DirectX.Capture library to save video from a webcam to an AVI file. I need the saved video to have 50 fps or more, but when I use this:
capture.FrameRate = 59.994;
FrameRate doesn't change at all. It was 30 before that line and stays 30 after it. I tried other values, even 20 and 10, and nothing changes.
What else should I do to be able to change that value? Or is it something related to my hardware, so I can only hope it works on another machine?
Please help me, I don't know what to do.
Thanks
The source material (video, app, etc.) is probably only being updated at 30 fps, either because that is how the video codec or app behaves, or because you have vsync turned on in the target app (check your vsync settings; it might be forced by the video card drivers if there is hardware acceleration). DirectX.Capture probably clamps to the highest frame rate available from the source.
If you really want the video at 50 fps, capture it at its native rate (30/29.97) and resample it with other software (note that this is a destructive operation, since 50 is not a clean multiple of 30). This is no different from what DirectX.Capture would do if you could force it to 50 fps (even though that is nonsensical given that the source material has a lower frame rate). FYI, most video files are between 25 and 30 fps.
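For example, ffmpeg's fps filter can do the resampling (the file names here are placeholders):
ffmpeg -i captured.avi -filter:v fps=50 resampled.avi
Since 50 is not a multiple of 30, the filter has to duplicate source frames on an uneven cadence, which is exactly the destructive part mentioned above.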
This took me too long to figure out, so in the hopes of helping anyone out there dealing with any of these issues, I wanted to post the solution. But first, the problems:
I bought one of those cheap HDMI->USB dongles and connected my PS3 as a video source. In vlc the image looked crisp, but I was getting no sound and the video was really choppy. Checking the codec tab in the "info" section, I saw I was getting 1080p at 5 fps. I thought I had a defective dongle, but decided to check with other apps: tvtime/xawtv gave me a great framerate but a low resolution that I couldn't change; cheese let me set all the options, and I was getting a good framerate and good resolution (but no sound); and then I finally tried obs, which gave me a perfect result. So clearly the dongle is fine, and the problem was with vlc.
See my answer below for the solution to all those problems (and more!)
I found, through much research and experimentation, that the reason the video was choppy in vlc was that it was using the default "chroma" of YUV2, which, if I am not mistaken, is uncompressed. (You can check your webcam/dongle's capabilities by running v4l2-ctl --list-formats-ext -d /dev/video0, where /dev/video0 is your device.)
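For a dongle like this, the output of that command typically looks something like the following (abridged and illustrative, not my exact device):
ioctl: VIDIOC_ENUM_FMT
    [0]: 'MJPG' (Motion-JPEG, compressed)
        Size: Discrete 1920x1080
            Interval: Discrete 0.017s (60.000 fps)
    [1]: 'YUYV' (YUYV 4:2:2)
        Size: Discrete 1920x1080
            Interval: Discrete 0.200s (5.000 fps)
Note how the uncompressed format is limited to a very low framerate at 1080p (USB 2.0 bandwidth), which matches the 5 fps I was seeing.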
The correct setting to overcome this is mjpg. However, that results in a flood of errors saying:
[mjpeg @ 0x7f4e0002fcc0] No JPEG data found in image
This is caused by the default resolution and framerate (1080p@60fps) overwhelming what I guess is the mjpeg decoder. Setting it to 720p, or lowering the framerate to 30fps, prevents the errors.
Next, the sound was missing, because I am using pulseaudio and vlc cannot figure out which source to use.
I found the pulse source by running:
pactl list short sources
which yielded:
alsa_input.usb-MACROSILICON_USB_Video-02.multichannel-input
You can test that this is the correct source by running:
vlc pulse://alsa_input.usb-MACROSILICON_USB_Video-02.multichannel-input
I found that to combine the v4l2 video source with the correct pulseaudio source, you have to pass the audio to vlc via the input-slave parameter. Unfortunately, that did not work for me as described in the guides; instead, I had to set the video source as the slave. The final commands that worked for me were either of:
720p:
vlc pulse://alsa_input.YOUR-SOURCE-HERE-input --input-slave=v4l2:///dev/video0:chroma=mjpg:width=1280:height=720
1080p@30fps:
vlc pulse://alsa_input.YOUR-SOURCE-HERE-input --input-slave=v4l2:///dev/video0:chroma=mjpg:fps=30
We are facing data stalling with 360-degree video in the YouTube application, and we have observed it with more content as well, for example video ID 'HemwKBjQ0Uc' (【VR】Elemental Demo - 60fps 4k 8k Stereo 360 with Ambisonic audio). In the problematic case, a buffer is removed from the RangeList via the next range (DeleteAndRemoveRange(&next_range_itr)), and the problem appears within 30-60 seconds for the above-mentioned content. We are using Cobalt version 13.11, and from our analysis the MergeWithAdjacentRangeIfNecessary() API is the problematic one. For our internal validation, we increased the buffer budget for 1080p resolution up to 50 MB; with that change, no data stalling was observed in 360 video and the content played continuously. For your information, we have checked with the latest Cobalt application and observed the same behaviour.
Please advise us on how to resolve this issue.
Data stalling: video frames are not received by ffmpeg_video_decoder even after kNeedBuffer is signaled, while audio data keeps arriving continuously as usual. Could you please try to play the above-mentioned video? We have also confirmed that the latest Cobalt (19+) has this issue as well.
Thanks in advance
Have you tried increasing the video buffer budget set by variables like COBALT_MEDIA_BUFFER_VIDEO_BUDGET_4K and COBALT_MEDIA_BUFFER_MAX_CAPACITY_4K?
Yes, xiaoming. We have already tried this and it works fine, but we need to know the reason appendBuffer stops happening after this call occurs (DeleteAndRemoveRange(&next_range_itr)).
I've used this sample code to create an audio recorder. http://www.stefanpopp.de/capture-iphone-microphone/
I'm finding I get glitches about every 30 seconds. They sound like buffer glitches to me, although I might be wrong. I've tried contacting the author of the article, but without much success. I'm really struggling to follow some of this code. I think it's missing a circular buffer, but I'm not sure how important that is here. I'm hoping someone can point me in the right direction to either:
Point me to some different example code or suggest what I need to add to this (high level suggestion is fine - I'm happy to research and do the work, I'm just not confident what the work is)
Suggest some better values to use for things like the buffer data size.
Tell me that there's nothing wrong with this code and my bug is almost certainly elsewhere.
Suggest a library I can use that should take care of it (Amazing Audio Engine 2 looks good for me but I'm a bit worried about the note saying it's retired. AudioKit looks great too but it's missing a peak power reading, which would be a shame to have to implement myself after having imported such a complex library)
Why aren't I using AVAudioSession? I need the user to be able to set the mic level while recording and to listen back at the same time. Previously I did this with AVAudioSession, but on more recent devices isInputGainSettable returns NO. It also returns NO for many hardware mics plugged in via Lightning cable, which we're seeing more and more now that the headphone jack is gone.
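For reference, a minimal Swift sketch of that AVAudioSession gain path (the category choice and the 0.5 value are illustrative):
import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord)
try session.setActive(true)
// On many recent devices and Lightning mics this returns false,
// which is exactly the limitation described above.
if session.isInputGainSettable {
    try session.setInputGain(0.5)  // 0.0 ... 1.0
}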
Several problems:
Apple recommends that object methods not be called in the audio context (the callbacks). Your code has several such calls. Use plain C functions instead (see the sketch after this list).
Newer iOS devices typically use a hardware sample rate of 48000 Hz, not 44100 Hz. Resampling can cause buffer sizes to change.
The code seems to assume that the play callback buffer is the same size as the input callback buffer, which is not guaranteed. Playback might therefore end up with too few samples, causing periodic glitches.
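A sketch of the first point: the render callback should behave like a plain C function that reaches shared state only through the inRefCon pointer, so no Objective-C methods run on the audio thread. In Swift, a non-capturing closure assigned to AURenderCallback works this way (RecorderState and gain are illustrative names, not the article's):
import AudioToolbox

struct RecorderState { var gain: Float32 = 1.0 }  // written from the main thread

let renderCallback: AURenderCallback = { inRefCon, _, _, _, _, ioData in
    let state = inRefCon.assumingMemoryBound(to: RecorderState.self)
    guard let buffers = UnsafeMutableAudioBufferListPointer(ioData) else { return noErr }
    for buffer in buffers {
        guard let data = buffer.mData?.assumingMemoryBound(to: Float32.self) else { continue }
        let count = Int(buffer.mDataByteSize) / MemoryLayout<Float32>.size
        for i in 0..<count { data[i] *= state.pointee.gain }  // plain C-level work only
    }
    return noErr
}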
In my experience (iPhone 6), the sample rate from the microphone can be 48000 when no headset is plugged in, and change to 44100 when a headset is plugged in.
If your audio unit is expecting a sample rate of 44100, then glitches like these are to be expected. To verify, check whether the problem remains when you plug in a headset.
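You can also log the current hardware rate directly to check (a quick Swift snippet, not from the original code):
import AVFoundation
print("hardware sample rate:", AVAudioSession.sharedInstance().sampleRate)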
A workaround for the glitch problem seems to be to use an AVAudioEngine: connect its inputNode to its mainMixerNode using the inputFormat of the inputNode, connect the mainMixerNode to your AudioUnit in your desired format, and connect your AudioUnit to the outputNode of the AVAudioEngine.
Using this mixer node between the inputNode and the audio unit is essential to this workaround.
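A minimal Swift sketch of that graph (myUnit stands in for your audio unit node, here just a placeholder effect; the 44100/stereo format is an assumption):
import AVFoundation

let engine = AVAudioEngine()
let myUnit = AVAudioUnitDelay()  // placeholder for your own AVAudioUnit
engine.attach(myUnit)

// inputNode -> mainMixerNode, in the input's own hardware format
let inputFormat = engine.inputNode.inputFormat(forBus: 0)
engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)

// mainMixerNode -> your unit -> outputNode, in the format you want
let desired = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)
engine.connect(engine.mainMixerNode, to: myUnit, format: desired)
engine.connect(myUnit, to: engine.outputNode, format: desired)

try engine.start()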
We have a problem with HLS h.264 MP4 on iPad devices using HLS streaming on iOS 7 and 8:
The first 9-15 seconds (the length of the first TS segment) show only the first key frame (IDR) of the second TS segment, while the sound plays normally. When the second segment begins to play, the video continues as it should.
The HLS segmenter is Wowza with a 10-second segment length. The encoding software we use is TMPG, latest version (it uses x264). The funny thing is that Handbrake, XMedia Recode, and Adobe Media Encoder deliver videos that work. I am aware that this hints at a bug in our encoding software, but if someone has already had this problem with another software/segmenter combination and fixed it, I would like to know what the source of the problem was.
What we already tried:
changing almost every setting so that it sticks as close as possible to Apple's recommendations
changing the GOP structure, GOP length, and encoding parameters that influence encoding efficiency
analyzing the TS segments created by Wowza - they are fine and all begin with keyframes
contacting TMPG/Apple/Wowza support
So, did anyone stumble upon this problem before? Has anyone solved it?
EDIT:
It seems that TMPGEnc uses a defective x264 implementation. Apple's mediastreamvalidator tool returned an error stating that our TS segment "does not contain any IDR access unit with a SPS and a PPS" - which it actually does, but obviously in the wrong places, if that somehow matters.
Whatever tool you use for segmenting is not ensuring that each segment begins with SPS+PPS+IDR. This could be an issue with your encoder or your segmenter. Basically, decoding cannot begin until all three of these are encountered by the player. Try using mediafilesegmenter and mediastreamvalidator from Apple to analyze the problem.
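For example (invocations from memory of Apple's HLS tools; the flags, paths, and URL are illustrative - check each tool's man page):
mediafilesegmenter -t 10 -f /tmp/segments source.mp4
mediastreamvalidator http://example.com/stream/prog_index.m3u8
mediastreamvalidator reports, per segment, exactly the kind of missing-IDR/SPS/PPS error quoted in the edit above.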
I have an mp3 file that is almost sine-wave-like, and because of that, whenever I fade it out there are distortions. I need fade-outs over really short periods of time (0.05 seconds), and the timer resolution is not enough to cover this. As a result, I need to read out the samples, adjust their gain, and play them back. I did this in the original Flash/AS3 version of the app, but can someone tell me how to do this via Core Audio on iOS?
In case anyone wants to do something similar: I managed to do this using Audio Queues and Extended Audio File Services.
I read in the mp3 files using Extended Audio File Services (ExtAudioFile) while specifying a 32-bit PCM client format. I then start an AudioQueue, and in its callback I read from the buffers corresponding to the files, adjust their gain, and copy them into the queue buffer. Voila :D
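The gain step itself is just a per-sample multiply. A minimal Swift sketch of a linear fade-out over a run of Float32 PCM samples (function and parameter names are mine, not from the original code):
// Linearly ramp gain from 1.0 down to 0.0 across `count` samples, in place.
func fadeOut(_ samples: UnsafeMutablePointer<Float32>, count: Int) {
    for i in 0..<count {
        samples[i] *= 1.0 - Float32(i) / Float32(count)
    }
}
// A 0.05 s fade at a 44100 Hz sample rate covers Int(0.05 * 44100) = 2205 samples per channel.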