I'm testing an AIR app on an iPad 3.
The loop works fine with one MP4, but it flickers on restart with another one.
So I think it depends on the encoding.
Any hints on how to encode the video so that it loops seamlessly?
Short answer: you will not be able to get that to work, due to the way hardware MP4 playback is implemented under iOS. See this question for an example of a different approach that will work, as long as you are not using full-HD 30 FPS video (which the CPU and memory bus cannot handle).
This is my first question posted on Stack Overflow.
I'm trying to make a screen-cast app using a BroadcastExtension and the WebRTC protocol. But the broadcast extension's memory limit (50 MB) is so tight that if the application tries to send the original video (886×1918, 30 fps) without any processing, it immediately dies after receiving a memory-usage warning. After lowering the resolution and frame rate of the video, there is no problem. Investigating the application with the profiler does not turn up any memory leaks. I guess the cause is the frames allocated during the encoding process inside the WebRTC framework.
So my question is: is it possible to send the original video using WebRTC without any other processing, such as downscaling or lowering the frame rate?
It is possible.
I forgot to mention in the question, but the library I used is Google WebRTC. I made two mistakes. One was building the modified framework in debug mode, and the other was using a software encoder (the default is VP8). Because of this, the processing of video frames was delayed, and they accumulated in memory. DefaultEncoderFactory basically provides an encoder that operates in software. (At least on iOS; Android seems to pick hardware-based encoders/decoders automatically.) Fortunately, the iOS version of the Google WebRTC framework supports the H264 hardware encoder (EncoderFactoryH264). In other cases you have to implement it yourself.
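For reference, a minimal sketch of wiring that up, assuming the factory class names exposed by the Google WebRTC iOS framework (RTCVideoEncoderFactoryH264 / RTCVideoDecoderFactoryH264; adjust to whatever your build actually exposes):

```swift
import WebRTC

// Sketch: back the peer connection factory with the platform's H.264
// hardware codec instead of the default software VP8 implementation.
let encoderFactory = RTCVideoEncoderFactoryH264()
let decoderFactory = RTCVideoDecoderFactoryH264()
let peerConnectionFactory = RTCPeerConnectionFactory(
    encoderFactory: encoderFactory,
    decoderFactory: decoderFactory
)
// Tracks created from this factory are encoded in hardware, which keeps
// per-frame allocations inside the 50 MB broadcast-extension budget.
```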
However, when transmitting with H264, there is the problem that some platforms cannot play it back, for example Android. The Google WebRTC group seems to be aware of this problem, but at least as far as I can tell, it has not been properly resolved. Additional work is needed to solve this.
We are hosting MP3 files on AWS S3. We have built a web app (in React) that plays back the MP3s. However, playback sometimes becomes distorted in Safari on iOS. The strange thing is that this does not happen all the time.
Here is the original file (sometimes distorted): https://sayyit-prod-static-assets.s3.amazonaws.com/static/audio/Darrin+M.+McMahon.original.mp3
Here is how the file sounds when distorted: https://sayyit-prod-static-assets.s3.amazonaws.com/static/audio/WhatsApp+Video+2019-09-26+at+11.06.49+AM.mp4
Now, this distortion only happens when playing the file through our app. When we open a direct link to S3 (like the ones above), it works. The distortion also happens when linking directly to S3 from within our app.
Here are some ideas:
- The MP3 file is broken.
- When going directly to the S3 link, the file downloads entirely, which seems to allow it to play perfectly.
Any help would be greatly appreciated.
The sample rate on this MP3 file is 16 kHz. That's very low (not abnormal for voice), but also uncharacteristically low for a 128k MP3. I suspect that there's a bug with the resampler (as the iPhone hardware is locked to 48 kHz anyway), or that you're hitting an edge case bug with the decoder.
I'd recommend that you stop using MP3 and solve a few things at once. While MP3 is of acceptable quality, its quality at a given bitrate isn't as good as the alternatives'. These days, you should consider using Opus. It's supported on iOS if muxed into a CAF file, and it is extremely efficient. You could drop the bitrate down to 48k for voice and still have excellent quality. And you'll bypass whatever resampling or decoding issue you're having now, all in one go.
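As a sketch of the playback side: recent ffmpeg builds can produce such a file (e.g. `ffmpeg -i in.mp3 -c:a libopus -b:a 48k out.caf`, an assumption worth verifying against your build), and on an iOS version recent enough to decode Opus it plays through the ordinary AVFoundation API. The `voice.caf` asset name below is hypothetical:

```swift
import AVFoundation

// Sketch: play a hypothetical Opus-in-CAF file with AVAudioPlayer.
var player: AVAudioPlayer? // keep a strong reference, or playback stops

func playVoiceClip() {
    guard let url = Bundle.main.url(forResource: "voice", withExtension: "caf") else {
        print("voice.caf not found in bundle")
        return
    }
    do {
        player = try AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay()
        player?.play()
    } catch {
        print("Playback failed: \(error)")
    }
}
```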
So I'm planning to build an app that, at the very least, lets me convert the mic input on an iPhone into a balanced audio signal through the headphone jack. The problem is I'm not sure if getting mic input to the output is possible without a delay. I've looked into Core Audio and AVFoundation, but it looks like one is getting deprecated soon and the other might be too high-level to ever do what I need. I'm testing out AudioKit, but I've only run it in a simulator running on a virtual machine inside Windows, so I might get much better results on an actual device (although I'm skeptical, because the audio delay is about the same as when I monitor my microphone through Windows).
Does anyone know of any frameworks, or literally anything, that might make it possible to do real-time audio processing without too noticeable a delay?
Is it even possible on iOS or is the OS overhead too big?
Literally any answer is appreciated.
I'm doing real-time audio processing using AudioKit. There were a few hiccups, but I've managed to add processing nodes to live mic input and route it to the speaker with virtually no delay.
A notable hiccup I ran into was the difference between a 'debug' build and a 'release' build in Xcode. The release build takes longer to compile but runs faster, which reduces delay in the audio buffer processing. My testing platform is an old iPad 2, though, so you may not run into those issues on more recent hardware.
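For reference, a minimal mic-to-speaker passthrough looks roughly like the sketch below, assuming AudioKit 5 (where the types are AudioEngine and Mixer; older AudioKit versions spell these differently):

```swift
import AudioKit

// Minimal sketch of a live mic-to-speaker passthrough, assuming AudioKit 5.
// A real app must configure an AVAudioSession (.playAndRecord) and obtain
// microphone permission before starting the engine.
let engine = AudioEngine()

func startPassthrough() throws {
    guard let input = engine.input else { return } // no mic available
    // Insert effect/processing nodes between `input` and the mixer here.
    engine.output = Mixer(input)
    try engine.start()
}
```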
I need some advice from people experienced with streaming video.
My task is to put together a system that takes video coming from RS-170 (composite) video cameras and displays it on an iPad. The catch is that no wireless (no Wi-Fi, no Bluetooth) is allowed. Only a wired interface.
The physical I/O options on an iPad are apparently extremely limited, but I did manage to come across a company named Redpark that makes an RS232-to-Lightning cable. So my proposed solution is to have the video feeds go into a box with software that digitizes and encodes the video, then sends it over RS232 to the iPad using that cable. The catch here is that the maximum bandwidth on that cable is 115 kbps.
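To put that budget in perspective (my own back-of-the-envelope math, assuming standard 8N1 serial framing): 115,200 baud with one start and one stop bit per byte works out to about 11,520 bytes/s, i.e. roughly 92 kbit/s of usable payload before any container or protocol overhead.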
My preliminary testing of this setup on a prototype system has been less than stellar so far. I set up two PCs, each with a serial port, and hooked them together with a null modem. I then set the baud rate of the ports to 115 kbps and attempted to stream a webcam video feed over the serial connection in real time using ffmpeg. The results weren't very encouraging, but I did at least manage to get some sort of image to show up.
I guess I need to play around with the ffmpeg encoding options some more. But I need to ask: am I wasting my time with this idea, or should what I am asking here be possible?
For the SDA LQ ("low quality") standard, we encode H.264 MP4 (using x264) with a 128 kbps video track. The hardware decoding on the iPad can play it. It is at most 320×240, 30 fps video. The quality depends heavily on the material. For mostly non-moving material, it is watchable. If there is a lot of movement or lighting change, you may not be able to make out much. You can check out some examples at the link. It is video-game footage, but some of it may be comparable to your application.
Without knowing more about your requirements (resolution, framerate, type of material), it is difficult to say more. However, given the right material, it is definitely possible to do it and have it be watchable (for some definitions of watchable).
I have used the VLC plugin (VLC web plugin 2.1.3.0) in Firefox to display a live stream coming from my server in the browser, and I need to display 16 channels on one web page. But when I play more than 10 channels at the same time, the processor hits 100% and the video starts to break up. I checked the plugin's memory usage among the running tasks and saw that around 45 MB of memory is dedicated to each video (so for 10 channels: 10 × 45 = 450 MB).
Kindly, do you have any method to reduce the VLC plugin's consumption so that 16 channels can be displayed at the same time?
Best regards,
There is no good way to do that. You could probably save a few megabytes by disabling audio decoding, in case some of your 16 streams carry audio tracks you don't need. Other than that, 45 MB per stream is quite reasonable for VLC playback, and you won't be able to go much below that unless you reduce the video dimensions.
Additionally, your problem is probably not the use of half a gigabyte of memory (Chrome and Firefox easily use that much by themselves if you open a few tabs), but that VLC exceeds your CPU capacity. Make sure not to use windowless playback, since it is less efficient than the normal windowed mode.
VLC 2.2 will improve the performance of the web plugins on Windows by adding the hardware acceleration known from the standalone application.