I have created an app that records and plays sound, and I am looking for a way of showing a simple wave
representation of the recorded sound. No animation is necessary, just a simple graph.
It would also be nice if it were possible to select a subset of the wave, and of course even nicer
to play that section as well.
To sum up, what I'm looking for:
A way of graphically representing a recorded sound as a wave (e.g. as seen in Audacity)
A way of graphically selecting a subset of the wave representation.
And to clarify a bit further what I'm looking for:
If there is a lib for this I'd be insanely happy :)
A hint on what components to best use for handling the graph drawing.
A tip on how to handle the selection within the graphical component.
I already did this in another application and have been struggling with it for a while ...
You would divide the number of samples the audio file has by the number of pixels you have available to display the graph. This gives you a chunk size.
For each of these "buckets" you calculate the min and max value and draw them in relation to the sample resolution used.
Can provide further examples if needed.
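A minimal sketch of that bucketing step in Swift, assuming the recording has already been decoded into a plain Float array in the range -1...1 (the type and function names here are made up for illustration):

```swift
// One min/max pair per horizontal pixel of the waveform view.
struct WaveformBucket {
    let min: Float
    let max: Float
}

// Splits the sample array into pixelWidth chunks and records each chunk's extremes.
func buildBuckets(samples: [Float], pixelWidth: Int) -> [WaveformBucket] {
    guard pixelWidth > 0, !samples.isEmpty else { return [] }
    let chunkSize = max(1, samples.count / pixelWidth)   // the "chunk size" described above
    var buckets: [WaveformBucket] = []
    buckets.reserveCapacity(pixelWidth)

    var start = 0
    while start < samples.count {
        let end = min(start + chunkSize, samples.count)
        let chunk = samples[start..<end]
        buckets.append(WaveformBucket(min: chunk.min() ?? 0, max: chunk.max() ?? 0))
        start = end
    }
    return buckets
}
```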
Regarding the graphics stuff:
(I am not an iOS developer, but Mac programming isn't that much different, I think.)
Just create a subclass of NSView (UIView on iOS) and override the drawRect method.
Then just create a function that you pass an array of values for your file and draw a bunch of lines to the screen. There's no black magic here!
This is really nothing you would need a library for!
And, as another positive aspect: if you keep it generic enough you can always reuse it.
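For example, a rough sketch of such a reusable view, assuming the WaveformBucket array from the snippet above (illustrative only, not a finished implementation):

```swift
import UIKit

// Draws one vertical line per bucket, from the bucket's min to its max,
// centred around the vertical middle of the view.
final class WaveformView: UIView {
    var buckets: [WaveformBucket] = [] {
        didSet { setNeedsDisplay() }   // redraw whenever new data arrives
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(), !buckets.isEmpty else { return }
        ctx.setStrokeColor(UIColor.systemBlue.cgColor)
        ctx.setLineWidth(1)

        let midY = bounds.midY
        let halfHeight = bounds.height / 2
        let step = bounds.width / CGFloat(buckets.count)

        for (i, bucket) in buckets.enumerated() {
            let x = CGFloat(i) * step
            ctx.move(to: CGPoint(x: x, y: midY - CGFloat(bucket.max) * halfHeight))
            ctx.addLine(to: CGPoint(x: x, y: midY - CGFloat(bucket.min) * halfHeight))
        }
        ctx.strokePath()
    }
}
```

Selection could then live in the same view: track touches (or a UIPanGestureRecognizer), remember the start and end x-coordinates, draw a translucent rectangle over that range, and map the pixel range back to a sample range via the chunk size.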
Let me start off by showing that I have this UIImageView set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
Using the UIBezierPath class to draw, which would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Cropping the highlighted body parts out of the original image and positioning them over the UIImageView depending on which UIButton is selected. There would only be one image per body part, which is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be a BETTER option for achieving this in terms of cpu processing and memory allocation?
In other words, I'm just concerned about my app lagging, as well as taking up storage space. I'm not concerned about how much implementation time it takes; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
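As a hedged sketch of what option 2 with semitransparent tinted cutouts might look like (the "head_cutout" asset, the outlet, and the wiring are all assumptions, not your actual project):

```swift
import UIKit

final class BodyDiagramViewController: UIViewController {
    @IBOutlet private var bodyImageView: UIImageView!      // the full body diagram
    private let highlightView = UIImageView()               // shows the cutout overlay

    override func viewDidLoad() {
        super.viewDidLoad()
        highlightView.frame = bodyImageView.bounds
        highlightView.contentMode = .scaleAspectFit
        // Semitransparent tint; the cutout's alpha channel provides the shape.
        highlightView.tintColor = UIColor.red.withAlphaComponent(0.4)
        highlightView.isHidden = true
        bodyImageView.addSubview(highlightView)
    }

    // Wire to the button's "Touch Down" event.
    @IBAction func headTouchDown(_ sender: UIButton) {
        highlightView.image = UIImage(named: "head_cutout")?.withRenderingMode(.alwaysTemplate)
        highlightView.isHidden = false
    }

    // Wire to "Touch Up Inside" / "Touch Up Outside".
    @IBAction func headTouchUp(_ sender: UIButton) {
        highlightView.isHidden = true
    }
}
```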
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time a lot easier to implement.
For a quick start on tests, see here.
Performance tests are covered here.
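A minimal sketch of such a performance test using XCTest's measure block; the two bodies are placeholders to be swapped for your actual option 1 / option 2 code, and "head_cutout" is a hypothetical asset name:

```swift
import XCTest
import UIKit

final class HighlightPerformanceTests: XCTestCase {
    func testBezierPathDrawing() {
        measure {
            // Option 1 placeholder: build and fill a path offscreen.
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
            _ = renderer.image { _ in
                let path = UIBezierPath(ovalIn: CGRect(x: 100, y: 50, width: 100, height: 100))
                UIColor.red.withAlphaComponent(0.4).setFill()
                path.fill()
            }
        }
    }

    func testCutoutImageDrawing() {
        measure {
            // Option 2 placeholder: load and tint a cutout image.
            _ = UIImage(named: "head_cutout")?.withRenderingMode(.alwaysTemplate)
        }
    }
}
```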
I am taking a stab at John Conway's Game of Life [wiki] & [demo]. I have developed a small program in C to calculate the next state, using a 1D array (but with 2D array logic).
I am hoping to make a small iOS app out of this (porting it to Objective-C!), and am wondering about the best and fastest way to render a grid like the one seen in the video. Note that it would have to render every fraction of a second and would use an array of 1s and 0s to determine each block's respective colour.
Edit: I'm probably looking at around 10 frames/sec, but a very large grid; it would be rendering hundreds of thousands of squares. Of course, if this isn't physically possible with iPhone/iPad technology then I'll reduce the grid size. It is variable without issue; it just looks more 'epic' on a grand scale.
Any suggestions will help out, never touched anything of this manner before.
The best way depends on your criteria. Fastest would probably be to use OpenGL. You might even be able to write a shader to do the entire simulation. However, OpenGL is hard. Really hard.
I suspect that using Core Graphics and implementing code in a view's drawRect method that renders the array of cells onto the screen would be fast enough. It depends on how many cells you have and how many frames/second you want to draw.
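For illustration, a rough sketch of the Core Graphics route (written in Swift for brevity; the Objective-C version is structurally identical), using the 1 = alive / 0 = dead encoding from the question:

```swift
import UIKit

final class LifeGridView: UIView {
    var columns = 100
    var rows = 100
    var cells: [UInt8] = [] {            // 1D array with 2D logic: index = row * columns + col
        didSet { setNeedsDisplay() }     // redraw after each generation
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(), cells.count == columns * rows else { return }
        let cellW = bounds.width / CGFloat(columns)
        let cellH = bounds.height / CGFloat(rows)

        ctx.setFillColor(UIColor.black.cgColor)
        for row in 0..<rows {
            for col in 0..<columns {
                if cells[row * columns + col] == 1 {
                    ctx.fill(CGRect(x: CGFloat(col) * cellW, y: CGFloat(row) * cellH,
                                    width: cellW, height: cellH))
                }
            }
        }
    }
}
```

Whether this keeps up at hundreds of thousands of cells per frame is something only profiling will tell you; if it doesn't, that is the point at which OpenGL (or a shader-based simulation) becomes worth the effort.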
I'm a Unity dev and need to help out colleagues with doing this natively in Obj-C. In Unity it's no big deal:
1) Samples are stored in memory as a List of float[].
2) A helper function returns a float[] of size n for any given sample, at any given offset.
3) Another helper function fades the data if needed.
4) An AudioClip object is created with the right size to accommodate all cut samples, and is then filled at the appropriate offsets.
5) The AudioClip is assigned to a player component (AudioSource).
6) AudioSource.Play(ulong offsetInSamples) plays at a sample-accurate time in the future. Looping is also just a matter of setting the AudioSource object's loop parameter.
I would very much appreciate it if someone could point me towards the right classes to achieve similar results in Obj-C, for iOS devices. I'm pretty sure a lot of iOS audio newbies would be interested too. Many thanks in advance!
Gregzo
A good overview of the relevant audio APIs available in iOS is here.
The highest level framework that makes sense for patching together audio clips, setting their volume levels, and playing them back in your case is probably AVFoundation.
It will involve creating AVAssets, adding them to AVPlayerItems, possibly putting them into AVMutableCompositions to merge multiple items together and adjust their volumes (audioMix), and then playing them back with AVPlayer.
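As a hedged sketch of that flow (in Swift for brevity, but the same classes are used from Objective-C; the URLs and the 0.8 volume are placeholders):

```swift
import AVFoundation

// Appends two audio clips back to back in a composition, lowers the volume
// via an audio mix, and plays the result.
func playStitchedClips(firstURL: URL, secondURL: URL) -> AVPlayer? {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) else {
        return nil
    }

    var cursor = CMTime.zero
    for url in [firstURL, secondURL] {
        let asset = AVURLAsset(url: url)
        guard let sourceTrack = asset.tracks(withMediaType: .audio).first else { continue }
        try? track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                   of: sourceTrack, at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }

    // Volume adjustment for the whole track; per-clip ramps are also possible.
    let mixParameters = AVMutableAudioMixInputParameters(track: track)
    mixParameters.setVolume(0.8, at: .zero)
    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [mixParameters]

    let item = AVPlayerItem(asset: composition)
    item.audioMix = audioMix
    let player = AVPlayer(playerItem: item)
    player.play()
    return player                        // keep a strong reference, or playback stops
}
```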
AVFoundation works with AVAsset; for converting between the relevant formats and working with lower-level bytes, you'll want to have a look at AudioToolbox (I can't post more than two links yet).
For a somewhat simpler API with less control, have a look at AVAudioPlayer. If you need greater control (e.g. games, real-time / low latency) you might need to use OpenAL for playback.
I have a device which streams h264 video in the following format: the top half of the picture contains the even lines of the video, and the bottom half contains the odd lines. So the question is: how can I play this video back correctly using standard players, ffplay for example?
I know about the "tinterlace:merge" filter in ffmpeg, but it combines video from two consecutive pictures. My task is to make a correct picture out of a single frame.
Regards,
Alexey.
I recently had to deal with the exact same problem.
There are many different methods, and the optimal solution depends entirely on your situation.
The simplest and fastest method is weaving the two fields together, which is perfect for static parts but creates a combing effect on moving objects.
More complicated approaches use motion detection.
What I did was merge the two fields and then apply edge-line averaging (ELA) to the moving segments to reduce the combing effect.
Check this link for a detailed explanation of the problem.
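For reference, a minimal sketch of the weave step for a single grayscale plane, assuming the layout from the question (even field in the top half, odd field in the bottom half); it does no motion handling at all:

```swift
// Interleaves the two half-height fields back into a full-height frame.
func weaveFields(plane: [UInt8], width: Int, height: Int) -> [UInt8] {
    precondition(plane.count == width * height && height % 2 == 0)
    var woven = [UInt8](repeating: 0, count: plane.count)
    let fieldRows = height / 2

    for row in 0..<fieldRows {
        let evenSource = row * width                   // top half -> even output line
        let oddSource  = (fieldRows + row) * width     // bottom half -> odd output line
        let evenDest   = (2 * row) * width
        let oddDest    = (2 * row + 1) * width
        woven.replaceSubrange(evenDest..<evenDest + width,
                              with: plane[evenSource..<evenSource + width])
        woven.replaceSubrange(oddDest..<oddDest + width,
                              with: plane[oddSource..<oddSource + width])
    }
    return woven
}
```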
It would be good if you could provide a sample video file. You describe very well what the picture looks like, but the file may contain other information that is helpful for playback.
Furthermore, the format you describe doesn't sound like a standard format, so it's unlikely you will get a regular player to play it the way you want, out-of-the-box. If you're using ffplay, it's likely that you will have to write your own plugin to re-order the scanlines prior to displaying them.
Alternatively, you could re-encode the video into a standard format (interlaced or deinterlaced) using ffmpeg. You could then play it back in any regular player, like ffplay or VLC.
Finally, I recommend asking your question on the ffmpeg mailing list.
I am working on a project where I take a video with a camera and convert the video to frames (this part of the project is done).
What I am facing now is how to detect moving objects in these frames and differentiate them from the background, so that I can distinguish between them.
I recently read an awesome CodeProject article about this. It discusses several approaches to the problem and then walks you step by step through one of the solutions, with complete code. It's written at a very accessible level and should be enough to get you started.
One simple way to do this (if little noise is present; I'd recommend a smoothing kernel though) is to compute the absolute difference of two consecutive frames. You'll get an image of the things that have "moved". The background needs to be pretty static for this to work. If you always take the absolute difference between the current frame and the nth frame, you'll get a grayscale image containing the objects that moved. The objects have to be different from the background colour or they will disappear...
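A toy sketch of that differencing step on two grayscale frames of equal size (plain byte arrays; smoothing is left out and the threshold value is arbitrary):

```swift
// Absolute difference per pixel, then a threshold to produce a motion mask
// (255 = "moved", 0 = background).
func motionMask(previous: [UInt8], current: [UInt8], threshold: Int = 30) -> [UInt8] {
    precondition(previous.count == current.count)
    var mask = [UInt8](repeating: 0, count: current.count)
    for i in 0..<current.count {
        let diff = abs(Int(current[i]) - Int(previous[i]))
        mask[i] = diff > threshold ? 255 : 0
    }
    return mask
}
```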