I'm an artist looking to make an app that utilizes the camera on an Android or iPhone device to display a stereoscopic video feed at 1 to 5 frames per second. Python/Kivy is what I (sort of) know, so I'm starting there. Does the Camera module in Kivy support setting a custom framerate? The existing documentation doesn't seem to say.
(Also very open to simpler ways to accomplish this/existing applications).
It doesn't directly have a property to set for that, but it should be very easy to achieve. Off the top of my head, you could render the widget in an Fbo and only redraw the Fbo at the rate you require, but there's probably a neater solution.
A bigger problem will probably be that the Camera support is not that robust; make sure you prototype first to understand what works and what doesn't, or at least what needs more custom work to do what you want.
How to best reproduce the closing activity ring animation from watchOS 4 on iOS? I'm particularly interested in the rotating sparkling effect.
Here is a still frame of the animation I'm talking about:
and here is a video of it.
Is it possible to implement something like this with Core Animation?
Here at the university of science in Zürich, in the usability lab, we use:
Sketch, Illustrator, or designer.gravit.io for designing the SVG sketches;
then we import that into After Effects or Haiku.ai for animating;
and export it as .json for Airbnb's animation library Bodymovin, also known as Lottie. Libraries for it are available for web, Android, and iOS.
The advantage of this solution over bryanjclark's "exported it as a series of images" approach is that the animation stays sharp at every resolution (SVG), it is only one .json file, and you have full control over its speed and frames.
Otherwise, if you really want to do it in code only, take a look at this article, done with OpenGL ES 2.0.
Or with the AnimationCore example in this SO Answer.
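For context, here is a minimal sketch of what playback of such an exported .json looks like on the iOS side with the lottie-ios library. This assumes lottie-ios 4.x (where the view class is called LottieAnimationView) and a bundled file named "activity_ring.json"; both are placeholders for whatever you actually export:

```swift
import Lottie
import UIKit

// Minimal sketch, assuming lottie-ios 4.x and a bundled "activity_ring.json"
// exported from After Effects via Bodymovin (the file name is a placeholder).
final class RingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let animationView = LottieAnimationView(name: "activity_ring")
        animationView.frame = view.bounds
        animationView.contentMode = .scaleAspectFit
        animationView.loopMode = .playOnce
        view.addSubview(animationView)

        // Play the closing-ring animation once; speed and frame range are adjustable.
        animationView.animationSpeed = 1.0
        animationView.play()
    }
}
```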
I'm nearly certain that is a pre-rendered animation, not something generated on-device. (If it is generated on-device, it's not something you'd have API access to.)
I’d bet that:
a designer worked it up in a tool like After Effects,
exported it as a series of images,
then the developers implemented it using something like WKImageAnimatable
You can see other developers using WKImageAnimatable to build gorgeous animations in their WatchKit apps - for example, Cultured Code’s app Things (watch the video there!) has some really terrific little animation flourishes that (almost-definitely) use WKImageAnimatable under-the-hood!
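To make the WKImageAnimatable route concrete, here is a rough sketch of how a pre-rendered frame sequence is typically played back in a WatchKit extension. WKInterfaceImage adopts WKImageAnimatable; the frame names, frame count, and duration below are made-up placeholders:

```swift
import WatchKit

// Sketch only: assumes the asset catalog contains pre-rendered frames named
// "ringFrame0" ... "ringFrame59" (hypothetical names), exported from a design tool.
class RingInterfaceController: WKInterfaceController {

    @IBOutlet weak var ringImage: WKInterfaceImage!

    override func willActivate() {
        super.willActivate()

        // WKInterfaceImage conforms to WKImageAnimatable.
        ringImage.setImageNamed("ringFrame")           // base name of the frame sequence
        ringImage.startAnimatingWithImages(in: NSRange(location: 0, length: 60),
                                           duration: 2.0,   // seconds for the full sweep
                                           repeatCount: 1)  // play the closing animation once
    }
}
```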
I am currently working on an iOS app that can take a picture programmatically, using AVFoundation classes such as AVCaptureDevice, triggered through a custom button.
The new requirement is that the camera should automatically take a picture when the camera session detects something specific. For example, if the camera is open, and I line up an apple to fill a certain circle part of the capture screen, it should take the picture automatically. We can see this auto capture feature in some banking apps when you submit a mobile check deposit.
Does anyone know of existing libraries (open-source or proprietary) that can analyze images in real time while a user is taking a picture?
The first thing you are going to need to do is decide how you want to detect the apple. You can do this using shape detection, image recognition, or various other methods. This is important because you need to know the approach you want to take before you can identify the best way to implement it.
Once you know how you are going to identify the apple, the easiest way to do real-time image processing like this would be to use an existing augmented reality SDK. For example:
http://www.wikitude.com/products/wikitude-sdk/
http://artoolkit.org/
https://developer.vuforia.com/
If you are feeling really adventurous you could roll your own using AForge or a similar library. I have taken this approach in the past for basic shape detection projects.
Edit
The reason I suggest using an existing AR SDK is that they generally provide a lot of the glue between the camera feed and their API for you, which takes a lot of legwork out of the equation. Even though you won't be using any of the actual "augmentation" part of their SDKs, you can still take advantage of the detection part.
No matter what approach you take, you can think about it in the simplest terms of looking at a picture and figuring out whether the item you want is in that picture. How do you decide? In most cases you look for a specific shape or pattern.
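Whatever you choose for the recognition itself, the AVFoundation plumbing usually looks the same: attach an AVCaptureVideoDataOutput to the capture session, inspect each frame as it arrives, and trigger the still capture when your detector fires. A rough sketch, assuming the iOS 10+/11+ photo capture API; the looksLikeApple check is a hypothetical placeholder for whichever detection approach or SDK you end up using:

```swift
import AVFoundation

// Rough sketch: wire live frames into a detector and auto-capture on a match.
// looksLikeApple(_:) is a hypothetical placeholder for your chosen detection method.
final class AutoCaptureController: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate, AVCapturePhotoCaptureDelegate {

    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let photoOutput = AVCapturePhotoOutput()
    private let frameQueue = DispatchQueue(label: "camera.frames")
    private var hasCaptured = false

    func configure() {
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        videoOutput.setSampleBufferDelegate(self, queue: frameQueue)
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }  // frames to analyze
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }  // still capture
        session.startRunning()
    }

    // Called for every frame the camera delivers.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !hasCaptured,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        if looksLikeApple(pixelBuffer) {
            hasCaptured = true
            photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        // Hand the captured image off to the rest of the app here.
    }

    private func looksLikeApple(_ buffer: CVPixelBuffer) -> Bool {
        // Placeholder: run your shape/image recognition here and return true
        // when the subject fills the target region of the frame.
        return false
    }
}
```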
I'm developing an app that will showcase products. One of the features of this app is that you will be able to "rotate" the product using your finger (a pan gesture).
I was thinking of implementing this by taking photos of the product from different angles, so that when you "drag" the image, all I would have to do is switch the images accordingly. If you drag a little, I switch only one image... if you drag a lot, I switch them in cadence, making it look like a movie... but I have a concern and a probable solution:
Is this performant? Since it's an art/museum product showcase, the photos will be quite large in size/definition, and loading/switching when "dragged a lot" might be a problem because it would cause flickering... My solution would be: instead of loading pic by pic, I would put them all inside one massive sheet and work through them as if they were sprites...
Is that a good idea? Or should I stick with the pic-by-pic rotation?
Edit 1: There's a complication: the user will be able to zoom in/out and to rotate the product on any axis (X, Y and Z)...
My personal opinion: I don't think this will work the way you hope, or the performance and/or aesthetics will not be what you want.
1) Taking individual shots that you then try to step through, keyframe-style, based on touch events won't work well, because you will have inevitable inconsistencies in 'framing' the shots, such that the playback won't be smooth.
2) The best way to do this, I suspect, will be to shoot it as video, using some sort of rig that allows you to keep the camera fixed while rotating the object.
3) I'm pretty sure this is how most 'professional' grade product carousel type presentations work.
4) Even then you will have more image frames than you need -- not sure whether you plan to embed the image files in the app or download them on demand -- but that is also a consideration in terms of how much downsampling you'll need to do to reduce frames/file size.
Suggestion
Look at shooting these as video (somewhat as described above), then downsampling and removing excess frames using a video editor. Then you could use AVFoundation for playback and use your gestures to 'scrub' through the video frames. I worked on something like this for HTML playback at a large company, and I can assure you it was done with video.
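To make the 'scrub with gestures' idea concrete, here is a minimal sketch of the AVFoundation side: a horizontal pan translated into a frame-accurate seek on an AVPlayer. The "product.mov" asset name and the drag-to-seconds mapping are placeholders you would tune:

```swift
import AVFoundation
import UIKit

// Minimal sketch: scrub an AVPlayer with a horizontal pan.
// "product" is a placeholder asset name; the drag-to-time mapping needs tuning.
final class ProductSpinViewController: UIViewController {

    private var player: AVPlayer!
    private var playerLayer: AVPlayerLayer!
    private var timeAtPanStart = CMTime.zero

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let url = Bundle.main.url(forResource: "product", withExtension: "mov") else { return }
        player = AVPlayer(url: url)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)

        view.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(pan(_:))))
    }

    @objc private func pan(_ gesture: UIPanGestureRecognizer) {
        if gesture.state == .began {
            timeAtPanStart = player.currentTime()
        }
        // Map horizontal drag distance to seconds of "rotation" (arbitrary scale).
        let seconds = Double(gesture.translation(in: view).x) / 100.0
        let target = CMTimeAdd(timeAtPanStart, CMTime(seconds: seconds, preferredTimescale: 600))

        // Zero tolerance gives frame-accurate scrubbing at the cost of some seek latency.
        player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
    }
}
```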
Alternatively, if video won't work for you, your sprite-sheet solution might work (consider using SpriteKit). But keep in mind what I said about trying to keyframe one-off camera shots together -- it just won't work well. Maybe a compromise would be to shoot static images, but do so by fixing the camera and rotating the object at very specific increments. That could work as well, I suppose, but you will need to be very careful about light and other atmospherics. It doesn't take much variation at all to be detectable to the human eye, making the whole presentation seem strange. Good luck.
A coder from my company did something like this before using 360 images of an object, and it worked just great, but it didn't have zoom. Maybe you could add zoom by adding a pinch gesture recognizer and placing the image view inside a scroll view to zoom in on the static image.
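As a small sketch of that scroll-view zoom idea: put the image view inside a UIScrollView, set a zoom range, and return the image view from viewForZooming (the image name below is a placeholder):

```swift
import UIKit

// Small sketch of pinch-to-zoom by hosting the product image in a UIScrollView.
final class ZoomableImageViewController: UIViewController, UIScrollViewDelegate {

    private let scrollView = UIScrollView()
    private let imageView = UIImageView(image: UIImage(named: "product_frame_000")) // placeholder name

    override func viewDidLoad() {
        super.viewDidLoad()

        scrollView.frame = view.bounds
        scrollView.delegate = self
        scrollView.minimumZoomScale = 1.0
        scrollView.maximumZoomScale = 4.0

        imageView.frame = scrollView.bounds
        imageView.contentMode = .scaleAspectFit

        scrollView.addSubview(imageView)
        view.addSubview(scrollView)
    }

    // UIScrollView drives the pinch gesture; we just tell it which view to zoom.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return imageView
    }
}
```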
This scenario sounds like what you really need is a simple 3D model loader library, or to write it in OpenGL yourself. Pan and zoom behavior is really basic once you make the jump to 3D, so it should be easy to find lots of examples.
All depends on your situation and time constraints :)
I have developed a Canvas prototype of a game (kind of), and even though I have it running at a decent 30 FPS in a desktop browser, the performance on iOS devices is not what I hoped (lots of unavoidable pixel-level manipulation in nested x/y loops, already optimized as far as possible).
So, I'll have to convert it to a mostly native ObjC app.
I have no knowledge of ObjC or Cocoa Touch, but a solid generic C background. Now, I guess I have two options -- can anyone recommend one of them, and say whether they are at all possible?
1) Put the prototype into a UIWebView, and JUST do the pixel buffer filling loops in C. Can I get a pointer to a Canvas pixel array living in a web view "into C", and would I be allowed to write to it?
2) Make it all native. The caveat here is that I use quite a few 2D drawing functions too (bezierCurveTo etc.), and I wouldn't want to recode those, or find drawing libraries. So, is there a Canvas-compatible drawing API available in iOS that can work outside a web view?
Put the prototype into a UIWebView, and JUST do the pixel buffer filling loops in C
Nah. Then just embed a web view into your app and continue coding in JavaScript. It's already JITted so you don't really have to worry about performance.
Can I get a pointer to a Canvas pixel array living in a web view "into C"
No, not directly. If you are a hardcore assembly hacker and reverse engineer, then you may be able to do it by looking at the memory layout and call stack of a UIWebView drawing to a canvas, but it's just nonsense.
and would I be allowed to write to it?
Once you program your way down to that, I'm sure you would.
Make it all native.
If you wish so...
The caveat here is that I use quite a few 2D drawing functions too (bezierCurveTo etc.), and I wouldn't want to recode those, or find drawing libraries.
You wouldn't have to do either of those; instead you could just read the documentation of the uber awsum graphix libz called CoreGraphics (a minimal sketch follows after this answer). It comes by default with iOS (as well as OS X, for the record).
So, is there a Canvas-compatible drawing API available in iOS that can work outside a web view?
No, I don't know of one.
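For a sense of how the canvas calls map over to CoreGraphics: bezierCurveTo has a direct counterpart in CGContext's addCurve(to:control1:control2:). A minimal sketch inside a UIView's draw method; the coordinates are arbitrary:

```swift
import UIKit

// Minimal sketch: the Core Graphics equivalent of canvas moveTo/bezierCurveTo/stroke.
final class CanvasLikeView: UIView {

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }

        ctx.setStrokeColor(UIColor.black.cgColor)
        ctx.setLineWidth(2)

        // canvas: ctx.moveTo(20, 100); ctx.bezierCurveTo(60, 20, 140, 180, 180, 100); ctx.stroke();
        ctx.move(to: CGPoint(x: 20, y: 100))
        ctx.addCurve(to: CGPoint(x: 180, y: 100),
                     control1: CGPoint(x: 60, y: 20),
                     control2: CGPoint(x: 140, y: 180))
        ctx.strokePath()
    }
}
```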
Translating your HTML5 code to Objective-C sounds like a daunting task. How about a third option where you don't have to change your JavaScript code?
http://impactjs.com/ejecta
Ejecta is like a Browser without the Browser. It's specially crafted for Games and Animations. It has no DIVs, no Tables, no Forms – only Canvas and Audio elements. This focus makes it fast.
Yes, it's open source
https://github.com/phoboslab/Ejecta
I have created an app that records and plays sound, and I am looking for a way of showing a simple wave representation of the recorded sound; no animation is necessary, just a simple graph.
It would also be nice if it were possible to select a subset of the wave and, of course, even nicer to play that section as well.
To sum up, what I'm looking for:
A way of graphically representing a recorded sound as a wave (e.g. as seen in Audacity)
A way of graphically selecting a subset of the wave representation.
And to clarify a bit further of what I'm looking for:
If there is a lib for this I'd be insanely happy :)
A hint on what components to best use for handling the graph drawing.
A tip on how to handle the selection within the graphical component.
I already did this in another application and have been struggling with it for a while ...
You divide the number of samples the audio file has by the number of pixels you have available to display the graph. This gives you a chunk size.
For each of these "buckets" you calculate the min and max value and display them in relation to the sample resolution used.
I can provide further examples if needed.
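In code, that bucketed min/max pass is only a few lines. A sketch in Swift, assuming you already have the decoded samples as an array of floats in the -1...1 range (how you decode them, e.g. via AVAudioFile, is up to you):

```swift
// Sketch: reduce raw samples to one (min, max) pair per horizontal pixel.
// Assumes `samples` already holds the decoded audio as floats in -1...1.
func minMaxBuckets(samples: [Float], pixelWidth: Int) -> [(min: Float, max: Float)] {
    guard pixelWidth > 0, !samples.isEmpty else { return [] }

    let chunkSize = max(1, samples.count / pixelWidth)
    var buckets: [(min: Float, max: Float)] = []

    for start in stride(from: 0, to: samples.count, by: chunkSize) {
        let chunk = samples[start..<min(start + chunkSize, samples.count)]
        buckets.append((chunk.min() ?? 0, chunk.max() ?? 0))
    }
    return buckets
}
```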
Regarding the graphics stuff:
(I am not an iOS developer but Mac programming isn't that much different I think.)
Just create a subclass of NSView (should be UIView on iOS) and override the drawRect method.
Then just create a function to which you pass an array of values for your file, and draw a bunch of lines on the screen. There's no black magic here!
This is really nothing you would need a library for!
And, as another positive aspect : if you keep it generic enough you can always reuse it.
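Tying it together, here is a bare-bones sketch of such a custom view: one vertical line per (min, max) bucket, much like Audacity's overview. It reuses the hypothetical minMaxBuckets helper sketched earlier:

```swift
import UIKit

// Bare-bones sketch of the custom view: one vertical line per (min, max) bucket.
final class WaveformView: UIView {

    // Precomputed with something like the minMaxBuckets helper above.
    var buckets: [(min: Float, max: Float)] = [] {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(), !buckets.isEmpty else { return }

        ctx.setStrokeColor(UIColor.blue.cgColor)
        ctx.setLineWidth(1)

        let midY = rect.midY
        let halfHeight = rect.height / 2

        for (x, bucket) in buckets.enumerated() {
            // Map the -1...1 sample range onto the view's height.
            let top = midY - CGFloat(bucket.max) * halfHeight
            let bottom = midY - CGFloat(bucket.min) * halfHeight
            ctx.move(to: CGPoint(x: CGFloat(x), y: top))
            ctx.addLine(to: CGPoint(x: CGFloat(x), y: bottom))
        }
        ctx.strokePath()
    }
}
```

Selection could then be handled by tracking a pan gesture over this view, drawing a highlight rectangle on top, and mapping the selected pixel range back to sample indices for playback.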