Taking and Saving a Picture in iOS 6

This is the first time I've needed to use the camera since iOS 6, and it seems as though some things have been deprecated. I did some research and couldn't find anything, but I know it's out there. I just need to take a picture, or get one from the camera roll, and save it to an image view. Thanks to anyone who has helpful information.

All the helpful information is in the Apple documentation.
http://developer.apple.com/library/ios/#documentation/uikit/reference/UIImagePickerController_Class/UIImagePickerController/UIImagePickerController.html
Including the deprecated stuff (see Appendix A).
A higher-level overview is the Camera Programming Topics guide:
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/CameraAndPhotoLib_TopicsForIOS/Introduction/Introduction.html#//apple_ref/doc/uid/TP40010400
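For reference, here's a minimal sketch of the flow those docs describe, using UIImagePickerController (Swift shown here with the current API names; the delegate pattern is the same in Objective-C). The `imageView` outlet is just an assumed placeholder:

```swift
import UIKit

class PhotoViewController: UIViewController,
                           UIImagePickerControllerDelegate,
                           UINavigationControllerDelegate {

    @IBOutlet var imageView: UIImageView!   // assumed outlet in your storyboard

    // Present the picker, falling back to the photo library if no camera is available.
    func presentPicker(useCamera: Bool) {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = (useCamera && UIImagePickerController.isSourceTypeAvailable(.camera))
            ? .camera : .photoLibrary
        present(picker, animated: true)
    }

    // Receive the chosen/taken picture, show it, and optionally save it to the camera roll.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            imageView.image = image
            if picker.sourceType == .camera {
                // Save a freshly captured photo to the saved photos album.
                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
            }
        }
        picker.dismiss(animated: true)
    }
}
```

Note that on current iOS versions the camera source also requires an NSCameraUsageDescription entry in Info.plist.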

Related

SwiftUI ARKit measurements

Sorry, I am pretty inexperienced with ARKit. I am working on an app that will have more features later, but the first step would basically be recreating the Measure app that is included with iOS. I have looked at the documentation Apple provides, and most of it covers things like face tracking, object detection, or image tracking, so I wasn't sure exactly where to start. The existing code I have is written in SwiftUI, if that matters. Thank you!
Understand that it can be quite confusing in the beginning. I would recommend walking through the tutorial at raywenderlich.com. The tutorial from Codestars on YouTube is also very good if you prefer to listen and watch instead of reading. Both walk through a lot of important parts of ARKit, so I really recommend them. After that you will probably have a good understanding, and you could watch Apple's WWDC 2019 talk, What's new in ARKit 3.
Hope I understood your question correctly and please reach out if you have any questions or other concerns.
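If it helps as a starting point, the core of a Measure-style feature is just raycasting from two screen taps and taking the distance between the resulting world positions. Here's a rough sketch, assuming a RealityKit ARView that is already running a world-tracking session (the names `arView` and `measuredPoints` are only illustrative; in SwiftUI you would bridge the taps through UIViewRepresentable):

```swift
import ARKit
import RealityKit
import simd

final class MeasurementHelper {
    private var measuredPoints: [SIMD3<Float>] = []

    // Call with a screen tap location; raycast against estimated planes
    // and remember the hit's world-space position.
    func addPoint(at screenPoint: CGPoint, in arView: ARView) {
        guard let result = arView.raycast(from: screenPoint,
                                          allowing: .estimatedPlane,
                                          alignment: .any).first else { return }
        let t = result.worldTransform.columns.3
        measuredPoints.append(SIMD3<Float>(t.x, t.y, t.z))
    }

    // Distance in meters between the last two tapped points, if available.
    var lastDistance: Float? {
        guard measuredPoints.count >= 2 else { return nil }
        let a = measuredPoints[measuredPoints.count - 2]
        let b = measuredPoints[measuredPoints.count - 1]
        return simd_distance(a, b)
    }
}
```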

How to Edit CMPixelBuffer by pixel

I'm an undergraduate student and I'm programming a HumanSeg iPhone app. I have to read raw frames from the camera, but I found the code in the official guide isn't clear enough for me to understand.
When I get the frames (in CMPixelBuffer format) from the camera, I need to modify them (I mean I have to do some padding and resizing, and turn them into CVPixelBuffer format, to feed them to a CoreML MobileNet).
I've been searching for a solution for weeks, but unfortunately I got nothing. In the official guide I was told that these buffers "don't offer direct access to inner data".
I even tried to use a context to draw the CMPixelBuffer into a UIImage and draw it back into a CVPixelBuffer, and I found this process terribly slow, just as the official guide says. Since I'm doing video processing, this method is unacceptable.
What am I supposed to do or read? I really appreciate your help.
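For context: what the camera callback actually delivers is a CMSampleBuffer, and the pixel data inside it is a CVPixelBuffer, which can be read directly once its base address is locked. A rough sketch (assuming a BGRA capture format, which you have to request via the capture output's videoSettings; padding/resizing for a CoreML model is usually done with vImage or Core Image rather than by hand):

```swift
import AVFoundation
import CoreVideo

// Inspect one camera frame's raw pixel data.
func inspectFrame(_ sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // The base address is only valid while the buffer is locked.
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)

    if let base = CVPixelBufferGetBaseAddress(pixelBuffer) {
        let bytes = base.assumingMemoryBound(to: UInt8.self)
        // Assuming BGRA: byte 0 of row 0 is the blue channel of the first pixel.
        let firstPixelBlue = bytes[0]
        print(width, height, bytesPerRow, firstPixelBlue)
    }
}
```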

How to detect an image in a newspaper and play a video relevant to it using augmented reality?

I have planned to detect an image in a newspaper and play the video relevant to it. I have seen several newspaper-reading AR apps that include this feature, but I couldn't find out how to do it. How can I do it?
I don't expect any code, but I'd like to know what steps I should follow to do this. Thank you.
You need to browse through the available marker-based AR SDKs - such SDKs let you define in advance the database of images you would like to detect and respond to, and once any of these images is detected at runtime, you get some kind of event with data on the detected image.
Vuforia is considered a good one and it has good samples, so it should be easier to start with. You should also check out Kudan, and there are more.
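As one concrete illustration of that detect-and-respond workflow (alongside SDKs like Vuforia and Kudan), ARKit's built-in image tracking follows the same pattern: register reference images up front, then react when an anchor for one of them appears. A rough sketch, where the asset-catalog group name "NewspaperPages" is just an assumed placeholder:

```swift
import ARKit
import SceneKit

final class NewspaperARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Register the images to detect (defined in an AR Resource Group).
        let configuration = ARImageTrackingConfiguration()
        if let refs = ARReferenceImage.referenceImages(inGroupNamed: "NewspaperPages",
                                                       bundle: nil) {
            configuration.trackingImages = refs
        }
        sceneView.session.run(configuration)
    }

    // Called when one of the registered newspaper images appears in the camera feed.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let name = imageAnchor.referenceImage.name ?? "unknown"
        // Here you would look up and play the video associated with `name`,
        // e.g. on a plane sized to imageAnchor.referenceImage.physicalSize.
        print("Detected newspaper image: \(name)")
    }
}
```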

iOS 8 -- Capture Screen Video

I am trying to capture the on-screen activity of my app as a video (one that I can save/upload to YouTube).
There are many others who want to do this, but the existing answers are generally sparse; there's no in-depth explanation of how to do it or why it can't be done.
There's a paid (and possibly sketchy?) option here.
There's this related, but again, not totally clear SO answer about taking lots of screenshots: link.
There's a Smule app called MadPad HD that "records" the user's actions and stitches them together (it doesn't actually capture the screen). Here's the output of a stitching: link.
My questions are as follows:
1. Is capturing the screen's video output and turning it into a video actually possible?
2. If not, is taking lots of screenshots and turning them into a video feasible (performance-wise)?
3. If 1 and 2 are not possible, is that because of device constraints or because Apple doesn't want it?
Thanks!
Perhaps you've already found an answer for this, but I thought I'd answer in case anyone else is interested: with iOS 9 this will, of course, be possible through Apple's new ReplayKit. But if you need it sooner (and with backwards compatibility) there are a couple of alternatives that I know of: Kamcord and Everyplay. Both let your users record video and share it through multiple channels, YouTube included. Both should be SpriteKit-compatible and easy to integrate (at least according to their websites!). Hope this helps!
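For the ReplayKit route on iOS 9 and later, the basic flow looks roughly like this (a minimal sketch, not a drop-in solution):

```swift
import ReplayKit
import UIKit

final class ScreenCaptureHelper: NSObject, RPPreviewViewControllerDelegate {
    private let recorder = RPScreenRecorder.shared()

    // Begin recording the app's screen (the system asks the user for permission).
    func start() {
        guard recorder.isAvailable else { return }
        recorder.startRecording { error in
            if let error = error { print("Could not start recording: \(error)") }
        }
    }

    // Stop recording and present Apple's preview/share sheet,
    // from which the user can save or share the video.
    func stop(presentingFrom viewController: UIViewController) {
        recorder.stopRecording { [weak self] previewController, error in
            guard let self, let preview = previewController, error == nil else { return }
            preview.previewControllerDelegate = self
            viewController.present(preview, animated: true)
        }
    }

    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        previewController.dismiss(animated: true)
    }
}
```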

iPad and openFrameworks

Can someone point me in the right direction for learning how to use openFrameworks to develop an iPad app? Perhaps some good tutorials; I can't seem to find any good documentation.
The openFrameworks docs are quite outdated, but you can discover OF through the examples. Just download the iPhone package here: http://www.openframeworks.cc/download and follow the instructions in the included readme. A good start is to get the examples running on your device and then modify them. If you have any further questions, the people here --> http://forum.openframeworks.cc/ will be happy to help you out.
For a more in-depth look at openFrameworks, see the unofficial doxygen docs here --> http://ofxfenster.undef.ch/doc/
Getting OF running on the iPad is actually pretty much the same as running it on the iPhone.
Have you got it running before? If not, the first thing to know is that you need to pay Apple $99 if you want to run it on a real device; otherwise it's free to try in the simulator.
There are instructions on the OF site for the first run; just go through them, since the complicated stuff only needs to be done once:
http://www.openframeworks.cc/setup/iphone/
(The guide is not up to date at all, but it's pretty much the same process with minor UI differences.)
Any iOS OF example should run on the iPad the same way it does on the iPhone, but to get native iPad resolution you'll have to change it manually: in the target's General settings, under Deployment Info, change the Devices dropdown to iPad.
Try it with any of the iOS examples. If you want to bring over code from a Mac version, just make a copy of any iOS example and paste the code into the appropriate function; they are pretty much the same except for mouse events vs. touch events, which differ a bit in logic, but it's not too hard to get used to. Basically, touch events give you touch.x/touch.y instead of mouseX/mouseY (and touch events are local to each callback, so you may need other variables to pass the values elsewhere).
I don't have a forum link, but there was an openFrameworks forum question on this just last week, and folks posted a number of sites that have good examples/tutorials. Here's one on doing pixel operations for graphic effects:
http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10
