DeepEnd (Cydia tweak)-like 3D background for iOS

I am trying to emulate a 3D background in one of the applications we are developing.
Check this video for what I am trying to do: http://www.youtube.com/watch?v=429kM-yXGz8
Here is what I am trying to do to emulate this 3D illusion in our application for iPad.
I have a RootView with 3 rounded buttons centered on the screen which animate in a circular motion.
At the bottom of the screen I have some banners (600×200) which keep rotating with a flip animation.
I also have some graphical text that is part of the background and contains the "Welcome" message.
All elements are individual graphics, so when the user moves the iPad we only move the background, based on the position of the iPad using the x, y, z values of the accelerometer.
The background moves accordingly, but this alone is not enough for a 3D illusion, so we decided to add shadows to the graphical elements (buttons, banners, text) and move the shadows according to the iPad's position.
However, the result is not convincing, and the accelerometer does not update its values when the user, standing and holding the iPad upright in front of their face, moves it left and right.
I was wondering if anyone has tried to achieve something similar with success, or knows of any resource on how to achieve this? I am also unsure whether using only the accelerometer will work or whether I should go with the gyroscope.

Using face detection to simulate a 3D effect has already been done (by me). You can download a complete sample from http://evict.nl/code/face-tracking; see the video on that page for a quick demo.

You should definitely use both: the accelerometer (movement) and the gyroscope (device angle). But for a true 3D effect you probably need to use the camera plus face detection.
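A minimal sketch of driving the background offset from Core Motion's fused device motion (the backgroundView outlet and maxOffset constant are placeholders, not from the original project). Because device motion fuses the accelerometer with the gyroscope, rotating the iPad left and right is reported through the attitude even when the gravity vector barely changes:

import UIKit
import CoreMotion

final class ParallaxViewController: UIViewController {
    // Hypothetical background view that sits behind the buttons/banners.
    @IBOutlet private weak var backgroundView: UIImageView!

    private let motionManager = CMMotionManager()
    private let maxOffset: CGFloat = 40  // how far the background may drift, in points

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard motionManager.isDeviceMotionAvailable else { return }

        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let attitude = motion?.attitude else { return }

            // Roll/pitch are in radians; depending on the app's orientation you may
            // want yaw instead. Map them to a small translation of the background.
            let x = CGFloat(attitude.roll)  * self.maxOffset
            let y = CGFloat(attitude.pitch) * self.maxOffset
            self.backgroundView.transform = CGAffineTransform(translationX: x, y: y)
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        motionManager.stopDeviceMotionUpdates()
    }
}

On iOS 7 and later, UIInterpolatingMotionEffect gives a similar tilt-driven parallax with far less code; the face-tracking sample linked in the other answer is what reproduces the head-coupled effect from the video.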

Related

Can you activate and deactivate image detection in arkit using Unity UI features?

Is it possible to turn image detection on and off inside an application using UI features from Unity? I want to be able to only use the image detection after selecting a toggle, but still be able to detect planes the entire time. When the toggle is deselected I want to be able to view the image through the camera with nothing happening, but still detect planes on the ground. Is there a good way to do this?
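For reference, in native ARKit (underneath Unity's ARFoundation) the toggle boils down to re-running the session with or without detectionImages while plane detection stays enabled. A hedged Swift sketch of that idea (the reference-image group name "GameImages" is made up):

import ARKit

final class ImageToggleViewController: UIViewController {
    @IBOutlet private weak var sceneView: ARSCNView!

    // Re-run the session whenever the toggle changes; plane detection stays on either way.
    func setImageDetection(enabled: Bool) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]

        if enabled {
            // "GameImages" is a hypothetical reference-image group in the asset catalog.
            configuration.detectionImages =
                ARReferenceImage.referenceImages(inGroupNamed: "GameImages", bundle: nil)
        } else {
            configuration.detectionImages = nil
        }

        // Running without reset options keeps existing anchors; only image detection changes.
        sceneView.session.run(configuration)
    }
}

In Unity itself the analogous approach would be to enable or disable the ARTrackedImageManager component from the toggle's handler while leaving the plane manager running.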

OpenCV Colour Detection Error

I am writing a script on the Raspberry Pi to detect the majority colour featured in a frame from a webcam, and I seem to be having an issue. The following image is me holding up my phone with a blank red image on it, yet I seem to be getting an orange colour instead.
Now when I angle the phone I do in fact produce the red colour expected.
I am not sure why this is the case.
I am using a Logitech C920 webcam that emits a blue light when activated, and I also have the monitor on. I am wondering whether the light from these two is causing the issue, and whether, when I angle the phone, these lights are no longer hitting it head-on and therefore no longer distorting the image.
I am still not heavily experienced in this area, so I would appreciate explanations and possible workarounds for my problem.
Thanks
There are a few things that can mess this up:
As you already mention, the light from the monitor and the camera.
The iPhone screen is a display, so flicker and sync might also come into play.
Reflection from the iPhone screen.
If your camera has automatic control for exposure and color balance etc., the picture quality can change as you move around.
I suggest using a colored piece of non-glossy paper so that you can remove the iPhone display's effects.

Create a container (fixed aspect ratio) for a landscape SpriteKit game, which can be played both on iPhone and iPad

I want to create a landscape iOS game with SpriteKit which can be played on both iPhone and iPad. Because of the different aspect ratios, I thought it would be useful to have a fixed frame for the actual game. I would like to treat this game area like a separate container. The rest of the visible area should be filled with a ground, which the character runs on, and a ceiling. Depending on the device, more of the ground and ceiling will be visible on the iPad and less on the iPhone.
The anchor point for the game as reference for all upcoming nodes should be in the lower left corner of the game area.
In the following picture I tried to draw my thoughts:
Can you tell me what the best way to proceed is, and which features of SpriteKit are worth taking a look at? I have never worked with scaleModes or anything like that before.
Thank you
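One way to set this up, sketched below with made-up numbers (a 2048×1536 scene and a 2048×1152 game area): give the scene a fixed design size matching the iPad aspect ratio, use the .aspectFill scale mode so the width is fixed on every device and the overflow is cropped top and bottom, and hang all gameplay nodes off a container node whose origin is the lower-left corner of the game area.

import SpriteKit

final class GameScene: SKScene {
    // Made-up design numbers: the scene matches the iPad aspect ratio (4:3),
    // the game area is a 16:9 strip centred vertically inside it.
    static let sceneSize    = CGSize(width: 2048, height: 1536)
    static let gameAreaSize = CGSize(width: 2048, height: 1152)

    /// Container whose origin is the lower-left corner of the game area.
    let gameArea = SKNode()

    override func didMove(to view: SKView) {
        gameArea.position = CGPoint(
            x: 0,
            y: (GameScene.sceneSize.height - GameScene.gameAreaSize.height) / 2
        )
        addChild(gameArea)

        // Ground and ceiling are placed in scene coordinates, outside the container.
        // All gameplay nodes go into `gameArea`, with coordinates relative to its
        // lower-left corner, e.g.:
        let player = SKSpriteNode(color: .red, size: CGSize(width: 80, height: 80))
        player.position = CGPoint(x: 200, y: 40)   // 200 pt in, 40 pt above the game-area floor
        gameArea.addChild(player)
    }
}

// Presenting it (e.g. in the view controller, where `skView` is your SKView):
// let scene = GameScene(size: GameScene.sceneSize)
// scene.scaleMode = .aspectFill   // fixed width, crop top/bottom overflow per device
// skView.presentScene(scene)

With .aspectFill the scene is scaled so its width always fills the view, so the iPad (4:3) shows the full 1536-point height while an iPhone (16:9 or wider) crops part of the ground and ceiling, which matches the behaviour described in the question.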

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written a long time back, so I needed to make some modifications to use it the way I want to in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes since I wanted to support different screen sizes. I also changed the dimensions of the preview layer to be full screen.
The session preset is set to AVCaptureSessionPresetPhoto for iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, faces get detected, but there seems to be an error in the coordinates where the rectangles are drawn.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem is then, the camera preview is not the desired size which is the full screen.
So now I am wondering why this happens. I notice a function in SquareCam, videoPreviewBoxForGravity, which processes the gravity type and seems to make adjustments:
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here: the frame size stays the same regardless of the gravity type.
Finally, I read somewhere else that when setting the gravity to AspectFill, part of the feed gets cropped, which is understandable, similar to a UIImageView's scaleAspectFill.
My question is: how can I make the right adjustments so that this app works for any video gravity and any size of preview layer?
I have had a look at some related questions; for example, CIDetector give wrong position on facial features seems to have a similar issue, but it does not help.
Thanks in advance.
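One way to sidestep reworking SquareCam's videoPreviewBoxForGravity math (a sketch, not the sample's own approach, and the outlet/property names are assumptions): let an AVCaptureMetadataOutput report the faces and ask the preview layer itself to convert the coordinates with transformedMetadataObject(for:), which takes the current videoGravity into account.

import AVFoundation
import UIKit

final class SmileCameraViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    private let session = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer!
    private var faceBoxes: [UIView] = []   // simple red rectangles drawn over detected faces

    override func viewDidLoad() {
        super.viewDidLoad()
        session.sessionPreset = .photo

        guard
            let camera = AVCaptureDevice.default(for: .video),
            let input = try? AVCaptureDeviceInput(device: camera),
            session.canAddInput(input)
        else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.face]

        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill   // full-screen preview
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)

        // In production, start the session on a background queue.
        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        faceBoxes.forEach { $0.removeFromSuperview() }
        faceBoxes.removeAll()

        for object in metadataObjects where object.type == .face {
            // The preview layer converts from metadata (image) space into its own
            // coordinate space, honouring whatever videoGravity is currently set.
            guard let face = previewLayer.transformedMetadataObject(for: object) else { continue }
            let box = UIView(frame: face.bounds)
            box.layer.borderColor = UIColor.red.cgColor
            box.layer.borderWidth = 2
            view.addSubview(box)
            faceBoxes.append(box)
        }
    }
}

This only gives face rectangles; if you still need the smile flag from CIDetector, the same gravity-aware conversion is available via previewLayer.layerRectConverted(fromMetadataOutputRect:), once you have normalised your CIDetector rectangle into metadata-output coordinates.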

iOS: how to detect that an iPad is put in a special frame

I have programmed an iPad application whose behaviour I would like to change when it is put in a wooden frame (other materials could be added). To simplify things, the background should change whenever the iPad is inside this frame, and there must be no tap/touch interaction: just putting the iPad inside the frame.
Of course, we could program a specific gesture on the screen, like double tapping or swiping, but that is not the solution we are looking for.
Another thought has been to detect the lack of movement for a certain amount of time, but that would not guarantee that the iPad is inside the frame.
I have thought about interacting with magnets (thinking of Smart Covers) and the sleep sensor on the right side of the iPad, but I don't know how to do it.
I cannot see any other useful sensor.
Any suggestion?
A combination of accelerometer and the camera seems like an idea worth trying out:
Scan the accelerometer data to detect a spike followed by a flat line (= putting the iPad into the frame, then resting).
After detecting the motion event, use the back camera (maybe combined with the flash) to detect a pattern image fixed inside of the frame for this purpose. It might be necessary to put the pattern into a little hole to create at least a blurry image.
The second step is there to distinguish the frame from any other surface the iPad might be placed upon.
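A rough Swift sketch of the accelerometer half of that idea (the class name is made up, and the thresholds and timings are guesses to be tuned against real recordings):

import Foundation
import CoreMotion

/// Detects a "bump then stillness" pattern in the accelerometer stream,
/// i.e. the iPad being pushed into the frame and then left resting.
final class FramePlacementDetector {
    private let motionManager = CMMotionManager()

    // Made-up thresholds; tune them with real data.
    private let spikeThreshold: Double = 0.35   // deviation from 1 g that counts as a bump
    private let stillThreshold: Double = 0.02   // deviation from 1 g that counts as "resting"
    private let requiredStillDuration: TimeInterval = 2.0

    private var lastSpike: Date?
    private var stillSince: Date?

    func start(onPlaced: @escaping () -> Void) {
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 1.0 / 50.0

        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }

            // Total acceleration magnitude; roughly 1 g when the device is at rest.
            let magnitude = (a.x * a.x + a.y * a.y + a.z * a.z).squareRoot()
            let deviation = abs(magnitude - 1.0)

            if deviation > self.spikeThreshold {
                // A bump: remember it and restart the stillness timer.
                self.lastSpike = Date()
                self.stillSince = nil
            } else if deviation < self.stillThreshold, self.lastSpike != nil {
                let now = Date()
                if let since = self.stillSince {
                    if now.timeIntervalSince(since) >= self.requiredStillDuration {
                        self.motionManager.stopAccelerometerUpdates()
                        onPlaced()
                    }
                } else {
                    self.stillSince = now
                }
            } else {
                self.stillSince = nil
            }
        }
    }
}

The onPlaced callback would then trigger the second step, the camera/pattern check that distinguishes the frame from any other surface.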
