Is there any way to access the device camera from Java on a BlackBerry?
My goal is to be able to take a photo and use it within the application.
With Invoke you have to wait for the user to take a picture. It starts the native camera app and pushes that screen on top of the stack.
Maybe this is fine, but if you want more control, look at the Camera Demo in the RIM samples; it shows how to use the J2ME MMAPI classes (Player, VideoControl, etc.), so you can call the snapshot methods yourself instead of waiting for the user.
Take a look at the Invoke class. You can invoke the camera application as well as capture an image.
I have an app that uses ARKit to detect faces and send the coordinates of interest over the network, which works well. I would like this app to keep running in the background, still sending data over the network, while I use another app (almost) full screen.
The option 'Enable multiple windows' is activated in Info.plist, but as soon as I launch my other app, the ARKit app stops sending information (the app probably actually stops).
Is there a simple way to do this, and is it feasible at all? Thanks!
This is not possible at this point. Camera and AR features are disabled at the system level when apps are displayed in Slide Over or Split View.
I'd recommend displaying a warning message when Slide Over/Split View is in use, telling the user to switch the app to full screen. See this answer under a different question for details.
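There is no official API that says "I am in Slide Over / Split View"; a common heuristic is to compare the app window's bounds with the screen's. A minimal sketch of that warning, assuming UIKit and a view-controller context (the extension and method names here are illustrative, not an official API):

```swift
import UIKit

extension UIViewController {
    /// Heuristic: in Slide Over / Split View the app's window no longer
    /// fills the screen, so compare window bounds against screen bounds.
    var isFullScreen: Bool {
        guard let window = view.window else { return true }
        return window.bounds == window.screen.bounds
    }

    /// Call from viewDidLayoutSubviews() so the check re-runs on resizes.
    func warnIfNotFullScreen() {
        guard !isFullScreen else { return }
        let alert = UIAlertController(
            title: "Full screen required",
            message: "Face tracking is paused in Slide Over / Split View. Please use the app in full screen.",
            preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        present(alert, animated: true)
    }
}
```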
Is there an easy way, in Google Street View on iOS, to make a GMSPanoramaView's camera follow the device's orientation using data from its motion sensors?
If not, has anyone already done it and can share a code snippet that takes data from CoreMotion, perhaps manipulates it to create a GMSPanoramaCamera, and passes it to the GMSPanoramaView with animateToCamera:animationDuration:?
Any relevant Android code would also be useful.
Checking the Maps SDK for iOS Street View documentation, there is no built-in function for following the device orientation/gyroscope sensor.
According to Ziem's answer, you can try implementing this yourself. He also gives pointers on how to:
Set the camera orientation point of view
Animate the camera movements
Reference:
Blog
GitHub
Using the linked site, they successfully created a function that lets you browse Google Street View panoramas with your smartphone/tablet as if you were inside them, just by moving your phone like a window onto the world.
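To make the approach concrete, here is a minimal sketch that feeds CoreMotion attitude into the Street View camera via animate(to:animationDuration:) (the Swift form of animateToCamera:animationDuration:). The class name is illustrative, and the yaw-to-heading and pitch mappings are assumptions that would need tuning for interface orientation and drift; only the GMSPanoramaCamera/GMSPanoramaView and CoreMotion calls are real API.

```swift
import CoreMotion
import GoogleMaps  // provides GMSPanoramaView and GMSPanoramaCamera

final class PanoramaMotionController {
    private let motionManager = CMMotionManager()

    /// Drives the Street View camera from the device's attitude.
    func startFollowingDeviceOrientation(for panoramaView: GMSPanoramaView) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                               to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // Yaw around the vertical axis -> compass-style heading, in degrees.
            let heading = -attitude.yaw * 180.0 / .pi
            // Device tilt -> pitch, offset so holding the phone upright looks
            // at the horizon, clamped to Street View's valid range.
            let pitch = max(-90.0, min(90.0, attitude.pitch * 180.0 / .pi - 90.0))
            let camera = GMSPanoramaCamera(heading: heading, pitch: pitch, zoom: 1)
            panoramaView.animate(to: camera, animationDuration: 0.1)
        }
    }

    func stop() { motionManager.stopDeviceMotionUpdates() }
}
```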
I am working on an application (using Swift) which can scan barcodes. For this I am using RSBarcodes.
The issue I am dealing with is that I need to scan barcodes from an A4 paper sheet which is full of them. However, the capturing is too fast, and before I can focus on the right barcode the app captures the wrong one.
So basically I need to ensure that the device will not capture some barcode I don't want, but only the one I point at for a longer time. My only idea is to check whether the same barcode was captured, for example, 10 times in a row and assume that one is the right one. Is there a more elegant solution?
Thanks for any suggestions!
I'm not an iOS dev, so I don't know the specifics, but is there a way to activate the camera without the scanner? If so, you could overlay a "Scan" button on the camera view, so the user provides the trigger to capture the barcode once they have settled on a specific one.
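The "same barcode N times in a row" idea from the question is also workable and simple to express. A minimal sketch, with illustrative names rather than RSBarcodes API: feed it every raw read from your scan callback and only act when it returns a value.

```swift
/// Debounce helper: only accept a barcode once it has been read
/// `requiredHits` times consecutively.
final class BarcodeDebouncer {
    private let requiredHits: Int
    private var lastCode: String?
    private var hitCount = 0

    init(requiredHits: Int = 10) {
        self.requiredHits = requiredHits
    }

    /// Register one raw read; returns the code once it is stable.
    func register(_ code: String) -> String? {
        if code == lastCode {
            hitCount += 1
        } else {
            // A different code resets the streak.
            lastCode = code
            hitCount = 1
        }
        return hitCount >= requiredHits ? code : nil
    }
}
```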
I want to make an app as my master's thesis on iOS 7, but I need to know whether it can run the camera in the background. I need to record video after the user presses the home button (it should be a traffic cam for the car while the user uses another app, e.g. navigation). I know this was not possible in iOS 6, but iOS 7 has better support for background tasks... Is it possible? I would be grateful for every answer. Thank you.
There is a list of long-running tasks permitted in the background in Apple's documentation.
Camera access is not one of them. You will need to have your app in the foreground to use the camera.
I was wondering if there is a specific method called when we press the two buttons of the iPhone (Home button & power on/off) to take a screenshot. If so, I would like to know its name so I can use it in my code.
There used to be a UIGetScreenImage() function that you could use to capture the screen. Apple no longer allows use of that function in App Store apps, so you have a few other options. CALayer has a -renderInContext: method—Google it—that you can use to copy a view’s contents to a graphics context; this does not, however, work for OpenGL content, video, or live imagery from a device’s camera. I’m not sure about solutions for the first two, but for the latter—getting images from the camera—you’ll need to use the AVFoundation framework.
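As a concrete illustration of the -renderInContext: approach described above, here is a minimal Swift sketch, assuming UIKit. As noted, it will not capture OpenGL content, video, or live camera imagery.

```swift
import UIKit

/// Draws a view's layer into an image context and returns the result.
func snapshot(of view: UIView) -> UIImage? {
    // Scale 0 means "use the device's screen scale" (Retina-aware).
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    view.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```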
It is a system-level service for which the app never receives any notification or method call.
I believe that would be a native method, not accessible from the iPhone SDK. In what context are you going to use this? You might be looking for this: Take screenshot from code