I am developing a camera application that can decode QR codes in real time while the camera is on.
When I open the camera, I set a timer that fires about every 3 seconds to capture the screen with the Display.screenshot() method and then decode the capture with the ZXing library while the camera is on. Sometimes it captures the QR code and decodes it successfully, but most of the time the picture is hard to decode. I think the problem is the camera: I cannot use auto focus, so the captured picture is blurry. I want to know how to use auto focus with the camera.
My application targets OS version 5.0 and above.
Could anyone help me or suggest another solution?
Thank you so much
Try BBBarcodeLib, available for download here: http://aliirawan-wen.blogspot.com/2011/05/barcode-scanner-for-blackberry-os-50.html.
Note that for OS 6 devices you can use the built-in BlackBerry barcode scanning library (built on Google ZXing).
The main problem is likely that 3-second pause. You should be decoding frames as fast as you can, which should be many times per second. This will get you a successful scan much faster.
You can and should integrate the latest ZXing library instead of the one built into RIM's OS if you can, as it has small improvements that will help.
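To make that concrete, here is a minimal sketch (not code from either answer) of decoding a single captured frame with the ZXing core Java API. It assumes a recent ZXing build where RGBLuminanceSource takes a pixel array, and that `frame` is the Bitmap you already get from Display.screenshot(); the FrameDecoder class name is just for illustration.

    // Minimal sketch: feed one captured frame to ZXing and return the decoded
    // text, or null if this frame contains no readable code.
    import net.rim.device.api.system.Bitmap;

    import com.google.zxing.BinaryBitmap;
    import com.google.zxing.MultiFormatReader;
    import com.google.zxing.RGBLuminanceSource;
    import com.google.zxing.ReaderException;
    import com.google.zxing.Result;
    import com.google.zxing.common.HybridBinarizer;

    public final class FrameDecoder {
        private final MultiFormatReader reader = new MultiFormatReader();

        /** Try to decode a single frame; call this as often as you can, not on a 3-second timer. */
        public String decodeFrame(Bitmap frame) {
            int width = frame.getWidth();
            int height = frame.getHeight();
            int[] pixels = new int[width * height];
            frame.getARGB(pixels, 0, width, 0, 0, width, height);

            RGBLuminanceSource source = new RGBLuminanceSource(width, height, pixels);
            BinaryBitmap binary = new BinaryBitmap(new HybridBinarizer(source));
            try {
                Result result = reader.decode(binary);
                return result.getText();
            } catch (ReaderException e) {
                // No barcode found in this frame; just try the next one.
                return null;
            } finally {
                reader.reset();
            }
        }
    }

Call decodeFrame() on every frame you grab; a null return just means that frame had no readable code, so move on to the next one.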
Related
The question "How to capture photo automatically in android phone?" is about taking a picture automatically, without any user interaction. This feature is needed in many applications. For example, when you take a picture of a document, you expect the camera to take it automatically once the full document (or all four of its corners) is inside the frame. So my question is: how can this be done on iPhone or iPad?
Recently I have been working with Cordova; does anyone know whether plugins already exist for this kind of camera operation? Thanks.
EDIT:
This operation will be done in an app that is given full access to the camera; the task is how to develop such an app.
Instead of capturing a photo, you should capture video frames. When a captured frame satisfies your requirements, stop capturing video and proceed.
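The answer doesn't name an API, and the question mentions both Android and iPhone/iPad; purely as one hedged illustration of the frame-by-frame idea, here is a sketch using the legacy android.hardware.Camera preview callback in Java (on iOS the equivalent would be an AVCaptureVideoDataOutput delegate). The isDocumentFullyVisible() check is a hypothetical placeholder for whatever detection your app needs, and the sketch assumes a preview surface has already been set on the camera.

    // Sketch of the "inspect video frames" approach with the legacy Android camera API.
    import android.hardware.Camera;

    public class AutoCapture implements Camera.PreviewCallback {
        private final Camera camera;

        public AutoCapture(Camera camera) {
            this.camera = camera;
        }

        public void start() {
            // Assumes setPreviewDisplay()/setPreviewTexture() was already called.
            camera.setPreviewCallback(this);   // receive every preview frame
            camera.startPreview();
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            Camera.Size size = cam.getParameters().getPreviewSize();
            // Analyze the NV21 frame; when it satisfies your condition, stop and proceed.
            if (isDocumentFullyVisible(data, size.width, size.height)) {
                cam.setPreviewCallback(null);  // stop receiving frames
                // ... keep `data`, or trigger a full-resolution Camera.takePicture() here
            }
        }

        private boolean isDocumentFullyVisible(byte[] nv21Frame, int width, int height) {
            // Hypothetical placeholder: e.g. run corner/edge detection and require
            // all four document corners to be in view.
            return false;
        }
    }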
I am using LiveCode on a Mac with mergAV enabled. I would like to be able to do two things:
1) Take a still image from the Camera Roll and convert it to video (a 5-second clip),
and
2) Capture a portion of my app's iPad screen and convert it to a 5-second clip.
Thank you!
There are a couple of issues here.
The subject says iPhone/Android, and mergAV only supports iOS and OS X, so we won't be able to get this working on Android any time soon.
The second issue, which applies to both #1 and #2, is that mergAV doesn't yet support creating a video from an image. It's possible to implement, though, so if you want to discuss that please contact me via mergExt.com.
Currently I am using the ZBar reader for decoding QR codes, and it is working fine. The problem occurs when a QR code is densely packed: ZBar takes some time to read the content, and because of that delay the camera stays open, which annoys some users. So is there any way to capture the QR code as an image, store it locally, and decode the stored image? That would let the camera close immediately, so the user won't see it open for a long time. Thanks in advance.
You didn't say anything about platform or language, but the answer is "yes". See these examples.
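The linked examples aren't reproduced here, but purely as a hedged illustration of the shape of that approach (save the frame, close the camera, then decode the stored image), here is a sketch using ZXing's Java SE classes rather than ZBar; the file name is just an example.

    // Decode a QR code from an image already saved on disk, with no camera involved.
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    import com.google.zxing.BinaryBitmap;
    import com.google.zxing.LuminanceSource;
    import com.google.zxing.MultiFormatReader;
    import com.google.zxing.NotFoundException;
    import com.google.zxing.Result;
    import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
    import com.google.zxing.common.HybridBinarizer;

    public final class StoredImageDecoder {
        public static String decode(File imageFile) throws Exception {
            BufferedImage image = ImageIO.read(imageFile);
            LuminanceSource source = new BufferedImageLuminanceSource(image);
            BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
            try {
                Result result = new MultiFormatReader().decode(bitmap);
                return result.getText();
            } catch (NotFoundException e) {
                return null; // no QR code found in the stored image
            }
        }

        public static void main(String[] args) throws Exception {
            // "captured-qr.png" is an example path for the frame you saved locally.
            System.out.println(decode(new File("captured-qr.png")));
        }
    }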
I have integrated the ZXing library for QR code decoding on BlackBerry OS 6.
I want to know how small an image the QR code can be read from; is there a minimum size? I have tried this and it is not working for a 2.5 x 2.5 cm image.
The device has a 5 MP camera.
EDIT: I have checked that the default camera view looks fine, but the camera view opened through the ZXing library does not look good, so is there any way to change the quality of the camera view?
Please help me.
As well as the size of the code, you also need to factor in how far away the code is from the camera and whether the camera supports macro-focus mode.
I would suggest using a larger code.
I am a complete beginner in Objective-C and iOS programming. I spent a month figuring out how to show a 3D model using OpenGL ES (version 1.1) on top of the live camera preview using AVFoundation. I am building a kind of augmented reality application on the iPad: I process the input frames and show a 3D object overlaid on the camera preview in real time. That part went fine because there are so many sites and tutorials about these things (thanks to this website as well).
Now I want to capture the whole screen (the model with the camera preview as the background) as an image and show it on the next screen. I found a really good demonstration here: http://cocoacoderblog.com/2011/03/30/screenshots-a-legal-way-to-get-screenshots/. It does everything I want to do. But, as I said before, I am a beginner and don't understand the whole project without a detailed explanation, so I have been stuck for a while because I don't know how to implement this.
Does anybody know a good tutorial or other resource on this topic, or have a suggestion about what I should learn in order to do this screen capture? It would help me a lot to move on.
Thank you in advance.
I'm currently attempting to solve this same problem to allow a user to take a screenshot of an augmented reality app. (We use Qualcomm's AR SDK plugged into Unity 3D to make our AR apps, which saved me from ever having to learn how to programmatically render OpenGL models.)
For my solution I am first looking at implementing the second answer found here: How to take a screenshot programmatically
Barring that I will have to re-engineer the "Combined Screenshots" method found in CocoaCoder's Screenshots app.
I'll check back in when I figure out which one works better.
Here are three very helpful links for capturing screenshots:
OpenGL ES View Snapshot
How to capture video frames from the camera as images using AV Foundation
How do I take a screenshot of my app that contains both UIKit and Camera elements
Enjoy