Prevent Roomle Camera from resetting - roomle

As can be seen in the example provided in the tutorial, the Roomle configurator demonstrates different behaviors when the user has zoomed in and a parameter is changed:
If the user clicks on the component and then scrolls to zoom in, the parameters of the component can be changed and the camera will stay in place.
If the user does not select the component first and just zooms in, the camera will reset to its original position if a parameter is changed.
I want to programmatically control camera movement and parameter selection without direct user interaction with the Roomle configurator (I use a custom GUI), and this second behavior is a real problem because the camera keeps jumping back.
I've tried using RoomleConfigurator._sceneHelper._cameraControl.lock() which successfully prevents manual camera movement but it will still reset on a parameter change.
How can I achieve the first behavior, where the camera stays locked in place?

It is currently not possible to deactivate this behaviour (camera reset on parameter change).
Also a word of warning regarding the code RoomleConfigurator._sceneHelper._cameraControl.lock(). Functions which start with an underscore (_) are private and may be subject to change in a future release.
Here you can find the documentation on how to move the camera using the official API:
https://docs.roomle.com/web/guides/tutorial/configurator/07_recipes.html#move-camera
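Since the reset itself can't be disabled, one workaround is to remember the camera position yourself and re-apply it through the documented move-camera recipe right after each parameter change. A minimal sketch follows; note that `setParameter` and `moveCamera` (and the shape of its argument) are assumptions standing in for whatever the official API in the linked recipe actually exposes, not verified Roomle identifiers.

```typescript
// Sketch: re-apply a remembered camera position after each parameter
// change, instead of letting the reset stand.
// ASSUMPTIONS: configurator.setParameter and configurator.moveCamera
// are placeholders for the real, documented API calls.
interface CameraPosition {
  yaw: number;      // horizontal angle, degrees
  pitch: number;    // vertical angle, degrees
  distance: number; // distance from the target
}

let lastCamera: CameraPosition = { yaw: 45, pitch: 30, distance: 2.5 };

// Call this whenever your custom GUI moves the camera.
function rememberCamera(pos: CameraPosition): void {
  lastCamera = { ...pos };
}

async function setParameterKeepingCamera(
  configurator: any, // the configurator instance (assumed shape)
  key: string,
  value: string
): Promise<void> {
  await configurator.setParameter(key, value); // may reset the camera
  configurator.moveCamera(lastCamera);         // restore the saved position
}
```

The camera will still visibly jump for one frame before being restored, so this is a mitigation, not a true lock.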

Related

Prevent manual screenshot in iOS Objective-C

On iOS, my requirement is to prevent the user from taking a screenshot of my application, or at least to blur the captured screenshot. How can this be done?
The only solution is to simulate the iOS controls you have in your View using DRM'ed videos.
For each widget you need to create a video subclass that renders the widget, and apply DRM to the video.
You can try to do it yourself, or use a commercial solution such as the following:
https://screenshieldkit.com
It is possible, but I don't recommend it because of the room for error. It was easier to do in the past; as of iOS 13 you will have to do it like this:
You will have to ask for the user's permission to read and edit their photo library. Then you run a listener that checks the number of photos in their library while they are using your app. If that number changes, they have just taken a screenshot (unless you allow other things in your app, like tap-and-hold to save an image). When this happens, read that photo, apply a blur, delete the original from their library, and save the blurred copy.
Warning: There are times when a user may get a photo while using your app that is not a screenshot (e.g. they received an AirDrop), and you would then be tampering with their photos, which is very bad. To prevent this, you may need to use key-value pixel encoding on your screen at all times. For example, render the first 3 pixels of the screen as 3 very specific RGB values; that way, if a new photo is detected and its first 3 pixels are those exact RGB values, you know it's a screenshot of your app and not just another photo that happened to be saved while the user was using the app.
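The pixel-signature check described above can be sketched as a small pure function. The marker colors here are arbitrary examples, and the pixel data is modeled as a flat RGBA byte array, the format you'd typically get after decoding the detected photo.

```typescript
// Sketch of the pixel-signature idea: the app always renders three
// known marker colors in the first three pixels of the screen, so a
// new photo whose first three pixels match is treated as a screenshot
// of this app. Marker values are arbitrary assumptions.
type RGB = [number, number, number];

const MARKER: RGB[] = [
  [13, 37, 211],
  [97, 42, 7],
  [3, 199, 118],
];

function isAppScreenshot(rgba: Uint8ClampedArray): boolean {
  // Pixel i occupies bytes [i*4 .. i*4+3] (R, G, B, A); compare the
  // RGB channels of the first three pixels against the marker.
  return MARKER.every((rgb, i) =>
    rgb.every((channel, c) => rgba[i * 4 + c] === channel)
  );
}
```

In practice you would also want some tolerance for compression artifacts, since screenshots may be re-encoded before they land in the library.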
There isn't any reliable solution to your problem.
You can use some tricks: for example, if you force the user to keep a finger on the screen for the image to show, they can't easily take a screenshot, because as soon as the home and lock keys are pressed to capture the screen, it behaves as if no fingers are touching it.
But what if the user takes a screenshot with AssistiveTouch?
Or what if the user records the screen and takes a screenshot from the video?
I think it's better to change your strategy, for example by notifying the owner of the picture when someone takes a screenshot of it (as Snapchat does).

Line number swiped in UITextField

I am working on a react-native app that needs to detect which line number in a TextInput (UITextField) has been swiped by a user.
A popular app that uses this type of interaction is Paper which allows a user to swipe a given line in a text document to style it.
Even though my main use case is react-native, I'm curious what other developers familiar with RN or iOS think the building blocks for such an interaction are.
My current thoughts are:
Wrap my TextInput with a PanResponder element
Detect whatever I consider is a valid user gesture
Determine the x,y coordinate of the valid gesture
Somehow determine the sentence that was swiped, given the above coordinates?
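The last step above can be sketched with simple line-height math: once the PanResponder gives you a y coordinate relative to the TextInput, the line index is just the offset divided by the line height. This assumes a fixed `lineHeight` style and known top padding, both of which you would read from your own styles in a real app.

```typescript
// Sketch of step 4: map a gesture's y coordinate (relative to the
// TextInput) to a zero-based line index.
// ASSUMPTION: the input uses a fixed lineHeight style, so every line
// has the same height.
function lineIndexAtY(
  y: number,
  lineHeight: number,
  paddingTop: number = 0
): number {
  if (y < paddingTop) return 0; // gesture in the top padding: first line
  return Math.floor((y - paddingTop) / lineHeight);
}
```

Mapping the line index back to the swiped sentence then means splitting the input's current text on newlines (or on wrapped-line boundaries, which is harder and needs the platform's text layout info).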

How do I know when the screen is being drawn on iOS?

I would like to know when the screen is being drawn on iOS. In particular, I'd like to know if there are any visible changes being drawn on screen. This can be handy to know how long a page took to render, for example (assuming that the user is not interacting with the page). I would like to be able to capture this information in a regular production build, not in a developer build. And I'd like this to be a general solution applicable to most any page in my app, not just a specific page.
For example, I have a page that 1) asynchronously queries an API for data, 2) displays that data in a UITableView where some of the entries may be offscreen, and then 3) asynchronously downloads the images for each of the visible items on the screen. I want to get callbacks when the UITableView is rendered and when all of the images are rendered. The total time to render the page can be determined by looking at the timestamp of the last call to the callback (again, assuming no user interaction).
On Android, this is fairly simple. You can use ViewTreeObserver.addPreDrawListener to get a callback whenever the screen is being drawn. If there's no visible change to the screen, the callback is not called.
On iOS, it looks like CADisplayLink might potentially serve a similar purpose. However, when I hook up my CADisplayLink, it appears to be called over and over, forever, whether or not there are visible changes on the screen.
Is there a way to know when there are visible changes to the screen being drawn in iOS?
In iOS 9 Apple made it impossible to get access to things drawn onto the screen outside of your app. Prior, it was possible to use an API called IOSurface to do it, but Apple closed it down in iOS 9. (To prevent apps from snooping on each other.)
So if you're talking about ANYTHING being drawn to the screen the answer is no. If you're looking for changes within your app there's probably a way to do it.
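For changes within your own app, one generic pattern is to turn a constantly-firing per-frame callback (which is what CADisplayLink gives you) into a "render settled" signal: track only the frames where something actually changed, and report the page as rendered once no change has been seen for a quiet period. The sketch below shows just that bookkeeping; how you decide that a frame "changed" (diffing layer contents, hooking layout passes, etc.) is left as a stub and is the hard part on iOS.

```typescript
// Generic sketch: convert per-frame ticks plus a per-frame "did
// anything change?" flag into a one-shot render-complete timestamp.
// The change detection itself is assumed to exist elsewhere.
class RenderSettleDetector {
  private lastChange = 0; // 0 means "nothing pending"
  constructor(private quietMs: number) {}

  // Call once per frame. Returns the timestamp of the last visible
  // change when the screen has been quiet for quietMs, else null.
  frame(nowMs: number, changed: boolean): number | null {
    if (changed) {
      this.lastChange = nowMs;
      return null;
    }
    if (this.lastChange > 0 && nowMs - this.lastChange >= this.quietMs) {
      const settledAt = this.lastChange;
      this.lastChange = 0; // report only once per burst of changes
      return settledAt;
    }
    return null;
  }
}
```

With the table view example, the reported timestamp would be the frame on which the last visible image finished drawing, assuming no user interaction during the measurement.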

How to detect that Android Camera Geo tagging setting is on/off (GPS Info in the EXIF data)

I'm trying to find out if there's a way to detect the "geotag" or "store-location" setting of the camera on an Android device. I don't mean the general location access setting which I'm very familiar with, but the more "hidden" setting when you use the camera.
Most users are not even aware it exists, and it's turned off by default. I would like to be able to tell users that this setting is off so that they can turn it on if they want to; this way, pictures will have all the EXIF data concerning their location.
I hope this has not been answered before on SO; if it has, I'm sorry, and would you please link me to the right thread?
Each Android device usually ships with its own custom camera app, made by the manufacturer of that device. Each has their own UI and probably own way/place to store this setting, if it even exists for that device. So any answer to this question would be heavily device-dependent.
But even if you just restrict yourself to the AOSP camera app, which is the app used on the Nexus devices, there's no API for this. The app asks if you want to enable GPS tagging the first time the app is run, and after that the option to enable/disable geotagging can be found in the settings.
There's no way to confirm if that setting is on, since it's not part of any public or standard Android API. You might be able to do something with the accessibility API to read these settings, but it requires substantial permissions to do so (Accessibility service documentation here).
To extract the EXIF information from the files, you could consider an example similar to the updateExif example shown in this code snippet. This would let you get all the information, including make, flash, focal length, etc., which is stored in the EXIF data.
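Building on that idea: since the camera app's setting itself isn't readable, a practical proxy is to inspect a photo the user has just taken and check whether it actually carries GPS tags. The sketch below assumes the EXIF has already been parsed into a tag map (e.g. via `ExifInterface` on Android); the tag names mirror the standard EXIF GPS fields.

```typescript
// Sketch: infer whether geotagging was enabled by checking a captured
// photo's EXIF tags for GPS fields. Parsing into this map is assumed
// to have happened already (ExifInterface or similar).
type ExifTags = Record<string, string | undefined>;

function hasGeotag(tags: ExifTags): boolean {
  // Both coordinates must be present for a usable geotag.
  return (
    tags["GPSLatitude"] !== undefined &&
    tags["GPSLongitude"] !== undefined
  );
}
```

If a freshly captured photo comes back without these tags (and location permission was granted), you can reasonably prompt the user that the camera's store-location setting is probably off.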
Regarding the Android 4.3 camera app that comes with Nexus devices, you can view and change the setting like this:
Open the camera app.
The white circle (rightmost of three icons at bottom of app screen) is used to bring up the main menu. If "store location" is turned off, that circular icon will be displayed with a small location icon with a line through it. If "store location" is turned on, there is no positive indication thereof. Tap it to continue if you want to change the setting.
Tap the settings icon (middle icon of five) to bring up "More Options" menu.
The "store location" icon is the leftmost of five. Tapping that icon toggles the setting (and dismisses the menu). If "store location" is off, that icon is displayed with a line through it, else without.

Windows phone 7 - control camera zoom in via application

I am creating an application in which I need to control the camera via my app: capture, zoom in, zoom out, flash on/off, and so on.
Normally, on the iPhone, APIs are available to control the camera hardware. I have tried to achieve the same in Windows Phone 7 using Silverlight. I have found code to control the camera via events, but I am not able to find anything that lets me zoom in and out with a button or slider.
I tried the reference video http://channel9.msdn.com/Shows/Inside+Windows+Phone/Inside-Windows-Phone-16-Mango-Camera-APIs and downloaded the code, but still haven't found anything specific.
My question is: is it possible to have this feature in Windows Phone 7, and if so, can anyone please guide me?
I also noticed that when the camera is open, the preview images appear mirrored :)
Please help me out with this.
Thanks,
David.
To zoom, you have to manually process the image.
You will need to add your own + and - zoom buttons and keep track of the zoom level.
Then, to show a zoomed viewfinder, you need to get the previewbuffer in a loop and zoom the image yourself to the current zoom level and then display it.
When the user takes a picture, you will apply the same zoom processing to the image in the CaptureImageAvailable event handler before saving.
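The core of that manual zoom is just center-crop math: for a zoom factor z ≥ 1, crop a centered region 1/z the size of the frame, then scale it back up for display or saving. A sketch of the crop calculation, independent of any WP7 API:

```typescript
// Sketch of the manual digital zoom described above: compute the
// centered source rectangle to crop from the preview buffer for a
// given zoom factor, before scaling it back to full size.
interface Rect { x: number; y: number; width: number; height: number; }

function zoomCropRect(width: number, height: number, zoom: number): Rect {
  const z = Math.max(1, zoom); // factors below 1x make no sense here
  const w = Math.round(width / z);
  const h = Math.round(height / z);
  return {
    x: Math.round((width - w) / 2), // center the crop horizontally
    y: Math.round((height - h) / 2), // and vertically
    width: w,
    height: h,
  };
}
```

Applying the same rectangle in the capture handler as in the viewfinder loop keeps the saved photo consistent with what the user saw.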
