Titanium create2DMatrix ugly transformation result - iOS

Hi, I've almost finished my third app with Titanium. This time it's not for a customer but for myself, and I put a lot of care into the UX and UI. I love Titanium, but I've hit one big limitation that I hope you can help me solve. I used code like this:
myPicImageView.transform = Ti.UI.create2DMatrix().rotate(-3);
to rotate an image, but the result is very ugly: the corners look like little stairs (sorry for my English, I can't explain it better), as if the edges were rendered at a very low resolution whenever I rotate something. Is there a way to avoid this problem?

Rotating any view by a non-multiple of pi/2 will cause jagged edges, because UIKit does not anti-alias views when rendering. For images there is a simple fix: use an image with a single-pixel-wide transparent border around the edge; the rotated image view will then have anti-aliased edges.
See this post for more details.
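One way to prepare such an asset offline is a quick script; here is a minimal sketch using Pillow in Python (the file names are placeholders):

# Add a one-pixel transparent border around an image so the rotated
# view has an alpha edge for UIKit to blend, giving anti-aliased edges.
from PIL import Image, ImageOps

img = Image.open("product.png").convert("RGBA")
# Expand the canvas by 1 px on every side, filled with full transparency.
bordered = ImageOps.expand(img, border=1, fill=(0, 0, 0, 0))
bordered.save("product_aa.png")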

Related

1024x768 resolution with nearest neighbor scaling

I'm using Arch Linux with Cinnamon and a 4K, or rather UHD, screen (3840x2160), and I would like to use a 1024x768 resolution with black bars on the left and right side, with nearest-neighbor scaling instead of the bilinear filtering that is the default on every monitor I've ever seen.
There are ways to more or less get this to work by actually running 3840x2160 but rendering at 1024x768 and scaling it up.
This can be done with xrandr or nvidia-settings.
I also managed to get some black bars going.
So what worked best for me so far was this command:
nvidia-settings -a CurrentMetaMode="DP-2: 3840x2160_60 {ViewPortIn=1024x768, ViewPortOut=2880x2160+480+0, ResamplingMethod=Nearest}"
This gives me the crisp upscaling and black bars.
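For reference, the ViewPortOut geometry is just a 4:3 box scaled to the panel height and centered horizontally; a quick check in Python:

# Derivation of ViewPortOut=2880x2160+480+0 from the command above.
panel_w, panel_h = 3840, 2160
content_aspect = 4 / 3                  # 1024x768 is 4:3

out_h = panel_h                         # use the full panel height
out_w = int(out_h * content_aspect)     # 2160 * 4/3 = 2880
x_offset = (panel_w - out_w) // 2       # (3840 - 2880) / 2 = 480

print(f"ViewPortOut={out_w}x{out_h}+{x_offset}+0")  # 2880x2160+480+0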
There's one problem though: the right side of the screen is "cut off".
This means that when I maximize windows, it acts as if I were still using a 16:9 resolution, making the right part of, say, a browser inaccessible.
In games that scroll when you put the mouse at the edge of the screen, edge scrolling doesn't work on the right side, while it works on the left, top and bottom.
Does anyone know about this problem or have a better solution?
I'm open to anything, for example using some WINE settings to pull this off. Since this is mainly for playing old games, a completely different approach through WINE would be totally fine.
I've already tried all kinds of things over the last few days. At this point I would jump for joy if somebody knows a way to get this to work.

Unity 2D: after-image from OLED screens in a high contrast situation

When I test my Unity 2D game on my iPhone X, all background and sprite elements on the screen get a blue "halo" when my character moves. I have explored transparency issues on mobile, but this seems really strange: the blue halo appears only when the background is black. Anything brighter and it is absolutely fine. So I doubt it's a transparency issue, given that it appears only when a dark background is present.
It is visible only on mobile, so taking a screenshot is useless.
If anyone wants to test, do the following: download or open the image attached here full screen. Zoom in a bit so the shapes take up most of the screen. Start moving the image left and right, slowly and quickly, and you should see a blueish after-image around the edges. This should happen only on some OLED mobile screens.
In case anyone else ever encounters this: the result I mentioned is an after-image effect from the OLED screen of the iPhone X. I haven't tested other OLED devices, but I assume that, depending on the software, other models can experience this too. The black levels are incredible, but when you have a high-contrast situation between light and dark, an after-image is created around the edges of the contrast zone.
How to fix this?
Simply do not use fully black backgrounds or elements. In a game, a near-black color is indistinguishable from true black (0, 0, 0 RGB). This might be a common game design principle I was unaware of, and maybe I am the only person to have used 0,0,0 in the first place, but I hope anyone who hits the same issue reads this and fixes it easily. A sketch of the idea follows.
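A trivial guard, sketched in Python assuming 8-bit RGB values; the floor of 10 is an arbitrary assumption, tune it by eye:

# Keep colors a hair above true black so OLED panels don't produce the
# after-image halo. FLOOR is an assumption; pick what passes inspection.
FLOOR = 10  # out of 255

def near_black(r, g, b):
    """Clamp each channel so the color never reaches true black."""
    return (max(r, FLOOR), max(g, FLOOR), max(b, FLOOR))

print(near_black(0, 0, 0))  # -> (10, 10, 10), visually close to black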

How to detect text in a photo

I am researching the best way to detect text in a photo using open-source libraries.
I think the standard way is as follows (note: steps 1-3 all use OpenCV):
1) Detect the outline of the document
2) Transform the document so it's flat and cropped, using said outline
3) Make the background of the document white, using a filter
4) Feed the resulting image to Tesseract
Is this the optimum process, or is there a better way or better tools?
Also, what happens in the case where the photo doesn't have a document outline (it's possible that steps 1 & 2 are then redundant)?
Is there any way to automatically detect document orientation (i.e. portrait/landscape)?
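For concreteness, here is a condensed sketch of steps 2-4 in Python with OpenCV and pytesseract, assuming the four document corners from step 1 are already known; the corner coordinates, file name, and threshold parameters are all placeholders:

# Steps 2-4: perspective-correct the document, whiten the background,
# and hand the result to Tesseract.
import cv2
import numpy as np
import pytesseract

img = cv2.imread("photo.jpg")

# Step 2: warp the document flat, given its four corners (from step 1),
# ordered top-left, top-right, bottom-right, bottom-left.
corners = np.float32([[120, 80], [980, 110], [1010, 1350], [90, 1320]])
out_w, out_h = 850, 1100  # target size, roughly the page aspect ratio
target = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
M = cv2.getPerspectiveTransform(corners, target)
flat = cv2.warpPerspective(img, M, (out_w, out_h))

# Step 3: adaptive thresholding pushes the paper to white and keeps
# the ink black, which Tesseract handles well.
gray = cv2.cvtColor(flat, cv2.COLOR_BGR2GRAY)
clean = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 31, 15)

# Step 4: OCR.
print(pytesseract.image_to_string(clean))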
I think your process is fine. I've used a similar process for an Android project.
I think the only way you can discover whether a document is portrait or landscape is to reason about the lengths of the sides of the bounding box of your outline.
I don't think there's a fully automatic way to do this otherwise. You can look for the most external contour that can be approximated with a 4-segment polyline (all doable in OpenCV). To get this you'll have to work with the contour hierarchy and contour approximation (see cv2.approxPolyDP).
This is how I would approach automatic outline detection; a sketch is below. As I said, the rest of your algorithm seems just fine to me.
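A minimal sketch of that search with OpenCV in Python; the Canny thresholds and the 2% approximation tolerance are assumptions:

# Step 1: find the most external contour that can be approximated by a
# four-segment polyline, then judge orientation from its bounding box.
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# RETR_EXTERNAL keeps only the outermost contours of the hierarchy.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

outline = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)  # 2% tolerance
    if len(approx) == 4:  # first large contour with four corners
        outline = approx
        break

if outline is not None:
    # Orientation from the bounding box sides, as suggested above.
    x, y, w, h = cv2.boundingRect(outline)
    print("landscape" if w > h else "portrait")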
PS. I'll leave my Android project's GitHub link. I don't know if it will be useful to you, but in it I specify the outline by dragging handles, then transform the image and feed it to Tesseract, using Java and OpenCV. Yes, it's a very bad idea to do that on the main thread of an Android app, and yes, the app is not finished. I just wanted to experiment with OCR, so I didn't care much about performance and usability; it was never intended for real use, just for study.
Look up the stroke width transform.
What this does is detect strokes whose two opposite edges stay more or less a constant width apart. It picks up things like drainpipes (which can be eliminated in a later pass), but also the majority of text. While conceptually it's similar to a distance transform, the published method uses rather ad hoc edge-normal projection together with Canny edge detection.

iOS Heavy image switching

I'm developing an app that will showcase products. One of its features is that you will be able to "rotate" the product using your finger (a pan gesture).
I was thinking of implementing this by taking photos of the product from different angles, so that when you "drag" the image, all I would have to do is switch the image accordingly. If you drag a little, I switch only one image; if you drag a lot, I switch them in cadence, making it look like a movie. But I have a concern, and a probable solution:
Will this perform well? Since it's an art/museum product showcase, the photos will be quite large in size and resolution, and loading/switching when "dragged a lot" might be a problem because it would cause flickering. My proposed solution: instead of loading pic by pic, I would put them all inside one massive sheet and work through them as if they were sprites.
Is that a good idea? Or should I stick with the pic-by-pic rotation?
Edit 1: There's a complication: the user will be able to zoom in/out and to rotate the product around any axis (X, Y and Z)...
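Whichever loading strategy wins, the gesture-to-frame mapping for a single-axis rotation is simple modular arithmetic; here is a language-agnostic sketch in Python (the frame count and drag sensitivity are assumptions):

# Map a horizontal drag distance to a frame index for a 360-degree
# product rotation. FRAME_COUNT and PIXELS_PER_FRAME are assumptions.
FRAME_COUNT = 72        # e.g. one photo every 5 degrees
PIXELS_PER_FRAME = 10   # drag sensitivity

def frame_for_drag(start_frame, drag_x):
    """Return the frame to display after dragging drag_x pixels."""
    steps = int(drag_x / PIXELS_PER_FRAME)
    return (start_frame + steps) % FRAME_COUNT  # wrap around 360 degrees

print(frame_for_drag(0, 125))   # a 125 px drag advances 12 frames
print(frame_for_drag(0, -30))   # negative drags rotate the other way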
My personal opinion: I don't think this will work the way you hope, or the performance and/or aesthetics will not be what you want.
1) Taking individual shots that you then try to keyframe based on touch events won't work well, because you will have inevitable inconsistencies in "framing" the shots, so playback won't be smooth
2) The best way to do this, I suspect, is to shoot it as video, using some sort of rig that keeps the camera fixed while rotating the object
3) I'm pretty sure this is how most "professional"-grade product-carousel presentations work
4) Even then you will have more image frames than you need -- I'm not sure whether you plan to embed the image files in the app or download them on demand -- but that is also a consideration in terms of how much downsampling you'll need to do to reduce frame count/file size
Suggestion
Look at shooting these as video (somewhat as described above), then downsampling and removing excess frames in a video editor. You could then use AVFoundation for playback and use your gestures to "scrub" through the video frames. I worked on something like this for HTML playback at a large company, and I can assure you it was done with video.
Alternatively, if video won't work for you, your sprite-sheet solution might work (consider using SpriteKit). But keep in mind what I said about trying to keyframe one-off camera shots together: it just won't work well. A compromise would be to shoot static images, but do so by fixing the camera and rotating the object at very specific increments. That could work as well, but you will need to be very careful about lighting and other atmospherics. It doesn't take much variation at all to be detectable to the human eye, making the whole presentation seem strange. Good luck.
A coder at my company did something like this before using 360 images of an object, and it worked great, but it didn't have zoom. You could add zoom with a pinch-gesture recognizer, placing the image view inside a scroll view to zoom in on the static image.
This scenario sounds like what you really need is a simple 3D model loading library, or to write it in OpenGL yourself. Pan and zoom behavior is really basic once you make the jump to 3D, so it should be easy to find lots of examples.
All depends on your situation and time constraints :)

UIScrollView Dome Effect

DESIRED EFFECT:
Imagine a UIScrollView such that as you scroll in any direction, you feel like you're looking around inside a dome. As in, the screen is stretched/warped/distorted at the edges by a filter/mesh of some sort. Think of a 3D game where you're looking up at the sky.
WHAT IT'S FOR:
I plan on plastering menu items on a sky of sorts. Imagine looking at the sky, where clouds are tappable menu items and there are enough clouds that you have to scroll around to find them all. This is just a menu leading to the actual content; it isn't a full 3D game where you can move around. I am therefore hoping that I can fake the 3D effect by stretching/warping/distorting the edges of the screen.
WHAT I NEED:
I need to at least know which direction to look in, so that I can see how feasible it is and how much work it will take. If it's too much, I'll unfortunately have to do something else.
From what I've looked at so far, it appears that QuartzCore isn't enough, and I suspect that OpenGL is the only way to do it. Before I throw myself into OpenGL, though (I'm a complete noob at it), I'd like to know whether that's even the right technology to be looking at, and if it is, which area of it I should study (initial searches suggest that texture warping might be what I'm looking for).
Thanks!
You're on the right track. You'll want to use OpenGL ES for this. The basic idea, which I've seen used to great effect, is to project the scene onto the inside of a cube and rotate the cube when the user moves their finger. This book really helped me when I got started with OpenGL.
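To make the "texture warping" idea concrete before committing to OpenGL ES, here is a CPU sketch in Python with OpenCV of a barrel-style distortion whose magnification falls off toward the edges, the classic dome/fisheye look; in production this would live in a fragment shader, and the strength coefficient is an assumption:

# Warp an image so the edges appear compressed, as when looking around
# the inside of a dome. The strength coefficient is an assumption.
import cv2
import numpy as np

def dome_warp(image, strength=0.35):
    h, w = image.shape[:2]
    # Normalized coordinates in [-1, 1] with (0, 0) at the image center.
    xs, ys = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
    r2 = xs**2 + ys**2            # squared distance from the center
    factor = 1 + strength * r2    # sample further out near the edges
    map_x = ((xs * factor + 1) * 0.5 * (w - 1)).astype(np.float32)
    map_y = ((ys * factor + 1) * 0.5 * (h - 1)).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Usage: warped = dome_warp(cv2.imread("sky_menu.png"))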
