Texture is mirrored on some sides - glTF

I want to display multiple boxes within a 3D scene.
I take the BoxTextured.gltf sample (https://github.com/KhronosGroup/glTF-Sample-Models/blob/master/2.0/BoxTextured/glTF-Embedded/BoxTextured.gltf) as a template for my glTF file and only create new nodes, meshes, materials, textures and images for each box.
Displaying and positioning the boxes already works.
The next step is to put some additional information on each box.
My problem is that some sides are actually mirrored. (You don't notice it at first glance with the sample bitmap, but you do see it once you replace the sample texture with text.)
Is there a simple way to correct that?
Is there an easy way to display different information on each side of the boxes?

Related

How to recolor a specific area of an image?

I'm trying to create an iOS recoloring app (this is my reference), and I need to know how to recolor some portion of the image when the user taps on a given area. All the loaded pictures will be black and white initially.
Is there any prebuilt library? Or which graphics framework should I use?
Any help will be appreciated.
If what you are looking for is adding/replacing the colour within a certain shape, and edges are really important (as in the example), then you should be looking into vectorised drawing.
What this means is that every shape in your image has an actual object representation in your code, and you can easily interact with that object to do whatever you want (e.g. tap gestures to change colour, zooming, etc.).
This, however, means that you can't simply use .jpeg images; you need images in a vector format, such as .svg or CorelDraw.
As a reference, check out SVGKit, which is an excellent library for working with SVG images.
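To illustrate the underlying vector approach (this is not SVGKit's API, just the idea), here is a minimal Swift sketch in which each shape is a plain CAShapeLayer that a tap can hit-test and recolour; the shapes, frames and colours are placeholder assumptions:

```swift
import UIKit

// Minimal sketch of the vector approach: every shape is its own CAShapeLayer,
// so a tap can hit-test the layers and recolour only the shape that was touched.
class RecolorViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white

        // Placeholder shapes; in a real app they would come from the vector file.
        addShape(path: UIBezierPath(ovalIn: CGRect(x: 40, y: 120, width: 120, height: 120)))
        addShape(path: UIBezierPath(rect: CGRect(x: 200, y: 120, width: 120, height: 120)))

        view.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    private func addShape(path: UIBezierPath) {
        let shape = CAShapeLayer()
        shape.path = path.cgPath          // path is defined in the view's coordinate space
        shape.fillColor = UIColor.white.cgColor
        shape.strokeColor = UIColor.black.cgColor
        shape.lineWidth = 2
        view.layer.addSublayer(shape)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: view)
        for case let shape as CAShapeLayer in view.layer.sublayers ?? [] {
            // The layers share the view's coordinate space, so we can test the point directly.
            if shape.path?.contains(point) == true {
                shape.fillColor = UIColor.systemRed.cgColor   // recolour the tapped shape
            }
        }
    }
}
```

With SVGKit the layers would come from the parsed SVG document rather than being built by hand, but the tap-and-recolour interaction stays the same.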

Unity3D Displaying a RenderTexture overlaid on top of another Camera

To keep it simple: I have a RenderTexture from one camera, and I need to overlay it onto another camera either through:
a) a RenderTexture of that camera
or
b) writing directly to that camera's rendering
What I'm trying to do can also be seen in this representation:
fig1 shows the main render, fig2 the desired overlay to be applied, fig3 the overlay applied on top of the main render, and fig4 post-processing of the newly combined image
The first box is the main camera, and the second is what I want overlaid onto it as a RenderTexture in OnRenderObject(), i.e. when these two get rendered. Then in OnPostRender() they are combined, with the overlay on top. Then in OnRenderImage(), image effects can freely modify the combined image.
To list what I need help with:
I do not know how to either:
access the camera's rendering directly
or
set a RenderTexture as a camera's rendering in OnPostRender()
I also need an explanation of how to correctly overlay a RenderTexture onto either of the above (using the depth rendered to the RenderTexture as alpha), just as shown in fig3 of the image.
This is the method I've thought up to overlay a forward rendering onto a deferred one for image effects. If you have any other solutions or ideas, it would be much appreciated if you could post them as a comment.
Just to clarify: I'm not asking for source code, just methods and/or links to Unity3D's documentation for the methods I'm asking about.
Thank you very much in advance. :)

Drawing a grid of circles in Xcode/Swift

How would I go about getting an effect in a view controller like the wonderful work of art attached in the following link?
Circles
I do have some ideas, ranging from actually using ASCII text (I know, super wrong way) to a collection view of pictures, to what I suspect is the "right" way, done with Core Graphics. But I am asking in case there is a super easy/right methodology I will one day discover and facepalm over.
You are correct: there are two primary ways to draw a grid of circles.
You can either draw them using UIBezierPath or use a collection view of circle images.
For simple shapes, the first option is recommended, simply because your circles aren't restricted by the images you have; a sketch of that approach follows below.
Here is a link for you to get started.
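A minimal Core Graphics sketch of the UIBezierPath option; the diameter, spacing and colour are placeholder values, not taken from the question:

```swift
import UIKit

// Minimal sketch: a view that fills itself with a grid of circles using UIBezierPath.
// Grid spacing, radius, and colour are arbitrary placeholder values.
class CircleGridView: UIView {

    var circleDiameter: CGFloat = 24
    var spacing: CGFloat = 8

    override func draw(_ rect: CGRect) {
        UIColor.systemBlue.setFill()

        let step = circleDiameter + spacing
        var y: CGFloat = spacing
        while y + circleDiameter <= bounds.height {
            var x: CGFloat = spacing
            while x + circleDiameter <= bounds.width {
                let circleRect = CGRect(x: x, y: y, width: circleDiameter, height: circleDiameter)
                UIBezierPath(ovalIn: circleRect).fill()
                x += step
            }
            y += step
        }
    }
}
```

In a view controller you would add a CircleGridView as a subview (with contentMode set to .redraw so the grid is re-rendered when the view's size changes).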

How to add a rain effect to a picture?

Given a picture, I would like to modify it to create the effect of rain on glass. What steps should I take to achieve this goal?
Suppose we want to add the effect of a single drop of water at a given point in an image; some pixels around that point should be modified in some way. How should those pixels be modified?
A simple way is to just make a transparent image that acts as an overlay; that looks like the common approach for water-drop effects in GIMP (a compositing sketch follows below).
example:
http://natural-drops.deviantart.com/art/drop-of-rain-373710307
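A minimal Swift/UIKit sketch of that overlay idea; the function name, asset names and alpha value are assumptions, and the transparent drop texture itself would be an image like the one linked above:

```swift
import UIKit

// Minimal sketch of the overlay approach: composite a semi-transparent "rain drops"
// texture on top of the original photo. Asset names and alpha are placeholders.
func addRainOverlay(to photo: UIImage, overlay: UIImage, overlayAlpha: CGFloat = 0.8) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: photo.size)
    return renderer.image { _ in
        // Draw the original picture first...
        photo.draw(in: CGRect(origin: .zero, size: photo.size))
        // ...then the transparent drop texture on top, stretched to cover the photo.
        overlay.draw(in: CGRect(origin: .zero, size: photo.size),
                     blendMode: .normal,
                     alpha: overlayAlpha)
    }
}

// Usage (asset names are assumptions):
// let result = addRainOverlay(to: UIImage(named: "photo")!, overlay: UIImage(named: "rain_drops")!)
```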
I think that in order to produce an optically correct image you would need full 3D information about the environment, because most of the optics equations needed to simulate a correct image involve the distance to each object.

overlaying images when displaying in OpenCV

I have two images that I want to display on top of each other. One is a single-channel image and the second is an RGB image, but with most of its area transparent.
The two images are generated in different functions. I know that to display one on top of the other I can use the same window name when calling cvShowImage(), but this doesn't work when they are drawn from different functions. When trying this, I used cvCvtColor() to convert the binary image from single channel to RGB and then displayed the second image from the other function, but this didn't work. Both images have the same dimensions, depth and number of channels (after conversion).
I want to avoid passing one image into the second function and drawing them there, so I'm looking for a quick and dirty trick to display these two images overlapped.
Thank you
EDIT:
I don't think that's possible. You'll have to create a new image or modify an existing one. Here's an article that shows how to do this: Transparent image overlays in OpenCV
There is no way to "overlay" images. cvShowImage() displays a single image from memory. You'll need to blend/combine them together. There are several ways to do this.
You can copy one into one or two channels of the other, use logical operations like AND, OR or XOR, or use arithmetic operations like Add, Multiply and MultiplyScale (these operations saturate values larger than 255). All of these can also be done with an optional mask image, such as your blob image.
Naturally, you may want to do this into a third buffer so as not to overwrite your originals.
Apparently this can now be done as of OpenCV 2.1:
http://opencv.willowgarage.com/documentation/cpp/highgui_qt_new_functions.html#cv-displayoverlay
