Draw a rectangle with texture (VertexPositionTexture) in XNA 3D

I am trying to draw a rectangle with a texture using VertexPositionTexture, but I get an error:
An unhandled exception of type 'System.NotSupportedException' occurred
in Microsoft.Xna.Framework.Graphics.dll
Additional information: XNA Framework Reach profile requires
TextureAddressMode to be Clamp when using texture sizes that are not
powers of two.
Thanks.

Three options:
(1) Try adding this line:
GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;
This may change the appearance of the texture (see the sketch after this list).
(2)
Change the height and width of the texture so that each dimension is a power of two (e.g. 512 × 512, since 512 = 2^9).
(3) Change the XNA profile from Reach to HiDef:
Right-click your project in Solution Explorer
Choose Properties
Open the XNA Game Studio tab and select the HiDef profile
(http://blogs.msdn.com/b/shawnhar/archive/2010/07/19/selecting-reach-vs-hidef.aspx)
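For reference, here is a minimal sketch of option (1) in the context of drawing the textured rectangle. It is an illustration, not the asker's code: it assumes texture is a Texture2D loaded through the content pipeline and effect is a BasicEffect whose World, View and Projection are already set (both names are placeholders).
GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;   // option (1): clamp addressing
// GraphicsDevice.RasterizerState = RasterizerState.CullNone; // uncomment if the quad is culled by its winding order

// Four corners of the rectangle as a triangle strip, with UVs covering the whole texture.
VertexPositionTexture[] quad =
{
    new VertexPositionTexture(new Vector3(-1f,  1f, 0f), new Vector2(0f, 0f)), // top left
    new VertexPositionTexture(new Vector3( 1f,  1f, 0f), new Vector2(1f, 0f)), // top right
    new VertexPositionTexture(new Vector3(-1f, -1f, 0f), new Vector2(0f, 1f)), // bottom left
    new VertexPositionTexture(new Vector3( 1f, -1f, 0f), new Vector2(1f, 1f)), // bottom right
};

effect.TextureEnabled = true;
effect.Texture = texture;

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // 4 vertices as a triangle strip = 2 triangles = the rectangle
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
}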

Related

How can I position an A-Frame object at the bottom-left corner of the marker, and make its width equal to the marker's width?

I'm trying to create a basic scene in AR.js with NFT (so it's not just the basic marker-based tracking; it tracks a custom image), using A-Frame to place and position my objects. I've noticed that, for example, if I place a 1*1*1 box in the scene, it appears in different places on different devices, and if I don't scale it up to something like 200, it appears as a very, very tiny box.
E.g. if I view my scene on my phone, the object appears at the exact center of the marker, but if I check it on a different phone, it appears almost completely outside the marker. Also, if I check it with a webcam, it appears in yet another place, and even at a different size.
I wonder if there is any option to make the marker image's bottom-left (or any other) corner the 0 0 0 point, so I can position my objects more precisely, and also to set the object's width equal to the marker image's width, so I don't have to scale up the object like this.
At the moment there is no option to display a model in the center of the NFT marker. This is because AR.js depends on jsartoolkit5, which does not have this feature yet. But if you know the marker's width, height, and dpi, you can display the object in the center of the marker with this formula (pseudo code):
obj.position.y = (marker.height / marker.dpi * 2.54 * 10) / 2.0; // half the marker height (pixels / dpi = inches; * 2.54 * 10 = millimetres)
obj.position.x = (marker.width / marker.dpi * 2.54 * 10) / 2.0;  // half the marker width in millimetres
You can obtain the width, height, and dpi while creating your marker, or by using the dispFeatureSet display app distributed with the ARToolKit5 SDK; you can find binaries here: https://github.com/artoolkitx/artoolkit5/releases/tag/5.4.0 or on the artoolkitx website: https://www.artoolkitx.org/docs/downloads/

Get SCNMaterial's frame

I have an iPhone 3D model in my SceneKit application. It has a material, which I retrieve; it is called iScreen (the image that "is on" the screen of the iPhone):
var iScreen: SCNMaterial!
iScreen = iphone.geometry?.materialWithName("Screen")!
I decided to somehow project a webView there instead of an image.
Therefore I need the frame / position / size of the screen where iScreen "draws" to set the UIWebView's frame. Is that possible?
Note
Of course I tried position, frame, size, etc., but none of those are available :/
You will first have to know which SCNGeometryElement the material applies to (an SCNGeometry is made of one or several SCNGeometryElements).
That geometry element is essentially a list of indices to retrieve vertex data contained in the geometry's SCNGeometrySources. Here you are interested in the position source (it gives you the 3D coordinates of the vertices).
By iterating over these positions you'll be able to find the element's width and height.

Distance Fog XNA 4.0

I've been working on a project that helps create a virtual reality experience on a laptop and/or desktop. I am using XNA 4.0 on Visual Studio 2010. The current scenario looks like this: I have interfaced the movements of a person's head through Kinect, so if the person moves his head right relative to the laptop, the scene in the image is rotated towards the left, giving the effect of a virtual tour or a looking-through-the-window experience.
To enhance the visual appeal, I want to add darkness at the back plane, so the box looks as if it were a tunnel.
The box was made using triangle strips. The BasicEffect used for the planes of the box is called effect.
effect.VertexColorEnabled = true;
effect.EnableDefaultLighting();
effect.FogEnabled = true;
effect.FogStart = 35.0f;
effect.FogEnd = 100.0f;
effect.FogColor = new Vector3(0.0f, 0.0f, 0.0f);
effect.World = world;
effect.View = cam.view;
effect.Projection = cam.projection;
On compiling, I get an error about normals.
I have no clue what they mean by that, and I have dug through the internet hard enough. (I was first under the impression that I'd put a black omni light at the back side of the box.)
The error is attached below:
'verts' is the VertexPositionColor[][] that is used to build the box.
How do I solve this error ? Is the method/approach correct ?
Any help shall be welcome.
Thanks.
Your vertex has Position and Color channels, but it has no normals... so you have to use a vertex type that provides them.
You can use VertexPositionNormalTexture if you don't need the color, or build a custom struct that provides the normal...
Here is a custom implementation: VertexPositionNormalColor.
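VertexPositionNormalColor is not a built-in XNA type, so here is a minimal sketch of such a struct, following the usual IVertexType pattern (the name and field layout are illustrative):
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public struct VertexPositionNormalColor : IVertexType
{
    public Vector3 Position;
    public Vector3 Normal;
    public Color Color;

    // Byte offsets: a Vector3 is 12 bytes, so the normal starts at 12 and the color at 24.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(24, VertexElementFormat.Color, VertexElementUsage.Color, 0));

    public VertexPositionNormalColor(Vector3 position, Vector3 normal, Color color)
    {
        Position = position;
        Normal = normal;
        Color = color;
    }

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}
When you rebuild 'verts' with this type, fill in each face's normal (for an axis-aligned box that is just the unit vector pointing out of the face; in general Vector3.Normalize(Vector3.Cross(edge1, edge2))), which is the per-vertex data that default lighting needs.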
You need to add a normal (Vector3) to your vertex type.
Also, if you want distance fog you will have to write your own shader, as BasicEffect only implements depth fog (which, while not looking as good, is faster).

What is the secret behind "contentScaleFactor" of UIView when used with CATiledLayer?

Greetings,
I'm working on an application inspired by the "ZoomingPDFViewer" example that comes with the iOS SDK. At some point I found the following bit of code:
// to handle the interaction between CATiledLayer and high resolution
// screens, we need to manually set the tiling view's
// contentScaleFactor to 1.0. (If we omitted this, it would be 2.0
// on high resolution screens, which would cause the CATiledLayer
// to ask us for tiles of the wrong scales.)
pageContentView.contentScaleFactor = 1.0;
I tried to learn more about contentScaleFactor and what it does. After reading everything in Apple's documentation that mentions it, I searched Google and never found a definitive answer to what it actually does.
Here are a few things I'm curious about:
It seems that contentScaleFactor has some kind of effect on the graphics context when a UIView's/CALayer's contents are being drawn. This seems to be relevant to high resolution displays (like the Retina Display). What kind of effect does contentScaleFactor really have and on what?
When using a UIScrollView and setting it up to zoom, let's say, my contentView; all subviews of contentView are being scaled, too. How does this work? Which properties does UIScrollView modify to make even video players become blurry and scale up?
TL;DR: How does UIScrollView's zooming feature work "under the hood"? I want to understand how it works so I can write proper code.
Any hints and explanations are highly appreciated! :)
Coordinates are expressed in points, not pixels. contentScaleFactor defines the relation between points and pixels: if it is 1, points and pixels are the same, but if it is 2 (like Retina displays), every point corresponds to two pixels in each direction.
In normal drawing, working with points means that you don't have to worry about resolutions: on an iPhone 3 (scale factor 1) and an iPhone 4 (scale factor 2 and 2x resolution), you can use the same coordinates and drawing code. However, if you are drawing an image (directly, as a texture, ...) and just using normal coordinates (points), you can't trust that the pixel-to-point mapping is 1 to 1. If you assume it is, then every pixel of the image will correspond to 1 point but 4 screen pixels when the scale factor is 2 (2 in the x direction, 2 in y), so images can become a bit blurred.
Working with CATiledLayer you can get some unexpected results with scale factor 2. I guess that having the UIView at contentScaleFactor == 2 and the layer at contentsScale == 2 confuses the system and sometimes multiplies the scale. Maybe something similar happens with UIScrollView.
Hope this clarifies it a bit
Apple has a section about this on its "Supporting High-Resolution Screens" page in the iOS developer documentation.
The page says:
Updating Your Custom Drawing Code
When you do any custom drawing in your application, most of the time you should not need to care about the resolution of the underlying screen. The native drawing technologies automatically ensure that the coordinates you specify in the logical coordinate space map correctly to pixels on the underlying screen. Sometimes, however, you might need to know what the current scale factor is in order to render your content correctly. For those situations, UIKit, Core Animation, and other system frameworks provide the help you need to do your drawing correctly.
Creating High-Resolution Bitmap Images Programmatically
If you currently use the UIGraphicsBeginImageContext function to create bitmaps, you may want to adjust your code to take scale factors into account. The UIGraphicsBeginImageContext function always creates images with a scale factor of 1.0. If the underlying device has a high-resolution screen, an image created with this function might not appear as smooth when rendered. To create an image with a scale factor other than 1.0, use the UIGraphicsBeginImageContextWithOptions function instead. The process for using this function is the same as for the UIGraphicsBeginImageContext function:
(1) Call UIGraphicsBeginImageContextWithOptions to create a bitmap context (with the appropriate scale factor) and push it on the graphics stack.
(2) Use UIKit or Core Graphics routines to draw the content of the image.
(3) Call UIGraphicsGetImageFromCurrentImageContext to get the bitmap's contents.
(4) Call UIGraphicsEndImageContext to pop the context from the stack.
For example, the following code snippet creates a bitmap that is 200 x 200 pixels. (The number of pixels is determined by multiplying the size of the image by the scale factor.)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100.0,100.0), NO, 2.0);
See it here: Supporting High-Resolution Screens

How to adjust GLCamera to show entire GLScene

I have a GLScene object of varying (but known) size. It is completely surrounded by a TGLDummyCube.
I want to position the GLCamera (with CameraStyle: glPerspective) so that the object is completely visible on screen. I basically got this running: the object is visible, but the distance is sometimes too far, or the object is larger than the screen and gets clipped.
How can I do that? I suppose that this can be done by a clever combination of camera distance and focal length, but I have not been successful so far.
This seems to be different in GLScene compared to OpenGL. I'm using GLScene and Delphi 2007.
Although varying the camera distance and focal length will change the object's visual size, it has the drawback of also changing the perspective, thus leading to a somewhat distorted view. I suggest to use the camera's SceneScale property instead.
Alas, I have no exact steps to calculate the correct value for that. In my case I have to scale to a cube of varying size, while the window size of the viewer is constant. So I placed two DummyCubes at the position of the target cube, each sized to fit either the width or the height of the viewer, with appropriate values for SceneScale, camera distance and FocalLength. At runtime I calculate the new SceneScale from the ratio of the target cube size with respect to the DummyCube sizes. This works quite well in my case.
Edit: Here is some code I make for the calculations.
ZoomRefX and ZoomRefY are those DummyCubes
TargetDimX and TargetDimY give the size of the current object
DesignWidth and DesignHeight are the size of MyGLView at design time
DesignSceneScale is the camera's SceneScale at design time
The calculation code:
ScaleX := (ZoomRefX.CubeSize*MyGLView.Width)/(DesignWidth*TargetDimX);
ScaleY := (ZoomRefY.CubeSize*MyGLView.Height)/(DesignHeight*TargetDimY);
NewSceneScale := Min(ScaleX, ScaleY)*DesignSceneScale;
The DummyCubes ZoomRefX and ZoomRefY are sized so that they have a small margin to either the left-right or top-bottom edges of the viewing window. They are both positioned so that the front faces match. The target object is also positioned so that its front face matches those of the DummyCubes.
The formulas above allow the window size to be different from design time, but I actually didn't test this feature.
@Andreas, if you've been playing with SceneScale (as you mentioned in the comments), that means you are looking for a proper way to fit the object within the camera view, either by changing the camera distance/focal length or by resizing the object. If so, the easiest way to resize a single object to fit the screen is to use its BoundingSphereRadius property like this:
ResizeMultiplier := 2; //play with it, it depends on your camera params
GLFreeForm1.Scale.Scale(ResizeMultiplier / GLFreeForm1.BoundingSphereRadius);
You can add GLDummyCube as root object for all other scene objects and then resize GLDummyCube with method mentioned above.
