XNA model with bones

I created a model in Blender and then used an armature to give it a pose. How can I display this pose in XNA/MonoGame? I can't find any solution. I use the model exported to an FBX file.
My only code is below, and it draws the model without the armature's effect:
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    // Copy any parent transforms.
    Matrix[] transforms = new Matrix[myModel.Bones.Count];
    myModel.CopyAbsoluteBoneTransformsTo(transforms);

    // Draw the model. A model can have multiple meshes, so loop.
    foreach (ModelMesh mesh in myModel.Meshes)
    {
        // This is where the mesh orientation is set, as well as our camera and projection.
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = transforms[mesh.ParentBone.Index] * Matrix.CreateRotationY(modelRotation)
                * Matrix.CreateTranslation(modelPosition);
            effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
            effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f),
                aspectRatio, 1.0f, 10000.0f);
        }
        // Draw the mesh, using the effects set above.
        mesh.Draw();
    }

    base.Draw(gameTime);
}
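The stock Model/BasicEffect pipeline ignores skinning data, so the mesh always renders in its bind pose. One common approach is to run the FBX through a skinned-model content processor (for example, the one from the XNA Skinned Model sample), which replaces BasicEffect with SkinnedEffect on every mesh part and stores its skinning data in Model.Tag; you then upload the posed bone matrices before drawing. A minimal sketch, assuming that sample's SkinningData type (it is not a built-in XNA class):

// Sketch only: assumes the model was built with a skinned-model content
// processor that stores a SkinningData object (bind pose, inverse bind pose,
// skeleton hierarchy) in Model.Tag, as the XNA Skinned Model sample does.
SkinningData skinning = (SkinningData)myModel.Tag;
Matrix[] boneTransforms = new Matrix[skinning.BindPose.Count];
// Fill boneTransforms with the posed bone matrices here; for a static pose
// exported from Blender, these come from the armature's keyframes.
foreach (ModelMesh mesh in myModel.Meshes)
{
    foreach (SkinnedEffect effect in mesh.Effects) // SkinnedEffect is built into XNA 4 / MonoGame
    {
        effect.SetBoneTransforms(boneTransforms);
        effect.EnableDefaultLighting();
        effect.World = Matrix.CreateRotationY(modelRotation) * Matrix.CreateTranslation(modelPosition);
        effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
        effect.Projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
    }
    mesh.Draw();
}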

Related

Draw line instead of rendering Anchor in ARCore

I am new to AR. I am working on an app using ARCore, based on AR-REMOTE-SUPPORT.
When I draw from my screen it creates the default Android anchor; I want a line instead of the default anchor.
How can I achieve this?
Here is the function which places anchors on the screen:
public void onDrawFrame(GL10 gl) {
// Clear screen to notify driver it should not load any pixels from previous frame.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
if (mSession == null) {
return;
}
// Notify ARCore session that the view size changed so that the perspective matrix and
// the video background can be properly adjusted.
mDisplayRotationHelper.updateSessionIfNeeded(mSession);
try {
// Obtain the current frame from ARSession. When the configuration is set to
// UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
// camera framerate.
Frame frame = mSession.update();
Camera camera = frame.getCamera();
// Handle taps. Handling only one tap per frame, as taps are usually low frequency
// compared to frame rate.
MotionEvent tap = queuedSingleTaps.poll();
if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
for (HitResult hit : frame.hitTest(tap)) {
// Check if any plane was hit, and if it was hit inside the plane polygon
Trackable trackable = hit.getTrackable();
// Creates an anchor if a plane or an oriented point was hit.
if ((trackable instanceof Plane && ((Plane) trackable).isPoseInPolygon(hit.getHitPose()))
|| (trackable instanceof Point
&& ((Point) trackable).getOrientationMode()
== Point.OrientationMode.ESTIMATED_SURFACE_NORMAL)) {
// Hits are sorted by depth. Consider only closest hit on a plane or oriented point.
// Cap the number of objects created. This avoids overloading both the
// rendering system and ARCore.
if (anchors.size() >= 250) {
anchors.get(0).detach();
anchors.remove(0);
}
// Adding an Anchor tells ARCore that it should track this position in
// space. This anchor is created on the Plane to place the 3D model
// in the correct position relative both to the world and to the plane.
anchors.add(hit.createAnchor());
break;
}
}
}
// Draw background.
mBackgroundRenderer.draw(frame);
// If not tracking, don't draw 3d objects.
if (camera.getTrackingState() == TrackingState.PAUSED) {
return;
}
// Get projection matrix.
float[] projmtx = new float[16];
camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);
// Get camera matrix and draw.
float[] viewmtx = new float[16];
camera.getViewMatrix(viewmtx, 0);
// Compute lighting from average intensity of the image.
final float lightIntensity = frame.getLightEstimate().getPixelIntensity();
if (isShowPointCloud()) {
// Visualize tracked points.
PointCloud pointCloud = frame.acquirePointCloud();
mPointCloud.update(pointCloud);
mPointCloud.draw(viewmtx, projmtx);
// Application is responsible for releasing the point cloud resources after
// using it.
pointCloud.release();
}
// Check if we detected at least one plane. If so, hide the loading message.
if (mMessageSnackbar != null) {
for (Plane plane : mSession.getAllTrackables(Plane.class)) {
if (plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING
&& plane.getTrackingState() == TrackingState.TRACKING) {
hideLoadingMessage();
break;
}
}
}
if (isShowPlane()) {
// Visualize planes.
mPlaneRenderer.drawPlanes(
mSession.getAllTrackables(Plane.class), camera.getDisplayOrientedPose(), projmtx);
}
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (Anchor anchor : anchors) {
if (anchor.getTrackingState() != TrackingState.TRACKING) {
continue;
}
// Get the current pose of an Anchor in world space. The Anchor pose is updated
// during calls to session.update() as ARCore refines its estimate of the world.
anchor.getPose().toMatrix(mAnchorMatrix, 0);
// Update and draw the model and its shadow.
mVirtualObject.updateModelMatrix(mAnchorMatrix, mScaleFactor);
//mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
}
sendARViewMessage();
} catch (Throwable t) {
// Avoid crashing the application due to unhandled exceptions.
Log.e(TAG, "Exception on the OpenGL thread", t);
}
}
Any help would be appreciated. TIA.
One simple way to draw a line in ARCore is to create it between two anchor points.
The line itself is generally a 3D object also.
Here is a tested working example, based on the nice approach in this answer: https://stackoverflow.com/a/52816504/334402
private void drawLine(AnchorNode node1, AnchorNode node2) {
//Draw a line between two AnchorNodes (adapted from https://stackoverflow.com/a/52816504/334402)
Log.d(TAG,"drawLine");
Vector3 point1, point2;
point1 = node1.getWorldPosition();
point2 = node2.getWorldPosition();
//First, find the vector extending between the two points and define a look rotation
//in terms of this Vector.
final Vector3 difference = Vector3.subtract(point1, point2);
final Vector3 directionFromTopToBottom = difference.normalized();
final Quaternion rotationFromAToB =
Quaternion.lookRotation(directionFromTopToBottom, Vector3.up());
MaterialFactory.makeOpaqueWithColor(getApplicationContext(), new Color(0, 255, 244))
.thenAccept(
material -> {
/* Then, create a rectangular prism, using ShapeFactory.makeCube() and use the difference vector
to extend to the necessary length. */
Log.d(TAG,"drawLine inside .thenAccept");
ModelRenderable model = ShapeFactory.makeCube(
new Vector3(.01f, .01f, difference.length()),
Vector3.zero(), material);
/* Last, set the world rotation of the node to the rotation calculated earlier and set the world position to
the midpoint between the given points . */
Anchor lineAnchor = node2.getAnchor();
nodeForLine = new Node();
nodeForLine.setParent(node1);
nodeForLine.setRenderable(model);
nodeForLine.setWorldPosition(Vector3.add(point1, point2).scaled(.5f));
nodeForLine.setWorldRotation(rotationFromAToB);
}
);
}
You can see the full source here: https://github.com/mickod/LineView

ARCore Unity: How do I Start and Stop Plane Detection on Command?

I am creating an app with ARCore, but I don't want ARCore to look for planes as soon as the app starts. Instead, I want the plane detection to begin when I hit a button in my app. It would also be great if I could stop the plane detection on command as well.
Does anyone know how I could start and stop ARCore plane detection on command?
I am building the app in Unity.
Thanks so much in advance!
In ARPlaneVisualizer.cs, there is this code:
void OnEnable()
{
    m_PlaneLayer = LayerMask.NameToLayer("ARGameObject");
    ARInterface.planeAdded += PlaneAddedHandler;
    ARInterface.planeUpdated += PlaneUpdatedHandler;
    ARInterface.planeRemoved += PlaneRemovedHandler;
    HidePlane(true);
}

void OnDisable()
{
    ARInterface.planeAdded -= PlaneAddedHandler;
    ARInterface.planeUpdated -= PlaneUpdatedHandler;
    ARInterface.planeRemoved -= PlaneRemovedHandler;
    HidePlane(false);
}
You can use the OnEnable() code to start tracking and the OnDisable() code to stop tracking.
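For example (a sketch, not from the original answer), you could wire a UI button to toggle the visualizer component; Unity calls OnEnable()/OnDisable() automatically whenever a MonoBehaviour's enabled flag changes:

// Hypothetical sketch: assign the ARPlaneVisualizer instance in the Inspector.
public ARPlaneVisualizer planeVisualizer;

public void SetPlaneDetection(bool enable)
{
    // Toggling a MonoBehaviour's enabled flag fires OnEnable()/OnDisable(),
    // which subscribe/unsubscribe the plane event handlers shown above.
    planeVisualizer.enabled = enable;
}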
Another approach: create a bool to gate the surface detection code and initially set it to true:
bool isSurfaceDetected = true;

if (isSurfaceDetected)
{
    Session.GetTrackables<TrackedPlane>(_newPlanes, TrackableQueryFilter.New);
    // Iterate over planes found in this frame and instantiate corresponding
    // GameObjects to visualize them.
    foreach (var curPlane in _newPlanes)
    {
        // Instantiate a plane visualization prefab and set it to track the new plane.
        // The transform is set to the origin with an identity rotation since the mesh
        // for our prefab is updated in Unity world coordinates.
        var planeObject = Instantiate(plane, Vector3.zero, Quaternion.identity, transform);
        planeObject.GetComponent<DetectedPlaneVisualizer>().Initialize(curPlane);
        // Optionally apply a random color and grid rotation here.
    }
}
Create a stop button in the canvas and attach the method below:
public void StopTrack()
{
    // Set isSurfaceDetected to false to disable the plane detection code.
    isSurfaceDetected = false;
    // Tag the DetectedPlaneVisualizer prefab as "Plane" (or anything else).
    GameObject[] planes = GameObject.FindGameObjectsWithTag("Plane");
    // DetectedPlaneVisualizer spawns multiple polygons, so loop and disable the
    // DetectedPlaneVisualizer script attached to each prefab instance.
    for (int i = 0; i < planes.Length; i++)
    {
        planes[i].GetComponent<DetectedPlaneVisualizer>().enabled = false;
    }
}
Make sure the stop button method lives on the ARController.
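For completeness, a matching start method might look like this (a sketch mirroring StopTrack, not part of the original answer):

public void StartTrack()
{
    // Re-enable the plane detection code gated by the bool above.
    isSurfaceDetected = true;
    // Re-enable the DetectedPlaneVisualizer scripts that StopTrack() disabled.
    GameObject[] planes = GameObject.FindGameObjectsWithTag("Plane");
    for (int i = 0; i < planes.Length; i++)
    {
        planes[i].GetComponent<DetectedPlaneVisualizer>().enabled = true;
    }
}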

[XNA] Drawing a simple transparent plane

I'm working with XNA and I have a problem.
A basic problem, but argh!
I want to draw an ocean. A BASIC ocean: just a plane, blue and transparent. Just a plane.
I have tried with vertices, with models and textures.
How does the alpha channel work in XNA? StencilState, DepthBuffer, nothing works.
Can you explain how to do this? Is VertexPositionColor enough?
Excuse me, but I have been looking for a long time.
class Ocean
{
Effect shader0;
public Vector3 Position;
GraphicsDevice Graphics;
Camera camera;
Model Mesh;
Texture2D waterTexture;
Rectangle screen;
Texture2D test;
public Ocean(Vector3 pos, int size, GraphicsDevice gra, Camera cam, ContentManager content)
{
Graphics = gra;
shader0 = content.Load<Effect>("Ocean");
//shader0 = new BasicEffect(this.Graphics);
waterTexture = content.Load<Texture2D>("Images/shaderUnderwater");
screen = new Rectangle(0, 0, this.Graphics.Viewport.Width, this.Graphics.Viewport.Height);
Position = pos;
Mesh = content.Load<Model>("Models/ocean");
camera = cam;
foreach (ModelMesh mesh in this.Mesh.Meshes)
{
foreach (ModelMeshPart part in mesh.MeshParts)
{
part.Effect = shader0;
}
}
}
bool underWater;
Vector3 lightDirection = new Vector3(-1.0f, -1.0f, -1.0f);
public void Draw(SpriteBatch spriteBatch, Player player)
{
Matrix world = Matrix.CreateScale(100f) * Matrix.CreateRotationX(MathHelper.ToRadians(-90f)) * Matrix.CreateTranslation(Position);
//this.shader0.EnableDefaultLighting();
this.shader0.Parameters["World"].SetValue(world);
this.shader0.Parameters["View"].SetValue(player.Camera.View);
this.shader0.Parameters["Projection"].SetValue(player.Camera.Projection);
// Draw the model
foreach (ModelMesh mesh in this.Mesh.Meshes)
{
foreach (Effect effect in mesh.Effects)
{
effect.CurrentTechnique = effect.Techniques["Textured"];
effect.Parameters["DiffuseColor"].SetValue(new Vector4(1f, 0.2f, 0.2f, 1f) ); // a reddish light
effect.Parameters["DiffuseLightDirection"].SetValue(new Vector3(1, 0, 0) ); // coming along the x-axis
effect.Parameters["SpecularColor"].SetValue(new Vector4(0, 1, 0 ,1f) ); // with green highlights
effect.Parameters["World"].SetValue(world);
effect.Parameters["View"].SetValue(player.Camera.View);
effect.Parameters["Projection"].SetValue(player.Camera.Projection);
}
mesh.Draw();
}
this.Graphics.BlendState = BlendState.Opaque;
// Underwater effect
if (camera.Position.Y < Position.Y)
spriteBatch.Draw(this.waterTexture, this.screen, Color.White);
}
public void Update(GameTime gameTime)
{
}
void onWater()
{
underWater = true;
}
}
If you are using BasicEffect to render the ocean, then BasicEffect has an Alpha property which controls its transparency. Please see here: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.basiceffect.alpha.aspx
You can use BasicEffect to create transparency whichever way you are rendering your model.
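For illustration, a minimal sketch of the idea (not from the original answer), reusing the Graphics and Mesh fields from the Ocean class above and assuming world/view/projection are whatever the camera already supplies. Transparency also needs an alpha-blending BlendState, and the water should be drawn after the opaque geometry so there is something behind it to blend with:

// Enable alpha blending before drawing the transparent water.
Graphics.BlendState = BlendState.AlphaBlend;
// Read the depth buffer but don't write to it, the usual setting for transparent geometry.
Graphics.DepthStencilState = DepthStencilState.DepthRead;

foreach (ModelMesh mesh in Mesh.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.EnableDefaultLighting();
        effect.DiffuseColor = new Vector3(0.0f, 0.3f, 0.8f); // ocean blue
        effect.Alpha = 0.5f;                                 // 50% transparent
        effect.World = world;
        effect.View = view;
        effect.Projection = projection;
    }
    mesh.Draw();
}

// Restore the defaults for the rest of the scene.
Graphics.BlendState = BlendState.Opaque;
Graphics.DepthStencilState = DepthStencilState.Default;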

Using Bloom effect makes black background blue/indigo in MonoGame (XNA)

My game, made in MonoGame (an XNA work-alike that runs on Mac), needs the Bloom effect. I followed the code in the sample over at MSDN (http://xbox.create.msdn.com/en-US/education/catalog/sample/bloom). When I put this effect into my 2D game, it made the background a bluish indigo/purple. I looked around on Google for why this happens but couldn't find an answer. I found a StackOverflow question that said something about Pre-Bloom, which I'm guessing is what this is. I put my draw code in a component. My Draw code is:
spriteBatch.Begin();
((Pump)Game).bloomComponent.BeginDraw();
grid.DrawPeices(spriteBatch);
spriteBatch.End();
The DrawPeices method just goes through a list and draws each sprite normally. My BloomComponent class, which I initialize in my main Game class, is:
#region File Description
//-----------------------------------------------------------------------------
// BloomComponent.cs
//
// Microsoft XNA Community Game Platform
// Copyright (C) Microsoft Corporation. All rights reserved.
//-----------------------------------------------------------------------------
#endregion
#region Using Statements
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using System.IO;
using System.Reflection;
#endregion
namespace Pump.Libraries.Bloom
{
public class BloomComponent : DrawableGameComponent
{
#region Fields
SpriteBatch spriteBatch;
Effect bloomExtractEffect;
Effect bloomCombineEffect;
Effect gaussianBlurEffect;
RenderTarget2D sceneRenderTarget;
RenderTarget2D renderTarget1;
RenderTarget2D renderTarget2;
RenderTarget2D finalRenderTarget;
public RenderTarget2D FinalRenderTarget { get { return finalRenderTarget; } }
byte[] shaderCode;
// Choose what display settings the bloom should use.
public BloomSettings Settings
{
get { return settings; }
set { settings = value; }
}
BloomSettings settings = BloomSettings.PresetSettings[0];
// Optionally displays one of the intermediate buffers used
// by the bloom postprocess, so you can see exactly what is
// being drawn into each rendertarget.
public enum IntermediateBuffer
{
PreBloom,
BlurredHorizontally,
BlurredBothWays,
FinalResult,
}
public IntermediateBuffer ShowBuffer
{
get { return showBuffer; }
set { showBuffer = value; }
}
IntermediateBuffer showBuffer = IntermediateBuffer.FinalResult;
#endregion
#region Initialization
public BloomComponent(Game game)
: base(game)
{
if (game == null)
throw new ArgumentNullException("game");
}
/// <summary>
/// Load your graphics content.
/// </summary>
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
//LOAD ALL THE SHADERS
var assembly = Assembly.GetExecutingAssembly();
var stream = assembly.GetManifestResourceStream("Pump.Shaders.BloomExtract.mgfxo");
using (var ms = new MemoryStream())
{
stream.CopyTo(ms);
shaderCode = ms.ToArray();
}
stream.Close();
bloomExtractEffect = new Effect(GraphicsDevice, shaderCode);
assembly = Assembly.GetExecutingAssembly();
stream = assembly.GetManifestResourceStream("Pump.Shaders.BloomCombine.mgfxo");
using (var ms = new MemoryStream())
{
stream.CopyTo(ms);
shaderCode = ms.ToArray();
}
stream.Close();
bloomCombineEffect = new Effect(GraphicsDevice, shaderCode);
assembly = Assembly.GetExecutingAssembly();
stream = assembly.GetManifestResourceStream("Pump.Shaders.GaussianBlur.mgfxo");
using (var ms = new MemoryStream())
{
stream.CopyTo(ms);
shaderCode = ms.ToArray();
}
stream.Close();
gaussianBlurEffect = new Effect(GraphicsDevice, shaderCode);
/*bloomExtractEffect = Game.Content.Load<Effect>("BloomExtract");
bloomCombineEffect = Game.Content.Load<Effect>("BloomCombine");
gaussianBlurEffect = Game.Content.Load<Effect>("GaussianBlur");*/
// Look up the resolution and format of our main backbuffer.
PresentationParameters pp = GraphicsDevice.PresentationParameters;
int width = pp.BackBufferWidth;
int height = pp.BackBufferHeight;
SurfaceFormat format = pp.BackBufferFormat;
// Create a texture for rendering the main scene, prior to applying bloom.
sceneRenderTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
format, pp.DepthStencilFormat, pp.MultiSampleCount,
RenderTargetUsage.DiscardContents);
finalRenderTarget = new RenderTarget2D(GraphicsDevice, width, height, false,
format, pp.DepthStencilFormat, pp.MultiSampleCount,
RenderTargetUsage.DiscardContents);
// Create two rendertargets for the bloom processing. These are half the
// size of the backbuffer, in order to minimize fillrate costs. Reducing
// the resolution in this way doesn't hurt quality, because we are going
// to be blurring the bloom images in any case.
width /= 2;
height /= 2;
renderTarget1 = new RenderTarget2D(GraphicsDevice, width, height, false, format, DepthFormat.None);
renderTarget2 = new RenderTarget2D(GraphicsDevice, width, height, false, format, DepthFormat.None);
}
/// <summary>
/// Unload your graphics content.
/// </summary>
protected override void UnloadContent()
{
sceneRenderTarget.Dispose();
renderTarget1.Dispose();
renderTarget2.Dispose();
finalRenderTarget.Dispose(); // finalRenderTarget is also created in LoadContent, so release it here too
}
#endregion
#region Draw
/// <summary>
/// This should be called at the very start of the scene rendering. The bloom
/// component uses it to redirect drawing into its custom rendertarget, so it
/// can capture the scene image in preparation for applying the bloom filter.
/// </summary>
public void BeginDraw()
{
if (Visible)
{
GraphicsDevice.SetRenderTarget(sceneRenderTarget);
}
}
/// <summary>
/// This is where it all happens. Grabs a scene that has already been rendered,
/// and uses postprocess magic to add a glowing bloom effect over the top of it.
/// </summary>
public override void Draw(GameTime gameTime)
{
GraphicsDevice.SamplerStates[1] = SamplerState.LinearClamp;
// Pass 1: draw the scene into rendertarget 1, using a
// shader that extracts only the brightest parts of the image.
bloomExtractEffect.Parameters["BloomThreshold"].SetValue(
Settings.BloomThreshold);
DrawFullscreenQuad(sceneRenderTarget, renderTarget1,
bloomExtractEffect,
IntermediateBuffer.PreBloom);
// Pass 2: draw from rendertarget 1 into rendertarget 2,
// using a shader to apply a horizontal gaussian blur filter.
SetBlurEffectParameters(1.0f / (float)renderTarget1.Width, 0);
DrawFullscreenQuad(renderTarget1, renderTarget2,
gaussianBlurEffect,
IntermediateBuffer.BlurredHorizontally);
// Pass 3: draw from rendertarget 2 back into rendertarget 1,
// using a shader to apply a vertical gaussian blur filter.
SetBlurEffectParameters(0, 1.0f / (float)renderTarget1.Height);
DrawFullscreenQuad(renderTarget2, renderTarget1,
gaussianBlurEffect,
IntermediateBuffer.BlurredBothWays);
// Pass 4: draw both rendertarget 1 and the original scene
// image back into the main backbuffer, using a shader that
// combines them to produce the final bloomed result.
GraphicsDevice.SetRenderTarget(null);
EffectParameterCollection parameters = bloomCombineEffect.Parameters;
parameters["BloomIntensity"].SetValue(Settings.BloomIntensity);
parameters["BaseIntensity"].SetValue(Settings.BaseIntensity);
parameters["BloomSaturation"].SetValue(Settings.BloomSaturation);
parameters["BaseSaturation"].SetValue(Settings.BaseSaturation);
GraphicsDevice.Textures[1] = sceneRenderTarget;
Viewport viewport = GraphicsDevice.Viewport;
DrawFullscreenQuad(renderTarget1,
viewport.Width, viewport.Height,
bloomCombineEffect,
IntermediateBuffer.FinalResult);
}
/// <summary>
/// Helper for drawing a texture into a rendertarget, using
/// a custom shader to apply postprocessing effects.
/// </summary>
void DrawFullscreenQuad(Texture2D texture, RenderTarget2D renderTarget,
Effect effect, IntermediateBuffer currentBuffer)
{
GraphicsDevice.SetRenderTarget(renderTarget);
DrawFullscreenQuad(texture,
renderTarget.Width, renderTarget.Height,
effect, currentBuffer);
}
/// <summary>
/// Helper for drawing a texture into the current rendertarget,
/// using a custom shader to apply postprocessing effects.
/// </summary>
void DrawFullscreenQuad(Texture2D texture, int width, int height,
Effect effect, IntermediateBuffer currentBuffer)
{
// If the user has selected one of the show intermediate buffer options,
// we still draw the quad to make sure the image will end up on the screen,
// but might need to skip applying the custom pixel shader.
if (showBuffer < currentBuffer)
{
effect = null;
}
spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect);
spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);
spriteBatch.End();
}
/// <summary>
/// Computes sample weightings and texture coordinate offsets
/// for one pass of a separable gaussian blur filter.
/// </summary>
void SetBlurEffectParameters(float dx, float dy)
{
// Look up the sample weight and offset effect parameters.
EffectParameter weightsParameter, offsetsParameter;
weightsParameter = gaussianBlurEffect.Parameters["SampleWeights"];
offsetsParameter = gaussianBlurEffect.Parameters["SampleOffsets"];
// Look up how many samples our gaussian blur effect supports.
int sampleCount = weightsParameter.Elements.Count;
// Create temporary arrays for computing our filter settings.
float[] sampleWeights = new float[sampleCount];
Vector2[] sampleOffsets = new Vector2[sampleCount];
// The first sample always has a zero offset.
sampleWeights[0] = ComputeGaussian(0);
sampleOffsets[0] = new Vector2(0);
// Maintain a sum of all the weighting values.
float totalWeights = sampleWeights[0];
// Add pairs of additional sample taps, positioned
// along a line in both directions from the center.
for (int i = 0; i < sampleCount / 2; i++)
{
// Store weights for the positive and negative taps.
float weight = ComputeGaussian(i + 1);
sampleWeights[i * 2 + 1] = weight;
sampleWeights[i * 2 + 2] = weight;
totalWeights += weight * 2;
// To get the maximum amount of blurring from a limited number of
// pixel shader samples, we take advantage of the bilinear filtering
// hardware inside the texture fetch unit. If we position our texture
// coordinates exactly halfway between two texels, the filtering unit
// will average them for us, giving two samples for the price of one.
// This allows us to step in units of two texels per sample, rather
// than just one at a time. The 1.5 offset kicks things off by
// positioning us nicely in between two texels.
float sampleOffset = i * 2 + 1.5f;
Vector2 delta = new Vector2(dx, dy) * sampleOffset;
// Store texture coordinate offsets for the positive and negative taps.
sampleOffsets[i * 2 + 1] = delta;
sampleOffsets[i * 2 + 2] = -delta;
}
// Normalize the list of sample weightings, so they will always sum to one.
for (int i = 0; i < sampleWeights.Length; i++)
{
sampleWeights[i] /= totalWeights;
}
// Tell the effect about our new filter settings.
weightsParameter.SetValue(sampleWeights);
offsetsParameter.SetValue(sampleOffsets);
}
/// <summary>
/// Evaluates a single point on the gaussian falloff curve.
/// Used for setting up the blur filter weightings.
/// </summary>
float ComputeGaussian(float n)
{
float theta = Settings.BlurAmount;
return (float)((1.0 / Math.Sqrt(2 * Math.PI * theta)) *
Math.Exp(-(n * n) / (2 * theta * theta)));
}
#endregion
}
}
And finally, BloomSettings.cs:
#region File Description
//-----------------------------------------------------------------------------
// BloomSettings.cs
//
// Microsoft XNA Community Game Platform
// Copyright (C) Microsoft Corporation. All rights reserved.
//-----------------------------------------------------------------------------
#endregion
namespace Pump.Libraries.Bloom
{
/// <summary>
/// Class holds all the settings used to tweak the bloom effect.
/// </summary>
public class BloomSettings
{
#region Fields
// Name of a preset bloom setting, for display to the user.
public readonly string Name;
// Controls how bright a pixel needs to be before it will bloom.
// Zero makes everything bloom equally, while higher values select
// only brighter colors. Somewhere between 0.25 and 0.5 is good.
public readonly float BloomThreshold;
// Controls how much blurring is applied to the bloom image.
// The typical range is from 1 up to 10 or so.
public readonly float BlurAmount;
// Controls the amount of the bloom and base images that
// will be mixed into the final scene. Range 0 to 1.
public readonly float BloomIntensity;
public readonly float BaseIntensity;
// Independently control the color saturation of the bloom and
// base images. Zero is totally desaturated, 1.0 leaves saturation
// unchanged, while higher values increase the saturation level.
public readonly float BloomSaturation;
public readonly float BaseSaturation;
#endregion
/// <summary>
/// Constructs a new bloom settings descriptor.
/// </summary>
public BloomSettings(string name, float bloomThreshold, float blurAmount,
float bloomIntensity, float baseIntensity,
float bloomSaturation, float baseSaturation)
{
Name = name;
BloomThreshold = bloomThreshold;
BlurAmount = blurAmount;
BloomIntensity = bloomIntensity;
BaseIntensity = baseIntensity;
BloomSaturation = bloomSaturation;
BaseSaturation = baseSaturation;
}
/// <summary>
/// Table of preset bloom settings, used by the sample program.
/// </summary>
public static BloomSettings[] PresetSettings =
{
// Name Thresh Blur Bloom Base BloomSat BaseSat
new BloomSettings("Default", 0.25f, 4, 1.25f, 1, 1, 1),
new BloomSettings("Soft", 0, 3, 1, 1, 1, 1),
new BloomSettings("Desaturated", 0.5f, 8, 2, 1, 0, 1),
new BloomSettings("Saturated", 0.25f, 4, 2, 1, 2, 0),
new BloomSettings("Blurry", 0, 2, 1, 0.1f, 1, 1),
new BloomSettings("Subtle", 0.5f, 2, 1, 1, 1, 1),
};
}
}
Can somebody explain why the screen is blue/indigo, and how I can fix it? My original background was black, in case you need to know. Oh, and I have a screenshot.
Looking at your screenshot and code, it appears that you are not clearing your render targets. With RenderTargetUsage.DiscardContents, a target's previous contents are undefined after SetRenderTarget (XNA typically fills discarded targets with a purple colour, which matches the indigo tint you are seeing), so I would modify your BeginDraw code to call Clear so that your targets are initialized to a known value before you draw to them.
/// <summary>
/// This should be called at the very start of the scene rendering. The bloom
/// component uses it to redirect drawing into its custom rendertarget, so it
/// can capture the scene image in preparation for applying the bloom filter.
/// </summary>
public void BeginDraw()
{
    if (Visible)
    {
        GraphicsDevice.SetRenderTarget(renderTarget1);
        GraphicsDevice.Clear(Color.Black);
        GraphicsDevice.SetRenderTarget(renderTarget2);
        GraphicsDevice.Clear(Color.Black);
        GraphicsDevice.SetRenderTarget(FinalRenderTarget);
        GraphicsDevice.Clear(Color.Black);
        GraphicsDevice.SetRenderTarget(sceneRenderTarget);
    }
}
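Note that the last target set, sceneRenderTarget, is still not cleared here; that is fine if your scene drawing begins with its own GraphicsDevice.Clear, but the spriteBatch.Begin() snippet in the question never clears, so you may need to add a Clear for that target as well.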

SlimDX Camera setup

Please tell me what I'm doing wrong.
This is my Camera class:
public class Camera
{
    public Matrix view;
    public Matrix world;
    public Matrix projection;
    public Vector3 position;
    public Vector3 target;
    public float fov;

    public Camera(Vector3 pos, Vector3 tar)
    {
        this.position = pos;
        this.target = tar;
        view = Matrix.LookAtLH(position, target, Vector3.UnitY);
        projection = Matrix.PerspectiveFovLH(fov, 1.6f, 0.001f, 100.0f);
        world = Matrix.Identity;
    }
}
This is my constant buffer struct:
struct ConstantBuffer
{
    internal Matrix mWorld;
    internal Matrix mView;
    internal Matrix mProjection;
};
and here I'm drawing the triangle and setting up the camera:
x += 0.01f;
camera.position = new Vector3(x, 0.0f, 0.0f);
camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.0f, 100.0f);

var buffer = new Buffer(device, new BufferDescription
{
    Usage = ResourceUsage.Default,
    SizeInBytes = sizeof(ConstantBuffer),
    BindFlags = BindFlags.ConstantBuffer
});

////////////////////////////// camera setup
ConstantBuffer cb;
cb.mProjection = Matrix.Transpose(camera.projection);
cb.mView = Matrix.Transpose(camera.view);
cb.mWorld = Matrix.Transpose(camera.world);

var data = new DataStream(sizeof(ConstantBuffer), true, true);
data.Write(cb);
data.Position = 0;
context.UpdateSubresource(new DataBox(0, 0, data), buffer, 0);
//////////////////////////////////////////////////////////////////////

// set the shaders
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);

// draw the triangle
context.Draw(4, 0);
swapChain.Present(0, PresentFlags.None);
Please, if you can see what's wrong, tell me! :) I have already spent two days on this.
Attempt the second:
@paiden I initialized fov now (thanks very much :) ) but there is still no effect (now fov = 1.5707963267f;), and @Nico Schertler, thank you too; I put it to use with
context.VertexShader.SetConstantBuffer(buffer, 0);
context.PixelShader.SetConstantBuffer(buffer, 0);
but there is still no effect... probably my .fx file is wrong? For what purpose do I need this:
cbuffer ConstantBuffer : register(b0)
{
    matrix World;
    matrix View;
    matrix Projection;
}
Attempt the third:
@MHGameWork Thank you very much too, but still no effect ;)
If anyone has 5 minutes, I can send the source code by e-mail and then we will publish the answer... I guess it will help some newbies like me :)
unsafe
{
    x += 0.01f;
    camera.position = new Vector3(x, 0.0f, 0.0f);
    camera.view = Matrix.LookAtLH(camera.position, camera.target, Vector3.UnitY);
    camera.projection = Matrix.PerspectiveFovLH(camera.fov, 1.6f, 0.01f, 100.0f);

    var buffer = new Buffer(device, new BufferDescription
    {
        Usage = ResourceUsage.Default,
        SizeInBytes = sizeof(ConstantBuffer),
        BindFlags = BindFlags.ConstantBuffer
    });
THE PROBLEM NOW: I SEE MY TRIANGLE BUT THE CAMERA DOESN'T MOVE.
You have set your camera's near plane to 0. This makes the values in your projection matrix divide by zero, so you get a matrix filled with NaNs.
Use a near-plane value of about 0.01 in your case; it will solve the problem.
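Putting the two fixes together, here is a hedged sketch of the corrected constructor (mine, not from the thread): give fov a sensible value and keep the near plane strictly positive.

public Camera(Vector3 pos, Vector3 tar)
{
    this.position = pos;
    this.target = tar;
    this.fov = (float)(Math.PI / 4.0); // 45 degrees; fov was previously left at its default of 0
    view = Matrix.LookAtLH(position, target, Vector3.UnitY);
    // The near plane must be strictly positive; 0 produces NaNs in the projection matrix.
    projection = Matrix.PerspectiveFovLH(fov, 1.6f, 0.01f, 100.0f);
    world = Matrix.Identity;
}

As for the cbuffer question above: register(b0) is the constant-buffer slot that context.VertexShader.SetConstantBuffer(buffer, 0) binds to, and its members must be laid out in the same order as the ConstantBuffer struct written into the buffer.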
I hope you still need help. Here is my camera class, which you can use and easily move around the scene with mouse/keyboard:
http://pastebin.com/JtiUSiHZ
Call the TakeALook() method each frame (or whenever you move the camera).
You can move it with the CameraMove method. It takes a Vector3 for where you want to move your camera (don't give it huge values; I use 0.001f per frame).
And with CameraRotate() you can turn it around; it takes a Vector2 as a left-right and up-down rotation.
It's pretty easy. I use event handlers to call these two functions, but feel free to edit as you wish.
