AssetNotFoundException in SharpDX

I'm trying to make a program that displays a picture in a window.
Here is the relevant part of the code:
public Texture2D tulTexture;
//...
protected override void LoadContent()
{
    // Instantiate a SpriteBatch
    spriteBatch = ToDisposeContent(new SpriteBatch(GraphicsDevice));

    // Loads the balls texture (32 textures (32x32) stored vertically => 32 x 1024).
    // The [Balls.dds] file is defined with the build action [ToolkitTexture] in the project
    tulTexture = this.Content.Load<Texture2D>("T.jpg");

    // Loads a sprite font
    // The [Arial16.xml] file is defined with the build action [ToolkitFont] in the project
    base.LoadContent();
}
When I run the program I get an AssetNotFoundException, but that can't be right: I do have this asset!
Most likely you didn't configure your texture to be part of the build. You need to set the build action "ToolkitTexture" on the texture in Visual Studio, and then read the texture as Content.Load<Texture2D>("T"), without the ".jpg", as is done in the SharpDX samples. This is explained in the comments of the code you pasted.
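For illustration, here is a minimal sketch of the corrected LoadContent, assuming the file is added to the project as "T.jpg" with the ToolkitTexture build action (the toolkit then registers it under the asset name "T"):

protected override void LoadContent()
{
    // Instantiate a SpriteBatch
    spriteBatch = ToDisposeContent(new SpriteBatch(GraphicsDevice));

    // Load by asset name, without the file extension; the toolkit
    // resolves "T" to the texture compiled at build time.
    tulTexture = this.Content.Load<Texture2D>("T");

    base.LoadContent();
}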

Related

Choosing between buffers in a Metal shader

I'm struggling with porting my OpenGL application to Metal. In my old app, I used to bind two buffers, one with vertices and their colours and one with vertices and their texture coordinates, and switch between the two based on some app logic. In Metal I started from the Hello Triangle example, where I tried running this vertex shader:
vertex RasterizerData
vertexShader(uint vertexID [[vertex_id]],
             constant AAPLVertex1 *vertices1 [[buffer(AAPLVertexInputIndexVertices1)]],
             constant AAPLVertex2 *vertices2 [[buffer(AAPLVertexInputIndexVertices2)]],
             constant bool &useFirstBuffer [[buffer(AAPLVertexInputIndexUseFirstBuffer)]])
{
    float2 pixelSpacePosition;
    if (useFirstBuffer) {
        pixelSpacePosition = vertices1[vertexID].position.xy;
    } else {
        pixelSpacePosition = vertices2[vertexID].position.xy;
    }
    ...
and this Objective-C code:
bool useFirstBuffer = true;
[renderEncoder setVertexBytes:&useFirstBuffer
                       length:sizeof(bool)
                      atIndex:AAPLVertexInputIndexUseFirstBuffer];
[renderEncoder setVertexBytes:triangleVertices
                       length:sizeof(triangleVertices)
                      atIndex:AAPLVertexInputIndexVertices1];
(where AAPLVertexInputIndexVertices1 = 0, AAPLVertexInputIndexVertices2 = 1 and AAPLVertexInputIndexUseFirstBuffer = 3), which should result in vertices2 never getting accessed, but still I get the error: failed assertion 'Vertex Function(vertexShader): missing buffer binding at index 1 for vertices2[0].'
Everything works if I replace if (useFirstBuffer) with if (true) in the Metal code. What is wrong?
When you're hard-coding the conditional, the compiler is smart enough to eliminate the branch that references the absent buffer (via dead-code elimination), but when the conditional must be evaluated at runtime, the compiler doesn't know that the branch is never taken.
Since all declared buffer parameters must be bound, leaving the unreferenced buffer unbound trips the validation layer. You could bind a few "dummy" bytes at the Vertices2 slot (using -setVertexBytes:length:atIndex:) when not following that path to get around this. It's not important that the buffers have the same length, since, after all, the dummy buffer will never actually be accessed.
Look at the atIndex values: the calling code binds buffers only at AAPLVertexInputIndexUseFirstBuffer (3) and AAPLVertexInputIndexVertices1 (0), while the shader declares buffer parameters at indices 0, 1 and 3. Nothing is ever bound at AAPLVertexInputIndexVertices2 (1), and that is exactly the binding the assertion reports as missing.

How can I trim an image using Magick.net keeping clipping paths if they exist

When trimming images with Magick.NET (to remove white areas around the main object of the image) that have a clipping path, the path is not synchronized with the new proportions of the image.
Is there a way to handle this in Magick.NET, so that the path still traces the objects it traced before trimming?
(Magick.NET uses ImageMagick for its image processing, so if anyone knows how to do it in ImageMagick, perhaps it's easily "translated" into Magick.NET.)
Adding some more information:
Here is a link to a simple image with a clipping path in it (I just made a few strokes in Photoshop and made it into a path):
Zip archive with sample files in different formats, containing paths.
And below you will find a piece of code that uses Magick.NET to trim that image, with the resulting displacement of the path.
// Add a reference to Magick.NET-Q8-AnyCPU via NuGet
using ImageMagick;
using System;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            string path = @"C:\Temp\Path-test-jpg.jpg";
            MagickNET.SetGhostscriptDirectory(@"C:\Temp\ConsoleApplication2\bin\Debug");
            using (MagickImage image = new MagickImage(path))
            {
                // DPI
                Console.WriteLine(image.Density);

                // Trim
                image.Trim();
                image.Quality = 99;
                image.Write(@"C:\Temp\test-out.jpg");
            }
        }
    }
}
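No complete recipe, but one direction worth exploring: after Trim(), ImageMagick records the crop offset in the page geometry, so you can read how far the canvas moved and shift the path's coordinates yourself. A minimal sketch; the 8BIM attribute key for the clipping path is an assumption on my part, not a verified Magick.NET recipe:

using ImageMagick;
using System;

class TrimOffsetSketch
{
    static void Main()
    {
        using (MagickImage image = new MagickImage(@"C:\Temp\Path-test-jpg.jpg"))
        {
            image.Trim();

            // After Trim() the page geometry should still record where the
            // trimmed region sat on the original canvas (this mirrors
            // ImageMagick's -trim behaviour).
            Console.WriteLine("Trim offset: {0},{1}", image.Page.X, image.Page.Y);

            // ImageMagick exposes the clipping path from the 8BIM profile
            // as an attribute; the exact key below is an assumption:
            string clipPath = image.GetAttribute("8BIM:1999,2998:#1");
            Console.WriteLine(clipPath);

            // A full fix would shift every path coordinate by -Page.X/-Page.Y,
            // write the adjusted path back, then RePage() and Write().
        }
    }
}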

SharpDX: How to place a SharpDX window in a WinForms window?

I'm trying to place a SharpDX window inside a WinForms window, like in the following video:
http://www.youtube.com/watch?v=g-JupOxwB-k
In SharpDX this method doesn't work. Can anyone tell me how to EASILY do this?
Don't think of it as putting a SharpDX window into a WinForms window. Instead, think of it as outputting SharpDX to the window's handle (sort of).
The key is the SharpDX.DXGI.SwapChain. When creating it you will need a SwapChainDescription. I like to create mine like this:
SwapChainDescription scd = new SwapChainDescription()
{
    //set other fields
    OutputHandle = yourform.Handle,
    //set other fields
};
So when you call SwapChain.Present() it will render to the form.
This is the basic way to do it with straight SharpDX, not the toolkit stuff.
EDIT 04/22/2019: links do not work --- 01/05/2022: link fixed
If you want to use the toolkit's GraphicsDevice, you will have to set the Presenter property, in almost the same way you set the window handle in the presentation parameters.
https://github.com/sharpdx/Toolkit/tree/master/Documentation
Also, the toolkit has the RenderForm, which plays nicely with the Game class.
EDIT (DirectX Example)
Here is an example using straight SharpDX (no Toolkit). For complete examples you should refer to the GitHub examples HERE.
As stated above, all you need to do to render to a WinForms window is pass its Handle to the SwapChain.
Visual Studio 2012.
Add the SharpDX references (all other references are the default WinForms project references).
Some using statements to make things easier:
namespace YourNameSpaceHere
{
    using Device = SharpDX.Direct3D11.Device;
    using Buffer = SharpDX.Direct3D11.Buffer;

    //...the rest of the application
}
The Form class: here we make the device, swap chain, render target, and render target view fields of the Form class we are declaring:
public partial class Form1 : Form //default VS2012 declaration
{
    Device d;                    //Direct3D11
    SwapChain sc;                //DXGI
    Texture2D target;            //Direct3D11
    RenderTargetView targetView; //Direct3D11

    //...the rest of the form
}
Initializing the Device and SwapChain: this is what works for me on my system. If you have problems, then you need to research your specific implementation and hardware. DirectX (and by extension SharpDX) has methods by which you can detect what the hardware will support.
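For example, a quick capability check might look like this (a sketch using the SharpDX calls I'd reach for here; it assumes the device d from the example below has already been created):

// Ask SharpDX which Direct3D feature level the default adapter supports
// before committing to a device configuration.
var maxLevel = SharpDX.Direct3D11.Device.GetSupportedFeatureLevel();
Console.WriteLine("Highest supported feature level: " + maxLevel);

// Given a created device 'd', ask how many MSAA quality levels a
// format/sample-count combination supports (0 means unsupported).
int quality = d.CheckMultisampleQualityLevels(Format.R8G8B8A8_UNorm, 4);
Console.WriteLine("4x MSAA quality levels for R8G8B8A8_UNorm: " + quality);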
The main code example:
using System;
using System.ComponentModel; //needed to override OnClosing
//I removed useless usings
using System.Windows.Forms;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using SharpDX;

namespace WindowsFormsApplication2
{
    using Device = SharpDX.Direct3D11.Device;
    using Buffer = SharpDX.Direct3D11.Buffer;

    public partial class Form1 : Form
    {
        Device d;
        SwapChain sc;
        Texture2D target;
        RenderTargetView targetView;

        public Form1()
        {
            InitializeComponent();
            SwapChainDescription scd = new SwapChainDescription()
            {
                BufferCount = 1, //how many buffers are used for writing. it's recommended to have at least 2, but this is an example
                Flags = SwapChainFlags.None,
                IsWindowed = true, //it's windowed
                ModeDescription = new ModeDescription(
                    this.ClientSize.Width,  //window's viewable width
                    this.ClientSize.Height, //window's viewable height
                    new Rational(60, 1),    //refresh rate
                    Format.R8G8B8A8_UNorm), //pixel format; research this for your specific implementation
                OutputHandle = this.Handle, //the magic
                SampleDescription = new SampleDescription(1, 0), //the first number is how many samples to take; anything above one is multisampling
                SwapEffect = SwapEffect.Discard,
                Usage = Usage.RenderTargetOutput
            };
            Device.CreateWithSwapChain(
                SharpDX.Direct3D.DriverType.Hardware, //hardware if you have a graphics card, otherwise you can use software
                DeviceCreationFlags.Debug, //helps debugging; don't use this for the release version
                scd,          //the swap chain description made above
                out d, out sc //our DirectX objects
            );
            target = Texture2D.FromSwapChain<Texture2D>(sc, 0);
            targetView = new RenderTargetView(d, target);
            d.ImmediateContext.OutputMerger.SetRenderTargets(targetView);
        }

        protected override void OnClosing(CancelEventArgs e)
        {
            //dispose of all objects
            d.Dispose();
            sc.Dispose();
            target.Dispose();
            targetView.Dispose();
            base.OnClosing(e);
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            //I am rendering here for this example;
            //normally I use a separate thread to call Draw() and Present() in a loop
            d.ImmediateContext.ClearRenderTargetView(targetView, Color.CornflowerBlue); //color to make it look like the default XNA project output
            sc.Present(0, PresentFlags.None);
            base.OnPaint(e);
        }
    }
}
This is meant to get you started using DirectX via SharpDX in a managed environment, specifically C# on Windows. There is much, much more you will need to get something real off the ground, but this is the gateway to rendering on a WinForms window using SharpDX. I don't explain things like vertex/index buffers or rendering textures/sprites, because they are out of scope for the question.
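Picking up the comment in OnPaint above: a minimal sketch of what that separate render thread could look like (my own illustration, not part of the original answer; a real version needs synchronization and a clean shutdown path):

private volatile bool running = true; //set to false in OnClosing before disposing

private void StartRenderLoop()
{
    var thread = new System.Threading.Thread(() =>
    {
        while (running)
        {
            //Draw: clear the target and issue draw calls here
            d.ImmediateContext.ClearRenderTargetView(targetView, Color.CornflowerBlue);

            //Present the back buffer to the form (1 = wait for vsync)
            sc.Present(1, PresentFlags.None);
        }
    });
    thread.IsBackground = true; //don't keep the process alive after the form closes
    thread.Start();
}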

BMP (OR JPG) to DDS converter programmatically

I have a BMP (or JPG) file.
I need to convert it to a DDS file programmatically (I can use C++, or C# with or without .NET; I could try any other language if I saw some clues in it).
I know there is software that does this, but I need to integrate the conversion into my program; it is one step of a longer chain of manipulations in my application.
My questions are:
1) Is there any open-source program that does this, so I can look into its code?
2) Are there any tutorials somewhere on the web that can help me write this code?
I could not find any helpful information.
Thank you!
I'm sure there are standalone converters you could call via the command line, but if you need a programmatic solution, the easiest way I can think of is to use the built-in XNA classes to do the job. Because XNA handles all these file formats, you can open the source .bmp and then save it back as .dds (using the Texture class):
public static void ConvertToDds(
    GraphicsDevice graphicsDevice, string sourcePath, string targetPath)
{
    Texture.FromFile(graphicsDevice, sourcePath)
           .Save(targetPath, ImageFileFormat.Dds);
}
Things changed with XNA 4.0 (these methods were removed); there, use Texture2D to read and the DDSLib library to write:
public static void ConvertToDds(
    GraphicsDevice graphicsDevice, string sourcePath, string targetPath)
{
    using (var stream = File.OpenRead(sourcePath)) //File.Open needs a FileMode; OpenRead is enough here
    {
        var texture = Texture2D.FromStream(graphicsDevice, stream);
        DDSLib.DDSToFile(targetPath, true, texture, false);
    }
}
See the linked pages for more details and examples. By the way, you can't have C# without .NET!
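For instance, a hypothetical call site inside an XNA Game subclass, once a GraphicsDevice exists (paths are placeholders):

protected override void LoadContent()
{
    //GraphicsDevice is valid by the time LoadContent runs
    ConvertToDds(GraphicsDevice, @"C:\images\input.bmp", @"C:\images\output.dds");
    base.LoadContent();
}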

Tracking blobs with OpenCV

I have an EMGU (OpenCV wrapper) program that subtracts the background from a camera feed and extracts nice clean blobs.
Now I need something that will track these blobs and assign them IDs.
Any suggestions/libraries?
Thanks,
SW
Well, if you have multiple objects that you would like to track, you could try a particle filter.
A particle filter basically "disposes" (scatters) particles over the image, each carrying a certain weight. In each time step these weights are updated by comparing the particles with the actual measured value of the object at that time. Particles with high weight then spawn more particles in their direction (with a slight random component added to the direction) for the next time step.
After a few time steps the particles will group around the object's measured position. That's why this method is sometimes also called the "survival of the fittest" method...
So this whole thing forms a cycle:

Initialization ----> Sampling <---- Updating
                        |               ^
                        v               |
                    Prediction ----> Association
So this provides a good method for tracking objects in a given scene. One way to do multi-object tracking would be to use this single particle filter for all the objects. That would work, but it has disadvantages when you try to assign IDs to the objects, and also when objects cross each other, since the particle clouds might lose one object and follow another.
To solve this you could try a mixture particle filter (Vermaak et al., 2003), which tracks each object with its own particle filter (each needing fewer particles, of course).
A good paper on that can be found here: http://www.springerlink.com/content/qn4704415gx65315/
(I can also supply you with more material on this if you like, and if you speak German I can even give you a presentation I held about it at my university a while ago.)
EDIT:
Forgot to mention: since you want to do this with OpenCV: as far as I know, an implementation of the Condensation algorithm (the first variant, where you use one particle filter on the whole image) ships with the OpenCV distribution, though it might be a bit outdated. There may be newer particle filter implementations directly in OpenCV, but if not, you will find plenty of results on Google searching for OpenCV and particle filters.
Hope that helps... if not, please keep asking...
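To make the sampling/update/resampling cycle concrete, here is a minimal, self-contained C# toy (my own sketch, not EMGU or OpenCV code; all names are made up) that runs one filter step toward a measured blob position:

using System;
using System.Linq;

class Particle
{
    public double X, Y;   //hypothesized object position
    public double Weight; //how well this hypothesis matches the measurement
}

class ParticleFilterSketch
{
    static readonly Random rng = new Random();

    static void Main()
    {
        //Initialization: scatter 100 particles over a 640x480 image.
        Particle[] particles = Enumerable.Range(0, 100)
            .Select(_ => new Particle { X = rng.NextDouble() * 640, Y = rng.NextDouble() * 480 })
            .ToArray();

        //One cycle toward a blob measured at (320, 240).
        Step(particles, 320, 240);
    }

    static void Step(Particle[] particles, double measX, double measY)
    {
        //Prediction: diffuse each particle with a small random motion.
        foreach (var p in particles)
        {
            p.X += rng.NextDouble() * 4 - 2;
            p.Y += rng.NextDouble() * 4 - 2;
        }

        //Update: weight each particle by how close it is to the measurement.
        foreach (var p in particles)
        {
            double dx = p.X - measX, dy = p.Y - measY;
            p.Weight = Math.Exp(-(dx * dx + dy * dy) / 50.0);
        }
        double total = particles.Sum(p => p.Weight);

        //Resampling ("survival of the fittest"): draw the next generation,
        //favouring high-weight particles.
        var next = new Particle[particles.Length];
        for (int i = 0; i < next.Length; i++)
        {
            double r = rng.NextDouble() * total, acc = 0;
            foreach (var p in particles)
            {
                acc += p.Weight;
                if (acc >= r) { next[i] = new Particle { X = p.X, Y = p.Y }; break; }
            }
            next[i] = next[i] ?? new Particle { X = measX, Y = measY }; //guard against rounding
        }
        Array.Copy(next, particles, particles.Length);
    }
}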
You could simply adapt one of the EMGU CV examples that makes use of the VideoSurveillance namespace:
public partial class VideoSurveilance : Form
{
    private static MCvFont _font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
    private static Capture _cameraCapture;
    private static BlobTrackerAuto<Bgr> _tracker;
    private static IBGFGDetector<Bgr> _detector;

    public VideoSurveilance()
    {
        InitializeComponent();
        Run();
    }

    void Run()
    {
        try
        {
            _cameraCapture = new Capture();
        }
        catch (Exception e)
        {
            MessageBox.Show(e.Message);
            return;
        }
        _detector = new FGDetector<Bgr>(FORGROUND_DETECTOR_TYPE.FGD);
        _tracker = new BlobTrackerAuto<Bgr>();
        Application.Idle += ProcessFrame;
    }

    void ProcessFrame(object sender, EventArgs e)
    {
        Image<Bgr, Byte> frame = _cameraCapture.QueryFrame();
        frame._SmoothGaussian(3); //filter out noise

        #region use the background code book model to find the foreground mask
        _detector.Update(frame);
        Image<Gray, Byte> forgroundMask = _detector.ForgroundMask;
        #endregion

        _tracker.Process(frame, forgroundMask);

        foreach (MCvBlob blob in _tracker)
        {
            frame.Draw(Rectangle.Round(blob), new Bgr(255.0, 255.0, 255.0), 2);
            frame.Draw(blob.ID.ToString(), ref _font, Point.Round(blob.Center), new Bgr(255.0, 255.0, 255.0));
        }

        imageBox1.Image = frame;
        imageBox2.Image = forgroundMask;
    }
}
