XNA/MonoGame GraphicsDevice constructor change? What is the proper way to instantiate it?

I have a game I wrote a few months ago and it worked fine. Recently I updated my MonoGame references, and now something that compiled and worked before doesn't, because the constructor signature has changed on GraphicsDevice. I'm not sure how best to implement it now and haven't found any examples yet.
Original line:
var obsticleTexture = new Texture2D(new GraphicsDevice(), 0, 0);
but now I get
'Microsoft.Xna.Framework.Graphics.GraphicsDevice' does not contain a
constructor that takes 0 arguments
The signature's changed to:
GraphicsDevice(GraphicsAdapter adapter, GraphicsProfile graphicsProfile, PresentationParameters presentationParameters)
I tried doing new Texture2D(new GraphicsDevice(null, GraphicsProfile.HiDef, new PresentationParameters()),0,0); but that didn't work.

Try this:
GraphicsDevice newGraphicsDevice = new GraphicsDevice(GraphicsAdapter.DefaultAdapter, GraphicsProfile.HiDef, new PresentationParameters());
Texture2D texture = new Texture2D(newGraphicsDevice, 1, 1);
Keep in mind that the width and height of the Texture2D must be > 0.
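As an aside, if this code runs inside (or has access to) your Game subclass, MonoGame already exposes the device created by the GraphicsDeviceManager through the GraphicsDevice property, so you usually don't need to construct a device by hand at all. A minimal sketch, assuming the texture is created in LoadContent():
// Reuse the device the GraphicsDeviceManager has already created
var obsticleTexture = new Texture2D(GraphicsDevice, 1, 1);
// Fill the 1x1 texture with a solid color so it shows up when drawn
obsticleTexture.SetData(new[] { Color.White });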

Related

cups4J printing multiple copies

I have the following code:
FileInputStream fis =
new FileInputStream("C:/test.pdf");
//PrintJob.Builder test = new PrintJob.Builder(fis);
//test.duplex(true);
//test.build();
Map <String,String> newMap = new HashMap<String, String>();
newMap.put("job-attributes", "sides:keyword:two-sided-short-edge#copies:2");
PrintJob pj = new PrintJob.Builder(fis).jobName("testJob").copies(2).attributes(newMap).build();
cp.print(pj);
The issue I have is that even though I have set copies to 2, it only prints once. Is there anything I have done wrong?
copies:2 in the job-attributes is incorrect. You need to code:
copies:integer:2
Somehow, the incorrect job-attributes entry causes the .copies(2) on the Builder to be ignored.
I was able to reproduce that on my system using the older(!) de.spqr-info cups4j v1.1 from 2016 (not the current v0.7.6 org.cups4j).
But beware: if the job-attributes value is correct, the value from the Builder will be used (even if you didn't specify it; it defaults in that case to 1).
The only way to use the value from the job-attributes is to explicitly code .copies(n) (where n <= 0) on the Builder.
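Applied to the code in the question (same fis and cp, plus the usual java.util.Map/HashMap imports), the corrected snippet would look roughly like this:
Map<String, String> newMap = new HashMap<String, String>();
// copies must be declared as an integer attribute, not a bare value
newMap.put("job-attributes", "sides:keyword:two-sided-short-edge#copies:integer:2");

PrintJob pj = new PrintJob.Builder(fis)
        .jobName("testJob")
        .copies(2)        // with a valid job-attributes entry, this Builder value is the one used
        .attributes(newMap)
        .build();
cp.print(pj);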

Ogre::SceneManager::setAmbientLight does not work

I am building the Advanced Ogre Framework from the wiki, and I find that Ogre::SceneManager::setAmbientLight() does not work at all.
I found nothing useful after googling; can anyone give me some ideas?
The code is like this:
m_pSceneMgr = OgreFramework::getSingletonPtr()->m_pRoot->createSceneManager(ST_GENERIC, "GameSceneMgr");
m_pSceneMgr->setAmbientLight(Ogre::ColourValue(0.7f, 0.7f, 0.7f));
Finally I figured it out by myself.
In the framework, I call setAmbientLight() before this code:
DotSceneLoader* pDotSceneLoader = new DotSceneLoader();
pDotSceneLoader->parseDotScene("CubeScene.xml", "General", m_pSceneMgr, m_pSceneMgr->getRootSceneNode());
delete pDotSceneLoader;
There is a node in CubeScene.xml that sets the ambient color again, to (0, 0, 0), so my call has no effect.
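One straightforward fix, keeping the same member names as in the snippets above, is to set the ambient light after the scene file has been parsed, so it overrides the value coming from CubeScene.xml:
// Parse the scene first; CubeScene.xml resets the ambient color to (0, 0, 0)
DotSceneLoader* pDotSceneLoader = new DotSceneLoader();
pDotSceneLoader->parseDotScene("CubeScene.xml", "General", m_pSceneMgr, m_pSceneMgr->getRootSceneNode());
delete pDotSceneLoader;

// Now override whatever ambient value the scene file supplied
m_pSceneMgr->setAmbientLight(Ogre::ColourValue(0.7f, 0.7f, 0.7f));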

How to not use threads

This is a Dart newbie question about how to do "multithreading" in Dart.
(Excuse me, I am an old Java developer ...)
So I have this kind of code (see below), but since recreating the GUI is costly I would like to defer it: instead of recreating the GUI directly in _onWindowResize(), I would like to start a thread that does the work once the size has been stable for some time, e.g. one second.
If a thread is already started, do nothing. (Btw, StageXL is cool ....)
(This will also fix the bug that _onWindowResize() is called twice by dart:html ...)
...
html.window.onResize.listen((e) => _onWindowResize());
}
_createGui() {
var shape = new Shape();
shape.graphics.ellipse(html.window.innerWidth / 2, html.window.innerHeight / 2, html.window.innerWidth / 4, html.window.innerHeight / 4);
shape.graphics.fillColor(Color.Red);
stage.addChild(shape);
}
void _onWindowResize() {
print("New window size ${html.window.innerWidth}x${html.window.innerHeight}");
stage = new Stage('stage', canvas);
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;
renderLoop = new RenderLoop();
renderLoop.addStage(stage);
juggler = renderLoop.juggler;
_createGui();
}
One can send work to other threads in Dart via Isolates, but this won't work for your scenario since it's mostly about modifying the UI of the app.
One cannot share objects between isolates in Dart (or using WebWorkers in general). So you cannot pass the canvas into an Isolate to create your stage, renderloop, etc.
If you are doing complex calculations (Physics, for example), it might make sense to send those off to an Isolate and use the result to update the UI.
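As a concrete illustration of that last point, a minimal sketch using dart:isolate might look like this (heavyPhysicsStep is a hypothetical stand-in for your expensive calculation, and isolate availability depends on the runtime: it works on the Dart VM, while browser support via dart2js has been limited):
import 'dart:isolate';

// Runs in a separate isolate: pure computation only, no access to the DOM or the stage.
void heavyPhysicsStep(SendPort replyTo) {
  var result = 0.0;
  for (var i = 0; i < 1000000; i++) {
    result += i * 0.000001;
  }
  replyTo.send(result);
}

Future<void> runPhysicsOffMainIsolate() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(heavyPhysicsStep, receivePort.sendPort);
  final result = await receivePort.first;
  // Back on the main isolate: it is safe to update the stage here.
  print('physics result: $result');
}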

How to set the error correction level for a QR code when using the new createBitmap method

This question is in reference to the API documentation link, http://www.blackberry.com/developers/docs/7.0.0api/net/rim/device/api/barcodelib/BarcodeBitmap.html
They specify that the old method
public static Bitmap createBitmap(ByteMatrix byteMatrix,
int maxBitmapSizeInPixels)
is deprecated.
But by using the new method,
public static Bitmap createBitmap(ByteMatrix byteMatrix)
they haven't specified a way to set the error correction level for the QR code in MultiFormatWriter. I haven't been able to find a way either, looking through its various member functions.
Has anyone tried this?
Thanks for your help.
Here is my code; I have checked with my phone, and the error correction level is set correctly.
Hashtable hints = new Hashtable();
switch (comboBox1.Text)
{
case "L":
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.L);
break;
case "Q":
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.Q);
break;
case "H":
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.H);
break;
default:
hints.Add(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);
break;
}
MultiFormatWriter mw = new MultiFormatWriter();
ByteMatrix bm = mw.encode(data, BarcodeFormat.QR_CODE, size, size, hints);
Bitmap img = bm.ToBitmap();
pictureBox1.Image = img;
When encoding, you can pass in hints
Map<EncodeHintType, Object> hints = new Hashtable<EncodeHintType, Object>();
Add the error correction setting to the hints (for example to level M)
hints.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);
ZXing uses error correction level L by default (the lowest, meaning the QR Code will still be readable even after a max of 7% damage)
I just looked at the documentation.
It says to use createBitmap(ByteMatrix byteMatrix) in conjunction with MultiFormatWriter, which has the method encode(String contents, BarcodeFormat format, int width, int height, Hashtable hints) where you can specify the width, height and error level.
To specify the error level, put the key EncodeHintType.ERROR_CORRECTION into the hints hashtable with the value new Integer(level).
Unfortunately I didn't find any constants for these values as described here, but you can probably find them in the zxing sources.
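Putting the answers above together for the BlackBerry API referenced in the question, the flow might look roughly like the sketch below (the import/package locations follow the linked docs and the ZXing version bundled with the BlackBerry SDK, so treat them as assumptions to verify):
import java.util.Hashtable;

import net.rim.device.api.barcodelib.BarcodeBitmap;
import net.rim.device.api.system.Bitmap;

import com.google.zxing.BarcodeFormat;
import com.google.zxing.EncodeHintType;
import com.google.zxing.MultiFormatWriter;
import com.google.zxing.WriterException;
import com.google.zxing.common.ByteMatrix;
import com.google.zxing.qrcode.decoder.ErrorCorrectionLevel;

public class QrBitmapBuilder {
    public static Bitmap buildQr(String contents) throws WriterException {
        Hashtable hints = new Hashtable();
        // Raise the error correction from the default L to M
        hints.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);

        // Width and height are passed to encode(); the non-deprecated
        // createBitmap(ByteMatrix) overload only takes the resulting matrix.
        MultiFormatWriter writer = new MultiFormatWriter();
        ByteMatrix matrix = writer.encode(contents, BarcodeFormat.QR_CODE, 200, 200, hints);
        return BarcodeBitmap.createBitmap(matrix);
    }
}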

How do I set up DirectX 9 so that backface culling is off, z-buffering is on, and gouraud shading works, for triangle meshes without normals data?

I've been having difficulty identifying the correct parameters for the PresentParameters and DirectX device, so that there can be both vertex-level gouraud shading and the use of a z buffer. Some triangle meshes work fine, others have background triangles appearing in front of triangles which are closer-to-camera.
An example of this is found here: http://gallery.me.com/robert.perkins/100045/zBufferGone. The input data is a simple list of vertices in facets. The winding order of the vertices in each facet is nondeterministic (comes from various CAD software export functions) and there is no normals data.
The PresentParameters are being set up right now as follows. I realize this is C# instead of C++ but I think it's descriptive enough, and the parameters pass through to C++ code. This produces the image in the picture; the behavior is the same on the Reference device:
pParams = new PresentParameters()
{
BackBufferWidth = this.ClientSize.Width,
BackBufferHeight = this.ClientSize.Height,
AutoDepthStencilFormat = Format.D16,
EnableAutoDepthStencil = true,
SwapEffect = SwapEffect.Discard,
Windowed = true
};
_engineDX9 = new EngineDX9(this, SlimDX.Direct3D9.DeviceType.Hardware, SlimDX.Direct3D9.CreateFlags.SoftwareVertexProcessing, pParams);
_engineDX9.DefaultCamera.NearPlane = 0;
_engineDX9.DefaultCamera.FarPlane = 10;
_engineDX9.D3DDevice.SetRenderState(RenderState.Ambient, false);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.Always);
_engineDX9.BackColor = Color.White;
_engineDX9.FillMode = FillMode.Solid;
_engineDX9.CullMode = Cull.None;
_engineDX9.DefaultCamera.AspectRatio = (float)this.Width / this.Height;
All of my other setup attempts, even on the reference device, return a COM error code ({"D3DERR_INVALIDCALL: Invalid call (-2005530516)"}). What are the correct setup parameters?
EDIT: The C++ class which interfaces with DirectX9 sets defaults like this:
PresentParameters::PresentParameters()
{
BackBufferWidth = 640;
BackBufferHeight = 480;
BackBufferFormat = Format::X8R8G8B8;
BackBufferCount = 1;
Multisample = MultisampleType::None;
MultisampleQuality = 0;
SwapEffect = SlimDX::Direct3D9::SwapEffect::Discard;
DeviceWindowHandle = IntPtr::Zero;
Windowed = true;
EnableAutoDepthStencil = true;
AutoDepthStencilFormat = Format::D24X8;
PresentFlags = SlimDX::Direct3D9::PresentFlags::None;
FullScreenRefreshRateInHertz = 0;
PresentationInterval = PresentInterval::Immediate;
}
Where does it return an invalid call?
Edit: I'm assuming in the new EngineDX9 call? Have you tried setting a device window handle in the present parameters?
Edit 2: Have you turned on the debug spew in the DirectX control panel to see whether it tells you what the error is?
Edit 3: Have you tried setting BackBufferWidth and BackBufferHeight to 0? What is BackBufferCount set to? It might also be worth trying Format.D24S8 as the depth-stencil format. It's "possible" your graphics card doesn't support a 16-bit depth buffer (unlikely, though). Have you checked in the caps that the mode you are trying to create is valid? I assume, by the way, that the CLR language you are using automagically sets the parameters you don't set to 0? I personally always prefer to be explicit in such cases ...
PS: I'm guessing here because I'm a native C++ DX9 coder, not a CLR SlimDX coder ...
Edit 4: I'm sure it's the lack of a window handle ... I'm probably wrong, but that's the only thing I can see REALLY wrong with your setup. A windowed DX9 device requires a window. By the way, set width and height to 0 to just use the size of the window you are attaching the device to ...
Edit 5: I've really been heading down the wrong route here. There is nothing wrong with the creation of the device that produced your "incorrect" image. Do not mess with the present parameters; they are fine. The main reason you'll have problems with your Z-buffering is that you set the compare function to Always. This means that, regardless of what the z-buffer contains, the pixel passes and its z is written into the z-buffer, overwriting whatever is there already. I'd wager therein lies your Z-buffering problem.
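In terms of the SlimDX render states from the question, the change Edit 5 suggests boils down to this (the near-plane line is an extra assumption on top of the answer, since a near plane of 0 also collapses the depth range of a perspective projection):
// Accept a pixel only if it is at least as near as what the z-buffer already holds
_engineDX9.D3DDevice.SetRenderState(RenderState.ZEnable, ZBufferType.UseZBuffer);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZWriteEnable, true);
_engineDX9.D3DDevice.SetRenderState(RenderState.ZFunc, Compare.LessEqual);

// Assumption beyond the answer: avoid a zero near plane
_engineDX9.DefaultCamera.NearPlane = 0.1f;
_engineDX9.DefaultCamera.FarPlane = 10;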
