First-chance exception - DirectX

I am following a tutorial to learn game programming with DirectX 11. When I run the sample code, it gives me this error:
First-chance exception at 0x76E12EEC in Chapter1.exe: Microsoft C++ exception: Platform::COMException ^ at memory location 0x0307E824. HRESULT:0x887A0004
The problem appears to be with featureLevels and creationFlags in the following code:
hr = D3D11CreateDevice(
    nullptr,
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,
    creationFlags,
    featureLevels,
    ARRAYSIZE(featureLevels),
    D3D11_SDK_VERSION,
    &device,
    &featureLevel,
    &context);
ThrowIfFailed(hr);
However, if I change creationFlags to 0 and featureLevels to nullptr, the code works fine. I am using Visual Studio 2012 with Windows 8.1 and Windows SDK 8.0.
Here is the relevant code:
UINT creationFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined(_DEBUG)
// For debugging
creationFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

D3D_FEATURE_LEVEL featureLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
};
I have read that a first-chance exception doesn't necessarily mean there is something really wrong with the code, but this one doesn't just go away. What should I do?

Issue
You are passing only D3D_FEATURE_LEVEL_11_1 and D3D_FEATURE_LEVEL_11_0, so D3D11CreateDevice() fails, returns an HRESULT that is not S_OK, and your ThrowIfFailed(hr) throws the exception you see.
You cannot create a DirectX 11 hardware device and context if your GPU only supports DirectX 10: D3D11CreateDevice() will fail. To make the DirectX 11 API (though not DirectX 11 features) usable on down-level hardware, Microsoft introduced feature levels.
How to fix?
Just use the conventional way of creating a device.
You will need to pass an array with all possible feature levels
D3D_FEATURE_LEVEL arrFeatLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,
    D3D_FEATURE_LEVEL_9_2,
    D3D_FEATURE_LEVEL_9_1,
};
so that the DirectX API will automatically choose the highest supported one.
(After device creation, you can find out which one was chosen by looking at the returned featureLevel):
if (featureLevel >= D3D_FEATURE_LEVEL_11_0)
    std::cout << "Yay! We're using D3D11! :)" << std::endl;
else if (featureLevel >= D3D_FEATURE_LEVEL_10_0)
    std::cout << "Oh noes! Only D3D10 available! :(" << std::endl;
else
    std::cout << "Man, where did you get that old video card? =\\" << std::endl;
Note that DirectX 11 features (such as Shader Model 5, tessellation shaders, and compute shaders) will not be available on a device/context created with a feature level lower than D3D_FEATURE_LEVEL_11_0. In the same way, DirectX 10 features (e.g. geometry shaders) will not be available with a feature level lower than D3D_FEATURE_LEVEL_10_0.
All features that your hardware supports will run as usual.
Also, there is a way to test features that your hardware does not support: create a software-emulated WARP device by passing D3D_DRIVER_TYPE_WARP. It is very slow and not intended for production code, but it lets developers test and debug D3D11 features even if they don't have top-end hardware.
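Putting it together, here is a minimal sketch (using the arrFeatLevels array above and the question's device/context/featureLevel variables) that tries hardware first and falls back to WARP:
// Try a hardware device first; if that fails, fall back to the
// software-emulated WARP rasterizer.
// Note: on systems without the D3D 11.1 runtime, including
// D3D_FEATURE_LEVEL_11_1 in the array makes the call return E_INVALIDARG;
// retry without that entry in that case.
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, creationFlags,
    arrFeatLevels, ARRAYSIZE(arrFeatLevels), D3D11_SDK_VERSION,
    &device, &featureLevel, &context);
if (FAILED(hr))
{
    hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_WARP, nullptr, creationFlags,
        arrFeatLevels, ARRAYSIZE(arrFeatLevels), D3D11_SDK_VERSION,
        &device, &featureLevel, &context);
}
ThrowIfFailed(hr);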
Where to find my GPU's capabilities?
On the GPU manufacturer's site, or simply use tools like GPU-Z (which shows DirectX support) or GPU Caps Viewer (which shows many OpenGL features).
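You can also ask the runtime itself: D3D11CreateDevice is documented to fill in the returned feature level even when you pass null for the device and context out-parameters, so a quick capability probe looks like this:
// Determine the highest feature level the default adapter supports,
// without actually creating a device or context.
D3D_FEATURE_LEVEL maxSupported;
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    nullptr, 0,               // null/0 = use the runtime's default level list
    D3D11_SDK_VERSION,
    nullptr,                  // no device wanted
    &maxSupported,
    nullptr);                 // no context wanted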
Happy coding! =)

Related

How to connect a thermal printer to an ESP32?

I want to attach my GOOJPRT thermal printer (I believe model QR701, RS232 communication) to my ESP32, but I cannot seem to get them working.
I tried all the Adafruit Thermal Printer library examples but get the same error each and every time:
"Error compiling for board ESP32 Dev Module."
I guess the libraries are not meant for the ESP32.
I also tried the "Thermal Printer Library" by Larry Bank (which should be compatible with the ESP32 according to its GitHub docs), but I cannot figure out how to connect the printer's wires to the ESP32 correctly.
Of course, I am not asking for a specific solution; I am just looking for someone to point me in the right direction!
This is an image of the exact thermal printer I have
Full error message from Adafruit Thermal Printer examples:
C:\Users\Thomas\Documents\Arduino\libraries\SoftwareSerial-master\SoftwareSerial.cpp:41:27: fatal error: avr/interrupt.h: No such file or directory
compilation terminated.
Multiple libraries were found for "Adafruit_Thermal.h"
Used: C:\Users\Thomas\Documents\Arduino\libraries\Adafruit_Thermal_Printer_Library
Not used: C:\Users\Thomas\Documents\Arduino\libraries\Adafruit-Thermal-Printer-Library-master
Multiple libraries were found for "SoftwareSerial.h"
Used: C:\Users\Thomas\Documents\Arduino\libraries\SoftwareSerial-master
Not used: C:\Users\Thomas\Documents\Arduino\libraries\EspSoftwareSerial
exit status 1
Error compiling for board ESP32 Dev Module
You need to use the <HardwareSerial.h> library. The stock SoftwareSerial library targets AVR-based Arduino boards, which is why your build dies on the missing avr/interrupt.h.
The "Thermal Printer Library" by Larry Bank is for the GOOJPRT PT-210 and uses Bluetooth; it won't work with the QR701.
Instead of the Adafruit library, you can try this one:
ThermalPrinter
Quick start:
Import libraries:
#include "TPrinter.h"
#include <HardwareSerial.h>
Set the baud rate and pins:
const int printerBaudrate = 9600; // or 19200 usually
const byte rxPin = 16; // check datasheet of your board
const byte txPin = 17; // check datasheet of your board
const byte dtrPin = 27; // optional
const byte rsePin = 4; // direction of transmission, max3485
If your printer uses RS232, you need a board with a MAX3485 (for 3.3 V logic levels) or similar.
That was necessary in my case; I use a Waveshare 4777 UART-to-RS485 3.3 V board with ARK/RJ11 connectors.
Init
HardwareSerial mySerial(1);
Tprinter myPrinter(&mySerial, printerBaudrate);

void setup() {
  micros();
  mySerial.begin(printerBaudrate, SERIAL_8N1, rxPin, txPin); // must be 8N1 mode
  pinMode(rsePin, OUTPUT);            // optional
  digitalWrite(rsePin, HIGH);         // optional
  // myPrinter.enableDtr(dtrPin, LOW); // optional
  myPrinter.begin();
}
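From there, a minimal test print might look like the loop below. The println() call is an assumption on my part: most Arduino printer drivers (Adafruit_Thermal included) inherit from Print, so check the Tprinter examples to confirm the exact API.
void loop() {
  // Assumed Print-style interface -- verify against the Tprinter examples.
  myPrinter.println("Hello from the ESP32!");
  delay(10000); // don't spam the printer
}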

ZBar processor and Delphi

OK, so I have been trying to get barcode scanning to work in a Delphi application for the last 3 weeks now. I've been directed to this example, but that example uses other libraries like ImageMagick and is a console application. I am looking for a VCL Forms application.
Here is some code I have written to try and see if I can get the ZBar processor working in Delphi:
// Create processor
processor := zbar_processor_create(0);
zbar_processor_set_config(processor, ZBAR_NONE, ZBAR_CFG_ENABLE, 1);

// Initialize processor
zbar_processor_init(processor, {what do I put here ?}, 1);

// Set up a callback
{I don't know what to do here}

// Enable preview window
zbar_processor_set_visible(processor, 1);
zbar_processor_set_active(processor, 1);
This code is based on an example in C that I found here: https://github.com/ZBar/ZBar/blob/master/examples/processor.c
as well as the documentation over here:
http://zbar.sourceforge.net/api/zbar_8h.html#c-processor
The ZBar window opens but does not show the video feed, because I passed nil as a parameter in the initialize step. In the example they have this C code, but I have no idea what it means:
const char *device = "/dev/video0";
/* initialize the Processor */
if (argc > 1)
    device = argv[1];
zbar_processor_init(proc, device, 1);
If I pass '/dev/video0' instead of nil, the video feed still doesn't show. So I guess my question is: what do I need to pass to the zbar_processor_init() function?
I also don't know how to set up a callback function that will be called once a result is found. How would I go about doing this?
Thanks in advance,
Kobus
argc is the number of parameters passed on the command line, and argv holds them. /dev/video0 is a Linux-style device name. On Windows, try 'con:':
zbar_processor_init(processor, 'con:', 1);
CON: is the console, COM1: is serial port 1, AUX: is the auxiliary port (probably USB), PRN: is the printer, and LPT: is the line printer.
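As for the callback question: the ZBar C API provides zbar_processor_set_data_handler() for exactly this. Here is a minimal sketch in C++ against the C API (you would still need to translate the handler into a cdecl procedure for Delphi):
#include <cstdio>
#include <zbar.h>

// Called by ZBar whenever at least one symbol has been decoded in an image.
static void data_handler(zbar_image_t *image, const void *userdata)
{
    const zbar_symbol_t *sym = zbar_image_first_symbol(image);
    for (; sym; sym = zbar_symbol_next(sym)) {
        std::printf("decoded %s: %s\n",
                    zbar_get_symbol_name(zbar_symbol_get_type(sym)),
                    zbar_symbol_get_data(sym));
    }
}

// Register it right after creating the processor:
// zbar_processor_set_data_handler(processor, data_handler, nullptr);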

Beaglebone Black with MIDI input (via USB) -> can't detect proper port

A few days back I wrote a question regarding MIDI and ALSA, but I've since solved that problem and run into a new one.
The context, in short:
I have a BeagleBone Black with Debian 7.5 on it.
My host is a 32-bit Ubuntu 14.10 installation.
I'm using Qt 4.8.6 for ARM cross-compilation.
I am trying to create an application which uses a touchscreen and also reads MIDI input from a keyboard. I've used the following tutorial (http://embedded.von-kannen.net/2014/05/21/qt-4-8-6-on-beaglebone-black/) to install Qt Embedded so I can cross-compile for my BeagleBone (the tutorial needs some fixes; I've got a 'fixed' doc ready if anyone needs one), and the following one to compile ALSA for use on an ARM MPU: omappedia.org/wiki/ALSA_Setup
Basically, after I finally got the program building and deploying onto my BeagleBone Black, it couldn't find the port it needs to receive the MIDI signals.
I'm using a MidiMate II to connect my MIDI device to a USB port on a hub attached to the BeagleBone Black.
I have the following code to check for available ports (C++):
RtMidiIn *midiin = 0;

// RtMidiIn constructor
try {
    midiin = new RtMidiIn();
}
catch (RtMidiError &error) {
    error.printMessage();
    exit(EXIT_FAILURE);
}

// Check inputs.
unsigned int nPorts = midiin->getPortCount();
qDebug() << "\nThere are " << nPorts << " MIDI input sources available.\n";
std::string portName;
for (unsigned int i = 0; i < nPorts; i++) {
    try {
        portName = midiin->getPortName(i);
    }
    catch (RtMidiError &error) {
        error.printMessage();
        delete midiin;
    }
    std::cout << " Input Port #" << i + 1 << ": " << portName << '\n';
}
I can confirm that the MidiMate functions properly on Ubuntu, as running the application on the desktop receives values just fine. I'm not certain it works on the BeagleBone's Debian.
The above code tells me there are no available input sources when run on the BeagleBone, as opposed to the 2 available input sources when run on both the Ubuntu and Windows desktops.
My question:
How can I get my BeagleBone to detect the port that I need for reading the live MIDI input?
Edit:
Plugging the MidiMate into the BeagleBone generates a midi1 entry in the /dev/ listing; however, I don't know what to do with it.
The RtMidi function I use only accepts an unsigned integer as input, so I can't provide the string "midi1" as an argument :(
Your distribution does not load the snd-seq and snd-seq-midi kernel modules when booting, and has no mechanism to load them on demand either.
Add them to the /etc/modules file.
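For example, /etc/modules would gain these two lines:
# /etc/modules: kernel modules to load at boot time, one per line.
snd-seq
snd-seq-midi
To load them immediately without rebooting, modprobe snd-seq snd-seq-midi should work. Once the ALSA sequencer sees the MidiMate, RtMidi's getPortCount() should report it, and you open the port by index (e.g. midiin->openPort(0)) rather than by the /dev/midi1 name.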

MSBuild is failing inconsistently when performing a TFS build (usually error C1093 / Not enough Storage)

I have a really odd and hard-to-diagnose issue with MSBuild / TFS. I have a solution that contains about 12 different build configurations. When running on the build server, it takes maybe 30 minutes to build the lot; it worked fine for weeks, but is now occasionally failing.
Most of the time, when it fails it'll be an error like this:
19:25:45.037 2>TestPlanDocument.cpp(1): fatal error C1093: API call 'GetAssemblyRefHash' failed '0x8007000e' : ErrorMessage: Not enough storage is available to complete this operation. [C:\Builds\1\ICCSim Card Test Controller\ICCSimCTC Release\src\CardTestController\CardTestController.vcxproj]
The error sometimes happens on a different file. It doesn't happen for every build configuration either; it's very inconsistent, and occasionally all of them even build successfully. There's not much difference between the build configurations, mostly just a few string changes, and of course they all build fine locally.
The API call in question is usually GetAssemblyRefHash, but not always. I don't think this is the issue, as Googling for GetAssemblyRefHash specifically brings up next to nothing. I suspect there's some kind of resource issue at play, but I'm at a loss as to what: there's plenty of HDD space (hundreds of GBs) and plenty of RAM (the machine originally had a 4 GB minimum allocated, dynamic since it's a Hyper-V VM, and it never pushed above 2.5 GB; I upped the minimum to 8 GB just in case, and there's been no change).
I've set the build verbosity to diagnostic and it doesn't really show anything else that's helpful, just the same error.
For reference, the build server is fully up to date on all patches. It's running Windows Server 2012 R2, has TFS 2013 and VS 2013 installed, both are on Update 4.
I'm really at a loss at this point and would appreciate any help or pointers.
EDIT: Just to keep people up to date: the compile toolchain was in 32-bit mode, but even after switching to 64-bit, the issue persists.
I think I found the source, but I still don't know the reason.
Browsing through the Microsoft Shared Source, we can find the source for GetAssemblyRefHash():
HRESULT CAsmLink::GetAssemblyRefHash(mdToken FileToken, const void** ppvHash, DWORD* pcbHash)
{
    if (TypeFromToken(FileToken) != mdtAssemblyRef) {
        VSFAIL( "You can only get AssemblyRef hashes for assemblies!");
        return E_INVALIDARG;
    }
    HRESULT hr;
    CAssembly *file = NULL;
    if (FAILED(hr = m_pImports->GetFile( FileToken, (CFile**)&file)))
        return hr;
    return file->GetHash(ppvHash, pcbHash);
}
There are only two places to investigate here: the call to m_pImports->GetFile(), where m_pImports is declared as CAssembly *m_pImports;, and the call to file->GetHash().
m_pImports->GetFile() is here, and is a dead end:
HRESULT CAssembly::GetFile(DWORD index, CFile** file)
{
    if (!file)
        return E_POINTER;
    if (RidFromToken(index) < m_Files.Count()) {
        if ((*file = m_Files.GetAt(RidFromToken(index))))
            return S_OK;
    }
    return ReportError(E_INVALIDARG);
}
file->GetHash(), which is here:
HRESULT CAssembly::GetHash(const void ** ppvHash, DWORD *pcbHash)
{
    ASSERT( ppvHash && pcbHash);
    if (IsInMemory()) {
        // We can't hash an InMemory file
        *ppvHash = NULL;
        *pcbHash = 0;
        return S_FALSE;
    }

    if (!m_bDoHash || (m_cbHash && m_pbHash != NULL)) {
        *ppvHash = m_pbHash;
        *pcbHash = m_cbHash;
        return S_OK;
    }

    DWORD cchSize = 0, result;
    // AssemblyRefs ALWAYS use CALG_SHA1
    ALG_ID alg = CALG_SHA1;
    if (StrongNameHashSize( alg, &cchSize) == FALSE)
        return ReportError(StrongNameErrorInfo());

    if ((m_pbHash = new BYTE[cchSize]) == NULL)
        return ReportError(E_OUTOFMEMORY);
    m_cbHash = cchSize;

    if ((result = GetHashFromAssemblyFileW(m_Path, &alg, (BYTE*)m_pbHash, cchSize, &m_cbHash)) != 0) {
        delete [] m_pbHash;
        m_pbHash = 0;
        m_cbHash = 0;
    }
    *ppvHash = m_pbHash;
    *pcbHash = m_cbHash;
    return result == 0 ? S_OK : ReportError(HRESULT_FROM_WIN32(result));
}
We can see that about halfway down, it tries to allocate room to store the byte[] result, and when that fails, it returns E_OUTOFMEMORY, which is exactly the 0x8007000e in your error message:
if ((m_pbHash = new BYTE[cchSize]) == NULL)
    return ReportError(E_OUTOFMEMORY);
m_cbHash = cchSize;
There are other paths to consider, but this seems like the most obvious source. So it looks like the problem is a plain memory allocation failing.
What could cause this?
Lack of free physical memory pages / swap
Memory fragmentation in the process.
Inability to reserve commit space for this in the swap file
Lack of address space
At this point, my best guess would be memory fragmentation. Have you triple-checked that the Microsoft C++ compiler is running in 64-bit mode? Perhaps see if you can debug the compiler (Microsoft's symbol servers may be able to help you here), set a breakpoint on that line, and dump the heap when it happens.
Some specifics on diagnosing heap fragmentation: fire up Sysinternals' VMMap when the compiler breaks and look at the free list. You need three chunks of at least 64 kB free to perform an allocation; anything smaller than 64 kB won't get used, and two 64 kB chunks are reserved.
Okay, I have an update to this! I opened a support ticket with Microsoft and have been busy working with them to figure out the issue.
They went down the same paths as outlined above and came to the same conclusion: it's not a resource issue.
To cut a long story short, Microsoft has now acknowledged that this is likely a bug in the VC++ compiler, almost certainly caused by a race condition (though this is unconfirmed). There's no word on whether they'll fix it in a future release.
There is a workaround: use the /MP flag at the project level to limit the number of compiler processes opened by MSBuild, without disabling parallel compilation entirely (which, for me, doubled build times).
To do this, go to your project properties and, under Configuration Properties -> C/C++ -> Command Line, specify the /MP flag followed by a number to limit the number of processes.
My build server has 8 virtual CPUs, and the normal behaviour is equivalent to /MP8, but this causes the bug to sometimes appear. For me, /MP4 seems to be enough to suppress the bug without increasing build times too much. If you're seeing a similar bug, you may need to experiment with other values such as /MP6 or /MP2.
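For reference, entering /MP4 in Additional Options ends up as a fragment like this in the .vcxproj (shown only to illustrate where the flag lands; adjust the number to your machine):
<ItemDefinitionGroup>
  <ClCompile>
    <!-- Cap parallel compiler processes at 4 to work around the race. -->
    <AdditionalOptions>/MP4 %(AdditionalOptions)</AdditionalOptions>
  </ClCompile>
</ItemDefinitionGroup>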

PROCESSING - Pixel operations are not supported on this device

Any code I write that requires save(), saveFrame(), or functions like loadPixels() fails, which stops me from saving edited pictures.
The error it gives is: Pixel operations are not supported on this device.
About my computer:
Windows 7 Ultimate, Service Pack 1, 64-bit
AMD A10-5800K APU with Radeon(tm) HD Graphics, 3.80 GHz
UPDATE
It works on any other computer, just not mine, even with basic sketches like this one:
size(640,480);
background(255);
fill(44);
beginShape();
vertex(50,20);
vertex(600,160);
vertex(190,400);
endShape(CLOSE);
saveFrame("izlaz1.jpg");
I suppose that your Windows color depth setting is too low.
Processing assumes that the system exposes a 32-bit color depth (RGB + alpha = 4 × 8 bits).
This is a fragment from the PGraphicsJava2D class:
protected WritableRaster getRaster() {
    ...
    if (raster.getTransferType() != DataBuffer.TYPE_INT) {
        System.err.println("See https://github.com/processing/processing/issues/2010");
        throw new RuntimeException("Pixel operations are not supported on this device.");
    }
    return raster;
}
So "Pixel operations are not supported" exception is raised when your system exposes to low color depth.
Try to change your Windows' setting.
Some helper links below:
https://github.com/processing/processing/issues/2010
http://helpx.adobe.com/x-productkb/global/change-color-depth-resolution-windows.html
