I am developing an application that uses the D3D11.2 Tiled Resources feature. I have a GeForce 780 and some Radeon 7900 series graphics cards, but both support only the Tier 1 feature set.
A year ago, AMD claimed to have hardware fully supporting Tiled Resources, but the Direct3D caps reporting that only Tier 1 can be used proved that to be a lie (they probably had some issues with conformance, performance, or stability and decided to release the driver with Tier 2 disabled).
I found links to articles suggesting there is a chance that Tier 2 is supported in recently released graphics cards, for example:
The different tiers represent the level of the Tiled Resources feature supported under DX 11.2. The R7 260X, R9 290 and R9 290X will have the ability to support the entire feature set, both tiers of Tiled Resources under DX 11.2, in Windows 8.1
Source: http://www.hardocp.com/article/2013/10/23/amd_radeon_r9_290x_video_card_review#.U_2OPflByjk
But IT journalists usually have no idea what they are writing about, especially where graphics hardware is concerned, so I'm asking the question here.
Is there any Radeon R7/R9 owner who can confirm that Tiled Resources Tier 2 is supported on these graphics cards?
The Windows 8.1 SDK has a tool called dxcapsviewer. You can check this under DXGI Devices -> (graphics card name) -> Direct3D 11.2 -> D3D_FEATURE_LEVEL_11_1 -> Tiled Resources.
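If you would rather check it programmatically, the same information is exposed through ID3D11Device::CheckFeatureSupport. A minimal sketch (device creation and error handling omitted):

    #include <d3d11_2.h>
    #include <stdio.h>

    // Query the Tiled Resources tier at runtime (requires the D3D11.2
    // headers from the Windows 8.1 SDK).
    void PrintTiledResourcesTier(ID3D11Device* device)
    {
        D3D11_FEATURE_DATA_D3D11_OPTIONS1 options = {};
        if (SUCCEEDED(device->CheckFeatureSupport(
                D3D11_FEATURE_D3D11_OPTIONS1, &options, sizeof(options))))
        {
            switch (options.TiledResourcesTier)
            {
            case D3D11_TILED_RESOURCES_NOT_SUPPORTED: printf("Not supported\n"); break;
            case D3D11_TILED_RESOURCES_TIER_1:        printf("Tier 1\n");        break;
            case D3D11_TILED_RESOURCES_TIER_2:        printf("Tier 2\n");        break;
            default:                                  printf("Higher tier\n");   break;
            }
        }
    }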
Regards
The R9 290 does indicate support for Tiled Resources Tier 2 in the caps viewer on the latest Catalyst beta driver. It should be enabled on the latest retail driver as well. The R7 series should have the same level of support.
I have different laptops with titles as below.
Acer One 10 S1002-15XR NT.G53SI.001 10.1 Inch Laptop (Quad Core/2GB/32GB eMMC/Win 10/Touch) Dark Silver
Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1) Windows 10 Home - Black
Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1") Windows 10 Home - Black
Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1Inch) Windows 10 Home - Black
HP Spectre 13 i7 8GB 512GB SSD 10.1 Full HD (1920x1080) Touch Back-lit KeyBoard Intel HD 620 No CD/DVD Drive Dark Ash
So all of the above laptops have a 10.1-inch screen size, but it is written differently in each title. How can I generalize all of these to a common value such as 10_inch using Google's Dialogflow?
I have made a screen_size entity like below.
But I don't want to have to specify every possible screen size in the entity.
Can this be done using a system entity or a composite entity?
Parsing this sort of product information is outside of the normal use case for Dialogflow, which is generally intended for use in building conversational experiences that involve natural language (such as chatbots).
If you're looking for an API, I had some success in extracting the information you are looking for using the Cloud Natural Language API.
There's a tool you can use to test it out; enter the string, hit "Analyze" and then click "Syntax". For all the examples you gave above, the screen size was extracted as a num part of speech.
Even so, these APIs were designed for use with natural language. The machine learning model they are based on was not trained on this sort of input text, so it may not reliably extract the meaning you want from it.
As an alternative, you could try training your own extractor using a machine learning toolkit such as Tensorflow, or just write some crazy regex or string parsing algorithm.
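For the regex route, here is a rough sketch in C++ (the pattern and the extractScreenSize helper are only illustrative, tuned to the Acer titles above; messier forms like the bare "10.1" in the HP title would need extra patterns):

    #include <iostream>
    #include <regex>
    #include <string>
    #include <vector>

    // Pull the first plausible screen size out of a product title and
    // normalize it to a bucket such as "10_inch". Handles forms like
    // 10.1 Inch, (10.1), (10.1") and (10.1Inch).
    std::string extractScreenSize(const std::string& title)
    {
        static const std::regex re(
            R"rx(\((\d{1,2}(?:\.\d+)?)(?:"|Inch)?\)|(\d{1,2}(?:\.\d+)?)\s*(?:"|[Ii]nch))rx");
        std::smatch m;
        if (std::regex_search(title, m, re)) {
            std::string value = m[1].matched ? m[1].str() : m[2].str();
            return value.substr(0, value.find('.')) + "_inch"; // drop the fraction
        }
        return "unknown";
    }

    int main()
    {
        std::vector<std::string> titles = {
            "Acer One 10 S1002-15XR NT.G53SI.001 10.1 Inch Laptop (Quad Core/2GB/32GB eMMC/Win 10/Touch) Dark Silver",
            "Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1) Windows 10 Home - Black",
            "Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1\") Windows 10 Home - Black"
        };
        for (const auto& t : titles)
            std::cout << extractScreenSize(t) << "\n"; // prints "10_inch" three times
    }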
The DXGI Overview on MSDN says that the Direct3D API (10, 11 and 12) sits on top of DXGI, whereas DXGI sits on top of the hardware, which is illustrated by the following picture:
The article further mentions that the tasks of DXGI are basically enumerating adapters and presenting images on the screen. Now, if Direct3D sits on top of DXGI, how are all the math-related tasks invoked on the actual hardware (GPU)? Or is the architectural overview wrong, and does Direct3D also access the hardware directly?
This diagram is a logical map, not an explicit map of how everything actually gets implemented. In reality, Direct3D and DXGI are more 'side-by-side', and the layer that includes the User Mode Driver (UMD) and the Kernel Mode Driver (KMD) is the Windows Display Driver Model (WDDM), which uses the Device Driver Interface (DDI) to communicate with kernel mode, which in turn communicates with the hardware. The various versions of Direct3D are also 'lofted' together to use the same DDI in most cases (i.e. Direct3D 9 and Direct3D 10 legacy applications end up going through the same Direct3D 11 codepaths where possible).
Since "DXGI" means "DirectX Graphics Infrastructure" this diagram is lumping the DXGI APIs with WDDM and DDI.
The purpose of the DXGI API was to separate the video hardware/output enumeration as well as swapchain creation/presentation from Direct3D. Back in Direct3D 9 and prior, these were all lumped together. In theory DXGI was supposed to not change much between Direct3D versions, but in practice it has evolved at basically the same pace with a lot of changes dealing with the CoreWindow swapchain model for Windows Store apps / Universal Windows Platform apps.
Many of the DXGI APIs are really for internal use, particularly when dealing with surface creation. You need to create Direct3D resources with the Direct3D APIs and not try to create them directly with DXGI, but you can use QueryInterface in places to get a DXGI surface for doing certain operations like inter-API surface sharing. With Direct3D 11.1 or later, most of the device sharing behavior has been automated so you don't have to deal with DXGI to use Direct2D/DirectWrite with Direct3D 11.
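For illustration, a minimal sketch of how the two APIs interoperate (the WalkBackToDxgi name is just for the example): a texture created through Direct3D can be queried for its DXGI surface, and the device can be walked back to the DXGI adapter and factory that created it.

    #include <d3d11.h>
    #include <dxgi.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    HRESULT WalkBackToDxgi(ID3D11Device* device, ID3D11Texture2D* texture)
    {
        // Direct3D resource -> DXGI surface (e.g. for inter-API surface sharing)
        ComPtr<IDXGISurface> surface;
        HRESULT hr = texture->QueryInterface(IID_PPV_ARGS(&surface));
        if (FAILED(hr)) return hr;

        // Direct3D device -> DXGI device -> adapter -> factory
        ComPtr<IDXGIDevice> dxgiDevice;
        hr = device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
        if (FAILED(hr)) return hr;

        ComPtr<IDXGIAdapter> adapter;
        hr = dxgiDevice->GetAdapter(&adapter);
        if (FAILED(hr)) return hr;

        ComPtr<IDXGIFactory> factory;
        return adapter->GetParent(IID_PPV_ARGS(&factory));
    }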
The real question is: Why does it matter to you?
See DirectX Graphics Infrastructure (DXGI): Best Practices and Surface Sharing Between Windows Graphics APIs
I'm porting a DirectX 9 program to DirectX 11. How do I get the value in DirectX 11 that is retrieved using
D3DCAPS9::MaxVertexIndex
in DirectX 9?
Thanks in advance.
DirectX 11 uses "feature levels" to capture the bulk of device capabilities in a set stair-step fashion. You should read about feature levels on MSDN and in this blog post.
Feature Level 9.1 supports 16-bit indices and Feature Level 9.2 or later supports 32-bit indices.
The MaxVertexIndex is essentially the same as "Max Primitive Count" on the MSDN Feature Level table.
Feature Level 9.1 requires MaxVertexIndex >= 65534.
Feature Levels 9.2 and 9.3 require MaxVertexIndex >= 1048575.
Feature Level 10.0 or later defines the maximum vertex index as a full 32 bits (with the value 0xFFFFFFFF reserved for strip restarts), i.e. 4294967295.
BTW, there are a few optional features that hardware can expose in addition to their defined feature levels, but there are really only a few dozen of these across the whole ecosystem. You use CheckFeatureSupport for most of these. You can use CheckFormatSupport for a lot of information, but the bulk of the settings here are strictly determined by the feature level anyhow. See MSDN for the DXGI format support tables.
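Putting that together for the port: instead of reading a MaxVertexIndex cap, derive the index buffer format from the feature level. A minimal sketch (the ChooseIndexFormat helper is just for illustration):

    #include <d3d11.h>

    // 9.1 only guarantees 16-bit indices; 9.2 and up guarantee 32-bit indices.
    DXGI_FORMAT ChooseIndexFormat(ID3D11Device* device)
    {
        if (device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_9_2)
            return DXGI_FORMAT_R32_UINT; // full 32-bit indices (0xFFFFFFFF reserved)
        return DXGI_FORMAT_R16_UINT;     // feature level 9.1: max index 65534
    }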
Where could I find an OpenCL SDK for the Intel Core 2 Duo?
Graphics card: Mobile Intel(R) Series Express Chipset Family.
The current Intel OpenCL SDK does not support Core 2 Duo series CPUs (see the release notes).
If, however, you want to use that kind of CPU for OpenCL (development), you can use the AMD APP SDK. It supports all CPUs with at least SSE 2.x, as can be seen here
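Once the AMD APP SDK is installed, a quick sanity check is to enumerate CPU devices through the standard OpenCL host API. A minimal sketch (assumes the OpenCL headers and ICD loader are set up):

    #include <CL/cl.h>
    #include <cstdio>

    int main()
    {
        cl_platform_id platforms[8];
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(8, platforms, &numPlatforms);

        for (cl_uint i = 0; i < numPlatforms; ++i) {
            // Ask each platform specifically for CPU devices; the AMD
            // platform exposes any SSE2-capable x86 CPU this way.
            cl_device_id device;
            cl_uint numDevices = 0;
            if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU, 1,
                               &device, &numDevices) == CL_SUCCESS && numDevices > 0) {
                char name[256];
                clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
                printf("CPU device: %s\n", name);
            }
        }
        return 0;
    }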
Works for me (Core2Duo 6750, Ubuntu)
If I want to do scaling and compositing of 2D anti-aliased vector and bitmap images in real-time on Windows XP and later versions of Windows, making the best use of hardware acceleration available, should I be using GDI+ or DirectX 9.0c? (Actually, Windows XP and Windows 7 are important but we're not concerned about performance on Vista.)
Is there any merit in using SDL, given that the application is not cross-platform (and never will be)? I wonder if SDL might make it easier to switch to whichever underlying drawing API gives better performance…
Where can I find the documentation for doing scaling and compositing of 2D images in DirectX 9.0c? (I found the documentation for DirectDraw but read that it is deprecated after DirectX 7. But Direct2D is not available until DirectX 10.)
Can I reasonably expect scaling and compositing to be hardware accelerated on Windows XP on a mid- to low-spec PC (i.e. integrated graphics)? If not then does it even matter whether I use GDI+ or DirectX 9.0c?
Do not use GDI+. It does everything in software, and it has a rendering model that is not good for performance in software. You'd be better off with just about anything else.
Direct3D or OpenGL (which you can access via SDL if you want a more complete API that is cross-platform) will give you the best performance on hardware that supports it. Direct2D is in the same boat but is not available on Windows XP. My understanding is that, at least in the case of Intel's integrated GPU's, the hardware is able to do simple operations like transforming and composing, and that most of the problems with these GPU's are with games that have high demands for features and performance, and are optimized for ATI/Nvidia cards. If you somehow find a machine where Direct3D is not supported by the video card and is falling back to software, then you might have a problem.
I believe SDL uses DirectDraw on Windows for its non-OpenGL drawing. Somehow I got the impression that DirectDraw does all its operations in software in modern releases of Windows (and given what DirectDraw is used for it never really mattered since the win9x era), but I'm not able to verify that.
The ideal would be a cross-platform vector graphics library that can make use of Direct3D or OpenGL for rendering, but AFAICT no such thing is available. The Cairo graphics library lacks acceleration on Windows, and Mozilla has started a project called Azure that apparently has that but doesn't appear to be designed for use outside of their projects.
I just found this: 2D Rendering in DirectX 8.
It appears that since Microsoft removed DirectDraw after DirectX 7 they expected all 2D drawing to be done using the 3D API. This would explain why I totally failed to find the documentation I was looking for.
The article looks promising so far.
Here's another: 2D Programming in a 3D World
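To make the "textured quad" approach concrete, here is a minimal Direct3D 9 sketch of scaled, alpha-blended 2D drawing using the D3DX sprite helper (assumes the D3DX utility library; device/texture creation and error handling omitted):

    #include <d3d9.h>
    #include <d3dx9.h>

    void DrawScaled(IDirect3DDevice9* device, IDirect3DTexture9* texture)
    {
        ID3DXSprite* sprite = nullptr;
        if (FAILED(D3DXCreateSprite(device, &sprite))) return;

        device->BeginScene();
        sprite->Begin(D3DXSPRITE_ALPHABLEND); // composite with per-pixel alpha

        D3DXMATRIX scale;
        D3DXMatrixScaling(&scale, 2.0f, 2.0f, 1.0f); // 2x scale, done on the GPU
        sprite->SetTransform(&scale);

        D3DXVECTOR3 pos(0.0f, 0.0f, 0.0f);
        sprite->Draw(texture, nullptr, nullptr, &pos, 0xFFFFFFFF);

        sprite->End();
        device->EndScene();
        sprite->Release();
    }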