DirectX feature level and code

I wrote a program with Direct3D 11, and I declared my feature levels like this:
D3D_FEATURE_LEVEL featureLevel[4] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0
};
I'm curious why this program can run at feature level 10 even though I only coded against Direct3D 11.
If I use Direct3D 11 functions, shouldn't the program run only on Direct3D 11 hardware?

The feature levels have nothing to do with the DirectX API version. Each feature level describes a set of hardware capabilities that matter to game developers, such as partial constant buffer updates, 16bpp rendering, partial clears, and so on. You program against the Direct3D 11 API in every case; the feature level only tells you which of those capabilities the underlying GPU actually supports.
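As a rough, self-contained sketch (not from the original post), this is the usual pattern: the array is handed to D3D11CreateDevice, which walks the list and returns the highest feature level the GPU supports, while your code keeps using the same Direct3D 11 API:

#include <d3d11.h>

D3D_FEATURE_LEVEL featureLevels[] =
{
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0
};

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL selectedLevel;

// The runtime picks the first level in the array that the hardware can do.
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    featureLevels, _countof(featureLevels), D3D11_SDK_VERSION,
    &device, &selectedLevel, &context);

// selectedLevel may come back as D3D_FEATURE_LEVEL_10_0 on older GPUs; the
// code still uses the Direct3D 11 API, it just cannot rely on 11-only
// hardware features.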


Determining which swap chain formats are supported

When calling IDXGIFactory1::CreateSwapChain with DXGI_FORMAT_B5G6R5_UNORM, I get an error that this format isn't supported, specifically E_INVALIDARG ("One or more arguments are invalid"). However, this works fine with a more standard format like DXGI_FORMAT_B8G8R8A8_UNORM.
I'm trying to understand how I can know which swap chain formats are supported. From digging around in the documentation, I can find lists of supported formats for "render targets", but this doesn't appear to be the same set of formats supported for swap chains. B5G6R5 does need 11.1 to have required support for most uses, but it's working as a render target.
https://learn.microsoft.com/en-us/previous-versions//ff471325(v=vs.85)
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-0-feature-level-hardware
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-1-feature-level-hardware
As a test, I looped through all formats and attempted to create swap chains with each. Of the 118 formats, only 8 appear to be supported on my machine (RTX 2070):
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_NV12
DXGI_FORMAT_YUY2
What is the proper way to know which swap chain formats are supported?
For additional context, I'm doing off-screen rendering to a 16-bit (565) format. I have an optional "preview window" that I open occasionally to quickly see the rendering results. When I create the window I create a swap chain and do a copy from the real render target into the swap chain back buffer. I'm targeting DirectX 11 or 11.1. I'm able to render to the B5G6R5 format just fine, it's only the swap chain that complains. I'm running Windows 10 1909.
Here's a Gist with resource creation snippets and a full code sample.
https://gist.github.com/akbyrd/c9d312048b49c5bd607ceba084d95bd0
For the swap chain, the format must be supported for "Display Scan-Out". If you need to check format support at runtime, you can use:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(backBufferFormat, &formatSupport)))
    formatSupport = 0;

UINT32 required = D3D11_FORMAT_SUPPORT_RENDER_TARGET | D3D11_FORMAT_SUPPORT_DISPLAY;
if ((formatSupport & required) != required)
{
    // Not supported
}
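If you want to reproduce the enumeration from the question without actually creating swap chains, a sketch of the same check in a loop (reusing device and required from the snippet above; the upper bound and variable names are mine, not from the original answer):

for (UINT f = 1; f <= DXGI_FORMAT_B4G4R4A4_UNORM; ++f)   // adjust the range as needed
{
    UINT support = 0;
    if (SUCCEEDED(device->CheckFormatSupport(static_cast<DXGI_FORMAT>(f), &support))
        && (support & required) == required)
    {
        // This format is a candidate back-buffer format on this device.
    }
}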
For all Direct3D Hardware Feature Level devices, you can always count on DXGI_FORMAT_R8G8B8A8_UNORM working. Unless you are on Windows Vista or ancient WDDM 1.0 legacy drivers, you can also count on DXGI_FORMAT_B8G8R8A8_UNORM.
For Direct3D Hardware Feature Level 10.0 or better devices, you can also count on DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM being supported.
You can also count on all Direct3D Hardware Feature Level devices supporting DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for swap chains if you are using the 'legacy' swap effects. For the modern swap effects, which are required for DirectX 12 and recommended on Windows 10 for DirectX 11 (see this blog post), the swap chain buffer is not created with an _SRGB format; instead, you create only the render target view with it.
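For example, with a flip-model swap chain you create the buffer as DXGI_FORMAT_B8G8R8A8_UNORM and request gamma correction only on the view (a sketch, error handling omitted):

// Swap chain buffer created as DXGI_FORMAT_B8G8R8A8_UNORM with a flip swap
// effect (e.g. DXGI_SWAP_EFFECT_FLIP_DISCARD); the _SRGB flavor goes on the view only.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));

D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;

ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(backBuffer, &rtvDesc, &rtv);
backBuffer->Release();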
See Anatomy of Direct3D 11 Create Device

FFT in C++ AMP throws CLIPBRD_E_CANT_OPEN error

I am trying to use C++ AMP in Visual C++ 2017 on Windows 10 (updated to the latest), and I found the archived FFT library from the C++ AMP team on CodePlex. I tried to run the sample code, but the program threw an out-of-memory error when creating the DirectX FFT. I solved that problem by following the thread on the Microsoft forum.
However, the problems don't stop there. When the FFT library tries to create an Unordered Access View, it throws a CLIPBRD_E_CANT_OPEN error. I'm not doing anything with the clipboard at all.
Thank you for reading this!
It seems I have solved the problem. The original post mentioned that we need to create a new DirectX device and then create an accelerator view on top of it. I then pass that view to the fft constructor as the second parameter:
fft(
    concurrency::extent<_Dim> _Transform_extent,
    const concurrency::accelerator_view& _Av = concurrency::accelerator().default_view,
    float _Forward_scale = 0.0f,
    float _Inverse_scale = 0.0f)
However, I still got crashes with CLIPBRD_E_CANT_OPEN.
After reading the code, I realized that I need to create the arrays on that DirectX accelerator view too, so I changed the array construction as well:
array<std::complex<float>,dims> transformed_array(extend, directx_acc_view);
The idea came from the different behaviors of create_uav(): the internal buffers and the precomputation caused no problem, but the samples' calls triggered the clipboard error. I guessed the device mattered here, so I made that change.
I hope my understanding is correct; in any case, the error is gone now.
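For anyone hitting the same error, here is a minimal sketch of the arrangement described above. The amp_fft.h header and the fft<> class name are taken from the archived library and may differ slightly in your copy; error handling is omitted:

#include <amp.h>
#include <d3d11.h>
#include <complex>
#include "amp_fft.h"   // archived C++ AMP FFT library

// Create a D3D11 device explicitly and wrap it in an accelerator_view.
concurrency::accelerator_view make_directx_view()
{
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, &level, &context);
    return concurrency::direct3d::create_accelerator_view(device);
}

void build_fft()
{
    concurrency::accelerator_view view = make_directx_view();
    concurrency::extent<2> ext(256, 256);

    // Pass the same view to the fft object and to every array it touches.
    fft<std::complex<float>, 2> transform(ext, view);
    concurrency::array<std::complex<float>, 2> input(ext, view);
    concurrency::array<std::complex<float>, 2> transformed(ext, view);

    // transform.forward_transform(input, transformed);  // per the library samples
}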

Detect if Cycript/Substrate or gdb is attached to an iOS app's process?

I am building an iOS app that transmits sensitive data to my server, and I'm signing my API requests as an additional measure. I want to make reverse engineering as hard as possible, and having used Cycript to find the signing keys of some real-world apps, I know it's not hard to find these keys by attaching to a process. I am absolutely aware that someone who is skilled enough and tries hard enough will eventually succeed, but I'm trying to make it as hard as possible while still keeping things convenient for myself and users.
I can check for jailbroken status and take additional measures, or I can do SSL pinning, but both are still easy to bypass by attaching to the process and modifying the memory.
Is there any way to detect if something (whether it be Cycript, gdb, or any similar tool that can be used for cracking the process) is attached to the process, while not getting rejected from the App Store?
EDIT: This is not a duplicate of Detecting if iOS app is run in debugger. That question is about output: it checks whether an output stream is attached to a logger, which doesn't cover my case.
gdb detection is doable via the linked Stack Overflow question - it uses sysctl to look at the process's kinfo_proc and determine whether it is being traced. This will detect if a debugger is currently attached to the process.
There is also a piece of code - Using the Macro SEC_IS_BEING_DEBUGGED_RETURN_NIL in iOS app - which lets you drop in a macro that performs the debugger-attached check at a variety of locations in your code (it's C/Objective-C).
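For reference, the core of that sysctl-based check (essentially Apple's Technical Q&A QA1361) looks like this:

#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/sysctl.h>

static bool amIBeingDebugged(void)
{
    struct kinfo_proc info;
    size_t size = sizeof(info);
    int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid() };

    memset(&info, 0, sizeof(info));
    if (sysctl(mib, 4, &info, &size, NULL, 0) != 0)
        return false;                     // assume not debugged if the call fails

    // P_TRACED is set while a debugger (gdb, lldb, ...) is attached.
    return (info.kp_proc.p_flag & P_TRACED) != 0;
}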
As for detecting Cycript: when it is run against a process, it injects a dylib into the process to handle communication between the cycript command line and the process - the library has a name containing something like cynject. That name doesn't look similar to any library present in a typical iOS app, so this should be detectable with a little loop like the following (C):
#include <mach-o/dyld.h>
#include <string.h>
#include <objc/objc.h>   // for BOOL/YES/NO

BOOL hasCynject(void) {
    uint32_t count = _dyld_image_count();
    for (uint32_t i = 0; i < count; i++) {
        const char *name = _dyld_get_image_name(i);
        // strstr returns a pointer to the match, or NULL if there is no match
        if (name != NULL && strstr(name, "cynject") != NULL) return YES;
    }
    return NO;
}
Again, giving it a better name than this would be advisable, as well as obfuscating the string that you're testing.
These are only some of the approaches that can be taken - unfortunately they only protect you in some ways at run-time; if someone chooses to point IDA or some other disassembler at the binary, you would not be protected.
The reason the debugger check is implemented as a macro is that you place the code in many different spots, so anyone trying to defeat it has to patch the app in many different places.
Based on @petesh's answer, I found the code below achieved what I wanted on a jailbroken phone with Cycript. The existence of printf strings is gold to a reverse engineer, so this code is only suitable for demo / crack-me apps.
#include <stdio.h>
#include <string.h>
#include <mach-o/dyld.h>

int main()
{
    int max = _dyld_image_count();
    for (int i = 0; i < max; i++) {
        const char *name = _dyld_get_image_name(i);
        const char needle[11] = "libcycript";
        char *ret;
        if ((ret = strstr(name, needle)) != NULL) {
            printf("%s\nThe substring is: %s\n", name, ret);
        }
    }
    return 0;
}
As far as I know, Cycript process injection is made possible by debug symbols. So, if you strip out debug symbols for the App Store release (the default build setting for the Release configuration), that would help.
Another action you could take, which would have no impact on the usability of the App, would be to use an obfuscator. However, this would render any crash reports useless, since you wouldn't be able to make sense of the symbols, even if the crash report was symbolicated.

How to trigger a DMA operation on PCI sound card

I'm a newbie to driver development in Linux. I want to trigger a DMA read operation at a specified target address, but I have no basic concept of how to do it. Should I write a new driver for my sound card, or just invoke some APIs (if any) provided by the current sound card driver?
I can imagine that what I want looks something like this (from LDD3, Chapter 15):
int dad_transfer(struct dad_dev *dev, int write, void *buffer,
                 size_t count)
{
    dma_addr_t bus_addr;

    /* Map the buffer for DMA */
    dev->dma_dir = (write ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
    dev->dma_size = count;
    bus_addr = dma_map_single(&dev->pci_dev->dev, buffer, count,
                              dev->dma_dir);
    dev->dma_addr = bus_addr;

    /* Set up the device */
    writeb(dev->registers.command, DAD_CMD_DISABLEDMA);
    writeb(dev->registers.command, write ? DAD_CMD_WR : DAD_CMD_RD);
    writel(dev->registers.addr, cpu_to_le32(bus_addr));
    writel(dev->registers.len, cpu_to_le32(count));

    /* Start the operation */
    writeb(dev->registers.command, DAD_CMD_ENABLEDMA);
    return 0;
}
But what should this be, a user-space program or a kernel module? And where can I grab more device-specific details so that I know which registers to write and how?
You have several questions buried in here, so I will take them one at a time:
Should I write a new driver or invoke some API function calls?
If the existing driver exposes such functions to userspace then yes, you should use them - they will be the easiest option. If they do not already exist, you will have to write a driver, because you cannot directly access the kernel's DMA engine from userspace. You need a driver to help you along.
Should this be a userspace program or module?
It would have to be a module, so that it can access low-level kernel features. Using your included code as an example, you cannot call dma_map_single() from userspace or access a PCI device's device structure. You need to be in kernel space to do that, which requires either a loadable driver module or a static kernel driver.
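To give a very rough idea of the shape (the device and all names are hypothetical, and the PCI probe/char-device plumbing is only hinted at in comments), the module that would host code like dad_transfer() looks something like this:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/pci.h>

/* Hypothetical per-device state; the real fields depend on your card. */
struct dad_dev {
    struct pci_dev *pci_dev;
    /* registers, dma_addr, dma_size, dma_dir, ... */
};

static int __init dad_init(void)
{
    pr_info("dad: loaded\n");
    /* In a real driver you would register a struct pci_driver here; its
     * probe() callback maps the BARs, fills in struct dad_dev, and exposes
     * a char device or ioctl so userspace can trigger dad_transfer(). */
    return 0;
}

static void __exit dad_exit(void)
{
    pr_info("dad: unloaded\n");
}

module_init(dad_init);
module_exit(dad_exit);
MODULE_LICENSE("GPL");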
Where can I get more device-specific details?
You will have to get hold of a programmer's guide for the device you want to access. Regular user manuals won't have the level of detail you need (register addresses, bit patterns, etc.), so you may have to contact the manufacturer to get a driver writer's guide. You may also be able to find some examples in the kernel source code. Check http://lxr.free-electrons.com/ for a searchable, up-to-date listing of the entire kernel source; if you look in /drivers/, you may find some examples to get you started.

iOS Audio Units: When is use of an AUGraph necessary?

I'm totally new to iOS programming (I'm more an Android guy..) and have to build an application dealing with audio DSP. (I know it's not the easiest way to approach iOS dev ;) )
The app needs to be able to accept input from both:
1- built-in microphone
2- iPod library
Then filters may be applied to the input sound, and the result is to be output to:
1- Speaker
2- Record to a file
My question is the following: Is an AUGraph necessary in order to, for example, apply multiple filters to the input, or can these different effects be applied by processing the samples with different render callbacks?
If I go with an AUGraph, do I need one Audio Unit for each input, one Audio Unit for the output, and one Audio Unit for each effect/filter?
And finally, if I don't, can I have only one Audio Unit and reconfigure it to select the source/destination?
Many thanks for your answers! I'm getting lost with this stuff...
You may indeed use render callbacks if you wish, but the built-in Audio Units are great (and there are things coming that I can't talk about here yet under NDA etc., I've said too much; if you have access to the iOS 5 SDK, I recommend you have a look).
You can implement the behavior you want without using an AUGraph; however, it is recommended that you do use one, as it takes care of a lot of things under the hood and saves you time and effort.
Using AUGraph
From the Audio Unit Hosting Guide (iOS Developer Library):
The AUGraph type adds thread safety to the audio unit story: It enables you to reconfigure a processing chain on the fly. For example, you could safely insert an equalizer, or even swap in a different render callback function for a mixer input, while audio is playing. In fact, the AUGraph type provides the only API in iOS for performing this sort of dynamic reconfiguration in an audio app.
Choosing A Design Pattern (iOS Developer Library) goes into some detail on how to choose and implement your Audio Unit environment, from setting up the audio session and graph to configuring/adding units and writing callbacks.
As for which Audio Units you would want in the graph, in addition to what you already stated, you will want to have a MultiChannel Mixer Unit (see Using Specific Audio Units (iOS Developer Library)) to mix your two audio inputs and then hook up the mixer to the Output unit.
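To make that concrete, here is a rough sketch of building such a graph - a MultiChannel Mixer node feeding a Remote I/O output node. Error handling is omitted and the bus numbers are illustrative:

#include <AudioToolbox/AudioToolbox.h>

AUGraph graph;
NewAUGraph(&graph);

AudioComponentDescription mixerDesc = { 0 };
mixerDesc.componentType = kAudioUnitType_Mixer;
mixerDesc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponentDescription ioDesc = { 0 };
ioDesc.componentType = kAudioUnitType_Output;
ioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode mixerNode, ioNode;
AUGraphAddNode(graph, &mixerDesc, &mixerNode);
AUGraphAddNode(graph, &ioDesc, &ioNode);
AUGraphOpen(graph);

// Mixer output bus 0 feeds the Remote I/O unit's input element 0 (the speaker side).
AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

AUGraphInitialize(graph);
AUGraphStart(graph);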
Direct Connection
Alternatively, if you want to do it directly without using an AUGraph, the following code is a sample of how to hook Audio Units together yourself (from Constructing Audio Unit Apps (iOS Developer Library)):
You can, alternatively, establish and break connections between audio units directly by using the audio unit property mechanism. To do so, use the AudioUnitSetProperty function along with the kAudioUnitProperty_MakeConnection property, as shown in Listing 2-6. This approach requires that you define an AudioUnitConnection structure for each connection to serve as its property value.
/*Listing 2-6*/
AudioUnitElement mixerUnitOutputBus = 0;
AudioUnitElement ioUnitOutputElement = 0;
AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit = mixerUnitInstance;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber = ioUnitOutputElement;
AudioUnitSetProperty (
ioUnitInstance, // connection destination
kAudioUnitProperty_MakeConnection, // property key
kAudioUnitScope_Input, // destination scope
ioUnitOutputElement, // destination element
&mixerOutToIoUnitIn, // connection definition
sizeof (mixerOutToIoUnitIn)
);
