DirectX 11: simultaneous use of multiple adapters

We need to drive 8 to 12 monitors from one PC, all rendering different views of a single 3D scene graph, so we have to use several graphics cards. We're currently running on DX9, so we're looking to move to DX11 in the hope that it makes this easier.
Initial investigation seems to suggest that the obvious approach doesn't work: performance is lousy unless we drive each card from a separate process. Web searches are turning up nothing. Can anybody suggest the best way to go about using several cards simultaneously from a single process with DX11?

I see that you've already come to a solution, but I thought it'd be good to throw in my own recent experiences for anyone else who comes across this question...
Yes, you can drive any number of adapters and outputs from a single process. Here's some information that might be helpful:
In DXGI and DX11:
Each graphics card is an "Adapter". Each monitor is an "Output". See here for more information about enumerating through these.
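For illustration, a minimal C++ enumeration sketch (error handling omitted):
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Walk every adapter (graphics card) and each of its outputs (monitors).
void EnumerateAdaptersAndOutputs()
{
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
    {
        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
        {
            DXGI_OUTPUT_DESC desc;
            output->GetDesc(&desc);
            // desc.DeviceName and desc.DesktopCoordinates identify the monitor.
        }
    }
}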
Once you have pointers to the adapters that you want to use, create a device (ID3D11Device) using D3D11CreateDevice for each of the adapters. Maybe you want a different thread for interacting with each of your devices. This thread may have a specific processor affinity if that helps speed things up for you.
Once each adapter has its own device, create a swap chain and render target for each output. You can also create your depth stencil view for each output as well while you're at it.
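As a rough sketch of those two steps (the buffer format and the helper's signature are just examples, and error handling is omitted):
// Create a device for one adapter, then a swap chain + views for one output.
// hwnd is the window that will host this output (one window per output, as
// described in the next step).
void CreateDeviceAndSwapChain(IDXGIFactory1* factory, IDXGIAdapter1* adapter,
                              HWND hwnd, UINT width, UINT height)
{
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> context;
    // The driver type must be UNKNOWN when an explicit adapter is supplied.
    D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, nullptr, &context);

    DXGI_SWAP_CHAIN_DESC scd = {};
    scd.BufferCount = 1;
    scd.BufferDesc.Width = width;
    scd.BufferDesc.Height = height;
    scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    scd.OutputWindow = hwnd;
    scd.SampleDesc.Count = 1;
    scd.Windowed = TRUE;

    ComPtr<IDXGISwapChain> swapChain;
    factory->CreateSwapChain(device.Get(), &scd, &swapChain);

    ComPtr<ID3D11Texture2D> backBuffer;
    swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
    ComPtr<ID3D11RenderTargetView> rtv;
    device->CreateRenderTargetView(backBuffer.Get(), nullptr, &rtv);
    // Create the matching depth texture and depth stencil view here as well.
}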
The process of creating a swap chain will require your windows to be set up: one window per output. I don't think there is much benefit in driving your rendering from the window that contains the swap chain. You can just create the windows as hosts for your swap chain and then forget about them entirely afterwards.
For rendering, you will need to iterate through each Output of each Device. For each output change the render target of the device to the render target that you created for the current output using OMSetRenderTargets. Again, you can be running each device on a different thread if you'd like, so each thread/device pair will have its own iteration through outputs for rendering.
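A sketch of that per-device loop (OutputResources is a hypothetical bundle of the swap chain and views created earlier, not a DXGI type):
#include <vector>

struct OutputResources
{
    ComPtr<IDXGISwapChain> swapChain;
    ComPtr<ID3D11RenderTargetView> rtv;
    ComPtr<ID3D11DepthStencilView> dsv;
};

// Each device (on its own thread, if you like) iterates its own outputs:
// bind the output's render target, draw its view, then present.
void RenderOutputs(ID3D11DeviceContext* context,
                   std::vector<OutputResources>& outputs)
{
    for (auto& out : outputs)
    {
        ID3D11RenderTargetView* rtv = out.rtv.Get();
        context->OMSetRenderTargets(1, &rtv, out.dsv.Get());
        const float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
        context->ClearRenderTargetView(rtv, clearColor);
        // ... draw this output's view of the scene graph ...
        out.swapChain->Present(1, 0);  // present with vsync
    }
}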
Here are a bunch of links that might be of help when going through this process:
Display Different images per monitor directX 10
DXGI and 2+ full screen displays on Windows 7
http://msdn.microsoft.com/en-us/library/windows/desktop/ee417025%28v=vs.85%29.aspx#multiple_monitors
Good luck!

Maybe you don't need to upgrade DirectX.
See this article.

Enumerate the available devices with IDXGIFactory, create a ID3D11Device for each and then feed them from different threads. Should work fine.
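For example, a sketch of the fan-out (DeviceBundle and RenderAllOutputs are placeholders for your own per-device state and render loop, not D3D types):
#include <atomic>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

void FeedDevices(std::vector<DeviceBundle>& devices)
{
    std::vector<std::thread> threads;
    for (auto& dev : devices)
    {
        // One thread per device; each loops over that device's outputs.
        threads.emplace_back([&dev] {
            while (g_running.load())
                RenderAllOutputs(dev);
        });
    }
    for (auto& t : threads)
        t.join();
}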

What is the standard workflow for using Drake with a real robot?

I would like to use Drake to control a real robot (Franka Panda or KUKA iiwa). For simulation, the standard workflow would be to build a diagram first and then use the Simulator to simulate the diagram. However, for use on a real robot, I haven't found an example of how to run the built diagram. A close example would be ManipulationStationHardwareInterface; however, I haven't found code showing how to actually use this interface. Do I need to manually generate the context and run the diagram in a while loop? Or maybe I still need to use the Simulator and use the AdvanceTo method? If I use the Simulator, would there be additional settings for synchronization? What's more, how do I control the loop frequency of the running diagram?
Thank you in advance!
The ManipulationStationHardwareInterface is indeed a reasonable example to look at. We have an updated version of it coming to Drake soon, too.
The basic steps are:
Run the robot's driver as a different process. Some of the drivers we use are public and are hosted here: https://manipulation.csail.mit.edu/station.html .
Send / receive messages from your main controller thread to/from your driver. The LcmPublisherSystem and LcmSubscriberSystem are right in Drake. The ROS/ROS2 equivalents are here: https://github.com/RobotLocomotion/drake-ros . (The ManipulationStationHardwareInterface + manipulation_station_simulation file in Drake shows an example of splitting up a Diagram for control + simulation into two processes using this message passing.)
Timing: in the simplest case (typically ok for hardware) all of the executables are running with simulator.set_target_realtime_rate(1.0) and then simulator.AdvanceTo(big number). In other words, they all synchronize their clocks to the cpu clock, and don't try to synchronize explicitly to a clock passed via message passing. That sort of synchronization is possible, too, but I recommend the simpler version first. In this simple version, each system (e.g. your controller) is set to update with some periodic update -- say 100Hz -- and will run its computation using the most recently received subscribed messages and publish at that rate.
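To make that concrete, here is a minimal, hedged C++ sketch of the simple timing setup (the iiwa LCM message types ship with Drake, but the channel names, the 100Hz period, and the elided controller wiring are illustrative assumptions, not the one true setup):
#include <limits>
#include "drake/lcm/drake_lcm.h"
#include "drake/lcmt_iiwa_command.hpp"
#include "drake/lcmt_iiwa_status.hpp"
#include "drake/systems/analysis/simulator.h"
#include "drake/systems/framework/diagram_builder.h"
#include "drake/systems/lcm/lcm_publisher_system.h"
#include "drake/systems/lcm/lcm_subscriber_system.h"

int main() {
  drake::systems::DiagramBuilder<double> builder;
  drake::lcm::DrakeLcm lcm;

  // Receive status messages from the driver process.
  auto* status_sub = builder.AddSystem(
      drake::systems::lcm::LcmSubscriberSystem::Make<drake::lcmt_iiwa_status>(
          "IIWA_STATUS", &lcm));

  // Publish command messages to the driver at 100Hz.
  auto* command_pub = builder.AddSystem(
      drake::systems::lcm::LcmPublisherSystem::Make<drake::lcmt_iiwa_command>(
          "IIWA_COMMAND", &lcm, 0.01 /* publish period in seconds */));

  // ... add your controller system here, then wire status_sub's output to it
  // and its output to command_pub's input with builder.Connect(...) ...
  (void)status_sub;
  (void)command_pub;

  auto diagram = builder.Build();
  drake::systems::Simulator<double> simulator(*diagram);
  simulator.set_target_realtime_rate(1.0);  // synchronize to the wall clock
  simulator.AdvanceTo(std::numeric_limits<double>::infinity());
  return 0;
}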

Multiple nodes in .scn model causing iMessage extension to crash

I am working with SceneKit in an iMessage extension and have run into a peculiar little beast of an issue. I am trying to render a custom scn model and rig its nodes to a user's facial expression using blend shape anchors. I am able to do this successfully, without issue, in the iOS app that this iMessage extension is born from. However, once placed into a MessageViewController, the program exits with code 0 every time I try to run it.
I did a bit of digging and it seems "exited with code 0" is indicative of memory overload, so I started playing around with my model's nodes. I discovered that if I delete all nodes but one, I am able to animate that node with its corresponding blend shape. Any more than one node and it crashes.
Does anyone have any ideas as to why this is happening? Or any proof that iMessage extensions are only granted a certain amount of processing power before they are killed off (another theory of mine)?
Appreciate any help!
From the App Extension Programming Guide we learn that:
Memory limits for running app extensions are significantly lower than the memory limits imposed on a foreground app. On both platforms, the system may aggressively terminate extensions because users want to return to their main goal in the host app. Some extensions may have lower memory limits than others: For example, widgets must be especially efficient because users are likely to have several widgets open at the same time.
Your app extension doesn’t own the main run loop, so it’s crucial that you follow the established rules for good behavior in main run loops. For example, if your extension blocks the main run loop, it can create a bad user experience in another extension or app.
Keep in mind that the GPU is a shared resource in the system. App extensions do not get top priority for shared resources; for example, a Today widget that runs a graphics-intensive game might give users a bad experience. The system is likely to terminate such an extension because of memory pressure. Functionality that makes heavy use of system resources is appropriate for an app, not an app extension.
One option may be to optimize your geometry in your DCC (digital content creation) tool so you don't run into system resource constraints.

Lua code runs properly on my advanced computer but doesn't run on the monitor

I run a successful Minecraft Tekkit modded server with ComputerCraft on it.
I'm fairly new to Lua and only know the basics. I'm trying to make a menu with pages to display the banned items list and rules list. I've made a program with arrows that's optimized for advanced computers and monitors.
The code runs properly on my advanced computer but doesn't run on the monitor, and when it does show, clicking the arrows doesn't work either.
I just started using Stack so I'm not sure what to do; if you need any info, please ask for it :)
The code: http://pastebin.com/gVtPeBCE
By the way I already tried using Mon.write and Mon = peripheral.wrap("top")
For those who don't have tekkit here is a computercraft emulator: https://goo.gl/J0dPq0
I'm sorry to inform you that I haven't read through all of your code. But judging by your description, I would say it's likely one of three issues, not counting incorrect syntax as a possibility.
Note: Your question exclusively asks about the program's ability to run on a monitor, while the emulator you link to only provides the desktop ComputerCraft computers.
Peripheral
Although you already stated:
By the way I already tried using Mon.write and Mon = peripheral.wrap("top")
I would like to clarify that, as a way to simplify the code transition, you can set the term variable equal to the peripheral's function table. For example: term = peripheral.wrap(string_side).
Note: When you use this method, you shouldn't execute the program with the command:
> monitor side program
You should instead run it as a normal program with no special treatment, i.e.:
> program
Incorrect Mouse Event Detection
Simply put, when using a monitor, you're not supposed to pull for a mouse_click event. You have to pull for a monitor_touch event instead.
while true do
  -- monitor_touch fires for monitor clicks; mouse_click only fires on the computer itself
  local event, side, x, y = os.pullEvent()
  if event == "monitor_touch" then
    print("Monitor '"..side.."' has been pressed at "..x..", "..y.."!")
  end
end
Monitor Size
This simply means that the program you're trying to execute takes up too much space and is therefore unusable when displayed on a monitor of that size.
Suggestion: Either update your code for the monitor size or build the monitor to fit the program.
Please remember that these ideas might not answer your question, as the code you provided is large and I haven't been able to find the time to experiment with it. Therefore, these are only general suggestions.
If I had to guess: term is short for "terminal" and automatically works on computers, so if you set term to be the monitor at the top of the file, it should work correctly.
term = peripheral.wrap("SIDE OF MONITOR")
Put that at the top of your code and it should work. That's what I think it is after taking a look at your code (also, it's not that long of a code sample...).

PIC32 becomes unresponsive after a few hours

I have a PIC32MX340F512 board developed for us by another company. The board has a DS1338 RTCC, a 24LC32A EEPROM, and a display unit on an I2C bus. On this bus I included a TSL2561 I2C light sensor, and I wrote C code to poll the light sensor continuously; when the light sensor reaches a certain level, I save the time, date, and light sensor value to an SD card.
This all works fine, but if I leave the system without exposure to light (inside a tunnel, where the incident light at one end of the tunnel is to be monitored), the system becomes unresponsive no matter how much light you apply. If I then switch power off and back on again, everything starts to work normally.
I am a one-man development team and have been trying to find the problem for months. I activated the watchdog timer to prevent the system from hanging, but the problem persisted. I then decided to find out whether the problem was with the sensor by including a push button to activate a light measurement, but still, once 4-5 hours elapse, the PIC can't even detect a change on the input pin. Under the impression that a hardware reset overrides anything going on, I included a reset button; it also works fine for the first few hours, but after that the PIC doesn't seem to respond to anything, including a reset.
I was getting convinced that there is nothing wrong with the firmware. Also, while all of this is happening, the display unit (a PIC16F1933 and LCD) on the I2C bus, which shares power with the main unit, doesn't seem to be affected, as it alternates between its different messages constantly.
Does anybody have an idea what could be wrong (hardware, firmware, or my sensor)? I am using a 24 V DC power supply purchased separately. The PIC seems to go into a deep sleep, although I did not implement any kind of SLEEP mode in my code.
NB: We use the same board for many other projects and I haven't come across such a problem before. Thanks in advance.
I think you need to (if you haven't already) explore the wonderful world of in-circuit-debugging (such as with the ICD3 or PICkit 2/3). It allows you to run the processor in a special mode that lets you pause execution, see exactly which line of code is being executed, inspect variable values, and step through the code to see which parts are running and not running, or see exactly where execution takes a wrong turn. If the problem takes hours to reproduce, that's okay. You can just leave it overnight running in debug mode and hopefully it will be locked-up or 'sleeping' in the morning. At this point, you will be able to pause the processor and poke around to see if you got caught in some kind of infinite loop or something. This is often the only way to dig inside a running piece of code to see why things aren't working as you expect. But as you say, those bugs that take hours or days to manifest are the trickiest. Good luck!
It sounds like you can break your design up into three main parts: SD card interfacing, reading the RTC, and reading the light sensor. If it were me, I would upload a version of the code that mimics reading the light sensor but only returns fake data, and see if that cures the problem. Additionally, do the same with the other two modules separately and see if any of the three versions of your project stops showing the problem. From there, just keep narrowing it down until you find the block of code that's causing problems.
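For instance, a minimal sketch of stubbing out the sensor (USE_FAKE_SENSOR and tsl2561_read_lux are hypothetical names standing in for your own switch and driver call):
/* Compile-time switch to isolate the light sensor from the rest of the system. */
#define USE_FAKE_SENSOR 1

static unsigned int read_light_level(void)
{
#if USE_FAKE_SENSOR
    /* Skip the I2C bus entirely and return a plausible constant. */
    return 123u;
#else
    /* Real driver call (name is illustrative). */
    return tsl2561_read_lux();
#endif
}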
If two or more versions of your debug code show the same problem, then my guess is that it has to do with one of the communication protocols. I had a problem with a PIC32 silicon revision blocking when using the DMA in conjunction with the SPI peripherals, so I would suggest checking the errata for your chip.
If you still can't find the problem, my only suggestion would be to check for memory leaks or arrays that are growing into reserved memory.
Hope that helps, good luck!

Passing object/NSData between iOS devices

I'm creating a turn-based game, and I was thinking of using Game Center to handle it, but the passed game object is evidently capped at 64 KB. Is there another way to pass objects between devices for this use, without having to create a database or storage server as a middle man? The game object itself is probably a lot less than 64 KB for me, but there are some initial variables I would like to send, such as images. By my calculations, the initial data for one game is about 500 KB, but after receiving those images once, the passed game object is just a couple of KBs and will never need to include those images again.
Is there a way to send these images directly?
There are a few ways to get around the limit.
This answer mentions Alljoyn which would allow you to transfer that size of files.
You could also send them indirectly by uploading them to your own server and passing a link to the file to the other player. For a turn-based game, this has the advantage of enhanced reliability: you can retry on error for both the upload to the server and the download to the device, and control the process yourself. I would recommend AFHTTPClient for this, too.
Is there another way to pass objects between devices for this use, without having to create a database or storage-server as middle man?
Without your own server, there isn't.
