I need memory traces for simulation, and I am using the gem5 simulator (version 22.0) for this purpose, simulating the ARM ISA in SE mode. The memory traces are generated using a CommMonitor. I used the following code in SE mode:
system.comm_monitor = CommMonitor()
system.comm_monitor.cpu_side_port = system.membus.mem_side_ports
system.comm_monitor.trace = MemTraceProbe(trace_file="mem_trace", trace_compress=True)
# system.system_port = system.membus.slave
system.mem_ctrl = MemCtrl()
system.mem_ctrl.dram = DDR3_1600_8x8()
system.mem_ctrl.dram.range = system.mem_ranges[0]
# system.mem_ctrl.port = system.membus.mem_side_ports
system.mem_ctrl.port = system.comm_monitor.mem_side_port
system.system_port = system.membus.cpu_side_ports
After extracting the traces, I get the output below. I understand that the third field is the address. My doubt is whether I should consider the traces from both ports (7 and 8) or only from port 8, since the addresses on port 7 are quite small numbers.
7,r,856,4,256,0
7,r,860,4,256,77000
7,r,864,4,256,126000
8,r,442000,4,10,175000
7,r,868,4,256,238000
7,r,872,4,256,287000
8,w,442000,4,10,336000
I am quite enthusiastic about computers, and I managed to create my own bootloader and kernel.
So far it jumps from real mode to protected mode and can execute C code. The problem is that I cannot write to memory addresses at or above 0x8000000. The code runs fine, but whenever I read from such an address, it seems that the data was never written.
I know that not all memory addresses are mapped to RAM, but according to this article, addresses from 0x100000 to 0xFEBFFFFF are mapped to RAM, so my code should work:
http://staff.ustc.edu.cn/~xyfeng/research/cos/resources/machine/mem.htm
Here is a snippet to help you understand:
char *data1 = (char*)0x8000000; // 0x8000000 = 128 MiB; this cannot be written to
char *data2 = (char*)0x7ffffff; // just below 128 MiB; this can be written to
*data1 = 'A'; // Runs, but the write never actually reaches that RAM address
*data2 = 'A'; // Works properly
Note: I am running my code in QEMU (an emulator) and have not tested it on real hardware yet.
Any help would be appreciated.
When calling IDXGIFactory1::CreateSwapChain with DXGI_FORMAT_B5G6R5_UNORM, I get an error that this format isn't supported, specifically E_INVALIDARG ("One or more arguments are invalid"). However, this works fine with a more standard format like DXGI_FORMAT_B8G8R8A8_UNORM.
I'm trying to understand how I can know which swap-chain formats are supported. From digging around in the documentation, I can find lists of supported formats for "render targets", but this doesn't appear to be the same set of formats supported for swap chains. B5G6R5 does need feature level 11.1 to have required support for most uses, but it is working as a render target here:
https://learn.microsoft.com/en-us/previous-versions//ff471325(v=vs.85)
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-0-feature-level-hardware
https://learn.microsoft.com/en-us/windows/win32/direct3ddxgi/format-support-for-direct3d-11-1-feature-level-hardware
As a test, I looped through all formats and attempted to create a swap chain with each (a sketch of the loop follows the list below). Of the 118 formats, only 8 appear to be supported on my machine (RTX 2070):
DXGI_FORMAT_R16G16B16A16_FLOAT
DXGI_FORMAT_R10G10B10A2_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_B8G8R8A8_UNORM
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
DXGI_FORMAT_NV12
DXGI_FORMAT_YUY2
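The test loop was essentially the following. This is a minimal sketch, assuming an existing ID3D11Device (device), IDXGIFactory1 (factory), and window handle (hwnd); the exact number of DXGI_FORMAT values depends on the SDK version:

#include <d3d11.h>
#include <cstdio>

void TestSwapChainFormats(IDXGIFactory1* factory, ID3D11Device* device, HWND hwnd)
{
    // Try every DXGI_FORMAT value and report which ones CreateSwapChain accepts.
    for (UINT fmt = 1; fmt <= 118; ++fmt)
    {
        DXGI_SWAP_CHAIN_DESC desc = {};
        desc.BufferDesc.Width  = 256;
        desc.BufferDesc.Height = 256;
        desc.BufferDesc.Format = (DXGI_FORMAT)fmt;
        desc.SampleDesc.Count  = 1;
        desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount       = 2;
        desc.OutputWindow      = hwnd;
        desc.Windowed          = TRUE;
        desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;

        IDXGISwapChain* swapChain = nullptr;
        if (SUCCEEDED(factory->CreateSwapChain(device, &desc, &swapChain)))
        {
            printf("format %u: supported\n", fmt);
            swapChain->Release();
        }
    }
}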
What is the proper way to know which swap chain formats are supported?
For additional context, I'm doing off-screen rendering to a 16-bit (565) format. I have an optional "preview window" that I open occasionally to quickly see the rendering results. When I create the window I create a swap chain and do a copy from the real render target into the swap chain back buffer. I'm targeting DirectX 11 or 11.1. I'm able to render to the B5G6R5 format just fine, it's only the swap chain that complains. I'm running Windows 10 1909.
Here's a Gist with resource creation snippets and a full code sample.
https://gist.github.com/akbyrd/c9d312048b49c5bd607ceba084d95bd0
For the swap chain, the format must be supported for "Display Scan-Out". If you need to check format support at runtime, you can use:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(backBufferFormat, &formatSupport)))
    formatSupport = 0;

UINT32 required = D3D11_FORMAT_SUPPORT_RENDER_TARGET | D3D11_FORMAT_SUPPORT_DISPLAY;
if ((formatSupport & required) != required)
{
    // Not supported
}
For all Direct3D hardware feature level devices, you can always count on DXGI_FORMAT_R8G8B8A8_UNORM working. Unless you are on Windows Vista or ancient WDDM 1.0 legacy drivers, you can also count on DXGI_FORMAT_B8G8R8A8_UNORM.
For Direct3D Hardware Feature Level 10.0 or better devices, you can also count on DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM being supported.
You can also count on all Direct3D hardware feature level devices supporting DXGI_FORMAT_R8G8B8A8_UNORM_SRGB and DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for swap chains if you are using the 'legacy' swap effects. For modern swap effects, which are required for DirectX 12 and recommended on Windows 10 for DirectX 11 (see this blog post), the swap-chain buffer is not created with an _SRGB format; instead you create just the render target view with it.
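For illustration, a minimal sketch of that pattern, assuming a flip-model swap chain already created with DXGI_FORMAT_B8G8R8A8_UNORM (swapChain and device are placeholders for your own objects):

// The buffer itself is B8G8R8A8_UNORM; gamma-correct output comes from
// creating the render target view with the _SRGB variant of the format.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));

D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format        = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;

ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(backBuffer, &rtvDesc, &rtv);
backBuffer->Release();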
See Anatomy of Direct3D 11 Create Device
Currently, both console.log and console.dir truncate their output. Here's a pull request that explicitly limited, by default, the size of the console output for the NativeScript Android runtime; I couldn't find a similar pull request for the iOS runtime. The Android pull request added a configuration option that can be set to change the limit, but no such option seems to exist for iOS.
Here's a (closed) issue for the main NativeScript project with a comment mentioning that no configuration option seems to be available (or at least known) to change the apparent limit:
console.dir not showing full object · Issue #6041 · NativeScript/NativeScript
I checked the NativeScript, NativeScript iOS runtime, and even WebKit sources (on which the NativeScript iOS runtime appears to depend for its JavaScript engine), and I couldn't find any obvious limit on the size of console messages.
In the interim, I've opted to use this function in my code:
function logBigStringToConsole(string) {
    const maxConsoleStringLength = 900; // The actual max length isn't clear.
    const length = string.length;
    if (length <= maxConsoleStringLength) {
        console.log(string);
    } else {
        // Log the first chunk, then recurse on the remainder.
        console.log(string.substring(0, maxConsoleStringLength));
        logBigStringToConsole(string.substring(maxConsoleStringLength));
    }
}
and I use it like this:
logBigStringToConsole(JSON.stringify(bigObject));
This query is regarding the PortAudio framework. A little background before I ask the question: I am working on an application in PortAudio to output audio through a multichannel (8-channel) device. However, the device I am using does not expose itself as a single 8-channel device; instead, it shows up in my device list as 4 stereo devices. While searching for an approach to handle this, I learned that the WMME host API in PortAudio supports multiple devices.
Now, I went through the appropriate header file ("pa_win_wmme.h") and followed the suggestions present, but I get the 'Invalid device' error (error number -9996) after calling Pa_OpenStream(). The header file does specify the right parameter(s) to use when configuring multiple devices to avoid this error, but in spite of following them, I still get the error.
So I wanted to know whether anybody has faced a similar issue and whether I have missed or wrongly configured anything.
I am pasting the relevant snippets of code below for reference:
PaStreamParameters outputParameters;
PaWinMmeStreamInfo wmmeStreamInfo;
PaWinMmeDeviceAndChannelCount wmmeDeviceAndNumChannels;
...
...
outputParameters.device = paUseHostApiSpecificDeviceSpecification;
outputParameters.channelCount = 8;
outputParameters.sampleFormat = paFloat32; /* 32 bit floating point processing */
outputParameters.hostApiSpecificStreamInfo = NULL;
wmmeStreamInfo.size = sizeof(PaWinMmeStreamInfo);
wmmeStreamInfo.hostApiType = paMME;
wmmeStreamInfo.version = 1;
wmmeStreamInfo.flags = paWinMmeUseMultipleDevices;
wmmeDeviceAndNumChannels.channelCount = 2;
wmmeDeviceAndNumChannels.device = 3;
wmmeStreamInfo.devices = &wmmeDeviceAndNumChannels;
wmmeStreamInfo.deviceCount = 4;
outputParameters.hostApiSpecificStreamInfo = &wmmeStreamInfo;
The device id = 3 was obtained through
Pa_GetHostApiInfo( Pa_HostApiTypeIdToHostApiIndex( paMME ) )->defaultOutputDevice
I hope I have made the query clear enough. Will be happy to provide more details if required.
Thanks.
I finally figured out the mistake :-)
The configuration for multiple devices must be made as an array. For instance, in the above case,
wmmeDeviceAndNumChannels must be an array of 4, with the device field of each element containing the device index of the corresponding stereo device; each element's channelCount remains 2. outputParameters.channelCount still has to be the aggregate number of channels, i.e. 8. With this I was able to run the application with a single stream and, of course, without any errors related to an invalid device or an invalid number of channels. :-) A sketch of the corrected configuration is below.
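This is a minimal sketch rather than my exact code; the device indices here are placeholders for the ones you find by enumerating the MME devices (see the enumeration sketch in the other answer):

PaWinMmeDeviceAndChannelCount wmmeDevices[4];

/* Placeholder PortAudio device indices for the four stereo devices;
   find the real ones by enumerating the devices. */
PaDeviceIndex stereoDevices[4] = { 3, 4, 5, 6 };

for (int i = 0; i < 4; ++i)
{
    wmmeDevices[i].device = stereoDevices[i];
    wmmeDevices[i].channelCount = 2;  /* each physical device is stereo */
}

wmmeStreamInfo.devices = wmmeDevices; /* an array of 4, not a single struct */
wmmeStreamInfo.deviceCount = 4;
outputParameters.channelCount = 8;    /* aggregate channel count */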
Thanks.
Based on the code pasted above, it looks like you are trying to call open on a single 8-channel device. Instead, you will have to get the PortAudio index of each of the four devices and call open four times, once per stereo device. You will then have four interleaved stereo streams to maintain. My guess is that changing channelCount = 8 to channelCount = 2 will allow the first stream to open. A sketch of how you might enumerate the device indices follows.
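This is a minimal sketch, assuming the standard PortAudio enumeration API; it lists every MME output device so you can pick out the four stereo halves:

#include <portaudio.h>
#include <cstdio>

void ListMmeOutputDevices()
{
    /* Assumes Pa_Initialize() has already succeeded. */
    PaHostApiIndex mme = Pa_HostApiTypeIdToHostApiIndex(paMME);
    int count = Pa_GetDeviceCount();
    for (PaDeviceIndex i = 0; i < count; ++i)
    {
        const PaDeviceInfo* info = Pa_GetDeviceInfo(i);
        if (info->hostApi == mme && info->maxOutputChannels >= 2)
            printf("device %d: %s (%d output channels)\n",
                   (int)i, info->name, info->maxOutputChannels);
    }
}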
I currently have some code, shown below, that uses LINQ to organize some IEnumerables for me. When executing this code on the device in release mode (iOS 5.0.1, MonoTouch 5.0.1, Mono 2.10.6.1), I get the exception
Attempting to JIT compile method 'System.Linq.OrderedEnumerable`1:GetEnumerator()' while running with --aot-only.
The code that generates this error is
// List<IncidentDocument> documents is passed in
List<LibraryTableViewItemGroup> groups = new List<LibraryTableViewItemGroup>();
List<DocumentObjectType> categories = documents.Select(d => d.Type).Distinct().OrderBy(s => s.ToString()).ToList();

foreach (DocumentObjectType cat in categories)
{
    List<IncidentDocument> catDocs = documents.Where(d => d.Type == cat).OrderBy(d => d.Name).ToList();
    List<LibraryTableViewItem> catDocsTableItems = catDocs.ConvertAll(d => new LibraryTableViewItem { Image = GetImageForDocument(d.Type), Title = d.Name, SubTitle = d.Description });
    LibraryTableViewItemGroup catGroup = new LibraryTableViewItemGroup { Name = GetCatName(cat), Footer = null, Items = catDocsTableItems };
    groups.Add(catGroup);
}
This error doesn't happen in the simulator for the Release or Debug configurations, or on the device for the Debug configuration. I've seen a couple of similar threads on SO here and here, but I'm not sure I understand how they apply to me on this particular issue.
It could be a few things.
There are some limitations when using full AOT to build iOS applications, i.e. ensuring that nothing will be JITted at runtime (an Apple restriction). Each case is different even if the message looks identical (many causes lead to this error), but there are generally easy workarounds we can suggest.
It could also be a (known) regression in 5.0.1 (fixed in 5.0.2) that produced a few extra AOT failures that are normally not issues (or were already-fixed issues).
I suggest you update to MonoTouch 5.0.2 to see whether it compiles your application correctly. If not, please file a bug report at http://bugzilla.xamarin.com and include a small, self-contained test case that duplicates the issue (the code above is not complete enough). The fact that it works when debugging is enabled makes it an interesting test case.