For our maps we are using MapKit, and we overlay a layer of MKPolygons on top of the map. This has been working since iOS 15, but since iOS 16.1 we get the following error and the app freezes (it does not crash).
[VKDefault] Exceeded Metal Buffer threshold of 50000 with a count of 50892 resources, pruning resources now (Time since last prune:6.497636): Assertion with expression - false : Failed in file - /Library/Caches/com.apple.xbs/Sources/VectorKit/src/MDMapEngine.mm line - 1363
Metal API Validation Enabled
[PipelineLibrary] Mapping the pipeline data cache failed, errno 22
Another interesting log line is the following:
[IconManager] No config pack found for key SPR London Landmarks
Any idea how to manually clear the Metal cache?
There seems to be a problem related to the number of MKOverlayRenderers making draw calls; having more than a few seems to trigger this issue. Using MKMultiPolyline/MKMultiPolygon seems to be a workaround, but that is not an option if you need to support iOS 12 or want to style the lines and polygons independently.
Related
I have a DirectCompute application that performs computations on images (computing the average pixel value, applying a filter, and much more). For some computations, I simply treat the image as an array of integers and dispatch a compute shader like this:
FImmediateContext.Dispatch(PixelCount, 1, 1);
The result is exactly the expected value, so the computation is correct. Nevertheless, at run time, I see the following message in the debug log:
D3D11 ERROR: ID3D11DeviceContext::Dispatch: There can be at most 65535 Thread Groups in each dimension of a Dispatch call. One of the following is too high: ThreadGroupCountX (3762013), ThreadGroupCountY (1), ThreadGroupCountZ (1) [ EXECUTION ERROR #2097390: DEVICE_DISPATCH_THREADGROUPCOUNT_OVERFLOW]
This error is shown only in the debug log; everything else is correct, including the computation result. This makes me think that the GPU somehow manages the very large thread-group count, probably by breaking it into smaller groups that are executed sequentially.
My question is: should I care about this error, or is it OK to ignore it and let the GPU do the work for me?
Thanks.
If you only care about it working on your particular piece of hardware and driver, then it's fine. If you care about it working on all Direct3D Feature Level 11.0 cards, then it's not fine as there's no guarantee it will work on any other driver or device.
See Microsoft Docs for details on the limits for DirectCompute.
If you care about robust behavior, it's important to test DirectCompute applications across a selection of cards & drivers. The same is true of basically any use of DirectX 12. Much of the correctness behavior is left up to the application code.
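If you do need to stay within the guaranteed limits, a common pattern is to fold the oversized one-dimensional group count into two dimensions on the host and bounds-check in the shader. Here is a minimal sketch of the host side, written in C++ (the Delphi call is analogous); the function name is illustrative, but D3D11_CS_DISPATCH_MAX_THREAD_GROUPS_PER_DIMENSION is the real constant from d3d11.h:

// Sketch: fold an oversized 1-D dispatch into a 2-D grid that respects
// the D3D11 limit of 65535 thread groups per dimension.
#include <d3d11.h>

static const UINT kMaxGroups = D3D11_CS_DISPATCH_MAX_THREAD_GROUPS_PER_DIMENSION; // 65535

void DispatchChunked(ID3D11DeviceContext* ctx, UINT totalGroups)
{
    UINT groupsX = totalGroups;
    UINT groupsY = 1;
    if (groupsX > kMaxGroups)
    {
        // Round up so groupsX * groupsY covers totalGroups.
        groupsY = (totalGroups + kMaxGroups - 1) / kMaxGroups;
        groupsX = kMaxGroups;
    }
    ctx->Dispatch(groupsX, groupsY, 1);
}

The shader side then reconstructs the linear group index as SV_GroupID.y * 65535 + SV_GroupID.x and returns early when that index exceeds the real pixel count (passed in, e.g., a constant buffer), since the folded grid generally overshoots.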
I have a bag dataset and want to play back the messages containing Velodyne VLP-16 points, but I get an incomplete result.
I've tried:
- increasing fps in rviz
- using timestamps from another simulation
I expect to get an uncut result: a full ring of the lidar beam.
This is the result I got in rviz
As you can see, the points are available, but they are not visible long enough because the points are not shown simultaneously. The reason is that a single message does not contain all the points of a complete (360°) revolution; a revolution typically gets split across several messages, of which only the latest is shown by default. In the rviz point cloud documentation you will find a parameter called Decay Time:
The amount of time to keep a cloud/scan around before removing it. A value of 0 means to only display the most recent data.
Try increasing the value of this parameter in rviz, then you should be able to see more points simultaneously.
I am executing a big file on Metal, and it is showing the following error:
-[MTLDebugRenderCommandEncoder initWithRenderCommandEncoder:parent:descriptor:]_block_invoke:807: failed assertion `Exceeded HW limit of scissor rectangles for render encoder working in Memoryless mode.'
Message from debugger: failed to send the k packet
Is there any way to solve this?
According to the Metal Feature Set tables (PDF), the limit is 16 for macOS 10.13 (and later, presumably) and 1 everywhere else. Annoyingly, these limits are not queryable programmatically; they're only available in these tables and, as you've found, empirically by exceeding them.
I'm trying to demodulate a signal using GNU Radio Companion. The signal is FSK (Frequency-shift keying), with mark and space frequencies at 1200 and 2200 Hz, respectively.
The data in the signal is text data generated by a device called GeoStamp Audio. The device generates audio from GPS data fed into it in real time, and it can also decode that audio. I have the decoded text version of the audio for reference.
I have set up a flow graph in GNU Radio (see below), and it runs without error, but with all the variations I've tried, I still can't get the data.
The output of the flow graph should be binary (1s and 0s) that I can later convert to normal text, right?
Is it correct to feed in a wav audio file the way I am?
How can I recover the data from the demodulated signal -- am I missing something in my flow graph?
This is an FFT plot of the wav audio file before demodulation:
This is the result of the scope sink after demodulation (maybe looks promising?):
UPDATE (August 2, 2016): I'm still working on this problem (occasionally), and unfortunately still cannot retrieve the data. The result is a promising-looking string of 1's and 0's, but nothing intelligible.
If anyone has suggestions for figuring out the settings on the Polyphase Clock Sync or Clock Recovery MM blocks, or the gain on the Quad Demod block, I would greatly appreciate it.
Here is one version of an updated flow graph based on Marcus's answer (also trying other versions with polyphase clock recovery):
However, I'm still unable to recover data that makes any sense. The result is a long string of 1's and 0's, but not the right ones. I've tried tweaking nearly all the settings in all the blocks. I thought maybe the clock recovery was off, but I've tried a wide range of values with no improvement.
So, at first sight, my approach here would look something like:
What happens here is that we take the input, shift it in the frequency domain so that mark and space land at ±500 Hz, and then use quadrature demod.
"Logically", we can then just make a "sign decision". I'll share the configuration of the Xlating FIR here:
Notice that the signal is first shifted so that the center frequency (middle between 2200 and 1200 Hz) ends up at 0Hz, and then filtered by a low pass (gain = 1.0, Stopband starts at 1 kHz, Passband ends at 1 kHz - 400 Hz = 600 Hz). At this point, the actual bandwidth that's still present in the signal is much lower than the sample rate, so you might also just downsample without losses (set decimation to something higher, e.g. 16), but for the sake of analysis, we won't do that.
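To make those two steps concrete, here is a minimal, self-contained C++ sketch of what the frequency translation and the quadrature demod compute per sample. It assumes complex input samples and omits the low-pass filtering that the Xlating FIR also performs; the 1700 Hz center comes from the mark/space midpoint above, and the names and structure are purely illustrative:

// Sketch: frequency-shift the band so mark/space sit at -500/+500 Hz,
// then FM-demodulate by taking the phase difference between samples.
#include <complex>
#include <cstddef>
#include <vector>

std::vector<float> fsk_demod(const std::vector<std::complex<float>>& in,
                             float sampleRate)
{
    const float pi     = 3.14159265358979f;
    const float center = 1700.0f; // midpoint of 1200 Hz mark and 2200 Hz space
    const float step   = -2.0f * pi * center / sampleRate;

    std::vector<float> out;
    out.reserve(in.size());

    std::complex<float> prev(1.0f, 0.0f);
    float phase = 0.0f;
    for (std::size_t n = 0; n < in.size(); ++n)
    {
        // Mix with a complex exponential: shifts 1700 Hz down to 0 Hz.
        const std::complex<float> shifted = in[n] * std::polar(1.0f, phase);
        phase += step;
        if (phase < -pi) phase += 2.0f * pi; // keep the oscillator phase bounded

        // Quadrature demod: instantaneous frequency = phase advance per sample.
        out.push_back(std::arg(std::conj(prev) * shifted));
        prev = shifted;
    }
    return out; // negative values ~ mark (1200 Hz), positive ~ space (2200 Hz)
}

The sign of each output sample is then exactly the "sign decision" mentioned above.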
The time sink should now show better values. Have a look at the edges; they are probably not extremely steep. For clock sync I'd hence recommend just trying the polyphase clock recovery instead of Mueller & Müller; choosing almost any "somewhat round" pulse shape could work.
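One sanity check for the clock recovery settings: the samples-per-symbol value must match your file. Assuming, purely for illustration, a 44.1 kHz wav and the 1200 Bd symbol rate that is typical for 1200/2200 Hz AFSK, samples per symbol = 44100 / 1200 = 36.75. A fractional value like that is exactly the case where the polyphase clock sync tends to be more forgiving than Mueller & Müller.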
For fun and giggles, I clicked together a quick demo demod (GRC here):
which shows:
I have a DirectShow transform filter written in Delphi 6 using the DSPACK component library. It is a simple audio mixer that creates a new input pin whenever a new connection is attempted. I say simple because once its media format is set, all connections to its input pins or its singular output pin are forced to conform to that media format. I build the filter chain manually, making all pin connections explicitly myself. I do not use any of the "intelligent rendering" calls, unless there is some way to trigger that unwanted behavior (in my case) accidentally.
NOTE: The Capture Filter is a standard DirectShow filter external to my application. My push source audio filter and simple audio mixer filters are being used as private, unregistered filters and are internal to my application.
I am having a weird problem that only occurs when I try to make multiple input connections to my mixer, which does indeed accept them. Currently, I am attempting to connect both a Capture Filter and my custom Push Source audio filter to my mixer filter. Whenever I try to do that the second upstream filter connection fails. Regardless of whether I connect the Capture Filter first or Push Source audio filter first, the second upstream filter connection always fails.
The first test I ran was to try connecting just the Capture Filter to the mixer. That worked fine.
The second test I ran was to try connecting just the Push Source audio filter to the mixer. That worked fine.
But as soon as I try to do both, I get a "no combination of intermediate filters could be found" error. I did several hours of deep digging into the media negotiation calls hitting my filter from the graph builder, and then I found the problem. For some reason, the filter graph is dragging the ancient "Indeo (R) Audio Software" codec into the chain.
I discovered this because, despite the fact that the codec had a media format that matched my filter in almost every regard (major type, subtype, format type, wave format parameters), it had an extra 2 bytes at the end of its pbFormat data member, and that was enough to fail the equality test, since that test compares the source and target pbFormat areas using the cbFormat value of each media type. The Indeo codec has a cbFormat value of 20, while my filter has a cbFormat value of 18, which is the size of a _tWAVEFORMATEX structure. In a way it's a good thing the Indeo pbFormat has that odd size, because the first 18 bytes of its 20-byte area were exactly equal to the pbFormat area of my mixer filter's supported media type. Without that anomaly I never would have known that ancient codec was being dragged in. I'm surprised it's being dragged in at all, since it has known exploits and vulnerabilities. What is most confusing is that this is happening on my mixer filter's output pin, not one of the input pins, and I had not made a single downstream connection yet when building up my pin connections.
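For reference, the strict equality test that the extra bytes break looks roughly like this; the sketch is in C++ rather than Delphi, and the helper name is mine, but the AM_MEDIA_TYPE fields are the real structure members:

// Sketch: strict AM_MEDIA_TYPE comparison; cbFormat of 20 (Indeo)
// vs. 18 (WAVEFORMATEX) fails before the byte-wise compare even runs.
#include <dshow.h>
#include <cstring>

bool MediaTypesMatch(const AM_MEDIA_TYPE& a, const AM_MEDIA_TYPE& b)
{
    return IsEqualGUID(a.majortype,  b.majortype)  &&
           IsEqualGUID(a.subtype,    b.subtype)    &&
           IsEqualGUID(a.formattype, b.formattype) &&
           a.cbFormat == b.cbFormat &&
           (a.cbFormat == 0 ||
            std::memcmp(a.pbFormat, b.pbFormat, a.cbFormat) == 0);
}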
Can anyone tell me why DirectShow is trying to drag in that codec, despite the fact that the media formats of both incoming filters, the Capture Filter and the Push Source filter, are identical and don't need any intermediate filters at all, since they match my mixer filter's input pins' supported format exactly? How can I fix this problem?
Also, I noticed that even in the single-filter attachment tests above that succeeded, my mixer's output pin was still being queried for media formats. Why is that, when, as I said, at this point in building up my pin connections I had not connected anything to the output pin of my mixer filter?
--------------------------- UPDATE: 1 ----------------------------
I have learned that you can avoid the "intelligent connection" behavior entirely by using IFilterGraph.ConnectDirect() instead of IGraphBuilder.Connect(). I switched over to ConnectDirect(), and it turns out that the input pin on my mixer filter is coming back as "already connected". That may be what is causing the graph builder to drag in the Indeo codec filter. Now that I have this new diagnostic information, I will correct the problem and update this post with my results.
--------------------------- RESOLUTION ----------------------------
The root problem in all of this was my reuse of the input pin I obtained from the first destination/downstream filter I connected to my simple audio mixer filter, at the top of my application code. In other words, my filter was working correctly, but I was not obtaining a fresh input pin for each upstream filter I tried to connect to it. Once I started doing that, the connection process worked fine. I don't know why the code behind the IGraphBuilder.Connect() interface tried to bring in the Indeo codec filter, perhaps something to do with trying to connect to an already connected input pin, but it did. For my needs, I prefer the tight control that IFilterGraph.ConnectDirect() provides, since it eliminates any interference from the intelligent connection code in IGraphBuilder, though I can see how that code could become useful once video filters get involved.
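For reference, the pattern that ended up working looks roughly like this; the sketch is in C++ rather than Delphi/DSPACK, and FindUnconnectedInputPin is a hypothetical helper of mine, but the DirectShow calls themselves (EnumPins, ConnectedTo, ConnectDirect) are the real interfaces:

// Sketch: connect each upstream filter to a fresh, unconnected input
// pin on the mixer, bypassing intelligent connect entirely.
#include <dshow.h>

HRESULT FindUnconnectedInputPin(IBaseFilter* filter, IPin** result)
{
    *result = nullptr;
    IEnumPins* pins = nullptr;
    HRESULT hr = filter->EnumPins(&pins);
    if (FAILED(hr)) return hr;

    IPin* pin = nullptr;
    while (pins->Next(1, &pin, nullptr) == S_OK)
    {
        PIN_DIRECTION dir;
        pin->QueryDirection(&dir);

        IPin* peer = nullptr;
        // ConnectedTo() fails with VFW_E_NOT_CONNECTED on a free pin.
        if (dir == PINDIR_INPUT && FAILED(pin->ConnectedTo(&peer)))
        {
            *result = pin; // caller releases
            pins->Release();
            return S_OK;
        }
        if (peer) peer->Release();
        pin->Release();
    }
    pins->Release();
    return VFW_E_NOT_FOUND;
}

// Usage: request one fresh pin per upstream connection.
HRESULT ConnectUpstream(IFilterGraph* graph, IPin* sourceOut, IBaseFilter* mixer)
{
    IPin* mixerIn = nullptr;
    HRESULT hr = FindUnconnectedInputPin(mixer, &mixerIn);
    if (FAILED(hr)) return hr;

    hr = graph->ConnectDirect(sourceOut, mixerIn, nullptr); // no intelligent connect
    mixerIn->Release();
    return hr;
}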