I am attempting to place a number of overlays (textures) on top of an existing texture. For the most part, this works fine.
However, for the life of me, I can't figure out why the output of this is sporadically "flickering" in the drawRect method of my MTKView. Everything seems fine; I do further processing on theTexture (in a kernel shader) after the loop that places my overlays. For some reason, I feel like this encoding is ending early and not enough work is getting done on it.
To clarify, everything starts out fine but about 5 seconds in, the flickering starts and gets progressively worse. For debugging purposes (right now, anyways) that loop runs only once -- there is only one overlay element. The input texture (theTexture) is bona-fide every time before I start (created with a descriptor where storageMode is MTLStorageModeManaged and usage is MTLTextureUsageUnknown).
I've also tried stuffing the encoder instantiation/ending inside the loop; no difference.
Can someone help me see what I'm doing wrong?
id<MTLTexture> theTexture; // valid input texture as "background"
MTLRenderPassDescriptor *myRenderPassDesc = [MTLRenderPassDescriptor renderPassDescriptor];
myRenderPassDesc.colorAttachments[0].texture = theTexture;
myRenderPassDesc.colorAttachments[0].storeAction = MTLStoreActionStore;
myRenderPassDesc.colorAttachments[0].loadAction = MTLLoadActionLoad;
id<MTLRenderCommandEncoder> myEncoder = [commandBuffer renderCommandEncoderWithDescriptor:myRenderPassDesc];
MTLViewport viewPort = {0.0, 0.0, 1920.0, 1080.0, -1.0, 1.0};
vector_uint2 imgSize = vector2((u_int32_t)1920,(u_int32_t)1080);
[myEncoder setViewport:viewPort];
[myEncoder setRenderPipelineState:metalVertexPipelineState];
for (OverlayWrapper *ow in overlays) {
    id<MTLTexture> overlayTexture = ow.overlayTexture;
    VertexRenderSet *v = [ow getOverlayVertexInfoPtr];
    NSUInteger vSize = v->metalVertexCount * sizeof(AAPLVertex);
    id<MTLBuffer> mBuff = [self.device newBufferWithBytes:v->metalVertices
                                                   length:vSize
                                                  options:MTLResourceStorageModeShared];
    [myEncoder setVertexBuffer:mBuff offset:0 atIndex:0];
    [myEncoder setVertexBytes:&imgSize length:sizeof(imgSize) atIndex:1];
    [myEncoder setFragmentTexture:overlayTexture atIndex:0];
    [myEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:v->metalVertexCount];
}
[myEncoder endEncoding];
// do more work (kernel shader) with "theTexture"...
UPDATE #1:
I've attached an image of a "good" frame, with the vertex area (lower right) being shown. My encoder is responsible for placing the green stand-in "image" on top of the video frame theTexture at 30fps, which it does do. Just to clarify, theTexture is created for each frame (from a CoreVideo pixel buffer). After this encoder, I only read from theTexture in a kernel shader to adjust brightness -- all that is working just fine.
My problems must exist elsewhere, as the video frames stop flowing (though the audio keeps going) and I end up alternating between 2 or 3 previous frames once this encoder is inserted (hence, the flicker). I believe now that my video pixel buffer vendor is being inadvertently supplanted by this "overlay" vendor.
If I comment out this entire vertex renderer, my video frames flow through just fine; it's NOT a problem with my video frame vendor.
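(Aside: the question doesn't show how theTexture is built from the pixel buffer each frame. A common pattern, sketched here in Swift with an assumed textureCache property, goes through a CVMetalTextureCache:)
import CoreVideo
import Metal

// Created once, alongside the device (hypothetical property)
var textureCache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)

// Per frame: wrap the CVPixelBuffer in an MTLTexture without copying
func makeTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache!, pixelBuffer, nil,
                                              .bgra8Unorm,
                                              CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              0, &cvTexture)
    // The CVMetalTexture must be kept alive until the GPU finishes reading the returned MTLTexture.
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}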
UPDATE #2:
Here is the declaration of my rendering pipeline:
MTLRenderPipelineDescriptor *p = [[MTLRenderPipelineDescriptor alloc] init];
if (!p)
    return nil;
p.label = @"Vertex Mapping Pipeline";
p.vertexFunction = [metalLibrary newFunctionWithName:@"vertexShader"];
p.fragmentFunction = [metalLibrary newFunctionWithName:@"samplingShader"];
p.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
NSError *error;
metalVertexPipelineState = [self.device newRenderPipelineStateWithDescriptor:p
                                                                        error:&error];
if (error || !metalVertexPipelineState)
    return nil;
Here is the texture descriptor used for creation of theTexture:
metalTextureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
width:width
height:height
mipmapped:NO];
metalTextureDescriptor.storageMode = MTLStorageModePrivate;
metalTextureDescriptor.usage = MTLTextureUsageUnknown;
I haven't included the AAPLVertex and the vertex/fragment functions because of this: If I just comment out the OverlayWrapper loop in my rendering code (i.e., don't even set vertex buffers or draw primitives), the video frames still flicker. The video is still playing, but only 2-3 frames or so are playing in a continuous loop, from the time that this encoder "ran".
I've also added this code after the [... endEncoding] and changed the texture storage mode to MTLStorageModeManaged -- still, no dice:
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder synchronizeResource:crossfadeOutput];
[blitEncoder endEncoding];
To clarify a few things: the subsequent compute shader uses theTexture for input only. These are video frames; thus, theTexture is re-created each time. Before it goes through this render stage, it has a bona-fide "background".
UPDATE #3:
I got this working, if by unconventional means.
I used this vertex shader to render my overlay onto a transparent background of a newly-created blank texture, specifically with my loadAction being MTLLoadActionClear with a clearColor of (0,0,0,0).
I then mixed this resulting texture with my theTexture with a kernel shader. I should not have to do this, but it works!
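(A rough Swift sketch of that workaround; overlayTarget stands in for the newly-created blank texture, and the names are placeholders rather than the poster's actual code:)
let overlayPass = MTLRenderPassDescriptor()
overlayPass.colorAttachments[0].texture = overlayTarget            // fresh texture, same size as theTexture
overlayPass.colorAttachments[0].loadAction = .clear                // start from transparent black
overlayPass.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
overlayPass.colorAttachments[0].storeAction = .store
let overlayEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: overlayPass)!
// ... encode the overlay draws here, as in the loop above ...
overlayEncoder.endEncoding()
// A kernel shader then composites overlayTarget over theTexture, e.g.
// out = mix(background.rgb, overlay.rgb, overlay.a) per pixel.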
I had the same problem and wanted to explore a simpler solution before attempting @zzyzy's. This solution is also somewhat unsatisfying but at least seems to work.
The key (but inadequate in and of itself) is to reduce the buffering on the Metal layer:
metalLayer_.maximumDrawableCount = 2
Second, once the buffering was reduced, I found I had to go through a render/present/commit cycle to draw a trivial, invisible item with .clear set on the render pass descriptor — pretty straightforward:
renderPassDescriptor.colorAttachments[0].loadAction = .clear
(That there were a few invisible triangles drawn is probably irrelevant; it is probably the MTLLoadActionClear attribute that differentiates the pass. I used the same clear color as @zzyzy above and I think this echoes the above solution.)
Third, I found I had to run the code through that render/present/commit cycle a second time — i.e., twice in a row. Of the three, this seems the most arbitrary and I don't pretend to understand it, but the three together worked for me.
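(A condensed Swift sketch of those three steps, assuming a commandQueue and the metalLayer_ from above; the trivial invisible geometry is omitted since the clear load action appears to be what matters:)
metalLayer_.maximumDrawableCount = 2

func flushWithClearPass() {
    guard let drawable = metalLayer_.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }
    let renderPassDescriptor = MTLRenderPassDescriptor()
    renderPassDescriptor.colorAttachments[0].texture = drawable.texture
    renderPassDescriptor.colorAttachments[0].loadAction = .clear
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
    renderPassDescriptor.colorAttachments[0].storeAction = .store
    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
    encoder.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}

flushWithClearPass()
flushWithClearPass()   // running the cycle a second time was needed, per the note above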
I'm really new to graphics programming in general, so please bear with me. I am trying to add shadow mapping from a distant light (orthogonal projection) into my scene, but when I follow the (very incomplete) steps from Frank Luna's DX12 book I find that my SRV for the shadow map is just filled with depths of 1.
If it helps, here is my SRV definition:
D3D12_TEX2D_SRV texDesc = {
    0,      // MostDetailedMip
    -1,     // MipLevels (all)
    0,      // PlaneSlice
    0.0f    // ResourceMinLODClamp
};
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {
    DXGI_FORMAT_R32_TYPELESS,
    D3D12_SRV_DIMENSION_TEXTURE2D,
    D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING,
};
srvDesc.Texture2D = texDesc;
m_device->CreateShaderResourceView(m_lightDepthTexture.Get(),&srvDesc, m_cbvHeap->GetCPUDescriptorHandleForHeapStart());
and here are my DSV heap and descriptor definitions:
D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {};
dsvHeapDesc.NumDescriptors = 2;
dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV;
dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
ThrowIfFailed(m_device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&m_dsvHeap)));
D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {};
depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE;
CD3DX12_HEAP_PROPERTIES heapProps = CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT);
CD3DX12_RESOURCE_DESC resourceDesc = CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_R32_TYPELESS, m_width, m_height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);
D3D12_CLEAR_VALUE depthOptimizedClearValue = {};
depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
depthOptimizedClearValue.DepthStencil.Depth = 1.0f;
depthOptimizedClearValue.DepthStencil.Stencil = 0;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&resourceDesc,
D3D12_RESOURCE_STATE_DEPTH_WRITE,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_dsvBuffer)
));
D3D12_RESOURCE_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D12_RESOURCE_DESC));
texDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Alignment = 0;
texDesc.Width = m_width;
texDesc.Height = m_height;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels = 1;
texDesc.Format = DXGI_FORMAT_R32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN;
texDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL;
ThrowIfFailed(m_device->CreateCommittedResource(
&heapProps,
D3D12_HEAP_FLAG_NONE,
&texDesc,
D3D12_RESOURCE_STATE_GENERIC_READ,
&depthOptimizedClearValue,
IID_PPV_ARGS(&m_lightDepthTexture)
));
CD3DX12_CPU_DESCRIPTOR_HANDLE dsv(m_dsvHeap->GetCPUDescriptorHandleForHeapStart());
m_device->CreateDepthStencilView(m_dsvBuffer.Get(), &depthStencilDesc, dsv);
dsv.Offset(1, m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_DSV));
m_device->CreateDepthStencilView(m_lightDepthTexture.Get(), &depthStencilDesc, dsv);
I then created a basic vertex shader that just transforms the vertices with my map (from Frank Luna's book, pages 648 and 650). Since I bound m_lightDepthTexture via D3D12GraphicsCommandList::OMSetRenderTargets, I assumed that the depth values would be written onto m_lightDepthTexture. But simply sampling this texture in my main pass proves that the values are actually 1.0f. So nothing actually happened in my shadow pass!
I really have no idea what to ask, but if anyone has a sample DX12 shadow map I could see (Google comes up with DX11 or less, or much too complicated samples), or if there's a good source to learn about this, please let me know!
EDIT: I should say that I changed the format from DXGI_FORMAT_D24_UNORM_S8_UINT, as I think the extra 8 bits for stencil are irrelevant to my case. I changed back to the book's format and nothing changed, so I think this format should be fine.
If you remove the unnecessary return ret; from your shadow vertex shader, the problem then seems to be in the winding order of your sphere's vertices. You can easily verify this by setting the cull mode to D3D12_CULL_MODE_NONE for your shadow PSO.
You can easily correct your sphere winding order by switching order of any two vertices of every triangle, so wherever you have p1,p2,p3 you just write it for example as p1,p3,p2.
You will also need to check your matrix multiplication order in your vertex shaders. I didn't check it in detail, but it's inconsistent, and I believe it's the cause of the sphere appearing black once you fix the above issue. You also seem to be missing the division by w for your light coordinates in the lighting vertex shader.
I was trying to draw a half circle with renderEncoder's drawIndexedPrimitives
[renderEncoder setVertexBuffer:self.vertexBuffer offset:0 atIndex:0];
[renderEncoder drawIndexedPrimitives:MTLPrimitiveTypeTriangleStrip
indexCount:self.indexCount
indexType:MTLIndexTypeUInt16
indexBuffer:self.indicesBuffer
indexBufferOffset:0];
where the vertexBuffer and indicesBuffer for the circle were created by calculation
int segments = 10;
float vertices02[(segments + 1) * (3 + 4)];
vertices02[0] = centerX;
vertices02[1] = centerY;
vertices02[2] = 0;
// 3, 4, 5, 6 are RGBA
vertices02[3] = 1.0;
vertices02[4] = 0;
vertices02[5] = 0.0;
vertices02[6] = 1.0;

uint16_t indices[(segments - 1) * 3];
for (int i = 1; i <= segments; i++) {
    float degree = (i - 1) * (endDegree - startDegree) / (segments - 1) + startDegree;
    vertices02[i*7]     = (centerX + cos([self degreesToRadians:degree]) * radius);
    vertices02[i*7 + 1] = (centerY + sin([self degreesToRadians:degree]) * radius);
    vertices02[i*7 + 2] = 0;
    vertices02[i*7 + 3] = 1.0;
    vertices02[i*7 + 4] = 0;
    vertices02[i*7 + 5] = 0.0;
    vertices02[i*7 + 6] = 1.0;
    if (i < segments) {
        indices[(i-1)*3 + 0] = 0;
        indices[(i-1)*3 + 1] = i;
        indices[(i-1)*3 + 2] = i + 1;
    }
}
So I am combining 9 triangles to form a 180-degree half circle.
Then create vertexBuffer and indicesBuffer
self.vertexBuffer = [device newBufferWithBytes:vertexArrayPtr
length:vertexDataSize
options:MTLResourceOptionCPUCacheModeDefault];
self.indicesBuffer = [device newBufferWithBytes:indexArrayPtr
length:indicesDataSize
options:MTLResourceOptionCPUCacheModeDefault];
The result is like this:
I believe this is an anti-aliasing problem with Metal on iOS. I used to create the half circle in OpenGL using the same technique, and the edges were much smoother.
Any suggestions to tackle the problem?
As suggested by warrenm, I set the CAMetalLayer's drawableSize equal to screenSize x scale. There are improvements:
Another suggestion by warrenm, using MTKView and setting sampleCount = 4, solved the problem:
There are a couple of things to consider here. First, you need to ensure that (when possible) the size of the grid you're rasterizing to matches the resolution of the display it will be viewed on. Second, you might need to use subpixel techniques to eke out additional smoothness, since raster techniques tend to undersample continuous functions.
In Metal, the way we match the rendered image size to the display is by ensuring that the drawable size of the Metal layer matches the pixel dimensions it will occupy on the screen. When using CAMetalLayer directly, the default behavior is for the drawable size of the layer to be the size of the layer's bounds multiplied by the layer's contentsScale property. Setting the latter to the scale of the UIScreen onto which the layer is composited will match the layer's dimensions to the screen's pixels (ignoring other transformations that might be applied to the layer or its view hierarchy).
When using MTKView, the autoResizeDrawable property determines whether the view automatically manages its layer's drawable size. This is the default behavior, but if you set this property to NO, you can manually set the drawable size to something else (e.g., use adaptive resolution rendering when fragment-bound).
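(A brief sketch of both paths, with metalLayer and mtkView as assumed, already-configured objects:)
// CAMetalLayer: match the drawable to the screen's pixel grid
metalLayer.contentsScale = UIScreen.main.scale
metalLayer.drawableSize = CGSize(width: metalLayer.bounds.width * metalLayer.contentsScale,
                                 height: metalLayer.bounds.height * metalLayer.contentsScale)

// MTKView: opt out of automatic sizing and choose a drawable size yourself
mtkView.autoResizeDrawable = false
mtkView.drawableSize = CGSize(width: 1280, height: 720)   // e.g. a reduced resolution when fragment-bound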
In order to sample more finely, we have our choice among any number of antialiasing techniques, but perhaps the easiest of these is multisampled antialiasing (MSAA), a hardware feature that—as the name suggests—takes multiple samples for each pixel along the edges of primitives, in order to reduce the jagged effects of aliasing.
In Metal, using MSAA requires setting multisampling state (i.e., the sample count) on both the render pipeline state and the textures used for rendering. MSAA is a two-step process, where a render target that can hold the data for multiple fragments per pixel is rendered to, then a resolve step combines these samples into the final color for each pixel. When using CAMetalLayer (or drawing off-screen), you must create a texture of type MTLTextureType2DMultisample for each active color/depth attachment. These textures are configured as the texture property of their respective color/depth attachments, and the resolveTexture property is set to a texture of type MTLTextureType2D, into which the MSAA targets are resolved.
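(A hedged sketch of that manual setup with a sample count of 4; device, width, height, and drawable are assumed to exist already:)
let msaaDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width, height: height,
                                                              mipmapped: false)
msaaDescriptor.textureType = .type2DMultisample
msaaDescriptor.sampleCount = 4               // must match the render pipeline's sample count
msaaDescriptor.storageMode = .private
msaaDescriptor.usage = .renderTarget
let msaaColorTexture = device.makeTexture(descriptor: msaaDescriptor)!

let passDescriptor = MTLRenderPassDescriptor()
passDescriptor.colorAttachments[0].texture = msaaColorTexture           // multisampled target
passDescriptor.colorAttachments[0].resolveTexture = drawable.texture    // ordinary 2D texture
passDescriptor.colorAttachments[0].loadAction = .clear
passDescriptor.colorAttachments[0].storeAction = .multisampleResolve    // resolve samples into the drawable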
When using MTKView, simply setting the sampleCount on the view to match the sampleCount of the render pipeline descriptor is sufficient to get MetalKit to create and manage the appropriate resources. By default, the render pass descriptors you receive from a view will have an internally-managed MSAA color target set as the primary color attachment, and the current drawable's texture set as the resolve texture of that attachment. In this way, enabling MSAA with MetalKit only requires a couple of lines of code.
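(With MetalKit that reduces to something like the following, where mtkView and pipelineDescriptor are placeholders:)
mtkView.sampleCount = 4                 // MTKView creates and manages the MSAA color target
pipelineDescriptor.sampleCount = 4      // must match the view's sample count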
I have an image in the asset catalog, which I convert to an MTLTexture. I want to pass this texture to the shader functions, accumulate edits onto it, and add a smudge feature to the passed texture.
Currently I am passing the texture in MTLRenderPassDescriptor like below.
let renderPassWC = MTLRenderPassDescriptor()
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .load
renderPassWC.colorAttachments[0].storeAction = .store
When I edit the texture in the shader function, for example moving a pixel of ssTexture to an adjacent pixel, the movement doesn't stop, because the operations I am doing in the shader functions keep operating on the accumulated texture every draw cycle.
So rather than a loadAction of load, I feel clear could be an option, but the texture I pass in becomes cleared when I change the code as below:
renderPassWC.colorAttachments[0].texture = ssTexture
renderPassWC.colorAttachments[0].loadAction = .clear
renderPassWC.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
Is there any possible way to pass the image texture while still using clear?
For whatever reason I am having issues with alpha blending in Metal. I am drawing to an MTKView, and for every pipeline that I create I do the following:
descriptor.colorAttachments[0].blendingEnabled = YES;
descriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperationAdd;
descriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperationAdd;
descriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
descriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorSourceAlpha;
descriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
descriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
However, for whatever reason, that is not causing alpha blending to happen. You can even check in the frame debugger and you will see vertices with an alpha of 0 that are being drawn black rather than transparent.
One thought I had is that some geometry ends up on the exact same z plane, so if alpha blending does not work on the same z plane that might cause an issue. But I don't think that is a thing.
Why is alpha blending not working?
I am hoping to blend as if they were transparent glass. Think like this.
Alpha blending is an order-dependent transparency technique. This means that the (semi-)transparent objects cannot be rendered in any arbitrary order as is the case for (more expensive) order-independent transparency techniques.
Make sure your transparent 2D objects (e.g., circle, rectangle, etc.) have different depth values. (This way you can define the draw ordering yourself. Otherwise the draw ordering depends on the implementation of the sorting algorithm and the initial ordering before sorting.)
Sort these 2D objects based on their depth value from back to front.
Draw the 2D objects from back to front (painter's algorithm) using alpha blending. (Of course, your 2D objects need an alpha value < 1 to actually see some blending.)
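(A minimal Swift sketch of the sort-then-draw idea, assuming each object exposes a depth value and a draw(with:) helper; both names are placeholders:)
// Larger depth = farther from the camera in this sketch
let backToFront = transparentObjects.sorted { $0.depth > $1.depth }
encoder.setRenderPipelineState(alphaBlendPipelineState)   // pipeline built with the blend state below
for object in backToFront {
    object.draw(with: encoder)   // issues the draw call for one 2D object
}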
Your blend state for alpha blending is correct:
// The blend formula is defined as:
// (source.rgb * sourceRGBBlendFactor ) rgbBlendOperation (destination.rgb * destinationRGBBlendFactor )
// (source.a * sourceAlphaBlendFactor) alphaBlendOperation (destination.a * destinationAlphaBlendFactor)
// <=>
// (source.rgba * source.a) + (destination.rgba * (1-source.a))
descriptor.colorAttachments[0].blendingEnabled = YES;
descriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperationAdd;
descriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperationAdd;
descriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
descriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorSourceAlpha;
descriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
descriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
I am trying to blur multiple SKNode objects. I do this by having a parent SKEffectNode with a CIFilter set to @"CIGaussianBlur". Like so:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    blurNode.shouldRasterize = YES;
    [blurNode setShouldEnableEffects:NO];
    [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
                                   keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
This works fine for a bunch of nodes currently onscreen. But when I space these nodes far away from each other (about 3000 pixels), the blurring no longer happens and I get a big black box. This happens regardless of whether the SKNodes I'm blurring are SKShapeNodes or SKSpriteNodes. Here's a sample project with this issue: Sample Project. (By the way, thanks to BobMoff for the initial version found here):
Here's happy blur (when nodes are less than 3000 pixels away from each other):
Sad blur (when nodes are more than 3000 pixels away from each other):
UPDATE
This behavior occurs whenever an SKEffectNode is the parent. It doesn't matter if it's enabling effects, blurring, etc. If the parent node is an SKNode, it's fine. i.e. Even if the parent blur node is created like it is below, you will get the blackness:
- (SKEffectNode *)createBlurNode
{
    SKEffectNode *blurNode = [[SKEffectNode alloc] init];
    // blurNode.shouldRasterize = YES;
    // [blurNode setShouldEnableEffects:NO];
    // [blurNode setFilter:[CIFilter filterWithName:@"CIGaussianBlur"
    //                                keysAndValues:@"inputRadius", @10.0f, nil]];
    return blurNode;
}
I had a similar problem, with a very wide, panning scene that I wanted to blur.
To get the blur effect to work, I removed any nodes that were sticking out too far past the edges of the scene:
// Property declarations, elsewhere in the class:
var blurNode: SKEffectNode
var mainScene: SKScene
var exParents: [SKNode : SKNode] = [:]
/**
* Remove outlying nodes from the scene and activate the SKEffectNode
*/
func blurScene() {
    let FILTER_MARGIN: CGFloat = 100
    let widthMax: CGFloat = mainScene.size.width + FILTER_MARGIN
    let heightMax: CGFloat = mainScene.size.height + FILTER_MARGIN

    // Recursively iterate through all blurNode's children
    blurNode.enumerateChildNodesWithName(".//*", usingBlock: {
        [unowned self]
        node, stop in
        if node.parent != nil && node.scene != nil { // Ignore nodes we already removed
            if let sprite = node as? SKSpriteNode {
                // Calculate sprite node position in scene coordinates
                let sceneOrig = sprite.scene!.convertPoint(sprite.position, fromNode: sprite.parent!)
                // Find left, right, bottom and top edges of sprite
                let l = sceneOrig.x - sprite.size.width*sprite.anchorPoint.x
                let r = l + sprite.size.width
                let b = sceneOrig.y - sprite.size.height*sprite.anchorPoint.y
                let t = b + sprite.size.height
                if l < -FILTER_MARGIN || r > widthMax || b < -FILTER_MARGIN || t > heightMax {
                    self.exParents[sprite] = sprite.parent!
                    sprite.removeFromParent()
                }
            }
        }
    })
    blurNode.shouldEnableEffects = true
}
/**
 * Disable blur and reparent nodes we removed earlier
 */
func removeBlur() {
    self.blurNode.shouldEnableEffects = false
    for (kid, parent) in exParents {
        parent.addChild(kid)
    }
    exParents = [:]
}
NOTES:
This does remove content from your effect node, so extremely wide nodes won't show up in the final result:
You can see the mountain highlighted in red stuck out too far and was removed from the resulting blur.
This code only considers SKSpriteNodes. Empty SKNodes don't seem to break the effect node, but if you're using other visible nodes like SKShapeNodes or SKLabelNodes, you'll have to modify this code to include them.
If you have ignoreSiblingOrder = false, this code might mess up your z-ordering since you can't guarantee what order the nodes are added back to the scene.
Stuff I tried that didn't work
Simply saying node.hidden = true instead of using removeFromParent() doesn't work. That would be WAY too easy ;)
Using an SKCropNode to crop out outlying content didn't work for me. I tried having the SKEffectNode parent the SKCropNode and the other way around, but the black square appeared no matter how small I made the cropped area. This might still be worth looking into if you're desperate for a cleaner solution.
As noted here, SKScenes are secretly SKEffectNodes and you can set their filter just like our blurNode above. SKScenes don't show a black screen when their content is too big. Unfortunately, they seem to just silently disable the filter instead. Again, I might have missed something, so you could explore this option further if you're trying to apply an effect across the entire scene.
Alternate Solutions
You can capture an image of the whole screen and apply a filter to that, as suggested here. I ended up going with an even simpler solution; I took a generic screenshot of the stuff I wanted to blur, then applied a very heavy blur so you can't see the precise details. I used that as the blurred background and you can hardly tell it's not the real thing ;) This also saves a healthy chunk of memory and avoids a small UI hiccup.
Musings
This is a pretty nasty bug, and I hope Apple comes up with a solution soon. You can click this cute picture of a camera to get a GPU trace and some insight on what's happening:
The device seems to be discarding the framebuffer for the effect node because it takes up too much memory. This is affirmed by the fact that when there's more memory pressure on the device, it's easier to get the 'black square' on smaller content in the SKEffectNode.
I used a method that worked for my game but it requires the blurred area to be static without movement.
On iOS 10 using Swift 3 I used SKSpriteNode, SKView, SKEffectNode, and CIFilter. I created a sprite from a texture returned by the SKView method texture(from:), passing the current scene as the parameter because it inherits from SKNode. So essentially I was taking a "screenshot" of the scene and creating a sprite from it. I then put it in an SKEffectNode with a blur filter (setting shouldRasterize to true for better performance, as I only needed to blur once). Finally I added the new sprite to the scene. From there you could add sprites to the scene and place them above the new blurred node.
let blurFilter = CIFilter(name: "CIGaussianBlur")!
let blurAmount = 15.0
blurFilter.setValue(blurAmount, forKey: kCIInputRadiusKey)
let blurEffect = SKEffectNode()
blurEffect.shouldRasterize = true
let screenshotNode = SKSpriteNode(texture: gameScene.view!.texture(from: gameScene))
blurEffect.addChild(screenshotNode)
blurEffect.filter = blurFilter
gameScene.addChild(blurEffect)
Possible workaround for the bug:
Use a camera, zoom WAY out, so you can see most everything of your background, take a screenshot style rendering of this image. Crop it to your needs, and then blur it. Then rasterise this.
Then scale this image back up, and slice it up if needs be, and place accordingly.
SKEffectNode renders into a texture. In most iOS systems the maximum size for a texture is 2048x2048. If an SKEffectNode is trying to render content larger than that, it will just use a 2048x2048 texture and anything outside of it will just not appear in the texture. It won't give you any error or warning about this happening; it simply does it silently.
And no, there is no way to tell SKEffectNode to use a texture of a specific size, and pan&clamp the content into it. It always uses a texture that will cover all the child nodes, and if the texture would be too large, it just silently uses that 2048x2048 texture.
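(One way to guard against this happening silently, sketched under the assumption that 2048x2048 is the effective limit on your target devices:)
let accumulated = blurNode.calculateAccumulatedFrame()
let maxEffectTextureSize: CGFloat = 2048
if accumulated.width <= maxEffectTextureSize && accumulated.height <= maxEffectTextureSize {
    blurNode.shouldEnableEffects = true
} else {
    blurNode.shouldEnableEffects = false   // content too large; fall back to showing it unblurred
}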