I'm using the code below inside an SCNGeometry extension to get the vertex data (and a similar one for .normals). My objective is to create a cloned mesh from the original node so I can colorize each vertex, but when I try to create another node with those vertices and normals, the resulting mesh has some triangles messed up. I have a small mesh to test, and this is what I got. Does anyone have guidance on what I could be doing wrong?
In the example, this function gets me an array of 50 vertices, while the mesh actually has 18 faces, so the result should be an array of 54 vertices. Am I right?
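(For reference on the 50-vs-54 question: an indexed mesh stores each unique vertex once, and the element's index buffer references it per face corner, so the vertex source count can legitimately be smaller than faces × 3. A toy sketch with hypothetical data:)

```swift
// Toy indexed "mesh": 4 unique vertices shared by 2 triangles.
let uniqueVertices: [SIMD3<Float>] = [
    SIMD3(0, 0, 0), SIMD3(1, 0, 0), SIMD3(1, 1, 0), SIMD3(0, 1, 0)
]
// 2 faces * 3 corners = 6 index entries, but only 4 unique vertices.
let indices: [Int32] = [0, 1, 2,  0, 2, 3]

// "De-indexing" expands to one vertex per face corner.
let expanded = indices.map { uniqueVertices[Int($0)] }

print(uniqueVertices.count) // 4 unique vertices (like the 50 above)
print(expanded.count)       // 6 per-corner vertices (like 18 * 3 = 54)
```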
Original mesh
Cloned mesh
Extension:
func extGetVertices() -> [SCNVector3]? {
    let sources = self.sources(for: .vertex)
    guard let source = sources.first else { return nil }
    // Stride and offset come in bytes; convert to component (Float) counts.
    let stride = source.dataStride / source.bytesPerComponent
    let offset = source.dataOffset / source.bytesPerComponent
    let vectorCount = source.vectorCount
    return source.data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [SCNVector3] in
        let buffer = raw.bindMemory(to: Float.self)
        var result = [SCNVector3]()
        for i in 0..<vectorCount {
            let start = i * stride + offset
            result.append(SCNVector3(buffer[start], buffer[start + 1], buffer[start + 2]))
        }
        return result
    }
}
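The stride/offset arithmetic above can be exercised against plain Data, without SceneKit. This sketch assumes an interleaved position-plus-normal layout (six Floats per vertex), which is one common arrangement:

```swift
import Foundation

// Two "vertices", each stored as [px, py, pz, nx, ny, nz] (interleaved).
let floats: [Float] = [1, 2, 3,  0, 0, 1,
                       4, 5, 6,  0, 1, 0]
let data = floats.withUnsafeBytes { Data(bytes: $0.baseAddress!, count: $0.count) }

// Positions: offset 0 floats, stride 6 floats; normals would use offset 3.
let strideInFloats = 6
let offsetInFloats = 0
let vectorCount = 2

let positions: [[Float]] = data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [[Float]] in
    let buffer = raw.bindMemory(to: Float.self)
    return (0..<vectorCount).map { i in
        let start = i * strideInFloats + offsetInFloats
        return [buffer[start], buffer[start + 1], buffer[start + 2]]
    }
}
print(positions) // [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
```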
The obj file:
#
# object obj_12853878
#
v 1097 957 36
v 779.26361083984375 992 0
v 707.26361083984375 590.5828857421875 91
v 1076 334.41595458984375 0
v 748.26361083984375 326.41595458984375 0
v 732.01263427734375 22.33051872253417968 0
v 1110.4652099609375 639.2049560546875 0
v 335.71615600585937504 680.5828857421875 39
v 314.88912963867187504 369.207000732421875 9.023892608628001E-14
v 350.4644775390625 926.65570068359375 -33
v 36 358.41595458984375 0
v 0 0 -33
v 0 680.5828857421875 -27
v 0 957 19
v 335.71615600585937504 22 0
v 1076 0 30
# 16 vertices
vn -0.08388713002204895 0.23459480702877044 0.9684669375419616
vn 0 0 1
vn 0.24344816803932188 -0.28190669417381288 0.92804181575775152
vn -0.1642393171787262 -0.11176854372024536 0.98006796836853024
vn -0.11669350415468216 0.28533965349197388 0.9512959122657776
vn 0.00356122362427413 -0.0920381024479866 0.99574911594390864
vn -0.19254806637763976 0.06056648120284081 0.9794166088104248
vn 0.13100945949554444 -0.1627427488565445 0.97793215513229376
vn -0.0974447876214981 -0.0058451765216887 0.9952237606048584
vn -0.03258795291185379 -0.3300407826900482 0.9434040188789368
vn 0.23050078749656676 -0.09988142549991608 0.967932403087616
vn -0.07897967845201492 0.233848974108696 0.96905976533889776
vn 0.00482096569612622 -0.1245955303311348 0.99219590425491328
vn -0.18483471870422364 0.28617173433303832 0.940181851387024
vn -0.08079835772514343 0.08905769884586334 0.99274384975433344
vn 1.8364935581471252E-16 -2.4888339972415545E-16 1
# 16 vertex normals
g obj_12853878
s 1
f 1//1 2//1 3//1
f 4//2 5//2 6//2
f 7//3 3//3 5//3
f 3//4 8//4 9//4
f 2//5 10//5 8//5
f 9//6 11//6 12//6
f 8//7 13//7 11//7
f 10//8 14//8 13//8
f 15//9 9//9 12//9
f 5//10 3//10 9//10
f 7//11 1//11 3//11
f 2//12 8//12 3//12
f 8//13 11//13 9//13
f 10//14 13//14 8//14
f 4//15 6//15 16//15
f 7//2 5//2 4//2
f 5//2 15//2 6//2
f 5//16 9//16 15//16
# 18 polygons
Clone code:
guard let vertices = node.geometry?.extGetVertices() else { return nil }
guard let normals = node.geometry?.extGetNormals() else { return nil }
let indices = (0..<vertices.count).map { Int32($0) }
let vertexSource = SCNGeometrySource(vertices: vertices)
let indexElement = SCNGeometryElement(indices: indices, primitiveType: SCNGeometryPrimitiveType.triangles)
let normalSource = SCNGeometrySource(normals: normals)
let voxelGeometry = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])
let voxelMaterial = SCNMaterial()
voxelMaterial.diffuse.contents = UIColor.white
voxelGeometry.materials = [voxelMaterial]
let clonedNode = SCNNode(geometry: voxelGeometry)
OK, I found my original problem: I was creating the array of indexes as a series of consecutive numbers, but they can (and should) be obtained from the original mesh instead:
func extGetIndices() -> [Int32]? {
    guard let element = self.elements.first else { return nil }
    // Note: this assumes 4-byte indices; check element.bytesPerIndex first.
    return element.data.withUnsafeBytes { (ptr: UnsafeRawBufferPointer) -> [Int32] in
        Array(ptr.bindMemory(to: Int32.self))
    }
}
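One caveat: SceneKit elements can also carry 2-byte (UInt16) indices depending on how the geometry was built, so element.bytesPerIndex should drive the read. A sketch of that conversion on plain Data (the helper name is mine):

```swift
import Foundation

// Hypothetical index data, as it might appear in an SCNGeometryElement's data.
let wide: [Int32] = [0, 1, 2, 2, 3, 0]
let narrow: [UInt16] = [0, 1, 2, 2, 3, 0]

// Convert raw index bytes to [Int32], honoring the element's index width.
func indices(from data: Data, bytesPerIndex: Int) -> [Int32] {
    data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [Int32] in
        switch bytesPerIndex {
        case 2: return raw.bindMemory(to: UInt16.self).map { Int32($0) }
        case 4: return Array(raw.bindMemory(to: Int32.self))
        default: return []
        }
    }
}

let wideData = wide.withUnsafeBytes { Data(bytes: $0.baseAddress!, count: $0.count) }
let narrowData = narrow.withUnsafeBytes { Data(bytes: $0.baseAddress!, count: $0.count) }
// Both widths decode to the same logical index list.
print(indices(from: wideData, bytesPerIndex: 4) == indices(from: narrowData, bytesPerIndex: 2)) // true
```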
So the updated code for cloning the node is:
guard let vertices = node.geometry?.extGetVertices() else { return nil }
guard let normals = node.geometry?.extGetNormals() else { return nil }
guard let indices = node.geometry?.extGetIndices() else { return nil }
let vertexSource = SCNGeometrySource(vertices: vertices)
let indexElement = SCNGeometryElement(indices: indices, primitiveType: SCNGeometryPrimitiveType.triangles)
let normalSource = SCNGeometrySource(normals: normals)
let voxelGeometry = SCNGeometry(sources: [vertexSource, normalSource], elements: [indexElement])
let voxelMaterial = SCNMaterial()
voxelMaterial.diffuse.contents = UIColor.white
voxelGeometry.materials = [voxelMaterial]
let clonedNode = SCNNode(geometry: voxelGeometry)
Additionally, the node can be colored by adding an SCNGeometrySource with the .color semantic, with one color per vertex:
let colors = getRandomColors(vertices.count)
let colorData = Data(bytes: colors, count: MemoryLayout<SCNVector3>.stride * colors.count)
let colorSource = SCNGeometrySource(data: colorData,
                                    semantic: .color,
                                    vectorCount: colors.count,
                                    usesFloatComponents: true,
                                    componentsPerVector: 3,
                                    bytesPerComponent: MemoryLayout<Float>.size,
                                    dataOffset: 0,
                                    dataStride: MemoryLayout<SCNVector3>.stride)
let voxelGeometry = SCNGeometry(sources: [vertexSource, normalSource, colorSource ], elements: [indexElement])
// Random colors function
func getRandomColors(_ count: Int) -> [SCNVector3] {
    var colors: [SCNVector3] = []
    for _ in 0..<count {
        let red = Float.random(in: 0...1)
        let green = Float.random(in: 0...1)
        let blue = Float.random(in: 0...1)
        colors.append(SCNVector3(red, green, blue))
    }
    return colors
}
Here is a complete example:
func cloneNode(node: SCNNode) -> SCNNode? {
    guard let vertices = node.geometry?.extGetVertices() else { return nil }
    guard let normals = node.geometry?.extGetNormals() else { return nil }
    guard let indices = node.geometry?.extGetIndices() else { return nil }
    // Initialise the color source
    struct RGBColor {
        let r, g, b: Float
    }
    var colors: [RGBColor] = []
    // Reserve contiguous memory space
    colors.reserveCapacity(vertices.count)
    // Fill the colors array, one color per vertex
    for _ in 0..<vertices.count {
        colors.append(RGBColor(r: Float.random(in: 0...1),
                               g: Float.random(in: 0...1),
                               b: Float.random(in: 0...1)))
    }
    // Convert to a color geometry source
    let colorsAsData = Data(bytes: colors, count: MemoryLayout<RGBColor>.stride * colors.count)
    let colorSource = SCNGeometrySource(data: colorsAsData,
                                        semantic: .color,
                                        vectorCount: colors.count,
                                        usesFloatComponents: true,
                                        componentsPerVector: 3,
                                        bytesPerComponent: MemoryLayout<Float>.size,
                                        dataOffset: MemoryLayout.offset(of: \RGBColor.r)!,
                                        dataStride: MemoryLayout<RGBColor>.stride)
    let vertexSource = SCNGeometrySource(vertices: vertices)
    let indexElement = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    let normalSource = SCNGeometrySource(normals: normals)
    // Add the color source to the list
    let voxelGeometry = SCNGeometry(sources: [vertexSource, normalSource, colorSource], elements: [indexElement])
    let voxelMaterial = SCNMaterial()
    voxelMaterial.diffuse.contents = UIColor.white
    voxelGeometry.materials = [voxelMaterial]
    return SCNNode(geometry: voxelGeometry)
}
Related
I'm getting a weird result when I apply a shader to a MTLTexture after applying an MPSImageLanczosScale, even when the transform has scale = 1, translationX = 0, and translationY = 0.
It works fine if I don't apply the MPSImageLanczosScale. Below you can see the result without and with the MPSImageLanczosScale applied.
My render method looks like this:
func filter(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    guard let commandQueue = commandQueue, let commandBuffer = commandQueue.makeCommandBuffer() else {
        print("Failed to create Metal command queue")
        CVMetalTextureCacheFlush(textureCache!, 0)
        return nil
    }
    var newPixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, outputPixelBufferPool!, &newPixelBuffer)
    guard let outputPixelBuffer = newPixelBuffer else {
        print("Allocation failure: Could not get pixel buffer from pool (\(self.description))")
        return nil
    }
    guard let inputTexture = makeTextureFromCVPixelBuffer(pixelBuffer: pixelBuffer, textureFormat: .bgra8Unorm) else {
        return nil
    }
    guard let intermediateTexture = makeTextureFromCVPixelBuffer(pixelBuffer: outputPixelBuffer, textureFormat: .bgra8Unorm) else {
        return nil
    }
    let imageLanczosScale = MPSImageLanczosScale(device: metalDevice)
    var transform = MPSScaleTransform(scaleX: Double(scale), scaleY: Double(scale), translateX: Double(translationX), translateY: Double(translationY))
    withUnsafePointer(to: &transform) { (transformPtr: UnsafePointer<MPSScaleTransform>) -> () in
        imageLanczosScale.scaleTransform = transformPtr
    }
    imageLanczosScale.encode(commandBuffer: commandBuffer, sourceTexture: inputTexture, destinationTexture: intermediateTexture)
    guard let commandEncoder = commandBuffer.makeComputeCommandEncoder(),
          let outputTexture = makeTextureFromCVPixelBuffer(pixelBuffer: outputPixelBuffer, textureFormat: .bgra8Unorm) else { return nil }
    commandEncoder.label = "Shader"
    commandEncoder.setComputePipelineState(shaderPipline)
    commandEncoder.setTexture(intermediateTexture, index: 1)
    commandEncoder.setTexture(outputTexture, index: 0)
    let w = shaderPipline.threadExecutionWidth
    let h = shaderPipline.maxTotalThreadsPerThreadgroup / w
    let threadsPerThreadgroup = MTLSizeMake(w, h, 1)
    let threadgroupsPerGrid = MTLSize(width: (intermediateTexture.width + w - 1) / w,
                                      height: (intermediateTexture.height + h - 1) / h,
                                      depth: 1)
    commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
    commandEncoder.endEncoding()
    commandBuffer.commit()
    return outputPixelBuffer
}
No idea what I'm doing wrong. Any ideas?
I am trying to show a 3D pie chart in my app in Swift 4, but I can't find any solution. Can anyone help me or give me a suggestion, please?
private func createPieChart() -> SCNNode {
    let aNode = SCNNode()
    var total: Float = 0.0
    //let numRows = data.numberOfRows()
    let numColumns = data.numberOfColums()
    let i = 0
    for j in 0 ..< numColumns {
        let val = data.valueForIndexPath(row: i, column: j)
        total = total + val
    }
    var startDeg: Float = 0.0
    var startRad: Float = 0.0
    for j in 0 ..< numColumns {
        let val = data.valueForIndexPath(row: i, column: j)
        let pct = val * 360.0 / total
        startRad = startDeg * Float.pi / 180.0
        let endDeg = startDeg + pct - 1.0
        let endRad: Float = endDeg * Float.pi / 180.0
        let circlePath = UIBezierPath()
        circlePath.move(to: CGPoint.zero)
        circlePath.addArc(withCenter: CGPoint.zero, radius: 20.0, startAngle: CGFloat(startRad), endAngle: CGFloat(endRad), clockwise: true)
        startDeg = endDeg + 1
        let node = SCNNode(geometry: SCNShape(path: circlePath, extrusionDepth: 5.0))
        node.geometry?.firstMaterial?.diffuse.contents = data.colorForIndexPath(row: i, column: j)
        aNode.addChildNode(node)
    }
    return aNode
}
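The per-slice angle arithmetic (value → degrees, with a 1-degree gap between slices) can be checked in isolation, with hypothetical values standing in for data.valueForIndexPath:

```swift
// Slice values, as the data source would return them (hypothetical data).
let values: [Float] = [10, 20, 30]
let total = values.reduce(0, +)

var startDeg: Float = 0
var slices: [(startDeg: Float, endDeg: Float)] = []
for val in values {
    let pct = val * 360.0 / total      // this slice's share of the full circle, in degrees
    let endDeg = startDeg + pct - 1.0  // leave a 1-degree gap before the next slice
    slices.append((startDeg, endDeg))
    startDeg = endDeg + 1              // next slice starts after the gap
}
print(slices.count)           // 3
print(slices.last!.endDeg)    // 359.0: the last slice ends one degree short of 360
```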
Recently I've been playing with SceneKit and found the colorGrading property. The docs say that
The contents value for this material property must be a 3D color lookup table, or a 2D texture image that represents such a table arranged in a horizontal strip.
and that a 3D color lookup table can be read from a Metal texture:
You can provide data in this cubic format as a Metal texture with the type3D texture type.
So how can I set the scnCamera.colorGrading.contents property?
Creating a 3D texture is very similar to creating a 2D texture, provided you have a buffer containing the image data in the appropriate layout. I assume you already have that. Here's how to create the texture itself, copy the data into it, and set it as the color grading texture:
let dim = 16
let values: UnsafeMutablePointer<Float> = ... // alloc and populate 3D array of pixels
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = .type3D
textureDescriptor.pixelFormat = .rgba32Float
textureDescriptor.width = dim
textureDescriptor.height = dim
textureDescriptor.depth = dim
textureDescriptor.usage = .shaderRead
let texture = device.makeTexture(descriptor: textureDescriptor)!
texture.replace(region: MTLRegionMake3D(0, 0, 0, dim, dim, dim),
                mipmapLevel: 0,
                slice: 0,
                withBytes: values,
                bytesPerRow: dim * MemoryLayout<Float>.size * 4,
                bytesPerImage: dim * dim * MemoryLayout<Float>.size * 4)
camera.colorGrading.contents = texture
EDIT
Here's a complete parser that will turn a .cube file into an MTLTexture suitable for use with this property:
import Metal
class AdobeLUTParser {
    static func texture(withContentsOf url: URL, device: MTLDevice) -> MTLTexture? {
        guard let lutString = try? NSString(contentsOf: url, encoding: String.Encoding.utf8.rawValue) else { return nil }
        // Split on any newline style, not just \r\n
        let lines = lutString.components(separatedBy: .newlines) as [NSString]
        var dim = 2
        var values: UnsafeMutablePointer<Float>? = nil
        var index = 0
        for line in lines {
            if line.length == 0 { continue } // skip blanks
            let firstChar = line.character(at: 0)
            if firstChar < 58 /* ':' */ {
                // Data lines start with a digit, '-', or '.'
                if values == nil {
                    print("Error: Got data before size in LUT")
                    break
                }
                let numbers = line.components(separatedBy: " ") as [NSString]
                if numbers.count == 3 {
                    values![index * 4 + 0] = numbers[0].floatValue // r
                    values![index * 4 + 1] = numbers[1].floatValue // g
                    values![index * 4 + 2] = numbers[2].floatValue // b
                    values![index * 4 + 3] = 1                     // a
                    index += 1
                }
            } else {
                if line.hasPrefix("LUT_3D_SIZE") {
                    let sizeString = line.components(separatedBy: " ")[1] as NSString
                    dim = Int(sizeString.intValue)
                    if dim < 2 || dim > 512 {
                        print("Error: insane LUT size: \(dim)")
                    }
                    let rawPointer = malloc(dim * dim * dim * 4 * MemoryLayout<Float>.size)
                    values = rawPointer!.bindMemory(to: Float.self, capacity: dim * dim * dim * 4)
                } else if line.hasPrefix("LUT_1D_SIZE") {
                    print("Error: 1D LUTs not supported")
                    break
                }
            }
        }
        guard let lutValues = values else {
            print("Did not parse LUT successfully")
            return nil
        }
        let textureDescriptor = MTLTextureDescriptor()
        textureDescriptor.textureType = .type3D
        textureDescriptor.pixelFormat = .rgba32Float
        textureDescriptor.width = dim
        textureDescriptor.height = dim
        textureDescriptor.depth = dim
        textureDescriptor.usage = .shaderRead
        guard let texture = device.makeTexture(descriptor: textureDescriptor) else { return nil }
        texture.replace(region: MTLRegionMake3D(0, 0, 0, dim, dim, dim),
                        mipmapLevel: 0,
                        slice: 0,
                        withBytes: lutValues,
                        bytesPerRow: dim * MemoryLayout<Float>.size * 4,
                        bytesPerImage: dim * dim * MemoryLayout<Float>.size * 4)
        free(lutValues) // replace(region:) copies the bytes, so the buffer can be released
        return texture
    }
}
Usage:
let mtlDevice = MTLCreateSystemDefaultDevice()
let lutURL = Bundle.main.url(forResource: "MyGradingTexture", withExtension: "cube")
let lutTexture = AdobeLUTParser.texture(withContentsOf: lutURL!, device: mtlDevice!)
camera.colorGrading.contents = lutTexture
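To sanity-check the memory layout before feeding a real .cube file through the parser, an identity LUT (one that leaves colors unchanged) can be generated in the same red-fastest rgba32Float order. The helper below is a hypothetical sketch, not part of the parser:

```swift
// Build an identity 3D LUT: the texel stored at (r, g, b) is (r, g, b) itself,
// normalized to 0...1, in the red-fastest order .cube files use.
func identityLUT(dim: Int) -> [Float] {
    var values = [Float](repeating: 0, count: dim * dim * dim * 4)
    var i = 0
    for b in 0..<dim {
        for g in 0..<dim {
            for r in 0..<dim {
                values[i + 0] = Float(r) / Float(dim - 1)
                values[i + 1] = Float(g) / Float(dim - 1)
                values[i + 2] = Float(b) / Float(dim - 1)
                values[i + 3] = 1
                i += 4
            }
        }
    }
    return values
}

let lut = identityLUT(dim: 2)
print(lut.count)           // 32 floats: 2*2*2 texels * 4 channels
print(Array(lut[28..<32])) // last texel is white: [1.0, 1.0, 1.0, 1.0]
```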
Following this solution: Custom SceneKit Geometry, converted to Swift 3, the code becomes:
func drawLine() {
    var verts = [SCNVector3(x: 0, y: 0, z: 0), SCNVector3(x: 1, y: 0, z: 0), SCNVector3(x: 0, y: 1, z: 0)]
    let src = SCNGeometrySource(vertices: &verts, count: 3)
    let indexes: [CInt] = [0, 1, 2]
    let dat = NSData(
        bytes: indexes,
        length: MemoryLayout<CInt>.size * indexes.count
    )
    let ele = SCNGeometryElement(
        data: dat as Data,
        primitiveType: .line,
        primitiveCount: 2,
        bytesPerIndex: MemoryLayout<CInt>.size
    )
    let geo = SCNGeometry(sources: [src], elements: [ele])
    let nd = SCNNode(geometry: geo)
    geo.materials.first?.lightingModel = .blinn
    geo.materials.first?.diffuse.contents = UIColor.red
    scene.rootNode.addChildNode(nd)
}
It works on the simulator:
But I get this error on a device:
/BuildRoot/Library/Caches/com.apple.xbs/Sources/Metal/Metal-85.83/ToolsLayers/Debug/MTLDebugRenderCommandEncoder.mm:130: failed assertion `indexBufferOffset(0) + (indexCount(4) * 4) must be <= [indexBuffer length](12).'
What is happening?
The entire code is here: Source code
I'm answering my own question because I found a solution that may help others.
The problem was with the indexes: for the .line primitive type, 3 indexes don't describe 2 line segments. You must provide 2 indexes for each segment you want to draw.
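That relationship can be sketched as a small helper: a .line element consumes two indices per segment, so n vertices joined as a polyline need 2 * (n - 1) index entries:

```swift
// For a polyline through n vertices, each of the n - 1 segments needs its own index pair.
func lineIndices(vertexCount: Int) -> [Int32] {
    guard vertexCount >= 2 else { return [] }
    return (0..<(vertexCount - 1)).flatMap { [Int32($0), Int32($0 + 1)] }
}

print(lineIndices(vertexCount: 3)) // [0, 1, 1, 2] — 2 segments, 4 indices
```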
This is the final function:
func drawLine(_ verts: [SCNVector3], color: UIColor) -> SCNNode? {
    if verts.count < 2 { return nil }
    let src = SCNGeometrySource(vertices: verts, count: verts.count)
    var indexes: [CInt] = []
    // Two indexes per segment: (0,1), (1,2), ...
    for i in 0..<verts.count - 1 {
        indexes.append(contentsOf: [CInt(i), CInt(i + 1)])
    }
    let dat = NSData(
        bytes: indexes,
        length: MemoryLayout<CInt>.size * indexes.count
    )
    let ele = SCNGeometryElement(
        data: dat as Data,
        primitiveType: .line,
        primitiveCount: verts.count - 1,
        bytesPerIndex: MemoryLayout<CInt>.size
    )
    let line = SCNGeometry(sources: [src], elements: [ele])
    let node = SCNNode(geometry: line)
    line.materials.first?.lightingModel = .blinn
    line.materials.first?.diffuse.contents = color
    return node
}
Calling:
scene.rootNode.addChildNode(
drawLine(
[SCNVector3(x: -1,y: 0,z: 0),
SCNVector3(x: 1,y: 0.5,z: 1),
SCNVector3(x: 0,y: 1.5,z: 0)] , color: UIColor.red
)!
)
Will draw:
I have 2 pictures which I want to compare; if a pixel's color is the same in both, I want to save it.
I detect the color of a pixel with this UIImage extension function:
func getPixelColor(pos: CGPoint) -> ??? {
    let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
    return ???
}
For example, I run the scanner on picture 1 and save the result in an array or a dictionary. Then I run the scanner on picture 2, and once I have the information from both pictures, which function do I use to compare them? I want to find the CGPoints at which the pixel colors of the 2 images are identical.
UPDATE:
I updated getPixelColor to return "(pos)(r)(g)(b)(a)", and after that I created this function which keeps only the duplicates (BEFORE USING THIS FUNCTION YOU HAVE TO .sort() THE ARRAY!):
extension Array where Element: Equatable {
    var duplicates: [Element] {
        var arr: [Element] = []
        var start = 0
        var start2 = 1
        for _ in 0...self.count {
            if start2 < self.count {
                if self[start] == self[start2] {
                    if arr.contains(self[start]) == false {
                        arr.append(self[start])
                    }
                }
                start += 1
                start2 += 1
            }
        }
        return arr
    }
}
This returns me something like this:
"(609.0, 47.0)1.01.01.01.0" — I know that the color is black at this point. I subtract 536 from x to fit the iPhone 5 screen, but when I attempt to draw it again it draws something wrong... maybe I'm not doing it properly. Help?
Have the UIImage extension return a UIColor, then use this method to compare each pixel of the two images. If both pixels match, add the color to an array of arrays:
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
func findMatchingPixels(aImage: UIImage, _ bImage: UIImage) -> [[UIColor?]] {
    guard aImage.size == bImage.size else { fatalError("images must be the same size") }
    var matchingColors: [[UIColor?]] = []
    for y in 0..<Int(aImage.size.height) {
        var currentRow = [UIColor?]()
        for x in 0..<Int(aImage.size.width) {
            let aColor = aImage.getPixelColor(CGPoint(x: x, y: y))
            let colorsMatch = bImage.getPixelColor(CGPoint(x: x, y: y)) == aColor
            currentRow.append(colorsMatch ? aColor : nil)
        }
        matchingColors.append(currentRow)
    }
    return matchingColors
}
Used like this:
let matchingPixels = findMatchingPixels(UIImage(named: "imageA.png")!, UIImage(named: "imageB.png")!)
if let colorForOrigin = matchingPixels[0][0] {
    print("the images have the same color, it is: \(colorForOrigin)")
} else {
    print("the images do not have the same color at (0,0)")
}
For simplicity I made findMatchingPixels() require the images to be the same size, but it wouldn't take much to allow different-sized images.
UPDATE
If you want ONLY the pixels that match, I'd return a tuple like this:
func findMatchingPixels(aImage: UIImage, _ bImage: UIImage) -> [(CGPoint, UIColor)] {
    guard aImage.size == bImage.size else { fatalError("images must be the same size") }
    var matchingColors = [(CGPoint, UIColor)]()
    for y in 0..<Int(aImage.size.height) {
        for x in 0..<Int(aImage.size.width) {
            let aColor = aImage.getPixelColor(CGPoint(x: x, y: y))
            guard bImage.getPixelColor(CGPoint(x: x, y: y)) == aColor else { continue }
            matchingColors.append((CGPoint(x: x, y: y), aColor))
        }
    }
    return matchingColors
}
Why not try a different approach?
The Core Image filter CIDifferenceBlendMode will return an all-black image if passed two identical images, and an image with non-black areas where the two images differ. Pass that into CIAreaMaximum, which returns a 1x1 image containing the maximum pixel value: if the maximum is 0, you know you have two identical images; if it is greater than zero, the two images are different.
Given two CIImage instances, imageA and imageB, here's the code:
let ciContext = CIContext()

let difference = imageA
    .imageByApplyingFilter("CIDifferenceBlendMode",
        withInputParameters: [kCIInputBackgroundImageKey: imageB])
    .imageByApplyingFilter("CIAreaMaximum",
        withInputParameters: [kCIInputExtentKey: CIVector(CGRect: imageA.extent)])

let totalBytes = 4
let bitmap = calloc(totalBytes, sizeof(UInt8))

ciContext.render(difference,
                 toBitmap: bitmap,
                 rowBytes: totalBytes,
                 bounds: difference.extent,
                 format: kCIFormatRGBA8,
                 colorSpace: nil)

let rgba = UnsafeBufferPointer<UInt8>(
    start: UnsafePointer<UInt8>(bitmap),
    count: totalBytes)

let red = rgba[0]
let green = rgba[1]
let blue = rgba[2]
let red = rgba[0]
let green = rgba[1]
let blue = rgba[2]
If red, green or blue are not zero, you know the images are different!
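For intuition, the difference-then-maximum reduction those two filters perform can be sketched on the CPU over raw RGBA bytes (toy data, not the Core Image API):

```swift
// Two tiny RGBA8 "images" as flat byte arrays (hypothetical data).
let imageA: [UInt8] = [255, 0, 0, 255,   0, 255, 0, 255]
let imageB: [UInt8] = [255, 0, 0, 255,   0, 250, 0, 255]

// Max channel-wise absolute difference; 0 means the images are identical.
let maxDifference = zip(imageA, imageB)
    .map { a, b in a > b ? a - b : b - a }
    .max() ?? 0

print(maxDifference) // 5: the images differ (green channel of the second pixel)
```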