Metal Core Image Kernel Sampling - iOS

I wrote the following test kernel to understand sampling in Metal Core Image shaders. What I want to achieve is the following: any pixel outside the bounds (extent) of the inputImage should be black, and all other pixels should be the pixels of inputImage as usual. But I don't see the desired output, so something is wrong in my understanding of how samplers work in shaders. There is no easy way to grab the world coordinates of inputImage; only the destination supports world coordinates. Here is my code:
extern "C" float4 testKernel(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 inputCoordinate = inputImage.coord();
    float4 color = inputImage.sample(inputCoordinate);
    float2 inputOrigin = inputImage.origin();
    float2 inputSize = inputImage.size();
    float2 destCoord = dest.coord();

    if (inputCoordinate.x * inputSize.x < destCoord.x || inputCoordinate.y * inputSize.y > destCoord.y) {
        return float4(0.0, 0.0, 0.0, 1.0);
    }
    return color;
}
And here is Swift code for the filter:
class CIMetalTestRenderer: CIFilter {
    var inputImage: CIImage?

    static var kernel: CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "testKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        let dod = inputImage.extent.insetBy(dx: -10, dy: -10)
        return CIMetalTestRenderer.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}
Update: Here is the full code of my ViewController. I just have a UIImageView in the storyboard (it could also be created in viewDidLoad):
class ViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        generateSolidImage()
    }

    private func generateSolidImage() {
        let renderSize = imageView.bounds.size
        let solidSize = CGSize(width: renderSize.width * 0.5, height: renderSize.height * 0.5)
        var solidImage = CIImage(color: CIColor(red: 0.3, green: 0.6, blue: 0.754, alpha: 1))
        var cropRect = CGRect.zero
        cropRect.size = solidSize
        solidImage = solidImage.cropped(to: cropRect)
        solidImage = solidImage.transformed(by: CGAffineTransform(translationX: -10, y: -10))

        let metalRenderer = CIMetalTestRenderer()
        metalRenderer.inputImage = solidImage
        var outputImage = metalRenderer.outputImage
        outputImage = outputImage?.transformed(by: CGAffineTransform(translationX: 20, y: 20))

        let cyanImage = CIImage(color: CIColor.cyan).cropped(to: CGRect(x: 0, y: 0, width: renderSize.width, height: renderSize.height))
        outputImage = outputImage?.composited(over: cyanImage)

        let ciContext = CIContext()
        let cgImage = ciContext.createCGImage(outputImage!, from: outputImage!.extent)
        imageView.image = UIImage(cgImage: cgImage!)
    }
}
And here are the outputs (by commenting and uncommenting the black-pixel line, respectively).

I think the problem is the comparison with dest.coord(), because that value also changes depending on the pixel that is currently being processed.
If you just want to check whether you are currently sampling outside the bounds of inputImage, you can simply do the following (sampler.coord() returns coordinates relative to the sampler's extent, normalized to [0, 1], which is why the comparison is against 0 and 1):
if (inputCoordinate.x < 0.0 || inputCoordinate.x > 1.0 ||
    inputCoordinate.y < 0.0 || inputCoordinate.y > 1.0) {
    return float4(0.0, 0.0, 0.0, 1.0);
}
However, there is a simpler way to achieve a "clamp-to-black" effect:
let clampedToBlack = solidImage.composited(over: CIImage.black)
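A minimal sketch of how that could replace the custom kernel in generateSolidImage above (the inset values mirror the original dod):
// Sketch: clamp-to-black without the custom kernel.
// CIImage.black is infinite, so compositing solidImage over it makes every
// pixel outside solidImage's extent opaque black; cropping bounds the result.
let padded = clampedToBlack.cropped(to: solidImage.extent.insetBy(dx: -10, dy: -10))
// `padded` can then take the place of `metalRenderer.outputImage` in generateSolidImage.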

Related

Draw dashed-line cylinder in SceneKit like the Measure app?

I've followed this question to try to make the dashed cylinder:
final class LineNode: SCNNode {
    convenience init(positionA: SCNVector3, positionB: SCNVector3) {
        self.init()
        let vector = SCNVector3(positionA.x - positionB.x, positionA.y - positionB.y, positionA.z - positionB.z)
        let distance = vector.length
        let midPosition = (positionA + positionB) / 2

        let lineGeometry = SCNCylinder()
        lineGeometry.radius = PileDrawer3D.lineWidth
        lineGeometry.height = CGFloat(distance)
        lineGeometry.radialSegmentCount = 5
        lineGeometry.firstMaterial?.diffuse.contents = dashedImage
        lineGeometry.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(distance * 10, Float(lineGeometry.radius * 10), 1)
        lineGeometry.firstMaterial?.diffuse.wrapS = .repeat
        lineGeometry.firstMaterial?.diffuse.wrapT = .repeat
        lineGeometry.firstMaterial?.isDoubleSided = true
        lineGeometry.firstMaterial?.multiply.contents = UIColor.green
        lineGeometry.firstMaterial?.lightingModel = .constant

        let rotation = SCNMatrix4MakeRotation(.pi / 2, 0, 0, 1)
        lineGeometry.firstMaterial?.diffuse.contentsTransform = SCNMatrix4Mult(rotation, lineGeometry.firstMaterial!.diffuse.contentsTransform)

        geometry = lineGeometry
        position = midPosition
        eulerAngles = SCNVector3.lineEulerAngles(vector: vector)
        name = className
    }

    lazy var dashedImage: UIImage = {
        let size = CGSize(width: 10, height: 3)
        UIGraphicsBeginImageContextWithOptions(size, true, 0)
        UIColor.white.setFill()
        UIRectFill(CGRect(x: 0, y: 0, width: 7, height: size.height))
        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img!
    }()
}
However, the cylinder is not dashed.
I'm not sure what I'm missing here; please help.
Update:
It turns out that the clear color (in the image) is rendered as black, not transparent, in the SCNView. Still, I have no idea why the green color got darkened like this.
Another approach for Line & DashLine
final class LineNode: SCNNode {
    var color: UIColor? {
        set { geometry?.firstMaterial?.diffuse.contents = newValue }
        get { geometry?.firstMaterial?.diffuse.contents as? UIColor }
    }

    convenience init(positionA: SCNVector3, positionB: SCNVector3, dash: CGFloat = 0, in scene: SCNScene? = nil) {
        self.init()
        let indices: [Int32] = [0, 1]
        let source = SCNGeometrySource(vertices: [positionA, positionB])
        let element = SCNGeometryElement(indices: indices, primitiveType: .line)
        geometry = SCNGeometry(sources: [source], elements: [element])
        geometry?.firstMaterial?.diffuse.contents = UIColor.green
        geometry?.firstMaterial?.lightingModel = .constant
    }
}
final class DashLineNode: SCNNode {
    convenience init(positionA: SCNVector3, positionB: SCNVector3) {
        self.init()
        let vector = (positionB - positionA)
        let length = floor(vector.length / 1)
        let segment = vector / length
        let indices: [Int32] = Array(0..<Int32(length))

        var vertices = [positionA]
        for _ in indices {
            vertices.append(vertices.last! + segment)
        }

        let source = SCNGeometrySource(vertices: vertices)
        let element = SCNGeometryElement(indices: indices, primitiveType: .line)
        geometry = SCNGeometry(sources: [source], elements: [element])
        geometry?.firstMaterial?.lightingModel = .constant
    }
}
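Both snippets rely on SCNVector3 helpers (length, +, -, / and lineEulerAngles) that are not shown in the question or the answer. A minimal sketch of the arithmetic helpers, assuming iOS where the components are Float (lineEulerAngles is omitted):
import SceneKit

// Hypothetical helpers assumed by the LineNode/DashLineNode snippets above.
extension SCNVector3 {
    var length: Float { (x * x + y * y + z * z).squareRoot() }

    static func + (lhs: SCNVector3, rhs: SCNVector3) -> SCNVector3 {
        SCNVector3(lhs.x + rhs.x, lhs.y + rhs.y, lhs.z + rhs.z)
    }
    static func - (lhs: SCNVector3, rhs: SCNVector3) -> SCNVector3 {
        SCNVector3(lhs.x - rhs.x, lhs.y - rhs.y, lhs.z - rhs.z)
    }
    static func / (vector: SCNVector3, scalar: Float) -> SCNVector3 {
        SCNVector3(vector.x / scalar, vector.y / scalar, vector.z / scalar)
    }
}
With those in place, DashLineNode(positionA:positionB:) can be added to a scene like any other node.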

What is the most optimized way of changing the alpha of blended images in Core Image in Swift?

I want to blend two images with darkenBlendMode and use a Slider to manipulate the alpha of the first and second image. Unfortunately my method is laggy on an iPhone 6S. I know the problem is that the "loadImage" function is called on every Slider change, but I have no idea how to make it less expensive. Do you have any idea how to fix it?
extension UIImage {
    func alpha(_ value: CGFloat) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        draw(at: CGPoint.zero, blendMode: .normal, alpha: value)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
struct ContentView: View {
    @State private var image: Image?
    @State private var opacity: CGFloat = 0

    let context = CIContext()

    var body: some View {
        return VStack(alignment: .center) {
            image?
                .resizable()
                .padding(14.0)
                .scaledToFit()
            Slider(value: Binding(
                get: {
                    self.opacity
                },
                set: { (newValue) in
                    self.opacity = newValue
                    self.loadImage()
                }
            ), in: -0.5...0.5)
        }
        .onAppear(perform: loadImage)
    }

    func loadImage() {
        let uiInputImage = UIImage(named: "photo1")!.alpha(0.5 + opacity)
        let uiBackgroundInputImage = UIImage(named: "photo2")!.alpha(0.5 - opacity)
        let ciInputImage = CIImage(image: uiInputImage)
        let ciBackgroundInputImage = CIImage(image: uiBackgroundInputImage)

        let currentFilter = CIFilter.darkenBlendMode()
        currentFilter.inputImage = ciInputImage
        currentFilter.backgroundImage = ciBackgroundInputImage

        guard let blendedImage = currentFilter.outputImage else { return }
        if let cgBlendedImage = context.createCGImage(blendedImage, from: blendedImage.extent) {
            let uiblendedImage = UIImage(cgImage: cgBlendedImage)
            image = Image(uiImage: uiblendedImage)
        }
    }
}
You could use the CIColorMatrix filter to change the alpha of the images without loading the bitmap data again:
let colorMatrixFilter = CIFilter.colorMatrix()
colorMatrixFilter.inputImage = ciInputImage // no need to load that every time
colorMatrixFilter.aVector = CIVector(x: 0, y: 0, z: 0, w: 0.5 + opacity)
let semiTransparentImage = colorMatrixFilter.outputImage
// perform blending
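Putting it together, loadImage could be restructured roughly like this (a sketch, assuming the two CIImages are created once, e.g. as stored properties; names are illustrative):
import UIKit
import CoreImage.CIFilterBuiltins

// Created once instead of on every slider change (assumed stored properties).
let ciInputImage = CIImage(image: UIImage(named: "photo1")!)!
let ciBackgroundInputImage = CIImage(image: UIImage(named: "photo2")!)!

func blendedImage(opacity: CGFloat) -> CIImage? {
    // Scale only the alpha channel of each image via a color matrix.
    let fadeFront = CIFilter.colorMatrix()
    fadeFront.inputImage = ciInputImage
    fadeFront.aVector = CIVector(x: 0, y: 0, z: 0, w: 0.5 + opacity)

    let fadeBack = CIFilter.colorMatrix()
    fadeBack.inputImage = ciBackgroundInputImage
    fadeBack.aVector = CIVector(x: 0, y: 0, z: 0, w: 0.5 - opacity)

    // Blend as before, but without redrawing UIImages first.
    let blend = CIFilter.darkenBlendMode()
    blend.inputImage = fadeFront.outputImage
    blend.backgroundImage = fadeBack.outputImage
    return blend.outputImage
}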

MTKView glitches/strobing while using a custom blur filter written in Metal

I am using CADisplayLink to live-filter an image and show it in an MTKView. All filters work fine until I try the blur filter: during that filter the MTKView sometimes starts strobing, glitching, or just showing a black screen on some frames instead of the actual result image.
I have three interesting observations:
1) There is no such problem when I display the result image in a UIImageView, so the filter itself is not the cause of the problem.
2) If I switch back from the blur to any other filter, the same problem starts happening in those filters too, but ONLY when I used the blur filter first.
3) The glitching itself slowly fades away the more I use the app. It even starts to occur less and less the more times I actually launch the app.
Code for the MTKView:
import GLKit
import UIKit
import MetalKit
import QuartzCore

class MetalImageView: MTKView {
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    lazy var commandQueue: MTLCommandQueue = {
        [unowned self] in
        return self.device!.makeCommandQueue()!
    }()

    lazy var ciContext: CIContext = {
        [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())
        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }
        framebufferOnly = false
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // from tutorial
    private func setup() {
        framebufferOnly = false
        isPaused = false
        enableSetNeedsDisplay = false
    }

    /// The image to display
    var image: CIImage? {
        didSet {
        }
    }

    override func draw() {
        guard let image = image,
              let targetTexture = currentDrawable?.texture else {
            return
        }
        let commandBuffer = commandQueue.makeCommandBuffer()
        let bounds = CGRect(origin: CGPoint.zero, size: drawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y
        let scaleX = drawableSize.width / image.extent.width
        let scaleY = drawableSize.height / image.extent.height
        let scale = min(scaleX, scaleY)

        let scaledImage = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(scaledImage,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)

        commandBuffer!.present(currentDrawable!)
        commandBuffer!.commit()

        super.draw()
    }
}
extension CGRect {
    func aspectFitInRect(target: CGRect) -> CGRect {
        let scale: CGFloat = {
            let scale = target.width / self.width
            return self.height * scale <= target.height ?
                scale :
                target.height / self.height
        }()
        let width = self.width * scale
        let height = self.height * scale
        let x = target.midX - width / 2
        let y = target.midY - height / 2
        return CGRect(x: x,
                      y: y,
                      width: width,
                      height: height)
    }
}
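For reference, a quick usage sketch of that helper (the numbers are illustrative):
// A 400x300 rect fitted into a 200x200 target: scaled down and centered.
let fitted = CGRect(x: 0, y: 0, width: 400, height: 300)
    .aspectFitInRect(target: CGRect(x: 0, y: 0, width: 200, height: 200))
// fitted == CGRect(x: 0, y: 25, width: 200, height: 150)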
The code for the blur filter in Metal:
float4 zoneBlur(sampler src, float time, float4 touch) {
    float focusPower = 2.0;
    int focusDetail = 10;

    float2 uv = src.coord();
    float2 fingerPos;
    float2 size = src.size();

    if (touch.x == 0 || touch.y == 0) {
        fingerPos = float2(0.5, 0.5);
    } else {
        fingerPos = touch.xy / size.xy;
    }

    float2 focus = uv - fingerPos;
    float4 outColor;
    outColor = float4(0, 0, 0, 1);

    for (int i = 0; i < focusDetail; i++) {
        float power = 1.0 - focusPower * (1.0 / size.x) * float(i);
        outColor.rgb += src.sample(focus * power + fingerPos).rgb;
    }
    outColor.rgb *= 1.0 / float(focusDetail);

    return outColor;
}
I wonder what could cause such odd behaviour?

Cropping the same UIImage with the same CGRect gives different results

I have the following functionality in the app:
The user takes (or chooses) an image (hereinafter originalImage).
The originalImage is sent to some external API, which returns the array of coordinates of dots that I need to add to originalImage.
Since the dots are always located in one area (the face), I want to crop the originalImage close to the face borders and display to the user only the result of the crop.
After the crop result is displayed, I add the dots to it one by one.
Here is the code that does the job (except sending the image; let's say that has already happened):
class ScanResultViewController: UIViewController {
    @IBOutlet weak var scanPreviewImageView: UIImageView!

    var originalImage = ORIGINAL_IMAGE //meaning we already have it
    let scanDots = [["x": 123, "y": 123], ["x": 234, "y": 234]] //total 68 coordinates
    var cropRect: CGRect!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setScanImage()
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        self.animateScan(0)
    }

    func setScanImage() {
        self.cropRect = self.getCropRect(self.scanDots, sourceImage: self.originalImage)
        let croppedImage = self.originalImage.imageAtRect(self.cropRect)
        self.scanPreviewImageView.image = croppedImage
        self.scanPreviewImageView.contentMode = .ScaleAspectFill
    }

    func animateScan(index: Int) {
        let i = index
        self.originalImage = self.addOnePointToImage(self.originalImage, pointImage: GREEN_DOT!, point: self.scanDots[i])
        let croppedImage = self.originalImage.imageAtRect(self.cropRect)
        self.scanPreviewImageView.image = croppedImage
        self.scanPreviewImageView.contentMode = .ScaleAspectFill
        if i < self.scanDots.count - 1 {
            let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(0.1 * Double(NSEC_PER_SEC)))
            dispatch_after(delay, dispatch_get_main_queue()) {
                self.animateScan(i + 1)
            }
        }
    }

    func addOnePointToImage(sourceImage: UIImage, pointImage: UIImage, point: Dictionary<String, CGFloat>) -> UIImage {
        let rect = CGRect(x: 0, y: 0, width: sourceImage.size.width, height: sourceImage.size.height)
        UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 0)
        let context = UIGraphicsGetCurrentContext()
        CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
        CGContextFillRect(context, rect)
        sourceImage.drawInRect(rect, blendMode: .Normal, alpha: 1)
        let pointWidth = sourceImage.size.width / 66.7
        pointImage.drawInRect(CGRectMake(point["x"]! - pointWidth / 2, point["y"]! - pointWidth / 2, pointWidth, pointWidth), blendMode: .Normal, alpha: 1)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }

    func getCropRect(points: Array<Dictionary<String, CGFloat>>, sourceImage: UIImage) -> CGRect {
        var topLeft: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var topRight: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var bottomLeft: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var bottomRight: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        for p in points {
            if p["x"] < topLeft.x { topLeft.x = p["x"]! }
            if p["y"] < topLeft.y { topLeft.y = p["y"]! }
            if p["x"] > topRight.x { topRight.x = p["x"]! }
            if p["y"] < topRight.y { topRight.y = p["y"]! }
            if p["x"] < bottomLeft.x { bottomLeft.x = p["x"]! }
            if p["y"] > bottomLeft.y { bottomLeft.y = p["y"]! }
            if p["x"] > bottomRight.x { bottomRight.x = p["x"]! }
            if p["y"] > bottomRight.y { bottomRight.y = p["y"]! }
        }
        let rect = CGRect(x: topLeft.x, y: topLeft.y, width: (topRight.x - topLeft.x), height: (bottomLeft.y - topLeft.y))
        return rect
    }
}

extension UIImage {
    public func imageAtRect(rect: CGRect) -> UIImage {
        let imageRef: CGImageRef = CGImageCreateWithImageInRect(self.CGImage, rect)!
        let subImage: UIImage = UIImage(CGImage: imageRef)
        return subImage
    }
}
The problem is that in setScanImage the desired area is accurately cropped and displayed, but when the animateScan method is called, a different area of the same image is cropped (and displayed), even though cropRect is the same and the size of originalImage is exactly the same.
Any ideas, guys?
By the way, if I display originalImage without cropping it, everything works smoothly.
So finally, after approximately 10 hours of net time (and a lot of help from the Stack Overflow community :-) I managed to fix the problem:
In the function addOnePointToImage you need to change the following:
In this line:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 0)
you need to change the last argument (which stands for scale) to 1:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 1)
That totally resolves the issue: with a scale of 0 the context uses the screen scale, so on a Retina device the redrawn image has two or three times as many pixels, and cropRect, which CGImageCreateWithImageInRect applies in pixel coordinates, then covers a different area.
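An alternative (a sketch in current Swift, since the snippets above use Swift 2 syntax) is to keep the screen scale but make the crop scale-aware by converting the rect from points to pixels:
import UIKit

extension UIImage {
    /// Crops in point coordinates by converting the rect to pixels first,
    /// so the result is independent of the image's scale factor.
    func imageAtRect(_ rect: CGRect) -> UIImage? {
        let pixelRect = CGRect(x: rect.origin.x * scale,
                               y: rect.origin.y * scale,
                               width: rect.size.width * scale,
                               height: rect.size.height * scale)
        guard let cropped = cgImage?.cropping(to: pixelRect) else { return nil }
        return UIImage(cgImage: cropped, scale: scale, orientation: imageOrientation)
    }
}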

How to draw text and images as textures on iOS Swift OpenGL ES 1.0/1.1/2.0

I am trying to render textures from PNG images and also draw strings as textures in Swift using OpenGL ES 1.0. None of the examples I have googled or found here are working. I have spent more than enough time on this and I need to figure it out.
On screen only the non-textured drawers are shown. I removed all the triangle-based drawers, and there is no issue with the camera angle or views; the issue is with texture rendering.
Thanks to anyone who can help me. Here is my simplified code for drawing a PNG to a texture; it shows nothing, not even white boxes.
This is the renderer class:
import GLKit
import SWXMLHash

class MapViewController: GLKViewController {
    @IBOutlet var glView: GLKView!

    var rViewController: UIViewController? = nil
    var context: EAGLContext!
    var ratio: Float!
    var size: Float!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.context = EAGLContext.init(API: EAGLRenderingAPI.OpenGLES1)
        if self.context == nil {
            Util.log("failed to create context for self")
        }
        glView.context = self.context
        EAGLContext.setCurrentContext(self.context)

        glClearColor(0.95, 0.95, 0.95, 0.5)
        glDisable(GLenum(GL_CULL_FACE))
        glEnable(GLenum(GL_DEPTH_TEST))
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    }

    override func viewWillLayoutSubviews() {
        if firstInit {
            let width = glView.frame.size.width
            let height = glView.frame.size.height
            glViewport(0, 0, Int32(width), Int32(height))

            let fieldOfView = GLKMathDegreesToRadians(60)
            glEnable(GLenum(GL_NORMALIZE))
            self.ratio = Float(width) / Float(height)

            glMatrixMode(GLenum(GL_PROJECTION))
            glLoadIdentity()
            size = zNear * Float(tan(Double(fieldOfView / 2.0)))
            glFrustumf(-size, size, -size / (ratio), size / (ratio), zNear, zFar)
            glMatrixMode(GLenum(GL_MODELVIEW))
            glLoadIdentity()
        }
    }

    override func glkView(view: GLKView, drawInRect rect: CGRect) {
        glPushMatrix()
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        glClear(GLbitfield(GL_DEPTH_BUFFER_BIT))
        glMatrixMode(GLenum(GL_MODELVIEW))
        glLoadIdentity()

        glEnableClientState(GLenum(GL_VERTEX_ARRAY))
        glEnableClientState(GLenum(GL_COLOR_ARRAY))
        glEnable(GLenum(GL_TEXTURE_2D))
        // glEnable(GLenum(GL_BLEND))
        // glEnableClientState(GLenum(GL_NORMAL_ARRAY))
        glEnableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))

        let fl = CGPoint(x: 0, y: 0)
        let drawer = IconDrawer(x: 0, y: 0, z: 0, size: 150, roomCoords: fl, exampleimage)
        drawer.draw()

        glDisable(GLenum(GL_TEXTURE_2D))
        glDisableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))
        glDisableClientState(GLenum(GL_VERTEX_ARRAY))
        glDisableClientState(GLenum(GL_COLOR_ARRAY))
        glPopMatrix()
    }
}
And this is the drawer class:
import Foundation

class IconDrawer: Mesh {
    let normals: [GLfloat] = [
        0.0, 0.0, 1.0,
        0.0, 0.0, 1.0,
        0.0, 0.0, 1.0,
        0.0, 0.0, 1.0 ]
    let textureCoordinates: [Float] = [0.0, 1.0,
                                       1.0, 1.0,
                                       1.0, 0.0,
                                       0.0, 0.0]
    let indices = [0, 1, 2, 0, 2, 3]
    var coordinates: CGPoint? = nil
    var texture: GLuint = 0
    static var currentScreenMid = CGPoint(x: 0, y: 0)

    init(x: Float, y: Float, z: Float, size: Int, roomCoords: CGPoint, imageName: String) {
        super.init()
        // Mapping coordinates for the vertices
        self.coordinates = roomCoords
        let w = Float((size)/1000)
        let h = Float((size)/1000)
        let vertices: [Float] = [-w+x, -h+y, z+Zunit,
                                  w+x, -h+y, z+Zunit,
                                  w+x,  h+y, z+Zunit,
                                 -w+x,  h+y, z+Zunit]
        setIndices(indices)
        setVertices(vertices)
        setTextureCoordinates(textureCoordinates)

        //PART 1 LOAD PNG
        let image = UIImage(named: imageName)
        let iwidth = CGImageGetWidth(image?.CGImage)
        let iheight = CGImageGetHeight(image?.CGImage)
        let imageData = malloc(iwidth * iheight * 4)
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let imageContext: CGContextRef = CGBitmapContextCreate(imageData, iwidth, iheight, 8, iwidth * 4, CGColorSpaceCreateDeviceRGB(), bitmapInfo.rawValue)!
        CGContextClearRect(imageContext, CGRectMake(0, 0, CGFloat(iwidth), CGFloat(iheight)))
        CGContextTranslateCTM(imageContext, 0, CGFloat((iheight - iheight)))
        CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, CGFloat(iwidth), CGFloat(iheight)), image?.CGImage)
        glTexImage2D(GLenum(GL_TEXTURE_2D), GLint(0), GL_RGBA, Int32(iwidth), Int32(iheight), GLint(0), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), imageData)
        self.w = GLsizei(Int(image!.size.width))
        self.h = GLsizei(Int(image!.size.height))
        loadBitmap(imageData)
        fl.bitmaps.append(imageData)

        //PART 2 CREATE TEXTURE WITH THE IMAGE DATA
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexImage2D(GLenum(GL_TEXTURE_2D), Int32(0), GL_RGBA, GLsizei(Int(image!.size.width)), GLsizei(Int(image!.size.height)), Int32(0), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), imageData)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GLint(GL_LINEAR))
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GLint(GL_LINEAR))

        if image != nil {
            let imageRect = CGRectMake(0.0, 0.0, image!.size.width, image!.size.height)
            UIGraphicsBeginImageContextWithOptions(image!.size, false, UIScreen.mainScreen().scale)
            image?.drawInRect(imageRect)
            UIGraphicsEndImageContext()
        }
    }

    override func draw() {
        //PART 3 RENDER
        glEnableClientState(GLenum(GL_VERTEX_ARRAY))
        glEnableClientState(GLenum(GL_NORMAL_ARRAY))
        glEnableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))
        glEnable(UInt32(GL_TEXTURE_2D))
        glEnable(UInt32(GL_BLEND))
        glBlendFunc(UInt32(GL_SRC_ALPHA), UInt32(GL_ONE_MINUS_SRC_ALPHA))
        Drawer.printGLErrors()
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        Drawer.printGLErrors()
        glVertexPointer(Int32(2), GLenum(GL_FLOAT), GLsizei(0), self.mVerticesBuffer!)
        Drawer.printGLErrors()
        glNormalPointer(GLenum(GL_FLOAT), GLsizei(0), normals)
        Drawer.printGLErrors()
        glTexCoordPointer(Int32(2), GLenum(GL_FLOAT), GLsizei(0), textureCoordinates)
        Drawer.printGLErrors()
        glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
        Drawer.printGLErrors()
    }
}
The problem you are facing is that non-POT (power-of-two) textures are not supported out of the box. I believe there are some extensions for that, but I wouldn't bother with them.
Anyway, the texture dimensions must be POT (2, 4, 8, 16...), so you have two options. You can create a POT context when getting the raw image data pointer and resize the image to the desired POT size, scaling the image in the process. A more common way is to create a large enough texture with a NULL data pointer and then use glTexSubImage2D with the original image size parameters to send the data to the texture. This procedure also requires you to change the texture coordinates, so instead of 1.0 you will have imageWidth/textureWidth, and the same for the height. The second procedure needs a bit more effort, but it can later be upgraded into a texture-atlas procedure (putting multiple images onto the same texture).
Then again, the fastest fix is simply to resize your image in Photoshop or wherever to, say, 128x128. Note that the POT dimensions do not need to be equal; 128x64 is valid as well.
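A rough sketch of that second approach, reusing the texture, iwidth, iheight and imageData variables from the IconDrawer initializer above (texWidth/texHeight are illustrative POT sizes, not taken from the original code):
// Allocate a POT texture with no data, then upload the non-POT image into it.
let texWidth = 256   // next power of two >= iwidth (assumed)
let texHeight = 256  // next power of two >= iheight (assumed)

glGenTextures(1, &texture)
glBindTexture(GLenum(GL_TEXTURE_2D), texture)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GLint(GL_LINEAR))
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GLint(GL_LINEAR))

// NULL data pointer: this only allocates storage.
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(texWidth), GLsizei(texHeight),
             0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

// Upload the actual image into the lower-left corner of the texture.
glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0, GLsizei(iwidth), GLsizei(iheight),
                GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), imageData)

// Texture coordinates must then go up to iwidth/texWidth and iheight/texHeight
// instead of 1.0.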
