Speed improvement in UIView pixel-per-pixel drawing (iOS, Swift)

Is there a way to improve the speed/performance of drawing pixel by pixel into a UIView?
The current implementation for a 500x500 pixel UIView is terribly slow.
class CustomView: UIView {

    public var context = UIGraphicsGetCurrentContext()
    public var redvalues = [[CGFloat]](repeating: [CGFloat](repeating: 1.0, count: 500), count: 500)
    public var start = 0 {
        didSet {
            self.setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        context = UIGraphicsGetCurrentContext()
        for yindex in 0...499 {
            for xindex in 0...499 {
                context?.setStrokeColor(UIColor(red: redvalues[xindex][yindex], green: 0.0, blue: 0.0, alpha: 1.0).cgColor)
                context?.setLineWidth(2)
                context?.beginPath()
                context?.move(to: CGPoint(x: CGFloat(xindex), y: CGFloat(yindex)))
                context?.addLine(to: CGPoint(x: CGFloat(xindex) + 1.0, y: CGFloat(yindex)))
                context?.strokePath()
            }
        }
    }
}
Thank you very much

When drawing individual pixels, you can use a bitmap context. A bitmap context takes raw pixel data as input.
The context copies your raw pixel data, so you don't have to use paths, which are likely much slower. You can then get a CGImage by using context.makeImage().
The image can then be used in an image view, which eliminates the need to redraw the whole thing every frame.
If you don't want to manually create a bitmap context, you can use
UIGraphicsBeginImageContext(size)
let context = UIGraphicsGetCurrentContext()
// draw everything into the context
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then you can use a UIImageView to display the rendered image.
It is also possible to draw into a CALayer, which does not need to be redrawn every frame but only when resized.
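If you go the layer route, a minimal sketch could look like this (assuming cgImage came from context.makeImage() as above; the layer setup here is illustrative, not part of the original answer):
let pixelLayer = CALayer()
pixelLayer.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
pixelLayer.magnificationFilter = .nearest // keep hard pixel edges if the layer is scaled
pixelLayer.contents = cgImage             // the layer displays the CGImage directly; no draw(_:) needed
view.layer.addSublayer(pixelLayer)
The layer caches its contents, so nothing is redrawn until you assign a new image.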

That's how it looks now; are there any further optimizations possible?
public struct rgba {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

public let imageview = UIImageView()

override func viewDidLoad() {
    super.viewDidLoad()
    let width_input = 500
    let height_input = 500
    let redPixel = rgba(r: 255, g: 0, b: 0, a: 255)
    let greenPixel = rgba(r: 0, g: 255, b: 0, a: 255)
    let bluePixel = rgba(r: 0, g: 0, b: 255, a: 255)
    var pixelData = [rgba](repeating: redPixel, count: Int(width_input * height_input))
    pixelData[1] = greenPixel
    pixelData[3] = bluePixel
    self.view.addSubview(imageview)
    imageview.frame = CGRect(x: 100, y: 100, width: 600, height: 600)
    imageview.image = draw(pixel: pixelData, width: width_input, height: height_input)
}

func draw(pixel: [rgba], width: Int, height: Int) -> UIImage {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let data = UnsafeMutableRawPointer(mutating: pixel)
    let bitmapContext = CGContext(data: data,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 4 * width,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    let image = bitmapContext?.makeImage()
    return UIImage(cgImage: image!)
}

I took the answer from Manuel and got it working in Swift 5. The main sticking point was clearing the dangling-pointer warning that Xcode 12 now raises.
var image: CGImage?
pixelData.withUnsafeMutableBytes { (rawBufferPtr: UnsafeMutableRawBufferPointer) in
    if let rawPtr = rawBufferPtr.baseAddress {
        let bitmapContext = CGContext(data: rawPtr,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4 * width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        image = bitmapContext?.makeImage()
    }
}
I did have to move away from the rgba struct approach for front-loading the data and moved to direct UInt32 values derived from the raw values in an enum. The 'append' or 'replaceInRange' approach to updating an existing array took hours (my bitmap was LARGE) and ended up exhausting the swap space on my computer.
enum Color: UInt32 { // All 4 bytes long with full opacity
    case red = 4278190335    // 0xFF0000FF
    case yellow = 4294902015
    case orange = 4291559679
    case pink = 4290825215
    case violet = 4001558271
    case purple = 2147516671
    case green = 16711935
    case blue = 65535        // 0x0000FFFF
}
With this approach I was able to quickly build a Data buffer of the required size via:
func prepareColorBlock(c: Color) -> Data {
    var rawData = withUnsafeBytes(of: c.rawValue) { Data($0) }
    rawData.reverse() // Byte order is reversed when defined
    var dataBlock = Data()
    dataBlock.reserveCapacity(100)
    for _ in stride(from: 0, to: 100, by: 1) {
        dataBlock.append(rawData)
    }
    return dataBlock
}
With that, I just appended each of these blocks into my mutable Data instance 'pixelData' and we were off. You can tweak how the data is assembled; I just wanted to generate some color bars in a UIImageView to validate the work. For an 800x600 view, it took about 2.3 seconds to generate and render the whole thing.
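To make the assembly step concrete, here is a rough sketch of how those blocks can be stitched together and turned into an image (the 800x600 size and the single color are illustrative; cycling the color per block is what produces the bars):
let width = 800, height = 600
var pixelData = Data()
pixelData.reserveCapacity(width * height * 4)
while pixelData.count < width * height * 4 {
    pixelData.append(prepareColorBlock(c: .red)) // 100 pixels per block
}
let colorSpace = CGColorSpaceCreateDeviceRGB()
var image: CGImage?
pixelData.withUnsafeMutableBytes { (rawBufferPtr: UnsafeMutableRawBufferPointer) in
    if let rawPtr = rawBufferPtr.baseAddress {
        let ctx = CGContext(data: rawPtr, width: width, height: height,
                            bitsPerComponent: 8, bytesPerRow: 4 * width,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        image = ctx?.makeImage()
    }
}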
Again, hats off to Manuel for pointing me in the right direction.

Related

Cropping the same UIImage with the same CGRect gives different results

I have the following functionality in the app:
The user takes (or chooses) an image (hereinafter originalImage).
The originalImage is sent to some external API, which returns the array of coordinates of dots that I need to add to originalImage.
Since the dots are always located in one area (the face), I want to crop the originalImage close to the face borders and display to the user only the result of the crop.
After the crop result is displayed, I add the dots to it one by one.
Here is the code that does the job (except sending the image; let's say that has already happened):
class ScanResultViewController: UIViewController {

    @IBOutlet weak var scanPreviewImageView: UIImageView!

    var originalImage = ORIGINAL_IMAGE // meaning we already have it
    let scanDots = [["x": 123, "y": 123], ["x": 234, "y": 234]] // total 68 coordinates
    var cropRect: CGRect!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setScanImage()
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        self.animateScan(0)
    }

    func setScanImage() {
        self.cropRect = self.getCropRect(self.scanDots, sourceImage: self.originalImage)
        let croppedImage = self.originalImage.imageAtRect(self.cropRect)
        self.scanPreviewImageView.image = croppedImage
        self.scanPreviewImageView.contentMode = .ScaleAspectFill
    }

    func animateScan(index: Int) {
        let i = index
        self.originalImage = self.addOnePointToImage(self.originalImage, pointImage: GREEN_DOT!, point: self.scanDots[i])
        let croppedImage = self.originalImage.imageAtRect(self.cropRect)
        self.scanPreviewImageView.image = croppedImage
        self.scanPreviewImageView.contentMode = .ScaleAspectFill
        if i < self.scanDots.count - 1 {
            let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(0.1 * Double(NSEC_PER_SEC)))
            dispatch_after(delay, dispatch_get_main_queue()) {
                self.animateScan(i + 1)
            }
        }
    }

    func addOnePointToImage(sourceImage: UIImage, pointImage: UIImage, point: Dictionary<String, CGFloat>) -> UIImage {
        let rect = CGRect(x: 0, y: 0, width: sourceImage.size.width, height: sourceImage.size.height)
        UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 0)
        let context = UIGraphicsGetCurrentContext()
        CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
        CGContextFillRect(context, rect)
        sourceImage.drawInRect(rect, blendMode: .Normal, alpha: 1)
        let pointWidth = sourceImage.size.width / 66.7
        pointImage.drawInRect(CGRectMake(point["x"]! - pointWidth/2, point["y"]! - pointWidth/2, pointWidth, pointWidth), blendMode: .Normal, alpha: 1)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }

    func getCropRect(points: Array<Dictionary<String, CGFloat>>, sourceImage: UIImage) -> CGRect {
        var topLeft: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var topRight: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var bottomLeft: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        var bottomRight: CGPoint = CGPoint(x: points[0]["x"]!, y: points[0]["y"]!)
        for p in points {
            if p["x"] < topLeft.x { topLeft.x = p["x"]! }
            if p["y"] < topLeft.y { topLeft.y = p["y"]! }
            if p["x"] > topRight.x { topRight.x = p["x"]! }
            if p["y"] < topRight.y { topRight.y = p["y"]! }
            if p["x"] < bottomLeft.x { bottomLeft.x = p["x"]! }
            if p["y"] > bottomLeft.y { bottomLeft.y = p["y"]! }
            if p["x"] > bottomRight.x { bottomRight.x = p["x"]! }
            if p["y"] > bottomRight.y { bottomRight.y = p["y"]! }
        }
        let rect = CGRect(x: topLeft.x, y: topLeft.y, width: (topRight.x - topLeft.x), height: (bottomLeft.y - topLeft.y))
        return rect
    }
}
extension UIImage {
    public func imageAtRect(rect: CGRect) -> UIImage {
        let imageRef: CGImageRef = CGImageCreateWithImageInRect(self.CGImage, rect)!
        let subImage: UIImage = UIImage(CGImage: imageRef)
        return subImage
    }
}
The problem is that in setScanImage the desired area is accurately cropped and displayed, but when the animateScan method is called, a different area of the same image is cropped (and displayed), even though cropRect is the same and the size of originalImage is exactly the same.
Any ideas, guys?
By the way, if I display originalImage without cropping it, everything works smoothly.
So finally, after approximately 10 hours net time (and a lot of help from the stackoverflow community :-), I managed to fix the problem.
In the function addOnePointToImage, you need to change this line:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 0)
by setting the last argument (which stands for scale) to 1:
UIGraphicsBeginImageContextWithOptions(sourceImage.size, true, 1)
That totally resolves the issue. With scale 0 the context uses the device's screen scale, so on a Retina device the rendered image has two or three times as many pixels as its size in points, and since CGImageCreateWithImageInRect works in pixel coordinates, the same cropRect then addresses a different region. Forcing scale 1 keeps points and pixels identical.

View with Custom design

I need to design a single view like the attached design in Swift, and it has to be visible in all my collection view cells. How can I achieve this? Does anyone have an idea?
I have tried it in a test project. This is my way:
Open Photoshop or a similar tool and make a picture with a translucent background.
Use the PS tools to draw a figure the way you want it.
Save it as a PNG. Open Xcode. Put a UIImageView into your UIViewController. Put the PNG into your Assets folder and set this image as the image for the UIImageView. Set the constraints.
Put the following code into the UIViewController.swift file:
import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var testImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(doubleTapped))
        tap.numberOfTapsRequired = 1
        view.addGestureRecognizer(tap)
    }

    func doubleTapped() {
        let image = UIImage(named: "test")
        testImageView.image = image?.maskWithColor(color: UIColor.blue)
    }
}

extension UIImage {

    func maskWithColor(color: UIColor) -> UIImage? {
        let maskImage = self.cgImage
        let width = self.size.width
        let height = self.size.height
        let bounds = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: width, height: height))
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let bitmapContext = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) // needs rawValue of bitmapInfo
        bitmapContext!.clip(to: bounds, mask: maskImage!)
        bitmapContext!.setFillColor(color.cgColor)
        bitmapContext!.fill(bounds)
        // is it nil?
        if let cImage = bitmapContext!.makeImage() {
            let coloredImage = UIImage(cgImage: cImage)
            return coloredImage
        } else {
            return nil
        }
    }
}
Start the app and tap the display. The color changes from black to blue once tapped. Now you should have all the tools to do whatever you want, too. The code is in Swift 3.
You can set this UIImageView in your UICollectionViewCell and set the UIColor with the function provided.
And here is a function to set a random UIColor:
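A minimal Swift 3-style sketch of such a helper (the name randomColor and the per-channel approach are illustrative):
func randomColor() -> UIColor {
    // One random 0...255 value per channel, full alpha.
    let r = CGFloat(arc4random_uniform(256)) / 255.0
    let g = CGFloat(arc4random_uniform(256)) / 255.0
    let b = CGFloat(arc4random_uniform(256)) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: 1.0)
}
You could then call testImageView.image = image?.maskWithColor(color: randomColor()) to get a different tint on every tap.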

Why does my CGImage not display correctly after setNeedsDisplay

I'm drawing a colorWheel in my app. I do this by generating a CGImage in a separate function and using CGContextDrawImage in drawRect to put it onscreen.
On the initial presentation it looks fine, but after I call setNeedsDisplay (or setNeedsDisplayInRect) the image turns black/distorted. I must be doing something stupid, but can't see what.
DrawRect code looks like:
override func drawRect(rect: CGRect) {
    if let context = UIGraphicsGetCurrentContext() {
        let wheelFrame = CGRectMake(0, 0, circleRadius*2, circleRadius*2)
        CGContextSaveGState(context)
        // create clipping mask for circular wheel
        CGContextAddEllipseInRect(context, wheelFrame)
        CGContextClip(context)
        // draw the wheel
        if colorWheelImage != nil {
            CGContextDrawImage(context, CGRectMake(0, 0, circleRadius*2, circleRadius*2), colorWheelImage)
        }
        CGContextRestoreGState(context)
        // draw a selector element
        self.drawSliderElement(context)
    }
}
CGImage generation function:
func generateColorWheelImage() {
    let width = Int(circleRadius*2)
    let height = Int(circleRadius*2)
    var pixels = [PixelData]()
    for yIndex in 0..<width {
        for xIndex in 0..<height {
            pixels.append(colorAtPoint(CGPoint(x: xIndex, y: yIndex)))
        }
    }
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo.ByteOrderDefault
    let bitsPerComponent: Int = 8
    let bitsPerPixel: Int = 24
    let renderingIntent = CGColorRenderingIntent.RenderingIntentDefault
    assert(pixels.count == Int(width * height))
    var data = pixels
    let providerRef = CGDataProviderCreateWithData(nil, data, data.count * sizeof(PixelData), nil)
    colorWheelImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, width * Int(sizeof(PixelData)), rgbColorSpace, bitmapInfo, providerRef, nil, true, renderingIntent)
}
I finally found the answer (here: https://stackoverflow.com/a/10798750/5233176) to this problem when I started running the code on an iPad rather than the simulator and received a BAD_ACCESS error rather than seeing a distorted image.
As the answer explains: '[The] CMSampleBuffer is used directly to create a CGImageRef, so [the] CGImageRef will become invalid once the buffer is released.'
Hence the problem was with this line:
let providerRef = CGDataProviderCreateWithData(nil, data, data.count * sizeof(PixelData), nil)
And the problem can be fixed by using a copy of the buffer, like so:
let providerData = NSData(bytes: data, length: data.count * sizeof(PixelData))
let provider = CGDataProviderCreateWithCFData(providerData)
The NSData makes its own copy of the pixel bytes and is retained by the provider, so the CGImage remains valid after the local array goes away.

Gray scaling the entire view

I am working on a Swift-based application which shows some events. Each event has a time limit, after which the event expires.
I want to make the event screen grayscale after the event expires, for which I have tried things like:
Mask view:
if let maskImage = UIImage(named: "MaskImage") {
    myView.maskView = UIImageView(image: maskImage)
}
The above did not work for me, as my event screen contains colored images as well.
Recursively fetching all subviews and trying to set their backgroundColor, which did not work either.
Changing the alpha value of all subviews, which just shows a more faded white color.
Query: my event screen has many colorful images and some colorful labels; how can I make all of these grayscale?
Any help would be appreciated.
After some research, I've achieved a grayscale effect over the entire view with something like this:
/**
 To convert an image to grayscale
 - parameter image: The UIImage to be converted
 - returns: The UIImage after conversion
 */
static func convertToGrayScale(_ image: UIImage?) -> UIImage? {
    if image == nil {
        return nil
    }
    let imageRect: CGRect = CGRect(x: 0, y: 0, width: image!.size.width, height: image!.size.height)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let width = image!.size.width
    let height = image!.size.height
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    context?.draw(image!.cgImage!, in: imageRect)
    let imageRef = context?.makeImage()
    let newImage = UIImage(cgImage: imageRef!)
    return newImage
}
The above method will work for all colored images in the presented view, and the same can be used for myView.maskView = UIImageView(image: maskImage).
This will grayscale your entire view. For grayscaling only labels and text, I used this:
// Convert to grayscale
func convertToGrayScaleColor() -> UIColor? {
    if self == UIColor.clear {
        return UIColor.clear
    }
    var fRed: CGFloat = 0
    var fGreen: CGFloat = 0
    var fBlue: CGFloat = 0
    var fAlpha: CGFloat = 0
    if self.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha) {
        if fRed == 0 && fGreen == 0 && fBlue == 0 {
            return UIColor.gray
        } else {
            // Simple average of the three channels
            let grayScaleColor = (fRed + fGreen + fBlue) / 3
            return UIColor(red: grayScaleColor, green: grayScaleColor, blue: grayScaleColor, alpha: fAlpha)
        }
    } else {
        print("Could not extract RGBA components, so rolling back to default UIColor.grayColor()")
        return UIColor.gray
    }
}
The above method is part of an extension on UIColor. The code is written in Swift 3.0 using Xcode 8.1.
Please feel free to drop a comment if you are concerned or confused about anything in the answer.
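For reference, a rough sketch of how these two helpers might be driven across a whole view hierarchy (the traversal itself is an addition here, assuming convertToGrayScale and the UIColor extension above are visible from the call site):
func applyGrayscale(to view: UIView) {
    for subview in view.subviews {
        if let imageView = subview as? UIImageView {
            imageView.image = convertToGrayScale(imageView.image) // image helper above
        } else if let label = subview as? UILabel {
            if let gray = label.textColor.convertToGrayScaleColor() {
                label.textColor = gray // UIColor extension above
            }
        }
        subview.backgroundColor = subview.backgroundColor?.convertToGrayScaleColor()
        applyGrayscale(to: subview) // recurse into nested subviews
    }
}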

How to draw text and images as textures on iOS with Swift and OpenGL ES 1.0/1.1/2.0

I am trying to render textures from PNG images and also draw strings as textures in Swift using OpenGL ES 1.0, but none of the examples I have googled or found here are working. I have spent more than enough time on this and I need to figure it out.
Only the non-textured drawers are shown on screen. I removed all the triangle-based drawers, and there is no issue with the camera angle or views; the issue is with texture rendering.
Thanks to anyone who can help me. Here is my simplified code for drawing a PNG to a texture, which shows nothing, not even white boxes.
This is the renderer class:
import GLKit
import SWXMLHash

class MapViewController: GLKViewController {

    @IBOutlet var glView: GLKView!

    var rViewController: UIViewController? = nil
    var context: EAGLContext!
    var ratio: Float!
    var size: Float!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.context = EAGLContext.init(API: EAGLRenderingAPI.OpenGLES1)
        if self.context == nil {
            Util.log("failed to create context for self")
        }
        glView.context = self.context
        EAGLContext.setCurrentContext(self.context)
        glClearColor(0.95, 0.95, 0.95, 0.5)
        glDisable(GLenum(GL_CULL_FACE))
        glEnable(GLenum(GL_DEPTH_TEST))
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    }

    override func viewWillLayoutSubviews() {
        if firstInit {
            let width = glView.frame.size.width
            let height = glView.frame.size.height
            glViewport(0, 0, Int32(width), Int32(height))
            let fieldOfView = GLKMathDegreesToRadians(60)
            glEnable(GLenum(GL_NORMALIZE))
            self.ratio = Float(width) / Float(height)
            glMatrixMode(GLenum(GL_PROJECTION))
            glLoadIdentity()
            size = zNear * Float(tan(Double(fieldOfView / 2.0)))
            glFrustumf(-size, size, -size / (ratio), size / (ratio), zNear, zFar)
            glMatrixMode(GLenum(GL_MODELVIEW))
            glLoadIdentity()
        }
    }

    override func glkView(view: GLKView, drawInRect rect: CGRect) {
        glPushMatrix()
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        glClear(GLbitfield(GL_DEPTH_BUFFER_BIT))
        glMatrixMode(GLenum(GL_MODELVIEW))
        glLoadIdentity()
        glEnableClientState(GLenum(GL_VERTEX_ARRAY))
        glEnableClientState(GLenum(GL_COLOR_ARRAY))
        glEnable(GLenum(GL_TEXTURE_2D))
        // glEnable(GLenum(GL_BLEND))
        // glEnableClientState(GLenum(GL_NORMAL_ARRAY))
        glEnableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))
        let fl = CGPoint(x: 0, y: 0)
        let drawer = IconDrawer(x: 0, y: 0, z: 0, size: 150, roomCoords: fl, imageName: exampleimage)
        drawer.draw()
        glDisable(GLenum(GL_TEXTURE_2D))
        glDisableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))
        glDisableClientState(GLenum(GL_VERTEX_ARRAY))
        glDisableClientState(GLenum(GL_COLOR_ARRAY))
        glPopMatrix()
    }
}
and this is the drawer class:
import Foundation
class IconDrawer: Mesh {

    let normals: [GLfloat] = [
        0.0, 0.0, 1.0,
        0.0, 0.0, 1.0,
        0.0, 0.0, 1.0,
        0.0, 0.0, 1.0 ]
    let textureCoordinates: [Float] = [0.0, 1.0,
                                       1.0, 1.0,
                                       1.0, 0.0,
                                       0.0, 0.0]
    let indices = [0, 1, 2, 0, 2, 3]
    var coordinates: CGPoint? = nil
    var texture: GLuint = 0
    static var currentScreenMid = CGPoint(x: 0, y: 0)

    init(x: Float, y: Float, z: Float, size: Int, roomCoords: CGPoint, imageName: String) {
        super.init()
        // Mapping coordinates for the vertices
        self.coordinates = roomCoords
        let w = Float((size)/1000)
        let h = Float((size)/1000)
        let vertices: [Float] = [-w+x, -h+y, z+Zunit,
                                  w+x, -h+y, z+Zunit,
                                  w+x,  h+y, z+Zunit,
                                 -w+x,  h+y, z+Zunit]
        setIndices(indices)
        setVertices(vertices)
        setTextureCoordinates(textureCoordinates)

        // PART 1: LOAD PNG
        let image = UIImage(named: imageName)
        let iwidth = CGImageGetWidth(image?.CGImage)
        let iheight = CGImageGetHeight(image?.CGImage)
        let imageData = malloc(iwidth * iheight * 4)
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let imageContext: CGContextRef = CGBitmapContextCreate(imageData, iwidth, iheight, 8, iwidth * 4, CGColorSpaceCreateDeviceRGB(), bitmapInfo.rawValue)!
        CGContextClearRect(imageContext, CGRectMake(0, 0, CGFloat(iwidth), CGFloat(iheight)))
        CGContextTranslateCTM(imageContext, 0, CGFloat((iheight - iheight)))
        CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, CGFloat(iwidth), CGFloat(iheight)), image?.CGImage)
        glTexImage2D(GLenum(GL_TEXTURE_2D), GLint(0), GL_RGBA, Int32(iwidth), Int32(iheight), GLint(0), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), imageData)
        self.w = GLsizei(Int(image!.size.width))
        self.h = GLsizei(Int(image!.size.height))
        loadBitmap(imageData)
        fl.bitmaps.append(imageData)

        // PART 2: CREATE TEXTURE WITH THE IMAGE DATA
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexImage2D(GLenum(GL_TEXTURE_2D), Int32(0), GL_RGBA, GLsizei(Int(image!.size.width)), GLsizei(Int(image!.size.height)), Int32(0), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), imageData)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GLint(GL_LINEAR))
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GLint(GL_LINEAR))
        if image != nil {
            let imageRect = CGRectMake(0.0, 0.0, image!.size.width, image!.size.height)
            UIGraphicsBeginImageContextWithOptions(image!.size, false, UIScreen.mainScreen().scale)
            image?.drawInRect(imageRect)
            UIGraphicsEndImageContext()
        }
    }

    override func draw() {
        // PART 3: RENDER
        glEnableClientState(GLenum(GL_VERTEX_ARRAY))
        glEnableClientState(GLenum(GL_NORMAL_ARRAY))
        glEnableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))
        glEnable(UInt32(GL_TEXTURE_2D))
        glEnable(UInt32(GL_BLEND))
        glBlendFunc(UInt32(GL_SRC_ALPHA), UInt32(GL_ONE_MINUS_SRC_ALPHA))
        Drawer.printGLErrors()
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        Drawer.printGLErrors()
        glVertexPointer(Int32(2), GLenum(GL_FLOAT), GLsizei(0), self.mVerticesBuffer!)
        Drawer.printGLErrors()
        glNormalPointer(GLenum(GL_FLOAT), GLsizei(0), normals)
        Drawer.printGLErrors()
        glTexCoordPointer(Int32(2), GLenum(GL_FLOAT), GLsizei(0), textureCoordinates)
        Drawer.printGLErrors()
        glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
        Drawer.printGLErrors()
    }
}
The problem you are facing is that non-POT (power of two) textures are not supported out of the box. I believe there are some extensions for that, but I wouldn't bother with them.
In any case, the texture dimensions must be POT (2, 4, 8, 16, ...), so you have two options. You can create a POT context when getting the raw image data pointer and resize the image to the desired POT size, scaling the image in the process. A more common way is to create a large enough texture with a NULL data pointer and then use glTexSubImage2D with the original image size parameters to send the data to the texture. This procedure also requires you to change the texture coordinates: instead of 1.0 you will have imageWidth/textureWidth, and the same for height. The second procedure needs a bit more effort, but it is also the usual stepping stone toward a texture atlas (putting multiple images onto the same texture).
Then again, the fastest procedure is simply resizing your image in Photoshop or whatever to be 128x128, for instance. Just a note: the POT dimensions do not need to be equal; 128x64 should be valid as well.
