I want to use the WEBGL_draw_buffers extension. Besides enabling the extension with gl.getExtension('WEBGL_draw_buffers'); in JavaScript, one apparently also has to enable it in the shader with the directive #extension GL_EXT_draw_buffers : require. However, while my WebGL implementation seems to support the WEBGL_draw_buffers extension (also according to the WebGL Extension Viewer), the shader does not. I'm getting the following error:
'GL_EXT_draw_buffers' : extension is not supported
Is there a way to fix this? I have already updated my graphics card driver. I get the error both with a dedicated AMD graphics card (which also supports OpenGL 4.5) and with an integrated Intel graphics chip.
How can I obtain an EGLNativeWindowType object on the iOS platform, or achieve the equivalent of the following Android code?
To provide a bit more insight: I am currently porting a native Android app to iOS that shares a single core C library, while the iOS project itself is written in Objective-C. The project uses EGL, not EAGL.
The existing source code is standard C but uses Android's NDK; an EGLSurface object is created with EGLAPI EGLSurface EGLAPIENTRY eglCreateWindowSurface(EGLDisplay dpy, EGLConfig config, EGLNativeWindowType win, const EGLint *attrib_list):
EGLNativeWindowType win = AndroidMainGetAndroidActivity()->app->window;
EGLSurface eglSurface = eglCreateWindowSurface(e_eglDisplay, config, win, NULL);
I haven't found any documentation relating EGLNativeWindowType to iOS.
iOS uses EAGL as the interface between OpenGL ES and the underlying windowing system; EAGL plays the same role on iOS that EGL plays on Android for drawing via OpenGL ES. So you cannot use the EGL API on iOS.
The differences between the two, and how they work, are described very well in an article.
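One common way to deal with this in a shared C/C++ core (a sketch with hypothetical names, not code from the original project) is to hide surface and context creation behind a small platform interface so the core never references EGLNativeWindowType at all. The Android build keeps the existing EGL path; the iOS build implements the same functions in an Objective-C++ (.mm) file using EAGLContext and a CAEAGLLayer-backed renderbuffer:

// gl_platform.h -- the shared core only includes this; it never sees EGL or EAGL types.
struct GlPlatformContext;
GlPlatformContext* gl_platform_create(void* native_window); // ANativeWindow* on Android, CAEAGLLayer* on iOS
void gl_platform_make_current(GlPlatformContext* ctx);
void gl_platform_destroy(GlPlatformContext* ctx);

// gl_platform_android.cpp -- Android implementation, essentially the code from the question.
#include <EGL/egl.h>
struct GlPlatformContext { EGLDisplay display; EGLSurface surface; EGLContext context; };
GlPlatformContext* gl_platform_create(void* native_window) {
    // native_window is the ANativeWindow* (app->window) from the question.
    GlPlatformContext* ctx = new GlPlatformContext();
    ctx->display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(ctx->display, nullptr, nullptr);
    const EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    EGLConfig config; EGLint num_configs = 0;
    eglChooseConfig(ctx->display, cfg_attribs, &config, 1, &num_configs);
    ctx->surface = eglCreateWindowSurface(ctx->display, config,
                                          (EGLNativeWindowType)native_window, nullptr);
    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    ctx->context = eglCreateContext(ctx->display, config, EGL_NO_CONTEXT, ctx_attribs);
    return ctx;
}
// gl_platform_ios.mm would implement the same three functions with
// [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] and a renderbuffer
// attached to the CAEAGLLayer via -renderbufferStorage:fromDrawable:.

This keeps eglCreateWindowSurface confined to the Android translation unit, which is the seam you need, because EAGL has no equivalent call: on iOS the drawable is a renderbuffer bound to a CAEAGLLayer rather than a window surface.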
I am trying to follow the instructions on creating custom Direct2D effects, as described in Custom Effects and Effect Shader Linking. I am running into a number of problems:
The "d2d1effecthelpers.hlsli" file is not found in my Windows SDK. I could only find it here.
When I try to run the fxc compiler with the target model set to lib_4_0_level_9_1_ps_only, I get an error saying that this model is not supported.
When I use lib_4_0_level_9_1 instead, I get another compilation error, saying that static variables are not supported for libraries (and that static definition comes from the HLSLI file above, not from my code).
Ultimately, my goal is to create a pixel shader for Direct2D that will receive a number of textures as input. Maybe something is missing in my environment and that's why I can't follow the examples (I'm using Windows 8.1). Any suggestions are welcome.
In short:
Can anyone confirm whether it is possible to use the built-in variable gl_InstanceID (or gl_InstanceIDEXT) in a vertex shader using OpenGL ES 2.0 on iOS with GL_EXT_draw_instanced enabled?
Longer:
I want to draw multiple instances of an object using glDrawArraysInstanced and gl_InstanceID, and I want my application to run on multiple platforms, including iOS.
The specification clearly says that these features require ES 3.0. According to the iOS Device Compatibility Reference ES 3.0 is only available on a few devices (those based on the A7 GPU; so iPhone 5s, but not on iPhone 5 or earlier).
So my first assumption was that I needed to avoid using instanced drawing on older iOS devices.
However, further down in the compatibility reference document it says that the EXT_draw_instanced extension is supported for all SGX Series 5 processors (that includes iPhone 5 and 4s).
This makes me think that I could indeed use instanced drawing on older iOS devices too, by looking up and using the appropriate extension function (EXT or ARB) for glDrawArraysInstanced.
I'm currently just running some test code using SDL and GLEW on Windows so I haven't tested anything on iOS yet.
However, in my current setup I'm having trouble using the gl_InstanceID built-in variable in a vertex shader. I'm getting the following error message:
'gl_InstanceID' : variable is not available in current GLSL version
Enabling the "draw_instanced" extension in GLSL has no effect:
#extension GL_ARB_draw_instanced : enable
#extension GL_EXT_draw_instanced : enable
The error goes away when I specifically declare that I need ES 3.0 (GLSL 300 ES):
#version 300 es
Although that seems to work fine on my Windows desktop machine in an ES 2.0 context, I doubt it would work on an iPhone 5.
So, should I abandon the idea of being able to use instanced drawing on older iOS devices?
From here:
Instanced drawing is available in the core OpenGL ES 3.0 API and in OpenGL ES 2.0 through the EXT_draw_instanced and EXT_instanced_arrays extensions.
You can see that it's available on all of their GPUs: PowerVR SGX, Apple A7, and A8.
(Looks like #Shammi's not coming back... if they do, you can change the accepted answer :)
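To make the ES 2.0 path concrete, here is a minimal sketch (untested on a device; it assumes an iOS ES 2.0 context on hardware exposing GL_EXT_draw_instanced, and all buffer/attribute/shader-compilation setup is omitted). The parts that matter are the runtime extension check, the #extension directive inside the shader source, and the EXT-suffixed entry point and built-in:

// Sketch: instanced drawing on OpenGL ES 2.0 via GL_EXT_draw_instanced.
// Assumes the iOS SDK's <OpenGLES/ES2/glext.h> declares glDrawArraysInstancedEXT;
// adjust the headers/function loader for other platforms (e.g. GLEW on Windows).
#include <cstring>
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Vertex shader for the ES 2.0 + extension path (compile with glShaderSource as usual).
static const char* kInstancedVS =
    "#extension GL_EXT_draw_instanced : require\n"
    "attribute vec4 a_position;\n"
    "uniform vec4 u_offsets[16];\n"  // hypothetical per-instance data
    "void main() {\n"
    "    // On ES 2.0 the built-in is the EXT-suffixed gl_InstanceIDEXT;\n"
    "    // plain gl_InstanceID only exists from '#version 300 es' onwards.\n"
    "    gl_Position = a_position + u_offsets[gl_InstanceIDEXT];\n"
    "}\n";

static bool hasExtension(const char* name) {
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

void drawInstanced(GLsizei vertexCount, GLsizei instanceCount) {
    if (hasExtension("GL_EXT_draw_instanced")) {
        // EXT-suffixed entry point provided by the extension on ES 2.0.
        glDrawArraysInstancedEXT(GL_TRIANGLES, 0, vertexCount, instanceCount);
    } else {
        // Fallback for devices without the extension: one draw call per instance,
        // updating a per-instance uniform between calls.
        for (GLsizei i = 0; i < instanceCount; ++i) {
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        }
    }
}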
Here is the list of GL extensions I get when I run my WebGL project:
GL_WEBKIT_WEBGL_compressed_texture_s3tc
WEBKIT_EXT_texture_filter_anisotropic
OES_texture_float_linear
OES_texture_half_float_linear
GL_ANGLE_instanced_arrays
OES_vertex_array_object
WEBKIT_WEBGL_lose_context
WEBGL_debug_renderer_info
GL_WEBGL_lose_context
WEBGL_lose_context
GL_OES_texture_half_float
OES_standard_derivatives
GL_OES_texture_half_float_linear
OES_element_index_uint
OES_texture_float
GL_OES_texture_float_linear
GL_WEBGL_compressed_texture_s3tc
GL_OES_element_index_uint
GL_WEBGL_draw_buffers
ANGLE_instanced_arrays
EXT_texture_filter_anisotropic
GL_WEBKIT_EXT_texture_filter_anisotropic
GL_EXT_frag_depth
GL_OES_vertex_array_object
OES_texture_half_float
WEBGL_compressed_texture_s3tc
WEBGL_draw_buffers
GL_OES_standard_derivatives
WEBGL_depth_texture
EXT_frag_depth
GL_WEBGL_depth_texture
WEBKIT_WEBGL_compressed_texture_s3tc
GL_OES_texture_float
GL_WEBGL_debug_renderer_info
GL_EXT_texture_filter_anisotropic
GL_WEBKIT_WEBGL_depth_texture
GL_WEBKIT_WEBGL_lose_context
WEBKIT_WEBGL_depth_texture
And here is the list I get when I activate WebGL Inspector:
GL_WEBKIT_WEBGL_compressed_texture_s3tc
WEBKIT_EXT_texture_filter_anisotropic
GL_OES_texture_half_float
OES_standard_derivatives
OES_element_index_uint
OES_texture_float
GL_WEBGL_compressed_texture_s3tc
GL_OES_element_index_uint
EXT_texture_filter_anisotropic
GL_WEBKIT_EXT_texture_filter_anisotropic
OES_texture_half_float
WEBGL_compressed_texture_s3tc
GLI_frame_terminator
GL_GLI_frame_terminator
GL_OES_standard_derivatives
WEBKIT_WEBGL_compressed_texture_s3tc
GL_OES_texture_float
GL_EXT_texture_filter_anisotropic
Notice that a lot are missing! Why is the list different? Is this normal behavior?
This is annoying because my program actually makes use of one of these extensions, and somehow it's not available when running under WebGL Inspector. How do I fix this?
Thanks!
Are you using WebGL Inspector from the Chrome/Firefox extension store?
I remember having the same issue; it turned out that older versions of WebGL-Inspector had a whitelist blocking out all "unknown" extensions.
The WebGL-Inspector project has been abandoned in favor of the Google Web Tracing Framework:
http://google.github.io/tracing-framework/
That's why the Chrome/Firefox store plugins are outdated; skimming through the old code on GitHub, it seems like the bug has been fixed.
You may want to get the latest version from GitHub and load it as an unpacked extension:
https://github.com/benvanik/WebGL-Inspector
EDIT:
As was brought to my attention, WebGL-Inspector is not abandoned, but it still features the whitelist approach.
For an introduction to WebGL debugging using Google Web Tracing Framework see:
http://google.github.io/tracing-framework/analyzing-traces.html
Also note that there is experimental support for WebGL debugging in Chrome DevTools:
http://www.html5rocks.com/en/tutorials/canvas/inspection/
I am developing an image processing application on CentOS with OpenCV, using C/C++. My intention is to have a single code base for Linux and iOS (iPad).
So if I start development in a Linux environment with OpenCV installed (in C/C++), can I use the same code on iOS without going for Objective-C? I don't want to put in dual effort for iOS and Linux, so how can I achieve this?
It looks like it's possible. Compiling and running C/C++ on iOS is no problem, but you'll need some Objective-C for the UI. If you pay some attention to the layering/abstraction of your modules, you should be able to share most, if not all, of the core code between the platforms.
See my detailed answer to this question:
iOS:Retrieve rectangle shaped image from the background image
Basically you can keep most of your C++ code portable between platforms if you keep your user interface code separate. On iOS, all of the UI should be pure Objective-C, while your OpenCV image processing can be pure C++ (which would be exactly the same on Linux). On iOS you would make a thin Objective-C++ wrapper class that mediates between the Objective-C side and the C++ side. All it really does is translate image formats between them and send data in and out of C++ for processing.
I have a couple of simple examples on GitHub you might want to take a look at: OpenCVSquares and OpenCVStitch. These are based on C++ samples distributed with OpenCV; you should compare the C++ in those projects with the original samples to see how much altering was required (hint: not much).
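To illustrate the split (a sketch with made-up names, not code taken from those repositories): keep the OpenCV work in a plain C++ translation unit with no platform headers, and let the thin Objective-C++ wrapper on iOS, or your main() on Linux, convert to and from cv::Mat around it.

// image_core.cpp -- portable OpenCV processing; compiles unchanged on Linux and iOS.
// The function name and thresholds are illustrative, not from the linked projects.
#include <opencv2/imgproc/imgproc.hpp>

// Pure C++ entry point: takes a BGR image, returns an edge map.
// On iOS an Objective-C++ (.mm) wrapper converts UIImage <-> cv::Mat before and
// after this call; on Linux you call it directly.
cv::Mat detectEdges(const cv::Mat& inputBgr)
{
    cv::Mat gray, edges;
    cv::cvtColor(inputBgr, gray, cv::COLOR_BGR2GRAY);   // drop color information
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // smooth before edge detection
    cv::Canny(gray, edges, 50, 150);                    // example thresholds
    return edges;
}

That way the only iOS-specific code is the image-format conversion in the wrapper, which is exactly the mediation role described above.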