I'd rather create shaders in GLSL than HLSL.
Is there a way for it to work with monogame? Or do I have to use HLSL .fx files?
Short answer: not yet
Currently, you have to write your shaders in HLSL, regardless of whether you are using DirectX or OpenGL. If you target OpenGL, the .fx shader (written in HLSL) is converted to GLSL internally, either by the MonoGame content pipeline (via the MonoGame effect content processor) or by the 2MGFX tool.
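For example, a manual conversion with 2MGFX looks roughly like this (a sketch; the file names are made up, and the exact switches may differ between MonoGame versions, so check the tool's usage output):

2MGFX MyShader.fx MyShader.mgfx /Profile:OpenGL

The /Profile switch selects the target platform, so the same HLSL source can be built for either backend.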
So yes: if you prefer to write your shaders in GLSL, you will have to convert them to HLSL first, and then internally they will be converted back to GLSL anyway. Seems ridiculous, right? But that is the current situation.
They will be adding support for GLSL shaders in the future.
Source: MonoGame documentation
You could use a converter to translate your GLSL files to HLSL; a web search for "GLSL to HLSL converter" turns up several options.
I have a number of GLSL fragment shaders for which I can pretty much guarantee that they conform to #version 120. They use standard, non-ES-conformant values and do not have any ES-specific pragmas.
I really want to make a web previewer for them using WebGL. The previewer won't be used on mobile. Is this feasible? Is the feature set exposed to GLSL shaders in WebGL restricted compared to that GLSL version? Are there precision differences?
I've already tried playing with THREE.js, but that doesn't really cut it, since it mucks up my shader code before loading it onto the GPU (which I cannot allow).
In short: is WebGL's GLSL support sufficient for me to run those shaders? Because if it isn't, what I am after is not doable and I should just drop it.
No, WebGL shaders must be GLSL ES 1.00 (#version 100); anything else is disallowed.
If you're curious why: WebGL needs to run, as much as possible, everywhere. If you could choose any GLSL version, your web page would only run on systems with GPUs/drivers that handled that version.
The next version of WebGL will raise the version number: it will allow GLSL ES 3.00 (note the ES). As of May 2016 it is available behind a flag in Chrome and Firefox.
I'm thinking about releasing a bunch of GPGPU functions as a framework using OpenGL ES 2.0 for iOS devices.
When capturing an OpenGL ES frame in Xcode, I can see the source code of the shaders being used. Is there a way to prevent this? I've tried deleting and detaching the shaders with glDeleteShader and glDetachShader after linking the OpenGL ES program, but the code is still captured.
I'm not looking for a bulletproof option (which probably doesn't exist), just something that makes getting to the code a bit more difficult than just pressing a button.
Thank you.
The debugger captures the source passed in calls to glShaderSource; the actual shader source is never stored in VRAM after compilation, so I cannot think of any way to overcome this directly. Calling glShaderSource is required because OpenGL ES on iOS does not support precompiled shader binaries.
I would recommend obfuscating the original shader code, perhaps using compile-time macros, or even a script to scramble variable names, etc. (be careful with attributes and uniforms, as their names affect linkage to app code). A sketch of the idea follows below.
Here is a tool for obfuscation/minification of shader code. I believe it is built for WebGL, so it may not work perfectly: http://glslunit.appspot.com/compiler.html
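To make that concrete, here is a minimal sketch in C of shipping only a scrambled version of a shader. The shader and every name in it are invented for illustration; in a real build you would generate the scrambled source with a script and keep the readable original out of the app bundle:

#include <OpenGLES/ES2/gl.h>

/* Readable original (never shipped with the app):
 *   uniform sampler2D inputImage;
 *   varying vec2 texCoord;
 *   void main() { gl_FragColor = texture2D(inputImage, texCoord); }
 * Shipped, scrambled equivalent; this is all a frame capture can show: */
static const char *kFragSrc =
    "precision mediump float;"
    "uniform sampler2D u0;"
    "varying vec2 v0;"
    "void main(){gl_FragColor=texture2D(u0,v0);}";

static GLuint buildObfuscatedShader(void)
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kFragSrc, NULL);   /* the capture hooks this call */
    glCompileShader(fs);
    return fs;
}

Note that the app code must then look uniforms and attributes up by their scrambled names, e.g. glGetUniformLocation(program, "u0"), which is exactly the linkage caveat above.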
I am trying to learn how to write DirectX shaders in HLSL for a DirectX Windows Store app.
Visual Studio 2012 has a great tool to help design shaders, but since I am a novice at shaders, I cannot interpret the exported HLSL source, let alone alter it to fine-tune the shaders.
What I would actually like is to be able to re-use some shaders written for DirectX 9 (mostly meant for XNA). Also, the net has very useful tutorials teaching shaders for DirectX 9. I am comparing shaders for DirectX 9 and 11 but cannot see how to convert existing DirectX 9 shaders into ones I can use in DirectX 11.
I think I am missing some basic concepts needed to dive into the details here. Please let me know the differences between shaders on the two platforms.
A reference to a document or a good tutorial will greatly be appreciated.
The biggest difference I see is that DirectX 11 shaders have a main function like the one below, although the old shaders do not.
P2F main(V2P pixel)
Instead, they have something like this:
technique10 FireTechnique
{
    pass pass0
    {
        SetBlendState(AlphaBlendingOn, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
        SetVertexShader(CompileShader(vs_4_0, FireVertexShader()));
        SetPixelShader(CompileShader(ps_4_0, FirePixelShader()));
        SetGeometryShader(NULL);
    }
}
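In D3D11 the technique/pass block disappears: the application names an entry point and target profile itself when compiling, and sets blend and other render state through the API instead of inside the effect. A minimal sketch in C, reusing the FirePixelShader name from the snippet above (the file name Fire.hlsl is made up, and error handling is omitted):

#include <d3dcompiler.h>

/* Compile one entry point per stage; no technique block is involved. */
HRESULT compileFirePixelShader(ID3DBlob **blobOut)
{
    ID3DBlob *errors = NULL;
    HRESULT hr = D3DCompileFromFile(
        L"Fire.hlsl",       /* hypothetical file containing the shaders  */
        NULL, NULL,
        "FirePixelShader",  /* entry point, chosen by the application    */
        "ps_4_0",           /* target profile, formerly in CompileShader */
        0, 0, blobOut, &errors);
    if (errors)
        errors->lpVtbl->Release(errors);    /* free compile messages */
    return hr;
}

The resulting blob then goes to ID3D11Device::CreatePixelShader to build the shader object, and a call such as OMSetBlendState on the device context replaces the SetBlendState line that used to live in the pass.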
Try http://msdn.microsoft.com/en-us/library/windows/desktop/ff476190(v=vs.85).aspx
But really, maybe you should get a handle on the DX9 shaders first (FX Composer? I'm not sure how you sourced these DX9 shaders); it feels a bit like trying to translate from one language you don't know into another you don't know, without learning the first one.
Apple provides the texturetool utility to cook textures into the PowerVR compressed texture format. My toolchain runs on Windows, so I would like to create this texture data on a Windows PC. It looks like this will be simple, because Imagination provides a tool and SDK that run on Windows. So I've downloaded PVRTexTool and will use it in my existing in-house texture cooking tool. Has anyone tried this? I'm wondering if there are any known incompatibilities between it and the iOS OpenGL ES implementation.
I now have this working and did not hit any compatibility issues with iOS.
One thing that confused me at first is that the standard formats the tool processes are all ABGR. You can convert your original data (mine was ARGB) into a standard format using the DecompressPVR function (even though the original data is not compressed).
Other issues that came up along the way:
- Compressed textures have to be square; you can use the ProcessRawPVR function to resize non-square textures to square.
- The layout of the generated mipmaps in the resulting buffer is not obvious. You end up with one buffer containing all the mipmaps, but at runtime you need to upload each mip level separately using glCompressedTexImage2D or glTexImage2D, as sketched below.
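Here is a minimal sketch in C of that per-level upload, assuming 4-bpp PVRTC data with the mip levels packed back to back, largest level first. The function and parameter names are made up; the per-level size formula is the standard one for PVRTC 4 bpp:

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

#define PVR_MAX(a, b) ((a) > (b) ? (a) : (b))

/* Upload a packed 4-bpp PVRTC mip chain, one glCompressedTexImage2D call
 * per level. 'data' points at the largest level; levels follow in order. */
static void uploadPvrtc4bppMips(const GLubyte *data, GLsizei width,
                                GLsizei height, GLint levelCount)
{
    for (GLint level = 0; level < levelCount; ++level) {
        /* Minimum block footprint: a level never shrinks below 32 bytes. */
        GLsizei size = (PVR_MAX(width, 8) * PVR_MAX(height, 8) * 4 + 7) / 8;
        glCompressedTexImage2D(GL_TEXTURE_2D, level,
                               GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                               width, height, 0, size, data);
        data += size;                    /* next level starts right after */
        width  = PVR_MAX(width  >> 1, 1);
        height = PVR_MAX(height >> 1, 1);
    }
}

The same loop structure works for uncompressed levels with glTexImage2D; only the size computation goes away.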
I want to use some of the features of OpenGL 4 (specifically, tessellation shaders and newer shader language versions) from WebGL. Is this possible, in either a standards-compliant or a hackish way? Is there some magic value I could use instead of, say, gl.FRAGMENT_SHADER to tell the underlying GL implementation to compile tessellation shaders?
WebGL is based on the OpenGL ES 2.0 specification, so you can't use GL4 features unless the browser somehow exposes a GL4 interface to JavaScript, which I doubt any does. Even if a browser gave you such an interface, it would only work in that browser.