I'm using Stefan Gustavson's GLSL implementation of simplex noise and animating it with WebGL. It works just fine on desktop, but on my Android device, in both Chrome and Firefox, the animation always halts after a fixed period. There is no error or warning, and the scripts and the WebGL program keep running.
The same thing happens with the classic Perlin noise implementation.
You can see a demo here: https://wix.github.io/kampos
Code is here: https://github.com/wix/kampos/blob/master/index.html
The turbulence implementation is here: https://github.com/wix/kampos/blob/master/src/effects/turbulence.js
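To illustrate the kind of setup being described, here is a minimal sketch of an animated-noise fragment shader (this is not the actual kampos turbulence code; u_time and u_resolution are placeholder uniform names, and snoise stands for Gustavson's simplex noise function from the repositories linked above):

    // Minimal sketch, not the kampos turbulence shader.
    precision mediump float;

    uniform float u_time;        // seconds since start, updated from JavaScript each frame
    uniform vec2  u_resolution;  // canvas size in pixels

    float snoise(vec3 v);        // Gustavson's 3D simplex noise, whose definition
                                 // must be included elsewhere in this shader

    void main() {
        vec2 uv = gl_FragCoord.xy / u_resolution;
        float n = snoise(vec3(uv * 4.0, u_time * 0.25));  // roughly in [-1, 1]
        gl_FragColor = vec4(vec3(n * 0.5 + 0.5), 1.0);    // remap to [0, 1] grayscale
    }

On the JavaScript side, the u_time uniform would simply be updated from a requestAnimationFrame loop before each draw call.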
I am having strange issues with tracing bitmaps that I have imported into Flash Pro CS6.
I import large HD bitmaps into Flash, trace them to vectors, and then shrink the vectors down to about 1/15th of their original size. This lets me use bitmap images without the grainy, pixelated look.
I have done this for quite some time, but on my current project the traced vectors are causing Flash to lag badly, and the published iOS version is lagging horribly as well.
I'm not sure if I'm missing something. Please help.
This is likely because your vectors contain too many points.
You can use the smooth tool to go from this:
to that:
Or optimize curves to get a similar effect, like so:
Images under CC BY-NC-SA 3.0, click them for source.
I'm making an app that takes the camera stream and uses the Canny algorithm to display the edges.
On Android everything worked fine: I used OpenCV to get the edges, and it ran in real time. After that I moved on to developing for WP8 and found out that WP8 doesn't support OpenCV yet. Since my only problem was the Canny edge algorithm, I got an implementation from the internet and adapted the code to Silverlight, but it was a complete mess: it wasn't real-time at all, and displaying the result took about a second. I searched a bit for alternatives and found EmguCV (but nothing about the Canny edge algorithm) and some people who have tried to compile an OpenCV subset for Windows 8 ARM. I even tried the latter, but ended up failing. My questions now are:
Why on earth is it running so slowly?
If I manage to get an OpenCV library, will it be quicker?
Do you have any other alternatives or suggestions?
I think they're bringing out Windows Phone 8 support. With EmguCV you can access all OpenCV functions, and I'm sure Canny edge detection is one of them. Apparently OpenCvSharp is good too. As for speed, I have no idea.
I saw that someone has made an app that tracks your feet using the camera, so that you can kick a virtual football on your iPhone screen.
How could you do something like this? Does anyone know of any code examples or other information about using the iPhone camera for detecting objects and tracking them?
I just gave a talk at SecondConf where I demonstrated the use of the iPhone's camera to track a colored object using OpenGL ES 2.0 shaders. The post accompanying that talk, including my slides and sample code for all demos can be found here.
The sample application I wrote, whose code can be downloaded from here, is based on an example produced by Apple for demonstrating Core Image at WWDC 2007. That example is described in Chapter 27 of the GPU Gems 3 book.
The basic idea is that you can use custom GLSL shaders to process images from the iPhone camera in realtime, determining which pixels match a target color within a given threshold. Those pixels then have their normalized X,Y coordinates embedded in their red and green color components, while all other pixels are marked as black. The color of the whole frame is then averaged to obtain the centroid of the colored object, which you can track as it moves across the view of the camera.
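As a rough illustration of that thresholding pass (a sketch of the idea described above, not the actual sample code; the uniform and varying names here are made up):

    // Sketch: mark pixels matching a target color and encode their coordinates.
    precision mediump float;

    varying vec2 v_texCoord;            // normalized texture coordinate from the vertex shader

    uniform sampler2D u_inputTexture;   // current camera frame
    uniform vec3      u_targetColor;    // color to track (RGB)
    uniform float     u_threshold;      // how close a pixel must be to count as a match

    void main() {
        vec3 pixel = texture2D(u_inputTexture, v_texCoord).rgb;

        // 1.0 if the pixel is within the threshold distance of the target color, else 0.0
        float match = step(distance(pixel, u_targetColor), u_threshold);

        // Matching pixels carry their own normalized X,Y in red/green; all others are black.
        gl_FragColor = vec4(v_texCoord * match, 0.0, 1.0);
    }

Averaging this output over the whole frame and then dividing the averaged red/green values by the fraction of matching pixels yields the centroid described above.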
While this doesn't address the case of tracking a more complex object like a foot, it should be possible to write shaders along these lines that could pick out such a moving object.
As an update to the above, in the two years since I wrote this I've now developed an open source framework that encapsulates OpenGL ES 2.0 shader processing of images and video. One of the recent additions to that is a GPUImageMotionDetector class that processes a scene and detects any kind of motion within it. It will give you back the centroid and intensity of the overall motion it detects as part of a simple callback block. Using this framework to do this should be a lot easier than rolling your own solution.
Due to performance bottlenecks in Core Graphics, I'm trying to use OpenGL ES on iOS to draw a 2D scene, but OpenGL is rendering my images at an incredibly low resolution.
Here's part of the image I'm using (in Xcode):
And OpenGL's texture rendering (in simulator):
I'm using Apple's Texture2D to create a texture from a PNG, and then to draw it to the screen. I'm using an Ortho projection to look straight down on the scene as was recommended in Apress' Beginning iPhone Development book. The image happens to be the exact size of the screen and is drawn as such (I didn't take the full image in the screenshots above). I'm not using any transforms on the model, so drawing the image should cause no sub-pixel rendering.
I'm happy to post code examples, but I thought I'd start without them in case there's a simple explanation.
Why would the image lose so much quality with this method? Am I missing a step in my environment setup? I briefly read that textures perform better when their dimensions are powers of two -- does this matter on the iPhone? I also read about multisampling, but I wasn't sure if that was related.
Edit: Updated the screenshots to alleviate confusion.
I still don't fully understand why I was having quality issues with my code, but I managed to work around the problem. I was originally using the OpenGLES2DView class provided by the authors of the Apress book "Beginning iPhone 4 Development", and I suspect the problem lay somewhere within that code. I then came across this tutorial, which suggested I use GLKit instead. GLKit allowed me to do exactly what I wanted!
A few lessons learned:
Using GLKit and the sprite class from the tutorial linked above, my textures don't need to be powers of two. GLKit seems to handle this fine.
Multisampling was not the solution to the problem. With GLKit, using multisampling seemingly has no effect (to my eye) when rendering 2D graphics.
Thanks all who tried to help.
I've enabled 4x MSAA on my iPad OpenGL ES 2.0 app using the example on Apple's website. On the simulator this works well and the image is nice and smooth however on the device there are colored artifacts on the edges where it should be antialiased. This exists on the iPad/iPad2 and iPhone4 but not in the simulator. I've attached a picture below of what the artifact looks like. Anyone know what this could be?
It looks very much like your shader is at fault, but you didn't post the shader, so I can't be sure. When you turn on MSAA, it becomes possible for the fragment shader to be executed for samples that are inside the pixel area but outside the triangle area. Without MSAA, such a pixel would not have caused a fragment shader execution at all, but now that you have turned on MSAA, the fragment shader must be executed for that pixel if one of its samples is covered.
The link I posted explains the issue in greater depth. It also gives you ways to avoid this issue, but I don't know if OpenGL ES 2.0 provides access to centroid sampling. If it does not, then you will have to disable multisampled rendering for those things that cause artifacts with glDisable(GL_MULTISAMPLE). You can re-enable it when you need multisampling active.
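For illustration, centroid sampling looks roughly like this in desktop GLSL 1.20 (the variable names are placeholders; as noted above, core OpenGL ES 2.0 does not expose the centroid qualifier):

    #version 120
    // Fragment shader sketch: "centroid" makes v_texCoord get interpolated at a
    // location actually covered by the triangle rather than at the pixel center,
    // which on multisampled edge pixels can fall outside the triangle and yield
    // extrapolated, out-of-range values.

    centroid varying vec2 v_texCoord;  // must also be declared centroid varying in the vertex shader

    uniform sampler2D u_texture;

    void main() {
        gl_FragColor = texture2D(u_texture, v_texCoord);
    }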