Whole map layer transparent in MapServer 6 - transparency

I have a mapfile with several layers, one of which has OFFSITE set to 255 255 255. Unfortunately this seems to cause transparency in all layers when the mapfile is called as a WMS layer: the background map (an independent WMS call) shows through. Strangely, it doesn't show through the white sections; it seems to be related to the anti-aliasing instead. When I switch IMAGEMODE to RGB rather than RGBA the problem goes away, but the image quality is terrible.
When I comment out the OFFSITE, the transparency across all layers is removed.
If I use MapServer 5.x instead of 6, the problem doesn't occur either.
But neither of these solutions is an option for me.
Strangely, the problem doesn't occur in ArcGIS 10, but it does in QGIS, OpenLayers, and MapModeller (CadCorp). I can't see an obvious difference in the WMS calls from these different clients.
I am a bit unclear about all the other transparency settings available in MapServer, but I have changed some of them (wms_bgcolor and wms_transparent in the metadata; TRANSPARENT ON/OFF in OUTPUTFORMAT) and none has made a difference.
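For reference, a minimal sketch of the pieces of the mapfile in question (directive names follow standard mapfile syntax; the values and the layer name are illustrative, not my actual configuration):

```
OUTPUTFORMAT
  NAME "png"
  DRIVER AGG/PNG
  IMAGEMODE RGBA       # switching this to RGB hides the problem but ruins quality
  TRANSPARENT ON       # toggling this made no difference
END

LAYER
  NAME "raster_layer"
  TYPE RASTER
  OFFSITE 255 255 255  # commenting this out also removes the unwanted transparency
END
```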
I hope somebody can help shed some light...
Thanks in advance,
Fiona

Unity 5 iOS font distortion issue

SOLVED:
Before handling the video RGBA data and pushing it to the texture, I was setting the unpack alignment to 4:
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
Simply discarding this call, or setting the alignment back to 1 after handling the video frame memory, fixes the issue:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
- END UPDATE -
*Update: This only happens when I am uploading a texture to a mesh, which is done with OpenGL ES 2.0. The mesh is in 3D space, and it does not overlap the 2D UI text even after the 2D and 3D are composed together. Merely disabling the plane mesh entirely fixes this. Any indication as to why, or how to fix this, would be greatly appreciated.
Original Post:
I would appreciate any insight as to why the font looks so odd in the image below and how to fix it. This does not happen in editor, only on device.
I have tried every suggestion that I have seen out there: the script from http://answers.unity3d.com/questions/900663/why-are-my-unity-ui-fonts-rendering-incorrectly.html, for instance, as well as rebuilding the font with a much smaller subset of characters, since some suggest the font atlas is getting full and dropping glyphs to make room for dynamic character changes.
Here is an image:
This DOES NOT happen with text that is already entered, only after a field is updated, such as on a score change. I have tried both 32- and 64-bit builds, and it happens on new and old iPads. I have also tried multiple fonts, including Arial.

Painting issues with TScaledLayout & custom styles

I'm experiencing painting issues when combining TScaledLayout and custom styles created with the Bitmap Style Designer in FMX.
To demonstrate, I loaded the default custom style created by choosing "New style for VCL / FMX" -> "save as .style" in the Bitmap Style Designer. I dropped several standard controls on some colored rectangles: the red & green ones on a TScaledLayout, the blue one directly on the form. As I stretch the form, colored lines appear on the controls on the TScaledLayout; the background is partially visible:
If I size the form to exactly match the design-time dimensions, the lines disappear. That seems like a pretty significant issue; I certainly can't use those two together like that. Does anybody have an idea for a possible fix or workaround?
Looks like this is a known issue with scaling and bitmaps. See the Google+ discussion here - https://plus.google.com/+PaulThornton/posts/ACAHkJD3a84. I'll quote Marco Cantu's thoughts:
I've found an internally reported issue of a similar case, but haven't
found one that matches this scenario. Certainly worth adding to quality
portal. Having said this, I fear that bitmap-based operations and
scaling don't really fit together very well, and it might be difficult
to have an all encompassing solution.
Let me explain with an example. Take a button. This is painted by FMX
with 9 sections (borders, corners, central part) so that regardless of
the size, the bitmap elements are stretched in one direction at most,
and often just drawn. Stretching a single bitmap for the button to the
target size would break anti-aliasing and create a blurred image when
using colors.
This is exactly what happens with a ScaledLayout, given it takes the
complete final image and transforms it. ScaledLayout was originally
introduced with vector styles, and worked very well in that scenario.
With today's bitmap styles things get a bit more complex.
Regardless of this explanation of where the issue lies, I'd recommend
reporting it on QC, and I'll make sure it doesn't get closed as design
(as could naturally happen; this is how the system works) but that we
do some investigation to address the issue -- turning this into a
feature request.

Is it possible for an iOS app to take an image and then analyze the colors present in said image?

For example after taking the image, the app would tell you the relative amount of red, blue, green, and yellow present in the picture and how intense each color is.
That's super specific I know, but I would really like to know if it's possible and if anyone has any idea how to go about that.
Thanks!
Sure, it's possible. You'd have to load the image into a UIImage, then get the underlying CGImage, and get a pointer to the pixel data. If you average the RGB values of all the pixels you're likely to get a pretty muddy result, though, unless you're sampling an image with large areas of strong primary colors.
Erica Sadun's excellent iOS Developer Cookbook series has a section on sampling pixel image data that shows how it's done. In recent versions there is a "core" and an "extended" volume. I think it's in the Core iOS volume. My copy of Mac iBooks is crashing repeatedly right now, so I can't find it for you. Sorry about that.
EDIT:
I got it to open on my iPad finally. It is in the Core volume, in recipe 1-6, "Testing Touches Against Bitmap Alpha Levels." As the title implies, that recipe looks at an image's alpha levels to figure out if you've tapped on an opaque image pixel or missed the image by tapping on a transparent pixel. You'll need to adapt that code to come up with the average color for an image, but Erica's code shows the hard part - getting and interpreting the bytes of image data. That book is all in Objective-C. Post a comment if you have trouble figuring it out.

Scaling issues with LUMINANCE_ALPHA

I'm currently extending my OpenGL UI system. For this I rewrote the font part and hit an issue that appears when using mipmapping. Since images say more than a thousand words:
As you can see, the font's transparency is fading out (the text should be displayed 8 times!). This happens only when using LUMINANCE_ALPHA textures. The code which loads the textures is basically the same, but they differ in the formats used; this is what LUMINANCE_ALPHA uses:
TexImageInternalFormat.LUMINANCE_ALPHA, TexImageFormat.LUMINANCE_ALPHA, TexImagePixelType.UNSIGNED_BYTE
Linear filtering is enabled and the wrap mode is set to GL_CLAMP_TO_EDGE. To me it seems like a mipmapping issue, but I have tried a lot of different settings and nothing works; as I already said, RGBA textures work without any issues. The application also runs on iOS, so using a LUMINANCE_ALPHA texture saves a lot of RAM compared to RGBA.
What could cause this and how can i solve it?
As it turned out, the ImageFormat settings were wrong:
LA8 = new ImageFormat("LA8", TexImageInternalFormat.LUMINANCE_ALPHA, TexImageFormat.LUMINANCE_ALPHA, TexImagePixelType.UNSIGNED_BYTE, 4);
The last number indicates the number of bytes per pixel for this format and should be 2 in the case of LUMINANCE_ALPHA. The PVR reader didn't complain about the missing image data, and no exception was thrown. Changing the 4 to 2 solves the problem.

Replace particular color of image in iOS

I want to replace a particular color of an image with another, user-selected color. While replacing the color, I want to maintain the gradient effect of the original color; for example, see the attached images.
I have tried to do this with Core Graphics, and I succeeded in replacing the color, but the replacement color does not maintain the gradient effect of the original color in the image.
Can someone help me with this? Is Core Graphics the right way to do it?
Thanks in advance.
After struggling with almost the same problem (but with NSImage), I made a category for replacing colors in NSImage which uses the CIColorCube CIFilter:
https://github.com/braginets/NSImage-replace-color
inspired by this code for UIImage (which also uses CIColorCube):
https://github.com/vhbit/ColorCubeSample
I do a lot of color transfer/blend/replacement/swapping between images in my projects and have found the following publications very useful, both by Erik Reinhard:
Color Transfer Between Images
Real-Time Color Blending of Rendered and Captured Video
Unfortunately I can't post any source code (or images) right now because the results are being submitted to an upcoming conference, but I have implemented variations of the above algorithms with very pleasing results. I'm sure with some tweaks (and a bit of patience) you might be able to get what you're after!
EDIT:
Furthermore, the real challenge will lie in separating the different picture elements (e.g. isolating the wall). This is not unlike Photoshop's magic wand tool which obviously requires a lot of processing power and complex algorithms (and is still not perfect).
