Images lose quality after saving as GIF - iOS

I'm developing an iOS app which allows users to take a sequence of photos; afterwards the photos are put into an animation and exported as both MP4 and GIF.
While the MP4 preserves the source quality, the GIF shows visible color banding.
Here is the visual comparison (GIF vs. MP4 screenshots were attached to the original post).
The code I use for exporting as GIF:
var dictFile = new NSMutableDictionary();
var gifDictionaryFile = new NSMutableDictionary();
gifDictionaryFile.Add(ImageIO.CGImageProperties.GIFLoopCount, NSNumber.FromFloat(0));
dictFile.Add(ImageIO.CGImageProperties.GIFDictionary, gifDictionaryFile);

var dictFrame = new NSMutableDictionary();
var gifDictionaryFrame = new NSMutableDictionary();
gifDictionaryFrame.Add(ImageIO.CGImageProperties.GIFDelayTime, NSNumber.FromFloat(0f));
dictFrame.Add(ImageIO.CGImageProperties.GIFDictionary, gifDictionaryFrame);

InvokeOnMainThread(() =>
{
    var imageDestination = CGImageDestination.Create(fileURL, MobileCoreServices.UTType.GIF, _images.Length);
    imageDestination.SetProperties(dictFile);

    for (int i = 0; i < this._images.Length; i++)
    {
        imageDestination.AddImage(this._images[i].CGImage, dictFrame);
    }

    imageDestination.Close();
});
The code I use for exporting as MP4:
var videoSettings = new NSMutableDictionary();
videoSettings.Add(AVVideo.CodecKey, AVVideo.CodecH264);
videoSettings.Add(AVVideo.WidthKey, NSNumber.FromNFloat(images[0].Size.Width));
videoSettings.Add(AVVideo.HeightKey, NSNumber.FromNFloat(images[0].Size.Height));

var videoWriter = new AVAssetWriter(fileURL, AVFileType.Mpeg4, out nsError);
var writerInput = new AVAssetWriterInput(AVMediaType.Video, new AVVideoSettingsCompressed(videoSettings));

var sourcePixelBufferAttributes = new NSMutableDictionary();
sourcePixelBufferAttributes.Add(CVPixelBuffer.PixelFormatTypeKey, NSNumber.FromInt32((int)CVPixelFormatType.CV32ARGB));
var pixelBufferAdaptor = new AVAssetWriterInputPixelBufferAdaptor(writerInput, sourcePixelBufferAttributes);

videoWriter.AddInput(writerInput);

if (videoWriter.StartWriting())
{
    videoWriter.StartSessionAtSourceTime(CMTime.Zero);

    for (int i = 0; i < images.Length; i++)
    {
        // Spin until the writer input can accept another frame.
        while (true)
        {
            if (writerInput.ReadyForMoreMediaData)
            {
                var frameTime = new CMTime(1, 10);
                var lastTime = new CMTime(1 * i, 10);
                var presentTime = CMTime.Add(lastTime, frameTime);
                var pixelBufferImage = PixelBufferFromCGImage(images[i].CGImage, pixelBufferAdaptor);
                Console.WriteLine(pixelBufferAdaptor.AppendPixelBufferWithPresentationTime(pixelBufferImage, presentTime));
                break;
            }
        }
    }
}

writerInput.MarkAsFinished();
await videoWriter.FinishWritingAsync();
I would appreciate your help!
Kind regards,
Andre

This is just a summarization of my comments...
I do not code on your platform, so I can only provide a generic answer (and insights from my own GIF encoder/decoder coding experience).
The GIF image format supports up to 8 bits per pixel, leading to a maximum of 256 colors per frame with naive encoding. Cheap encoders just truncate the input image to 256 or fewer colors, usually leading to ugly, banded results. To increase the color quality of a GIF there are 3 approaches I know of:
1. Multiple frames covering the screen, each with its own palette
You simply divide the image into overlays, each with its own palette. This is slow in terms of decoding (you need to process more frames per single image, which can cause sync errors with some viewers, and you need to process all frame-related chunks multiple times per single image). The encoding itself is fast, as you just separate the frames into multiple frames based either on colors or on region/position. Here is a (region/position based) example:
The sample image is taken from here: Wiki
GIF supports transparency, so the sub-frames can overlap... This approach physically increases the possible colors per image to N*256 (or N*255 for transparent frames), where N is the number of frames or palettes used per single image.
2. Dithering
Dithering is a technique that approximates the color of an area as closely as possible while using only the specified colors (from the palette). It is fast and easy to implement, but the result is somewhat noisy. For more info see some related answers of mine (a minimal sketch follows after the links):
Converting BMP image to set of instructions for a plotter?
c# image dithering routine that accepts an amount of dithering?
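To make the idea concrete, here is a minimal Floyd-Steinberg error-diffusion sketch in Python/NumPy (an illustration under my own assumptions, grayscale only; the level count is just an example):

import numpy as np

def floyd_steinberg(gray, levels=4):
    """Dither a grayscale uint8 image down to `levels` evenly spaced gray levels."""
    img = gray.astype(np.float64)  # work in float, on a copy
    step = 255.0 / (levels - 1)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = np.clip(np.round(old / step) * step, 0, 255)  # snap to nearest level
            img[y, x] = new
            err = old - new
            # Diffuse the quantization error to the unprocessed neighbors.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return np.clip(img, 0, 255).astype(np.uint8)

The same error-diffusion pattern generalizes to RGB: snap each pixel to its nearest palette entry and diffuse the per-channel error with the same 7/16, 3/16, 5/16, 1/16 weights.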
3. Better color quantization
Cheap encoders just truncate the colors to a predefined palette. Much better results are obtained by clustering the used colors based on a histogram. For example, see:
Effective gif/image color quantization?
The result is usually much better than dithering alone, but the encoding time is huge in comparison...
Approaches #1 and #3 can be used together to enhance quality even more...
If you do not have access to the encoding code or pipeline, you can still transform the image itself before encoding: do the quantization and palette computation yourself and feed the result directly to the GIF encoder. That should be possible if the GIF encoder you are using is at least a bit sophisticated...
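As an illustration of that last point, here is a minimal Pillow sketch (my assumption, not the poster's Xamarin pipeline; `frames` is a list of PIL images and the delay is arbitrary) that quantizes each frame to its own adaptive palette before the GIF is written:

from PIL import Image

def save_quantized_gif(frames, path, delay_ms=100):
    # Pillow's default quantizer for RGB input is median cut with
    # Floyd-Steinberg dithering -- i.e. approaches #2 and #3 combined.
    pal_frames = [f.convert("RGB").quantize(colors=256) for f in frames]
    pal_frames[0].save(
        path,
        save_all=True,
        append_images=pal_frames[1:],
        duration=delay_ms,
        loop=0,  # 0 = loop forever, matching GIFLoopCount = 0 in the question
    )

Because each frame gets its own adaptive palette instead of a fixed or truncated one, the banding visible in the original GIF should be greatly reduced.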

Related

Extract pixel values from image collection to make composite in Google Earth Engine

I am trying to make a cloud-free Landsat composite in Google Earth Engine in an area with a lot of clouds (Indonesian cloud forest). Previously, I accomplished this successfully by making a greenest-pixel composite, in which I used the pixel with the highest NDVI value to make sure I was using non-cloud pixels in my composite image.
// Filter Landsat 8 image collection by date and area
var collection = landsat
    .filterBounds(bounds)
    .filterDate('2016-08-01', '2016-10-31');

// Sort from least to most cloudy and get the first (least cloudy) image
var sorted = collection.sort('CLOUD_COVER');
var image = ee.Image(sorted.first());

// Function to compute NDVI
var addNDVI = function(image) {
  var ndvi = image.normalizedDifference(['B5', 'B4']).rename('NDVI');
  return image.addBands(ndvi);
};

// Add NDVI bands to the image collection
var withNDVI = landsat.map(addNDVI);

// Make a "greenest" pixel composite using NDVI
var greenest = withNDVI.qualityMosaic('NDVI');
Map.addLayer(greenest, {bands: ['B4', 'B3', 'B2'], max: 0.15}, 'greenest');
The code works fine, however, I am concerned that using the highest-NDVI pixels to make my composite is over-representing forested area. Therefore, I am looking for a method to extract the pixels with the highest NDVI (to get rid of the clouds), and then use all 7 other bands of that pixel in my composite (instead of using the NDVI band itself). My questions are: would this even get rid of forest over-representation, or would I still have the same problem? Second, if this method does seem like a legitimate way to get rid of clouds while making a composite that does not over-represent forest, how can I extract pixels with a high NDVI, and then use their other bands to make a composite?
It seems that however you build it, a greenest-pixel quality mosaic will almost always accentuate the forest in tropical regions (because forests are really green). I suggest you use the Landsat simple cloud score algorithm to find the pixels that are least likely to be cloudy, and then do your compositing based on that. Here is some code that gives you two options for making a composite: one masks cloudy pixels and takes the median, the other uses the qualityMosaic() function with the inverse cloud likelihood band.
var bounds = /* color: #d63000 */ ee.Geometry.Polygon(
        [[[94.93602603806119, -12.072520735360198],
          [141.8696197880612, -13.187431968041206],
          [142.3969635380612, 6.019400576838261],
          [94.67235416306119, 6.456250813337956]]]),
    landsat = ee.ImageCollection("LANDSAT/LC08/C01/T1_RT_TOA");

// Filter Landsat 8 image collection by date and area
var collection = landsat
    .filterBounds(bounds)
    .filterDate('2016-08-01', '2016-10-31');

// Function to add an inverse cloud score band
var addCloud = function(image) {
  var cloudImg = ee.Algorithms.Landsat.simpleCloudScore(image);
  var clouds = cloudImg.select('cloud');
  var inverseClouds = ee.Image(100).subtract(clouds).rename('inverse_cloud');
  return image.addBands(inverseClouds);
};

// Add cloud bands to the filtered image collection
var withCloudBand = collection.map(addCloud);

// Option 1: Median composite after masking cloudy pixels
var noCloudsMedian = withCloudBand.map(function(img) {
  return img.updateMask(img.select('inverse_cloud').gt(90));
}).median();
Map.addLayer(noCloudsMedian, {bands: ['B4', 'B3', 'B2'], max: 0.30}, 'Option 1');

// Option 2: Quality mosaic based on the least cloudy pixel
var noCloudQualityMosaic = withCloudBand.qualityMosaic('inverse_cloud');
Map.addLayer(noCloudQualityMosaic, {bands: ['B4', 'B3', 'B2'], max: 0.30}, 'Option 2');
Here is a link to the code to view the results: https://code.earthengine.google.com/7ea8e59b5c72340c6d784d850db856f4

Finding largest blob in image

I am having some issues extracting a blob from an image using EmguCV. Everything I see online uses the Contours object, but I guess that was removed from EmguCV 3.0? I get an exception every time I try to use it. I haven't found many recent/relevant SO topics that aren't out of date.
Basically, I have a picture of a leaf. The background might be white, green, black, etc. I want to essentially remove the background so that I can perform operations on the leaf without interference with the background. I'm just not sure where I'm going wrong here:
Image<Bgr, Byte> Original = Core.CurrentLeaf.GetImageBGR;
Image<Gray, Byte> imgBinary = Original.Convert<Gray, Byte>();
imgBinary = imgBinary.PyrDown().PyrUp(); // Smooth a little (the result must be assigned back)
imgBinary = imgBinary.ThresholdBinaryInv(new Gray(100), new Gray(255)); // Apply inverse threshold

// Now, copy pixels from the original image that are black in the mask to a new Mat. Then scan?
Image<Gray, Byte> imgMask;
imgMask = imgBinary.Copy(imgBinary);
CvInvoke.cvCopy(Original, imgMask, imgBinary);

VectorOfVectorOfPoint contoursDetected = new VectorOfVectorOfPoint();
CvInvoke.FindContours(imgBinary, contoursDetected, null, Emgu.CV.CvEnum.RetrType.List, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);

var contoursArray = new List<VectorOfPoint>();
int count = contoursDetected.Size;
for (int i = 0; i < count; i++)
{
    using (VectorOfPoint currContour = contoursDetected[i])
    {
        contoursArray.Add(currContour);
    }
}
With this, I get a black image with a tiny bit of white lines. I've racked my brain back and forth and haven't been able to come up with something. Any pointers would be much appreciated!
I think you need to find which contour has the largest area, using ContourArea on each of the contours.
After you find the largest contour you need to fill it (because the contour is just the outline of the blob, not all the pixels in it) using FillPoly, and create a mask that has the leaf pixels set to 1 and everything else set to 0.
In the end, use the mask to extract the leaf pixels from the original image.
I am not so proficient in C#, so I attach code in Python with OpenCV to give you some help.
The resulting image: (screenshot in the original answer)
Hope this will be helpful enough.
import cv2
import numpy as np

# Read image (OpenCV loads it in BGR channel order)
Irgb = cv2.imread('leaf.jpg')
B, G, R = cv2.split(Irgb)

# Do some denoising on the red channel (the red channel gave a better result
# than the gray image because it has more contrast)
Rfilter = cv2.bilateralFilter(R, 25, 25, 10)

# Threshold image (Otsu picks the threshold automatically)
ret, Ithres = cv2.threshold(Rfilter, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the largest contour and extract it (OpenCV 3.x returns three values here)
im, contours, hierarchy = cv2.findContours(Ithres, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
maxContour = 0
for contour in contours:
    contourSize = cv2.contourArea(contour)
    if contourSize > maxContour:
        maxContour = contourSize
        maxContourData = contour

# Create a mask from the largest contour
mask = np.zeros_like(Ithres)
cv2.fillPoly(mask, [maxContourData], 1)

# Use the mask to crop data from the original image
finalImage = np.zeros_like(Irgb)
finalImage[:, :, 0] = np.multiply(B, mask)
finalImage[:, :, 1] = np.multiply(G, mask)
finalImage[:, :, 2] = np.multiply(R, mask)
cv2.imshow('final', finalImage)
cv2.waitKey(0)
I recommend you look into Otsu thresholding. It gives you a threshold which you can use to divide the image into two classes (background and foreground). Using OpenCV's threshold method you can then create a mask if necessary.
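For instance, a minimal Python/OpenCV sketch (the file name is illustrative):

import cv2

gray = cv2.imread('leaf.jpg', cv2.IMREAD_GRAYSCALE)
# Passing THRESH_OTSU makes OpenCV compute the threshold itself;
# the chosen value is returned as otsu_thresh.
otsu_thresh, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)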

OpenCV sliding window

Is there any built-in library for sliding a window (custom size) over an image in OpenCV version 2.x?
I tried to write the algorithm myself, but I found it very painful and probably error-prone.
I need to slide over an image and create a histogram for the input of an SVM.
There is one for the HOG descriptor, which calculates HOG features, but I have my own feature set, so I just need an algorithm that lets me slide over an image.
You can define a Region of Interest (ROI) on a cv::Mat object, which gives you a new Mat object referring to the sub-window. This does not copy the underlying data; it merely creates a new header with the appropriate metadata.
cv::Mat::operator()
See also this other question:
OpenCV C++, getting Region Of Interest (ROI) using cv::Mat
Basic code can look like this. The code is described well enough, I hope.
This is a single-scale sliding window of 60x60 with a step of 30.
The result of this simple example is the ROI.
You can visit this basic tutorial: Tutorial Here.
// Parameters of your sliding window
int windows_n_rows = 60;
int windows_n_cols = 60;
// Step of each window
int StepSlide = 30;

for (int row = 0; row <= LoadedImage.rows - windows_n_rows; row += StepSlide)
{
    for (int col = 0; col <= LoadedImage.cols - windows_n_cols; col += StepSlide)
    {
        // Note: cv::Rect takes (x, y, width, height)
        Rect windows(col, row, windows_n_cols, windows_n_rows);
        Mat Roi = LoadedImage(windows);
    }
}
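If it helps, here is the same sliding-window idea in Python/OpenCV, extended with the per-window histogram the question mentions as SVM input (a sketch under my own assumptions; window size, step and bin count are just example values):

import cv2
import numpy as np

def window_histograms(img, win=60, step=30, bins=32):
    """Slide a win x win window over a grayscale image and return one
    normalized intensity histogram per window (e.g. as SVM features)."""
    feats = []
    for row in range(0, img.shape[0] - win + 1, step):
        for col in range(0, img.shape[1] - win + 1, step):
            roi = img[row:row + win, col:col + win]  # a view, no copy
            hist = cv2.calcHist([roi], [0], None, [bins], [0, 256])
            feats.append(cv2.normalize(hist, hist).flatten())
    return np.array(feats)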

Flicker removal using OpenCV?

I am a newbie to OpenCV. I have installed the OpenCV library on an Ubuntu system, compiled it, and am trying to look into some image/video processing apps in OpenCV to understand more.
I am interested to know whether the OpenCV library has any algorithm/class for flicker removal in captured videos. If yes, what document or code should I look deeper into?
If OpenCV does not have it, are there any standard implementations in some other video processing library/SDK/Matlab, etc., which provide algorithms for flicker removal from video sequences?
Any pointers would be useful.
Thank you.
-AD.
I don't know any standard way to deflicker a video.
But VirtualDub is a video processing software which has a filter for deflickering video. You can find its filter source and documents (probably including an algorithm description) here.
I wrote my own deflicker C++ function. Here it is. You can cut and paste this code as is; no headers are needed other than the usual OpenCV ones.
Mat prevdeflicker;

// deflicker - compares each pixel of the frame to the previously stored frame,
// and throttles small changes in pixels (flicker)
Mat deflicker(Mat Mat1, int strengthcutoff = 20)
{
    if (prevdeflicker.rows) { // check if we stored a previous frame; if not, there's nothing we can do
        int i, j;
        uchar* p;
        uchar* prevp;
        for (i = 0; i < Mat1.rows; ++i)
        {
            p = Mat1.ptr<uchar>(i);
            prevp = prevdeflicker.ptr<uchar>(i);
            for (j = 0; j < Mat1.cols; ++j) {
                Scalar previntensity = prevp[j];
                Scalar intensity = p[j];
                int strength = abs(intensity.val[0] - previntensity.val[0]);
                if (strength < strengthcutoff) { // the stimulus must exceed a certain strength, else we do not allow the change
                    // A value of 25 works well in medium+ light; anything higher creates too much blur around moving objects.
                    // In low light, however, this makes it worse, since low light seems to increase flicker contrast -
                    // some flickers go from 0 to 255 and back. :(
                    // I need a way to track large group movements vs. small pixels, and only filter out the small pixel stuff. Maybe blur first?
                    if (intensity.val[0] > previntensity.val[0]) { // use the previous frame's value, changed by +/-1 - slow enough to not be noticeable flicker
                        p[j] = previntensity.val[0] + 1;
                    } else {
                        p[j] = previntensity.val[0] - 1;
                    }
                }
            }
        } // end for
    }
    prevdeflicker = Mat1.clone(); // store the current frame as the old one
    return Mat1;
}
Call it as: frame = deflicker(frame). It needs a loop and a greyscale image, like so:
for (;;) {
    cap >> frame; // get a new frame from the camera
    cvtColor(frame, src_grey, CV_BGR2GRAY); // convert to greyscale (camera frames are BGR) - simplifies everything
    src_grey = deflicker(src_grey); // this is the function call
    imshow("grey video", src_grey);
    if (waitKey(30) >= 0) break;
}

Extracting Dominant / Most Used Colors from an Image

I would like to extract the most used colors inside an image, or at least the primary tones.
Could you recommend how I can start with this task, or point me to similar code? I have been looking for it but with no success.
You can get very good results using an Octree Color Quantization algorithm. Other quantization algorithms can be found on Wikipedia.
I agree with the comments - a programming solution would definitely need more information. But until then, assuming you'll obtain the RGB values of each pixel in your image, you should consider the HSV colorspace, where the hue can be said to represent the "tone" of each pixel. You can then use a histogram to identify the most used tones in your image.
Well, I assume you can access each pixel's RGB color. There are two ways you can do so, depending on what you want.
First, you may simply sum all pixels' R, G and B values, like this.
Pseudocode:
int Red   = 0;
int Green = 0;
int Blue  = 0;
foreach (Pixels as aPixel) {
    Red   += aPixel.getRed();
    Green += aPixel.getGreen();
    Blue  += aPixel.getBlue();
}
Then see which sum is largest.
This only tells you whether the picture is more red, green or blue overall.
Another way will also give you statistics for combined colors (like orange), by simply creating a histogram of each RGB combination.
Pseudocode:
Map ColorCounts = new();
foreach (Pixels as aPixel) {
    const aRGB = aPixel.getRGB();
    var aCount = ColorCounts.get(aRGB) ?? 0; // default to 0 for unseen colors
    aCount++;
    ColorCounts.put(aRGB, aCount);
}
Then see which color has the highest count.
You may also want to reduce the color resolution, as full RGB gives you up to 16.7 million distinct colors (256^3).
This can be done easily by snapping each RGB channel to a range. For example, let's say each channel has 8 steps instead of 256.
Pseudocode:
function Reduce(Color) {
    return (Color / 32) * 32; // 32 is 256/8, for 8 ranges (integer division)
}
function ReduceRGB(RGB) {
    return new RGB(Reduce(RGB.getRed()), Reduce(RGB.getGreen()), Reduce(RGB.getBlue()));
}

Map ColorCounts = new();
foreach (Pixels as aPixel) {
    const aRGB = ReduceRGB(aPixel.getRGB());
    var aCount = ColorCounts.get(aRGB) ?? 0; // default to 0 for unseen colors
    aCount++;
    ColorCounts.put(aRGB, aCount);
}
Then you can see which range has the highest count.
I hope these techniques make sense to you.
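Since the pseudocode above is language-agnostic, here is a runnable version of the reduced-histogram idea in Python with Pillow (the file name is hypothetical):

from collections import Counter
from PIL import Image

def dominant_colors(path, steps=8, top=5):
    """Count colors after reducing each RGB channel to `steps` levels."""
    img = Image.open(path).convert("RGB")
    bucket = 256 // steps  # e.g. 32 when steps == 8
    counts = Counter(
        (r // bucket * bucket, g // bucket * bucket, b // bucket * bucket)
        for r, g, b in img.getdata()
    )
    return counts.most_common(top)

# Usage (hypothetical file name):
# print(dominant_colors("photo.jpg"))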
