How to set pure red pixel full screen on iPhone OLED - ios

I am studying the pixel arrangement of the iPhone OLED screen.
I use the code:
for (int i = 0; i < height; i++)
{
    for (int j = 0; j < width; j++)
    {
        data[(i*width + j)*4]     = (Byte)255;
        data[(i*width + j)*4 + 1] = (Byte)0;
        data[(i*width + j)*4 + 2] = (Byte)0;
        data[(i*width + j)*4 + 3] = (Byte)255;
    }
}
in the view controller. When I set the iPhone X to a full-screen red, the pixels seen under a microscope are not all red; green pixels are also visible.
What I want to achieve is that when the screen is set to pure red, only the red subpixels of the iPhone X light up, with no green or blue subpixels visible.
How can I solve this problem?

Try using a system color:
view.backgroundColor = UIColor.systemRed
iOS offers a range of system colors that automatically adapt to vibrancy and changes in accessibility settings like Increase Contrast and Reduce Transparency.
https://developer.apple.com/design/human-interface-guidelines/ios/visual-design/color/

Related

Extract area with no noise

What is the best way to extract the part with pattern from binary images like these? Size and position of pattern may vary a bit from image to image.
I've tried morphologyEx, but it loses too much info (and the pattern's position/size).
How do I detect images that are too noisy?
Thanks in advance.
UPD
Looks like it works now. I still don't know how to detect 'bad' noisy frames. Here is an example with an excellent frame.
(0) Original image (240x320).
(1) = (0) -> morph close 4x4, to find large black areas.
(2) = (0) -> morph open 2x2 -> morph close 2x2 -> set black where (1) is black.
(3) = (2) -> Gaussian blur 13x13.
(4) = (3) -> gray to binary, keeping only the white zones whose area exceeds some threshold.
When all the frames have been processed through steps (1)-(4):
for (var df in dFrames) {
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++) {
            if (df.frame[x][y] == 0xffffffff)
                screenImg[x][y] += (int)(pow(df.whiteArea, 1.2) / maxWhiteAreaInAllDfs * 100);
        }
}
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++) {
        if (screenImg[x][y] > dFrames.length * 0.5 * (100/2).floor())
            screenImg[x][y] = 0xffffffff;
    }
Finally, crop all the frames using (4), warp the perspective, find the pattern width/height, and compute the average colour in every 'square'.
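For intuition, the 'morph close' used in the steps above can be sketched without OpenCV. Here is a plain C++ version with a 3x3 kernel that treats pixels outside the image as black; OpenCV's morphologyEx handles borders differently, so this is only an illustration of the idea:

```cpp
#include <vector>

using Grid = std::vector<std::vector<int>>; // 1 = white, 0 = black

// 3x3 dilation (dil = true) or erosion (dil = false),
// with pixels outside the image treated as black.
Grid morph(const Grid& g, bool dil) {
    int h = (int)g.size(), w = (int)g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int hit = dil ? 0 : 1;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    int v = (ny >= 0 && ny < h && nx >= 0 && nx < w) ? g[ny][nx] : 0;
                    if (dil) hit |= v; else hit &= v;
                }
            out[y][x] = hit;
        }
    return out;
}

// Closing = dilation followed by erosion: removes small black specks.
Grid close3x3(const Grid& g) { return morph(morph(g, true), false); }
```

Closing first dilates (the speck's white neighbours swallow it) and then erodes to restore the outline; opening is the reverse order and removes small white specks instead.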

Improving the grey scale conversion result

Here is the colour menu:
Here is the same menu with some of the menu items disabled, and the bitmaps set as greyscale:
The code that converts to grey scale:
auto col = GetRValue(pixel) * 0.299 +
           GetGValue(pixel) * 0.587 +
           GetBValue(pixel) * 0.114;
pixel = RGB(col, col, col);
I am colourblind, but it seems that some of them don't look much different. I assume that relates to the original colours in the first place?
It would just be nice if it were more obvious that they are disabled, like it is with the text.
Can we do that?
For people who are not colour blind it's pretty obvious.
Just apply the same intensity reduction to the images that you do to the text.
I did not check your values. Let's assume the text is white (100% intensity).
And the grayed out text is 50% intensity.
Then the maximum intensity of the bitmap should be 50% as well.
for each gray pixel:
    pixel_value = pixel_value / max_pixel_value * gray_text_value
This way you further decrease the contrast of each bitmap and avoid having any pixel brighter than the text.
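As a sketch of that rescale in C++ (the function name is illustrative, and 128 stands in for a 50% text intensity):

```cpp
#include <algorithm>
#include <vector>

// Rescale gray pixel values so the brightest pixel in the bitmap
// matches the intensity used for the grayed-out text.
void cap_intensity(std::vector<unsigned char>& pixels,
                   unsigned char gray_text_value) {
    if (pixels.empty())
        return;
    unsigned char max_val = *std::max_element(pixels.begin(), pixels.end());
    if (max_val == 0)
        return; // an all-black bitmap has nothing to scale
    for (auto& p : pixels)
        p = static_cast<unsigned char>(p * gray_text_value / max_val);
}
```

After the call, the brightest bitmap pixel equals gray_text_value and everything else scales down proportionally.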
This is not directly related to your question, but since you are changing colours anyway, you can also fix the corner pixels that stand out (by corner pixels I don't mean the pixels at the edges of the bitmap rectangle, but the corners of the human-recognizable image).
For example, in the image below there is a red pixel at the corner of the page. We want to find that red pixel and blend it with the background colour so that it doesn't stand out.
To find the corner pixels, check the pixels to the left and top; if both are the background colour, then you have a corner pixel. Repeat the same for top-right, bottom-left, and bottom-right. Blend the corner pixels with the background.
Instead of changing to grayscale you can change the alpha transparency as suggested by zett42.
void change(HBITMAP hbmp, bool enabled)
{
    if(!hbmp)
        return;

    HDC memdc = CreateCompatibleDC(nullptr);
    BITMAP bm;
    GetObject(hbmp, sizeof(bm), &bm);
    int w = bm.bmWidth;
    int h = bm.bmHeight;
    BITMAPINFO bi = { sizeof(BITMAPINFOHEADER), w, h, 1, 32, BI_RGB };
    std::vector<uint32_t> pixels(w * h);
    GetDIBits(memdc, hbmp, 0, h, &pixels[0], &bi, DIB_RGB_COLORS);

    //assume that the color at (0,0) is the background color
    uint32_t old_color = pixels[0];
    //this is the new background color
    uint32_t bk = GetSysColor(COLOR_MENU);
    //swap RGB with BGR
    uint32_t new_color = RGB(GetBValue(bk), GetGValue(bk), GetRValue(bk));

    //define lambda functions to read channels from a BGR value
    auto bgr_r = [](uint32_t color) { return GetBValue(color); };
    auto bgr_g = [](uint32_t color) { return GetGValue(color); };
    auto bgr_b = [](uint32_t color) { return GetRValue(color); };
    BYTE new_red = bgr_r(new_color);
    BYTE new_grn = bgr_g(new_color);
    BYTE new_blu = bgr_b(new_color);

    //change background and modify disabled bitmap
    for(auto &p : pixels)
    {
        if(p == old_color)
        {
            p = new_color;
        }
        else if(!enabled)
        {
            //blend color with background, similar to 50% alpha
            BYTE red = (bgr_r(p) + new_red) / 2;
            BYTE grn = (bgr_g(p) + new_grn) / 2;
            BYTE blu = (bgr_b(p) + new_blu) / 2;
            p = RGB(blu, grn, red); //<= BGR/RGB swap
        }
    }

    //fix corner edges
    for(int row = h - 2; row >= 1; row--)
    {
        for(int col = 1; col < w - 1; col++)
        {
            int i = row * w + col;
            if(pixels[i] != new_color)
            {
                //check the color of neighboring pixels:
                //if a pixel has the background color,
                //then that pixel is the background
                bool l = pixels[i - 1] == new_color; //left pixel is background
                bool r = pixels[i + 1] == new_color; //right ...
                bool t = pixels[i - w] == new_color; //top ...
                bool b = pixels[i + w] == new_color; //bottom ...
                //we are on a corner pixel if:
                //both left-pixel and top-pixel are background or
                //both left-pixel and bottom-pixel are background or
                //both right-pixel and top-pixel are background or
                //both right-pixel and bottom-pixel are background
                if(l && t || l && b || r && t || r && b)
                {
                    //blend corner pixel with background
                    BYTE red = (bgr_r(pixels[i]) + new_red) / 2;
                    BYTE grn = (bgr_g(pixels[i]) + new_grn) / 2;
                    BYTE blu = (bgr_b(pixels[i]) + new_blu) / 2;
                    pixels[i] = RGB(blu, grn, red); //<= BGR/RGB swap
                }
            }
        }
    }

    SetDIBits(memdc, hbmp, 0, h, &pixels[0], &bi, DIB_RGB_COLORS);
    DeleteDC(memdc);
}
Usage:
CBitmap bmp1, bmp2;
bmp1.LoadBitmap(IDB_BITMAP1);
bmp2.LoadBitmap(IDB_BITMAP2);
change(bmp1, enabled);
change(bmp2, disabled);

Fastest way to perform a screenshot

I would like to take screenshots (screen captures) as fast as possible.
Googling this question brings many answers, but my concern is more specific:
I am not interested in the image itself. I would like to grab, in near real time, the screen brightness: not the hardware brightness, but that of the displayed image, given that, for example, Firefox showing the white Google page gives a brighter image than a dark xterm (when both are maximized).
To make myself as clear as possible, here is one way I already managed to implement this with X11 and the CImg library.
Here is the header:
#include <CImg.h>
using namespace cimg_library;
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/Xos.h>
and the core part, which extracts an X11 image and loops over every pixel:
Display *display = XOpenDisplay(NULL);
Window root = DefaultRootWindow(display);
Screen* screen = DefaultScreenOfDisplay(display);
const int W = WidthOfScreen(screen);
const int H = HeightOfScreen(screen);
XImage *image = XGetImage(display, root, 0, 0, W, H, AllPlanes, ZPixmap);

unsigned long red_count(0), green_count(0), blue_count(0), count(0);
const unsigned long red_mask = image->red_mask;
const unsigned long green_mask = image->green_mask;
const unsigned long blue_mask = image->blue_mask;
const int pixel_stride = 1; // sampling step: raise it to trade accuracy for speed

CImg<unsigned char> screenshot(W, H, 1, 3, 0);
for (int x = 0; x < W; x += pixel_stride)
    for (int y = 0; y < H; y += pixel_stride)
    {
        unsigned long pixel = XGetPixel(image, x, y);
        // the shifts assume the common 0x00RRGGBB pixel layout
        screenshot(x, y, 0) = (pixel & red_mask) >> 16;
        screenshot(x, y, 1) = (pixel & green_mask) >> 8;
        screenshot(x, y, 2) = pixel & blue_mask;
        red_count += (int) screenshot(x, y, 0);
        green_count += (int) screenshot(x, y, 1);
        blue_count += (int) screenshot(x, y, 2);
        count++;
    }
As I said, I do not keep the image itself; I just compute an average luminance value from the respective sums of the red, green and blue pixel values.
XFree(image);
const double luminance_relative = (red_luminance * double(red_count) +
                                   green_luminance * double(green_count) +
                                   blue_luminance * double(blue_count))
                                  / (double(255) * double(count));
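For reference, here is a self-contained sketch of this weighted average, assuming the Rec. 709 coefficients for the red_luminance/green_luminance/blue_luminance weights (the actual values used above are not shown):

```cpp
#include <cmath>

// Relative luminance in [0, 1] from per-channel sums over `count`
// sampled pixels (each channel in 0..255), assuming Rec. 709 weights.
double relative_luminance(double red_sum, double green_sum,
                          double blue_sum, double count) {
    const double kR = 0.2126, kG = 0.7152, kB = 0.0722; // Rec. 709
    return (kR * red_sum + kG * green_sum + kB * blue_sum) / (255.0 * count);
}
```

With these weights a pure-white frame comes out at 1.0 and a pure-black frame at 0.0, which makes the value easy to map onto a backlight range.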
The underlying idea is to adjust the hardware screen brightness depending on the image luminance. In short, the whiter the screenshot, the more the brightness can be reduced, and conversely.
I want to do that because I have sensitive eyes; it usually hurts when I switch from xterm to Firefox.
For that, the hardware brightness must be adjusted in a very short time, so the screenshot, that is, the loop over the pixels, must be as fast as possible.
I began implementing it with X11 calls, but I wonder whether there are faster access methods. Which brings me to the question: what is the fastest way/library to take a screenshot?
Thanks in advance for your help.
Regards

How to automatically detect and fill a closed region using opencv?

All I have is the following bitmap:
What I want to do is fill the contour automatically, like the following:
It's kind of like the fill function in MS Paint. The initial contours will not cross the boundary of the image.
I don't have a good idea yet. Is there any method in OpenCV that can do this? Or any suggestions?
Thanks in advance!
Contours hierarchy can probably help you achieve this.
You need to:
Find all contours.
Check the hierarchy of each contour.
Based on the hierarchy, draw each contour to a new Mat with thickness either filled or 1.
If you know that regions have to be closed you can just scan horizontally and keep an edge count:
// Assume image is a CV_8UC1 with only black and white pixels.
uchar white(255);
uchar black(0);
cv::Mat output = image.clone();
for(int y = 0; y < image.rows; ++y)
{
    uchar* irow = image.ptr<uchar>(y);
    uchar* orow = output.ptr<uchar>(y);
    uchar previous = black;
    bool inside = false;
    for(int x = 0; x < image.cols; ++x)
    {
        // toggle the inside flag at every black-to-white transition,
        // i.e. each time the scanline crosses an edge of the contour
        if(previous == black && irow[x] == white)
            inside = !inside;
        // a pixel is white in the output if it is on an edge
        // or lies between two edges
        orow[x] = (inside || irow[x] == white) ? white : black;
        // update previous pixel
        previous = irow[x];
    }
}
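The row-by-row parity idea can be exercised on a plain vector, with no OpenCV needed. A minimal sketch (1 = white, 0 = black), which assumes each row crosses an even number of contour edges, as it does for a properly closed region:

```cpp
#include <cstddef>
#include <vector>

// Even-odd fill on a single row: toggle "inside" at every
// black-to-white transition; a pixel is white in the output
// if it is an edge pixel or lies between edges.
std::vector<int> fill_row(const std::vector<int>& row) {
    std::vector<int> out(row.size(), 0);
    int previous = 0;
    bool inside = false;
    for (std::size_t x = 0; x < row.size(); ++x) {
        if (previous == 0 && row[x] == 1)
            inside = !inside; // crossed the left edge of a white run
        out[x] = (inside || row[x] == 1) ? 1 : 0;
        previous = row[x];
    }
    return out;
}
```

Note that a row tangent to the contour (an odd number of edges) breaks the parity assumption; that is the usual caveat with scanline fills.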

Displaying histogram plot openCV

I have calculated the histogram for an image. I want to display it as an image so that I can actually see it. I think my problem is to do with scaling, although I am also slightly confused by the coordinate system starting with (0,0) in the top left.
int rows = channel.rows;
int cols = channel.cols;
int hist[256] = {0};
for(int i = 0; i < rows; i++)
{
    for(int k = 0; k < cols; k++)
    {
        int value = channel.at<cv::Vec3b>(i,k)[0];
        hist[value] = hist[value] + 1;
    }
}
Mat histPlot = cvCreateMat(256, 500, CV_8UC1);
for(int i = 0; i < 256; i++)
{
    int mag = hist[i];
    line(histPlot, Point(i,0), Point(i,mag), Scalar(255,0,0));
}
namedWindow("Hist", 1);
imshow("Hist", histPlot);
This is my calculation for creating the histogram and displaying the result. If I divide mag by 100 in the second loop then I get some resemblance of a plot appearing (although upside down). I call this method whenever I adjust a value of my image, so the histogram should also change shape, which it doesn't appear to do. Any help with scaling the histogram and displaying it properly is appreciated.
Please don't use cvCreateMat (i.e. the old C API). You also seem to have rows and cols swapped and, if you want a colour drawing, you need a colour image as well, so make that:
Mat histPlot( 500, 256, CV_8UC3 );
The image origin is the top left (0,0), so you have to draw y in reverse:
line(histPlot, Point(i, histPlot.rows-1), Point(i, histPlot.rows-1 - mag/100), Scalar(255,0,0));
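Instead of the fixed divisor mag/100, you can scale each bin by the current histogram maximum, so the plot rescales itself whenever the image (and thus the histogram) changes. A minimal sketch of that mapping, independent of OpenCV (the names are illustrative):

```cpp
// Map a histogram bin count to a bar height in pixels, scaling by the
// current maximum so the tallest bar always spans the whole plot.
int bar_height(int count, int max_count, int plot_height) {
    if (max_count <= 0)
        return 0;
    return (int)((long long)count * (plot_height - 1) / max_count);
}
```

The line endpoint then becomes Point(i, histPlot.rows - 1 - bar_height(hist[i], max_count, histPlot.rows)), where max_count is the largest bin value.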
