Safari on iOS 8 supporting WebGL is surely good news, but after running my demo I find that there is no antialiasing. Any solutions?
Roll your own? There are lots of ways to anti-alias. Here's one. First, check that you're not already anti-aliased:
var gl = someCanvas.getContext("webgl");
var contextAttribs = gl.getContextAttributes();
if (!contextAttribs.antialias) {
// do your own anti-aliasing
}
The simplest way would be to make your canvas's backing store larger than the size it's displayed at. Assuming you have a canvas that you want displayed at a certain size:
canvas.width = desiredWidth * 2;
canvas.height = desiredHeight * 2;
canvas.style.width = desiredWidth + "px";
canvas.style.height = desiredHeight + "px";
Now your canvas will most likely be bilinearly filtered when drawn. In iOS's case, since those devices are all HD-DPI, you'll probably need to go 4x:
canvas.width = desiredWidth * 4;
canvas.height = desiredHeight * 4;
canvas.style.width = desiredWidth + "px";
canvas.style.height = desiredHeight + "px";
You can find out by looking at window.devicePixelRatio. In fact, if you want 1:1 pixels you'd do this:
var devicePixelRatio = window.devicePixelRatio || 1;
var overdraw = 1; // or 2
var scale = devicePixelRatio * overdraw;
canvas.width = desiredWidth * scale;
canvas.height = desiredHeight * scale;
canvas.style.width = desiredWidth + "px";
canvas.style.height = desiredHeight + "px";
Otherwise, another way is to render to a texture attached to a framebuffer, then render that texture into the canvas using a shader that does the antialiasing.
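Here is a minimal sketch of that framebuffer approach, using plain supersampling: render the scene into a texture that is 2x the canvas size, then blit it to the canvas and let linear filtering do the downsample. drawScene(gl) and drawFullscreenQuad(gl) are placeholders for your own scene and for a simple pass that draws a fullscreen quad sampling the bound texture; a smarter shader (an FXAA-style pass, for example) could be dropped into that last step instead.
var canvas = document.querySelector("canvas"); // assumption: your WebGL canvas
var gl = canvas.getContext("webgl");

var ssScale = 2; // supersample factor; keep the result under gl.getParameter(gl.MAX_TEXTURE_SIZE)
var fbWidth = canvas.width * ssScale;
var fbHeight = canvas.height * ssScale;

// color texture the framebuffer renders into (NPOT is fine with CLAMP_TO_EDGE and no mips)
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, fbWidth, fbHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// depth renderbuffer so a 3D scene still depth-tests correctly
var depth = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, fbWidth, fbHeight);

var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);

function render() {
  // 1. render the scene at 2x into the offscreen framebuffer
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.viewport(0, 0, fbWidth, fbHeight);
  drawScene(gl); // assumption: your existing scene drawing

  // 2. draw the texture to the canvas; linear filtering does the downsample,
  //    which is where the antialiasing happens
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.bindTexture(gl.TEXTURE_2D, tex);
  drawFullscreenQuad(gl); // assumption: your blit pass (fullscreen quad + texture shader)

  requestAnimationFrame(render);
}
requestAnimationFrame(render);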
I would like to transform histograms based on images to vector graphics.
This could be a start:
function preload() {
img = loadImage("https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/Cirrus_sky_panorama.jpg/1200px-Cirrus_sky_panorama.jpg");
}
function setup() {
createCanvas(400, 400);
background(255);
img.resize(0, 200);
var maxRange = 256
colorMode(HSL, maxRange);
image(img, 0, 0);
var histogram = new Array(maxRange);
for (i = 0; i <= maxRange; i++) {
histogram[i] = 0
}
loadPixels();
for (var x = 0; x < img.width; x += 5) {
for (var y = 0; y < img.height; y += 5) {
var loc = (x + y * img.width) * 4;
var h = pixels[loc];
var s = pixels[loc + 1];
var l = pixels[loc + 2];
var a = pixels[loc + 3];
b = int(l);
histogram[b]++
}
}
image(img, 0, 0);
stroke(300, 100, 80)
push()
translate(10, 0)
for (x = 0; x <= maxRange; x++) {
index = histogram[x];
y1 = int(map(index, 0, max(histogram), height, height - 300));
y2 = height
xPos = map(x, 0, maxRange, 0, width - 20)
line(xPos, y1, xPos, y2);
}
pop()
}
<script src="https://cdn.jsdelivr.net/npm/p5#1.4.1/lib/p5.js"></script>
But I would need downloadable vector graphics files as results: closed shapes without any gaps between them. It should look like this, for example:
<svg viewBox="0 0 399.84 200"><polygon points="399.84 200 399.84 192.01 361.91 192.01 361.91 182.87 356.24 182.87 356.24 183.81 350.58 183.81 350.58 184.74 344.91 184.74 344.91 188.19 339.87 188.19 339.87 189.89 334.6 189.89 334.6 185.29 328.93 185.29 328.93 171.11 323.26 171.11 323.26 172.55 317.59 172.55 317.59 173.99 311.92 173.99 311.92 179.42 306.88 179.42 306.88 182.03 301.21 182.03 301.21 183.01 295.54 183.01 295.54 179.04 289.87 179.04 289.87 175.67 284.21 175.67 284.21 182.03 278.54 182.03 278.54 176 273.5 176 273.5 172.42 267.83 172.42 267.83 179.42 262.79 179.42 262.79 182.03 257.12 182.03 257.12 183.01 251.45 183.01 251.45 178.63 245.78 178.63 245.78 175.21 240.11 175.21 240.11 182.03 234.86 182.03 234.86 150.42 229.2 150.42 229.2 155.98 223.53 155.98 223.53 158.06 217.86 158.06 217.86 167.44 212.19 167.44 212.19 162.58 206.52 162.58 206.52 155.98 200.85 155.98 200.85 158.06 195.18 158.06 195.18 167.44 189.51 167.44 189.51 177.46 183.84 177.46 183.84 166.93 178.17 166.93 178.17 153.69 172.5 153.69 172.5 155.87 166.82 155.87 166.82 158.05 161.78 158.05 161.78 155.63 156.11 155.63 156.11 160.65 150.84 160.65 150.84 146.59 145.17 146.59 145.17 109.63 139.49 109.63 139.49 113.67 133.82 113.67 133.82 61.48 128.15 61.48 128.15 80.59 123.11 80.59 123.11 93.23 117.44 93.23 117.44 97.97 111.76 97.97 111.76 78.07 106.09 78.07 106.09 61.66 100.42 61.66 100.42 93.23 94.75 93.23 94.75 98.51 89.7 98.51 89.7 85.4 84.03 85.4 84.03 111.03 78.99 111.03 78.99 120.57 73.32 120.57 73.32 124.14 67.65 124.14 67.65 23.48 61.97 23.48 61.97 0 56.3 0 56.3 120.57 50.63 120.57 50.63 167.01 45.38 167.01 45.38 170.83 39.71 170.83 39.71 172.26 34.03 172.26 34.03 178.7 28.36 178.7 28.36 175.36 22.69 175.36 22.69 170.83 17.02 170.83 17.02 172.26 11.34 172.26 11.34 178.7 5.67 178.7 5.67 103.85 0 103.85 0 200 399.84 200"/></svg>
Does anyone have an idea how to program that? It doesn't necessarily need to be based on p5.js, but that would be cool.
Closing Gaps
In order to have a gapless histogram, you need to meet the following condition:
numberOfBars * barWidth === totalWidth
Right now you are using the p5 line() function to draw your bars. You have not explicitly set the width of your bars, so they use the default stroke weight of 1px.
We know that the numberOfBars in your code is always maxRange which is 256.
Right now the total width of your histogram is width - 20, where width is set to 400 by createCanvas(400, 400). So the totalWidth is 380.
256 * 1 !== 380
If you have 256 pixels of bars in a 380 pixel space then there are going to be gaps!
We need to change the barWidth and/or the totalWidth to balance the equation.
For example, you can change your canvas size to 276 (256 + your 20px margin) and the gaps disappear!
createCanvas(276, 400);
However this is not an appropriate solution because now your image is cropped and your pixel data is wrong. But actually...it was already wrong before!
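The other way, which is also what the updated code at the end of this answer does, is to keep the canvas size and derive the bar width from the available width, drawing rect()s instead of 1px-wide line()s. A minimal sketch, reusing the histogram and maxRange variables from the question's code:
const totalWidth = width - 20;          // same 20px margin as the question
const barWidth = totalWidth / maxRange; // 380 / 256 ≈ 1.48px per bar, no gaps
noStroke();
fill(300, 100, 80);                     // same colour values as the question's stroke()
for (let i = 0; i < maxRange; i++) {
  const barHeight = map(histogram[i], 0, max(histogram), 0, 300);
  rect(10 + i * barWidth, height - barHeight, barWidth, barHeight);
}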
Sampling Pixels
When you call the global loadPixels() function in p5.js you are loading all of the pixels for the whole canvas. This includes the white areas outside of your image.
for (var x = 0; x < img.width; x += 5) {
for (var y = 0; y < img.height; y += 5) {
var loc = (x + y * img.width) * 4;
It is a 1-dimensional array, so your approach of limiting the x and y values here is not giving you the correct position. Your loc variable needs to use the width of the entire canvas rather than the width of just the image, since the pixels array includes the entire canvas.
var loc = (x + y * width) * 4;
Alternatively, you can look at just the pixels of the image by using img.loadPixels() and img.pixels.
img.loadPixels();
for (var x = 0; x < img.width; x += 5) {
for (var y = 0; y < img.height; y += 5) {
var loc = (x + y * img.width) * 4;
var h = img.pixels[loc];
var s = img.pixels[loc + 1];
var l = img.pixels[loc + 2];
var a = img.pixels[loc + 3];
b = int(l);
histogram[b]++;
}
}
The pixel values are always returned in RGBA regardless of the colorMode. So your third channel value is actually the blue, not the lightness. You can make use of the p5.js lightness() function to compute the lightness from the RGBA.
Updated Code
The actual lightness histogram looks dumb because 100% dwarfs all of the other bars.
function preload() {
img = loadImage("https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/Cirrus_sky_panorama.jpg/1200px-Cirrus_sky_panorama.jpg");
}
function setup() {
const barCount = 100;
const imageHeight = 200;
createCanvas(400, 400);
background(255);
colorMode(HSL, barCount - 1);
img.resize(0, imageHeight);
imageMode(CENTER);
image(img, width / 2, imageHeight / 2);
img.loadPixels();
const histogram = new Array(barCount).fill(0);
for (let x = 0; x < img.width; x += 5) {
for (let y = 0; y < img.height; y += 5) {
const loc = (x + y * img.width) * 4;
const r = img.pixels[loc];
const g = img.pixels[loc + 1];
const b = img.pixels[loc + 2];
const a = img.pixels[loc + 3];
const barIndex = floor(lightness([r, g, b, a]));
histogram[barIndex]++;
}
}
fill(300, 100, 80);
strokeWeight(0);
const maxCount = max(histogram);
const barWidth = width / barCount;
const histogramHeight = height - imageHeight;
for (let i = 0; i < barCount; i++) {
const count = histogram[i];
const y1 = round(map(count, 0, maxCount, height, imageHeight));
const y2 = height;
const x1 = i * barWidth;
const x2 = x1 + barWidth;
rect(x1, y1, barWidth, height - y1);
}
}
<script src="https://cdn.jsdelivr.net/npm/p5#1.4.1/lib/p5.js"></script>
But the blue channel histogram looks pretty good!
function preload() {
img = loadImage("https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/Cirrus_sky_panorama.jpg/1200px-Cirrus_sky_panorama.jpg");
}
function setup() {
const barCount = 100;
const imageHeight = 200;
createCanvas(400, 400);
background(255);
img.resize(0, imageHeight);
imageMode(CENTER);
image(img, width / 2, imageHeight / 2);
img.loadPixels();
const histogram = new Array(barCount).fill(0);
for (let x = 0; x < img.width; x += 5) {
for (let y = 0; y < img.height; y += 5) {
const loc = (x + y * img.width) * 4;
const r = img.pixels[loc];
const g = img.pixels[loc + 1];
const b = img.pixels[loc + 2];
const a = img.pixels[loc + 3];
const barIndex = floor(barCount * b / 255);
histogram[barIndex]++;
}
}
fill(100, 100, 300);
strokeWeight(0);
const maxCount = max(histogram);
const barWidth = width / barCount;
const histogramHeight = height - imageHeight;
for (let i = 0; i < barCount; i++) {
const count = histogram[i];
const y1 = round(map(count, 0, maxCount, height, imageHeight));
const y2 = height;
const x1 = i * barWidth;
const x2 = x1 + barWidth;
rect(x1, y1, barWidth, height - y1);
}
}
<script src="https://cdn.jsdelivr.net/npm/p5#1.4.1/lib/p5.js"></script>
Just to add to Linda's excellent answer (+1), you can use p5.svg to render to SVG using p5.js:
let histogram;
function setup() {
createCanvas(660, 210, SVG);
background(255);
noStroke();
fill("#ed225d");
// make an array of 256 random values in the (0, 255) range
histogram = Array.from({length: 256}, () => int(random(255)));
//console.log(histogram);
// plot the histogram
barPlot(histogram, 0, 0, width, height);
// change shape rendering so bars appear connected
document.querySelector('g').setAttribute('shape-rendering','crispEdges');
// save the plot
save("histogram.svg");
}
function barPlot(values, x, y, plotWidth, plotHeight){
let numValues = values.length;
// calculate the width of each bar in the plot
let barWidth = plotWidth / numValues;
// calculate min/max value (to map height)
let minValue = min(values);
let maxValue = max(values);
// for each value
for(let i = 0 ; i < numValues; i++){
// map the value to the plot height
let barHeight = map(values[i], minValue, maxValue, 0, plotHeight);
// render each bar, offsetting y
rect(x + (i * barWidth),
y + (plotHeight - barHeight),
barWidth, barHeight);
}
}
<script src="https://unpkg.com/p5#1.3.1/lib/p5.js"></script>
<script src="https://unpkg.com/p5.js-svg#1.0.7"></script>
In the p5 editor (or when testing locally) a save dialog should pop up.
If you use the browser's Developer Tools to inspect the bar chart, it should confirm it's an SVG (not a <canvas/>).
I have two images of similar size that show similar scenes. How can I show the two images in two frames so that when I pan or zoom in the left image, the right one pans and zooms as well? I don't want to concatenate the images, though.
Is there a solution to do this? Either Python or C++ OpenCV is fine.
About zoom in/out:
The basic idea is to decide how much the scale changes on each mouse-wheel event. Once you have the current scale (relative to the original image) and the region of the image you want to show on screen, you can compute the position and size of the corresponding rectangle on the scaled image and draw that region.
In my GitHub repo, checking OnMouseWheel() and RefreshSrcView() in Fastest_Image_Pattern_Matching/ELCVMatchTool/ELCVMatchToolDlg.cpp may give you what you want.
About showing two images simultaneously with the same region:
use two picture boxes with the MFC framework or another UI builder,
or use two cv::namedWindow() calls without a framework.
Effect:
Part of the code:
BOOL CELCVMatchToolDlg::OnMouseWheel (UINT nFlags, short zDelta, CPoint pt)
{
POINT pointCursor;
GetCursorPos (&pointCursor);
ScreenToClient (&pointCursor);
// TODO: Add your message handler code here and/or call the default
if (zDelta > 0)
{
if (m_iScaleTimes == MAX_SCALE_TIMES)
return TRUE;
else
m_iScaleTimes++;
}
if (zDelta < 0)
{
if (m_iScaleTimes == MIN_SCALE_TIMES)
return TRUE;
else
m_iScaleTimes--;
}
CRect rect;
//GetWindowRect (rect);
GetDlgItem (IDC_STATIC_SRC_VIEW)->GetWindowRect (rect); // important
if (m_iScaleTimes == 0)
g_dCompensationX = g_dCompensationY = 0;
int iMouseOffsetX = pt.x - (rect.left + 1);
int iMouseOffsetY = pt.y - (rect.top + 1);
double dPixelX = (m_hScrollBar.GetScrollPos () + iMouseOffsetX + g_dCompensationX) / m_dNewScale;
double dPixelY = (m_vScrollBar.GetScrollPos () + iMouseOffsetY + g_dCompensationY) / m_dNewScale;
m_dNewScale = m_dSrcScale * pow (SCALE_RATIO, m_iScaleTimes);
if (m_iScaleTimes != 0)
{
int iWidth = m_matSrc.cols;
int iHeight = m_matSrc.rows;
m_hScrollBar.SetScrollRange (0, int (m_dNewScale * iWidth - m_dSrcScale * iWidth) - 1 + BAR_SIZE);
m_vScrollBar.SetScrollRange (0, int (m_dNewScale * iHeight - m_dSrcScale * iHeight) - 1 + BAR_SIZE);
int iBarPosX = int (dPixelX * m_dNewScale - iMouseOffsetX + 0.5);
m_hScrollBar.SetScrollPos (iBarPosX);
m_hScrollBar.ShowWindow (SW_SHOW);
g_dCompensationX = -iBarPosX + (dPixelX * m_dNewScale - iMouseOffsetX);
int iBarPosY = int (dPixelY * m_dNewScale - iMouseOffsetY + 0.5);
m_vScrollBar.SetScrollPos (iBarPosY);
m_vScrollBar.ShowWindow (SW_SHOW);
g_dCompensationY = -iBarPosY + (dPixelY * m_dNewScale - iMouseOffsetY);
// scroll thumb size
SCROLLINFO infoH;
infoH.cbSize = sizeof (SCROLLINFO);
infoH.fMask = SIF_PAGE;
infoH.nPage = BAR_SIZE;
m_hScrollBar.SetScrollInfo (&infoH);
SCROLLINFO infoV;
infoV.cbSize = sizeof (SCROLLINFO);
infoV.fMask = SIF_PAGE;
infoV.nPage = BAR_SIZE;
m_vScrollBar.SetScrollInfo (&infoV);
// scroll thumb size
}
else
{
m_hScrollBar.SetScrollPos (0);
m_hScrollBar.ShowWindow (SW_HIDE);
m_vScrollBar.SetScrollPos (0);
m_vScrollBar.ShowWindow (SW_HIDE);
}
RefreshSrcView ();
return CDialogEx::OnMouseWheel (nFlags, zDelta, pt);
}
I'm writing some code to render a camera preview using SkiaSharp. This is cross-platform, but I came across a problem while writing the implementation for Android.
I needed to convert YUV_420_888 to RGBA8888 because that's what SkiaSharp supports, and with the help of this thread I somehow managed to show decent-quality images on my SkiaSharp canvas. The problem is the speed. At best I can get about 8 fps, but usually it's just 4 or 5 fps. It turned out the biggest factor is the conversion. I now have about 3 versions of my ToRgb converter. I've even ended up trying "unsafe" code and parallel loops. I'll just show you my best one yet.
private unsafe byte[] ToRgb(byte[] yValuesArr, byte[] uValuesArr,
byte[] vValuesArr, int uvPixelStride, int uvRowStride)
{
var width = PixelSize.Width;
var height = PixelSize.Height;
var rgb = new byte[width * height * 4];
var partitions = Partitioner.Create(0, height);
Parallel.ForEach(partitions, range =>
{
var (item1, item2) = range;
Parallel.For(item1, item2, y =>
{
for (var x = 0; x < width; x++)
{
var yIndex = x + width * y;
var currentPosition = yIndex * 4;
var uvIndex = uvPixelStride * (x / 2) + uvRowStride * (y / 2);
fixed (byte* rgbFixed = rgb)
fixed (byte* yValuesFixed = yValuesArr)
fixed (byte* uValuesFixed = uValuesArr)
fixed (byte* vValuesFixed = vValuesArr)
{
var rgbPtr = rgbFixed;
var yValues = yValuesFixed;
var uValues = uValuesFixed;
var vValues = vValuesFixed;
var yy = *(yValues + yIndex);
var uu = *(uValues + uvIndex);
var vv = *(vValues + uvIndex);
var rTmp = yy + vv * 1436 / 1024 - 179;
var gTmp = yy - uu * 46549 / 131072 + 44 - vv * 93604 / 131072 + 91;
var bTmp = yy + uu * 1814 / 1024 - 227;
rgbPtr = rgbPtr + currentPosition;
*rgbPtr = (byte) (rTmp < 0 ? 0 : rTmp > 255 ? 255 : rTmp);
rgbPtr++;
*rgbPtr = (byte) (gTmp < 0 ? 0 : gTmp > 255 ? 255 : gTmp);
rgbPtr++;
*rgbPtr = (byte) (bTmp < 0 ? 0 : bTmp > 255 ? 255 : bTmp);
rgbPtr++;
*rgbPtr = 255;
}
}
});
});
return rgb;
}
You can also find it in my repo, along with the part where I render the output to SkiaSharp.
For a preview size of 1440x1080, running on my phone, this code takes about 120 ms to finish. Even if all the other parts were fully optimized, the most I could get from that is about 8 fps. And no, it's not my hardware, because the built-in camera app runs smoothly. By the way, 1440x1080 is the output of my ChooseOptimalSize algorithm that I got from the mono-droid examples of Android's Camera2 API. I don't know if it's the best way, or if it lacks logic for detecting the fps and sizing down the preview to make it faster.
Does SkiaSharp support GPU drawing? If you connect the camera to a SurfaceTexture, you can use the preview frames as GL textures and render them efficiently into an OpenGL scene.
Even if not, you may still get faster results by sending the frames to the GPU and reading them back to the CPU with something like glReadPixels, as that will do the RGB conversion on the GPU.
I'm working on a cross-platform Cordova project which involves uploading images to a server and displaying them on the website. The code used for obtaining the resized image from the phone memory and getting the base64 data is as follows:
var image = new Image();
image.src = URL;
image.onload = function () {
var canvas = document.createElement("canvas");
var img_width = this.width;
var img_height = this.height;
canvas.width =1000;
canvas.height = 1000;
var w=(1000/img_width);
var h=(1000/img_height);
var ctx = canvas.getContext("2d");
ctx.scale(w,h);
ctx.drawImage(this,0,0);
var dataURL = canvas.toDataURL("image/png",1);
Here, the problem is that the image width is scaled to fit the canvas width, but the height isn't.
Also, the code works perfectly on Android devices and iPads, but the issue arises only on iPhones.
Tested on iPhone 4s and 5c.
Can anyone please help me?
The input image is:
The image output on the canvas (as a PNG file) is:
I was able to create a workaround for the issue in my case by modifying the answer provided in the following link to my needs:
When scaling and drawing an image to canvas in iOS Safari, width is correct but height is squished
I tweaked the code a little, so it looks like this now:
var image = new Image();
image.src = URL;
image.onload = function () {
var canvas = document.createElement("canvas");
var img_width = image.width;
var img_height = image.height;
canvas.width = 600;
canvas.height = 600;
var ctx = canvas.getContext("2d");
var imgRatio;
if (image.width > image.height) {
imgRatio = image.width / image.height;
} else {
imgRatio = image.height / image.width;
}
var canRatio = canvas.width / canvas.height;
var scaledWidth = image.width * (canvas.height / image.height);
var scaledHeight;
if (platformVersion.substring(0, 1) < 8) {
scaledHeight = (image.height * (canvas.width / image.height)) * 2.1;
} else {
scaledHeight = (image.height * (canvas.width / image.height));
}
if (imgRatio > canRatio) {
ctx.drawImage(image, 0, 0, canvas.width, scaledHeight);
} else {
ctx.drawImage(image, 0, 0, scaledWidth, canvas.height);
}
var dataURL = canvas.toDataURL("image/png", 1);
}
Here, depending on the OS version of the phone (obtained from the Cordova device plugin), I multiply the scaled height by a factor to compensate for the squishing effect.
It looks like you're getting a much larger value for img_height than you're expecting. Are you sure that this is pointing to the right place? I assume you want the width and height of the image you loaded, so why not just use image.width and image.height?
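For example, a minimal sketch of the resize that reads the source size from image.width / image.height and passes an explicit destination size to drawImage instead of going through ctx.scale() (the 1000x1000 target and the URL variable are taken from the question):
var image = new Image();
image.onload = function () {
  var canvas = document.createElement("canvas");
  canvas.width = 1000;
  canvas.height = 1000;
  var ctx = canvas.getContext("2d");
  // draw the full source image into the full destination rectangle;
  // the browser handles the scaling in one step
  ctx.drawImage(image, 0, 0, image.width, image.height,
                0, 0, canvas.width, canvas.height);
  var dataURL = canvas.toDataURL("image/png", 1);
};
image.src = URL;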
I have a function from Intel IPP that operates on an image / region of an image.
The inputs to the function are a pointer to the image, parameters defining the size of the region to process, and the parameters of the filter.
The IPP function is single threaded.
Now, I have an image of size M x N.
I want to apply the filter on it in parallel.
The main idea is simple: break the image into 4 sub-images which are independent of each other.
Apply the filter to each sub-image and write the result to a sub-block of an empty output image, where each thread writes to a distinct set of pixels.
It's really like processing 4 images, each on its own core.
This is the program I'm doing it with:
void OpenMpTest()
{
const int width = 1920;
const int height = 1080;
Ipp32f input_image[width * height];
Ipp32f output_image[width * height];
IppiSize size = { width, height };
int step = width * sizeof(Ipp32f);
/* Splitting the image */
IppiSize section_size = { width / 2, height / 2};
Ipp32f* input_upper_left = input_image;
Ipp32f* input_upper_right = input_image + width / 2;
Ipp32f* input_lower_left = input_image + (height / 2) * width;
Ipp32f* input_lower_right = input_image + (height / 2) * width + width / 2;
Ipp32f* output_upper_left = output_image;
Ipp32f* output_upper_right = output_image + width / 2;
Ipp32f* output_lower_left = output_image + (height / 2) * width;
Ipp32f* output_lower_right = output_image + (height / 2) * width + width / 2;
Ipp32f* input_sections[4] = { input_upper_left, input_upper_right, input_lower_left, input_lower_right };
Ipp32f* output_sections[4] = { output_upper_left, output_upper_right, output_lower_left, output_lower_right };
/* Filter Params */
Ipp32f pKernel[7] = { 1, 2, 3, 4, 3, 2, 1 };
omp_set_num_threads(4);
#pragma omp parallel for
for (int i = 0; i < 4; i++)
ippiFilterRow_32f_C1R(
input_sections[i], step,
output_sections[i], step,
section_size, pKernel, 7, 3);
}
Now, the issue is that I see no gain versus running single-threaded on the whole image.
I tried changing the image size and the filter size, and nothing changes the picture.
The most I could gain was nothing significant (10-20%).
I thought it might have something to do with the fact that I can't "promise" each thread that the zone it receives is read-only,
or let it know that the memory location it writes to belongs only to itself.
I read about declaring variables as private and shared, yet I couldn't find a guide on how to deal with arrays and pointers.
What would be the proper way to deal with pointers and sub arrays in OpenMP?
How does the performance of threaded IPP compare?
Assuming no race conditions, performance problems with writing to shared arrays are most likely to occur in cache lines where part of the line is written by one thread and another part is read by another.
It's likely to require a data region larger than 10 megabytes or so before full parallel speedup is seen.
You would need deeper analysis, e.g. by Intel VTune Amplifier, to see whether memory bandwidth or data overlaps are limiting performance.
Using Intel IPP Filter, the best solution was using:
int height = dstRoiSize.height;
int width = dstRoiSize.width;
Ipp32f *pSrc1, *pDst1;
int nThreads, cH, cT;
#pragma omp parallel shared( pSrc, pDst, nThreads, width, height, kernelSize,\
xAnchor, cH, cT ) private( pSrc1, pDst1 )
{
#pragma omp master
{
nThreads = omp_get_num_threads();
cH = height / nThreads;
cT = height % nThreads;
}
#pragma omp barrier
{
int curH;
int id = omp_get_thread_num();
pSrc1 = (Ipp32f*)( (Ipp8u*)pSrc + id * cH * srcStep );
pDst1 = (Ipp32f*)( (Ipp8u*)pDst + id * cH * dstStep );
if( id != ( nThreads - 1 )) curH = cH;
else curH = cH + cT;
IppiSize roiSize = { width, curH };
ippiFilterRow_32f_C1R( pSrc1, srcStep, pDst1, dstStep,
roiSize, pKernel, kernelSize, xAnchor );
}
}
Thank You.