ImageMagick cropping an image creates jagged edges or saw-tooth shapes

The above pic (which looks zoomed) is from the first-level conversion from the 1.AI file; 1_cropped.AI is the file I get after cropping. I don't resize during cropping; it gets resized automatically.
I am trying to crop an image. It seems that without +repage ImageMagick is unable to crop. The problem is that a simple crop creates the jagged lines you can see in the snapshot, taken from a portion of the image.
How do I remove this? Somewhere in a Stack Overflow post I found a recommendation to use a Gaussian blur, but I didn't find a proper command for it. Many thanks! I am doing just the crop and no resizing.
Original: due to copyright I can't show the entire image, but below is one section:
I am looking into http://www.imagemagick.org/Usage/antialiasing/ now but have been unable to smooth the 'staircase' or 'jaggies' so far.
UPDATE from the comments:
Yes, the input is AI and the output is almost every format: AI/SVG/PNG/GIF/JPEG/BMP. For smaller-resolution files such as PNG/GIF I don't get those jagged shapes. I tried turning on anti-aliasing, blurring, and Gaussian blurring, but no luck. I think the repaging zooms the image, which I don't need; is it possible to set the canvas somehow so the original resolution is kept intact when converting from AI to AI? Yes, initially I convert AI to AI after cropping and then feed the converted AI for further processing. The stair-stepping appears from the first AI-to-AI conversion itself.
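A common way to avoid stair-stepping when rasterizing vector input is to set a high -density before ImageMagick reads the file, so the AI source is rendered at a higher resolution than the default 72 DPI before the crop is applied. A minimal sketch, here invoked from Python; the crop geometry and the PNG output are placeholders, not values from the question:

import subprocess

# -density must precede the input file so the vector AI source is
# rasterized at 300 DPI instead of the default 72 DPI.
# +repage resets the virtual canvas left over after the crop.
subprocess.run([
    "convert",
    "-density", "300",
    "1.ai",
    "-crop", "400x300+50+50",  # placeholder geometry
    "+repage",
    "1_cropped.png",
], check=True)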

Related

MiDaS depth map: strange lines on sharp edges

I'm using Hugging Face's DPT-Large to compute depth maps.
Here is an example of my problem:
(credit: museum of Genève)
The depth map contains some little white lines just above the mountains in the background.
How can I avoid them?
By the way: I have cloned the repo and it works well on my local computer, so I have access to the code and can do pre- and post-processing. But as a non-specialist I cannot patch MiDaS itself.
EDIT: I'm using MiDaS exactly as in the example: https://huggingface.co/spaces/akhaliq/DPT-Large/blob/main/app.py By the way, the effect I describe is visible in the official demo.
EDIT: when I feed the extractor with the original 1148x790 image, the issue does not appear. It appears with a resized 600x413 image. Thus a solution could be to use only non-resized images.
Answering my own question.
It turns out that the issue disappears with:
Using the model "DPT_BEiT_L_512"
transform = midas_transforms.dpt_transform

prediction = torch.nn.functional.interpolate(
    prediction.unsqueeze(1),
    size=img.shape[:2],
    mode="bilinear",  # <--- instead of bicubic
    antialias=True,
    align_corners=True,
)
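Put together, a minimal self-contained sketch of that pipeline (the model and transform names follow the answer above; the input path is a placeholder):

import cv2
import torch

# Load MiDaS 3.1 and its transforms via torch.hub
midas = torch.hub.load("intel-isl/MiDaS", "DPT_BEiT_L_512")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.dpt_transform

img = cv2.imread("input.jpg")  # placeholder path
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bilinear",  # bilinear instead of bicubic
        antialias=True,
        align_corners=True,
    ).squeeze()

depth = prediction.cpu().numpy()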

Cairolatex terminal produces bitmap output

I am using gnuplottex in Overleaf to write a document using the IEEEtran document class.
When I plot filledcurves, the result is kind of bad, as it does not seem to be in a vector graphics format; it gets pixelated upon zooming into the figure.
Additionally, some weird white boxes are floating around, hiding parts of a label.
I developed the figure locally on my machine using the pdfcairo terminal and Gnuplot 5.4.
The first picture shows the resulting PDF from LaTeX using the cairolatex terminal.
The second picture shows a very deep zoom into the test PDF done with the pdfcairo terminal, where everything looks good.
set style fill transparent solid 0.35 noborder
plot for[i=1:num_states] normal(x, word(means, i), word(std, i)) notitle with filledcurves y1=0
What is going on? I read that epslatex has some trouble with transparency; is that also the case here?
The PostScript language itself (including *.eps) does not support transparency or alpha-channel colors. So no *.ps or *.eps file can portray transparent/translucent areas properly. At best it can be approximated by using some intermediate representation that does handle alpha channels and then translating the final resulting blended colors to PostScript, but that means later color balance adjustments made during viewing or printing will not preserve the correct balance in the blended areas.
There should be no problem with pdf. I don't know what caused that partial white rectangle. Perhaps you could show a more complete script that creates the figure?
Edited:
Using set terminal cairolatex pdf in gnuplot should solve the pixelation and transparency issues. You can create a new question if the unwanted white rectangles persist.
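For reference, a minimal sketch of such a plot rendered through the cairolatex pdf terminal, driven from Python here; the output name and the sin(x) curve are placeholders, not the asker's actual figure:

import subprocess

script = r'''
set terminal cairolatex pdf
set output 'figure.tex'
set style fill transparent solid 0.35 noborder
plot sin(x) with filledcurves y1=0 notitle
set output
'''
# Requires gnuplot on PATH; produces figure.tex plus a companion PDF
# with the vector graphics, pulled into LaTeX via \input{figure.tex}.
subprocess.run(["gnuplot"], input=script, text=True, check=True)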

Does anyone have an idea about palm line detection in iOS Swift?

Currently, I am doing an image COLOUR filtering operation, then MEDIAN filtering, then the CANNY EDGE DETECTION ALGORITHM.
Then I read pixels using a for loop and draw lines from those pixels, but I am not getting a proper result for palm scanning and for showing lines on a human palm.
So if anybody has any ideas regarding this, please let me know.
Currently I am getting this type of result:
but I need this type of output:
Oh, I got your problem. You can do this with the following steps.
1. Process your hand image with the Canny edge detection algorithm; let's name the result cannyImage.
2. Now create the bitmap of cannyImage and replace its black pixels with transparent pixels (black only, because a Canny image is filled with black, with the objects' lines in white, once you run it through the algorithm). You have now extracted an image with the palm lines in white; let's name it palmLineImage.
3. Now the main part is MASKING: you need to mask the palmLineImage onto the original image, as sketched below.
These three steps will give you your desired output.
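A rough sketch of those three steps, shown in Python with OpenCV for illustration rather than Swift; the input path and Canny thresholds are placeholders:

import cv2

img = cv2.imread("hand.jpg")                  # placeholder input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                # median filter, as in the question
canny = cv2.Canny(gray, 50, 150)              # step 1: cannyImage (white lines on black)

# Steps 2-3: treat the black background as transparent and mask the
# white palm lines back onto the original image.
overlay = img.copy()
overlay[canny > 0] = (255, 255, 255)
cv2.imwrite("palm_lines.jpg", overlay)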
Tools you can use: the awesome GPUImage library by BradLarson: https://github.com/BradLarson/GPUImage2
For separating the palm image from the background, which I'm sure you will need in the future, you can use the GrabCut algorithm.
LINK - https://github.com/naver/grabcutios
Also, Apple has announced that photos captured in Portrait Mode on iOS 12 contain an embedded person segmentation matte, which makes it easy to create visual effects like background replacement.
Links - https://developer.apple.com/videos/play/wwdc2019/260/ , https://developer.apple.com/videos/play/wwdc2019/225/
It looks like you need to use something like the Douglas-Peucker algorithm to simplify the number of data points and smooth the lines. Link - https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
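In OpenCV the Ramer-Douglas-Peucker algorithm is available as approxPolyDP; a small sketch of simplifying edge contours with it (the input image and the epsilon value are placeholders to tune):

import cv2

canny = cv2.imread("canny.png", cv2.IMREAD_GRAYSCALE)  # placeholder edge image
contours, _ = cv2.findContours(canny, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

# approxPolyDP implements Ramer-Douglas-Peucker: points further than
# epsilon from the simplified curve are kept, the rest are dropped.
simplified = [cv2.approxPolyDP(c, epsilon=3.0, closed=False) for c in contours]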

How to separate the query and the train image from the Mat object returned by the DrawMatches() method

I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and BRUTEFORCE as the matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and direct it toward me, my face gets detected and a rectangle is drawn around it. I tried another test: I captured an image of my mouse and resized it, but when I run the cam it does not get detected.
The problems I am facing are:
1 - Does the size of the query/object image matter in such cases? I am asking because the image I captured of myself is bigger than the one of the mouse, and the face gets detected while the mouse does not.
2 - Regardless of which image I use as the query/object image, how do I display a camera preview of only the train/scene image, without the query/object image? I am asking because what I am getting is shown in the images posted below, while what I want to do is shown here. I checked the code in that link; it is in C++, but I followed the same approach, and the tutorial uses the 'drawMatches' method, which has a Java peer, Features2D.DrawMatches(). Both return a Mat object with the query/object image on the left side and the train/scene image on the right side, as also shown in the image I posted below.
What I want is to display the camera output without the query/object image; the area designated for the camera output should show only the train/scene image captured from the camera.
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I cited in the link.
1 - Size matters, but in your case I think the most crucial problem is "textureness". SURF detects interest points where the texture gradient is strong. In the case of your mouse, the gradient is mainly smooth, except around the logo (Fujitsu), the button, and the border of the image. In the tutorial you point to, notice that it uses a very textured object to demonstrate the effect.
2 - To the best of my knowledge, there is no fully automatic method to do what you want, but it can be done in a few steps. Basically, you must determine the surrounding box of your object and then draw it. To draw, the easiest is to use cv::rectangle, but you can be more precise with four (or more) cv::line calls. To determine the surrounding box, you can estimate the extreme points among the filtered matches, as sketched below.
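A sketch of that idea, in Python/OpenCV for brevity; the variable names are assumptions for whatever your match filtering produces:

import cv2
import numpy as np

def draw_object_box(scene_img, scene_kp, matches):
    # Collect the matched keypoint locations in the train/scene image
    # and take their extreme points as the surrounding box.
    pts = np.float32([scene_kp[m.trainIdx].pt for m in matches])
    x, y, w, h = cv2.boundingRect(pts)
    cv2.rectangle(scene_img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return scene_img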
Good luck!

flood fill performance issue on iPad

I am using the 4-way flood fill algorithm.
I have a transparent image with a black outline.
That is the starting-point image (without color).
And after filling it with color, the image looks like this.
Please help me and let me know what I can do to get a proper fill.
I have used and implemented FloodFill myself in other projects; the algorithm goes through the whole drawing, looking for closed spaces, and then fills inside (or outside) them.
Your problem happens with every tool in the world that fills a drawing, and the cause is the same: the spaces are not 100% closed.
The flood fill algorithm goes pixel by pixel, and when it detects a black pixel, it stops. For example, if the arm of the scuba diver is not thick enough or has holes in it, the flood fill manages to go through it and does not treat it as the border of an enclosed space.
Nobody here can tell you why unless we take your project and analyse it, so the best I can offer is a guideline about where your error could be.
I tried the code with an image that has a very precisely defined border around it (from here), and it seems to work OK with that image. I suspect that if you zoom into your image there is some grey aliasing around the edges which won't get filled. Perhaps the algorithm has a threshold function that can be tweaked?
Try setting the andTolerance value (I tried 4, which seemed to improve my example).
//Call function to flood fill and get new image with filled color
UIImage *image1 = [self.image floodFillFromPoint:tpoint withColor:newcolor andTolerance:4];
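The same tolerance idea expressed with OpenCV's floodFill, where loDiff/upDiff play the role of andTolerance (Python for illustration; the seed point, fill color, and paths are placeholders):

import cv2
import numpy as np

img = cv2.imread("outline.png")             # placeholder line-art image
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill needs a mask 2px larger

# Pixels within loDiff/upDiff of their neighbours count as fillable, so
# grey anti-aliased pixels along the outline no longer block the fill.
cv2.floodFill(img, mask, (100, 100), (0, 0, 255),
              loDiff=(4, 4, 4), upDiff=(4, 4, 4))
cv2.imwrite("filled.png", img)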
