Masking Text3D in Papervision3D - ActionScript

I have a Text3D object and I want to mask it with a Plane. I usually mask things by defining two ViewportLayers, then giving every object a layer:
myLayer1 = viewport.getChildLayer(myobject1);
myLayer2 = viewport.getChildLayer(myobject2);
and then masking one with the other:
myLayer1.mask = myLayer2;
This works, but when I try to mask a Text3D object it doesn't work at all!
Does anyone have any experience with this?
Thanks.

Related

How to properly separate foreground from a solid colour background in an image using numpy / PIL / Opencv

I have an image, given below, with a black background. I want to separate the background from the foreground. The idea rests on the assumption that the image contains pure black [0,0,0] only in the background, i.e., if there is any other black in the image, it must have at least one slightly non-black channel, for example [0,0,1], [0,1,0], or anything else.
I tried three different things and got a different result each time. I want to know the proper way of doing this.
Below is the code I tried.
import numpy as np
from PIL import Image

mask_color = [0, 0, 0]
image = np.array(Image.open('black1.PNG'))
mask = image.copy()
non_black = np.all(mask != mask_color, axis=-1)  # True only where EVERY channel is non-zero
mask[non_black] = [255, 255, 255]  # make the non-black part white
Image.fromarray(mask)  # show the mask
I also tried:
non_black = np.any(mask != mask_color, axis=-1)  # True where ANY channel is non-zero
And then I tried taking a single channel, which is not a good approach because a single 0 could be anywhere:
mask_color = 0
image = np.array(Image.open('black1.PNG'))[:, :, 0]  # take only one channel
mask = image.copy()
mask[mask != mask_color] = 255  # set everything non-black to white
Could someone please suggest the correct way of doing this?
The goal is to create a black-and-white mask, as mentioned by @Christoph Rackwitz in the comments.
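For what it's worth, the first two attempts differ by De Morgan's law: NOT (all channels equal 0) is the same as ANY channel being non-zero, so under the stated pure-black-background assumption the np.any variant expresses the intended "non-black" test, while the np.all variant silently misses pixels such as [0,0,1]. A minimal sketch along those lines (assuming black1.PNG loads as a plain RGB array with no alpha channel):
import numpy as np
from PIL import Image

image = np.array(Image.open('black1.PNG'))             # assumed shape (H, W, 3)
background = np.all(image == [0, 0, 0], axis=-1)       # True only where every channel is 0
mask = np.where(background, 0, 255).astype(np.uint8)   # background black, foreground white
Image.fromarray(mask)                                  # the black-and-white mask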

How to divide image into two parts without crossing any object using openCV?

I am using an object detection machine learning model (only one object class). It works well when there are a few objects in the image, but if my image has more than 300 objects it can't recognize anything. So I want to divide the image into two or four parts without crossing any object.
I used Otsu thresholding and got this thresholded image. Actually I want to divide my image along this line (expected image). I think my model will work well if it makes predictions on each part of the image.
I tried using findContours, finding a contourArea bigger than half the image area, drawing it into a new image, then taking the remaining part and drawing it into another image. But most of the contour areas can't even reach 1/10 of the image area, so it is not a good solution.
I also thought about detecting a line that touches two boundaries (top and bottom); how can I do that?
Any suggestion is appreciated. Thanks so much.
Since your regions of interest are already separated, you can use connectedComponents to get the bounding boxes of those regions. My approach is below.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('circles.png', 0)
img = img[20:, 20:]  # remove the connecting lines on the top and the left sides
_, img = cv2.threshold(img, 0, 1, cv2.THRESH_BINARY)
labels, stats = cv2.connectedComponentsWithStats(img, connectivity=8)[1:3]
plt.imshow(labels, 'tab10')
plt.show()
As you can see, the two regions of interest get different labels. All we need to do now is get the bounding boxes of those regions. But first we have to find their label indices; for this we can use the component areas, because after the background (blue) they are the two largest.
areas = stats[1:, cv2.CC_STAT_AREA]  # index 0 is always the background, which we don't need, so drop it
roi_indices = np.flip(np.argsort(areas))[0:2]  # indices of the two largest labels, i.e. your ROIs
# Coordinates of the bounding boxes
left = stats[1:, cv2.CC_STAT_LEFT]
top = stats[1:, cv2.CC_STAT_TOP]
width = stats[1:, cv2.CC_STAT_WIDTH]
height = stats[1:, cv2.CC_STAT_HEIGHT]
for i in range(2):
    roi_ind = roi_indices[i]
    roi = labels == roi_ind + 1  # +1 because the background label was dropped above
    roi_top = top[roi_ind]
    roi_bottom = roi_top + height[roi_ind]
    roi_left = left[roi_ind]
    roi_right = roi_left + width[roi_ind]
    roi = roi[roi_top:roi_bottom, roi_left:roi_right]
    plt.imshow(roi, 'gray')
    plt.show()
For your information, this method is only valid for two regions. To split into four regions, you would need some other approach.

Blob Detection in openCV works well, but for some reason it fails for a specific image

I have managed to find circles quite easily thanks to the built-in SimpleBlobDetector_create functionality in OpenCV (4.2.0.34). I made sure the background was white for easy recognition.
In the following image, 3 circles were found, as I would expect:
But strangely, when I apply the same code to the image below,
this perfect circle doesn't get recognized. How come?
My short code is below.
import cv2

img = cv2.imread(filename='img1.png')
cv2.imshow(winname="Original", mat=img)
cv2.waitKey(0)
params = cv2.SimpleBlobDetector_Params()
# set Circularity filtering parameters:
params.filterByCircularity = True
# 1 being perfect circle, 0 the opposite
params.minCircularity = 0.8
# create a detector with parameters
detector = cv2.SimpleBlobDetector_create(parameters=params)
keypoints = detector.detect(img)
print("Number of circular Blobs: " + str(len(keypoints)))
Thank you all for any help!
I tested your code here and got the same results. But adding
params.filterByArea = False
before params.filterByCircularity = True fixed the problem. This is kind of strange, because I would have expected all the other attributes of SimpleBlobDetector to default to False. Also, after the change the code started responding with 4 circles (which seems correct to me) rather than 3.
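That behaviour is consistent with the documented defaults: if I recall the OpenCV docs correctly, SimpleBlobDetector_Params ships with filterByArea = True, minArea = 25 and maxArea = 5000, so a large enough blob is discarded before circularity is ever checked. A minimal sketch of the adjusted setup (either disable the area filter, as above, or keep it and raise maxArea; the 100000 below is just an illustrative value):
import cv2

img = cv2.imread('img1.png')

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = False        # or: params.filterByArea = True; params.maxArea = 100000
params.filterByCircularity = True  # 1 is a perfect circle, 0 the opposite
params.minCircularity = 0.8

detector = cv2.SimpleBlobDetector_create(parameters=params)
keypoints = detector.detect(img)
print("Number of circular Blobs: " + str(len(keypoints)))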

Apply different effects on CAReplicatorLayer instances

I am trying to use a replicator layer to create a reflection of my original layer. The problem is that I want to apply a different effect to each instance (rasterize the copy, but keep the original intact). Is this possible using replicator layers, and if not, can you suggest a way of achieving this?
Note: I tried duplicating the layers, but I could not, because they get copied by reference and thus any effect applied to one is applied to the original layer.
let r = CAReplicatorLayer()
r.bounds = CGRect(x: 0.0, y: 0.0, width: background.frame.width , height: background.frame.height)
r.position = background.center
background.layer.addSublayer(r)
r.addSublayer(masterLayer)
r.instanceCount = 2
r.instanceTransform = CATransform3DMakeRotation(CGFloat(M_PI), 1, 0, 0)
r.masksToBounds = true
r.shouldRasterize = true
r.rasterizationScale = 0.2
Yes, one of the limitations of CAReplicatorLayer is that you don't have direct access to the individual replicated instances.
You can try bypassing CAReplicatorLayer altogether and instead create your own subclass of CALayer, give it an array property to hold the replicated sublayers (allowing you direct access to each of those sublayers) and then endow it with whatever CAReplicator-like abilities you require. This won't be a drop-in replacement for CAReplicatorLayer, of course, and I can't say if it's the solution you're looking for (without knowing the specifics of what you're trying to achieve with those individual layers) but you may want to give it a shot.
I posted a short write-up on this some months ago here (source code here) if you're interested. Hope this helps!

weird result for circle detection using opencv's HoughCircles()

import cv2

blur = cv2.GaussianBlur(pimg, (3, 3), 0)
edges = cv2.Canny(blur, 30, 55)
circles = cv2.HoughCircles(blur[:, :, 0], cv2.cv.CV_HOUGH_GRADIENT, 1, 8,
                           param1=10, param2=55, minRadius=0, maxRadius=0)
Above is my code showing how I use the function; the linked sample shows my results: from left to right, the original figure, the result, and the edges. The results (green circles) do not match the input at all. :( What mistakes have I made here? Please share your opinion. Thanks.
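For context on the parameters (my reading of the OpenCV documentation, not part of the original post): HoughCircles runs Canny internally, so the separately computed edges image above is never actually used; param1 is the upper Canny threshold and param2 is the accumulator threshold, where lower values yield more (possibly spurious) circles. A sketch of how the call is commonly set up on a single-channel grayscale image (cv2.HOUGH_GRADIENT is the constant name in modern OpenCV; the threshold values here are illustrative guesses, not tuned for this image):
import cv2

gray = cv2.cvtColor(pimg, cv2.COLOR_BGR2GRAY)  # assuming pimg is a BGR image
blur = cv2.GaussianBlur(gray, (3, 3), 0)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, 1, 8,
                           param1=55, param2=30, minRadius=0, maxRadius=0)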
