How to get a network into protein shape in Cytoscape?

I have a protein network loaded into cytoscape.
I want to arrange the network nodes into a shape that corresponds to that of the protein structure.
This is so that I can superimpose the network image onto the protein structure image.
I tried RINalyzer and structureViz2, but they didn't help. Any solutions?

Can you be more specific about what didn't work with RINalyzer and structureViz2? The layout provided by RINalyzer usually does a pretty good job for me.
-- scooter

Related

Connecting discontinuous line segments of spinal roots

I have a project to build a 3D model of the spinal roots in order to simulate their stimulation by an electrode. So far I've been handed two things: the extracted positions of the spinal roots (from the CT scans) and the segments selected from those points (see both pictures below). The data I'm given is in 3D, and all the segments are clearly distinct, although it does not look that way in the figures below because they are zoomed out.
[Figure: points and segments extracted from the spinal cord CT scans]
[Figure: selected segments out of the points]
I'm now trying to connect these segments so as to end up with the centrelines of all the spinal roots. The segments are not classified; they are simply drawn in different colors to tell them apart on the plot. The task is then to vertically connect the segments that look to be part of the same root path.
I've been reviewing the literature on how I could tackle this issue. As I'm still quite new to the field, I don't have much intuition about what could work and what could not. I have two subtasks to solve here, connecting the lines and classifying the roots, and while connecting the segments after classification seems like no big deal, classifying them seems considerably harder. So I'm not sure in which order to proceed.
Here are a few options I'm considering:
Use a Kalman filter to track the vertical lines through the selected segments and across the missing parts.
Use a Hough transform to detect vertical lines: express the spinal root segments in parameter space, see how they cluster, and see whether anything can be inferred from there.
Apply some sort of SVM classification to the segments to group them by root. I could characterize each segment by its orientation and position, classify the segments by similarity in those parameters, and then connect them. Alternatively, take the endpoint of each segment and connect it to one of its nearest neighbours if their orientations and positions match (see the sketch below).
I'm open to any suggestions, any piece of advice, or any other ideas on how to deal with the current problem.
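To illustrate that last option, here is the kind of greedy endpoint matching I have in mind. A minimal sketch: each segment is assumed to be an (N, 3) NumPy array of points ordered from top to bottom, and max_gap / min_cos are arbitrary thresholds I'd have to tune on the real data:

```python
import numpy as np

def direction(seg):
    # Unit vector from the segment's first point to its last point.
    d = seg[-1] - seg[0]
    return d / np.linalg.norm(d)

def connect_segments(segments, max_gap=5.0, min_cos=0.9):
    # Greedy matching: link each segment's bottom endpoint to the closest
    # top endpoint of another segment, if their orientations agree.
    links = []
    for i, a in enumerate(segments):
        best_gap, best_j = max_gap, None
        for j, b in enumerate(segments):
            if i == j:
                continue
            gap = np.linalg.norm(b[0] - a[-1])        # endpoint distance
            cos = float(direction(a) @ direction(b))  # orientation similarity
            if gap < best_gap and cos > min_cos:
                best_gap, best_j = gap, j
        if best_j is not None:
            links.append((i, best_j))
    return links  # pairs of segment indices judged to belong to one root
```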

Neural Network for Learning Cut VS Uncut Grass

I've got a script that takes pictures like the one provided, with colored loops encircling either uncut grass, cut grass, or other background details (for purposes of rejecting non-grass regions), and generates training data in the form of a set of small images cropped from inside those colored loops. I'm struggling to find which type of neural network would work best for learning from this training data and telling me, in real time from a video feed mounted on a lawn mower, which sections of the image are uncut grass and which are cut grass as it mows through a field. Is anyone here experienced with neural networks who can either suggest some I could use, or just point me in the right direction?
Try a segmentation network. There are many types of segmentation networks.
Mind that neural networks need training data. Your case (detecting cut vs. uncut grass) is fairly specialized, which means existing pretrained models may not fit your purpose. If so, you'll need a dataset of images and annotations. There are also tools for labeling segmentation images.
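As a rough illustration of the training loop such a network needs (the tiny architecture here is a made-up toy, and the random tensors stand in for your cropped grass photos and their masks), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

# Toy data: eight 3-channel 64x64 "images" with binary masks (1 = uncut).
# These random tensors stand in for your photos and hand-drawn labels.
images = torch.rand(8, 3, 64, 64)
masks = (torch.rand(8, 1, 64, 64) > 0.5).float()

# Tiny fully convolutional net: one cut-vs-uncut logit per pixel,
# same spatial size as the input.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()

# On the mower, threshold each frame's sigmoid output to get a cut/uncut map.
cut_map = torch.sigmoid(model(images[:1])) > 0.5
```

In practice you'd swap in a real segmentation architecture (e.g. a U-Net) and your own image/mask pairs; the point is just that the annotations are per-pixel masks rather than whole-image labels.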
Hope it helps.

Having a neural network output a Gaussian distribution rather than one single value?

Let's say I have a neural network with a single output neuron. To outline the scenario: the network gets an image as input and should find a single object in that image. To simplify things, it should just output the x-coordinate of the object.
However, since the object can be at various locations, the network's output will certainly have some noise on it. Additionally, the image can be a bit blurry.
Therefore I thought it might be a better idea to have the network output a Gaussian distribution over the object's location.
Unfortunately, I am struggling to model this idea. How would I design the output? A 100-dimensional vector if the image is 100 pixels wide, so that the network can fit a Gaussian distribution into this vector and I just need to locate the peak to get the approximate object location?
I'm also failing to figure out the cost function and teacher signal. Would the teacher signal be a perfect Gaussian distribution centered on the exact x-coordinate of the object?
And how would I model the cost function? Currently I use either softmax cross-entropy or simply a squared error between the network's output and the real x-coordinate.
Is there maybe a better way to handle this scenario, e.g. a better distribution, or some other way to have the network output more than a single value carrying no information about the noise?
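For concreteness, here is a minimal sketch of the vector idea above (the width of 100 and sigma = 2.0 are arbitrary choices, and the random logits stand in for a real network's output):

```python
import torch
import torch.nn.functional as F

WIDTH = 100  # image width in pixels: one output logit per x-coordinate

def teacher_signal(true_x, sigma=2.0):
    # Discretized Gaussian over the image columns, centered on the true x.
    xs = torch.arange(WIDTH, dtype=torch.float32)
    g = torch.exp(-0.5 * ((xs - true_x) / sigma) ** 2)
    return g / g.sum()  # normalize into a valid target distribution

logits = torch.randn(WIDTH, requires_grad=True)  # stand-in for the net's output
target = teacher_signal(true_x=42.0)

# Cross-entropy between the network's softmax and the Gaussian teacher signal.
loss = -(target * F.log_softmax(logits, dim=0)).sum()
loss.backward()

# At test time, the predicted location is simply the peak of the output.
pred_x = torch.argmax(logits).item()
```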
Sounds like what you really need is a convolutional network.
You could train a network to recognize your target object when it's positioned in the center of the network's receptive field. You can then create a moving window, at each step feeding the portion of the larger image under that window into the net. If you keep track of the outputs of the trained network for each (x,y) position of the window, some locations of the window will produce better matches than others. Once you've covered the whole image, you can pick the position with the maximum network output as the position where the target object is most likely located.
To handle scale and rotation variations, consider creating an image pyramid, or sets of images at different scales and rotations that are versions of the original image. Then sweep over those images to find the target object.
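A bare-bones sketch of that moving-window search (the window size, stride, and the toy brightness "network" are placeholders; in practice score_window would be your trained net applied to a patch):

```python
import numpy as np

def locate(image, score_window, win=32, stride=8):
    """Slide a win x win window over the image and return the top-left
    (x, y) of the highest-scoring window."""
    best_score, best_xy = -np.inf, (0, 0)
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = score_window(image[y:y + win, x:x + win])
            if s > best_score:
                best_score, best_xy = s, (x, y)
    return best_xy

# Toy check: the "network" just measures mean brightness, so the window
# best covering the bright square wins.
img = np.zeros((128, 128))
img[40:72, 60:92] = 1.0
print(locate(img, score_window=lambda patch: patch.mean()))  # (56, 40)
```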

Difference between Connected Component labeling and Image Segmentation?

Can someone explain the difference between connected component labeling and image segmentation in image processing? I've read about these techniques and found that the outcomes of both are almost the same.
Segmentation is a problem statement: How do you assign a finite set of labels to each pixel in an image, ideally so that the labels correspond to real-world objects you're looking for?
Connected component labeling is (or can be seen as) one very simple approach to solving that problem: Simply assign the same unique label to connected sets of pixels that share some binary characteristic (e.g. brightness above some fixed threshold).
It is, however, by no means the only or the best approach: just google for "Graph cut segmentation" or "Watershed segmentation" to find examples of segmentations that simply aren't possible with connected component labeling.
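To make the "binary characteristic" point concrete, a small OpenCV sketch (the synthetic blobs just stand in for a real image):

```python
import cv2
import numpy as np

# Synthetic image standing in for a real one: two bright blobs on black.
img = np.zeros((100, 100), np.uint8)
cv2.circle(img, (30, 30), 10, 255, -1)
cv2.circle(img, (70, 70), 10, 255, -1)

# The binary characteristic: brightness above a fixed threshold.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Each connected set of foreground pixels gets its own unique label.
num_labels, labels = cv2.connectedComponents(binary)
print(num_labels)  # 3: the background plus the two blobs
```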
Connected components labeling scans an image and groups its pixels into components based on pixel connectivity. Connected components, in a 2D image, are clusters of pixels with the same value that are connected to each other through either 4-pixel or 8-pixel connectivity.
Image segmentation can use any applicable algorithm to segment the voxels of interest that satisfy the feature the user is looking for. This means that not all segmented voxels are necessarily connected to each other. Sometimes the volume of interest (VOI) is obtained using connected components, depending on what the VOI is. But most of the time, for example if you are looking for a specific shape in your image, you cannot use connected components alone, since not all the voxels in the VOI are connected together.

K-means for image clustering

I am new to image processing, and I am using k-means clustering for my assignment. I am having an issue: my friend told me that to use k-means in OpenCV, we need to pass only the color of the object of interest and not the whole image.
This has confused me, as I am not sure how to obtain the color composition before applying k-means. Sorry about my English; I'll give an example. I have a picture with several colors, and let's say I want to obtain the blue cluster, which is a car. Does that mean I need to pass only the color blue to k-means?
Maybe I am totally wrong about this; I am unsure and have been struggling for several days now. I think I need a thorough explanation from an expert, which I hope to get here.
Thank you for your time.
To point you in the right direction, here are some hints:
you pass all the pixels to k-means, along with the desired number of clusters (or groups) to find
k-means processes your data and groups it into the specified number of clusters
you then take the pixels in the blue cluster (for example) and do with them what you want (see the sketch below)
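A rough OpenCV sketch of these steps ("car.jpg", K = 5, and the "most blue center" heuristic are all placeholder choices to adapt to your image):

```python
import cv2
import numpy as np

img = cv2.imread("car.jpg")  # hypothetical input image
pixels = img.reshape(-1, 3).astype(np.float32)  # all pixels, one row each

# Cluster ALL pixels into K color groups; no pre-selection of blue needed.
K = 5
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Pick the cluster whose center is "most blue" (OpenCV stores BGR, so
# channel 0 is blue) and keep only its pixels.
blue_cluster = int(np.argmax(centers[:, 0] - centers[:, 1:].mean(axis=1)))
mask = (labels.ravel() == blue_cluster).reshape(img.shape[:2])
result = np.where(mask[..., None], img, 0)
cv2.imwrite("blue_cluster.png", result)
```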
I hope this will help ;)
