I have coded a program which dynamically inserts nodes and edges, overlaying the graph onto an image. My aim is to automatically set each edge's weight from the pixel distance between its two nodes (the edge length). Is this possible? If so, could you guide me in the right direction? Thank you.
JUNG doesn't exactly have a native notion of edge weights (or any other edge- or vertex-related metadata). What it has instead is a convention for how to tell algorithms that need such metadata how to access it. For more information, see the "User Data" section here: https://sourceforge.net/apps/trac/jung/wiki/JUNGManual
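A minimal sketch of that convention, assuming JUNG 2.x and that node positions come from the Layout used to draw the overlay (the method name here is illustrative): compute the Euclidean pixel distance between an edge's endpoints inside a Transformer, then hand it to any algorithm that accepts edge weights, e.g. new DijkstraShortestPath<V, E>(graph, pixelDistance(graph, layout)).

import java.awt.geom.Point2D;
import org.apache.commons.collections15.Transformer;
import edu.uci.ics.jung.algorithms.layout.Layout;
import edu.uci.ics.jung.graph.Graph;
import edu.uci.ics.jung.graph.util.Pair;

// Edge "weight" = Euclidean pixel distance between the edge's endpoints.
public static <V, E> Transformer<E, Double> pixelDistance(
        final Graph<V, E> graph, final Layout<V, E> layout) {
    return new Transformer<E, Double>() {
        public Double transform(E edge) {
            Pair<V> ends = graph.getEndpoints(edge);
            Point2D p1 = layout.transform(ends.getFirst());  // on-screen position
            Point2D p2 = layout.transform(ends.getSecond());
            return p1.distance(p2);                          // edge length in pixels
        }
    };
}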
How does filter size affect image understanding?
For example, if our data is images of human faces, what is the difference between using a 3×3 filter and a 7×7 filter?
Does increasing the filter size let the network distinguish more shapes and textures?
Yes, you are correct, to some extent.
Increasing the kernel size enlarges each filter's receptive field, so the model can respond to larger and more complex shapes and textures. For example:
Conv2D(filters=x, kernel_size=(7,7)) can differentiate more shapes than Conv2D(filters=x, kernel_size=(3,3)) can.
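For intuition, here is a minimal sketch (Keras; input shape and layer choices are illustrative assumptions) that builds the same tiny network with a 3×3 and a 7×7 first convolution; each unit in the 7×7 version sees a 7×7 pixel neighbourhood instead of a 3×3 one, i.e. a larger receptive field per layer.

from tensorflow.keras import layers, models

def tiny_cnn(kernel_size):
    # One conv layer is enough to compare receptive fields.
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),             # e.g. grayscale face crops
        layers.Conv2D(16, kernel_size, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])

small = tiny_cnn((3, 3))   # each unit sees a 3x3 neighbourhood
large = tiny_cnn((7, 7))   # each unit sees a 7x7 neighbourhood
small.summary()
large.summary()

Note that in practice stacks of 3×3 convolutions are often preferred over a single large kernel, since two stacked 3×3 layers already cover a 5×5 receptive field with fewer parameters.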
I'm going to be more specific about the situation:
I've captured a screenshot from the game DotA 2. The information I want to extract is which objects, e.g. heroes (including name, HP, ...), creeps (including which side), towers, etc., are visible in the image and where they are. One problem comes from the fact that in DotA 2 many of these objects can be viewed from many perspectives, so let's reduce the problem and assume that every object has only one orientation. How might this problem be solved quickly enough to recognise all objects in real time at about 30 fps? Any help or suggestions are welcome.
I think you have the right tags: CNN for image segmentation. My point is that with so many different objects seen from different viewpoints and scales (I guess you can zoom in/out on your heroes/objects), the easiest approach (but the heaviest in terms of computation) is to build one CNN for each type of object.
But images would help a lot to get a better understanding of the problem.
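A minimal sketch of that one-CNN-per-object idea, assuming small fixed-size screenshot patches (all names, shapes, and the object list are illustrative, not DotA-specific values):

from tensorflow.keras import layers, models

def patch_classifier(input_shape=(48, 48, 3)):
    # Binary classifier: does this screenshot patch contain the object?
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(object present in patch)
    ])

# One detector per object type, each run over sliding windows of a frame.
detectors = {name: patch_classifier() for name in ["hero", "creep", "tower"]}

Running one such network per object type over every window is exactly the computational cost mentioned above, so to reach 30 fps you would likely need to share the convolutional trunk across detectors or move to a single multi-class detector once this works.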
I am making an application in which you can apply different effects to photos using the GPUImage framework by Brad Larson. I want to add an X-ray effect filter to the GPUImage app. Any pointers will be appreciated.
You want something like the GPUImageColorInvertFilter:
If that doesn't produce the exact effect you want, you could create a custom filter based on it and have your fragment shader first convert to luminance, then apply a greenish tint based on the inverse of the pixel's luminance. That would give the X-ray effect you're describing.
I'll leave the coding of such a shader as an exercise for the reader.
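Following the recipe above, a sketch of such a fragment shader might look like this (the green tint constants are arbitrary assumptions); it could be loaded into a custom filter via GPUImageFilter's initWithFragmentShaderFromString:.

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
    // Convert to luminance (Rec. 601 weights).
    lowp float luminance = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    // Invert the luminance, then tint greenish for the X-ray look.
    lowp float inverted = 1.0 - luminance;
    gl_FragColor = vec4(inverted * vec3(0.6, 1.0, 0.8), color.a);
}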
I'm writing an iOS app that can create a word cloud or word mosaic with a pattern selected by the user. My question is: how can I calculate the coordinates of each word, considering that the size of each word is randomly chosen? Or is there another approach?
Note that the pattern I want to work on is like the one in the reference below, but my app will fill the white areas with the word mosaic while preserving the user's photo.
[EDIT]
My algorithm right now is to randomly set the size and position of each word and just put it there. How can I check that each word I place doesn't intersect the other words or the photo?
Reference: http://www.imagechef.com/ic/word_mosaic/
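A minimal Swift sketch of the intersection test being asked about (the photo bounds and all names are illustrative assumptions; a non-rectangular photo mask would need a per-pixel test instead): keep the bounding boxes of already-placed words and reject any candidate rectangle that overlaps them or the photo.

import CoreGraphics

var placedWords: [CGRect] = []
let photoRect = CGRect(x: 100, y: 100, width: 200, height: 200) // assumed photo bounds

func canPlace(_ candidate: CGRect) -> Bool {
    if candidate.intersects(photoRect) { return false }         // don't cover the photo
    return !placedWords.contains { $0.intersects(candidate) }   // don't cover other words
}

// Rejection sampling: try random sizes/positions until one fits.
func tryPlace(_ candidate: CGRect) -> Bool {
    guard canPlace(candidate) else { return false }
    placedWords.append(candidate)
    return true
}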
I am new to Quartz, and I would like to draw a candlestick chart in iOS, but I don't know how.
How can I draw such a chart? Are there any good examples for iOS?
Core Plot may be able to do it. If not, it's probably a pretty easy thing to add to the Core Plot library.
http://code.google.com/p/core-plot/
You can do it with SwiftCharts; there are two ready-to-use candlestick examples, one interactive and one non-interactive (with less overhead, in case you don't need interactivity).
(Disclosure: I'm the author)
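For the Quartz route the question asks about, here is a minimal sketch of drawing a single candlestick with Core Graphics (all values and names are illustrative assumptions):

import UIKit

struct Candle { let open, high, low, close: CGFloat }

final class CandleView: UIView {
    var candle = Candle(open: 40, high: 90, low: 20, close: 70)

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        let x = rect.midX
        // Map price to a y coordinate (higher price = higher on screen).
        func y(_ price: CGFloat) -> CGFloat { rect.maxY - price }

        // Wick: vertical line from low to high.
        ctx.setStrokeColor(UIColor.black.cgColor)
        ctx.move(to: CGPoint(x: x, y: y(candle.low)))
        ctx.addLine(to: CGPoint(x: x, y: y(candle.high)))
        ctx.strokePath()

        // Body: filled rectangle between open and close, coloured by direction.
        let top = max(candle.open, candle.close)
        let bottom = min(candle.open, candle.close)
        let body = CGRect(x: x - 6, y: y(top), width: 12, height: top - bottom)
        ctx.setFillColor((candle.close >= candle.open ? UIColor.green : UIColor.red).cgColor)
        ctx.fill(body)
    }
}

A real chart would loop over an array of Candle values and scale prices to the view's height, but the per-candle drawing is just these two primitives.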