I'm new to ML and have been playing around with the CNTK tutorials. I've trained a couple of models successfully.
I completed the Transfer Learning tutorial (https://github.com/Microsoft/CNTK/blob/v2.1/Tutorials/CNTK_301_Image_Recognition_with_Deep_Transfer_Learning.ipynb) and created a flower recognition model.
When I import this model into the CNTKAzureTutorial01 API tutorial (https://github.com/Microsoft/CNTK/tree/master/Examples/Evaluation/CNTKAzureTutorial01/CNTKAzureTutorial01) it successfully produces a dense output.
The dense output is made up of an array of 102 decimal numbers, which I assume relate to the weightings of my 102 flower categories? I can't work out how to map the values in the array to the flower categories, e.g. Array[1] = Roses, Array[2] = Tulips.
I've been on the data owner's site, and while it lists the categories (http://www.robots.ox.ac.uk/~vgg/data/flowers/102/categories.html), I can't see how these categories map to my model output. I thought it might be alphabetical, but that stops halfway down...
There are two .mat files, which I've dumped in Python, but they don't explain anything either.
Any pointers?
You need to do some detective work on the internet to find this. Here's the list of names that correspond to the classes:
['pink primrose', 'hard-leaved pocket orchid', 'canterbury bells', 'sweet pea', 'english marigold', 'tiger lily', 'moon orchid', 'bird of paradise', 'monkshood', 'globe thistle', 'snapdragon', "colt's foot", 'king protea', 'spear thistle', 'yellow iris', 'globe-flower', 'purple coneflower', 'peruvian lily', 'balloon flower', 'giant white arum lily', 'fire lily', 'pincushion flower', 'fritillary', 'red ginger', 'grape hyacinth', 'corn poppy', 'prince of wales feathers', 'stemless gentian', 'artichoke', 'sweet william', 'carnation', 'garden phlox', 'love in the mist', 'mexican aster', 'alpine sea holly', 'ruby-lipped cattleya', 'cape flower', 'great masterwort', 'siam tulip', 'lenten rose', 'barbeton daisy', 'daffodil', 'sword lily', 'poinsettia', 'bolero deep blue', 'wallflower', 'marigold', 'buttercup', 'oxeye daisy', 'common dandelion', 'petunia', 'wild pansy', 'primula', 'sunflower', 'pelargonium', 'bishop of llandaff', 'gaura', 'geranium', 'orange dahlia', 'pink-yellow dahlia?', 'cautleya spicata', 'japanese anemone', 'black-eyed susan', 'silverbush', 'californian poppy', 'osteospermum', 'spring crocus', 'bearded iris', 'windflower', 'tree poppy', 'gazania', 'azalea', 'water lily', 'rose', 'thorn apple', 'morning glory', 'passion flower', 'lotus', 'toad lily', 'anthurium', 'frangipani', 'clematis', 'hibiscus', 'columbine', 'desert-rose', 'tree mallow', 'magnolia', 'cyclamen ', 'watercress', 'canna lily', 'hippeastrum ', 'bee balm', 'ball moss', 'foxglove', 'bougainvillea', 'camellia', 'mallow', 'mexican petunia', 'bromelia', 'blanket flower', 'trumpet creeper', 'blackberry lily']
courtesy of this page
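To turn the dense output into a label, take the argmax over the 102 scores and index into the list above. Here's a minimal Python sketch, assuming the network's output order matches that list (and note that Python indexing is 0-based, while the dataset's category page numbers the classes from 1):

import numpy as np

def predict_flower(output, class_names):
    # output: the 102 raw scores from the model's final dense layer
    # class_names: the 102 flower names above, in the same order
    scores = np.asarray(output, dtype=float)
    probs = np.exp(scores - scores.max())   # softmax, for readable confidences
    probs /= probs.sum()
    best = int(np.argmax(probs))
    return class_names[best], float(probs[best])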
I have the following overpass turbo query:
[out:json];
area[name="Zagreb"];
(
node["tourism"~"museum|gallery"](area);
node["amenity"~"cafe|bar"](area);
);
out center;
You can run it here: https://overpass-turbo.eu/s/1hmp
The problem is that it only returns nodes from the first statement, i.e. the tourism~"museum|gallery" ones in this case, but not the amenity~"cafe|bar" ones.
I based my query on this answer, where both node sets do get returned: Find multiple tags around coordinates with Overpass API
[out:json];
(
node["tourism"~"museum|gallery"](around:500,53.866444, 10.684738);
node["amenity"~"cafe|bar"](around:500,53.866444, 10.684738);
);
out center;
You can run this original one here: https://overpass-turbo.eu/s/1hml
Except that I changed the 'around' filter to the area name="Zagreb". And this clearly works (although just for one of the node statements).
Does anyone have any ideas on how to get both node queries (tourism~"museum|gallery" and amenity~"cafe|bar") to work within an area?
Thanks so much!
Lovro
You need to store the area in a named input set (below it's named ".myarea"); otherwise the result of the first node statement overwrites the area in the default input set (called ._), so it is no longer available for the second node query.
[out:json];
area[name="Zagreb"]->.myarea;
(
node["tourism"~"museum|gallery"](area.myarea);
node["amenity"~"cafe|bar"](area.myarea);
);
out center;
Quite a frequent issue, by the way, and I'm sure there are several other posts that already deal with this error.
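If you want to verify programmatically that both node types come back, here's a minimal Python sketch that POSTs the corrected query to the public Overpass endpoint (the requests library is assumed):

import requests

query = """
[out:json];
area[name="Zagreb"]->.myarea;
(
  node["tourism"~"museum|gallery"](area.myarea);
  node["amenity"~"cafe|bar"](area.myarea);
);
out center;
"""

# POST the query to the public Overpass API endpoint
resp = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
elements = resp.json()["elements"]
print(len(elements), "nodes returned")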
Is there a way to use x,y coordinates stored as attributes of each node to lay out a graph, using coordinates that were calculated by another program?
And if not, would it be possible to implement this somehow - how does one get started on this?
Is there a standard way to provide the node positions to the cytoscape.js web viewer?
Background: it is trivial to use Python networkx to calculate some layouts that are not supported in Cytoscape, and it would also be trivial to store the coordinates as node attributes. All that would then be needed is some way for Cytoscape to use those coordinates as node positions instead of running a layout algorithm.
This is quite an easy thing to do. Many of the demos use this functionality, as you can see here:
1: FCose Demo
2: CoSE Bilkent Demo
3: d3-force Demo
4: Euler Compound Demo
5: Spread Demo
As you can see, there is an abundance of examples for this in the demos, but also in the docs. You can see one here and here:
// can use reference to eles later
var eles = cy.add([
{ group: 'nodes', data: { id: 'n0' }, position: { x: 100, y: 100 } },
{ group: 'nodes', data: { id: 'n1' }, position: { x: 200, y: 200 } },
{ group: 'edges', data: { id: 'e0', source: 'n0', target: 'n1' } }
]);
The JSON used in the .add() function can be created in your JS application or directly in Python and added to the graph, as some of the examples do.
In general, you should take some time and skim through the docs. The ability to position nodes via x and y is quite basic and is covered in one of the first pages of the docs.
If you don't understand the description in the docs and the examples I provided, please edit your question and add your current code as a Minimal, Reproducible Example, where you can show your efforts to implement the positioning.
Edit:
As @maxkfranz pointed out, the preset layout plays a big role here. The documentation states this in the Initialisation chapter:
If you want to specify your node positions yourself in your elements JSON, you can use the preset layout — by default it does not set any positions, leaving your nodes in their current positions (i.e. specified in options.elements at initialisation time).
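Since the question mentions computing layouts with networkx, here is a minimal Python sketch of how the elements JSON (with preset positions) could be produced; the karate-club graph and spring_layout are just illustrative assumptions:

import json
import networkx as nx

G = nx.karate_club_graph()            # stand-in for your graph
pos = nx.spring_layout(G, scale=300)  # any layout networkx offers

# Cytoscape.js elements with preset x/y positions
elements = [
    {"group": "nodes", "data": {"id": str(n)},
     "position": {"x": float(x), "y": float(y)}}
    for n, (x, y) in pos.items()
] + [
    {"group": "edges", "data": {"id": f"e{i}", "source": str(u), "target": str(v)}}
    for i, (u, v) in enumerate(G.edges())
]

print(json.dumps(elements, indent=2))  # pass to cy.add(), with layout: 'preset'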
I am new to using Cytoscape so things that are "easy" are not so easy for me. I often don't even know the right way to ask a question. Everyone has things that are hard and things that are easy, and step by step we expand our knowledge so what is hard today may be easy tomorrow.
Anyway, here is something that may be a part of the solution you are looking for:
In the Cytoscape desktop application, you can create a "Style" that maps a node attribute to "X Location" and another node attribute to "Y Location".
I have downloaded a dataset from the UCI archive called mammographic mass data. I saved the file in Excel and then saved it as a .csv file. The attribute information for the data set is:
Attribute Information:
BI-RADS assessment: 1 to 5 (ordinal)
Age: patient's age in years (integer)
Shape: mass shape: round=1 oval=2 lobular=3 irregular=4 (nominal)
Margin: mass margin: circumscribed=1 microlobulated=2 obscured=3 ill-defined=4 spiculated=5 (nominal)
Density: mass density high=1 iso=2 low=3 fat-containing=4 (ordinal)
Severity: benign=0 or malignant=1 (binominal)
I open the file in the Weka Experimenter and try to run it, but I get the following error message:
13:01:56: Started
13:01:56: Class attribute is not nominal!
13:01:56: Interrupted
13:01:56: There was 1 error
I have tried changing the attribute to the class attribute in the Explorer, but that has not worked. Any suggestions would be great :)
What you need is a filter, more specifically a Discretize filter, to preprocess your data.
For example, assuming ins is the Instances object where your data set is stored, the following code shows how to use the filter.
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Discretize;

Discretize filter = new Discretize();
filter.setOptions(...); // set options, e.g. the number of bins and the attribute range
filter.setInputFormat(ins); // ins is the Instances object holding your data
ins = Filter.useFilter(ins, filter);
When I create the standard detector...
#include <cstdio>
#include <opencv2/objdetect/objdetect.hpp>
using namespace cv;
using namespace std;

HOGDescriptor hog; // default parameters: 64x128 detection window
static vector<float> detector = HOGDescriptor::getDefaultPeopleDetector();
if (detector.empty()) {
    fprintf(stderr, "ERROR: getDefaultPeopleDetector returned an empty vector\n");
    return -1;
}
hog.setSVMDetector(detector);
vector<Rect> rects; // detected bounding boxes
hog.detectMultiScale(img, rects); // img: the input image, declared elsewhere
...all works fine.
But!
When I create my own classifier using the "Classifier Tool For OpenCV" (classifieropencv.codeplex.com), I can't find the object. I use all the default parameters: winSize, blockSize, blockStride, cellSize and others. Why? Has anyone used this tool to create classifiers for HOG detection? Has anyone used HOGDescriptor to detect their own object (without getDefaultPeopleDetector)?
Thanks!
This tool is useful: "Classifier Tool For OpenCV" (classifieropencv.codeplex.com)
The parameters in this tool (when you create the classifier) must match the parameters in your OpenCV code (when you use the classifier).
Here is a manual in Russian, but it has many pictures and videos, and is clear.
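To illustrate the point about matching parameters, here is a minimal Python sketch; the sizes shown are OpenCV's people-detector defaults and the detector file name is hypothetical, so substitute the exact values and weight vector from your own training:

import cv2
import numpy as np

# These must match the values used when the classifier was trained
win_size     = (64, 128)
block_size   = (16, 16)
block_stride = (8, 8)
cell_size    = (8, 8)
nbins        = 9
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, nbins)

# my_detector.txt: hypothetical file holding the trained SVM weight vector
detector = np.loadtxt("my_detector.txt", dtype=np.float32)
hog.setSVMDetector(detector)

img = cv2.imread("test.jpg")
rects, weights = hog.detectMultiScale(img)
print(len(rects), "detections")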
I am trying to generate an .arff file from a CSV data file I have. I am totally new to Weka and started using it just a day ago. I am trying out a simple Twitter sentiment analysis for starters. I have generated training data in CSV. The contents of the CSV file are as follows:
tweet,affinScore,polarity
ATAUTHORcfoblog is giving away a $25 Amex gift card (enter to win over $600 in prizes!) http://t.co/JD8EP14c ,4,4
"American Express has always been my dark horse acquirer of ATAUTHORFoursquare. Bundle in Square-like payments & its a lite-retailer platform, no? ",0,1
African-American Demos Express Ethnic Identity Differently http://t.co/gInv4bKj via ATAUTHORmediapost ,0,3
Google ???????? Visa ? American Express http://t.co/eEZTSiHY ,0,4
Secrets to Success from Small-Business Owners : Lifestyle :: American Express OPEN Forum http://t.co/b85F8JX0 via ATAUTHOROpenForum ,2,1
RT ATAUTHORhunterwalk: American Express has always been my dark horse acquirer of ATAUTHORFoursquare. Bundle in Square-like payments & its a lite ... ,0,1
Winning Surveys $1500 american express Huggies Sweeps http://t.co/WoaTFowp ,4,1
I root for Square mostly because a small business that takes Square is also one that takes American Express. ,0,1
I dont know how bitch be acting American Express but they cards be saying DEBIT ON IT HAVE A ?? PLEASE!!! ,-5,2
Uh oh... RT ATAUTHORBlackArrowBella: I dont know how bitch be acting American Express but they cards be saying DEBIT ON IT HAVE A ?? PLEASE!!! ,-5,2
Just got another credit card. A Blue Sky card with American Express. Its gonna help pay for the honeymoon! ATAUTHORAmericanExpress ,-1,1
Follow ATAUTHORShaveMagazine and ReTweet this msg to be entered to #Win an American Express Gift card. Winners contacted bi-weekly by direct msg! ,2,4
American Express Gold zakelijk aanvragen: http://t.co/xheZwmbt ,0,3
RT ATAUTHORhunterwalk: American Express has always been my dark horse acquirer of ATAUTHORFoursquare. Bundle in Square-like payments & its a lite ... ,0,1
Here the first attribute is the actual tweet, the second is the AFINN score, and the third is the actual classification class (1 = Positive, 2 = Negative, 3 = Neutral, 4 = Spam).
Now I try to generate the .arff format from it using this code:
import weka.core.Instances;
import weka.core.converters.ArffSaver;
import weka.core.converters.CSVLoader;
import java.io.File;
public class CSV2Arff {
/**
* takes 2 arguments:
* - CSV input file
* - ARFF output file
*/
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.out.println("\nUsage: CSV2Arff <input.csv> <output.arff>\n");
System.exit(1);
}
// load CSV
CSVLoader loader = new CSVLoader();
loader.setSource(new File(args[0]));
Instances data = loader.getDataSet();
// save ARFF
ArffSaver saver = new ArffSaver();
saver.setInstances(data);
saver.setFile(new File(args[1]));
saver.setDestination(new File(args[1])); // not necessary in recent Weka versions
saver.writeBatch();
}
}
This generates .arff file that looks somewhat like:
@relation file
@attribute tweet {_ATAUTHORcfoblog_is_giving_away_a_$25_Amex_gift_card_(enter_to_win_over_$600_in_prizes!)_http://t.co/JD8EP14c_,'American_Express_has_always_been_my_dark_horse_acquirer_of__ATAUTHORFoursquare._Bundle_in_Square-like_payments_&_its_a_lite-retailer_platform,_no?_',African-American_Demos_Express_Ethnic_Identity_Differently_http://t.co/gInv4bKj_via__ATAUTHORmediapost_,Google_????????_Visa_?_American_Express__http://t.co/eEZTSiHY_,Secrets_to_Success_from_Small-Business_Owners_:_Lifestyle_::_American_Express_OPEN_Forum_http://t.co/b85F8JX0_via__ATAUTHOROpenForum_,RT__ATAUTHORhunterwalk:_American_Express_has_always_been_my_dark_horse_acquirer_of__ATAUTHORFoursquare._Bundle_in_Square-like_payments_&_its_a_lite_..._
@data
_ATAUTHORcfoblog_is_giving_away_a_$25_Amex_gift_card_(enter_to_win_over_$600_in_prizes!)_http://t.co/JD8EP14c_,4,4
'American_Express_has_always_been_my_dark_horse_acquirer_of__ATAUTHORFoursquare._Bundle_in_Square-like_payments_&_its_a_lite-retailer_platform,_no?_',0,1
African-American_Demos_Express_Ethnic_Identity_Differently_http://t.co/gInv4bKj_via__ATAUTHORmediapost_,0,3
Google_????????_Visa_?_American_Express__http://t.co/eEZTSiHY_,0,4
Secrets_to_Success_from_Small-Business_Owners_:_Lifestyle_::_American_Express_OPEN_Forum_http://t.co/b85F8JX0_via__ATAUTHOROpenForum_,2,1
RT__ATAUTHORhunterwalk:_American_Express_has_always_been_my_dark_horse_acquirer_of__ATAUTHORFoursquare._Bundle_in_Square-like_payments_&_its_a_lite_..._,0,1
I am new to Weka but from what I have read, I have a suspicion that this ARFF is not correctly formed. Can anyone comment on it?
Also, if it is wrong, can someone point out where exactly I am going wrong?
Make sure to set the type of the tweet attribute to arbitrary strings, not a categorical attribute, which seems to be the default. Otherwise it doesn't scale well, as it puts a copy of every tweet into the type definition.
Note that for actual analysis of the tweet contents, you will need to preprocess them much further; you will likely want a sparse vector representation of the text instead of one long string.
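For reference, the kind of header you'd be aiming for looks roughly like this (a sketch; the relation name is arbitrary, and if you plan to classify on polarity, that attribute needs to be nominal for most classifiers):

@relation tweets
@attribute tweet string
@attribute affinScore numeric
@attribute polarity {1,2,3,4}

Recent Weka versions' CSVLoader can also force columns to be loaded as STRING (its string-attributes / -S option), which saves editing the header by hand; check the options available in your version.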
If you are using the UI, as previously mentioned, you can just load the file directly into Weka.
If you just want to generate an ARFF file from the CSV file, you can do the following. This was taken from the CSV2Arff tool, which comes as part of Weka.
import weka.core.Instances;
import weka.core.converters.ArffSaver;
import weka.core.converters.CSVLoader;
import java.io.File;
public class CSV2Arff {
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.out.println("\nUsage: CSV2Arff <input.csv> <output.arff>\n");
System.exit(1);
}
// load CSV
CSVLoader loader = new CSVLoader();
loader.setSource(new File(args[0]));
Instances data = loader.getDataSet();
// save ARFF
ArffSaver saver = new ArffSaver();
saver.setInstances(data);
saver.setFile(new File(args[1]));
saver.setDestination(new File(args[1])); // not necessary in recent Weka versions
saver.writeBatch();
}
}