I am urgently looking for a small (around 50 nodes) link prediction dataset with node attributes (preferably real-valued attributes).
The network can be directed or undirected (preferably directed), but it should preferably not be bipartite.
Of course, I am looking for a free, ready-to-use dataset.
Perhaps something related to a social network, author communications, etc.
Does anyone know of a specific dataset? I would really appreciate being pointed to one.
This Google+ social network dataset with node attributes should be a good fit for you. You can find it here: https://gonglab.pratt.duke.edu/google-dataset
The 'Social Network Analysis Interactive Dataset Library' at http://www.growmeme.com/sna/visual has over 150 datasets and lets you filter by directed/undirected, bipartite = true/false, and whether community information is present (among other things).
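Once you have such a graph, a bare-bones link-prediction experiment can look like the sketch below (networkx, with a random toy graph and made-up attributes standing in for whichever dataset you end up downloading):

```python
import random
import networkx as nx

# Toy stand-in for a ~50-node attributed graph (replace with the real dataset).
G = nx.gnp_random_graph(50, 0.15, seed=1)
for n in G.nodes:
    G.nodes[n]["feat"] = [random.random() for _ in range(4)]  # fake real-valued attributes

# Hide 10% of the edges as positive test examples.
random.seed(1)
edges = list(G.edges())
random.shuffle(edges)
test_pos = edges[: len(edges) // 10]
G_train = G.copy()
G_train.remove_edges_from(test_pos)

# Sample an equal number of non-edges as negatives.
test_neg = random.sample(list(nx.non_edges(G)), len(test_pos))

# Score candidate pairs with a classic heuristic (Jaccard coefficient).
scores = {(u, v): s for u, v, s in nx.jaccard_coefficient(G_train, test_pos + test_neg)}
hits = sum(scores[e] > 0 for e in test_pos)
print(f"{hits}/{len(test_pos)} held-out edges get a positive Jaccard score")
```

Heuristics like Jaccard or Adamic-Adar give a quick baseline before you bring the node attributes into a learned model.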
I have some work experience in machine learning, specifically in computer vision. However, when I develop at work I use tools that abstract away compute and storage behind a cloud API.
Now I want to develop a deep NN in my spare time and on my own PC - I cannot use my employer's resources or code.
What is the best setup for compute and storage, from your experience, that I can get on a budget? To be specific, I am aiming to:
have ~1M samples of data - images of decent size, say 500x500;
try a variety of models, including CNN and transformer architectures.
Of course, I want training to finish in a reasonable time (up to a day).
I can save checkpoints and resume if that helps with the budget (roughly the pattern sketched below), but preferably I would train each model in one go.
I know some people use Google Colab, which offers GPU access. What are the pros and cons, and what alternatives are out there?
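For context, the checkpoint/resume pattern I have in mind is roughly the following (PyTorch here; the model, path, and hyperparameters are just placeholders):

```python
import torch
import torch.nn as nn

# Placeholder model/optimizer; swap in whatever CNN/transformer is being trained.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 500 * 500, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

ckpt_path = "checkpoint.pt"  # hypothetical path

def save_checkpoint(epoch: int) -> None:
    # Save both model and optimizer state so training can continue seamlessly.
    torch.save({
        "epoch": epoch,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }, ckpt_path)

def load_checkpoint() -> int:
    # Restore state and return the epoch to resume from.
    ckpt = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"] + 1

# In the training loop: call save_checkpoint() after each epoch so a
# preemptible or time-limited GPU session can be resumed instead of restarted.
```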
I am looking for beginner machine learning linear regression problems. I searched on Kaggle but couldn't find a suitable one. Can you please suggest a beginner problem from Kaggle, or from any other platform?
Thanks in advance.
Kaggle has tons of linear regression notebooks and datasets to learn from; the most popular ones are probably about house pricing (given certain house features, predict the price).
Here's a new one I'm looking forward to solving:
Ben & Jerry's flavours and ratings ---> products.csv
The main goal would be to predict which ice cream flavours are better received based on their ingredients.
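If it helps, the house-pricing kind of task boils down to a few lines of scikit-learn; here is a minimal sketch with made-up features standing in for the real CSV columns:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Fake house-pricing-style data; in practice X would come from the Kaggle CSV
# (e.g. square footage, number of bedrooms, age of the house).
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))
y = 300_000 * X[:, 0] + 50_000 * X[:, 1] - 20_000 * X[:, 2] + rng.normal(0, 5_000, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```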
I am something of a newbie to the NLP world, but I have just started my NLP project.
My task is to infer a hidden sentence in a paragraph.
Let me show you an example question.
[Image: a multiple-choice question about inferring a clause in the blank]
I want my machine learning model to extract a meaningful phrase from the given text (in the above image, a paragraph).
I know that my question sounds quite ambiguous; even a small clue would help.
Thank you in advance for your responses.
Skip-thought vectors are a model for predicting sentences from their context by building sentence-level vectors. They might be useful, especially in combination with context2vec, if you want to build a custom model.
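As a rough illustration of the general idea (score each candidate sentence against the surrounding paragraph and pick the closest), here is a sketch using the sentence-transformers library as a stand-in, since pretrained skip-thought models are harder to come by these days:

```python
from sentence_transformers import SentenceTransformer, util

paragraph = "..."  # the paragraph containing the blank
candidates = [
    "candidate clause A",
    "candidate clause B",
    "candidate clause C",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
ctx_emb = model.encode(paragraph, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Pick the candidate whose embedding is most similar to the context.
scores = util.cos_sim(ctx_emb, cand_embs)[0]
print(candidates[int(scores.argmax())])
```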
I am exploring new architectures for LSTMs. I have looked into a few commonly used datasets, such as IMDB movie reviews and sine waves, but haven't found a good generalizable dataset. If MNIST is the "hello world" of convolutional networks, what would be the equivalent dataset for LSTMs?
You can look at examples where people use simpler models, like HMMs, and try running an LSTM on the same data.
For example, you can try running this POS tagging code (the pos_* part) from lazyprogrammer's course (here is a script that downloads and handles the data). The code contains models that use LSTMs in TensorFlow/Theano as well as HMMs (and even logistic regression, which does not take the sequential nature of the data into account).
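If you want a tiny self-contained starting point before digging into that course code, here is a toy Keras LSTM tagger on made-up data (vocabulary size, tags, and shapes are arbitrary; the real inputs would come from the POS corpus):

```python
import numpy as np
import tensorflow as tf

# Toy sequence-tagging setup: 1000 sentences of length 20, vocab of 500 words,
# 10 possible tags per token.
num_sent, seq_len, vocab, num_tags = 1000, 20, 500, 10
X = np.random.randint(1, vocab, size=(num_sent, seq_len))
y = np.random.randint(0, num_tags, size=(num_sent, seq_len))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 64),
    tf.keras.layers.LSTM(64, return_sequences=True),  # one output per time step
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(num_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32)
```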
I am working on a project in which I am supposed to detect human beings in a live video stream coming from a UAV's camera module. I do not need to draw rectangles or boxes around detected subjects; I just need to answer with a yes or no. I am fairly new to OpenCV and have no prior experience.
What I have tried:
I started by training an SVM on HOG features. My team gathered a few images from a UAV we had, with people in them, and I trained the SVM on crops of those people. We got unsatisfactory results when we ran the trained detector on a video shot from the sky with people in it. Moreover, processing each frame turned out to be very slow, so the system became unusable (it did work on still images to some extent).
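For reference, a simplified version of our per-frame yes/no check, using OpenCV's built-in HOG people detector instead of our custom-trained SVM (the video path is a placeholder):

```python
import cv2

# OpenCV's built-in HOG + linear SVM people detector as a baseline.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("uav_clip.mp4")  # placeholder path
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 5:  # skip frames to keep it closer to real time
        continue
    small = cv2.resize(frame, (640, 360))  # downscale before detection for speed
    rects, weights = hog.detectMultiScale(small, winStride=(8, 8))
    print("person present" if len(rects) > 0 else "no person")
cap.release()
```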
My question:
I wanted to know whether there is some other technique, library, etc. that I could try to achieve good results. Please point me to the next step.