It looks like a fancy new methodology named EBSE is coming our way in 2010.
Can someone explain it to me, please?
From the official website, "EBSE is concerned with determining what works, when and where, in terms of software engineering practice, tools and standards".
Basically, EBSE is inspired by medical practice and other professions with similar trajectories, and it tries to apply their empirical, down-to-earth approach to the often chaotic world of software development.
EBSE stands for Evidence-Based Software Engineering. The concept tries to bring evidence into the decisions made in software engineering.
The main instrument of EBSE is the systematic literature review (SLR). The concept is derived from medicine and was adapted by Kitchenham in 2004 in the paper Procedures for Performing Systematic Reviews. The idea behind the SLR is to obtain accurate data by analyzing other primary studies, eliminating possible bias that these studies may suffer from.
Since 2004, multiple authors have proposed changes to Kitchenham's procedure, but Kitchenham remains the leading authority on SLRs in software engineering.
Some popular SLR papers are Empirical studies of agile software development: A systematic review and Lessons from applying the systematic literature review process within the software engineering domain.
I don't see that evidence-based software engineering is any different from empirical or experimental software engineering (ESE). They all aim to replace opinion with a scientific epistemology for the creation of knowledge about how software is, or can be, created. The International Conference on Software Engineering (http://www.icse-conferences.org/) always has papers on this topic.
Do you mean evidence-based scheduling? The basic gist is that estimates for features in development should be based on statistics gathered about how long previously completed features took.
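A minimal sketch of that idea in Python, assuming you keep records of past estimates versus actuals (the numbers below are made up): sample historical estimate-to-actual "velocities" and run a Monte Carlo simulation over the estimates for the remaining features, so you get a distribution of ship dates rather than a single number.

```python
import random

# Hypothetical historical data: (estimated_hours, actual_hours) per completed feature.
history = [(4, 6), (8, 7), (2, 5), (16, 20), (3, 3), (6, 10)]

# Velocity = estimate / actual; values below 1 mean the feature ran over its estimate.
velocities = [est / act for est, act in history]

# Estimates (in hours) for the features still to be built.
remaining_estimates = [5, 12, 3, 8]

def simulate_total_hours(estimates, velocities, runs=10_000):
    """Monte Carlo: in each run, divide every estimate by a randomly
    sampled historical velocity and sum the resulting actuals."""
    totals = []
    for _ in range(runs):
        totals.append(sum(est / random.choice(velocities) for est in estimates))
    return sorted(totals)

totals = simulate_total_hours(remaining_estimates, velocities)
print("50% confidence:", totals[len(totals) // 2], "hours")
print("90% confidence:", totals[int(len(totals) * 0.9)], "hours")
```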
Thanks for making it this far on my post!
I am studying engineering, yet have a passion for programming and wish to apply computer science knowledge in my own research.
My question pertains to any resources this community has available and any advice you are willing to give about getting started in this broad field.
I’m mainly confused about ‘neural networks’ in relation to Deep Learning as well as implementation of algorithms.
I have slight Python and R knowledge.
Note: one of the other Stack Exchange sites is probably a better fit for this question.
In any case, for ML you can do just fine with basic Python/R. Most of the research and work on ML is currently (2018) based on TensorFlow and similar frameworks. To use the frameworks you don't really need a strong programming background to set up and train models (although it certainly helps). Actually, math/statistics will help you more, especially if you want to get to the bottom of it (i.e. reading the latest articles/papers, etc.).
Mainly I’m confused about ‘neural networks’ in relation to Deep Learning
"Deep Learning" is basically taking advantage of modern computing capabilities to train complex models (e.g. neural networks with many hidden layers) which a few years ago (e.g. 10 years ago) were unfeasible. Informally speaking, the more complex your network is, the more interesting are the things that it can learn.
as well as implementation of algorithms.
Typically, you will use an existing framework; you won't implement the algorithms yourself. Although, of course, implementing a multilayer perceptron yourself is always a good and fun learning exercise.
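If you do want to try that exercise, a minimal sketch of a one-hidden-layer perceptron in plain NumPy, trained on XOR with backpropagation, might look roughly like this (layer sizes, learning rate, and iteration count are arbitrary):

```python
import numpy as np

# XOR problem: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```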
Are there any tools that could help recognize the pattern of a data distribution, and then help make the decision of which ML algorithms to choose?
Firstly, you have to understand Machine Learning as a field, and have some understanding of its subfields. If you don't intuitively understand your tools, you won't be able to identify when to use them.
The idea you're talking about is called exploratory data analysis, and it can be very approachable if you think about it the right way. Think about it in terms of the scientific method:
First, look over the data, and any documentation about it.
Then, come to some hypotheses about the patterns that might exist.
Based on your understanding of ML, brainstorm some approaches that might give some insight into your hypotheses. For example, if you see that your proposed dependent variable can only take a handful of distinct values, you have a classification problem, and based on your input data, you should choose an appropriate approach.
The tools that you might find useful are plentiful, but a good start could be the programming language R, or Python. Both are very strong data science tools. R has a steeper learning curve, but is built with data science in mind. Python, on the other hand, is very easy to pick up, but you have more choices to make with regard to ML and data science libraries. With Python, look into pandas for CSV and data manipulation, and TensorFlow, Theano, or scikit-learn for data analysis and ML.
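As a rough starting point, a first pass of exploratory data analysis with pandas might look like the sketch below; the file name and the "label" column are placeholders for your own dataset.

```python
import pandas as pd

# Hypothetical file and column names -- substitute your own dataset.
df = pd.read_csv("my_dataset.csv")

# Step 1: look over the data and its basic statistics.
print(df.head())
print(df.describe(include="all"))
print(df.isna().sum())          # missing values per column

# Step 2: inspect the proposed dependent variable.
target = "label"                # assumed column name
print(df[target].value_counts())

# Step 3: a crude rule of thumb for choosing a family of approaches.
if df[target].nunique() <= 10:
    print("Few distinct values -> treat as classification.")
else:
    print("Many/continuous values -> treat as regression.")
```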
Hope this helps!
Disclaimer: although I know some things about big data and am currently learning some other things about machine learning, the specific area that I wish to study is vague, or at least appears vague to me now. I'll do my best to describe it, but this question could still be categorised as too vague or not really a question. Hopefully, I'll be able to reword it more precisely once I get a reaction.
So,
I have some experience with Hadoop and the Hadoop stack (gained via using CDH), and I'm reading a book about Mahout, which is a collection of machine learning libraries. I also think I know enough statistics to be able to comprehend the math behind the machine learning algorithms, and I have some experience with R.
My ultimate goal is making a setup that would make trading predictions and deal with financial data in real time.
I wonder if there're any materials that I can further read to help me understand ways of managing that problem; books, video tutorials and exercises with example datasets are all welcome.
Take the ML course on Coursera. It is a good introduction to ML algorithms which will tell you what ML can do and some general approaches:
https://www.coursera.org/course/ml
Also, to get a broader picture, I suggest Coursera's Data Science course:
https://www.coursera.org/course/datasci
Finally, a good book is Mahout in Action; it is more about solving practical matters with Mahout and has lots of examples and case studies.
I believe that after that you will have a better understanding of what you want to do next.
I have recently started studying Machine Learning and found that I need to refresh probability basics such as Conditional Probability, Bayes Theorem etc.
I am looking for online resources where I can quickly brush up on probability concepts with respect to Machine Learning.
The online resources I stumbled upon are either very basic or too advanced.
This might help: http://www.cs.cmu.edu/~tom/10601_fall2012/lectures.shtml
The above link is from Tom Mitchell's Machine Learning class at CMU. Videos are available too. You will gain a very good understanding of ML concepts if you go through all the videos (or just the first few videos for Conditional Probability, Bayes' Theorem, etc.).
The notions of conditional probability and Bayes' theorem are very basic themselves. It doesn't get any more basic than that in probabilistic modeling, you might say. Which suggests that you didn't look too closely at what you found, or didn't really do any search at all.
Off the top of my head, I can name two resources: first, any Coursera course dealing with probabilities or machine learning (see AI, Statistics One, or Probabilistic Graphical Models) contains these preliminaries. Second, there are a number of books on statistics freely available online, one example being Information Theory, Inference, and Learning Algorithms.
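For a feel of how basic these preliminaries are, here is a small worked example of Bayes' theorem in Python with made-up numbers (a 99% sensitive, 95% specific test for a condition affecting 1% of people):

```python
# P(disease | positive test) via Bayes' theorem, with invented numbers.
p_disease = 0.01
p_pos_given_disease = 0.99      # sensitivity
p_pos_given_healthy = 0.05      # false positive rate (1 - specificity)

# Total probability of a positive test result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(D | +) = P(+ | D) * P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # about 0.167
```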
I tried to read Digital Image Processing by Gonzalez/Woods but I found it difficult to understand/grasp. I have taken a graduate course in Computer Vision, which is more practically oriented, and I am doing a lot of cool stuff with OpenCV; however, I still feel I am swimming in higher abstractions and do NOT understand the basics beneath.
I am planning to read a book on Computer Vision/Image Processing during the Winter Break to solidify my understanding of the content, and would appreciate some must-read suggestions.
I have done assignments like camera calibration, image transforms, stitching images into panoramas, and Haar classification.
You should probably take a look at Szeliski's book
Hartley and Zisserman's book is also excellent.
Gonzalez and Woods (or Wintz in my day) is a very good introduction.
There is a more readable but less concise introduction, Image Processing, Analysis, and Machine Vision.
And since you are working with OpenCV, you could do worse than read the OpenCV book.
Have a look at this book. It's quite heavy (and expensive!), but it covers a lot of topics, and each chapter is authored by a different person that is competent in the corresponding field. If cost is a huge issue, I've seen reprints from Taiwan that appear to be legitimate for a fraction of the original price (they are soft cover, though, and the print quality is obviously not as good).
Mind you, I've got both The Handbook and Gonzalez & Woods, and I've found Gonzalez to be easier to digest during the initial stages. Rather than just reading, it is definitely recommended to attempt to reproduce all the examples that they give, and make an honest attempt at the exercises at the end of each chapter. The Handbook is great for coverage but lacks exercises.
Finally, your choice of a must-read really depends on which specific direction you expect to be working in. The basic knowledge (spatial and frequency domain filtering, for example) has been around since the dawn of the field (the early 60s) and is usually covered fairly well by most texts. If you want to learn about more recent applications, you have to be a bit more specific (or go for The Handbook, as it attempts to cover it all).
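To make that "spatial and frequency domain filtering" remark concrete, here is a rough sketch in Python with OpenCV and NumPy (file names are placeholders) that blurs a grayscale image once with a spatial-domain Gaussian filter and once with a crude low-pass filter in the frequency domain:

```python
import numpy as np
import cv2

# Placeholder file name -- any grayscale image will do.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Spatial-domain filtering: convolve with a Gaussian kernel.
blurred_spatial = cv2.GaussianBlur(img, (15, 15), sigmaX=3)

# Frequency-domain filtering: zero out high frequencies and invert the FFT.
f = np.fft.fftshift(np.fft.fft2(img))
rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
mask = np.zeros_like(f)
mask[crow - 30:crow + 30, ccol - 30:ccol + 30] = 1   # keep only low frequencies
blurred_freq = np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))

cv2.imwrite("spatial.png", np.clip(blurred_spatial, 0, 255).astype(np.uint8))
cv2.imwrite("frequency.png", np.clip(blurred_freq, 0, 255).astype(np.uint8))
```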
For contemporary readers viewing this question, an outstanding text is Prince's Computer Vision: Models, Learning, and Inference. The PDF is available for free on that site.