Greedy approach for a single-solution problem

Can we use the greedy approach for a non-optimal, single-solution problem?

If you are asking whether you can use the greedy approach to solve a problem that does not have the optimal substructure property, the short answer is no, not really. Let me explain:
Greedy algorithms take the data in a problem and apply a fixed rule at every step to decide which piece of data to add to the solution; essentially, the greedy technique picks the locally optimal choice at each step in the hope that this leads to a globally optimal solution.
A problem having the optimal substructure property means that a (globally) optimal solution can be built from the (locally) optimal solutions of its subproblems.
Considering that this property is essentially the key to why the greedy approach works, I would say it would be a bad idea to use the greedy approach for a "non-optimal, single solution" problem. I hope this helps!
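To make this concrete, here is a small illustrative Python sketch (not part of the original answer): the greedy rule "always take the largest coin" happens to be optimal for some coin sets but fails for others, which is exactly the optimal-substructure/greedy-choice issue described above.

    def greedy_coin_count(amount, coins):
        """Greedy rule: always take the largest coin that still fits."""
        count = 0
        for coin in sorted(coins, reverse=True):
            take = amount // coin
            count += take
            amount -= take * coin
        return count if amount == 0 else None

    # For US-style denominations the greedy choice happens to be globally optimal:
    print(greedy_coin_count(6, [1, 5, 10, 25]))   # 2 coins (5 + 1), which is optimal

    # For the denominations {1, 3, 4} the greedy rule is no longer optimal:
    print(greedy_coin_count(6, [1, 3, 4]))        # 3 coins (4 + 1 + 1), but the optimum is 2 (3 + 3)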

Related

Best way to whittle down features in a classification machine learning model

I have 66 features which I'm using to create a classification machine learning model in Python. However, to prevent issues like overfitting, I was wondering what the best way to reduce the number of features would be. I have read about PCA, but am not sure whether any good methodology exists to reduce features, and whether any tools exist in sklearn to help facilitate this.
Thanks.
The first thing you should probably do is read through the documentation of scikit-learn's feature selection methods.
Every method has its pros and cons, and which one is best (if there even is one) depends on the specific use case.
That being said, the methods offered in scikit-learn are by no means exhaustive. Discussing the different choices and elaborating on an appropriate method is probably better suited to platforms like Cross Validated or similar.
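As a starting point, here is a minimal sketch of two of those scikit-learn tools; the synthetic data, the estimator and the number of features kept (k=20) are arbitrary illustrative choices, not recommendations for your 66 features.

    # Minimal sketch of two scikit-learn feature-selection approaches.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif, RFE

    X, y = make_classification(n_samples=500, n_features=66, n_informative=10, random_state=0)

    # 1) Univariate filter: keep the k features with the highest ANOVA F-score.
    X_filtered = SelectKBest(score_func=f_classif, k=20).fit_transform(X, y)
    print(X_filtered.shape)   # (500, 20)

    # 2) Recursive feature elimination: repeatedly drop the weakest features
    #    according to a model's feature importances.
    rfe = RFE(estimator=RandomForestClassifier(random_state=0), n_features_to_select=20)
    X_rfe = rfe.fit_transform(X, y)
    print(X_rfe.shape)        # (500, 20)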

Reinforcement Learning in arbitrarily large action/state spaces

I'm interested in using Deep Reinforcement Learning to find a (unique) optimal path back home among (too many) possibilities and a few (required) intermediate stops (for instance, buying a coffee or refuelling).
Furthermore, I want to apply this in cases where the agent doesn't know a "model" of the environment and cannot possibly try all combinations of states and actions, i.e. it needs approximation techniques for the Q-value function (and/or the policy).
I've read about methods for tackling cases like this, where rewards, if any, are sparse and binary, such as Monte Carlo Tree Search (which implies some sort of modelling and planning, as I understand it) or Hindsight Experience Replay (HER), which applies ideas from DDPG.
But there are so many different kinds of algorithms to consider that I'm a bit confused about what's best to begin with.
I know it's a difficult problem, and maybe it's too naive to ask this, but is there any clear, direct and well-known way to solve the problem I want to tackle?
Thanks a lot!
Matias
If the final destination is fixed, as in this case (home), you can go for a dynamic search, since A* will not work well in a changing environment.
And if you want to use a deep learning algorithm, then go for A3C with experience replay, given the large action/state spaces; it is capable of handling complex problems.
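Whichever algorithm you end up with, an experience replay buffer is a small, reusable building block; below is a minimal illustrative sketch (the (state, action, reward, next_state, done) fields are the usual convention, not anything specific to A3C or to this answer).

    # Minimal experience-replay buffer sketch (illustrative only).
    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity=100_000):
            self.buffer = deque(maxlen=capacity)   # oldest transitions are dropped automatically

        def push(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size):
            batch = random.sample(self.buffer, batch_size)
            states, actions, rewards, next_states, dones = zip(*batch)
            return states, actions, rewards, next_states, dones

        def __len__(self):
            return len(self.buffer)

    # Usage inside a training loop (sketch):
    # buffer.push(s, a, r, s_next, done)
    # if len(buffer) >= batch_size:
    #     batch = buffer.sample(batch_size)
    #     ...update the Q-value function / policy on the batch...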

Path planning -> ways from goal to initial state?

The problem: is it true that finding a path from the goal to the start point is much more efficient than finding a path from the start to the goal?
If this is true, can someone help me out and explain why?
My opinion:
It shouldn't make a difference, because finding a way from the goal to the start is just like renaming the goal to start and the start to goal.
The answer to your question depends entirely on the path-finding algorithm you use.
One of the most well-known path-finding algorithms, A-Star (or A*), is commonly used in the reverse direction. It all comes down to the heuristics. Since we usually use proximity as the heuristic, the search can get stuck behind obstacles, and those obstacles might be easier to get around when approached from the other direction. A great explanation with examples can be found here. Just for clarity: if there is no prior knowledge of obstacles, then there is no predictable difference between forward and backward path finding with A*.
Another reason you might want to reverse the path finding is when multiple actors are trying to reach the same goal. Instead of having to run A*, or another path-finding algorithm, once per actor, you can combine them into a single execution of a graph-exploration algorithm. For example, a variation on Dijkstra's algorithm started from the goal can find the shortest distances to all actors in one exploration.
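As a sketch of that last idea, a single Dijkstra run started from the goal gives the shortest distance from every node to the goal, so each actor can just read off its own distance. The graph representation and node names below are illustrative; undirected edges are assumed so that distances "to" and "from" the goal coincide.

    # Sketch: one Dijkstra run from the goal serves every actor at once.
    import heapq

    def dijkstra_from(source, graph):
        """graph is an adjacency dict {node: {neighbour: edge_cost}}."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                      # stale heap entry
            for neighbour, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(neighbour, float("inf")):
                    dist[neighbour] = nd
                    heapq.heappush(heap, (nd, neighbour))
        return dist

    graph = {
        "goal":   {"a": 1, "b": 4},
        "a":      {"goal": 1, "b": 2, "actor1": 5},
        "b":      {"goal": 4, "a": 2, "actor2": 1},
        "actor1": {"a": 5},
        "actor2": {"b": 1},
    }

    dist_to_goal = dijkstra_from("goal", graph)
    print(dist_to_goal["actor1"], dist_to_goal["actor2"])   # 6 4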

Machine learning or decision tree for job matching?

I'm working on a job-matching app and I was wondering what the best way is to match elements against each other and rank the best results.
In my mind it's by going through a decision tree, since we already know the structure of the elements and the expected result.
However, would machine learning be an alternative solution, or is it not worth doing?
I might be mistaken, but to me ML is efficient for sorting data that at first sight doesn't have obvious common points, right?
Thanks for your advice!
A decision tree is part of ML. Maybe you mean a more complex algorithm than a decision tree, such as XGBoost or a neural network.
XGBoost or neural networks are good when you have too many variables for manually building a decision tree to make sense.
A decision tree is better when you want to keep control over your algorithm (for example, for ethical or managerial reasons).
XGBoost and unsupervised techniques are also good for creating the boundaries used in your decision tree, for example whether you should create a category 18-25 or 18-30, etc.
Considering the complexity of such a problem, with time and geographical constraints, using advanced algorithms seems a good idea to me.
Have a look at this Kaggle competition, which seems close to your problem; it may give you some good insight:
https://www.kaggle.com/c/job-recommendation/data
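To get a feel for the trade-off in practice, here is a tiny illustrative scikit-learn sketch comparing a hand-inspectable decision tree with a gradient-boosted ensemble. The synthetic data and hyperparameters are placeholders, not recommendations for your job-matching features, and XGBoost is swapped for scikit-learn's GradientBoostingClassifier to keep the example self-contained.

    # Sketch: a small, interpretable decision tree vs. a gradient-boosted ensemble.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)

    # Shallow tree: easy to inspect and to justify (ethical/managerial constraints).
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree))                        # human-readable decision rules

    # Boosted ensemble: usually more accurate, but much harder to explain.
    boost = GradientBoostingClassifier(random_state=0)
    print(cross_val_score(tree, X, y, cv=5).mean())
    print(cross_val_score(boost, X, y, cv=5).mean())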

Stereo Matching Baseline Evaluation

The stereo matching problem consists of obtaining a correspondence between the right and left images. I want to do an evaluation between a baseline and a Dynamic Programming method. However, I don't have a baseline yet, and I would like to know which method I should use. I was thinking of trying a brute-force algorithm. Is there something like that in the literature?
What do you suggest as a baseline method? I want a simple solution, something without heuristics and optimizations, such as this brute-force strategy. But I have no material to research it, only methods using Graph Cuts, Dynamic Programming, etc.
Thanks in advance!
The Middlebury Univ. reference datasets and database of algorithms are the standard everyone uses for evaluation these days.
http://vision.middlebury.edu/stereo/
You should have a look at the basics before delving into graph cuts. Consult the relevant chapter of Szeliski's book; it might help: http://szeliski.org/Book/
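For the kind of no-frills baseline you describe, plain block matching is about as simple as it gets: for each patch in the left image, brute-force search along the same row of the right image for the patch with the lowest sum of squared differences. A NumPy sketch follows; the window size and disparity range are illustrative parameters, and rectified greyscale images of equal shape are assumed.

    # Naive block-matching stereo baseline (brute-force SSD over a disparity range).
    import numpy as np

    def block_matching_disparity(left, right, max_disparity=64, window=5):
        half = window // 2
        h, w = left.shape
        disparity = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
                best_cost, best_d = np.inf, 0
                # Try every candidate disparity along the same scanline.
                for d in range(min(max_disparity, x - half) + 1):
                    patch_r = right[y - half:y + half + 1,
                                    x - d - half:x - d + half + 1].astype(np.float64)
                    cost = np.sum((patch_l - patch_r) ** 2)   # sum of squared differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disparity[y, x] = best_d
        return disparity

    # left, right = ...  # rectified greyscale images, e.g. from the Middlebury datasets
    # disp = block_matching_disparity(left, right)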
