How to prove that greedy approaches will not work

For any given problem where a greedy approach does not give the optimal value, we can find a counterexample to disprove that particular approach.
However, is it possible to prove that, for a given problem, no greedy approach will work in general?

The most general answer I can come up with is that any greedy algorithm finds a local optimum. If a problem has several local optima and only one of them represents the global optimum, then a greedy algorithm might get stuck at one of the others.
To find a counterexample, all you have to do is figure out an instance of the problem that has such a local optimum, and construct it so that you "trick" the algorithm into that local optimum, as in the sketch below.
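For illustration, a minimal sketch (my own toy example, not from the question): coin change with the made-up denominations {1, 3, 4} has exactly this structure, and the instance "make change for 6" tricks the largest-coin-first greedy into a local optimum.

```python
def greedy_change(amount, coins):
    """Always take the largest coin that still fits (the locally optimal choice)."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def optimal_change(amount, coins):
    """Dynamic programming: the globally optimal (fewest-coins) solution."""
    best = [[]] + [None] * amount
    for a in range(1, amount + 1):
        options = [best[a - c] + [c] for c in coins if a >= c and best[a - c] is not None]
        best[a] = min(options, key=len, default=None)
    return best[amount]

print(greedy_change(6, {1, 3, 4}))   # [4, 1, 1] -- three coins, a local optimum
print(optimal_change(6, {1, 3, 4}))  # [3, 3]    -- two coins, the global optimum
```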
I don't think there's a general way of showing that a greedy approach will not work. The best way to refute an algorithm is probably to find a counterexample on which it doesn't produce the correct result.

Related

Cut points from which to choose the best split in a Decision Tree regressor with a continuous feature?

I understand that in the decision tree algorithm, when a split is being decided, we choose the best split based on some criterion. And when you're looking for the best split, you have to iterate through some list of values. But it seems very computationally expensive to consider every value of the feature as a possible threshold (or so-called cut point). Thus, there is a need for some heuristic for choosing candidate thresholds. For example, if we have a continuous feature and a categorical target (i.e. we are dealing with a classification problem), we can do the following: sort the dataset by the given feature and consider for splitting only the values where the target variable changes its value.
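A minimal sketch of that classification heuristic (my own illustration on made-up toy data):

```python
import numpy as np

# Toy data: one continuous feature, categorical target (hypothetical values).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0, 0, 1, 1, 0, 0])

order = np.argsort(x)
xs, ys = x[order], y[order]

# Candidate thresholds: midpoints between neighbours where the class changes.
candidates = [(xs[i] + xs[i + 1]) / 2
              for i in range(len(xs) - 1)
              if ys[i] != ys[i + 1]]
print(candidates)  # [2.5, 4.5] -- far fewer candidates than one per value
```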
But what do you do if you have a regression task, i.e. both the feature and the target are continuous variables? I realize that I have to calculate, for example, the variance or the mean deviation from the median in both branches for each split. But how do you decide which values to choose your best split from? People have surely come up with some solution that avoids iterating over every value of the feature in the training set.
I've done some research, but most sources focus only on the different criteria and on how to determine whether a split is suitable, which doesn't really answer my question.
I've found this question, but Predictor only suggests that it can be done using percentiles, and I think there is no guarantee that this is how it is really done in practice.
I've also found this question, but to me geledek's answer is not very clear (it is largely copy-pasted from the presentation it refers to). I'm pretty much fine with Method 1, but I would really appreciate it if someone could explain Method 2 in more detail, or perhaps provide a different source or an explanation of your own.
UPD: I've also looked at the scikit-learn repo on GitHub and found this line. I can't quite follow the overall code, but it seems that this particular line implies that thresholds are chosen as the averages of neighboring feature values (which corresponds to the aforementioned Method 1 from the question above). Is that correct? I also don't understand this comment: # sum of halves is used to avoid infinite value. How exactly does dividing by two prevent infinite values? Don't you get infinity only when dividing by zero? Is dividing by two necessary because this way we get the average value (and not because we want to avoid infinity)?
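For what it's worth, one plausible reading of that comment (my interpretation, not from the source): in floating-point arithmetic, adding two very large finite values can overflow to infinity, while halving each value before adding cannot. A quick demonstration:

```python
import sys

a = b = sys.float_info.max  # the largest finite double

print((a + b) / 2)    # inf -- the sum overflows before the division happens
print(a / 2 + b / 2)  # 1.7976931348623157e+308 -- same average, no overflow
```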

How to leverage Z3 SMT solver for ILP problems

Problem
I'm trying to use z3 to disprove reachability assertions on a Petri net.
So I declare N state variables v_0, ..., v_{N-1}, which are non-negative integers, one for each place of the Petri net.
My main strategy, given an atomic proposition P on states, is the following (see the sketch after the list):
1. Compute (with an external engine) any "easy" positive invariants as linear constraints on the variables, of the form alpha_0 * v_0 + ... + alpha_{N-1} * v_{N-1} = constant with only non-negative alpha_i, then check_sat whether any state allowed by these constraints satisfies P; if unsat, conclude; else
2. Compute (externally to z3) generalized invariants, where the alpha_i can also be negative, and check_sat; conclude if unsat; else
3. Add one non-negative variable t_i per transition of the system, and assert the Petri net state equation: any reachable state has a Parikh firing count vector (a valuation of the t_i) such that M0, the initial state, plus the product of the incidence matrix with this Parikh vector gives the reached state. This step introduces many new variables and multiplies variables by constant matrix coefficients, but it stays a linear integer programming problem.
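A minimal z3py sketch of step 3, on a made-up two-place, two-transition net (the marking, incidence matrix, and proposition are all hypothetical):

```python
from z3 import Ints, Solver, And

# Hypothetical net: 2 places, 2 transitions; C[p][t] is the effect of t on p.
M0 = [1, 0]
C = [[-1, 1],
     [1, -1]]

v0, v1 = Ints("v0 v1")  # one non-negative variable per place
t0, t1 = Ints("t0 t1")  # one Parikh firing count per transition

s = Solver()
s.add(v0 >= 0, v1 >= 0, t0 >= 0, t1 >= 0)
# State equation: M = M0 + C * t
s.add(v0 == M0[0] + C[0][0] * t0 + C[0][1] * t1)
s.add(v1 == M0[1] + C[1][0] * t0 + C[1][1] * t1)
# The proposition P to disprove, e.g. "both places are marked" (hypothetical).
s.add(And(v0 >= 1, v1 >= 1))

print(s.check())  # unsat proves P unreachable; sat is inconclusive
```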
I separate the steps because, since I want UNSAT, any check_sat that returns UNSAT stops the procedure, and the last step in particular is very costly.
I run into issues with larger models, where I get prohibitively long answer times or even the dreaded "unknown" answer, particularly when adding the state equation (step 3).
Background
Besides splitting the problem into incrementally harder segments, I've tried setting the logic to QF_LRA rather than QF_LIA and declaring the variables as Real rather than Int.
This over-approximation is computationally friendly (z3 is fast on these!), but unfortunately for many models the solutions found are fractional and do not correspond to any integer solution.
So I've tried keeping Reals but specifying that each variable is either = 0 or >= 1, to remove solutions with fractional firing counts below 1. This does eliminate spurious solutions, but it "kills" z3 (timeout or unknown) in many cases; the problem is evidently much harder (e.g. harder than with plain integers).
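For concreteness, the "= 0 or >= 1" relaxation described above could be encoded like this in z3py (my own sketch with made-up variable names, not the asker's actual code):

```python
from z3 import Reals, Or, Solver

t0, t1 = Reals("t0 t1")  # relaxed firing counts (hypothetical)

s = Solver()
for t in (t0, t1):
    # Forbid fractional firing counts strictly between 0 and 1.
    s.add(Or(t == 0, t >= 1))
```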
Examples
I don't have a small example to show, though I can produce some easily. The problem is that if I go for QF_LIA, it gets prohibitively slow beyond a certain number of variables. As a metric: there are many more transitions than places, so adding the state equation really increases the variable count.
This code generates the examples I'm asking about.
Slides 5 and 6 of this general presentation express precisely the problem I'm encoding, and slides 7 and 8 develop what "unsat" gives us, if you want more mathematical background.
I'm generating problems from the Model Checking Contest, with up to thousands of places (primary variables) and in some cases over a hundred thousand transitions. Those are the extremes; the middle range that I would really like to handle is a few thousand places and maybe twenty thousand transitions.
Reals plus the greater-than-or-equal-to-1 constraint is not a good solution even for some smaller problems. Integers are slow from the get-go.
I could try Reals first and fall back to Integers when I get a non-integral solution. I have not tried that; although it involves pretty much killing and restarting the solver, it might be a decent approach on my benchmark set.
What I'm looking for
I'm looking for settings for Z3 that can help it better deal with the problems I'm feeding it, to give it some insight.
I have some a priori ideas about what could solve these problems: traditionally they've been fed to ILP solvers. So I'm hoping to trigger a simplex of some sort, but maybe there are conditions preventing z3 from using the "good" solution strategy in some cases.
I've become a decent-level SMT/Z3 user, but I've never played with the fine-grained :options that guide the solver.
Have any of you tried feeding what are basically ILP problems to SMT, and found option settings or particular encodings that help it deploy the right strategies? Thanks.
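For anyone experimenting along these lines, here is a small sketch of pinning the logic and setting a generic option in z3py (which options actually help on these Petri net encodings is exactly the open question here):

```python
from z3 import SolverFor, Ints

# Pinning the logic lets z3 choose a tactic specialised for linear integer arithmetic.
s = SolverFor("QF_LIA")
s.set(timeout=60000)  # per-query timeout, in milliseconds

x, y = Ints("x y")
s.add(x >= 0, y >= 0, 3 * x + 2 * y == 7)
print(s.check())  # sat (x = 1, y = 2)
```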

Are heuristic functions that produce negative values inadmissible?

As far as I understand, admissibility for a heuristic means staying within the bounds of the actual cost to the goal for a given, evaluated node. I've had to design some heuristics for an A* search over state spaces, and I've gained a lot of efficiency using a heuristic that may sometimes return negative values, so that certain nodes that are more 'closely formed' to the goal state get a higher place in the frontier.
However, I worry that this is inadmissible, but I can't find enough information online to verify it. I did find one paper from the University of Texas that seems to mention, in one of the later proofs, that "...since heuristic functions are nonnegative". Can anyone confirm this? I assume it is because returning a negative value from your heuristic function can make the f-cost negative (and therefore interfere with the 'default' Dijkstra-esque behavior of A*).
Conclusion: Heuristic functions that produce negative values are not inadmissible, per se, but have the potential to break the guarantees of A*.
Interesting question. Fundamentally, the only requirement for admissibility is that a heuristic never overestimates the distance to the goal. This is important because an overestimate in the wrong place could artificially make the best path look worse than another path and prevent it from ever being explored. Thus a heuristic that can produce overestimates loses any guarantee of optimality. Underestimating does not carry the same cost: if you underestimate the cost of going in a certain direction, eventually the edge weights will add up to more than the cost of going in a different direction, so you'll explore that direction too. The only problem is a loss of efficiency.
If all of your edges have positive costs, a negative heuristic value can only ever be an underestimate. In theory, an underestimate should only ever be worse than a more precise estimate, because it provides strictly less information about the potential cost of a path and is likely to result in more nodes being expanded. Nevertheless, it is not inadmissible.
However, here is an example that demonstrates that it is theoretically possible for negative heuristic values to break the guaranteed optimality of A*:
In this graph (the original figure is not reproduced here), it is obviously better to go through nodes A and B: that path has a cost of three, as opposed to six, the cost of going through nodes C and D. However, the negative heuristic values for C and D will cause A* to reach the end through them before exploring nodes A and B. In essence, the heuristic function keeps thinking that this path is going to get drastically better, until it is too late. In most implementations of A*, this will return the wrong answer, although you can correct for the problem by continuing to explore other nodes until the smallest value of f(n) in the frontier exceeds the cost of the path you found. Note that there is nothing inadmissible or inconsistent about this heuristic. I'm actually really surprised that non-negativity is not more frequently mentioned as a rule for A* heuristics.
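Since the figure is missing, here is a small graph consistent with the description (costs three versus six, negative heuristic values on C and D; all concrete numbers are my own reconstruction). It shows the failure mode for an implementation that stops as soon as the goal is generated rather than when it is popped from the frontier:

```python
import heapq

# Reconstructed graph (hypothetical concrete values consistent with the text).
graph = {"S": [("A", 1), ("C", 2)], "A": [("B", 1)], "B": [("G", 1)],
         "C": [("D", 2)], "D": [("G", 2)], "G": []}
h = {"S": 0, "A": 2, "B": 1, "C": -10, "D": -10, "G": 0}  # h <= true cost everywhere

def astar_stop_on_generate(start, goal):
    """A* variant that (prematurely) returns as soon as the goal is generated."""
    frontier = [(h[start], start, 0)]
    while frontier:
        _, node, g = heapq.heappop(frontier)
        for nxt, cost in graph[node]:
            if nxt == goal:
                return g + cost  # premature termination
            heapq.heappush(frontier, (g + cost + h[nxt], nxt, g + cost))

print(astar_stop_on_generate("S", "G"))  # 6 -- the suboptimal C-D path wins
```

A version that only terminates when the goal node itself is popped from the frontier would still return the optimal cost of 3 on this particular graph, which is why the corrected stopping rule mentioned above matters.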
Of course, all this demonstrates is that you can't freely use heuristics that return negative values without fear of consequences. It is entirely possible that a given heuristic for a given problem happens to work out really well despite being negative. For your particular problem, it's unlikely that something like this is happening (and I find it really interesting that it works so well for your problem, and I still want to think more about why that might be).

Will all TSP algorithms give the same optimum route?

I was just wondering whether all algorithms for the TSP will give the same optimal routes? I thought that this would be the case, but I've implemented branch and bound and A*, and they both give very different results for the same input. Is this normal?
The route may differ, but the cost of every optimal solution should be the same.
If your A* solution is more expensive, then your heuristic is wrong.
Have a look at the Wikipedia article on the A* algorithm for proofs that it always finds an optimal solution.
No. Provided more than one optimal route exists, different algorithms will not necessarily find the same path. It will depend on the implementation, and I assume it will also depend on how you label the graph, so that different labelings make the same algorithm find different routes.
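A quick way to see this (my own toy check, on a made-up symmetric distance matrix): brute-force all tours and collect every one that attains the minimum cost; ties are common.

```python
from itertools import permutations

# Hypothetical symmetric 4-city distance matrix.
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
n = len(d)

def cost(tour):
    return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

tours = [(0,) + p for p in permutations(range(1, n))]  # fix city 0 as the start
best = min(cost(t) for t in tours)
print(best, [t for t in tours if cost(t) == best])
# 4 [(0, 1, 2, 3), (0, 3, 2, 1)] -- two distinct routes, same optimal cost
```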

What algorithm would you use for clustering based on people attributes?

I'm pretty new to the field of machine learning (even though I find it extremely interesting), and I wanted to start a small project where I'd be able to apply some of it.
Let's say I have a dataset of people, where each person has N different attributes (only discrete values; each attribute can be pretty much anything).
I want to find clusters of people who exhibit the same behavior, i.e. who have a similar pattern in their attributes ("look-alikes").
How would you go about this? Any thoughts to get me started?
I was thinking about using PCA: since we can have an arbitrary number of dimensions, it could be useful to reduce them. K-means? I'm not sure in this case. Any ideas on what would be best suited to this situation?
I do know how to code all those algorithms, but I'm truly missing the real-world experience to know what to apply in which case.
K-means using the N-dimensional attribute vectors is a reasonable way to get started, as in the sketch below. You may want to play with your distance metric to see how it affects the results.
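A minimal scikit-learn sketch of that starting point (the data is made up, and the one-hot encoding of the discrete attributes is my addition, not part of the answer):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.cluster import KMeans

# Hypothetical dataset: 6 people, 2 discrete attributes each.
people = np.array([["red", "cat"], ["red", "cat"], ["red", "dog"],
                   ["blue", "dog"], ["blue", "dog"], ["blue", "cat"]])

X = OneHotEncoder().fit_transform(people)  # discrete attributes -> numeric vectors
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1] -- the "look-alike" groups
```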
The first step of pretty much any clustering algorithm is to find a suitable distance function. Many algorithms, such as DBSCAN, can then be parameterized with this distance function (at least in a decent implementation; some, of course, only support Euclidean distance...).
So start by considering how to measure object similarity! One way to plug a custom distance into DBSCAN is sketched below.
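For instance, a custom pairwise distance, here a simple mismatch count over discrete attributes, can be fed to scikit-learn's DBSCAN via metric="precomputed" (my own sketch on made-up data):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical discrete dataset encoded as integer codes (6 people, 2 attributes).
people = np.array([[0, 0], [0, 0], [0, 1],
                   [1, 1], [1, 1], [1, 0]])

# Custom distance: the number of attributes on which two people disagree.
D = (people[:, None, :] != people[None, :, :]).sum(axis=2).astype(float)

labels = DBSCAN(eps=0.5, min_samples=2, metric="precomputed").fit_predict(D)
print(labels)  # e.g. [ 0  0 -1  1  1 -1] -- identical pairs cluster, loners are noise
```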
In my opinion, you should also try the expectation-maximization algorithm (also called EM); a sketch follows. On the other hand, you must be careful when using PCA, because it may discard the very dimensions that are relevant to clustering.
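EM-based clustering is available in scikit-learn as a Gaussian mixture model; a minimal sketch on hypothetical one-hot-style data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical numeric encoding of discrete attributes (e.g. one-hot output).
X = np.array([[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 0, 1],
              [0, 1, 0, 1], [0, 1, 0, 1], [0, 1, 1, 0]])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM under the hood
print(gm.predict(X))  # soft clustering, hardened to labels
```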
