---
layout: post
title: "GBO notes: Approximation algorithms"
tags:
mathjax: true
---
This note is a brief introduction to approximation algorithms. Basically, the "Intro to Algorithms" courses are concerned with problems which are solvable in poly-time (i.e., problems in the class P). But there are a ton of important problems that are NP-hard and are not known to be solvable in poly-time (and cannot be, unless P = NP). We want to design algorithms for such problems which give us some guarantee about the quality of the solutions --- by obtaining bounds on how much worse the returned solution is compared to the optimal solution. Let us see some examples of these problems and simple algorithms for solving them.
There are basically four types of algorithms to solve such problems:
- Greedy
- Local search
- Dynamic programming
- Randomized algorithms
Other than these, a more optimization-based approach is to express the problem as a linear program and then solve it. This is called LP rounding. The basic idea is that we first express the NP-hard problem as an integer linear program (ILP), which is still NP-hard. We then relax the integrality constraints, which makes the problem solvable in poly-time. Once we have the (fractional) LP solution, we round it to get a corresponding integer solution to the original problem.
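As a concrete sketch of LP rounding, consider vertex cover: the ILP has a variable $x_v \in \{0, 1\}$ per vertex with a constraint $x_u + x_v \geq 1$ for every edge $(u, v)$; the relaxation allows $x_v \in [0, 1]$. Rounding up every $x_v \geq 1/2$ yields a feasible cover of cost at most twice the LP optimum. A minimal sketch in Python (the fractional solution is hard-coded here for illustration; in practice it would come from an LP solver):

```python
def lp_round_vertex_cover(edges, x):
    """Round a fractional vertex-cover LP solution x (dict: vertex -> value).

    Since every edge (u, v) satisfies x[u] + x[v] >= 1, at least one
    endpoint has value >= 1/2, so rounding up all such vertices covers
    every edge, and the cover costs at most 2 * LP-OPT.
    """
    cover = {v for v, val in x.items() if val >= 0.5}
    assert all(u in cover or v in cover for u, v in edges)
    return cover

# Example: a triangle.  The LP optimum sets every x_v = 1/2 (cost 1.5),
# and rounding selects all three vertices (an integral cover of cost 3).
edges = [(0, 1), (1, 2), (0, 2)]
x = {0: 0.5, 1: 0.5, 2: 0.5}
print(lp_round_vertex_cover(edges, x))  # {0, 1, 2}
```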
Problem: Given a graph $G = (V, E)$, find a minimum-size set $S \subseteq V$ such that every edge in $E$ has at least one endpoint in $S$. This is the (unweighted) vertex cover problem.
Algorithm:
- Choose an arbitrary edge and add both its end-points to the set.
- Remove all edges incident on the newly added vertices.
- Repeat until no edges remain.
This is a 2-approximation: the chosen edges are pairwise disjoint, and any optimal cover must contain at least one endpoint of each of them.
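The steps above can be sketched as follows (iterating over the edge list in order; any uncovered edge works):

```python
def vertex_cover_2approx(edges):
    """2-approximation for vertex cover: repeatedly pick an uncovered
    edge and add BOTH endpoints.  The picked edges are disjoint, and any
    optimal cover must contain at least one endpoint of each, so the
    returned cover has size at most 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: a path 0-1-2-3; optimal cover is {1, 2}, greedy returns all four.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))  # {0, 1, 2, 3}
```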
Problem: Given a graph $G = (V, E)$ with edge costs and a set of terminals $R \subseteq V$, find a minimum-cost tree that connects all the terminals. This is the Steiner tree problem.
Note that if $R = V$, this is just the minimum spanning tree problem, which is solvable in poly-time.
Algorithm: First, we apply metric completion to make sure all nodes are pairwise connected. This
is done by computing shortest-path distances between every pair of nodes and using these distances as the new edge costs (which satisfy the triangle inequality).
For the metric instance, we compute a minimum spanning tree on the terminals $R$; its cost is at most twice the cost of the optimal Steiner tree, so this is a 2-approximation.
Now, we reduce this solution to a solution for the original problem by expanding the edges in the solution and combining the edges that are on multiple paths (which will create Steiner nodes).
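A minimal sketch of the metric-completion-plus-MST step (Floyd-Warshall for the all-pairs distances, then Prim's algorithm over the terminals; the final expansion back into the original graph is omitted, and only the tree cost is returned):

```python
def steiner_tree_2approx(n, edges, terminals):
    """2-approximation for Steiner tree (sketch).

    1. Metric completion: all-pairs shortest paths via Floyd-Warshall.
    2. MST over the complete graph on the terminals, under the
       shortest-path metric.  Its cost is at most 2 * OPT.
    """
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Prim's algorithm on the terminals under the metric d.
    terminals = list(terminals)
    in_tree = {terminals[0]}
    cost = 0
    while len(in_tree) < len(terminals):
        w, t = min((d[a][b], b) for a in in_tree
                   for b in terminals if b not in in_tree)
        in_tree.add(t)
        cost += w
    return cost

# Example: a star with center 0 and terminals {1, 2, 3} at distance 1.
# The optimal Steiner tree uses node 0 (cost 3); the terminal MST costs 4.
print(steiner_tree_2approx(4, [(0, 1, 1), (0, 2, 1), (0, 3, 1)], [1, 2, 3]))  # 4
```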
A graph is Eulerian if there is a closed tour that uses every edge exactly once.
Problem: Given a complete graph $G$ with metric edge costs (i.e., costs satisfying the triangle inequality), find a minimum-cost tour that visits every node exactly once. This is the metric TSP.
Algorithm: First compute a minimum spanning tree $T$ of $G$,
where $c(T) \leq OPT$, since deleting any edge of the optimal tour yields a spanning tree. Double every edge of $T$ to make it Eulerian, find an Euler tour, and shortcut repeated nodes (using the triangle inequality) to obtain a tour of cost at most $2 \cdot OPT$.
We are losing a factor of 2 here because we doubled all the edges (to make $T$ Eulerian).
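The double-tree algorithm above can be sketched as follows; a preorder DFS of the MST is exactly the shortcut Euler tour of the doubled tree:

```python
def tsp_double_tree(d):
    """2-approximation for metric TSP via the double-tree heuristic.

    d is a symmetric distance matrix satisfying the triangle inequality.
    Build an MST, walk it in DFS preorder (the shortcut of the Euler
    tour of the doubled MST), and close the cycle."""
    n = len(d)
    # Prim's MST, recording tree children.
    in_tree = {0}
    children = {i: [] for i in range(n)}
    while len(in_tree) < n:
        w, u, v = min((d[a][b], a, b) for a in in_tree
                      for b in range(n) if b not in in_tree)
        children[u].append(v)
        in_tree.add(v)
    # Preorder DFS = shortcut Euler tour.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    cost = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost

# Example: four points on a line with distance |i - j|.
print(tsp_double_tree([[abs(i - j) for j in range(4)] for i in range(4)]))
```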
Christofides algorithm: Find a min-cost perfect matching $M$ of the odd-degree nodes of $T$ and add it to $T$; the resulting graph is Eulerian, and since $c(M) \leq OPT/2$, shortcutting the Euler tour gives a $3/2$-approximation.
Problem: Given a universe $U$ of $n$ elements and a collection of subsets $S_1, \ldots, S_m \subseteq U$, find the minimum number of subsets whose union is $U$. This is the set cover problem.
Algorithm (greedy): Greedily select the set that covers the largest number of still-uncovered elements, and repeat until all of $U$ is covered. This gives an $H_n = O(\log n)$ approximation.
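The greedy rule can be sketched as follows (assuming a feasible instance, i.e., the sets together cover the universe):

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set that covers the most
    still-uncovered elements.  Achieves an H_n = O(log n) approximation.
    Assumes the union of the sets covers the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

# Example: greedy picks the big set {1, 2, 3, 4} first, then {5, 6}.
sets = [{1, 2}, {3, 4}, {5, 6}, {1, 2, 3, 4}]
print(greedy_set_cover({1, 2, 3, 4, 5, 6}, sets))  # [3, 2]
```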
Problem: Given a universe $U$ and subsets as before, where each set $S$ now has a cost $c(S)$, find a minimum-cost collection of sets that covers $U$. This is the weighted set cover problem.
Algorithm: We can again use a greedy algorithm, but now we try to maximize the "bang for the buck",
i.e., pick the set that maximizes the number of newly covered elements per unit cost.
Problem: Given a universe $U$, subsets as before, and an integer $k$, choose $k$ sets that together cover as many elements as possible. This is the maximum coverage problem.
Algorithm: We can again use the greedy approach as in set cover, but stop after $k$ sets have been selected. This gives a $(1 - 1/e)$-approximation.
Problem: Given a set of $n$ points with a metric distance function and an integer $k$, choose $k$ centers so as to minimize the maximum distance from any point to its nearest center. This is the $k$-center problem.
Algorithm (greedy): Start with an arbitrary center. Greedily select the point that is farthest from
the previously selected centers, and add it to the set of centers; repeat until $k$ centers have been chosen. This is a 2-approximation.
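This farthest-first traversal can be sketched as follows:

```python
def k_center_greedy(points, k, dist):
    """Farthest-first traversal: a 2-approximation for k-center.
    Start with an arbitrary center, then repeatedly add the point
    farthest from the current set of centers."""
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    return centers

# Example: two clusters on a line; greedy picks one point in each,
# giving radius 2 (optimal centers 1 and 11 would give radius 1).
pts = [0, 1, 2, 10, 11, 12]
print(k_center_greedy(pts, 2, lambda a, b: abs(a - b)))  # [0, 12]
```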
So far we have mostly seen greedy algorithms for problems. Let us now look at some "local search" methods. The idea in these is to start with a feasible solution and then make local improving changes until we get to a local optimum.
Problem: Given a graph $G = (V, E)$, partition $V$ into two sets $(S, \bar{S})$ so as to maximize the number of edges crossing the cut. This is the max-cut problem.
The max-cut problem is NP-hard, but the min-cut problem is in P.
Algorithm: Start with an arbitrary cut $(S, \bar{S})$. While there is a node that can be moved to the other side to increase the number of crossing edges, move it. At a local optimum, at least half of each node's edges cross the cut, so the cut contains at least $|E|/2$ edges, giving a $1/2$-approximation.
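The local search can be sketched as follows for the unweighted case:

```python
def max_cut_local_search(n, edges):
    """Local search for unweighted max-cut: starting from an arbitrary
    cut, move any node whose switch increases the number of crossing
    edges.  At a local optimum at least half of each node's edges cross
    the cut, so cut >= |E| / 2 >= OPT / 2."""
    side = [0] * n  # arbitrary starting cut: everyone on one side
    improved = True
    while improved:
        improved = False
        for v in range(n):
            cross = sum(1 for a, b in edges
                        if v in (a, b) and side[a] != side[b])
            same = sum(1 for a, b in edges
                       if v in (a, b) and side[a] == side[b])
            if same > cross:  # switching v gains (same - cross) edges
                side[v] ^= 1
                improved = True
    cut = sum(1 for a, b in edges if side[a] != side[b])
    return side, cut

# Example: a triangle has max cut 2, which local search finds.
print(max_cut_local_search(3, [(0, 1), (1, 2), (0, 2)]))
```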
Problem: Given a graph $G = (V, E)$ with edge weights $w_e$, find a cut of maximum total weight. This is the weighted max-cut problem.
We cannot use the same idea as before for this problem since it may take exponential time: each move may improve the cut weight by an arbitrarily small amount.
Algorithm: Instead, we bound the improvement with a multiplicative factor, i.e., we only
change sides for a node if the move improves the cut weight by a factor of at least $(1 + \epsilon/n)$. This bounds the number of moves polynomially, at the cost of a slightly weaker approximation guarantee.
Let us see an example of dynamic programming to solve some NP-hard problems.
Problem: Given $n$ items with sizes $s_1, \ldots, s_n$, profits $p_1, \ldots, p_n$, and a knapsack of capacity $B$, find a max-profit subset of items whose total size is at most $B$. This is the knapsack problem.
Naive greedy method: Use the "bang-for-the-buck", i.e., keep adding items in decreasing order of the ratio $p_i / s_i$ as long as they fit. This can be arbitrarily bad compared to the optimum.
D.P. algorithm: Instead of asking what is the maximum profit that we can get for size at most $s$, we ask: what is the minimum size needed to achieve profit at least $p$? This table can be filled in time $O(n \sum_i p_i)$, which is pseudo-polynomial; scaling and rounding the profits turns it into an FPTAS.
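The profit-indexed DP can be sketched as follows:

```python
def knapsack_dp(sizes, profits, capacity):
    """Knapsack DP over profit: A[p] = minimum total size achieving
    profit exactly p.  Runs in O(n * P) time where P = sum of profits,
    i.e., pseudo-polynomial; scaling the profits yields an FPTAS."""
    P = sum(profits)
    INF = float("inf")
    A = [0] + [INF] * P
    for s, p in zip(sizes, profits):
        for q in range(P, p - 1, -1):  # reverse order: each item used once
            A[q] = min(A[q], A[q - p] + s)
    # Best achievable profit whose minimum size fits in the knapsack.
    return max(p for p in range(P + 1) if A[p] <= capacity)

# Example: capacity 5; the best choice is sizes 2 + 3 with profit 3 + 4.
print(knapsack_dp([2, 3, 4], [3, 4, 5], 5))  # 7
```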