# Greedy algorithm

In mathematics and computer science, a greedy algorithm is one that, at each step, selects the choice with the maximal immediate benefit, without regard for how this selection affects future choices.

# Introduction

As with all algorithms, greedy algorithms seek to maximize the overall utility of some process. They operate by making the immediately optimal choice at each sub-stage of the process, hoping that this will maximize the utility of the entire process. More formally, when we reframe the problem in terms of forming a set with a desired property, at each step a greedy algorithm will add the element to the set if and only if doing so does not cause the set to lose the desired property.
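
The set-forming description above can be sketched generically; the function name and the example property below are purely illustrative, supplied by whatever problem is being solved.

```python
def greedy_build(candidates, keeps_property):
    """Generic greedy set construction: scan the candidates in order and
    keep each one if and only if the set still has the desired property."""
    chosen = []
    for c in candidates:
        if keeps_property(chosen + [c]):
            chosen.append(c)
    return chosen

# Hypothetical property: the chosen numbers must sum to at most 10.
# 7 fits (sum 7), 5 is skipped (would give 12), 2 fits (9), 1 fits (10).
subset = greedy_build([7, 5, 2, 1], lambda s: sum(s) <= 10)
```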

Greedy algorithms are among the simplest types of algorithms; as such, they are among the first examples taught when introducing the subject. They have the advantage of being ruthlessly efficient when correct, and they are usually among the most natural approaches to a problem. For this reason, they are often referred to as "naïve methods". In many cases, more complicated algorithms are formed by adjusting the greedy process to be correct, often through the use of clever sorting.

# Examples of greedy algorithms

Many real-life scenarios are good examples of greedy algorithms. For example, consider the problem of converting an arbitrary number of cents into standard coins; in other words, consider the problem of making change. The process you almost certainly follow, without consciously considering it, is first using the largest number of quarters you can, then the largest number of dimes, then nickels, then pennies. This is an example of working greedily: at each step, we choose the maximal immediate benefit (the largest coin we can still give).
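
A minimal sketch of this change-making process; the function name is illustrative, and the denominations are the standard US coins.

```python
def make_change(cents, denominations=(25, 10, 5, 1)):
    """Greedy change-making: for each denomination, from largest to
    smallest, use as many of that coin as possible."""
    coins = {}
    for d in denominations:
        coins[d], cents = divmod(cents, d)
    return coins

# 67 cents -> 2 quarters, 1 dime, 1 nickel, 2 pennies
change = make_change(67)
```

Note that this greedy process happens to be optimal for US denominations, but not for every coin system: with coins $\{1, 3, 4\}$ and a target of $6$, greedy gives $4+1+1$ (three coins) while $3+3$ (two coins) is better.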

Some problems are not so obviously algorithmic processes. For example, the Rearrangement inequality states that if $(a_1, a_2, \hdots, a_n)$ and $(b_1, b_2, \hdots, b_n)$ are increasing sequences, we have $$a_1b_n+a_2b_{n-1}+\hdots+a_nb_1\leq a_1b_{\sigma(1)}+a_2b_{\sigma(2)}+\hdots+a_nb_{\sigma(n)} \leq a_1b_1+a_2b_2+\hdots+a_nb_n$$ where $\sigma$ denotes any permutation of the indices (so $\sigma(1), \sigma(2), \hdots, \sigma(n)$ are the numbers $1, 2, 3, \hdots, n$ in some order). This may have an intimidating-looking statement, but it is equivalent to saying that, in the particular scenario it deals with, the greedy algorithm is the best! This is because we can view the situation algorithmically: for each of the $a_i$, we are selecting a $b_j$ to pair it with, with the caveat that each $b_j$ must be used exactly once. The Rearrangement inequality states that the largest $a_i$ should be paired with the largest $b_j$, the second largest with the second largest, and so on, to achieve the maximal dot product.
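
We can sanity-check this claim by brute force over all permutations; the two increasing sequences below are arbitrary examples.

```python
from itertools import permutations

def dot(a, b):
    """Dot product of two equal-length sequences."""
    return sum(x * y for x, y in zip(a, b))

a = [1, 3, 5]   # increasing
b = [2, 4, 9]   # increasing

# Try every way of pairing the b's with the a's.
best = max(dot(a, perm) for perm in permutations(b))

# The similarly sorted pairing 1*2 + 3*4 + 5*9 = 59 attains the maximum.
```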

This idea of transforming problems into algorithmic processes is a very important one. Take the following example:

 Given a 10 by 10 grid of unit squares, what is the maximum number of squares that can be shaded such that no row and no column is completely shaded?

The first step is to transform this into an algorithmic process, which we can do as follows: for each row, in order, shade in some (but not all) of the 10 squares. Greedily shading $9$ squares in each row, and placing the single unshaded square in a different column each time, leaves every column with exactly one unshaded square as well, so the maximum is $9 \cdot 10 = 90$.
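
This greedy shading (nine squares per row, staggering the unshaded square across columns) can be checked directly:

```python
N = 10
# Shade every square except one per row; place row r's unshaded square
# in column r, so no column ends up completely shaded either.
grid = [[col != row for col in range(N)] for row in range(N)]

shaded = sum(sum(row) for row in grid)  # total shaded squares
no_full_row = all(not all(row) for row in grid)
no_full_col = all(not all(grid[r][c] for r in range(N)) for c in range(N))
```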

# Proving correctness

Of course, greedy algorithms are not generally very interesting unless they are correct; in other words, unless they always produce the maximal overall benefit. In order to prove the correctness of a greedy algorithm, we must show that it is never beneficial to take less than the maximal benefit at any step of the process. This is most often done by focusing on two specific steps, and showing that decreasing the benefit at one step in order to increase it at another can never increase the overall benefit, so taking the maximal benefit at the first step is best.

For example, let's prove the correctness of the rearrangement inequality mentioned above. Suppose we have two sequences, $a$ and $b$. We now want to prove that $a_1b_1+a_2b_2+\hdots+a_nb_n$ is maximized when $a$ and $b$ are similarly sorted; i.e. they are either both increasing or both decreasing. More formally, we wish to show that the dot product $a \cdot b$ is maximized when the sequences are similarly sorted. We first reformat the problem algorithmically: we consider the $a_i$ in order, at each step choosing a $b_j$ to pair it with.

We claim that the greedy algorithm produces the best result; i.e. if $a_1 \geq a_2 \geq \hdots \geq a_n$ and $b_1 \geq b_2 \geq \hdots \geq b_n$, then $a_1b_1+a_2b_2+\hdots+a_nb_n$ is the maximal possible answer. Suppose that there exists a better pairing. Consider the first step $i$ in which we pair $a_i$ with $b_j$ such that $i < j$ (in other words, $a_i$ is in a "higher position" than $b_j$ is) - if this step didn't exist, we'd always be pairing $a_i$ with $b_i$, and be done immediately. Then, at some step $k$, we pair $a_k$ with $b_i$ - we also know that $k>i$, since we've already done the steps where $k \leq i$.

We want to show that the result of this process is no better than if we simply paired $a_i$ and $b_i$ (note that it is not necessary to prove that it is worse, only that it cannot be better). In other words, we need to show that $a_ib_j+a_kb_i\leq a_ib_i+a_kb_j$. But this factors nicely: the inequality rearranges to $(a_i-a_k)(b_j-b_i)\leq 0$. Fortunately, this is quite easy: we know that $i < j$, so $b_i \geq b_j$, but we also know that $i < k$, so $a_i \geq a_k$. Thus $a_i-a_k$ is nonnegative, but $b_j-b_i$ is nonpositive, proving the statement.
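
The exchange step of this proof can also be spot-checked numerically; this sketch draws random values with $a_i \geq a_k$ and $b_i \geq b_j$ and confirms that the swapped pairing is never better.

```python
import random

def exchange_never_helps(trials=1000):
    """Randomly test the exchange step: with a_i >= a_k and b_i >= b_j,
    the pairing (a_i, b_i), (a_k, b_j) is at least as good as the
    pairing (a_i, b_j), (a_k, b_i)."""
    for _ in range(trials):
        a_i, a_k = sorted(random.sample(range(100), 2), reverse=True)
        b_i, b_j = sorted(random.sample(range(100), 2), reverse=True)
        if a_i * b_j + a_k * b_i > a_i * b_i + a_k * b_j:
            return False  # a counterexample, which the proof rules out
    return True
```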

Of course, the immediate application of greedy algorithms does not always produce the optimal result. For example, if asked for the maximum number of elements of the set $\{0.6, 0.5, 0.4, 0.3, 0.2, 0.1\}$ that can be chosen with sum at most 1, a particularly naive greedy algorithm will conclude the answer is two, as it will put the $0.6$ term into the "greedy set", skip the $0.5$ term, put the $0.4$ term in, and put none of the remaining terms in. Obviously, this is not the correct result, as the set $\{0.4, 0.3, 0.2, 0.1\}$ has twice as many elements and still has sum at most 1.
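
A sketch of this failure, together with the "clever sorting" repair mentioned earlier (processing the values in increasing order); the function name is illustrative.

```python
def greedy_count(values, limit=1.0):
    """Greedily add each value if the running sum stays within the limit."""
    total = count = 0
    for v in values:
        if total + v <= limit + 1e-9:  # small tolerance for float rounding
            total += v
            count += 1
    return count

vals = [0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
naive = greedy_count(vals)          # takes 0.6 and 0.4, so only 2 elements
fixed = greedy_count(sorted(vals))  # takes 0.1, 0.2, 0.3, 0.4: 4 elements
```

For this particular problem, sorting ascending makes the greedy process correct: if any feasible set has $m$ elements, the $m$ smallest values also fit.
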

Worse, greedy algorithms are sometimes not the optimal process even after adjusting the order of their processing. For example, there is no way to salvage a greedy algorithm to solve the following classic problem: given the following triangle of numbers, at each step we will move either left or right, and add the number we reach to a running total. Our goal is to minimize the final total. $[asy] defaultpen(fontsize(10pt)); unitsize(.7inch); real x=.7;//Scales figure in x-direction label("Start",(0,2)); draw((.2x,1.8)--(.8x,1.2)); label("1",(1x,1)); draw((-.2x,1.8)--(-.8x,1.2)); label("2",(-1x,1)); draw((-1.2x,.8)--(-1.8x,.2)); label("1",(-2x,0)); draw((-.8x,.8)--(-.2x,.2)); draw((.8x,.8)--(.2x,.2)); label("over 9000",(0,0)); draw((1.2x,.8)--(1.8x,.2)); label("over 9000",(2x,0)); [/asy]$
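
To see the failure concretely, we can encode the triangle from the figure (writing $9001$ for "over 9000") and compare the greedy path against brute force over all paths:

```python
from itertools import product

# Row r of the triangle has r+1 entries; moving left keeps the current
# index, moving right increases it by one.
triangle = [[2, 1],
            [1, 9001, 9001]]

def path_total(moves):
    """Sum of the entries along a sequence of 0 (left) / 1 (right) moves."""
    idx, total = 0, 0
    for row, move in zip(triangle, moves):
        idx += move
        total += row[idx]
    return total

# Greedy goes right first (1 < 2), then both children cost 9001.
greedy = path_total([1, 0])
# Brute force over all 2^2 paths finds the true minimum: 2 + 1 = 3.
best = min(path_total(m) for m in product((0, 1), repeat=2))
```
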
Obviously, the optimal path is to go left twice - but a greedy algorithm will begin by moving to the right! This is an example of when all paths must be considered, and taking a shortcut by using a greedy algorithm is insufficient. As an aside, it may appear that, in the general version of this problem with $n$ layers, we have to consider all $2^n$ possible paths - but there is a much more clever approach to this problem, which - as a conclusion to this article - we offer as an exercise to the reader.