A common problem archetype is the following: “given 𝑛 tasks, order them in some optimal way” or “find an order of 𝑛 tasks satisfying a certain condition”. I will outline some common methods to solve these problems below.

Total Orders

In many cases, it will be possible to define a total order on the tasks; that is, it’s possible to assign each task a “priority” such that it’s always optimal to do high-priority tasks before low-priority ones.

An example of a non-total order is rock-paper-scissors, because the three moves can’t be placed into a total order (e.g. rock > paper > scissors) due to their cyclic nature.

Tasks and Deadlines

(CSES) You have to process $n$ tasks. Each task has a duration and a deadline, and you will process the tasks in some order one after another. Your reward for a task is $d - f$, where $d$ is its deadline and $f$ is your finishing time. (The starting time is 0, and you have to process all tasks even if a task would yield negative reward.)

The first sneaky observation is that the $+d$ term doesn’t actually matter, since we can rewrite the total reward as $\sum_i (d_i - f_i) = \sum_i d_i - \sum_i f_i$, and note that since we have to complete all the tasks, $\sum_i d_i$ is constant over all possible orderings. Therefore, we can reduce this problem to minimizing $\sum_i f_i$ instead.

As it turns out, if we want to minimize this quantity, we should always do tasks in order from lowest to highest duration! There are several ways to show this, but one particularly nice way is the exchange argument.
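As a minimal sketch of this greedy (the helper name is mine, with tasks given as (duration, deadline) pairs, and this is not necessarily the reference solution):

```python
def max_total_reward(tasks):
    """tasks: list of (duration, deadline) pairs.
    Sort by duration ascending, then accumulate rewards d - f."""
    total = 0
    finish = 0
    for duration, deadline in sorted(tasks, key=lambda t: t[0]):
        finish += duration          # finishing time of this task
        total += deadline - finish  # reward d - f (may be negative)
    return total
```

For instance, on the three tasks (6, 10), (8, 15), (5, 12), this processes them in the order of durations 5, 6, 8 and returns a total reward of 2.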

Exchange Argument

This idea is an example of local analysis. Here’s the idea:

Suppose we’ve found an optimal ordering. Then, swapping any two adjacent elements in this ordering certainly cannot be better, otherwise our assumption of optimality would be contradicted. This simple, local condition can often help us greatly reduce the complexity of ordering problems.

Let’s apply this idea to the CSES problem above. Say we’ve ordered the tasks in a way such that the 𝑖th task has duration 𝑑𝑖 and is completed at time 𝑓𝑖. Then, how does our exchange condition constrain the relationship between 𝑑1 and 𝑑2?

Consider swapping the 1st and 2nd tasks. Then, the first observation we should make is that $f_2, f_3, \dots, f_n$ will all remain unchanged. Therefore, we only need to consider how $f_1$ changes! This is why considering adjacent swaps in particular is so useful: in many cases, it will leave the suffix unchanged, leading to a much simpler analysis.

Going back to our problem, if our order is optimal, then the following must hold:

$$\sum_i f_i \le \sum_i f_i' \iff f_1 \le f_1' \iff d_1 \le d_2,$$

where $f_i'$ denotes the finishing times after the swap (so $f_1 = d_1$ and $f_1' = d_2$).

If we repeat this argument for other indices, we can indeed show that in an optimal order, $d_i \le d_{i+1}$ for all $i$, as desired.
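To see this concretely, here is a quick brute-force sanity check (the setup is mine, not from the original): for small random inputs, the duration-sorted order achieves the minimum of $\sum_i f_i$ over all permutations.

```python
from itertools import permutations
import random

def sum_of_finish_times(durations):
    """Sum of f_i when tasks are processed in the given order."""
    finish, total = 0, 0
    for d in durations:
        finish += d
        total += finish
    return total

random.seed(0)
for _ in range(100):
    ds = [random.randint(1, 10) for _ in range(5)]
    best = min(sum_of_finish_times(p) for p in permutations(ds))
    assert sum_of_finish_times(sorted(ds)) == best
```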

Note that in this problem, this condition ends up being both necessary and sufficient: since there’s only one way for this condition to be satisfied, there’s no need to worry about finding a suboptimal local minimum. However, this blog has some educational examples of situations in which we need to be slightly more careful. In particular, it considers the following problem:

Task Scheduling

There are $n$ jobs, and each job consists of two parts. The first part must be done in center A, the second part must be done in center B, each center can do at most one job at a time, and the second part of the $i$-th job can be started only if the first part of the $i$-th job is already done. Find the minimum amount of time required to complete all the jobs, assuming you can arrange them in any order.

[Figure: the two centers A and B on a shared time axis $t$.]

An illustration of the scheduling procedure described above. The two parts of each job are assigned the same color. Notice how the second red part can only begin once the first red part has finished.

If we try applying the same exchange argument as above, we’ll end up with the condition that

$$\min(a_{i+1}, b_i) \ge \min(a_i, b_{i+1})$$

for all 𝑖. The exchange argument guarantees that this condition is necessary, but it ultimately turns out not to be sufficient (for more details, consult the blog). In order for this comparator to work, we have to make it slightly less “relaxed” by tie-breaking with 𝑎𝑖.

Johnson’s Rule

However, I’d like to present a cleaner line of thinking. Actually, at first glance, it’s not at all obvious that a total order will even exist. Couldn’t it be possible that 𝑎<𝑏 and 𝑏<𝑐, but 𝑐<𝑎 (here, 𝑥<𝑦 means it’s better to perform task 𝑥 before task 𝑦)?

Let’s change our perspective and instead analyze $t_A$, the amount of time for which only center A is operating. Then, the total time taken, $t$, will simply be $t_A + \sum_i b_i$, and since $\sum_i b_i$ is constant, we can turn our attention to minimizing $t_A$ instead.

Now, there turns out to be a very nice way of visualizing 𝑡𝐴. Let me illustrate with an example where 𝑎1=1, 𝑏1=3, 𝑎2=4, 𝑏2=2:

[Figure: a step plot that rises by $a_i$ and falls by $b_i$ in the order $a_1, b_1, a_2, b_2$; the peak of the plot is marked $t_A$.]

𝑡𝐴 is just the maximum height reached! Algebraically:

$$t_A = \max_i \left( \sum_{j=1}^{i} a_j - \sum_{j=1}^{i-1} b_j \right)$$

Take a moment to make sure you understand why.
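As a sanity check (the helper names are mine, and the jobs are the example above), the closed form for $t_A$ can be compared against a direct simulation of the two centers:

```python
def total_time(jobs):
    """Simulate the two centers. jobs: list of (a_i, b_i) pairs."""
    ca = cb = 0
    for a, b in jobs:
        ca += a               # center A processes first parts back to back
        cb = max(cb, ca) + b  # center B may wait for A before starting part two
    return cb

def t_a(jobs):
    """max over i of (a_1 + ... + a_i) - (b_1 + ... + b_{i-1})."""
    pa = pb = best = 0
    for a, b in jobs:
        pa += a
        best = max(best, pa - pb)
        pb += b
    return best

jobs = [(1, 3), (4, 2)]  # a1=1, b1=3, a2=4, b2=2, as in the figure
assert total_time(jobs) == t_a(jobs) + sum(b for _, b in jobs)  # 7 == 2 + 5
```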

Hopefully, from this perspective, it’s slightly clearer why a total order emerges. The first observation we should make is that we should put all tasks with 𝑎𝑖<𝑏𝑖 before tasks with 𝑎𝑖>𝑏𝑖: visually, that means we want to build as deep a valley as possible before climbing back up.

So, we can now consider the subproblem where 𝑎𝑖<𝑏𝑖 holds for all 𝑖. In this case, how should we arrange our tasks?

Now, how should we arrange tasks when 𝑎𝑖>𝑏𝑖?

Putting it all together, we’ve just discovered Johnson’s rule!
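As a sketch (function names are my own), Johnson's rule can be implemented by partitioning and sorting, then checked against brute force on a small instance:

```python
from itertools import permutations

def johnson_order(jobs):
    """jobs: list of (a, b). Jobs with a < b go first, sorted by
    increasing a; the rest go last, sorted by decreasing b."""
    first = sorted((j for j in jobs if j[0] < j[1]), key=lambda j: j[0])
    rest = sorted((j for j in jobs if j[0] >= j[1]), key=lambda j: -j[1])
    return first + rest

def makespan(jobs):
    """Total completion time when jobs run in the given order."""
    ca = cb = 0
    for a, b in jobs:
        ca += a
        cb = max(cb, ca) + b
    return cb

# Brute-force check on a small instance:
jobs = [(3, 2), (1, 4), (2, 2), (4, 3)]
best = min(makespan(list(p)) for p in permutations(jobs))
assert makespan(johnson_order(jobs)) == best
```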

Problems

The next two problems involve using total orders to reduce the task of finding an ordered subset to finding an unordered subset instead.

D2F: Flint and Steel

IOI 2025: Festivals