Consensus Halving for Sets of Items

Consensus halving refers to the problem of dividing a resource into two parts so that every agent values both parts equally. Prior work has shown that when the resource is represented by an interval, a consensus halving with at most $n$ cuts always exists, but is hard to compute even for agents with simple valuation functions. In this paper, we study consensus halving in a natural setting where the resource consists of a set of items without a linear ordering. When agents have additive utilities, we present a polynomial-time algorithm that computes a consensus halving with at most $n$ cuts, and show that $n$ cuts are almost surely necessary when the agents' utilities are drawn from probability distributions. On the other hand, we show that for a simple class of monotonic utilities, the problem already becomes PPAD-hard. Furthermore, we compare and contrast consensus halving with the more general problem of consensus $k$-splitting, where we wish to divide the resource into $k$ parts in possibly unequal ratios, and provide some consequences of our results on the problem of computing small agreeable sets.


Introduction
Given a set of resources, how can we divide it between two families in such a way that every member of both families believes that the two resulting parts have the same value? This is an important problem in resource allocation and has been addressed several times under different names [Neyman, 1946; Hobby and Rice, 1965; Alon, 1987], with consensus halving being the name by which it is best known today [Simmons and Su, 2003].
In prior studies of consensus halving, the resource is represented by an interval, and the goal is to find an equal division into two parts that makes a small number of cuts in the interval. 1 Using the Borsuk-Ulam theorem from topology, Simmons and Su [2003] established that for any continuous preferences of the n agents involved, there is always a consensus halving that uses no more than n cuts; this also matches the smallest number of cuts in the worst case. In addition, the same authors developed an algorithm that computes an ε-approximate solution for any given ε > 0, meaning that the values of the two parts differ by at most ε for every agent. Although the algorithm is more efficient than a brute-force approach, its running time is exponential in the parameters of the problem. This is in fact not a coincidence: Filos-Ratsikas and Goldberg [2018] recently showed that ε-approximate consensus halving is PPA-complete, implying that the problem is unlikely to admit a polynomial-time algorithm. Filos-Ratsikas et al. [2020a] strengthened this result by proving that the problem remains hard even when the agents have simple valuations over the interval. In particular, the PPA-completeness result holds for agents with "two-block uniform" valuations, i.e., valuation functions that are piecewise uniform over the interval and assign non-zero value to at most two separate pieces.
While these hardness results stand in contrast to the positive existence result, they rely crucially on the resource being in the form of an interval. Most practical division problems do not fall under this assumption, including when we divide assets such as houses, cars, stocks, business ownership, or facility usage. When each item is homogeneous, a consensus halving can be easily obtained by splitting every item in half. However, since splitting individual assets typically involves an overhead, for example in managing a joint business or sharing the use of a house, we want to achieve a consensus halving while splitting only a small number of assets. Fortunately, a consensus halving that splits at most n items is guaranteed to exist regardless of the number of items-this can be seen by arranging the items on a line in arbitrary order and applying the aforementioned existence theorem of Simmons and Su [2003]. The bound n is also tight: if each agent only values a single item and the n valued items are distinct, all of them clearly need to be split. Nevertheless, given that the items do not inherently lie on a line, the hardness results from previous work do not carry over. Could it be that computing a consensus halving efficiently is possible when the resource consists of a set of items?

Overview of Results
We assume throughout the paper that the resource is composed of m items. Each item is homogeneous, so the utility of an agent for a (possibly fractional) set of items depends only on the fractions of the m items in that set. For this overview we focus on the more interesting case where n ≤ m, but all of our results can be extended to arbitrary n and m.
We begin in Section 2 by considering agents with additive utilities, i.e., the utility of each agent is additive across items and linear in the fraction of each item. Under this assumption, we present a polynomial-time algorithm that computes a consensus halving with at most n cuts by finding a vertex of the polytope defined by the relevant constraints. This positive result stands in stark contrast with the PPA-hardness when the items lie on a line, which we obtain by discretizing an analogous hardness result of Filos-Ratsikas et al. [2020a]. We then show that improving the number of cuts beyond n is difficult: even computing a consensus halving that uses at most n − 1 cuts more than the minimum possible for a given instance is NP-hard. Nevertheless, we establish that instances admitting a solution with fewer than n cuts are rare. In particular, if the agents' utilities for items are drawn independently from non-atomic distributions, it is almost surely the case that every consensus halving requires no fewer than n cuts.
Next, in Section 3, we address the broader class of monotonic utilities, wherein an agent's utility for a set does not decrease when any fraction of an item is added to the set. For such utilities, we show that the problem of computing a consensus halving with at most n cuts becomes PPAD-hard, thereby providing strong evidence of its computational hardness. 2 Perhaps surprisingly, this hardness result holds even for the class of utility functions that we call "symmetric-threshold utilities", which are very close to being additive. Indeed, such utility functions are additive across items; for each item, having a sufficiently small fraction of the item is the same as not having the item at all, having a sufficiently large fraction of it is the same as having the whole item, and the utility increases linearly in between. On the other hand, we present a number of positive results for monotonic utilities when the number of agents is constant in Appendix A.
In Section 4, we provide some implications of our results on the "agreeable sets" problem studied by Manurangsi and Suksompong [2019]. A set is said to be agreeable to an agent if the agent likes it at least as much as the complement set. Manurangsi and Suksompong proved that a set of size at most (m + n)/2 that is agreeable to all agents always exists, and this bound is tight. They then gave polynomial-time algorithms that compute an agreeable set matching the tight bound for two and three agents. We significantly generalize this result by exhibiting efficient algorithms for any number of agents with additive utilities, as well as any constant number of agents with monotonic utilities. In addition, we present a short alternative proof for the bound (m + n)/2 via consensus halving.

Finally, in Section 5, we study the more general problem of consensus k-splitting for agents with additive utilities. Our aim in this problem is to split the items into k parts so that all agents agree that the parts are split according to some given ratios α_1, . . . , α_k; consensus halving corresponds to the special case where k = 2 and α_1 = α_2 = 1/2. Unlike for consensus halving, however, in consensus k-splitting we may want to cut the same item more than once when k > 2, so we cannot assume without loss of generality that the number of cuts is equal to the number of items cut. For any k and any ratios α_1, . . . , α_k, we show that there exists an instance in which cutting (k − 1)n items is necessary. On the other hand, a generalization of our consensus halving algorithm from Section 2 computes a consensus k-splitting with at most (k − 1)n cuts in polynomial time, thereby implying that the bound (k − 1)n is tight for both benchmarks. We also illustrate further differences between consensus k-splitting and consensus halving, both with respect to item ordering and from the probabilistic perspective.

Related Work
Consensus halving falls under the broad area of fair division, which studies how to allocate resources among interested agents in a fair manner [Brams and Taylor, 1996, 1999; Moulin, 2003]. Common fairness notions include envy-freeness (no agent envies another agent in view of the bundles they receive) and equitability (all agents have the same utility for their own bundle). The fair division literature typically assumes that each recipient of a bundle is either a single agent or a group of agents represented by a single preference. However, a number of recent papers have considered an extension of the traditional setting to groups, thereby allowing us to capture the differing preferences within the same group as in our introductory example with families [Manurangsi and Suksompong, 2017; Suksompong, 2018; Kyropoulou et al., 2019; Segal-Halevi and Nitzan, 2019; Segal-Halevi and Suksompong, 2019, 2020]. Note that a consensus halving is envy-free for all members of the two groups; moreover, it is equitable provided that the utilities of the agents are additive and normalized so that every agent has the same value for the entire set of items.
A classical fair division algorithm that dates back over two decades is the adjusted winner procedure, which computes an envy-free and equitable division between two agents [Brams and Taylor, 1996]. 3 The procedure has been suggested for resolving divorce settlements and international border disputes, with one of its advantages being the fact that it always splits at most one item. Sandomirskiy and Segal-Halevi [2019] investigated the problem of attaining fairness while minimizing the number of shared items, and gave algorithms and hardness results for several variants of the problem. As in our work, both the adjusted winner procedure and the work of Sandomirskiy and Segal-Halevi [2019] assume that items are homogeneous and, as in Section 2, that the agents' utilities are linear in the fraction of each item and additive across items. Moreover, both of them require the assumption that all items can be shared: if some items are indivisible, then an envy-free or equitable allocation cannot necessarily be obtained. 4

Besides consensus halving, another problem that also involves dividing items into equal parts is necklace splitting, which can be seen as a discrete analog of consensus halving [Goldberg and West, 1985; Alon and West, 1986; Alon, 1987]. In a basic version of necklace splitting, there is a necklace with beads of n colors, with each color having an even number of beads. Our task is to split the necklace using at most n cuts and arrange the resulting pieces into two parts so that the beads of each color are evenly distributed between both parts. Observe that the difficulty of this problem lies in the spatial ordering of the beads; the problem would be trivial if the beads were unordered items as in our setting.
While consensus halving and necklace splitting have long been studied by mathematicians, they recently gained significant interest among computer scientists thanks in large part to new computational complexity results [Filos-Ratsikas and Goldberg, 2018, 2019; Deligkas et al., 2019; Alon and Graur, 2020; Filos-Ratsikas et al., 2020a,b]. In particular, the PPA-completeness result of Filos-Ratsikas and Goldberg [2018] for approximate consensus halving was the first such result for a problem that is "natural" in the sense that its description does not involve a polynomial-sized circuit.

Additive Utilities
We first formally define the problem of consensus halving for a set of items. There is a set N = [n] of n agents and a set M = [m] of m items, where [r] := {1, 2, . . . , r} for any positive integer r. A fractional set of items contains a fraction x_j ∈ [0, 1] of each item j. We will mostly be interested in fractional sets of items in which only a small number of items are fractional; that is, most items have x_j = 0 or 1. Agent i has a utility function u_i that describes her nonnegative utility for any fractional set of items; for an item j ∈ M, we sometimes write u_i(j) to denote u_i({j}). A partition of M into fractional sets of items M_1, . . . , M_k has the property that for every item j ∈ M, the fractions of item j in the k fractional sets sum up to exactly 1.
Definition 2.1. A consensus halving is a partition of M into two fractional sets of items M 1 and M 2 such that u i (M 1 ) = u i (M 2 ) for all i ∈ N . An item is said to be cut if there is a positive fraction of it in both parts of the partition.
In this section, we assume that the agents' utility functions are additive. This means that for a set M′ containing a fraction x_j of item j, the utility of agent i is given by $u_i(M') = \sum_{j \in M} x_j \cdot u_i(j)$. Observe that under additivity, M′ forms one part of a consensus halving exactly when $\sum_{j \in M} (x_j - \frac{1}{2}) \cdot u_i(j) = 0$ for every agent i ∈ N; we refer to these equations as the constraints (1). As we mentioned in the introduction, a consensus halving with no more than n cuts is guaranteed to exist regardless of the number of items. Our first result shows that such a division can be found efficiently for additive utilities.
Theorem 2.2. For n agents with additive utilities, there exists a polynomial-time algorithm that computes a consensus halving with at most min{n, m} cuts.
4 One such relaxation is envy-freeness up to any item (EFX), which has been extensively studied in the last few years (e.g., [Plaut and Roughgarden, 2020]). However, as Sandomirskiy and Segal-Halevi [2019] noted, when a divorcing couple decides how to split their children or two siblings try to divide three houses between them, it is unlikely that anyone will agree to a bundle that is envy-free up to one child or house.
Proof. If n ≥ m, a partition that divides every item in half is clearly a consensus halving and makes m = min{n, m} cuts. We therefore assume from now on that n ≤ m and describe a polynomial-time algorithm that computes a consensus halving using no more than n cuts. The main idea of our algorithm is to start with the trivial consensus halving where x 1 = x 2 = · · · = x m = 1/2, and then gradually reduce the number of cuts. We stop when the process cannot be continued, at which point we show that the consensus halving must contain at most n cuts. Our algorithm is presented below.
1. Initially, let x_j = 1/2 for every j ∈ M.
2. Let S denote the set of n equations $\sum_{j \in M} (y_j - \frac{1}{2}) \cdot u_i(j) = 0$ for i ∈ N, and let T = ∅.
3. While there exists a solution (y_1, . . . , y_m) ≠ (x_1, . . . , x_m) to S ∪ T, do the following:
(a) For every j ∈ M such that y_j ≠ x_j, compute γ_j := (1 − x_j)/(y_j − x_j) if y_j > x_j, and γ_j := x_j/(x_j − y_j) if y_j < x_j.
(b) Let j* be an index j that minimizes γ_j.
(c) For every j ∈ M, let s_j := (1 − γ_{j*}) · x_j + γ_{j*} · y_j, and update the value of x_j to s_j. Then, add the equation y_{j*} = x_{j*} to T.
4. Output (x_1, . . . , x_m).
Finding a solution (y 1 , . . . , y m ) to S ∪ T that is not equal to (x 1 , . . . , x m ) or determining that such a solution does not exist (Step 3) can be done in polynomial time via Gaussian elimination. 5 Moreover, it is obvious that the other steps of the algorithm run in polynomial time.
We next prove the correctness of our algorithm, starting with arguing that (x_1, . . . , x_m) forms a consensus halving. Since we start with a consensus halving x_1 = · · · = x_m = 1/2, it suffices to show that each execution of the loop in Step 3 preserves the validity of the solution. Observe that, since both (x_1, . . . , x_m) and (y_1, . . . , y_m) are solutions to the equations (1), their convex combination (in Step 3c) also satisfies the equations (1). Furthermore, for each j such that y_j ≠ x_j, the value γ_j is chosen so that if we replace γ_{j*} by γ_j in the formula for s_j, we would have s_j = 1 for the case y_j > x_j, and s_j = 0 for the case y_j < x_j. Since γ_{j*} ≤ γ_j, we have that s_j ∈ [0, 1] for all j such that y_j ≠ x_j. In addition, the value of x_j does not change for j such that y_j = x_j. Thus, (x_1, . . . , x_m) remains a consensus halving throughout the algorithm.
Finally, we are left to show that at most n items are cut in the output (x 1 , . . . , x m ). As noted above, our definition of γ j ensures that x j * ∈ {0, 1} after the execution of Step 3c. Furthermore, as the constraint y j * = x j * is then immediately added to T , the value of x j * does not change for the rest of the algorithm. As a result, every item j ∈ T is uncut. Thus, it suffices to show that |T | ≥ m − n at the end of the execution.
When the while loop in Step 3 terminates, (x_1, . . . , x_m) must be the unique solution to S ∪ T. Recall that a system of linear equations with m variables can only have a unique solution when the number of constraints is at least m. This means that |S ∪ T| ≥ m at the end of the algorithm. Since |S| = n, we must have |T| ≥ m − n, as desired.

Note that the above algorithm can be viewed as finding a vertex of the polytope defined by the constraints (1) and 0 ≤ x_j ≤ 1 for all j ∈ M. In fact, it suffices to use a generic algorithm for this task; however, to the best of our knowledge, such algorithms often involve solving a linear program, whereas the algorithm presented above is conceptually simple and can be implemented directly. We also remark that our algorithm works even when some utilities u_i(j) are negative, i.e., some of the items are goods while others are chores. Allocating a combination of goods and chores has received increasing attention in the fair division community [Bogomolnaia et al., 2017; Segal-Halevi, 2018; Aziz et al., 2019].
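The proof is constructive, and the loop can be sketched in code. The sketch below is ours, not the paper's: it replaces Gaussian elimination with numpy's SVD, finding a solution y ≠ x to S ∪ T as y = x + d, where d is a null-space direction of the constraint matrix.

```python
import numpy as np

def consensus_halving(U, tol=1e-9):
    """Sketch of the vertex-walking algorithm for additive utilities.
    U is an n x m matrix with U[i, j] = u_i(j).  Returns x in [0, 1]^m with
    sum_j (x_j - 1/2) * u_i(j) = 0 for every agent i, and at most min(n, m)
    fractional coordinates."""
    n, m = U.shape
    x = np.full(m, 0.5)                 # the trivial consensus halving
    frozen = []                         # items fixed at 0 or 1 (the set T)
    while True:
        # Rows of the system S ∪ T: agent constraints, plus a unit row per
        # frozen item j encoding the equation y_j = x_j.
        A = np.vstack([U] + [np.eye(m)[[j]] for j in frozen])
        _, s, Vt = np.linalg.svd(A)
        rank = int((s > tol).sum())
        if rank == m:                   # x is the unique solution: stop
            break
        d = Vt[rank]                    # null-space direction: y = x + d solves S ∪ T
        moving = np.where(np.abs(d) > tol)[0]
        # gamma_j = step length at which coordinate j reaches 0 or 1.
        gammas = np.where(d[moving] > 0,
                          (1 - x[moving]) / d[moving],
                          -x[moving] / d[moving])
        k = int(np.argmin(gammas))
        j_star, gamma = moving[k], gammas[k]
        x = x + gamma * d               # stays a halving: d is orthogonal to rows of U
        x[j_star] = 1.0 if d[j_star] > 0 else 0.0
        frozen.append(j_star)
    return x
```

Each iteration freezes one more coordinate at 0 or 1, so the loop runs at most m times.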
As we discussed in the introduction, an important reason behind the positive result in Theorem 2.2 is the lack of linear order among the items. Indeed, as we show next, if the items lie on a line and we are only allowed to cut the line using n cuts, finding a consensus halving becomes computationally hard. This follows from discretizing the hardness result of Filos-Ratsikas et al. [2020a] and holds even if we allow the consensus halving to be approximate instead of exact. Formally, when the items lie on a line, we may place a number of cuts, with each cut lying either between two adjacent items or at some position within an item. All (fractional or whole) items between any two adjacent cuts must belong to the same fractional set of items in a partition, where the left and right ends of the line also serve as "cuts" in this requirement (see Figure 1 for an example). We say that a partition into fractional sets of items (M_1, M_2) is an ε-approximate consensus halving if |u_i(M_1) − u_i(M_2)| ≤ ε · u_i(M) for every agent i.

Theorem 2.3. Suppose that the items lie on a line. There exists a polynomial p such that finding a 1/p(n)-approximate consensus halving for n agents with at most n cuts on the line is PPA-hard, even if the valuations are binary and every agent values at most two contiguous blocks of items.
Proof. We prove this by discretizing the hard instances constructed by Filos-Ratsikas et al. [2020a, Theorem 2]. In their setting there are n agents who have piecewise-uniform valuation functions v_1, . . . , v_n over the interval [0, 1]. 6 By a closer inspection of their proof, we note that the instances they construct have some useful properties. Namely, there exist polynomials p and q such that:

1. Every agent has a two-block uniform valuation on [0, 1], i.e., the density of the valuation function is piecewise-uniform and non-zero in at most two intervals. In other words, every agent has (at most) two blocks of value and they have the same height.
2. There exists an integer d ≤ q(n) such that for all agents, the endpoints of the blocks are rational numbers with denominator d.
Using these properties, we can construct an equivalent instance in our setting. We position m = d items on a line, where the jth item represents the interval I_j := [(j − 1)/d, j/d] in the original instance. Note that for every agent of the original instance, the density of their valuation function is constant over I_j for each j. Thus, by letting u_i(j) be agent i's value for the interval I_j, for all i ∈ [n] and j ∈ [d], we have exactly recreated the same valuation functions in our setting, up to normalization. In particular, any 1/p(n)-approximate consensus halving of the items using at most n cuts on the line immediately yields a 1/p(n)-approximate consensus halving of v_1, . . . , v_n using at most n cuts on [0, 1], implying that our problem is also PPA-hard.
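The discretization step, assigning each item the valuation's mass on its interval, can be sketched as follows; `discretize` is our illustrative helper, with blocks given as exact rationals so that block endpoints align with item boundaries.

```python
from fractions import Fraction

def discretize(blocks, d):
    """Item utilities for the line instance: u(j) is the valuation's mass in
    I_j = [(j-1)/d, j/d].  blocks lists the uniform-density value blocks as
    pairs (a, b) of Fractions whose denominators divide d, so every item lies
    fully inside or outside each block."""
    utils = []
    for j in range(1, d + 1):
        lo, hi = Fraction(j - 1, d), Fraction(j, d)
        # Total overlap length of I_j with the value blocks.
        utils.append(sum((min(b, hi) - max(a, lo)
                          for a, b in blocks
                          if min(b, hi) > max(a, lo)), Fraction(0)))
    return utils
```

A two-block valuation with denominator d thus becomes a vector of d item utilities, as in the proof.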
Although Theorem 2.2 allows us to efficiently compute a consensus halving with no more than n cuts in any instance, for some instances there exists a solution using fewer cuts. An extreme example is when all agents have the same utility function, in which case a single cut already suffices. This raises the question of determining the least number of cuts required for a given instance. Unfortunately, when there is a single agent, deciding whether there is a consensus halving that leaves all items uncut is already equivalent to the well-known NP-hard problem Partition. For general n, even computing a division that uses at most n − 1 cuts more than the optimal solution is still computationally hard, as the following theorem shows.
Theorem 2.4. For n agents with additive utilities, it is NP-hard to compute a consensus halving that uses at most n − 1 cuts more than the minimum number of cuts for the same instance.
Proof. We reduce from the NP-hard problem Partition. Let w_1, . . . , w_r be the integers that form a Partition instance. We construct a consensus halving instance I with n agents and a set of n × r items M = {(ℓ, j) : ℓ ∈ [n], j ∈ [r]}. Every agent values a distinct set of items according to the numbers w_1, . . . , w_r. Formally, u_i((ℓ, j)) = w_j if ℓ = i, and u_i((ℓ, j)) = 0 otherwise, for all i, ℓ ∈ [n] and j ∈ [r]. It is easy to see that this instance has the following properties:

1. If w_1, . . . , w_r can be partitioned into two sets of equal sum, then our instance I admits a consensus halving using no cut.
2. If w 1 , . . . , w r cannot be partitioned into two sets of equal sum, then any consensus halving of our instance I uses at least n cuts. This is because in that case, for every agent i ∈ N , at least one of the items (i, 1), . . . , (i, r) must be cut.
As a result, in the first case, any consensus halving that uses at most n − 1 cuts more than the minimum number of cuts will have at most n − 1 cuts. In the second case, any consensus halving that uses at most n − 1 cuts more than the minimum number of cuts will have at least n cuts. Thus, Partition reduces to the problem of computing a consensus halving that uses at most n − 1 cuts more than the minimum number of cuts.
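The reduction's instance can be sketched as follows; `build_instance` and the brute-force `zero_cut_halving_exists` are our illustrative helpers, the latter checking on tiny instances that a zero-cut consensus halving exists exactly when the weights admit an equal-sum partition.

```python
from itertools import product

def build_instance(weights, n):
    """Instance I from the reduction: items (l, j) for l in [n], j in [r];
    agent i values item (l, j) at w_j if l == i and at 0 otherwise (our
    reading of the construction in the proof)."""
    r = len(weights)
    items = [(l, j) for l in range(n) for j in range(r)]
    U = [[weights[j] if l == i else 0 for (l, j) in items] for i in range(n)]
    return items, U

def zero_cut_halving_exists(U):
    """Brute force over all integral splits: is there an M_1 with
    u_i(M_1) = u_i(M)/2 for every agent?  Feasible only for tiny instances."""
    m = len(U[0])
    totals = [sum(row) for row in U]
    return any(all(2 * sum(u for u, a in zip(row, assign) if a) == t
                   for row, t in zip(U, totals))
               for assign in product([0, 1], repeat=m))
```

With weights {3, 1, 2} (partitionable) a zero-cut halving exists, while with {1, 1, 3} (odd total) none does.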
Theorem 2.4 implies that there is no hope of finding a consensus halving with the minimum number of cuts, or even a non-trivial approximation thereof, in polynomial time, provided that P ≠ NP. Nevertheless, we show that instances that admit a consensus halving with fewer than n cuts are rare: if the utilities are drawn independently at random from probability distributions, then it is almost surely the case that any consensus halving needs at least n cuts. We say that a distribution is non-atomic if it does not put positive probability on any single point.
Theorem 2.5. Suppose that for each i ∈ N and j ∈ M, the utility u_i(j) is drawn independently from a non-atomic distribution D_{i,j}. Then, with probability 1, every consensus halving uses at least min{n, m} cuts.

Proof. The high-level idea is to show that if there are fewer than min{n, m} cuts, then a certain utility u_i(j) needs to take on a specific value; this event occurs with probability 0 since the distribution D_{i,j} is non-atomic.

Let m_cut = min{n, m} − 1. Recall that a consensus halving corresponds to a tuple (x_1, . . . , x_m) ∈ [0, 1]^m for which the constraints (1) are satisfied, and that item j is cut if and only if x_j ∉ {0, 1}. As a result, by the union bound, it suffices to show that for any fixed M_cut ⊆ M of size m_cut, we have

Pr[there exists a consensus halving whose cut items all belong to M_cut] = 0. (2)

For notational convenience, we will only show that (2) holds for M_cut = {1, . . . , m_cut}; due to symmetry, the same bound also holds for every M_cut ⊆ M of size m_cut. To show (2) for M_cut = {1, . . . , m_cut}, we may apply the union bound again, this time over the 2^{m − m_cut} choices of t_{m_cut+1}, . . . , t_m ∈ {0, 1}. Hence, it suffices to show that, for any fixed t_{m_cut+1}, . . . , t_m ∈ {0, 1}, we have

Pr[there exists a consensus halving with x_j = t_j for all j ∉ M_cut] = 0.

To see that this is the case, consider any fixed values of u_i(j) for all i ∈ N, j ∈ M_cut; we will show that the above probability is 0 over the randomness of the utilities u_i(j) for i ∈ N, j ∉ M_cut. We may rearrange the constraints (1) as

$\sum_{j \in M_{\mathrm{cut}}} \left(x_j - \frac{1}{2}\right) \cdot u_i(j) = -\sum_{j \notin M_{\mathrm{cut}}} \left(t_j - \frac{1}{2}\right) \cdot u_i(j)$ for all i ∈ N. (3)

Now, since there are n linear equations and only m_cut < n variables x_1, . . . , x_{m_cut}, the coefficient vectors (u_1(1), . . . , u_1(m_cut)), . . . , (u_n(1), . . . , u_n(m_cut)) must be linearly dependent. In other words, there exists (a_1, . . . , a_n) ≠ (0, . . . , 0) such that

$\sum_{i \in N} a_i \cdot (u_i(1), \ldots, u_i(m_{\mathrm{cut}})) = (0, \ldots, 0)$.

Hence, by taking the corresponding linear combination of (3), we have

$0 = -\sum_{i \in N} a_i \sum_{j \notin M_{\mathrm{cut}}} \left(t_j - \frac{1}{2}\right) \cdot u_i(j)$.

From (a_1, . . . , a_n) ≠ (0, . . . , 0), there exists i* ∈ N such that a_{i*} ≠ 0. Moreover, since m_cut < m, we have m ∉ M_cut. The above equality therefore implies that

$u_{i^*}(m) = -\frac{1}{a_{i^*} \cdot (t_m - \frac{1}{2})} \sum_{(i, j) \neq (i^*, m),\, j \notin M_{\mathrm{cut}}} a_i \cdot \left(t_j - \frac{1}{2}\right) \cdot u_i(j)$,

where t_m − 1/2 is nonzero because t_m ∈ {0, 1}. Since D_{i*,m} is non-atomic and the utilities are drawn independently, the above equality occurs with probability 0, which implies that the probability above is indeed 0. As discussed, this in turn implies that the probability that there is a consensus halving with at most m_cut cuts is 0, concluding our proof.
We now comment on the necessity of the two distributional assumptions in Theorem 2.5.
• Non-atomicity condition: Suppose n = 1 and D_{1,j} is the Bernoulli distribution with p = 1/2 for all j ∈ M, i.e., u_1(j) = 0 or u_1(j) = 1, each with probability 1/2. Then the minimum number of cuts is 1 if u_1(j) = 1 for an odd number of items j, and 0 otherwise; each of these two events occurs with probability 1/2.
• Independence condition: Suppose all agents have the same utility function, i.e., the dependence between the utilities is such that u 1 (j) = · · · = u n (j) for all j ∈ [m]. In this case, it is clear that no more than one cut is needed regardless of n and m.
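The non-atomicity remark can be illustrated for n = 1, where a consensus halving with no cuts exists exactly when some subset of the items carries half of the total utility. A small simulation sketch (our own, not from the paper):

```python
from itertools import combinations
import random

def zero_cut_halving_exists(utils):
    """For a single agent, a zero-cut consensus halving exists iff some
    subset of the items has exactly half of the total utility."""
    total = sum(utils)
    return any(2 * sum(c) == total
               for r in range(len(utils) + 1)
               for c in combinations(utils, r))

random.seed(0)
trials, m = 100, 8
# Non-atomic utilities (uniform on [0, 1]): an exact equal split almost surely fails.
continuous = sum(zero_cut_halving_exists([random.random() for _ in range(m)])
                 for _ in range(trials))
# Atomic utilities (fair coin in {0, 1}): it succeeds iff the number of ones is even.
bernoulli = sum(zero_cut_halving_exists([random.randint(0, 1) for _ in range(m)])
                for _ in range(trials))
```

In the continuous case the count is 0 across all trials, while the Bernoulli case succeeds in roughly half of them, matching the remark.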
As our final remark of this section, consider utility functions that are again additive across items, but for which the utility of each item scales quadratically as opposed to linearly in the fraction of the item. That is, for a set M′ containing a fraction x_j of item j, the utility of agent i is given by $u_i(M') = \sum_{j \in M} x_j^2 \cdot u_i(j)$. Even though these utility functions appear different from the ones we have considered so far, it turns out that the set of consensus halvings remains exactly the same. Indeed, a partition (M_1, M_2) is a consensus halving under the quadratic functions if and only if $\sum_{j \in M} x_j^2 \cdot u_i(j) = \sum_{j \in M} (1 - x_j)^2 \cdot u_i(j)$ for every i ∈ N; since $x_j^2 - (1 - x_j)^2 = 2(x_j - \frac{1}{2})$, this is equivalent to the constraints (1), so all of our results in this section apply to the quadratic functions as well.
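The equivalence rests on the identity x_j² − (1 − x_j)² = 2(x_j − 1/2); a quick numerical sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.random((4, 10))        # utilities u_i(j) for 4 agents and 10 items
x = rng.random(10)             # an arbitrary fractional partition

# Linear imbalance: sum_j (x_j - 1/2) u_i(j) for each agent i.
linear = (x - 0.5) @ U.T
# Quadratic imbalance: sum_j x_j^2 u_i(j) - sum_j (1 - x_j)^2 u_i(j).
quadratic = (x**2) @ U.T - ((1 - x)**2) @ U.T

# x^2 - (1-x)^2 = 2x - 1, so the quadratic imbalance is twice the linear one;
# in particular, one vanishes exactly when the other does.
assert np.allclose(quadratic, 2 * linear)
```

Either imbalance is zero for all agents precisely when the other is, which is the claimed equivalence.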

Monotonic Utilities
Next, we turn our attention to utility functions that are no longer additive as in Section 2. We assume that the utilities are monotonic, meaning that the utility of an agent for a set of items cannot decrease upon adding any fraction of an item to the set. Our main result is that finding a consensus halving is computationally hard for such valuations; in fact, the hardness holds even when the utilities take on a specific structure that we call symmetric-threshold. Symmetric-threshold utilities are additive over items, and linear with symmetric thresholds within every item. Formally, the utility of agent i for a fractional set of items M′ containing a fraction x_j ∈ [0, 1] of each item j can be written as $u_i(M') = \sum_{j \in M} f_{ij}(x_j) \cdot u_i(j)$, where $f_{ij}(x) = 0$ for $x \le c_{ij}$, $f_{ij}(x) = \frac{x - c_{ij}}{1 - 2c_{ij}}$ for $c_{ij} < x < 1 - c_{ij}$, and $f_{ij}(x) = 1$ for $x \ge 1 - c_{ij}$; here c_{ij} ∈ [0, 1/2) is the threshold or cap of agent i for item j. Intuitively, symmetric-threshold utilities model settings where having a small fraction of an item is the same as not having the item at all, while having a large fraction of the item is the same as having the whole item. The point where this threshold behavior occurs is controlled by the cap c_{ij}, which can be different for every pair (i, j) ∈ N × M. It is easy to see that the resulting utility functions are indeed monotonic. Note that although general monotonic utility functions do not necessarily admit a concise representation (see the discussion preceding Theorem 4.3), symmetric-threshold utility functions can be described succinctly.
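A symmetric-threshold utility can be sketched directly; the piecewise formula below is our reading of the description above (worthless up to the cap, linear in between, saturating at 1 − c_ij):

```python
def threshold_utility(u, c, x):
    """Symmetric-threshold utility: per-item values u[j], caps c[j] in [0, 1/2),
    and fractions x[j].  A fraction below c[j] is worth nothing, a fraction
    above 1 - c[j] is as good as the whole item, and value grows linearly
    in between (a sketch of the formula described in the text)."""
    total = 0.0
    for uj, cj, xj in zip(u, c, x):
        if xj <= cj:
            f = 0.0
        elif xj >= 1 - cj:
            f = 1.0
        else:
            f = (xj - cj) / (1 - 2 * cj)
        total += f * uj
    return total
```

Setting every cap to 0 recovers the additive, linear utilities of Section 2.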
Even though symmetric-threshold utility functions are very close to being additive, we show that finding a consensus halving for such utilities is computationally hard. Recall that a partition (M_1, M_2) is an ε-approximate consensus halving if |u_i(M_1) − u_i(M_2)| ≤ ε · u_i(M) for every agent i.

Theorem 3.1. There exists a constant ε > 0 such that finding an ε-approximate consensus halving for n agents with monotonic utilities that uses at most n cuts is PPAD-hard, even if all agents have symmetric-threshold utilities.
Proof. We prove this result by reducing from a modified version of the generalized circuit problem. The generalized circuit problem is the main tool that has been used (implicitly or explicitly) to prove hardness of computing Nash equilibria in various settings [Chen et al., 2009;Daskalakis et al., 2009;Rubinstein, 2018]. A generalized circuit is a generalization of an arithmetic circuit, because it allows cycles, which means that instead of a simple computation, the circuit now represents a constraint satisfaction problem. The version of the problem we use is different from the standard one in two aspects. First, instead of the domain [0, 1], we use [−1, 1], which is more adapted to the consensus halving problem. Second, we will only allow the circuit to use three types of arithmetic gates. As we will show below, these modifications do not change the complexity of the problem.
Formally, we consider the following simplified generalized circuits.
Definition 3.3. Let ε > 0. The problem ε-simple-Gcircuit is defined as follows: given a simple generalized circuit (V, T), find an assignment x : V → [−1, 1] that ε-approximately satisfies all the gates (G, u_1, u_2, v, ζ) in T, namely:
• if G = G_+, then x(v) = min{1, max{−1, x(u_1) + x(u_2)}} ± ε;
• if G = G_{×−|ζ|}, then x(v) = −|ζ| · x(u_1) ± ε;
• if G = G_1, then x(v) = 1 ± ε;
where "a = b ± ε" means |a − b| ≤ ε.

As mentioned earlier, it turns out that this modified version of the generalized circuit problem is also PPAD-hard. This can be proved by reducing from the standard ε-Gcircuit problem, which was shown to be PPAD-hard even for constant ε by Rubinstein [2018]. The idea is that these simple gates are enough to simulate all the gates in the standard version of the problem. Both problems are in fact PPAD-complete, since they can be reduced to the problem of finding an approximate Brouwer fixed point, but here we are only interested in the hardness.
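As a sketch, checking whether an assignment ε-approximately satisfies a single gate might look as follows; the gate semantics encoded here (truncated addition for G_+, multiplication by −|ζ|, and the constant 1) are our reading of the surrounding text rather than the paper's formal definition:

```python
def gate_satisfied(gate, x, eps):
    """Check one gate of a simple generalized circuit against an assignment
    x: node -> value in [-1, 1].  gate = (kind, u1, u2, v, zeta), with unused
    slots set to None, mirroring the (G, u_1, u_2, v, zeta) tuples in the text."""
    kind, u1, u2, v, zeta = gate
    if kind == "G_plus":
        # Addition truncated back into the domain [-1, 1].
        target = max(-1.0, min(1.0, x[u1] + x[u2]))
    elif kind == "G_mult":
        # Multiplication by the negative constant -|zeta|.
        target = -abs(zeta) * x[u1]
    else:  # "G_one": the constant gate outputting 1
        target = 1.0
    return abs(x[v] - target) <= eps
```

A solution to ε-simple-Gcircuit is then an assignment for which every gate passes this check.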
The proof of Lemma 3.4 can be found in Appendix B. Let ε > 0 be a constant for which the ε-simple-Gcircuit problem is PPAD-hard. We will now show that the ε-simple-Gcircuit problem reduces to the problem of finding an ε′-approximate consensus halving for n agents with symmetric-threshold utilities that uses at most n cuts, for a suitable constant ε′ > 0 chosen below.
Let (V, T) be an instance of ε-simple-Gcircuit. Partition V into four sets V_0 ∪ V_+ ∪ V_× ∪ V_1, where
• V_0 contains every node that is not the output of any gate in T,
• V_+ contains every node that is the output of a G_+ gate in T,
• V_× contains every node that is the output of a G_{×−|ζ|} gate in T,
• V_1 contains every node that is the output of a G_1 gate in T.
We construct a consensus halving instance with n = 2|V_+| + |V_×| + |V_1| agents and m = |V| + |V_+| + 1 items. For every v ∈ V, let j(v) ∈ M denote the corresponding item, and for every v ∈ V_+, let j′(v) ∈ M denote the second corresponding item. Finally, let j* ∈ M denote the single remaining item, which we call the special item.
It remains to specify the utility functions for the agents and the constant ε > 0. We will see below that in any partition of M into two fractional sets of items (M 1 , M 2 ), there is a simple way to associate a value val(j) ∈ [−1, 1] to every item j ∈ M . We will pick the agents' utilities so that in any ε-approximate consensus halving (with at most n cuts), these values must satisfy the gate constraints in T .
Value Encoding. Consider any partition of M into two fractional sets of items (M_1, M_2). Let x_j ∈ [0, 1] denote the fraction of item j in M_1. This fraction x_j ∈ [0, 1] encodes a number val(j) ∈ [−1, 1] as follows: val(j) = −1 if x_j ≤ 1/3, val(j) = 6 · (x_j − 1/2) if 1/3 < x_j < 2/3, and val(j) = 1 if x_j ≥ 2/3. The main idea of the reduction is that the value x[v] of node v ∈ V will be given by val(j(v)). Next, we show how to pick the utility functions in order to enforce the gate constraints in T. In the construction below we assume that ε′ ≤ 1/10; the exact value of ε′ will be picked at the end.
G×−|ζ| gates. For any gate (G×−|ζ|, u1, nil, v, ζ) ∈ T, where v ∈ V×, we do the following. Let j = j(v), j1 = j(u1), and i = i(v). We want to ensure that in any ε′-approximate consensus halving, we have val(j) = −ζ · val(j1) ± ε. To achieve this, we define the symmetric-threshold utility function of agent i as follows. For any item ℓ ∉ {j1, j}, we let u_i(ℓ) = 0 and c_iℓ = 0. We let u_i(j) = 1/ζ and c_ij = 0. For j1 we use what we call a standard input utility function, defined as follows: u_i(j1) = 1/3 and c_ij1 = 1/3. Note that u_i(M) = 1/3 + 1/ζ.
G1 gates. For any gate (G1, nil, nil, v, nil) ∈ T, where v ∈ V1, we do the following. Let j = j(v) and i = i(v). We use the same construction as for G×−|ζ| gates, with j1 = j* (the special item) and ζ = 1. By the same arguments, it follows that in any ε′-approximate consensus halving it must hold that val(j) = −val(j*) ± 4ε′, and item j must be fractional, i.e., x_j ∈ (0, 1). Thus, as long as 4ε′ ≤ ε and val(j*) = −1, this correctly enforces the gate constraint.
G+ gates. For any gate (G+, u1, u2, v, nil) ∈ T, where v ∈ V+, let j1 = j(u1), j2 = j(u2), j = j(v), and j′ = j′(v); the gate is enforced through two constraints, with j′ serving as an auxiliary item. To enforce the first constraint, we define the utilities of agent i′ = i′(v) as follows. For any item ℓ ∉ {j1, j2, j′}, we let u_i′(ℓ) = 0 and c_i′ℓ = 0. We let u_i′(j′) = 1 and c_i′j′ = 0. For j1 and j2 we use the standard input utility function as defined earlier. Note that u_i′(M) = 5/3.
We are now ready to complete the proof. Set ε′ = ε/10, and consider any ε′-approximate consensus halving (M1, M2) that uses at most n cuts. We claim that letting x[v] = val(j(v)) for all v ∈ V yields a solution to the ε-simple-Gcircuit instance. Indeed, by construction, all gates of type G+ and G×−|ζ| are correctly enforced. Gates of type G1 are correctly enforced provided that val(j*) = −1, which we now prove. In our construction, we have ensured that for every v ∈ V+ ∪ V× ∪ V1, item j(v) must be fractional, and for every v ∈ V+, item j′(v) must also be fractional. Since these 2|V+| + |V×| + |V1| = n items are fractional and we used at most n cuts, all other items are not fractional. In particular, j* is not fractional, i.e., x_j* ∈ {0, 1}. Without loss of generality, assume that x_j* = 0 (if x_j* = 1, swap the roles of M1 and M2). It follows that val(j*) = −1.

Connections to Agreeable Sets
We now present some implications of our results on consensus halving for the problem of computing agreeable sets. Let us first formally define the agreeable set problem, introduced by Manurangsi and Suksompong [2019].7 As in consensus halving, there is a set N of n agents and a set M of m items. Agent i has a monotonic utility function u_i over non-fractional sets of items (that is, a set function), where we assume the normalization u_i(∅) = 0. A set M′ ⊆ M is said to be agreeable to agent i if u_i(M′) ≥ u_i(M \ M′).
As one of their main results, Manurangsi and Suksompong [2019] showed that for any n and m, there exists a set of at most min{⌊(m+n)/2⌋, m} items that is agreeable to all agents, and this bound is tight. Their proof relies on a graph-theoretic statement often referred to as "Kneser's conjecture", which specifies the chromatic number of a particular class of graphs called Kneser graphs. Here we present a short alternative proof that works by arranging the items on a line in arbitrary order, applying consensus halving, and rounding the resulting fractional partition.
As a bonus, our proof yields an agreeable set that is composed of at most ⌊n/2⌋ + 1 blocks on the line.

Proof of Theorem 4.2. Let s = ⌊(m+n)/2⌋. If s ≥ m, the entire set of items M has size m = min{s, m} and is agreeable to all agents due to monotonicity, so we may assume that s ≤ m. Arrange the items on a line in arbitrary order, and extend the utility functions of the agents to fractional sets of items in a continuous and monotonic fashion.8 Consider a consensus halving with respect to the extended utilities that uses at most n cuts on the line; some of the cuts may cut through items, whereas the remaining cuts fall between adjacent items. Let r ≤ n be the number of items that are cut by at least one cut. Without loss of generality, assume that the first part M′ contains no more full items than the second part M′′, so M′ contains at most ⌊(m−r)/2⌋ full items. By moving all cut items from M′′ to M′ in their entirety, M′ contains at most ⌊(m−r)/2⌋ + r = ⌊(m+r)/2⌋ ≤ s items. Since we start with a consensus halving and only move fractional items from M′′ to M′, the set M′ is agreeable to all agents. Moreover, one can check that M′ is composed of at most ⌈(n+1)/2⌉ = ⌊n/2⌋ + 1 blocks on the line.
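The rounding step of this proof is straightforward to implement; a sketch in Python, where the fractional halving x (the fraction of each item placed in the first part) is an assumed input, e.g. produced by the algorithm of Theorem 2.2:

```python
def round_halving_to_agreeable(x):
    """Round a fractional consensus halving to a non-fractional set, as in the
    proof: start from the part with fewer whole items and add all cut items."""
    m = len(x)
    cut = {j for j in range(m) if 0.0 < x[j] < 1.0}     # items hit by a cut
    whole1 = {j for j in range(m) if x[j] == 1.0}       # fully in the first part
    whole2 = {j for j in range(m) if x[j] == 0.0}       # fully in the second part
    base = whole1 if len(whole1) <= len(whole2) else whole2
    return frozenset(base | cut)
```

Since the starting point is a consensus halving and the chosen part only gains value, the returned set is agreeable to every agent with monotonic utilities.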
In light of Theorem 4.2, an important question is how efficiently we can compute an agreeable set whose size matches the worst-case bound. Manurangsi and Suksompong [2019] addressed this question by providing a polynomial-time algorithm for two agents with monotonic utilities and for three agents with "responsive" utilities, a class that lies between additive and monotonic utilities. They left the complexity for higher numbers of agents as an open question, and conjectured that the problem is hard even when the number of agents is a larger constant. We show that this is in fact not the case: the problem can be solved efficiently for any number of agents with additive utilities, as well as for any constant number of agents with monotonic utilities. Note that since the input of the problem for monotonic utilities can involve an exponential number of values (even for constant n), and consequently may not admit a succinct representation, we assume a "utility oracle model" in which the algorithm is allowed to query the utility u_i(M′) for any i ∈ N and M′ ⊆ M.

Proof. Similarly to Theorem 4.2, if n ≥ m we can simply include all items in our set, so we may focus on the case n ≤ m. For (i), we first use our polynomial-time algorithm from Theorem 2.2 to find a consensus halving, and then compute an agreeable set of size at most ⌊(m+n)/2⌋ by rounding the consensus halving as in the proof of Theorem 4.2.
Next, consider (ii). Recall that for any ordering of the items on a line, Theorem 4.2 guarantees the existence of an agreeable set of size at most ⌊(m+n)/2⌋ involving no more than n cuts on the line. Fix an ordering of the items; we will perform a brute-force search over all (non-fractional) partitions involving at most n cuts with respect to the ordering. For t ∈ [n], there are O(m^t) ways to place t cuts, and for each way, we have two candidate sets to check: one including the leftmost item, and one not including it. A candidate set is valid if and only if it has size at most ⌊(m+n)/2⌋ and is agreeable to all agents. Hence the brute-force search runs in time Σ_{t=1}^{n} O(m^t) = O(n · m^n) = O(m^n), which is polynomial since n is constant.

8 For example, one can use the Lovász extension or the multilinear extension (see Section A.2).
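The search described above can be sketched as follows, with the utility oracles modeled as Python functions (helper names are ours):

```python
from itertools import combinations

def find_agreeable_set(m, oracles):
    """Brute-force search over the candidate sets induced by at most n cuts on a
    fixed ordering of the items; oracles[i](S) returns u_i(S)."""
    n = len(oracles)
    cap = (m + n) // 2
    full = frozenset(range(m))
    def agreeable(S):
        return all(u(S) >= u(full - S) for u in oracles)
    for t in range(1, n + 1):
        for cuts in combinations(range(1, m), t):
            bounds = (0,) + cuts + (m,)
            segs = [list(range(bounds[i], bounds[i + 1]))
                    for i in range(len(bounds) - 1)]
            for start in (0, 1):   # candidate with / without the leftmost item
                S = frozenset(j for i in range(start, len(segs), 2) for j in segs[i])
                if len(S) <= cap and agreeable(S):
                    return S
    return None
```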

Consensus k-Splitting
In this section, we address two important generalizations of consensus halving, both of which were mentioned by Simmons and Su [2003]. In consensus splitting, instead of dividing the items into two equal parts, we want to divide them into two parts so that all agents agree that the split satisfies some given ratio, say two-to-one. In consensus 1/k-division, we want to divide the items into k parts that all agents agree are equal. We consider a problem that generalizes both of these problems at once.
Definition 5.1. Let α_1, . . . , α_k > 0 be real numbers such that α_1 + · · · + α_k = 1. A consensus k-splitting with ratios α_1, . . . , α_k is a partition of M into k fractional sets of items M_1, . . . , M_k such that every agent i ∈ N values the parts in the given ratios, that is, u_i(M_1) : u_i(M_2) : · · · : u_i(M_k) = α_1 : α_2 : · · · : α_k. When the ratios are clear from context, we will simply refer to such a partition as a consensus k-splitting.
As in Section 2, we will assume that the utility functions are additive, in which case our desired condition is equivalent to u i (M ℓ ) = α ℓ · u i (M ) for all i ∈ N and ℓ ∈ [k]. While there is no reason to cut an item more than once in consensus halving, one may sometimes wish to cut the same item multiple times in consensus k-splitting in order to split the item across three or more parts. Hence, even though the number of cuts made is always at least the number of items cut, the two quantities are not necessarily the same in consensus k-splitting. If there are n items and each agent only values a single distinct item, then it is clear that we already need to make (k − 1)n cuts for any ratios α 1 , . . . , α k , in particular k − 1 cuts for each item. Nevertheless, it could still be that for some ratios, it is always possible to achieve a consensus k-splitting by cutting fewer than (k − 1)n items. We show that this is not the case: for any set of ratios, cutting (k − 1)n items is necessary in the worst case.
Theorem 5.2. For any ratios α 1 , . . . , α k > 0, there exists an instance with additive utilities in which any consensus k-splitting with these ratios cuts at least (k − 1)n items.
Proof. Fix α 1 , . . . , α k > 0. We construct an instance such that each agent i has utility 1/b for each of the b items in a set B i , where b is an integer that we will choose later, and utility 0 for every other item. The sets B 1 , . . . , B n are pairwise disjoint. Note that u i (M ) = u i (B i ) = 1 for every i. It suffices to choose b such that at least k − 1 items in each set B i must be cut in any consensus k-splitting with ratios α 1 , . . . , α k . By symmetry, we may focus on the first agent and the corresponding set B 1 .
To see why this is sufficient, observe that each uncut item must belong to one of the k parts in its entirety. Since agent 1 has utility 1/b for each item in B_1 and utility α_ℓ for the ℓ-th part, the number of uncut items in B_1 is therefore at most ⌊α_1/(1/b)⌋ + · · · + ⌊α_k/(1/b)⌋ = ⌊α_1 b⌋ + · · · + ⌊α_k b⌋, meaning that the number of cut items in B_1 is at least

b − (⌊α_1 b⌋ + · · · + ⌊α_k b⌋) = (α_1 b + · · · + α_k b) − (⌊α_1 b⌋ + · · · + ⌊α_k b⌋),

where the first equality follows from α_1 + · · · + α_k = 1. Since b, ⌊α_1 b⌋, . . . , ⌊α_k b⌋ are all integers, this implies that at least k − 1 items in B_1 must be cut. It remains to show the existence of b for which (4) is satisfied. Let s be a sufficiently large integer; in particular, s > k. Divide the interval [0, 1] into subintervals of length at most 1/s each. By the pigeonhole principle, there exist positive integers p, q such that q ≥ p + 2, and {α_i p} and {α_i q} fall in the same subinterval for every i ∈ [k]. Letting c = q − p, we have that for each i ∈ [k], either {α_i c} < 1/s or {α_i c} > 1 − 1/s, where we use the assumption that s > k. Hence (4) is satisfied, and the proof is complete.
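The counting argument can be explored numerically; a small sketch that, for given ratios, searches for a b forcing k − 1 cut items in each B_i (an illustration of the target property, not of the pigeonhole construction):

```python
def forced_cut_items(alphas, b):
    """Lower bound on the number of cut items among b equal-value items when
    one agent must see the parts in ratios alphas (see the proof text)."""
    return b - sum(int(a * b) for a in alphas)   # int() == floor for positives

def smallest_forcing_b(alphas, b_max=10**6):
    """Smallest b for which the bound equals k - 1, the maximum possible."""
    k = len(alphas)
    for b in range(1, b_max + 1):
        if forced_cut_items(alphas, b) == k - 1:
            return b
    return None
```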
Next, we show that computing a consensus k-splitting with at most (k − 1)n cuts can be done efficiently using a generalization of our algorithm for consensus halving (Theorem 2.2). Note that such a splitting also cuts at most (k − 1)n items.
Theorem 5.3. For n agents with additive utilities and ratios α_1, . . . , α_k, there is a polynomial-time algorithm that computes a consensus k-splitting with these ratios using at most (k − 1) · min{n, m} cuts.
Proof. Let us start with the case k = 2, which can then be used as a subroutine for the case k > 2. Our algorithm for consensus 2-splitting generalizes the consensus halving algorithm in Theorem 2.2, so we only highlight the differences. To find a consensus 2-splitting with ratios α_1, α_2, the only change to the algorithm in Theorem 2.2 is that we initialize x_1 = · · · = x_m = α_1 and let S be the set of n equations Σ_{j∈M} (y_j − α_1) · u_i(j) = 0 for i ∈ N. By arguments analogous to those in Theorem 2.2, this modified algorithm produces a consensus 2-splitting with ratios α_1, α_2 in polynomial time and uses at most min{n, m} cuts.
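The algorithm of Theorem 2.2 is not reproduced in this section; the following Python sketch illustrates the linear-algebraic idea described above, under the assumption that one repeatedly moves the fractions along a kernel direction of the utility matrix until a coordinate hits 0 or 1 (function names are ours, and details may differ from the actual Theorem 2.2 algorithm):

```python
def kernel_vector(A):
    """Nonzero y with A y = 0, for an n x f matrix with f > n
    (pure-Python Gaussian elimination to reduced row echelon form)."""
    n, f = len(A), len(A[0])
    A = [row[:] for row in A]
    pivots, r = [], 0
    for c in range(f):
        if r == n:
            break
        p = max(range(r, n), key=lambda i: abs(A[i][c]))
        if abs(A[p][c]) < 1e-12:
            continue                      # no pivot in this column
        A[r], A[p] = A[p], A[r]
        pv = A[r][c]
        A[r] = [a / pv for a in A[r]]
        for i in range(n):
            if i != r:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append((r, c))
        r += 1
    free_col = next(c for c in range(f) if c not in {c2 for _, c2 in pivots})
    y = [0.0] * f
    y[free_col] = 1.0
    for rr, c in pivots:
        y[c] = -A[rr][free_col]
    return y

def consensus_2_split(U, alpha1, eps=1e-9):
    """Return x in [0,1]^m with sum_j x[j]*U[i][j] = alpha1 * u_i(M) for every
    agent i, leaving at most n coordinates strictly fractional (= cut items)."""
    n, m = len(U), len(U[0])
    x = [float(alpha1)] * m
    while True:
        free = [j for j in range(m) if eps < x[j] < 1.0 - eps]
        if len(free) <= n:
            return x
        y = kernel_vector([[U[i][j] for j in free] for i in range(n)])
        # largest step keeping every free coordinate inside [0, 1]:
        t = min((1.0 - x[j]) / y[a] if y[a] > 0 else -x[j] / y[a]
                for a, j in enumerate(free) if abs(y[a]) > eps)
        for a, j in enumerate(free):
            x[j] += t * y[a]
            if x[j] < eps: x[j] = 0.0        # snap to the boundary
            if x[j] > 1.0 - eps: x[j] = 1.0
```

Each iteration fixes at least one item at 0 or 1, so the loop runs at most m times; consensus halving is the special case alpha1 = 1/2.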
We now move on to the case k > 2. In this case, we simply apply the above consensus 2-splitting algorithm successively, each time producing one additional part at the expense of at most min{n, m} cuts. This is stated more precisely below. It is clear that the output is a consensus k-splitting with ratios α 1 , . . . , α k , and that the algorithm runs in polynomial time. Finally, observe that each time we apply the consensus 2-splitting algorithm, if there are m ′ items left, we additionally use at most min{n, m ′ } ≤ min{n, m} cuts. As a result, the total number of cuts is at most (k − 1) · min{n, m}, as desired.
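The successive approach is easiest to see for a single agent, where each 2-split can be done greedily with at most one cut; a sketch (our own simplification for n = 1, not the algorithm of Theorem 5.3 itself):

```python
def two_split(items, beta, tol=1e-12):
    """One agent: split items (pairs (id, utility)) into a part worth a beta
    fraction of the total and the remainder, cutting at most one item."""
    target = beta * sum(u for _, u in items)
    part, acc = [], 0.0
    for idx, (j, u) in enumerate(items):
        if acc + u <= target + tol:
            part.append((j, u))
            acc += u
        else:
            cut = target - acc
            if cut > tol:                  # cut item j, keep the rest whole
                part.append((j, cut))
                return part, [(j, u - cut)] + items[idx + 1:]
            return part, items[idx:]
    return part, []

def k_split(items, alphas):
    """Split off one part at a time, as in the proof of Theorem 5.3."""
    parts, remaining, mass = [], list(items), 1.0
    for a in alphas[:-1]:
        part, remaining = two_split(remaining, a / mass)
        parts.append(part)
        mass -= a
    parts.append(remaining)
    return parts
```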
As in Theorem 2.2, our algorithm does not require the nonnegativity assumption on the utilities and therefore works for combinations of goods and chores.
When the items lie on a line, there is always a consensus halving that makes at most n cuts on the line and therefore cuts at most n items; this matches the upper bound on the number of items cut in the absence of a linear order. Theorem 5.3 shows that the bound n continues to hold for consensus splitting into two parts with any ratios. As we show next, however, this bound is no longer achievable for some ratios when the items are ordered, thereby demonstrating another difference that the lack of linear order makes.9

Theorem 5.4. Let n ≥ 2, k = 2, and (α_1, α_2) = (1/n, (n−1)/n). There exists an instance such that the n agents have additive utilities, the items lie on a line, and any consensus k-splitting with ratios α_1 and α_2 makes at least 2n − 4 cuts on the line.
Proof. We discretize a slight modification of an instance used by Stromquist and Woodall [1985] to show a lower bound on the number of cuts when the resource is represented by a one-dimensional circle. Suppose that there are n² − 1 "primary items", which we label as 1, 2, . . . , n² − 1 according to their linear order. Moreover, there are n² − 2 "secondary items", one between every adjacent pair of primary items. The utilities of the agents are as follows:
• For i ∈ [n − 1], agent i has utility 1/(n+1) for each of the n + 1 primary items i, i + (n − 1), . . . , i + n(n − 1), and utility 0 for all secondary items.
• Agent n has utility 1/(n² − 2) for each secondary item, and utility 0 for all primary items.
Note that u_i(M) = 1 for all i. Let M′ be a fractional set of items for which all agents have utility 1/n. Since each agent i ∈ [n − 1] has utility 1/(n+1) < 1/n for a primary item, M′ must contain a positive fraction of at least two primary items that the agent values. These items are disjoint for different agents, so M′ necessarily contains a positive fraction of at least 2n − 2 primary items. On the other hand, the utility function of agent n implies that M′ can contain at most ⌊(1/n)/(1/(n² − 2))⌋ = n − 1 entire secondary items.
Suppose that M′ is composed of r pairwise non-adjacent intervals I_1, . . . , I_r. Notice that for any interval I on the line, if the interval contains a positive fraction of t_1(I) primary items, along with t_2(I) entire secondary items, then t_1(I) ≤ t_2(I) + 1. Hence, we have

2n − 2 ≤ t_1(I_1) + · · · + t_1(I_r) ≤ (t_2(I_1) + 1) + · · · + (t_2(I_r) + 1) ≤ (n − 1) + r,

implying that r ≥ n − 1. This means that the consensus 2-splitting with M′ as one part involves at least 2(n − 1) = 2n − 2 cuts, possibly including the endpoints of the line. At most two of these cuts can correspond to endpoints, implying that the number of cuts made is at least 2n − 4, as desired.
For consensus halving, Theorem 2.5 shows that in a random instance, any solution almost surely uses at least the worst-case number of cuts min{n, m}. One might consequently expect that an analogous statement holds for consensus k-splitting, with (k − 1) · min{n, m} cuts almost always being required. However, we show that this is not true: even in the simple case where n = 1 and the agent's utilities are drawn from the uniform distribution over [0, 1], it is likely that only a single item needs to be cut (instead of k − 1) for large m.
Theorem 5.5. Let n = 1, and suppose that the agent's utility for each item is drawn independently from the uniform distribution on [0, 1]. For any ratios α_1, . . . , α_k > 0, with probability approaching 1 as m → ∞, there exists a consensus k-splitting with these ratios that cuts at most one item. Moreover, there is a polynomial-time algorithm that computes such a solution.
In what follows, we denote the agent's utility function by u, and say that an event happens "with high probability" if the probability that it happens approaches 1 as m → ∞. The proof of Theorem 5.5 proceeds by identifying a simple (deterministic) condition that guarantees a solution cutting only a single item; this is done in Lemma 5.6. Then, we show that this condition is satisfied with high probability.
Lemma 5.6. Suppose that there is a single agent. Let j* := argmax_{j∈M} u(j) denote a most-preferred item, and let M_low-utility := {j ∈ M | u(j) ≤ (1/k) · u(j*)} denote the set of items whose utility is at most 1/k times the utility of j*. For any ratios α_1, . . . , α_k > 0, if Σ_{j∈M_low-utility} u(j) ≥ k · u(j*), then there is a consensus k-splitting with these ratios that cuts only j*. Moreover, there is a polynomial-time algorithm that computes such a solution.
Proof. For each ℓ ∈ [k], let w_ℓ := α_ℓ · Σ_{j∈M} u(j) be the "target utility" for part ℓ of the partition. Consider the following greedy algorithm.
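The algorithm box referenced here is elided in this version of the text; the following Python sketch is a reconstruction consistent with the analysis below (the function name and the non-increasing processing order are our assumptions):

```python
def greedy_fill(u, alphas, tol=1e-12):
    """Set aside a most-preferred item j*, then greedily place the largest
    remaining item j_max into any part whose target w_l it does not exceed;
    stop (leaving M_0 nonempty) when j_max fits nowhere."""
    k, m = len(alphas), len(u)
    jstar = max(range(m), key=lambda j: u[j])
    targets = [a * sum(u) for a in alphas]
    parts, loads = [[] for _ in range(k)], [0.0] * k
    remaining = sorted((j for j in range(m) if j != jstar),
                       key=lambda j: u[j], reverse=True)
    while remaining:
        jmax = remaining[0]
        slot = next((l for l in range(k)
                     if loads[l] + u[jmax] <= targets[l] + tol), None)
        if slot is None:
            return parts, jstar, remaining, None   # stuck: M_0 = remaining
        parts[slot].append(jmax)
        loads[slot] += u[jmax]
        remaining.pop(0)
    # M_0 is empty: the deficits w_l - u(P_l) are nonnegative and sum to u(j*),
    # so cutting only j* in these proportions completes the k-splitting
    fractions = [(targets[l] - loads[l]) / u[jstar] for l in range(k)]
    return parts, jstar, [], fractions
```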
The algorithm clearly runs in polynomial time. We claim that it terminates with M_0 = ∅ provided that Σ_{j∈M_low-utility} u(j) ≥ k · u(j*). This implies the statement of the lemma, because it would then suffice to split only item j*.
Suppose for the sake of contradiction that M_0 ≠ ∅ at the end of the execution. Consider the following two cases, based on whether j_max at termination belongs to M_low-utility.
• Case 1: j_max ∉ M_low-utility. Since the algorithm terminates, it must be that u(P_ℓ) > w_ℓ − u(j_max) ≥ w_ℓ − u(j*) for each ℓ. Summing this over ℓ ∈ [k], we get

u(P_1) + · · · + u(P_k) > (w_1 + · · · + w_k) − k · u(j*) = u(M) − k · u(j*).

On the other hand, since j_max ∉ M_low-utility, it must be that M_low-utility is disjoint from P_1 ∪ · · · ∪ P_k. As a result, we have

u(P_1) + · · · + u(P_k) ≤ u(M) − Σ_{j∈M_low-utility} u(j) ≤ u(M) − k · u(j*),

where the second inequality is from the assumption of the lemma. The above two inequalities imply the desired contradiction.
In both cases, we arrive at a contradiction, and our proof is complete.
With Lemma 5.6 ready, we can now prove Theorem 5.5.
Proof of Theorem 5.5. Since each u(j) is drawn independently from the uniform distribution on [0, 1], the probability that u(j*) ≥ 1/2 is 1 − 1/2^m, which converges to 1 for large m. In addition, since u(j) ∈ [0.1/k, 0.5/k] with probability 0.4/k for each j, a standard Chernoff bound argument implies that with probability approaching 1, the set M′ := {j ∈ M | u(j) ∈ [0.1/k, 0.5/k]} has size at least 0.3m/k. The union bound implies that both events occur simultaneously with high probability. Suppose that they both occur and m ≥ 40k³. From the first event, we have u(j) ≤ 0.5/k ≤ u(j*)/k for each j ∈ M′, and so M′ ⊆ M_low-utility. Hence, the second event implies that

Σ_{j∈M_low-utility} u(j) ≥ (0.1/k) · (0.3m/k) = 0.03m/k² ≥ 1.2k ≥ k · u(j*),

where the second inequality uses m ≥ 40k³ and the third uses u(j*) ≤ 1. From this and Lemma 5.6, we conclude that with high probability, we can efficiently find a consensus k-splitting that cuts only a single item, as claimed.
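The condition of Lemma 5.6 can also be checked empirically; a seeded simulation (our own sanity check, not part of the proof; the constants k = 3 and m = 1000 are arbitrary):

```python
import random

def lemma_condition_holds(u, k):
    """Check the hypothesis of Lemma 5.6: the low-utility items together are
    worth at least k times the most-preferred item."""
    top = max(u)
    return sum(x for x in u if x <= top / k) >= k * top

random.seed(0)
k, m = 3, 1000
hits = sum(lemma_condition_holds([random.random() for _ in range(m)], k)
           for _ in range(50))
```

For m this large the low-utility items alone are worth far more than k, so the condition should hold in every trial.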

Conclusion
In this paper, we studied a natural version of the consensus halving problem where, in contrast to prior work, the items do not have a linear structure. We showed that computing a consensus halving with at most n cuts in our version can be done in polynomial time for additive utilities, but already becomes PPAD-hard for a class of monotonic utilities that are very close to additive. We also demonstrated several extensions and connections to the problems of consensus k-splitting and agreeable sets.
While our PPAD-hardness result serves as strong evidence that consensus halving for a set of items is computationally hard for non-additive utilities, it remains open whether the result can be strengthened to PPA-completeness-indeed, the membership of the problem in PPA follows from a reduction to consensus halving on a line, as explained in the introduction. Obtaining a PPA-hardness result will most likely require new ideas and perhaps even new insights into PPA, since all existing PPA-hardness results for consensus halving heavily rely on the linear structure. Of course, it is also possible that the problem is in fact PPAD-complete. In addition to consensus halving, settling the computational complexity of the agreeable sets problem for a non-constant number of agents with monotonic utilities would also be of interest.
Another important question that arises from our work is whether there always exists a consensus k-splitting with at most (k − 1)n cuts when items do not lie on a line and agents have monotonic utilities. If these utilities are also additive, the claim holds by Theorem 5.3; for consensus halving, the claim also holds by reducing to the linear version. However, for nonadditive utilities and unequal ratios, this reduction technique no longer works: even for k = 2, we may already need to make more than n cuts on the line (Theorem 5.4). From a broader point of view, our work illustrates the richness of consensus halving and related problems, which we believe deserve further study.

A Constant Number of Agents
In this section, we provide additional results for the case where there are a constant number of agents who are endowed with monotonic utilities.

A.1 Discrete Consensus Halving
We begin by introducing a discrete version of consensus halving, which allows us to focus solely on the agents' utilities for non-fractional sets of items.
Definition A.1. A discrete consensus halving is a partition of the items into three (non-fractional) sets of items (M_0, M_1, M_2) such that u_i(M_1) ≤ u_i(M_2 ∪ M_0) and u_i(M_2) ≤ u_i(M_1 ∪ M_0) for every agent i ∈ N.
Note that for any r, a consensus halving with r cuts yields a discrete consensus halving with |M_0| ≤ r simply by moving all cut items into M_0. Hence, a discrete consensus halving with |M_0| ≤ n is guaranteed to exist. The bound n is also tight here: when each agent values a single distinct item, all valued items must be included in M_0.
The following result shows that for constant n, a discrete consensus halving with |M 0 | ≤ n can be found efficiently. Similarly to Theorem 4.2, the proof involves arranging the items on a line and appealing to the existence of a consensus halving with at most n cuts on the line. As in Theorem 4.3, we assume a utility oracle model in which the algorithm can query the utility u i (M ′ ) for any i ∈ N and M ′ ⊆ M .
Theorem A.2. For any constant number of agents with monotonic utilities, there exists a polynomial-time algorithm that computes a discrete consensus halving with |M 0 | ≤ min{n, m} (assuming access to a utility oracle).
Proof. If n ≥ m, we can simply include all items in M 0 , so assume that n ≤ m. Arrange the items on a line in arbitrary order, and extend the utility functions of the agents to fractional sets of items in a continuous and monotonic fashion (see footnote 8). Consider a consensus halving with respect to the extended utilities that uses at most n cuts on the line, and move all cut items to M 0 . The resulting discrete consensus halving has the property that for any pair of consecutive items in M 0 , the block of items in between either all belong to M 1 or all belong to M 2 .
We perform a brute-force search over all possible partitions of the items into M_0, M_1, M_2 satisfying the above property. For t ∈ [n], there are O(m^t) sets of items that we can choose as M_0, and for each choice of M_0, there are at most 2^{t+1} ways to assign the resulting blocks of items to M_1 or M_2. Hence the brute-force search runs in time Σ_{t=1}^{n} O(2^{t+1} · m^t) = O(m^n), which is polynomial since n is constant.
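A sketch of this brute-force search, with utility oracles modeled as Python functions; since the formal condition of Definition A.1 is elided above, the check below assumes the natural condition u_i(M_1) ≤ u_i(M_2 ∪ M_0) and u_i(M_2) ≤ u_i(M_1 ∪ M_0):

```python
from itertools import combinations, product

def discrete_consensus_halving(m, oracles):
    """Brute-force search from the proof: choose M0 (at most n items), then
    assign each block of consecutive remaining items wholly to M1 or M2."""
    n = len(oracles)
    full = frozenset(range(m))
    def ok(M0, M1, M2):
        return all(u(M1) <= u(M2 | M0) and u(M2) <= u(M1 | M0) for u in oracles)
    for t in range(0, n + 1):
        for M0 in map(frozenset, combinations(range(m), t)):
            blocks, cur = [], []
            for j in range(m):          # maximal runs of items outside M0
                if j in M0:
                    if cur:
                        blocks.append(cur)
                        cur = []
                else:
                    cur.append(j)
            if cur:
                blocks.append(cur)
            for assign in product((1, 2), repeat=len(blocks)):
                M1 = frozenset(j for b, a in zip(blocks, assign) if a == 1
                               for j in b)
                M2 = full - M0 - M1
                if ok(M0, M1, M2):
                    return M0, M1, M2
    return None
```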
With two agents, the algorithm in Theorem A.2 runs in quadratic time. We next present a more sophisticated algorithm that uses only linear time for this special case. In fact, we will show a stronger statement based on a notion introduced by Kyropoulou et al. [2019].
Definition A.3. Let n = 2. A partition of the items into two (non-fractional) sets of items M_1 and M_2 is said to be Exact1 if for each pair i, k ∈ {1, 2}, either M_{3−k} = ∅ or there exists an item j ∈ M_{3−k} such that u_i(M_k) ≥ u_i(M_{3−k} \ {j}).
In words, Exact1 means that for each agent and each part of the partition, this part can be made at least as valuable as the other part in the agent's view by removing at most one item from the latter part. Given an Exact1 partition, we can easily obtain a discrete consensus halving as follows. From the partition, each agent i proposes (at most) one item to include in M_0. Specifically, if u_i(M_1) < u_i(M_2), then agent i proposes an item j such that u_i(M_1) ≥ u_i(M_2 \ {j}); the opposite case is analogous. (If u_i(M_1) = u_i(M_2), agent i does not need to propose any item.) It is clear that |M_0| ≤ 2, and one can check that (M_0, M_1, M_2) forms a discrete consensus halving.

Kyropoulou et al. [2019] showed that an Exact1 partition exists for two agents with "responsive utilities", a class that lies between additive and monotonic utilities. Here, we present an algorithm that computes an Exact1 partition for arbitrary monotonic utilities in linear time; to the best of our knowledge, even the existence of such a partition has not been established before.
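The proposal step described above is mechanical to implement; a sketch for utilities given as Python functions (it uses the plain Definition A.3 condition, scanning all items of the larger part rather than only end items):

```python
def exact1_to_discrete_halving(oracles, M1, M2):
    """Each agent proposes at most one item of the part she finds larger;
    moving the proposals into M0 yields a discrete consensus halving, |M0| <= 2."""
    M0 = set()
    for u in oracles:
        if u(M1) == u(M2):
            continue                          # no proposal needed
        small, large = (M1, M2) if u(M1) < u(M2) else (M2, M1)
        # Exact1 guarantees that such an item j exists
        j = next(j for j in large if u(small) >= u(large - {j}))
        M0.add(j)
    return frozenset(M0), M1 - M0, M2 - M0
```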
Our algorithm is based on carefully discretizing a procedure of Austin [1982], which computes a (non-discrete) consensus halving for two agents when the resource is represented by the circumference of a circle. Austin's procedure works by letting the first agent place two knives on the circle so that the resource is cut in half according to her valuation. The agent then moves both knives continuously clockwise, maintaining the invariant that, in her opinion, the knives divide the resource into two equal halves. The first agent stops moving the knives when the two parts are equal according to the valuation of the second agent, and the procedure returns the resulting partition. Since the second knife would reach the initial position of the first knife at the same time as the first knife reaches the initial position of the second knife, it follows from the intermediate value theorem that the procedure necessarily terminates.
The main challenge in applying this procedure to our discrete item setting is that it is not a priori clear how to implement moving both knives simultaneously-indeed, moving each of the knives over one item does not always maintain the invariant that the partition is Exact1. Nevertheless, as we will show, this invariant can be maintained by either moving both knives or moving one of the two knives, whichever option is appropriate at each stage. In fact, for this algorithm and proof, we will use a slightly stronger definition of Exact1 wherein the items lie on a circle, each part of the partition forms a contiguous block on the circle, and the item j in Definition A.3 is only allowed to be one of the items at the end of block M 3−k . 10 We say that a partition is Exact1 for agent i if the (stronger) Exact1 condition is fulfilled for agent i and both k ∈ {1, 2}.
Algorithm 1 (for two agents with monotonic utilities)
Step 1: Arrange the items on a circle in arbitrary order. Place the first knife between two arbitrary consecutive items on the circle, and the second knife between two items so that the partition induced by the two knives is Exact1 for the first agent.
Step 2: If the current partition is Exact1 for the second agent, return this partition.
Step 3: If one of the knives is at the initial position of the other knife, go to Step 4. Else, perform one of the following actions so that the new partition remains Exact1 for the first agent:
(a) move the first knife clockwise by one position;
(b) move the second knife clockwise by one position;
(c) move both knives clockwise by one position.
Step 4: Move the knife that is not at the initial position of the other knife clockwise by one position. Go back to Step 2.
Theorem A.4. For two agents with monotonic utilities, Algorithm 1 computes an Exact1 partition in time linear in m (assuming access to a utility oracle).
Proof. Observe that throughout the algorithm, the partition induced by the two knives is Exact1 for the first agent. Moreover, a partition is returned only if it is Exact1 for the second agent. Hence, if the algorithm terminates, the partition that it outputs is Exact1 for both agents. It therefore suffices to establish that the algorithm is well-defined and always terminates. For convenience, we will say that a bundle is envy-free up to one item (EF1) for a specific agent if the Exact1 condition (specifically, the stronger version described before the algorithm) is fulfilled for the agent when that bundle is taken as M k .
First, we need to show that in Step 1, there exists a position of the second knife such that the resulting partition is Exact1 for the first agent. It turns out that this already follows from Theorem 3.1 of Oh et al. [2019], so the first step can be implemented.
Next, the key part of our proof is to show that in Step 3, at least one of the three actions keeps the new partition Exact1 for the first agent. Assume that actions (a) and (b) do not; we claim that action (c) does. Call the two parts of the partition M_1 and M_2, and assume without loss of generality that moving the first knife clockwise would enlarge M_1. Suppose that the next item that the first knife would move over is j, and the next item that the second knife would move over is j′. (See Figure 2 for an illustration.) Let O_1 = (M_1 ∪ {j}) \ {j′} and O_2 = (M_2 ∪ {j′}) \ {j} be the two parts of the partition that results from action (c). Since action (a) does not keep the partition Exact1, we have that M_2 \ {j} is not EF1 for the first agent. Hence,

u_1(O_1) = u_1((M_1 ∪ {j}) \ {j′}) > u_1(M_2 \ {j}) = u_1(O_2 \ {j′}),

implying that O_1 is EF1 for the first agent. By symmetry, since action (b) does not keep the partition Exact1, we have that M_1 \ {j′} is not EF1 for the first agent. This implies that O_2 is EF1 for the agent. It follows that action (c) keeps the partition Exact1 for the first agent, as claimed.

Now, consider Step 4. Since each knife never moves by more than one position at a time, unless the algorithm terminates beforehand, this step will eventually be reached. Suppose that the first knife has arrived at the initial position of the second knife, but the second knife is not yet at the initial position of the first knife. The current partition is Exact1 for the first agent. Also, if the second knife moves clockwise all the way to the initial position of the first knife, we again have an Exact1 partition for the agent. Hence, monotonicity of the EF1 property implies that every partition in between is also Exact1 for the agent.
Finally, we show that the algorithm necessarily terminates. Suppose that this is not the case. Assume that in the initial partition with parts M_1 and M_2, the second agent believes that M_2 is not EF1. This means that u_2(M_2) < u_2(M_1 \ {j}) for every item j at the end of block M_1. In one iteration of Step 3 or 4, M_1 loses at most one end item to M_2; call this item j′ (if M_1 does not lose any item, take j′ to be an arbitrary end item of M_1), and call the respective parts of the partition after the iteration O_1 and O_2. Since u_2(M_1 \ {j′}) > u_2(M_2) = u_2((M_2 ∪ {j′}) \ {j′}), we have that O_1 is EF1 for the second agent. However, since the algorithm does not terminate here by assumption, O_2 is not EF1 for the second agent. The same argument tells us that in further iterations, the second bundle (i.e., M_2, O_2, and so on) is still not EF1 for the agent. However, the algorithm must reach a point where the first knife is at the initial position of the second knife and, at the same time, the second knife is also at the initial position of the first knife. At this point, the second bundle coincides with the initial first bundle, so it must be EF1 for the second agent. This yields the desired contradiction.
Regarding the running time, note that each knife moves clockwise around the circle only once, so the number of partitions considered by the algorithm is linear. For each partition, checking the relevant Exact1 condition can be done in constant time since it involves hypothetically removing only a constant number of items. Hence the algorithm runs in linear time, as claimed.

A.2 Continuous Extensions
The discrete consensus halving problem allows us to concern ourselves exclusively with the agents' utilities for non-fractional sets of items, which are represented by set functions. For an additive set function, there exists an obvious extension to fractional sets of items: the linear extension used in Section 2. This is, however, not the case for general monotonic functions. In this subsection, we address two extensions that have been studied in the literature, namely the Lovász extension and the multilinear extension. We refer to the lecture notes of Vondrák [2010] for further discussion of these extensions. Let x = (x 1 , . . . , x m ), and for each subset S ⊆ [m], denote by 1 S the vector of length m such that the ith component is 1 if i ∈ S, and 0 otherwise.
Proposition A.7. If a function $f : \{0,1\}^m \to \mathbb{R}$ is monotonic, then so is its Lovász extension $f^L$.
Proof. Let $f$ be a monotonic set function, and let $f^L$ be its Lovász extension. Let $x \in [0,1]^m$, and assume that $x_1 \le x_2 \le \cdots \le x_m$ (other orderings can be handled analogously). In this case, we have
$$f^L(x) = \sum_{i=1}^{m} x_i \cdot \left( f(\{i, \ldots, m\}) - f(\{i+1, \ldots, m\}) \right),$$
where we interpret $\{m+1, \ldots, m\}$ as $\emptyset$. It suffices to show that for any $i$, the value $f^L(x)$ does not decrease upon increasing $x_i$. This is obvious if $i = m$. For $1 \le i \le m - 1$, we only need to prove that $f^L(x)$ does not decrease when we increase $x_i$ until it reaches $x_{i+1}$; indeed, if we want to increase $x_i$ further, we can swap the roles of $x_i$ and $x_{i+1}$ and apply the same argument. When we increase $x_i$ within the range $[x_{i-1}, x_{i+1}]$, the only terms that change are $x_i \cdot f(\{i, \ldots, m\})$ and $-x_i \cdot f(\{i+1, \ldots, m\})$. The net change is thus proportional to $f(\{i, \ldots, m\}) - f(\{i+1, \ldots, m\})$, which is nonnegative due to the monotonicity of $f$. The conclusion follows.
When $n$ is constant, computing a consensus halving for a utility function given by the Lovász extension of a monotonic set function can be done efficiently.
Theorem A.8. For a constant number of agents with monotonic utilities, each given by the Lovász extension of a set function, there exists a polynomial-time algorithm that computes a consensus halving with at most $\min\{n, m\}$ cuts (assuming access to a utility oracle for the set function).
Proof. If $n \ge m$, we can simply divide every item in half, so assume that $n \le m$. Arrange the items on a line in arbitrary order. As in the proof of Theorem A.2, there exists a consensus halving that uses at most $n$ cuts on the line and is such that for any pair of consecutive cut items, the whole items in between all belong to $M_1$ or all belong to $M_2$. We perform a brute-force search over all partitions of the items into $(M_0, M_1, M_2)$ such that all cut items belong to $M_0$ and the above property is satisfied; as in Theorem A.2, this search takes polynomial time.
For each such partition, it remains to determine the ratios by which we should divide the items in $M_0$ between $M_1$ and $M_2$. Denote by $x_1, \ldots, x_r$ the fractions of the $r \le n$ items in $M_0$ that should go into $M_1$. We iterate over all possible orderings of $x_1, \ldots, x_r$; there are at most $n!$ orderings, which is polynomial since $n$ is constant. For each ordering, one can verify that the consensus-halving condition for each agent reduces to a linear equation in $x_1, \ldots, x_r$. Hence, to check the feasibility of a partition together with an ordering, we can run any efficient linear programming algorithm (with an arbitrary objective) on the ordering and consensus-halving constraints. The previous paragraph implies that at least one combination of partition and ordering results in a feasible linear program, which in turn gives rise to the desired consensus halving.
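The key fact behind the linear-programming step, namely that for a fixed ordering of the coordinates the Lovász extension is a linear function of $x_1, \ldots, x_r$ (so each consensus-halving condition becomes a linear constraint), can be spot-checked numerically. The set function below is an arbitrary monotonic example, not one from the paper:

```python
def lovasz(f, x):
    # Lovász extension via the sorted-coordinate formula:
    # f_L(x) = sum_i x_i * (f({i,...,m}) - f({i+1,...,m})), coords sorted increasingly.
    order = sorted(range(len(x)), key=lambda i: x[i])
    return sum(x[i] * (f(frozenset(order[k:])) - f(frozenset(order[k + 1:])))
               for k, i in enumerate(order))

f = lambda S: len(S) ** 2  # a monotonic set function on 3 items

# Two points whose coordinates are ordered the same way (x_0 < x_1 < x_2),
# hence lying in the same linearity region of the extension.
x = (0.1, 0.3, 0.5)
y = (0.2, 0.4, 0.9)
lam = 0.37
z = tuple(lam * a + (1 - lam) * b for a, b in zip(x, y))

# Linearity along the segment: f_L(z) = lam * f_L(x) + (1 - lam) * f_L(y).
lhs = lovasz(f, z)
rhs = lam * lovasz(f, x) + (1 - lam) * lovasz(f, y)
assert abs(lhs - rhs) < 1e-9
```

Across different orderings the linear coefficients change, which is why the algorithm enumerates the $n!$ orderings separately.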
A consequence of Theorem A.8 is that for the Lovász extension, if the set function is rational, then there exists a consensus halving with rational ratios. By contrast, for the multilinear extension, a consensus halving may necessarily involve splitting items in irrational ratios, even if the set function only takes on integer values.

Theorem A.9. There exists an instance with $n = 2$ and $m = 3$ in which each agent has a monotonic utility function given by the multilinear extension of a set function taking on integer values, but every consensus halving with at most two cuts involves splitting some items in irrational ratios.
Proof. Assume that $n = 2$ and $m = 3$. The utility functions of the agents are given in Table 1. Notice that the function of the second agent is the same as that of the first agent, except with the roles of items 2 and 3 reversed.
The only positive solution to (7) is $x_2 = \frac{\sqrt{1049} - 29}{8} \approx 0.4235$, meaning that every consensus halving involves splitting items 2 and 3 in irrational ratios.

Theorem A.9 implies that for the multilinear extension, computing a consensus halving exactly may not be possible if our computation model only allows representing rational numbers. As we can see, with two agents and two necessary cuts, the problem already requires solving a quadratic equation. For more agents, one can therefore expect to have to solve higher-degree polynomial equations; by the Abel-Ruffini theorem, almost all polynomials of degree at least five do not admit a solution in radicals. Hence, for this extension, finding an approximate consensus halving is likely the best that one could do even under general computational models.
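As a quick sanity check of the stated root (Table 1 and equation (7) themselves are not reproduced here): the closed form matches the quoted decimal value, and it is irrational since 1049 is not a perfect square.

```python
import math

# The stated positive root x_2 = (sqrt(1049) - 29) / 8.
x2 = (math.sqrt(1049) - 29) / 8
print(round(x2, 4))  # → 0.4235

# 1049 is not a perfect square, so sqrt(1049) -- and hence x2 -- is irrational.
assert math.isqrt(1049) ** 2 != 1049
```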

B Proof of Lemma 3.4
We reduce from the $\varepsilon$-Gcircuit problem, which is known to be PPAD-hard even for some constant $\varepsilon > 0$ [Rubinstein, 2018]. In this problem we are given a generalized circuit $(V, T)$ with nine gate types: $G_\zeta$, $G_{\times\zeta}$, $G_=$, $G_+$, $G_-$, $G_<$, $G_\vee$, $G_\wedge$, and $G_\neg$, where $\zeta \in [0,1]$ for the first two types (see [Rubinstein, 2018] for a formal definition of the gates). The last three gate types correspond to Boolean operations. As shown by Schuldenzucker and Seuken [2019, Corollary 1], these three gate types are actually not necessary, and the problem remains PPAD-hard for constant $\varepsilon$ even without them. Apart from the set of gates, the other difference from $\varepsilon$-simple-Gcircuit is that in $\varepsilon$-Gcircuit we want to assign a number in $[0,1]$ to each node (instead of $[-1,1]$).
Let $\varepsilon > 0$ be a constant such that the $\varepsilon$-Gcircuit problem without Boolean operation gates is PPAD-hard, and let $(V, T)$ be an instance of $\varepsilon$-Gcircuit without Boolean gates. We construct an instance $(V', T')$ of $\varepsilon'$-simple-Gcircuit, where $\varepsilon' > 0$ is a sufficiently small constant (which we pick later), such that any solution to the new instance yields a solution to the original instance. We let $V' = V \cup V_{\mathrm{aux}}$, where $V_{\mathrm{aux}}$ is a set of nodes that will be used for "intermediate" results when simulating the gates of the original problem with the restricted set of gates allowed in $\varepsilon'$-simple-Gcircuit. We will construct $T'$ such that it induces the original constraints of $T$ on the nodes $V \subset V'$. Furthermore, we will also ensure that in any solution $x : V' \to [-1,1]$, we have $x[v] \in [0,1]$ for all $v \in V \subset V'$. Thus, restricting $x$ to $V$ will immediately yield a solution to the original $\varepsilon$-Gcircuit instance.
Recall that we only have three types of gates at our disposal: $G_+$, $G_{\times -|\zeta|}$ for $\zeta \in (0,1]$, and $G_1$. We begin by constructing some useful gadgets that simulate more operations on the same domain $[-1,1]$. Throughout, we denote the input nodes by $u_1, u_2$ (if applicable) and the output node by $v$.

$G_{\times\zeta}$: multiplication by $\zeta \in [-1,1]$. This gadget ensures that $x[v] = \zeta \cdot x[u_1] \pm 2\varepsilon'$. If $\zeta < 0$, use a $G_{\times -|\zeta|}$ gate with input $u_1$ and output $v$. If $\zeta > 0$, use a $G_{\times -|\zeta|}$ gate with input $u_1$ and output $w \in V_{\mathrm{aux}}$, and then a $G_{\times -|1|}$ gate with input $w$ and output $v$, which ensures that $x[v] = \zeta \cdot x[u_1] \pm 2\varepsilon'$. Finally, if $\zeta = 0$, use a $G_{\times -|1|}$ gate with input $u_1$ and output $w \in V_{\mathrm{aux}}$, and then a $G_+$ gate with inputs $u_1, w$ and output $v$; this ensures that $x[v] = 0 \pm 2\varepsilon'$.

$G_\zeta$: constant $\zeta \in [-1,1]$. This gadget ensures that $x[v] = \zeta \pm 3\varepsilon'$. We use a $G_1$ gate with output $w \in V_{\mathrm{aux}}$, and then a $G_{\times\zeta}$ gadget with input $w$ and output $v$, which yields the desired result.

$G_{\times 2}$: multiplication by $2$. This gadget ensures that $x[v] = T_{[-1,1]}(2x[u_1]) \pm 3\varepsilon'$. We use a $G_{\times 1}$ gadget with input $u_1$ and output $w \in V_{\mathrm{aux}}$, and then a $G_+$ gate with inputs $u_1, w$ and output $v$, which yields the desired result.
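Under the idealized assumption of error-free gates (i.e., $\varepsilon' = 0$), the three gadget constructions above can be simulated directly, with clamping to $[-1,1]$ playing the role of the truncation $T_{[-1,1]}$. The function names below are illustrative and simply mirror the gate names in the text:

```python
def T(t, lo=-1.0, hi=1.0):
    """Truncation T_[lo,hi]."""
    return max(lo, min(hi, t))

# Idealized gates of simple-Gcircuit (zero error).
def g_plus(a, b):            # G_+
    return T(a + b)

def g_times_neg(zeta, a):    # G_{x -|zeta|}, zeta in (0, 1]
    return T(-abs(zeta) * a)

def g_one():                 # G_1
    return 1.0

# G_{x zeta} gadget: multiplication by zeta in [-1, 1].
def times(zeta, a):
    if zeta < 0:
        return g_times_neg(zeta, a)
    if zeta > 0:
        w = g_times_neg(zeta, a)      # w = -zeta * a
        return g_times_neg(1.0, w)    # v = zeta * a
    w = g_times_neg(1.0, a)           # w = -a
    return g_plus(a, w)               # v = 0

# G_zeta gadget: constant zeta in [-1, 1].
def const(zeta):
    return times(zeta, g_one())

# G_{x2} gadget: v = T_[-1,1](2 * a).
def times2(a):
    return g_plus(a, times(1.0, a))

print(times(0.5, -0.8), const(-0.25), times2(0.7))  # → -0.4 -0.25 1.0
```

With noisy gates, each gate application contributes up to $\varepsilon'$ of error, which is how the stated bounds of $2\varepsilon'$ and $3\varepsilon'$ arise.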
Before we show how to construct gadgets that simulate the gates of $\varepsilon$-Gcircuit, we need a way to ensure that for $v \in V \subset V'$, we have $x[v] \in [0,1]$. To achieve this we will make extensive use of the following gadget.

$G_{[0,1]}$: truncation into $[0,1]$. Here we use the fact that for any $t \in [-1,1]$, it holds that $T_{[0,1]}(t) = T_{[-1,1]}(t + (-1)) + 1$. First, we use a $G_{-1}$ gadget with output $w_1 \in V_{\mathrm{aux}}$, and then a $G_+$ gate with inputs $u_1, w_1$ and output $w_2 \in V_{\mathrm{aux}}$. Next, we use a $G_1$ gate with output $w_3 \in V_{\mathrm{aux}}$, and then a $G_+$ gate with inputs $w_2, w_3$ and output $w_4 \in V_{\mathrm{aux}}$. Since the $G_{-1}$ gadget has error at most $3\varepsilon'$ and the $G_+$ and $G_1$ gates have error at most $\varepsilon'$, we obtain that $x[w_4] = T_{[0,1]}(x[u_1]) \pm 6\varepsilon'$. Furthermore, it holds that $x[w_4] \ge -2\varepsilon'$, since $x[w_4] = T_{[-1,1]}(x[w_2] + x[w_3]) \pm \varepsilon'$, $x[w_2] \in [-1,1]$, and $x[w_3] \ge 1 - \varepsilon'$. Finally, we also use a $G_{6\varepsilon'}$ gadget with output $w_5 \in V_{\mathrm{aux}}$, and a $G_+$ gate with inputs $w_4, w_5$ and output $v$. This shifts the value up by $6\varepsilon'$ and introduces an additional error of at most $4\varepsilon'$, thus ensuring that $x[v] = T_{[0,1]}(x[u_1]) + 6\varepsilon' \pm 10\varepsilon'$ and, in particular, $x[v] \ge 0$.

We are now ready to simulate the constraints $T$ of the original instance on the nodes $V \subset V'$. First of all, for any node $v \in V$ that does not appear as the output of any gate in $T$, we ensure that $x[v] \in [0,1]$ as follows: create a node $w \in V_{\mathrm{aux}}$ and use a $G_{[0,1]}$ gadget with input $w$ and output $v$. Note that we do not care about the error in this case, since we only want to ensure that $x[v] \in [0,1]$. For all $v \in V$ that appear as the output of some gate in $T$, the gadget that outputs into $v$ will ensure that $x[v] \in [0,1]$.
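The identity $T_{[0,1]}(t) = T_{[-1,1]}(t + (-1)) + 1$ for $t \in [-1,1]$, on which the $G_{[0,1]}$ gadget is based, can be verified exactly with rational arithmetic:

```python
from fractions import Fraction

def T(t, lo, hi):
    # Truncation of t into the interval [lo, hi].
    return max(lo, min(hi, t))

# For every t in [-1, 1]: T_[0,1](t) = T_[-1,1](t + (-1)) + 1.
# (The gadget realizes the right-hand side with G_{-1}, G_+, and G_1 gates.)
for k in range(-100, 101):
    t = Fraction(k, 100)
    assert T(t, Fraction(0), Fraction(1)) == T(t + Fraction(-1), Fraction(-1), Fraction(1)) + 1
```

Exact rationals avoid the floating-point rounding that `(t - 1.0) + 1.0` can introduce near zero, so the loop checks the identity exactly.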
For the analysis, we first consider the case $x[u_1] < x[u_2] - \varepsilon$. Then, it holds that $x[w_0] \ge \varepsilon - 2\varepsilon' \ge \varepsilon - 3\varepsilon'$. First, let us show by contradiction that there must exist $i \in [k]$ such that

We can now finish the reduction. We set $\varepsilon' = \varepsilon/25$. This ensures that all the assumptions we have made about $\varepsilon'$ hold, and that all the gadgets that simulate the gates in $T$ have error at most $\varepsilon$.