Smooth integers and de Bruijn's approximation $\Lambda$

This paper is concerned with the relationship between $y$-smooth integers and de Bruijn's approximation $\Lambda(x,y)$. Under the Riemann hypothesis, Saias proved that the count of $y$-smooth integers up to $x$, $\Psi(x,y)$, is asymptotic to $\Lambda(x,y)$ when $y \ge (\log x)^{2+\varepsilon}$. We extend the range to $y \ge (\log x)^{3/2+\varepsilon}$ by introducing a correction factor that takes into account the contributions of zeta zeros and prime powers. We use this correction term to uncover a lower-order term in the asymptotics of $\Psi(x,y)/\Lambda(x,y)$. The term relates to the error term in the prime number theorem, and implies that large positive (resp. negative) values of $\sum_{n \le y} \Lambda(n)-y$ lead to large positive (resp. negative) values of $\Psi(x,y)-\Lambda(x,y)$, and vice versa. Under the Linear Independence hypothesis, we exhibit a Chebyshev-type bias in $\Psi(x,y)-\Lambda(x,y)$.


Main results
Let ψ(y) = Σ_{n≤y} Λ(n). The following theorem gives an asymptotic formula for Ψ(x, y) for y smaller than (log x)^2.
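For orientation, we recall the standard notation of the subject, which we take to coincide with the paper's: u := log x/log y, and ρ(u) denotes, as usual, the Dickman function, defined by
\[
\rho(u) = 1 \quad (0 \le u \le 1), \qquad u\rho'(u) = -\rho(u-1) \quad (u > 1),
\]
so that, by classical results of Dickman and Hildebrand, Ψ(x, y) ∼ xρ(u) as x → ∞ whenever y ≥ exp((log log x)^{5/3+ε}).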
Theorems 1.1 and 1.2 and their proofs have their origin in our work in the polynomial setting [10], where Ψ(x, y) corresponds to the number of m-smooth polynomials of degree n over a finite field, while Λ(x, y) is analogous to the number of m-smooth permutations of S_n (multiplied by q^n/n!). In that setting, the analogue of G_1(s, y) is identically 1 (the relevant zeta function has no zeros), which makes the analysis unconditional.

Applications: sign changes and biases
From Theorem 1.1 we deduce in §2.2 the following corollary, which holds for T ≥ 4, where the sum is over the non-trivial zeros of ζ.
Corollary 1.3 implies that large positive (resp. negative) values of ψ(y) − y lead to large positive (resp. negative) values of Ψ(x, y) − Λ(x, y), and vice versa. Large and small values of ψ(y) − y were exhibited by Littlewood [15, Thm. 15.11]. Note that Corollary 1.3 sharpens (1.4) if y ≤ x^{1−ε}.

Let π(x) be the count of primes up to x and Li(x) be the logarithmic integral. It is known that π(x) − Li(x) is biased towards negative values in the following sense. Assuming RH and the Linear Independence hypothesis (LI) for the zeros of ζ, Rubinstein and Sarnak [16] showed that the set {x ≥ 2 : π(x) < Li(x)} has logarithmic density ≈ 0.999997. This is an Archimedean analogue of the classical Chebyshev's bias on primes in arithmetic progressions. We use Corollary 1.3 to exhibit a similar bias for smooth integers. Let us fix the value of β = 1 − ξ(u)/log y to be β = β_0, where β_0 ∈ (1/2, 1). This amounts to restricting x to be a function x = x(y) of y defined by (1.8). In particular, y = (log x)^{1/(1−β_0)+o(1)} (see the computation below). Then Corollary 1.3 yields (1.9). Applying the formalism of Akbary, Ng and Shahabi [1] to the right-hand side of (1.9), we immediately deduce

Corollary 1.4. Assume RH and LI for ζ. Fix β_0 ∈ (1/2, 1) and let x be a function of y defined as in (1.8). Then the set {y ≥ 2 : Ψ(x(y), y) > Λ(x(y), y)} has logarithmic density greater than 1/2, and the left-hand side of (1.9) has a limiting distribution in the logarithmic sense.
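For orientation regarding (1.8): recall that ξ(u) is the standard saddle-point parameter, i.e. (in the usual normalization, which we assume here) the nonzero real solution of e^{ξ(u)} = 1 + uξ(u). Fixing β = β_0 means ξ(u) = (1 − β_0) log y, and since u = (e^{ξ(u)} − 1)/ξ(u), this forces
\[
\log x = u\log y = \frac{e^{\xi(u)}-1}{\xi(u)}\,\log y = \frac{y^{1-\beta_0}-1}{1-\beta_0},
\]
which is consistent with the relation y = (log x)^{1/(1−β_0)+o(1)} stated above.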
In the same way that Chebyshev's bias for primes relates to the contribution of prime squares, this is also the case for smooth integers. Writing G as G_1 G_2 as in §1.1, G_2 captures the contribution of proper prime powers. When β_0 ∈ (1/2, 1), the only significant term in G_2(β_0, y) is k = 2, which corresponds to squares of primes. The squares lead to the term y^{1/2}/(2β_0 − 1) in (1.9), which creates the bias.
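For comparison with the classical setting, recall that the prime squares account for roughly y^{1/2} of ψ(y): with θ(y) = Σ_{p ≤ y} log p one has
\[
\psi(y) = \sum_{k \ge 1} \theta\bigl(y^{1/k}\bigr) = \theta(y) + y^{1/2}\bigl(1 + o(1)\bigr) + O\bigl(y^{1/3}\log y\bigr),
\]
and it is this square-of-primes contribution that drives the classical bias of π(x) − Li(x); the term y^{1/2}/(2β_0 − 1) in (1.9) plays the analogous role here.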
Remark 1. Consider the arithmetic function α_y(n), defined implicitly; this function is supported on y-smooth numbers and coincides with the indicator of y-smooth numbers on squarefree integers. Working with the summatory function of α_y instead of Ψ(x, y), the bias discussed above disappears. This is because, modifying the proof of Theorem 1.1, one finds that the bias-causing factor G_2(β, y) does not arise. This is analogous to how the indicator function of primes is biased, while Λ(n)/log n is not.
Remark 2. It would be interesting to formulate and prove variants of Corollaries 1.3 and 1.4 in the range y ≤ (log x)^{1−ε}. In this range, an accurate main term for Ψ(x, y) was established in [6].
1.4 Strategy behind Theorems 1.1 and 1.2
We write Ψ(x, y) as a Perron integral, at least for non-integer x:

Ψ(x, y) = (1/(2πi)) ∫_{β−i∞}^{β+i∞} ζ(s, y) x^s/s ds.

After subtracting a suitable approximation from this expression, we can bound the resulting integral by using pointwise bounds for the integrand. Instead of subtracting Λ(x, y), we subtract Λ(x, y) times G(β, y), which leads to (1.13). We want to bound the integral in (1.13). The proof of Theorem 1.1 considers separately the range (1.14) and its complement. When u satisfies (1.14), one needs only small values of ℑs in (1.13) (namely |ℑs| ≤ 1/log y) to estimate the integral with arbitrary power saving in y. This is an unconditional observation, established in Proposition 3.1. However, for smaller u, one needs |ℑs| going up to a power of y if one desires power saving in y, which makes the proof more involved.
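The simplification of the integrand carried out in §4 rests on the classical identity for the Laplace transform of the Dickman function, which we record for orientation (the function F appearing in §§3–4 is assumed to be built from it):
\[
\widehat{\rho}(s) := \int_0^{\infty} \rho(v)e^{-vs}\,dv = \exp\bigl(\gamma + I(-s)\bigr), \qquad I(s) := \int_0^{s}\frac{e^{t}-1}{t}\,dt.
\]
With s = β + it and ξ = ξ(u) one has −(s − 1) log y = ξ − it log y, which is how the quantity I(ξ − it log y) encountered in §4 arises.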
In our proofs, RH is only invoked at the very end, to estimate G_1 and its derivatives. For instance, in the range where (1.14) and y ≥ (log x)^{2+ε} hold, we prove the unconditional estimate (4.6); see (4.7) for a similar estimate in the range u ≤ (log y)(log log y)^3. In particular, our proofs are easily modified to recover (1.3).

Conventions
The letters C, c denote absolute positive constants that may change between different occurrences. We denote by C_ε, c_ε positive constants depending only on ε, which may also change between different occurrences. The notation A ≪ B means |A| ≤ CB for some absolute constant C; A ≍ B means C_1 B ≤ A ≤ C_2 B for some absolute positive constants C_i; and A ≍_ε B means that the C_i may depend on ε. The letter ρ will always indicate a non-trivial zero of ζ. When we differentiate a bivariate function, we always do so with respect to the first variable. We set L(y) := exp((log y)^{3/5}(log log y)^{−1/5}).
We turn to G_2. By the non-negativity of the coefficients of log G_2, for i ≥ 0 and ℜs > 0 one may bound (log G_2)^{(i)}(s, y) pointwise by its value on the real axis. Corollary 2.9 and Lemma 2.10, applied with i = 0, imply Lemma 2.11. Corollary 1.3 then follows from Theorem 1.1 by simplifying G(β, y) using Lemma 2.11 and (2.1).
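The bound just mentioned is an instance of a general inequality: if log G_2(s, y) = Σ_n a_n(y) n^{−s} with a_n(y) ≥ 0 (which is what the non-negativity of the coefficients asserts), then for i ≥ 0 and σ = ℜs > 0,
\[
\bigl|(\log G_2)^{(i)}(s,y)\bigr| = \Bigl|\sum_{n} a_n(y)(-\log n)^i n^{-s}\Bigr| \le \sum_{n} a_n(y)(\log n)^i n^{-\sigma} = (-1)^i(\log G_2)^{(i)}(\sigma,y).
\]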

Truncation estimates for Ψ and Λ
The purpose of this section is to prove the following two propositions.
Proof. By [19, Thm. 4.11], for every r > 0 the approximation stated there holds as long as s ≠ 1, ℜs ≥ ε and |ℑs| ≤ 2r. Suppose s = σ + it with |t| ≥ 1. We apply this estimate with r = |t|, obtaining (3.3). We now plug (3.3) into the left-hand side of (3.2). The contribution of the error term to the integral is acceptable. Next we consider the contribution of n^{−s} 1_{n ≤ |t|} in (3.3) to the left-hand side of (3.2); the total contribution of the n-sum in (3.3) to the left-hand side of (3.2) is then (3.5). It remains to estimate (3.5), which we do according to the size of n. The contribution of n ≥ 2x is acceptable. The contribution of n ∈ (x/2, 2x) can be bounded by considering separately the n closest to x, and partitioning the rest of the n dyadically according to a parameter k ≥ 0. Finally, the contribution of the remaining n (those with n ≤ x/2) is acceptable as well.
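For the reader's convenience, the estimate invoked from [19, Thm. 4.11] has the following standard shape (stated in a form consistent with the conditions above, the implied constant depending only on ε): for s = σ + it with σ ≥ ε, s ≠ 1 and |t| ≤ 2r,
\[
\zeta(s) = \sum_{n \le r} n^{-s} - \frac{r^{1-s}}{1-s} + O\bigl(r^{-\sigma}\bigr).
\]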
Corollary 3.4. Fix ε ∈ (0, 1). Suppose x ≥ y ≥ C_ε. For σ ∈ [ε, 1] and x ≥ T ≥ max{2, y^{1−σ}/log y} we have (3.6).

The first integral on the right-hand side of (3.6) is estimated in Lemma 3.3. To bound the second integral we apply the second moment estimate for ζ given in Lemma 2.5. We first suppose that σ ≥ 1/2. Using Cauchy–Schwarz, we bound the second integral on the right-hand side of (3.6); multiplying the resulting bound by the prefactor x^σ y^{1−σ}/log y, we see that it is acceptable. If ε ≤ σ ≤ 1/2 we use Lemma 2.3 instead, and again obtain that the second integral on the right-hand side of (3.6) is acceptably small, concluding the proof.
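Schematically, the Cauchy–Schwarz step takes the form (with T_0 a placeholder for the actual lower limit of integration in (3.6)):
\[
\int_{T_0}^{T} \frac{|\zeta(\sigma+it)|}{t}\,dt \;\le\; \Bigl(\int_{T_0}^{T} \frac{|\zeta(\sigma+it)|^2}{t}\,dt\Bigr)^{1/2}\Bigl(\int_{T_0}^{T}\frac{dt}{t}\Bigr)^{1/2},
\]
after which the first factor is controlled by the second moment bound of Lemma 2.5 on dyadic subintervals, and the second factor contributes only a power of a logarithm.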
Let α = α(x, y) be the saddle point associated with y-smooth numbers up to x [13], that is, the minimizer of the convex function s → x^s ζ(s, y) (s > 0).

Lemma 3.5. For σ ∈ (0, 1], x ≥ y ≥ C and T ≥ 2 we have (3.7). Our proof makes more precise a similar estimate appearing in Saias [17, p. 98], which does not allow general y and T but contains the main ideas.

Proof. The truncated Perron formula [14, p. 435] bounds the error in (3.7) by a sum over n, which we estimate by splitting according to the size of |log(x/n)|.
The contribution of the terms with |log(x/n)| ≥ 1 is acceptable.
We now study the terms with |log(x/n)| < 1. These contribute (3.8) in total. The subset of terms with |log(x/n)| ≤ 1/T contributes to (3.8) the quantity in (3.9). The contribution of the rest of the terms to (3.8), namely those with 1/T < |log(x/n)| < 1, can be dyadically dissected into the terms appearing in (3.10), where log_2 is the base-2 logarithm. (We interpret Ψ(a, y) for negative a as equal to 0.) Note that the sum in (3.10) dominates the right-hand side of (3.9). We shall make use of an estimate of Hildebrand, where in the second inequality we replaced Ψ(Cx, y) with Ψ(x, y) using [13, Thm. 3]. To conclude, we recall that Theorem 2.4 of [5] says that Ψ(x/d, y) ≪ Ψ(x, y)/d^α holds for x ≥ y ≥ 2 and 1 ≤ d ≤ x. We apply this inequality with d = 2^k and obtain the required bound, as needed.
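We also recall, for orientation, the standard characterization of the saddle point α(x, y) used above: differentiating s → s log x + log ζ(s, y), with ζ(s, y) = Π_{p ≤ y}(1 − p^{−s})^{−1}, and setting the derivative equal to zero shows that α is the unique positive solution of
\[
\sum_{p \le y} \frac{\log p}{p^{\alpha}-1} = \log x.
\]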
We now treat the range 1/log y ≤ |ℑs| ≤ T. By the definition of F, we obtain (3.13). First suppose t ≥ π/log y. By the second case of Lemma 2.6, this range contributes an acceptable amount, where we used the functional equation if β < 1/2 (Lemma 2.3). The contribution of 1/log y ≤ t ≤ π/log y to the right-hand side of (3.13) is treated using the first part of Lemma 2.6, and we find that it is ≪ x exp(I(ξ) − uξ) log y. By our choice of T and our assumptions on u and y, this can be absorbed in the error term of (3.1).
Proof. Our strategy is to establish Ψ(x, y) = Λ(x, y)G(β, y)(1 + O(E_1 + E_2 + E_3)). The theorem will then follow by rearranging, once we recall that xρ(u) ≍_ε Λ(x, y). From Proposition 3.1 we obtain (4.1), whose error term explains E_3. Let t_0 be as in the statement of the proposition. We upper bound the contribution of t_0 ≤ |ℑs| ≤ 1/log y to the integral on the right-hand side of (4.1). By the triangle inequality and the definition of F, this contribution is bounded by an integral which we now estimate. Since −e^{−v²/2} is an antiderivative of ve^{−v²/2}, the first part of Lemma 2.6 shows that t_0 ≤ |ℑs| ≤ 1/log y contributes in total an acceptable amount, where we used Lemma 2.2 to simplify. Once we divide this by Λ(x, y)G(β, y) ≍_ε xρ(u)G(β, y), we obtain the error term E_2. It remains to study the contribution of |ℑs| ≤ t_0 to the integral on the right-hand side of (4.1), which will yield E_1. We Taylor-expand the integrand at s = β. Write s = β + it with |t| ≤ t_0. We first simplify the integrand using the definition of F: it equals x^β e^{γ+I(ξ)} exp(I(ξ − it log y) − I(ξ) + it log x).
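To carry out the expansion, write h(t) := I(ξ − it log y) − I(ξ) + it log x. Recalling that e^{ξ} = 1 + uξ and I'(s) = (e^s − 1)/s (with ξ and I as recalled in §1.4, assumed to coincide with the paper's normalization), one computes
\[
h'(0) = -i\log y\cdot\frac{e^{\xi}-1}{\xi} + i\log x = -iu\log y + i\log x = 0, \qquad h''(0) = -(\log y)^2 I''(\xi),
\]
so the linear term cancels and the integrand exhibits Gaussian-type decay in t near t = 0, in line with the bounds of Lemma 2.6 used above.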

Proof of Theorem 1.1: small u
Here we prove Theorem 1.1 for u in the range (4.3). In this range, β = 1 + o(1) and Ψ(x, y) = x^{1+o(1)}. Moreover, log G(β, y) = O(1) unconditionally, by Corollary 2.9 and Lemma 2.10. The hardest range of the proof will be u ≍ 1. Before proceeding with the actual proof, note that from Proposition 4.2 and the triangle inequality it follows that Ψ(x, y) = Λ(x, y)G(β, y) up to an acceptable error. We also record bounds valid for i = 0, 1, 2 and t ∈ ℝ, where we simplified y^{−β} using (2.1). From now on we assume RH. Corollary 2.9 then implies the corresponding estimates for i = 0, 1, 2 when |t| ≤ 1. As in the medium-u case, one can bound E_1 by an acceptable quantity using our estimates for (log G_1)^{(i)} and (log G_2)^{(i)}. Recall also the estimate which is a consequence of (2.5) and (4.8).
To handle E_4 it remains to prove (4.11). Here we cannot simply apply the triangle inequality and move the absolute value inside the integral. Indeed, if we use the pointwise bound (4.10), along with our bounds for ζ (Lemmas 2.4 and 2.5), we get a bound which falls short by a factor of (log y)^3. We shall overcome this by several integrations by parts, as we now describe.
To deal with the contribution of log G(β, y) to (4.11) we use (4.10) with t = 0, along with a bound which follows by integration by parts, where we replace x^{it} by its antiderivative x^{it}/(i log x).
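Schematically, if g denotes one of the smooth factors in the integrand and [a, b] the relevant range of t, a single integration by parts reads
\[
\int_{a}^{b} x^{it}g(t)\,dt = \Bigl[\frac{x^{it}}{i\log x}\,g(t)\Bigr]_{a}^{b} - \frac{1}{i\log x}\int_{a}^{b} x^{it}g'(t)\,dt,
\]
so each integration by parts trades one derivative of g for a factor 1/log x ≤ 1/log y, and repeating this recovers the factor (log y)^3 lost in the naive bound.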
To deal with the contribution of log G(β + it, y) to (4.11) we write it as log G_1(β + it, y) + log G_2(β + it, y) and obtain two integrals, which we bound separately.

Treatment of log G_1
Recall we assume y ≤ x^{1−ε}. We want to show (4.12).
We divide and multiply the integrand by y^{it}, so that the left-hand side of (4.12) becomes a prefactor 1/log x times an integral over t_1 ≤ |t| ≤ y. The derivatives of log G appearing on the right-hand side of (4.17) are then estimated; dividing by (log x)(log y) gives a bound for the second term in (4.13).

A Review of Λ(x, y)
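We first recall de Bruijn's approximation in one standard normalization (cf. Saias [17]); for x ∉ ℤ one may take
\[
\Lambda(x,y) = x\int_{0^-}^{\infty} \rho(u-v)\,d\!\left(\frac{\lfloor y^{v}\rfloor}{y^{v}}\right).
\]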
A.1 λ_y and its Laplace transform
Saias [17, Lem. 4(iii)] proved that λ_y(v) ≪ ρ(v)v^3 + e^{2v} y^{−v} holds for y ≥ 2, v ≥ 1. The following is a weaker version of his result which suffices for us.

This yields (3.14), where we used Lemma 2.2 in the second inequality. Recall the second moment estimate for ζ given in Lemma 2.5. It shows that the right-hand side of (3.14) is bounded acceptably.