Newton steps (higher than the state of the art by a factor √M). But this algorithm has a good binary property: it keeps the binary size of intermediate numbers bounded by Õ(L), and it offers an explicit strategy for rounding all intermediate numbers (see Table 1).
APPENDIX
Equivalence of Linear Programming and
Linear Feasibility
This paper provides an algorithm algo_0 which, on an input A such that ∃x, Ax ≥ 1, returns v with AA^T v > 0 (undefined behaviour otherwise; v is positive, but this does not matter here). Trivially, one can thus form algo_1, which returns x such that Ax > 0 on input A assuming such an x exists, by returning A^T algo_0(A) (undefined behaviour otherwise): since any x with Ax > 0 can be rescaled so that Ax ≥ 1, algo_0 applies, and x = A^T v satisfies Ax = AA^T v > 0.
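As a minimal NumPy sketch of this reduction (algo_0 is treated here as a black box standing for the paper's algorithm; everything else is illustrative):

import numpy as np

def algo_0(A):
    # Black box standing for the paper's algorithm: given A with
    # ∃x, Ax ≥ 1, it returns v such that A @ A.T @ v > 0 elementwise.
    raise NotImplementedError

def algo_1(A):
    # If ∃x, Ax > 0, rescaling x gives Ax ≥ 1, so algo_0 applies;
    # x = A.T @ v then satisfies A @ x = A @ A.T @ v > 0.
    return A.T @ algo_0(A)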
• Thanks to algo_1, one can form algo_2(A, b), which returns x such that Ax > b assuming such an x exists (undefined behaviour otherwise). Indeed, consider any A, b such that ∃x, Ax > b: finding such an x is equivalent to finding a pair (x, t) such that Ax − t×b > 0 and t > 0, because x/t is then a solution of the original problem. Formally, let A_1 be the matrix A with −b as an additional column and (0 ⋯ 0 1) as an additional row. Thus, one can get (x_1, t_1) by computing algo_1(A_1) and return x_1/t_1 as the output of algo_2(A, b); a sketch follows this item.
Importantly, only a constant number of variables/constraints is added, and the binary size is not increased. So the complexity of algo_2(A, b) is the same as that of algo_1(A_1).
• Thanks to algo_2, one can form algo_3(A, b), which returns x such that Ax ≥ b assuming such an x exists. Indeed, if ∃x, Ax ≥ b, then a fortiori there exist x, t such that Ax + t1 > b and 0 < t < 1/Ω(A) (with Ω(A) the maximal subdeterminant of A). So one can call algo_2 on (A_2, b_2), with A_2 being A plus an all-ones additional column plus two rows (0 ⋯ 0 1) and (0 ⋯ 0 −Ω(A)), and b_2 being b plus the two entries 0 and −1 (encoding t > 0 and Ω(A)×t < 1); a sketch of this construction follows this paragraph. Thus, algo_2(A_2, b_2) = (x_2, t_2).
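A sketch of this construction; the exact placement of the two extra rows and right-hand-side entries is an assumption consistent with the constraints above, and build_A2_b2 is a name introduced here:

import numpy as np

def build_A2_b2(A, b, Omega):
    # Encode, over variables (x, t): A x + t*1 > b, t > 0, Omega * t < 1.
    m, n = A.shape
    A2 = np.block([[A, np.ones((m, 1))],
                   [np.zeros((2, n)), np.array([[1.0], [-Omega]])]])
    b2 = np.concatenate([b, [0.0, -1.0]])
    return A2, b2

# Usage: z = algo_2(*build_A2_b2(A, b, Omega)); x2, t2 = z[:n], z[n].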
Now, one can consider a greedy improvement of min_{x,t : Ax+t1≥b, t≥0} t, initialized from (x_2, t_2). Such a greedy improvement can be performed by projecting (x, t) onto {(x, t) : Ax + t1 ≥ b} while minimizing t. One greedy step can simply be done by looking for (χ, τ) such that A_S χ + τ1_S = 0 and τ = −1, with S the saturated rows in Ax + t1 ≥ b. If no such (χ, τ) exists, the greedy improvement has terminated; otherwise, one performs (x, t) ← (x + µχ, t + µτ), with µ the largest step such that Ax + t1 ≥ b and t ≥ 0 still hold. There will be no more than M such greedy purification steps, because one row enters the saturated set at each step; a sketch of one step follows.
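A floating-point sketch of one purification step (the actual algorithm would represent and round numbers as discussed in the paper; purify_step and tol are names introduced here):

import numpy as np

def purify_step(A, b, x, t, tol=1e-9):
    # One greedy step of min t subject to A x + t*1 ≥ b, t ≥ 0.
    r = A @ x + t - b                      # slacks, all ≥ 0 by feasibility
    S = r <= tol                           # saturated rows
    if not S.any():
        chi = np.zeros(A.shape[1])         # no saturated row: just decrease t
    else:
        # Direction (χ, τ) with τ = -1 and A_S χ + τ 1_S = 0, i.e. A_S χ = 1_S.
        chi, *_ = np.linalg.lstsq(A[S], np.ones(int(S.sum())), rcond=None)
        if not np.allclose(A[S] @ chi, 1.0):
            return x, t, True              # no such direction: terminated
    d = A @ chi - 1.0                      # per-unit change of each slack
    neg = d < -tol
    mu = min(t, (r[neg] / -d[neg]).min()) if neg.any() else t
    return x + mu * chi, t - mu, False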
When this greedy process terminates, it leads to (x̂, t̂) with Ax̂ + t̂1 ≥ b and 0 ≤ t̂ ≤ t_2 < 1/Ω(A), but (x̂, t̂) is a vertex of the polyhedron {(x, t) : Ax + t1 ≥ b, t ≥ 0}. So Cramer's rule applies, and so t̂ =