Authors:
Pritam Pallab
and
Abhijit Das
Affiliation:
Indian Institute of Technology, Kharagpur, India
Keyword(s):
General Number Field Sieve Method, RSA Cryptanalysis, Line Sieving, Lattice Sieving, Block Sieving, Bucket Sieving, Single Instruction Multiple Data (SIMD), Multi-core, Multi-thread, AVX-512, Skylake.
Abstract:
The fastest known general-purpose technique for factoring integers is the General Number Field Sieve Method (GNFSM), whose most time-consuming part is the sieving stage. For both line sieving and lattice sieving, two cache-friendly extensions used in practical implementations are block sieving and bucket sieving. The AVX-512 instruction set in modern Intel CPUs provides fast vectorization intrinsics. In this paper, we report our AVX-512-based cache-friendly parallelization of block and bucket sieving for the GNFSM. We use vectorization both for sieve-index calculations and sieve-array updates in block sieving, and for the insertion stage in bucket sieving. Our experiments on Intel Xeon Skylake processors demonstrate a performance boost in both single-core and multi-core environments. The introduction of cache-friendly sieving yields a speedup of up to 63%; on top of that, vectorization yields a further speedup of up to 25%.
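To illustrate the kind of vectorized sieve-array update the abstract refers to, the following is a minimal sketch in C using AVX-512 gather/scatter intrinsics; it is not the authors' implementation. The function name sieve_update_avx512, the 32-bit sieve cells, and the assumption that indices fit in 32 bits are simplifications chosen for illustration (production sievers typically pack byte-sized counters and handle blocks and buckets explicitly).

#include <stdint.h>
#include <immintrin.h>

/* Illustrative sketch only: update sieve locations r, r+p, r+2p, ...
 * with log(p) for one (prime, root) pair, 16 locations per iteration.
 * Assumes 32-bit sieve cells and indices that fit in 32 bits (typical
 * for a single sieve block). Within one vector the 16 indices differ
 * by multiples of p, so no scatter conflicts can occur.               */
static void sieve_update_avx512(int32_t *sieve, int64_t len,
                                int32_t p, int32_t r, int32_t logp)
{
    const __m512i vstep = _mm512_set1_epi32(16 * p);   /* advance by 16 hits */
    const __m512i vlog  = _mm512_set1_epi32(logp);
    const __m512i lanes = _mm512_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7,
                                            8, 9, 10, 11, 12, 13, 14, 15);
    /* idx[j] = r + j*p for j = 0..15 */
    __m512i idx = _mm512_add_epi32(_mm512_set1_epi32(r),
                                   _mm512_mullo_epi32(lanes,
                                                      _mm512_set1_epi32(p)));
    int64_t i = r;
    while (i + 15LL * p < len) {
        __m512i cells = _mm512_i32gather_epi32(idx, sieve, 4); /* load 16 cells */
        cells = _mm512_add_epi32(cells, vlog);                 /* add log(p)    */
        _mm512_i32scatter_epi32(sieve, idx, cells, 4);         /* store back    */
        idx = _mm512_add_epi32(idx, vstep);                    /* next 16 hits  */
        i  += 16LL * p;
    }
    for (; i < len; i += p)            /* scalar tail for the last few hits */
        sieve[i] += logp;
}

The same gather/add/scatter pattern can be confined to cache-sized blocks (block sieving) or applied when emptying buckets of precomputed hits (bucket sieving), which is where the cache-friendly and vectorized variants described in the paper come together.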