Polynomial Factorization: a Success Story

Abstract

The problem of factoring a polynomial in one or several variables over a finite field, the rational numbers, or the complex numbers is one of the success stories in the discipline of symbolic computation. In the early 1960s implementors investigated the constructive methods known from classical algebra books but, with the exception of Gauss's distinct-degree factorization algorithm, found the algorithms quite inefficient in practice. The contributions to algorithmic techniques made over the next 40 years are truly a hallmark of symbolic computation research. The early pioneers, Berlekamp, Musser, Wang, Weinberger, Zassenhaus, and others, applied new ideas such as randomization (even before the now-famous primality testing algorithms of Rabin and Strassen) and generic programming with coefficient domains as abstract data classes, and they introduced the powerful Hensel lifting lemma to computer algebra. We note that while derandomization of integer primality testing has recently been accomplished, the same remains open for the problem of computing a root of a polynomial modulo a large prime [12, Research Problem 14.48].

Polynomial-time complexity for rational coefficients was established in the early 1980s by the now-famous lattice basis reduction algorithm of A. Lenstra, H. W. Lenstra, Jr., and L. Lovasz. The case of many variables first became an application of the DeMillo-Lipton/Schwartz/Zippel lemma and then triggered a fundamental generalization from the standard sparse (distributed) representation of polynomials to representations by straight-line and black-box programs. Effective versions of the Hilbert irreducibility theorem are needed for the probabilistic analysis; serendipitously, these later also played a role in the PCP characterization of NP. Unlike many other problems in commutative algebra and algebraic geometry, such as algebraic system solving, the polynomial factoring problem is of probabilistic polynomial-time complexity in the number of variables. Complex coefficients in multivariate factors can be represented either by exact algebraic numbers or by imprecise floating-point numbers. The latter formulation is a cornerstone of the new computer algebra subject of SNAP (Symbolic-Numeric Algorithms for Polynomials) (see, e.g., [4]). The approaches for both exact and imprecise coefficients are manifold, including Ruppert's partial differential equations and Gao's and Lauder's far-reaching generalization of Eisenstein's criterion in the multivariate case to Newton polytope decomposition.

The currently best algorithms were all discovered within the past ten years. The baby-steps/giant-steps technique and fast distinct- and equal-degree factorization implementations at last yielded, in the mid-1990s, theoretical and practical improvements over the original univariate Berlekamp algorithm for coefficients in finite fields. The average-time analysis for selected algorithms has also been completed. For bivariate polynomials over finite fields, Groebner basis techniques are, surprisingly, useful in practice. New polynomial-time complexity results are the computation of low-degree factors of very high-degree sparse (lacunary) polynomials by H. W. Lenstra, Jr., and the deterministic distinct-degree factorization for multivariate polynomials over large finite fields.
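Distinct-degree factorization, from Gauss's original method to the fast mid-1990s implementations mentioned above, rests on the fact that x^(p^d) - x is the product of all monic irreducible polynomials over GF(p) whose degree divides d. The following Python sketch (all function names are illustrative; it assumes a squarefree monic input and is meant to fix ideas, not to be a production implementation) separates such a polynomial into its equal-degree parts:

    def trim(a):
        # drop trailing zero coefficients; polynomials are lists, lowest degree first
        while len(a) > 1 and a[-1] == 0:
            a.pop()
        return a

    def polydivmod(a, b, p):
        # quotient and remainder of a by b over GF(p), p prime
        a = a[:]
        inv = pow(b[-1], p - 2, p)          # inverse of the leading coefficient
        q = [0] * max(1, len(a) - len(b) + 1)
        while len(a) >= len(b) and any(a):
            shift = len(a) - len(b)
            c = a[-1] * inv % p
            q[shift] = c
            for i, bi in enumerate(b):
                a[shift + i] = (a[shift + i] - c * bi) % p
            trim(a)
        return trim(q), trim(a)

    def polygcd(a, b, p):
        # monic greatest common divisor via the Euclidean algorithm
        while any(b):
            a, b = b, polydivmod(a, b, p)[1]
        inv = pow(a[-1], p - 2, p)
        return [c * inv % p for c in a]

    def polymulmod(a, b, f, p):
        # product of a and b, reduced modulo f
        r = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                r[i + j] = (r[i + j] + ai * bj) % p
        return polydivmod(trim(r), f, p)[1]

    def powmod(h, e, f, p):
        # h^e modulo f, by repeated squaring
        result, base = [1], h
        while e:
            if e & 1:
                result = polymulmod(result, base, f, p)
            base = polymulmod(base, base, f, p)
            e >>= 1
        return result

    def distinct_degree(f, p):
        # split squarefree monic f into pairs (d, product of all its
        # irreducible factors of degree d), using gcd(x^(p^d) - x, f)
        parts, h, d = [], [0, 1], 0          # h starts as the polynomial x
        while len(f) > 1:
            d += 1
            h = powmod(h, p, f, p)           # h = x^(p^d) mod f
            diff = h[:] + [0] * (2 - len(h)) # diff = h - x
            diff[1] = (diff[1] - 1) % p
            g = polygcd(f, trim(diff), p)
            if len(g) > 1:
                parts.append((d, g))
                f = polydivmod(f, g, p)[0]
        return parts

    # (x + 1)(x + 2)(x^2 + 1) over GF(7): the linear factors are collected
    # at d = 1 and the irreducible quadratic at d = 2.
    print(distinct_degree([2, 3, 3, 3, 1], 7))
    # -> [(1, [2, 3, 1]), (2, [1, 0, 1])]

The fast algorithms alluded to in the abstract replace the d successive Frobenius powers with baby-steps/giant-steps products of such powers, reducing the number of modular compositions.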
However, many problems with high-degree polynomials over large finite fields in sparse or straight-line program representations, such as computing a root modulo a large prime, are neither known to be in random polynomial time nor known to be NP-hard (cf. [24, 25, 15]). Finally, in 2000 Mark van Hoeij reintroduced lattice basis reduction, now within the Berlekamp-Zassenhaus algorithm, to conquer the hard-to-factor Swinnerton-Dyer polynomials in practice; Sasaki had already hinted at this approach in 1993. In my talk I will discuss a selection of the highlights, state remaining open problems, and give some applications, including an unusual one.
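Van Hoeij's contribution is easiest to appreciate on a concrete Swinnerton-Dyer example. The polynomial x^4 - 10x^2 + 1, the minimal polynomial of sqrt(2) + sqrt(3), is irreducible over the rationals, yet it splits into factors of degree at most two modulo every prime, so the factor-recombination stage of plain Berlekamp-Zassenhaus must search exponentially many candidate subsets of modular factors; van Hoeij's lattice basis reduction prunes that search in practice. A short demonstration (using the sympy library, my own choice for illustration, not something referenced in the abstract):

    from sympy import symbols, factor_list

    x = symbols('x')
    f = x**4 - 10*x**2 + 1   # minimal polynomial of sqrt(2) + sqrt(3)

    # Irreducible over the rationals: a single factor of degree 4.
    print(factor_list(f))

    # Modulo every prime it splits into factors of degree at most 2,
    # which is what makes naive modular factor recombination expensive.
    for p in (3, 5, 7, 11, 13):
        print(p, factor_list(f, modulus=p))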