Paper Title

Power-law Scaling to Assist with Key Challenges in Artificial Intelligence

Authors

Yuval Meir, Shira Sardi, Shiri Hodassman, Karin Kisos, Itamar Ben-Noam, Amir Goldental, Ido Kanter

Abstract

Power-law scaling, a central concept in critical phenomena, is found to be useful in deep learning, where optimized test errors on handwritten digit examples converge as a power law to zero with database size. For rapid decision making with one training epoch, in which each example is presented only once to the trained network, the power-law exponent increases with the number of hidden layers. For the largest dataset, the obtained test error was estimated to be close to that of state-of-the-art algorithms trained for large numbers of epochs. Power-law scaling assists with key challenges found in current artificial intelligence applications and facilitates an a priori estimation of the dataset size needed to achieve a desired test accuracy. It establishes a benchmark for measuring training complexity and a quantitative hierarchy of machine learning tasks and algorithms.
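To make the abstract's a priori dataset-size estimation concrete, here is a minimal sketch of the underlying idea: fit the power law ε(D) ≈ A·D^(−β) to measured test errors in log-log space, then invert the fit for a target error. The dataset sizes, error values, and the 1% target below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative (hypothetical) test errors measured at increasing
# training-set sizes; these values are NOT taken from the paper.
sizes = np.array([1_000, 2_000, 4_000, 8_000, 16_000])
errors = np.array([0.080, 0.058, 0.042, 0.030, 0.022])

# Fit error(D) = A * D**(-beta) by linear regression in log-log space:
# log(error) = log(A) - beta * log(D).
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
beta, A = -slope, np.exp(intercept)

# Invert the fitted law to estimate, a priori, the dataset size needed
# to reach a desired test error (here 1%, an arbitrary target).
target_error = 0.01
required_size = (A / target_error) ** (1.0 / beta)

print(f"fitted exponent beta = {beta:.3f}")
print(f"estimated examples for {target_error:.0%} test error: {required_size:,.0f}")
```

Because the fit is linear in log-log coordinates, a few measured points at small dataset sizes suffice to extrapolate, which is what makes the estimate usable before committing to collecting a large dataset.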
