Paper Title

On Neural Architectures for Astronomical Time-series Classification with Application to Variable Stars

Paper Authors

Sara Jamal, Joshua S. Bloom

Paper Abstract

Despite the utility of neural networks (NNs) for astronomical time-series classification, the proliferation of learning architectures applied to diverse datasets has so far hampered a direct intercomparison of different approaches. Here we perform the first comprehensive study of variants of NN-based learning and inference for astronomical time-series, aiming to provide the community with an overview of relative performance and, hopefully, a set of best-in-class choices for practical implementations. In both supervised and self-supervised contexts, we study the effects of different time-series-compatible layer choices, namely dilated temporal convolutional NNs (dTCNs), Long Short-Term Memory (LSTM) NNs, Gated Recurrent Units (GRUs), and temporal convolutional NNs (tCNNs). We also study the efficacy and performance of encoder-decoder (i.e., autoencoder) networks compared to direct classification networks, different pathways to include auxiliary (non-time-series) metadata, and different approaches to incorporating multi-passband data (i.e., multiple time-series per source). Performance, measured on a sample of 17,604 variable stars from the MACHO survey across 10 imbalanced classes, is assessed via training convergence time, classification accuracy, reconstruction error, and the generated latent variables. We find that networks with recurrent NNs (RNNs) generally outperform dTCNs and, in many scenarios, yield similar accuracy to tCNNs. In learning time and memory requirements, convolution-based layers are more performant. We conclude by discussing the advantages and limitations of deep architectures for variable star classification, with a particular eye towards next-generation surveys such as LSST, WFIRST, and ZTF2.
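
To make the layer families compared in the abstract concrete, below is a minimal PyTorch sketch contrasting recurrent classifiers (LSTM/GRU) with a dilated temporal convolutional classifier over light-curve tensors of shape (batch, time, passbands). This is not the authors' code: the class names, layer sizes, two-passband input, and pooling choices are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation) of the layer families
# compared in the study: recurrent layers (LSTM, GRU) versus stacked dilated
# 1-D temporal convolutions. Sizes below are hypothetical placeholders.
import torch
import torch.nn as nn

n_bands, hidden, n_classes = 2, 64, 10  # e.g., two passbands, 10 MACHO classes


class RecurrentClassifier(nn.Module):
    """LSTM- or GRU-based classifier over (batch, time, n_bands) light curves."""

    def __init__(self, cell="lstm"):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(n_bands, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, n_bands)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])  # classify from the final hidden state


class DilatedTCNClassifier(nn.Module):
    """Stacked dilated 1-D convolutions with exponentially growing dilation
    ('same' padding, non-causal, for simplicity)."""

    def __init__(self, levels=4, kernel=3):
        super().__init__()
        layers, ch = [], n_bands
        for i in range(levels):
            d = 2 ** i                   # dilation doubles at each level
            layers += [nn.Conv1d(ch, hidden, kernel, dilation=d,
                                 padding=d * (kernel - 1) // 2),
                       nn.ReLU()]
            ch = hidden
        self.net = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_bands)
        h = self.net(x.transpose(1, 2))   # Conv1d expects (batch, channels, time)
        return self.head(h.mean(dim=-1))  # global average pool over time
```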

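The abstract also contrasts encoder-decoder (autoencoder) networks against direct classifiers and mentions pathways for auxiliary metadata. Here is a similarly hedged sketch, assuming a GRU autoencoder whose latent code is concatenated with non-time-series metadata before a classification head; this is one plausible pathway, not the paper's specific architecture, and all names and sizes are hypothetical.

```python
# Illustrative sketch (assumptions throughout) of an encoder-decoder network
# whose latent code feeds a classifier alongside auxiliary metadata.
import torch
import torch.nn as nn


class LightCurveAutoencoder(nn.Module):
    def __init__(self, n_bands=2, hidden=64, latent=16, n_meta=4, n_classes=10):
        super().__init__()
        self.encoder = nn.GRU(n_bands, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.decoder = nn.GRU(latent, hidden, batch_first=True)
        self.to_bands = nn.Linear(hidden, n_bands)
        # classification head fed by the latent code plus auxiliary metadata
        self.classifier = nn.Linear(latent + n_meta, n_classes)

    def forward(self, x, meta):              # x: (batch, time, n_bands)
        _, h = self.encoder(x)               # h: (1, batch, hidden)
        z = self.to_latent(h.squeeze(0))     # z: (batch, latent)
        # repeat the latent code at every time step, then decode a reconstruction
        z_seq = z.unsqueeze(1).expand(-1, x.size(1), -1)
        dec, _ = self.decoder(z_seq)
        x_hat = self.to_bands(dec)           # reconstruction of the input curves
        logits = self.classifier(torch.cat([z, meta], dim=-1))
        return x_hat, logits
```

One plausible training objective for such a network, in the combined self-supervised and supervised setting the study explores, would be a mean-squared reconstruction error on `x_hat` plus a cross-entropy loss on `logits`.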