Title
EfficientSeg: An Efficient Semantic Segmentation Network
Authors
Abstract
Deep neural network training with little data and without pre-trained weights has been shown to require more training iterations. It is also known that deeper models are more successful than their shallow counterparts for the semantic segmentation task. Thus, we introduce the EfficientSeg architecture, a modified and scalable version of U-Net that can be trained efficiently despite its depth. We evaluated the EfficientSeg architecture on the Minicity dataset, where it outperformed the U-Net baseline score (40% mIoU) using the same parameter count (51.5% mIoU). Our most successful model obtained a 58.1% mIoU score and took fourth place in the semantic segmentation track of the ECCV 2020 VIPriors challenge.
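The scores above use mean Intersection-over-Union (mIoU), the standard semantic segmentation metric: per-class IoU is the overlap between predicted and ground-truth pixels of that class divided by their union, averaged over classes. A minimal NumPy sketch (the function name and skip-absent-class convention are illustrative assumptions, not code from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union over integer label maps.

    Classes absent from both prediction and ground truth are
    skipped rather than counted as IoU 0 (a common convention).
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class appears in neither map; skip it
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example: class 0 has IoU 1/2, class 1 has IoU 2/3
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(round(mean_iou(pred, target, num_classes=2), 4))  # 0.5833
```

Benchmark implementations (e.g. the official Cityscapes scripts used for Minicity) accumulate a confusion matrix over the whole dataset before averaging, rather than averaging per-image values as a naive loop would.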