Video object segmentation, the task of labelling a foreground object of interest throughout a video, has widespread applications. We revisit One-Shot Video Object Segmentation (OSVOS), a simple method that adapts VGG to image segmentation using a structure similar to a Fully Convolutional Network. We propose a range of improvements that make OSVOS competitive with newer methods while preserving its simplicity. Specifically, we replace the VGG backbone with EfficientNet and adopt the U-Net architecture. We also combine Focal Loss and Dice Loss to handle the imbalanced binary classification, and we remove the boundary snapping module. With these amendments, we achieve 82.4% J&F on the DAVIS 2016 validation set, an improvement over the original 80.2% of OSVOS, while also achieving much faster per-frame inference than OSVOS.
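The abstract mentions combining Focal Loss and Dice Loss to counter the foreground/background imbalance. A minimal NumPy sketch of the standard formulations of these two losses follows; the exact weighting, hyperparameters (`alpha`, `gamma`), and framework used in the paper are not specified here, so these values are illustrative assumptions.

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified pixels.
    alpha/gamma defaults follow common practice, not this paper."""
    probs = np.clip(probs, eps, 1 - eps)
    p_t = np.where(targets == 1, probs, 1 - probs)        # prob. of the true class
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)    # class-balance weight
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

def dice_loss(probs, targets, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), insensitive to
    the large number of background pixels."""
    inter = np.sum(probs * targets)
    return float(1 - (2 * inter + eps) / (np.sum(probs) + np.sum(targets) + eps))

# Toy 2x2 frame: predicted foreground probabilities vs. ground-truth mask.
probs = np.array([[0.9, 0.1], [0.8, 0.2]])
targets = np.array([[1, 0], [1, 0]])
total_loss = focal_loss(probs, targets) + dice_loss(probs, targets)
```

In practice the two terms are summed (possibly with a weighting factor) and minimized jointly: Dice Loss directly optimizes region overlap, while Focal Loss keeps per-pixel gradients focused on hard examples.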