
A fully convolutional two-stream fusion network for interactive image segmentation

journal contribution
posted on 2018-10-17, 11:26, authored by Yang Hu, Andrea Soltoggio, Russell Lock, Steve Carter
In this paper, we propose a novel fully convolutional two-stream fusion network (FCTSFN) for interactive image segmentation. The proposed network includes two sub-networks: a two-stream late fusion network (TSLFN) that predicts the foreground at a reduced resolution, and a multi-scale refining network (MSRN) that refines the foreground at full resolution. The TSLFN includes two distinct deep streams followed by a fusion network. The intuition is that, since user interactions provide more direct information on the foreground/background than the image itself, the two-stream structure of the TSLFN reduces the number of layers between the pure user-interaction features and the network output, allowing the user interactions to have a more direct impact on the segmentation result. The MSRN fuses features from different layers of the TSLFN at different scales, in order to capture local-to-global information on the foreground and refine the segmentation result at full resolution. We conduct comprehensive experiments on four benchmark datasets. The results show that the proposed network achieves competitive performance compared to current state-of-the-art interactive image segmentation methods.
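To make the two-stream late-fusion idea from the abstract concrete, below is a minimal PyTorch sketch. All layer counts, channel widths, and module names (ImageStream-style blocks, the refinement head, etc.) are illustrative assumptions, not the authors' exact architecture; the paper itself should be consulted for the real TSLFN and MSRN designs. The sketch shows the key structural point: the image and the user-interaction maps are processed by separate streams and fused late, so interaction features pass through only a few layers before the output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions followed by 2x spatial downsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class TSLFN(nn.Module):
    """Two-stream late fusion network (sketch): separate streams for
    the image and the user-interaction maps, fused late so interaction
    features reach the output through few layers."""

    def __init__(self):
        super().__init__()
        # Image stream: RGB input.
        self.img_stream = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        # Interaction stream: 2-channel input, e.g. distance maps
        # derived from foreground and background clicks (an assumption
        # about the interaction encoding).
        self.int_stream = nn.Sequential(
            conv_block(2, 32), conv_block(32, 64), conv_block(64, 128))
        # Shallow fusion head: foreground logits at 1/8 resolution.
        self.fusion = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1))

    def forward(self, image, interactions):
        fi = self.img_stream(image)
        fu = self.int_stream(interactions)
        return self.fusion(torch.cat([fi, fu], dim=1))


class FCTSFN(nn.Module):
    """TSLFN plus a simple upsample-and-refine stage, standing in for
    the paper's multi-scale refining network (MSRN)."""

    def __init__(self):
        super().__init__()
        self.tslfn = TSLFN()
        self.refine = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, image, interactions):
        coarse = self.tslfn(image, interactions)
        up = F.interpolate(coarse, size=image.shape[-2:],
                           mode='bilinear', align_corners=False)
        return self.refine(up)


# Usage: a 480x480 image with foreground/background click maps.
img = torch.randn(1, 3, 480, 480)
clicks = torch.randn(1, 2, 480, 480)
mask_logits = FCTSFN()(img, clicks)   # -> shape (1, 1, 480, 480)
```

The design choice the sketch illustrates is the late fusion itself: because the interaction stream is fused with the image stream only a couple of layers before the prediction, the click information is not diluted by a deep shared backbone, which is the intuition the abstract gives for the two-stream structure.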

Funding

This work was supported by Ice Communication Limited and Innovate UK (project KTP/10412).

History

School

  • Science

Department

  • Computer Science

Published in

Neural Networks

Volume

109

Pages

31-42

Citation

HU, Y. ... et al., 2018. A fully convolutional two-stream fusion network for interactive image segmentation. Neural Networks, 109, pp. 31-42.

Publisher

© Elsevier

Version

  • AM (Accepted Manuscript)

Publisher statement

This paper was accepted for publication in the journal Neural Networks and the definitive published version is available at https://doi.org/10.1016/j.neunet.2018.10.009

Acceptance date

2018-10-09

Publication date

2018-10-21

Copyright date

2019

ISSN

0893-6080

Language

  • en