GrowCut - Interactive Multi-Label N-D Image Segmentation

Moscow State University
Graphics and Media Lab

Publications

  • Vezhnevets V., Konouchine V. "GrowCut" - Interactive Multi-Label N-D Image Segmentation. Accepted to GraphiCon-2005.
    .pdf (871 KB)

Downloads

'GML GrowCut' - FREEWARE Adobe Photoshop selection plugin

Introduction

Image segmentation is an inherent part of important image processing applications such as automated medical image analysis and photo editing. A wide range of computational vision problems could also benefit from the existence of a reliable and efficient image segmentation technique. For instance, intermediate-level vision problems such as shape from silhouette, shape from stereo and object tracking in video could make use of a reliable segmentation of the object of interest from the rest of the scene. Higher-level problems such as recognition and image indexing can also make use of segmentation results in matching.

Fully automated segmentation techniques are constantly being improved; however, the current state of the art is such that no automated image segmentation technique can be applied fully autonomously with reliable results in the general case. That is why semi-automatic segmentation techniques, which allow solving moderate and hard segmentation problems with modest interactive effort on the part of the user, are becoming more and more popular.

Related work

Current state-of-the-art interactive segmentation tools can be loosely divided into two families - those applied to generic images, and those designed especially for medical image processing.

Generic image segmentation tools include:

  • Magic Wand - a common selection tool in almost any image editor nowadays. It gathers color statistics from a user-specified image point (or region) and selects the connected image region whose pixels' color properties fall within a given tolerance of the gathered statistics (a toy sketch of this idea follows this list).

  • Intelligent Paint - a region-based interactive segmentation technique built on hierarchical image segmentation by tobogganing. It coordinates human-computer interaction to extract regions of interest from the background using paint strokes with a mouse.

  • Intelligent Scissors (Magnetic Lasso) - a boundary-based method that computes a minimum-cost path between user-specified boundary points via shortest-path graph algorithms.

  • Graph Cut - a combinatorial optimization technique applied to the task of image segmentation. For the case of two labels (i.e. object and background), the globally optimal pixel labelling can be computed efficiently by max-flow/min-cut algorithms.

  • GrabCut - extends graph cut with an iterative segmentation scheme that uses graph cut for the intermediate steps. The user draws a rectangle around the object of interest; each iteration re-estimates the color statistics of the object and background and applies graph cut to compute a refined segmentation.

  • Random walker - given a small number of pixels with user-defined seed labels, this method analytically determines the probability that a random walker starting at each unlabeled pixel first reaches one of the pre-labeled pixels. Assigning each pixel to the label with the greatest probability yields the segmentation (a short library-based example also follows this list).
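
To make the tolerance idea behind Magic Wand concrete, here is a minimal sketch in Python (NumPy): a flood fill that keeps connected pixels whose color stays close to the color at the user-chosen seed point. It is an illustration only - real tools gather richer statistics and offer more options - and the function and parameter names are ours.

```python
import numpy as np
from collections import deque

def magic_wand(image, seed, tolerance=0.1):
    """Toy Magic-Wand-style selection (illustrative, not any editor's actual code).

    image     -- (H, W, 3) float RGB array with values in [0, 1]
    seed      -- (row, col) point clicked by the user
    tolerance -- maximum allowed color distance from the seed color
    """
    h, w, _ = image.shape
    reference = image[seed]                 # color "statistics": just the seed color here
    selected = np.zeros((h, w), dtype=bool)
    selected[seed] = True
    queue = deque([seed])

    while queue:                            # breadth-first flood fill
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not selected[ny, nx]:
                if np.linalg.norm(image[ny, nx] - reference) <= tolerance:
                    selected[ny, nx] = True
                    queue.append((ny, nx))
    return selected                         # boolean selection mask
```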
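
The random walker approach is also available in common libraries. As a short illustration of the seed-based interface (the same kind of interaction used by the method proposed below), the snippet uses scikit-image's random_walker on a synthetic image; the test image and parameter values are ours and purely illustrative.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic test image: a bright square on a dark, noisy background.
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0
image += 0.2 * rng.standard_normal(image.shape)

# Seed labels: 0 = unlabeled, 1 = object, 2 = background.
seeds = np.zeros(image.shape, dtype=int)
seeds[32, 32] = 1      # one seed inside the object
seeds[2, 2] = 2        # one seed in the background

# Each unlabeled pixel gets the label its random walker is most likely to reach first.
labels = random_walker(image, seeds, beta=130)
```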

Proposed method

Our proposed method uses a completely different instrument - a cellular automaton - for solving the pixel labelling task, starting from a small number of pixels with user-specified seed labels. We adopt a popular user interaction scheme: the user places several seeds (or loosely paints with a 'seed brush') inside the object(s) in the image that should be segmented from each other. A similar scheme is used by other authors, and both their experiments and our own have shown that it is easy for the user to master and allows the desired result to be achieved quickly.

Our method is iterative, giving feedback to the user as the segmentation is computed. It allows (but does not require) human input during the labelling process, providing dynamic interaction and feedback between the user and the algorithm. This lets the user correct and guide the algorithm where the segmentation is difficult, yet requires no additional user effort where the segmentation is easy to compute automatically.
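
To illustrate what a cellular-automaton labelling of this kind can look like, here is a minimal, unoptimized Python sketch. Each pixel is a cell with a label and a strength; at every iteration a cell is conquered by the neighbour whose strength, attenuated by the color difference, exceeds its own. The neighbourhood, attenuation function and stopping rule below are simplified assumptions for illustration, not the exact implementation behind the plugin (see the paper for the precise evolution rule).

```python
import numpy as np

def cellular_automaton_labelling(image, seeds, max_iterations=200):
    """Toy seed-growing by cellular automaton (illustrative sketch only).

    image -- (H, W, 3) float RGB array with values in [0, 1]
    seeds -- (H, W) int array: 0 = unlabeled, 1..K = user seed labels
    """
    h, w = seeds.shape
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)        # seed cells start fully confident
    max_color_dist = np.sqrt(3.0)               # largest possible RGB distance in [0, 1]^3
    neighbours = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

    for _ in range(max_iterations):
        new_labels = labels.copy()
        new_strength = strength.copy()
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in neighbours:
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx] == 0:
                        continue
                    # Attack force: the neighbour's strength, attenuated by color difference.
                    g = 1.0 - np.linalg.norm(image[y, x] - image[ny, nx]) / max_color_dist
                    force = g * strength[ny, nx]
                    if force > new_strength[y, x]:       # the strongest attacker conquers the cell
                        new_strength[y, x] = force
                        new_labels[y, x] = labels[ny, nx]
                        changed = True
        labels, strength = new_labels, new_strength
        if not changed:                                  # stop once no cell changes hands
            break
    return labels
```

Because the label map is valid after every iteration, it can simply be recomputed as the user paints additional seeds, which is what makes the interactive feedback loop described above possible.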

Photo editing examples

[Figures: easy, moderate, and hard segmentation examples]

More complicated examples, created with our interactive implementation. The average time for an average user to achieve the shown results is 15 to 25 seconds.

[Figure: hard segmentation example]

Medical image examples

[Figure: segmentation of a medical image]

The project team

Vladimir Vezhnevets (vvp @ graphicon.ru), Vadim Konushin
Graphics & Media lab (webmaster@graphics.cs.msu.su)