Image fusion based on nonsubsampled contourlet transform and saliency-motivated pulse coupled neural networks (Q459528)

From MaRDI portal
scientific article; zbMATH DE number 6354131

      Statements

      Image fusion based on nonsubsampled contourlet transform and saliency-motivated pulse coupled neural networks (English)
      13 October 2014
      Summary: A novel image fusion algorithm based on the visual attention model and pulse coupled neural networks (PCNNs) is proposed in the nonsubsampled contourlet transform (NSCT) domain. For the fusion of the high-pass subbands, a saliency-motivated PCNN model is introduced: each high-pass subband coefficient is combined with its visual saliency map and fed to the PCNN as the stimulus, and the coefficients with the larger firing times are selected as the fused high-pass subband coefficients. The low-pass subband coefficients are merged by a weighted fusion rule based on the PCNN firing times. The fused image contains abundant detail from the source images, effectively preserves their salient structures, and enhances contrast; it keeps object regions complete and sharp, appears more natural, and better matches the human visual system (HVS). Experiments demonstrate that the proposed algorithm yields better fusion performance.
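      The firing-time selection rule for the subband coefficients can be sketched as follows. This is a minimal illustration only: it uses a simplified PCNN (feeding input equal to the stimulus, exponentially decaying linking and threshold) applied to a single pair of subbands, the NSCT decomposition and the saliency weighting of the paper are omitted, and all parameter values (`beta`, `alpha_L`, `alpha_T`, `V_L`, `V_T`, the iteration count) are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of the 8-connected neighbours of each pixel (zero padding at the border)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:]
            + P[1:-1, :-2] + P[1:-1, 2:]
            + P[2:, :-2] + P[2:, 1:-1] + P[2:, 2:])

def pcnn_firing_times(S, iterations=200, beta=0.2,
                      alpha_L=1.0, alpha_T=0.2, V_L=1.0, V_T=20.0):
    """Run a simplified PCNN on stimulus S and return per-pixel firing counts.

    |S| is used directly as the feeding input; larger (more salient) stimuli
    exceed the decaying threshold earlier and fire more often, so the firing
    count ranks coefficients. Parameter values are illustrative assumptions.
    """
    S = np.abs(np.asarray(S, dtype=float))
    L = np.zeros_like(S)        # linking input
    Y = np.zeros_like(S)        # binary pulse output
    theta = np.ones_like(S)     # dynamic threshold
    T = np.zeros_like(S)        # accumulated firing times
    for _ in range(iterations):
        L = np.exp(-alpha_L) * L + V_L * neighbor_sum(Y)  # linking from neighbour pulses
        U = S * (1.0 + beta * L)                          # internal activity
        Y = (U > theta).astype(float)                     # pulse when activity beats threshold
        theta = np.exp(-alpha_T) * theta + V_T * Y        # threshold decays, jumps on a pulse
        T += Y
    return T

def fuse_subband(A, B):
    """Per pixel, keep the coefficient whose PCNN firing count is larger."""
    return np.where(pcnn_firing_times(A) >= pcnn_firing_times(B), A, B)
```

      In a full pipeline one would call `fuse_subband` on each pair of corresponding high-pass NSCT subbands and reconstruct the fused image by the inverse transform; the paper's low-pass rule would instead weight the two coefficients by their firing times rather than hard-select one.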
