Physics Maths Engineering

Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead


Authors:

  • Kai Zhang, School of Information Science and Engineering (zhangkainuc@sdnu.edu.cn)
  • Feng Zhang, School of Information Science and Engineering (2019010100@stu.sdnu.edu.cn)
  • Wenbo Wan, School of Information Science and Engineering (wanwenbo@sdnu.edu.cn)
  • Hui Yu, School of Creative Technologies (hui.yu@port.ac.uk)
  • Jiande Sun, School of Information Science and Engineering (jiandesun@sdnu.edu.cn)
  • Javier Del Ser, TECNALIA, Basque Research and Technology Alliance (BRTA)
  • Eyad Elyan, School of Computing

  Peer Reviewed

© attribution CC-BY

679 Views

Added on 2023-05-10

DOI: https://doi.org/10.1016/j.inffus.2022.12.026

Abstract

Panchromatic and multispectral image fusion, termed pan-sharpening, merges the spatial and spectral information of the source images into a single fused image that has higher spatial and spectral resolution and is more reliable for downstream tasks than any of the source images alone. It has been widely applied to image interpretation and to pre-processing in a variety of applications. A large number of methods have been proposed to achieve better fusion results by considering the spatial and spectral relationships between panchromatic and multispectral images. In recent years, the rapid development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques; however, the field lacks a comprehensive overview of these recent advances. This paper provides a comprehensive review of pan-sharpening methods across four paradigms: component substitution, multiresolution analysis, degradation models, and deep neural networks. As an important aspect of pan-sharpening, the evaluation of the fused image is also outlined, covering assessment methods for both reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment, and summarize development trends that offer useful methodological practices for researchers and professionals. The aim of the survey is to serve as a referential starting point for newcomers and a common point of agreement on the research directions to be followed in this exciting area.

Key Questions

What is pan-sharpening?

Pan-sharpening is the process of merging panchromatic (high spatial resolution) and multispectral (high spectral resolution) images to create a single image with both high spatial and spectral resolution. This fused image is more reliable for tasks like image interpretation and analysis.

Why is pan-sharpening important?

Pan-sharpening is crucial for applications like satellite imagery, environmental monitoring, and urban planning. It enhances the quality of images, making them more useful for downstream tasks such as object detection, land cover classification, and change detection.

What are the main methods for pan-sharpening?

The main methods include:

  • Component Substitution: Replaces parts of the multispectral image with details from the panchromatic image.
  • Multiresolution Analysis: Decomposes images into different resolution levels and combines them.
  • Degradation Model: Simulates the imaging process to align and fuse the images.
  • Deep Neural Networks: Uses AI to learn the best way to merge spatial and spectral information.
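To make the component-substitution idea concrete, here is a minimal sketch of the classic Brovey transform, one well-known component-substitution method (not taken from the paper, and simplified: it assumes the multispectral cube has already been upsampled to the panchromatic resolution, and uses the plain band sum as the intensity component):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-8):
    """Brovey-transform pan-sharpening (component substitution).

    ms  : upsampled multispectral image, shape (H, W, B)
    pan : panchromatic band at the same resolution, shape (H, W)

    Each MS band is rescaled so that the per-pixel band sum matches
    the PAN intensity, injecting the PAN spatial detail into every band.
    """
    intensity = ms.sum(axis=2) + eps   # crude intensity component
    ratio = pan / intensity            # per-pixel detail-injection ratio
    return ms * ratio[..., None]       # rescale every band by the ratio
```

After fusion, the band sum of the result reproduces the panchromatic band, which is exactly the substitution step: spatial structure comes from PAN, while the relative band proportions (the spectral shape) come from MS.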

How have AI and deep learning improved pan-sharpening?

AI and deep learning have revolutionized pan-sharpening by enabling more accurate and efficient fusion of images. Deep neural networks can learn complex relationships between spatial and spectral features, resulting in higher-quality fused images compared to traditional methods.

What are the challenges in pan-sharpening?

Challenges include:

  • Balancing spatial and spectral accuracy in the fused image.
  • Handling noise and artifacts introduced during fusion.
  • Lack of standardized datasets and evaluation metrics.

How is the quality of pan-sharpened images evaluated?

Quality is evaluated using:

  • Reduced-Resolution Assessment: Compares the fused image to a reference image at a lower resolution.
  • Full-Resolution Assessment: Measures quality without a reference image, focusing on spatial and spectral consistency.
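As an illustration of reduced-resolution assessment, the widely used Spectral Angle Mapper (SAM) measures spectral distortion between a fused image and a reference. This is a minimal sketch under the assumption that both images share the shape (H, W, B); it is an example metric, not the paper's own evaluation code:

```python
import numpy as np

def sam_degrees(reference, fused, eps=1e-12):
    """Mean Spectral Angle Mapper (SAM) in degrees.

    Treats each pixel as a B-dimensional spectral vector and averages the
    angle between reference and fused vectors; 0 means spectrally identical.
    """
    ref = reference.reshape(-1, reference.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)
    dots = (ref * fus).sum(axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1) + eps
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return float(np.degrees(angles.mean()))
```

Because SAM compares directions rather than magnitudes, uniformly rescaling an image leaves the score unchanged, which is why it is paired with intensity-sensitive metrics (e.g., ERGAS) in practice.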

What are the future trends in pan-sharpening?

Future trends include:

  • Developing more advanced deep learning models for better fusion.
  • Creating standardized datasets and benchmarks for fair comparison.
  • Integrating pan-sharpening with other image processing tasks for end-to-end solutions.

What are the practical applications of pan-sharpening?

Pan-sharpening is used in:

  • Satellite imagery for environmental monitoring and disaster management.
  • Urban planning for detailed land use analysis.
  • Agriculture for crop health monitoring and yield prediction.

How can newcomers get started with pan-sharpening research?

This survey serves as a comprehensive starting point for newcomers. It provides an overview of methods, challenges, and future directions, along with references to key techniques and datasets.

What are the limitations of current pan-sharpening techniques?

Limitations include:

  • Difficulty in preserving both spatial and spectral details perfectly.
  • Dependence on the quality of input images.
  • Computational complexity, especially for deep learning methods.

ARTICLE USAGE


Article usage: May-2023 to Jun-2025

Month           Manuscript   Video Summary
2025 June       56           56
2025 May        89           89
2025 April      73           73
2025 March      73           73
2025 February   46           46
2025 January    52           52
2024 December   39           39
2024 November   37           37
2024 October    32           32
2024 September  41           41
2024 August     27           27
2024 July       35           35
2024 June       19           19
2024 May        30           30
2024 April      24           24
2024 March      6            6
Total           679          679
Related Subjects
Physics
Math
Chemistry
Computer science
Engineering
Earth science
Biology

