Abstract

This paper discusses the possibility of stitching two video streams into a panoramic view in real time. The main challenge in constructing panoramic video in real time is the time-consuming nature of image stitching.
Image alignment does not require detecting every interest point; a few well-distributed points are enough to compute a good projection matrix, and the strong similarity between consecutive frames in a video stream provides temporal information that can be used to reduce the number of selected points.
In this paper, an image stitching method based on the Harris Corner Detector is customized to meet the requirements of real-time video viewing, by reducing the number of interest points used in each stitching step and by employing parallel processing to separate the calculation of the transform matrix from the blending process.
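To make the idea concrete, the following is a minimal Python/OpenCV sketch of such a stitching core, assuming Harris corners (via cv2.goodFeaturesToTrack with useHarrisDetector=True), Lucas-Kanade tracking as a stand-in for the corner matching step, and a single worker thread to overlap homography estimation with blending; all names and parameters are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def harris_points(gray, max_points=50):
    """Detect a small set of strong, well-distributed Harris corners."""
    return cv2.goodFeaturesToTrack(gray, max_points, 0.01, 20,
                                   useHarrisDetector=True, k=0.04)

def estimate_homography(left_gray, right_gray):
    """Match a few corners across the two views and fit the projection matrix."""
    pts_left = harris_points(left_gray)
    # Illustrative matching step: track the left-view corners into the right view.
    pts_right, status, _ = cv2.calcOpticalFlowPyrLK(left_gray, right_gray,
                                                    pts_left, None)
    good = status.ravel() == 1
    # H maps right-view coordinates onto the left frame's plane.
    H, _ = cv2.findHomography(pts_right[good], pts_left[good], cv2.RANSAC, 5.0)
    return H

def blend(left, right, H, pano_size):
    """Warp the right frame onto the left frame's plane (naive overlay blend).
    pano_size = (width, height) is assumed large enough to hold both frames."""
    pano = cv2.warpPerspective(right, H, pano_size)
    pano[:left.shape[0], :left.shape[1]] = left
    return pano

def stitch_stream(frame_pairs, pano_size):
    """Pipeline: blend frame t with the homography of frame t-1 while a worker
    thread estimates the homography of frame t in parallel."""
    executor = ThreadPoolExecutor(max_workers=1)
    H = None
    for left, right in frame_pairs:
        left_gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
        future = executor.submit(estimate_homography, left_gray, right_gray)
        if H is not None:
            yield blend(left, right, H, pano_size)  # uses the previous transform
        H = future.result()
```

Blending with the previous frame's transform while the current transform is being estimated is one way to realize the separation described above; it trades one frame of latency in the alignment for a higher preview rate.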
The number of interest points is reduced by fragmenting the images into several regions and by using feedback from the stitching of the previous frame to limit the search space to the overlapping area.
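As an illustration of this search-space reduction, the sketch below (again an assumption, not the authors' code) keeps at most one strong Harris corner per grid cell and searches only the vertical strip that overlapped in the previous frame's stitching; overlap_x0 and the grid size are hypothetical parameters.

```python
import cv2
import numpy as np

def corners_in_overlap(gray, overlap_x0, grid=(4, 4)):
    """Detect at most one strong Harris corner per grid cell, searching only
    the overlap strip starting at column overlap_x0 (fed back from the
    previous frame's stitching)."""
    roi = gray[:, overlap_x0:]                      # limit the search space
    response = cv2.cornerHarris(np.float32(roi), 2, 3, 0.04)
    h, w = response.shape
    points = []
    for gy in range(grid[0]):                       # fragment the ROI into regions
        for gx in range(grid[1]):
            cell = response[gy * h // grid[0]:(gy + 1) * h // grid[0],
                            gx * w // grid[1]:(gx + 1) * w // grid[1]]
            y, x = np.unravel_index(np.argmax(cell), cell.shape)
            if cell[y, x] > 0.01 * response.max():  # keep only strong corners
                points.append((overlap_x0 + gx * w // grid[1] + x,
                               gy * h // grid[0] + y))
    return np.float32(points).reshape(-1, 1, 2)
```

Points produced this way could replace the full-frame detection in harris_points above, so each new frame pays only for corner detection inside the expected overlap region while still yielding points spread evenly enough for a stable projection matrix.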
Results show that (i) panoramas generated by the proposed algorithm feature a smooth transition in the overlapping image areas and satisfy human visual requirements; and (ii) the preview speed of the generated panorama satisfies the real-time requirements commonly accepted for video panorama stitching.