The increasing popularity of smart mobile devices and the growing intelligence of modern society have led to a rising demand for location-based services. Several indoor positioning technologies, including Bluetooth, Wi-Fi, and UWB, have rapidly emerged and found applications in diverse indoor environments such as stadiums, underground mines, and construction sites. However, the widespread adoption of indoor positioning based on these sensors is hindered by varying environmental requirements and levels of signal interference. Recently, researchers have favored indoor positioning methods based on cell phone images, owing to their convenient access to visual information, low operating cost, and rich feature information. Nonetheless, existing vision-based indoor positioning methods suffer from inadequate real-time performance, significant positioning errors, and limited robustness. Therefore, this paper investigates image retrieval based on a cell phone image database and indoor localization based on an improved RANSAC algorithm. The specific work is as follows:
Conventional image retrieval algorithms face challenges such as viewpoint and angle variations and the semantic gap when retrieving indoor scene images. To address these challenges, this paper employs an image retrieval method that combines the visual bag-of-words model with the TF-IDF model. First, the SURF algorithm is applied to extract features from each image, and the extracted features are clustered using the K-means algorithm to build a visual vocabulary. Next, a k-d tree dictionary is introduced to classify and store the images, producing an indoor image database that contains both image feature information and geographic location details. Finally, the TF-IDF weighting model is employed for feature indexing, and similarity calculation is applied to retrieve the database images that most closely resemble the user's query image. The proposed method enhances the accuracy and semantic consistency of the retrieval results, laying a solid foundation for the subsequent positioning work.
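The retrieval pipeline above (feature extraction, K-means vocabulary, TF-IDF weighting, similarity ranking) can be sketched as follows. This is a minimal illustration, not the paper's implementation: synthetic random descriptors stand in for SURF features (SURF itself requires OpenCV's nonfree contrib module), the k-d tree index is omitted for brevity, and cosine similarity is assumed as the similarity measure, which the text does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # plain Lloyd's K-means: the cluster centers become the visual words
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_histogram(desc, centers):
    # quantize each descriptor to its nearest visual word, count occurrences
    words = np.argmin(((desc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return np.bincount(words, minlength=len(centers)).astype(float)

# synthetic 64-D descriptors standing in for SURF features of 5 database images
db_desc = [rng.normal(i % 3, 1.0, size=(80, 64)) for i in range(5)]
centers = kmeans(np.vstack(db_desc), k=10)
hists = np.array([bow_histogram(d, centers) for d in db_desc])

# TF-IDF weighting: tf = word count / words in image,
# idf = log(number of images / images containing the word)
tf = hists / hists.sum(1, keepdims=True)
df = (hists > 0).sum(0)
idf = np.log(len(hists) / np.maximum(df, 1))
db_vecs = tf * idf

def retrieve(query_desc):
    # weight the query histogram the same way, rank by cosine similarity
    q = bow_histogram(query_desc, centers)
    q = (q / q.sum()) * idf
    sims = db_vecs @ q / (np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
    return int(np.argmax(sims))

print(retrieve(db_desc[2]))
```

In a full system the vocabulary would be trained once over all database descriptors, and each image's TF-IDF vector would be stored alongside its geographic location so that the top-ranked match directly yields a position candidate.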
To mitigate the poor accuracy and robustness of the existing RANSAC algorithm in removing image mismatches, an enhanced version is proposed. The improved RANSAC algorithm incorporates several modifications: constructing a triangular topology over the feature points of the two images, quantifying the mismatch probability of each feature point pair, computing the optimal homography matrix between the two images, and integrating the mismatch probability into the random sampling process of RANSAC. Experimental results show that the average localization error of the original RANSAC algorithm is 1.67 m, whereas the improved RANSAC algorithm achieves an average localization error of 0.69 m; moreover, the improved method attains an error below 1 m in 88% of the localization trials. By eliminating mismatched feature points before solving for the homography matrix, the enhanced algorithm significantly reduces image mismatches compared with the original RANSAC algorithm, thereby improving the accuracy and robustness of localization.
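The core idea, biasing RANSAC's random sampling toward feature pairs with low estimated mismatch probability, can be sketched as below. This is a hedged illustration under stated assumptions, not the paper's algorithm: the mismatch probabilities here are synthetic placeholders (the paper derives them from the triangular topology of the matched points), and the homography is estimated with a plain DLT.

```python
import numpy as np

rng = np.random.default_rng(1)

def dlt_homography(src, dst):
    # direct linear transform: stack two equations per correspondence,
    # take the null vector of A via SVD as the homography
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def weighted_ransac(src, dst, mismatch_prob, iters=200, thresh=2.0):
    # sampling weight favors pairs with low estimated mismatch probability
    w = 1.0 - mismatch_prob
    w = w / w.sum()
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False, p=w)
        H = dlt_homography(src[idx], dst[idx])
        err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set
    return dlt_homography(src[best_inliers], dst[best_inliers]), best_inliers

# synthetic correspondences: 40 pairs follow a true homography, 20 are mismatches
H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 1.1, -3.0], [1e-4, 0.0, 1.0]])
src = rng.uniform(0, 200, size=(60, 2))
dst = apply_h(H_true, src)
dst[40:] = rng.uniform(0, 200, size=(20, 2))
# placeholder mismatch probabilities; the paper computes these from topology
p_mis = np.r_[rng.uniform(0.0, 0.2, 40), rng.uniform(0.6, 0.95, 20)]
H_est, inliers = weighted_ransac(src, dst, p_mis)
print(inliers.sum())
```

Compared with uniform sampling, the weighted draw raises the chance that a minimal sample contains only correct matches, which is what reduces the number of iterations wasted on contaminated hypotheses.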