Web Image Re-Ranking Using Query-Specific Semantic Signatures (2014)

ABSTRACT:

Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not correlate well with images’ semantic meanings, which reflect users’ search intention. Recent work has proposed matching images in a semantic space, using attributes or reference classes closely related to the semantic meanings of images as its basis. However, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework, which automatically learns different semantic spaces for different query keywords offline. The visual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Experimental results show a 25-40 percent relative improvement in re-ranking precision compared with state-of-the-art methods.

EXISTING SYSTEM:

WEB-SCALE image search engines mostly use keywords as queries and rely on surrounding text to search images. They suffer from the ambiguity of query keywords, because it is hard for users to accurately describe the visual content of target images only using keywords. For example, using “apple” as a query keyword, the retrieved images belong to different categories (also called concepts in this paper), such as “red apple,” “apple logo,” and “apple laptop.”

Keyword-based text search is the most common form of search on the Web; most search engines perform query and retrieval using keywords alone. Keyword-based searches often return results from blogs or other discussion boards, which users tend to distrust, and they suffer from low precision and a high recall rate. Early search engines offered only limited disambiguation of search terms. Identifying user intention therefore plays an important role in an intelligent semantic search engine.

DISADVANTAGES OF EXISTING SYSTEM:

*       Some popular visual features are high dimensional, and matching them directly is inefficient.

*       Another major challenge is that, without online training, the similarities of low-level visual features may not correlate well with images’ high-level semantic meanings, which reflect users’ search intention.

*       Re-ranking methods often fail to capture the user’s intention when the query keyword is ambiguous.

PROPOSED SYSTEM:

In this paper, a novel framework is proposed for web image re-ranking. Instead of manually defining a universal concept dictionary, it learns different semantic spaces for different query keywords individually and automatically. The semantic space related to the images to be re-ranked can be significantly narrowed down by the query keyword provided by the user. For example, if the query keyword is “apple,” the concepts of “mountain” and “Paris” are irrelevant and should be excluded. Instead, the concepts of “computer” and “fruit” will be used as dimensions to learn the semantic space related to “apple.” The query-specific semantic spaces can more accurately model the images to be re-ranked, since they exclude a potentially unlimited number of irrelevant concepts, which would serve only as noise and degrade re-ranking performance in both accuracy and computational cost. The visual and textual features of images are then projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space of the query keyword. The semantic correlation between concepts is explored and incorporated when computing the similarity of semantic signatures.
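
To make the two-stage pipeline concrete, here is a minimal C# sketch. It assumes nearest-centroid models as stand-ins for the reference-class classifiers, with a softmax over negative distances producing the signature; the class names, feature dimensions, and numbers are illustrative, not the paper's actual configuration.

```csharp
using System;
using System.Linq;

class SemanticSignatureDemo
{
    // Offline-learned projection: each dimension of the signature is the
    // softmax-normalized negative distance to one reference-class centroid.
    static double[] ToSignature(double[] feature, double[][] centroids)
    {
        var scores = centroids.Select(c => -Distance(feature, c)).ToArray();
        double max = scores.Max();
        var exp = scores.Select(s => Math.Exp(s - max)).ToArray();
        double sum = exp.Sum();
        return exp.Select(e => e / sum).ToArray();
    }

    static double Distance(double[] a, double[] b)
    {
        double d = 0;
        for (int i = 0; i < a.Length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.Sqrt(d);
    }

    // Online stage: compare two short semantic signatures by cosine similarity.
    static double Cosine(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb) + 1e-12);
    }

    static void Main()
    {
        // Two hypothetical reference-class centroids for the keyword "apple"
        // ("fruit" and "computer") in a toy 4-D visual feature space.
        double[][] centroids =
        {
            new[] { 1.0, 0.1, 0.0, 0.2 },   // "fruit"
            new[] { 0.0, 0.9, 1.0, 0.1 },   // "computer"
        };

        double[] query = { 0.9, 0.2, 0.1, 0.2 };       // user-selected query image
        double[] candidate = { 0.8, 0.1, 0.0, 0.3 };   // image to be re-ranked

        var sq = ToSignature(query, centroids);
        var sc = ToSignature(candidate, centroids);
        Console.WriteLine($"signature similarity = {Cosine(sq, sc):F3}");
    }
}
```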

We also propose a semantic web based search engine, called an Intelligent Semantic Web Search Engine. It uses the power of XML meta-tags deployed on a web page to search for the queried information; the XML page consists of built-in and user-defined tags. The metadata of each page is extracted from this XML into RDF. Our practical results show that the proposed approach takes very little time to answer queries while providing more accurate information.
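
As a rough illustration of the meta-tag extraction step, the following sketch parses user-defined XML tags and prints RDF-style (subject, predicate, object) triples. The tag names, page URL, and triple layout are assumptions made for the example, not a prescribed schema.

```csharp
using System;
using System.Xml.Linq;

class MetaTagExtractor
{
    static void Main()
    {
        // A toy page with user-defined meta-tags.
        string pageXml = @"
            <page url='http://example.com/apple'>
              <meta name='topic' content='apple fruit' />
              <meta name='category' content='food' />
            </page>";

        XElement page = XElement.Parse(pageXml);
        string subject = (string)page.Attribute("url");

        // Emit one triple per <meta> tag: (page URL, tag name, tag content).
        foreach (XElement meta in page.Elements("meta"))
        {
            string predicate = (string)meta.Attribute("name");
            string obj = (string)meta.Attribute("content");
            Console.WriteLine($"<{subject}> <{predicate}> \"{obj}\" .");
        }
    }
}
```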

ADVANTAGES OF PROPOSED SYSTEM:

1) The visual features of images are projected into their related semantic spaces, which are automatically learned through keyword expansion offline.

2) Our experiments show that the semantic space of a query keyword can be described by just 20-30 concepts (also referred to as “reference classes”). The semantic signatures are therefore very short, and online image re-ranking becomes extremely efficient (see the timing sketch after this list). Because of the large number of keywords and the dynamic variations of the web, the semantic spaces of query keywords are automatically learned through keyword expansion.

3) Our query-specific semantic signatures effectively reduce the gap between low-level visual features and semantic meanings.

4) Query-specific semantic signatures are also effective on image re-ranking without query images being selected.

5) Collecting information from users to obtain the specified semantic space.

6) Localizing the visual characteristics of the user’s intention in this specific semantic space.
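
The efficiency claim in item 2 can be illustrated with a toy benchmark (not the paper's experiment): matching one query against a pool of images using raw high-dimensional features versus short 25-dimensional signatures. The pool size and feature dimension below are arbitrary assumptions chosen to make the gap visible.

```csharp
using System;
using System.Diagnostics;

class MatchingCost
{
    static double Dot(double[] a, double[] b)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++) s += a[i] * b[i];
        return s;
    }

    // Builds a random pool, then times a linear scan of similarity scores.
    static long TimeMatch(int poolSize, int dim)
    {
        var rnd = new Random(0);
        var query = new double[dim];
        for (int i = 0; i < dim; i++) query[i] = rnd.NextDouble();

        var pool = new double[poolSize][];
        for (int j = 0; j < poolSize; j++)
        {
            pool[j] = new double[dim];
            for (int i = 0; i < dim; i++) pool[j][i] = rnd.NextDouble();
        }

        var sw = Stopwatch.StartNew();
        double best = double.MinValue;
        foreach (var img in pool) best = Math.Max(best, Dot(query, img));
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine($"raw features (10,000-D): {TimeMatch(10000, 10000)} ms");
        Console.WriteLine($"signatures (25-D):       {TimeMatch(10000, 25)} ms");
    }
}
```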

MODULES:

1.      Re-Ranking Accuracy

2.      Re-Ranking Images outside Reference Class

3.      Incorporating Semantic Correlations

4.      Semantic-Based Re-Ranking

MODULE DESCRIPTION

Re-Ranking Accuracy:

In this module, we invited five labelers to manually label the testing images under each query keyword into different categories according to their semantic meanings. Image categories were carefully defined by the five labelers by inspecting all the testing images under a query keyword. Defining image categories was completely independent of discovering reference classes: the labelers were unaware of which reference classes had been discovered by our system, and the number of image categories may differ from the number of reference classes. Each image was labeled by at least three labelers, and its label was decided by voting. Images irrelevant to the query keyword were labeled as outliers and not assigned to any category.
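
A minimal sketch of the voting rule described above: each image receives at least three labels, and the majority label wins. The label strings and the tie-breaking behavior (the first of the largest groups wins) are illustrative assumptions.

```csharp
using System;
using System.Linq;

class LabelVoting
{
    // Majority vote over the labels assigned to one image.
    static string Vote(string[] labels) =>
        labels.GroupBy(l => l)
              .OrderByDescending(g => g.Count())
              .First().Key;

    static void Main()
    {
        // Three labelers examined the same image.
        Console.WriteLine(Vote(new[] { "red apple", "red apple", "outlier" }));
        // prints: red apple
    }
}
```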

Re-Ranking Images outside Reference Class:

It is interesting to know whether the query-specific semantic spaces are effective for query images outside reference classes. We design an experiment to answer this question. If the category of a query image corresponds to a reference class, we deliberately delete this reference class and use the remaining reference classes to train classifiers and to compute semantic signatures when comparing this query image with other images.
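
The leave-one-class-out protocol can be sketched as follows; the reference-class names are hypothetical, and the comment stands in for the classifier training and signature computation steps.

```csharp
using System;
using System.Linq;

class LeaveOneClassOut
{
    static void Main()
    {
        // Hypothetical reference classes discovered for the keyword "apple".
        var allClasses = new[] { "red apple", "apple logo", "apple laptop" };

        // The query image's category happens to match one reference class...
        string queryCategory = "apple logo";

        // ...so that class is deliberately deleted, and classifiers and
        // semantic signatures are computed over the remaining classes only.
        var remaining = allClasses.Where(c => c != queryCategory).ToArray();
        Console.WriteLine("train on: " + string.Join(", ", remaining));
    }
}
```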

Incorporating Semantic Correlations:

We can further incorporate semantic correlations between reference classes when computing image similarities. For each type of semantic signature obtained above, i.e., QSVSS Single, QSVSS Multiple, and QSTVSS Multiple, we compute the image similarity and name the corresponding results QSVSS SingleCorr, QSVSS MultipleCorr, and QSTVSS MultipleCorr, respectively. We compare the re-ranking precisions of all types of semantic signatures on the three data sets. Notably, QSVSS SingleCorr achieves around 10 percent relative improvement compared with QSVSS Single, matching the performance of QSVSS Multiple even though its signature is six times shorter.
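
One common way to fold correlations into a similarity measure is the bilinear form s(a, b) = aᵀCb, where C holds pairwise correlations between reference classes. The sketch below uses that form as an assumption standing in for the paper's exact formulation; the signatures and correlation values are toy data.

```csharp
using System;

class CorrelatedSimilarity
{
    // s(a, b) = a^T C b: off-diagonal entries of C let correlated
    // reference classes reinforce each other instead of being orthogonal.
    static double Similarity(double[] a, double[] b, double[,] corr)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++)
            for (int j = 0; j < b.Length; j++)
                s += a[i] * corr[i, j] * b[j];
        return s;
    }

    static double[,] Identity(int n)
    {
        var id = new double[n, n];
        for (int i = 0; i < n; i++) id[i, i] = 1.0;
        return id;
    }

    static void Main()
    {
        // Two 3-D signatures whose mass sits on related classes (1 and 2).
        double[] a = { 0.7, 0.3, 0.0 };
        double[] b = { 0.0, 0.4, 0.6 };

        double[,] corr =
        {
            { 1.0, 0.2, 0.0 },
            { 0.2, 1.0, 0.5 },
            { 0.0, 0.5, 1.0 },
        };

        Console.WriteLine($"plain dot product: {Similarity(a, b, Identity(3)):F3}");
        Console.WriteLine($"with correlations: {Similarity(a, b, corr):F3}");
    }
}
```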

Semantic-Based Re-Ranking:

A query-specific semantic signature can also be applied to image re-ranking without selecting query images. This application still requires the user to input a query keyword, but it assumes that the images returned by the initial text-only search have a dominant topic, and that images belonging to this topic should be ranked higher. Our query-specific semantic signature is effective in this application since it improves the similarity measurement between images. In this experiment, QSVSS Multiple is used to compute similarities.
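
A minimal sketch of this keyword-only mode, assuming the dominant topic is surfaced by scoring each image with its average signature similarity to all other images; the scoring rule and toy signatures are illustrative, not the paper's exact method.

```csharp
using System;
using System.Linq;

class DominantTopicReRank
{
    static double Dot(double[] a, double[] b) =>
        a.Zip(b, (x, y) => x * y).Sum();

    static void Main()
    {
        // Toy 2-D signatures: three on-topic images and one outlier.
        double[][] sigs =
        {
            new[] { 0.90, 0.10 },
            new[] { 0.80, 0.20 },
            new[] { 0.85, 0.15 },
            new[] { 0.10, 0.90 },   // off-topic image
        };

        // Score = mean similarity to every other image; rank descending,
        // so members of the dominant topic rise to the top.
        var ranked = sigs
            .Select((s, idx) => new
            {
                idx,
                score = sigs.Where((t, j) => j != idx).Average(u => Dot(s, u))
            })
            .OrderByDescending(x => x.score);

        foreach (var r in ranked)
            Console.WriteLine($"image {r.idx}: score {r.score:F3}");
    }
}
```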

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

Ø System                          :         Pentium IV 2.4 GHz.

Ø Hard Disk                      :         40 GB.

Ø Floppy Drive                 :         1.44 MB.

Ø Monitor                         :         15" VGA Colour.

Ø Mouse                            :         Logitech.

Ø RAM                             :         512 MB.

SOFTWARE REQUIREMENTS:

Ø Operating system           :         Windows XP/7.

Ø Coding Language         :         ASP.NET, C#

Ø Tool                                  :         Visual Studio 2010

Ø Database                        :         SQL SERVER 2008
