Killmer (2002), in the article "So much information, so little time: Evaluating Web resources with search engines," states, "An abundance of search engine tools can be used to retrieve information from the World Wide Web. Search Engine Watch (2001) reports that more than 75 search engine tools are available and provide links to many relevant sources" (p. 21).
However, in using Web search engines to retrieve information, one must be cognizant of the many pitfalls a web page and its search engine can present. For instance, search engines can retrieve pages out of context, forcing one to navigate back to the "home page" to determine the source of the information. Another problem with search engines is filtering the results: it may take a number of "hits" before one reaches the relevant information.
The source of information is often difficult to determine because the authorship of a web resource is missing. The author's qualifications are frequently absent, and the responsible publisher is often not indicated on the page. The instability of web pages is another drawback: users may not be able to refer back to a page because its content changes constantly.
There are a number of ways to gauge the performance of search engines. Three of the key points are as follows. Key Point Number 1: Not all search engines find the same information. Killmer (2002) states, "This can be attributed to the fact that different search engines are used, as well as the fact that each engine belongs to a different category of search tool" (p. 21).
Each search engine differs in its search capabilities and needs certain information in order to retrieve the relevant or needed information. Some search engines are more adept at finding a specific piece of data, document, or site, while others retrieve large volumes of information. Some search engines utilize indexing software agents, often called "robots" or "spiders." These agents are programmed to constantly "crawl" the web in search of new or updated pages.
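The "spider" idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any real engine's crawler: it walks a hypothetical in-memory "web" (a dict standing in for HTTP fetches) breadth-first, records each page in an index, and queues the links it finds.

```python
from html.parser import HTMLParser
from collections import deque

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first 'spider': fetch a page, index it, queue its links.
    `fetch(url)` returns the page's HTML, or None if unavailable
    (here a stand-in for a real HTTP GET)."""
    index = {}                       # url -> raw HTML: the crawler's "index"
    queue = deque([start_url])
    seen = {start_url}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        index[url] = html
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            if link not in seen:     # never visit the same page twice
                seen.add(link)
                queue.append(link)
    return index

# A tiny hypothetical "web" so the sketch runs without network access.
pages = {
    "a.html": '<a href="b.html">B</a> <a href="c.html">C</a>',
    "b.html": '<a href="a.html">back</a>',
    "c.html": "no links here",
}
index = crawl("a.html", pages.get)
print(sorted(index))  # ['a.html', 'b.html', 'c.html']
```

A production spider would add politeness delays, robots.txt handling, and URL normalization; the breadth-first queue-and-seen-set structure, however, is the core of the "constantly crawl" behavior the text describes.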
Furthermore, each search tool depends on, and differs in, the database it uses for finding information. Key Point Number 2: Retrieval effectiveness of the search results. Two measures are used to evaluate the retrieval effectiveness of search engines: recall and precision. "Recall measures how well an engine retrieves all the relevant documents, whereas precision measures how well the system retrieves only the relevant documents" (Blair and Maron 1985).
In other words, recall is the percentage of sites we want that were retrieved, while precision is the percentage of sites retrieved that we want. Figure 1 below depicts a graphical representation of recall and precision.
In Figure 1, U represents the entire "web universe." Areas A, B, and C partition the sites relevant to a search. Area A contains the sites we want but always miss, whereas area C contains the sites we do not want but always receive. Areas A plus B are the sites we want to see, and areas B plus C are the sites we get in return. Recall is calculated by dividing area B by the sum of areas A and B; precision is calculated by dividing area B by the sum of areas B and C.
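The area arithmetic from Figure 1 translates directly into set operations. The sketch below (site names are made up for illustration) computes both measures: the overlap of wanted and retrieved sites is area B, the wanted-but-missed sites are area A, and the retrieved-but-unwanted sites are area C.

```python
def recall_precision(wanted, retrieved):
    """Recall = B / (A + B): fraction of wanted sites actually retrieved.
    Precision = B / (B + C): fraction of retrieved sites actually wanted."""
    wanted, retrieved = set(wanted), set(retrieved)
    b = len(wanted & retrieved)                            # area B: the overlap
    recall = b / len(wanted) if wanted else 0.0            # B / (A + B)
    precision = b / len(retrieved) if retrieved else 0.0   # B / (B + C)
    return recall, precision

# Hypothetical search: the engine misses one wanted site ("s1")
# and returns two unwanted ones ("x1", "x2").
wanted = {"s1", "s2", "s3", "s4"}
retrieved = {"s2", "s3", "s4", "x1", "x2"}
print(recall_precision(wanted, retrieved))  # (0.75, 0.6)
```

Here recall is 3/4 (three of the four wanted sites were found) and precision is 3/5 (three of the five returned sites were wanted), matching the B/(A+B) and B/(B+C) divisions described above.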
Key Point Number 3: Obtaining different results using a combination of search engines and search times. If one were to use all 75 search engines, one would find differences not only in site content but also in the results. As Killmer (2002) states, "all search engines provide different results, even if they are simultaneously selected.
As noted by Lawrence and Giles (1999), the overlap in information between the various search engines is relatively low; therefore, one obvious conclusion is that combining the results of multiple engines greatly improves coverage of the web in searches" (p. 26).
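The coverage argument above can be made concrete with a small sketch. The numbers are invented for illustration (not Lawrence and Giles's data): because the engines' result sets overlap only slightly, the union of three engines covers roughly twice the universe that any single engine does.

```python
def coverage(universe_size, result_sets):
    """Fraction of the (estimated) web universe covered by the
    union of one or more engines' result sets."""
    combined = set().union(*result_sets)
    return len(combined) / universe_size

# Hypothetical result sets for three engines; page ids are arbitrary.
engine_a = {1, 2, 3, 4}
engine_b = {3, 4, 5, 6}
engine_c = {6, 7, 8}
overlap_ab = len(engine_a & engine_b)   # low overlap: only 2 shared pages

print(coverage(1000, [engine_a]))                       # 0.004
print(coverage(1000, [engine_a, engine_b, engine_c]))   # 0.008
```

With low overlap, each added engine contributes mostly new pages, so combined coverage grows nearly additively; this is exactly why metasearching improves coverage.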
Two critical-thinking questions stem from the use of search engines for the retrieval of information. First, can meta tags increase the chances of one's web page being found via a search engine? "Effective" meta tags can assist search engines in identifying a site and giving it a "higher" score when relevant search terms are requested. Second, are there dates on the page to indicate when the page was written, when it was first placed on the Web, and when it was last revised? A date is an indication that the material on the page is kept current, and as such, that the retrieved information is up to date. For financial information, is there an indication that it was filed with the Securities and Exchange Commission, and is the filing date listed? For material from a company's annual report, is the date of the report listed? Relation to Profession.
I currently work with ten Department of Defense, Air Force, software-intensive programs. These programs are web-based and are tested against a number of web-related objectives, one of which is the usability of their search engines. In testing the operational effectiveness of these programs, we ensure that each web site provides adequate response times. We need to determine the number of concurrent users and identify the fastest and slowest web pages to be downloaded.
While the number of "hits," page views, or operational users per day are useful measures from an operational test perspective, a better gauge of the load being applied to a web site includes the number of transactions per second, the number of concurrent users, and the number of session initiations per hour. There is a tremendous amount of knowledge and information available on the Web. The most expeditious and efficient electronic way to search for documents, data, and information is through one or more of the 75 available search engines. Not all search engines are created equal: some need more defined and precise requests to provide useful and relevant information, some differ in their searching capabilities and in how they display the requested information, and others maintain their own databases.
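The load measures mentioned above (transactions per second, concurrent users) can be derived from an access log. The sketch below is a simplified illustration, not our test tooling: it takes hypothetical (timestamp, session id) request records and reports throughput plus the peak number of distinct sessions active in any one second.

```python
from collections import defaultdict

def load_metrics(requests):
    """Summarize web-site load from (timestamp_sec, session_id) records.
    Returns (transactions per second, peak concurrent sessions in a second)."""
    per_second = defaultdict(set)          # second -> distinct session ids seen
    for ts, session in requests:
        per_second[int(ts)].add(session)
    span = max(per_second) - min(per_second) + 1
    tps = len(requests) / span             # transactions per second
    peak_sessions = max(len(s) for s in per_second.values())
    return tps, peak_sessions

# Hypothetical log: (seconds since test start, session id)
log = [(0, "u1"), (0, "u2"), (1, "u1"), (2, "u3"), (2, "u4"), (3, "u2")]
tps, peak = load_metrics(log)
print(tps, peak)  # 1.5 2
```

Six transactions over a four-second window give 1.5 transactions per second, and no second ever sees more than two distinct sessions; these are the kinds of figures that characterize applied load better than raw daily hit counts.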
Other search engines, such as metasearch engines, can search multiple databases simultaneously via a single interface. Search engines can be used to evaluate web resources; in doing so, one can assess their information-retrieval effectiveness through the recall and precision measures of wanted or relevant information.

Reference: Killmer, K. A., and Koppel, N. B. (2002, August). So much information, so little time: Evaluating Web resources with search engines. T.H.E. Journal, 30(1), 21-29.
Figure 1. Graphical Representation of Recall and Precision.