Also known as Crawlers and Robots. These programs are used by search engines to retrieve web pages to include in their database. Most spiders have cute or unusual names.
also called Crawler: a program that a search engine uses to 'crawl' the web, following links from page to page in order to index each page.
Computer robot programs, referred to sometimes as "crawlers" or "Bots" that are used by search engines to search the World Wide Web. They are also used by spammers to harvest email addresses from web pages to add to spam lists.
A spider is a software program that travels the Web (hence the name "spider"), ...
Part of a search engine which surfs the web, storing the URLs and indexing the keywords and text of each page it finds. After your website is submitted to search engines, their spiders will crawl the web and index your website.
A software program that traverses the Web to collect information about resources for later queries by users seeking to find resources; major species of active spiders include search engines such as Lycos and WebCrawler.
A script or program that explores the web, normally fetching and indexing data found for use in search engines. Web surfers can then search the indexed data for desired products and information. Synonym: Crawler.
A program that automatically retrieves Web sites to feed pages to search engines. As most pages contain links to other pages, a spider can begin "crawling" to retrieve a link as soon as one is recognized.
Also known as a bot, robot, or crawler. Spiders are programs used by a search engine to periodically explore your web site, download the HTML content (not including graphics) of your pages, strip out whatever they consider superfluous or redundant, and store the rest in a database (i.e. the index). A web crawler (also known as a web spider or ant) is a program which browses the World Wide Web in a methodical, automated manner. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a web site, such as checking links or validating HTML code, and for gathering specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam). A web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit. As it visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, recursively browsing the Web according to a set of policies.
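The recursive crawl loop described in the entry above is simple enough to sketch. The following Python fragment is a minimal illustration rather than any engine's real crawler: the seed URL, the 50-page cap, and the regex-based link extraction are all assumptions made for brevity.

    # Minimal crawl-loop sketch: visit seed URLs, fetch each page, pull out
    # hyperlinks, and queue any link not yet visited (illustrative only).
    import re
    import urllib.request
    from urllib.parse import urljoin

    def crawl(seed_urls, max_pages=50):           # max_pages is an arbitrary cap
        frontier = list(seed_urls)                 # URLs still to visit
        visited = set()                            # URLs already fetched
        pages = {}                                 # url -> HTML kept for later indexing
        while frontier and len(visited) < max_pages:
            url = frontier.pop(0)
            if url in visited:
                continue
            try:
                html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except Exception:
                continue                           # skip pages that fail to load
            visited.add(url)
            pages[url] = html
            # naive link extraction; a real spider uses a proper HTML parser
            for href in re.findall(r'href=["\'](.*?)["\']', html):
                link = urljoin(url, href)
                if link.startswith("http") and link not in visited:
                    frontier.append(link)
        return pages

    # pages = crawl(["https://example.com/"])     # hypothetical seed URL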
A term used to describe an application which travels the World Wide Web collecting information. Many Web search engines use spiders to collect data from which they build their indices.
A spider is a process that travels over the Web performing tasks like data collection and building indexes to data.
Eight-legged creature that lives in webs, and a program which browses the WWW in a methodical, automated manner. Another name for a web crawler or bot. A web crawler is one type of bot. Web crawlers not only keep a copy of all the visited pages for later processing, for example by a search engine, but also index these pages to make the search narrower.
(or crawler or bot): a program that visits Web pages, on a regular basis, reads their content, follows their links to the other pages in the Web site, then takes the information to the index
Automated software used by search engines to robotically acquire information about web pages for their index.
Software used by search engines to locate web sites to add to the engine's database.
A program that automatically searches documents on the World Wide Web in order to build topical, statistical, or historical indexes of websites. To search a website, a spider clicks on each link to access the content.
A program used by search engines that periodically searches the web to locate new content or entirely new websites.
A software robot that serves a search engine by exploring the net, collecting web page addresses and page contents, and following links from them to other addresses to collect still more web information. Also known as a worm or crawler. See search engine.
A computer program (a bot) that travels the Web to locate web pages to index.
The wire assembly on the dartboard that marks off target areas
Spider represents the web of creation. It is the crystal cluster, and how the many have come together to do the work together. Clusters can have hundreds of totem attributes to assist you on your path.
Also known as a spider, ant, robot ("bot") and intelligent agent, a crawler is a program that searches for information on the World Wide Web. It is used to locate new documents and new sites by following hypertext links from server to server and indexing information based on search criteria.
A program that automatically culls Web pages. Spiders are often used in search engines.
Also known as a crawler or robot, the part of a search engine that locates and indexes every page on the Web that is a possible answer to a searcher's query. Successful search engine optimisation depends on crawlers finding many or all of a Web site's pages. The most prolific spiders come from Google (googlebot), Yahoo (slurp) and MSN (msnbot).
spiders are sent out by various search engines to search the web for information on web sites
A software program used by search engines to crawl the Web, storing URLs and indexing the keywords and text of pages. Spiders are also referred to as crawlers or robots.
A spider is an automated program that searches and indexes web sites, usually with the intent of providing information for search engines.
A spider is a program run by a search engine to build a summary of a website's content (content index). It creates a text-based summary of content and an address (URL) for each webpage. When a user searches, the keyword(s) they enter are compared with the available website content indexes. Due to the large number of webpages indexed, direct text-only matching is rare; rather, search engines use sophisticated logic (algorithms) to rank potential matches. For example, the underlying information hierarchy of a webpage (semantic markup) may be factored into the ranking a webpage is assigned.
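The index-and-match step this entry describes can be illustrated with a toy inverted index. The page texts and URLs below are invented, and the match-count scoring is only a stand-in for the far more sophisticated ranking algorithms real engines use.

    # Toy content index: map each keyword to the pages containing it, then
    # score pages by how many query keywords they match (illustrative only).
    from collections import defaultdict

    pages = {
        "https://example.com/a": "fresh roasted coffee beans delivered weekly",
        "https://example.com/b": "coffee brewing guides and grinder reviews",
    }

    index = defaultdict(set)                       # keyword -> set of URLs
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)

    def search(query):
        scores = defaultdict(int)
        for word in query.lower().split():
            for url in index.get(word, ()):
                scores[url] += 1
        return sorted(scores, key=scores.get, reverse=True)

    print(search("coffee beans"))                  # page /a outranks /b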
a computer program that automatically crawls across the Web, from link to link, and sends information back to the search engine for indexing and ranking
a computer program that browses the Web to discover and to store web page contents
a computer program that searches for Web pages
a computer program that systematically browses the Web , building indexes as it follows every link it can find
a computer program used by search engines to find links to webpages
a computer software that moves from web page to web page by links gathering information
a conventionally known program that automatically explores the World Wide Web (WWW) by retrieving a document and recursively retrieving some or all of the documents that are referenced in it
an application used mainly by search engines to crawl through pages on the World Wide Web
an automated process whose results may vary
an automated program designed to gather information on Web pages
an automated program that continuously searches the web to build an index of links and words that the search engines can update and keep in a database
an automated software application that follows links, collects data and then sends that information back to a database
an electronic robot that travels the web examining websites in order to add them to a search engine database and rank them according to the specific ranking criteria for that search engine
a neural-network-powered search engine whose aim is to determine whether a given piece of raw information is relevant, using the experience of the active user of the hosting computer as a reference
an extremely useful software routine that follows links between pages
an indexer able to search the WWW, depth-first, and store and manage the topology
an unmanned program operated by a Search Engine that surfs the Web just like you would
a program that automatically surfs the web
a program that browses (crawls) web sites, extracting information for search engine databases
a program that collects Web pages by traversing the Web, following links from site to site in a systematic way
a program that crawls your site and finds your pages
a program that enters your web site and categorizes the information contained in the site to determine your web site's ranking in search results
a program that goes from page to page on the web, gathering information about what page links to what other pages
a program that goes to various web sites, reads their pages and other information, and creates entries for a search engine index
a program that helps to provide an index for search engines by going to different websites and grouping like information
a program that many search engines use to determine placement of a page on their index
a program that runs on a server and periodically visits your website to check for updates
a program that visits websites and reads their pages and other information in order to create entries for an index
a program that visits Web sites and reads their pages and other information in order to create entries for a search engine index
a program that visits your site and collects keywords from your pages
a program that works for a search engine, exactly like a
a robot from a search engine which crawls the site and indexes its content
a robot or program that is used to go to your site and index the information into the web results
a robot program that crawls the Web to index keywords and page text and then rank and order Web pages according to what it deems most relevant
a robot that follows links on the web to index them into a search engine
a search engine, like Yahoo or Google, but with embedded intelligence," Canton explains
a small piece of software that crawls around the Web picking up URLs and information about the pages they represent
a small piece of software that roams the web and records the information that appears on each page
a small piece of software that works its way through your website and returns information to search engines about the makeup of the website
a small program that crawls around on the web looking for food
a small software program that visits servers (machines that store the websites) and automatically requests documents from them for indexing
a software program operated by a search engine that surfs the Web, visits a Web site, records all the words on the pages, and notes links to other sites
a software program which crawls around the web reading web pages
a specialized bot that is designed to seek out other sites based on the content found in a known site
a type of 'bot, rather than infectious malware like viruses, trojans, or worms
a type of robot designed to traverse the web performing some task (usually collecting data)
a Web-roaming program used to find documents pertaining to your search
An automated software program that scans web sites to gather data for search engines. Once gathered, the data is processed and indexed to be stored in the search engine's database.
Any software agent that navigates the web, scans documents and adds them to an index by following links. SE spiders rely on their own judgement to classify websites; for that purpose, they collect keyword information from the website's pages.
Software used by search sites that scans content on Web pages, following HTML links. The content is then added to a search site's index.
bot that visits publicly accessible websites following all links it comes across collecting data for search engine 'indexes'. A spider discovers new sites and updates information from sites previously visited. A spider can also be used to check links within a website.
This is also called a "Bot" or "SpiderBot". This is a program used to visit web pages and bring information about that web page back to the source.
aka robot, bot. A web program that explores a website. Search engines use spiders to scan your site and store relevant details in their database.
The software that crawls your site and tries to determine what content it contains.
A software program that search engines use which visits every site on the web, follows all of the links, and catalogs all of the text of every web page that (a) contains text, and (b) it is able to visit or crawl.
A spider (or 'bot' or robot) is a little software program that crawls the Web, jumping from hyperlink to hyperlink, finding and collecting content. These little guys are constantly trying to keep the search results fresh and up-to-date and they never take a vacation. Some sites get crawled daily, some weekly and some monthly. Some cannot get crawled at all due to bad site design, server problems, dead links, database problems, or other technical issues. A spider visiting your site is a good sign, but it doesn't guarantee you'll see results right away. The spider just collects the data, a lot more processing has to go on behind the scenes before your page gets updated in the engines. There are different kinds of spiders. Some just look for 404 errors, some just look for images, some for blog feeds, etc.
Search Engine computer program that seeks and "reads" web content to determine placement in Search Engine "catalog".
A "robot" used by some search engines to "spider" a web site (to "crawl" from link to link until it has accessed and indexed all or most of the pages on the site).
An automated program designed to crawl about the Web indexing Web sites. They are also known as robots, crawlers, and wanderers. Some of them index Web sites by title, some by URL, some by words in each document in a Web site, and some by combinations of these.
A computer program that visits a website, travelling from page to page via links. The spider downloads the content of each page for analysis and storage in the search engine's database.
Software used by search engines to index web pages.
also called bots, spiders are software that scans and indexes the Web. Spiders have many applications from indexing images and content for search engines to harvesting e-mail addresses for spammers.
A common term referring to the software which search engines use to follow links finding new sites to add to their search engine indices. Also known as a webcrawler, robot, or 'bot.
software program used by Search Engines to roam the web looking for new web sites.
The automated programs sent out by search engines to review and index websites.
A program following links through web sites to include, update or delete data from a database.
Not the eight-legged creepy-crawly, but the action a search engine's automated robot/spider performs when following links from web page to web page on the www.
A meta search engine or its robot.
a computer program that finds webpages.
A software application used by search engine companies to visit web sites and return information about pages.
A spider or crawler is a program utilised by search engines to capture the content of your website and its pages.
A software application that automatically finds and retrieves information from the Web. Also called a "robot" or "crawler."
Another name for the software used by a search engine to collect data.
a search engine that searches the Web by document title and contents, archiving the information for searching purposes.
Also known as a Web crawler, a robot, Web spider, or sometimes a worm. A program that runs on the Internet, goes out to an URL (Web page), and requests all links that are referred to on that page. Robots learn as they go, building a database of links. They index based on meta tags in the HTML or the title, or just about anything else you can imagine. Usually, they come from search engines and are designed to keep the search engines current.
A program that automatically fetches Web pages. Spiders are used to feed pages to search engines. It's called a spider because it crawls over the Web. Another term for these programs is web crawler. Because most Web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches it.
The software program, also known as a robot, which a search engine runs to read through and analyze your site. Google's spiders are called Googlebot and Freshbot.
Also called a 'bot', a spider is a search engine program that crawls the internet adding web sites to the search engine data base
a program that searches the Internet and ...
This is software that searches the internet looking for keywords to record in its search engine database. It can also be called a web robot or a web crawler.
Automated software that retrieves webpages and follows the hyperlinks contained in them. Used to generate the indexes used by search engines.
An automated computer program that searches the Internet. Many search engines use spiders to catalog information on the Internet.
A search engine that obtains its information by starting at a specified Web page and visiting each page linked to it, and so on. This process continues as a spider moves its way across the Web.
(also bot, crawler) A program that searches throughout the World Wide Web, moving from link to link, from web page to web page, from web site to web site, collecting and indexing web pages for search engine applications.
Browser programs that are parts of search engines, which are not under human control. They surf the web, finding and indexing web sites, their content, keywords, text, links, etc. while storing their URL as well.
A program, also known as a worm or a crawler, that serves a search engine by exploring the Internet, collecting Web page addresses and page contents, and following links from those pages to collect even more information.
A spider is a special piece of software that is sent out by a search engine to index a web site's pages for inclusion in a search engine's database of information. Spiders work by parsing the HTML on a site, extracting relevant information and following any links that are contained on the page to continue the parsing process. It is important that your web site is indexable by a spider otherwise it may not visit some pages. A common method used to ensure that it indexes all the most important web pages is to include a site map on your site.
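The parsing step that entry mentions, pulling out the relevant text for the index and the links to follow next, can be sketched with Python's standard html.parser. The sample markup below is invented for illustration.

    # Sketch of the parsing step: collect visible text (for indexing) and
    # hrefs (for further crawling) from one page's HTML (illustrative only).
    from html.parser import HTMLParser

    class PageParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []        # hrefs the spider would follow next
            self.text = []         # text fragments the spider would index

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

        def handle_data(self, data):
            if data.strip():
                self.text.append(data.strip())

    parser = PageParser()
    parser.feed('<h1>Widgets</h1><p>All about widgets.</p><a href="/catalog">Catalog</a>')
    print(parser.links)            # ['/catalog']
    print(parser.text)             # ['Widgets', 'All about widgets.', 'Catalog']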
An automated browser program that follows links to visit websites but is not directly under human control. Robots then process and index the code and content of a webpage to be stored in the search engine's database. Also known as robot or crawler.
Spiders (or robots) are the software programs that search engines use to scan the web to build their indexes.
A spider is a program which browses the World Wide Web in a methodical, automated manner. A web crawler is one type of bot. Web crawlers not only keep a copy of all the visited pages for later processing - for example by a search engine but also index these pages to make the search narrower.
An automated program which is sent out by search engines to index websites on the internet.
A spider is an automated program that accesses a web site and traverses through the site by following the links present on the pages. Known as a bot, robot, spider or Web Crawler.
no, it's not that yucky thing crawling around your basement. This means a search engine visits web pages and reads their content. The action of "spidering" is following all the links to read all the site content.
An unattended program, usually operated by a search engine, which retrieves and indexes websites. As the spider "crawls" your website, it makes notes of all of the links contained within your site. Eventually the spider will come back and index any pages to which it finds links. Spiders have names, and they usually leave their calling card each time they come to visit your site. In most cases, spiders are good, because they are the most reliable way to be listed in a search engine. If you don't submit your site to the search engine, these spiders will eventually find your site via links from other websites.
not the 8-legged creature, but a special computer program that Search Engines use to find information on the Internet
A browser-like program that forms part of a search engine. Its task is to "surf" the web by following links from one page to the next and from one site to the next. It collects information from the sites it visits and that information is stored in the search engine's database.
is a program that visits web pages and collects specific information from each one, including the URL, keywords and text, which is stored in a searchable index.
Small piece of software (also known as a (ro)bot), used by some search engines to index Web sites. Spiders search the Web to find URLs that match the given query string.
A program that is utilized to scan for information. This is how your search results are produced when using Google or Yahoo.
A fast, automated program—such as a search engine, indexing program, or cataloging software—that requests Web pages much faster than human beings can. Other commonly used terms for spider are crawler and robot.
An Internet robot (used by a search engine) that explores the Web at large. Spiders collect Web page addresses based on content found at those pages.
An application that automatically searches the Web for sites and pages, and catalogs them. Also referred to as "crawlers," spiders vary in the types of information they collect and the way they organize data. See also Index, Web.
A program which looks at the code of web pages for specific information which it can collect and return to the server.
A program which follows links through websites to add or update a database (usually for a search engine, but spamdexers have spiders too). They look at HTML code and add information their search engines will use to determine the page's relevance to keywords and phrases. They are text-based, and often can't follow frames.
An automated program sent out by search engines to crawl along links it finds on the internet to create a comprehensive index of websites. All search engine optimization is designed to fit the complex rules used by the spiders to index sites.
Computer robot programs, referred to sometimes as "crawlers" or "knowledge-bots" or "knowbots" that are used by search engines to roam the World Wide Web via the Internet, visit sites and databases, and keep the search engine database of web pages up to date. They obtain new pages, update known pages, and delete obsolete ones. Their findings are then integrated into the "home" database. Most large search engines operate several robots all the time. Even so, the Web is so enormous that it can take six months for spiders to cover it, resulting in a certain degree of "out-of-datedness" (link rot) in all the search engines. For more information, read about search engines.
Spider is also known as a crawler or bot. Spiders are programs used by a search engine to periodically explore your web site.
An automatic system used to retrieve and automatically add pages to a search engine.
An automated software robot that continuously crawls hyperlinks and pages on the Internet and collects data that is returned to its database for indexing. This is how Search Engines function. The process of crawling the web, storing URLs and indexing keywords, links and text, is the act of Spidering.
A program that automatically fetches web pages and feeds them to search engines. It is so named because it 'crawls' over the web; it is also known as a WebCrawler.
A programme that search engines send out to read the meta and body html of a site. Also known as a robot.
Software which searches the Internet for keywords to create entries for a search engine index.
Also called crawlers or robots (bots), spiders search the Web for new Web sites or pages and other files. These are placed into a colossal database so that internet users can search for specific items or topics.
A program that indexes WebPages and follows links on WebPages to find other WebPages to index. This is the most common way a search engine updates and adds to its database of WebPages.
A program that crawls around the internet gathering information.
An automated program that searches the Internet for new Web documents and places their addresses and content-related information in a database, which can be accessed with a search engine. Spiders are generally considered to be a type of bot, or Internet robot.
Webcrawler. A search engine program which enables the examination of internet files by following hypertext links. When visiting a website, spiders often look for a robots.txt file in the root directory, which tells them which areas of the site are to be spidered.
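The robots.txt lookup mentioned above has direct support in Python's standard library. A small sketch, with an invented site and spider name:

    # Check robots.txt before spidering a page (illustrative URL and user-agent).
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()                                      # fetch and parse the file

    if rp.can_fetch("ExampleSpider", "https://example.com/private/page.html"):
        print("allowed to spider this page")
    else:
        print("this area is off limits to the spider")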
Software used by a search engine to automatically find and collect web pages or other information into a searchable database.
Also known as a Web spider, this class of robot software explores the World Wide Web by retrieving a document and following all the hyperlinks in it. Web sites tend to be so well linked that a spider can cover vast amounts of the Internet by starting from just a few sites. After following the links, spiders generate catalogs that can be accessed by search engines. Popular search sites like Alta Vista, Excite, and Lycos use this method.
Search engines use programs that "crawl" the Web, often referred to as Spiders. They index and organize websites across the Internet.
A program that automatically fetches web pages by visiting websites and reading their pages and other information in order to create entries for a search engine index. The major search engines on the web all have such a program, which is also known as a "crawler" or a "bot." Spiders are called spiders because they usually visit many sites in parallel at the same time, their "legs" spanning a large area of the "web."
Search engines use spiders to read web pages and seek other information in order to create entries for a search engine index.
Search engine indexing programs, also known as "bots" or "crawlers". Spiders constantly follow links and index new content to be used for easy retrieval by the search engines.
Software program used by search engines to crawl the Web, storing URLs and indexing the keywords and text of pages. Also called a crawler or robot.
An automated software program that gathers pages from the Internet.
A spider is the software which creates indexes for the search engine to use. In terms of the Norfolk Portal the spider indexes information across all websites of participating organisations.
A software robot sent by search engines to crawl pages on the Internet and collect data for the analysis of ranking.
A spider, also referred to as a robot, is a software program that scans the Internet.
Spiders or robots (bots for short) are names given to the indexing programmes sent out to crawl the web by search engines. Eventually they should find all the web pages stored on all the servers connected to the internet, but there is an obvious benefit in telling them that you have published a site (see submission) and giving them additional information in the meta tags.
Spiders are computer programs that are used by search engines to roam the World Wide Web via the Internet; they visit sites and keep the search engine database of web pages up to date. They find their way around your site by following your internal links, obtain new pages, update known pages, and delete obsolete ones.
Automated program that follows links to visit web sites on behalf of search engines to fill and update their database.
a small program which connects to many Internet sites to retrieve the contents of their pages. Spiders, also called bots or robots, are typically run by search engines for the purpose of indexing websites for the benefit of their users.
A computer program which "crawls" the web, searching for and indexing content from web pages. Related terms: Search Engine
A program used by Search Engines to crawl the web by following links from page to page. This is how most Search Engines index the Internet. Also referred to as a crawler or robot.
A program used by search engines to gather information for their databases of sites. When the spider visits the site, it crawls through every page and deposits the info in the engine's index.
A spider or Web crawler is a program that exhaustively surfs all the links from a page and returns them to another program for processing. For example, all of the Internet search engine sites rely on spider robots to discover new Web sites and add them to their index. Another typical use of a spider is by a publisher against his or her own site. The spider program makes sure that all of the links function correctly and reports dead links.
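The link-checking use described in that entry, a publisher running a spider against their own site, reduces to requesting each discovered link and reporting the ones that fail. A minimal sketch, with an invented link list:

    # Report links that fail to load (illustrative only).
    import urllib.request
    from urllib.error import HTTPError, URLError

    def check_links(links):
        dead = []
        for link in links:
            try:
                # HEAD keeps the check lightweight; some servers only answer GET
                req = urllib.request.Request(link, method="HEAD")
                urllib.request.urlopen(req, timeout=10)
            except (HTTPError, URLError, OSError):
                dead.append(link)
        return dead

    # print(check_links(["https://example.com/", "https://example.com/missing-page"]))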
A program that visits and downloads specific information from a webpage.
A spider, also known as a robot or crawler, is a program that travels the Web to index websites and put them into a search engine. Major search engines all rely on spiders to visit and catalog new sites. WebCrawler and Lycos are examples of spiders. Source: TechSoup.org
The software that scans documents and adds them to an index by following links. Spider is often used as a synonym for search engine.
An automated program (sometimes called a webcrawler) which crawls over the World Wide Web, gathering web pages for search engines. Large search engines employ many spiders. Spiders are a type of robot.
spider is a program that automatically fetches web pages. Spiders are used to feed pages to search engines. They are called spiders because they “crawl” over the web. Because most web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches it. Large search engines, like Alta Vista, have many spiders working in parallel.
A computer program used by search engines to index the contents of Web pages at each site as it travels from one site to another.
As a verb, this term refers to a search engine moving from one page to another on a single web site or multiple web sites.
A computer program that travels the Internet to locate Web documents and FTP resources. It indexes the documents in a database, which is then searched using a search engine (such as AltaVista or Excite). A spider can also be referred to as a robot or wanderer. Each search engine uses a spider to build its database.
A program that prowls the Internet, attempting to locate new, publicly accessible resources such as WWW documents, files available in public FTP archives, and gopher documents. Also called wanderers or robots (bots), spiders contribute their discoveries to a database, which Internet users can search by using an Internet accessible Search engine such as Lycos or Yahoo. Spiders are necessary because the rate at which people are creating new Internet documents greatly exceeds manual indexing capacity.
This is a software program that regularly searches the Internet indexing text from Web pages. Spiders allow search engines to locate any new content on the Web.
A program that searches the World Wide Web automatically by retrieving a document and all documents linked to it.
Automated program used by search engines to "crawl" the Internet and categorize websites.
A spider is a tool used by search engines to view and rank websites submitted to them. Spiders are electronic robots programmed to visit websites submitted to a search engine.
A program used to fetch files from the internet for the purpose of indexing in search engines. Also called a web crawler, robot or bot, a spider follows links on web pages to find additional pages to index.
Also known as robots. Automated programs used to collect data from websites. Search engines send spiders out to classify the content of web pages. Spammers often use spiders to harvest email addresses from web pages and send you junk mail.
A program that automatically fetches Web pages. Spiders are used to feed pages to search engines. It's called a spider because it crawls over the Web. Other terms for these programs are robot and Webcrawler. Because most Web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches it. Large search engines have many spiders working in parallel.
the software Search Engines use to find documents on the Internet by following hyperlinks. Also known as crawlers, robots or bots.
Program that automatically fetches web pages; spiders are used to feed pages to search engines. (Source: IAB)
A term for a search engine robot that scans web sites for indexing them into their search results.
An automated software tool that can visit hundreds of web sites per second and extract ('harvest') any information on those sites, such as phone numbers, mailing addresses, or the most commonly extracted item, email addresses. Spiders are often used by spammers. Other types of spiders (also known as robots) simply record all text of the page and store it in a database. These spiders are used by search engines to collect data, which the search engine then uses to rank each site for every possible search term, based on its unique algorithm.
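The harvesting pattern described above is nothing more exotic than fetching a page and matching anything shaped like an email address, which is why many sites obfuscate the addresses they publish. A bare-bones sketch with an invented URL:

    # Extract email-like strings from a fetched page (illustrative only).
    import re
    import urllib.request

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def harvest_emails(url):
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        return sorted(set(EMAIL_RE.findall(html)))

    # harvest_emails("https://example.com/contact")   # hypothetical contact page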
The metal web that divides the dartboard into sections.
A special program that search engines use to "crawl" a Web site to determine its content and how it should be properly categorized in search engine results.
(Robot) - A program which automatically gets information from sites. Spiders gather information for search engines, extract emails, check links, etc.
(or robot) A program that automatically fetches Webpages. Spiders are used to feed pages to Search Engines. It's called a spider because it crawls over the Web. Another term for these programs is webcrawler. See also: The Web Robots Pages
Synonyms: crawler, web crawler, robot, bot, 'bot. Related Terms: indexer, import/export, robots.txt. A spider is a special type of document indexer that follows links on a web site to eventually index the entire web site. It goes from web page to web page, via the HTML hyperlinks, until the entire site has been indexed.
A software program that crawls the internet, by following links and indexing web pages.
Refers to programs which visit sites to collect, record or index the pages or content. Generally, spiders are considered robots which move from link to link within the site. This movement is assumed to be comparable to a spider tracing a web.
Software that visits web sites and indexes the pages present in those sites. Search engines use spiders to build up their databases. Example: the spider for AltaVista is called Scooter.
A spider, or Web crawler, is a program that finds all the links from a page and returns them to another program for processing. All web sites submitted to an Internet search engine rely on the search engine's spider robots to discover new Web sites, pages, and changes and update their index accordingly.
For our purposes, a spider is the main program used by search engines to retrieve web pages to include in their database. Spiders have other uses as well: for example, spammers use them to harvest email addresses off the web, and some businesses use them to monitor their competitors' websites.
A process search engines use to investigate pages on the web and collect the information that needs to be put in their indices. Spiders -- also called robots -- are the underlying function of search engines.
A program that automatically searches the Web pages, and indexes them for searches. The most popular example is Google. (Yahoo is actually a directory, and not a Spider-type search engine.)
An automated software tool used by search engines to travel throughout the internet collecting information which it then returns to the search engine's indices.
A program that automatically fetches Web pages. Spiders are used to deliver pages to search engines, which then use an indexing program to process the pages.
A program utilized by search engines to roam the Internet, gather information and help index Websites for their database.
A term used to describe search engines such as Yahoo and Alta Vista, because of the way they cruise all over the world wide web to find information. It is a software program which combs the web for new sites and updated information on old ones, like a spider looking for a fly.
a program that searches the Web; called a spider because it crawls all over the Web
A program designed to continuously search the Internet for new public resources and web pages that can be compiled into catalogs, which can be accessed by search engines.
a program that traverses the web, following links from page to page. Also called a robot.
A program that travels through the internet seeking out and indexing new websites.
Spiders are also known as robots, or simply bots for short. Spiders crawl the Internet looking for information. Search engines like Google have a spider known as Googlebot which is used for adding web pages to their database.
Also called wanderers or robots (bots), spiders are programs that search the Internet for new, publicly accessible resources such as Web pages and files in public FTP archives. Spiders contribute their discoveries to a database, which Internet users can search by using search engines such as Lycos or WebCrawler.
A Spider (or Web Crawler) is a piece of software that scans the World Wide Web finding pages to add to the index of a search engine. See also: search engines.
An automated program which searches the internet.
Software used by a search engine to find and retrieve web pages to include in its index.
Google finds pages on the World Wide Web and records their details in its index by sending out ‘spiders' or ‘robots'. These spiders make their way from page to page and site to site by following text links.
Software used by a search engine to crawl all over the web and catalogue it to enable swift response to search requests.
Also called a bot (or robot). Spiders are software programs that scan the web. They vary in purpose from indexing web pages for search engines to harvesting e-mail addresses for spammers.
Also known as a "robot" which is an automatic software program that serves a search engine by exploring the Web and collects Web page addresses and stores them in a huge database.
A software program that "crawls" the Web, searching and indexing Web pages to create a database that can be easily searched by a search engine.
What the search engines send out to index web pages. It collects web page addresses and page contents, and follows links from them to other addresses to collect still more web information. Also known as a worm, robot or crawler.
A computer program that travels the Internet to locate such resources as Web documents, FTP archives, and Gopher documents. It indexes the documents in a database, which is then searched using a search engine. A spider can also be referred to as a robot or wanderer. Each search engine uses a spider to build its database.
or robot, scans the web and indexes webpages for search engines.
bot that automatically browses the Web by following the hyperlinks. Spiders (also called crawlers) are used by search engines, and also by attackers looking for email addresses.
The querying scanner that the search engine uses to crawl your site. For example, FAST, which feeds Lycos and All the Web, or Inktomi's Slurp, which feeds MSN among others.
A program designed to search the Internet. See also Robots.
It's the element of a search engine that spends its time crawling around on the web (get it?). It follows link to link to link and stores pages to be indexed.
A spider is part of a search engine. It's a bit like electricity because, although you can't see it, it does exist. At least it does in cyberspace! Spiders are software programs which roam around the internet gathering information. Imagine the Internet is a city filled with homes and buildings (websites). The spider could be a person going from building to building making a report on what's in each one. Most search engines send out several spiders that work as a team: They visit websites, find out what they're about, look for any changes and follow links that they find on those websites to others. Web spiders are fast critters too - they can visit several million webpages in a single week! Once the spider has found a webpage - it relays the info back to HQ at the search engine. Another piece of software processes what the spider found, and decides what to do with it. Whenever a spider visits your website it leaves a special mark which appears in your 'referrer logs'.
Software used by search engines to identify web pages.
A program that automatically follows hyperlinks. Search engines use spiders to index Web pages; Web masters often use spiders to find broken links.
Also known as a crawler, this is the automated search engine software that constantly browses the internet to collect what it finds and deliver it to an indexer, which then sorts and ranks everything according to the controlling algorithms for the index or database of a search engine. Google's spider is known as the Googlebot. (See Algorithm and Index).
An automated robot program that searches the Internet, usually retrieving information from web pages for storage in a database.
A process search engines use to investigate new pages on a web site and collect the information that needs to be put in their indices.
A software program that searches websites and reports the data back to a central database.
Also known as a bot, robot, or crawler. Spiders are programs used by a search engine to periodically explore your web site, download the HTML content (not including graphics) of your pages, strip out whatever they consider superfluous or redundant, and store the rest in a database (i.e. the index). See also: Agent Name, Googlebot, Robots.txt, Spider Trap, Stop Character
A type of robot program that autonomously travels the Web from server to server and indexes the contents of publicly-accessible files found on the servers. Also called Web walkers, crawlers, and worms. Spiders are used by search engines for creating searchable indices of Net and Web files.
Automated web browsing software which gathers information about visited pages in its database. Also known as a crawler.
A program that automatically finds web pages and sends the links to search engines.
Search engine program that scans several thousand web pages per day, and transfers information about these web pages to the search engine index.
A component of a search engine that roams the Web, storing the URLs and indexing the keywords and text of each page encountered. Also referred to as a robot or crawler.
A spider is an automated program that "crawls" the Web, generally for the purpose of indexing web pages for use by search engines. Because most web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches it. Large search engines have many spiders working in parallel.
Spider, written as SPIDER with an inverted 'R', is a monthly magazine circulated in Pakistan by the DAWN group of newspapers, focusing on issues related to software, hardware and internet technologies. The magazine sports a tagline declaring it to be "Pakistan's Internet Magazine", although much of what is discussed in its pages ranges beyond the internet alone. The symbol on its cover is a mouse with eight limbs and a blinking red sensor as an eye, set over a web engulfed in a yellow circle, usually found at the top-right corner of the publication.