From Free Software Directory


Web crawler

Larbin is an HTTP Web crawler that can fetch more than 5 million pages a day on a standard PC (Pentium II 300, 128 MB SDRAM, and a 10 Mbit Ethernet card, with a good network connection). Larbin uses standard libraries, plus adns. The program is multithreaded but, for efficiency, prefers select over a large number of threads. The advantage of Larbin over wget or ht://dig is that it is much faster (because it opens many connections at a time) and very easy to customize. Common uses include: a crawler for a standard search engine, a crawler for a specialized search engine (XML, images, MP3s...), and gathering statistics about servers or page contents.






Verified by

Janet Casey

Verified on

4 April 2001

Leaders and contributors

Sebastien Ailleret (Maintainer)

Resources and communication

Audience: Developer
Resource type: Bug Tracking
URI:

Software prerequisites


Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the page “GNU Free Documentation License”.

The copyright and license notices on this page apply only to the text on this page. Any software, copyright licenses, or other similar notices described in this text have their own copyright notices and licenses, which can usually be found in the distribution or license text itself.