Larbin is an HTTP Web crawler that can fetch more than 5 million pages a day on a standard PC (Pentium II 300, 128 MB SDRAM, and a 10 Mbit Ethernet card, with a good network connection). Larbin uses standard libraries plus adns. The program is multithreaded, but for efficiency it prefers select over a large number of threads. Its advantages over wget or ht://dig are that it is much faster (because it opens many connections at a time) and very easy to customize. Common uses include: a crawler for a standard search engine, a crawler for a specialized search engine (XML, images, MP3s, etc.), and gathering statistics about servers or page contents.
Documentation: User guide included
Released on 16 July 2002
| License | Verified by | Verified on | Notes |
| GPLv2 | Janet Casey | 4 April 2001 | |
Leaders and contributors
Resources and communication
This entry (in part or in whole) was last reviewed on 24 June 2008.