Many modern web sites need fast access to an amount of information so large that it cannot be efficiently stored on a single computer. A good way to deal with this problem is to "shard" that information; that is, to store it across multiple computers instead of on just one. Sharding strategies often involve two techniques: partitioning and replication. With partitioning, the data is divided into small chunks and stored across many computers. Each chunk is small enough that the computer holding it can efficiently manipulate and query the data. With replication, multiple copies of the data are stored across several machines. Since each copy runs on its own machine and can respond to queries, the system can handle a large volume of queries for the same data simply by adding more copies. Replication also makes the system resilient to failure: if any one copy is broken or corrupt, the system can use another copy for the same task. The problem is that sharding is difficult. Devising smart partitioning schemes for particular kinds of data requires a lot of thought, and it is even more difficult to ensure that all copies of the data remain consistent despite unreliable communication and occasional computer failures.
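The two techniques above can be sketched in a few lines. The following is a minimal illustration, not code from this project: the `Cluster` class, its node names, and the consecutive-node replica placement are all hypothetical assumptions chosen for clarity. It shows hash-based partitioning (a key deterministically maps to one node) combined with replication (the same key is also stored on additional nodes, so a single failure leaves a live copy).

```python
import hashlib

class Cluster:
    """Hypothetical sketch of hash partitioning plus replication."""

    def __init__(self, nodes, replication_factor=2):
        self.nodes = nodes                  # list of node names
        self.replicas = replication_factor  # copies kept per key

    def _partition(self, key):
        # Hash the key to deterministically pick a starting node,
        # spreading keys roughly evenly across the cluster.
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % len(self.nodes)

    def nodes_for(self, key):
        # Place the key's data on `replicas` consecutive nodes so
        # that if one node fails, another copy can serve the query.
        start = self._partition(key)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(self.replicas)]

cluster = Cluster(["node-a", "node-b", "node-c", "node-d"],
                  replication_factor=2)
print(cluster.nodes_for("user:42"))  # two distinct nodes hold this key
```

A read for `"user:42"` can go to either returned node, and a write must reach both; real systems differ mainly in how they handle the consistency problem the paragraph above mentions, which this sketch deliberately omits.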
Documentation: see README file.
Released on 28 May 2010.
| License | Verified by | Verified on | Notes |
|---|---|---|---|
| Apache 2.0 | Kelly Hopkins | 3 August 2010 | |
Leaders and contributors
Resources and communication
This entry (in part or in whole) was last reviewed on 3 August 2010.