Volunteer computing

Volunteer computing is a type of distributed computing in which people donate their computers' unused resources to a research-oriented project.[1]

The practice, which dates back to the mid-1990s, can potentially make substantial processing power available to researchers at minimal cost. Typically, a program running on a volunteer's computer periodically contacts a research application to request jobs and report results. A middleware system usually serves as an intermediary.

History

The first volunteer computing project was the Great Internet Mersenne Prime Search, which was started in January 1996.[2] It was followed in 1997 by distributed.net. In 1997 and 1998, several academic research projects developed Java-based systems for volunteer computing; examples include Bayanihan,[3] Popcorn,[4] SuperWeb,[5] and Charlotte.[6]

The term volunteer computing was coined by Luis F. G. Sarmenta, the developer of Bayanihan. Volunteer computing is also appealing to global efforts in social responsibility, or corporate social responsibility, as reported in the Harvard Business Review[7] and by the Responsible IT forum.[8]

SETI@home was launched in 1999 and Folding@home in 2000. Both projects received considerable media coverage, and each attracted several hundred thousand volunteers.

Between 1998 and 2002, several companies were formed with business models involving volunteer computing. Examples include Popular Power, Porivo, Entropia, and United Devices.

In 2002, the Berkeley Open Infrastructure for Network Computing (BOINC) project was founded at the Space Sciences Laboratory of the University of California, Berkeley, funded by the National Science Foundation. BOINC provides a complete middleware system for volunteer computing, including a client, a client GUI, an application runtime system, server software, and software implementing a project web site. The first project based on BOINC was Predictor@home, based at the Scripps Research Institute, which began operation in 2004. Soon thereafter, SETI@home and ClimatePrediction.net began using BOINC. A number of new BOINC-based projects were created over the next few years, including Rosetta@home, Einstein@Home, and AQUA@home. In 2007, IBM World Community Grid switched from the United Devices platform to BOINC.[9]

Middleware

The client software of the early volunteer computing projects consisted of a single program that combined the scientific computation and the distributed computing infrastructure. This monolithic architecture was inflexible. For example, it was difficult to deploy new application versions.

More recently, volunteer computing has moved to middleware systems that provide a distributed computing infrastructure independent of the scientific computation. The most widely used such system is BOINC, described above.

Most of these systems have the same basic structure: a client program runs on the volunteer's computer. It periodically contacts project-operated servers over the Internet, requesting jobs and reporting the results of completed jobs. This "pull" model is necessary because many volunteer computers are behind firewalls that do not allow incoming connections. The system keeps track of each user's "credit", a numerical measure of how much work that user's computers have done for the project.
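
The pull model can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical client loop; the server URL, endpoint names, job format, and polling interval are assumptions made for illustration and are not the protocol of any real middleware such as BOINC.

  # Minimal sketch of a pull-model volunteer computing client (hypothetical protocol).
  # All connections are outgoing, so the client works from behind a firewall or NAT.
  import json
  import time
  import urllib.request

  SERVER = "https://example.org/project"   # hypothetical project server
  POLL_INTERVAL = 600                      # seconds between contacts with the server

  def request_job():
      """Ask the server for a job; returns e.g. {"id": 42, "input": [...]} or None."""
      with urllib.request.urlopen(SERVER + "/request_job") as resp:
          return json.load(resp)

  def report_result(job_id, result):
      """Report a completed job so the server can record the result and grant credit."""
      body = json.dumps({"id": job_id, "result": result}).encode()
      req = urllib.request.Request(SERVER + "/report_result", data=body,
                                   headers={"Content-Type": "application/json"})
      urllib.request.urlopen(req)

  def compute(job):
      """Placeholder for the project's scientific computation."""
      return sum(job["input"])

  if __name__ == "__main__":
      while True:
          job = request_job()              # "pull": the client initiates every contact
          if job:
              report_result(job["id"], compute(job))
          time.sleep(POLL_INTERVAL)

In a real system the job request would also describe the host's platform and available resources, so that the server can send work suited to that computer.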

Volunteer computing systems must deal with several issues involving volunteered computers: their heterogeneity, their churn (the tendency of individual computers to join and leave the network over time), their sporadic availability, and the need to not interfere with their performance during regular use.

In addition, volunteer computing systems must deal with problems related to correctness:

  • Volunteers are unaccountable and essentially anonymous.
  • Some volunteer computers (especially those that are overclocked) occasionally malfunction and return incorrect results.
  • Some volunteers intentionally return incorrect results or claim excessive credit for results.

One common approach to these problems is replicated computing, in which each job is performed on at least two computers. The results (and the corresponding credit) are accepted only if they agree sufficiently.
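
As a concrete illustration, the following Python sketch shows how a server might validate replicated results. The quorum size, the numerical tolerance, and the comparison function are hypothetical; real projects define their own, domain-specific agreement tests.

  # Sketch of server-side validation for replicated computing (illustrative values).
  QUORUM = 2          # number of agreeing results required before acceptance
  TOLERANCE = 1e-6    # how close two numerical results must be to "agree sufficiently"

  def agree(a, b, tolerance=TOLERANCE):
      """Domain-specific in practice; here, plain numerical closeness."""
      return abs(a - b) <= tolerance

  def validate(results):
      """results: list of (volunteer, value) pairs for one job.
      Returns (canonical value, volunteers to credit), or None if no quorum yet."""
      for volunteer, value in results:
          agreeing = [v for v, x in results if agree(x, value)]
          if len(agreeing) >= QUORUM:
              return value, agreeing        # accept the result and grant credit
      return None                           # issue another replica of the job

  # Two volunteers agree; a third (perhaps overclocked) machine returned a bad value.
  print(validate([("alice", 3.1415926), ("bob", 3.1415927), ("mallory", 2.0)]))
  # -> (3.1415926, ['alice', 'bob'])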

Drawbacks for participants

  • Increased power consumption: A CPU generally uses more electricity when it is active compared to when it is idle. Additionally, the desire to participate may cause the volunteer to leave the PC on overnight or disable power-saving features like suspend. Furthermore, if the computer cannot cool itself adequately, the added load on the volunteer's CPU can cause it to overheat.
  • Decreased performance of the PC: If the volunteer computing application runs while the computer is in use, it may impact performance of the PC. This is due to increased usage of the CPU, CPU cache, local storage, and network connection. If RAM is a limitation, increased disk cache misses and/or increased paging can result. Volunteer computing applications typically execute at a lower CPU scheduling priority, which helps to alleviate CPU contention.[10]

These effects may or may not be noticeable, and even if they are noticeable, the volunteer might choose to continue participating. The increased power consumption can also be remedied to some extent by setting an option, available in some client software, that limits the percentage of processor time used by the client.
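
Both mitigations can be sketched briefly. The following Python fragment assumes a Unix-like system (os.nice is not available on Windows), and the 25% throttle fraction is illustrative, standing in for the user-configurable setting that some clients expose.

  # Sketch: run at low scheduling priority and keep the CPU busy only part of the time.
  import os
  import time

  def lower_priority():
      """Raise the process's niceness so interactive programs win any CPU contention."""
      os.nice(19)                            # 19 is the lowest priority on Unix-like systems

  def throttled_loop(step, cpu_fraction=0.25, period=1.0):
      """Call step() repeatedly, keeping the CPU busy only cpu_fraction of each period."""
      while True:                            # runs until the client is stopped
          start = time.monotonic()
          while time.monotonic() - start < period * cpu_fraction:
              step()                         # a small slice of the scientific computation
          time.sleep(period * (1.0 - cpu_fraction))   # stay idle for the rest of the period

Throttling reduces average power draw and heat at the cost of throughput, which is why it is usually left as a per-volunteer setting.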

Benefits for researchers

  • Volunteer computing can provide researchers with computing power that is not achievable in any other way. Approximately 10 petaflops of computing power are available from volunteer computing networks (see the illustrative calculation after this list).
  • Volunteer computing is often cheaper than other forms of distributed computing.[11]
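
For a rough sense of where a figure of that order comes from (an illustrative calculation, not a measured value): if one million volunteer computers were active at a given moment and each contributed an average of 10 gigaflops, the aggregate throughput would be 10^6 × 10 gigaflops = 10^7 gigaflops, i.e. 10 petaflops.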

Importance

Although volunteer computing projects face issues such as a lack of accountability and of trust between participants and researchers, volunteer computing is important, especially to projects with limited funding.

  • Since there are more than one billion PCs in the world, volunteer computing can supply substantial computing power, on almost any topic, to researchers who could not otherwise obtain it, including academic (university-based) and other scientific projects. Moreover, consumer products such as PCs and game consoles improve faster than specialized hardware, so the computing power potentially available to volunteer projects keeps growing.
  • Supercomputers are extremely expensive and are available only to applications that can afford them. Volunteer computing power, by contrast, cannot simply be bought; it arises from public support. A research project with limited funding can therefore obtain substantial computing power by attracting public attention.
  • By volunteering computing power to scientific research, citizens are encouraged to take an interest in science, and they gain a voice in the direction of scientific research, and ultimately of future science, by choosing which projects to support.[1]

References

  1. "VolunteerComputing – BOINC". boinc.Berkeley.edu. Retrieved November 18, 2017.
  2. "GIMPS History". Mersenne.org. Great Internet Mersenne Primes Search. Retrieved December 29, 2013.
  3. Sarmenta, L.F.G. (1998). "Bayanihan: Web-Based Volunteer Computing Using Java". Worldwide Computing and Its Applications — WWCA'98: Second International Conference, Tsukuba, Japan, March 4–5, 1998, Proceedings. Lecture Notes in Computer Science. 1368. Springer Berlin Heidelberg. pp. 444–461. CiteSeerX 10.1.1.37.6643. doi:10.1007/3-540-64216-1_67. ISBN 978-3-540-64216-9 (print), ISBN 978-3-540-69704-6 (online).
  4. O. Regev; Noam Nisan (October 28, 1998). "The POPCORN market—an online market for computational resources". Proceedings of the First International Conference on Information and Computation Economies (Charleston, South Carolina). New York, NY: ACM Press. pp. 148–157. doi:10.1145/288994.289027. ISBN 1-58113-076-7.
  5. Alexandrov, A.D.; Ibel, M.; Schauser, K.E.; Scheiman, K.E. (1996). "SuperWeb: Research issues in Java-Based Global Computing". Proceedings of the Workshop on Java for High performance Scientific and Engineering Computing Simulation and Modelling. New York: Syracuse University.
  6. Baratloo, A.; Karaul, M.; Kedem, Z.; Wyckoff, P. (September 1996). "Charlotte: Metacomputing on the Web". Proceedings of the 9th International Conference on Parallel and Distributed Computing Systems.
  7. Michael Porter; Mark Kramer. "The Link Between Competitive Advantage and Corporate Social Responsibility" (PDF). Harvard Business Review. Archived (PDF) from the original on July 14, 2007. Retrieved August 25, 2007.
  8. "ResponsI.TK". Responsible IT forum.
  9. "BOINC Migration Announcement". Aug 17, 2007. Retrieved December 29, 2013.
  10. Geoff Gasior (November 11, 2002). "Measuring Folding@Home's performance impact". Retrieved December 29, 2013.
  11. http://mescal.imag.fr/membres/derrick.kondo/pubs/kondo_hcw09.pdf