Yellowstone (supercomputer)

Yellowstone[1] was the inaugural supercomputer at the NCAR-Wyoming Supercomputing Center[2] (NWSC) in Cheyenne, Wyoming. It was installed, tested, and readied for production in the summer of 2012.[3] The Yellowstone cluster was decommissioned on December 31, 2017,[4] and replaced by its successor, Cheyenne.[5]

Yellowstone in 2014

Yellowstone was a highly capable petascale system designed for conducting breakthrough scientific research in the interdisciplinary field of Earth system science. Scientists used the computer and its associated resources to model and analyze complex processes in the atmosphere, oceans, ice caps, and throughout the Earth system, accelerating scientific research in climate change, severe weather, geomagnetic storms, carbon sequestration, aviation safety, wildfires, and many other topics.[6][7] Funded by the National Science Foundation and the State and University of Wyoming, and operated by the National Center for Atmospheric Research, Yellowstone's purpose was to improve the predictive power of Earth system science simulation to benefit decision-making and planning for society.[8]

System description

Yellowstone was a 1.5-petaflops IBM iDataPlex cluster with 4,536 dual-socket compute nodes containing 9,072 2.6 GHz Intel Xeon E5-2670 8-core processors (72,576 cores in total) and an aggregate memory of 145 terabytes.[9] The nodes were interconnected in a full fat-tree topology via a Mellanox FDR InfiniBand switching fabric.[9] System software[10] included Red Hat Enterprise Linux for Scientific Computing,[11] the LSF batch subsystem and resource manager,[12] and the IBM General Parallel File System (GPFS).[9]
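The quoted 1.5-petaflops figure is consistent with the system's theoretical peak rather than a sustained benchmark result. A back-of-the-envelope sketch of that arithmetic, assuming the standard value of 8 double-precision floating-point operations per core per clock cycle for AVX-capable Sandy Bridge Xeons such as the E5-2670 (variable names are illustrative only):

```python
# Rough check of Yellowstone's peak performance and per-node memory,
# using only the figures quoted above. The 8 flops/cycle/core value is
# an assumption based on the E5-2670's AVX double-precision capability.

nodes = 4536                    # dual-socket IBM iDataPlex compute nodes
cores_per_node = 2 * 8          # two 8-core Xeon E5-2670 sockets per node
clock_hz = 2.6e9                # 2.6 GHz clock rate
flops_per_core_per_cycle = 8    # assumed AVX double-precision rate

cores = nodes * cores_per_node
peak_pflops = cores * clock_hz * flops_per_core_per_cycle / 1e15

print(f"{cores:,} cores")                                    # 72,576 cores
print(f"{peak_pflops:.2f} petaflops peak")                   # about 1.51 petaflops
print(f"{145e12 / nodes / 1e9:.0f} GB of memory per node")   # roughly 32 GB
```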

Yellowstone was integrated with many other high-performance computing resources at the NWSC. The central feature of this architecture was a shared file system that streamlined science workflows by providing computation, analysis, and visualization work spaces common to all resources. This common data storage pool, called the GLobally Accessible Data Environment[13] (GLADE), provided 36.4 petabytes of online disk capacity shared by the supercomputer, two data analysis and visualization (DAV) cluster computers (Geyser and Caldera),[9] data servers for both local and remote users, and a data archive with the capacity to store 320 petabytes of research data. High-speed networks connected this Yellowstone environment to science gateways,[14] data transfer services, remote visualization resources, Extreme Science and Engineering Discovery Environment (XSEDE) sites, and partner sites around the world.

This integration of computing resources, file systems, data storage, and broadband networks allowed scientists to simulate future geophysical scenarios at high resolution, then analyze and visualize them on one computing complex.[15] This improved scientific productivity[6] by avoiding the delays associated with moving large quantities of data between separate systems, and it reduced the volume of data that had to be transferred to researchers at their home institutions. The Yellowstone environment at NWSC made more than 600 million processor-hours available each year to researchers in the Earth system sciences.[16]
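Because the same GLADE file system was mounted on the supercomputer and on the Geyser and Caldera analysis clusters, an analysis or visualization job could open model output in place rather than copying it first. A minimal illustrative sketch of that pattern (the /glade path, file name, and variable name are hypothetical placeholders, and the netCDF4-python package is assumed to be available):

```python
# Illustrative only: read simulation output directly from the shared GLADE
# file system on an analysis node, with no data transfer between clusters.
from netCDF4 import Dataset

# Hypothetical output file written earlier by a model run on Yellowstone.
path = "/glade/scratch/someuser/model_run/surface_temperature.nc"

with Dataset(path) as nc:
    tas = nc.variables["tas"][:]   # e.g. near-surface air temperature
    print("domain-mean temperature:", tas.mean())
```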

References

  1. "Yellowstone", NCAR Computational and Information Systems Laboratory (CISL) website: Resources. Retrieved 2012-06-12.
  2. "NCAR-Wyoming Supercomputing Center Fact Sheet", University Corporation for Atmospheric Research (UCAR) website, Retrieved 2012-06-12.
  3. NCAR Advances Weather Research Capabilities With IBM Supercomputing Technology, IBM News Release, 08 Nov 2011.
  4. "Yellowstone to be decommissioned December 31 | Computational & Information Systems Laboratory". dailyb.cisl.ucar.edu. Retrieved 2018-01-19.
  5. Scoles, Sarah (31 March 2017). "Why You Should Put Your Supercomputer in Wyoming". Wired.com. Wired. Retrieved 6 October 2018.
  6. NCAR Selects IBM for Key Components of New Supercomputing Center, NCAR/UCAR AtmosNews, 7 November 2011.
  7. Yellowstone, NWSC science impact, NCAR Computational and Information Systems Laboratory (CISL) website: Resources. Retrieved 2012-06-12.
  8. The NCAR-Wyoming Supercomputing Center Science Justification, Proposal to The National Science Foundation by The National Center for Atmospheric Research and The University Corporation for Atmospheric Research in partnership with The University and State of Wyoming, 4 September 2009.
  9. System overview, Yellowstone: High-performance computing resource, NCAR Computational and Information Systems Laboratory (CISL) website: Resources. Retrieved 2012-06-12.
  10. Yellowstone Software, NCAR Computational and Information Systems Laboratory (CISL) website: Resources. Retrieved 2012-06-12.
  11. Red Hat Enterprise Linux For Scientific Computing, Red Hat Products website, Retrieved 2012-06-12.
  12. [Note that IBM has acquired Platform Computing, Inc., developers of LSF.]
  13. NCAR’s Globally Accessible Data Environment, FY2011 CISL Annual Report. Note: This October 2011 report describes GLADE at NCAR’s Mesa Lab Computing Facility in Boulder, Colorado. The design of GLADE at NWSC in Cheyenne, Wyoming is identical at this level of description.
  14. Science gateway services, FY2011 CISL Annual Report.
  15. NCAR to Install 1.6 Petaflop IBM Supercomputer, HPCwire, November 07, 2011.
  16. NCAR's next supercomputer: Yellowstone, News@Unidata, 22 November 2011.