Collective Tuning Initiative

The Collective Tuning Initiative is a community-driven project started by Grigori Fursin to develop free, collaborative, open-source research tools with a unified API for code and architecture characterization, optimization and co-design. It enables the community to share benchmarks, data sets and optimization cases in an open optimization repository through unified web services, which can then be used to predict better optimizations or architecture designs, provided enough information has been collected in the repository from multiple users.[1][2] Using common research-and-development tools should improve the quality and reproducibility of research into code and architecture design and optimization, and encourage innovation in this area. This approach helped establish Artifact Evaluation at several ACM-sponsored conferences to encourage the sharing of artifacts and the validation of experimental results from accepted papers.[3]

The tools and repository include:

  • Collective Optimization Database: Open repository for sharing optimization cases from the community; it provides web services and plugins to analyze collective optimization data, predict program optimizations based on statistical and machine-learning techniques, and improve the quality and reproducibility of compiler (and architecture) research.
  • Online machine learning-based program optimization predictor: Suggests optimizations that improve metrics such as execution time, code size and compilation time, based on similarities between programs (program features).
  • Continuous Collective Compilation Framework: Automates and distributes iterative feedback-directed exploration of large optimization spaces across multiple users; a minimal sketch of such exploration follows this list.
  • Interactive Compilation Interface: Opens up production compilers and turns them into stable interactive research tool sets through an event-driven plugin system, avoiding the development of new research compilers from scratch.
  • Collective benchmark with multiple data sets: Enables realistic benchmarking and research on iterative compilation and run-time adaptation.
  • Universal Adaptation Framework: Enables run-time adaptation and optimization of statically compiled programs for heterogeneous multi-core architectures.
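
The feedback-directed exploration automated by the Continuous Collective Compilation Framework can be illustrated with a minimal sketch: randomly sample combinations of compiler flags, compile and time the program, and keep the fastest combination found. The flag list, benchmark source file and gcc invocation below are illustrative assumptions, not part of the cTuning tool set itself:

    # Minimal sketch of iterative feedback-directed flag exploration.
    # "benchmark.c" and CANDIDATE_FLAGS are hypothetical placeholders.
    import random
    import subprocess
    import time

    CANDIDATE_FLAGS = [
        "-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer",
        "-finline-functions", "-fno-strict-aliasing",
    ]

    def measure(flags, source="benchmark.c", runs=3):
        """Compile `source` with the given flags and return the best wall-clock time."""
        subprocess.run(["gcc", "-O2", *flags, source, "-o", "a.out"], check=True)
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(["./a.out"], check=True)
            best = min(best, time.perf_counter() - start)
        return best

    def explore(iterations=20):
        """Randomly sample flag combinations and keep the fastest one found."""
        best_flags, best_time = [], measure([])
        for _ in range(iterations):
            flags = [f for f in CANDIDATE_FLAGS if random.random() < 0.5]
            t = measure(flags)
            if t < best_time:
                best_flags, best_time = flags, t
        return best_flags, best_time

    if __name__ == "__main__":
        flags, t = explore()
        print(f"best flags: {flags} ({t:.3f}s)")

In the cTuning tools this kind of exploration is distributed, so that many users each explore a part of the optimization space and report their results back to the shared repository.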

A new version of these open-source tools, the Collective Knowledge framework, was released in 2015 to support collaborative and reproducible experimentation.

Collective Optimization Database

The Collective Optimization Database is an open repository that enables sharing of benchmarks, data sets and optimization cases from the community. It provides web services and plugins to analyze optimization data and to predict program transformations or better hardware designs for multi-objective optimization based on statistical and machine-learning techniques, provided enough information has been collected in the repository from multiple users.[4]
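
As a concrete illustration, an optimization case shared through such a repository might be represented as a record like the following sketch; the field names and values are hypothetical, not the actual Collective Optimization Database schema:

    # Hypothetical sketch of a shared optimization case as a Python record.
    optimization_case = {
        "program": "susan_corners",          # benchmark identifier
        "dataset": "image_0012.pgm",         # input data set used for measurement
        "platform": "x86_64, 2 cores",       # hardware description
        "compiler": "gcc 4.4.0",
        "flags": ["-O3", "-funroll-loops"],  # optimization choice being reported
        "features": [120.0, 14.0, 3.0],      # static program features
        "metrics": {                         # multi-objective measurements
            "execution_time_s": 0.92,
            "code_size_bytes": 48212,
            "compilation_time_s": 1.4,
        },
    }

Collecting many such records from multiple users is what makes statistical and machine-learning analysis over the repository possible.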

Functionality

The Collective Optimization Database is also intended to improve the quality and reproducibility of research on code and architecture design, characterization and optimization. It includes an online machine learning-based program optimization predictor[5][6] that can suggest profitable optimizations to improve program execution time, code size or compilation time, based on similarities between programs. The Collective Optimization Database is an important part of the Collective Tuning Initiative,[1][2] which develops open-source R&D tools for collaborative and reproducible computing-systems research.
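
A minimal sketch of how such a similarity-based predictor can work: store the feature vectors of previously optimized programs together with their best-found flags, and for a new program return the flags of its nearest neighbours in feature space. The feature vectors, repository records and distance measure below are illustrative assumptions, not the actual predictor:

    # Minimal sketch of similarity-based optimization prediction.
    # REPOSITORY pairs hypothetical program feature vectors (e.g. counts of
    # instructions, branches, loops) with the best-found optimization flags.
    import math

    REPOSITORY = [
        ((120.0, 14.0, 3.0), ["-O3", "-funroll-loops"]),
        ((40.0, 2.0, 1.0), ["-Os"]),
        ((300.0, 45.0, 9.0), ["-O3", "-ftree-vectorize"]),
    ]

    def euclidean(a, b):
        """Euclidean distance between two feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def predict_flags(features, k=1):
        """Return the flags of the k most similar programs in the repository."""
        ranked = sorted(REPOSITORY, key=lambda rec: euclidean(rec[0], features))
        return [flags for _, flags in ranked[:k]]

    print(predict_flags((110.0, 12.0, 2.0)))  # -> [['-O3', '-funroll-loops']]

The quality of such predictions grows with the repository: the more programs and optimization cases the community shares, the more likely a new program is to have close neighbours with known good optimizations.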


References

  1. Grigori Fursin. Collective Tuning Initiative: automating and accelerating development and optimization of computing systems. Proceedings of the GCC Summit '09, Montreal, Canada, June 2009
  2. Rethinking code optimization for mobile and multicore. InfoWorld, July 2009
  3. Artifact Evaluation for computer systems' conferences
  4. Grigori Fursin and Olivier Temam. Collective optimization. Proceedings of the International Conference on High Performance Embedded Architectures & Compilers (HiPEAC 2009), Paphos, Cyprus, January 2009
  5. Original compiler optimization prediction service at cTuning.org: cTuning.org/cpredict
  6. Collective Knowledge based portal for collaborative benchmarking and optimization of emerging workloads at cknowledge.io