The cTuning foundation is a non-profit research and development organization. We design an open-source knowledge-management framework and a web-based repository of knowledge to enable collaborative and reproducible experimentation in computer engineering, while exposing the collected data to powerful predictive analytics (statistical analysis, machine learning, detection of missing features, improvement of models) and collective intelligence. Our technology has already helped several academic and industrial partners unify, systematize, standardize and accelerate their previously ad-hoc, complex, time-consuming and error-prone processes of benchmarking, auto-tuning (optimization) and co-design of computer systems (software and hardware), making them faster, smaller, cheaper, more energy-efficient and more reliable. Better computer systems, in turn, help boost innovation in science and technology!
The cTuning foundation primarily focuses on developing, supporting and extending the open-source, published cTuning technology from the MILEPOST project to enable faster, smaller, cheaper, more energy-efficient and more reliable self-tuning computer systems. This technology, considered by IBM to be the first of its kind in the world, turns the complex, ad-hoc, error-prone, time-consuming and costly process of benchmarking, optimization and co-design of computer systems across all software and hardware layers (applications, compilers, run-time libraries, heterogeneous multi-core architectures) into a unified big-data problem. We then systematize and considerably speed up this process (sometimes by several orders of magnitude) using predictive analytics (statistical analysis, machine learning, data mining, feature selection), a public repository of knowledge, empirical auto-tuning, run-time adaptation, crowdsourcing and collective intelligence. As a consequence, the cTuning approach dramatically reduces development costs and time to market for new multi-core devices, thus boosting innovation in science and technology. As a side effect, our approach also enables a new reproducible research and publication model in computer engineering, where articles, experimental results and all related artifacts are continuously shared, discussed, validated and improved by the community. See our manifesto and history for more details.
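To illustrate the empirical auto-tuning step mentioned above, the minimal sketch below randomly samples combinations of compiler flags, measures the resulting runtimes, and keeps the recorded measurements that, in a cTuning-style workflow, would be shared in a common repository and mined with predictive models. This is a conceptual sketch only, not the actual cTuning/Collective Mind API: the benchmark file, the flag list and all counts are hypothetical assumptions.

```python
# Conceptual sketch of empirical compiler auto-tuning (random search).
# NOT the cTuning/Collective Mind API; "benchmark.c", FLAGS and all
# parameters below are illustrative assumptions.
import random
import statistics
import subprocess
import time

# Hypothetical search space: a few GCC optimization flags to toggle.
FLAGS = ["-O3", "-funroll-loops", "-ffast-math", "-ftree-vectorize"]


def measure(flag_subset, repetitions=5):
    """Compile the benchmark with the chosen flags and return the median runtime."""
    subprocess.run(["gcc", *flag_subset, "-o", "benchmark", "benchmark.c"],
                   check=True)
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        subprocess.run(["./benchmark"], check=True)
        times.append(time.perf_counter() - start)
    return statistics.median(times)


def random_search(iterations=20):
    """Empirically auto-tune by randomly sampling the flag space."""
    best_flags, best_time = None, float("inf")
    history = []  # (flags, runtime) pairs: the raw data later mined by ML models
    for _ in range(iterations):
        subset = [f for f in FLAGS if random.random() < 0.5]
        runtime = measure(subset)
        history.append((subset, runtime))
        if runtime < best_time:
            best_flags, best_time = subset, runtime
    return best_flags, best_time, history


if __name__ == "__main__":
    flags, runtime, history = random_search()
    print(f"Best flags: {flags} (median runtime {runtime:.3f}s, "
          f"{len(history)} configurations explored)")
```

In practice, random search is only a baseline: the collected history is where predictive analytics enters, e.g. training a model to predict good flag combinations from program features instead of exhaustively re-measuring them.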
Our expertise, research and development:
Our open-source technology and expertise have already been used successfully in multiple industrial and academic projects, helping our partners develop faster, smaller, cheaper, more power-efficient and more reliable computer systems while reducing time to market for new products (software and hardware) by an order of magnitude, thus boosting innovation. We are systematizing, automating and considerably speeding up the following R&D tasks, as described in our vision papers (2009, 2013, 2014):
At the same time, we develop novel machine learning, data mining, knowledge discovery, statistical analysis, feature detection, crowdsourcing, auto-tuning and run-time adaptation techniques for computer engineering, and improve existing ones.
More details:
Our 2008 paper with all shared material
Since 2006, we have been releasing all experimental data and tools along with our publications [TR2013, TR2009, IJPP2011, ACM TACO2010, PLDI2010, SMART2009, HiPEAC 2009] to enable collaborative validation, reproduction and extension by the community. A common collaborative R&D repository and infrastructure also lets researchers focus their effort on novel approaches combined with data mining, predictive analytics and classification, rather than spend considerable effort rebuilding tools whose functionality already exists. It further allows conferences and journals to favor publications that can be collaboratively validated by the community, as described in our vision paper [arXiv, ACM DL].
Demo of a Collective Mind-powered online graph from our recent paper (enabling interactive articles)