Speakers
Apply to be considered for a student award (Yes / No)?
Yes
Level for award (Hons, MSc, PhD)?
Hons
Main supervisor (name and email) and his / her institution
Bruce Mellado, bruce.mellado@wits.ac.za
Would you like to submit a short paper for the Conference Proceedings (Yes / No)?
Yes
Abstract content (Max 300 words)
‘Big Science’ projects such as the planned upgrade of the ATLAS detector at the Large Hadron Collider at CERN are expected to produce data at volumes and rates that far exceed the throughput capacities of current computing systems. In addition, cost considerations for large-scale computing systems remain a source of general concern. A potential solution involves using low-cost, low-power ARM processors in large arrays in a manner that provides massive parallelisation and high rates of data throughput (relative to existing large-scale computing designs). When throughput rate and cost are given greater priority, the performance and design optimisation of primary memory become increasingly relevant to overall system performance. Using several primary memory benchmarks to evaluate various aspects of RAM and cache performance, we characterise the performance of three ARM-based SoCs, based on the Cortex-A9, Cortex-A7 and Cortex-A15 processors respectively. We then discuss the relevance of these results to high throughput-rate computing and the potential of ARM processors in this domain. Finally, we discuss applications to the upgrade of the on-line and off-line data processing systems of the ATLAS detector.
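As an illustration of the kind of primary-memory benchmark referred to in the abstract, the minimal sketch below measures sustained copy bandwidth with a STREAM-style kernel. The array size, repetition count and timing method are assumptions for illustration only and do not reproduce the actual benchmark suite or configuration used in this work.

/*
 * Minimal STREAM-style copy-bandwidth sketch (illustrative only; the
 * working-set size and repetition count below are assumptions, not the
 * configuration used in the study).
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (16 * 1024 * 1024)   /* 16M doubles (~128 MiB): larger than any last-level cache */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) { perror("malloc"); return 1; }

    for (size_t i = 0; i < N; i++) a[i] = 1.0;    /* touch pages before timing */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        for (size_t i = 0; i < N; i++)
            b[i] = a[i];                          /* copy kernel: one load + one store per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double bytes = 2.0 * N * sizeof(double) * REPS;   /* bytes read from a[] plus written to b[] */
    printf("Copy bandwidth: %.2f MB/s\n", bytes / secs / 1e6);

    free(a);
    free(b);
    return 0;
}

Running such a kernel with working sets sized below and above each cache level is one common way to separate cache bandwidth from DRAM bandwidth on the SoCs compared here.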