Speaker
Abstract content (Max 300 words)
The ATLAS experiment at CERN is a large multi-purpose detector used to gather data from proton-proton and nucleon-nucleon collisions at the Large Hadron Collider. The detector stands four stories high and comprises more than 150 million readout channels, spread across a variety of detector technologies in order to maximise the reach of the ATLAS scientific programme.
The collaboration comprises over 3000 people.
Fully characterising the detector output under different running conditions (such as the energy and composition of the colliding proton bunches) is necessary for precision measurements, and this imposes a large computing requirement: large simulated samples must be generated for each possible configuration to gain insight into how the detector will behave. The smaller the uncertainties on the detector response, the more precise our measurements will be.
On the physics side, ATLAS was designed to search for small signals over large backgrounds, and requires large simulated samples in order to characterise these background physical processes (in a well-calibrated detector) and reduce our uncertainties.
Within this context, I will outline the ATLAS distributed computing model and the computing resources currently available in South Africa, and assess how close these resources come to meeting our optimal needs.
Would you like to submit a short paper for the Conference Proceedings (Yes / No)?
Yes
Apply to be considered for a student award (Yes / No)?
No