1 ATLAS Data Challenges and Tier-2 in Milan
Status of the Data Challenges in ATLAS (s/w workshop, 4-8 March)
Status of resources
INFN CSN1, Rome, 25 March 2002
L. Perini

2 DC status
The slides that follow are from the talk given by the DC coordinator G. Poulard on 4 March.
My updates for subsequent decisions and results (as well as points of emphasis) are in red…
Main sample to be produced: jet events
– to be repeated (all of it?) with pile-up
– other events: < 20% of total CPU (to be decided)
– intensive simulation from June to mid-July

3 Progress since September on the DC
The simulation s/w for DC1-HLT is OK
– full chain used for > 100k of the foreseen events
– Objy replaced by ROOT
Definition and distribution of the samples still needs work:
– ATLAS's recognition of the INFN Tier-1 + 2 Tier-2 + 1 Tier-3 as a DC1 participant is based on the September sharing and estimates, which have not yet been revised
– my assessment is that the CPU and disk ATLAS will use in DC1 are uncertain by a factor of up to 2 for disk (pile-up), less for CPU
INFN's ability to take part in this first distributed production with roughly the promised resources is decisive.

4 DC0 as of today
Event generation in place
– few 100k Z+jet events with Z decaying to e+e-, µ+µ- and tau+tau-, generated with Pythia, Isajet and Herwig
Detector G3 simulation working well (reading Objy)
– ~200k events fully simulated
Data conversion (Zebra -> Objy) not fully ready. We will not use it!
– not for all Physics TDR data (there are several different detector geometries)
– Muon missing
Athena reconstruction almost functional
– Muon reconstruction & IpatRec not available, due to delay in updating input data
Atlfast chain in place
Objectivity infrastructure in place. We will not use it!
Work going on for production & bookkeeping tools
Geant4 robustness test done satisfactorily
– run in Fads/Goofy
– 100k single-muon events with the test-beam set-up generated
– 100k H -> 4mu events for the ATLAS detector geometry (simplified geometry for Inner Detector & Calorimeters, Muon geometry from AMDB)
Continuity test finished by mid-March!?

5 ATLAS Data Challenges: DC1
Goals:
– Reconstruction & analysis on a large scale: learn about the data model and I/O performance; identify bottlenecks…
– Data management: use (evaluate) more than one database technology (Objectivity and ROOT I/O)
» being revisited (plan to drop Objectivity). Dropped!
– Learn about distributed analysis
– Need to produce data for the HLT & Physics groups (others?)
» HLT TDR due by end 2002
» study performance of Athena and of the algorithms for use in the HLT
» high statistics needed: scale of 10^7 events in 10-20 days, O(1000) PCs. About 40 days? 2 samples… (see the back-of-envelope check after this slide)
» simulation & pile-up will play an important role
– Checking of Geant4 versus Geant3
– Involvement of CERN & outside-CERN sites
– Use of the LCG prototype
– Use of GRID as and when possible and appropriate
Due to conflicting requirements it was decided to split DC1 into two phases.
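As a sanity check of the production scale quoted above, here is a minimal back-of-envelope sketch. The event count, day ranges and PC count come from the slide; the per-event CPU time is derived, not stated in the source.

```python
# Back-of-envelope check of the DC1 HLT production scale quoted on the slide:
# 10^7 events in 10-20 days on O(1000) PCs (revised in red to ~40 days).

SECONDS_PER_DAY = 86_400

def events_per_pc_per_day(total_events: float, days: float, n_pcs: int) -> float:
    """Average number of events each PC must simulate per day."""
    return total_events / (days * n_pcs)

total_events = 1e7
n_pcs = 1000  # O(1000) PCs, as on the slide

for days in (10, 20, 40):  # 40 days is the revised estimate in red
    rate = events_per_pc_per_day(total_events, days, n_pcs)
    sec_per_event = SECONDS_PER_DAY / rate
    print(f"{days:2d} days: {rate:6.0f} events/PC/day "
          f"(~{sec_per_event / 60:.1f} CPU minutes per event)")
```

On these assumptions each PC must sustain 250-1000 events per day, i.e. roughly 1.5 to 6 CPU minutes per fully simulated event, which shows why the 10-20 day window was later relaxed towards 40 days.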

6 ATLAS Data Challenges: DC1
Phase I
– Primary concern is the delivery of events to the HLT community; switch to ROOT I/O
» not trivial, but seems possible. Done!!!
– Plan to be discussed this week:
» updated geometry (P version for the muon spectrometer)
» pile-up added in the chain (not in DC0); will be done in Atlsim; several TB of data to be produced (a sketch of the pile-up idea follows this slide)
» production of simple Analysis Object Data (AOD)
» distributed production
» use of GRID tools
» ATLAS kit to be prepared
– Due to time constraints, part of the simulation could be run in the background, i.e. over a longer period and not under stress conditions
» this is something we would like to consider, provided we are sure that the quality of the simulation stays well under control
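The pile-up step mentioned above overlays minimum-bias interactions on each signal event. The following is a minimal illustrative sketch of that idea only, not the actual Atlsim implementation; the Poisson mean and the hit representation are invented for illustration.

```python
# Illustrative sketch of pile-up mixing: overlay each signal event with a
# Poisson-distributed number of minimum-bias events drawn from a
# pre-simulated pool. Concept only; not the Atlsim implementation.
import math
import random

def poisson(mu: float) -> int:
    """Sample from Poisson(mu) (Knuth's method), to keep the sketch dependency-free."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def mix_pileup(signal_hits: list, minbias_pool: list, mu: float) -> list:
    """Merge the signal hits with hits from n ~ Poisson(mu) minimum-bias events."""
    merged = list(signal_hits)
    for _ in range(poisson(mu)):
        merged.extend(random.choice(minbias_pool))  # reuse pool events at random
    return merged

# Toy usage: "hits" are placeholder strings, mu is an illustrative value.
pool = [[f"mb{i}_hit{j}" for j in range(3)] for i in range(100)]
event = mix_pileup(["signal_hit0", "signal_hit1"], pool, mu=23.0)
print(len(event), "hits after pile-up mixing")
```

Because every signal event absorbs the hits of many overlaid minimum-bias events, the output volume grows accordingly, which is why pile-up drives the "several TB of data" and the factor-of-2 disk uncertainty mentioned on slide 3.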

7 ATLAS Data Challenges: DC1
Phase II
– Introduction & testing of the new Event Data Model (EDM)
» this should include the new Detector Description
– Evaluation of database technology
– Intensive use of Geant4
– Test of pile-up in Athena
– Software ported to the GRID environment
– Production of data for Physics and Computing Model studies
» both ESD (Event Summary Data) and AOD (Analysis Object Data) should be produced (not necessarily in the same path)
– Testing of the computing model
– Testing of distributed analysis using AOD

8 Tools for DC1
The plan is to use Grid tools as much as possible:
– job submission
– data replication: gdmp
» gdmp_stage for Castor@CERN, HPSS@Lyon and the ATLAS datastore@RAL
– data moving (globus-url-copy)
– catalog (Magda from PPDG)
We will also use our own simple production tools (scripts); a sketch of such a script follows this slide.
AMD (ATLAS Metadata Database) is being developed (Grenoble).
Close collaboration between the ATLAS Grid and ATLAS DC communities.
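A hypothetical sketch of what one of the "simple production tools (scripts)" could look like: a thin wrapper that replicates output files by shelling out to globus-url-copy. The host names, paths and retry policy below are invented for illustration; only the "globus-url-copy <src-url> <dst-url>" usage reflects the real tool.

```python
# Hypothetical data-moving wrapper: copy local production output to a
# remote GridFTP storage element, retrying on failure.
import subprocess
import sys

def copy_to_se(local_path: str, remote_host: str, remote_path: str,
               retries: int = 3) -> bool:
    """Copy a local file to a GridFTP server, retrying on failure."""
    src = f"file://{local_path}"                 # local_path must be absolute
    dst = f"gsiftp://{remote_host}{remote_path}"
    for attempt in range(1, retries + 1):
        result = subprocess.run(["globus-url-copy", src, dst])
        if result.returncode == 0:
            return True
        print(f"attempt {attempt}/{retries} failed for {local_path}",
              file=sys.stderr)
    return False

if __name__ == "__main__":
    # Illustrative call: replicate one simulated-event file to a fictional SE.
    ok = copy_to_se("/data/dc1/dc1.simul.0001.root",
                    "gridftp.example.cern.ch",
                    "/castor/atlas/dc1/dc1.simul.0001.root")
    sys.exit(0 if ok else 1)
```

Catalog registration (e.g. in Magda) would happen after a successful copy; that step is omitted here since the source does not describe its interface.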

9 ATLAS DC1 schedule
DC0 is late
– readiness and continuity test in preparation for DC1
– ~3 months' delay (it was scheduled for December 2001)
DC1 schedule
– Phase I: start April 15, end July 15
– Phase II: start September 23, end December 20
– resources not allocated yet
Requests @ CERN:
– 1 week per month for preparation with limited resources
– 3 consecutive weeks at the end of each period
Collecting information from the Tier centers

10 Expression of Interest (some…)
CERN
INFN
– CNAF, Milan, Roma1, Naples
CCIN2P3 Lyon
IFIC Valencia
Karlsruhe
UK
– RAL, Birmingham, Cambridge, Glasgow, Lancaster
Also:
– FCUL Lisboa, Nordic cluster, Prague
BNL
Russia (RIVK BAK)
– JINR Dubna, ITEP Moscow
– SINP MSU Moscow
– IHEP Protvino
Alberta
Tokyo
Taiwan
Melbourne
…

11 ATLAS Data Challenges: DC2
DC2: Spring-Autumn 2003
– scope will depend on what has been achieved in DC0 & DC1
– at this stage the goals include:
» use of the TestBed that will be built in the context of Phase 1 of the LHC Computing Grid Project
» scale: a sample of 10^8 events
» system at a complexity ~50% of the 2006-2007 system (probably to be revisited)
» extensive use of the GRID middleware
» Geant4 should play a major role
» physics samples could (should) contain hidden new physics
» calibration and alignment procedures should be tested
– may need to be synchronized with Grid developments

12 Total and INFN resources
What was presented in September 2001 is confirmed
– taking INFN = 10% of the ATLAS total (2 x 10^7 events)
– ATLAS: 3 x 10^11 SpI95 x sec in 50 days requires 60 kSpI95 and 80 TB of disk (if everything is also done with pile-up); see the back-of-envelope check after this slide
– in September the resource plan was: 6000 SpI95 and 9 TB of disk in total, split between 2 roughly equal Tier-2s and the Tier-1
Tier-1 CNAF: 2500 SpI95, 5 TB disk, 15 TB tape
– recent funding slightly below this (4 TB of disk?)
The 2 Tier-2s, Milan and Roma1, currently have:
– Roma1: 30 dual-CPU machines = 1800 SpI95 and 1.5 TB of disk
– Milan: 3 dual-CPU machines = 270 SpI95 and 1.2 TB of disk
– Milan requests completion to 25 dual-CPU machines (2200 SpI95) and 2.2 TB of disk
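A back-of-envelope check of the CPU figures above; only the 3 x 10^11 SpI95 x sec of total work, the 50-day window and the 10% INFN share come from the slide.

```python
# Back-of-envelope check of the resource figures on the slide:
# 3e11 SpI95*sec of simulation in 50 days, with INFN taking a 10% share.

SECONDS_PER_DAY = 86_400

total_cpu_si95_sec = 3e11   # total ATLAS CPU work, from the slide
days = 50                   # production window, from the slide
infn_share = 0.10           # INFN = 10% of ATLAS, from the slide

required_power = total_cpu_si95_sec / (days * SECONDS_PER_DAY)
print(f"ATLAS as a whole: ~{required_power / 1e3:.0f} kSpI95 "
      f"(the slide quotes 60 kSpI95, the same order of magnitude)")
print(f"INFN share:       ~{infn_share * required_power:.0f} SpI95 "
      f"(vs the ~6000 SpI95 in the September plan)")
```

The strict arithmetic gives ~69 kSpI95 for ATLAS and ~6900 SpI95 for INFN, so the quoted 60 kSpI95 and the 6000 SpI95 plan are the right order of magnitude but slightly below the 50-day requirement.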

13 Installed in Milan
The 70 Ml (million lire) of "Inventariabile" (capital equipment) funding granted in September have been used:
– 39.4 Ml: SCSI RAID disk, 1.2 TB, configured with a PC
– 15.1 Ml: 3 dual-processor machines
– 9.5 Ml: 24-port switch plus cards and connections
– 4.0 Ml: share in the LTO purchase together with dot1
Installed and tested; available for general use from the beginning of April
– make simultaneous use with and without Grid (EDG) possible

