CMS Computing: results and prospects. P. Capiluppi, CSN1 Pisa, 22 June 2004.


Slide 1: CMS Computing: results and prospects (P. Capiluppi, CSN1 Pisa, 22 June 2004)
Outline:
- Schedule
- Pre Data Challenge 04 production
- Data Challenge 04: design and purpose; sw and mw components; results; lessons learned
- Prospects and upcoming activities
- Conclusions
Note: little on the pre-Challenge production (PCP); mostly an update of what was presented in September at Lecce.

Slide 2: CMS Computing schedule
2004:
- Mar/Apr: DC04, to study T0 reconstruction, data distribution and real-time analysis at 25% of the startup scale
- May/Jul: data available and usable by the PRS groups
- Sep: PRS analysis feedback
- Sep: draft CMS Computing Model in the CHEP papers
- Nov: ARDA prototypes
- Nov: milestone on interoperability
- Dec: Computing TDR in initial draft form [NEW milestone date]
2005:
- Jul: LCG TDR and CMS Computing TDR [NEW milestone date]
- Post July(?): DC05, at 50% of the startup scale [NEW milestone date]
- Dec: Physics TDR [~based on the post-DC04 activities]
2006:
- DC06 final readiness tests
- Fall: computing systems in place for LHC startup; continuous testing and preparation for data

Slide 3: CMS "permanent" production
Strong contribution of INFN and the CNAF Tier-1 to CMS past and future productions: 252 assid's (assignment IDs) in PCP-DC04, for all production steps, both local and (when possible) on the Grid. The system is evolving into a permanent production effort...
[Chart: number of datasets/month, 2002-2004, marking the "Spring02 prod" and "Summer02 prod" (CMKIN, CMSIM + OSCAR), the digitisation, the pre-DC04 start and the DC04 start. T. Wildish]

Slide 4: PCP @ INFN statistics (4/4)
2x10^33 digitisation step: ~43 Mevts in all of CMS, of which ~7.8 Mevts (~18%) done by INFN.
Note: strong contribution to all steps by the CNAF T1, but only outside DC04 (during the DC it was too hard for the CNAF T1 to also act as a Regional Centre!).
[Plots: 2x10^33 digitisation step, INFN only and all CMS, Feb 04 - May 04: 24 Mevents in 6 weeks; digitisation continued through the DC.]
CMS production steps: Generation → Simulation → ooHitformatting → Digitisation. D. Bonacorsi

Slide 5: PCP grid-based prototypes
EU-CMS: submit to the LCG scheduler → a CMS-LCG "virtual" Regional Center:
- 0.5 Mevts generation ["heavy" Pythia]: ~2000 jobs of ~8 hours* each, ~10 KSI2000 months
- ~2.1 Mevts simulation [CMSIM+OSCAR]: ~8500 jobs of ~10 hours* each, ~130 KSI2000 months, ~2 TB of data (* on a PIII 1 GHz)
- CMSIM: ~1.5 Mevts on CMS/LCG-0; OSCAR: ~0.6 Mevts on LCG-1
Constant integration work in CMS between the CMS software and production tools and the evolving EDG-X → LCG-Y middleware, in several phases: the CMS "Stress Test" stressing EDG < 1.4, then PCP on the CMS/LCG-0 testbed, then PCP on LCG-1... towards DC04 with LCG-2. D. Bonacorsi

Slide 6: Purpose of Data Challenge 04
Aim of DC04:
- reach a sustained 25 Hz reconstruction rate in the Tier-0 farm (25% of the target conditions for LHC startup)
- register data and metadata in a catalogue
- transfer the reconstructed data to all Tier-1 centers
- analyze the reconstructed data at the Tier-1s as they arrive
- publish the data produced at the Tier-1s to the community
- monitor and archive performance metrics of the whole set of activities, for debugging and post-mortem analysis
Not a CPU challenge, but a full-chain demonstration!
Pre-challenge production in 2003/04: 70M Monte Carlo events produced (30M with Geant4); classic and grid (CMS/LCG-0, LCG-1, Grid3) productions.
It was a "challenge": every time a scalability limit of some component was found, that was a success!

Slide 7: Data Challenge 04: layout
[Diagram by C. Grandi: the Tier-0 (Castor, fake on-line process, RefDB, POOL RLS catalogue, TMDB, ORCA RECO jobs, GDB, data-distribution agents, Export Buffers, LCG-2 services) feeds the Tier-1s (Tier-1 agent, T1 storage, MSS, ORCA analysis and grid jobs) and the Tier-2s (T2 storage, ORCA local jobs run by physicists). On 30 Mar 04, rates from the GDB to the EBs of RAL, IN2P3, FZK, FNAL, INFN and PIC.]
The only Tier-2 in DC04: LNL. The full chain (except the Tier-0 reconstruction) was done in LCG-2, but only for INFN and PIC. Not without pain... A. Fanfani

Slide 8: Data Challenge 04: numbers
Pre-Challenge Production (PCP04) [Jul 03 - Feb 04]:
- Simulated events: 75 M [750k jobs, ~800k files, 5000 KSI2000 months, 100 TB of data], of which ~30 M with Geant4
- Digitised (raw) events: 35 M [35k jobs, 105k files]
- Where: INFN, USA, CERN, ...; in Italy: ~10-15 M events (~20%)
- For whom (Physics and Reconstruction Software groups): "Muons", "B-tau", "e-gamma", "Higgs"
Data Challenge 04 [Mar 04 - Apr 04]:
- Events reconstructed (DST) at the CERN Tier-0: ~25 M [~25k jobs, ~400k files, 150 KSI2000 months, 6 TB of data]
- Events distributed to the Tier1-CNAF and Tier2-LNL: the same ~25 M events and files
- Events analyzed at the Tier1-CNAF and Tier2-LNL: >10 M [~15k jobs, each of ~30 min of CPU]
Post Data Challenge 04 [May 04 - ...]:
- Events to reprocess (DST): ~25 M
- Events to analyze in Italy: ~50% of the 75 M events
- Events to produce and distribute: ~50 M
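A quick sanity check on these figures (my arithmetic, not on the slide): reconstructing the DC04 sample at the target rate corresponds to

$$ \frac{25 \times 10^6\ \text{events}}{25\ \text{Hz}} = 10^6\ \text{s} \approx 11.6\ \text{days} $$

of sustained 25 Hz running, which fits a roughly two-month challenge that, as the later slides show, spent much of its time in ramp-ups and recoveries.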

Slide 9: Data Challenge 04: MW and SW components
CMS-specific:
- Transfer agents, to move the DST files (at CERN and the Tier-1s)
- Mass Storage Systems on tape (Castor, Enstore, etc.) (at CERN and the Tier-1s)
- RefDB, the database of dataset requests and assignments (at CERN)
- COBRA, the CMS software framework (CMS-wide)
- ORCA and OSCAR (Geant4), the CMS reconstruction and simulation (CMS-wide)
- McRunJob, the job-preparation system (CMS-wide)
- BOSS, the job-tracking system (CMS-wide)
- SRB, a file replication and catalogue system (at CERN, RAL, Lyon and FZK)
- MySQL-POOL, a POOL backend on a MySQL database (at FNAL)
- ORACLE database (at CERN and the Tier1-INFN)
LCG "common":
- User Interfaces including the Replica Manager (at CNAF, Padova, LNL, Bari, PIC)
- Storage Elements (at CNAF, LNL, PIC)
- Computing Elements (at CNAF, LNL and PIC)
- Replica Location Service (at CERN and the Tier1-CNAF)
- Resource Broker (at CERN and at CNAF-Tier1-Grid-it)
- Storage Replica Manager (at CERN and FNAL)
- Berkeley Database Information Index (at CERN)
- Virtual Organization Management System (at CERN)
- GridICE, the monitoring system (on the CEs, SEs, WNs, ...)
- POOL, the persistency catalogue (in the CERN RLS)
US-specific:
- The Monte Carlo distributed production system (MOP) (at FNAL, Wisconsin, Florida, ...)
- MonALISA, a monitoring system (CMS-wide)
- Custom McRunJob, a job-preparation system (at FNAL and... perhaps Florida)

Slide 10: Data Challenge 04 processing rate
- Processed about 30M events
- But DST "errors" make this pass not useful for analysis
- Generally kept up at the T1s at CNAF, FNAL and PIC
- Got above 25 Hz on many short occasions, but only one full day above 25 Hz with the full system
- Working now to document the many different problems

Slide 11: Data Challenge 04: data transfer from CERN to INFN
A total of >500k files and ~6 TB of data were transferred CERN T0 → CNAF T1. The maximum number of files per day was ~45,000, on March 31st; the maximum volume per day was ~400 GB, on March 13th (>700 GB counting the "Zips", the exercise with "big" files). On the global CNAF network, ~340 Mbps (>42 MB/s) were sustained for ~5 hours (the maximum was 383.8 Mbps; May 1st-2nd, GARR network use). D. Bonacorsi
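For the unit conversion (my arithmetic, not on the slide):

$$ 340\ \text{Mbps} \div 8 = 42.5\ \text{MB/s}, $$

consistent with the ">42 MB/s" sustained figure quoted above.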

Slide 12: DC04 real-time (fake) analysis
CMS software installation:
- the CMS Software Manager (M. Corvo) installs the software via a grid job provided by LCG; RPM distribution based on CMSI, or DAR distribution; used at CNAF, PIC, Legnaro, CIEMAT and Taiwan with RPMs
- the site manager installs the RPMs via LCFGng; used at Imperial College
- still inadequate for general CMS users
Real-time analysis at the Tier-1:
- the main difficulty is identifying complete file sets (i.e. runs); today the information is in the TMDB or via findColls
- a job processes single runs at the site close to the data files; file access via rfio
- output data are registered in the RLS
[Diagram: a UI with BOSS submits JDL to the RB, which queries the bdII and the RLS; the job runs on a WN, reads from the SE via rfio and registers its output data in the RLS; push data or info / pull info.] A. Fanfani, C. Grandi

Slide 13: DC04 fake analysis architecture
[Diagram by J. Hernandez: the TMDB (MySQL) and the POOL RLS catalogue drive the transfer, replication and mass-storage agents, which move data from the Export Buffer SEs to the PIC Castor Storage Element (MSS), the PIC disk SE and the CIEMAT disk SE; "drop files" trigger the Drop agent and the Fake Analysis agent, which submit jobs to the LCG Resource Broker and Worker Nodes.]
- The Drop agent triggers job preparation/submission when all files are available
- The Fake Analysis agent prepares the XML catalogue, the orcarc and the JDL script, and submits the job
- Jobs record start/end timestamps in the MySQL DB
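The agent logic on this slide is simple enough to sketch. Below is a minimal, hypothetical version of the Drop agent loop in Python: it polls a local MySQL DB for drops whose files have all arrived, then submits the prepared job. The table and column names (drops, files, arrived, ...) are invented, not the TMDB schema; edg-job-submit is the standard LCG-2 submission command, but everything else is an illustration, not the DC04 code.

```python
# Minimal sketch of the Drop / Fake Analysis agent loop described above.
# Table and column names are hypothetical; the real agents used the TMDB.
import time
import subprocess
import MySQLdb  # local MySQL DB used for agent communication

def pending_complete_drops(db):
    """Return drops whose files have all arrived on the local SE."""
    cur = db.cursor()
    cur.execute(
        "SELECT drop_id FROM drops WHERE status = 'new' AND NOT EXISTS "
        "(SELECT 1 FROM files WHERE files.drop_id = drops.drop_id "
        " AND files.arrived = 0)")
    return [row[0] for row in cur.fetchall()]

def submit(db, drop_id):
    """Hand the prepared job to the LCG Resource Broker."""
    # In DC04 the agent wrote an XML POOL catalogue, an orcarc and a JDL
    # script; here we simply assume the JDL already exists for this drop.
    jdl = "/var/agents/%s/analysis.jdl" % drop_id
    subprocess.call(["edg-job-submit", jdl])   # LCG-2 submission CLI
    cur = db.cursor()
    cur.execute("UPDATE drops SET status = 'submitted', "
                "submit_time = NOW() WHERE drop_id = %s", (drop_id,))
    db.commit()

db = MySQLdb.connect(host="localhost", db="tmdb_local")
while True:                       # the agents ran as simple polling daemons
    for drop_id in pending_complete_drops(db):
        submit(db, drop_id)
    time.sleep(60)
```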

Slide 14: Real-time DC04 analysis: turn-around time from the T0
- The minimum time from T0 to T1 analysis was 10 minutes
- Different problems contributed to the time spread:
  - the dataset-oriented analysis made the results depend on which datasets were sent in real time from CERN
  - the tuning of the Tier-1 Replica Agent, whose operation was affected by a CASTOR problem
  - the Analysis Agents were not always up, due to debugging for one dataset
  - the zipped metadata arrived late with respect to the data
  - a few problems with submission
N. De Filippis, A. Fanfani, F. Fanzago
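Since the agents record start/end timestamps in MySQL (previous slide), turn-around statistics like these could be extracted with a query along the lines of the following sketch; the table and column names are again hypothetical.

```python
# Sketch: compute the T0 -> T1 turn-around from recorded timestamps.
import MySQLdb

db = MySQLdb.connect(host="localhost", db="tmdb_local")
cur = db.cursor()
cur.execute("SELECT t0_reco_time, t1_analysis_start FROM drops "
            "WHERE t1_analysis_start IS NOT NULL")
# MySQLdb returns datetime objects, so plain subtraction works;
# .seconds assumes turn-arounds shorter than one day.
delays = [(t1 - t0).seconds / 60.0 for (t0, t1) in cur.fetchall()]
print("min turn-around:  %.0f min" % min(delays))   # slide quotes 10 min
print("mean turn-around: %.0f min" % (sum(delays) / len(delays)))
```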

Slide 15: DC04 real-time analysis
- Maximum rate of analysis jobs: 194 jobs/hour
- Maximum rate of analysed events: 26 Hz
- Total of ~15,000 analysis jobs via Grid tools in ~2 weeks (95-99% efficiency)
- Dataset examples:
  - B0s → J/ψ φ; backgrounds: mu03_tt2mu, mu03_DY2mu
  - ttH with H → bbbar, one t → Wb (W → lν) and the other t → Wb (W → hadrons); backgrounds: bt03_ttbb_tth, bt03_qcd170_tth, mu03_W1mu
  - H → WW → 2μ2ν; backgrounds: mu03_tt2mu, mu03_DY2mu
N. De Filippis, A. Fanfani, F. Fanzago

Slide 16: Reconstruction software and DST
- As of the last CMS week: a prototype DST is in place
  - Huge effort by a large number of people, especially S. Wynhoff, N. Neumeister, T. Todorov and V. Innocente for the "base"; also from Emilio Meschi, David Futyan, George Daskalakis, Pascal Vanlaer, Stefano Lacaprara, Christian Weiser, Arno Heister, Wolfgang Adam, Marcin Konecki, Andre Holzner, Olivier van der Aa, Christophe Delaere, Paolo Meridiani, Nicola Amapane, Susanna Cucciarelli, Haifeng Pi
- The DST constitutes the first "CMS summary"; examples of "doing physics" with it are in place, but it is not complete
Without the activity of the PRS groups (b-tau, muon, e-gamma) on the reconstruction software there would be no analysis, nor a Data Challenge (04). INFN is the major contributor: Ba, Bo, Fi, Pi, Pd, Pg, Rm1, To. P. Sphicas

Slide 17: PRS analysis contributions...
- ttH, H → bb and related backgrounds: S. Cucciarelli, F. Ambroglini, C. Weiser, S. Kappler, A. Bocci, R. Ranieri, A. Heister, ...
- Bs → J/ψ φ and related backgrounds: V. Ciulli, N. Magini, the Dubna group, ...
- A/H (SUSY) → ττ, the established channel for the SUSY H; HLT. People/channels: A/H → 2τ → τ-jet + τ-jet: S. Gennai, S. Lehti, L. Wendland
- Reconstruction: full track reconstruction starting from raw data, several algorithms already implemented; studies of RecHits, sensor positions, B field, material distribution: W. Adam, M. Konecki, S. Cucciarelli, A. Frey, T. Todorov
- H → γγ: G. Anagnostou, G. Daskalakis, A. Kyriakis, K. Lassila, N. Marinelli, J. Nysten, K. Armour, S. Bhattacharya, J. Branson, J. Letts, T. Lee, V. Litvin, H. Newman, S. Shevchenko
- H → ZZ(*) → 4e: David Futyan, Paolo Meridiani, Kate Mackay, Emilio Meschi, Ivica Puljak, Claude Charlot, Nikola Godinovic, Federico Ferri, Stephane Bimbot
- H → WW → 2μ2ν: Zanetti, Lacaprara
- Calibrations and alignments; Higgs studies
And many others!!!!

Slide 18: Data Challenge 04: lessons (1/2)
Many of the components used do not scale (both CMS and non-CMS ones): RLS, Castor, dCache, the metadata, SRB, catalogues of various kinds, the job submission system at the Tier-0, etc.
Many functions/components were missing: data transfer management; global data location for (at least) all the Tier-1s. Nothing wrong with that: it was a challenge, made exactly for this!
But the real lesson was (surprise?) that:
- there was (and is) NO organization, neither for LCG nor for CMS nor for Grid3
- there was (and is) NO consistent design of either a Data Model or a Computing Model, except partially in Italy and in the USA!

Slide 19: Data Challenge 04: lessons (2/2)
Indeed, for example: the DC04 data-taking window was 51 (+3) days, March 11th - May 3rd.
[Timeline annotations, D. Bonacorsi: T0 Castor problems → ramp-down @150, ramp-up @300 jobs; the T0 Castor nameserver creaking under load → ramp-down; T0 at >20 Hz but the config agent OFF, EB agents ON but useless, then ramp-up @500; all T1s had some homework from the EBs here; T0 issue of the 17k files on the wrong stager; ramp-up @100; Easter production & transfer; T0 @20 while CNAF empties its backlog; ramp-up @50, then 200 jobs; T0 jobs "mass extinction", then ramp-up @300; the "Zips" exercise; ramp-up & down; final ramp-down.]

Slide 20: INFN prospects
Short term:
- Re-create the DSTs with a version of ORCA (the CMS sw) validated by the analyses while the production runs, wherever possible (Tier-0, Tier-1s and Tier-2s)
- Distribute the DSTs, the other data formats (Digi, SimHits) and the metadata to the Tier-1s and, from there, to the Tier-2s
- Enable "locally distributed" analysis, with consistent data access (few tools allow it...)
Medium term:
- Build a "Data Model" and a "Computing Model"
- Build a consistent, distributed architecture
- Build controlled (and "semi-transparent") access to the data
All with the components that exist today and that have a prospect of scalability (to be measured again, in an organic way).

Slide 21: Post Data Challenge 04 activities
[Jun 04 - Jul 04]:
- Re-creation of the DSTs
- Distribution of the files (data and metadata) needed for the analysis
- First results for the PRS groups and for the Physics TDR
[Jul 04 - Jul 05]:
- Production of new (or old) datasets, including DSTs: target 10 M events/month, steady, for the Physics TDR
- Continuous analysis of the produced data
[Sep 04 - Oct 04]:
- Data Challenge 04 results for CHEP04
- First definition of the Data & Computing Model
- Definition of the MoUs
[Jul 05 - ...]:
- CMS Computing TDR (and LCG TDR)
- Data Challenge 05, to verify the Computing Model
Resources needed (2005): storage for analysis and production at the Tier-1, Tier-2s and Tier-3s; CPUs for production and analysis at the Tier-1 and Tier-2s; continuous activity; dedicated resources?

Slide 22: Possible evolution of the CCS tasks (Core Computing and Software)
CCS will reorganize to match the new requirements and the move from R&D to implementation for physics: meet the PRS production requirements (Physics TDR analysis) and build the Data Management and Distributed Analysis infrastructures.
- Production Operations group [NEW]: outside of CERN; must find ways to reduce the manpower requirements; using predominantly (only?) GRID resources
- Data Management task [NEW]: a project to respond to the DM RTAG; physicists/computing to define the CMS blueprint and the relationships with suppliers (LCG/EGEE, ...), with the CMS DM task in the Computing group; expect to make major use of manpower and experience from CDF/D0 Run II
- Workload Management task [NEW]: make the Grid useable by CMS users; make major use of manpower with EDG/LCG/EGEE experience
- Distributed Analysis cross project (DAPROM) [NEW]: coordinate and harmonize the analysis activities between CCS and PRS; work closely with the Data and Workload Management tasks
Establish a high-level Physics/Computing panel between the T1 countries to ensure collaboration ownership of the Computing Model for the MoU and RRB discussions.

Slide 23: Conclusions
The CMS Data Challenge "04" was a success:
- many functionalities measured in a "scientific" way
- many failures and bottlenecks discovered (but the 25 Hz were reached!)
- many things understood (??)
- a decisive Italian (INFN) contribution
The CMS Data Challenge "04" was not a success:
- it was not planned sufficiently
- it required the continuous (two months) presence and intervention of "willing" people (20 hours per day, week-ends included) for on-the-fly solutions: ~30 people, world-wide
- there is still NO "objective" evaluation of the results
- everything that worked (for better or worse) gets criticized a priori, without realistic alternative proposals...
Nevertheless, having survived the "stress" of DC04, CMS is recovering... The CMS system is evolving into a permanent production and analysis effort...

Slide 24: 2004 milestones: specifics (1/2)
Participation of at least three sites in DC04 [March]:
- Import into Italy (Tier1-CNAF) all the events reconstructed at the T0
- Distribute the selected streams over at least three sites (~6 streams, ~20 M events, ~5 TB of AOD)
- The selection covers the analysis of at least 4 signal channels and their backgrounds, plus the calibration studies
- Deliverable: the Italian contribution to the DC04 report, as input to the C-TDR and to the "preparation" of the P-TDR; results of the analysis of the channels assigned to Italy (at least 3 streams and 4 signal channels)
- Status: the end of DC04 slipped to April. Sites: Ba, Bo, Fi, LNL, Pd, Pi, CNAF-Tier1. 2 streams, but 4 analysis channels. DONE, 90%
Integration of the CMS Italy computing system into LCG [June]:
- The Tier-1, half of the Tier-2s (LNL, Ba, Bo, Pd, Pi, Rm1) and a third of the Tier-3s (Ct, Fi, Mi, Na, Pg, To) have the LCG software installed and are able to work in the LCG environment
- This entails installing the software packages coming from LCG AA and LCG GDA (from POOL to RLS, etc.)
- Completion of the analysis using the LCG infrastructure, and further production of about 2 M events
- Deliverable: CMS Italy is integrated into LCG for more than half of its resources
- Status: sites integrated into LCG: CNAF-Tier1, LNL, Ba, Pd, Bo, Pi. The prolonged analysis of the DC04 results causes a slip of at least 3 months. In progress, 30%

Slide 25: 2004 milestones: specifics (2/2)
Participation in the C-TDR [October]:
- Includes the definition of the Italian participation in the C-TDR in terms of: resources and sites (possibly all of them); manpower; funding and an intervention plan
- Deliverable: drafts of the C-TDR with the Italian contribution
- Status: the Computing TDR is now due in July 2005, so the milestone slips accordingly. Stand-by/progress, 10%
Participation in the PCP of DC05 by at least the Tier-1 and the Tier-2s [December]:
- The Tier-1 is CNAF and the Tier-2s are: LNL, Ba, Bo, Pd, Pi, Rm1
- Production of ~20 M events for the P-TDR studies, or equivalent (the studies may require fast-MC or special programs)
- Contribution to the definition of the LCG TDR
- Deliverable: production of the events needed to validate the fast-simulation tools and for the P-TDR studies (~20 M events on the Tier-1 + Tier-2/3s)
- Status: Data Challenge 05 slips to July 2005, so the milestone slips accordingly. Stand-by, 0%

Slide 26: Back-up slides

Slide 27: The CMS Computing Model
- Computing Model design
- Data location and access model
- Analysis (user) model
- CMS software and tools
- Infrastructure & organization (Tiers and LCG)

Slide 28: [no text transcribed; image-only slide]

Slide 29: CPU power ramp-up
[Chart: CPU power vs time, average slope = x2.5/year, marking the DAQ TDR, DC04 (with the actual DC04 and PCP levels), the C-TDR, DC05, the P-TDR, the LCG TDR, DC06 readiness, LHC at 2E33 and LHC at 1E34; time-shared resources vs dedicated CMS resources.]

Slide 30: Estimates prepared as input to the MoU Task Force. Computing models under active development. NO HEAVY IONS INCLUDED YET!

Slide 31: Tier-1 centers are crucial to CMS
- CMS expects to have (external) T1 centers at CNAF, FNAL, Lyon, Karlsruhe, PIC and RAL, plus a Tier-1 center at CERN (the role of the CERN T1 is still under discussion)
- The current computing model gives the total external T1 requirements, assumed to be spread over 6 centers, though not necessarily 6 equal centers
- Tier-1 centers will be crucial for: calibration, reprocessing and data serving; servicing the requirements of the Tier-2 centers, both from the region and via explicit relationships with external T2 centers; servicing the analysis requirements of their "regions"
- The next step is to iterate with the T1 centers and the CMS country managements, to understand what they can realistically hope to propose and possibly succeed in obtaining

Slide 32: Possible sizing of the regional T1s
- Assume 1 T1 at CERN plus the sum of 6 external T1s
- Take the truncated sum of the collaboration in the T1 countries and calculate the fractions in those countries
- Share the 6+1 T1s according to this algorithm, to get an opening scenario for discussions:
  CERN: 1 T1 for CMS (by definition); France: 0.5 T1; Germany: 0.4 T1; Italy: 1.7 T1; Spain: 0.2 T1; UK: 0.4 T1; USA: 2.6 T1
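The sharing algorithm is easy to state in code. The sketch below uses invented collaboration head-counts (the slide only quotes the resulting shares), so its output only roughly reproduces the split above.

```python
# Sketch of the sharing algorithm: split 6 external T1s in proportion to
# each T1 country's share of the collaboration. Head-counts are invented
# for illustration; the slide quotes only the resulting shares.
authors = {"France": 50, "Germany": 40, "Italy": 170,
           "Spain": 20, "UK": 40, "USA": 260}   # hypothetical numbers

total = float(sum(authors.values()))
shares = {c: round(6 * n / total, 1) for c, n in authors.items()}
shares["CERN"] = 1.0   # one T1 at CERN, by definition

for country, share in sorted(shares.items()):
    print("%-8s %.1f T1" % (country, share))
```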

Slide 33: Tier-2
- Ask now for intentions from all CMS agencies: "I have an 'old' list; I request that you contact me with your intentions so I can bring it up to date"
- The T1 countries are making a very heavy commitment; they may need to demonstrate sharing of costs with the dependent T2s
- The T2s need to start defining with which T1 they will enter into service agreements, and negotiating with them how the costs will be distributed

Slide 34: RLS performance (Claudio Grandi, INFN Bologna)
[Plot, around April 2nd, 18:00: the time to register the output of a single job (16 files) on the left axis, and the load on the client machine at registration time on the right axis; reference lines at 0.4 files/s ⇒ 25 Hz and 0.16 files/s ⇒ 10 Hz.]

Slide 35: RLS issues
- Total number of files registered in the RLS during DC04: ~570k LFNs, each with ~5-10 PFNs and 9 metadata attributes
- Inserting information into the RLS:
  - inserting PFNs (file catalogue) was fast enough when using the appropriate tools, produced in-course: LRC C++ API programs (~0.1-0.2 s/file); the POOL CLI with GUIDs took seconds per file
  - inserting files together with their attributes (file and metadata catalogue) was slow: ~3 s/file to register the output of a Tier-0 job (16 files), measured Apr 2 18:00 - Apr 5 10:00
  - we more or less survived; higher data rates would be troublesome
- Sometimes the load on the RLS increases and requires intervention on the server (e.g. log partition full, switch of the server node, un-optimized queries): able to keep up in optimal conditions, so-and-so otherwise
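Putting the slide's numbers together (my arithmetic): sustaining 25 Hz required registering about 0.4 files/s, i.e. one file every

$$ \frac{1}{0.4\ \text{files/s}} = 2.5\ \text{s/file}, $$

while registering files with their attributes took ~3 s/file, so the metadata path alone could not quite keep up with the 25 Hz target.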

Slide 36: PCP set-up: a hybrid model (by C. Grandi)
[Diagram: a physics group asks for a new dataset in RefDB; the Production Manager defines assignments there, and a Site Manager starts an assignment. McRunjob plus the CMSProd plug-in prepares the jobs, taking the dataset metadata from RefDB and recording the job metadata in the BOSS DB (data-level queries go to RefDB, job-level queries to BOSS). The prepared jobs go either to a local batch manager on a computer farm (shell scripts), to the Grid (LCG) scheduler via JDL on LCG-0/1 with the RLS, or to Grid3 as DAG jobs via DAGMan (MOP) with the Chimera VDL Virtual Data Catalogue and a planner. Push data or info / pull info.]
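The hybrid idea, one job-preparation front-end with three interchangeable submission back-ends, can be sketched as follows. The Job type and the routing function are illustrative, while bsub, edg-job-submit and condor_submit_dag are the real submission CLIs of LSF, LCG-2 and Condor DAGMan respectively.

```python
# Minimal sketch of the hybrid dispatch on this slide: the same prepared
# job can be routed to a local farm, to LCG, or to Grid3.
import subprocess

class Job:
    def __init__(self, script, jdl=None, dag=None):
        self.script, self.jdl, self.dag = script, jdl, dag

def submit(job, backend):
    if backend == "local":
        # shell scripts handed to the local batch manager (e.g. LSF)
        return subprocess.call(["bsub", job.script])
    elif backend == "lcg":
        # JDL handed to the Grid (LCG) Resource Broker
        return subprocess.call(["edg-job-submit", job.jdl])
    elif backend == "grid3":
        # DAG handed to Condor DAGMan (as used by MOP on Grid3)
        return subprocess.call(["condor_submit_dag", job.dag])
    raise ValueError("unknown backend: %r" % backend)
```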

Slide 37: PCP @ INFN statistics (1/4)
Generation step: ~79 Mevts in all of CMS, of which ~9.9 Mevts (~13%) done by INFN (strong contribution by LNL).
[Plots: generation step, INFN only and all CMS; Jun - mid-Aug 03 contribute to this slope.]
CMS production steps: Generation → Simulation → ooHitformatting → Digitisation.

Slide 38: PCP @ INFN statistics (2/4)
Simulation step [CMSIM+OSCAR]: ~75 Mevts in all of CMS, of which ~10.4 Mevts (~14%) done by INFN (strong contribution by the CNAF T1 + LNL).
[Plots: simulation step, INFN only and all CMS; Jul - Sep 03.]
CMS production steps: Generation → Simulation → ooHitformatting → Digitisation.

Slide 39: PCP @ INFN statistics (3/4)
ooHitformatting step: ~37 Mevts in all of CMS, of which ~7.8 Mevts (~21%) done by INFN.
[Plots: ooHitformatting step, INFN only and all CMS; Dec 03 - end of Feb 04.]
CMS production steps: Generation → Simulation → ooHitformatting → Digitisation. D. Bonacorsi

Slide 40: OSCAR in production
[Plot: PCP04 production with OSCAR begins; ~20 million events in 6 months, ~750K per week.]

Slide 41: Evolution of transfer requirements
[Chart only; no text transcribed.]

Slide 42: From the GDB to analysis at the T1 (José Hernández, CIEMAT)
[Diagram: Transfer → Replication → Job preparation → Job submission.]

Slide 43: Real-time (fake) analysis
Goals:
- Demonstrate that data can be analyzed in real time at the T1
- Fast feedback to reconstruction (e.g. calibration, alignment, checks of the reconstruction code, etc.)
- Establish automatic data replication to the T2s and make the data available for offline analysis
- Measure the time elapsed between reconstruction at the T0 and analysis at the T1
Architecture:
- A set of software agents communicating via a local MySQL DB (replication, data-set completeness, job preparation & submission)
- Use LCG to run the jobs; a private Grid Information System for CMS DC04; a private Resource Broker
J. Hernandez

Slide 44: From the GDB to analysis at the T1
[Diagram: reconstruction at the T0 publishes to the GDB (publisher and configuration agents); the EB agent moves data from the GDB to the EB; transfer and replication agents move it to the T1; drop and Fake Analysis agents trigger the analysis at the T1 and the replication to the T2. J. Hernandez]

Slide 45: Real-time DC04 analysis: summary
- Real-time analysis: two weeks of quasi-continuous running!
- Total number of analysis jobs submitted: ~15,000; overall Grid efficiency ~95-99%
- Problems:
  - the RLS query to prepare a POOL XML catalogue had to be done using the file GUIDs, being much slower otherwise
  - the Resource Broker disk filling up made the RB unavailable for several hours; the problem was related to the large input/output sandboxes; possible solutions: set quotas on the RB space for sandboxes, or configure RBs in cascade
  - a network problem at CERN prevented connections to the RLS and to the CERN RB
  - the Legnaro CE/SE disappeared from the Information System during one night
  - failures in updating the BOSS database due to overload of the MySQL server (~30%); the BOSS recovery procedure was used
N. De Filippis, A. Fanfani, F. Fanzago

Slide 46: Description of the RLS usage in DC04
[Diagram: the CERN RLS (POOL catalogue) serves the EB agents, the configuration agent and the Tier-1 transfer agents (RM/SRM/SRB, SRB GMCAT, XML publication agent, Replica Manager), with an ORACLE-mirrored RLS replica at CNAF.]
1. Register the files
2. Find the Tier-1 location (based on metadata)
3. Copy/delete files to/from the export buffers
4. Copy the files to the Tier-1s
5. Submit the analysis job (Resource Broker)
6. Process the DST and register the private data (local POOL catalogue, TMDB)
Specific client tools: the POOL CLI, the Replica Manager CLI, C++ LRC API based programs, LRC Java API tools (SRB/GMCAT), the Resource Broker.
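For readability, here is the same six-step flow written linearly as a Python sketch. Every helper is a stubbed, hypothetical stand-in (it only prints), since the real steps were performed by separate CLIs, C++ LRC programs and independent agents.

```python
# The six RLS-usage steps above, sketched as one linear flow.
def rls_register(lfn):
    print("1. register %s in the CERN RLS" % lfn)

def find_tier1(stream):
    print("2. find Tier-1 location for stream %r" % stream)
    return "CNAF" if stream in ("muon", "b-tau") else "PIC"

def stage_to_eb(lfn):
    print("3. copy %s to the export buffer" % lfn)

def copy_to_tier1(lfn, t1):
    print("4. copy %s to %s" % (lfn, t1))

def analyze_at(t1, lfns):
    print("5. submit analysis job at %s for %d files" % (t1, len(lfns)))
    print("6. process DST, register output in the local POOL catalogue")

stream, lfns = "muon", ["dst_run7_1.root", "dst_run7_2.root"]
for lfn in lfns:
    rls_register(lfn)
t1 = find_tier1(stream)
for lfn in lfns:
    stage_to_eb(lfn)
    copy_to_tier1(lfn, t1)
analyze_at(t1, lfns)
```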

Slide 47: Context for the agent system
[Diagram: the agents (and the TMDB) sit between the replica managers, the grid transfer tools, the file catalogue, the configuration agent and the metadata; open questions: analysis as a separate world? resource brokers? global system management/steering.]

Slide 48: Real-time analysis schema
[Diagram: the T0 Castor SE feeds the T1 disk SE at CNAF and the T2 disk SE at LNL (b/tau and muon datasets); a UI at CNAF talks to the RB at CERN/CNAF and to the CNAF or LNL Computing Elements.]
- The Replica Agent 1. replicates the data (DST files) to the disk SEs at the T1/T2 and 2. notifies that new files are available for analysis
- The Real-time Analysis Agent: 1. checks whether a file-set (run) is ready to be analyzed ("greenlight"); 2. prepares the job to analyze the run; 3. submits the job via BOSS to the RB
- The CMS software (ORCA 8.0.1) is installed by the CMS software manager using a GRID job based on the xcmsi tool; ORCA 8.0.1 on the UI to compile the analysis code

Slide 49: ttH analysis results
[Plots: muon and neutrino information: transverse energy, muon pT, isolated muon pT, isolation efficiency. Single muon efficiency = 88% (98% with respect to the selection).]

Slide 50: ttH analysis results
[Plots: jet information: total number of jets, number of b jets, ET of the non-b jets, ET of the b jets.]

Slide 51: ttH analysis results
[Plots: reconstructed masses: leptonic top, hadronic top, hadronic W.]

Slide 52: Data transfer and job preparation (Federica Fanzago, INFN Padova, 05/05/2004)
[Diagram, same schema as slide 48: the T0 Castor feeds the T1 disk SE at CNAF and the T2 disk SE at LNL (b/tau and muon datasets); the Replica agent notifies that new files are available for analysis; only if the collection file has the "greenlight" does the Real-time analysis agent prepare and submit (via BOSS, to the RB at CERN/CNAF) a job to analyse one run; the CMS software is installed by the CMS Software Manager using a GRID job based on the xcmsi tool; ORCA_8_0_1 is available on the UI to compile the analysis code.]

Slide 53: [no text transcribed; image-only slide]

Slide 54: An example: replicas to the disk SEs (a single day, April 19th)
[Plots, D. Bonacorsi: CNAF T1 disk SE ("green"): eth I/O input from the SE-EB; CNAF T1 Castor SE; eth I/O input from the Castor SE; TCP connections; RAM memory; Legnaro T2 disk SE: eth I/O input from the Castor SE.]

Slide 55: Data transfer
[Diagram: the CERN EB (3 disk SEs) feeds the PIC disk SE (then the PIC Castor SE) and the CNAF disk SE (then the CNAF Castor SE) at the Tier-1s; CNAF feeds the Legnaro disk SE and PIC the CIEMAT disk SE at the Tier-2s.]
- Transfer tools: the Replica Manager CLI was used for EB → CNAF and CNAF → Legnaro; the Java-based CLI introduces a non-negligible overhead at start-up. globus-url-copy + the LRC C++ API were used for EB → PIC and PIC → CIEMAT: faster
- Performance was good with both tools; the total network throughput was limited by the small file size
- Some transfer problems were caused by the performance of the underlying MSS: always use a disk SE in front of an MSS in the future?
A. Fanfani
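As an illustration of the second transfer path, the sketch below drives globus-url-copy (the basic GridFTP CLI, invoked as globus-url-copy <sourceURL> <destURL>) from a long-lived Python process, so the per-file start-up cost stays small, unlike re-launching a JVM-based CLI for each file. The hostnames and paths are invented.

```python
# Minimal sketch of a transfer loop around globus-url-copy.
import subprocess

def transfer(lfn):
    # source and destination URLs are invented for illustration
    src = "gsiftp://eb.cern.ch/castor/cern.ch/cms/dc04/%s" % lfn
    dst = "gsiftp://se.pic.es/cms/dc04/%s" % lfn
    rc = subprocess.call(["globus-url-copy", src, dst])
    if rc != 0:
        raise RuntimeError("transfer of %s failed (rc=%d)" % (lfn, rc))

for lfn in ["dst_run42_001.root", "dst_run42_002.root"]:
    transfer(lfn)
```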

Slide 56: Real-time DC04 analysis: job time statistics
The dataset bt03_ttbb_ttH analysed with the executable ttHWmu:
- Total execution time ~28 minutes, of which ~25 minutes of ORCA execution time
- Job waiting time before starting ~120 s; time for staging the input and output files ~170 s: this is the overhead of the GRID plus the waiting time in the queue
N. De Filippis, A. Fanfani, F. Fanzago
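Reading these numbers together (my arithmetic, not on the slide): the total execution time is consistent with the ORCA run plus the file staging,

$$ 25\ \text{min} + 170\ \text{s} \approx 27.8\ \text{min} \approx 28\ \text{min}, $$

while the ~120 s queue wait precedes execution; the per-job Grid overhead is therefore of order 5 minutes, small compared with the ~30 min of CPU per analysis job quoted on slide 8.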

