ATLAS HLT/DAQ. Valerio Vercesi, on behalf of all the people working on ATLAS TDAQ. CSN1, April 2006
Responsibilities:
- S. Falciano (Roma1): HLT Commissioning Coordinator
- A. Negri (Irvine, Pavia): Event Filter Dataflow Coordinator
- A. Nisati (Roma1): TDAQ Institute Board chair and PESA Muon Slice Coordinator
- F. Parodi (Genova): PESA b-tagging Coordinator
- V. Vercesi (Pavia): Deputy HLT leader and PESA (Physics and Event Selection Architecture) Coordinator
Italian activities:
- Level-1 barrel muon trigger (Napoli, Roma1, Roma2)
- Level-2 muon trigger (Pisa, Roma1)
- Level-2 pixel trigger (Genova)
- Event Filter Dataflow (LNF, Pavia)
- Selection software steering (Genova)
- Event Filter muons (Lecce, Napoli, Pavia, Roma1)
- DAQ (LNF, Pavia, Roma1)
- DCS (Napoli, Roma1, Roma2)
- Monitoring (Cosenza, Napoli, Pavia, Pisa)
- Pre-series commissioning and exploitation (everybody)
ATLAS Trigger & DAQ

Three-level selection chain (rates and latencies): 40 MHz bunch-crossing rate, reduced to ~100 kHz by LVL1 (latency 2.5 μs), to ~3 kHz by LVL2 (~10 ms) and to ~200 Hz by the Event Filter (~1 s), with local storage at ~300 MB/s.
- LVL1: hardware based (FPGA, ASIC); calorimeter/muon information at coarse granularity; pipeline memories; defines Regions of Interest (RoIs)
- LVL2: software (specialised algorithms); uses the LVL1 Regions of Interest; all sub-detectors at full granularity; emphasis on early rejection. Data flow from the Read-Out Drivers (RODs) into the Read-Out Subsystems hosting the Read-Out Buffers (ROBs); accepted events go to the Event Builder cluster
- Event Filter: offline-like algorithms, possibly seeded by the LVL2 result; works with the full event and full calibration/alignment information
LVL2 and the Event Filter together form the High Level Trigger (HLT).
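To make the rejection arithmetic explicit, here is a minimal Python sketch (an illustration of the quoted numbers only, not ATLAS software) chaining the rates and latency budgets of the three levels:

```python
# Minimal sketch of the three-level rate reduction quoted above.
# The numbers are the nominal figures from this slide; "budget" is the
# per-event decision time at each level, not a simulation of the system.

levels = [
    # (name, output rate in Hz, decision time budget in seconds)
    ("LVL1 (hardware)", 100e3, 2.5e-6),
    ("LVL2 (RoI-based software)", 3e3, 10e-3),
    ("Event Filter (offline-like)", 200.0, 1.0),
]

rate = 40e6  # LHC bunch-crossing rate in Hz
for name, out_rate, budget in levels:
    print(f"{name}: {rate:.3g} Hz -> {out_rate:.3g} Hz "
          f"(rejection ~{rate / out_rate:.0f}x, budget ~{budget:.2g} s)")
    rate = out_rate
```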
TDAQ Networks and Processing

(Architecture diagram.) Detector side (UX15/USA15): the ATLAS detector feeds the Read-Out Drivers (RODs) over dedicated links, steered by the first-level trigger and the Timing Trigger Control (TTC); 1600 Read-Out Links carry the data of events accepted by the first-level trigger into ~150 Read-Out Subsystem (ROS) PCs.
Surface side (SDX1), connected through Gigabit Ethernet switches: the RoI Builder forwards the Regions of Interest to the LVL2 Supervisor and the LVL2 farm; the DataFlow Manager, the Event Builder with its SubFarm Inputs (SFIs), and the pROS (which stores the LVL2 output) exchange event data requests, requested event data and delete commands with the ROSs; the Event Filter (EF) processes built events and the SubFarm Outputs (SFOs) write to local storage. All farms use dual- (or quad-) CPU nodes; the diagram quotes ~500 and ~1600 nodes for the LVL2 and EF farms, and ~100 and ~30 for the event-building and output layers.
Event data are pulled: partial event data at ≤ 100 kHz (1600 fragments of ~1 kByte each), full events at ~3 kHz; the recorded event rate is ~200 Hz.
Pre-series system in ATLAS Point-1

Eight racks (10% of the final dataflow, 2% of the EF), split between the SDX1 surface building and the USA15 underground area:
- One switch rack (TDAQ rack): GEth switching for L2+EB
- One ROS rack (TC rack + horizontal cooling): 12 ROSs hosting 48 ROBINs
- One full L2 rack (TDAQ rack): 30 HLT PCs
- Partial Supervisor rack (TDAQ rack): 3 HE PCs
- Partial EFIO rack (TDAQ rack): 10 HE PCs (6 SFI, 2 SFO, 2 DFM)
- Partial EF rack (TDAQ rack): 12 HLT PCs
- Partial ONLINE rack (TDAQ rack): 4 HLT PCs (monitoring), 2 LE PCs (control), 2 central file servers
- RoIB rack (TC rack + horizontal cooling): 50% of the RoIB
The ROS, L2, EFIO and EF racks each have one local file server and one or more local switches. Machine park: dual Opteron and Xeon nodes, uniprocessor ROS nodes. Operating system: SLC3 on net-booted, diskless nodes.
Commissioning and exploitation

Purpose and scope of the pre-series system: a fully functional, small-scale version of the complete HLT/DAQ, equivalent to a detector's 'module 0'.
- Pre-commissioning phase: validate the complete, integrated HLT/DAQ functionality; validate the infrastructure needed by HLT/DAQ at Point-1
- Commissioning phase: validate a component (e.g. a ROS) or a deliverable (e.g. a Level-2 rack) prior to its installation and commissioning
- TDAQ post-commissioning development system: validate new components (e.g. their functionality when integrated into a fully functional system); validate new software elements or software releases before moving them to the experiment
Pre-series tests at Point 1

Used an integrated software release (installation image) combining the offline release, Event Format version 2.4, the TDAQ release and the HLT release. First time the e/γ and μ selections ran in a combined menu, with muon, calorimeter and inner-detector algorithms.
Example Level-2 setup: 8 ROS emulators with preloaded data; data with Level-1 simulation: di-jets (17 GeV), single e (25 GeV), single μ (100 GeV). The dataflow applications are instrumented to measure execution times, network access times and transferred data sizes. Recently used up to 20 Level-2 processors, each with up to 4 applications: a factor 1.9 throughput improvement with respect to one application per node.
Event Filter infrastructure

Main features of the EF infrastructure software:
- Complete decoupling between data flow (EFD) and data processing (PTs), for safe data handling
- Maximum exploitation of SMP architectures
- Flexible and fully configurable design
Implementation example: within each EF sub-farm node, the EFD receives events from an SFI (Input task), sorts them (Sorting) to the external Processing Tasks (ExtPTs); each PT communicates through its PTIO; accepted events are routed by the Output task to the appropriate SFO stream (standard, calibration, debug), rejected ones to Trash. A sketch of the decoupling idea follows.
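As a rough illustration of the EFD/PT decoupling, here is a hedged Python sketch: the EFD/PT names mirror the slide's terminology, but the real EFD is a C++ application using shared memory, so everything below is a stand-in:

```python
# Illustrative sketch (not the real EFD code): the dataflow process (EFD)
# owns the event and hands it to independent Processing Tasks (PTs) through
# a queue, so a crashing PT cannot lose an event still held by the EFD.
import multiprocessing as mp

def processing_task(pt_id: int, events, decisions) -> None:
    """A PT: pulls events, runs the (stub) selection, reports a decision."""
    while True:
        event = events.get()
        if event is None:                        # sentinel: shut down
            break
        accept = sum(event["data"]) % 2 == 0     # stand-in for real algorithms
        decisions.put((event["id"], pt_id, accept))

if __name__ == "__main__":
    events, decisions = mp.Queue(), mp.Queue()
    pts = [mp.Process(target=processing_task, args=(i, events, decisions))
           for i in range(2)]                    # one PT per CPU on an SMP node
    for pt in pts:
        pt.start()
    for eid in range(4):                         # EFD side: feed events
        events.put({"id": eid, "data": [eid, eid + 1]})
    for _ in pts:
        events.put(None)
    for _ in range(4):
        print("decision:", decisions.get())
    for pt in pts:
        pt.join()
```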
EF tests

Checks and studies of the infrastructure:
- Optimization of the communication protocol between EF and SFI/SFO: improved performance for small (calibration) events and for remote farms
- Additional functionality added
Integration and validation of the selection algorithms: the algorithms are derived from the offline, but the operating conditions differ, e.g. job options adapted to the online environment and concurrent access to the DB. The muon slice has been integrated and validated; the other slices are being validated.
Timing test: EF-only, 9 EFDs with 2 PTs each, TrigMoore algorithm, one MySQL server (at the CERN site). All 9 nodes connect to MySQL simultaneously, and each of the 18 PTs opens not 1 but 3 connections to CDI (3×18 = 54 connections: this scales fast). Measured initialization times: 6.90 ± 0.20 s for the geometry, 0.10 ± 0.03 s for AMDCsimRecAthena, 0.06 ± 0.03 s for the magnetic field. DB caching was used.
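The DB-caching remedy can be illustrated with a small sketch; the folder name and the simulated latency are invented, and `functools.lru_cache` merely stands in for the real CORAL/CDI caching layer:

```python
# Hedged sketch of the DB-caching idea: instead of every PT re-reading
# static payloads (geometry, field maps) over its own connections, cache
# each payload after the first read. The fetch below is simulated.
import functools
import time

@functools.lru_cache(maxsize=None)
def read_payload(folder: str) -> bytes:
    """First call pays the (simulated) remote-server latency; later calls
    on the same node are served from the local cache."""
    time.sleep(0.1)                  # stand-in for network + server latency
    return f"payload:{folder}".encode()

for pt in range(3):                  # several PTs on one node
    t0 = time.perf_counter()
    read_payload("/MDT/geometry")    # hypothetical folder name
    print(f"PT {pt}: {time.perf_counter() - t0:.3f} s")
# PT 0 pays the full latency; PTs 1 and 2 return almost instantly.
```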
HLT Core Software

Work plan defined for the 2005 design review:
- HLT compliant with trigger operation: steering and sequencing of algorithms; integration with the most recent TDAQ software; cycling through the TDAQ state machine (start/stop/reinitialize/…); HLT trigger configuration from the database; use of the conditions DB in the HLT; integration with the online services for error reporting and system monitoring. Many of these issues have a direct impact on the selection algorithms; the functionality needs to be available early in the core software to give algorithm developers time.
- System performance optimization: instrumentation to measure network transfer times, data volumes and ROS access patterns (complementary to the work in the PESA group)
- For commissioning and readout tests: basic fault tolerance; stability
Software Installation Image

Contents: TDAQ, Offline, HLT and TDAQ Common project builds from the software repositories, plus example partitions/data files, test suites and setup/installation scripts (~6.5 GByte of software in one file).
Originally developed for the Large Scale Test 2005, the image contains a consistent set of all the software needed to run a standalone HLT setup, completely tested before deployment by PESA, HLT and DAQ specialists. It was used for the first exploitation of the pre-series and is also useful for installations outside CERN and new test-bed setups. The Point-1 installation procedure is presently being worked out; future images will be snapshots of the Point-1 installation.
Trigger Configuration Data Base

TriggerTool: a GUI for DB population and for menu changes by experts (HLT and LVL1). DB population scripts and a read-only interface connect the TriggerDB to the configuration system and its compilers, serving the shift crew, offline users and experts in both online and offline running.
TriggerDB: stores all the information needed to configure the trigger: LVL1 menu, HLT menu, HLT algorithm parameters, HLT release information. Versions are identified with a key (Configuration and Conditions DB).
Retrieval of information for running: get the information via a key, either as XML/JobOption files or by direct DB read-out, for both online and offline running, with LVL1 + HLT as an integrated system. A sketch of key-based retrieval follows.
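A minimal sketch of key-based retrieval, assuming a hypothetical table layout (the real TriggerDB is a relational database accessed through dedicated tools; all names below are illustrative):

```python
# Illustrative sketch of key-based trigger configuration retrieval.
# One key pins the full LVL1+HLT configuration for online and offline use.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE trigger_config (key INTEGER, item TEXT, prescale INTEGER)")
db.executemany(
    "INSERT INTO trigger_config VALUES (?, ?, ?)",
    [(42, "e25i", 1), (42, "2e15i", 1), (42, "mu20", 1)],
)

def load_menu(key: int) -> list:
    """Retrieve every menu item bound to a configuration key; the same
    call serves online running and offline re-running of the trigger."""
    rows = db.execute(
        "SELECT item, prescale FROM trigger_config WHERE key = ?", (key,)
    ).fetchall()
    if not rows:
        raise KeyError(f"no trigger configuration for key {key}")
    return rows

print(load_menu(42))   # [('e25i', 1), ('2e15i', 1), ('mu20', 1)]
```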
Global Monitoring Scheme

(Diagram.) Data sampled from the RODs, the ROSs and the Event Builder flow through the Event Monitoring Service into two paths: GNAM, with its detector-specific plug-ins, and Athena Monitoring, with detector-specific Athena algorithms, plus the event displays. Both paths publish to the Online Histogramming Service, whose content is collected by the Gatherer and served to the Online Histogram Presenter, the analysis framework and the monitoring data storage.
GNAM Monitoring

Principle: decouple and hide the common actions from the monitoring algorithms.
GNAM core (common actions): synchronization with the DAQ; event sampling from the dataflow (Event Monitoring Service); decoding of the detector-independent part; publication and saving of the histograms (Online Histogramming Service); handling of the commands from the presenter/viewer/checker (update, reset, rebin); tools for the algorithms (circular buffers, histogram flags, histogram metadata, …).
Monitoring algorithms (dynamic libraries loaded at run time): detector-dependent decoding; booking and filling of the histograms; handling of specific commands. A sketch of the plug-in pattern follows.
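The plug-in pattern can be sketched as follows; `MonitoringPlugin`, `MdtPlugin` and `load_plugin` are hypothetical names used only to illustrate how a core can load detector code dynamically and drive it through a fixed book/fill interface:

```python
# Sketch of the plug-in pattern described above: the core handles sampling
# and publishing, detector code only books and fills histograms.
import importlib
from typing import Protocol

class MonitoringPlugin(Protocol):
    def book(self) -> None: ...                # create histograms
    def fill(self, event: bytes) -> None: ...  # detector-specific decoding

class MdtPlugin:
    """Stand-in for a detector library loaded at run time."""
    def book(self) -> None:
        self.nhits = []
    def fill(self, event: bytes) -> None:
        self.nhits.append(len(event))          # fake "decode and histogram"

def load_plugin(module: str, cls: str) -> MonitoringPlugin:
    """The core loads detector libraries dynamically, by name."""
    return getattr(importlib.import_module(module), cls)()

# The real core would call load_plugin("mdt_monitoring", "MdtPlugin");
# here we instantiate the local stand-in directly.
plugin = MdtPlugin()
plugin.book()
for raw in (b"\x01\x02", b"\x03"):             # events sampled from the dataflow
    plugin.fill(raw)
print(plugin.nhits)                            # histograms then go to the OHS
```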
Online Histogram Presenter (OHP)

An interactive presenter developed in close connection with the GNAM monitoring, but usable to display histograms published on the OHS by any producer. Designed for two modes of use:
- expert mode: a browser of all the histograms on the OHS
- shifter mode: a histogram presenter showing only predefined sets of histograms in configured tabs
Fully interactive with the GNAM core (rebin, reset, …). Completely redesigned after the CTB experience, to minimize network traffic and to scale to the whole of ATLAS. A very useful collaboration with Computer Science students has been established. The browser part offers preconfigured sets of histograms in tabs and sends commands to the core (rebinning, reset, …).
Monitoring: commissioning

An online detector monitoring/analysis/validation system based on GNAM has been developed: it produces histograms displayed with the Online Histogram Presenter (OHP) and feeds an online event display (in collaboration with Saclay). In use for commissioning since September 2005. Under development: retrieval of the detector configuration from the DB; automatic checks and alarm generation. Used by Tile and MDT; other detectors have expressed interest.
ROD Crate DAQ

RCD is used as the interface to the RODs for control, configuration, monitoring and data readout (via VME). RCD development has had essentially two phases:
- The ReadoutApplication (the application underlying the ROD Crate DAQ, the ROS and the data-driven Event Builder) was substantially modified to accommodate all detector requests and provide every functionality needed for commissioning: standardized access to the Information Service and Online Histogramming; data access in response to interrupts; simpler construction of the classes for module control and acquisition; definition and implementation of a data-driven event builder; libraries for standardized error handling
- Detector support for commissioning
A new development was needed to guarantee, through a simple common interface to RAL/CORAL, that access to the configuration database is thread safe (during the initialization phase); a sketch follows.
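A hedged sketch of the thread-safety idea (the real implementation wraps RAL/CORAL in C++; the gateway class and the key name here are invented):

```python
# Sketch of the thread-safety fix described above: many readout threads
# initialize concurrently, but access to the configuration database goes
# through one lock-protected gateway.
import threading

class ConfigDBGateway:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._cache = {}

    def get(self, key: str) -> str:
        # Serialize the non-thread-safe backend call; cache the answer so
        # the lock is only contended during the initialization phase.
        with self._lock:
            if key not in self._cache:
                self._cache[key] = f"value-of-{key}"   # stand-in for a DB read
            return self._cache[key]

gateway = ConfigDBGateway()
threads = [threading.Thread(target=lambda: gateway.get("rod.mapping"))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(gateway.get("rod.mapping"))
```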
RCD activities

- Detector-specific parts of the ROD Crate DAQ for MDT and RPC
- Databases: the cabling database (a lot of work!) and the configuration database, with online and monitoring interfaces to both
- Detector Control System (DCS): the whole DCS of the RPCs and the HV/LV control of the MDTs are Italian responsibilities
- Muon Sector 13: combined MDT-Tile runs triggered by scintillators; synchronization studies
MDT online calibration

The precision required for the t0 and r-t autocalibration needs inclusive muon rates of 0.3-3 kHz: not suitable for EF calibration streams; different event building and streaming are needed (under study). Already possible using the LVL2 infrastructure with some modifications: in each L2PU a calibration thread fills a memory queue, and a dequeue step ships the fragments to a local server; ~20 local servers (each fed by ~25 L2PUs, at ~480 kB/s each) feed a gatherer at ~9.6 MB/s over TCP/IP, UDP, etc., which writes to the calibration farm disk server. A sketch of the queue-and-ship pattern follows.
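A minimal sketch of the queue-and-ship pattern, assuming invented fragment names; the real implementation is a thread inside the C++ L2PU, with TCP/UDP transport to the local server:

```python
# Sketch of the calibration path above (illustrative, not the real L2PU):
# the trigger thread enqueues calibration fragments into a memory queue;
# a separate thread dequeues and ships them to the local server, so
# calibration I/O never blocks the Level-2 decision.
import queue
import threading

calib_queue = queue.Queue(maxsize=1000)

def l2pu_thread() -> None:
    """Trigger side: drop the MDT hits of each accepted muon RoI and return."""
    for i in range(5):
        calib_queue.put(f"mdt-fragment-{i}".encode())
    calib_queue.put(None)                 # sentinel: end of run

def dequeue_thread() -> None:
    """Shipping side: drain the queue toward the local calibration server."""
    while (frag := calib_queue.get()) is not None:
        print("send to local server:", frag.decode())  # stand-in for TCP/UDP

t1 = threading.Thread(target=l2pu_thread)
t2 = threading.Thread(target=dequeue_thread)
t1.start(); t2.start()
t1.join(); t2.join()
```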
SDX1 - TDAQ Point-1

A total of 99 racks can be placed in SDX1: 49 on the lower level (LVL2, EB, …) and 50 on the upper level (EF).
ROS Overview

(Same dataflow diagram, centred on the Read-Out Subsystems in USA15, which receive the 1600 Read-Out Links from the RODs and serve the event data requests and delete commands coming from the LVL2 and Event Builder side.) In total ~150 ROS PCs will have to be installed; each ROS PC will be equipped with 3 or 4 ROBIN cards and one 4-port Gigabit Ethernet NIC.
ROS Hardware Procurement

ROS PCs:
- 1st batch (50 PCs): ordered and received
- 2nd batch (60 PCs): ordered; delivery scheduled for May
- Remaining ROS PCs + spares: will be ordered later
ROBINs:
- German production (350 cards): ordered and received (~20 cards did not pass the production test and still need to be repaired)
- UK production (350 cards): ordered; delivery scheduled for March
4-port NICs (Silicom): ordered; delivery scheduled for May
Current Status of ROS Racks in USA15

(Status table.) Racks: six Liquid Argon ROS racks (Y.…A2, sharing a control switch) and the TileCal rack (Y.…A1). For each rack the table tracks four steps: ROS PCs installed, power & network cables, commissioned (ROS level), commissioned (ROD-ROS); the entries are a mix of 'yes', 'no' and '50%', with installation essentially complete and ROD-ROS commissioning still to come.
Physics and Event Selection Architecture

- PESA Core SW: responsible for the implementation of the Steering and Control (built around standard Athena components)
- PESA Algorithms: develops HLT software using realistic data access and handling; specialized LVL2 and Event Filter algorithms adapted from offline; deployment in HLT testbeds
- PESA Validation and Performance: evaluates the algorithms on data samples to extract efficiencies, rates, rejection factors and physics coverage
The structure stems from the original organization, laid out in parallel with the Combined Performance working groups, in "vertical slices" (LVL1+LVL2+EF): electrons and photons; muons; jets/taus/ETmiss; b-jet tagging; B-physics.
HLT Reconstruction Algorithms

HLT feature-extraction algorithms are available for each slice.
- Calorimeter algorithms: LVL2 and EF algorithms ready for e/γ; τ implementation ready at LVL2; an offline tool adapted to the EF is ready for JetCone
- Muon algorithms: LVL2 and EF algorithms are available for the barrel region; work has started on extending the LVL2 algorithm to the endcap; ID-to-muon track-matching tools are available at LVL2 and EF; muon isolation studies using the calorimeters are being performed
- ID tracking: tracking with Si data is ready at LVL2 and EF, with more approaches studied in parallel; tools are available both for track extension to the TRT and for stand-alone TRT reconstruction, with emphasis on providing a robust tool for commissioning and early running
Selections: LVL2 muons

- Implemented the curvature radius instead of the sagitta: more suitable for the endcap, recovers efficiency in the barrel; same algorithm across η = ±2.4; endcap extension in progress (the standard pT-radius relation is sketched below)
- Combined reconstruction (μComb) with the ID: refines the μFast pT by means of ID data, giving a sharper 6 GeV threshold
- Resolution: new LUTs for the radius, slightly worse than the sagitta-based ones, but the resolution is OK for standard sectors and the turn-on curves are comparable; worse efficiency in the feet region (special sectors)
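The radius-based pT estimate rests on the standard relation pT [GeV] ≈ 0.3 · B [T] · R [m]. A minimal sketch follows; the field value is a placeholder, and the real algorithm uses lookup tables rather than this closed formula:

```python
# Minimal sketch of the curvature-radius idea (not the muFast LUT itself):
# in a field B a track of transverse momentum pT bends with radius
# R = pT / (0.3 * B)  (pT in GeV, B in tesla, R in metres),
# so a measured radius converts directly to a pT estimate.

def pt_from_radius(radius_m: float, b_tesla: float = 0.5) -> float:
    """pT in GeV from the curvature radius; 0.5 T is a stand-in for the
    average bending field, not an official number."""
    return 0.3 * b_tesla * radius_m

for r in (40.0, 133.3, 266.7):       # radii in metres
    print(f"R = {r:6.1f} m  ->  pT ~ {pt_from_radius(r):5.1f} GeV")
```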
LVL2 cosmics

Muon track from the surface: straight-line extrapolation from y = +98.3 m through the BIS, BML, BMS and BOS stations. MDT hits are shown at the station centres in X-Y; the Z-R view is the bending plane; RPC hits are pairs of (phi, eta) strips. The MDT and RPC hits are there and look fine, and the conversion of the RDOs to coordinates seems fine too. Next steps: MuFast modifications. Input (Monte Carlo!): /castor/cern.ch/user/m/muonprod/cosmics/cosmics.dig.atlas-dc3-02._0004.pool.root
Selections: EF muons

Studies on single-muon selections have been performed for two scenarios: a 6 GeV threshold at low luminosity and a 20 GeV threshold at design luminosity (10^34 cm^-2 s^-1). The cuts are defined so that 95% efficiency is achieved at the threshold value. Layout Q (barrel only); MuId Combined used at the EF; the MuComb rate reduction at LVL2 is still to be included; fake rates are expected to be ~1% (~12%) of the total rate for safety factor ×1 (×5) with this threshold (seeded mode).
Compared with LVL2, the EF shows lower efficiency-plateau values and less sharp curves near the thresholds; more points are needed for a better curve definition.
Muon sources and rates at L = 10^34 (columns: no background, safety factor ×1, ×5): π/K 54 Hz and 48 Hz, b 77 Hz and 68 Hz, c 30 Hz and 26 Hz, W 22 Hz and 19 Hz, t negligible; totals ~185 Hz, ~190 Hz and ~180 Hz.
Selections: b-tagging

Two classes of tagging variables can be used: track variables (x_T) and collective (vertex) variables (x_V). The weight of each RoI is computed with the likelihood-ratio method, W = Π_i S_sig(x_i) / S_bkg(x_i), where S_sig and S_bkg are the probability densities for signal (b-jets) and background.
- W_T: transverse (d0 / σ_d0) and longitudinal (z0) impact parameters
- W_V: secondary-vertex energy and mass (statistical approach)
Recent work combines SimpleVertex (1-dim fit) and VKalVrt (an offline algorithm adapted to LVL2). Performance is compared for: impact parameters only; impact parameters + probabilistic vertex; impact parameters + VKalVrt/SimpleVertex combined.
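A toy sketch of the likelihood-ratio weight; the Gaussian densities below are invented stand-ins for the real S_sig/S_bkg reference distributions:

```python
# Sketch of the likelihood-ratio weight described above. The probability
# densities are toy Gaussians standing in for the calibrated S_sig/S_bkg
# distributions; in the real algorithm they come from reference histograms.
import math

def gauss(x: float, mu: float, sigma: float) -> float:
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def roi_weight(variables) -> float:
    """W = product over tagging variables of S_sig(x) / S_bkg(x);
    large W means b-jet-like, W ~ 1 or below means background-like."""
    w = 1.0
    for x in variables:
        s_sig = gauss(x, 2.0, 1.5)   # b-jets: large signed impact parameter
        s_bkg = gauss(x, 0.0, 1.0)   # light jets: impact parameter ~ 0
        w *= s_sig / s_bkg
    return w

print(roi_weight([0.1, -0.2]))   # background-like RoI: W < 1
print(roi_weight([2.5, 3.0]))    # b-like RoI: W >> 1
```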
Trigger-aware analysis

Analyses use trigger information as a "pre-processor" to correctly evaluate efficiencies, physics reach, etc. The reconstructed objects used by the trigger are saved in the ESD/AOD file and can be compared with truth/reconstructed information. It is possible to re-play the trigger decision by running the hypothesis algorithms on these objects: only the settings of the hypothesis algorithms can be changed in the analysis, so the effect of different threshold settings can be measured. The chain runs: data taking, production, analysis.
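A minimal sketch of the re-play idea, with invented AOD structures: the stored trigger objects are fixed, and only the hypothesis threshold is varied:

```python
# Sketch of "re-playing" the trigger decision from stored objects
# (hypothetical structures; the real analyses use the trigger EDM in the
# AOD): the features are frozen at data-taking time, only the cut moves.

saved_trigger_objects = [          # e.g. LVL2 electron candidates in the AOD
    {"event": 1, "et_gev": 28.4},
    {"event": 2, "et_gev": 21.7},
    {"event": 3, "et_gev": 64.0},
]

def replay_hypothesis(et_cut_gev: float):
    """Re-run only the hypothesis step with a different threshold."""
    return [o["event"] for o in saved_trigger_objects if o["et_gev"] > et_cut_gev]

print(replay_hypothesis(25.0))   # events passing an e25-like cut: [1, 3]
print(replay_hypothesis(20.0))   # loosened threshold: [1, 2, 3]
```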
Trigger & Physics Weeks

Aim: bring together trigger, detector performance and physics studies, and expose trigger issues and strategy to a broad ATLAS audience. Focus on the initial scenario: start-up luminosity, 200 Hz output.
Start-up menus: electrons/photons and muons

Electrons/photons: sketch some pre-scales; crude estimates "to guide the eye", keeping the total e/γ output rate constant (example HLT items with LVL1/HLT rates and pre-scale factors: e10i, e15i, e20i, 2e10i). Photons are not yet worked out: the assessment of both the di-photon thresholds and the high-pT single-photon threshold is to be revisited; photons are useful to obtain an unbiased jet sample.
Muons: the absent (or very low) cavern background makes LVL1 commissioning "easier" (full shielding, 75 ns bunch spacing). Build menus allowing to: measure cross sections from (very) low pT, going as low as ~5 GeV; check W, Z, J/ψ, Υ, …; study ways to increase the trigger acceptance.
Accounting

INFN contribution to the pre-series: 140 kCHF (ROS racks, monitoring, operations, switches, file server), completely spent by the end of 2005. For this, and for everything else, VV receives a copy of all invoices.
CORE contributions:
- Online Computing System (monitoring, operations): 45 kCHF sent to CERN in May 2005; four file servers already purchased
- Read-Out System (ROS racks): the CERN tender was completed with a considerable delay for the first tranche (50 ROS); the remainder is being delivered (60 ROS in May); about 200 kCHF charged to INFN so far (on Roma1)
- LVL2 processors, Event Building, Event Filter processors: detailed specifications being finalized (above all for the HLT processors); a market survey done by CERN-IT may be usable; studies are also under way to evaluate the latest technologies (Moore's-law failures…)
- Infrastructure: 80 kCHF (cables, racks, cooling, …)
Cost Profile (kCHF)

(Cost table by year. Rows: Pre-series, Detector R/O, LVL2 Processors, Event Builder, Event Filter, Online, Infrastructure; with the INFN total, the TDR total and the INFN percentage (%).)
INFN Milestones

- 30/06/2005 TDAQ: installation, test and use of the pre-series (~10% TDAQ slice). Fully achieved in October; the delays came mainly from component procurement. Proposal: mark it 100% and change the "matching date".
- 24/12/2005 TDAQ: installation and test of the Pixel, LAr, Tile and Muon ROSs (interfacing to the ROD crate and integration into the DAQ). Strong dependence on the ROS delivery date (slow tender, etc.); no problem "of principle", the work programme is clear and the pre-series experience is directly transferable. Proposal: mark 50% at the foreseen date.
- 30/04/2006: completion of the pre-series tests and definition of the functionality to support TDAQ commissioning.
- 31/08/2006: commissioning of the detectors' ROS slices using the pre-series functionality (module-0 of the final system).
- 31/12/2006: integrated data taking of the detectors in the pit with cosmic rays.
Conclusions

The TDAQ project is entering a phase of full maturity: make available to the detectors all the infrastructure needed for the cosmic runs, and prepare the full commissioning of the system in view of LHC start-up. The Italian contributions are clearly visible and well recognized within the Collaboration: hardware integration, algorithm development, positions of responsibility, finances. The time available for TDAQ commissioning is very compressed, and it is essential in order to guarantee the necessary data flow already at start-up.
Goal of Early Commissioning… Prepare for Unexpected Events…
Spares
LVL2 tests

(Table: per-sample results for the mu, jet, e and prefiltered data files. Columns: LVL2 rate (Hz), LVL2 latency (ms), processing time, RoI collection time, DAQ time fraction, data rate (MB/s), data size (bytes), number of requests per event, data per request.)
Example: Data Base Schema

The schema links the LVL1 and HLT sides: algorithms, jobOptions, the trigger menu and the software release. The HLT keys are stored in the conditions DB and used to retrieve the information (online and offline). An early prototype of the HLT part has already run on a 6-node system with the muon selection algorithm.
Routing calibration data
Selections: e/γ

Rate and efficiency studies were performed for the main physics triggers: e25i, 2e15i, e60, γ60, 2γ20i. The selection proceeds through the LVL1, L2 Calo, L2 ID, L2 Match, EF Calo, EF ID and EF Match steps, each with its efficiency (%) and output rate (kHz at LVL1, Hz thereafter). The results are in perfect agreement with the Rome results. Cluster composition of the selected sample: W→eν 21%, Z→ee 5%, direct photons or quark bremsstrahlung 5%, e from b, c decays 37%, rest 32%. Tools have been developed to optimize the selections. In the future, results will be provided as efficiency-versus-rejection curves, giving a continuous set of working points: essential for trigger bandwidth optimization.
Jets/Taus/ETmiss

Taus: the LVL2 calo algorithm for taus was recently separated from egamma; performance studies of the selection strategies on the discriminating variables are ongoing. At present only the EM calibration is applied to the cluster energies: a tau calibration is needed (also for the EF; H1 style, as in the offline?). A first implementation of the EF "seeded" TrigTauRec is already working, making use of offline tools. Once the selection strategies are defined, trigger-aware physics analyses (studying the effect of the hadronic tau trigger) can be performed.
ETmiss: three different data-preparation strategies are being considered (a sketch of the first follows): read out the calorimeter and unpack the cells (the unpacking time may dominate); read out the calorimeter and use the Ex/Ey calculated in the ROD (faster, but what resolution?); read out the TriggerTowers from the LVL1 Preprocessor.
Jets: ongoing work to define and study a general strategy for pre-scales, in particular for jet objects.
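A minimal sketch of the first ETmiss strategy (cell unpacking), with invented cell values; the ROD-sum option would replace the loop with precomputed Ex/Ey:

```python
# Sketch of the cell-based ETmiss calculation: sum the transverse energy
# vectors of all (unpacked) cells and take minus the resultant.
# The cell list is invented for illustration.
import math

cells = [  # (E in GeV, eta, phi)
    (30.0, 0.4, 0.1),
    (25.0, -1.2, 2.8),
    (15.0, 2.0, -1.5),
]

ex = ey = 0.0
for e, eta, phi in cells:
    et = e / math.cosh(eta)      # transverse energy of the cell
    ex -= et * math.cos(phi)     # missing ET is minus the vector sum
    ey -= et * math.sin(phi)

met = math.hypot(ex, ey)
print(f"ETmiss ~ {met:.1f} GeV, phi = {math.atan2(ey, ex):.2f}")
```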
Jet triggers and prescales
RoI-Based B-physics

Aim: use the calorimeter to identify the regions of the event containing B-decay products: EM RoIs for e and γ, jet RoIs for hadronic B decays. Keep the multiplicity low (1-2) to minimize data transfer and CPU, whilst maximising the efficiency for the events used in the physics studies. The effect of different thresholds (EM & HAD) and of the jet RoI size on this multiplicity was studied using Rome data (1×10^33) with the new TTL LVL1 simulation and pile-up, for tower thresholds of 500 MeV (default), 750 MeV and 1000 MeV (plot: LVL1 EM RoI multiplicity vs. ET cut). The requirement on the multiplicity implies an ET threshold of ~2 GeV for the LVL1 EM RoI.
PESA Milestones

- LVL1/HLT AODs fully available in Rel 12 for trigger-aware analyses (Apr 06): very preliminary AOD information is available in Rel 11; a detailed description of the Rel 12 deliverables was prepared by Simon
- HLT algorithm reviews complete (Jun 06): a detailed review of the ID LVL2 algorithms has already taken place, focused on system performance and implementation; results fed back into Rel 13
- Online tests of the selection slices with preloaded mixed files and large menus (Sep 06): first production version of the trigger configuration
- Selection software ready for the cosmic run (Oct 06): already in PPT, but its meaning needs to be refined
- Blind test of the HLT selection (Dec 06): in discussion with physics coordination; run the full menu on a sample of representative events from the initial ATLAS output (T&P Week)
PESA Planning

Several interactions with the PESA slice coordinators and with the algorithm developers: the aim is to bring together material that helps reinforce the content of the proposed milestones and monitor the development process (only the first iteration has been completed so far). The work is always described in a "task oriented" fashion, to help identify weak areas and to facilitate job assignment. An attempt is being made to build a full PESA planning (Excel) from this information, to monitor progress and allow for updates, suggestions and improvements; there is clearly more detail on near-future objectives than on far-away ones.
Excerpt from the planning table (Task: Comments; Expected; PPT Workpackage):
- LVL1 Trigger, definition of EDM: done?; Dec 05
- e/gamma implementation in the common framework: RTT, ESD, Root Analysis Framework; February 2006; DH-W101
- Develop tools for automatic optimisation of the e/gamma selections: scanning of parameter space, minuit fitting, neural net, multi-variant method being developed; March 2006; DH-W101
- Check the trigger selection w.r.t. the offline selection for electrons/photons: needs new evaluations from the offline groups; March 2006; DH-W101
- Establish a set of pre-scaled e-triggers using the Rome datasets: photons as well; February 2006; DH-W101
- First evaluation of trigger efficiencies from data: for electrons, photons and muons; March 2006; DH-W101
- Strategies for ETmiss calculations: March 2006
- Revised Steering Configuration: DH-W110
- Prototype LVL2 Hypothesis algorithm for all: examples to be further developed in validation; February 2006; DH-W147
- Provide documentation and examples to the physics community: for all selections; March 2006
- Milestone April 2006: LVL1/HLT AODs completely available in version 12 for trigger-aware analyses