Storage (yesterday, today and tomorrow)
Luca dell'Agnello, INFN-CNAF
Session topics
- Service consolidation
  - StoRM support
  - Status of the MSS at the Tier1
  - Integrated management of the storage systems for computing and for services
- A look at the future
  - Tests of file systems/storage systems for HEP
  - Ideas (and doubts) on the future of storage
  - Start of the discussion ahead of the WLCG storage meeting
StoRM
- The product team (~3.5 FTE) is in charge of development, testing, certification and support (see talk by F. Giacomini)
  - ...plus some "external" components: the GridFTP checksum plug-in and the GPFS-TSM layer
- Staged roll-out with "early adopters", ~2 weeks: INFN-T1, INFN-CNAF, INFN-BARI, INFN-PADOVA, ...
- An "urgent fix" procedure is foreseen
  - Only for blocking problems AND only if no workaround exists
  - Ad-hoc rpm released for the affected site
- 3 support levels
  1. EGI helpdesk (ticket routing)
  2. Support unit in INSPIRE (experts on shift)
  3. Developers: to be contacted only for bugs!
GEMSS: GPFS/TSM/StoRM integration
- GPFS is a parallel file system, while TSM is a high-performance backup/archiving solution
- Both products (from IBM) are widely used and widespread (separately)
- A large GPFS installation has been in production at CNAF since 2005, with increasing disk space and number of users
  - 2 PB of net disk space (~6 PB in Q2 2010), partitioned into several GPFS clusters
  - 150 disk servers (NSD + GridFTP) connected to the SAN
- We combined the features introduced in version 3.2 of GPFS and TSM with StoRM to provide a transparent grid-enabled HSM solution
  - An interface between GPFS and TSM (named GEMSS) was implemented to enable tape-ordered recalls
- CASTOR phase-out nearly completed (Q1 2010)
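To illustrate what "tape-ordered recalls" means, the following is a minimal conceptual sketch (not the actual GEMSS code; the request fields and file names are hypothetical): pending recall requests are grouped by tape volume and sorted by their position on the tape, so each cartridge is mounted once and read sequentially instead of seeking back and forth.

```python
from dataclasses import dataclass
from itertools import groupby
from operator import attrgetter

@dataclass
class RecallRequest:
    path: str       # file to bring back on disk (hypothetical example paths)
    tape: str       # tape volume label holding the file
    position: int   # offset of the file on that tape

def tape_ordered(requests):
    """Group recall requests by tape volume, ordering each group by
    on-tape position, to minimize mounts and intra-tape seeks."""
    ordered = sorted(requests, key=attrgetter("tape", "position"))
    return [(tape, [r.path for r in group])
            for tape, group in groupby(ordered, key=attrgetter("tape"))]

reqs = [
    RecallRequest("/gpfs/cms/a.root", "T0002", 700),
    RecallRequest("/gpfs/cms/b.root", "T0001", 300),
    RecallRequest("/gpfs/cms/c.root", "T0002", 100),
    RecallRequest("/gpfs/cms/d.root", "T0001", 50),
]
for tape, paths in tape_ordered(reqs):
    print(tape, paths)
# T0001 ['/gpfs/cms/d.root', '/gpfs/cms/b.root']
# T0002 ['/gpfs/cms/c.root', '/gpfs/cms/a.root']
```

The real interface of course also handles drive scheduling and failures; the point here is only the reordering step that turns random user-driven recall requests into sequential tape reads.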
Building blocks of the new system
A disk-centric system with five fundamental components:
1. GPFS: disk-storage software infrastructure
2. TSM: tape management system
3. StoRM: SRM service
4. GEMSS: StoRM-GPFS-TSM interface
5. GridFTP: WAN data transfers
[Diagram: data-file flow among StoRM, GridFTP (WAN data transfers), GPFS with ILM, the GEMSS data migration and data recall processes, TSM and the worker nodes, across SAN, TAN and LAN]
Storage: an always-hot topic within WLCG
- Critical review of the current storage model
  - Strong push from the experiments
  - Various ideas, not yet focused
- Some "milestones" already fixed
  - Storage jamboree (Amsterdam, June)
  - WLCG workshop (London, 7-9 July)
- We will bring our experience as INFN
Links
- Storage Jamboree (June): ?confId= , ?confId=92416
- WLCG workshop (July)
StoRM production layout for CMS
- 500 TB GPFS file system
- 4 GridFTP servers (4x2 Gbps)
- 6 NSD servers (6x2 Gbps on LAN)
- 3 TSM Storage Agents and HSM clients
- TSM server with DB2
- 11 tape drives (1 TB per tape, 1 Gbps per drive)
- 2x10 Gbps WAN uplink
[Diagram: SAN/TAN/LAN connectivity — 20x4 Gbps to the SAN, 8x4 Gbps and 3x4 Gbps HSM/STA links, 6x4 Gbps to the tape library]
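A quick back-of-the-envelope check of the CMS layout figures above (assumption: the Gbps values are nominal link speeds, not measured throughput):

```python
# Nominal aggregate bandwidths from the slide (Gbps)
gridftp_lan = 4 * 2    # 4 GridFTP servers x 2 Gbps
nsd_lan     = 6 * 2    # 6 NSD servers x 2 Gbps
tape        = 11 * 1   # 11 tape drives x 1 Gbps
wan         = 2 * 10   # 2 x 10 Gbps WAN uplinks

# Peak tape bandwidth expressed as TB/day (1 Gbps = 0.125 GB/s)
tape_tb_day = tape * 0.125 * 86400 / 1000
print(f"LAN (NSD): {nsd_lan} Gbps, tape peak: ~{tape_tb_day:.0f} TB/day")
```

This shows the intended balance of the setup: the tape layer (~11 Gbps peak) is roughly matched to the NSD LAN capacity (12 Gbps), while the WAN uplink (20 Gbps) has headroom above the GridFTP servers' 8 Gbps.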
StoRM layout for ATLAS (T1D0)
- 16 (60 TB) GPFS file systems
- 6 GridFTP servers (6x2 Gbps)
- 4 NSD servers (4x2 Gbps on LAN)
- TSM Storage Agents and HSM clients
- TSM server with DB2
- 13 tape drives (1 TB per tape, 1 Gbps per drive)
- 2x10 Gbps WAN uplink
[Diagram: SAN/TAN/LAN connectivity — 20x4 Gbps to the SAN, 8x4 Gbps and 2x4 Gbps HSM/STA links, 4x4 Gbps to the tape library]