Report from CHEP2013 – Track 3A: Distributed Processing and Data Handling on Infrastructure, Sites and Virtualization
Davide Salomoni, 28/10/2013
CHEP2013: the numbers
- O(500) participants
- 15 plenary sessions; 1 plenary given by an INFN speaker (6.6%)
- 6 tracks, 7 parallel sessions (track 3 = 3A + 3B)
- 468 contributions, about 12.6% with at least one INFN author
- 195 talks, about 13.3% with at least one INFN author
- Workshop on Data Preservation on the afternoon of 16/10
CHEP2013: the committees
International Advisory Committee: 40 members, of which 1 INFN (Mauro Morandin) ≈ 2.5%
Programme Chair: Daniele Bonacorsi
Across the 6 tracks: 33 conveners, of which 4 INFN ≈ 12%
- Track 1: Data Acquisition, Trigger and Controls, 4 conveners (0 INFN)
- Track 2: Event Processing, Simulation and Analysis, 4 conveners (0 INFN)
- Track 3: Distributed Processing and Data Handling, 5 conveners (1 INFN, Davide Salomoni)
- Track 4: Data Stores, Data Bases and Storage Systems, 6 conveners (1 INFN, Dario Barberis)
- Track 5: Software Engineering, Parallelism & Multi-core, 6 conveners (1 INFN, Francesco Giacomini)
- Track 6: Facilities, Infrastructures, Networking and Collaborative Tools, 8 conveners (1 INFN, Alessandro De Salvo)
My comments on Track 3A
Summary of Track 3A at
Highlights from the track:
- Opportunistic computing, both the use of temporarily idle HEP resources (e.g. HLT farms) and the use of normally "non-HEP" resources (e.g. supercomputers). Examples: ALTAMIRA (Santander), SDSC (San Diego). What is the long-term effect on the computing models?
- Interesting developments of CernVM, with the important caveat that it is a best-effort and very HEP-specific project
- Lack of clarity on sustainability is an issue for several of the systems presented, e.g. the Global Service Registry for WLCG
- Promising tests of using ROOT with Hadoop; in the past the ROOT+Hadoop combination had not shown great performance (a sketch of this access pattern follows after this list)
- The experiments keep developing custom, sometimes complex solutions that are often, in practice if not in theory, very experiment-specific, hence little sharing. E.g. job monitoring, VM monitoring, automating usability of distributed resources
- Much talk of Cloud, which at times enjoys quite substantial funding (cf. Australia, M$); however, few production solutions. Worth noting in particular the doubts raised about the scalability of the OpenStack scheduler (e.g. R. Medrano Llamas, CERN, and I. Sfiligoi, UCSD)
- Cloud accounting and federations are missing
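To make the ROOT+Hadoop point above concrete: one way ROOT can read data out of HDFS is through its THDFSFile plugin, which is the kind of access path such tests exercise. Below is a minimal PyROOT sketch, assuming a ROOT build with HDFS support and a reachable Hadoop filesystem; the file path /user/demo/events.root and the tree name Events are illustrative, not taken from the slides.

```python
import ROOT

# TFile.Open dispatches on the URL scheme; with ROOT's HDFS plugin
# (THDFSFile, built against libhdfs) an hdfs:// URL is read directly
# from the Hadoop filesystem instead of from local disk.
f = ROOT.TFile.Open("hdfs:///user/demo/events.root")  # hypothetical path
if not f or f.IsZombie():
    raise IOError("could not open the file over HDFS")

tree = f.Get("Events")  # "Events" is an assumed tree name
print(f"entries: {tree.GetEntries()}")
f.Close()
```

The API is the same as for local files; the performance question raised in the track is presumably about how well ROOT's sparse, branch-oriented reads map onto HDFS's large sequential blocks.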
More generally on CHEP
- We had an Italian speaker for one plenary ("Designing the computing for the future experiments", Stefano Spataro, focused on the developments of Panda at FAIR)
- In general, the plenaries did not strike me as particularly interesting or illuminating/visionary (see the list below)
- Looking at the CHEP2013 talks with "some INFN signature", they amount to about 13% of the total and cover a wide range of topics (see the next slide)
- However, my impression is that as an institute we lack a strategy for computing, and that our contributions, e.g. within experiments, are mostly "instrumental" or individual rather than strategic
- On computing in general, I mentioned to Dario Menasce the need for an analysis of all the contributions (talks and posters) submitted by INFN people, to assess the possibility of sharing and integrating competences
- No presentations on present or future "common middleware" (an INFN flagship theme over the past 10 years); as for EGI, no talks and one poster (S. Burke, on GLUE 2). Implications for H2020?
- Every man for himself, and Grid (or Cloud) for all
The plenaries
- CHEP in Amsterdam: from 1985 to 2013, David Groep (NIKHEF, The Netherlands)
- NIKHEF, the national institute for subatomic physics, Frank Linde (NIKHEF, The Netherlands)
- C++ evolves!, Axel Naumann (CERN)
- Software engineering for science at the LSST, Robert Lupton (Princeton, USA)
- Horizon 2020: an EU perspective on data and computing infrastructures for research, Kostas Glinos (European Commission)
- Data processing in the wake of massive multicore and GPU, Jim Kowalkowski (FNAL, USA)
- Future directions for key physics software packages, Philippe Canal (FNAL, USA)
- Computing for the LHC: the next step up, Torre Wenaus (BNL, USA)
- Designing the computing for the future experiments, Stefano Spataro (Università di Torino/INFN, Italy)
- Big Data - Flexible Data - for HEP, Brian Bockelman (University of Nebraska, USA)
- Probing Big Data for Answers using Data about Data, Edwin Valentijn (University of Groningen, The Netherlands)
- Data archiving and data stewardship, Pirjo-Leena Forsström (CSC, Finland)
- Inside numerical weather forecasting - Algorithms, domain decomposition, parallelism, Toon Moene (KNMI, The Netherlands)
- Software defined networking and bandwidth-on-demand, Inder Monga (ESnet, USA)
- Trends in Advanced Networking, Harvey Newman (Caltech, USA)
- Plus one talk by a sponsor (KPMG, The Netherlands)
Talks with INFN authors
- Many-core applications to online track reconstruction in HEP experiments
- Integrating multiple scientific computing needs via a Private Cloud Infrastructure
- Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology
- Evaluating Predictive Models of Software Quality
- Usage of the CMS Higher Level Trigger Farm as a Cloud Resource
- Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility
- Testing of several distributed file-system (HadoopFS, CEPH and GlusterFS) for supporting the HEP experiments analysis
- The future of event-level information repositories, indexing and selection in ATLAS
- Implementation of a PC-based Level 0 Trigger Processor for the NA62 Experiment
- CMS Computing Model Evolution
- WLCG and IPv6 - the HEPiX IPv6 working group
- NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems
- A PCIe Gen3 based readout for the LHCb upgrade
- The Common Analysis Framework Project
- ArbyTrary, a cloud-based service for low-energy spectroscopy
- Integration of Cloud resources in the LHCb Distributed Computing
- Algorithms, performance, and development of the ATLAS High-level trigger
- An exact framework for uncertainty quantification in Monte Carlo simulation
- Scholarly literature and the media: scientific impact and social perception of HEP computing
- Geant4 studies of the CNAO facility system for hadrontherapy treatment of uveal melanomas
- Computing on Knights and Kepler Architectures
- Using ssh as portal - The CMS CRAB over glideinWMS experience
- System performance monitoring of the ALICE Data Acquisition System with Zabbix
- Automating usability of ATLAS Distributed Computing resources
- O2: a new combined online and offline computing for ALICE after 2018
- PROOF-based analysis on the ATLAS Grid facilities: first experience with the PoD/PanDa plugin
Posters with INFN authors
- CMS users data management service integration and first experiences with its NoSQL data storage
- Architectural improvements and 28nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems
- INFN Pisa scientific computation environment (GRID, HPC and Interactive analysis)
- GPU for Real Time processing in HEP trigger systems
- Arby, a general purpose, low-energy spectroscopy simulation tool
- The ATLAS EventIndex: an event catalogue for experiments collecting large amounts of data
- Testing an Open Source installation and server provisioning tool for the INFN-CNAF Tier1 Storage system
- Optimization of Italian CMS Computing Centers via MIUR funded Research Projects
- Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm
- New physics and old errors: validating the building blocks of major Monte Carlo codes
- Real-time flavor tagging selection in ATLAS
- Distributed storage and cloud computing: a test case
- Dirac integration with a general purpose bookkeeping DB: a complete general suite
- Toward the Cloud Storage Interface of the INFN CNAF Tier-1 Mass Storage System
- Preserving access to ALEPH Computing Environment via Virtual Machines
- The Legnaro-Padova distributed Tier-2: challenges and results
- Changing the batch system in a Tier 1 computing center: why and how
- A flexible monitoring infrastructure for the simulation requests
- TASS - Trigger and Acquisition System Simulator - An interactive graphical tool for DAQ and trigger design
- The ALICE DAQ infoLogger
- CORAL and COOL during the LHC long shutdown
- A quasi-online distributed data processing on WAN: the ATLAS muon calibration system
- The ALICE Data Quality Monitoring: qualitative and quantitative review of 3 years of operations
- Long Term Data Preservation for CDF at INFN-CNAF
- Many-core on the Grid: From Exploration to Production
- R&D work for a data model definition: data access and storage system studies
- Negative improvements
- Installation and configuration of an SDN test-bed made of physical switches and virtual switches managed by an OpenFlow controller
- An Xrootd Italian Federation for CMS
- An Infrastructure in Support of Software Development
- Geant4 Electromagnetic Physics for LHC Upgrade
- Self-Organizing Map in ATLAS Higgs Searches
- Compute Farm Software for ATLAS IBL Calibration