Call for Papers


Important dates

Call for abstracts extended to 14 May 2010!
The call for papers closed on 14 May 2010.

  • Abstract submission opens: 28 February (Sunday) 2010
  • Extended Abstract submission deadline: 14 May (Friday) 2010
  • Notification of acceptance: 17 June (Thursday) 2010

Please use the on-line Abstract Form for abstract and BoF submissions. You are cordially invited to submit an abstract of no more than 250 words, which will be reviewed by the Program Committee to select contributions for oral or poster presentation. Decisions on acceptance will be made before 17 June 2010.

All contributions (plenary, parallel oral, and poster) will be published in an open-access journal so that all papers will be fully citable.

Note that in the abstract submission form you can indicate the track in which you wish your contribution to be placed. The tracks of CHEP 2010 are divided into two complementary views of HEP computing: Major Functional Areas and Major Technology Areas. The conference tracks and the scope of topics they cover are as follows:

 

Major Functional Areas

Online Computing

  • CPU farms for high-level triggering
  • Farm configuration and run control
  • Describing and managing configuration data
  • Online software frameworks and tools
  • Online calibration procedures
  • Remote access to and control of data acquisition systems and experiment facilities

Event Processing
(everything that happens within one executable that processes events – reconstruction, simulation, analysis, event data model …)

  • Event generation, simulation, and reconstruction
  • Detector geometries
  • Physics analysis
  • Tools and techniques for data classification and parameter fitting
  • Event visualization and data presentation
  • Frameworks for event processing
  • Toolkits for simulation, reconstruction, and analysis
  • Event data models

Distributed Processing and Analysis
(everything that happens at a multi-job and/or multi-site level – workflow management, data management …)

  • Distributed data processing
  • Data management
  • Distributed analysis
  • Distributed processing experience including experience with Grids and Clouds
  • Experience with real productions and data challenges
  • Experience with real analysis using distributed resources
  • Interactive analysis using distributed resources
  • Solutions for coping with a heterogeneous environment
  • Experience with virtualization
  • Mobile computing
  • Monitoring of user jobs and data

 

Major Technology Areas

Software Engineering, Data Stores, and Databases

  • Programming techniques and tools
  • Software testing and quality assurance
  • Configuration management
  • Software build, release, and distribution tools
  • Documentation
  • Foundation and utility libraries
  • Mathematical libraries
  • Component models
  • Object dictionaries
  • Scripting
  • Event stores
  • Metadata and supporting infrastructure
  • Databases

Computing Fabrics and Networking Technologies

  • Basic hardware, benchmarks and experience
  • Hardware trends and issues such as multi-core, GPU, FPGA…
  • Fabric virtualization
  • Fabric management and administration
  • Local site I/O and data access
  • Mass storage systems
  • Local and wide area networking

Grid and Cloud Middleware

  • Grid/Cloud middleware and monitoring tools
  • Grid/Cloud middleware interoperability
  • Grid/Cloud reliability
  • Grid/Cloud security
  • Evolution of Grids and Clouds
  • Global usage and management of resources
  • Experiment-specific middleware applications

Collaborative Tools

  • Collaborative systems, progress in technologies and applications
  • Advanced teleconferencing systems
  • Experience in the use of teleconferencing tools