[mpiwg-sessions] FW: [sc-workshop-attendee-cfp] Workshop on Exascale MPI 2018 (ExaMPI), held in conjunction with Supercomputing 2018 (SC18), Dallas, TX

HOLMES Daniel d.holmes at epcc.ed.ac.uk
Tue Jul 31 04:10:45 CDT 2018


Hi Howard,

Good idea - we should devise the structure collaboratively, then divide up the writing between us. I use ShareLaTeX/Overleaf for collaborative "paper in LaTeX" writing.

Cheers,
Dan.
—
Dr Daniel Holmes PhD
Applications Consultant in HPC Research
d.holmes at epcc.ed.ac.uk
Phone: +44 (0) 131 651 3465
Mobile: +44 (0) 7940 524 088
Address: Room 3415, JCMB, The King’s Buildings, Edinburgh, EH9 3FD
—
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
—

On 31 Jul 2018, at 00:30, Pritchard Jr., Howard <howardp at lanl.gov> wrote:

we’re controversial - so maybe good for hot topic abstract?

--
Howard Pritchard

B Schedule
HPC-ENV
Office 9, 2nd floor Research Park
TA-03, Building 4200, Room 203

Los Alamos National Laboratory





On 7/30/18, 1:18 PM, "sc-workshop-attendee-cfp on behalf of Dosanjh,
Matthew" <sc-workshop-attendee-cfp-bounces at group.supercomputing.org on
behalf of mdosanj at sandia.gov> wrote:

From: Dosanjh, Matthew
Sent: Monday, July 30, 2018 11:19 AM
To: hpc-announce at mcs.anl.gov
Subject: CFP:  Workshop on Exascale MPI 2018 (ExaMPI), held in
conjunction with Supercomputing 2018 (SC18), Dallas, TX

ExaMPI18 - Workshop on Exascale MPI 2018
Sunday November 11th, 2018
Dallas, TX, USA
https://sites.google.com/site/exampiworkshop2018

Held in conjunction with SC18:  The International Conference
for High Performance Computing, Networking, Storage and Analysis

===========================================================

The MPI standard and its implementations have proved surprisingly
scalable. Issues that hampered scalability were addressed in the MPI
2.1 and 2.2 definition processes, and that work continued into MPI 3.0
and 3.1. MPI has thus remained robust and been able to evolve without
fundamentally changing its model and specification. For this and many
other reasons, MPI is currently the de facto standard for HPC systems
and applications.
However, there is a need to re-examine the message-passing model for
extreme-scale systems characterized by asymptotically decreasing local
memory and highly localized communication networks. Likewise, there is a
need to explore new, innovative, and potentially disruptive concepts and
algorithms, in part to explore roads other than those taken by the
recently released MPI 3.1 standard.
The aim of the workshop is to bring together developers and researchers
to present and discuss innovative algorithms and concepts in Message
Passing programming models, in particular related to MPI. This year's
theme is concurrency in MPI and the underlying networks, and we
especially encourage submissions aligned with this theme.
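
As a rough illustration (not part of the CFP) of one form of concurrency
the theme refers to, here is a minimal C sketch that overlaps local
computation with a non-blocking MPI 3.x collective; the file name and the
busywork loop are placeholders.

/* overlap.c - sketch: overlap computation with a non-blocking collective.
 * Build with an MPI 3.x implementation, e.g.: mpicc -o overlap overlap.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* Start the reduction, then do unrelated work while it progresses. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    double busywork = 0.0;
    for (int i = 0; i < 1000000; ++i)
        busywork += i * 1e-9;           /* stand-in for real computation */

    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* collective result now usable */

    if (rank == 0)
        printf("sum of ranks = %.0f (busywork = %f)\n", global, busywork);

    MPI_Finalize();
    return 0;
}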

Topics of interest include (but are not limited to):

---------------------------------------------------------------------------

* Development of scalable Message Passing collective operations.

* Communication topology mapping interfaces and algorithms.

* Innovative algorithms for scheduling/routing to avoid network
congestion.

* Integrated use of structured data layout descriptors.

* One-sided communication models and RDMA-based MPI.

* MPI multi-threading and threading requirements from OSes.

* Interoperability of Message Passing and PGAS models.

* Integration of task-parallel models into Message Passing models.

* Fault tolerance in MPI.

* MPI I/O.
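
Several of the topics above concern MPI's interaction with threads. As a
minimal, illustrative sketch (not taken from the CFP), the following C
program requests MPI_THREAD_MULTIPLE and checks which level the library
actually provides; the file name is a placeholder.

/* threaded.c - sketch: requesting full multi-threaded MPI support.
 * Build with, e.g.: mpicc -pthread -o threaded threaded.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for MPI_THREAD_MULTIPLE; the library may grant a lower level. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE) {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            fprintf(stderr,
                    "warning: MPI_THREAD_MULTIPLE not provided (got %d)\n",
                    provided);
    }

    /* With MPI_THREAD_MULTIPLE, application threads may call MPI
     * concurrently without external serialization. */

    MPI_Finalize();
    return 0;
}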

Paper submission and publication
---------------------------------------------------------------------------
There are two submission categories:

* Regular research paper: submission of a full paper. Regular paper
submissions are limited to 10 single-spaced pages (including figures,
tables, and references) using a 10-point font on 8.5x11-inch pages
(US Letter).

Templates can be found at:
https://www.acm.org/publications/proceedings-template. Instructions for
preparing regular papers for the proceedings will be emailed to authors
of accepted papers.

* Hot topic abstract: submission of an extended abstract. These
submissions target work-in-progress research on potentially controversial
topics in message passing. Hot topic extended abstracts are limited to 3
single-spaced pages, using the same template:
https://www.acm.org/publications/proceedings-template. Accepted hot topic
extended abstracts will be published only on the ExaMPI workshop website.

Regular Papers and extended abstracts should be submitted electronically
at: https://tinyurl.com/ExaMPI2018

This year, ExaMPI has streamlined the paper review and submission
process. Submissions will be accepted via the regular SC submission
system, and reviews will be returned through that system. Authors of
accepted papers will then be invited to submit their reviewed papers to a
special issue of the journal Concurrency and Computation: Practice and
Experience (CCPE). This allows quick journal publication of accepted
papers that require only minor revisions, while papers needing major
revisions may be shepherded through the journal submission process.

Papers accepted at ExaMPI will be published only in the journal (CCPE),
not in the workshop proceedings, so there is no need for a 30% difference
between the workshop submission and the journal submission.

Important dates
---------------------------------------------------------------------------
Paper Submissions close: October 9th
Paper Notification: October 23rd
Papers submitted to Journal: November 9th
Final notification from Journal: December 7th
Final camera ready version: December 14th

_______________________________________________
sc-workshop-attendee-cfp mailing list
sc-workshop-attendee-cfp at group.supercomputing.org
http://group.supercomputing.org/mailman/listinfo/sc-workshop-attendee-cfp_group.supercomputing.org

_______________________________________________
mpiwg-sessions mailing list
mpiwg-sessions at lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-sessions


