[Mpi3-hybridpm] Draft of hybrid prog models group goals

Supalov, Alexander alexander.supalov at intel.com
Tue Aug 18 08:28:09 CDT 2009


Hi,

Thank you. Let me try to propose a slight restructuring/extension.

Mission

Add MPI features necessary for efficient hybrid programming.

Goals

Investigate what changes are needed in MPI to better support:

* Traditional thread interfaces (e.g., Pthreads, OpenMP)
* Emerging interfaces (e.g., TBB, OpenCL, CUDA, and Ct)
* PGAS languages (e.g., UPC, CAF)

The scope may be extended to other hybrid models as appropriate.

Summary

Parallel computers are increasingly being built with nodes comprising
large numbers of cores, including both regular CPUs and accelerators
such as Cell or GPGPUs. To make better use of shared memory and other
resources within a node or address space, users may want to adopt a
hybrid programming model that uses MPI for communication between nodes
or address spaces and some other programming model (X) within the node
or address space. Current options for X include OpenMP, Pthreads, PGAS
languages (UPC, CoArray Fortran), Intel TBB, Cilk, CUDA, OpenCL, and
Intel Ct.
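
For illustration only (not part of the proposal text), a minimal
MPI+OpenMP sketch of this MPI+X pattern might look as follows, assuming
an MPI implementation that provides at least MPI_THREAD_FUNNELED:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request FUNNELED: only the main thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* X = OpenMP within the node: threads share the rank's address space. */
    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, nranks, omp_get_thread_num(), omp_get_num_threads());

    /* MPI between nodes/address spaces: e.g., a reduction across ranks. */
    int local = 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("ranks participating: %d\n", total);

    MPI_Finalize();
    return 0;
}

MPI_Init_thread and the MPI_THREAD_* levels are the existing hooks for
the thread case; what analogous support (if any) the other X's need is
among the questions the group would investigate.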

Best regards.

Alexander 

-----Original Message-----
From: mpi3-hybridpm-bounces at lists.mpi-forum.org [mailto:mpi3-hybridpm-bounces at lists.mpi-forum.org] On Behalf Of Rajeev Thakur
Sent: Thursday, August 13, 2009 10:49 PM
To: mpi3-hybridpm at lists.mpi-forum.org
Subject: [Mpi3-hybridpm] Draft of hybrid prog models group goals

All working groups are supposed to come up with a set of goals for the
group (i.e., what is the group's mission?). Below is a draft of goals
for the hybrid programming models group. It is intended as a starting
point for discussion. Please suggest changes as appropriate.

Rajeev 


----------------
Parallel computers are increasingly being built with nodes comprising
large numbers of cores that include regular CPUs as well as accelerators
such as Cell or GPUs. To make better use of shared memory and other
resources within a node or address space, users may want to use a hybrid
programming model that uses MPI for communicating between nodes or
address spaces and some other programming model (X) within the node or
address space. Various options for X at present include OpenMP,
Pthreads, PGAS languages (UPC, CoArray Fortran), Intel TBB, Cilk, CUDA,
and OpenCL. The goal of this working group is to ensure that MPI has the
features necessary to facilitate efficient hybrid programming.

Specifically, the group will investigate:

* What changes are needed in MPI to better support threads (Pthreads,
OpenMP)? 

* What changes are needed in MPI to better support MPI+OpenCL or
MPI+CUDA programs?

* What changes are needed in MPI to better support MPI+UPC or MPI+CAF
programs?

* (Other hybrid models as appropriate.)


