[MPIWG Fortran] ticket 351

Craig Rasmusen rasmus at cas.uoregon.edu
Tue Nov 10 12:36:39 CST 2015


Jeff,

Welcome to the arcane world of Fortran terminology.  Fortunately, terms like
"processor dependent" are only in the standard and don't (hopefully) leak
out into the wild!  :-)

-craig

On Mon, Nov 9, 2015 at 1:03 PM, Jeff Hammond <jeff.science at gmail.com> wrote:

> We are diverging.  I don't know that this informs the ticket one way or
> another.  Perhaps others can weigh in.
>
> On Mon, Nov 9, 2015 at 10:41 AM, Bill Long <longb at cray.com> wrote:
>
>>
>> On Nov 9, 2015, at 11:43 AM, Jeff Hammond <jeff.science at gmail.com> wrote:
>>
>> > MPI implementations today are remarkably standardized w.r.t. process
>> launching such that standard arguments are handled prior to MPI_Init.
>> However, there have been times where that was not the case.
>> >
>> > Can you check what the Cray T3D does w.r.t. arguments passed to the
>> host node appearing on the compute nodes? :-)
>>
>> No T3D’s in service.  The more general question is whether vendors
>> support a new version of MPI on old hardware for which there is no longer
>> Fortran compiler support.  If you can’t compile the code that calls the new
>> MPI routine, then I’d assume the new routine is not relevant to users of
>> the system.
>>
>>
> The purpose of my comment was to cite a hosted MPI implementation, to
> tease out the differences between today's common practice and the realm of
> possibility.
>
>
>> >
>> > More seriously, does Fortran 2008 require that arguments passed to the
>> invocation of mpirun (or equivalent) appear at every process in a
>> multi-process MPI program?
>> >
>>
>> Well, the Fortran standard says nothing about MPI, as should be the
>> case.  The method for launching a program is intentionally outside the
>> scope of the standard.
>>
>> However, you make a reasonable point.  The current standard has a sentence
>>
>> "The effects of calling COMMAND_ARGUMENT_COUNT, EXECUTE_COMMAND_LINE ,
>> GET_COMMAND, GET_COMMAND_ARGUMENT, and GET_ENVIRONMENT_VARIABLE on any
>> image other than image 1 are processor dependent.” .
>>
>> In discussions for the upcoming Fortran 2015 standard, people realized
>> this was a silly restriction, and all the routines except
>> EXECUTE_COMMAND_LINE were removed from this sentence.  Vendors already were
>> consistent that command line stuff worked on any image, so the “processor
>> dependent” bit was unnecessary.
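>>
>> As a minimal sketch (not from the standard; assumes a compiler with
>> coarray support enabled) of the kind of code that sentence governs:
>>
>> program args_on_images
>>   ! Every image queries its own command line.  Under the F2008 wording
>>   ! the result on images other than 1 is "processor dependent"; in
>>   ! practice vendors return the same arguments everywhere, which is
>>   ! what the F2015 change codifies.
>>   implicit none
>>   integer :: i
>>   character(len=256) :: arg
>>   do i = 1, command_argument_count()
>>      call get_command_argument(i, arg)
>>      print '(a,i0,a,a)', 'image ', this_image(), ': ', trim(arg)
>>   end do
>> end program args_on_images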
>>
>>
> What does processor-specific mean?  Is it specific to the manufacturer,
> the ISA, the SKU number, the serial number, ...?  I can't even fathom what
> sort of terrible thinking led someone to write such a phrase in a language
> standard.
>
> Why not do what everyone else does and say "implementation-specific"?  I
> don't even think the behavior is remotely sensitive to the processor.  The
> support for EXECUTE_COMMAND_LINE is primarily a function of the OS, which
> is a very different thing from a processor.
>
> Jeff
>
>
>> Cheers,
>> Bill
>>
>>
>>
>> > Jeff
>> >
>> > On Mon, Nov 9, 2015 at 8:56 AM, Bill Long <longb at cray.com> wrote:
>> >
>> > On Nov 9, 2015, at 8:56 AM, Jeff Hammond <jeff.science at gmail.com>
>> wrote:
>> >
>> > >
>> > >
>> > > On Mon, Nov 9, 2015 at 6:37 AM, Bill Long <longb at cray.com> wrote:
>> > >
>> > > On Nov 9, 2015, at 8:24 AM, Jeff Hammond <jeff.science at gmail.com>
>> wrote:
>> > >
>> > > > Did you read the ticket?  We need this for the same reason
>> MPI_Init() takes argc/argv even though C main() already has them.
>> > >
>> > > But the C standard does not implicitly assume every program is
>> parallel.
>> > >
>> > >
>> > > Correct.  Nor can MPI assume that argc/argv will magically appear on
>> the compute nodes when the job is launched from somewhere else.
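>> > >
>> > > As a minimal sketch of the Fortran side of this (assuming an mpi_f08
>> > > implementation; the program and variable names are illustrative):
>> > > MPI_Init in Fortran has no argc/argv to take at all, so the language
>> > > intrinsics are the only route to the arguments.
>> > >
>> > > program args_mpi
>> > >   use mpi_f08
>> > >   implicit none
>> > >   integer :: rank, i
>> > >   character(len=256) :: arg
>> > >   call MPI_Init()                     ! no argc/argv, unlike C
>> > >   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
>> > >   do i = 1, command_argument_count()  ! language intrinsic, not MPI
>> > >      call get_command_argument(i, arg)
>> > >      print '(a,i0,a,a)', 'rank ', rank, ': ', trim(arg)
>> > >   end do
>> > >   call MPI_Finalize()
>> > > end program args_mpi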
>> > >
>> >
>> > Agreed. My point is that Fortran is not the same as C.
>> >
>> > > >
>> > > > While it may be true that process managers are magically fixing
>> argc/argv, this is not guaranteed by any standard.
>> > >
>> > > The Fortran standard is intentionally vague to allow for environments
>> where there is no command line, but rather (for example) the code starts
>> executing by clicking a graphic button on the screen.  The standard is
>> unlikely to include specific words about program launchers that might
>> turn into bit rot in the future.  If your favorite compiler returns the
>> launcher text as part of the command line, complain.
>> > >
>> > >
>> > > Complaining is fun, but has no teeth without a standard.  Many
>> vendors are quite strict about standards and reject feature requests that
>> improve user experience if they lack justification in a standard.  I can
>> cite numerous vendors here…
>> >
>> > The overriding intent of a language standard is portability.  You would
>> have poor portability of a code if moving it from one machine to another
>> resulted in different command argument information from get_command or
>> get_command_argument.  The intention is that the routine return the command
>> line arguments that the program knows about, not some cryptic stuff from
>> aprun, srun, or mpirun.   I did try all the compilers I have access to on a
>> trivial code:
>> >
>> > > cat test.f90
>> > program test
>> >   character(1000) command
>> >   integer         length
>> >   call get_command (command, length)
>> >   print *, command(1:length)
>> > end program test
>> >
>> > > ftn test.f90
>> > > aprun -n1 ./a.out -x
>> >  ./a.out -x
>> > Application 15961022 resources: utime ~0s, stime ~0s, Rss ~4176, inblocks ~4479, outblocks ~11453
>> > > module swap PrgEnv-cray PrgEnv-intel
>> > > ftn test.f90
>> > > aprun -n1 ./a.out -x
>> >  ./a.out -x
>> > Application 15961023 resources: utime ~0s, stime ~0s, Rss ~4176, inblocks ~2967, outblocks ~7756
>> > > module swap PrgEnv-intel PrgEnv-pgi
>> > > ftn test.f90
>> > > aprun -n1 ./a.out -x
>> >  ./a.out -x
>> > Application 15961025 resources: utime ~0s, stime ~0s, Rss ~4172, inblocks ~3038, outblocks ~8308
>> > > module swap PrgEnv-pgi PrgEnv-gnu
>> > > ftn test.f90
>> > > aprun -n1 ./a.out -x
>> >  ./a.out -x
>> > Application 15961026 resources: utime ~0s, stime ~0s, Rss ~4172, inblocks ~3031, outblocks ~8027
>> >
>> > All of them produced the expected output (excluding the “aprun -n1”
>> launcher text).
>> >
>> > Perhaps we could add a Note in the Fortran standard explaining this,
>> but it looks like vendors have already figured it out.
>> >
>> > Cheers,
>> > Bill
>> >
>> >
>> >
>> > >
>> > > While the new proposed routines are trivial to implement (assuming
>> they have the same arguments as the corresponding Fortran ones), they add
>> to one of MPI’s most serious flaws: way too many routines already.
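>> > >
>> > > On "trivial": a hypothetical sketch of such a routine (the name
>> > > MPIX_Get_command_argument and its interface are illustrative only,
>> > > not the ticket's actual API) could simply delegate to the intrinsic:
>> > >
>> > > subroutine MPIX_Get_command_argument(number, value, length, status)
>> > >   ! Hypothetical wrapper: forward to the Fortran intrinsic, which
>> > >   ! vendors already keep consistent across ranks (see the aprun
>> > >   ! runs earlier in the thread).
>> > >   implicit none
>> > >   integer, intent(in)           :: number
>> > >   character(len=*), intent(out) :: value
>> > >   integer, intent(out)          :: length, status
>> > >   call get_command_argument(number, value, length, status)
>> > > end subroutine MPIX_Get_command_argument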
>> > >
>> > >
>> > > As you say, they are trivial to implement.  The size of the standard
>> is an irrelevant argument.  Users don't read what they don't need to read,
>> and users shouldn't be reading the spec most of the time anyway.
>> > >
>> > > Jeff
>> > >
>> > > Cheers,
>> > > Bill
>> > >
>> > >
>> > > >
>> > > > Jeff
>> > > >
>> > > > On Mon, Nov 9, 2015 at 6:21 AM, Bill Long <longb at cray.com> wrote:
>> > > > Hi Jeff,
>> > > >
>> > > > I don’t see the value of this.  The Fortran intrinsics GET_COMMAND
>> and friends should already do what you want.  Fortran programs are now
>> presumed parallel. The intrinsics should know how to strip off the
>> “launcher” part of the command line.  At least that is how the Cray
>> versions have worked from the beginning.  Experiments are in order for
>> other vendors, as a check.
>> > > >
>> > > > Cheers,
>> > > > Bill
>> > > >
>> > > >
>> > > >
>> > > > On Nov 9, 2015, at 8:13 AM, Jeff Hammond <jeff.science at gmail.com>
>> wrote:
>> > > >
>> > > > > we did a lot of work on
>> https://svn.mpi-forum.org/trac/mpi-forum-web/ticket/351.  do we intend
>> to move this forward for MPI 4+?  if yes, we need to migrate the trac
>> ticket to a github issue.
>> > > > >
>> > > > > jeff
>> > > > >
>>
>> Bill Long                                            longb at cray.com
>> Fortran Technical Support  &                         voice: 651-605-9024
>> Bioinformatics Software Development                  fax:   651-605-9142
>> Cray Inc./ Cray Plaza, Suite 210/ 380 Jackson St./ St. Paul, MN 55101
>>
>>
>>
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>
> _______________________________________________
> mpiwg-fortran mailing list
> mpiwg-fortran at lists.mpi-forum.org
> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpiwg-fortran
>