[Mpi-forum] Missing user data in MPI_User_function?
Jeff Hammond
jeff.science at gmail.com
Mon Apr 10 09:58:45 CDT 2017
I do not understand why creating a wrapper function dynamically at run time
creates a portability problem. Can you elaborate?
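
For what it's worth, one way to build such a wrapper at run time is a
libffi closure. A rough sketch follows; libffi is only one possible
mechanism, and the data-pointer/function-pointer conversions it relies on
are a common extension rather than strict ISO C:

    #include <mpi.h>
    #include <ffi.h>

    /* Per-op context to smuggle into the callback (illustrative only). */
    typedef struct { double (*combine)(double, double); } op_ctx;

    /* libffi hands us the wrapper's argument list plus the pointer that
       was bound when the closure was created; that bound pointer plays
       the role of the missing "extra state" argument. */
    static void trampoline(ffi_cif *cif, void *ret, void **args, void *user_data)
    {
        op_ctx *ctx   = user_data;
        double *in    = *(void **)args[0];
        double *inout = *(void **)args[1];
        int     len   = **(int **)args[2];
        for (int i = 0; i < len; i++)
            inout[i] = ctx->combine(in[i], inout[i]);
        (void)cif; (void)ret;
    }

    /* Generates, at run time, a distinct function with the
       MPI_User_function signature for each context; NULL on failure. */
    static MPI_User_function *make_wrapper(op_ctx *ctx)
    {
        static ffi_type *argtypes[4] = { &ffi_type_pointer, &ffi_type_pointer,
                                         &ffi_type_pointer, &ffi_type_pointer };
        static ffi_cif cif;          /* every wrapper has the same signature */
        void *code = NULL;
        ffi_closure *closure = ffi_closure_alloc(sizeof(ffi_closure), &code);
        if (closure == NULL
            || ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 4,
                            &ffi_type_void, argtypes) != FFI_OK
            || ffi_prep_closure_loc(closure, &cif, trampoline, ctx, code) != FFI_OK)
            return NULL;
        return (MPI_User_function *)code;  /* pass this to MPI_Op_create */
    }

Each MPI_Op then gets its own generated wrapper, so no global state is
needed.
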
Thanks,
Jeff
On Mon, Apr 10, 2017 at 2:23 AM, Phil Ruffwind <rf at rufflewind.com> wrote:
> I am experimenting with an interface for doing generic vector
> operations, which would have MPI as a potential backend. To implement a
> reduce over a user-provided function, I would wrap their function inside
> a wrapper compatible with the MPI_User_function signature, and then call
> MPI_Op_create. But I can only write a single wrapper, since standard C
> does not provide a way to create new function pointers out of thin air,
> so the pointer to the user's original function somehow needs to be passed
> into the wrapper as “context”.
>
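
For reference, a minimal sketch of the mechanism described above, with the
operation hard-coded (an arbitrary example assuming MPI_DOUBLE data); the
whole question in this thread is how to make user_combine a run-time
parameter instead:

    #include <math.h>
    #include <mpi.h>

    /* Stand-in for the caller-supplied operation. */
    static double user_combine(double a, double b) { return fabs(a) + fabs(b); }

    /* Wrapper with the MPI_User_function signature (doubles assumed). */
    static void wrapper(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
    {
        const double *in = invec;
        double *inout    = inoutvec;
        for (int i = 0; i < *len; i++)
            inout[i] = user_combine(in[i], inout[i]);
        (void)dt;
    }

    /* Registration and use, somewhere after MPI_Init: */
    static void reduce_with_user_op(const double *sendbuf, double *recvbuf,
                                    int count)
    {
        MPI_Op op;
        MPI_Op_create(wrapper, /* commute = */ 1, &op);
        MPI_Reduce(sendbuf, recvbuf, count, MPI_DOUBLE, op, 0, MPI_COMM_WORLD);
        MPI_Op_free(&op);
    }
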
> Normally, C APIs that accept callbacks also accept an opaque void
> pointer which is then passed into the callback function – this would
> serve as the context. But I do not see that in the declaration of
> MPI_User_function:
>
> typedef void MPI_User_function(void *invec,
>                                void *inoutvec,
>                                int *len,
>                                MPI_Datatype *datatype);
>
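
In other words, what the signature lacks is an extra_state argument of the
kind MPI's keyval (attribute-copy) and generalized-request callbacks
already receive. A purely hypothetical variant, not part of any MPI
standard, might look like:

    #include <mpi.h>

    /* Hypothetical only; NOT part of the MPI standard.  This is the shape
       of interface the paragraph above is asking for. */
    typedef void MPI_User_function_with_state(void *invec, void *inoutvec,
                                              int *len, MPI_Datatype *datatype,
                                              void *extra_state);

    int MPI_Op_create_with_state(MPI_User_function_with_state *user_fn,
                                 int commute, void *extra_state, MPI_Op *op);

With something along these lines, the wrapper could recover the user's
function from extra_state instead of from a global.
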
> This leaves me with several undesirable options:
>
> - Use global data, which, even ignoring the thread-safety implications,
> means that I can only wrap a single user function at any given time (see
> the sketch after this list).
>
> - Use thread-local data, which I'm not sure is sensible either, since
> there's no guarantee MPI will call my wrapper function on the same
> thread (and it still has the one-instance limitation).
>
> - Use invec or inoutvec to smuggle in the contextual data, which is
> awkward because it mixes unrelated context into the values being
> reduced, and it also slightly bloats the buffers with data that never
> needed to be transmitted.
>
> - Create a wrapper function dynamically at run time, which means
> throwing away portability.
>
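
To make the first option above concrete, a minimal sketch, assuming
MPI_DOUBLE data and an element-wise combine function (both illustrative
assumptions):

    #include <mpi.h>

    typedef double (*combine_fn)(double, double);

    /* The only channel into the wrapper is a file-scope variable, so only
       one user function can be wrapped at a time, and it is not
       thread-safe. */
    static combine_fn g_user_fn;

    static void generic_wrapper(void *invec, void *inoutvec, int *len,
                                MPI_Datatype *dt)
    {
        const double *in = invec;
        double *inout    = inoutvec;
        for (int i = 0; i < *len; i++)
            inout[i] = g_user_fn(in[i], inout[i]);
        (void)dt;
    }

    static MPI_Op make_op(combine_fn fn)
    {
        MPI_Op op;
        g_user_fn = fn;    /* clobbers whatever was registered before */
        MPI_Op_create(generic_wrapper, /* commute = */ 1, &op);
        return op;
    }

Every call to make_op re-points g_user_fn, which is exactly the
one-at-a-time limitation described in the first item above.
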
> Is there another more sensible option for this dilemma?
>
> Thanks,
> Phil
> _______________________________________________
> mpi-forum mailing list
> mpi-forum at lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
--
Jeff Hammond
jeff.science at gmail.com
http://jeffhammond.github.io/