mckernel/test/uti/mpi

001 isend; a different buffer is used for each send/receive
002 barrier
003 isend; a single buffer is reused for all sends/receives, and there is no sleep before the wait
004 isend-calc-wait, all-to-all (see the first sketch after this list)
005 lockall-accumulate-calc-unlockall, all-to-all (see the second sketch after this list)
006 parent isend-calc-wait, child does nothing --> crash
007 parent isend-calc-wait, child psm2 send/recv --> one ep per process
008 parent psm2-init and psm2-connect, child psm2-send/recv --> receiver-side crash
009 parent does nothing, child psm2-init, psm2-connect, psm2-send/recv --> receiver-side crash
010 parent psm2-init, psm2-connect, psm2-send/recv, child does nothing
011 001 with OpenMP threads added
012 get_acc-calc-flush_local_all, all-to-all. Execute ./012.sh
013 acc-flush_local-calc, all-to-all, acc:flush_local=1:1
014 012 + async progress thread
015 013 + async progress thread
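
Below is a minimal sketch of the isend-calc-wait pattern exercised by tests 001, 003, 004, and 006. The ring peer choice, message size, and dummy compute loop are illustrative assumptions, not the actual test source:

  #include <mpi.h>

  #define NELEM 4096

  int main(int argc, char **argv)
  {
      double sbuf[NELEM], rbuf[NELEM];
      MPI_Request reqs[2];
      volatile double acc = 0.0;
      int rank, nproc, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nproc);

      for (i = 0; i < NELEM; i++)
          sbuf[i] = rank + i;

      /* isend: post a nonblocking ring exchange (test 004 is all-to-all) */
      MPI_Isend(sbuf, NELEM, MPI_DOUBLE, (rank + 1) % nproc, 0,
                MPI_COMM_WORLD, &reqs[0]);
      MPI_Irecv(rbuf, NELEM, MPI_DOUBLE, (rank + nproc - 1) % nproc, 0,
                MPI_COMM_WORLD, &reqs[1]);

      /* calc: computation overlapped with the in-flight messages;
       * test 003 reuses one buffer and does not sleep before the wait */
      for (i = 0; i < 1000000; i++)
          acc += i * 0.5;

      /* wait: complete both requests */
      MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

      MPI_Finalize();
      return 0;
  }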

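A similar sketch for the passive-target RMA pattern of tests 005, 012, and 013. The window layout, MPI_SUM, and compute loop are assumptions; note that 005 relies on unlockall alone, without the flush_local step shown here:

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Win win;
      double *base, val = 1.0;
      volatile double acc = 0.0;
      int rank, nproc, t, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nproc);

      /* one double per origin rank in each target's window */
      MPI_Win_allocate(nproc * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
      for (t = 0; t < nproc; t++)
          base[t] = 0.0;
      MPI_Barrier(MPI_COMM_WORLD);

      MPI_Win_lock_all(0, win);                    /* lockall */
      for (t = 0; t < nproc; t++)                  /* accumulate, all-to-all */
          MPI_Accumulate(&val, 1, MPI_DOUBLE, t, rank, 1, MPI_DOUBLE,
                         MPI_SUM, win);
      MPI_Win_flush_local_all(win);                /* 012/013 only; 005 skips this */

      for (i = 0; i < 1000000; i++)                /* calc */
          acc += i * 0.5;

      MPI_Win_unlock_all(win);                     /* unlockall */

      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
  }
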
016 Overlapping MPI_Get_accumulate()

* Communication pattern is all-to-all
* A few CPUs are dedicated to the progress threads
* The steps are as follows (a sketch follows this list):
  (1) MPI_Get_accumulate()
  (2) Perform computation that takes 0.i times the time measured when
    only MPI_Get_accumulate() and MPI_Win_flush_local_all() are executed
  (3) MPI_Win_flush_local_all()
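
A minimal sketch of this measurement, assuming the fraction 0.i sweeps 0.1 through 0.9 and using a calibrated busy-loop as the computation in step (2). The window layout and MPI_SUM are likewise illustrative, and the CPU binding of the progress threads is omitted:

  #include <mpi.h>
  #include <stdlib.h>

  /* Hypothetical helper: busy-loop for roughly 'secs' seconds of "calc". */
  static void compute_for(double secs)
  {
      double t0 = MPI_Wtime();
      volatile double acc = 0.0;
      while (MPI_Wtime() - t0 < secs)
          acc += 1.0;
  }

  int main(int argc, char **argv)
  {
      MPI_Win win;
      double *base, *res, val = 1.0, t0, t_base;
      int rank, nproc, t, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nproc);
      res = malloc(nproc * sizeof(double));

      MPI_Win_allocate(nproc * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
      for (t = 0; t < nproc; t++)
          base[t] = 0.0;
      MPI_Win_lock_all(0, win);
      MPI_Barrier(MPI_COMM_WORLD);

      /* Baseline: steps (1)+(3) with no computation in between */
      t0 = MPI_Wtime();
      for (t = 0; t < nproc; t++)                          /* (1) all-to-all */
          MPI_Get_accumulate(&val, 1, MPI_DOUBLE, &res[t], 1, MPI_DOUBLE,
                             t, rank, 1, MPI_DOUBLE, MPI_SUM, win);
      MPI_Win_flush_local_all(win);                        /* (3) */
      t_base = MPI_Wtime() - t0;

      /* Overlap runs: insert 0.i * t_base of computation as step (2) */
      for (i = 1; i <= 9; i++) {
          for (t = 0; t < nproc; t++)                      /* (1) */
              MPI_Get_accumulate(&val, 1, MPI_DOUBLE, &res[t], 1, MPI_DOUBLE,
                                 t, rank, 1, MPI_DOUBLE, MPI_SUM, win);
          compute_for(0.1 * i * t_base);                   /* (2) */
          MPI_Win_flush_local_all(win);                    /* (3) */
      }

      MPI_Win_unlock_all(win);
      MPI_Win_free(&win);
      free(res);
      MPI_Finalize();
      return 0;
  }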