Description
Consider the following program, in which I force all non-root processes to wait in an MPI_Bcast call:
```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int myrank, world_size;
    MPI_Init(&argc, &argv);
    // opal_progress_set_event_flag(1);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int name_len;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Get_processor_name(processor_name, &name_len);
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, myrank, world_size);

    // Rank 0 never reaches the broadcast, so every other rank
    // spins inside MPI_Bcast waiting for the root.
    if (myrank == 0)
        while (1) {}

    // Use a count that is identical on all ranks (name_len differs per rank).
    MPI_Bcast(processor_name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```
If I run this with `-mca mpi_yield_when_idle true` passed on the mpirun command line, the waiting processes spend roughly 85% of their time in system and 15% in user, which is what I would expect from the yield option.
However, if I remove `-mca mpi_yield_when_idle` from the mpirun command line and instead set it through etc/openmpi-mca-params.conf
(adding the line `mpi_yield_when_idle = 1`),
I see 100% of the time in user.
Could you please confirm whether setting this option through the configuration file works as expected in recent Open MPI versions?
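For reference, these are the three equivalent ways I understand an MCA parameter can be set (a sketch based on the standard Open MPI precedence: command line overrides environment, which overrides the config files; paths assume a default installation prefix):

```shell
# 1. On the mpirun command line (this is the case that yields as expected for me):
mpirun -np 4 -mca mpi_yield_when_idle true ./a.out

# 2. Via an environment variable (OMPI_MCA_<param_name>):
OMPI_MCA_mpi_yield_when_idle=1 mpirun -np 4 ./a.out

# 3. Via a config file, either per-user or system-wide
#    ($HOME/.openmpi/mca-params.conf or <prefix>/etc/openmpi-mca-params.conf),
#    containing the line:
#      mpi_yield_when_idle = 1
mpirun -np 4 ./a.out

# To check which value Open MPI actually picked up:
ompi_info --param mpi all | grep yield
```

Only the config-file variant (case 3) shows the 100%-user behavior for me; the `ompi_info` check might help confirm whether the file is being read at all.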