In continuation of the previous topic:
I don't understand how to fix my MPI code so that it meets the assignment's requirements.
Task: Write a program that uses both blocking and non-blocking operations, according to the variant. The work must be carried out by several processes. The initial data must be distributed using non-blocking operations, and the results must be collected using blocking operations.
b=min(A+C)
Only a single if-expression, if (rank == 0), should be used to separate the processes. In addition, neither MPI_Scatter nor MPI_Reduce may be used.
The teacher asked me which receive call should work in parallel with MPI_Isend. I answered MPI_Recv (because, according to the assignment, we send with non-blocking operations and receive with blocking ones), but he said that was wrong. He added that the code keeps showing the same mistake, caused by a misunderstanding of the task: it is the collection of results at rank 0 that must be done with blocking operations.
UPD: I moved MPI_Waitall; I'm not sure whether that is correct. I declare the MPI_Request arrays in rank 0 only.
#include <mpi.h>
#include <iostream>
#include <cfloat>
#include <cstdlib>
#include <ctime>
#include <algorithm>
using namespace std;

#define N 13

int main(int argc, char* argv[])
{
    int rank, size;
    double* A = 0, * C = 0, localMin = DBL_MAX;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int pSize = N / size;
    int remainder = N % size;
    if (rank < remainder) {
        pSize++;
    }
    cout << "Process " << rank << ", pSize = " << pSize << endl;
    if (rank == 0) {
        srand((unsigned)time(0));
        A = new double[N];
        C = new double[N];
        for (int i = 0; i < N; i++) {
            A[i] = (rand() % 20) / 2.;
            C[i] = (rand() % 20) / 2.;
            cout << i << ". sum: " << A[i] + C[i] << endl;
        }
        // distribution: non-blocking sends of each worker's chunk of A and C
        MPI_Request* requestA = new MPI_Request[size - 1];
        MPI_Request* requestC = new MPI_Request[size - 1];
        int offset = pSize;
        for (int i = 1; i < size; i++) {
            int send_count;
            if (remainder == 0 || i < remainder) {
                send_count = pSize;
            }
            else {
                send_count = pSize - 1;
            }
            MPI_Isend(A + offset, send_count, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &requestA[i - 1]);
            MPI_Isend(C + offset, send_count, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &requestC[i - 1]);
            offset += send_count;
        }
        // rank 0 processes its own chunk while the sends complete
        for (int i = 0; i < pSize; i++) {
            double temp = A[i] + C[i];
            localMin = min(localMin, temp);
        }
        MPI_Waitall(size - 1, requestA, MPI_STATUSES_IGNORE);
        MPI_Waitall(size - 1, requestC, MPI_STATUSES_IGNORE);
        // collection: blocking receives of each worker's partial minimum
        double globalMin = localMin;
        for (int i = 1; i < size; i++) {
            double poluchMin;
            MPI_Recv(&poluchMin, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            globalMin = min(globalMin, poluchMin);
        }
        cout << "Minimum min(A+C) = " << globalMin << endl;
        delete[] requestA;
        delete[] requestC;
    }
    else {
        A = new double[pSize];
        C = new double[pSize];
        MPI_Recv(A, pSize, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(C, pSize, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int i = 0; i < pSize; i++) {
            double temp = A[i] + C[i];
            localMin = min(localMin, temp);
        }
        MPI_Send(&localMin, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
    delete[] A;
    delete[] C;
    MPI_Finalize();
}
What I have tried:
I have reworked the code many times with help from merano99; thank you very much for your help, I couldn't have done it myself.