
#### Assignment 8: MPI - Algorithm and Partitioning

The purpose of this assignment is for you to learn more about

- complex algorithms for distributed memory
- the impact of data partitioning.

As usual all time measurements are to be performed on the cluster.

When scheduling a job on mamba, jobs have a fixed amount of memory available for the entire job; you can request additional memory using `-l mem=120GB`. This requests a TOTAL of 120GB (not 120GB per node).

Strong scaling experiment. An experiment is a strong scaling experiment when you measure the speedup an algorithm achieves as you increase the number of resources while the problem size stays fixed. All the experiments we conducted so far were strong scaling experiments. Usually these are reported using a speedup chart.

Weak scaling experiment. An experiment is a weak scaling experiment when you increase the computational requirement of the problem proportionally to the number of resources allocated to it. Usually these are reported using a (processor, time) chart. The computation scales if the curve is flat.

#### 1 2D heat equation

A 2D heat equation is similar to the 1D equation from assignment 6.

The problem is defined on a discrete 2D space of size n × n; let's call it H. Initialize H in some fashion (random works). The kth iteration of the heat equation, H^k, is defined by

    H^k[i][j] = (1/9) ( H^{k-1}[i-1][j-1] + H^{k-1}[i-1][j] + H^{k-1}[i-1][j+1]
                      + H^{k-1}[i][j-1]   + H^{k-1}[i][j]   + H^{k-1}[i][j+1]
                      + H^{k-1}[i+1][j-1] + H^{k-1}[i+1][j] + H^{k-1}[i+1][j+1] )

(Take elements that fall outside the array as H^{k-1}[i][j].)

The implementation will probably need to keep both H^k and H^{k-1} in memory.
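The update above can be sketched as a serial kernel; a minimal sketch, with `heat_step` a name of my own choosing, storing both grids as flat row-major buffers and substituting the center value for out-of-range neighbors as the handout specifies:

```c
/* One iteration of the 9-point average: writes H^k into 'next' from
 * H^{k-1} in 'prev'. Neighbors outside the n x n grid are taken as the
 * center value prev[i][j]. Buffers are row-major, n*n doubles each. */
void heat_step(const double *prev, double *next, int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int di = -1; di <= 1; di++) {
                for (int dj = -1; dj <= 1; dj++) {
                    int ni = i + di, nj = j + dj;
                    if (ni < 0 || ni >= n || nj < 0 || nj >= n)
                        sum += prev[i * n + j];   /* out of range: use center */
                    else
                        sum += prev[ni * n + nj];
                }
            }
            next[i * n + j] = sum / 9.0;
        }
    }
}
```

After each iteration the two buffers are swapped rather than copied, which is why both must stay resident.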

Question: Implement a distributed memory version of the 2D heat equation problem.
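One common starting point (an assumption on my part, not mandated by the handout) is to distribute H by contiguous row blocks. The helper below, with a name of my own choosing, computes which rows each rank owns, spreading the remainder over the first ranks; in the MPI version each rank would additionally hold one ghost row above and below, refreshed every iteration (e.g. with `MPI_Sendrecv`) before the local stencil update runs:

```c
/* Hypothetical row-block partition of n rows over p ranks: rank r owns
 * rows [*first, *first + *count). The first n % p ranks get one extra
 * row so the load stays balanced. */
void row_block(int n, int p, int r, int *first, int *count) {
    int base = n / p, extra = n % p;
    *count = base + (r < extra ? 1 : 0);
    *first = r * base + (r < extra ? r : extra);
}
```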

Question: Perform a strong scaling experiment to compute H^20 from 1 core to 32 cores. (Feel free to restrict the number of cores to particular numbers that match your implementation.) Pick n such that H is about 1GB large, 10GB large, and 50GB large.

Question: Perform a weak scaling experiment to compute H^20 from 1 core to 32 cores. (Feel free to restrict the number of cores to particular numbers that match your implementation.) Pick n such that on one core H is about 500MB large, 1GB large, and 2GB large.

Question: How would you increase communication and computation overlap?

#### 2 Matrix multiplication

The problem is to compute iterated matrix multiplication defined by x_k = A x_{k-1}, where A is a random matrix of size n × n and x_k is a vector of size n. Pick x_0 randomly.

For reference, x_k = A x_{k-1} is computed using x_k[i] = Σ_j A[i][j] x_{k-1}[j]. In other words, to compute x_k[i], multiply the ith row of the matrix element-wise by x_{k-1} and sum the values.
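That row-times-vector rule can be sketched as a serial kernel (function name is my own; A is stored row-major):

```c
/* One step x_k = A x_{k-1}: each output entry is the dot product of
 * row i of A with the previous vector. A is row-major, n x n. */
void matvec(const double *A, const double *x_prev, double *x_next, int n) {
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += A[i * n + j] * x_prev[j];
        x_next[i] = sum;
    }
}
```

In the distributed version each rank computes only its share of this loop nest, and the partitioning scheme decides which share that is.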

You can partition the data in three ways:

- horizontal
- vertical
- blocks

Question: Implement iterated matrix multiplication for each of the three matrix partitioning schemes.
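The three schemes differ only in which submatrix of A each rank holds. A minimal sketch of the extents, under my own simplifying assumptions that n is divisible by p (and that p = q × q for the block scheme); all names are hypothetical:

```c
/* Extents of rank r's piece of the n x n matrix A under each scheme.
 * row0/col0 are inclusive starts; rows/cols are counts. */
typedef struct { int row0, rows, col0, cols; } Extent;

Extent horizontal_part(int n, int p, int r) {    /* full rows */
    Extent e = { r * (n / p), n / p, 0, n };
    return e;
}
Extent vertical_part(int n, int p, int r) {      /* full columns */
    Extent e = { 0, n, r * (n / p), n / p };
    return e;
}
Extent block_part(int n, int q, int r) {         /* tile on a q x q grid */
    int b = n / q;
    Extent e = { (r / q) * b, b, (r % q) * b, b };
    return e;
}
```

Horizontal partitioning lets each rank produce n/p finished entries of x_k; vertical and block partitioning instead produce partial sums that must be combined across ranks.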

Question: Perform a strong scaling experiment to compute x_20 from 1 core to 32 cores. (Feel free to restrict the number of cores to particular numbers that match your implementation.) Pick n such that A is about 1GB large, 20GB large, and 80GB large.

Question: Perform a weak scaling experiment to compute x_20 from 1 core to 32 cores. (Feel free to restrict the number of cores to particular numbers that match your implementation.) Pick n such that on one core A is about 1GB large, 2GB large, and 4GB large.

Question: How would you increase communication and computation overlap?

#### 3 Extra Credit

Question: Implement the block version using communicators so that each process is in a row communicator and in a column communicator. This allows the communications to be done using reduce and broadcast.
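The heart of this is the color argument to `MPI_Comm_split`: on a q × q process grid, ranks in the same grid row get one color and ranks in the same grid column get another. A minimal sketch of just that arithmetic (function names are my own; the actual MPI calls appear only in the comment so the snippet stays self-contained):

```c
/* Colors for splitting MPI_COMM_WORLD into row and column communicators
 * on a q x q process grid. With these colors the MPI version would call
 *   MPI_Comm_split(MPI_COMM_WORLD, row_color(rank, q), rank, &row_comm);
 *   MPI_Comm_split(MPI_COMM_WORLD, col_color(rank, q), rank, &col_comm);
 * then use MPI_Bcast on col_comm to distribute vector pieces and
 * MPI_Reduce on row_comm to combine partial sums, and finally release
 * both communicators with MPI_Comm_free. */
int row_color(int rank, int q) { return rank / q; }
int col_color(int rank, int q) { return rank % q; }
```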

For information see

- http://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/
- man MPI_Comm_split
- man MPI_Comm_free
- man MPI_Bcast
- man MPI_Reduce

Question: Repeat the experiments on that implementation. Is it faster?
