Why Adaptive Parallelization Matters
Lately I have found myself in many conversations about parallelization and why it matters, particularly as it applies to I/O processing. On first hearing the phrase ‘Parallel I/O’, people often jump straight to a traditional performance discussion. While there is no doubt that application workload performance improves in the traditional sense (reduced latency, more operations per second, and so on), there is much more to the story, another dimension if you will.
YOU’VE MOST LIKELY BEEN HERE BEFORE
Let’s consider a real-world example we can build on to explain how this works and why it is important. For the sake of simplicity, let’s assume we are standing at a department store checkout area with a single open lane and one cashier. There are 60 customers currently waiting to check out (or in slightly more technical terms, those 60 customers are in the queue). Each of the 60 customers is likely to have a varying number of items, but for simplicity, let’s assume they are all roughly the same. If it takes the cashier one minute to check out each customer, the checkout rate is one customer per minute. Pretty simple so far.
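The arithmetic behind the analogy can be sketched in a few lines. The function below is purely illustrative (not from the article): it assumes identical service times and, looking ahead to the parallel case the analogy sets up, lets you vary the number of open lanes.

```python
import math

def drain_time(customers: int, lanes: int, minutes_per_customer: float = 1.0) -> float:
    """Minutes until the last customer finishes checking out,
    assuming identical service times and customers split evenly
    across the open lanes. Illustrative model only."""
    return math.ceil(customers / lanes) * minutes_per_customer

# One cashier, 60 customers, one minute each: the queue drains in 60 minutes.
print(drain_time(60, 1))  # 60.0

# Opening more lanes divides the drain time: four lanes clear it in 15 minutes.
print(drain_time(60, 4))  # 15.0
```

The same queue-and-servers shape applies to I/O: requests are the customers, and parallel I/O paths are the extra lanes.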
Read the entire article here, Why Adaptive Parallelization Matters
via the fine folks at DataCore Software