CPU – The New Bottleneck?
An interesting phenomenon is occurring in the relationship between the application, the CPU, and the I/O (most notably, where the data resides). Before modern parallel I/O processing, the largest bottleneck in the I/O stack was unquestionably the storage subsystem. Even the fastest storage devices are orders of magnitude slower than the CPU (where the I/O demand is generated), the channels to those devices are limited, and the devices themselves (which service the I/O requests) sit at the point in the stack furthest from where the I/O originates. However, with an architecture that handles both the generation of I/O and the response to it at the same point in the stack (the CPU), the bottleneck moves to the CPU itself, as we will explore in this article.
Don’t worry, though; the situation isn’t as dire as it sounds. There will always be a relative bottleneck somewhere in the system, but as the latency of the slowest component approaches that of the fastest, system-wide efficiency increases significantly. If you are going to have a bottleneck anywhere in the system, I would argue it’s best to have it at the CPU, because you want the component doing the heavy lifting to lift as much and as often as possible (unless your application is broken, the work being done is, or should be, useful).
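As a rough back-of-the-envelope illustration (the latency figures below are hypothetical, chosen only to show the shape of the effect), the sustainable rate of a serial I/O path is bounded by its slowest stage, so shrinking the gap between the slowest and fastest stages is what lifts system-wide throughput:

```python
def max_ops_per_sec(stage_latencies):
    """A serial path can complete no more operations per second
    than its slowest stage allows (1 / worst-case latency)."""
    return 1.0 / max(stage_latencies.values())

# Hypothetical per-operation latencies (seconds) for each stage of an I/O path.
spinning_disk_path = {
    "cpu": 100e-9,      # ~100 ns: CPU generates the request
    "channel": 10e-6,   # ~10 us: transport to the storage device
    "storage": 5e-3,    # ~5 ms: spinning disk services the request
}

# The disk dominates: roughly 200 ops/sec, and the CPU sits mostly idle.
print(max_ops_per_sec(spinning_disk_path))

# Swap in much faster storage (~50 us) and the ceiling rises dramatically;
# the CPU's share of each operation is now far more significant.
flash_path = dict(spinning_disk_path, storage=50e-6)
print(max_ops_per_sec(flash_path))
```

The point of the sketch is the ratio, not the absolute numbers: when the storage stage improves by two orders of magnitude, the component that "moves" toward being the bottleneck is the one generating and servicing the I/O, the CPU.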
CORRELATION: WORK PER UNIT TIME AND I/O LATENCY
Application I/O demands within an architecture tend to increase either through the introduction of sustained high-intensity workloads such as Online Transaction Processing (OLTP), through an increase in the number of workloads running concurrently, or, in the worst case, both. Certainly virtualization technologies such as VMware ESX and Microsoft Hyper-V have contributed to concurrency. In either scenario, however, the effect is the same: aggregate I/O demand per unit of time rises, and with it the pressure on whichever component services that I/O.
Read the entire article here: CPU – The New Bottleneck?
via the fine folks at DataCore Software