When you build a 3PAR array, you populate it with one or more disk types: Fibre Channel, SATA, or SSD. When you start to provision storage LUNs, the first thing you do is create a Common Provisioning Group (CPG). Think of a CPG as a template for creating LUNs: its most basic attributes are disk type, RAID type, and RAID set size. For example, we might create a CPG defined as RAID 5, with a set size of 3+1, on Fibre Channel disks.
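To make the template idea concrete, here's a toy Python sketch (this is not the 3PAR CLI or API; the class and field names are my own) modeling a CPG and the usable-capacity fraction implied by its RAID settings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CPG:
    """Toy model of a Common Provisioning Group (illustrative only)."""
    name: str
    disk_type: str   # e.g. "FC", "SATA", "SSD"
    raid_type: str   # e.g. "RAID5"
    data_disks: int  # data members in each RAID set
    parity_disks: int

    @property
    def set_size(self) -> str:
        return f"{self.data_disks}+{self.parity_disks}"

    @property
    def usable_fraction(self) -> float:
        """Fraction of raw capacity left after parity overhead."""
        return self.data_disks / (self.data_disks + self.parity_disks)

# The RAID 5 (3+1) Fibre Channel CPG from the example above:
fc_r5 = CPG("FC_r5", disk_type="FC", raid_type="RAID5",
            data_disks=3, parity_disks=1)
print(fc_r5.set_size, fc_r5.usable_fraction)  # 3+1 0.75
```

A 3+1 set gives you 75% of raw capacity as usable space; a wider set size would trade rebuild exposure for less parity overhead.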
Once we have our CPG defined, we can start provisioning LUNs against it. When we create a LUN and start writing data to it, 1 GB chunklets are allocated to the CPG; if multiple LUNs are provisioned against the CPG, they share these chunklets. The data written to our new LUN is striped across the chunklets in 128 MB regions (the sub-allocation unit within a chunklet). Chunklets are drawn from every disk in the array of the type specified by the CPG (e.g. Fibre Channel).
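A rough sketch of the numbers involved (the mapping function below is hypothetical, purely to show the wide striping): each 1 GB chunklet holds 1024 / 128 = 8 regions, and consecutive regions of a LUN can be pictured as landing round-robin across the chunklets backing the CPG:

```python
CHUNKLET_MB = 1024   # 1 GB chunklet
REGION_MB = 128      # sub-allocation unit within a chunklet
REGIONS_PER_CHUNKLET = CHUNKLET_MB // REGION_MB  # 8

def region_to_chunklet(region_index: int, num_chunklets: int) -> int:
    """Hypothetical round-robin placement of a LUN's regions
    across the chunklets backing a CPG."""
    return region_index % num_chunklets

# With 16 chunklets allocated to the CPG, the first 16 regions each
# land on a different chunklet (and therefore different disks):
placements = [region_to_chunklet(r, 16) for r in range(16)]
print(REGIONS_PER_CHUNKLET, placements)
```

The real allocator is more sophisticated, but the effect is the same: a sequential write to one LUN fans out across many chunklets, and so across many physical disks.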
RAID protection is achieved by striping data across chunklets positioned on multiple disks, across disk magazines (there are four disks per magazine), and across shelves in the array, so your data can survive a failure of any of these components. It also means that the controllers and Fibre Channel ports driving the disks and shelves are all moving data in parallel, in a balanced way.
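The placement rule amounts to "never put two members of the same RAID set in the same failure domain". A toy version of that rule (the data model here is mine, not 3PAR's internal layout):

```python
def pick_raid_set(chunklets, set_size):
    """Greedily pick `set_size` chunklets, each on a distinct shelf,
    magazine, and disk, so no single disk, magazine, or shelf failure
    takes out more than one member of the RAID set."""
    chosen = []
    used_shelves, used_mags, used_disks = set(), set(), set()
    for c in chunklets:
        if (c["shelf"] in used_shelves or c["magazine"] in used_mags
                or c["disk"] in used_disks):
            continue
        chosen.append(c)
        used_shelves.add(c["shelf"])
        used_mags.add(c["magazine"])
        used_disks.add(c["disk"])
        if len(chosen) == set_size:
            return chosen
    raise ValueError("not enough independent failure domains")

# Free chunklets spread over four shelves, magazines, and disks:
free = [{"shelf": s, "magazine": m, "disk": d}
        for s, m, d in [(0, 0, 0), (1, 1, 4), (2, 2, 8), (3, 3, 12)]]
rset = pick_raid_set(free, set_size=4)
print(len(rset))  # 4
```

Every member of the chosen set sits behind different hardware, which is exactly what lets the array keep serving data through a component failure.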
This is huge for performance. On every read or write operation, every disk (of a given type), every controller, and every back-end port on the array is involved. This means that every LUN has access to all of the performance, all of the IOPS, and all of the throughput that the array is capable of.
Add to that the Adaptive Optimization (AO) feature, which provides sub-LUN tiering. Using AO, regions can be moved automatically between CPGs based on performance. For example, we might create another CPG on SSD storage, another on SATA disks, and perhaps a RAID 10 Fibre Channel CPG. Then we can tell AO to automatically move regions up to SSD or down to SATA as needed.
After you've defined your CPGs and your AO policies, AO starts watching the array perform. If, over a sampling period of a few hours, it detects consistent behavior that matches an AO policy, it acts on it: regions that are being used heavily can be moved to SSD, while regions that sit idle can be moved to SATA.
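A crude illustration of the idea (the thresholds, sampling numbers, and function are all invented; real AO policies are configured per CPG on the array):

```python
def ao_plan(region_iops, hot_threshold=200, cold_threshold=5):
    """Toy Adaptive Optimization pass: given average IOPS per region
    over the sampling window, decide which regions to promote to SSD
    and which to demote to SATA. Thresholds here are made up."""
    promote = [r for r, iops in region_iops.items() if iops >= hot_threshold]
    demote = [r for r, iops in region_iops.items() if iops <= cold_threshold]
    return promote, demote

# Fabricated average IOPS per region over the last few hours:
stats = {"r0": 450, "r1": 12, "r2": 0, "r3": 900, "r4": 3}
promote, demote = ao_plan(stats)
print(promote, demote)  # ['r0', 'r3'] ['r2', 'r4']
```

Note that lukewarm regions like `r1` stay where they are; AO only moves data when the observed behavior clearly warrants it.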
The importance of AO can't be overstated. Imagine a VDI environment where the heavily used regions are boot files: AO can automatically move them to faster storage, improving performance during boot storms. Meanwhile, rarely used data is moved to SATA, which frees up valuable space on your Fibre Channel disks and keeps your SATA disks lightly loaded.
The result is that your data is automatically spread across all disks and all disk types in the array, with hot regions on fast disks and cold regions on slow disks, just as it should be. All of the performance capability of the entire array is made available to all of your data, automatically. The ultimate factor that determines the performance of the array is how you build it: how many disks of each type you install determines the overall IOPS, throughput, and capacity of the array, all of which is made available to all of your data on all of your LUNs. That's what makes 3PAR storage so, so awesome.