Today, it is actually possible to build a highly capable database system in a laptop form factor. Of course, there is no point in running a production database on a laptop. The purpose is to let consultants (i.e., me) investigate database performance issues without direct access to a full-sized server. The laptop only needs to have the characteristics of a proper database server; it does not need to be an exact replica.
Unfortunately, commercially available laptops do not support the desired configuration, so I am making an open appeal to laptop vendors. What I would like is:
1) Quad-core processor with Hyper-Threading (8 logical processors)
2) 8-16GB memory (4 SODIMM sockets, so we do not need really expensive 8GB single-rank DIMMs)
3) 8 x 64GB (raw capacity) SSDs on a PCI-E Gen 2 x8 interface (for the main database, not the OS)
- alternatively, 2-4 x4 externally accessible PCI-E ports for external SSDs
- or 2 x4 SAS 6Gbps ports for external SATA SSDs
4) 2-3 SATA ports for HDD/SSD/DVD, for OS boot etc.
5) 1-2 e-SATA ports
6) 2 x 1GbE ports
Below is a representation of the system, if this helps clarify.
The Sandy-Bridge integrated graphics should be sufficient, but high-resolution 1920x1200 graphics and dual-display support are desired.
There should also be a SATA hard disk for the OS (or a SATA SSD without the 2.5in HDD form factor if space is constrained), as the primary SSD array should be dedicated to the database.
Other desirable elements would be 1 or 2 e-SATA ports, to support backups and restores without consuming the valuable main SSD array, and 2 GbE ports so I can test code for parallel network transfers.
The multiple processor cores allow parallel execution plans. Due to a quirk of the SQL Server query optimizer, 8 or more logical processors are more likely to generate a parallel execution plan in some cases.
Ideally, the main
SSD array is comprised of 2 devices, one on each PCI-E x4 channel.
The point of the storage system is to demonstrate 2GB/sec+ bandwidth and 100-200K IOPS. One of the sad facts is that even today, storage vendors promote $100K+ storage systems that end up delivering less than 400-700MB/s of bandwidth and less than 10K IOPS.
So it is important to demonstrate what a proper database storage system should be capable of.
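As a rough sanity check, here is a back-of-envelope calculation showing that 8 SSDs on a PCI-E x8 interface can plausibly reach those targets. The per-SSD figures below are illustrative assumptions, not measured specs:

```python
# Back-of-envelope check of the storage targets above.
# Per-SSD figures are illustrative assumptions, not vendor specs.
ssd_count = 8
per_ssd_read_mb_s = 280     # assumed sequential read rate per SSD
per_ssd_read_iops = 20000   # assumed small-block random read IOPS per SSD

array_bandwidth_gb_s = ssd_count * per_ssd_read_mb_s / 1000.0
array_iops = ssd_count * per_ssd_read_iops

print(f"aggregate bandwidth: {array_bandwidth_gb_s:.1f} GB/s")  # ~2.2 GB/s
print(f"aggregate random IOPS: {array_iops}")                   # 160,000
```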
Note that it is not necessary to have massive memory.
A system with sufficient memory and a powerful storage system can run any query, while a system with very large memory but weak storage can only run read queries that fit in memory. And even if data fits in memory, the performance could still fall off a cliff on tempdb IO.
Based on component costs, the
laptop without PCI-E SSD should be less than $2000, and the SSD array should be
less than $1000 per PCI-E x4 unit (4x64GB).
It would really help if the PCI-E SSD could be powered off from software, i.e., without having to remove it. This is why I want to boot off the SATA port, be it HDD or SSD.
As explained below, 2 SSDs on SATA ports do not cut the mustard.
The spec above calls for 8 SSDs. Each SSD is comprised of 8 NAND packages, and each
package is comprised of 8 die. So there are 64 die in one SSD, and IO is
distributed over 8 SSDs, or a total of 512 individual die.
The performance of a single NAND die is nothing special, and even pathetic on writes. However, a single NAND die is really small and really cheap. That is why it is essential to employ high parallelism at the SSD unit level, and then employ parallelism over multiple SSD units.
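To put numbers on this, here is a quick calculation; the per-die write rate is purely an assumption to illustrate the point:

```python
# Putting numbers on the parallelism argument.
packages_per_ssd = 8
dies_per_package = 8
ssd_count = 8

dies_per_ssd = packages_per_ssd * dies_per_package    # 64
total_dies = dies_per_ssd * ssd_count                 # 512

per_die_write_mb_s = 5   # assumption: a single NAND die is slow on writes
aggregate_write_gb_s = total_dies * per_die_write_mb_s / 1000.0

print(f"dies per SSD: {dies_per_ssd}, total dies: {total_dies}")
print(f"aggregate write bandwidth with all dies busy: {aggregate_write_gb_s:.1f} GB/s")  # ~2.6 GB/s
```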
An alternative solution is for the laptop to expose 2-4 PCI-E x4 ports (2 Gen 2
or 4 Gen 1) to connect to something like the OCZ IBIS, or a SAS controller
with 2 x4 external SAS ports.
The laptop will have 1 Intel quad-core Sandy-Bridge processor, which has 2 memory channels supporting 16GB of dual-rank DDR3 memory. The processor has 16 PCI-E gen 2 lanes, DMI g2 (essentially 4 PCI-E g2 lanes) and integrated graphics. There must be a 6-series (or C20x) PCH, which connects upstream on the DMI. Downstream, there are 6 SATA ports (2 of which can be 6Gbps), 1 GbE port, and 8 PCI-E g2 lanes. So on the PCH, we can attach 2 HDDs or SSDs at 6Gbps, plus support 2 eSATA connections. There is only a single 1GbE port, so if we want 2, we have to employ a separate GbE chip.
While the aggregate bandwidth of the PCH downstream ports exceeds the upstream DMI, it is OK for our purposes to support 2 internal SATA SSDs at 6Gbps, 2 eSATA ports and 2 GbE, plus USB etc. The key is how the 16 PCI-E gen 2 lanes are employed. In the available high-end laptops, most vendors attach a high-end graphics chip (to all 16 lanes?). We absolutely need 8 PCI-E lanes for our high-performance SSD storage array. I would be happy with the integrated graphics, but if the other 8 PCI-E lanes were attached to graphics, I could live with it.
The final comment (for now) is that even though it is possible to attach more than 2 SSDs off the PCH, we then need the bandwidth on the main set of PCI-E lanes; it is not acceptable for all of the storage traffic to be clogging the DMI and PCH.
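A rough lane-bandwidth budget makes the point, assuming roughly 500MB/s per PCI-E gen 2 lane and treating DMI as roughly 4 gen 2 lanes, per the description above:

```python
# Rough lane-bandwidth budget, assuming ~500MB/s per PCI-E gen 2 lane
# and treating DMI as roughly 4 gen 2 lanes.
mb_per_gen2_lane = 500

ssd_array_lanes = 8   # the x8 we want for the main SSD array
dmi_lanes = 4         # the PCH's upstream link

print(f"x8 for the SSD array: {ssd_array_lanes * mb_per_gen2_lane / 1000.0:.0f} GB/s")  # ~4 GB/s
print(f"DMI to the PCH:       {dmi_lanes * mb_per_gen2_lane / 1000.0:.0f} GB/s")        # ~2 GB/s
# The 2GB/s+ array target fits comfortably on x8, but would saturate
# the DMI if the array were hung off the PCH instead.
```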
Thunderbolt is 2x2 PCI-E g2 lanes, so technically that's almost what I need (8 preferred, but 6 acceptable).
What is missing from the documentation is where Thunderbolt attaches.
If it attaches directly to the Sandy-Bridge processor (with a bridge chip for external connections?), then that's OK;
if it hangs off the PCH, then that is not good enough, for the reasons I outlined above.
Also, we need serious SSDs to attach off Thunderbolt; does the Apple SSD cut the mustard?
The diagram below shows the Thunderbolt controller connected to the PCH,
but also states that other configurations are possible.
The problem is that most high-end laptops are designed with high-end graphics,
which we do not want squandering all 16 PCI-E lanes.
A Thunderbolt controller attached to the PCH is capable of supporting x4 PCI-E gen 2, but cannot also simultaneously support saturation-volume traffic from internal storage (SATA ports) and the network (not to mention eSATA). I should add that I intend to place the log on the SATA-port HDD/SSD, along with the OS, hence I do not want the main SSD array generating traffic over the DMI-PCH connection.
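Here is a rough illustration of the contention problem; the per-device traffic figures are assumptions for the sake of argument, not measurements:

```python
# Why Thunderbolt on the PCH is not good enough: everything downstream
# of the PCH shares the ~2GB/s DMI link. Traffic figures are assumptions.
dmi_budget_mb_s = 2000

pch_consumers_mb_s = {
    "Thunderbolt x4 (external SSDs)": 2000,        # x4 gen 2 can carry ~2GB/s on its own
    "2 internal SATA 6Gbps SSDs (OS, log)": 1000,  # assumed ~500MB/s each
    "eSATA backup target": 250,
    "2 x 1GbE": 250,
}

total_demand = sum(pch_consumers_mb_s.values())
print(f"potential PCH downstream demand: {total_demand} MB/s "
      f"vs DMI budget: {dmi_budget_mb_s} MB/s")
# Demand can exceed the DMI link by a wide margin under load, so the main
# SSD array belongs on the processor's own PCI-E lanes, not behind the PCH.
```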
A Thunderbolt SDK is supposed to be released very soon, so we can find out more.
I am inclined to think that Thunderbolt is really a docking station connector, able to route both video and IO over a single connector. If we only need to route IO traffic, then there are already 2 very suitable protocols for this, i.e., eSATA for consumers and SAS for servers, each with a decent base of products. Of course, I might like a 4-bay disk enclosure for 2.5in SSDs on 1 x4 SAS port, or an 8-bay enclosure split over 2 x4 ports. Most of the existing disk enclosures carry over from the hard disk environment, with either 12-15 3.5in bays or 24-25 2.5in bays.