Intel Xeon E3 12xx v3 - Haswell 22nm
Intel Xeon E3 12xx v3 processors based on Haswell 22nm came out in Q2 2013. Dell does not offer these in the PowerEdge T110, staying with the Ivy Bridge 22nm E3-12xx v2 processors and below. The HP ProLiant ML310e Gen8 v2 does offer the Intel E3-12xx v3 processors.
Is there a difference in performance between Sandy Bridge (32nm), Ivy Bridge (22nm) and Haswell (22nm)?
Ideally, as far as SQL Server is concerned, we would like to see TPC-E and TPC-H benchmarks, but very few of these are published, and almost never for single-socket systems.
The other benchmark is SPEC CPU integer, but we must be very careful to account for the compiler: if possible, compare results on the same compiler version, though compiler advances usually accompany each processor generation.
In general, as far as SQL Server is concerned, discard the libquantum result and look only at the other sub-benchmarks, since libquantum is notoriously sensitive to compiler optimizations.
It is possible to find Sandy Bridge and Ivy Bridge Xeon E3-1220 3.1GHz (original and v2) results on matching compiler versions, which seem to show about a 5% improvement.
The only result for v3 is on the next Intel C++ compiler version, showing about a 10% gain,
so we do not know how much to attribute to the processor architecture versus the compiler.
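Since libquantum can swing the overall score, the comparison above is best done on the geometric mean of the remaining sub-tests. A minimal sketch in Python; the sub-test ratios below are made-up illustrative numbers, not published SPEC results:

```python
from math import prod

# Hypothetical SPECint2006 sub-test ratios for one system.
# The inflated libquantum value illustrates how one compiler-friendly
# sub-test can distort the overall score.
ratios = {
    "perlbench": 28.0, "bzip2": 21.0, "gcc": 26.0, "mcf": 33.0,
    "gobmk": 24.0, "hmmer": 27.0, "sjeng": 25.0, "libquantum": 420.0,
    "h264ref": 36.0, "omnetpp": 21.0, "astar": 22.0, "xalancbmk": 30.0,
}

def geomean(values):
    # SPEC aggregates sub-test ratios with a geometric mean.
    return prod(values) ** (1.0 / len(values))

all_tests = geomean(list(ratios.values()))
without_lq = geomean([v for k, v in ratios.items() if k != "libquantum"])
print(f"geomean all: {all_tests:.1f}, excluding libquantum: {without_lq:.1f}")
```

Even as one sub-test among twelve, a 10x-inflated libquantum lifts the composite by roughly 25%, which is why it should be discarded when comparing processor generations.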
In any case, it would be nice if Dell would ditch the external graphics and use the Intel integrated graphics in v3. I know this is a server, but I use it as a desktop because it has ECC memory.
Intel Xeon E5 26xx and 46xx v2 - Ivy Bridge 22nm - in Sep 2013
Intel Xeon E5 26xx and 46xx v2 processors based on Ivy Bridge 22nm, with up to 12 cores, supporting 2- and 4-socket systems respectively, should come out soon (September), superseding the original Xeon E5 (Sandy Bridge 32nm).
The 2600 series will have 12-core 2.7GHz, 10-core 3GHz and 8-core 3.3GHz at 130W.
The general pattern seems to be that E5 processors follow E3 and desktop by 12-18 months?
Intel Xeon E7 v2? - Ivy Bridge 22nm - in Q1 2014
There will be an E7 Ivy Bridge with up to 15 cores in Q1 2014 for 8-socket systems, replacing Westmere-EX.
I am not sure if it will be glue-less.
The current strategy is that there will be an E7 processor every other generation?
EMC VNX2 in Sep 2013?
VNX2 was mentioned as early as Q3 2012. I thought it would come out at EMC World 2013 (May).
Getting 1M IOPS out of an array of SSDs is not an issue, as 8(NAND)-channel SATA SSDs can do 90K IOPS.
Similarly, revving the hardware from 1-socket Westmere-EP to 2-socket Sandy Bridge EP poses no problems.
Perhaps however, changing the software stack to support 1M IOPS was an issue?
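The drive-count arithmetic behind the first point is simple; a quick sketch using the per-drive IOPS figure from the text, ignoring controller and RAID overhead:

```python
# Back-of-envelope: how many SSDs it takes to reach 1M IOPS.
ssd_iops = 90_000        # one 8(NAND)-channel SATA SSD, per the text
target_iops = 1_000_000

drives_needed = -(-target_iops // ssd_iops)  # ceiling division
print(drives_needed)  # 12 drives, before any controller overhead
```

A dozen SSDs is a trivially small array, which is why the hardware side is not the obstacle; sustaining 1M IOPS through the controller software stack is the harder problem.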
EMC Clariion used Windows XP as the underlying OS. One might presume VNX would be Windows 7 or Server?
or would EMC have been inclined to unify the VMAX and VNX OSs?
In any case, the old IO stack intended for HDD arrays would probably have to be replaced with one designed for SSDs, with much deeper queues, along the lines of NVMe. It would not be unexpected if several iterations were required to work out
the bugs in a complex SAN storage system?
IBM FlashSystem 720 and 820: 5/10TB SLC or 10/20TB eMLC (raw capacity 50% greater, with or without RAID), with 4x8Gbps FC or 4x40Gbps QDR InfiniBand.
HP MSA 2040 with four 16Gbps or 8Gbps FC ports.
I still prefer SAS in direct-attach storage, or if it must be a SAN, then InfiniBand.
FC, even at 16Gbps, is just inconvenient in not properly supporting multi-lane operation.
Crossbar made a news splash with the announcement of
Resistive RAM (RRAM or ReRAM) nonvolatile memory, with working samples from a production fab partner. Products should be forthcoming. Since this is very different from NAND, it would require a distinct PCI-E or SATA RRAM controller, analogous to the Flash controllers for NAND.
Current thought is that NAND Flash technology may be near its effective scaling limits (increasing bit density). Any further increase leads to higher error rates and lower endurance. My view is that for server products, 25nm or even the previous generation is a good balance between cost and endurance/reliability. The 20nm technology should be the province of consumer products.
Some companies are pursuing Phase-change Memory (PCM) instead.
Crossbar is claiming better performance, endurance and power characteristics for RRAM over NAND.
Seagate lists 1200GB and 900GB 10K 2.5in HDDs, along with an enterprise version of the 7200 RPM 4TB 3.5in HDD. HP lists these as options on their ProLiant servers. Dell does too.
I would think that a 2TB 2.5in 7.2K disk should be possible?
Dell HDD and SSD pricing:
7.2K 3.5in SATA 1/2/4TB: $269, 459, 749
7.2K 3.5in SAS 1/2/3/4TB: $369, 579, 749, 939
10K 2.5in SAS 300/600/900/1200GB: $299, 519, 729, 839
SSD 6Gbps MLC SAS 800GB/1.6TB: $3499, 6599
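For comparison, the list above works out to the following price per GB; a quick sketch taking the top capacity of each line (prices in USD, decimal GB):

```python
# Price per GB from the Dell list above, top capacity of each line.
drives = {
    "7.2K SATA 4TB":  (749, 4000),
    "7.2K SAS 4TB":   (939, 4000),
    "10K SAS 1200GB": (839, 1200),
    "SSD MLC 1.6TB":  (6599, 1600),
}

per_gb = {name: price / gb for name, (price, gb) in drives.items()}
for name, cost in per_gb.items():
    print(f"{name}: ${cost:.2f}/GB")
```

Roughly $0.19/GB for 7.2K SATA versus about $4.12/GB for the MLC SSD, a 20x spread, with 10K SAS in between at about $0.70/GB.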
Samsung described the idea of using a small portion of an MLC NAND as SLC to improve write performance in certain situations. So apparently NAND designed as MLC can also be used as both SLC and MLC, perhaps on a per-page or per-block basis. I am thinking this feature is worth exposing?
The Samsung 2013 Global SSD Summit was in July. The video is on YouTube; I cannot find a PDF.
PCI-E interface in 2.5in form factor, i.e. NVMe.
Tom's HWG seems to have the best coverage.
Supermicro is advertising 12Gbps SAS in their products, presumably the next generation of servers will have it.
There is a company with an SSD product attaching via the memory interface.
There is a huge disparity in characteristics between DRAM and NAND, such that I would have serious concerns.
The Intel Xeon E5 2600/4600 processors have 40 PCI-E gen 3 lanes, capable of supporting 32GB/s IO bandwidth, so I don't see the need to put NAND on the memory channel.
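The nominal arithmetic behind that bandwidth figure: PCI-E gen 3 runs at 8 GT/s per lane with 128b/130b encoding, roughly 985 MB/s per lane per direction, so 40 lanes come to about 39.4 GB/s nominal (the 32GB/s above is presumably a realistically achievable number):

```python
# Nominal PCI-E gen 3 bandwidth for one Xeon E5 2600/4600 socket.
lanes = 40
transfer_rate_gt = 8          # 8 GT/s per lane
encoding = 128 / 130          # 128b/130b line encoding overhead

per_lane_gbs = transfer_rate_gt * encoding / 8  # GB/s per direction, ~0.985
total_gbs = lanes * per_lane_gbs
print(f"{total_gbs:.1f} GB/s per direction")  # ~39.4 GB/s nominal
```

Either way, a single socket already has more IO bandwidth than any realistic NAND array can saturate, which is the argument against moving NAND onto the memory channel.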