I'm upgrading from a Gen 12 Dell server with a single RAID 10 solid-state array to a Gen 14 with three solid-state arrays and one spinning-disk array. I'm wondering which needs the better performance, tempdb (where temp tables live) or the transaction log, and whether my plan is a good idea or just crazy...
Array #1 is a Dell BOSS controller card with mirrored 240 GB SATA M.2 SSDs @ 6 Gbps.
Arrays #2 and #3 both run on the same PERC H740P RAID controller with 8 GB NV cache.
Array #2 is four 480 GB mixed-use SAS SSDs (12 Gbps, 512n) in RAID 10.
Array #3 is four 1.2 TB 10K RPM SAS drives (12 Gbps, 512n) in RAID 5.
Array #4 is yet to be defined as either RAID 0 or RAID 1: two 800 GB mixed-use NVMe Express Flash drives (2.5" SFF, U.2, PM1725a) running at 8.0 GT/s. The RAID will be software-defined.
Array #1 will host the operating system and page file. Array #3 will just be used for backups, etc.
My understanding is that splitting the various database files (mdf, ldf, and tempdb) across as many separate physical buses as possible is the best way to maximize speed beyond a single SSD array. So my thought was to put the mdf on array #2, the ldf on array #1, and tempdb on array #4 in RAID 0.
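In case it helps to see what I mean, this is roughly how I'd repoint the files once the arrays are built; the drive letters, database name, and logical file names below are placeholders, not my actual ones:

```sql
-- Repoint tempdb to the NVMe array (#4). tempdb is recreated at the
-- new location on the next service restart, so nothing needs copying.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');

-- Repoint the user database's log to the BOSS mirror (#1). This one
-- does need the database offline and the .ldf physically copied to
-- the new path before bringing it back online.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILENAME = 'L:\SQLLogs\MyDatabase_log.ldf');
ALTER DATABASE MyDatabase SET OFFLINE;
-- ...copy the .ldf to L:\SQLLogs\ here, then:
ALTER DATABASE MyDatabase SET ONLINE;
```

Afterwards I'd sanity-check the placement with:

```sql
-- Show where every database file actually lives.
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       type_desc,
       physical_name
FROM sys.master_files
ORDER BY database_id, file_id;
```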
Question 1: Is it OK to put tempdb on RAID 0? I assume that if one drive fails and breaks the array, SQL Server will just shut down with errors but won't actually lose anything, since it's not the mdf or ldf? I'm having a hard time finding anything definitive either way.
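For context, my rough understanding of the failure mode (please correct me if this is wrong): tempdb is rebuilt from model at every service start, so losing the array shouldn't lose any persistent data, but the instance won't start again while the path is missing. I'm assuming recovery would look something like this, with placeholder paths:

```sql
-- Start the instance in minimal configuration first (e.g.
-- NET START MSSQLSERVER /f) so startup doesn't fail on the missing
-- tempdb path, then repoint tempdb to any surviving volume and
-- restart the service normally.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempFallback\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\TempFallback\templog.ldf');
```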
Question 2: If putting tempdb on RAID 0 is a bad idea, what about running it in RAID 1 instead? Obviously it won't be as fast as array #2, but the overall setup should still be significantly faster than running both the mdf and tempdb on array #2, right?
Question 3: Similar to Question 2... I realize array #2 is faster than array #1 as well, but putting the ldf on its own physical PCIe channel (array #1) should still be faster than putting both the mdf and ldf on array #2, right?
Question 4: If I'm completely crazy for considering this layout, what would be the best layout on the given hardware?
Thanks in advance for any advice!