Cache Me If You Can...

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Or Fighting With Samsengs... 2 can play at the same game


The Plextor M6 PRO SSD

sata3.jpg




Official Website
Plextor PX-M6Pro


Specs
M6 Pro


"Using a new generation of A19 nm Toshiba toggle NAND flash, a generous on-board cache, and the latest Marvell multi-core controller, the Plextor M6 PRO is able to take hardware performance to the practical maximum and provide ultra-stable performance at 6 Gb/s. Sustained sequential mode read performance up to 545 MB/s* and sequential write speeds up to 490 MB/s*. Real-world random read speeds reach a maximum of 100K IOPS* and up to 88K IOPS* for random writes. Speeds using exclusive PlexTurbo were up to ten times faster during testing."


standards.jpg



Also noteworthy are these numbers
  • Consumption 0.25 W (MobileMark) / 0.2 mW (DEVSLP)
  • MTBF > 2,400,000 hours
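For perspective, an MTBF of 2,400,000 hours implies a very low annualized failure rate under the usual constant-failure-rate assumption; a quick back-of-the-envelope sketch:

```python
# Rough annualized failure rate (AFR) implied by a quoted MTBF figure,
# assuming the usual constant-failure-rate (exponential) model.
HOURS_PER_YEAR = 8760

def afr_percent(mtbf_hours: float) -> float:
    """Approximate AFR as hours-per-year / MTBF, in percent."""
    return HOURS_PER_YEAR / mtbf_hours * 100

print(f"{afr_percent(2_400_000):.3f}%")  # 0.365% of drives expected to fail per year
```

(MTBF is a population statistic, not a promise that any single drive will last for centuries.)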


Packaging
Pretty is not the usual word used to describe an SSD, but the Plextor M6 PRO is definitely no shrinking violet when it comes to aesthetics. It is endowed with a very fanciful cardboard box, gleaming in textured Rose Gold (or is it more Pink Champagne?).

Thanks or no thanks to Apple, the world is again awash in varying shades of gold, which is decidedly the premium metal of choice in Asia, from where Plextor & Lite-On originate.


IMG_1467+(Copy).JPG


IMG_1470+(Copy).JPG



*Do note that the higher capacity devices come with more cache RAM which should boost the overall performance as well.



Bundled Goodies
  • Software CD with the PlexTool utility and NTI clone & backup utilities
  • VIP cert & warranty flyer
  • Storage screws & tray
  • 1 x SATA cable

IMG_1476+(Copy).JPG




...
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
The M6 Pro SSD
is itself decked out in a brushed aluminium case, again in Rose Gold. Here it is, looking splendid and well ensconced in thick protective foam.

IMG_1471+(Copy).JPG




Ooh, a fresh new batch straight out of the factory indeed...

IMG_1486+(Copy).JPG




Here's the label side

IMG_1481+(Copy).JPG




And the warranty flyer certifying 5 years of peace of mind

5-year_warranty.jpg


IMG_1482+(Copy).JPG




...
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
PlexTool v1.1.5
comes on the bundled CD, as mentioned earlier, and this version sports SSD caching capability that can boost benchmarks and performance to dizzying astronomical heights.

BundledDiscContents+(Copy).PNG


PlexTools+(Copy).PNG



And yes, it only works with Plextor-branded devices, so no such luck for this Intel 530 SSD - everything appears greyed out

PlexTools_Intel530+(Copy).PNG




Next, let's jump straight to the new PlexTurbo feature on the Plex M6 Pro and click on the Enable button pronto!

PlexTool_PlexTurbo+(Copy).PNG


PlexTool_PlexTurbo_Enabled+(Copy).PNG


PlexTool_PlexTurbo2+(Copy).PNG




Of course, you can always disable it if you ever feel the need

PlexTool_PlexTurbo_Disable_Option+(Copy).PNG




...
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Preview Benchmarks

Test Setup
★ Asus Z87-A | i7-4790K@4.6GHz | 8GB KVR@2133 | NiCu HK 3.0-GTX360 | R9 290 | Thermaltake Smart SE 630W ★

*EIST is left enabled throughout the benching, as in the usual 24/7 usage pattern of most rigs



Crystal Disk Info

CrystalDiskInfo+(Copy).PNG





Crystal Disk Mark


Stock

CrystalDiskMark+(Copy).PNG



PlexTurbo

CrystalDiskMark_Turbo+(Copy).PNG





AS SSD Bench


Stock

AS+SSD+Bench+(Copy).PNG



PlexTurbo

AS+SSD+Bench_Turbo+(Copy).PNG





ATTO Disk Benchmark


Stock

ATTO+(Copy).PNG



PlexTurbo

ATTO_Turbo+(Copy).PNG




Now, how about that? Crazy numbers indeed - this looks like a serious challenge to its SSD-caching competitors.



...
 
Last edited:

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Initial Impressions
• Pretty!
• Nice components - Japanese chips, Marvell controller
• Fast drive
• Fast to market by local distro
• Plextool SSD cache boosted performance
• Sensibly long warranty

Can be improved...
Plextool lacks TRIM and automated scheduling features
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Availability & Pricing
Amazingly, the M6 Pro is already available locally, ahead of online giants like Newegg and Amazon, so check it out here
SRP in SGD
 

SPEED

Supremacy Member
Joined
Jan 1, 2000
Messages
28,729
Reaction score
174
Actually, the 256GB model is not really fast enough.

Check out the numbers recently shown for the higher-capacity model, which has more onboard DRAM cache.
Plextor PlexTurbo Demo and M6 Pro Release Imminent- FMS 2014 Update | The SSD Review

629x419xPlextor-1-8.jpg.pagespeed.ic.vawulweLPS.webp



Now, those are truly scary numbers! :eek:

If you look carefully, that PlexTurbo run is on a 128GB drive, and those are its results.

Meanwhile, your 256GB runs slower than the 128GB with PlexTurbo. Most likely, the current version of PlexTool is not yet properly optimized for the M6 Pro.

I wonder when PlexTurbo will be released for M5 Pro Xtreme drives...
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,502
Reaction score
1,259
Misleading numbers. Writing to the cache does not equate to writing to the disk. There is nothing to prevent a sudden system crash that leaves nothing on the disk.

Caching at this level is superficial when you consider that the OS is already doing caching. That means access to a file will first hit the OS file buffer before ever reaching the disk cache.

Such access patterns are not easily tested using these benchmarking tools - totally misleading.
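To illustrate the OS file buffer point: a minimal sketch in Python (file name and size are arbitrary placeholders) showing that a second read of the same file is normally served from the page cache in RAM, which is why a naive read benchmark may never touch the disk at all.

```python
import os
import tempfile
import time

# Sketch of the OS page cache at work: the second read of the same
# file is normally served from RAM, so a naive read "benchmark" may
# measure the cache rather than the disk.
path = os.path.join(tempfile.gettempdir(), "page_cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB scratch file

def timed_read(p):
    t0 = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - t0, len(data)

cold, n = timed_read(path)   # may reach the disk (if not already cached)
warm, _ = timed_read(path)   # almost certainly served from the page cache
print(f"read {n} bytes: first {cold:.4f}s, second {warm:.4f}s")
os.remove(path)
```

Exact timings will vary by OS and hardware, and the "cold" read may itself already be cached from the write; the point is only that repeated access hits RAM first.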
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Sounds so bloody scary!

imo, everything's a fine balance between performance and probability.

With such impending danger and doom, maybe you had better disable the onboard cache already present on all your SSDs too.
The Impact Of A DRAM Buffer

As mentioned, the interaction between the DRAM cache and the file system cache is complex. Clearly, though, disabling the device cache alone does not eliminate the risk of data loss in the event of a power loss. The extent of that danger is subject to a number of variables. However, it has to be considered against the benefits of improved write performance and less wear.


Wait a minute, my mainboard's DRAM contents also vanish with a crash & reboot... should I disable that dangerous system RAM as well?

Maybe buy the APC UPS from Convergent as a bundle for precaution too? Think they would be happy to sell you that as well! =:p
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,502
Reaction score
1,259
Let's be concise about what kind of storage system we are talking about. Generally, we split memory systems into two main categories: persistent and volatile.

Onboard memory and all caching memory are volatile in nature. As such, the general consensus is that anything stored in them is temporary; nobody regards them as the final resting place of any document.

EPROM, persistent flash, and magnetic media are normally recognised as persistent storage. In layman's terms, when you save something from your application to any of these persistent stores, we expect it to be there unless some unexpected issue crops up.

Modern filesystems use journals to separate metadata from the actual data blocks. Consistency is one aspect of durable data storage that we want; persistency is another characteristic we expect from persistent storage.

Caching is often in the realm of lookup, not updating. Read caching can be as elaborate as you like with no harm to the consistency or durability of the data, since it is backed by already-persistent storage.

Writing, however, is something we need to be mindful of. Those gigabyte-range figures are derived from dynamic memory, which is not persistent in nature. What I want to share is that those figures are pretty pointless: once the data has to reach the persistent storage, throughput quickly slows down to what the persistent storage can actually do.

Modern operating systems offer two modes of persisting data: synchronous and asynchronous. Once you get past a certain queue size, performance must eventually slow down to what the persistent storage can manage. On Unix, we can issue sync from the OS to flush data down to the persistent storage, which the storage must respect. I believe the same goes for other operating systems too.

As such, those insane values seen in the benchmark are pretty much useless. While a UPS can save some scenarios, it is not a silver bullet for all failure scenarios. If these enhancements to the storage subsystem actually report completion back to the OS even though the data has only been written into the cache (using DRAM from the mainboard), that would be something to worry about. It would mean "SAVE" in your application should not be trusted at all - when you write something to the disk, it is really just in memory. I don't think anyone, especially in the IT industry, would recognise that as persistency, so I'm rather skeptical that this is the case for the cache we are talking about here.
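To make the flush semantics concrete, here is a minimal Python sketch (file name and size are arbitrary placeholders): a plain write() returning only means the data reached a buffer, while os.fsync() blocks until the OS has handed the data down to the storage device.

```python
import os
import tempfile
import time

# Sketch of flush semantics: a write() that "succeeds" only means the
# data reached a buffer; os.fsync() blocks until the OS has pushed the
# file's data down to the storage device, so durable throughput is
# bounded by what the persistent medium can actually do.
path = os.path.join(tempfile.gettempdir(), "fsync_demo.bin")
payload = os.urandom(8 * 1024 * 1024)  # 8 MB

with open(path, "wb") as f:
    t0 = time.perf_counter()
    f.write(payload)           # buffered: lands in Python/OS buffers
    buffered = time.perf_counter() - t0

    t0 = time.perf_counter()
    f.flush()                  # push Python's buffer to the OS
    os.fsync(f.fileno())       # force the OS to persist to the device
    synced = time.perf_counter() - t0

print(f"buffered write: {buffered:.4f}s, flush+fsync: {synced:.4f}s")
os.remove(path)
```

On most systems the flush+fsync step dominates, which is exactly why benchmarks that never issue a flush can report numbers far above what the flash can sustain. (Note that even fsync may stop at the drive's own volatile cache unless the device honours cache-flush commands.)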

If it's a read cache, the benefits are small, since much of the caching is already done at the file level, intercepted by the OS - all access to hardware resources must go through the OS. The claim is made that the read cache at the storage level is block-level instead of file-level. Even so, benchmarks of Samsung's RAPID mode don't show significant advantages, so it seems the OS is already doing a rather good job of file buffering.

If you want to activate it, go ahead if you have sufficient memory; if not, just let your OS do the job.

As for the observation that larger SSDs give better performance: it's moot for consumers, because you will hardly ever have sufficient QD to reach those numbers. Most users, even power users, run at QD below 5.

Read some reviews from TweakTown or elsewhere that graph QD versus performance, and you will see very little difference at low QD (< 5). Only toward QD=32 do the better SSDs start to pull away from the entry-level ones, because there is finally enough concurrent I/O to spread across multiple channels of NAND blocks.

With respect to the RAPID statement in the Samsung whitepaper, one must read it CAREFULLY. The words I want to highlight are: "RAPID strictly adheres to Windows conventions in its treatment of any buffered writes in DRAM -- RAPID obeys all “flush” commands, so any writes buffered by RAPID will make it to the persistent media just like the Windows OS cache or the HDD cache. (Consequently, the data loss risk is identical to that of Windows OS cache or HDD cache)."

That means if you are already using write buffers, you carry the same risk with or without RAPID - which doesn't mean what you are doing right now is safe. And if your application pushes a flush command down to the subsystem, the performance you get will not be what you see in the benchmark.

That Samsung statement makes it very clear; what is not clear is how the benchmark results you see would hold up if, to be on the safe side, every I/O issued a flush ATA command. This is not unusual in database applications. So you choose: either believe the tremendously good benchmark results, or fall back to reality and pick what you feel is safe for you.

I remember there are still a lot of uninformed consumers who don't proactively back up their data, thinking RAID is something they are not interested in or too expensive to stomach. In the end, when their storage fails, they would rather spend anything from hundreds to thousands on a data recovery service that doesn't guarantee 100% recovery. We see such requests in this forum from time to time, don't we?
 
Last edited:

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Guess you are worried that not everyone understands the logic behind this push for fantastically big numbers, cos yes... imo, it is ultimately a numbers game they are playing - hence, check again why the very first line of this thread reads as it does.

Or Fighting With Samsengs... 2 can play at the same game

If one manufacturer goes around shouting big numbers, albeit system-DRAM-cached numbers, and the market responds favourably, the competition is wont to perk up its ears and join the game, or potentially lose out.

The eventual bottleneck, as you painstakingly pointed out in your "concise" reply (actually, very nice to see someone bother!), is still the R/W speed of the flash cells.

The risk, as Samsung slyly points out, is the same as with Windows' own caching mechanism, though it doesn't point out that the larger the cache, the larger the potential data loss.

In the end, the discerning buyer will have to read between the lines and decide which is more important - big numbers or data safety.

Fortunately, as mentioned earlier, the DRAM cache feature can be disabled as easily as it is enabled, with just the click of a radio button.

Ultimately, it is also debatable who is to blame - manufacturers shouting big numbers, the over-eager mass market, or maybe both for leading each other on?

Should be interesting & illuminating to see how the other SSD brands are going to respond too. =:p
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,502
Reaction score
1,259
Well, I agree with you on the fact that business is business. I think it's a fair game everyone is playing here; I just want to point out where the caveats are.

Not every consumer is equipped with the knowledge some of us possess - in my case because I work in the IT industry and look into such things from time to time. General consumers are often given numbers and benchmarks that don't say much about what they measure or how they were measured. Even when a benchmark is detailed and unbiased, it still takes a well-informed reader to dig into the depths.

Since you brought it up in this thread, I just put in my opinion to inform the audience how to read it properly and what they are getting into. Ultimately it's the consumer's choice to weigh the pros and cons of everything.

:)
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
Still, I thank you for your measured and diligent discourse on data integrity.

In my real-world profession, it is oft frowned upon to exploit any knowledge asymmetry where it exists, so it is rather eye-opening for me to see how other, more recent industries get creative with numbers - with profit and impunity. :)
 

MacClipper

High Supremacy Member
Joined
Jan 1, 2000
Messages
33,617
Reaction score
4
This important difference was just reiterated to me: Plextor has this feature while its competitors do not.


head03.jpg


Safe Power Loss

Automatically protects against loss of data from the RAM cache during a power interruption or unexpected system crash.


The following screenshots are captured from the Plextor M6 Pro Product Kit

P10.PNG


P11.PNG


P12.PNG



FYI
 

Asure7

Supremacy Member
Joined
Jul 2, 2009
Messages
5,106
Reaction score
240
Gimmicks aside, I don't really see how there can be protection against power loss unless data is written to some form of persistent storage (and in a typical setup, the fastest such storage is the SSD itself).
 

davidktw

Arch-Supremacy Member
Joined
Apr 15, 2010
Messages
13,502
Reaction score
1,259
I agree. I don't see where the protection against data loss is.

As long as the OS receives an I/O-complete but the data is not found on the persistent storage, there will always be a chance of data loss. That is why an I/O flush is always synchronous, and a synchronous operation will always be delayed by the slowest component in the whole data path.

The only thing that comes to mind that can provide asynchronous write-back into the storage system and still offer high availability is battery-backed memory. That means the storage subsystem must provide battery backup for both the temporary memory unit and the actual persistent storage, with enough charge to write all pending data from temporary storage to persistent storage in the event of a sudden power failure outside the storage subsystem.

Failing that, the temporary memory unit should at least be implemented as a static memory module, where only current is required to sustain the data until it is fully written out. This is the approach used by high-end hardware RAID systems.

I hope you guys know that DRAM requires frequent refreshing of its capacitors, since they discharge very quickly - that's why it's called Dynamic RAM.

Static RAM (SRAM) does not require refresh cycles, just current to sustain the charge.

In the last pictorial of the PlexTurbo WTP, I read the single circle as incomplete safety versus the dual circles for write-through. That means "SAFE" has to be read in a different manner.
 

watzup_ken

High Supremacy Member
Joined
Nov 21, 2003
Messages
25,672
Reaction score
2,123
Not sure if I missed it, but is this RAM caching using system RAM or the RAM built into the drive? Some kind of power-loss protection may be possible if Plextor is using RAM built onto the SSD.

I've dabbled in RAM caching about a year back, and the results always look very impressive in benchmarks. The effects are not so obvious in real life, but still there. Of course, as pointed out, the risk is power loss.
 