@itsjasper, thanks for your point.
We need to separate the perception of the mass market from that of the rest of the users, who know a bit more.
I understand Synology's point: they found a hole in the market, namely that the mass market doesn't understand redundancy, etc. That is correct.
So Synology created SHR(1) as the solution for "give me more space and I don't want to care about details." As you stated, the final performance of the array is driven by the slowest disk in the array.
Trouble comes when such mass-market users (space eaters) need to purchase additional space, and the majority of them just buy a new disk. A typical situation:
1. An existing SHR(1) with two disks: an old 500GB and a "better" 4TB. Don't blame them, I've seen this scenario. Budget-driven people are creative.
2. They then upgrade with a new 10TB disk (because it's cheap now).
3. In the end they get 4.5TB of available space and 4TB for protection (which they don't see, so they don't care about it).
But:
- They could get the same result with a new 4TB, 6TB, or 8TB disk: still just 4.5TB of available space. So that investment in a new 10TB disk for more space is really pointless.
They don't know that a better scenario is to buy just a new 4TB disk now and an additional big disk next year (cheaper than this year). Or simply to remove the old 500GB and substitute a new 4TB (similar space and better performance).
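The capacity claims above follow from how SHR(1) layers its disks. Here is a minimal sketch of that math in Python; this is my own approximation of the layered mirror/parity scheme (ignoring filesystem and DSM overhead), not Synology's actual implementation:

```python
def shr1_usable(disks):
    """Approximate SHR(1) usable capacity (in TB).

    SHR slices the disks into layers: each layer spans every disk that
    still has space at that depth, using RAID5 (n-1 usable) when 3+ disks
    reach the layer and RAID1 (mirror) when only 2 do. Space that only
    the single largest disk provides is left unusable.
    """
    sizes = sorted(disks)
    usable = 0.0
    prev = 0.0
    for i, size in enumerate(sizes):
        layer = size - prev        # thickness of this layer
        n = len(sizes) - i         # disks that reach this layer
        if layer > 0 and n >= 3:
            usable += layer * (n - 1)   # RAID5 across n disks
        elif layer > 0 and n == 2:
            usable += layer             # RAID1 pair
        # n == 1: leftover on the biggest disk alone is unusable
        prev = size
    return usable

print(shr1_usable([0.5, 4, 10]))   # the 10TB "upgrade" scenario
print(shr1_usable([0.5, 4, 4]))    # adding a cheaper 4TB instead
print(shr1_usable([4, 4]))         # replacing the old 500GB disk
```

Both the 10TB and the 4TB upgrade land on 4.5TB usable, and swapping out the 500GB disk gives 4TB, which is exactly why the extra money spent on the 10TB disk buys nothing today.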
And yes, this still leaves out data cleanup: removing the useless data that makes a mess of our disks. Buying a new disk is more comfortable than cleaning up.
PS: yes, I know a few people (a minority) can set up SHR(1) in a better way (performance- and stability-driven). And that's OK. But they don't really need SHR(1), because they know more about data tiering. It is still about the preparation phase.