Channel: StableBit DrivePool

SSD Optimizer and Disk Space Equalizer Problem

SSD balancing doesn't seem to work - well, of course it works; I've just got something screwed up!

Upon looking at it more, it seems the "Disk Space Equalizer" is the problem?
 
It seems no matter what I do I wind up with files staying on my SSDs - I want nothing on my SSDs; I want them to be a landing zone. I'd also like my drives to be evenly used.
 
Win10 Pro 64-bit
StableBit DrivePool 2.1.561
 
The link here should show you everything. I've got the Disk Space Equalizer disabled for the moment. I have tried re-ordering the balancers, but I might not have hit on the right arrangement.
 
 
The SSDs (drives C: and D:) are used for the OS and temp files I don't care to have in the pool (like the Plex metadata).

Let me know what I'm doing wrong - or can the two not be used together?

Thanks!


Other files and Sys Vol Info


If you have a large amount of "other" files taking up space on your pool drive and would like to reduce the amount of "wasted" space then...

 

Warning: doing this could stuff up your PC, so go carefully - and if it goes to pot, it's not my fault :)

 

Your friend here is the command-line tool "vssadmin".

 

If you run the following command via an elevated PowerShell window

 

vssadmin list shadowstorage

 

it will list your VSS shadow copy storage areas, etc.

 

and then run

 

vssadmin resize shadowstorage /On=[drive letter]: /For=[drive letter]: /MaxSize=[maximum size]
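
 

For example, to cap the shadow copy storage that lives on (and covers) drive D: at 10GB - the drive letter and size here are just placeholders, so adjust them for your own setup:

vssadmin resize shadowstorage /On=D: /For=D: /MaxSize=10GB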

 

See http://woshub.com/how-to-clean-up-system-volume-information-folder/ for more details.

 

It got rid of 70GB+ of unneeded VSS data for me :)

 

Enjoy

Pool/Disk Performance


Should I see activity on the pool when it's balancing, as files are being moved around?

 

I do see new files being copied to the pool, just nothing when balancing.

 

Also, why is the balancer so slow to move files off the SSD cache? It can take more than a day to clear them out, and I have loads of free space.

Looking for Debug Help


Good morning everyone!

 

I'm looking for some debug advice. I have a Windows 10 setup with a Sans Digital 5-bay external HDD box. I'm using the SIIG SC-SA0M11-S1 eSATA host adapter (driver 2.0.8.0) recommended by StableBit. I have both StableBit DrivePool and Scanner loaded, with DrivePool managing 3 of the 5 disks in the Sans Digital box.

 

This box was set up in early 2014 and had been running flawlessly until last Friday (12/16). Then it started dying silently, requiring a hard reboot to recover. I've been able to isolate the problem to the external HDD box. If I leave the Sans Digital box turned off, the system runs normally with no problems. When I turn the Sans Digital on and reboot, the system will boot fine and run, but will die within 24 hours. I do not see any issues logged in the Windows event logs.

 

I have been able to recreate the symptoms. I was focusing on the SIIG adapter, and when you try to open Windows Devices from File Explorer, the system goes into la la land every time. Interestingly enough, File Explorer will not launch from the Task Bar - you have to open it from the Start Menu. Again, this happens every time the Sans Digital is powered on. If you reboot with it turned off, this scenario does not cause the death.

 

Just to test, I uninstalled the December Windows security patch, but that was not the problem.

 

Thoughts? 

Backups


I am curious as to what backup software/methodology that most of the people here use?

 

WHS 2011 has failed with the upgrade to Windows 10, and Windows Essentials, while barely cost-effective before, has gone over the edge with the upgrade to the 2016 version.

 

Is there a definitive software package that works with DrivePool as the pool size goes into the terabyte realm? Or are most people using NAS solutions?

 

I've turned on file duplication for now, but that's not a true solution.

 

Any thoughts, comments, or pointers to other conversations would be gratefully appreciated!

Nested Drivepools?


I currently have 68TB worth of drives set up, 55TB of it on one system that's directly hooked up to the other over a Mellanox ConnectX-2 via IPoIB. The I/O controllers are an LSI 9211-8i for the internal drives and 2x RocketRaid 642L port multipliers (replacing them with this next week): https://www.amazon.com/gp/product/B019R09N10/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

 

I have a 9200-8e, but even after flashing it to IT firmware it did not pick up my eSATA boxes. At any rate, my real question here is about nested DrivePools. On server (A) I have 55TB of drives, including two SSDs I'll be using for staging data with the SSD Optimizer; on server (B), where I do most of my day-to-day work and initiate most of my downloads, I have 12.7TB worth of drives (both setups excluding the boot drives). My goal here is to use CloudDrive and DrivePool to create one massive pool of data, where the underlying storage (the 55TB and 12.7TB) are DrivePools themselves.

 

I wanted to carve out 10TB DrivePools and rope those together into one large DrivePool. Would this be a recommended (and supported) setup? All of it is formatted as ReFS as well, for the bit-rot factor. The underlying pools would have duplication (so I get some semblance of parity) and the main pool would have no duplication.

Is there any flaw anyone can see here, or a better approach to handle this? I want to make sure I do this right, as my goal is to have a more long-term storage setup for media streaming and general data storage.
 

I was considering an alternative of setting up an iSCSI target/initiator, seeing as I have a dedicated connection between the machines. However, I am trying to get the best suggestions for making sure this setup is a good balance of performance, redundancy, and capacity.

Drivepool + Clouddrive integration - limiting cloud backups for files w/ 3x duplication


Hi all,

 

I need some tips on cloud backups: I'm running a DrivePool with 2x duplication and would like to create a secondary cloud backup for critical files. These files are in a specific folder where I've turned on 3x duplication (i.e. 2x local, 1x cloud). Specifically, the cloud drive should only store files with 3x duplication.

 

I tried fiddling around with the settings, but DrivePool keeps wanting to move files with 2x duplication over to the cloud. Or rather, that's what I think, because it's proposing to move 1TB of data from my HDD pool to the cloud (my critical files total well under that).

 

My approach to doing this:

 

1) Limit the cloud drive to only duplicated files (Balancing section)

2) In the Placement Rules section, set the root folders to not use the cloud drive, but create a separate rule that sets the specific sub-folder to use the CloudDrive. The sub-folder rules are ordered above the root folder rules.

3) Turn on 3x duplication for the specific folders

 

Thanks again, and happy holidays!

 

EDIT: I went ahead with the duplication, and the copying logic seems correct. However, I get the same issue as this:

http://community.covecube.com/index.php?/topic/2061-drivepool-not-balancing-correctly/

 

Furthermore, when I click on the "increase priority" arrow in DrivePool, the duplication process speeds up but CloudDrive's uploads slow down dramatically. Any idea why?

 

Capture.PNG

 

Clouddrive: 1.0.0.0.784 BETA

Drivepool: 2.2.0.737 BETA

 

 

 

Slow copy Speeds, 10G Lan, 2012R2E and Disk Write Cache


This is mainly for information, as it is not a DrivePool problem, but it initially looks like one when first experienced.

 

I was getting slow copy performance to the pool via my 10G network. Basically it would run fine at close to 500MB/s (SATA SSD to SATA SSD), about as fast as it can go with the disks involved. This would continue for a minute or so, then the copy would slow down to 20-30MB/s, or in some cases virtually zero. It happened during a copy of a 200GB+ backup file, so it was not a small-file problem, which I initially thought it might be. With further testing I found it would happen on virtually any file of a reasonable size, so file size is not the contributing factor, although the larger the file being copied, the more fully the problem shows.

 

One thought was that something was overheating and being throttled - I checked Scanner and no disk was above 35C. The sending PC also had low temps. I also checked the 10G network cards, and they were showing no errors and normal temps according to the driver info.

 

The server with DP installed was very sluggish to respond when the slow copy occurred, but memory load was at most 50% on a 32GB system - hmmmm, odd.

 

When monitoring a copy more closely on the server, I could see the "modified" memory in Task Manager climb as a large copy started; it would peak at about 12GB, at which point the copy would slow to an effective crawl. Over time (a few minutes) this would decrease back to a lower value and the copy would speed up again, but it would drop to a crawl when the "modified" memory peaked again.
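
 

If you want to watch the same thing without Task Manager, here's a minimal sketch using the built-in performance counter from an elevated PowerShell window (the sample interval and count are just examples):

Get-Counter -Counter '\Memory\Modified Page List Bytes' -SampleInterval 2 -MaxSamples 30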

 

So I checked the write cache settings on all drives and found all but one had write cache enabled - odd, why would one not have it checked? Well, it turns out that write cache is disabled on the system drive of a domain controller by default. After enabling it, the copy speed went back to normal.

 

However, this does not survive a reboot, as Windows 2012 R2 Essentials must check this value and turn it off. - Grrr

 

So to prevent slow copies on your network (this occurs on 1G and 10G LAN), don't include your OS drive, or any partition of that drive (my mistake), in your pool - especially if it's an SSD used with the SSD Optimizer. When files get written to that drive the performance slows to a crawl and only speeds up again when copying to the other SSDs, so you get very variable copy performance and a very sluggish PC until the memory is cleared out.

 

I am experimenting with an option to modify the memory cache size in 2012 R2 to see if it makes any difference to performance. I can see that on a 10G network the performance does still decrease (with write cache enabled) to approximately 50% of normal copy speed when the memory cache is filled - this might be related to DP doing real-time duplication, but I'm not sure yet. Will report back when I understand more :)

 

Hope this helps somebody with similar issues :)


Disk Performance panel no longer shows active files


Just a minor annoyance: the Disk Performance panel in the lower-right of the window used to list the files currently being read or written. Now it never does, even when lots of activity is happening. Is this a known issue?

"Based on Files sizes. Actual disk space used may be less"


What does the above indicate? I have had this a couple of times recently, and when you check the disks they are empty.

 

A reboot appears to clear the misleading info. The ghost data is indicated as duplicated across the two SSDs used for cache, with both showing the same ghost size.

 

So is the data duplicated or not?

 

Is this a symptom of something else?

 

Is it an interface bug?

 

etc

NAS server with Drivepool under KVM/Proxmox Hypervisor.


I was thinking of running my file server under a Linux KVM hypervisor like Proxmox and passing the disks through to a Windows 10 VM with DrivePool. Are there any considerations? Does anyone know if SMART works under KVM?

Questions before switching


I'm currently using FlexRAID and I'm getting sick of constant false reports of corrupt files. All my drives check out OK and the files are all good, but its validation checks fail immediately after re-doing the parity. Needless to say, I no longer trust it to successfully restore my info if a drive dies.

 

Before I switch over I have a few questions and things I'd like confirmed if anyone here can help me.

 

First, and most importantly, can I add drives to a pool (or create one) when the drives already have data on them? I have 15TB of data to move, so this is huge for me. Assuming I can add to the pool without destroying my data, is there anything else I need to do?

 

Second, after the pool is created, can I later add additional drives to expand my pool, or am I stuck with the number of drives I start with?

 

 

Next, if I need to pull a drive out of the pool for any reason, am I able to read the data off the drive? If there's one thing FlexRAID has done well, it's that I can pull a drive out of the pool, plug it into another PC, and read all the data off of it.

 

Last, how does the backup functionality work here? My current setup has 7x 4TB drives that are pooled, with an 8TB drive used to store parity information. After formatting, this gives me between 25 and 26 TB of usable space in the pool, as the parity drive obviously isn't counted. I'd like to not lose any of that space to parity if I can help it, but at the least I want to be properly informed on what to expect.

 

Thanks in advance! It takes a while for FlexRAID to rebuild after pulling things out, so I'm a bit hesitant to go for the free trial until I know what I'm getting into.

 

DrivePool Disks not showing correct usage


Hi,

 

Since I clicked on remeasure, the DrivePool disk allocation is not showing the Pooled usage for 4 of the disks.

Any ideas what might be causing this? I'm running WHS 2011 with StableBit DP 1.3.6.7585, all balance settings are default.

 

Thanks

 

StableBit.png

Advice: More Storage for Windows 10 Server, External but Not USB


For DrivePool, I want more storage.

 

I have in my Windows 8.1 Windows Media Server / Plex box:

 

1 x SSD

1 x Blu-ray

4 x 6TB WD Red

3 x 8TB Seagate Archive

1 x 3 TB Hitachi

http://www.asrock.com/mb/Intel/Z77%20Extreme6/

1 x ASMedia 1061 Controller Card

3 x Tuner Cards

 

I need more SATA storage drives. I have spare random ones and plan to buy some more WD Reds.

 

What is a good external option that will work reliably with DrivePool and Scanner?

I have had no luck getting any SMART data with Scanner and external USB, and I even tried the Advanced settings in Scanner but had the computer lock up. A cheap temporary no-name USB bay keeps disconnecting every few weeks and I want it gone.

Is eSATA an option?

 

If suggesting other technology, I will need hand-holding, as I don't understand SAS etc. and don't have an enterprise budget, but I could spend up to AUD $500, or maybe more if it's robust.

 

I am in Australia

 

This is all I could easily find, readily available, that takes hard drives greater than 4TB.

 

This is the only case with eSATA that I could easily find, but some say the power supply is unreliable:

 

https://www.pccasegear.com/products/23687/hotway-h82-su3s2-8-bay-usb-3-0-non-raid-enclosure

 

I assume there would also be reliable enterprise second hand equipment based on some of the builds I see here on the forums.

 

Thanks

 

 

Pool duplication mysteriously disappeared?


I'm not sure what happened exactly, but I booted up my home server and noticed in My Computer that my pooled drives were half empty. I soon realized that much of the duplication had somehow disappeared. In theory the entire pool should have duplication, but currently (DrivePool is only 65% re-checked as I'm writing this) the legend under the pie chart shows ~8TB duplicated, ~13TB unduplicated, and ~17TB "other", yet there's no "other" shown on the pie chart itself. Also, My Computer says the pooled drive is 35.4TB with 14.4TB free. The fact that My Computer is reporting that much free space (there should be less than 1TB free) is what really has me concerned...


RAID5 drives or JBOD?


Hi,

 

I am about to extend my pool with a new set of drives.

 

In the past I moved from Drive Bender to this because one drive died / got stuck and I wanted to take advantage of the Scanner.

 

So my current setup is a bunch of single disks plus a RAID5 array (to protect against the loss of one drive), all added to one pool.

 

I am now asking myself: with DrivePool and the Scanner in place, is it still clever to build a second RAID5 array with the new drives and add it to the pool, or should I keep them as a bunch of disks and rely on the Scanner?

 

Is there any advantage / disadvantage in using RAID5 for pooled drives versus leaving them as a bunch of disks?

 

I don't want to duplicate stuff, because this would lead to heavy requirements. At the moment I am at about 20TB for my media library and I don't want to double the required space :D With the new disks I would now be able to move the data off the RAID and reset those drives as a bunch of disks, which is why I am asking.

 

Thank you in advance.

 

Kind regards :)

Snapraid + Drivepool vs. Just Drive Pool

Hello All,

I have an opinion question to ask the forum. For years I have been using DrivePool for pooling and SnapRAID for protecting myself against drive failure.

I am currently running into an unknown issue with SnapRAID, where my sync command gets stuck at 0%, and I am unable to figure it out.

That leads me to a point where I am questioning whether I need SnapRAID. Since I have Scanner along with DrivePool, I am wondering if the evacuation functionality in Scanner can protect me against drive failures. Essentially, if I always ensure that I have at least one drive's worth of free space available in my DrivePool, I would survive one drive failure.

Currently I am running 6 data drives in the pool, along with 2 drives for parity. If I were to just add the parity drives to the pool, I would have more than 2 drives' worth of "free capacity" for Scanner to evacuate to if there was a drive failure.

While SnapRAID has served me well and allowed me to recover files, just leveraging Scanner / DrivePool could simplify my setup slightly.

I just wanted to get the community's thoughts on this.

Thanks in advance

Network Errors While Renaming on Windows 10


If you're using Windows 10 Anniversary update to access a Windows Server 2012R2 or 2016 machine, and you're experiencing odd issues when creating, renaming or deleting folders.... 

 

 

This is a bug with Windows Search, and not actually StableBit DrivePool. 

 

We've had a few users report this, including myself. Worse, this only affects indexed shares (such as shares created by the Essentials Dashboard), so it will likely affect your pooled shares.

 

 

The fix is a brute-force one: disable the Search Service or remove these folders from the shares.

 

This means that if you're using the media streaming feature (built in or via the Media Pack for Essentials), or using the Remote Access Website, this will break the ability to view your files. 

 

 

You can do this quickly from a command prompt by running "sc config wsearch start= disabled" on the server (note the space after "start=" - sc needs it).
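
 

If you'd rather do it from PowerShell, this is a minimal equivalent sketch (run it elevated on the server; it also stops the service immediately):

Stop-Service wsearch
Set-Service wsearch -StartupType Disabled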

 

 

 

Relevant KB Article about this issue:

https://support.microsoft.com/en-us/kb/3198614

Dedupe and folders in drivepool or underlying drives


Hi,

I was planning on trying dedupe (Server 2016 Standard with the Essentials experience). ddpeval showed a 27% saving as of now. Should I run dedup on the underlying drives or on the DrivePool virtual drive? Any recommendations?
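
 

For reference, if you do go with one of the underlying NTFS drives, this is roughly what enabling it looks like with the built-in Deduplication cmdlets - E: here is just a placeholder for one of the physical pool drives, and this isn't a recommendation for which level to pick:

Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Default
Start-DedupJob -Volume "E:" -Type Optimization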

 

Also, for that matter, is it recommended to point the server shared folders (set up via the Server Essentials Dashboard) to the DrivePool drive, or should I instead point them to one of the underlying physical drives? I currently have them pointing to the DrivePool pool drive and it's working fine, except for the one time when I got a message that the user-defined folders were missing; that resolved itself (may be coincidental).

 

thank you

Large files and SSD Cache - Question/feature request


Problem

 

I have my nightly backups (Veeam) for each machine copied to my DP server, but as the files created by Veeam just grow in size over time (no shrink option other than restarting the backup history?), they become very large.

 

Recently a single file was getting into the 200GB range before I reset the history (i.e. started again).

 

When the files are updated they are stored on my NAS and then copied to my DP server via SyncBackPro.

 

The copying itself is fine and not an issue.

 

Question

 

When the backups are copied to my DP server they tend to fill up the cache (4x 256GB drives with real-time duplication); then a rebalance occurs sometime later and the data is moved to the archive drives - this works fine.

 

So is there any way for large files to bypass the SSD cache and get written to the archive disks directly (accepting the copy might be slower), without disabling the SSD cache for "other" files?

 

I guess I am asking for an exception list for the SSD cache by size/type of file?

 

I don't think this is possible, but I would like to be proved wrong :)
