Channel: StableBit DrivePool

Can you remove more than one drive at once?


Splitting my pool in two

 

Can I remove more than one drive at once? It appears so, as the option to remove another drive is not greyed out.

 

But I'm scared to try it in case it breaks something.


[Feature Request] - Drive Removal - Specify Target Drives


When removing a drive, can we have the option to specify which drive(s) to put the data on and/or which drive(s) not to put the data on? Obviously DP would give us a list of options based on space, duplicated-data restrictions, etc.

 

The reason being: I am splitting my pool, and the first drive I am removing is mostly moving its data onto another drive that I also want to remove, so the same files get handled twice or more!

Problems with Removing a Drive From Pool


Last evening, I decided to swap an old 2TB disk for a new 3TB disk in my 4-drive pool, which is duplicated 2x. The drives are as follows:

 

1x 4TB

1x 3TB

2x 2TB

 

Disks 2, 3, 4 and 5 are in the pool, with Disk 6 being the Pool itself. Disk 4 is at 100% use.

 

The 2TB drive got to around 75% completed removal before Windows 10 decided to do one of its famous "I'll restart when I want" moves. After coming home from work a few hours ago, I'm trying to see what can be done to remedy this situation. For the past few hours it's been 'Measuring', and I'm not sure what to do. My data seems intact, but it's VERY slow to access via Windows Explorer, and the drives aren't showing up in the DrivePool app.

 

I'm currently running Version 2.2.0.740 BETA x64 on Windows 10 Pro if that helps. Screenshots are attached.

 

DrivePool also seems to be very slow to start up now, like never before: "Service starting in 1s..2s.....1 minute....etc"

 

Am I totally borked, or is there any way to salvage this situation?

Attached Thumbnails

  • DrivePool.UI_2017-02-14_19-54-33.png
  • Taskmgr_2017-02-14_19-55-04.png

1:1 Duplication and "Free Space" in the Pool


I am currently using DrivePool on two sets of disks, one of which is a pair of 1TB drives that I have set up to duplicate each other as well as show up as a single pool. This is mostly used for more important documents and photos, as well as the most-accessed folders on the system, such as the downloads folder.

 

One "issue" I have been having with doing it this way is that the "Free Space" shown as available is the total of the two, which in most cases is what you'd want. The problem I have with that in my use case is it is really half as much. This is not so much of an issue if I Always (not the case) remember that it should be half when I go to allocate space for something but it also becomes a bother since I do have some automated tasks going to that disk and they will look at the total free space before executing. If there any way to work around this?

Issue re-adding a drive after a RAID rebuild.


So this is the first time I've had a drive failure while using DrivePool. I'd like to first say I am quite happy with how the software handled the missing drive: locking the rest of the drives against writes when one went missing helped prevent any errors with SnapRAID.

 

I am now running into an issue, though. Since SnapRAID has recreated all of the files on the disk, including the PoolPart folder, everything sees the drive as the same one it replaced, apart from its UUID being different, and this seems to have caused a conflict in DrivePool. More specifically, the drive "reappeared" in the pool, but DrivePool also sees it as a drive it can add.

Attached Thumbnails

  • Capture.PNG

Only duplicates on a CloudDrive


I know this has been asked before - I've even read a few threads on it, but I still can't seem to find a 100% "for sure" answer...

 

What I have in my pool is this:

Drive A - local drive

Drive B - local drive

Drive C - local drive

Drive D - clouddrive

 

What I want:

1) All my files to be saved to drives A, B and C

2) Folders that I choose to duplicate ONLY to be duplicated to drive D

 

What I don't want:

1) Duplicates on any of my local drives.

2) Non-duplicated files on drive D

 

The thread here:  http://community.covecube.com/index.php?/topic/1231-how-do-i-keep-duplicated-data-on-the-pool-but-other-copies-anywhere-else/

suggests that I not change the Drive Usage Limiter balancer, and instead just create file placement rules for only the folders I want duplicated, choosing ONLY drive D in those rules.

In doing that, am I not limiting that folder to being saved only to drive D? It's not making much sense to me. How does that put just the duplicates on drive D and allow the "original" file to go on drives A, B or C, when those are all unchecked?

 

Hopefully someone has a definitive answer for this as it's not very clear.

 

Thanks!

[Bug] SSD Cache - Copy fails when nearly full and [Bug] "Unusable for duplication"


As I understand it, when the SSD cache becomes full it should fall back to the Archive drives.

 

But

 

If you are copying large volumes of data (internally within the server, in this case), the copy will fail 50% of the time.

 

Basically, what appears to be happening is this:

 

If the SSDs are beyond their "% full" limit setting (in my case 95% full), the next file will fall back to the archive drives - as expected.

 

However, if the SSDs are not at or near the "% full" setting and the next file exceeds the remaining space on the SSDs (large video files), then the copy fails with a STREAM(????) error - not good for unattended large copies :P
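
To illustrate, here is a rough Python sketch of the placement logic I suspect (pure guesswork on my part; the names are invented and this is not DrivePool's actual code). The point is that the fallback checks the percent-full threshold but never asks whether the incoming file actually fits:

# Suspected vs. expected SSD-cache placement logic (all names invented).

FULL_THRESHOLD = 0.95  # my "95% full" setting

def choose_target_suspected(ssd_free: int, ssd_total: int,
                            file_size: int) -> str:
    used = 1 - ssd_free / ssd_total
    if used >= FULL_THRESHOLD:
        return "archive"   # over the limit: falls back, as expected
    # file_size is never consulted, so a large file that doesn't fit
    # still goes to the SSDs and the copy fails with the STREAM error
    return "ssd"

def choose_target_expected(ssd_free: int, ssd_total: int,
                           file_size: int) -> str:
    used = 1 - ssd_free / ssd_total
    if used >= FULL_THRESHOLD or file_size > ssd_free:
        return "archive"   # also fall back when the next file won't fit
    return "ssd"

If that guess is right, it would also explain why the copy proceeds once balancing has freed enough space.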

 

If you get the pool to balance and create sufficient space, the copy will proceed. However, this "paused" copy has an unwanted side effect: it generates a large amount of "Unusable for duplication" data - in my case over 700GB worth. Checking the disk sizes, comparing file size totals, checking for hidden files and running a full chkdsk /f on all drives did not show any errors or orphaned files, so it appears to be a bug in DP's calculation, as copying more data to the pool reduces the "Unusable for duplication" figure by the amount copied :blink:. In my case there are no other files on these disks, only the PoolPart dir, the recycle bin and SVI - and both SVI and the recycle bin are turned off and empty, as they were the first things I checked.

 

I have a suspicion that the amount of "Unusable for duplication" is the size of the copy that "fails", and that with more than one failed copy it is cumulative. This is difficult to check, as inducing the error depends on which files are copied and their sizes.

 

I did reboot to see if it would solve the issue - no joy

I remeasured the pool a couple of times - no joy

 

I just copied 203GB of data to the pool, and "Unusable for duplication" dropped from 502GB to 299GB; as each file was copied, the total dropped by that amount during the copy.

I am doing a second copy of 296GB to see if I end up with ~3GB left in "Unusable for duplication"; the total is dropping file by file, as seen in the first copy. [edit] - I ended up with 3.34GB unusable after the copy. It is now back to normal, as I have copied more data across to the pool and no longer have any "Unusable" space.

 

DP version 740 Beta

Win 2012r2 SE

Pool in question is not duplicated

Pool is second pool on server - if that makes any difference

 

If you need any logs etc let me know :)

Noticed different behavior for Win7 "copy" vs "move" - data recovery


I am recovering a drive I pulled out of a hot-swap box (the drive lost the DrivePool partition because I didn't unmount the drive from Windows first).

 
I now have all the recovered folders/files sitting on a spare drive (thanks to WonderShare Data Recovery) and am slowly copying them back to the pool.
 
My question is this:
Is there some difference between using Windows 7 copy vs. Win7 move, as far as the DrivePool emulation is concerned?
 
What I have noticed is this (I know how to force Windows to copy or move as needed, of course):
 
a) If I use "copy", I may or may not get a "merge folders" dialog, and a "these files are already present, replace or no?) dialogs. If I use copy,
it will often times just look like its copying, I will come back after some time, expecting to see a merge dialog up, and I see nothing. No copy going on, no dialogs, nothing. Like it just copied the first few folders and then stopped?
 
B) If I use "move" on the other hand, it seems to always do a reliable deep transfer of files all the way down the tree, across all folders, very reliably. 
 
I was not aware that I should expect different behavior between these 2 operations, except, of course, that moved files are deleted from the source drive.
 
I am, of course, copying from the spare drive holding the recovered files to the emulated pooled drive.
I assume this is not a bug in Windows; I'm thinking it's a bug in the DrivePool file system emulation?
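
In the meantime, a workaround I'm considering, as a rough Python sketch (SRC and DST are placeholders for my recovery drive and the pool): copy the tree one file at a time with a log line per file, so a silent stall is at least visible, and re-running resumes where it left off.

# copy_tree_logged.py - copy a recovered tree onto the pool one file at a
# time, logging each file and skipping ones already present with the same
# size. SRC and DST are placeholder paths.
import os
import shutil

SRC = r"E:\recovered"   # hypothetical recovery drive
DST = r"P:\restored"    # hypothetical pool target

for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    target_dir = os.path.join(DST, rel)
    os.makedirs(target_dir, exist_ok=True)
    for name in files:
        src_file = os.path.join(root, name)
        dst_file = os.path.join(target_dir, name)
        if (os.path.exists(dst_file) and
                os.path.getsize(dst_file) == os.path.getsize(src_file)):
            print("skip  ", dst_file)
            continue
        shutil.copy2(src_file, dst_file)
        print("copied", dst_file)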
 
I'm on version: 2.1.1.561
 
Anyone else ever notice this behavior?

Folder Duplication - backup file of settings? Settings lost with C: SSD failure


I lost my C: SSD drive some time ago. I seem to have lost a good working backup of it as well...

 

So I installed a fresh Windows 7 on a new SSD. I now have the pool back up and running, with one difference.

 

I don't know how to restore my Folder Duplication settings precisely as they were.

 

I have a few questions regarding how DrivePool will act for me, so please follow along; I'll be brief:

 

 

1) Can I save my folder duplication config file (or, where is it and what's it called?) so that I can reload it later if I lose my Windows C: drive (assuming I am using the same version of DrivePool on the new configuration)?
 
2) What happens to my backed-up folders if I lose my C: drive and have to re-install Windows 7?
Will the original duplicated folders still be duplicated across 2 or more drives (for example, if I have 10 drives pooled)?
 
3) If for any reason I don't set up the duplicated folders immediately after a Windows re-install, will those duplicate folders/files stay as they are, or will they be deduplicated over time by DrivePool?
 
I have used this forum many times before, and I don't remember these questions ever being addressed; it may be a special situation. Please point me to other threads if this is a duplicate question.

New pool creation failure


New install of Beta .651 on Windows 10. I only rebooted once, not twice. :( I didn't have a "new pool" button, just an option to add drives, so I clicked the + Add link for the D: drive, and it hung at 95%. When the pool was never created, I cancelled the creation and then tried with the E: drive. This also sat at 95%, and then the pool creation appeared to cancel on its own. I now have no pool, and I cannot add the D: or E: drives; they error out saying they are in the pool already. I only found the article saying Windows 10 needs two reboots after I had tried creating the pool.

 

How do I remove the pool that exists but I cannot see and start over?

 

[Edit] - I have rebooted 2x and no change in behavior.

How can I optimize DP to minimize drive overheating?


This isn't really a DrivePool question directly, but my computer tends to overheat its drives, which isn't good. Obviously the real fix is hardware (fans, etc.), but in the interim: would it be better to have DP fill my biggest drive, then the next, and so on, leaving the smaller/older drives at the end of that list unused, idle and not generating heat in the case? Or is it best to spread data evenly among all the drives, even though the case seems to have trouble keeping up?

 

If that makes sense. Which it may not.

 

Basically, acknowledging that this is a hardware issue and not a DP issue, how can I optimize DP to minimize drive overheats?

8TB Archive showing up as 1.3 TB


I'm not sure where else to ask, you all seem to be the type of people that live out on the edge like I do.

 

I have a Windows 7 VM running in VMWare 6.5. The system currently has two 5TB WD Red drives that I use in a DrivePool for storage. Opening "Computer" shows each of the two drives as 4.54 TB; all is well.

 

So, I want to add an 8TB Archive drive. I set up the RDM in VMWare the same exact way I set up the 5TB drives. VMWare sees the drive as a capacity of 7.28 TB. In the Windows 7's VM settings, it also sees the drive there as 7.28 TB, and I have all the settings the same as how I configured the 5TB drives.
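
As an aside, the 7.28 TB reading for an "8TB" drive is expected; that part is just decimal vs. binary units, as a quick Python check shows:

# An "8 TB" drive is 8 * 10**12 bytes; VMWare and Windows report in
# binary units (TiB), so:
print(8 * 10**12 / 2**40)   # -> 7.2759..., displayed as "7.28 TB"

So the real puzzle is only the 1307.91 GB figure below.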

 

BUT

 

I go into W7 Disk Management and it shows the drive as only 1307.91 GB. I've tried everything I can think of to get this drive to show its full size, but nothing works.

 

Oh, and it's that way even when unallocated. I did set it up as GPT (as are the 5TB drives), but still it's just 1.3TB.

Empty PoolPart folder - cause of mis-reported "Other"?


Hi

 

I've set up a 2-disk pool using a pair of 4TB hard drives. Both drives are dedicated to the pool, i.e. they have no non-pooled data, yet the statistic displayed in DrivePool reports 1.5TB of "Other".

 
I did a bit of investigation, and there appears to be a rogue PoolPart directory on the 2nd disk which is empty. Could this be the cause?
 
If so: I tried to delete the empty folder, but Windows wouldn't let me. Is there a quick fix?
If not: do you have any ideas what it could be?
 
Disk 1 - F:
PoolPart.be5e2572-871d-4287-b954-8fa08274ea08
 
Disk 2 - G:
PoolPart.74b7a9ae-35c3-47d7-b3a4-6076082c3e1d
PoolPart.8129887a-a4f0-4a26-bfd5-dcd223b33c91 <<<< This is the rogue / empty folder
 

 

Changes to file names slow to propagate on network shares?


I've noticed that I fairly often run into a situation where, if I rename a file on a network share, the name change is very slow to show up on other clients on the network. Is this a normal situation with DrivePool? It's definitely not normal with any other Windows share that I've ever had. So, for instance:

I rename a file on client 1.

Client 2, if it has the same folder open at that time, doesn't reflect this change. In fact, I'm not sure how long it takes, if ever, for the change to appear. Refreshing the folder doesn't update the file name. It seems I have to leave the folder (go up a level, for instance) and then re-enter it for the name change to be reflected.

 

I currently have the Windows indexer on the server disabled, although I've never seen the indexer be responsible for this kind of behavior before. At a guess, Windows isn't being informed of the change, so cached values are sent to other clients until an operation forces an explicit re-read.

 

Thank you

 

Feature Request - Assign drive priority for duplicated data


I have 3 cloud drives that are all pooled together, with duplication set to x3 for the entire pool, so each drive has the same data.

 

Drive 1: 100 TB ACD

Drive 2: 100 TB GDrive

Drive 3: 100 TB GDrive

 

What I would like to accomplish: when accessing data that is duplicated on all three drives, I want to assign a weight or priority so that reads go to the two Google drives, as they have much better access times and speeds, and avoid the ACD, which is there just as another level of redundancy.

 

Ideally this wouldn't be needed if DrivePool were able to read from all the drives at the same time for the file being accessed.

 

Please let me know if this is a possibility.

 

Thanks


Error - Files not consistent across pooled drives


I have three cloud drives that are all pooled together with DrivePool. I have it set to duplicate the data across all three cloud drives for 3x duplication for redundancy.

 

Drive 1: 100 TB Amazon Cloud Drive

Drive 2: 100 TB Gdrive

Drive 3: 100 TB Gdrive

 

I went to access a media file on the pooled drive and was unable to play it. The file appeared corrupt. At first I thought it may have been corrupted in transit during the initial transfer. Then I checked other files in the same show and other seasons of the same show: all TV episodes for that one show exhibit the same corruption and refuse to play, even after copying the file locally from the cloud.

 

I manually went into each drive in the pool, found the file and downloaded it. What I found was that the file was corrupt and would not play on both GDrive volumes, but the file was working properly off of the ACD volume.

 

I believe that when I added the show I only had 2 cloud drives, 1 ACD and 1 GDrive. When I added the 3rd drive, I think it replicated from the GDrive that already had the corrupted file, thus duplicating the error to the second GDrive.

 

My question is: how can the file be in an inconsistent state across the pool? Shouldn't the file be an exact copy on each volume? I tested removing one of the episodes from both GDrives, and DrivePool proceeded to mirror the file; it is now working as expected and plays without issue. I would like to be able to tell if there are more shows like this and correct the issue before it becomes unrecoverable. DrivePool should be able to see when files are in an inconsistent state, and perhaps even prompt me for which version I would like to keep and mirror to the other drives.
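
In the meantime, here's a rough Python sketch of how I plan to hunt for more of these (the PoolPart paths below are placeholders; yours will differ): hash the same relative file inside each drive's hidden PoolPart folder and flag any path where the copies disagree. Slow over cloud drives, since every copy has to be read, but it doesn't modify anything.

# find_mismatched_duplicates.py - compare duplicates across PoolPart folders
# by SHA-256. The three paths below are placeholders for my pooled volumes.
import hashlib
import os

POOL_PARTS = [
    r"D:\PoolPart.xxxx",   # ACD volume (placeholder)
    r"E:\PoolPart.yyyy",   # GDrive volume 1 (placeholder)
    r"F:\PoolPart.zzzz",   # GDrive volume 2 (placeholder)
]

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Walk the first part and compare every file against the other parts.
base = POOL_PARTS[0]
for root, dirs, files in os.walk(base):
    for name in files:
        rel = os.path.relpath(os.path.join(root, name), base)
        digests = {sha256(os.path.join(part, rel))
                   for part in POOL_PARTS
                   if os.path.exists(os.path.join(part, rel))}
        if len(digests) > 1:
            print("MISMATCH:", rel)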

 

I have left the rest of the show in an inconsistent state so that I can assist with troubleshooting how to track down and fix the issue.

 

OS: Windows Server 2012 R2 x64

CloudDrive Version: 10.0.0.842

DrivePool Version: 2.2.0.740

Recycling old PC for Home Server/NAS duty


Hi all.

I'm currently researching the feasibility of using my old tower PC for home server duty.  Knowing that there are a lot of ways to do that these days (Linux, FreeNAS, etc.), I would rather try it my preferred way, which is running a copy of Windows 10 Pro with DrivePool and Scanner.

First off, the hardware (prepare to cringe!)

  • AMD Phenom II x3 710
  • ASUS M4A780 uATX w/ Realtek GbE on board
  • 4GB DDR2-800, 2 open slots. (ASUS board is ECC compatible)
  • Corsair Force 120GB SSD for boot
  • 2x WD Black 1TB to start off with
  • OCZ ModXstream 600W power supply

Main goals for this rig are:

  • Plex
  • File Sharing within home
  • iTunes Server
  • Local backup for 1-2 PCs & 2 iPhones' worth of photos
  • BackBlaze to backup

The main reason I want to stick with Windows is that it's what I'm most familiar with.  I plan on using RDP on Win10 Pro to manage the machine headless from my laptop.  While not the most power-efficient setup, I do not plan to have it up 24/7 - more like "as needed, when needed" for a few hours at a time.  After burning my brain out with RAID-this and RAID-that, I don't really want anything to do with RAID on this build, which is why DrivePool is perfect.

I don't think it's too far-fetched of an idea, but I've never really done anything like this before.  What do you folks think?

Using SSD Optimizer for archive (shingled) drives


I have two regular hard drives (WD Red) and one Archive drive. I want files to be copied first to the regular drives, then moved off later to the archive drive. I'm guessing I can do this with the SSD Optimizer: under Drives, I set the two regular drives as "SSD" and the archive drive as "Archive". I'm not sure what the best way to set the rest of the settings would be.

 

Also, would this affect how duplication is performed? In my case it really won't matter if all the duplication is done on the two regular drives (the "SSDs" in the optimizer), but I certainly do not want to end up with no duplicated files.

 

I have set file placement to put all of my movie rips and TV shows onto all three drives, since I never duplicate these: I can always re-rip them if I need to, and they take up 99% of my space. Everything else is stored just on the two regular drives, since that data is more dynamic.

Need Help. LSI SAS 9207-8i Hangs at boot!


Any help would be appreciated. I have received all of my new parts and the server is running great, except that my LSI SAS 9207-8i will not let the computer boot.

 

I get to the first few lines of the LSI screen and all it shows is a flashing cursor.

 

I have tried all 3 PCI-E x8 slots. No good. If I remove the card, Windows boots normally.

 

Any help would be appreciated.

 

Phil

Feature request: Byte to Byte comparison of duplicated files


Hello.

I used to run a file comparison of my data with FreeFileSync (I make backups by just mirroring the needed data to a NAS). I was forced to do so because I sometimes got corrupted photos (RAW). It is rare, and the drives are OK (of course they have some degradation, but they will live at least another 5 years), but since there are a lot of files and their number constantly grows, the chance of corruption rises as well. The problem was that I would learn about a corrupted file only when I tried to export the photo from Lightroom. Then I'd look for the photo in the backup... oops, it is corrupted too, because the bad copy replaced the good one during the last sync. So now I run a bitwise comparison before syncing.

 

So the problem is that duplicated data is spread among several disks, and I can't compare the copies with FreeFileSync or another tool. It would be very good to implement some kind of bitwise comparison (on a folder level) to be able to find such corrupted duplicates.
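
Until something like this is built in, this is the kind of check I mean, as a rough Python sketch (the two paths below are placeholders): a true byte-for-byte comparison of the same folder as it exists inside two disks' hidden PoolPart copies.

# bitwise_compare.py - byte-for-byte compare of one folder across two
# PoolPart copies. COPY_A and COPY_B are placeholder paths.
import filecmp
import os

COPY_A = r"D:\PoolPart.xxxx\Photos"   # placeholder
COPY_B = r"E:\PoolPart.yyyy\Photos"   # placeholder

for root, dirs, files in os.walk(COPY_A):
    rel = os.path.relpath(root, COPY_A)
    for name in files:
        a = os.path.join(root, name)
        b = os.path.join(COPY_B, rel, name)
        if not os.path.exists(b):
            print("MISSING IN B:", os.path.join(rel, name))
        elif not filecmp.cmp(a, b, shallow=False):  # shallow=False -> bitwise
            print("DIFFERS:", os.path.join(rel, name))

It only reads, so it should be safe to run against a live pool, just slow.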

 

Regards.

 

P.S. I think I posted this in the wrong place; it should be in the DrivePool discussion section.
