Splitting my pool in two
Can I remove more than one drive at once? It appears so, as the option to remove another drive is not greyed out.
I'm scared to try it in case it breaks something, though.
When removing a drive, can we have the option to specify which drive(s) to put the data on and/or which drive(s) not to put the data on? Obviously DP would give us a list of options based on space, duplicated-data restrictions, etc.
The reason is that I am splitting my pool, and the first drive I am removing is mainly moving its data onto another drive that I also want to remove - so the same files get double or more handling!
Last evening, I decided to swap an old 2TB disk for a new 3TB disk in my 4-drive pool, which is duplicated 2x. The drives are as follows:
1x 4TB
1x 3TB
2x 2TB
Disks 2, 3, 4, 5 are in the pool with 6 being the Pool itself. Disk 4 is at 100% use.
The 2TB drive got to around 75% completed removal before Windows 10 decided to do one of its famous "I'll restart when I want" moves. After coming home from work a few hours ago, I'm trying to see what can be done to remedy this situation. For the past few hours it's been 'Measuring', and I'm not sure what to do. My data seems intact, but it is very slow to access via Windows Explorer, although the drives aren't showing up in the DrivePool app.
I'm currently running Version 2.2.0.740 BETA x64 on Windows 10 Pro if that helps. Screenshots are attached.
DrivePool also seems much slower to start up than ever before: "Service starting in 1s... 2s... 1 minute..." etc.
Am I totally borked, or is there any way to salvage this situation?
I am currently using DrivePool on two sets of disks, one of which is a pair of 1TB drives set up to duplicate each other and show up as a single pool. This is mostly used for more important documents and photos, as well as the most-accessed folders on the system, such as the downloads folder.
One "issue" I have been having with this setup is that the "Free Space" shown is the total of the two drives, which in most cases is what you'd want. The problem in my use case is that it is really half as much. This is not so much of an issue if I always remember (not the case) that it should be halved when I go to allocate space for something, but it also becomes a bother because I have some automated tasks writing to that disk, and they check the total free space before executing. Is there any way to work around this?
So this is the first time I've had a drive failure while using DrivePool. I'd like to first say I am quite happy with how the software handled the missing drive, locking the rest of the drives from writing when one went missing helped prevent any errors with SnapRAID.
I am now running into an issue, though. Because SnapRAID recreated all of the files on the disk, including the PoolPart folder, everything sees the drive as the same one it replaced, except that its UUID is different, which seems to have caused a conflict in DrivePool. Specifically, the drive "reappeared" in the pool, but DrivePool also sees it as a drive it can add.
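For what it's worth, here is a minimal diagnostic sketch for seeing the identifier mismatch from the Windows side. It prints the NTFS volume serial number for each drive letter (the letters are placeholders); note this is the filesystem serial, which may or may not be the exact identifier DrivePool keys on internally:

```python
import ctypes

def volume_serial(root: str) -> str:
    """Return the NTFS volume serial for a root path like 'D:\\\\'."""
    serial = ctypes.c_uint32(0)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(root),   # volume root path
        None, 0,                  # no volume-name buffer needed
        ctypes.byref(serial),     # receives the serial number
        None, None,               # component length / FS flags unused
        None, 0)                  # no filesystem-name buffer needed
    if not ok:
        raise ctypes.WinError()
    return f"{serial.value:08X}"

for drive in ("D:\\", "E:\\"):    # placeholder drive letters
    print(drive, volume_serial(drive))
```

The restored disk will show a serial different from the one the old pool member had, which matches the "same folder, different volume" conflict described above.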
I know this has been asked before - I've even read a few threads on it, but I still can't seem to find a 100% "for sure" answer...
What I have in my pool is this:
Drive A - local drive
Drive B - local drive
Drive C - local drive
Drive D - clouddrive
What I want:
1) All my files to be saved to drives A, B and C
2) Folders that I choose to duplicate ONLY to be duplicated to drive D
What I don't want:
1) Duplicates on any of my local drives.
2) Non-duplicated files on drive D
The thread here: http://community.covecube.com/index.php?/topic/1231-how-do-i-keep-duplicated-data-on-the-pool-but-other-copies-anywhere-else/
suggests that I not change the Drive Usage Limiter balancer, and instead just create rules for only the folders I want duplicated. In those rules I would choose ONLY drive D.
In doing that, am I not limiting that folder to being saved only to drive D? It's not making much sense to me. How does that put just the duplicates on drive D while allowing the "original" file to go to drives A, B, or C, when they're all unchecked?
Hopefully someone has a definitive answer for this as it's not very clear.
Thanks!
As I understand it, when the SSD cache becomes full, writes should fall back to the Archive drives.
But
If you are copying large volumes of data (internally within the server, in this case), it will fail 50% of the time.
Basically, what appears to be happening is this:
If the SSDs are beyond their fill-limit percentage (in my case, 95% full), the next file falls back to the archive drives, as expected.
However, if the SSDs are not at or near the fill limit and the next file exceeds the remaining space on the SSDs (large video files), then the copy fails with a STREAM(????) error - not good for unattended large copies.
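To make the suspected failure mode concrete, here is a minimal sketch of the placement decision I think is happening - this is my guess, not DrivePool's actual code, and the names and threshold are made up:

```python
def pick_target_suspected(ssd_free, file_size, ssd_capacity, fill_limit=0.95):
    """Suspected (buggy) behavior: only the fill threshold is checked."""
    if ssd_free < (1 - fill_limit) * ssd_capacity:
        return "archive"   # over the limit: falls back, as expected
    return "ssd"           # under the limit: chosen even if the file
                           # won't fit -> the copy fails mid-stream

def pick_target_expected(ssd_free, file_size, ssd_capacity, fill_limit=0.95):
    """Expected behavior: also check that the file actually fits."""
    if ssd_free < (1 - fill_limit) * ssd_capacity or file_size > ssd_free:
        return "archive"
    return "ssd"
```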
If you get the pool to balance and create sufficient space, the copy will proceed. However, this "paused" copy has an unwanted side effect: it generates a large amount of "Unusable for duplication" data - in my case over 700GB worth. Checking the disk sizes, comparing file size totals (including hidden files), and running a full chkdsk /f on all drives did not show any errors or orphaned files. It appears to be a bug in DP's calculation, as copying more data to the pool reduces "Unusable for duplication" by the amount copied. In my case there are no other files on these disks - only the PoolPart dir, Recycle Bin and SVI - and both SVI and the Recycle Bin are turned off and empty, as they were the first things I checked.
I have a suspicion that the amount of "Unusable for duplication" equals the size of the copy that failed, and that multiple failed copies are cumulative - difficult to check, as inducing the error depends on which files are copied and their sizes.
I rebooted to see if it would solve the issue - no joy.
I remeasured the pool a couple of times - no joy.
I just copied 203GB of data to the pool and the "Unusable for duplication" figure dropped from 502GB to 299GB - as each file was copied, the total dropped by that amount during the copy.
I'm doing a second copy of 296GB to see if I end up with ~3GB left in "Unusable for duplication" - the total is dropping per file copied, as seen in the first copy. [edit] - I ended up with 3.34GB unusable left after the copy. It is now back to normal, as I have copied more data across to the pool and no longer have any unusable space.
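For anyone wanting to reproduce the check I did by hand, here is a minimal sketch (the drive letters are placeholders) that sums the file sizes under each disk's hidden PoolPart folder and compares them to what Windows reports as used - a large unexplained gap corresponds to the phantom "Unusable for duplication" figure:

```python
import glob
import os
import shutil

POOL_DISKS = ["D:\\", "E:\\"]   # placeholder mount points for the pooled disks

def poolpart_bytes(disk_root):
    """Sum the sizes of all files under the hidden PoolPart.* folder."""
    total = 0
    for part in glob.glob(os.path.join(disk_root, "PoolPart.*")):
        for dirpath, _dirs, filenames in os.walk(part):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass   # skip files we can't stat
    return total

for disk in POOL_DISKS:
    usage = shutil.disk_usage(disk)
    pool = poolpart_bytes(disk)
    print(f"{disk} used={usage.used / 2**30:.1f} GiB "
          f"poolpart={pool / 2**30:.1f} GiB "
          f"difference={(usage.used - pool) / 2**30:.1f} GiB")
```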
DP version 740 Beta
Win 2012r2 SE
Pool in question is not duplicated
The pool is the second pool on the server - if that makes any difference.
If you need any logs etc., let me know.
I am recovering a drive I pulled out of a hot-swap box (the drive lost the DrivePool partition because I didn't unmount the drive from Windows first).
I lost my C: SSD drive some time ago. I seem to have lost a good working backup of it as well...
So I installed a fresh Windows 7 on a new SSD. I now have the pool back up and running, with one difference:
I don't know how to restore my Folder Duplication settings precisely as they were.
I have a few questions regarding how DrivePool will act for me, so please follow along; I'll be brief:
New install of Beta .651 on Windows 10. I only rebooted once, not twice. I didn't have a new-pool button, just an option to add drives, so I clicked the + Add link for the D: drive, and it hung at 95%. When the pool was never created, I cancelled the creation and tried with the E: drive. This also sat at 95%, then the pool creation appeared to cancel on its own. I now have no pool, and I cannot add the D: or E: drives - they error out saying they are already in the pool. I only found the article saying to reboot twice on Windows 10 after I had tried creating the pool.
How do I remove the pool that exists but I cannot see and start over?
[Edit] - I have rebooted 2x and no change in behavior.
This isn't really a DrivePool question directly, but my computer tends to have drive overheats, which isn't good. Obviously the real fix is hardware (fans, etc.), but in the interim: would it be better to have DP fill my biggest drive, then the next, then the next, and so on - instead of spreading data across all the drives - so that the smaller/older drives at the end of the list stay unused, idle, and not generating heat in the case? Or is it best to spread data evenly among all the drives, even though the case seems to have trouble keeping up?
If that makes sense. Which it may not.
Basically, acknowledging that this is a hardware issue and not a DP issue, how can I optimize DP to minimize drive overheats?
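Whichever placement strategy you try, it helps to measure the result. A minimal sketch, assuming smartmontools 7+ is installed and the device paths match your system (both assumptions), that polls each drive's temperature so you can compare ordered-fill against even-spread under the same workload:

```python
import json
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]   # placeholder device paths; on Windows,
                                    # smartctl also accepts e.g. /dev/pd0

def drive_temperature(device):
    """Read the current temperature via smartctl's JSON output (-j)."""
    out = subprocess.run(
        ["smartctl", "-A", "-j", device],
        capture_output=True, text=True, check=True).stdout
    data = json.loads(out)
    return data.get("temperature", {}).get("current")

for dev in DRIVES:
    print(dev, drive_temperature(dev), "°C")
```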
I'm not sure where else to ask, you all seem to be the type of people that live out on the edge like I do.
I have a Windows 7 VM running in VMWare 6.5. The system currently has two 5TB WD Red drives that I use in a DrivePool for storage. Opening "Computer" shows the two drives as 4.54 TB, all is well.
So, I want to add an 8TB Archive drive. I set up the RDM in VMWare the same exact way I set up the 5TB drives. VMWare sees the drive as a capacity of 7.28 TB. In the Windows 7's VM settings, it also sees the drive there as 7.28 TB, and I have all the settings the same as how I configured the 5TB drives.
BUT
I go into W7 Disk Management and it shows the drive as only 1307.91 GB. I've tried everything I can think of to get this drive to show as 8TB, but nothing works.
Oh, and it's that way even when Unallocated. I did set it up as GPT (as are the 5TB drives) but still it's just 1.3TB.
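One pattern worth checking - purely a hypothesis on my part: 1307.91 GB is almost exactly an 8TB drive's capacity modulo 2 TiB, which is the wraparound you would see if some layer in the RDM path were truncating sector addresses to 32 bits of 512-byte sectors. The arithmetic:

```python
# Hypothesis: a layer in the RDM path truncates sector addresses to 32 bits.
SECTOR = 512
WRAP = 2**32 * SECTOR            # 32-bit LBA limit = 2 TiB = 2199023255552 bytes

capacity = 8_001_563_222_016     # typical byte count for an "8TB" drive (assumption)
visible = capacity % WRAP        # what a 32-bit LBA layer would expose

print(f"wraparound size: {WRAP / 2**40:.2f} TiB")
print(f"visible capacity: {visible / 2**30:.2f} GiB")  # ~1308 GiB, close to 1307.91
```

If the numbers line up for your exact drive model, the truncation is likely happening in how the mapping was created rather than anything inside Windows.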
Hi
I've set up a 2-disk pool using a pair of 4TB hard drives. Both drives are dedicated to the pool i.e. have no non-pooled data, yet the statistic displayed in DrivePool reports 1.5TB of Other.
I've noticed that I fairly often run into a situation where if I rename a file on a network share, it's very slow to reflect the name change on other clients on the network. Is this a normal situation with DrivePool? It's definitely not normal with any other Windows share that I've ever had. So for instance:
rename a file on client 1
client 2, if it has the same folder open at that time, doesn't reflect this change. In fact, I'm not sure how long it takes, if ever, for the change to show. Refreshing the folder doesn't update the file name. It seems I have to leave the folder (go up a level, for instance) and then re-enter it to get the file name change to show.
I currently have the windows indexer on the server disabled, although I've never seen the indexer be responsible for this kind of behavior before. At a guess, Windows isn't being informed of the change, so cached values are sent to other clients until an operation that forces an explicit re-read happens.
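One way to test that guess, as a minimal sketch using the third-party watchdog package (the watch path is a placeholder): run it on the server against a folder on the pool drive and rename a file from a client. If no event fires, change notification isn't propagating from the pool filesystem, and SMB clients that rely on it will keep showing stale names:

```python
# pip install watchdog
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WATCH_PATH = "P:\\Shares\\Test"   # placeholder: a folder on the pool drive

class PrintRenames(FileSystemEventHandler):
    def on_moved(self, event):
        # Renames arrive as "moved" events; if these never fire for pool
        # renames, change notification isn't coming from the pool FS.
        print(f"rename: {event.src_path} -> {event.dest_path}")

observer = Observer()
observer.schedule(PrintRenames(), WATCH_PATH, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```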
Thank you
I have 3 cloud drives that are all pooled together and set to x3 duplication for the entire pool, so each drive has the same data.
Drive 1: 100 TB ACD
Drive 2: 100 TB GDrive
Drive 3: 100 TB GDrive
What I would like to accomplish: when accessing data that is duplicated on all three drives, I want to assign a weight or priority so that reads favor the two Google Drives, as they have much better access times and speeds, and avoid the ACD, which is there just as another level of redundancy.
Ideally this would not be needed if DrivePool were able to read from all the drives at the same time for the file being accessed.
Please let me know if this is a possibility.
Thanks
I have three cloud drives that are all pooled together with DrivePool. I have it set to duplicate the data across all three cloud drives for 3x duplication for redundancy.
Drive 1: 100 TB Amazon Cloud Drive
Drive 2: 100 TB Gdrive
Drive 3: 100 TB Gdrive
I went to access a media file on the pooled drive and was unable to play it. The file appeared corrupt. At first I thought it may have gotten corrupted in transit during the initial transfer. Then I checked other files in the same show and other seasons of the same show: all TV episodes for that one show exhibit the same corruption and refuse to play, even after copying the file locally from the cloud.
I manually went into each drive in the pool, found the file, and downloaded it. What I found was that the file was corrupt and would not play from both GDrive volumes, but it worked properly from the ACD volume.
I believe that when I added the show I only had 2 cloud drives, 1 ACD and 1 GDrive. When I added the 3rd drive, I think it replicated from the GDrive that already had the corrupted file, thus duplicating the error to the second GDrive.
My question is: how can the file be in an inconsistent state across the pool? Shouldn't the file be an exact copy on each volume? I tested removing one of the episodes from both GDrives, and DrivePool proceeded to mirror the file; it now works as expected and plays without issue. I would like to be able to tell whether there are more shows like this and correct the issue before it becomes unrecoverable. DrivePool should be able to detect when files are in an inconsistent state, and perhaps even prompt me for which version I would like to keep and mirror to the other drives.
I have left the rest of the show in an inconsistent state so that I can assist with troubleshooting how to track down and fix the issue.
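Until DrivePool can flag this itself, here is a minimal sketch for finding other mismatched duplicates (the drive letters are placeholders): it hashes every file under each drive's hidden PoolPart folder and reports any relative path whose copies disagree:

```python
import glob
import hashlib
import os
from collections import defaultdict

POOL_DISKS = ["X:\\", "Y:\\", "Z:\\"]   # placeholder cloud-drive mount points

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

hashes = defaultdict(dict)               # relative path -> {disk: digest}
for disk in POOL_DISKS:
    for part in glob.glob(os.path.join(disk, "PoolPart.*")):
        for dirpath, _dirs, files in os.walk(part):
            for name in files:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, part)
                hashes[rel][disk] = sha256_of(full)

for rel, per_disk in hashes.items():
    if len(set(per_disk.values())) > 1:  # copies disagree
        print("MISMATCH:", rel, per_disk)
```

Note that this downloads every file from the cloud drives to hash it, so it's only practical on a subset of folders at a time.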
OS: Windows Server 2012 R2 x64
CloudDrive Version: 10.0.0.842
DrivePool Version: 2.2.0.740
Hi all.
I'm currently researching the feasibility of using my old tower PC for home server duty. Knowing that there are a lot of ways to do that currently (Linux, FreeNAS, etc.), I would rather try it my preferred way - running a copy of Windows 10 Pro with DrivePool and Scanner.
First off, the hardware (prepare to cringe!)
Main goals for this rig are:
The main reason I want to stick with Windows is that it's what I'm most familiar with. I plan on using RDP on Win10 Pro to manage the machine headless from my laptop. While not the most power-efficient setup, I do not plan to have it up 24/7 - more like "as needed, when needed" for a few hours at a time. After burning my brain out with RAID-this and RAID-that, I don't really want anything to do with RAID on this machine, which is why DrivePool is perfect.
I don't think it's too far-fetched of an idea, but I've never really done anything like this before. What do you folks think?
I have two regular hard drives (WD Red) and one Archive drive. I want files to be copied to the regular drives first, then moved off to the archive drive later. I'm guessing I can do this with the SSD Optimizer: under Drives, I set the two regular drives as SSD and the archive drive as Archive. I'm not sure of the best way to set the rest of the settings.
Also, would this affect how duplication is performed? In my case it really won't matter if all the duplication happens on the two regular drives (set as SSD in the optimizer), but I certainly do not want to end up with no duplicated files.
I have set file placement to drop all of my movie rips and TV shows onto all three drives, since I never duplicate these - I can always re-rip them if needed, and they take up 99% of my space. Everything else is stored just on the two regular drives, since those files are more dynamic.
I have received all of my new parts and the server is running great - except that my LSI SAS 9207-8i will not let the computer boot.
I get to the first few lines of the LSI screen and then it just sits at a flashing cursor.
I have tried all 3 PCIe x8 slots. No good. If I remove the card, Windows boots normally.
Any help would be appreciated.
Phil
Hello.
I used to run a file comparison of my data with FreeFileSync (I make backups by simply mirroring the needed data to a NAS). I was forced to do so because I sometimes got corrupted photos (RAW). It is rare, and the drives are OK (of course they have some degradation, but they will live at least another 5 years), but since there are a lot of files and their number constantly grows, the chance of corruption rises too. The problem was that I would learn about a corrupted file only when I tried to export the photo from Lightroom. I'd look for the photo in the backup... oops, it's corrupted too, because the bad copy replaced the good one during the last sync. So now I run a bitwise comparison before syncing.
The problem is that duplicated data is spread among several disks, and I can't compare the copies with FreeFileSync or another tool. It would be very good to implement some kind of bitwise comparison (on a folder level) to be able to find such corrupted duplicates.
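As a stopgap, something along these lines can be scripted by hand. A minimal, non-recursive sketch (the drive letters and folder are placeholders) that byte-compares the duplicate copies of one folder across the hidden PoolPart directories:

```python
import filecmp
import glob
import os

POOL_DISKS = ["D:\\", "E:\\", "F:\\"]   # placeholder pooled-disk letters
FOLDER = "Photos\\2017"                 # placeholder folder to verify

# Collect every copy of the folder from each disk's hidden PoolPart dir.
copies = []
for disk in POOL_DISKS:
    for part in glob.glob(os.path.join(disk, "PoolPart.*")):
        candidate = os.path.join(part, FOLDER)
        if os.path.isdir(candidate):
            copies.append(candidate)

# Compare each duplicate pair byte-for-byte (shallow=False reads contents).
for i in range(len(copies)):
    for j in range(i + 1, len(copies)):
        match, mismatch, errors = filecmp.cmpfiles(
            copies[i], copies[j],
            common=os.listdir(copies[i]), shallow=False)
        for name in mismatch:
            print(f"DIFFERS: {name} between {copies[i]} and {copies[j]}")
```

Extending it with os.walk would cover subfolders; shallow=False forces filecmp to read file contents rather than just comparing sizes and timestamps, which is what catches silent corruption.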
Regards.
P.S. I think I posted this in the wrong place - it should be in the DrivePool discussion.