Feature request / optimization for NAS drives / folders in the same NAS

Post by Dup »

One very interesting optimization for Duplicate Cleaner (any of these alternatives would do):
- make it possible to prioritize the hash calculation of one folder over another (on the scan screen, a "pause" button per hash calculation would be ideal, for example);
- alternatively, let the user choose between "parallel" hashing (across different hardware locations, i.e. different hard disks) and "sequential" hashing;
- alternatively, let the user restrict hashing to sequential for network folders located at the same IP address, or on the same disk (a settings sketch follows after this list).
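
To make the idea concrete, here is a small Python sketch of the kind of setting I am thinking of. The names (HashingMode, group_network_folders_by, ...) are made up by me, they are not existing Duplicate Cleaner options:

[code]
# Hypothetical settings sketch -- these are NOT existing Duplicate Cleaner
# options, just an illustration of the three requests above.
from enum import Enum

class HashingMode(Enum):
    PARALLEL = "parallel"                 # hash all scan folders at once
    SEQUENTIAL = "sequential"             # hash one scan folder at a time
    SEQUENTIAL_PER_DEVICE = "per_device"  # serialize only folders that share a disk or NAS

# Example: serialize hashing of folders that resolve to the same server IP.
settings = {
    "hashing_mode": HashingMode.SEQUENTIAL_PER_DEVICE,
    "group_network_folders_by": "server_ip",   # or "physical_disk"
}
[/code]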

That's because when you try to scan and match duplicates across several network folders on the SAME local server, "parallel" processing is much slower than "sequential" processing. It is like copying 100 files from the server to another disk: a single copy process going through them one by one is much quicker than launching 100 separate copies at the same time, because forcing the filesystem to jump between parts of many files at once makes the system sluggish, unresponsive, much slower and prone to errors.

In my case, hashing ONE single NAS folder reaches around 100-130 MB/s, but hashing 5 different folders on the same NAS at once drops the overall speed to around 20 MB/s... A pity. If we could tell Duplicate Cleaner to simply go sequentially from folder 1 to folder 2, to folder 3, and so on, the time needed would drop, in my case, to about one sixth of the current "parallel" time.
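
Roughly, the scheduling I am asking for looks like the Python sketch below (my own illustration, not Duplicate Cleaner's actual code; helpers like location_key and hash_group_sequentially are invented names): group the scan folders by the server or disk they live on, hash the files inside one group strictly one after another, and only let different groups (different physical devices) run in parallel.

[code]
# Sketch of "sequential within a device, parallel across devices" hashing.
# Not Duplicate Cleaner's code; helper names are invented for illustration.
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor


def location_key(folder: str) -> str:
    """Identify the physical location of a folder: UNC paths are grouped by
    server name, local paths by drive letter. A real implementation could
    resolve the server IP or the physical disk instead."""
    if folder.startswith("\\\\"):
        return folder.split("\\")[2].lower()       # \\server\share\... -> "server"
    return os.path.splitdrive(folder)[0].lower()   # "D:\data" -> "d:"


def hash_file(path: str) -> str:
    """Hash one file in 1 MB chunks so memory use stays flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()


def hash_group_sequentially(folders: list[str]) -> dict[str, str]:
    """Hash every file of these folders one after another -- the drive never
    has to serve several competing read streams at the same time."""
    results = {}
    for folder in folders:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                path = os.path.join(root, name)
                results[path] = hash_file(path)
    return results


def hash_all(scan_folders: list[str]) -> dict[str, str]:
    """Group scan folders by device, then run one sequential worker per group."""
    groups: dict[str, list[str]] = {}
    for folder in scan_folders:
        groups.setdefault(location_key(folder), []).append(folder)

    results: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=max(1, len(groups))) as pool:
        for partial in pool.map(hash_group_sequentially, groups.values()):
            results.update(partial)
    return results
[/code]

With five folders on the same NAS, all of them would land in one group and be hashed back to back, which (going by the numbers above) should be roughly five to six times faster than hashing them in parallel.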