Some suggestions
Posted: Mon Jul 03, 2017 4:17 pm
1. I find it very helpful that Duplicate Cleaner keeps track of groups in which all members have been marked for deletion and warns about it, as this could be a mistake. But it should then be fairly easy to also list the numbers of the groups in question, which would greatly facilitate checking for possible mistakes.
2. In the most recent version, rescanning the files requires re-entering the scan locations. It would be helpful if the scan could simply be restarted, i.e. if the scan button remained available after a scan had already been run: frequently a scan reveals that two folders contain a large number of identical files. It is then more efficient to eliminate those duplicates with a folder comparison program, rather than checking each duplicate file individually. Rescanning after that would simplify the remaining work.
3. Not a suggestion, but a useful feature I just discovered for myself (although perhaps it is already in the manual?). I get many duplicates from web searches. Downloading a web page generally produces a file *.html or *.htm, usually with an associated directory *_files, which contains graphic elements, JavaScript files etc.: *.gif, *.jpg, *.png, *.js and so on. As the same elements are typically used in many web pages, they would produce many duplicates. They could be excluded from the search by listing them as *.gif;*.jpg;*.js;... in the Excluded Search Filter. However, all of these elements can be excluded simply by entering *_files and specifying that the exclusion should also apply to folder names. Windows has the nice feature (Linux unfortunately does not) that moving or deleting a file *.html or *.htm also moves or deletes the associated *_files directory. So this is a very useful feature in Duplicate Cleaner for removing duplicate web pages. As far as I have seen, it works correctly.
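In case it helps anyone see why the single *_files pattern replaces the whole extension list, here is a minimal Python sketch of the equivalent folder-exclusion logic. This is only my own illustration of the idea, not Duplicate Cleaner's actual implementation; the pattern variable and the choice of MD5 for grouping are just assumptions for the example:

    import fnmatch
    import hashlib
    import os
    from collections import defaultdict

    EXCLUDE = "*_files"  # folder-name pattern, as in the Excluded Search Filter

    def find_duplicates(root):
        """Group files by content hash, skipping any *_files directories."""
        groups = defaultdict(list)
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune excluded folders in place so os.walk never descends into them
            dirnames[:] = [d for d in dirnames if not fnmatch.fnmatch(d, EXCLUDE)]
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digest = hashlib.md5(f.read()).hexdigest()
                groups[digest].append(path)
        return {h: paths for h, paths in groups.items() if len(paths) > 1}

Pruning dirnames in place is what makes the walk skip the excluded folders entirely, so the page elements (*.gif, *.jpg, *.js, ...) are never even read, which is presumably why the single folder-name exclusion is both simpler and faster than listing every extension.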