This is the mail archive of the cygwin mailing list for the Cygwin project.

Re: slow handling of large sets of files?


I too have been seeing a problem with very slow file access in large
directories.

Specifically,

On a Cygwin/Win2k box, I have a mirror of an FTP site.  The site has 2.5
million files spread across 100 directories (20,000 - 30,000 files per
directory).  I have previously kept this many files on an NT4 NTFS
filesystem without any significant performance problems.

On this mirror, operations like the following are __VERY__ slow:
ls ${some_dir}
ls ${some_dir}/${some_path}
cp ${some_file} ${some_path}
cp -R ${some_path_with_only_a_few_files} ${some_path}
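
To put a number on "slow", one thing I could try (same placeholder paths
as above) is timing a bare listing against one that stats every entry:

  time ls -f ${some_dir} > /dev/null   # no sort, no per-file stat
  time ls -l ${some_dir} > /dev/null   # stat()s (reads security info for) every file

If the second form is dramatically slower than the first, that should
point at the per-file stat/ACL work rather than at reading the directory
itself.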


If I watch the performance monitor during these operations, I see a queue
depth of 1-2 and 300-500 disk reads per second.  (That's real; it's a fast
array.)  The reads appear to be single-block reads, since the throughput
during these bursts is only 1.5 - 3 MB/sec.
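
Back of the envelope, that works out to only about 5-6 KB per read, i.e.
roughly one cluster per I/O:

  $ echo 'scale=2; 1.5*1024/300; 3*1024/500' | bc
  5.12
  6.14

which looks like lots of small, scattered metadata reads rather than bulk
data transfer.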

I am beginning to think the disk activity relates to NTFS permission
checking, which can be complex under Win2k.
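
If that guess is right, one test I can think of (assuming this Cygwin
release honours the ntsec/nontsec option in the CYGWIN environment
variable) would be to start a shell with NT security handling turned off
and repeat the listing:

  set CYGWIN=nontsec      (in the Windows environment, before starting bash)
  time ls -l ${some_dir} > /dev/null

With nontsec, Cygwin should derive the POSIX permissions from the DOS
attributes instead of reading each file's ACL, so a large speedup here
would more or less confirm the permission-checking theory.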

Beyond that, I don't know how to debug or tune this.

Any ideas?



