abudabit wrote: Would it take too long for the system to search for a file if there are 10000's of files in one directory? Would it be faster if I had a bunch of directories referenced by a random directory name in the database, such as "images/fdasfd123ds/filename.***" instead of "images/filename.***"? Or to take it further, two levels of random directory names such as "images/fdafs/fdafsd/filename.***"? I don't know enough about operating systems to know if any of this matters.
This is actually a pretty good question, but the answer depends heavily on which filesystem you're using. Some filesystems are optimized for very large files, some are optimized for lots of small files, and some are somewhere in the middle. Directory lookups in particular vary a lot: older filesystems scan directory entries linearly, so a directory with tens of thousands of files gets slow, while newer ones index directories (with B-trees or hashing) and barely notice.
I recommend that you first figure out what filesystem you're using. If you're on Windows NT/2000/XP, it's probably NTFS. If you're on Win98 or Me, it's probably FAT16 or FAT32, and you seriously shouldn't be using it anyway. Upgrade. If you're on a Mac, it's probably HFS. And if you're on *nix, well, then it could be anything: ext2fs, JFS, ReiserFS, etc. Once you figure out what you're working with, Google it. You can probably find some good information in the Filesystems HOWTO.
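That said, the subdirectory trick you describe does help on filesystems with slow directory lookups, and a common variant uses a hash of the filename instead of random names, so the path is deterministic and you can reconstruct it without a database lookup. A minimal sketch in Python (the function name and the two-hex-chars-per-level scheme are my own choices, not anything standard):

```python
import hashlib
import os

def hashed_path(base_dir: str, filename: str, levels: int = 2) -> str:
    """Build a nested path like base_dir/ab/cd/filename.

    Uses the leading bytes of an MD5 hash of the filename to pick
    subdirectory names, giving 256 possible subdirectories per level,
    so files spread evenly instead of piling up in one directory.
    """
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    # Each level consumes two hex characters of the digest.
    parts = [digest[i * 2 : i * 2 + 2] for i in range(levels)]
    return os.path.join(base_dir, *parts, filename)

# Same filename always maps to the same spot, so no DB column is needed
# just to remember which random directory the file landed in.
path = hashed_path("images", "photo123.jpg")
```

With two levels you get 65,536 buckets; 10,000's of files works out to only a handful of files per directory, which keeps even a linear-scan filesystem happy.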