I have a centralized folder location on a network drive (traditional hard disk) that is shared by a few web services running on different application servers. The services will continually process incoming files via HTTP requests and will write to this location.
Every request will get its own subfolder with a unique name. Once all the files for a particular request are saved, the service that saved them will notify another internal service, which will read the files from that request folder and carry out further tasks.
For example, if D:/MyNetworkFolder/ is the parent directory, and ServiceA is processing Request1 while ServiceB is processing Request2, the two services will be saving the incoming files for their requests (total size up to 2 GB each) in D:/MyNetworkFolder/Request1 and D:/MyNetworkFolder/Request2 respectively. Once all the files for a request are saved, another service will read them from D:/MyNetworkFolder/RequestNumber and carry out its tasks.
So, during peak hours, there will always be one set of services writing new files to the network folder and another set reading saved files from it, and possibly another service deleting files that have been completely processed.
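To make the writer/reader handoff concrete, here is a minimal sketch of the pattern I have in mind (the folder names, the `.tmp` suffix convention, and the method names are hypothetical, just for illustration): each writer saves into a temporary folder and renames it to its final name only after every file is on disk, so the reader service never sees a half-written request.

```csharp
using System;
using System.IO;

class RequestProcessor
{
    // Hypothetical parent folder; in production this is the shared network path.
    const string Parent = @"D:\MyNetworkFolder";

    // Writer side: save all files for one request, then expose the folder.
    static void SaveRequest(string requestId, (string Name, byte[] Content)[] files)
    {
        // Write into a temp folder first so readers never see a partial request.
        string tempDir = Path.Combine(Parent, requestId + ".tmp");
        string finalDir = Path.Combine(Parent, requestId);
        Directory.CreateDirectory(tempDir);

        foreach (var file in files)
            File.WriteAllBytes(Path.Combine(tempDir, file.Name), file.Content);

        // Directory.Move within the same volume is a rename, so the reader
        // either sees the complete folder or no folder at all.
        Directory.Move(tempDir, finalDir);

        // At this point the writer would notify the internal reader service
        // (HTTP call, message queue, etc. -- omitted here).
    }

    // Reader side: process a completed request, then clean up.
    static void ProcessRequest(string requestId)
    {
        string dir = Path.Combine(Parent, requestId);
        foreach (string path in Directory.EnumerateFiles(dir))
        {
            // ... carry out further tasks on each file ...
        }

        // Delete the folder once the request is fully processed.
        Directory.Delete(dir, recursive: true);
    }
}
```

Because each request gets its own subfolder, the writers never contend for the same file; the only shared state is the parent directory's metadata, which NTFS handles under concurrent access.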
Is this type of parallel file processing possible? Would it hurt the application's I/O performance or the hard disk's health when multiple services read and write under the same parent location at the same time? The other options we have are to give each service its own physical network drive, or to consider using SSDs.
All servers run Windows Server 2008 or later, and the web services are written in C# on .NET.