We have the following scenario:
Our client hosts a web application on his own web server. His customers connect to this web app and upload very large files (300 MB – 1.5 GB). Previously the files were stored on his server; now we use Azure Storage block blobs instead. The goal is not only to save storage on the client's side but, more importantly, to avoid consuming his bandwidth and his server's resources (e.g. memory).
The current implementation uses the .NET Storage Client Library. The problem we have is with very large files: uploads and downloads take a long time, we sometimes get timeouts, and it looks like the client's customers are not uploading/downloading their files directly to/from Azure. Instead, the traffic still goes through the client's bandwidth and his web server's memory (in code we build a fileStream and the upload is done by blockBlob.UploadFromStream(fileStream)). Based on the feedback we have received so far, very large files need to be split into smaller chunks; that way we can manage large uploads/downloads to/from Azure Storage. We need to make sure that, with our implementation, uploads and downloads happen directly between the user's browser and Azure Storage, without using our client's server resources.
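From what we have gathered so far, one direction would be to have the server issue a short-lived Shared Access Signature (SAS) for the target blob and let the browser talk to Azure directly via the Put Block / Put Block List REST operations. The sketch below assumes a `sasUrl` already issued by the backend (a hypothetical placeholder); the block-ID and block-list construction follow the documented REST API, but this is an illustration, not a drop-in implementation:

```javascript
// Sketch of a direct browser -> Azure upload using the Put Block /
// Put Block List REST operations. `sasUrl` is assumed to be a short-lived
// SAS URL issued by our own backend, e.g.
//   https://<account>.blob.core.windows.net/<container>/<blob>?<sas-token>
// (hypothetical placeholder - the real URL would come from the server).

const CHUNK_SIZE = 4 * 1024 * 1024; // 4 MB per block

// Azure requires every block ID within a blob to be Base64-encoded and of
// equal length, so we zero-pad the block index before encoding it.
function makeBlockId(index) {
  return btoa(String(index).padStart(6, "0"));
}

// Build the XML body for Put Block List from the uploaded block IDs.
function buildBlockListXml(blockIds) {
  const latest = blockIds.map((id) => `<Latest>${id}</Latest>`).join("");
  return `<?xml version="1.0" encoding="utf-8"?><BlockList>${latest}</BlockList>`;
}

async function uploadInBlocks(sasUrl, file) {
  const blockIds = [];
  for (let offset = 0, i = 0; offset < file.size; offset += CHUNK_SIZE, i++) {
    const chunk = file.slice(offset, offset + CHUNK_SIZE);
    const blockId = makeBlockId(i);
    blockIds.push(blockId);
    // Put Block: upload one chunk straight to Azure, not through our server.
    await fetch(`${sasUrl}&comp=block&blockid=${encodeURIComponent(blockId)}`, {
      method: "PUT",
      body: chunk,
    });
  }
  // Put Block List: commit the uploaded blocks as the final blob.
  await fetch(`${sasUrl}&comp=blocklist`, {
    method: "PUT",
    headers: { "Content-Type": "application/xml" },
    body: buildBlockListXml(blockIds),
  });
}
```

With this shape, the web server only generates the SAS token; the actual bytes flow between the browser and Azure Storage.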
What options do we have to achieve our goal in this scenario?
Which options do we have for splitting files into chunks in a web application?
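For the chunking part specifically, the browser side seems straightforward: `File` inherits `Blob.slice()`, which creates lightweight views onto the file without copying data until a chunk is actually read or sent. A minimal sketch (the function name is ours, for illustration):

```javascript
// Split a File (or any Blob) into fixed-size chunks in the browser.
// Blob.slice() returns views onto the original data, so no bytes are
// copied until each chunk is actually read or uploaded.
function sliceIntoChunks(blob, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    chunks.push(blob.slice(offset, offset + chunkSize));
  }
  return chunks;
}

// Example: a 10-byte blob split into 4-byte chunks -> sizes 4, 4, 2.
const demo = new Blob(["0123456789"]);
const parts = sliceIntoChunks(demo, 4);
console.log(parts.map((p) => p.size)); // [4, 4, 2]
```

Each chunk could then be sent as one Put Block request (or fed to whatever upload mechanism is chosen), so a 1.5 GB file never has to sit in memory in one piece.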