
I have an ML dataset that consists of 2.7M images, ~1TB. For ML experiments, I uploaded these as blobs in an Azure storage account under the "Hot" access tier (which I thought would be appropriate for ML training). When I'm not actively training, I would like this dataset to sit in the "Archive" tier to avoid accumulating "leaky faucet" storage charges.

The documentation on blob storage access tiers describes how the tier can be changed on a per-blob basis, but AFAICT that means per-file (not even per-directory, which would be fine).
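For reference, here's roughly what that per-blob route looks like with the Python SDK, as far as I can tell (azure-storage-blob v12; the connection string and container name are placeholders):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and container name.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("images")

# One tier-change call per blob -- ~2.7M calls for my dataset.
for blob in container.list_blobs():
    container.get_blob_client(blob.name).set_standard_blob_tier("Archive")
```

There also appears to be a batched variant (`ContainerClient.set_standard_blob_tier_blobs`), but either way this feels like a lot of machinery for what is conceptually a single setting.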

Is there a way to toggle the access tier of an entire storage account?

Absent that, what might be a good strategy? Perhaps creating a default-archive-access data_archive storage account and doing a cloud-based copy to a temporary hot-access data_for_training_purposes storage account that I create and delete as needed? (Sketch below.)
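To make that second option concrete, this is the kind of server-side copy I have in mind (again azure-storage-blob v12; the account names, container name, and SAS token are all placeholders, and I may be wrong about what Archive-tier blobs allow):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder credentials for the two hypothetical accounts.
archive_acct = BlobServiceClient.from_connection_string("<data_archive connection string>")
training_acct = BlobServiceClient.from_connection_string("<data_for_training_purposes connection string>")
src = archive_acct.get_container_client("images")
dst = training_acct.get_container_client("images")

# SAS token granting read access on the source container (my assumption:
# needed for a cross-account server-side copy).
src_sas = "<sas-token>"

for blob in src.list_blobs():
    # start_copy_from_url asks Azure to do the copy server-side, so the ~1TB
    # never transits my machine. I assume Archive-tier blobs would first need
    # rehydration (or a copy option that rehydrates) before this works.
    dst.get_blob_client(blob.name).start_copy_from_url(f"{src.url}/{blob.name}?{src_sas}")
```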

