AWS Glacier: how to initiate an archive retrieval and download the archive
The file contains the type of the job (archive-retrieval), the archive ID, and a description.
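A minimal sketch of that job-parameters file, created with a heredoc. The archive ID and description here are placeholders; the real ID is the long string found in the vault inventory file:

```shell
# Write the job parameters for an archive retrieval.
# "ArchiveId" below is a hypothetical placeholder -- copy the real one
# from your vault inventory.
cat > job-archive-retrieval.json <<'EOF'
{
  "Type": "archive-retrieval",
  "ArchiveId": "EXAMPLE-ARCHIVE-ID",
  "Description": "my backup archive"
}
EOF
```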
It should look like the example below. There is one more thing to check: the Data Retrieval Settings of your account. The retrieval request is initiated with the initiate-job command. Then you have to wait. If you configured an SNS topic for the vault, you will get a notification when the job has finished. At any time, you can query the status of the job using the describe-job command. When the archive is ready for download, the describe-job output will look like the one below. Finally, you are ready to download the archive.
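The initiate-and-poll steps above might look like the following sketch, assuming a vault named "myvault" ("-" tells the CLI to use the calling account; JOBID stands for the JobId returned by initiate-job):

```shell
# Start the archive retrieval job using the parameters file.
aws glacier initiate-job --account-id - --vault-name myvault \
    --job-parameters file://job-archive-retrieval.json

# Check on the job; when "Completed" is true and "StatusCode" is
# "Succeeded", the archive is ready for download.
aws glacier describe-job --account-id - --vault-name myvault \
    --job-id JOBID
```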
If the archive is big (more than 4GB in my case), it is wiser to get it in chunks. You can see the size of the archive either in the inventory file or in the output of the describe-job command. I split the download into 1GB chunks. To download the archive, you use the same get-job-output command previously used for downloading the inventory file.
You need the vault name, the job ID, the range of bytes to retrieve, and the file name where the content will be saved. The chunk size I used was too big, as the download command failed a couple of times with: [Errno ] An existing connection was forcibly closed by the remote host. The Glacier documentation says that once a retrieval request finishes, you have at least 24 hours to download the result. This might mean paying double the bandwidth charges.
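Computing the byte ranges for the chunked download can be scripted. A sketch with a hypothetical 3GiB archive size (in practice, SIZE comes from describe-job's output; the commented-out line shows the get-job-output call for each range, with "myvault", JOBID, and the archive.partN file names as placeholders):

```shell
SIZE=3221225472                 # hypothetical archive size: 3 GiB
CHUNK=$((1024 * 1024 * 1024))   # 1 GiB per request
i=0
start=0
while [ "$start" -lt "$SIZE" ]; do
    end=$((start + CHUNK - 1))
    # Clamp the last chunk to the end of the archive.
    [ "$end" -ge "$SIZE" ] && end=$((SIZE - 1))
    echo "chunk $i: bytes=${start}-${end}"
    # aws glacier get-job-output --account-id - --vault-name myvault \
    #     --job-id JOBID --range "bytes=${start}-${end}" "archive.part$i"
    start=$((end + 1))
    i=$((i + 1))
done
```

For the 3GiB example this prints three ranges: bytes=0-1073741823, bytes=1073741824-2147483647, and bytes=2147483648-3221225471.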
You could also use the new file-based S3 Storage Gateway, but you'd have to set it up. You could also use EFS, but it's expensive. Finally, it might be possible to mount S3 as a filesystem using something like s3fs, but I have no experience with that. If your download from Glacier fails for any reason, you have to start it again.
For a single large archive, range retrievals can help, but if it's one large file you'll have to stitch it back together. I assume that downloading from Glacier to EC2 will be much faster and more reliable than downloading directly to your PC. It's difficult to make a single recommendation without more information, particularly about connection speed, reliability, and whether the Glacier download is one file or many.
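Stitching the range chunks back together is just concatenation in order. A demo with tiny stand-in files (with real downloads, the parts would be the ~1GB archive.partN chunks from get-job-output):

```shell
# Stand-in chunk files; real ones come from get-job-output --range.
printf 'first'  > archive.part0
printf 'second' > archive.part1
# Concatenate in numeric order. Note that a bare "archive.part*" glob
# sorts part10 before part2, so either list the parts explicitly or
# zero-pad the chunk numbers when downloading.
cat archive.part0 archive.part1 > archive.restored
```

It's worth comparing the reassembled file's size (and, if you recorded one, its checksum) against the values reported by describe-job before deleting the parts.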
To get the files quickly, you're probably best off just downloading them from Glacier, ideally with range retrievals. To be safe, download to EC2, upload to S3, then download from S3. S3 supports parallel downloads, so it should use all of your available bandwidth. Retrieval pricing has been simplified from the previous model.
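The EC2 relay suggested above might look like this sketch, assuming a bucket named "my-restore-bucket" (a placeholder):

```shell
# On the EC2 instance, after reassembling the archive from its chunks:
aws s3 cp archive.restored s3://my-restore-bucket/archive.restored

# Then from your PC; the AWS CLI parallelizes S3 transfers
# automatically, so this should saturate your connection:
aws s3 cp s3://my-restore-bucket/archive.restored .
```

Remember to delete the object (and the bucket, if temporary) afterwards to stop paying S3 storage charges.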