By default, Amazon S3 doesn’t set any Cache-Control headers on your objects. Even worse, the only way to set them via the S3 Management Console is file by file.
Not exactly scalable.
Until they give us a quick and easy way to do this, here’s a simple Python script using Boto (thanks to Blackpawn).
Note that the script iterates over every object in your buckets, so if you have a ton of objects, it may not be the solution for you.
from boto.s3.connection import S3Connection

connection = S3Connection('aws access key', 'aws secret key')

buckets = connection.get_all_buckets()
for bucket in buckets:
    for key in bucket.list():
        print(key.name)

        # Only touch images; skip everything else.
        if key.name.endswith('.jpg'):
            contentType = 'image/jpeg'
        elif key.name.endswith('.png'):
            contentType = 'image/png'
        else:
            continue

        # S3 object metadata is immutable, so the trick is to copy the
        # object onto itself with replacement headers.
        # max-age=864000 caches the file for 10 days.
        key.metadata.update({
            'Content-Type': contentType,
            'Cache-Control': 'max-age=864000'
        })
        key.copy(
            key.bucket.name,
            key.name,
            key.metadata,
            preserve_acl=True
        )
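
Once the script has run, you can spot-check that the headers actually took by fetching one of the copied objects back with Boto. Here’s a minimal sketch; the bucket name 'your-bucket' and key 'path/to/image.jpg' are placeholders for your own values:

from boto.s3.connection import S3Connection

connection = S3Connection('aws access key', 'aws secret key')
bucket = connection.get_bucket('your-bucket')   # placeholder bucket name
key = bucket.get_key('path/to/image.jpg')       # placeholder key; issues a HEAD request
print(key.cache_control)  # should print 'max-age=864000'
print(key.content_type)   # should print 'image/jpeg'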