The current implementation, with S3 configured as a web site and distributed by CloudFront, is working well. The only problem is that if the URL of the S3 web site were to "escape" into the wild, some users might inadvertently access the S3 version directly. Even worse, a search engine might index the S3 version. In any case, it's not a good idea to have two public versions of a site, even if one is somewhat hidden.
Several good ideas have been discussed on Stack Overflow about how to prevent direct access to the S3 web site. The official way would be to serve the content from a non-website S3 bucket, as I alluded to in GUISE-82, but the big drawback (besides being more complicated) is that redirects (routing rules) won't work. Perhaps one could replicate routing rules using Lambda@Edge, as someone proposed on Stack Overflow, but that is getting pretty complex. Besides, the main goal here is not absolute security of the original site, but rather prevention of inadvertent access to and indexing of the S3 site.
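To illustrate why the Lambda@Edge route gets complex, here is a hypothetical sketch of replicating a single S3 routing rule in a request handler. The /docs/ to /guide/ redirect is purely illustrative, and minimal local types stand in for the aws-lambda ones:

```typescript
// Minimal stand-ins for the CloudFront event types from "aws-lambda".
interface CfRequest { uri: string; }
interface CfResponse {
  status: string;
  statusDescription: string;
  headers: Record<string, { key: string; value: string }[]>;
}

// Request handler: return a response object to redirect immediately,
// or return the request itself to let CloudFront continue to the origin.
function handler(request: CfRequest): CfRequest | CfResponse {
  if (request.uri.startsWith("/docs/")) {
    return {
      status: "301",
      statusDescription: "Moved Permanently",
      headers: {
        location: [{
          key: "Location",
          value: request.uri.replace("/docs/", "/guide/"),
        }],
      },
    };
  }
  return request; // no rule matched; pass through to the S3 origin unchanged
}
```

A real Lambda@Edge handler receives the request as event.Records[0].cf.request and is usually async; every routing rule in the S3 website configuration would need an equivalent branch here, which is why this approach grows complicated quickly.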
The most "bang for the buck" solution, which was mentioned in a comment on Stack Overflow, seems to be to leave S3 configured as a web site but set up the bucket policy with an aws:UserAgent condition key to only allow access by CloudFront, which always sends a certain User-Agent header:
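A sketch of such a bucket policy, assuming CloudFront's documented default of sending `User-Agent: Amazon CloudFront` on origin requests; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontUserAgentOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-site-bucket/*",
      "Condition": {
        "StringEquals": { "aws:UserAgent": "Amazon CloudFront" }
      }
    }
  ]
}
```

With no other Allow statements in the policy, direct browser requests (whose User-Agent won't match) are refused, while routing rules keep working. Note that aws:UserAgent is trivially spoofable, which is fine here: the goal is preventing inadvertent access and indexing, not absolute security.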
With the clearer understanding that deploying to S3 configured as a web site differs from merely deploying to an S3 bucket, and to make it easier to set up CloudFront to serve non-web-site S3 buckets, this ticket will also refactor the S3 deploy target into separate S3 and S3WebSite targets to differentiate the two.