I can imagine a few good use cases for this. For example, a web page like https://coinb.in/#newAddress lets the user create a new Bitcoin address, along with the corresponding private key, using client-side JavaScript crypto that runs entirely in the web browser.
This is a handy tool, and there is no reason why this page should not be static. But how can the user trust that the newly generated private key is not sent back to the server? There is a statement at the bottom of the page that reads, 'This page uses javascript to generate your addresses and sign your transactions within your browser, this means we never receive your private keys...' But how can the user trust this statement?
This is the familiar chicken-and-egg problem of browser cryptography: if you can't trust the server with your secrets (the Bitcoin private key), how can you trust that the code the server serves is not malicious (i.e. won't steal that private key)?
One way to solve this problem might be for a trusted reviewer to review the source code, then post an attestation on his HTTPS web site (or sign the attestation with his PGP key), saying: 'I, [trusted reviewer], have reviewed the source code for the web page at https://coinb.in/#newAddress, with the SHA256 checksum xxxxx, and I have verified that this source code does not contain malicious code.'
But, even if the source code for the page has been reviewed by someone that the user trusts, and the user is able to verify the authenticity of the attestation by the trusted reviewer - how can the user be sure that the source code for the page is in fact static, and that the source code has not changed since the trusted reviewer reviewed the code? In other words, how can the user be sure that the code that is currently loaded in his browser is the same as the code that the trusted reviewer reviewed?
This is why it would be nice, as the OP alluded to, if web browsers provided a way for the user to view a hash-based checksum of the currently loaded page. The user could then view that checksum, verify that it matches the checksum posted in the trusted reviewer's attestation, and rest assured that the page does not contain malicious code. But (as far as I know) no mainstream browser has a feature that shows the checksum of the currently loaded page. As a workaround, the user could load the page, save its source code to their system, use a tool like sha256sum to compute a checksum of the saved file, verify that it matches the checksum in the reviewer's attestation (much like verifying the integrity of an ISO file downloaded from the web), and then proceed to use the page.
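As a rough sketch of that manual verification step, something like the following Python script could do the comparison (the file name and the attested checksum below are placeholders, not real values; a command-line sha256sum would work just as well):

```python
import hashlib

# Placeholder values: the saved page name and the attested checksum are
# hypothetical, for illustration only.
SAVED_PAGE = "newAddress.html"  # page source saved from the browser
ATTESTED_SHA256 = "0" * 64      # checksum published in the reviewer's attestation

def sha256_of_file(path: str) -> str:
    """Compute the hex SHA-256 checksum of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of_file(SAVED_PAGE)
if actual == ATTESTED_SHA256:
    print("OK: saved page matches the attested checksum")
else:
    print("MISMATCH: saved page differs from the attested checksum")
    print("attested:", ATTESTED_SHA256)
    print("actual:  ", actual)
```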
Of course, this would require that all supporting files (e.g. JavaScript and CSS files) are referenced using Subresource Integrity (otherwise, code in those files could change without any change to the root document).
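To make that concrete, here is a small Python sketch of how an SRI integrity value for a referenced script could be computed (the file name is a placeholder; in practice the page author generates these values, and the reviewer's checksum of the root document then effectively pins the supporting files too):

```python
import base64
import hashlib

# Placeholder name for a script file referenced by the page.
SCRIPT_FILE = "wallet.js"

def sri_sha384(path: str) -> str:
    """Return an SRI integrity value ("sha384-<base64 digest>") for a file."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

print(sri_sha384(SCRIPT_FILE))
# The root document would then reference the script along the lines of:
#   <script src="wallet.js" integrity="sha384-..." crossorigin="anonymous"></script>
# so the browser refuses to execute the script if its contents change.
```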
Related:
How To Prove That Client Side Javascript Is Secure?
What’s wrong with in-browser cryptography in 2017?
Javascript crypto in browser
Problems with in Browser Crypto