I'll start by noting that this question grew out of an earlier argument (which I won't rehash here), so I expect a downvote, and possibly a flag, from the OP. I will certainly answer the question regardless. I would also note that Steffen has provided quite an informative answer; I'd like to add to it in order to clarify a few things.
1) Pre HTML 5 where the applications and so on lived in plug ins
outside the browser sandbox by way of Flash and Java installed and
enabled.
The excellent answers in the question linked above clearly show the end-user security benefits of interpreted languages such as HTML/JavaScript over technologies like Flash, which require at least some pre-compilation. Which brings me to:
2) Post HTML 5 and applications being moved into the browser sandbox
so you no longer need to breach the browser environment to get at this
data and applications, and hooks into system hardware like cameras and
so on given directly to the browser by way of HTML and JS accessible
hooks built into the browser it's self.
A big point I would like to make here is that the vulnerabilities found and exploited in Java and Flash have very little to do with the features and system resources they can access, and much more to do with the possibility of modifying their code between compilation and execution. I would also point out that modern JavaScript sandbox implementations are excellent and continue to improve. Node.js even documents how to run scripts in completely separate JavaScript contexts, all controllable by a single master script. I have used this myself (the documentation is here), and it shows that we can fairly easily create our own sandboxes, at least for HTML/JS-based apps.
Now, I am in no way saying that these sandboxes can't be overcome. Any software that interacts with a sandboxed app has to be trusted for everything it does, and the overall system is only as strong as its weakest link. However, I think apps on a futuristic WebOS would work much the way browser plugins and add-ons do today: they are given much more access to the system than regular webpages, but they are built on the same interpreted-language technologies (unless they are "native" in the sense of Java or Flash, in which case they too get much more access to your computer).
I’m not talking about server-side security. Nor the core OS that may
run the browser environment. I am referring to User Data and as the
Browser becomes the OS as far as the user is concerned what they care
about being safe from attack.
Java/Flash exploits are common because there are so many ways to cause a crash by modifying the pre-compiled code before the file is executed. A crash happens when the application references memory it is not allowed to access, or when it executes memory that is corrupt or otherwise not valid executable code. Unofficial compilers sometimes trigger this by accident. Turning this kind of flaw into a working exploit is usually just a matter of skill and time, and by the end of it the app does not even have to crash.
On the contrary, JavaScript cannot be tampered with in the same way between compilation and execution, which makes the very exploits used against Flash and Java inherently impossible to reproduce against JavaScript. Maliciously modifying the bytes of a JavaScript file to exploit how it is interpreted does not work, because any irregularity is caught by the parser as a syntax error: the entire file has to follow the strict grammar of the language, or not a single line is executed. No such check can be performed on pre-compiled/binary files like those associated with Java/Flash, because that check is done by the compiler on the developer's machine. The client (end user) has no such protection.
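To illustrate the parse-time check, here is a minimal sketch; `new Function` stands in for any JavaScript engine loading a script, and the "tampering" is simulated by deleting one token:

```javascript
// Sketch: source with corrupted syntax is rejected at parse time,
// before a single statement runs. `new Function` compiles its body
// the same way a script load does.
const valid = 'return 2 + 2;';
const corrupted = 'return 2 + ;';    // simulate tampering with the file

console.log(new Function(valid)());  // 4 — well-formed source runs

try {
  new Function(corrupted);           // never reaches execution
} catch (err) {
  console.log(err instanceof SyntaxError); // true — rejected by the parser
}
```

Contrast this with a corrupted class file or SWF, where a malformed instruction may only be discovered once the runtime is already executing it.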
It is, however, important to note that compiled languages suffer from all of the same types of security flaws that interpreted languages suffer from. Having said that, any software that allows an unsolicited website to run arbitrary code on your system inherently carries risk. In a security context, rule number one is to minimize risk as much as possible without breaking necessary functionality. If a vulnerability is found in Flash that lets it execute arbitrary code on the system, you consider it a system compromise, and many Flash vulnerabilities fall into exactly this category; it is an inherent flaw of pre-compiled code, as I explained above. Since in a future Web-based OS we would be worrying more about XSS and CSRF vulnerabilities (which are not necessarily a system compromise) than about arbitrary code execution, I think it is safe to say that, if anything, HTML5/JavaScript will end up being more secure.
Disclaimer: It is entirely possible for external forces to complicate this answer between now and whenever Web-based operating systems actually come into effect. This reflects my current knowledge of the inherent security models of fully interpreted versus pre-compiled languages, as they compare in terms of end-user security and as they apply to the question asked, and it assumes that not much will change technologically if/when a move is made to Web-based operating systems.