Actually most languages are "secure" with regard to buffer overflows. What it takes for a language to be "secure" in that respect is the conjunction of: strict types, systematic array bounds checks, and automatic memory management (a "garbage collector"). See this answer for details.
A few old languages are not "secure" in that sense, notably C (and C++), and also Forth, Fortran... and, of course, assembly. Technically, it is possible to write an implementation of C which would be "safe" and still formally conform to the C standard, but at a steep price (for instance, you have to make `free()` a no-operation, so allocated memory is allocated "forever"). Nobody does that.
"Secure" languages (with regard to buffer overflows) include Java, C#, OCaml, Python, Perl, Go, even PHP. Some of these languages are more than efficient enough to implement SSL/TLS (even on embedded systems -- I speak from experience). While it is possible to write secure C code, it takes (a lot of) concentration and skill, and experience repeatedly shows that it is hard: even the best developers cannot pretend that they always apply the required level of concentration and competence. This is a humbling experience. The assertion "don't use C, it is dangerous" is unpopular, not because it is wrong, but, quite to the contrary, because it is true: it forces developers to face the idea that they might not be the demigods of programming they believe themselves to be, deep in the privacy of their souls.
Note, though, that these "secure" languages don't prevent the bug: a buffer overflow is still unwanted behaviour. But they contain the damage: the memory beyond the buffer is not actually read from or written to; instead, the offending thread triggers an exception and is (usually) terminated. In the case of Heartbleed, this would have prevented the bug from becoming a vulnerability, and it might have helped avert the full-scale panic that we observed in the last few days (nobody really knows what makes a random vulnerability go viral like a YouTube video featuring a Korean invisible horse; but, "logically", if it had not been a vulnerability at all, then all this tragicomedy ought to have been avoided).
Edit: since it was abundantly discussed in the comments, I thought about the problem of safe memory management for C, and there is a kind-of solution which still allows `free()` to work, but there is a cheat.
One can imagine a C compiler which produces "fat pointers". For instance, on a 32-bit machine, make pointers 96-bit values. Each allocated block is granted a unique 64-bit identifier (say, a counter), and an internal memory structure (hashtable, balanced tree...) is maintained which references all blocks by ID. For each block, its length is also recorded in the structure. A pointer value is then the concatenation of the block ID and an offset within that block. When a pointer is followed, the block is located by ID, the offset is compared with the block length, and only then is the access performed. This setup detects double-free and use-after-free. It also detects most buffer overruns (but not all: a buffer may be part of a bigger structure, and the `malloc()`/`free()` management only sees the outer blocks).
The "cheat" is the "unique 64-bit counter": it is unique only as long as you don't run out of 64-bit integers; beyond that, you must reuse old values. 64 bits ought to avoid that issue in practice (it would take years for the counter to wrap around), but a smaller counter (e.g. 32 bits) could prove to be a problem.
Also, of course, the overhead for memory accesses may be non-negligible (quite a few physical reads for each access, although some cases may be optimized away), and tripling the pointer size (32 to 96 bits) implies higher memory usage, too, for pointer-rich structures. I am not aware of any existing C compiler which applies such a strategy; it is purely theoretical right now.