Suppose I have a program that relies on a newer glibc version than the one available through the system's packages, and running it fails with:
version `GLIBC_2.xxx' not found
One solution is to link the binary against glibc statically.
The other solution, which many people deride as "not safe", is to replace the libc.so.6
shipped by the operating system with a newer one.
How exactly is this second solution unsafe or a bad idea, given that the newer libc.so.6
still includes all the prior ABI version nodes?
E.g. if I run strings /usr/lib/libc.so.6 | grep --perl-regexp "^GLIBC_"
I can see many of these ABI versions listed:
...
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_2.13
GLIBC_2.14
GLIBC_2.15
GLIBC_2.16
GLIBC_2.17
...
So if I overwrite it with a newer libc.so.6
that contains additional glibc ABI versions, how does that break older apps or lead the system to breakage?
Or doesn't it...? :)
An excellent answer; I don't think my addendum is worth a separate filing. Ultimately this boils down to library versioning in general. If you have an API with three functions, {foo, bar, baz}, and want to release an update including a new one, the only safe way to do so is by appending it: {foo, bar, baz, diz}. If you add the new entry anywhere else, the "index" of each function at or beyond that point will change, and every program compiled against that library will fail to point at the correct functions. The same is true of in-memory data structures. Any non-append change will break. – GothAlice – 2019-06-21T02:04:18.743
With the proviso that this can be accounted for, e.g. by leaving "dead space", "padding", or gaps you can later fill. There are other techniques to protect against change, such as indirection via branch tables (e.g. the PalmOS ROM) or thunks, popular with Microsoft.
– GothAlice – 2019-06-21T02:47:05.467