13

Over the years of using various Linux boxes, I've gotten into the habit of running prelink ritually to speed up application load times.

However, the benefits of running prelink are negated every time a package is reinstalled: the package, all its dependencies, and all its dependents need to be re-prelinked.

Prelinking can also cause problems of its own. One is binary MD5 invalidation: because prelink rewrites the binary, its MD5 sum no longer matches the upstream checksum, which breaks tools that compare checksums against upstream revisions or use them to decide whether a binary has been locally modified and should therefore be preserved on package removal.
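To illustrate the checksum problem, here is a minimal shell sketch; the appended byte is just a stand-in for prelink rewriting the file, since this demo doesn't require prelink itself:

```shell
# Any in-place modification of a binary, prelinking included, changes
# its MD5 sum, so a naive comparison against the packaged checksum
# flags the file as locally modified.
cp /bin/ls /tmp/ls-copy
before=$(md5sum /tmp/ls-copy | cut -d' ' -f1)
printf '\0' >> /tmp/ls-copy   # stand-in for prelink rewriting the binary
after=$(md5sum /tmp/ls-copy | cut -d' ' -f1)
[ "$before" != "$after" ] && echo "checksum no longer matches"
```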

Recently, computers have gotten a lot faster, and the benefit prelink yields is now hardly noticeable.

Is using prelink still a rational concept, or can it be casually discarded and left behind as something of a past era?

Kent Fredric

6 Answers

5

Unless you're subscribed to LWN.net you won't be able to read it until 23rd July 2009, but you might find http://lwn.net/Articles/341244/ useful.

David Pashley
1

I wouldn't say that it should be arbitrarily discarded, but I would definitely say that its use should be given a little more thought.

On a modern higher-end machine that is updated frequently, prelink may not be a useful optimization. However, there are still a number of cases where it could be worth using: for example, on an older or lower-end machine, or on machines that are fairly static and don't see frequent changes or updates. It could also be worthwhile if you have a high rate of programs being run repeatedly (I can think of a couple of situations where you might have programs launched in rapid succession or in parallel where prelinking could improve performance).

All in all, you need to consider your specific situation, and then decide whether the benefits outweigh the additional work and effort.

Christopher Cashell
  • "a high rate of programs being run repeatedly" – if you're in that situation, the binaries and libraries will end up in your filesystem cache. The only time prelinking would help is if you are so memory-starved that you have very little fs cache available. – Daniel Lawson Jul 19 '09 at 20:50
  • Prelinking will speed up program start even if the program is stored in the filesystem cache. Admittedly, when the program (and associated libraries) are cached, the performance increase is less noticeable. However, depending on the rate of programs being run, a few microseconds can add up to eventually make a difference. – Christopher Cashell Jul 20 '09 at 15:50
1

Gentoo uses prelink. They get around the md5sum issue by ignoring the prelink info when calculating the hash.
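For reference, prelink itself can reproduce the pre-prelink digest via its verify mode; a sketch, guarded because it assumes prelink is installed:

```shell
# prelink --verify undoes the prelinking in memory and checks the
# result against the original; with --md5 it prints the digest of the
# un-prelinked file, which is what a package manager can compare
# against the checksum it recorded at install time.
if command -v prelink >/dev/null 2>&1; then
    prelink --verify --md5 /bin/ls
else
    echo "prelink not installed; skipping verification"
fi
```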

Prelink will always give you a speed boost, although it may become less and less noticeable as hardware gets faster. The only way to know for sure on your hardware is to turn off prelink and see how you feel about the slowdown in app launches.
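One way to run that experiment is to un-prelink, time a few launches, re-apply, and compare. A sketch: the prelink invocations are commented out because they assume prelink is installed and require root, while the timing harness itself runs anywhere with GNU date:

```shell
# On a prelink-enabled system you would toggle the state with:
#   prelink -ua   # undo prelinking system-wide (as root)
#   prelink -a    # re-apply it per /etc/prelink.conf (as root)
# Then time some launches; nanosecond timestamps via date(1):
start=$(date +%s%N)
/bin/ls / > /dev/null
end=$(date +%s%N)
echo "launch took $(( (end - start) / 1000000 )) ms"
```

Repeat the timing a few times in each state, since the first launch pays the cold-cache cost.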

Side note: OS X used to do a form of prelinking as well, but that has been abandoned in favor of a link cache that the linker maintains itself. Best of both worlds: no binary modification and no real overhead versus normal linking. I hope Linux picks up this idea at some point :)

Update: I recently tried prelinking on Linux, and for a compilation of cscope with many files and processes I got a 5% speed boost.

w00t
  • It doesn't really... it's still something you have to install and configure; I say this because *I* am using Gentoo. And you can't exactly "turn off" prelink; you can only stop running prelink, or go and un-prelink your whole system. Also, for some reason unbeknownst to me, Paludis has issues with prelinked binaries, and without an undo-prelink hook (unsupported) it leaves binaries behind, yielding cruft. Recently I discovered a few KDE apps that were left behind because of that before I installed the hook, and they were earlier in the path than the newer ones in a different location, causing segfaults. – Kent Fredric Jul 14 '09 at 11:31
  • Possibly, enabling linker optimization ( -Wl,-O1 ) and the new changes in GNU hash allocation is more akin to what OS X has moved to, which is possibly a more effective choice. – Kent Fredric Jul 14 '09 at 11:33
  • I must admit it's been a while since I used gentoo... I have since moved on to OS X :). I remember a test I once did on OS X: start all applications at once and time that (about 1 minute iirc). Then, remove all prelink information and start all apps again. That time it took 5 minutes... This was in 2005 on a tower Mac, a real beast. – w00t Jul 16 '09 at 07:07
  • As a counter to your idea that prelinking speedups may get less noticeable: they are likely to become more important as programs skyrocket in their use of runtime-loadable libs. A gvim from 2009 used 55 runtime libs. One from 2 years ago used 73. 'mount' from 2009 used 7; mount from today uses 10, with 4 of them in /usr/lib64 and 6 in /lib64... so they are expanding, becoming larger and more spread out. -- same as it ever was -- as soon as HW gets faster, SW gets a lot more complex to soak up the boost. – Astara May 23 '15 at 01:55
  • @astara true, but the growth in library use is not as fast as the growth in harddisk and memory speed. – w00t May 24 '15 at 06:01
  • Neither memory nor disks have kept up. In the first PC, the memory ran at the same speed as the CPU. Now, they have 3-4 levels of cache to compensate for the difference in CPU vs. memory speed. Rotating disks have shown less change, inasmuch as 7200 RPM HDs have been the consumer standard for the top end for at least 15 years, and 15K SCSI has been around for about as long for business use (usually at about 1/6-1/8th the capacity of the 7200). I still see 5400 RPM HDs -- coming back as 'green' (as in using 'less power'). ARG! If SSDs drop in price and rise in capacity -- maybe... but.. – Astara May 24 '15 at 23:21
  • (ran out of room) -- today's SSDs underperform in large data transfers by a considerable margin (I have a RAID0 w/4 '500MB/s' SSDs that tops out at about 6-7 hundred MB/s, vs. a RAID10 using 7200 RPM drives w/4 disks in series that easily tops 1GB/s for reads, though it is slower for random I/O... which _points_ to prelink plus moving common libs so they can be preloaded w/1 linear read (not an easy solution to implement though). My RAID performance dropped considerably recently because its internal battery died, so it is doing write-throughs instead of write-backs.. :-( – Astara May 24 '15 at 23:27
1

I would say prelink is definitely useful on multi-user desktop servers, such as LTSP servers used in schools and net cafes for example. Not only does prelink speed up application loading, it also improves RAM utilization and reduces disk thrashing caused by contention between users, allowing many more simultaneous users on a server.

0

I think that with memory prices falling, prelinking is becoming less useful. If you still want to speed things up slightly, you might look into preload.

  • I tried preload; I just found it made startup times *slower* while it sat there chewing both cores doing its readahead thing. And, for some reason I can't make out, it also caused X to die during boot. Also, if you don't reboot often, preload stops being useful at all. – Kent Fredric Jul 12 '09 at 06:40
0

I leave that decision to the OS distribution. If the OS chooses by default to run prelink regularly from cron, fine; otherwise it is probably not so useful. I trust that the creators of distributions have given this some thought before choosing to enable or disable prelink by default, so I go with their choice rather than analyzing things again myself.

Saurabh Barjatiya
  • Er, it's not really defaultable; it's a package you have to install, and if it's not installed you don't get prelinked binaries. If it is installed, it tends to create a cron script, which is off by default and which you have to enable manually. – Kent Fredric Jul 12 '09 at 06:37
  • It is on by default on Fedora, not off. It is reniced to 19, but not off. It has been that way since Fedora 6 or 7. – Saurabh Barjatiya Jul 12 '09 at 10:54
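A quick way to see what your distribution decided is to look for the cron job and its switch. A sketch using the Fedora/RHEL-style paths mentioned above; other distributions may place these files elsewhere or not ship them at all:

```shell
# The Fedora/RHEL layout uses a daily cron job plus a sysconfig switch.
if [ -x /etc/cron.daily/prelink ]; then
    echo "prelink cron job is installed"
else
    echo "no prelink cron job found"
fi
# PRELINKING=yes/no controls whether that job actually prelinks:
grep -s '^PRELINKING' /etc/sysconfig/prelink \
    || echo "no /etc/sysconfig/prelink present"
```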