Object code optimizer

An object code optimizer, sometimes also known as a post-pass optimizer or, for small sections of code, a peephole optimizer, takes the output from a source-language compile step (the object code or binary file) and tries to replace identifiable sections of the code with replacement code that is algorithmically more efficient (usually faster).

A binary optimizer takes the existing output from a compiler and produces an executable file with the same functionality but better performance.
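As a rough illustration of the peephole idea, the following Python sketch slides a small window over a toy instruction list and substitutes cheaper but equivalent sequences. The mnemonics, patterns, and the peephole function itself are hypothetical, invented for this example:

    # Minimal peephole-optimizer sketch over a toy instruction list.
    # Mnemonics and patterns are invented for illustration, not taken
    # from any real instruction set.

    PATTERNS = [
        # Redundant load immediately after a store of the same location.
        ((("STORE", "R1", "X"), ("LOAD", "R1", "X")),
         (("STORE", "R1", "X"),)),
        # Multiply by 2 replaced by a cheaper shift (strength reduction).
        ((("MUL", "R1", "2"),),
         (("SHL", "R1", "1"),)),
    ]

    def peephole(code):
        """Slide a window over `code`, applying PATTERNS until stable."""
        changed = True
        while changed:
            changed = False
            for pattern, replacement in PATTERNS:
                n = len(pattern)
                for i in range(len(code) - n + 1):
                    if tuple(code[i:i + n]) == pattern:
                        code[i:i + n] = list(replacement)
                        changed = True
                        break
                if changed:
                    break
        return code

    print(peephole([("STORE", "R1", "X"), ("LOAD", "R1", "X"), ("MUL", "R1", "2")]))
    # -> [('STORE', 'R1', 'X'), ('SHL', 'R1', '1')]

A production peephole pass works on real machine instructions and must check that condition codes and other side effects are preserved before rewriting.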

Examples

  • "IBM Automatic Binary Optimizer for z/OS[1]" (ABO) was introduced in 2015 as a cutting-edge technology designed to optimize the performance of COBOL applications on IBM Z[2] mainframes without the need for recompiling source. It uses advanced optimization technology shipped in the latest Enterprise COBOL[3]. ABO optimizes compiled binaries without affecting program logic. As a result, the application runs faster but behavior remains unchanged so testing effort could be reduced. Clients normally don't recompile 100 percent of their code when they upgrade to new compiler or IBM Z hardware levels, so code that's not recompiled wouldn't be able to take advantage of features in new IBM Z hardware. Now with ABO, clients have one more option to reduce CPU utilization and operating costs of their business-critical COBOL applications.
  • The earliest "COBOL Optimizer" was developed by Capex Corporation in the mid 1970s for COBOL. This type of optimizer depended, in this case, upon knowledge of 'weaknesses' in the standard IBM COBOL compiler, and actually replaced (or patched) sections of the object code with more efficient code. The replacement code might replace a linear table lookup with a binary search for example or sometimes simply replace a relatively slow instruction with a known faster one that was otherwise functionally equivalent within its context. This technique is now known as strength reduction. For example, on the IBM/360 hardware the CLI instruction was, depending on the particular model, between twice and 5 times as fast as a CLC instruction for single byte comparisons.[4][5]

Advantages

The main advantage of re-optimizing existing programs was that the stock of already compiled customer programs (object code) could be improved almost instantly with minimal effort, reducing CPU usage at a fixed cost (the price of the proprietary software). A disadvantage was that new releases of COBOL, for example, would require (charged) maintenance of the optimizer to cater for possibly changed internal COBOL algorithms. However, since new releases of COBOL compilers frequently coincided with hardware upgrades, the faster hardware would usually more than compensate for the application programs reverting to their pre-optimized versions (until a supporting optimizer was released).

Other optimizers

Some binary optimizers perform executable compression, which reduces the size of binary files using generic data-compression techniques, cutting storage requirements and transfer and loading times, but not improving run-time performance. Consolidating duplicate library modules can also reduce memory requirements.
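A minimal sketch of the compression idea in Python, using the standard zlib module; pack and unpack are invented names, and a real packer (UPX, for example) would additionally attach a stub so the program can decompress itself at load time:

    import zlib

    # Sketch of executable compression: the binary's bytes are compressed
    # with a generic compressor. A real packer (UPX, for example) would
    # also attach a stub that decompresses the image at load time.

    def pack(path_in, path_out):
        with open(path_in, "rb") as f:
            raw = f.read()
        packed = zlib.compress(raw, 9)  # maximum compression level
        with open(path_out, "wb") as f:
            f.write(packed)
        return len(raw), len(packed)

    def unpack(path_in, path_out):
        with open(path_in, "rb") as f:
            packed = f.read()
        with open(path_out, "wb") as f:
            f.write(zlib.decompress(packed))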

Some binary optimizers use run-time metrics (profiling) to improve performance adaptively, applying techniques similar to those of JIT compilers.
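The feedback loop can be sketched as follows. This deliberately simplified Python illustration, with invented names throughout, counts how often each routine runs and applies an optimizing transformation (memoization) only to the hot ones; it is an analogy for the profile-guided approach, not how a real binary optimizer is implemented:

    from collections import Counter

    # Simplified illustration of profile-driven optimization: count how
    # often each routine runs, then spend optimization effort (here,
    # memoization) only on the hot ones. Real binary optimizers rewrite
    # machine code, but the feedback loop is analogous.

    call_counts = Counter()
    HOT_THRESHOLD = 100  # calls before a routine is considered "hot"

    def memoize(fn):
        cache = {}
        def cached(*args):
            if args not in cache:
                cache[args] = fn(*args)
            return cache[args]
        return cached

    def profiled(fn):
        def wrapper(*args):
            call_counts[fn.__name__] += 1
            if call_counts[fn.__name__] == HOT_THRESHOLD:
                # "Reoptimize" the hot routine: swap in a memoized version.
                wrapper.impl = memoize(fn)
            return wrapper.impl(*args)
        wrapper.impl = fn
        return wrapper

    @profiled
    def expensive(n):
        return sum(i * i for i in range(n))

    for _ in range(200):
        expensive(10_000)  # after 100 calls, the memoized version takes over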

Recent developments

More recently developed 'binary optimizers' for various platforms, some claiming novelty, nevertheless essentially use the same (or similar) techniques described above.

References
