36-bit computing

In computer architecture, 36-bit integers, memory addresses, or other data units are those that are 36 bits (six six-bit characters) wide. Also, 36-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. 36-bit computers were popular in the early mainframe computer era from the 1950s through the early 1970s.

Image caption: a Friden mechanical calculator. The 36-bit electronic computer word length was chosen, in part, to match its precision.

Starting in the 1960s, and especially in the 1970s, the introduction of 7-bit ASCII led to the move toward machines using 8-bit bytes, notably the IBM System/360. By the mid-1970s the conversion was largely complete, and microprocessors quickly moved from 8-bit to 16-bit to 32-bit over the following decade. The number of 36-bit machines fell rapidly during this period; those that remained were offered largely for backward compatibility, to run legacy programs.

History

Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator, such as those manufactured by Friden, Marchant and Monroe. These calculators had a column of keys for each digit, and operators were trained to use all their fingers when entering numbers, so while some specialized calculators had more columns, ten was a practical limit. Computers, as the new competitor, had to match that accuracy. Decimal computers sold in that era, such as the IBM 650 and the IBM 7070, had a word length of ten digits, as did ENIAC, one of the earliest computers.

Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code. Computers with 36-bit words included the MIT Lincoln Laboratory TX-2, the IBM 701/704/709/7090/7094, the UNIVAC 1103/1103A/1105 and 1100/2200 series, the General Electric GE-600/Honeywell 6000, the Digital Equipment Corporation PDP-6/PDP-10 (as used in the DECsystem-10/DECSYSTEM-20), and the Symbolics 3600 series.
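
The 35-bit minimum follows from the range itself: ten signed decimal digits span −9,999,999,999 to +9,999,999,999, and the smallest power of two exceeding that magnitude is 2^34, so 34 magnitude bits plus a sign bit are required. A minimal C check of that bound (modern portable C, not code for any of the machines above):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Largest ten-digit decimal magnitude. */
        const uint64_t max10 = 9999999999ULL;
        int bits = 1;        /* start with one bit for the sign */
        uint64_t span = 1;   /* magnitudes representable so far */
        while (span <= max10) {   /* double until the magnitude fits */
            span <<= 1;
            bits++;
        }
        printf("minimum signed width: %d bits\n", bits);  /* prints 35 */
        return 0;
    }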

Smaller machines like the PDP-1/PDP-9/PDP-15 used 18-bit words, so a double word was 36 bits.

These computers had addresses 12 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing between 4,096 and 262,144 words (24,576 to 1,572,864 six-bit characters). The older 36-bit computers were limited to a similar amount of physical memory as well. Architectures that survived evolved over time to support larger virtual address spaces using memory segmentation or other mechanisms.

The common character packings included:

  • six 6-bit IBM BCD or Fieldata characters (ubiquitous in early usage)
  • six 6-bit ASCII characters, supporting the upper-case unaccented letters, digits, space, and most ASCII punctuation characters. It was used on the PDP-6 and PDP-10 under the name sixbit (see the packing sketch after this list).
  • six DEC Radix-50 characters packed into 32 bits, plus four spare bits
  • five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention, called five-seven ASCII)[1][2]
  • four 8-bit characters (7-bit ASCII plus 1 spare bit, or 8-bit EBCDIC), plus four spare bits
  • four 9-bit characters[1][2] (the Multics convention).
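
As an illustration of the 6-bit packings above, the sketch below packs a six-character string into one 36-bit word using DEC's sixbit code (ASCII value minus 32, covering space through underscore). It is a modern-C approximation; the function name and octal output format are illustrative choices, not a historical API.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Pack up to six characters into a 36-bit word, six bits each,
       using DEC sixbit: code = ASCII - 040 (space .. '_' only). */
    static uint64_t pack_sixbit(const char *s) {
        size_t len = strlen(s);
        uint64_t word = 0;
        for (size_t i = 0; i < 6; i++) {
            unsigned c = (i < len) ? (unsigned char)s[i] : ' ';  /* pad with spaces */
            word = (word << 6) | ((c - 040) & 077);
        }
        return word;
    }

    int main(void) {
        /* Twelve octal digits display the whole 36-bit word. */
        printf("%012llo\n", (unsigned long long)pack_sixbit("SIXBIT"));
        return 0;
    }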

Characters were extracted from words either using machine code shift and mask operations or with special-purpose hardware supporting 6-bit, 9-bit, or variable-length characters. The UNIVAC 1100/2200 used the partial-word designator of the instruction, the "J" field, to access characters. The GE-600 used special indirect words to access 6- and 9-bit characters. The PDP-6/10 had special instructions to access arbitrary-length byte fields.
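
Where no byte hardware existed, shift-and-mask extraction looked roughly like the following sketch, which right-justifies character number i of width w bits from a 36-bit word held in a 64-bit integer (a hypothetical modern-C rendering of what those machines did in a few instructions):

    #include <stdio.h>
    #include <stdint.h>

    /* Extract character i (0 = leftmost) of width w bits from a
       36-bit word stored in the low 36 bits of a uint64_t. */
    static unsigned extract_char(uint64_t word, int i, int w) {
        int shift = 36 - w * (i + 1);              /* bits to the right of the field */
        return (unsigned)((word >> shift) & ((1u << w) - 1));
    }

    int main(void) {
        /* A word holding four 9-bit ASCII characters, "WORD". */
        uint64_t word = ((uint64_t)'W' << 27) | ((uint64_t)'O' << 18)
                      | ((uint64_t)'R' << 9)  | (uint64_t)'D';
        for (int i = 0; i < 4; i++)
            putchar((int)extract_char(word, i, 9));
        putchar('\n');                             /* prints WORD */
        return 0;
    }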

The standard C programming language requires that the size of the char data type be at least 8 bits,[3] and that all data types other than bitfields have a size that is a multiple of the character size,[4] so standard C implementations on 36-bit machines would typically use 9-bit chars, although 12-bit, 18-bit, or 36-bit would also satisfy the requirements of the standard.[5]
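
The constraint can be inspected from <limits.h> on any conforming implementation. The portable sketch below prints 8 and 32 on most current machines; on a hypothetical 9-bit-char, 36-bit-int implementation it would print 9 and 36:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* CHAR_BIT >= 8 is required by the standard; a 36-bit
           machine would typically define it as 9 (four chars per word). */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("int      = %zu bits\n", sizeof(int) * CHAR_BIT);
        return 0;
    }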

By the time IBM introduced System/360 with 32-bit full words, scientific calculations had largely shifted to floating point, where double-precision formats offered more than ten-digit accuracy. The System/360 also included instructions for variable-length decimal arithmetic for commercial applications, and word lengths that were a power of two quickly became commonplace. At least one line of 36-bit computer systems is still sold as of 2019: the Unisys ClearPath Dorado series, the continuation of the UNIVAC 1100/2200 series of mainframe computers.

CompuServe was launched on 36-bit PDP-10 computers in the late 1960s. It continued to use PDP-10 and DECSYSTEM-10-compatible hardware until the service was retired in the late 2000s.

Other uses in electronics

The LatticeECP3 FPGAs from Lattice Semiconductor include multiplier slices that can be configured to support the multiplication of two 36-bit numbers.[6] The DSP block in Altera Stratix FPGAs can do 36-bit additions and multiplications.[7]
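
A 36-bit multiplier is commonly built by cascading 18×18-bit slices. The sketch below shows the underlying decomposition in portable C: each operand is split into 18-bit halves and four partial products are combined into a 72-bit result (the function name and the 36/36 output split are illustrative assumptions, not a vendor API):

    #include <stdio.h>
    #include <stdint.h>

    #define MASK18 ((1ULL << 18) - 1)
    #define MASK36 ((1ULL << 36) - 1)

    /* 36x36 -> 72-bit multiply from four 18x18 partial products,
       mirroring how FPGA DSP slices are cascaded. */
    static void mul36(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) {
        uint64_t al = a & MASK18, ah = (a >> 18) & MASK18;
        uint64_t bl = b & MASK18, bh = (b >> 18) & MASK18;
        uint64_t p0 = al * bl;                      /* weight 2^0  */
        uint64_t p1 = al * bh + ah * bl;            /* weight 2^18 */
        uint64_t p2 = ah * bh;                      /* weight 2^36 */
        uint64_t low = p0 + ((p1 & MASK18) << 18);  /* may carry past bit 35 */
        *lo = low & MASK36;
        *hi = p2 + (p1 >> 18) + (low >> 36);
    }

    int main(void) {
        uint64_t hi, lo;
        mul36(MASK36, MASK36, &hi, &lo);            /* largest 36-bit operands */
        printf("hi=%012llo lo=%012llo (octal)\n",
               (unsigned long long)hi, (unsigned long long)lo);
        return 0;
    }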

References
