What exactly is VGA, and what is the difference between it and a video card?

30

9

Operating system development tutorials describe reaching screen data by writing directly to VGA, EGA, or Super VGA, but what I don't get is: what is the real difference between writing to a fixed address for display and writing directly to a video card, whether onboard or removable? I just want basic clarification of my confusion on this issue.

And since it is not such a simple case, with variations in cards, connection interfaces, buses, architectures, systems on a chip, embedded systems, and so on, I find it hard to understand the idea behind this completely. Would the fixed addresses differ from a high-end GPU to a low-end onboard one? Why or why not?

It is one of my goals in programming to host a kernel and make an operating system, a far-fetched dream indeed. Failing to understand the terminology not only hinders me in some areas, but makes me seem foolish on the subject of hardware.

EXTRA: Some of the current answers speak of using the processor's maximum addressable memory, specifically in 16-bit terms. The problem is these other arising issues:

1. What about the card's own memory? That would not need system RAM for screen data itself.

2. What about higher-bit modes? And can't you bypass the BIOS in real mode (x86) and still address memory through AL?

3. How would the concept of writing to a fixed address remain unchanged on a GPU with multitudes of registers and performance at or above that of the actual microprocessor?

user192573

Posted 2013-01-24T17:14:01.970

Reputation:

For a little historical context, check out my answer to a related question: http://superuser.com/questions/357328/how-computers-display-raw-low-level-text-and-graphics/357388#357388

– Russell Borogove – 2013-01-24T23:31:19.860

It should be noted that, in addition to referring to the display card technology/protocol, the term has come to designate particular electrical standards and display resolutions. It's hard to guess which meaning is being applied, even when you see the terms "in context". – Daniel R Hicks – 2013-01-26T17:52:28.253

Answers

65

Technically, VGA stands for Video Graphics Array, a 640x480 video standard introduced in 1987. At the time that was a relatively high resolution, especially for a colour display.

Before VGA was introduced we had a few other graphics standards, such as Hercules, which displayed either text (80 columns by 25 lines) or relatively high-definition monochrome graphics (at 720x348 pixels).

Another standard of the time was CGA (Colour Graphics Adapter), which also allowed up to 16 colours, at resolutions up to 640x200 pixels. The result would look like this:

[image: example of CGA graphics output]

Finally, a noteworthy PC standard was the Enhanced Graphics Adapter (EGA), which allowed resolutions up to 640x350 with 16 colours from a palette of 64.

(I am ignoring non-PC standards to keep this relatively short. If I start to add Atari or Amiga standards (up to 4096 colours at the time!) then this will get quite long.)

Then in 1987 IBM introduced the PS/2 computer. It had several noteworthy differences from its predecessors, including new ports for mice and keyboards (previously mice used 25-pin or 9-pin serial ports, if you had a mouse at all), standard 3½-inch drives, and a new graphics adapter with both a high resolution and many colours.

This graphics standard was called the Video Graphics Array. It used a 3-row, 15-pin connector to transfer analog signals to a monitor. This connector lasted until a few years ago, when it was replaced by superior digital standards such as DVI and DisplayPort.

After VGA

Progress did not stop with the VGA standard. Shortly after the introduction of VGA, new standards arose, such as the 800x600 Super VGA (SVGA), which used the same connector. (Hercules, CGA, EGA, etc. all had their own connectors. You could not connect a CGA monitor to a VGA card, not even if you tried to display a low enough resolution.)

Since then we have moved on to much higher resolution displays, but the most often used name remains VGA, even though the correct names would be SVGA, XGA, UXGA, etc.

[image: chart of display standard resolutions]

(Graphic courtesy of Wikipedia)


Another thing which gets called 'VGA' is the DE15 connector used with the original VGA card. This connector, usually blue, is not the only way to transfer analog 'VGA' signals to a monitor, but it is the most common.

[image: Left: the DE15 connector. Right: alternative VGA connectors, usually used for better quality]


A third way 'VGA' is used is to describe a graphics card, even though that card might produce entirely different resolutions than VGA. The usage is technically wrong, or should at least be 'VGA-compatible card', but common speech does not make that distinction.


That leaves writing to VGA

This comes from the way the memory on an IBM XT was divided. The CPU could access up to 1MiB (1024KiB) of memory. The bottom 512KiB was reserved for RAM, the upper 512KiB for add-in cards, ROM, etc.

This upper area is where the VGA card's memory was mapped. You could write directly to it and the result would show up on the display.

This was not just used for VGA, but also for its same-generation alternatives.

  G = Graphics Mode Video RAM
  M = Monochrome Text Mode Video RAM
  C = Color Text Mode Video RAM
  V = Video ROM BIOS (would be "a" in PS/2)
  a = Adapter board ROM and special-purpose RAM (free UMA space)
  r = Additional PS/2 Motherboard ROM BIOS (free UMA in non-PS/2 systems)
  R = Motherboard ROM BIOS
  b = IBM Cassette BASIC ROM (would be "R" in IBM compatibles)
  h = High Memory Area (HMA), if HIMEM.SYS is loaded.

Conventional (Base) Memory:   
First 512KB (or 8 chunks of 64KiB). 

Upper Memory Area (UMA):

0A0000: GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
0B0000: MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
0C0000: VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0D0000: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
0E0000: rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
0F0000: RRRRRRRRRRRRRRRRRRRRRRRRbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbRRRRRRRR

(Source of the ASCII map).
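
To make "writing to a fixed address" concrete: in the standard 80x25 colour text mode, the "C" region above begins at physical address 0xB8000, and each character cell is two bytes (the ASCII code, then a colour attribute). Below is a minimal sketch in freestanding C, assuming a 32-bit x86 kernel with flat addressing; the names vga_put_char and demo are just illustrative.

    #include <stdint.h>

    /* The colour text buffer is memory-mapped at 0xB8000; each 16-bit
       cell holds the ASCII code (low byte) and an attribute (high byte). */
    #define VGA_TEXT_BUFFER ((volatile uint16_t *)0xB8000)
    #define VGA_COLS 80

    static void vga_put_char(int row, int col, char ch, uint8_t attr)
    {
        /* The cell's offset in the buffer is row * width + column. */
        VGA_TEXT_BUFFER[row * VGA_COLS + col] = ((uint16_t)attr << 8) | (uint8_t)ch;
    }

    void demo(void)
    {
        vga_put_char(0, 0, 'A', 0x07); /* 'A', light grey on black, top-left */
    }

No BIOS call and no driver: the store instruction itself puts the character on screen, because the address decodes to the card's own memory.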

Hennes

Posted 2013-01-24T17:14:01.970

Reputation: 60 739

Nit comment: It's not a "DB15" connector. A DB15 connector has only 2 rows of pins (just like DB9 and DB25). The VGA connector has 3 rows of pins, and is often called "HD15" (HD for high density compared to DB) (although some assert that "HD15" is not an official name). – sawdust – 2013-01-24T21:04:46.497

Nit comment #2: The original IBM PC may have been released with a 512/512 split, but it was soon changed to a 640/384 split (this is referenced in your source page). Graphics memory starts at the 640K mark (hex 0A0000). I don't think anybody ever really became aware of a "512K boundary" in the way that the "640K boundary" eventually came to be a well-known issue. – Hellion – 2013-01-24T22:00:42.793

Heh. I know they 'later' adjusted it to 10 64KiB segments. I still like the clean half/half design though, both in the XT (512/512) and in the Amiga (first half of addresses on board, the rest for expansion memory). And, sort of, in XP: half of the 32-bit space (2GB) for programs, half for kernel/PCI space. It seems engineers keep coming up with the same solution. – Hennes – 2013-01-24T22:05:18.570

2

@sawdust: HD15 is definitely not an official name (but is as good as, these days). In the Dx-nn connector family, x is the size of the shell, nn is the number of pins. Shell B is the same size as a parallel port (or an old, full implementation 25-pin serial port). Shell E is the same size as the serial port. So technically, the VGA 15-pin connector would be DE-15, but this was never part of the original line-up of connectors. AFAIK it never even existed before IBM's use on the PS/2 MCGA, VGA & 8514/a. Wikipedia has a good explanation: http://en.wikipedia.org/wiki/D-subminiature

– Alexios – 2013-01-24T23:15:53.080

I love the superlative inflation of resolutions: ultra-wide-super-extended-hyper-quad-graphics-array. You can't call it high-def, because one day it won't be! – aidan – 2013-01-25T01:20:33.897

This still isn't painting the picture clearly enough for me. – None – 2013-01-25T02:46:28.240

Can you clarify the history, please? VGA was introduced by IBM in 1987. – Reinstate Monica - M. Schröder – 2013-01-25T10:00:08.650

@LackingConfidence which part is not clear enough? Memory locations? Martin: will edit later today to expand that part. – Hennes – 2013-01-25T11:47:54.063

Are the video card's internal instructions and specification set the same thing as a limited set of screen resolutions? What about per-pixel plotting? Many things are still unclear with VGA and video card terminology as a whole. – None – 2013-01-26T00:56:21.290

The card has specific I/O locations. Writing the right values to those sets the mode. Say I set a 40x25 text mode (1000 chars on the screen: 40 on the first line, 40 more on the second line ... up to the 25th line). If I write the right value (e.g. 65) to the first location mapped to the text buffer, then an A (ASCII value 65) will appear in the upper-left corner. If I write the value 66, then a B would appear. If I wrote to the start of the buffer plus 42, then a char would appear at the third place of the second line (42 = 40 + 2). For graphics it is similar, but individual pixels get drawn – Hennes – 2013-01-26T01:13:43.883

rather than predefined characters. A lot of this depends on the mode you set the card to. Not just text mode vs. graphics mode, but also bit depth, palette selection, ... For this you really want to read a manual, which will be much longer than any post here on S.U. can be. – Hennes – 2013-01-26T01:14:01.607
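
As a sketch of what Hennes describes (mode setting through the BIOS, then plotting pixels by writing to the mapped buffer), here is a small DOS program. It assumes a 16-bit DOS compiler such as Turbo C, which provides int86() and MK_FP() in <dos.h>; mode 13h gives a linear 320x200, 256-colour framebuffer at A000:0000.

    #include <dos.h>
    #include <conio.h>

    int main(void)
    {
        union REGS r;
        unsigned char far *vram = (unsigned char far *)MK_FP(0xA000, 0);

        /* INT 10h, AH=00h: set video mode. AL=13h selects 320x200x256. */
        r.h.ah = 0x00;
        r.h.al = 0x13;
        int86(0x10, &r, &r);

        /* In mode 13h the buffer is linear: offset = y * 320 + x. */
        vram[50 * 320 + 100] = 15;   /* a white pixel at (100, 50) */

        getch();                     /* wait, then return to 80x25 text mode */
        r.h.ah = 0x00;
        r.h.al = 0x03;
        int86(0x10, &r, &r);
        return 0;
    }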

Can you direct me to such a manual? – None – 2013-01-26T06:28:16.003

That book's a bit old to look forward to, and the price isn't so pretty either. – None – 2013-01-26T17:56:21.127

Aye, that book is old. So is the introduction of the Video Graphics Array, well over two decades ago. As for the price, you might find it cheaper elsewhere; I just linked the first website I could find the book on. – Hennes – 2013-01-26T18:02:09.143

So can I ask you one more question here in the comments? Is the card's "instruction set" based on screen modes and pixels, I mean like linear memory? – None – 2013-01-28T15:48:09.963

It depends on the card. For old 'real VGA' cards I would say yes. Modern cards, however, are usually 'programmed' by speaking to a proprietary driver. How that driver communicates with the card and how they optimise things are the great secrets of AMD/ATI and Nvidia. – Hennes – 2013-01-28T16:40:24.623

What about a lower-end onboard card? Would it potentially follow the same lines? What I'm trying to get at is: given a different card, instruction set, specification, driver, etc., how would one know how to write to and fully use that card within the confines of its design without having to reference thousands of lines in manuals, ask dozens of questions, and suffer endless errors? Is it really this daunting in the real world of low-level direct hardware interfacing? I want full potential, and I seek to be a low-level developer for systems and kernels, so I want clarity on this. – None – 2013-01-28T21:07:42.580

I would like to write my own very small operating system, but I want to be assured I know all the ups and downs here so I don't run into mistakes constantly. If every card is different, there are different drivers for different cards, and there are thousands of cards, I need to know where to draw the line on proprietary access and adequate use of the hardware. I can't develop only for my specific card, because that application would remain useless and bad practice for the future, especially considering I want others to use my OS. Would I literally need to write/abstract a driver for every card? – None – 2013-01-28T21:10:27.010

That's extremely overwhelming and time-consuming for just one person to manage. Is there any way to "cut" the line here and get by with a sparse set of different interfaces without needing thousands of different driver programs? Could I maybe write just a simple driver for limited video access, and have it work on multitudes of different display hardware by taking a hardware-independent approach to accessing video memory, so that the kernel could run on different hardware without needing thousands of different sets of software for different instruction sets, etc.? – None – 2013-01-28T21:14:06.377

@Hennes Are you there? – None – 2013-01-28T21:58:27.983

@LackingConfidence if you want the performance the cards can offer, you need to use their individual proprietary interfaces. If you don't care about the performance, there is the VGA BIOS to set up a VESA framebuffer for you. Look at Linux's vesafb.txt for details (and of course source code as well in Linux). – derobert – 2013-01-30T21:05:15.557

@derobert A true framebuffer has to be hardware designated. – None – 2013-01-31T04:57:37.737

1

@LackingConfidence I'm unsure what you're trying to tell me. A call to the VESA BIOS Extensions gets the hardware to set up a framebuffer.

– derobert – 2013-01-31T16:10:42.520

Reads and writes to mapped memory at B800:0000 would directly modify bytes of RAM in the "usual" fashion; that range was used for CGA graphics emulation as well as text. Because EGA and VGA cards had 256K of RAM, and the region from A000:0000 to A000:FFFF is only 64K, reads and writes to the latter region went through some rather "interesting" logic; the effect of a write would depend not only upon the data written, but also upon the last address read. – supercat – 2014-04-28T18:06:14.303
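
A hedged sketch of the latched access supercat describes, again assuming Turbo C (outportb(), MK_FP()): in the planar 16-colour mode 12h, plotting a pixel means programming the Graphics Controller, then reading the target byte to load the four plane latches before the masked write.

    #include <dos.h>

    #define GC_INDEX 0x3CE  /* VGA Graphics Controller index port */
    #define GC_DATA  0x3CF

    /* Plot one pixel in 640x480x16 planar mode (write mode 0 + Set/Reset). */
    static void set_pixel(int x, int y, unsigned char colour)
    {
        volatile unsigned char far *vram =
            (volatile unsigned char far *)MK_FP(0xA000, 0);
        unsigned int offset = (unsigned int)y * 80 + (x >> 3); /* 640/8 bytes per row */
        unsigned char latch;

        outportb(GC_INDEX, 0x00); outportb(GC_DATA, colour);          /* Set/Reset value */
        outportb(GC_INDEX, 0x01); outportb(GC_DATA, 0x0F);            /* enable on all planes */
        outportb(GC_INDEX, 0x08); outportb(GC_DATA, 0x80 >> (x & 7)); /* mask one pixel */

        latch = vram[offset];  /* the read loads the four plane latches */
        vram[offset] = latch;  /* the write changes only the masked bit */

        outportb(GC_INDEX, 0x08); outportb(GC_DATA, 0xFF);            /* restore bit mask */
        outportb(GC_INDEX, 0x01); outportb(GC_DATA, 0x00);            /* disable Set/Reset */
    }

    int main(void)
    {
        union REGS r;
        r.h.ah = 0x00;
        r.h.al = 0x12;             /* INT 10h: set mode 12h (640x480x16) */
        int86(0x10, &r, &r);
        set_pixel(100, 50, 0x0F);  /* white pixel at (100, 50) */
        return 0;
    }

This is why the effect of a write depends on the last address read: the value stored in each plane is combined from the Set/Reset register, the bit mask, and whatever the preceding read left in the latches.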

10

Writing to a "fixed address" was essentially writing to the video card directly. All those ISA video cards (CGA, EGA, VGA) essentially had some RAM (and registers) mapped directly into the CPU's memory and I/O space.

So when you wrote a byte to a certain memory location, that character (in text mode) appeared on screen immediately, since you had in fact written into memory located on the video card, and the video card simply displayed from that memory.

This all looks very confusing today, especially considering that today's video cards are sometimes still called VGA (and they bear little resemblance to "true" VGA cards from the 1990s). However, even modern cards emulate some of the functionality of these older designs (you can boot DOS on most modern PCs and use DOS programs that write to video memory directly). Of course, nowadays it's all emulated in the video card's firmware.

haimg

Posted 2013-01-24T17:14:01.970

Reputation: 19 503

So how would that make sense with an onboard video card? I still don't get how VGA can be the address if the card is not VGA-dictated. – None – 2013-01-25T02:45:35.103

Even if your video card is integrated, it is still connected to the rest of the system via some kind of bus: PCIe, PCI, AGP, ISA, etc. These buses can connect external components to the motherboard, and can connect internal components inside the chipset (SATA, video, etc.) – haimg – 2013-01-25T04:15:00.840

But how do the buses know what to do with the addresses? This will differ between PCI and onboard cards, GPUs even, or integrated graphics microprocessors. – None – 2013-01-26T01:01:56.297

1

There is no difference whatsoever, whether wires are routed to the PCI connector, or if all connections are inside your northbridge. http://en.wikipedia.org/wiki/Conventional_PCI#PCI_address_spaces

– haimg – 2013-01-26T06:05:13.317

3

There isn't really a difference: if you're writing to the address of video memory, then the hardware will route that to the video card.

If you're writing your own operating system, you will probably have to do quite a lot of work to get the graphics card to map its memory how you want, starting by scanning the PCI bus to find the card.
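
For a flavour of what that scan involves, here is a minimal sketch using the legacy PCI configuration mechanism on I/O ports 0xCF8/0xCFC. It assumes a freestanding 32-bit x86 kernel built with GCC (for the inline assembly); the helper names are illustrative, and real code would also walk functions 1-7 and bridges.

    #include <stdint.h>

    /* Thin wrappers around the x86 OUT/IN instructions (ring 0 only). */
    static inline void outl(uint16_t port, uint32_t val)
    {
        __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint32_t inl(uint16_t port)
    {
        uint32_t val;
        __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Read one 32-bit register from a device's configuration space. */
    static uint32_t pci_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg)
    {
        uint32_t addr = 0x80000000u | ((uint32_t)bus << 16)
                      | ((uint32_t)dev << 11) | ((uint32_t)fn << 8)
                      | (reg & 0xFC);
        outl(0xCF8, addr);      /* select bus/device/function/register */
        return inl(0xCFC);      /* read the selected dword */
    }

    void find_display_controllers(void)
    {
        int bus, dev;
        for (bus = 0; bus < 256; bus++)
            for (dev = 0; dev < 32; dev++) {
                uint32_t id  = pci_read(bus, dev, 0, 0x00);
                uint32_t cls;
                if ((id & 0xFFFFu) == 0xFFFFu)
                    continue;   /* vendor 0xFFFF: no device present */
                cls = pci_read(bus, dev, 0, 0x08);
                if ((cls >> 24) == 0x03) {
                    /* Class 0x03 = display controller. BAR0 (register
                       0x10) typically points at its framebuffer memory. */
                }
            }
    }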

pjc50

Posted 2013-01-24T17:14:01.970

Reputation: 5 786

My graphics card is onboard, on my northbridge; it is not PCI-connected. I think it's an Intel GMA. – None – 2013-01-25T02:43:29.910

Your graphics processor may not occupy a PCI slot, but it's certainly sitting on one of the system's buses... even if it's on the motherboard, heck even if it's integrated directly as part of a system-on-a-chip. The same way your motherboard's SATA controllers are, or USB controllers, or... You should see the onboard GPU listed (and SATA, USB, etc. controllers), along with its PCI ID, if you use a sufficiently barebones PCI-bus inspection tool for your OS. Under Linux it's just 'lspci' on the command line. For Windows, I prefer Gabriel Topala's "SIW". Macs... might also have an 'lspci'? – FeRD – 2013-01-25T05:57:31.203

The OS doesn't matter in such a case, because the hardware architecture and platform are what count. The OS is just built atop that, and it is the main entry point for interacting with all hardware the kernel supports a service for. The architecture and its specification are what you're interfacing with. To determine your hardware, just look up your motherboard online. That's a fairly easy start. – None – 2013-01-25T15:57:41.843

+1 starting by scanning the PCI bus to find the card. – n611x007 – 2013-09-08T16:10:26.690

2

So far the answers have explained that old video cards worked by having video memory mapped into the processor's address space. This was the card's own memory. The northbridge knows to redirect requests for this mapped memory to the VGA device.

Then on top of that there were many expansions and new modes for VGA-compatible cards. This led to the creation of the VESA BIOS Extensions (VBE), which operate through int 10h. This supports basic 2D acceleration (BitBlt), hardware cursors, double/triple buffering, etc. It is the basic method for full-color display at any supported resolution (including high resolutions). This normally used memory internal to the card too, with the northbridge performing redirection as with classic VGA. This is the simplest way to get full-color, full-resolution graphics.
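
As a hedged sketch of such a VBE call through int 10h, assuming a 16-bit DOS compiler again: function 4F02h sets a VESA mode. The mode number 0x101 (640x480, 256 colours) is a common but not guaranteed assignment, so robust code would first query the mode list with function 4F00h.

    #include <dos.h>

    int main(void)
    {
        union REGS r;
        unsigned char far *window = (unsigned char far *)MK_FP(0xA000, 0);

        /* INT 10h, AX=4F02h: VBE set mode. BX=0x0101: 640x480x8, banked. */
        r.x.ax = 0x4F02;
        r.x.bx = 0x0101;
        int86(0x10, &r, &r);
        if (r.x.ax != 0x004F)    /* VBE functions return 004Fh on success */
            return 1;

        /* The first 64KiB bank of the framebuffer appears at A000:0000;
           pixel (x, y) sits at offset y * 640 + x while that stays below
           64KiB. Reaching further pixels needs bank switching (fn 4F05h)
           or a linear framebuffer on VBE 2.0+. */
        window[(unsigned int)50 * 640 + 100] = 15;
        return 0;
    }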

Next we have some direct method of accessing the GPU without using the BIOS, which provides access to the same features as VBE, and possibly additional ones. My understanding is pretty fuzzy here. I think this interface is device-specific, but I'm not at all sure of that.

Then there is the GPU interface that can support 3D acceleration, GP-GPU computation, etc. This definitely requires manufacturer-provided drivers or specifications for full use, and frequently there are substantial differences even between devices from the same manufacturer.

Kevin Cathcart

Posted 2013-01-24T17:14:01.970

Reputation: 471

A device driver is only necessary on an operating system. All hardware can be accessed directly on the architecture. – None – 2013-01-26T01:05:59.660

Sure, but the problem with direct access for the 3D portions is that substantial portions of the protocol are considered trade secrets by some of the major GPU manufacturers, and thus unless they are reverse-engineered, or a non-disclosure agreement is signed, a driver that already contains said knowledge is needed. – Kevin Cathcart – 2013-01-28T15:20:14.357

Depends on the card. If it's Nvidia or AMD it's going to be proprietary. An onboard Intel GMA would be much easier than an Nvidia GeForce. – None – 2013-01-29T05:14:22.923

I've updated the line in question to read "drivers or specifications". Specifications are sufficient when you can get them, which is indeed the case for many recent Intel graphics solutions. – Kevin Cathcart – 2013-01-29T17:23:17.367

For what it's worth, all modern Intel and most AMD graphics cards have very large swaths of their programming specifications published. Nvidia still remains silent on the issue, but the Nouveau open source graphics driver contains a lot of documentation (in the form of source code) on programming Nvidia graphics cards. Intel/AMD/Nvidia are more open than proprietary ARM ASICs these days; the embedded/mobile chips are the most secretive of all. – allquixotic – 2013-02-06T18:20:15.493

+1 The northbridge knows to redirect requests for this mapped memory to the VGA device – n611x007 – 2013-09-08T16:12:27.900