Frontier: Elite 2 Graphics Study

Intro Web Demo

Source Code

Introduction

Frontier: Elite 2 (FE2) was released in late 1993 on the Amiga by David Braben. The game was an ambitious spaceship sim / open galaxy sandbox, squeezed into a single floppy disk. You could launch your ship from a space station, jump to another star system, travel to a distant planet, land and dock at the local spaceport, fighting pirates and avoiding police along the way. Each planet had its own atmosphere, rendered differently based on the type. Moons, planets and stars moved across the sky. Any planet in a system could be visited. Any star system in the galaxy could be jumped to. Whole cities could be traversed.

Before playing the game, I got ahold of a demo disk that was just the intro sequence playing in a loop. I must have watched the game intro tens of times. The intro had a kicking classical soundtrack and impressive visuals for an Amiga game. It hinted at the huge and complex galaxy simulation. Screenshots of the game in magazines also looked gorgeous (for the time!), futuristic cities framed with alien skies, planets and moons moving across the sky. How was it all done? Due to the scope and variety of shapes in the scenes, I was a bit bamboozled really. I had somehow come across the basics of 3D projection, and it was a small step to figure out lines and maybe filled triangles from there, but I ultimately ended up with more questions than answers. And - as is often the case when you haven't seriously got into solving a problem yet - I had the wrong questions to begin with!

The Frontier 3D engine was tailored for rendering planets, moons, stars, spaceships and habitats on an Amiga. Here are some notable features of the Frontier 3D engine:

Target Platform

The original Amiga was based on the Motorola 68000 16-bit 7MHz CPU, and this was the instruction set that was targeted. At that time the 68K CPU was pretty commonly used; other home computers using it were the AtariST and the Mac. An AtariST version was developed at the same time as the Amiga version.

The resolution targeted was 320x200 with up to 16 colours onscreen at any one time. The colour palette was from a possible 4096 colours. The 16 colours were assigned via a 12-bit RGB format: 4 for Red, 4 for Green, 4 for Blue.
The 3D scene was rendered in the top 168 rows; the ship cockpit / controls were displayed in the bottom 32 rows (or, in the case of the intro, the Frontier logo).

The base Amiga could support a 32-colour palette, but the AtariST was limited to 16 colours, so the engine had to work with just 16. This may be why the Amiga version was limited to 16 colours as well.

The Amiga had a co-processor called the Blitter, capable of drawing lines and filling spans (i.e. polygons), but FE2 only used it to clear the screen, not for filling polygons. The Blitter was best suited to working in parallel with the CPU on a few large operations, to make use of the available memory bandwidth. It was tricky to use for rendering polygons, due to the multi-step process of drawing lines to a temp buffer, span filling the temp buffer and then compositing that to the screen buffer for each polygon. Each new operation in this process would have to be fed either via an interrupt or from a Copper list - for small polygons this would not have been worth it. This is probably why the CPU was used as the rasteriser - partly because CPU rasteriser code had to be written for the AtariST anyway, and partly because of the marginal, if any, performance return on using the Blitter. See: StackExchange - Why Amiga Blitter was not faster than AtariST for 3D

The PC version came slightly later. It was translated from the 68K assembly to x86 assembly by Chris Sawyer. PCs were a lot more powerful at the time, and a number of improvements were added over the Amiga version; these are listed below.

Models

Each star / planet / ship / city / starport / building / etc model is represented as a series of opcodes, which are executed every frame to build an ordered raster list. There are 32 opcodes (the full list is given below). The first few opcodes specify a 3D primitive to render, e.g. a circle, line, triangle, quad, or polygon. There are instructions to do matrix setup, projections and rotations. There are also instructions for conditional logic, branches and calculations; these are used for many creative purposes, chiefly:

Other instructions render entire 3D models in their own right:

Per Frame Breakdown

On the Amiga, screen clears are done using the Blitter. This happens in the background while the game gets on with updating game logic and deciding which models to render. The models are then rendered, and a raster list consisting of triangles, quads, lines, circles, etc. is built. The raster list is stored as a binary tree sorted by z-depth. Once all models are done adding items to the raster list, the game checks that the Blitter has finished clearing the screen's backbuffer (which it normally has by this point). Then everything is rendered back to front by traversing the binary tree.

Sidenote: I got a comment about triple buffering the output to get a higher FPS, the implication being that while the CPU is waiting around for a vertical blank to display the next frame, it could be doing real work on a future frame. I'm sure that could be exploited in some scenarios, but normally with this scheme you are increasing the frame rate at the expense of latency, and latency issues are usually why you want to increase the frame rate in the first place. This technique is more applicable when the graphics are non-interactive or an extra frame of latency just doesn't matter that much (but keep in mind, some people are more sensitive to input latency than others!).

Raster List

The raster list is organised as a binary tree keyed off a depth (z distance) per raster item. The depth of any given raster item is typically given by one of its vertices, or the min / max of all its vertices. The model code has wide control over which method is used.

The binary tree can contain batch nodes; these contain a list of raster items all at the same depth (making insertions after the first item in the batch a simple append). This enforces a raster order for a list of items that is independent of the view direction / object rotation. Typically, this is used to add extra detail to a polygon without the raster items z-fighting. With a modern renderer you would disable the depth test, or bake the detail into a texture / shader.

Another interesting thing is that the model code can add sub-tree nodes with their own internal sorting. The whole sub-tree has one z-depth in the outer tree, and any nodes added to the raster list while this sub-tree is active only get added to the sub-tree. This includes depth values that would otherwise sort in front of, or behind, objects in the outer tree. This is frequently used for landing gear, so no part of the landing gear pokes through the main ship model / wing.
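To make the structure concrete, here's a minimal sketch in C (the node names and layout are mine, not the original data structures) of a depth-keyed tree with batch and sub-tree nodes:

  #include <stdint.h>

  /* Hypothetical node kinds - names are mine, not from the original code. */
  typedef enum { NODE_ITEM, NODE_BATCH, NODE_SUBTREE } NodeKind;

  typedef struct RasterNode {
      int32_t depth;                /* z distance used as the sort key        */
      NodeKind kind;
      struct RasterNode *nearer;    /* children, sorted by depth              */
      struct RasterNode *further;
      struct RasterNode *batchTail; /* NODE_BATCH: appended items, same depth */
      struct RasterNode *subRoot;   /* NODE_SUBTREE: independently sorted     */
      /* ... raster item payload (primitive type, vertices, colour) ...       */
  } RasterNode;

  /* Insert keyed on depth; rendering later walks the tree far-to-near. */
  static void raster_insert(RasterNode **root, RasterNode *n)
  {
      while (*root)
          root = (n->depth > (*root)->depth) ? &(*root)->further
                                             : &(*root)->nearer;
      *root = n;
  }

Batch nodes sidestep the comparison entirely: items after the first are appended to the batch's tail, so they keep the order the model code emitted them in.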

Lighting

Lighting in-game was directional and coloured, coming from the main star. Each object rendered can have its light source direction & colour set up independently, so that the direction is correct for each model when the star is placed in the middle of the scene, e.g. the orrery view or viewing a distant planet.

In the intro and ship view screen the light source was fixed and came from the top-right of the screen, i.e. above the subject and to the side, much like in film.

In most places where a colour is specified, two special bits of the colour are used to indicate how lighting should be applied.

The 12-bit colour format expected by the HW is as follows:

  | Colour | Red  | Green | Blue |
  | Bits   | ba98 | 7654  | 3210 |

But in FE2 when a colour is specified, one bit from each colour was given a special meaning:

  Bit 8 - If set emit colour directly / don't add normal colour
  Bit 4 - Add the scene base colour (sometimes diffuse, sometimes just object instance colour)
  Bit 0 - This bit is not used, typically the model OpCode borrows this bit

So the colour per bit layout was as follows instead:

  | Red | !Shade | Green | Add Global Colour | Blue | Unused |
  | ba9 | 8      | 765   | 4                 | 321  | 0      |

The computed normal colour (directional light from the star) and the global (diffuse) colour are added directly to the base colour. Since the object colour can only be specified at half intensity, the rest of the colour contribution comes from the global diffuse colour and / or the normal colours.
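As a rough illustration of the scheme above (my own reconstruction, not the original routine, and glossing over exactly how the 3 specified bits per channel map onto the 4-bit output): resolving a colour word might look like this, where normalColour is the directional contribution from the face normal and globalColour is the scene / instance colour:

  #include <stdint.h>

  /* Add two 12-bit RGB colours, clamping each 4-bit channel (helper is mine). */
  static uint16_t add_rgb444(uint16_t a, uint16_t b)
  {
      uint16_t out = 0;
      for (int shift = 0; shift <= 8; shift += 4) {
          uint16_t c = ((a >> shift) & 0xF) + ((b >> shift) & 0xF);
          out |= (c > 0xF ? 0xF : c) << shift;
      }
      return out;
  }

  /* Resolve a model colour word: bit 8 suppresses the normal (shading)
     contribution, bit 4 adds the global / scene colour. Sketch only.   */
  static uint16_t resolve_colour(uint16_t c, uint16_t normalColour,
                                 uint16_t globalColour)
  {
      uint16_t out = c & 0xEEE;          /* the 3 specified bits per channel   */
      if (!(c & 0x100))                  /* bit 8 clear: apply normal lighting */
          out = add_rgb444(out, normalColour);
      if (c & 0x010)                     /* bit 4 set: add the global colour   */
          out = add_rgb444(out, globalColour);
      return out;
  }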

The set of normal colours to add is set up with each new reference frame (either copied from the parent model, or passed in from the code that started rendering the model).

The normal colours and global colour could be overridden by the model code for sub-models; this allowed for additional lights in the scene (albeit one at a time), e.g. hazard lights on space stations and ground control.

Matrix Setup

Rotations were done with a 3x3 fixed-point matrix. Since all the values of a rotation matrix are between -1 and 1, each value could be represented as a signed 16-bit integer implicitly divided by 0x8000, i.e. 0x8000 was -1 and 0x7FFF was ~1.

A scale is built into each model & object, and applied directly to the vertices using bit shifts (the shifting can be cancelled out if needed, say when rendering an extremely large object from far away).

Offsets / translations are done with separate 3D vectors, and rotated as needed.
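As a minimal sketch of the fixed-point scheme (my reconstruction in C, not the original 68K code): each matrix element is a Q15 fraction, so rotating a point is nine 16x16->32 multiplies followed by shifts right of 15:

  #include <stdint.h>

  /* 3x3 rotation matrix in Q15: 0x7FFF ~= 1.0, 0x8000 == -1.0 */
  typedef struct { int16_t m[3][3]; } Mat3Q15;

  /* Rotate a 16-bit vector by a Q15 matrix: multiply into 32 bits, then >>15.
     A sketch of the scheme, not the original routine.                        */
  static void rotate_q15(const Mat3Q15 *r, const int16_t v[3], int16_t out[3])
  {
      for (int i = 0; i < 3; i++) {
          int32_t acc = 0;
          for (int j = 0; j < 3; j++)
              acc += (int32_t)r->m[i][j] * v[j];
          out[i] = (int16_t)(acc >> 15);
      }
  }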

When projecting vertices:

It should be noted this is done lazily on a per-vertex basis, as the vertex is requested by the model code. This is because A) not all vertices are rendered, since the model code contains conditionals and B) vertices can be dynamically positioned by model code.

Scale and Numeric Precision

The 68K processor could only do 16-bit multiplies and divides, and each took many cycles to complete (MULS - Multiply Signed - took 70 cycles; DIVS - Divide Signed - took a whopping 158 cycles; compare this to add / subtract, which took around 6 cycles). Hardware floating point was very rare, as it required an upgraded or expensive Amiga. Not much software supported HW floating point, and for real-time 3D it often wasn't faster than a fixed-point scheme anyway (much of the precision was unneeded / could be worked around). End result: the Frontier engine implemented its own fixed-point & floating-point code using integer instructions.

There are two schemes used (in the renderer):

There are lookup tables & supporting code for many operations:

Two software floating point number sizes are supported:

Note that's a lot of bits for the exponent, allowing for some very large numbers (that's not 15 bit shifts, it's 32K bit shifts!). This is mainly because it is easier this way when implemented with 16-bit integer math primitives. Compare the exponent size to IEEE 754 floating point, which uses only 11 bits for the exponent in the 64-bit format!
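For illustration only - the exact FE2 layouts aren't reproduced here - a software float built from 16-bit pieces might pair a 16-bit mantissa with a full 16-bit exponent, which is what makes such enormous shift counts representable:

  #include <stdint.h>

  /* Illustrative only: value = mantissa * 2^exponent, both 16-bit.
     Not the exact FE2 layout.                                      */
  typedef struct { int16_t mantissa; int16_t exponent; } SoftFloat16;

  /* Multiply two software floats: 16x16->32, then bring the mantissa
     back down to 16 bits and adjust the exponent to compensate.     */
  static SoftFloat16 sf_mul(SoftFloat16 a, SoftFloat16 b)
  {
      int32_t m = (int32_t)a.mantissa * b.mantissa;
      SoftFloat16 r;
      r.mantissa = (int16_t)(m >> 15);
      r.exponent = (int16_t)(a.exponent + b.exponent + 15);
      return r;
  }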

Palette Fitting

Two separate palettes are used, one for the 3D area (top 168 rows) and one for the ship cockpit UI (bottom 32 rows), each with 16 colours. The palette is adjusted by a vertical blank interrupt (for the top part) and a copper interrupt (for the bottom part). Note that this is done based on the beam position, so it is independent of the 3D frame render loop, at a constant 50 times per second.

The colour space is 12-bit for a total of 4096 colours, but only 16 of these can be used at any one time. A clever scheme is used to get the most out of these 16 colours. A virtual palette with space for up to 62 colours is created. As new colours are requested by the renderer, they are recorded in the virtual palette. Items added to the draw list reference these virtual colour indexes. Later these will be resolved to real HW palette entries. An array of 4096 slots is used to keep track of which virtual colours are used so far in a frame (it stores a virtual index or 0 for unassigned slots).
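A rough sketch of that colour request path (the names and overflow handling are mine):

  #include <stdint.h>

  #define MAX_VIRTUAL_COLOURS 62

  static uint8_t  virtualIndexFor[4096];  /* 12-bit RGB -> virtual index, 0 = unassigned */
  static uint16_t virtualPalette[MAX_VIRTUAL_COLOURS + 1];
  static int      virtualCount;

  /* Return the virtual palette index for a 12-bit colour, allocating a new
     entry the first time the colour is seen this frame. Sketch only; what
     happens when the virtual palette fills up is glossed over.            */
  static uint8_t request_colour(uint16_t rgb444)
  {
      rgb444 &= 0xFFF;
      uint8_t idx = virtualIndexFor[rgb444];
      if (idx == 0 && virtualCount < MAX_VIRTUAL_COLOURS) {
          idx = (uint8_t)++virtualCount;
          virtualPalette[idx] = rgb444;
          virtualIndexFor[rgb444] = idx;
      }
      return idx;   /* draw-list items store this virtual index */
  }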

If there are 15 or fewer colours, then we are done - the virtual colours can be assigned to the palette directly (it's 15 because 1 colour out of the 16 is the background colour, and this is never grouped with another colour).

If we've more than 15 colours, well, now some magic needs to happen! The virtual colours need to be merged to fit into the 15 available palette entries. This is done by recursively splitting / partitioning the assigned virtual colours into 15 buckets (sort of like quicksort but for clustering; if you know the name of this clustering algorithm let me know!). Each partition splits on the largest colour component difference. So, for example, if there is a bigger red range (than blue or green) in the sublist of virtual colours, the red midpoint is used to partition the sublist. This is done repeatedly - recursively splitting the virtual colours into ranges until we have 15 total ranges. The size of each range at this point is variable: a range can have 1 colour or, in the absolute worst case, ~48 (62-14) colours. But normally there will be a small number of colours in each range, e.g. 1, 2 or 3. At the end of this process, each range is resolved to a single colour by picking the midpoint of the range (per colour component). Finally, these colours are copied to be picked up by the HW on the next 3D frame swap.
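Here's one way that recursive split could be structured, as a C sketch (my reconstruction; the real code works on the frame's virtual palette entries and splits until 15 ranges remain):

  #include <stdint.h>

  /* Extract one 4-bit channel (0 = blue, 1 = green, 2 = red) from 12-bit RGB. */
  static int channel(uint16_t rgb, int c) { return (rgb >> (c * 4)) & 0xF; }

  /* Recursively split colours[lo..hi) into 'buckets' groups, always splitting
     on the channel with the widest range at its midpoint. Sketch only.       */
  static void split_colours(uint16_t *colours, int lo, int hi, int buckets)
  {
      if (buckets <= 1 || hi - lo <= 1)
          return;                           /* this range becomes one palette entry */

      int minC[3] = {15, 15, 15}, maxC[3] = {0, 0, 0};
      for (int i = lo; i < hi; i++)
          for (int c = 0; c < 3; c++) {
              int v = channel(colours[i], c);
              if (v < minC[c]) minC[c] = v;
              if (v > maxC[c]) maxC[c] = v;
          }

      int best = 0;                         /* channel with the largest range */
      for (int c = 1; c < 3; c++)
          if (maxC[c] - minC[c] > maxC[best] - minC[best]) best = c;

      int mid = (minC[best] + maxC[best] + 1) / 2;

      /* Partition: colours below the midpoint to the front (one quicksort-style pass). */
      int split = lo;
      for (int i = lo; i < hi; i++)
          if (channel(colours[i], best) < mid) {
              uint16_t t = colours[split]; colours[split] = colours[i]; colours[i] = t;
              split++;
          }
      if (split == lo || split == hi)       /* degenerate split: stop subdividing */
          return;

      split_colours(colours, lo, split, buckets / 2);
      split_colours(colours, split, hi, buckets - buckets / 2);
  }

Each resulting range is then collapsed to a single colour at the midpoint of its components, as described above.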

I should note the PC version doesn't use this colour matching scheme, it has more colour palette entries available than even the virtual colour list on the Amiga, so it rarely needs to do any colour merging / palette compression at all. This is the version I've implemented.

Planet Rendering

The planet renderer is very special: it has its own raster list primitives, uses its own floating point number scheme, and is mostly custom code rather than being built from the existing model opcodes.

There are three types of surface features:

The surface polygons are mapped to the surface of the planet by treating them as arcs across the surface. Procedural sub-division of the surface polygon line segments is used to add surface detail as you get closer to the planet.
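The idea, roughly (a sketch, not FE2's fixed-point routine): the midpoint of an edge's endpoints is pushed back out to the planet's radius, and the two halves are subdivided again while more detail is wanted:

  #include <math.h>

  typedef struct { double x, y, z; } Vec3;

  /* Midpoint of two points on a sphere, pushed back out to the given radius.
     Applied recursively, this turns a straight edge into an arc across the
     surface - a sketch of the idea only.                                    */
  static Vec3 arc_midpoint(Vec3 a, Vec3 b, double radius)
  {
      Vec3 m = { (a.x + b.x) * 0.5, (a.y + b.y) * 0.5, (a.z + b.z) * 0.5 };
      double len = sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
      m.x *= radius / len; m.y *= radius / len; m.z *= radius / len;
      return m;
  }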

A set of atmospheric colours can be specified, these are rendered when looking at a planet side on.

Which brings us to the view-dependent rendering paths:

When it comes to rasterising the planet, a custom multi-colour span renderer is used. The renderer generates an outline of the planet, then adds surface details, and finally applies great arcs for shading on top. This generates a set of colour flips / colour changes per span line within the overall outline, which are sent to the rasteriser.
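A simplified view of the span data (a hypothetical layout, just to illustrate the colour-flip idea): each scanline carries a sorted list of x positions where the colour changes, and the rasteriser fills between successive flips:

  #include <stdint.h>

  #define SCREEN_W  320
  #define MAX_FLIPS 16

  /* One scanline of the planet: colour change positions inside the outline.
     Hypothetical layout, for illustration.                                  */
  typedef struct {
      int     count;
      int16_t x[MAX_FLIPS];       /* sorted x positions of colour changes   */
      uint8_t colour[MAX_FLIPS];  /* colour to use from x[i] to x[i+1]      */
  } SpanLine;

  /* Fill one scanline between successive flips; the last x marks the right
     edge of the outline.                                                   */
  static void draw_span_line(uint8_t *row, const SpanLine *s)
  {
      for (int i = 0; i + 1 < s->count; i++)
          for (int x = s->x[i]; x < s->x[i + 1] && x < SCREEN_W; x++)
              row[x] = s->colour[i];
  }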

Planet rings were implemented in the model code, using the primitives available there, as were weather pattern effects.

Model Data

Each model in the game had the same basic model structure:

Vertex Encoding

Each vertex was specified by 32 bits.

The vertex type controls how the vertex is computed and projected. Even though I said the last 3 bytes of a vertex were the (X, Y, Z) values, this is not true for all vertex types. For some types these values are references to other vertices, or even model code registers. The different vertex types are:

  | Hex#       | Vertex Type         | Params                                         | Description                                                                       |
  | 00, 01, 02 | Regular             | X, Y, Z                                        | Regular vertex with X, Y, Z directly specified                                    |
  | 03, 04     | Screenspace Average | Vertex Index 1, Vertex Index 2                 | Take screenspace average of two other vertices                                    |
  | 05, 06     | Negative            | Vertex Index                                   | Negative of another vertex                                                        |
  | 07, 08     | Random              | Vertex Index, Scale                            | Add random vector of given size to vertex                                         |
  | 09, 0A     | Regular             | X, Y, Z                                        | Regular vertex with X, Y, Z directly specified                                    |
  | 0B, 0C     | Average             | Vertex Index 1, Vertex Index 2                 | Average of two vertices                                                           |
  | 0D, 0E     | Screenspace Average | Vertex Index 1, Vertex Index 2                 | Take screenspace average of two other vertices, but don't project sibling vertex  |
  | 0F, 10     | AddSub              | Vertex Index 1, Vertex Index 2, Vertex Index 3 | Add two vertices, subtract a 3rd                                                  |
  | 11, 12     | Add                 | Vertex Index 1, Vertex Index 2                 | Add two vertices                                                                  |
  | 13, 14     | Lerp                | Vertex Index 1, Vertex Index 2, Register       | Lerp of two vertices, as specified by model code register                         |

When vertices are referenced by model code, negative numbers reference vertices in the parent model. Indexes are stored multiplied by 2, with even-numbered indexes referencing vertices in the current model as-is, and odd-numbered indexes referencing the same vertices but with the x-axis sign flipped.

The parent model can choose what (0-4) vertices to export to sub-models, this is done as part of the 'MODEL' opcode.

A vertex index in model code is stored as 8 bits, with only positive vertices from the current model, and a bit for flipping the x-axis; this results in a total of 63 possible vertices per model / sub-model.
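In code form, the vertex lookup described above might read something like this (a sketch; parent-model references and the exact data layout are glossed over):

  #include <stdint.h>

  typedef struct { int16_t x, y, z; } Vec3i;

  /* Resolve an 8-bit vertex reference: the stored index is the vertex number
     times two, with the low bit selecting an x-mirrored copy. Sketch only.  */
  static Vec3i lookup_vertex(const Vec3i *verts, uint8_t ref)
  {
      Vec3i v = verts[ref >> 1];   /* actual vertex index            */
      if (ref & 1)
          v.x = -v.x;              /* odd reference: flip the x axis */
      return v;
  }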

Normal Encoding

Normals were also encoded in 32-bits.

A normal index in model code is stored as 8 bits, and one of these bits (the LSB) flips the x-axis. This results in a total of 127 possible normals per model / sub-model.

When normals are referenced by index, the index is divided by 2 to get the actual normal from the model data, with even numbers referencing normals from the list as-is, and odd-numbered indexes being x-axis flipped.

Model Code

Frontier renders the 3D world using its own bytecode (well 16-bit code, because it was written with 16-bit buses in mind). This code specifies the primitives to display, level of detail calculations, animations, variation and more.

The original source code was largely, if not completely, written directly in assembly (the only code that looks a bit different is the planet renderer). So the model code was likely typed directly in as data blocks in the assembly files; most instructions / model data align to 4, 8, 12 or 16 bits, which is perfect for just typing in the hex values (4 bits per character, e.g. 7002 0102 - draws a red line from vertex #1 to vertex #2). To easily read the model code without documenting every line, I added code to convert it to an intermediate text format - this syntax is completely made up by me on the spot and not part of the original game.

This text can be compiled into the original model code as well - which allows making modifications / trying different things out. This all adds a lot of complexity, so I've put it in its own set of optional files modelcode.h and modelcode.c.
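As a rough skeleton of the interpreter loop (the bit layout here - the low 5 bits selecting the opcode - is my assumption for illustration; the real encoding packs parameters per opcode into the remaining bits and following words):

  #include <stdint.h>

  /* Skeleton only: walk 16-bit words, dispatch on the opcode bits.
     Parameter decoding is opcode specific and mostly omitted here. */
  static void run_model_code(const uint16_t *code)
  {
      for (;;) {
          uint16_t word = *code++;
          switch (word & 0x1F) {        /* assumed opcode field */
          case 0x00:  /* DONE */
              return;
          case 0x02: {                  /* LINE */
              uint16_t verts = *code++; /* e.g. 0x0102: vertex #1 to vertex #2 */
              /* add_line_to_raster_list(word, verts); */
              break;
          }
          /* ... remaining opcodes from the table below ... */
          default:
              return;                   /* unhandled in this sketch */
          }
      }
  }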

Here's a full list of the commands:

  | #  | Hex# | Opcode              | Description                                                                                    |
  | 0  | 00   | DONE                | Exit this model's rendering                                                                    |
  | 1  | 01   | CIRCLE              | Add circle, highlight or sphere to draw list                                                   |
  | 2  | 02   | LINE                | Add line to draw list                                                                          |
  | 3  | 03   | TRI                 | Add triangle to draw list                                                                      |
  | 4  | 04   | QUAD                | Add quad to draw list                                                                          |
  | 5  | 05   | COMPLEX             | Add complex polygon to draw list                                                               |
  | 6  | 06   | BATCH               | Start / end a draw list batch (all items in the batch have the same Z value)                   |
  | 7  | 07   | MIRRORED_TRI        | Add a triangle and its x-axis mirror to draw list                                              |
  | 8  | 08   | MIRRORED_QUAD       | Add a quad and its x-axis mirror to draw list                                                  |
  | 9  | 09   | TEARDROP            | Calculate Bézier curves for engine plume and add to draw list                                  |
  | 10 | 0A   | VTEXT               | Add vector text (3D but flat) to draw list                                                     |
  | 11 | 0B   | IF                  | Conditional jump                                                                               |
  | 12 | 0C   | IF_NOT              | Conditional jump                                                                               |
  | 13 | 0D   | CALC_A              | Apply calculation from group A                                                                 |
  | 14 | 0E   | MODEL               | Render another model given a reference frame                                                   |
  | 15 | 0F   | AUDIO_CUE           | Trigger an audio sample based on distance from viewer                                          |
  | 16 | 10   | CYLINDER            | Draw a cylinder (one end can be bigger than the other, so really it's a right truncated cone)  |
  | 17 | 11   | CYLINDER_COLOUR_CAP | Draw a capped cylinder                                                                         |
  | 18 | 12   | BITMAP_TEXT         | Add bitmap text to draw list                                                                   |
  | 19 | 13   | IF_NOT_VAR          | Conditional jump on variable                                                                   |
  | 20 | 14   | IF_VAR              | Conditional jump on variable                                                                   |
  | 21 | 15   | ZTREE_PUSH_POP      | Push / pop sub ztree                                                                           |
  | 22 | 16   | LINE_BEZIER         | Add Bézier line to draw list                                                                   |
  | 23 | 17   | IF_SCREENSPACE_DIST | Jump if two vertices are within / past a certain distance in screen coordinates                |
  | 24 | 18   | CIRCLES             | Add multiple circles / spheres / stars to draw list                                            |
  | 25 | 19   | MATRIX_SETUP        | Setup tmp matrix, either rotate to light source or set to identity                             |
  | 26 | 1A   | COLOUR              | Set base colour (added to all colours), or directly set a normal's colour tint                 |
  | 27 | 1B   | MODEL_SCALE         | Render sub-model with change of scale                                                          |
  | 28 | 1C   | MATRIX_TRANSFORM    | Transform tmp matrix using rotation about axis and axis flips                                  |
  | 29 | 1D   | CALC_B              | Apply calculation from group B                                                                 |
  | 30 | 1E   | MATRIX_COPY         | Overwrite model view matrix with tmp matrix                                                    |
  | 31 | 1F   | PLANET              | Astronomical body renderer w/ surface details and halo / atmosphere effects                    |

Calculations (CALC_A & CALC_B)

The calculation opcodes write to a set of 8 16-bit temporary registers. These can be read as inputs to various other commands, e.g. IF_VAR, IF_NOT_VAR, COLOUR, and can also be used to calculate vertex positions.

The calculation opcodes can read two 16-bit variables from:

The object inputs are things like the tick #, time of day, date, landing time, object instance id, equipped mines / missiles, etc.

The list of available operations is as follows:

  | #  | Hex# | List | OpCode          | Description                                                                   |
  | 00 | 0    | A    | Add             | Add two variables and write to output                                         |
  | 01 | 1    | A    | Subtract        | Subtract one variable from another and write to output                        |
  | 02 | 2    | A    | Multiply        | Multiply two variables and write to output                                    |
  | 03 | 3    | A    | Divide          | Divide one variable by another and write to output                            |
  | 04 | 4    | A    | DivPower2       | Unsigned divide by a given power of two                                       |
  | 05 | 5    | A    | MultPower2      | Multiply by a given power of two                                              |
  | 06 | 6    | A    | Max             | Select the max of two variables                                               |
  | 07 | 7    | A    | Min             | Select the minimum of two variables                                           |
  | 08 | 8    | B    | Mult2           | Second multiply command (one is probably signed, the other unsigned)          |
  | 09 | 9    | B    | DivPower2Signed | Signed divide by a power of two                                               |
  | 10 | A    | B    | GetModelVar     | Read model variable given by offset1 + offset2                                |
  | 11 | B    | B    | ZeroIfGreater   | Copy first variable, or 0 if greater than variable 2                          |
  | 12 | C    | B    | ZeroIfLess      | Copy first variable, or 0 if less than variable 2                             |
  | 13 | D    | B    | MultSine        | From 16-bit rotation calculate sine and multiply by another input variable    |
  | 14 | E    | B    | MultCos         | From 16-bit rotation calculate cosine and multiply by another input variable  |
  | 15 | F    | B    | And             | Binary AND of two variables                                                   |
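A sketch of how one of these opcodes might be evaluated (names, operand fetch and the group A / B split are simplified here):

  #include <stdint.h>

  /* Evaluate one calculation opcode: read two 16-bit inputs, write one of the
     eight temporary registers. Operand encoding is glossed over; only a few
     of the operations from the table above are shown.                        */
  static void calc_op(int16_t regs[8], int op, int16_t a, int16_t b, int dest)
  {
      int32_t r = 0;
      switch (op) {
      case 0x0: r = a + b; break;                    /* Add            */
      case 0x1: r = a - b; break;                    /* Subtract       */
      case 0x2: r = (int32_t)a * b; break;           /* Multiply       */
      case 0x3: r = b ? a / b : 0; break;            /* Divide         */
      case 0x6: r = a > b ? a : b; break;            /* Max            */
      case 0x7: r = a < b ? a : b; break;            /* Min            */
      case 0xF: r = a & b; break;                    /* And (group B)  */
      /* ... remaining ops ... */
      }
      regs[dest & 7] = (int16_t)r;
  }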

PC Differences

The PC version rendered at the same resolution as the Amiga version (320x200) but with 256 colours instead of 16. This was done in 'Mode 13h' with a byte per pixel, i.e. 'chunky' and not planar graphics. Colours were still assigned in 4-bit Red, 4-bit Green, 4-bit Blue (i.e. 12-bit colour, for a total of 4096 possible colours). Since the Blitter is Amiga-specific, the screen clear was done by the CPU writing the background colour index to the screen back buffer. With chunky graphics the rasteriser could be simplified: pixels could be written directly without the need for bit operations.

Since there's no Copper, the 256 colours had to cover both the UI and 3D parts of the display; the palette was split 50/50, 128 colours for each.

With way more palette colours available than the Amiga version, a new colour matching scheme for the 3D scene was developed. Changing the colour palette on the PC was slow, so the number of colour changes between frames had to be minimised. A reference counting / garbage collection mechanism was employed to keep track of colours between frames. Because of this need to keep colours consistent, the virtual palette list is maintained between frames, and garbage collection is done to free space for new colours. This was achieved with a virtual palette free list and a per-entry state. This is the version actually used in the remake.
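A sketch of what that bookkeeping might look like (the entry states, free list and names are my own; the real code also has to handle the palette filling up):

  #include <stdint.h>

  #define PAL_3D_ENTRIES 128

  /* Hypothetical per-entry state for the PC palette allocator. */
  typedef struct {
      uint16_t rgb444;    /* colour currently occupying this entry    */
      uint16_t refCount;  /* how many live draw items reference it    */
      uint8_t  inUse;
  } PalEntry;

  static PalEntry pal[PAL_3D_ENTRIES];
  static uint8_t  freeList[PAL_3D_ENTRIES];
  static int      freeCount;

  /* Find (or allocate) a hardware palette entry for a colour; entries whose
     refCount has dropped to zero are returned to the free list elsewhere.   */
  static int palette_acquire(uint16_t rgb444)
  {
      for (int i = 0; i < PAL_3D_ENTRIES; i++)
          if (pal[i].inUse && pal[i].rgb444 == rgb444) {
              pal[i].refCount++;
              return i;                 /* colour kept stable across frames  */
          }
      if (freeCount == 0)
          return -1;                    /* would need merging / reuse        */
      int i = freeList[--freeCount];
      pal[i] = (PalEntry){ rgb444, 1, 1 };
      return i;                         /* only this entry needs a VGA write */
  }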

Another big change: texture mapping was added. The background star-field was replaced with a texture of the galaxy applied to a skybox. Many ships were given textures for extra detailing. The texture maps were fixed at 64x64 texels. But I still preferred the look of the Amiga version! This was probably down to a lack of time spent on art for the PC version, rather than any fundamental jankiness with texturing at the time. With better textures the texture mapping could have been quite a visual improvement over the original - there are some great lo-res art styles, e.g. picoCAD.

Remake Updates

When converting the renderer to C, I couldn't help but fix a few bugs / add a couple of minor quality improvements. I tried to be true to the original version though, and not change the overall feel or look of the engine.

Reverse Engineering

I started with the PC version, mainly as a way to learn the ins and outs of x86 DOS coding, plus it was possible to use the free version of IDA to disassemble and annotate the assembly code.

It's important to say there's nothing too special about reverse engineering some code; it's like programming on someone else's project, except you stop after the part where you try to understand the existing code :) We are all guilty of skipping the understanding part at some point, but when reverse engineering that's literally the entire job, so we are forced to do it - it's good practice. Doing this for assembly is like a mix of a crossword puzzle and archaeology, dusting off little bits of code at a time and trying to fit them into the whole.

At a basic level a program is just a bunch of transforms from input to output. And generally we know something about the input and output, so we can start reverse engineering from the code that touches either. To find the code being executed, you can disassemble the original executable in some cases, but often it is better to dump the memory of the process once it has started executing. This is because code can be loaded into memory in many different ways, and it's not always possible for reverse engineering tools to infer what the final layout will be. Generally when you dump the code you can also dump the stack and maybe even a program counter trace. This can be used to determine which sections of memory to disassemble. From here you want to figure out common functions, common data structures and global state. Figuring out any of these lets you cross-reference new parts of the code, and cross-reference them in turn, and so on.

There's a programming saying that goes something like, "to understand a program show me the data layout, the code can be inferred from there". So finding global state and data structures is probably the most important, because you can often infer new code from there, without painstakingly having to follow program flow and what's stored in each register / stack slot ("But I ...like... the misery, Father").

So, first I looked for the VGA interrupt triggers used to control the graphics, and worked backward to find the low-level raster routines and raster list traversal. Running in the DOSBox debugger I was able to verify which code was executed at which point. Initially I concentrated on the startup sequence (which installs the interrupts) and the interrupt handlers themselves, then on how they were used in the first scenes of the intro, revealing some of the main loop along the way. At this point I started coding my own versions of the draw algorithms in C, with a test harness using DearIMGUI. For many functions it was easy to see what was happening, and I made many guesses based off the inputs to each draw function.

The trickiest code to figure out at this stage was for rendering complex polygons with Bézier curves. After examining the code a bit, it became clear it was a span rasteriser - with a setup stage, a list of vertices & edges, and then a final draw function for filling the spans. When drawing certain edges, the span rasteriser would subdivide the specified edge into further edges; this was clearly for rendering curves - there were several paths based on how long the edge was: the longer the edge, the more subdivisions generated. The subdivision code generated a list of values that were used as a series of additions in the loop to generate successive edges, so I guessed the curves were drawn by adding a series of derivatives. Searching around I found Bézier curves could be rendered this way and filled in the missing pieces. The main issue from here was just verifying I was doing the calculation the same way as FE2.
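For reference, here's the standard cubic Bézier forward-differencing idea as a small C sketch (one axis shown; FE2 presumably does the equivalent in fixed point): compute the first, second and third differences once, then each new point is just three additions:

  /* Emit n+1 points along a cubic Bézier using forward differencing: after a
     one-time setup, every point costs three additions per axis.             */
  static void bezier_forward_diff(double p0, double p1, double p2, double p3,
                                  int n, double *out)
  {
      /* Polynomial coefficients of the cubic: p(t) = a*t^3 + b*t^2 + c*t + d */
      double a = -p0 + 3.0 * p1 - 3.0 * p2 + p3;
      double b =  3.0 * p0 - 6.0 * p1 + 3.0 * p2;
      double c = -3.0 * p0 + 3.0 * p1;
      double d =  p0;

      double h  = 1.0 / n;
      double d1 = a * h * h * h + b * h * h + c * h;     /* first difference  */
      double d2 = 6.0 * a * h * h * h + 2.0 * b * h * h; /* second difference */
      double d3 = 6.0 * a * h * h * h;                   /* third difference  */

      double p = d;
      for (int i = 0; i <= n; i++) {
          out[i] = p;
          p  += d1;          /* successive points via repeated addition */
          d1 += d2;
          d2 += d3;
      }
  }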

Using Béziers for rendering vector fonts is not surprising - but it turns out they are used for many purposes: rendering curved surfaces of ships, the galaxy background, planets, stars, planet rings, the bridge in the intro, etc - the engine really makes heavy use of them. The efficiency with which they are rendered is impressive, and gives the game its own unique look.

Anyway, after getting comfortable with x86 assembly and the patterns that were used in the game, things started to get a lot easier. The biggest time sink was time spent in the debugger, trying to keep track of registers, globals, and stack, so I tried to avoid that as much as possible. My approach was to first understand the raster list and be able to render everything sent to it, so I would dump raster lists from the main game and try to render them with my own raster code, largely ignoring the details of the FE2 raster functions for now and just figuring out the inputs and outputs from viewing the functions in IDA.

Later I built a DearIMGUI UI for understanding the execution of the model code. I set up a way to jump to any part of the intro at any time and save the timestamp between executions, so I could focus on specific scenes one at a time. This was also useful for scrubbing for bugs after making any potentially breaking changes. The model code and raster list could be dumped and execution traced in realtime. This was great for understanding what was being used in any given scene and annotating the model code to make bug hunting easier.

Back in IDA land, it started to become clear I should have been using the graph view for the disassembly all along! The graphical structure really helps give you a mental model of the code - there is not much structure that is quickly visible from a flat assembly listing (unlike indented code), so it's easy to get lost.

At some point in the process, I found the following existing reverse engineering work; this was an immense help both to verify my own work and to find new jumping-off points in the code when I started to run out of steam:

Music & Sound

The FE2 music is implemented in a custom module format. The custom format has many of the features typical of a .mod player: sample list, tracks and sample effects (volume fade, volume changes, vibrato, portamento). There are only 8??? total samples / instruments used by all the built-in tunes.

When sound effects are required by the game, they are slotted into free audio channels - or temporarily override an audio channel used by the music if no free audio channel is found. It's surprising how well this works.

All music and sounds can be played by the C remake; you will need to modify the code to hear them, though.

On the PC, the sound hardware available was varied. Many modes were supported, from built-in PC speaker tones to SoundBlaster cards. Music was played directly as MIDI output if the hardware was available, but I still preferred the Amiga version. Soft Spot for Amiga Alert. #amigadidnothingwrong #theamigathatgotaway.

Module System

This is not part of the 3D engine, or even covered by my intro conversion, but the original code had a module system where different parts of the game slotted into the main run-loop. Each module was responsible for some part of the game and had hooks for setup & unload, 2D rendering, 3D rendering, input, and game logic updates. The main game had a big structure for game state and a jump table of game functions; each module could use these to interact with the main game - to query and modify game state, play sounds, render 3D and 2D graphics, etc. The PC version made use of this to fit into 640K, by only loading the parts of the code that were needed, e.g. unloading the intro during the main game. It may have been useful during development as well, as a kind of hot-loading system, since compiling the whole game probably took a while given the amount of assembly! But this is pure speculation...
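For illustration, such a module interface might look like this in C (names are invented; the original is assembly with jump tables, but the hooks described above map naturally onto function pointers):

  /* Hypothetical module interface - a sketch of the hooks described above. */
  typedef struct GameState GameState;      /* the big shared game-state block */
  typedef struct GameApi   GameApi;        /* jump table into the main game   */

  typedef struct {
      void (*setup)(GameState *, const GameApi *);
      void (*unload)(GameState *);
      void (*update)(GameState *);         /* game logic tick                 */
      void (*render3d)(GameState *);
      void (*render2d)(GameState *);
      void (*input)(GameState *, int key);
  } GameModule;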

Links

Acknowledgements

Based on original data and algorithms from "Frontier: Elite 2" and "Frontier: First Encounters" by David Braben (Frontier Developments).

Original copyright holders: