In the old days, applications were built by compiling many .c files into .o files. These files often contained references to one another that could not be resolved at compile time. The information on each such reference is stored in a reloc (relocation) record.
Later, at link time, the linker would merge all the .o files, building a table of where symbols are ultimately located. Then the linker would run through the set of relocs, filling them in.
A reloc consists of three parts:

- where in memory the fix is to be made
- the symbol which is involved in the fix
- an algorithm that the linker should use to create the fixup
The most interesting part of this paper concerns the last of these. The algorithm can be as simple as "take the symbol's memory location and store it in binary" (R_386_32, for example). Or it may be more complicated, such as "calculate the distance from here to the symbol, divide by 4, subtract 2 and add the result to the 3 lower bytes" (R_ARM_PC26, for example).
These relocs are scattered through the .o files, and are used at link time to create the correct binary file. Once all the relocs are resolved, the linker has pretty well finished its job.
At least this is the way things used to work, in the days of static linking.
With the introduction of run-time linking, the designers of the ELF format decided that relocs are a suitable entity to hold run-time resolution information. So now we have executable files which still have relocs in them, even after linking.
However, new algorithms are required to describe how these fixups are to be done; hence the introduction of a new family of reloc numbers (i.e. algorithms).
The appendix of this paper analyses the existing i386 ELF relocs. [After bringing the whole ArmLinux ELF system up, it seems to me that the best design for ArmLinux is to mimic the i386 design, with a one-to-one correspondence of relocs]