
Beginning Operating System Development, Part Three

17 Nov 2009
Descriptor tables and interrupts.

Introduction

  1. Environment setup
  2. C++ support code and the console
  3. Descriptor tables and interrupts
  4. The Real-Time Clock, Programmable Interrupt Timer and KeyBoard Controller

This article is part of a series. It can stand alone, but will make a lot more sense if you read the first two parts first. We’ll be covering descriptor tables and interrupts in this article; by the conclusion, you’ll be able to handle both internal and external interrupts, and will have some of the infrastructure needed to run code in different rings. We’ll start with the Global Descriptor Table.

The GDT is an important part of the i386’s protection system. It allows us to define the behaviour and privileges of certain parts of memory. This allows the kernel to handle an exception if a process is attempting to violate these constraints, and kill the process in some way (by giving its address to a ‘reaper thread’ which terminates the process, or by terminating it directly). All memory accesses pass through the given GDT; it is therefore the first line of defense against malicious computer programs.

The GDT is a collection of entries, each 64 bits long, which tell the processor about segments of memory. Segmentation is a method of organising memory; it’s largely been superseded by paging (which is better supported by most C and C++ compilers, since they assume a flat memory model). However, it has one substantial advantage over paging – it can set a privilege level. Additionally, some instructions need segmentation to work, so we’ll be setting up large, flat code and data segments which cover the whole of memory.

I’ve touched upon this before, but it’ll be helpful to understand this as much as possible. The x86 and x86-64 processor architectures use the concept of rings for security. The architecture defines rings 0, 1, 2, and 3, although in practice (and especially on x86-64) only rings 0 and 3 see much use. The lower the ring number, the more privileged the instructions that can be executed. The core of the kernel will always run in ring 0 (supervisor mode) because it will often need access to privileged instructions. Depending on the design of your Operating System, other parts of the kernel may join user-created programs in ring 3 (user mode) for security purposes.

As you can see, only half of the possible rings are normally used. However, some Operating Systems run device drivers in rings 1 and/or 2. This prevents them from accessing kernel code directly, so it is necessary to provide some other way for them to request kernel services – for example, software-interrupt system calls, or the SYSENTER and SYSCALL instructions.

Let’s go back to the concept of segments. If you’ve worked in assembly code before, you’ll know that there are six segment registers. The two that we’re interested in are CS and DS. They stand for Code Segment and Data Segment, respectively, and provide the CPU with the access privileges needed to execute or store the code or data. Now, this means that we’ll have to set up a code and a data segment in our GDT for any instruction which refers to these two registers to work properly.

If it were that simple, then we’d be able to run straight through to some practical code. But remember, a segment also contains access privileges. Since we’ll eventually want to have code running in ring 3, we need to provide the GDT with these entries as well. So, all in all, we need five entries; the null entry, kernel-mode code, kernel-mode data, user-mode code, and user-mode data.

Now, all this might sound fairly simple. We just give the processor a list of five entries which gives it some idea of the code and the data’s privileges. But, we have to provide more than that. Each of the five entries must have Granularity and Access fields. These fields have very specific bit layouts, which are shown below:

Granularity

Bits    Field size (bits)   Description
0 : 3   4                   Bits 16 : 19 of the segment length.
4       1                   Always 0.
5       1                   Always 0.
6       1                   Operand size. If set, 32-bit operands are used; otherwise, 16-bit operands are used.
7       1                   Granularity. If set, granularity is 4 KiB; otherwise, granularity is 1 byte.

Access

Bits    Field size (bits)   Description
0 : 3   4                   Entry type
4       1                   Descriptor type; always 1
5 : 6   2                   CPU ring
7       1                   If set, segment is present

As you can see, both of these fields are one byte long. Now, we could set up structures for these, but we’ll only build them once per processor, so there isn’t really much point. The GDT entries, on the other hand, are important enough to merit their own structure. We’ll come onto those in a while. First of all, we need to know the necessary values of the granularity byte. Basically, every bit is set, apart from bits four and five. This means that we use 32-bit operands, and work in units of 4 KiB instead of individual bytes. This setup is normal for a 32-bit processor running segmentation alongside paging. The finished binary value is 11001111, or 0xCF in hexadecimal.

The access byte is slightly different. Because we’ve got four proper segments to set up, we need to alter the values slightly depending upon whether the segment will be for code or data, and the privilege level. To make things a little simpler, I’ll put them into a table, showing the values needed.

              Code              Data
User-mode     11111010 (0xFA)   11110010 (0xF2)
Kernel-mode   10011010 (0x9A)   10010010 (0x92)

This is relatively simple. User-mode segments have the second and third bits from the left set to indicate user mode, or a CPU ring of 3 (basic question – why are only two bits needed to represent the CPU ring?). Code segments set bit 3 of the entry type (the executable bit), which is the only thing that distinguishes them from data segments.
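
If it helps to see where those numbers come from, here’s a minimal sketch that rebuilds them bit by bit. The constant names are my own invention; only the resulting values matter.

C++
//Access byte building blocks.
const unsigned char AccessPresent    = 1 << 7;   //Bit 7: segment is present
const unsigned char AccessRing3      = 3 << 5;   //Bits 5 : 6: CPU ring (0 for kernel-mode)
const unsigned char AccessDescriptor = 1 << 4;   //Bit 4: descriptor type, always 1
const unsigned char AccessExecutable = 1 << 3;   //Entry type: set for code segments
const unsigned char AccessReadWrite  = 1 << 1;   //Entry type: readable code / writable data

//Kernel-mode code is 0x80 | 0x10 | 0x08 | 0x02 = 0x9A; kernel-mode data drops the
//executable bit, giving 0x92. OR-ing in AccessRing3 (0x60) gives the user-mode
//values 0xFA and 0xF2.

//Granularity byte building blocks.
const unsigned char Granularity4KiB  = 1 << 7;   //Bit 7: 4 KiB granularity
const unsigned char Operands32Bit    = 1 << 6;   //Bit 6: 32-bit operands
const unsigned char LimitHighNibble  = 0x0F;     //Bits 0 : 3: bits 16 : 19 of the limit

//0x80 | 0x40 | 0x0F = 0xCF, the granularity byte we use for every real segment.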

Now that we’ve got the values, we just need to tell the processor about them. To do this, we use a GDT entry, the format of which is shown below:

Bits      Field size (bits)   Description
0 : 15    16                  Lower 16 bits of the limit
16 : 31   16                  Lower 16 bits of the base
32 : 39   8                   Middle 8 bits of the base
40 : 47   8                   Access flags
48 : 55   8                   Granularity byte
56 : 63   8                   Last 8 bits of the base

As we can see, the fields correspond exactly to the sizes of an unsigned short and an unsigned char. How convenient! We can represent it as a structure like so:

C++
struct GDTEntry
{
   ushort LimitLow;           //The lower 16 bits of the limit.
   ushort BaseLow;            //The lower 16 bits of the base.
   uchar  BaseMiddle;         //The middle 8 bits of the base.
   uchar  Access;             //Access flags.
   uchar  Granularity;        //Granularity byte; its low nibble holds bits 16 : 19 of the limit.
   uchar  BaseHigh;           //The last 8 bits of the base.
} __attribute__((packed));

Now, as most compilers do, G++ will try to pad the layout of the GDT entry to align its members. We don’t want this to happen, so we apply the packed attribute to prevent it. You’ll also notice that there’s no separate field for the upper four bits of the limit; they live in the low nibble of the granularity byte, as the granularity table above shows. As a point of interest, the Base field is spread all over the place for compatibility with the Intel 80286, which had a 24-bit Base field.
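
Rather than hand-calculating the split Base and Limit fields for each entry, you may prefer a small helper that does the slicing. This is only a sketch (the function name is mine), but it matches the layout above:

C++
//Splits a 32-bit base and 20-bit limit into the fields of a GDTEntry.
static void SetEntry(GDTEntry& entry, uint base, uint limit, uchar access, uchar granularity)
{
    entry.BaseLow     = base & 0xFFFF;                          //Lower 16 bits of the base
    entry.BaseMiddle  = (base >> 16) & 0xFF;                    //Middle 8 bits of the base
    entry.BaseHigh    = (base >> 24) & 0xFF;                    //Last 8 bits of the base

    entry.LimitLow    = limit & 0xFFFF;                         //Lower 16 bits of the limit
    entry.Granularity = granularity | ((limit >> 16) & 0x0F);   //Flags plus bits 16 : 19 of the limit
    entry.Access      = access;
}

Calling SetEntry(entry, 0, 0xFFFFF, 0x9A, 0xC0), for example, produces exactly the kernel-mode code entry used below.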

We’ve made the GDT. Now, we just pass it to the processor. This is a simple process, and just requires a simple structure, which has this bit layout:

Bits      Field size (bits)   Description
0 : 15    16                  The size of the GDT in bytes, minus one
16 : 47   32                  Address of the first GDT entry

I hope you can create a structure layout from this. Don’t forget the packed attribute. As you’ll notice in a little while, every descriptor pointer has this format.
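
For reference, the version I’ll be assuming in the snippets below (using the ushort/uint typedefs from part two) looks like this:

C++
struct DescriptorPointer
{
   ushort Limit;              //The size of the table in bytes, minus one.
   uint   Address;            //Address of the first entry in the table.
} __attribute__((packed));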

You won’t find much actual code here. This is not preschool – you’ll have to write it yourself. You can find some pseudo-code below, but there will be an Assembly routine which I’ll be giving verbatim. There’s still a need to understand it though; it has a few small, yet crucial behaviours. To set up our GDT, we need to do something like this:

C++
GDTEntry gdt[5] ={
    //NULL segment, sometimes used to hold the GDT address
    {.LimitLow = 0, .BaseLow = 0, .BaseMiddle = 0, .Access = 0, 
     .Granularity = 0, .BaseHigh = 0},
    //Kernel-mode code
    {.LimitLow = 0xFFFF, .BaseLow = 0, .BaseMiddle = 0, 
     .Access = 0x9A, .Granularity = 0xCF, .BaseHigh = 0},
    //Kernel-mode data
    {.LimitLow = 0xFFFF, .BaseLow = 0, .BaseMiddle = 0, 
     .Access = 0x92, .Granularity = 0xCF, .BaseHigh = 0},
    //User-mode code
    {.LimitLow = 0xFFFF, .BaseLow = 0, .BaseMiddle = 0, 
     .Access = 0xFA, .Granularity = 0xCF, .BaseHigh = 0},
    //User-mode data
    {.LimitLow = 0xFFFF, .BaseLow = 0, .BaseMiddle = 0, 
     .Access = 0xF2, .Granularity = 0xCF, .BaseHigh = 0}
};

DescriptorPointer pointer;

void GDT::SetupGDT()
{
    pointer.Limit = sizeof(GDTEntry) * 5  - 1;
    pointer.Address = (uint)&gdt;

    Processor_SetGDT((uint)&pointer);
}

We set our GDT pointer’s Limit property to the total size of the GDT. There shouldn’t be any surprises here – it’s fairly basic. What might be a little trickier, though, is the Processor_SetGDT method (don’t forget the extern "C" directive in the declaration, or you’re in for a whole world of linker-induced pain). For you to understand this, you need to understand precisely what CS and DS are.
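
The declaration itself is nothing special; for reference, the one I’m assuming looks like this:

C++
//Implemented in assembly below; extern "C" stops G++ from mangling the name.
extern "C" void Processor_SetGDT(uint gdtPointerAddress);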

Put simply, CS and DS are offsets. The CPU adds CS to the GDT address (that’s the GDT itself, not the DescriptorPointer), and the result is the start of a GDT entry. From there, it can check code privileges, and make sure that it’s actually code executing, not data. The process is almost identical with DS. The only problem is that GRUB’s GDT might be different, so CS and DS could be pointing somewhere else. When you think about it, this means that if we don’t change these registers, what we might think is ring-3 code could actually be running in ring 0. So we need to alter these. C++ doesn’t allow us to do this natively, so we have to drop down to assembly. This code snippet is very important, so it’s just copied verbatim below:

ASM
[GLOBAL Processor_SetGDT] 
Processor_SetGDT:
    mov eax, [esp+4]
    lgdt [eax]
    
    mov ax, 0x10
    mov ds, ax
    mov es, ax
    mov fs, ax
    mov gs, ax
    mov ss, ax
    jmp 0x8:.codeFlush
.codeFlush:
    ret

Now watch carefully. First of all, we set EAX to the first parameter, and execute the LGDT instruction, which actually loads the table into the processor. Then, we move the value 0x10 into the lower 16 bits of EAX, and set DS, ES, FS, GS, and SS to those lower 16 bits. This does the simple bit of loading the data and stack segments. Then, we do a jump. This isn’t just an ordinary jump though; we explicitly state CS to be 0x8.

It would probably be useful for us to understand why we set CS and DS to 8 and 16, respectively, and how the processor works when we use a memory address. The general sequence of events is something like this:

  1. We reference a memory address (whether to get the next instruction or dereference a pointer).
  2. The processor looks at the GDT pointer, in particular the last 32 bits.
  3. CS is added to the last 32 bits of the GDT pointer, and the result of that addition is de-referenced.
  4. At this point, the CPU has the relevant GDT entry and can obtain the privilege level et al.
  5. If the protection levels are violated, an exception is invoked.

By looking at this sequence of events, we can deduce that CS is pointing to kernel-level code, and DS is pointing to kernel-level data. This isn’t too difficult, but requires a little consideration.
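
As a quick sanity check (using the five-entry GDT we built above), you can work the offsets out by hand:

C++
//Each GDTEntry is 8 bytes, so a selector is simply entryIndex * 8:
//  0x00 -> entry 0, the null entry
//  0x08 -> entry 1, kernel-mode code : the value loaded into CS by the far jump
//  0x10 -> entry 2, kernel-mode data : the value loaded into DS, ES, FS, GS and SS
const ushort KernelCodeSelector = 1 * sizeof(GDTEntry);   //0x08
const ushort KernelDataSelector = 2 * sizeof(GDTEntry);   //0x10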

The IDT

The GDT isn’t the only descriptor table that a normal desktop computer uses. Among one or two others, there’s the IDT, or Interrupt Descriptor Table. I realise that the last section was quite theory heavy, so I’ll try to break it down slightly. In a nutshell, the IDT is a consecutive array which is indexed by the interrupt number. The processor uses it to signal the kernel when an internal or external event occurs.

Events that come into the processor can be either internal or external. External events come from separate hardware, such as the keyboard, mouse, timer, or a PCI device. They are routed through the Programmable Interrupt Controller. Internal events originate from within the processor. They are not routed through the PIC, and are usually referred to as exceptions.

It is also very important to realise that the IDT is a protected mode structure. In real mode, its role is performed by the IVT, or Interrupt Vector Table. The IVT resides at memory location 0x0, and usually extends to 0x3FF. It’s important that we don’t overwrite it at any point, because it is extremely useful if we start to use Virtual 80x86 mode. This makes stray writes through null pointers even more reprehensible: mucking up the code we’ll later be executing can safely be classified as a Bad Thing.

Now that some of the theory is out of the way, we can have a look at the first 32 interrupt numbers (0 to 31) and what they mean. Bear in mind that these interrupts are internal. To receive external interrupts, we need to reprogram the two PICs to get us away from the real mode kludges and towards protected mode.

It’s easiest if I just provide you with a big table. You can use this in your code as an array:

Exception   Name                            Comments
0           Divide by zero                  Unrecoverable. The cause of this is fairly self-descriptive.
1           Reserved                        Sometimes known as Debug, but the Intel manuals say it is reserved.
2           Non-Maskable Interrupt          It’s a watchdog. If an interrupt isn’t responded to, this exception is raised. Used to prevent malicious code from disabling interrupts.
3           Breakpoint                      A debugger will often overwrite an instruction with INT 3 to act as a breakpoint.
4           Overflow                        Fairly rare. Happens when we execute the INTO instruction while the overflow flag is set in EFLAGS.
5           Bound range exceeded            Safety feature. Raised when the BOUND instruction finds an index outside the upper and lower bounds of an array.
6           Invalid opcode                  Major problem. We’re executing invalid code. Since the compiler wouldn’t generate this to start with, chances are we’re executing either the stack or random places in memory.
7           Device not available            Occurs when an FPU instruction is invoked without an FPU being present.
8           Double fault                    Raised when an exception occurs while the CPU is already invoking an exception handler; usually a sign that we’ve messed up the IDT.
9           Coprocessor segment overrun     Mostly reserved. Not used in processors after the Intel 386.
10          Invalid TSS                     We won’t run into this just yet. While working on executing ring 3 code, we’ve messed up the TSS.
11          Segment not present             A segment in the GDT has got bit 7 of the Access byte set to 0.
12          Stack segment fault             Just like it sounds; the stack’s been messed up.
13          General protection fault        Commonly seen in Virtual 80x86 mode. However, this also happens when privileged instructions are used in ring 3.
14          Page fault                      Something’s gone wrong with paging, or we’ve accessed a page which doesn’t exist yet. Some people don’t map the first page so that null pointers are easily caught using this exception.
15          Reserved
16          Floating point exception        We’re waiting for another floating point exception, or we’ve not switched on the FPU.
17          Alignment check                 Only happens in ring 3.
18          Machine check                   Usually disabled, but is invoked when the processor detects an internal error.
19          SIMD floating point exception   This is also disabled by default, but when enabled, occurs when there’s been a floating point exception.
20 : 31     Reserved

As I said, we can represent this in our code as an array, indexing it on the exception number. We’ll be setting up every fault handler, because if we don’t handle the exception, then we’ll eventually end up triple faulting.
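
As a sketch, the array form is nothing more than the names from the table, ready to be printed when an exception arrives:

C++
//Exception names indexed by exception number, taken straight from the table above.
const char* exceptionNames[32] =
{
    "Divide by zero", "Reserved", "Non-Maskable Interrupt", "Breakpoint",
    "Overflow", "Bound range exceeded", "Invalid opcode", "Device not available",
    "Double fault", "Coprocessor segment overrun", "Invalid TSS", "Segment not present",
    "Stack segment fault", "General protection fault", "Page fault", "Reserved",
    "Floating point exception", "Alignment check", "Machine check",
    "SIMD floating point exception",
    "Reserved", "Reserved", "Reserved", "Reserved", "Reserved", "Reserved",
    "Reserved", "Reserved", "Reserved", "Reserved", "Reserved", "Reserved"
};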

A triple fault is a hard reset of a computer. It occurs when the double fault handler cannot be found. For example, a program may divide by zero. The processor will see this, and look through the IDT for the relevant fault handler. If the fault handler isn’t present, then it’ll try to do the same thing, but will search for the double fault handler. If the same thing happens again, then the computer is reset.

Okay, that’s enough theory. Now, we can start with some pseudo-code. We, unfortunately, have to provide a series of function declarations, instead of just chucking the address of the first ISR at the processor. You could write a script to create the method definitions automatically, or you could simply write them out by hand.

The IDT is a lot more straightforward than the GDT. It is a collection of up to 256 entries, each essentially a function pointer, and every entry should be filled in, even if only with a completely zeroed-out (not-present) one. The functions themselves need to set up a bunch of stuff which can’t be done in C++, so unfortunately, we’ll have to drop down to assembly, then move back up to C.

Each IDT entry has an identical, simple format. There are the lower and upper 16 bits of the function pointer, a value to get transferred to CS when the interrupt strikes us, some flags, and a reserved value.

Bits      Field size (bits)   Description
0 : 15    16                  Lower 16 bits of the function pointer.
16 : 31   16                  Code segment offset. See the GDT section for the full explanation.
32 : 39   8                   Reserved, set to zero.
40 : 47   8                   Flags byte.
48 : 63   16                  Upper 16 bits of the function pointer.

The flags byte also has its own format:

Bits    Field size (bits)   Description
0 : 3   4                   Type of gate. We want a 32-bit interrupt gate, so this is 0xE.
4       1                   If this is zero, the segment offset refers to a code or data segment.
5 : 6   2                   Ring that this should be called from. Currently zero, but eventually will be three.
7       1                   If this is one, then the IDT entry is present.

Now, we only need to set a few of these bits – to be precise, the second, third, fourth, and eighth. This gives us an overall value of 0x8E. When we move to ring 3, this will change though.
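
The entry structure I’ll be assuming mirrors the first of the two tables above; as with the GDT, don’t forget the packed attribute:

C++
struct IDTEntry
{
   ushort LowerFunction;      //Lower 16 bits of the function pointer.
   ushort CS;                 //Code segment offset; 0x8, our kernel-mode code segment.
   uchar  Reserved;           //Always zero.
   uchar  Flags;              //0x8E: present, ring 0, 32-bit interrupt gate.
   ushort UpperFunction;      //Upper 16 bits of the function pointer.
} __attribute__((packed));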

So, our IDT entries should look something like this:

C++
IDTEntry idtEntries[256] = 
{
    {.LowerFunction = (uint)exception0 & 0xFFFF, .CS = 0x8, .Reserved = 0, 
     .Flags = 0x8E, .UpperFunction = ((uint)exception0 >> 16) & 0xFFFF},

    ...

    {.LowerFunction = (uint)exception31 & 0xFFFF, .CS = 0x8, .Reserved = 0, 
     .Flags = 0x8E, .UpperFunction = ((uint)exception31 >> 16) & 0xFFFF}
};

Obviously, you’ll have to do the irritating typing yourself. It’s repetitive, but worth it. When we fill in the assembly side of things, we’ll be able to use NASM’s macro facility to make things easier.

Speaking of the assembly side of things, you need this bit. Something important to realise is that some exceptions push an additional piece of data to the stack; an error code. This is particularly useful in situations such as page faults; we want this.

Just to complicate things, only a few exceptions do this. So we need two macros – one for exceptions which push an error code, and one for those which don’t. I’ll give these to you verbatim, because of their sheer importance.

ASM
%macro Exception_NoErrorCode 1
  [GLOBAL exception%1]
  exception%1:
    cli
    push byte 0                  ; Dummy error code, to keep the stack layout consistent
    push byte %1                 ; Interrupt number
    jmp commonExceptionHandler
%endmacro

%macro Exception_ErrorCode 1
  [GLOBAL exception%1]
  exception%1:
    cli                          ; The CPU has already pushed the real error code
    push byte %1                 ; Interrupt number
    jmp commonExceptionHandler
%endmacro

Not too difficult. Both of them disable interrupts (to avoid interrupt nesting and other nasty things), push the interrupt number, and jump to a common assembly-level exception handler. The only difference is that Exception_NoErrorCode pushes a dummy error code, while Exception_ErrorCode does not (the CPU has already pushed a real one). You’ll also notice that we jump, instead of calling. The reason for this is remarkably simple – a CALL would push a return address onto the stack at the exact moment we’re trying to lay it out for the handler.

Now that we’ve built the per-exception code, we can be a little more generic. commonExceptionHandler just pushes some of the registers, loads the kernel data segment descriptor into DS, calls the C code that we use, reverts to the state the CPU last saved, and enables interrupts again. To do this, we use this code:

ASM
commonExceptionHandler:
    pusha            ; Pushes EDI, ESI, EBP, ESP, EBX, EDX, ECX and EAX

    mov ax, ds            ; Set AX to the current data segment descriptor
    push eax            ; Save the data segment descriptor on the stack

    mov ax, 0x10        ; Give the CPU the kernel’s clean data segment descriptor
    mov ds, ax
    mov es, ax
    mov fs, ax
    mov gs, ax

    call exceptionHandler

    pop eax            ; Get the original data segment descriptor back
    mov ds, ax
    mov es, ax
    mov fs, ax
    mov gs, ax

    popa            ; Pops EDI, ESI, EBP, ESP, EBX, EDX, ECX, EAX
    add esp, 8            ; Get rid of the pushed error code and interrupt vector
    sti                ; Re-enable interrupts
    iret            ; Tidy up the stack, ready for the next interrupt

This is a little more difficult than the previous couple of snippets, but still relatively simple in theory. We push the necessary registers so that our exception handling code can read them when we get a stack trace working. Then, we save the current data segment descriptor and load our own. This is so a malevolent program can’t muck up its descriptor and give it to the kernel. All that we have to do then is call the generic exception handler, restore the data segment descriptor, pop the necessary registers, and remove the error code and interrupt vector from the stack. We do this by adding 8 to ESP. Remember that each element is four bytes long and that the stack grows downwards.
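
Because the C++ handler is handed everything that’s now on the stack, the StackState structure simply mirrors the pushes, from the most recent to the oldest. My version (the field names are my own) looks like this:

C++
struct StackState
{
    uint DS;                                        //Pushed by us, just before the call.
    uint EDI, ESI, EBP, ESP, EBX, EDX, ECX, EAX;    //Pushed by PUSHA.
    uint InterruptNumber, ErrorCode;                //Pushed by the per-exception stubs.
    uint EIP, CS, EFLAGS, UserESP, SS;              //Pushed automatically by the processor
                                                    //(the last two only on a ring change).
} __attribute__((packed));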

Once this is written, we’re back in C code. Your method definition will look something like this:

C++
extern "C" void exceptionHandler(StackState stack)
{
    //Handle our exception here
}

To create multiple exception handlers, you could use an array of function pointers and index them by the interrupt vector. We’ll be doing more work on this function in a few pages’ time, so try to keep this handy.
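
A sketch of that array-of-function-pointers approach (the names are mine) might be:

C++
typedef void (*ExceptionHandlerFunction)(StackState&);

//One slot per exception; a null pointer means nothing has been registered yet.
static ExceptionHandlerFunction exceptionHandlers[32] = { 0 };

extern "C" void exceptionHandler(StackState stack)
{
    if(stack.InterruptNumber < 32 && exceptionHandlers[stack.InterruptNumber] != 0)
        exceptionHandlers[stack.InterruptNumber](stack);
    //Otherwise it's unhandled; print the exception's name and halt, for now.
}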

Remapping the PICs

To point out why we need to do this, have a look at the default mapping the BIOS leaves behind: IRQ 0 is mapped to interrupt 8. Now, if you glance up, you’ll see that exception 8 is a double fault. So every time IRQ 0 fires, we get a double fault. To prevent this, we need to reprogram the PIC, or Programmable Interrupt Controller.

There are actually two PICs, each handling eight interrupts. We need to communicate with and reprogram both of them. We do this through accessing ports. Just for consistency, we’ll put them into a known state so that we don’t get put into some form of limbo which could lead to very subtle bugs later.

The actual process of this can best be explained with code. Pardon the hard-coded values, but in this situation, using constants would only add an unnecessary level of indirection.

C++
outportByte(0x20, 0x11);    //ICW1: begin initialisation of the primary PIC
outportByte(0xA0, 0x11);    //ICW1: begin initialisation of the secondary PIC

outportByte(0x21, 0x20);    //ICW2: route IRQs 0 - 7 to IDT entries 32 - 39
outportByte(0xA1, 0x28);    //ICW2: route IRQs 8 - 15 to IDT entries 40 - 47

outportByte(0x21, 0x04);    //ICW3: the secondary PIC hangs off interrupt line 2
outportByte(0xA1, 0x02);    //ICW3: the secondary PIC's cascade identity is 2

outportByte(0x21, 0x01);    //ICW4: put both PICs into x86 mode
outportByte(0xA1, 0x01);

outportByte(0x21, 0x0);     //Clear the interrupt masks; deliver every IRQ
outportByte(0xA1, 0x0);

Yes, it’s ugly. Yes, it’s necessary. You’ll notice that we’re sending our values to ports 0x20, 0x21, 0xA0, and 0xA1. These are the command and data port pairs for the primary and secondary PICs, respectively.

The first thing we send is Initialisation Control Word One, or ICW1 for short. This tells the PIC we’re communicating with that we’re setting stuff up, so what follow are tweaks to settings. Next, we send an offset (ICW2). Bear in mind that IRQs and exceptions share the same IDT. The first guaranteed-to-be-unused IDT entry is 32, so we simply have to tell the master PIC to send IRQ 0 to entry 32 (or 0x20) in the IDT, and the slave PIC to send IRQ 8 to entry 40 (0x28) in the IDT.

At this point, you’re probably wondering why we don’t specify an ending point for the interrupts. This is quite simple; looking up a few paragraphs, we see that each PIC handles eight interrupts. This means that IRQ 0 is routed to entry 32, IRQ 7 is routed to entry 39, and so on. You can’t get much simpler than that.

The third ICW which we send is virtually insignificant to the CPU. We simply tell the PICs how to communicate with one another. In ICW1, we told the PICs to work in cascade mode; now we tell them how. Because of this arrangement, the values we send to the primary and secondary PICs are different. I’ll cover both.

We send the value 4 to the primary PIC. This connects the primary PIC to the secondary PIC using interrupt line 2. This is because the x86 architecture dictates that they should be connected this way, and 4 in decimal has bit two set (don’t forget that we’re using base-zero when counting the bit position in binary).

In order to make this communication two-way, we need to connect the secondary PIC to the primary PIC using the same line. We already know that we’re using interrupt line 2, so that’s the value we send to the secondary PIC.

Now that we’ve told the PICs how to communicate, we give it one last piece of information. To understand this, it’s important to realise that this collection of circuitry is used in hundreds of different electronics projects. It existed before the common computer. Because of this, it’s got a lot more features than we need. To disable all of this extra junk, we send a final ICW – ICW4. This sets flags. The only one we need to be worried about is bit zero, which switches to x86 mode. That’s the only bit we set.

When we’ve set the bit, we send the value to both of the PICs. This is shown in the second-last couple of lines; the value is one.

A neat feature that we can use is interrupt masking. In the time it takes for the CPU to receive an interrupt and run our code, we could be handling more important interrupts. To help counteract this, we can mask interrupts at the PIC level. To do this, we simply write a bitmask to the PIC’s data port (0x21 or 0xA1). Every bit which is set represents an interrupt line which should be masked, or not delivered to the CPU. The final pair of writes in the code above sets the mask on both PICs to zero, so every line is delivered.
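
If you want to mask a single line later on, a small helper along these lines will do. This is only a sketch, and it assumes you have an inportByte counterpart to the outportByte helper used throughout:

C++
//Masks a single IRQ line (0 - 15) at the PIC, so it is no longer delivered to the CPU.
void MaskIRQ(uchar irq)
{
    ushort port = (irq < 8) ? 0x21 : 0xA1;      //Primary or secondary PIC data port
    uchar  line = (irq < 8) ? irq : irq - 8;    //Line number relative to that PIC
    outportByte(port, inportByte(port) | (1 << line));
}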

There’s one final thing that we need to do. At the end of every interrupt, we need to tell any PICs which may have seen the interrupt that we’ve finished. This is a fairly elegant solution to a problem which would arise fairly quickly: what if an interrupt arrived while we were still processing one? To tell the PICs, we write the End Of Interrupt value, 0x20, to port 0x20 – and also to port 0xA0 if the interrupt came through the secondary PIC.

This changes our interrupt handler slightly. At the end, we simply need to send the byte 0x20 to port 0x20. It’s a fairly simple change, but one which allows us to receive more than one interrupt.

Now that the theory used for interrupts and exceptions is in place, I’ll provide one more NASM macro and a quick function. This is mostly similar to the ISR macro without an error code, and the function which passes control to C is along similar lines. The only difference is the name of the function it jumps to.

ASM
%macro IRQ 2
  [GLOBAL IRQ%1]
  IRQ%1:
    cli
    push byte 0
    push byte %2
    jmp commonIRQHandler
%endmacro

There’s nothing overly complex here. We declare a function which disables interrupts, pushes a zero and the interrupt vector onto the stack, and then jumps to commonIRQHandler.

ASM
commonIRQHandler:
    pusha            ; Pushes EDI, ESI, EBP, ESP, EBX, EDX, ECX, EAX

    mov ax, ds        ; Set AX to the current data segment descriptor
    push eax        ; Save the data segment descriptor on the stack
    
    mov ax, 0x10        ; Give the CPU the kernel’s clean data segment descriptor
    mov ds, ax
    mov es, ax
    mov fs, ax
    mov gs, ax

    call irqHandler

    pop ebx            ; Get the original data segment descriptor back
    mov ds, bx
    mov es, bx
    mov fs, bx
    mov gs, bx

    popa            ; Pops EDI, ESI, EBP, ESP, EBX, EDX, ECX, EAX
    add esp, 8        ; Get rid of the pushed error code and interrupt vector
    sti                ; Re-enable interrupts
    iret            ; Tidy up the stack, ready for the next interrupt

Now that we’ve got the assembly level code working (and don’t worry, this is the last you’ll see until dynamic linking and task switching), we can make a start on the C handler. Don’t forget that we need to override the name-mangling as we do with every method that needs to be called explicitly from assembly.

C++
extern "C" void irqHandler(StackState stack)
{
    if(stack.InterruptNumber > 39)
        outportByte(0xA0, 0x20);    //Acknowledge the secondary PIC
    outportByte(0x20, 0x20);        //Acknowledge the primary PIC
}

Don’t forget that if the interrupt number is above 39, we need to acknowledge the secondary PIC as well.

To give the CPU the IDT, we just use the same method as we did for the GDT. The only difference is that we execute the LIDT instruction instead of LGDT, and there are no segment registers to reload afterwards, so a single line of inline assembly will do instead of a separate routine. Remember to change the address and limit fields of the structure we give the CPU so they describe the IDT.
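
As a sketch (reusing the DescriptorPointer layout from the GDT section, and assuming the 256-entry idtEntries array above), the loading code could be as simple as:

C++
static DescriptorPointer idtPointer;

void IDT::SetupIDT()
{
    idtPointer.Limit   = sizeof(IDTEntry) * 256 - 1;
    idtPointer.Address = (uint)&idtEntries;

    //GCC inline assembly (AT&T syntax); no segment registers need reloading.
    asm volatile("lidt %0" : : "m" (idtPointer));
}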

Now that the interrupt infrastructure is complete, we just need to pass the interrupts along. You can do this in quite a lot of ways; you can have an array of function pointers, you could handle them in the base exception handler, or you could have a linked list of arrays of function pointers. I prefer the third one, because one interrupt can be shared by several different devices, and it’s easier to build the functionality in from the start.
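
A rough sketch of that shared-handler idea (all of the names are mine) is a short linked list per IRQ line, which irqHandler walks before acknowledging the PICs:

C++
typedef void (*IRQHandlerFunction)(StackState&);

struct IRQHandlerNode
{
    IRQHandlerFunction Handler;
    IRQHandlerNode*    Next;
};

//One list per IRQ line; a driver registers itself by prepending a node it owns.
static IRQHandlerNode* irqHandlers[16] = { 0 };

void RegisterIRQHandler(uchar irq, IRQHandlerNode* node)
{
    node->Next       = irqHandlers[irq];
    irqHandlers[irq] = node;
}

irqHandler would then index irqHandlers with stack.InterruptNumber - 32 and call every node in the list before sending the EOIs.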

To initialise the GDT and IDT, all you have to do is call GDT::SetupGDT and IDT::SetupIDT. Call them in that order, and do it as soon as you can – the quicker you get into a stable operating environment, the better.

Fini

And we’re done. It’s been a long slog, but I believe that the descriptor tables, exceptions, and interrupts go hand in hand. By now, your Operating System can write to the console, handle exceptions, and process interrupts. This is all you need to make a basic keyboard driver and use a timer.

If you download the attachments, you’ll find the relevant source code. The key word is relevant. I’ve not included the console driver, assembly bootstrap, etc., because it’s covered in the previous chapter, and I’d like to keep the material short and snappy so that you don’t have to search through lots of code files to find what you’re looking for.

The next part in the series will be somewhat of an interlude; a break from the more hard-core stuff. In it, I’ll show you how to build a trio of basic drivers: keyboard, timer, and the real-time clock. After that, we’ll be going through the more meaty stuff of physical memory management.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)