Topics you must know:

  1. Introduction to Kernel
  2. What is a Kernel?
  3. Kernel Services
  4. User Mode and Kernel Mode
  5. Interrupt
  6. Buffer Cache

Introduction to Kernel

The kernel is the core of the UNIX operating system. Basically, the kernel is a large program that is loaded into memory when the machine is turned on, and it controls the allocation of hardware resources from that point forward.

The kernel knows what hardware resources are available (like the processor(s), the on-board memory, the disk drives, network interfaces, etc.), and it has the necessary programs to talk to all the devices connected to it.

The kernel isolates user programs from the hardware, so programs are hardware-independent. This is what allows the UNIX system to run on different kinds of machines. The kernel provides the file system, CPU scheduling, memory management and other OS functions through system calls.
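
For example, the short C sketch below (an ordinary user program, not kernel code) asks the kernel to print a message through the write system call; the program never touches the terminal hardware itself, the kernel's driver does that on its behalf.

    /* syscall_demo.c - a minimal sketch: user code requests a kernel
     * service through a system call instead of touching the hardware. */
    #include <unistd.h>     /* write() */
    #include <string.h>     /* strlen() */

    int main(void)
    {
        const char *msg = "hello from user mode\n";

        /* write() traps into the kernel; the kernel's terminal driver
         * performs the actual hardware I/O on our behalf. */
        write(1, msg, strlen(msg));   /* 1 = standard output */
        return 0;
    }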


What is a Kernel?

Introduction to system concepts – overview of file subsystem

The internal representation of a file is the inode. The inode contains information about the file, such as its layout on disk, its owner, its access permissions and its last access time.

Inode is short for index node. Every file has exactly one inode. The inodes of all the files on the system are stored in the inode table, and when a new file is created a new entry is added to the inode table.

The kernel maintains two related data structures: the file table and the user file descriptor table. The file table is a global table at the kernel level, while there is one user file descriptor table per process. When a process creates or opens a file, an entry is made in both tables.

The file table maintains information about the current state of the file. For example, if the file is being written, the current byte offset into the file is kept in the file table. The file table entry also records the access mode, so the kernel can check whether the accessing process is allowed to read or write the file.

The user file descriptor table keeps track of all the files opened by a process and how its descriptors relate to the entries in the file table.
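
As a rough illustration of these two tables, the sketch below opens a file and reads from it twice; the small integer returned by open is an index into this process's user file descriptor table, while the current byte offset lives in the kernel's file table entry behind it. (The path /etc/passwd is used here only as a file that usually exists and is readable.)

    /* fd_demo.c - a minimal sketch of the descriptor/file-table idea. */
    #include <fcntl.h>      /* open() */
    #include <unistd.h>     /* read(), close() */
    #include <stdio.h>

    int main(void)
    {
        char buf[16];
        int fd = open("/etc/passwd", O_RDONLY);   /* any readable file */
        if (fd < 0)
            return 1;

        read(fd, buf, sizeof(buf));   /* file table offset: 0 -> 16 */
        read(fd, buf, sizeof(buf));   /* continues from offset 16   */

        printf("descriptor = %d\n", fd);
        close(fd);
        return 0;
    }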

Regular files and directories are kept on block devices such as disks and tape drives. A drive has logical block numbers and physical block numbers, and the mapping from logical to physical block numbers is done by the disk driver.


File system layout

The boot block occupies the beginning of the file system. This contains the bootstrap code that is required for the machine to boot.

The super block describes the state of the file system, i.e. its size, the maximum number of files that can be stored and the free-space information.

The inode list contains the inodes of the file system; the kernel references this area to get information about the files stored on the machine.

The data blocks begin at the end of the inode list; these are the blocks used to store user files. The start of the data block area holds some administrative files and information, and the later blocks contain the actual file contents.
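
The layout can be pictured roughly as in the sketch below; the struct is only illustrative, with invented field names, and is not the exact on-disk superblock format of any particular UNIX version.

    /*
     * Classic UNIX file system layout (illustrative only):
     *
     *   +------------+-------------+------------+---------------------+
     *   | boot block | super block | inode list | data blocks ...     |
     *   +------------+-------------+------------+---------------------+
     */
    struct superblock_sketch {          /* hypothetical field names */
        unsigned long fs_size_blocks;   /* total size of the file system */
        unsigned long inode_count;      /* max number of files (inodes)  */
        unsigned long free_block_count; /* free-space information        */
        unsigned long free_inode_count; /* free inodes available         */
    };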


Introduction to system concepts – Process subsystem

A process on UNIX can be created by executing the fork system call. Only process 0 is created without this system call; all other processes are created using fork. (Process 0 is created manually when the system is booted.)

The process that executes fork is called the parent process, and the process that is created is called the child process. A process can have many children but only one parent.

The kernel identifies every process by a unique identifier called the process ID, or PID.
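
A minimal C sketch of process creation is shown below; fork returns 0 in the child and the child's PID in the parent, which is how the two processes tell themselves apart.

    /* fork_demo.c - a minimal sketch of process creation with fork(). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>     /* fork(), getpid(), getppid() */

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");   /* e.g. the kernel's process table is full */
            return 1;
        } else if (pid == 0) {
            printf("child:  PID=%d, parent PID=%d\n", getpid(), getppid());
        } else {
            printf("parent: PID=%d, child PID=%d\n", getpid(), pid);
        }
        return 0;
    }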


Process regions

Text: the machine instructions of the process.

Data: the data variables of the process, both initialized and uninitialized (e.g. buffers).

Stack: the logical stack frames created during function calls. The stack is created automatically and grows dynamically.

Since a UNIX process executes in two modes, user and kernel, there are separate stacks for the two modes.
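
The toy program below gives a rough idea of which region each kind of object in a user program usually ends up in (the exact placement is implementation dependent).

    /* regions_demo.c - a rough sketch of process regions. */
    #include <stdio.h>

    int  counter = 42;      /* initialized data   -> data region       */
    char buffer[1024];      /* uninitialized data -> data (bss) region */

    static void greet(void) /* machine code of greet() -> text region  */
    {
        int local = 7;      /* local variable -> stack region          */
        printf("counter=%d local=%d buffer@%p\n",
               counter, local, (void *)buffer);
    }

    int main(void)
    {
        greet();            /* calling greet() pushes a stack frame    */
        return 0;
    }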

All processes in the system are identified by their PIDs, and every process has an entry in the kernel process table.

Every process is also allocated a u-area (user area) in main memory. A region is a contiguous area of the process's address space.

The process table entry and the u-area hold the status information about the process; the u-area is an extension of the process table entry.


Context of a process

The context of a process is its state of execution. While a process is executing, it has a context of execution; when the process shifts from the running state to the waiting state, a context switch takes place.

Process states

A process state in UNIX can be one of the following:

  • Ready
  • Running in user mode
  • Running in kernel mode
  • Sleeping/waiting
  • Terminated


Kernel data structures

The kernel data structures occupy fixed-size tables rather than dynamically allocated space. The advantage of this approach is that the kernel code is simpler; the disadvantage is that it limits the number of entries in these data structures.

So if there are many unused entries in a kernel table, potential kernel resources (memory) are wasted, and if, on the other hand, a table fills up, the kernel must notify the requesting process that something has gone wrong.

In practice, the simplicity the kernel code gains from these limited-size data structures far outweighs the disadvantages.
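
The toy sketch below (not real kernel code; the table size and names are invented) shows the fixed-size table idea: entries come from a static array, and when the table is full the caller is told so instead of getting more memory.

    #include <stddef.h>                 /* NULL */

    #define NPROC 64                    /* hypothetical table size */

    struct proc_entry {
        int pid;
        int in_use;
    };

    static struct proc_entry proc_table[NPROC];

    /* Returns a free slot, or NULL when the table is full - the
     * condition the kernel must report back to the process
     * (e.g. fork() failing with an error). */
    struct proc_entry *alloc_proc_entry(void)
    {
        int i;
        for (i = 0; i < NPROC; i++) {
            if (!proc_table[i].in_use) {
                proc_table[i].in_use = 1;
                return &proc_table[i];
            }
        }
        return NULL;
    }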


System Administration


Administrative processes perform various functions for the general welfare of the user community. Conceptually, there is no difference between an administrative process and a user process: they use the same set of system calls that user processes do.

They differ from user processes only in the rights and privileges they have. To summarize, the kernel does not distinguish administrative processes from user processes; it is just the permissions of the processes and files that make some behave as administrative processes and others as user processes.


Kernel Services

The services provided by the kernel are:

  • It controls the execution of processes by allowing their creation, termination and communication.
  • It controls the CPU scheduling of processes in a time-shared manner.
  • It allows processes to access peripheral devices such as terminals, tape drives, disk drives and network devices.
  • It controls the allocation of main memory for executing processes.
  • It also controls the allocation of secondary memory for efficient storage and retrieval of data.


User Mode and Kernel Mode

The processor in a computer running UNIX has two different modes of operation:

User Mode vs. Kernel Mode

The processor switches between the two modes depending on what type of code is running on the processor. Applications run in user mode, and core operating system components run in kernel mode.

At any one time only one process engages the CPU. This may be a user process or a system routine (like ls or chmod) that is providing a service.

The following three situations result in switching to kernel mode from user mode of operation:

The scheduler allocates a user process a slice of time (about 0.1 second), after which the system clock interrupts. This entails storing the status of the currently running process and selecting another runnable process to execute. This switching is done in kernel mode.

A point that ought to be noted: when a process is switched out, its priority is re-evaluated (usually lowered).


The UNIX interrupt priorities, in decreasing order, are as follows:

  • Hardware errors
  • Clock interrupt
  • Disk I/O
  • Keyboard
  • Software traps and interrupts

The kernel provides services by switching to kernel mode. So if a user program needs a service (such as printing, or access to another file for some data), the operation switches to kernel mode. If the user is waiting for a peripheral transfer, such as reading data from the keyboard, the scheduler puts the currently running process to “sleep”.
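
For example, the sketch below reads a line from the keyboard with the read system call; the call switches into kernel mode, and the process sleeps until the keyboard data is available.

    /* read_demo.c - a minimal sketch of a peripheral transfer. */
    #include <stdio.h>
    #include <unistd.h>     /* read() */

    int main(void)
    {
        char line[128];
        ssize_t n;

        printf("type something and press Enter: ");
        fflush(stdout);

        /* 0 = standard input; the process sleeps here until the
         * device interrupt reports that data is ready. */
        n = read(0, line, sizeof(line) - 1);
        if (n > 0) {
            line[n] = '\0';
            printf("read %zd bytes: %s", n, line);
        }
        return 0;
    }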

Suppose a user process has asked for data and the peripheral is now ready to provide it; the peripheral then interrupts. Hardware interrupts are handled by switching to kernel mode. In other words, the kernel acts as the intermediary between all the processes and the hardware.


Interrupt

  1. An interrupt is a mechanism that interferes with the execution of the process currently in progress.
  2. In UNIX, devices such as I/O peripherals or the system clock interrupt the CPU asynchronously.
  3. This means that the time of their occurrence is unpredictable, and an interrupt may suspend a process that has nothing in common with the cause of the interrupt.

There are three types of interrupts –

  • Hardware interrupts (peripheral devices or the system clock)
  • Software interrupts (programs)
  • Exceptions (page faults, addressing illegal memory)

When an interrupt occurs, the interrupted process cannot continue its execution until the interrupt is serviced. An interrupt handled at the wrong time can also leave kernel data in an inconsistent state, and frequent interrupts reduce resource utilization and system throughput.

The kernel is responsible for handling interrupts, whether they result from hardware, software or any other cause.

On receipt of an interrupt, the kernel saves its current context (a frozen image of what the process was doing), determines the cause of the interrupt and services the interrupt. The kernel invokes the interrupt handler for the interrupting device, with further interrupts at that level disabled. After the kernel services the interrupt, it restores the interrupted context and proceeds as if nothing had happened.

The kernel services interrupts according to their priority: it blocks out lower-priority interrupts but services higher-priority ones.

Since an interrupt can corrupt kernel data, the kernel raises its execution level while it is inside a critical region of code, i.e. during critical activity. This masks off interrupts at or below that level. The kernel also maintains the consistency of its data structures by enforcing a policy of non-preemption while running in kernel mode.
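
The toy user-space sketch below only illustrates the idea of raising and restoring an execution level around a critical region (the function names are invented here, in the spirit of the classic spl routines; real kernels do this with privileged instructions).

    /* execution_level_demo.c - a toy illustration, not a real kernel API. */
    #include <stdio.h>

    static int current_level = 0;       /* 0 = all interrupts allowed */

    static int raise_execution_level(int level)
    {
        int old = current_level;
        current_level = level;          /* interrupts at or below this
                                           level are now masked        */
        return old;
    }

    static void restore_execution_level(int old)
    {
        current_level = old;
    }

    int main(void)
    {
        int old = raise_execution_level(3);   /* e.g. mask disk interrupts */
        printf("inside critical region, level=%d\n", current_level);
        /* ... update shared kernel data safely here ... */
        restore_execution_level(old);
        printf("after critical region, level=%d\n", current_level);
        return 0;
    }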


Buffer Cache

The kernel could read and write directly to and from the disk for all file system accesses, but system response time and throughput would be poor because of the slow disk transfer rate.

The kernel therefore attempts to minimize the frequency of disk accesses by keeping a pool of internal data buffers, called the buffer cache, which contains the data of recently used disk blocks.


Buffer header

At system initialization time, the kernel allocates space for a number of buffers according to the memory size and system performance constraints. The buffers hold the data of recently used disk blocks, and when reading or writing data, the kernel first attempts to read from or write to a buffer.

A buffer consists of two parts: a memory array that contains data from the disk, and a buffer header that identifies the buffer.
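
A simplified sketch of a buffer header is given below; the field names are illustrative rather than the exact structure of any specific kernel, but the device number and block number together are what identify the buffer.

    /* A simplified, illustrative buffer header (invented field names). */
    struct buffer_header {
        int   device_num;                /* which disk the block belongs to   */
        long  block_num;                 /* block number on that disk         */
        int   status;                    /* e.g. locked, valid, delayed write */
        char *data;                      /* -> memory array holding the block */
        struct buffer_header *hash_next; /* hash queue of cached blocks       */
        struct buffer_header *free_next; /* free list of reusable buffers     */
    };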


Advantages of Buffer Cache

The use of buffers allows uniform disk access and modular code, resulting in a simpler system design.

The system places no data alignment restrictions on user processes doing I/O because the kernel aligns data internally. The kernel eliminates the need for special alignment of user buffers by copying data between user buffers and system buffers. This makes user programs simpler and more portable.

The buffer cache reduces the amount of disk traffic, increasing overall system throughput and decreasing response time.


Disadvantages of Buffer Cache
With the delayed-write strategy, the kernel does not write data to the disk immediately, so the system is vulnerable to crashes that leave the disk data in an incorrect state. Also, the user does not know when the kernel actually writes the data to the disk.
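
On most UNIX systems a program that cannot tolerate this uncertainty can ask the kernel to flush its delayed-write data explicitly, for example with fsync, as in the sketch below (the file name is just an example).

    /* fsync_demo.c - a minimal sketch: force delayed-write data out of
     * the buffer cache and onto the disk before continuing. */
    #include <fcntl.h>      /* open() */
    #include <unistd.h>     /* write(), fsync(), close() */

    int main(void)
    {
        int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;

        write(fd, "important record\n", 17);

        /* Without this, the data may sit in the buffer cache and be
         * lost if the system crashes before the kernel writes it out. */
        fsync(fd);

        close(fd);
        return 0;
    }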

The use of the buffer cache requires an extra copy of the data when reading from and writing to user processes.



