Most Asked Infosys Operating System (OS) Interview Questions


Mastering Operating Systems: Processes, Memory Management, and Concurrency Control

Operating systems are the unsung heroes of our computing experience. Every time you open an application, browse the web, or even type a document, you're interacting with your operating system. Understanding how these systems function is crucial for both software developers and anyone seeking a deeper appreciation of computer technology. This blog post delves into key OS concepts, explaining them in a clear and concise manner.


What is an operating system and what are its functions?

An operating system (OS) is software that acts as an intermediary between computer hardware and the user. It manages hardware and software resources and provides common services for application programs; it is the foundation upon which all other software runs. Core functions include managing processes (running programs), memory (RAM allocation and reclamation), input/output devices (keyboard, mouse, printer, etc.), file systems (organization and storage of files), and security (access control and protection from malware). Examples include Windows, macOS, Linux, Android, and iOS. The user interface, whether a graphical user interface (GUI) or a command-line interface (CLI), is also a crucial part of the OS, allowing users to interact with the system.


Difference between process and thread.

A process is an independent execution context with its own address space and resources; it is a complete program in execution. A thread, by contrast, is a lightweight unit of execution within a process. Multiple threads can exist within a single process, sharing the same address space and resources. Think of a process as a building and threads as individual workers within that building. Key differences are summarized below:

| Feature | Process | Thread |
| --- | --- | --- |
| Memory Space | Independent | Shared |
| Resource Sharing | Limited sharing | Extensive sharing |
| Creation Overhead | High | Low |

Using threads instead of processes often leads to better performance, especially in situations requiring concurrent operations within a single application, because of reduced overhead associated with context switching between threads.
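To make the sharing concrete, here is a minimal POSIX-threads sketch (compile with `gcc -pthread`; the `worker` function and variable names are illustrative). A thread writes to a global variable and the main thread reads the result directly, because both run in the same address space; two separate processes could not do this without explicit inter-process communication:

```c
#include <pthread.h>
#include <stdio.h>

int shared = 0;                       /* one copy, visible to every thread in the process */

void *worker(void *arg) {
    shared = 42;                      /* the thread writes directly to process memory */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);            /* join also synchronizes memory between the threads */
    printf("main sees shared = %d\n", shared);   /* prints 42: no IPC needed */
    return 0;
}
```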


What is context switching in OS?

Context switching is the mechanism by which an operating system rapidly switches the CPU between processes or threads. The OS saves the current state of the running process (CPU registers, program counter, memory-management information, etc.), suspends it, and loads the saved state of another process so that it can run. The CPU scheduler plays a vital role here, selecting which process or thread executes next according to the scheduling algorithm in use. This switching happens so frequently that it creates the illusion of multiple programs running simultaneously. While context switching enables multitasking, it also incurs overhead: the time spent saving and restoring state is time not spent doing useful work.
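On Linux and most BSDs, a process can observe how often it has been switched out via `getrusage`. A small illustrative sketch (the sleep loop is just filler work that voluntarily yields the CPU):

```c
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    struct rusage ru;

    /* do a little work, yielding repeatedly so the kernel switches us out */
    for (int i = 0; i < 1000; i++) {
        usleep(100);                  /* sleeping voluntarily gives up the CPU */
    }

    getrusage(RUSAGE_SELF, &ru);
    printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
    printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    return 0;
}
```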


Difference between multitasking, multithreading, and multiprocessing.

Let's define each term and then compare them:

  • Multitasking: The ability of an OS to run multiple processes concurrently (though not simultaneously on a single-core processor). The OS switches between processes rapidly, creating the illusion of parallel execution.
  • Multithreading: The ability to run multiple threads within a single process concurrently. Threads share the same memory space, making communication between them efficient.
  • Multiprocessing: The ability of an OS to utilize multiple processors or cores to execute multiple processes simultaneously. Each process can run on a separate core, leading to true parallel execution.

Here's a comparison table:

| Feature | Multitasking | Multithreading | Multiprocessing |
| --- | --- | --- | --- |
| Units of Execution | Processes | Threads within a process | Processes on different cores |
| Memory | Separate memory spaces | Shared memory space | Separate memory spaces (usually) |
| Parallelism | Pseudo-parallel (on single-core) | Pseudo-parallel (on single-core) | True parallelism (on multi-core) |
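For contrast with the shared-memory thread example above, this small sketch (POSIX `fork`, with an illustrative variable) shows that a forked child gets its own copy of the parent's memory, so its writes are invisible to the parent:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 10;

int main(void) {
    pid_t pid = fork();               /* duplicate this process */
    if (pid == 0) {
        value = 99;                   /* child writes to ITS OWN copy (copy-on-write) */
        printf("child:  value = %d\n", value);   /* 99 */
        return 0;
    }
    wait(NULL);                       /* let the child finish first */
    printf("parent: value = %d\n", value);       /* still 10: memory is separate */
    return 0;
}
```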


What is deadlock? Explain necessary conditions.

A deadlock is a situation where two or more processes are blocked indefinitely, waiting for each other to release resources that they need. This creates a standstill where no process can proceed. Imagine two trains approaching each other on a single track – neither can move until the other moves, resulting in a deadlock. Four necessary conditions must be met for a deadlock to occur:

  1. Mutual Exclusion: At least one resource must be held in a non-sharable mode. Only one process can use the resource at a time.
  2. Hold and Wait: A process holding at least one resource is waiting to acquire additional resources held by other processes.
  3. No Preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
  4. Circular Wait: There must exist a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Deadlocks are handled by attacking one of these four conditions: prevention (for example, imposing a global ordering on resource acquisition so that circular wait becomes impossible), avoidance (for example, the Banker's algorithm), or detection followed by recovery (aborting or rolling back one of the deadlocked processes).
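As a minimal sketch of how circular wait arises in practice, the hypothetical POSIX-threads program below takes two mutexes in opposite orders. Depending on timing, a given run may or may not actually hang, which is exactly what makes such bugs hard to reproduce:

```c
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 takes A then B; thread 2 takes B then A.
 * If each grabs its first lock before the other releases,
 * all four deadlock conditions hold and both block forever. */
void *thread1(void *arg) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);      /* waits on thread 2 ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *thread2(void *arg) {
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);      /* ... which waits on thread 1: circular wait */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

/* Fix (resource ordering): make BOTH threads lock A before B,
 * so a circular wait can never form. */

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);           /* may never return on an unlucky run */
    pthread_join(t2, NULL);
    printf("finished (no deadlock this run)\n");
    return 0;
}
```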


What is paging in OS?

Paging is a memory management scheme that divides logical (program) memory and physical (main) memory into fixed-size blocks called pages and frames, respectively. A page is a portion of a program's address space, while a frame is a slot of physical memory that can hold one page. The OS uses a page table to map logical addresses (generated by the program) to physical addresses (actual memory locations). When a program accesses a memory location, the hardware consults the page table; if the page is resident in a frame, the access proceeds directly. If the page is not in memory (a page fault), the OS loads it from secondary storage into a free frame. Because pages can map to any free frames, paging allows non-contiguous allocation of physical memory and eliminates external fragmentation, at the cost of some internal fragmentation in a process's last, partially used page.
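A toy sketch of the translation arithmetic, assuming 4 KiB pages and a made-up page table (real MMUs do this in hardware, usually with multi-level tables):

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096                /* assume 4 KiB pages */
#define NUM_PAGES 8

/* Hypothetical page table: page number -> frame number (-1 = not resident). */
int page_table[NUM_PAGES] = { 5, 2, -1, 7, 0, -1, 3, 1 };

int main(void) {
    uint32_t vaddr  = 0x1A34;                      /* a logical address */
    uint32_t page   = vaddr / PAGE_SIZE;           /* high bits: which page */
    uint32_t offset = vaddr % PAGE_SIZE;           /* low bits: where in the page */

    if (page_table[page] < 0) {
        printf("page fault: page %u must be loaded from disk\n", page);
    } else {
        uint32_t paddr = page_table[page] * PAGE_SIZE + offset;
        printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
               vaddr, page, offset, paddr);
    }
    return 0;
}
```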


Explain segmentation in OS.

Segmentation is another memory management scheme; it divides logical memory into variable-sized blocks called segments. Each segment represents a logical division of a program, such as code, data, or stack, which supports a more modular program design. A segment table maps logical addresses to physical addresses, storing a base (starting physical address) and a limit (length) for each segment. The advantage is efficient access to program modules and protection at the granularity of logical units, as opposed to paging's fixed-size blocks. However, because segments are variable-sized and must each occupy a contiguous region of physical memory, external fragmentation can occur.
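The corresponding translation sketch for segmentation, again with invented base/limit values, adds a bounds check against the segment's limit:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical segment table; base and limit values are made up. */
struct segment { uint32_t base; uint32_t limit; };

struct segment seg_table[] = {
    { 0x4000, 0x1000 },               /* segment 0: code  */
    { 0x8000, 0x0400 },               /* segment 1: data  */
    { 0xA000, 0x0800 },               /* segment 2: stack */
};

int main(void) {
    uint32_t seg = 1, offset = 0x01F0;            /* logical address (segment, offset) */

    if (offset >= seg_table[seg].limit) {
        printf("segmentation fault: offset 0x%X exceeds limit 0x%X\n",
               offset, seg_table[seg].limit);
    } else {
        uint32_t paddr = seg_table[seg].base + offset;
        printf("(seg %u, off 0x%X) -> physical 0x%X\n", seg, offset, paddr);
    }
    return 0;
}
```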


Difference between paging and segmentation.

| Feature | Paging | Segmentation |
| --- | --- | --- |
| Block Size | Fixed-size pages | Variable-size segments |
| Physical Allocation | Non-contiguous (pages map to any free frames) | Each segment occupies one contiguous region |
| Address Translation | Page table (page number → frame number) | Segment table (base + limit) |
| Fragmentation | Internal (unused space in the last page) | External (holes between segments) |


What is virtual memory?

Virtual memory is a memory management technique that gives each process a private (virtual) address space that can be larger than the physical memory available. It achieves this by combining RAM with secondary storage (typically disk). When a process accesses memory, the OS checks whether the required page is in RAM; if it is not (a page fault), the page is loaded from the swap space on disk. Only the currently needed pages reside in RAM, while the rest stay on disk until requested. This allows running programs larger than available physical memory.
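On Linux, the effect is easy to observe with `mmap`: a process can reserve far more virtual address space than the machine has RAM, and physical frames are allocated only when pages are actually touched. This sketch assumes a 64-bit Linux system with default overcommit settings:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    /* Reserve 256 GiB of virtual address space -- far more than typical RAM.
     * MAP_NORESERVE (Linux) asks the kernel not to reserve swap up front;
     * physical frames are assigned only when pages are first touched. */
    size_t size = 256UL * 1024 * 1024 * 1024;
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'x';                       /* first touch: page fault, one frame allocated */
    p[size - 1] = 'y';                /* another fault, one more frame -- not 256 GiB */
    printf("reserved %zu bytes of virtual memory at %p\n", size, (void *)p);

    munmap(p, size);
    return 0;
}
```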


What is thrashing in OS?

Thrashing occurs when the system spends more time swapping pages between RAM and secondary storage than executing processes. It happens when physical memory is too small to hold the working sets (the actively used pages) of the running processes, so processes continually steal frames from one another, producing a sustained high page-fault rate and rendering the system almost unusable. To avoid thrashing, the OS can monitor page-fault rates and reduce the degree of multiprogramming, suspending or swapping out whole processes until the remaining ones fit in memory; the working-set model and page-fault-frequency schemes formalize this idea.
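A toy FIFO page-replacement simulation illustrates the cliff: with enough frames to hold the working set, the process faults only on first touch, but one frame too few makes every reference fault (the reference string and frame counts here are invented for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Count page faults for a reference string under FIFO replacement. */
int count_faults(const int *refs, int n, int frames) {
    int mem[16], next = 0, faults = 0;
    memset(mem, -1, sizeof mem);      /* all frames start empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < frames; f++)
            if (mem[f] == refs[i]) { hit = 1; break; }
        if (!hit) {                   /* page fault: evict the oldest page */
            mem[next] = refs[i];
            next = (next + 1) % frames;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    /* A process that actively cycles through 4 pages (its working set). */
    int refs[] = { 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4 };
    int n = sizeof refs / sizeof refs[0];
    for (int frames = 5; frames >= 2; frames--)
        printf("%d frames -> %2d faults out of %d references\n",
               frames, count_faults(refs, n, frames), n);
    return 0;
}
```

With 4 or 5 frames this prints 4 faults (first touches only); with 3 or 2 frames every one of the 16 references faults, the simulated analogue of thrashing.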


Explain critical section problem and its solutions.

The critical section problem arises when multiple processes or threads access and modify shared data concurrently. This can lead to race conditions, where the final outcome depends on the unpredictable order of execution. A critical section is a code segment in which shared resources are accessed. Solutions aim to achieve mutual exclusion, ensuring that only one process executes its critical section at a time. Common solutions include the following (a semaphore sketch appears after the list):

  • Semaphores: Integer variables used for process synchronization. Processes use `wait` (decrement) and `signal` (increment) operations to access resources controlled by semaphores.
  • Mutexes (Mutual Exclusion): Similar to semaphores, but generally used to protect a single resource. Only one process can acquire the mutex at a time.
  • Monitors: High-level constructs that encapsulate shared resources and provide synchronized methods for accessing them, simplifying synchronization and helping prevent race conditions.
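As a minimal sketch of semaphore-based mutual exclusion (POSIX unnamed semaphores, so Linux rather than macOS; the names are illustrative), two threads increment a shared balance inside a critical section guarded by `sem_wait`/`sem_post`:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

int balance = 0;                      /* shared resource */
sem_t sem;                            /* binary semaphore guarding it */

void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);               /* "wait": decrement, block if already 0 */
        balance++;                    /* critical section: one thread at a time */
        sem_post(&sem);               /* "signal": increment, wake a waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);             /* initial value 1 => mutual exclusion */
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %d\n", balance);  /* 200000; without the semaphore it could be less */
    sem_destroy(&sem);
    return 0;
}
```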

Conclusion: A Deeper Understanding of Operating Systems

This exploration of key operating system concepts – processes, memory management, and concurrency control – provides a foundation for understanding the complexities of modern computing. From context switching to deadlock prevention and virtual memory, these concepts are fundamental to how our computers operate. Continued exploration of these topics will deepen your understanding of operating systems and their vital role in our technology-driven world. To delve further, consider researching specific scheduling algorithms, memory allocation strategies, and advanced concurrency control techniques.
