UNIT 6
File Management, Disk Management, I/O Hardware
One of the important jobs of an operating system is to manage various I/O devices, including the mouse, keyboards, touch pad, disk drives, display adapters, USB devices, bit-mapped screens, LEDs, analog-to-digital converters, on/off switches, network connections, audio I/O, printers, etc.
An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories −
• Block devices − A block device is one with which the driver communicates by sending entire blocks of data. For example, hard disks, USB cameras, Disk-On-Key, etc.
• Character devices − A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an operating system to handle a particular device. The operating system takes help from device drivers to handle all I/O devices.
The device controller works like an interface between a device and a device driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component, where the electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with the operating system. A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device controller. Following is a model for connecting the CPU, memory, controllers, and I/O devices, where the CPU and device controllers all use a common bus for communication.
Synchronous vs asynchronous I/O
• Synchronous I/O − In this scheme, CPU execution waits while I/O proceeds.
• Asynchronous I/O − I/O proceeds concurrently with CPU execution.
Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. There are three approaches available to communicate between the CPU and a device −
• Special Instruction I/O
• Memory-mapped I/O
• Direct Memory Access (DMA)
Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These instructions typically allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is connected directly to certain main memory locations so that the I/O device can transfer a block of data to/from memory without going through the CPU.
While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O device to use that buffer to send data to the CPU. The I/O device operates asynchronously with the CPU and interrupts the CPU when finished.
The advantage of this method is that every instruction which can access memory can be used to manipulate an I/O device. Memory-mapped I/O is used for most high-speed I/O devices such as disks and communication interfaces.
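As a small illustration, the sketch below shows how memory-mapped device registers are typically accessed from C: ordinary loads and stores on addresses that happen to belong to the device. The register addresses and the ready bit are hypothetical; real values come from the hardware documentation.

```c
#include <stdint.h>

/* Hypothetical memory-mapped registers of an imaginary output device. */
#define DEV_STATUS_REG  ((volatile uint32_t *)0x40001000u)   /* status register (assumed address) */
#define DEV_DATA_REG    ((volatile uint32_t *)0x40001004u)   /* data register   (assumed address) */
#define STATUS_READY    (1u << 0)                            /* assumed "ready" bit               */

/* Write one word to the device using nothing but normal memory accesses. */
static void dev_write_word(uint32_t value)
{
    while ((*DEV_STATUS_REG & STATUS_READY) == 0)
        ;                        /* wait until the device reports it is ready */
    *DEV_DATA_REG = value;       /* a plain store performs the I/O            */
}
```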
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead.
Direct Memory Access (DMA) means the CPU grants the I/O module authority to read from or write to memory without involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is involved only at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
Direct memory access needs special hardware called a DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. The controllers are programmed with source and destination pointers (where to read/write the data), counters to track the number of transferred bytes, and settings, which include the I/O and memory types, interrupts, and states for the CPU cycles.
The operating system uses the DMA hardware as follows −
Step | Description |
1 | Device driver is instructed to transfer disk data to a buffer address X. |
2 | Device driver then instructs the disk controller to transfer data to the buffer. |
3 | Disk controller starts DMA transfer. |
4 | Disk controller sends each byte to DMA controller. |
5 | DMA controller transfers bytes to buffer, increases the memory address, decreases the counter C until C becomes zero. |
6 | When C becomes zero, DMA interrupts CPU to signal transfer completion. |
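A rough sketch of how a driver might program such a DMA controller, mirroring the steps in the table above, is given below. The register addresses, bit names and layout are hypothetical; real DMA controllers differ in detail.

```c
#include <stdint.h>

/* Hypothetical DMA controller registers (addresses and bits are assumptions). */
#define DMA_SRC         ((volatile uint32_t *)0x40002000u)   /* source pointer            */
#define DMA_DST         ((volatile uint32_t *)0x40002004u)   /* destination pointer       */
#define DMA_COUNT       ((volatile uint32_t *)0x40002008u)   /* byte counter C            */
#define DMA_CTRL        ((volatile uint32_t *)0x4000200Cu)   /* control register          */
#define DMA_START       (1u << 0)
#define DMA_IRQ_ON_DONE (1u << 1)

/* Program a transfer of nbytes from the device to buffer X in main memory. */
static void dma_start_transfer(uint32_t device_addr, uint32_t buffer_x, uint32_t nbytes)
{
    *DMA_SRC   = device_addr;                 /* where the disk controller supplies bytes  */
    *DMA_DST   = buffer_x;                    /* buffer address X                          */
    *DMA_COUNT = nbytes;                      /* counter C, decremented as bytes are moved */
    *DMA_CTRL  = DMA_START | DMA_IRQ_ON_DONE; /* go; interrupt the CPU when C reaches zero */
}
```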
Polling vs Interrupt I/O
A computer must have a way of detecting the arrival of any type of input. There are two ways that this can happen, known as polling and interrupts. Both of these techniques allow the processor to deal with events that can happen at any time and that are not related to the process it is currently running.
Polling I/O
Polling is the simplest way for an I/O device to communicate with the processor. The process of periodically checking the status of the device to see if it is time for the next I/O operation is called polling. The I/O device simply puts the information in a status register, and the processor must come and get the information.
Most of the time, devices will not require attention, and when one does it will have to wait until it is next interrogated by the polling program. This is an inefficient method, and much of the processor's time is wasted on unnecessary polls.
Compare this method to a teacher continually asking every student in a class, one after another, if they need help.
Obviously the more efficient method would be for a student to inform the teacher whenever they require assistance.
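A minimal polling (busy-wait) loop in C is sketched below, assuming a hypothetical device with a status register and a data register. The CPU keeps asking the device whether it has data, which is exactly the repeated questioning described above.

```c
#include <stdint.h>

/* Hypothetical device registers (addresses and bit are assumptions). */
#define DEV_STATUS ((volatile uint8_t *)0x40003000u)
#define DEV_DATA   ((volatile uint8_t *)0x40003001u)
#define DATA_READY (1u << 0)

static uint8_t poll_read_byte(void)
{
    while ((*DEV_STATUS & DATA_READY) == 0)
        ;                    /* keep asking: "do you need service yet?" */
    return *DEV_DATA;        /* fetch the byte the device left for us   */
}
```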
Interrupts I/O
An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is a signal to the microprocessor from a device that requires attention.
A device controller puts an interrupt signal on the bus when it needs the CPU's attention. When the CPU receives an interrupt, it saves its current state and invokes the appropriate interrupt handler using the interrupt vector (a table of addresses of OS routines that handle various events). When the interrupting device has been dealt with, the CPU continues with its original task as if it had never been interrupted.
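For contrast, the sketch below shows the interrupt-driven version for the same hypothetical device: the CPU does other work, and this handler runs only when the device raises an interrupt. The registers and the flag mechanism are illustrative assumptions, not a real driver API.

```c
#include <stdint.h>

#define DEV_DATA ((volatile uint8_t *)0x40003001u)   /* hypothetical data register */

volatile uint8_t last_byte;            /* filled in by the handler            */
volatile int     byte_available = 0;   /* flag that the main program can test */

/* Invoked through the interrupt vector when the device signals the CPU. */
static void dev_interrupt_handler(void)
{
    last_byte = *DEV_DATA;   /* service the device: grab the byte it offers      */
    byte_available = 1;      /* wake up / signal whoever is waiting for the data */
    /* on return, the CPU restores its saved state and resumes its original task */
}
```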
2. Explain in detail Operating System - I/O Software.
I/O software is generally organized in the following layers −
• User Level Libraries − These provide a simple interface to the user program to perform input and output. For example, stdio is a library provided by the C and C++ programming languages.
• Kernel Level Modules − These provide the device drivers that interact with the device controllers and the device-independent I/O modules used by the device drivers.
• Hardware − This layer includes the actual hardware and the hardware controllers that interact with the device drivers and make the hardware work.
A key concept in the design of I/O software is that it should be device independent: it should be possible to write programs that can access any I/O device without having to specify the device in advance. For example, a program that reads a file as input should be able to read a file on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify the program for each different device.
Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a particular device. The OS takes help from device drivers to handle all I/O devices. Device drivers encapsulate device-dependent code and implement a standard interface in such a way that the code containing device-specific register reads/writes stays inside the driver. A device driver is generally written by the device's manufacturer and delivered along with the device on a CD-ROM.
A device driver performs the following jobs −
• Accept requests from the device-independent software above it.
• Interact with the device controller to take and give I/O and perform the required error handling.
• Make sure that the request is executed successfully.
How a device driver handles a request is as follows: suppose a request comes to read block N. If the driver is idle when the request arrives, it starts executing the request immediately. Otherwise, if the driver is already busy with another request, it places the new request in the queue of pending requests.
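This idle/busy decision can be sketched as follows; the request structure, the queue variables and the start_hardware_io() stub are hypothetical names used only for illustration.

```c
#include <stddef.h>

struct io_request {
    unsigned long block_no;          /* e.g. "read block N"       */
    struct io_request *next;         /* link in the pending queue */
};

static struct io_request *pending_head, *pending_tail;
static int driver_busy;

/* Stub: a real driver would program the device controller here. */
static void start_hardware_io(struct io_request *req) { (void)req; }

static void driver_submit(struct io_request *req)
{
    req->next = NULL;
    if (!driver_busy) {              /* driver idle: execute the request at once */
        driver_busy = 1;
        start_hardware_io(req);
    } else {                         /* driver busy: append to the pending queue */
        if (pending_tail)
            pending_tail->next = req;
        else
            pending_head = req;
        pending_tail = req;
    }
}
```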
Interrupt handlers
An interrupt handler, also known as an interrupt service routine (ISR), is a piece of software, more specifically a callback function in an operating system or, more specifically still, in a device driver, whose execution is triggered by the reception of an interrupt.
When the interrupt happens, the interrupt procedure does whatever it has to do in order to handle the interrupt, updates data structures, and wakes up the process that was waiting for the interrupt to happen.
The interrupt mechanism accepts an address ─ a number that selects a specific interrupt handling routine/function from a small set. In most architectures, this address is an offset stored in a table called the interrupt vector table. This vector contains the memory addresses of specialized interrupt handlers.
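Conceptually, the interrupt vector table can be pictured as an array of handler addresses indexed by the interrupt number, as in the sketch below (the table size and the dispatch function are illustrative assumptions).

```c
#define NUM_VECTORS 64

typedef void (*isr_t)(void);                 /* an ISR is just a function address */

static isr_t interrupt_vector[NUM_VECTORS];  /* memory addresses of the handlers  */

static void default_handler(void) { /* ignore spurious interrupts */ }

/* Hardware supplies the vector number; the matching routine is called. */
static void dispatch_interrupt(unsigned vector)
{
    isr_t handler = interrupt_vector[vector % NUM_VECTORS];
    if (handler)
        handler();
    else
        default_handler();
}
```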
Device-Independent I/O Software
The basic function of the device-independent software is to perform the I/O functions that are common to all devices and to provide a uniform interface to the user-level software. Though it is difficult to write completely device-independent software, we can write some modules that are common among all the devices. Following is a list of functions of device-independent I/O software −
• Uniform interfacing for device drivers
• Device naming - mnemonic names mapped to major and minor device numbers
• Device protection
• Providing a device-independent block size
• Buffering, because data coming off a device cannot always be stored directly in its final destination
• Storage allocation on block devices
• Allocating and releasing dedicated devices
• Error reporting
User-Space I/O Software
These are the libraries which provide a richer and simplified interface to access the functionality of the kernel, or ultimately interact with the device drivers. Most of the user-level I/O software consists of library procedures, with some exceptions such as the spooling system, which is a way of dealing with dedicated I/O devices in a multiprogramming system.
I/O libraries (e.g., stdio) live in user space and provide an interface to the OS-resident, device-independent I/O software. For example, putchar(), getchar(), printf() and scanf() are examples of the user-level I/O library stdio available in C programming.
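For example, the following complete C program performs its I/O purely through user-level stdio routines; underneath, these calls eventually reach the kernel's device-independent I/O software.

```c
#include <stdio.h>

int main(void)
{
    int c;

    printf("Type a line and press Enter:\n");    /* formatted output via stdio    */
    while ((c = getchar()) != '\n' && c != EOF)  /* read characters one at a time */
        putchar(c);                              /* echo each character back      */
    putchar('\n');
    return 0;
}
```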
Kernel I/O Subsystem
The kernel I/O subsystem is responsible for providing many services related to I/O. Following are some of the services provided.
• Scheduling − The kernel schedules a set of I/O requests to determine a good order in which to execute them. When an application issues a blocking I/O system call, the request is placed on the queue for that device. The kernel I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the average response time experienced by the applications.
• Buffering − The kernel I/O subsystem maintains a memory area known as a buffer that stores data while it is transferred between two devices or between a device and an application. Buffering is done to cope with a speed mismatch between the producer and consumer of a data stream, or to adapt between devices that have different data-transfer sizes.
• Caching − The kernel maintains a cache, which is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original.
• Spooling and Device Reservation − A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. The spooling system copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is handled by an in-kernel thread.
• Error Handling − An OS that uses protected memory can guard against many kinds of hardware and application errors.
3. Write a short note on the Application I/O interface.
I/O Interface:
An interface is needed whenever a CPU wants to communicate with I/O devices. The interface is used to interpret the address generated by the CPU. Thus, to communicate with I/O devices, i.e., to share information between the CPU and I/O devices, an interface is used, which is called the I/O interface.
Various applications of I/O Interface:
One application of the I/O interface is that the interface can be used to access any file without any special information about it, i.e., even when the basic details of the file are unknown. It also makes it possible to add new devices to the computer system without disturbing the operating system. It can also be used to abstract away the differences among I/O devices by identifying a few general kinds. Access to each general kind is through a standardized set of functions, which is called an interface.
Each operating system has its own category of interface for device drivers. A given device may ship with multiple device drivers - for instance, drivers for Windows, Linux, AIX and Mac OS. Devices vary along several dimensions, as illustrated in the following table:
S. No. | Basis | Variation | Example |
1. | Mode of data transfer | character or block | terminal, disk |
2. | Method of accessing data | sequential or random | modem, CD-ROM |
3. | Transfer schedule | synchronous or asynchronous | tape, keyboard |
4. | Sharing method | dedicated or sharable | tape, keyboard |
5. | Speed of device | latency, seek time, transfer rate, delay between operations | |
6. | I/O direction | read only, write only, read-write | CD-ROM, graphics controller, disk |
1. Character-stream or Block:
Both character-stream and block devices transfer data in the form of bytes. The difference between them is that a character-stream device transfers bytes in a linear way, i.e., one after another, whereas a block device transfers a whole block of bytes as a single unit.
2. Sequential or Random Access:
A sequential device transfers data in a fixed order determined by the device, whereas a random-access device allows the user to instruct the device to seek to any of the data storage locations.
3. Synchronous or Asynchronous:
A synchronous device performs data transfers with predictable response times, in coordination with other aspects of the system. An asynchronous device exhibits irregular or unpredictable response times that are not coordinated with other computer events.
4. Sharable or Dedicated:
A sharable device can be used concurrently by several processes or threads; a dedicated device cannot.
5. Speed of Operation:
Device speeds range from a few bytes per second to a few gigabytes per second.
6. Read-write, read only, write-only:
Different devices perform different operations; some support both input and output, while others support only one data-transfer direction, either input or output.
4. What is Kernel I/O subsystem in an operating system?
The kernel provides many services related to I/O. Several services - such as scheduling, caching, spooling, device reservation, and error handling - are provided by the kernel's I/O subsystem, which is built on the hardware and device-driver infrastructure. The I/O subsystem is also responsible for protecting itself from errant processes and malicious users.
1. I/O Scheduling –
To schedule a set of I/O requests means to determine a good order in which to execute them. The order in which applications issue their system calls is rarely the best choice. Scheduling can improve the overall performance of the system, share device access fairly among all the processes, and reduce the average waiting time, response time, and turnaround time for I/O to complete.
OS developers implement scheduling by maintaining a wait queue of requests for each device. When an application issues a blocking I/O system call, the request is placed in the queue for that device. The I/O scheduler rearranges the order of the queue to improve the efficiency of the system.
2. Buffering –
A buffer is a memory area that stores data being transferred between two devices or between a device and an application. Buffering is done for three reasons.
1. The first is to cope with a speed mismatch between the producer and consumer of a data stream.
2. The second use of buffering is to provide adaptation for data that have different data-transfer sizes.
3. The third use of buffering is to support copy semantics for application I/O. "Copy semantics" means the following: suppose an application wants to write data to disk that is stored in its buffer. It calls the write() system call, providing a pointer to the buffer and an integer specifying the number of bytes to write.
Q. After the call returns, what happens if the application changes the contents of the buffer?
Ans. With copy semantics, the version of the data written to the disk is guaranteed to be the version at the time of the application's system call.
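The situation in this question can be pictured with the POSIX write() call, as in the sketch below: with copy semantics, changing the buffer after the call has no effect on the data already handed over to the kernel.

```c
#include <string.h>
#include <unistd.h>

/* fd is assumed to be an already opened, writable file descriptor. */
void copy_semantics_demo(int fd)
{
    char buf[64];

    strcpy(buf, "first version\n");
    if (write(fd, buf, strlen(buf)) < 0) {   /* kernel copies/queues these bytes */
        /* a real program would handle the error here */
    }

    strcpy(buf, "second version\n");         /* with copy semantics, this later change
                                                does not alter what reaches the disk  */
}
```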
3. Caching –
A cache is a region of fast memory that holds a copy of data. Access to the cached copy is more efficient than access to the original. For instance, the instructions of the currently running process are stored on disk, cached in physical memory, and copied again into the CPU's secondary and primary caches.
The main difference between a buffer and a cache is that a buffer may hold the only existing copy of a data item, whereas a cache, by definition, holds a copy on faster storage of an item that resides elsewhere.
4. Spooling and Device Reservation –
A spool is a buffer that holds the output of a device, such as a printer, that cannot accept interleaved data streams. Although a printer can serve only one job at a time, several applications may wish to print their output concurrently, without having their output mixed together.
The OS solves this problem by intercepting all output sent to the printer. The output of each application is spooled in a separate disk file. When an application finishes printing, the spooling system queues the corresponding spool file for output to the printer.
5. Error Handling –
An OS that uses protected memory can guard against many kinds of hardware and application errors, so that a complete system failure is not the usual result of each minor mechanical flaw. Devices and I/O transfers can fail in many ways, either for transient reasons, as when a network becomes overloaded, or for permanent reasons, as when a controller becomes defective.
6. I/O Protection –
Errors and the issue of protection are closely related. A user process may attempt to issue illegal I/O instructions to disrupt the normal functioning of the system. We can use various mechanisms to ensure that such disruptions cannot take place in the system.
To prevent illegal I/O access, we define all I/O instructions to be privileged instructions. The user cannot issue I/O instructions directly.
5. Explain transforming I/O requests to hardware operations.
We know that there is a handshake between the device driver and the device controller, but the question here is how the OS connects an application request (an I/O request) to a set of network wires or to a specific disk sector, i.e., how I/O requests are transformed into hardware operations.
To understand the idea, let us consider the following example.
Example –
Suppose we are reading a file from disk. The application refers to the data by file name. Within the disk, the file system maps from the file name, through the file-system directories, to obtain the space allocation of the file. In MS-DOS, the file name maps to a number that indicates an entry in the file-access table, and that table entry tells us which disk blocks are allocated to the file. In UNIX, the name maps to an inode number, and the inode contains the space-allocation information. But now the question arises: how is the connection made from the file name to the disk controller?
The method used by MS-DOS, a relatively simple OS, is as follows: the first part of an MS-DOS file name, preceding the colon, is a string that identifies a specific hardware device.
UNIX uses a different method from MS-DOS. It represents device names in the regular file-system name space. Unlike an MS-DOS file name, which has a colon separator, a UNIX path name has no clear separation of the device portion; in fact, no part of the path name is the name of a device. UNIX has a mount table that associates prefixes of path names with specific device names.
Modern operating systems gain significant flexibility from the multiple stages of lookup tables in the path between a request and a physical device controller. There are general mechanisms for passing requests between applications and drivers. Thus, without recompiling the kernel, we can introduce new devices and drivers into a computer. In fact, some operating systems have the ability to load device drivers on demand. At boot time, the system first probes the hardware buses to determine what devices are present; it then loads the necessary drivers, either immediately or when first required by an I/O request.
The typical life cycle of a blocking read request is shown in the following figure. From the figure, we can see that an I/O operation requires a great many steps that together consume a large number of CPU cycles.
Figure – The life cycle of an I/O request
1. System call –
Whenever an I/O request comes, the process issues a blocking read() system call to a previously opened file descriptor of a file. Basically, the role of the system-call code is to check the parameters for correctness in the kernel. If the data we want as input is already available in the buffer cache, the data is returned to the process, and in that case the I/O request is completed.
2. Alternative approach if the input is not available –
If the data is not available in the buffer cache, then physical I/O must be performed. The process is removed from the run queue and placed on the wait queue for the device, and the I/O request is scheduled. After scheduling, the I/O subsystem sends the request to the driver via a procedure call or an in-kernel message; which mode is used depends on the OS.
3. Role of the driver –
After receiving the request, the driver allocates kernel buffer space to receive the data and schedules the I/O. After all this, the command is given to the device controller by writing into the device-control registers.
4. Role of the device controller –
Now the device controller operates the device hardware. The actual data transfer is performed by the device hardware.
5. Role of the DMA controller –
After the data transfer, the driver may poll for status and data, or it may have set up a DMA transfer into kernel memory. The transfer is managed by the DMA controller. Eventually, when the transfer completes, it generates an interrupt.
6. Role of the interrupt handler –
The interrupt is sent to the correct interrupt handler through the interrupt-vector table. It stores any necessary data, signals the driver, and returns from the interrupt.
7. Completion of the I/O request –
The driver then receives the signal, determines that the I/O request has completed and what its status is, and signals the kernel I/O subsystem that the request has been completed. After transferring the data or return codes to the address space, the kernel moves the process from the wait queue back to the ready queue.
8. Completion of the system call –
When the process moves to the ready queue, the process is unblocked. When the process is assigned to the CPU, it resumes execution at the completion of the system call.
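From the application's point of view, this whole eight-step life cycle is hidden behind a single blocking call, as in the small POSIX example below (the file name is just a placeholder).

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buffer[512];
    int fd = open("data.bin", O_RDONLY);          /* the previously opened file descriptor */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(fd, buffer, sizeof buffer);  /* blocks until the kernel, driver, DMA
                                                     and interrupt handler finish the job */
    if (n >= 0)
        printf("read %zd bytes\n", n);
    else
        perror("read");

    close(fd);
    return 0;
}
```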
6. Explain pseudo parallelism. Describe the process model that makes parallelism easier to deal with.
All modern computers can do many things at the same time. For example, a computer can be reading from a disk and printing on a printer while running a user program. In a multiprogramming system, the CPU switches from program to program, running each program for a fraction of a second.
Although the CPU is running only one program at any instant of time, its speed is so high that it can work on several programs in a second. This gives the user an illusion of parallelism, i.e., that several processes are being processed at the same time. This rapid switching back and forth of the CPU between programs gives the illusion of parallelism and is termed pseudo parallelism. As it is extremely difficult to keep track of multiple parallel activities, operating system designers have evolved a process model to make parallelism easier to deal with.
The process Model
In the process model, all the runnable software on the computer (including the operating system) is organized into a number of processes. A process is just an executing program and includes the current values of the program counter, registers and variables. Each process is considered to have its own virtual CPU; the real CPU switches back and forth from process to process. Rather than tracking how the CPU switches from program to program, it is easier to think of a collection of processes running in (pseudo) parallel. The rapid switching back and forth is, in reality, multiprogramming.
Figure – Multiprogramming of four programs (one program counter, switched from process to process), alongside the conceptual model of four independent sequential processes, each with its own logical program counter.
Only one program is active at any moment. The rate at which processes perform their computation might not be uniform; however, processes are usually not affected by the relative speeds of the different processes.
7. Shown below is the workload for 5 jobs arriving at time zero in the order given below −
Job | Burst Time |
1 | 10 |
2 | 29 |
3 | 3 |
4 | 7 |
5 | 12 |
Now find out which algorithm among FCFS, SJF And Round Robin with quantum 10, would give the minimum average time.
For FCFS, the jobs will be executed as:
Job | Waiting Time |
1 | 0 |
2 | 10 |
3 | 39 |
4 | 42 |
5 | 49 |
Total | 140 |
The average waiting time is 140/5=28.
For SJF (non-preemptive), the jobs will be executed as:
Job | Waiting Time |
1 | 10 |
2 | 32 |
3 | 0 |
4 | 3 |
5 | 20 |
Total | 65 |
The average waiting time is 65/5=13.
For Round Robin, the jobs will be executed as:
Job | Waiting Time |
1 | 0 |
2 | 32 |
3 | 20 |
4 | 23 |
5 | 40 |
Total | 115 |
The average waiting time is 115/5=23.
Thus, SJF gives the minimum average waiting time.
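The waiting-time figures above can be verified with a short program; a sketch in C is given below (FCFS and SJF are computed directly, Round Robin with quantum 10 is simulated).

```c
#include <stdio.h>

#define N 5

int main(void)
{
    int burst[N] = {10, 29, 3, 7, 12};       /* the five jobs, all arriving at time 0 */

    /* FCFS: each job waits for all jobs submitted before it. */
    int t = 0, fcfs = 0;
    for (int i = 0; i < N; i++) { fcfs += t; t += burst[i]; }

    /* Non-preemptive SJF: run in order of burst time 3, 7, 10, 12, 29. */
    int sjf_order[N] = {2, 3, 0, 4, 1};
    int sjf = 0;
    t = 0;
    for (int i = 0; i < N; i++) { sjf += t; t += burst[sjf_order[i]]; }

    /* Round Robin, quantum 10: simulate and record completion times. */
    int remaining[N], done_at[N], rr = 0, q = 10, left = N;
    for (int i = 0; i < N; i++) remaining[i] = burst[i];
    t = 0;
    while (left > 0)
        for (int i = 0; i < N; i++)
            if (remaining[i] > 0) {
                int run = remaining[i] < q ? remaining[i] : q;
                t += run;
                remaining[i] -= run;
                if (remaining[i] == 0) { done_at[i] = t; left--; }
            }
    for (int i = 0; i < N; i++) rr += done_at[i] - burst[i];  /* waiting = completion - burst */

    printf("FCFS avg wait = %.1f\n", fcfs / (double)N);   /* prints 28.0 */
    printf("SJF  avg wait = %.1f\n", sjf  / (double)N);   /* prints 13.0 */
    printf("RR   avg wait = %.1f\n", rr   / (double)N);   /* prints 23.0 */
    return 0;
}
```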
8. What is Highest Response Ratio Next (HRN) Scheduling?
Highest Response Ratio Next (HRN) is a non-preemptive scheduling algorithm. Whenever the CPU becomes free, it selects the waiting process whose response ratio is highest, where
Priority (response ratio) = (waiting time + service time) / service time.
Like SJF it favours short jobs, because a small service time yields a large ratio, but since a job's ratio grows the longer it waits, long jobs cannot starve.
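A small sketch of how an HRN scheduler picks the next job with this formula is shown below; the waiting and service times are made-up sample values.

```c
#include <stdio.h>

struct job { const char *name; double waiting; double service; };

int main(void)
{
    struct job jobs[] = {
        {"A", 12.0, 4.0},   /* ratio (12 + 4) / 4 = 4.0          */
        {"B",  6.0, 2.0},   /* ratio (6 + 2) / 2  = 4.0          */
        {"C",  9.0, 1.0},   /* ratio (9 + 1) / 1  = 10.0 -> wins */
    };
    int best = 0;
    for (int i = 1; i < 3; i++) {
        double ri    = (jobs[i].waiting + jobs[i].service) / jobs[i].service;
        double rbest = (jobs[best].waiting + jobs[best].service) / jobs[best].service;
        if (ri > rbest)
            best = i;                  /* keep the job with the highest response ratio */
    }
    printf("HRN selects job %s\n", jobs[best].name);
    return 0;
}
```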
9. Explain time slicing. How does its duration affect the overall working of the system?
Time slicing is a scheduling mechanism used in time-sharing systems. It is also termed Round Robin scheduling. The aim of Round Robin (time slicing) scheduling is to give all processes an equal opportunity to use the CPU. In this type of scheduling, CPU time is divided into slices that are allocated to the ready processes. Short processes may be executed within a single time quantum; long processes may require several quanta.
The Duration of time slice or Quantum
The performance of the time slicing policy is heavily dependent on the size/duration of the time quantum. When the time quantum is very large, the Round Robin policy becomes an FCFS policy. Too short a quantum causes too many process (context) switches and reduces CPU efficiency. So the choice of the time quantum is a very important design decision. Switching from one process to another requires a certain amount of time to save and load registers, update various tables and lists, etc.
Consider, as an example, that a process switch (context switch) takes 5 msec and the time slice duration is 20 msec. The CPU then spends 5 msec on process switching after every 20 msec of useful work, wasting 20% of the CPU time. Now let the time slice be set to, say, 500 msec and suppose 10 processes are in the ready queue. If P1 starts executing in the first time slice, then P2 will have to wait 1/2 sec, and the waiting time of the other processes increases accordingly; the unlucky last process (P10) will have to wait about 4.5 sec, assuming that all the others use their full time slices. To conclude, setting the time slice is a trade-off: a short quantum gives better response times but wastes more CPU time on switching, while a long quantum uses the CPU more efficiently but makes processes wait longer.
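The arithmetic in this example can be written out as a tiny program; the numbers below are the ones from the paragraph above (a 5 msec switch cost, 20 msec and 500 msec quanta, and 10 ready processes).

```c
#include <stdio.h>

int main(void)
{
    double switch_ms = 5.0, quantum_ms = 20.0;
    double overhead = switch_ms / (quantum_ms + switch_ms);      /* 5 / 25 = 20% of CPU time */
    printf("switching overhead = %.0f%%\n", overhead * 100.0);

    double big_quantum_ms = 500.0;
    int ready = 10;
    double last_wait_s = (ready - 1) * big_quantum_ms / 1000.0;  /* 9 * 0.5 s = 4.5 s */
    printf("last of %d processes waits about %.1f s\n", ready, last_wait_s);
    return 0;
}
```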
10. What are the different principles which must be considered while selection of a scheduling algorithm?
The objectives/principles which should be kept in view while selecting a scheduling policy are the following −
- Response time
- Turnaround time
- Waiting time
The objective should be to minimize above mentioned times.