Real Time Systems Previous Year Paper Solution 2017

Q1 Answer the following questions: [Marks-2×10]

a) What do you understand by laxity of a task?

Ans: Laxity, also known as slack time, is the maximum time by which the execution of a task can be delayed without missing its deadline. If the task were started now, its laxity is the time that would remain before its deadline when it completes: laxity = deadline − (current time + remaining execution time).
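This definition can be sketched as a one-line Python helper (illustrative, all times in the same unit, e.g. milliseconds):

```python
def laxity(deadline: float, now: float, remaining_exec: float) -> float:
    """Slack left before the deadline if the task were started right now."""
    return deadline - (now + remaining_exec)

# A task observed at t = 50 with 30 units of work left and a deadline at t = 100:
print(laxity(deadline=100, now=50, remaining_exec=30))  # 20 units of slack
```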

b) Can PIP and PCP be considered as greedy algorithms? Explain

Ans: No, only PIP can be considered a greedy algorithm.

In PIP, whenever a request for a resource is made, the resource will be
allocated to the requesting task if it is free. However, in PCP a resource may not
be granted to a requesting task even if the resource is free. This strategy in PCP
helps in avoiding potential deadlocks.
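The difference can be sketched in a few lines of Python. This is a simplified, hypothetical helper (priority inheritance is omitted; higher number = higher priority): under PCP, even a request for a *free* resource is denied unless the requester's priority exceeds the current system ceiling.

```python
# Each resource has a ceiling: the highest priority of any task that may use it.

def pcp_can_grant(requester, requester_priority, locked):
    """locked: list of (holder, ceiling) pairs for currently locked resources.

    Under PCP a request -- even for a free resource -- is granted only if the
    requester's priority exceeds the ceilings of all resources currently
    locked by other tasks (the "current system ceiling")."""
    other_ceilings = [c for holder, c in locked if holder != requester]
    return all(requester_priority > c for c in other_ceilings)

# T2 holds a resource with ceiling 3; T1 asks for a different, free resource:
print(pcp_can_grant("T1", 2, [("T2", 3)]))  # False: denied although it is free
print(pcp_can_grant("T1", 4, [("T2", 3)]))  # True: priority exceeds the ceiling
```

A PIP scheduler, by contrast, would grant both requests immediately since the resource itself is free; this is what makes PIP greedy and PCP non-greedy.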

c) How can task scheduling techniques be used to achieve effective fault-tolerance in real-time systems?

Ans: Task scheduling is an efficient technique as it requires very little redundant hardware. Fault-tolerance can be achieved by scheduling additional ghost copies of a task in addition to its primary copy. The ghost copies need not be identical to the primary copy; they may be stripped-down versions that can be executed in a shorter time than the primary. The ghost copies of different tasks can be overloaded onto the same slot, and on successful execution of a primary, the corresponding backup may be deallocated.

d) What is chain blocking? How is unbounded priority inversion avoided in PCP?

Ans: A task is said to undergo chain blocking if each time it needs a resource, it undergoes priority inversion.

A higher priority task suffers unbounded priority inversion when it is waiting for a lower priority task to release some of the resources acquired by it, and meanwhile, intermediate priority tasks preempt the low priority task from the CPU. Such a situation can never arise in PCP: whenever a high priority task waits for a resource that is currently being used by a low priority task, the executing lower priority task is made to inherit the priority of the high priority task. So the intermediate priority tasks cannot preempt the lower priority task from the CPU, and unbounded priority inversion cannot occur under PCP.

e) Why are multiprocessor systems called tightly coupled systems, whereas distributed systems are called loosely coupled systems?


f) Distinguish between execution time and response time of a task?

Ans: The execution time of a given task is defined as the time spent by the system executing that task, including the time spent executing run-time or system services on its behalf.

The response time of a given task is defined as the time elapsed from the instant the task becomes ready to execute (its dispatch) to the instant it finishes its job (completes that dispatch).

g) What do you understand by semaphore shuffling time?

Ans: Semaphore shuffling time refers to the time elapsed between the release of a semaphore by one task and the activation of another task that was waiting on that semaphore.

h) Differentiate between synchronous and asynchronous I/O? Which one is better suited for use in real‐time applications?

Ans: Synchronous I/O means that some flow of execution (such as a process or thread) is waiting for the operation to complete. Asynchronous I/O means that nothing is waiting for the operation to complete and the completion of the operation itself causes something to happen. Asynchronous I/O is better suited for use in real-time applications.

i) Explain why 2PL-WP protocol is not free from deadlocks.


j) List the essential differences between a real-time database and a conventional database?

Ans: The following are the main differences between a real‐time database and a conventional database:

  • Unlike traditional databases, timing constraints are associated with the different operations carried out on real‐time databases.
  • Real‐time databases have to deal with temporal data compared to static data as in the case of traditional databases.
  • The performance metrics that are meaningful to the transactions of these two types of databases are very different.


Q2 a) What is a “fail-safe” state of a system? Since safety-critical systems do not have a fail-safe state, how is safety guaranteed?  [Marks-4]

Ans: A fail-safe state of a system is a state which, if entered when the system fails, ensures that no damage results.

All traditional non-real-time systems have one or more fail-safe states. However, safety-critical systems do not have a fail-safe state. A safety-critical system is one whose failure can cause severe damage, so its reliability requirement is very high. Since no fail-safe state exists, safety has to be guaranteed by making the system itself highly reliable, typically through fault-avoidance and fault-tolerance techniques such as hardware and software redundancy.

b) List the different types of timing constraints that can occur in a real-time system?

Construct the EFSM of a telephone system whose behavior is described as follows:

If you press the button of the handset for less than 15 sec, it connects to the local operator. If you press the button for any duration lasting between 15 sec to 30 sec, it connects to the international operator. If you keep the button pressed for more than 30 sec, then on releasing it would produce the dial tone. [Marks-2+4]

Ans: Timing constraints can broadly be classified into:

  1. Performance Constraint
  2. Behavioral Constraint

Performance and Behavioral constraints can further be classified into three types:

  1. Delay Constraint
  2. Deadline Constraint
  3. Duration Constraint

EFSM of a telephone system:

[Figure: EFSM of the telephone system]
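The timing-dependent behaviour of the EFSM can be sketched as a small Python function (illustrative only; the state and timer bookkeeping of the full EFSM is omitted, and durations are in seconds):

```python
# The action taken on button release depends on how long the button was held.

def on_button_release(press_duration: float) -> str:
    if press_duration < 15:
        return "connect to local operator"
    elif press_duration <= 30:
        return "connect to international operator"
    else:
        return "produce dial tone"

print(on_button_release(10))  # connect to local operator
print(on_button_release(20))  # connect to international operator
print(on_button_release(40))  # produce dial tone
```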


Q3 a) The following table shows the details of tasks in a real-time system. The tasks have zero phasing and repeat with a period of 90 mSec. Determine a feasible schedule to be used by a table-driven scheduler. [Marks-4]

[table id=2 /]


Step 1: Sort the tasks in increasing order of their deadlines.

T2, T3, T4, T1

Step 2: Schedule tasks as late as possible without violating constraints.

[Figure: a feasible schedule]

Step 3: Move tasks as early as possible without altering the schedule.


b) A cyclic scheduler is used to run the following set of periodic tasks. Assume a single processor is present in the system and all timings are given in msec. Select the appropriate frame size.
(e1 = 1, p1 = 4), (e2 = 2, p2 = 5), (e3 = 5, p3 = 20)  [Marks-6]


A frame size F must satisfy three constraints: (1) F should be at least as large as the largest execution time, i.e., F ≥ max(ei); (2) F should divide the major cycle M = LCM(p1, p2, p3); and (3) 2F − gcd(F, pi) ≤ di should hold for every task Ti, so that at least one full frame lies between the arrival of each task instance and its deadline.

Using the first constraint, we have F ≥ 5.

Using the second constraint, the major cycle is M = LCM(4, 5, 20) = 20, so the permissible values of F are 5, 10, and 20.

Checking these values against the third constraint, no value of F is suitable (for example, F = 5 fails for T1 since 2×5 − gcd(5, 4) = 9 > 4). To overcome this problem, we need to split the task that is making the task-set unschedulable. It is easy to observe that the task T3 has the largest execution time and, consequently, due to constraint 1, makes the feasible frame sizes quite large.
We try splitting T3 into two or three tasks. After splitting T3 into three tasks, we have:

T3.1 = (20, 1, 20), T3.2 = (20, 2, 20), T3.3 = (20, 2, 20).

Now max(ei) = 2, so the candidate frame sizes dividing the major cycle are 2, 4, 5, 10, and 20. Of these, only F = 2 satisfies the third constraint for every task (for example, F = 4 fails for T2 since 2×4 − gcd(4, 5) = 7 > 5). Hence, after splitting T3, F = 2 is a feasible frame size.

It is very difficult to come up with a clear set of guidelines to identify the exact task that is to be split and the parts into which it needs to be split. Therefore, this needs to be done by trial and error. Further, as the number of tasks to be scheduled increases, this method of trial and error becomes impractical since each task needs to be checked separately. However, when
the task set consists of only a few tasks we can easily apply this technique to find a feasible frame size for a set of tasks otherwise not schedulable by a cyclic scheduler.
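For small task sets, the trial and error can be automated. The sketch below (illustrative; task tuples are (ei, pi, di), with the deadline taken equal to the period here since the problem gives none) enumerates the frame sizes satisfying all three constraints:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def feasible_frames(tasks):
    """tasks: list of (e, p, d) tuples. Returns frame sizes F satisfying:
    (1) F >= max e_i, (2) F divides the major cycle M = lcm(p_i),
    (3) 2F - gcd(F, p_i) <= d_i for every task."""
    M = reduce(lcm, (p for _, p, _ in tasks))
    emax = max(e for e, _, _ in tasks)
    frames = []
    for F in range(1, M + 1):
        if M % F or F < emax:
            continue  # violates constraint 2 or constraint 1
        if all(2 * F - gcd(F, p) <= d for _, p, d in tasks):
            frames.append(F)
    return frames

# Original task set: no feasible frame size exists.
print(feasible_frames([(1, 4, 4), (2, 5, 5), (5, 20, 20)]))  # []
# After splitting T3 into three subtasks of 1, 2, and 2 msec:
print(feasible_frames([(1, 4, 4), (2, 5, 5),
                       (1, 20, 20), (2, 20, 20), (2, 20, 20)]))  # [2]
```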


Q4 a) In a distributed system with six clocks, the maximum difference between any two clocks is 10 mSec. The individual clocks have a maximum drift rate of 2 × 10⁻⁶. Ignoring clock setup times and communication latencies, determine the rate at which the clocks need to resynchronize using a simple central time server method.


When clocks are resynchronized every ΔT seconds, the maximum drift between any two clocks is limited to 2ρΔT = ε.

⇒ 2 × 2 × 10⁻⁶ × ΔT = 10 × 10⁻³
⇒ ΔT = (10 × 10⁻³) / (4 × 10⁻⁶)
⇒ ΔT = 2500 s

So the clocks need to be resynchronized at least once every 2500 seconds.
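The same arithmetic in Python (variable names are illustrative):

```python
rho = 2e-6        # maximum drift rate of each clock
epsilon = 10e-3   # maximum tolerated difference between any two clocks, in s

# Worst case: two clocks drift apart at 2*rho, so 2 * rho * delta_t = epsilon.
delta_t = epsilon / (2 * rho)
print(delta_t)    # ~2500 s between resynchronizations
```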

b) A set of real-time periodic tasks need to be scheduled on a uni-processor using RMA. The following table contains the details of these periodic tasks and their use of non-preemptable shared resources. Can the tasks T2 and T3 meet their respective deadlines when the priority ceiling protocol is used for resource scheduling? R1, R2, and R3 entries indicate the time duration for which a task needs the named resource in non-preemptive mode.

[table id=3 /]


Q5 a) Why is dynamically changing the priority levels of tasks important for traditional operating systems? How does this property affect real-time systems?


One major design goal in traditional operating systems is to provide acceptable response times to all user processes. It is normally observed that the I/O transfer rate is primarily responsible for slow response times: processors are extremely fast compared to the transfer times of I/O devices. To mitigate this problem, it is desirable to keep the I/O channels as busy as possible, which can be achieved by assigning I/O-bound tasks high priorities. To keep the I/O channels busy, any task performing I/O should not be kept waiting very long for the CPU. For this reason, as soon as a task blocks for I/O, its priority is increased by a priority recomputation rule. This gives the interactive users good response times.

However, for hard real-time systems dynamic shifting of priority values is clearly inappropriate, as it prevents critical tasks from being consistently scheduled at high priority levels, and also interferes with scheduling under popular real-time task scheduling algorithms such as EDF and RMA.

b) Evaluate the suitability of Windows-NT as a Real-time OS.



Q6 a) Trace the crucial features for a real-time operating system to possess. [Marks-4]

Ans: The following are the main features that a real-time operating system (RTOS) should possess:

  • Reliability: Any RTOS must be reliable, meaning that it operates for a reasonably long time without human intervention. Reliability also means that the system is configured to choose the right or most profitable action for its current operations. For example, a system for the telecom or banking industries needs to consider the cost of terminating or engaging certain actions, compared to a phone or calculator, whose cost is limited to an individual user, game, or app function.
  • Predictability: A system must execute actions within a known time frame and produce known results. Such results are determined by the procedures or operations taking place. The determination is by targets set during production or procedural planning.
  • Performance: real-time operating systems are designed to make work easier. Every system must solve a problem or reduce the workload. As such, the developer should provide a system that is easily assimilated with existing software and hardware as well as aligned to the goals of the organization.
  • Manageability: This means a system whose veracity or bulkiness is manageable. The software and hardware required to operate the RTOS must be of reasonable size. Technicians should also be easy to find and orient. The idea is to reduce the cost of implementation.
  • Scalability: The needs of any production or event environment change with time. This means that a system may require an upgrade or downgrade. Such provisions must be made during the design and installation of any RTOS.

b) Discuss the various features of Global priority-based protocol, Calendar based protocol, and Bounded access protocol for hard real-time communication in a LAN.



Q7 a) Why is traditional 2PL not suitable for use in real-time databases?

Ans: The conventional 2PL is unsatisfactory for real‐time applications for several reasons:

  1. the possibility of priority inversions,
  2. long blocking delays, and
  3. lack of consideration of timing information, and the possibility of deadlock.

2PL is also restrictive: it prevents concurrent executions that could easily have been allowed without violating the consistency of the database.

b) Briefly discuss at least three optimistic concurrency control protocols used in real-time databases.

Ans: Optimistic concurrency control is based on the assumption that conflicts are rare, and that it is more efficient to allow transactions to proceed without delays. When a transaction desires to commit, a check is performed to determine whether a conflict has occurred.

A few Optimistic concurrency controls are:

  1. Forward OCC: In this protocol, transactions read and update data items freely, storing their updates in a private workspace. These updates are made public only at transaction commit time. Before a transaction is allowed to commit, it has to pass a validation test that checks whether there is any conflict between the validating transaction and the other transactions that have committed since the validating transaction started executing. A transaction is aborted if it doesn’t pass the validation.
  2. OCC Broadcast Commit: In the OCC broadcast commit (OCC-BC) protocol, when a transaction commits, it notifies its intention to all other currently running transactions. Each of these running transactions carries out a test to check whether it has any conflict with the committing transaction. If any conflict is detected, the transaction carrying out the check immediately aborts itself and restarts.
  3. OCC-Sacrifice: This protocol explicitly considers the priorities of the transactions and introduces the concept of a conflict set: the set of currently running transactions that conflict with the validating transaction. Once a transaction reaches its validation stage, it checks for conflicts with the currently executing transactions. If conflicts are detected and one or more of the transactions in the conflict set has a higher priority than the validating transaction, the validating transaction is aborted and restarted. Otherwise, all transactions in the conflict set are aborted and restarted.
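The conflict test at the heart of OCC-BC can be sketched in a few lines of Python (illustrative names; only read/write sets are modeled — a running transaction conflicts with a committer if it has read an item the committer wrote):

```python
def conflicts(running_read_set, committer_write_set):
    """True if the running transaction read any item the committer wrote."""
    return bool(set(running_read_set) & set(committer_write_set))

# Read sets of the currently running transactions (hypothetical example):
running = {"T5": {"x", "y"}, "T6": {"z"}}
committer_writes = {"y"}  # write set of the committing transaction

# Under OCC-BC each conflicting running transaction aborts itself and restarts.
to_restart = [t for t, reads in running.items()
              if conflicts(reads, committer_writes)]
print(sorted(to_restart))  # ['T5']
```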

Q8 Write short answers on any TWO: (5 x 2)
a) Host-Target approach
b) IEEE 802.5
c) PSOS

Host-Target approach:

The host-target approach is popularly used to develop applications that are to be deployed on embedded systems. In this approach, real-time application development is done on a host machine running either a traditional Unix operating system or Windows. The application developed on the host is then downloaded onto a target board that is to be embedded in the real-time system. A small ROM-resident real-time kernel runs on the target board.


The main idea behind this approach is that the real-time operating system running on the target board needs to be kept as small as possible. This implies that the operating system on the target board would lack virtual memory management support and would not provide utilities such as compilers or program editors. The processor on the target board runs only the real-time operating system and the downloaded application.

IEEE 802.5:

The IEEE 802.5 standard specifies the characteristics of Token Ring networks. Token Ring was introduced by IBM in the mid-1980s and quickly became the network topology of choice until the rise in popularity of Ethernet. It is unlikely that you will encounter a ring network in your travels and even more unlikely that you will be implementing a ring network as a new installation. For what it’s worth, Token Ring is a solid network system, but Ethernet has all but eliminated it.

The following is a list of the specific characteristics specified in the 802.5 standards:

  • Speed: The 802.5 Token Ring standard specifies network speeds of 4 and 16 Mbps.
  • Topology: Token Ring networks use a logical ring topology and most often a physical star. The logical ring is often created in the multistation access unit (MSAU).
  • Media: Token Ring networks use unshielded or shielded twisted-pair cabling.
  • Access method: 802.5 specifies an access method known as token passing. On a Token Ring network, only one computer at a time can transmit data. When a computer has data to send, it must use a special type of packet known as a token. The token travels around the ring; a computer with data to send attaches its data to the token, which carries it to the destination computer, at which point the data is removed and the empty token is placed back on the ring.


PSOS:

PSOS is a popular real-time operating system primarily used in embedded applications. It is available from Wind River Systems, a large player in the real-time operating system arena, and is a host-target type of real-time operating system. PSOS is used in several commercial embedded products; an example application is in the base stations of cellular systems.

[Figure: PSOS-based development of embedded software]

The host computer is typically a desktop; both Unix and Windows hosts are supported. The target board contains the embedded processor, ROM, RAM, etc. The host computer runs the editor, cross-compiler, source-level debugger, and library routines. On the target board, PSOS+ and other optional modules such as PNA+, PHILE, and PROBE are installed in RAM. PNA+ is the network manager. It provides TCP/IP communication over Ethernet and FDDI, conforms to Unix 4.3 (BSD) socket syntax, and is compatible with other TCP/IP-based networking standards such as FTP and NFS. Using these, PNA+ provides efficient downloading and debugging communication between the target and the host. PROBE+ is the target debugger and XRAY+ is the source-level debugger. Application development is done on the host machine, and the application is downloaded to the target board, where it is debugged using the source-level debugger (XRAY+). Once the application runs satisfactorily, it is fused onto a ROM and installed on the target board.

PSOS supports 32 priority levels. In the minimal configuration, the footprint of the operating system is only 12 KBytes. For sharing critical resources among real-time tasks, it supports the priority inheritance and priority ceiling protocols. It supports segmented memory management and allocates tasks to memory regions; a memory region is a physically contiguous block of memory, created by the operating system in response to a call from an application.