
Assignment 1

Discrete-Event Simulation of a Random Early Detection (RED) Congestion Avoidance Scheme

1 Introduction

The aim of this assignment is to develop a discrete-event simulation model to examine the operation of a congestion avoidance scheme in a packet switch. Using your simulator, the performance of the scheme (the delay and throughput) will be measured. In EE509 Assignment 2 (given at a later date), approximate measures will be derived using mathematical analysis methods and compared to discrete-event simulation results.

1.1 The Output Buffered Router (OBR)

Interconnected routers form the backbone of the Internet, allowing IP packets to be transmitted from any source host to any destination host. The performance of a router is sensitive to the load placed on its outgoing links. If loading is too high the link is said to become congested, and the consequences are long delays for packets passing through the router and/or loss of packets (a reduction in throughput). In this assignment, an event-advance discrete-event simulation model will be developed to investigate the effectiveness of a congestion avoidance algorithm whose goal is to keep link delays low and throughput high, even when a link is overloaded with incoming traffic.

The basic function of a packet switch (also referred to as a router or gateway) is to forward packets of data on incoming links to outgoing links, where the outgoing link for each packet is determined by an address contained in the packet’s header. Links operate at a fixed speed (bits per second) which, in combination with the number of bits in the packet (which varies from packet to packet), determines the time required to transmit a packet at an outgoing router port. It is possible that an outgoing port is busy transmitting a packet when another packet, destined for the same port, arrives at a router input port. In this case the newly arriving packet must be stored (queued) until the outgoing port is free. In an output buffered router (Figure 1), queuing is done only at outgoing ports.

Figure 1: An Output Buffered Packet Switch (Router)

Packets arriving at input ports are quickly forwarded to the outgoing queues and so the delay to forward packets from an input port to a queue is considered to be negligible. Thus, the delay a packet experiences in passing through the router is determined by the queuing delay at the output port queue (the time waiting for packets ahead in the queue to be transmitted) plus the eventual transmission time of the packet, when it reaches the head of the queue.

We note that the output queue operates using the first-in-first-out (FIFO) discipline and has a limited number of queuing spaces, L packets. When a packet has finished its transmission, the next waiting packet is transferred to a transmission buffer in the output port (which holds one complete packet) and immediately begins its transmission, bit by bit. Thus the queue and output port together can hold up to L+1 packets in total at any time. If a packet arrives when all queuing spaces are occupied, the packet is discarded (lost) from the system. No further action is taken at the router with regard to lost packets.

1.2 Transmission Queue Model

Consider a network of connected output buffered routers (Figure 2), which provide the core inter-connection network between different local area networks (Net 1, Net 2), where local networks connect to the core network via access points (APs). In the scenario studied, hosts in Net 1 are the source of packets to be sent to hosts in Net 2 and we focus on modelling a single router output buffer and transmission port/link on the path between the networks, as shown in Figure 2.

Figure 2: An Output Port (Link) Carrying Traffic from Hosts in Net 1 to Hosts in Net 2

Multiple data connections may exist between hosts in Net 1 and Net 2, with each connection contributing traffic on the path; however, we will assume some total aggregate traffic offered to the link, which has the following properties. Packets destined for the link are assumed to have negative-exponentially distributed inter-arrival times. The lengths of the packets are also assumed to be approximated by a negative-exponential distribution, that is, packet lengths are modelled as real numbers of bits, as opposed to an exact integer number of bits. We note that a transmission link can be viewed as a “server” in a queuing model, that is, the transmission time (service time) is determined by the packet length (which is negative-exponentially distributed) and by the constant link transmission bit-rate. Thus, the service time is also negative-exponentially distributed, according to the cumulative distribution function

F_S(t) = 1 − e^(−μt)

where μ is the mean service (transmission) rate in packets per second or, equivalently, 1/μ is the mean packet service (transmission) time in seconds. Similarly, with a mean packet arrival rate of λ (mean packet inter-arrival time of 1/λ) the packet inter-arrival times are distributed according to the distribution function

F_A(t) = 1 − e^(−λt).

We will also use the concept of offered load ρ, the ratio of the mean arrival rate λ and the mean service rate μ,

ρ = λ / μ (measured in units of Erlangs).

When the arrival rate equals the service rate, ρ=1, the queue/link is fully loaded (100% loaded). Offered load may be greater than 1, in which case the system is overloaded. Unlike offered load, the carried load at a system’s output cannot exceed 100%. Carried load is often referred to as the system throughput, which may be measured as the mean number of packets served by the system per unit time, or normalised as the mean carried load divided by the maximum mean load that could be carried. An alternative measure of the throughput of a link is the number of bits transmitted per unit time, which cannot be more than the link transmission rate (in bits per second).

1.3 Congestion on a Link and Congestion Avoidance

Traffic offered to a link varies over time and for some sustained periods may constitute very high load (e.g. in the range 90%-150% offered load). During these periods, because the arrival rate can exceed the transmission rate, the transmission queue length quickly increases and new packets joining the queue will suffer long delays. Even after a high loading period has abated it will take some time to clear the queue and for queuing delays to reduce to a normal level. This condition is known as congestion.

Considering the layered structure of Internet protocols, congestion at the packet level (IP layer) can have further undesirable effects on higher layers. Consider a TCP connection between two hosts, where a link on the path between the hosts suddenly becomes congested. Due to growing queues in the router, a packet may be delayed long enough that TCP at the sending host does not receive an acknowledgement before its retransmission timer expires, causing TCP to assume the packet is lost and to resend it (even if it has been successfully received at the destination host). This unnecessary resending of packets increases the effective offered load on the link, exacerbating the congestion problem, see [1].

A simple method of reducing congestion is to limit the number of packets that may be buffered in link transmission queues. If a packet arrives to a full queue it is simply discarded (dropped). This is referred to as a drop-tail congestion avoidance policy. As delay is limited by the fixed queue length, packets that have been successfully transmitted are then unlikely to be resent due to TCP timeout. At the same time, dropped packets signal the congestion condition to TCP, which causes a reduction in TCP’s sending rate at the source host. Overall, sustained periods of long queuing delay (congestion periods) can thus be avoided.

We note that choosing an appropriate queue length limit for a drop-tail policy involves a trade-off: too many packets are dropped at the router if the queue is short, and too many false retransmissions are triggered in TCP if the queue is long. In addition, the policy's performance needs to be evaluated in the full context of TCP's congestion control mechanism, which adds complexity to the analysis. In this assignment, in order to examine the fundamentals of congestion control in a manageable exercise, we do not model the behaviour of the TCP protocol.

Although the simple drop-tail policy can avoid congestion, it has some drawbacks. One of the main disadvantages is TCP global source synchronisation. A link will typically carry traffic from many TCP connections. When the transmission queue is full, it may remain so over a period of time, causing dropped packets in multiple different TCP connections sharing the link. This signals congestion to all of the TCP senders at the same time, causing them to reduce their sending rates in synchrony and producing a sudden loss in throughput. When the router transmission queue size subsequently reduces, the TCP senders increase their sending rates in synchrony. This leads to bursty traffic on the link, which degrades performance. To alleviate this problem, other congestion control schemes have been proposed in which packets are dropped (with some probability) before the queue becomes full. This aims to avoid periods when all arriving packets are dropped, thereby avoiding synchronisation of the TCP traffic.

1.4 Random Early Detection

In the Random Early Detection (RED) congestion avoidance scheme, arriving packets are dropped with a probability which increases with the current average queue length. When the queue is busier, more packets are dropped; when less busy, fewer packets are dropped. Although some packets are dropped that could have been accepted to the queue and then transmitted, the scheme can nonetheless improve the overall average throughput of TCP connections and reduce link delay by avoiding congestion periods. In this assignment the RED algorithm, as described in [2], will be modelled. This paper should be studied carefully, particularly the algorithm as described on pages 5 and 6. We give a summary of the algorithm here.

The RED controller uses a smoothed moving average of queue size when deciding if a packet should be dropped. This smoothed measure avoids over-reaction to short-term bursts of traffic arriving to the queue. The moving average queue size is updated every time a new packet arrives, as follows:

avg ← (1 − w_q) avg + w_q q        (1)

where avg is the average queue size, q is the current number in the queue at the time of arrival and w_q is a fixed weighting factor that controls how heavily the current queue size influences the average (i.e. the degree of smoothing).

We note that the average queue size measure is updated only when a new packet arrives. If there is a period with no arrivals, the queue will begin to empty and may be completely empty when the next arrival finally occurs. The procedure for updating the average queue size, when an arrival occurs to an empty queue, needs to be modified to reflect the preceding idle period with no updates. Otherwise, the average queue size estimate will be too large, as it will only be reduced by a single sample of q=0 when the next arrival occurs. To make this adjustment we need to estimate the number of times the average would have been updated had it been updated at a similar rate to when packets are regularly arriving. The number of packets that could have arrived since the queue became idle is estimated as

m = (time − q_time) / s

where time is the time at which an arrival finds the queue empty, q_time is the start of the queue idle period, and s is a transmission time for a small packet. For this assignment we will estimate s as

s = 0.2 × [1 / mean_packet_service_rate].

To reflect how the average should have been updated if m packets had arrived in the idle period, the avg in this case is instead calculated as

avg ← (1 − w_q)^m avg.        (2)

In summary, equation (1) is used to update avg if a packet arrives to a non-empty queue and equation (2) is used if the queue is empty.
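
As an illustration of how equations (1) and (2) might be applied on each arrival, a minimal sketch in the style of the Appendix Java code is given below. All field and method names here (avg, qTime, wq, s, update, markIdleStart) are illustrative assumptions, not a prescribed design.

public class RedAverageSketch {

    private double avg = 0.0;        // smoothed (moving) average queue size
    private double qTime = 0.0;      // time at which the current idle period started
    private final double wq;         // weighting factor w_q, e.g. 0.005
    private final double s;          // typical small-packet transmission time, 0.2/mu

    public RedAverageSketch(double wq, double s) {
        this.wq = wq;
        this.s = s;
    }

    /** Called on every packet arrival, before the accept/drop decision is made. */
    public double update(double now, int currentQueueSize) {
        if (currentQueueSize > 0) {
            // Equation (1): the queue is non-empty at the arrival instant.
            avg = (1.0 - wq) * avg + wq * currentQueueSize;
        } else {
            // Equation (2): the queue has been idle since qTime; m estimates how many
            // updates would have occurred had packets continued to arrive regularly.
            double m = (now - qTime) / s;
            avg = Math.pow(1.0 - wq, m) * avg;
        }
        return avg;
    }

    /** Called when the queue and link become empty, recording the idle start time. */
    public void markIdleStart(double now) {
        qTime = now;
    }
}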

A decision is made for each arriving packet, using this measure of average queue size, whether the packet is accepted to the queue or dropped (discarded), as follows. If the average queue size is less than a chosen threshold min_th, then the packet is accepted to the queue. If the average queue size is greater than or equal to max_th, then the packet is dropped. Between these thresholds the packet is dropped with a probability p_a calculated as follows

p_b ← max_p (avg − min_th) / (max_th − min_th)        (3a)

p_a ← p_b / (1 − count · p_b)        (3b)

The probability p_b increases linearly with the average queue size, up to a maximum value of max_p which we specify. This probability is modified in equation (3b) to ensure packets are dropped at more regular (less bursty) intervals, where count is the number of packets not dropped since the last dropped packet (see [2] for discussion).
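
A corresponding sketch of the accept/drop decision, using equations (3a) and (3b), is given below. Again the names are illustrative; the handling of count is simplified (incremented on each accepted packet and reset to zero on each drop, as described above) and the exact bookkeeping of the full algorithm should be taken from [2].

import java.util.Random;

public class RedDropSketch {

    private final double minTh;      // min_th threshold, e.g. 5
    private final double maxTh;      // max_th threshold, e.g. 15
    private final double maxP;       // max_p, e.g. 0.05
    private final Random random = new Random();
    private int count = 0;           // packets not dropped since the last dropped packet

    public RedDropSketch(double minTh, double maxTh, double maxP) {
        this.minTh = minTh;
        this.maxTh = maxTh;
        this.maxP = maxP;
    }

    /** Returns true if an arriving packet should be dropped, given the current avg. */
    public boolean shouldDrop(double avg) {
        boolean drop;
        if (avg < minTh) {
            drop = false;                                            // always accept
        } else if (avg >= maxTh) {
            drop = true;                                             // always drop
        } else {
            double pb = maxP * (avg - minTh) / (maxTh - minTh);      // equation (3a)
            double pa = (count * pb >= 1.0)
                    ? 1.0                                            // guard against a negative denominator
                    : pb / (1.0 - count * pb);                       // equation (3b)
            drop = random.nextDouble() < pa;
        }
        count = drop ? 0 : count + 1;
        return drop;
    }
}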

The end objective of this assignment is to investigate the effectiveness of this algorithm in controlling mean delay and throughput in the transmission queue.

1.5 Discrete-Event Simulation of the System

The basic operation of a discrete-event simulator centres on a loop which removes the event at the head of the list, reads the information in the event, advances the current simulation time to that of the event, and inserts appropriate new event(s) into the list. Reading of an event may cause an update to variables representing the state of the system and may also trigger calculation/storage of statistic values. The simulation stops when some predetermined simulation time is reached. The events of interest to simulate the queuing system in this assignment are:

  • Arrival Events: An arrival event corresponds to a new packet arrival to the system. Arrival events repeat in accordance with the specified inter-arrival time distribution of packets.
  • Departure Events: A departure event signals the end of transmission of a packet at the router port/link. The delay of an individual packet in passing through the system is determined as its departure time minus its arrival time.
  • Statistic Events: Statistics relating to average delays and throughputs over a time period (batch statistics) will be generated by the simulator at periodic intervals. Occurrence of a statistic event signals that the statistic is to be calculated and outputted at a given time. (See below for a discussion of this).

It will be necessary to maintain variables representing the state of the queue/link. For example, the current number of packets in the queue will be updated when a new packet is accepted into the queue or when a packet leaves the queue to start its transmission in the output port.

As a general rule, for an efficient simulation implementation, only the state information relevant to the statistics of interest should be represented in a simulation model. As a hint towards an efficient implementation, we note that it is possible to calculate packet departure instances, and record the correct queuing/transmission delay for packets during the simulation, by maintaining only a small number of state variables for the queue. That is, it is not strictly necessary to keep a record of all packet lengths currently in a queue.
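
The sketch below illustrates one possible shape for this event processing loop, using the EventListManager and Event classes listed in the Appendix. The stop condition, the seeding of the first events and the body of the process() method are assumptions left to your own design.

public class SimulationLoopSketch {

    private double clock = 0.0;                  // current simulation time
    private final double endTime;                // total simulated time, e.g. 1000 seconds
    private final EventListManager eventList = EventListManager.getInstance();

    public SimulationLoopSketch(double endTime) {
        this.endTime = endTime;
    }

    public void run() {
        // Before the loop: insert the first arrival event and the first statistic event.
        while (!eventList.isEmpty()) {
            Event event = eventList.poll();      // remove the earliest event from the list
            clock = event.getEventClock();       // advance simulation time to the event
            if (clock > endTime) {
                break;                           // predetermined simulation time reached
            }
            process(event);
        }
    }

    /**
     * Dispatch on the event's type (e.g. a field you add to Event): an arrival
     * updates the queue state, decides accept/drop and schedules the next arrival
     * (and a departure if the link was idle); a departure records the packet's
     * delay and starts the next transmission, if any; a statistic event outputs
     * the batch statistics and schedules the next statistic event.
     */
    private void process(Event event) {
        // Student code.
    }
}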

1.6 Performance Metrics / Simulation Statistics

The performance metrics of interest in this assignment are:

  • Packet Delay: The time taken for a packet to clear the system. The delay is measured from the point in time when a new packet is generated (accepted to the end of the transmission queue) to the point in time when it finishes its transmission onto the link.
  • Throughput (Normalised): The average proportion of time the link is busy. (Given this number T, the bit-rate throughput can be calculated as T * link_bit_rate.) T may be found from the link's busy and idle periods over some time interval, as the fraction of the interval during which the link is busy. This may be done repeatedly during a simulation run to gain multiple samples of T from successive intervals, from which an overall average and a confidence measure for the result can be determined (see the batch means method in [3], p. 128).

In the case of the packet delay measure, each packet accepted to the queue generates a delay value, which if outputted from the simulator could generate a large volume of data and slow the progress of the simulation. Instead, batch mean statistics should be calculated as the simulation time progresses and this information outputted. Two batch statistics relating to delay are of interest:

  • the mean packet delay and
  • the mean proportion of packets with a larger delay than some given value.

These statistics are derived from the individual packet delays by (i) summing delay values over some fixed period and dividing by the number of packets, and (ii) counting packets with delays longer than the specified value over some fixed period and dividing by the number of (accepted) packets over the same period. For both measures, the process is repeated during a simulation run to give multiple samples of each statistic for successive time periods.

Batching of statistics can be implemented in a simulator by repeated insertion of an event that expires some fixed time t in the future. When simulation time reaches the event, the batched statistic is calculated and outputted and a new event inserted to expire t seconds from the current time, and so on.
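
A minimal sketch of this batching mechanism for the mean-delay statistic is shown below. The class and method names are illustrative, and it is assumed that the Event constructor (Appendix) stores the supplied clock value and that the event is marked in some way as a statistic event in your own Event design.

public class BatchMeanDelaySketch {

    private final double batchLength;    // batch interval in simulated seconds, e.g. 10.0
    private double delaySum = 0.0;       // sum of packet delays in the current batch
    private long packetCount = 0;        // number of accepted packets in the current batch

    public BatchMeanDelaySketch(double batchLength) {
        this.batchLength = batchLength;
    }

    /** Called once per departing packet with its delay (departure time minus arrival time). */
    public void recordDelay(double delay) {
        delaySum += delay;
        packetCount++;
    }

    /** Called when a statistic event expires: output the batch mean and schedule the next one. */
    public void onStatisticEvent(double now, EventListManager eventList) {
        double batchMean = (packetCount > 0) ? delaySum / packetCount : 0.0;
        System.out.println(now + "\t" + batchMean);           // one batch mean per output line
        delaySum = 0.0;                                       // reset accumulators for the next batch
        packetCount = 0;
        eventList.insertEvent(new Event(now + batchLength));  // next statistic event, t seconds later
    }
}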

1.7 Simulator Design and Testing

It may help if you plan and structure your code by defining appropriate classes such as Queue, Simulation, ExpGenerator, REDController, Statistic, etc. These classes are not prescriptive; you should choose your own code design. Testing components of the code as you progress through the assignment is recommended. For example, the numbers produced by your negative-exponential generator can be checked for the correct mean and variance, and a histogram (e.g. using Matlab/Excel) can quickly verify that the shape of the distribution is as expected.
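
As an example of such a component, a minimal sketch of a negative-exponential generator based on the inverse-transform method is given below (if U is uniform on (0,1) then −mean·ln(U) is negative-exponentially distributed with the given mean). The class name follows the ExpGenerator suggestion above; the constructor arguments are assumptions. As a check, the sample mean and sample variance of a large number of next() calls should be close to mean and mean squared respectively.

import java.util.Random;

public class ExpGenerator {

    private final double mean;        // the desired mean, e.g. 1/lambda or 1/mu
    private final Random random;

    public ExpGenerator(double mean, long seed) {
        this.mean = mean;
        this.random = new Random(seed);
    }

    /** Returns one negative-exponentially distributed sample with the configured mean. */
    public double next() {
        double u = random.nextDouble();     // uniform in [0, 1)
        while (u == 0.0) {                  // avoid ln(0)
            u = random.nextDouble();
        }
        return -mean * Math.log(u);
    }
}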

Bibliography

  [1] J. F. Kurose and K. W. Ross, “Computer Networking: A Top-Down Approach Featuring the Internet”, Addison Wesley (any of the Third Edition (2004) to the Sixth Edition (2012)).
  [2] S. Floyd and V. Jacobson, “Random Early Detection Gateways for Congestion Avoidance,” IEEE/ACM Transactions on Networking, vol. 1, no. 4, August 1993.
  [3] H. Perros, “Computer Simulation Techniques: The Definitive Introduction!”, 2009 (available at http://www4.ncsu.edu/~hp/simulation.pdf).
  [4] McArdle, “A Short Introduction to Probability, Queuing Theory and Network Analysis – Part I,” (EE509 long-form queuing analysis class notes, available on moodle.dcu.ie), September 2011.

2 The Assignment

Scenario I - Modelling a Queue with No Loss

The first part of this assignment is to develop simulation code to model a single queue/transmission link, with an unlimited length queue, to measure the system delay and to compare your simulation results to known exact analytical results (the M/M/1 queue detailed in the EE509 lecture notes [4]). The purpose of this exercise is to familiarise you with the fundamentals of random variate generation and event-advance simulation programming, and to allow you to validate some basic simulation results against easily-calculated exact mathematical formulae. This simulation will form the basis of the simulators in Scenarios II and III. The specification of the system to be modelled is as follows:

Link rate: 10 Mbps
Mean packet length: 8000 bits
Packet length distribution*: Negative-exponential
Inter-arrival time distribution: Negative-exponential
Maximum queue length: Unlimited
Queue discipline: FIFO

*Note: As an approximation, assume that packet lengths are real numbers drawn from a negative-exponential distribution with a mean of 8000 bits. Equivalently, packet transmission times are negative-exponentially distributed with mean 8000/10^7 seconds.

Scenario II - Modelling a Queue with a Drop-Tail Policy

The second part of this assignment involves modifying your unlimited length queue to model a limited length queue. Your simulation code will implement the measurement of system throughput and packet delay. This queue will reflect the behaviour of the drop-tail policy in a router and results will serve as a basis for comparison with the RED congestion control scheme in Scenario III. The specification of the system to be modelled is the same as Scenario I but with a finite queue length of 30 packet buffering spaces. (Note that when a packet enters service it is removed from the head of the queue and stored in a transmission buffer in the output port, immediately starting its transmission. Thus, in total there are 31 storage spaces in the queue/port/link system.)

Scenario III - Modelling a Queue with a RED Congestion Control

In this part of the assignment, you will extend the Scenario II model to implement the RED congestion control scheme and investigate its effectiveness when compared to the simple drop-tail policy. The overall purpose of the exercise is to familiarise you with simulation methods for performance evaluation of complex protocols and to illustrate the role of simulation in network protocol design. You will gain an understanding of how to take a description of a quite complex physical system and reduce it to a model in order to obtain estimates of pertinent performance measures. (In Assignment 2 (given at a later date), you will also compare simulation results to approximate analytical results, explore where inaccuracies in analytic approximations arise, and consider the relative merits of simulation and analysis.) The specification of the system to be modelled is the same as the previous scenarios, with the addition of the following RED algorithm parameters:

w_q: 0.005
max_p: 0.05
min_th: 5
max_th: 15
s: 0.2/μ
Maximum queue length: 30

Assignment Deliverables

This assignment requires submission of a project report containing the information detailed in the sections below. The report is marked out of 100 marks (with marks for each section as indicated below). This assignment contributes 17.5% to the overall EE509 module mark. (Assignment 2 will contribute 7.5% to the overall module mark). The assignment will be turned in via Moodle. You must attach a completed assignment coversheet (available on Moodle). This is an individual assignment. Your submission must be entirely your own work. The University plagiarism policies apply in full.

Details of the assignment deliverables and marking scheme are given below. Please follow the same headings and section numbers in your report.

Simple Queue Simulator (Scenario I)

Implement the M/M/1 queuing model simulation as specified in Scenario I. Your simulator should include a batched mean-delay statistic that calculates the mean delay at periodic intervals (see Section 1.6).

(1.1) Give a brief description of how your simulator generates random numbers with the specified distribution and a given mean value. [4 marks]

(1.2) Describe how the departure time of a packet from the system (completion of transmission) is calculated, given the arrival time of the packet (the time the packet joined the end of the queue). Make reference to any relevant variables that are used in the code to represent the current state of the system. [4 marks]

(1.3) Give a brief description of the functioning of the event processing loop, making reference to insertion and removal/processing of arrival, departure and statistic events during a simulation run. [6 marks]

Set up your simulation according to the parameters of Scenario I and with a mean offered load of ρ=0.8 Erlangs. Run the simulation for 1000 simulated seconds. Gather mean delay statistics in batches of 10 sec intervals (100 batch means in total).

(1.4) Show how you have calculated the mean service rate μ (packet transmission rate) and the mean arrival rate λ for the given offered load. [3 marks]

(1.5) Plot the set of batch means against time (e.g. using MS Excel, Matlab or Scilab) and calculate the mean value over all batches. [4 marks]

(1.6) Calculate the 95% confidence interval, stating the answer as a ± percentage. Show your calculation, briefly commenting on the method/formulae you have used. [6 marks]

(1.7) Compare your mean simulation result to the theoretical result for system delay in the M/M/1 queuing system (see [4], p. 61). Show your calculation. Comment briefly on the result. [4 marks]

(1.8) Using the analytic formula, plot a graph for mean system delay for the range of offered load values ρ = {0.4, 0.5, 0.6, 0.7, 0.8, 0.9} (with load values as x-axis). [4 marks]

Drop-Tail Queue Simulator (Scenario II)

Modify your previous simulator of Scenario I to implement the drop-tail queue of Scenario II. As well as a batched mean-delay statistic, your simulator should also implement a normalised throughput statistic and a statistic that records the proportion of packets whose delay is greater than a given value, as described in Section 1.6. A batch length of 5 seconds of simulated time should be used for all statistics.

(2.1) Explain briefly how the simulator of Scenario I has been modified to implement the drop-tail queue. [4 marks]

(2.2) Plot the instantaneous queue size vs time (all changes in queue size as packets arrive and depart the system) over a one second simulated interval, for ρ=1.2. [4 marks]

(2.3) Execute your simulator for the load values in Table 1 below. Use a total simulation run time that ensures that the 95% confidence interval for all mean delay measures is no greater than ±3%. What length of simulation run (in simulated seconds) was required? Complete Table 1 with mean throughput, mean delay and the corresponding 95% confidence intervals, calculated from the set of batch means in each case. [12 marks]

Table 1: Drop-tail Policy Simulated Performance Statistics

Offered Load (Erlang) | Mean Throughput (as fraction of time link busy) | 95% C.I. (as ±%) | Mean Delay (ms) | 95% C.I. (as ±%)
0.7 | | | |
0.8 | | | |
0.9 | | | |
1.0 | | | |
1.1 | | | |
1.2 | | | |
1.3 | | | |
1.4 | | | |

(2.4) Calculate the theoretical value of normalised throughput (λ_eff / μ) for an offered load of 1 Erlang and compare it to your simulation result. (See the analysis of the M/M/1/K queue in [4], p. 68, noting that K=31 in this case.) [4 marks]

(2.5) From your simulation statistics, when the offered load is 1.1 Erlangs, what proportion of accepted packets are delayed by more than 25 ms? Quote the 95% confidence interval with your answer. [4 marks]

RED Queue Simulator (Scenario III)

Modify your previous simulator to implement the RED congestion avoidance policy described in Section 1.4. Use the same total queue length as in the previous exercise.

(3.1) Explain briefly how the previous simulator has been modified to implement the RED algorithm, making reference in particular to the following questions and, where appropriate, listing code snippets to illustrate your implementation. [8 marks]

  1. When is the instantaneous queue size updated?
  2. When is the average queue size updated?
  3. How is the start of an idle period detected?
  4. How is it decided whether a packet is to be dropped, given some value of p_a?

(3.2) Plot the instantaneous queue size over a one second interval when ρ=1.2 and the average queue size (moving average) on the same plot. Choose an interval that includes at least one idle period (so that the effect of equation (2) is illustrated). Compare with the plot of (2.2). [6 marks]

(3.3) Execute your simulator for the load values in Table 2 below. Use the same total simulation run time and batch length as used in (2.3) above. Complete Table 2 with mean throughput, mean delay and 95% confidence intervals. [12 marks]

Table 2: RED Congestion Control Policy Simulated Performance Statistics

Offered Load (Erlang) | Mean Throughput (as fraction of time link busy) | 95% C.I. (as ±%) | Mean Delay (ms) | 95% C.I. (as ±%)
0.7 | | | |
0.8 | | | |
0.9 | | | |
1.0 | | | |
1.1 | | | |
1.2 | | | |
1.3 | | | |
1.4 | | | |

(3.4) Using your data from Tables 1 and 2, plot (i) a graph comparing the mean delay of the drop-tail and RED congestion avoidance policies and (ii) the mean normalised throughput of the two policies (using offered load on the x-axis of the graphs). Briefly comment on the differences. [6 marks]

(3.5) From your simulation results, when the offered load is 1.1 Erlangs, what proportion of accepted packets are delayed by more than 25 ms? Quote the 95% confidence interval with your answer. Assuming that a higher layer protocol ignores/discards packets that have been delayed longer than 25 ms, estimate and compare the total effective throughput for the RED policy and the drop-tail policy (using your answer to (2.5) for the drop-tail calculation). [5 marks]

Report Appendix

A full listing of your program code is to be included as an appendix to the report. All simulation results quoted in your report must agree with those produced if your code is executed by the examiner.

Appendix

The following code may be used as a basis for your simulator implementation:

EventListManager.java

//package yourPackage...

import java.util.LinkedList;
import java.util.ListIterator;

/**
 * This class implements the functionalities of an event list in the "event
 * advance" simulation design. It provides methods to manage the insertion, the
 * sorting and the removal of an event in the list. Only one instance of this
 * class is admissible (Singleton Java pattern). To create an instance of this
 * class use the method getInstance() and not the constructor. For example:
 *
 *     EventListManager myEventList = EventListManager.getInstance();
 *
 * @author Daniele Tafani 10/10/09
 */
public class EventListManager {

    /** The instance of the EventListManager class. */
    private static EventListManager instance;

    /** The event list implemented as a Java LinkedList object. */
    private static LinkedList<Event> eventList;

    /** Constructor of the class. It creates a LinkedList object. */
    private EventListManager() {
        eventList = new LinkedList<Event>();
    }

    /**
     * Returns the singleton instance.
     *
     * @return instance : the EventListManager instance.
     */
    public static synchronized EventListManager getInstance() {
        if (instance == null)
            instance = new EventListManager();
        return instance;
    }

    /**
     * Inserts an event into the list of events, sorted according to its
     * associated clock value.
     *
     * If the list is empty, the event is added as the first element of the
     * list. If the list is not empty, the event is inserted in order of its
     * clock value (for example, if the list has two events which will occur at
     * CLOCK_1 = 1.5 and CLOCK_2 = 2.4 and the event to be inserted has
     * CLOCK = 1.6, the method will insert the event between the two events).
     * If the event clock is greater than all the event clocks in the list,
     * the method will insert it as the last element.
     *
     * @param event (Event) : the event to be inserted in the list.
     */
    public void insertEvent(Event event) {
        boolean isInserted = false;
        if (isEmpty()) {
            eventList.addFirst(event);
            isInserted = true;
        } else {
            ListIterator<Event> iterator = eventList.listIterator();
            while (iterator.hasNext()) {
                Event listEvent = iterator.next();
                if (event.getEventClock() <= listEvent.getEventClock()) {
                    iterator.previous();
                    iterator.add(event);
                    isInserted = true;
                    break;
                }
            }
            if (!isInserted)
                eventList.addLast(event);
        }
    }

    /**
     * Removes and returns the first event from the list.
     *
     * @return firstEvent (Event) : the first event of the list.
     */
    public Event poll() {
        Event firstEvent = eventList.removeFirst();
        return firstEvent;
    }

    /**
     * Checks if the list contains no events.
     *
     * @return isEmpty (boolean) : "true" if the list is empty, "false" otherwise.
     */
    public boolean isEmpty() {
        boolean isEmpty = eventList.isEmpty();
        return isEmpty;
    }
}

Event.java

//package yourPackage...

/**
 * This class is an implementation of the event object to be inserted in the
 * event list. In the current version it contains just one attribute, the clock
 * value at which the event occurs. It also contains a set of get() and set()
 * methods to retrieve and modify the clock value. The student has to define
 * the constructor of this class and any additional attributes that are
 * necessary to uniquely define the event.
 *
 * @author Daniele Tafani 10/10/09
 */
public class Event {

    /** The clock value of the event. */
    private double eventClock;

    /** Constructor of the class: to be developed by the student... */
    public Event(double eventClock) {
        // Fill in with your code....
    }

    /**
     * Returns the event clock value.
     *
     * @return (double) eventClock : the event clock value.
     */
    public double getEventClock() {
        return eventClock;
    }

    /**
     * Modifies the event clock value.
     *
     * @param eventClock : the new event clock value (double).
     */
    public void setEventClock(double eventClock) {
        this.eventClock = eventClock;
    }

    // Add attributes that are necessary to uniquely define the event object.
    // ....
}