Motivations underlying Leaky ARQ and progressively reliable packet delivery:


In the InfoPad project, we were motivated by the desire to deliver interactive image data over a noisy, low-bandwidth wireless link that accessed the Internet and Web.  Over time-varying fading wireless links, my hypothesis was that the conventional approach of delivering Web-based images over TCP would result in sporadically frozen browser screens.  The resulting user experience would be one of unpredictable interaction, even though images would be displayed error-free.  I felt that the dominant subjective design goal of protocols for wireless Web access should be to promote consistent interaction, i.e. consistent low-latency image delivery, and that reliability, while a worthy design objective, was secondary to promoting interaction.  The current paradigm of Web access via TCP does the opposite, promoting reliability over interactivity.  Therefore, I felt that an alternative to TCP was necessary to support interactive wireless Web access.

Supporting interactivity as an integral design objective of a network protocol implies emphasizing UDP-like low-latency delivery at the cost of admitting occasionally unreliable packet delivery.  The reliability mechanisms available to network protocols, such as retransmission-based ARQ and forward error correction (FEC), add latency.  During typical fades (1% BER), this added delay far exceeds the interactive latency bound, as quantified in my thesis.  Real-time applications on the Internet today, such as interactive video conferencing, already forgo TCP delivery - even in the absence of a wireless link - primarily because of latency concerns with TCP.  To have any hope of achieving delivery within the interactive latency bound, one must relax the guarantee of total reliability and instead tolerate some unreliable packet delivery.

Tolerating unreliable packet delivery implies tolerating packet loss, but more subtly it also provides for tolerating packet corruption.  The traditional approach to networking is motivated by Shannon's separation theorem, which states that end-to-end distortion is minimized when images are compressed as aggressively as possible and network protocols are left separately to provide as much error protection as is necessary to achieve the distortion target.  However, Shannon's separation theorem assumes that 1) the compression/decompression engines, as well as the error coding/decoding algorithms, have unlimited time in which to execute, 2) these codecs are allowed to have unlimited complexity, and 3) the statistics of the source/image and channel/network are stationary.  Clearly, the delay limits imposed by interactivity violate assumption 1).  The complexity of image codecs and error codecs can also be limited, e.g. on PDAs, thereby violating assumption 2).  Moreover, wireless channels are highly nonstationary, violating assumption 3).

In place of Shannon's separation theorem, there is a large body of literature known as joint source/channel coding (JSCC), which provides counterexamples showing that end-to-end distortion in a delay-limited, complexity-limited system can be minimized by sharing information about network statistics with the image codec, and by sharing information about image statistics with the error codec.  For example, JSCC advocates that image codecs practice error-resilient coding that tolerates bit errors, instead of the traditional approach of aggressive compression.  JSCC also advocates that error codecs and protocols practice unequal error protection and forward corrupt packets, rather than applying uniformly aggressive FEC and discarding all corrupt packets as conventional ARQ protocols do.
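As an illustration of the unequal error protection that JSCC advocates, the following minimal Python sketch (my own construction, not anything from the thesis or InfoPad) protects critical image bits, such as headers and DC coefficients, with a stronger repetition code than the refinement bits.  The repetition rates, the 10% fade BER, and all function names are assumptions chosen only to make the contrast visible.

    # Illustrative UEP sketch: critical bits get a rate-1/5 repetition code,
    # refinement bits a rate-1/3 code.  All rates and names are assumptions.
    import random

    def repeat_encode(bits, n):
        """Repeat each bit n times."""
        return [b for b in bits for _ in range(n)]

    def repeat_decode(coded, n):
        """Majority-vote decode an n-fold repetition code."""
        return [1 if sum(coded[i:i + n]) * 2 > n else 0
                for i in range(0, len(coded), n)]

    def noisy_channel(bits, ber, rng):
        """Flip each bit independently with probability ber."""
        return [b ^ 1 if rng.random() < ber else b for b in bits]

    def uep_send(critical, refinement, ber, rng):
        """Protect critical bits more heavily than refinement bits."""
        coded = repeat_encode(critical, 5) + repeat_encode(refinement, 3)
        received = noisy_channel(coded, ber, rng)
        split = len(critical) * 5
        return repeat_decode(received[:split], 5), repeat_decode(received[split:], 3)

    rng = random.Random(0)
    critical = [rng.randint(0, 1) for _ in range(64)]     # e.g. headers, DC terms
    refinement = [rng.randint(0, 1) for _ in range(512)]  # e.g. AC refinement bits
    crit_out, ref_out = uep_send(critical, refinement, ber=0.10, rng=rng)  # deep fade
    print(sum(a != b for a, b in zip(critical, crit_out)), "critical bit errors")
    print(sum(a != b for a, b in zip(refinement, ref_out)), "refinement bit errors")

Under the same fade, the heavily protected critical bits come through nearly clean while the lightly protected refinement bits absorb most of the residual errors, which is exactly the trade JSCC argues a delay-limited system should make.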

An end-to-end protocol that directly forwards all received packets to the receiving application is incomplete: the quality of the received packets will fluctuate directly with the quality of the wireless channel.  Instead, such a protocol should be designed to provide the receiving application with packet versions that are progressively more reliable.  Each packet version can be made statistically closer to error-free than previously delivered versions of the same packet by an FEC technique called code combining.  In its most general form, code combining caches every retransmitted version of a packet that is received and combines this packet history to reconstruct packet versions with fewer and fewer bit errors.  Incorporating the mechanism of progressive reliability with the concept of forwarding possibly noisy packets forms the basis of the Leaky ARQ protocol.
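The following minimal Python sketch (again my own illustration, not the thesis implementation) shows the progressive-reliability idea in its simplest hard-decision form: the receiver caches every noisy copy of a packet and, after each arrival, hands the application a version combined by per-bit majority voting.  Full code combining would instead combine soft-decision channel symbols across retransmissions, but the trend of fewer residual errors in each delivered version is the same.

    # Progressive-reliability sketch: cache noisy copies of a packet and deliver
    # a combined version after each arrival.  Hard-decision majority voting
    # stands in here for true (soft-decision) code combining.
    import random

    class ProgressivePacket:
        """Caches noisy copies of one packet and emits combined versions."""

        def __init__(self, length):
            self.votes = [0] * length   # per-bit running vote: +1 for 1, -1 for 0

        def add_copy(self, noisy_bits):
            """Fold a newly received (possibly corrupt) copy into the cache."""
            for i, b in enumerate(noisy_bits):
                self.votes[i] += 1 if b else -1
            # The combined version delivered upward after this copy arrives.
            return [1 if v > 0 else 0 for v in self.votes]

    # Demo: each (re)transmission is independently corrupted at 5% BER, yet the
    # combined version trends toward error-free as copies accumulate.
    rng = random.Random(1)
    original = [rng.randint(0, 1) for _ in range(1000)]
    pkt = ProgressivePacket(len(original))
    for attempt in range(1, 6):
        noisy = [b ^ 1 if rng.random() < 0.05 else b for b in original]
        combined = pkt.add_copy(noisy)
        errors = sum(a != b for a, b in zip(original, combined))
        print(f"after copy {attempt}: {errors} residual bit errors")

Because every delivered version is built from the full packet history, the application can render a noisy image immediately and then watch it clean up with each retransmission, which is the behavior Leaky ARQ is meant to provide.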