
Fastest mode of data transfer

Problem Detail: 

Which of the following modes of data transfer is the fastest?

a. DMA

b. Interrupt-based

c. Polling

d. All are equally fast

I do not have the answer key, so I cannot check; that is why I am posting here.

I think the answer is D: all are equally fast. In every case the data transfer speed is the same, but with DMA we save CPU time and thereby improve CPU efficiency, since the transfer itself is done by the DMA controller. Is my understanding correct?

Asked By : avi

Answered By : Paul A. Clayton

First, "fastest" is an ambiguous term: it can refer to latency or to throughput/bandwidth.

Second, DMA is orthogonal to interrupts and polling. I.e., the completion of the DMA can trigger an interrupt or be determined by polling an I/O register (or the memory that is being written in some cases). (Explicit processor copying to/from memory is the alternative to DMA.)
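To make the orthogonality concrete, here is a minimal sketch in C. The `dma_dev` structure, register layout, and cycle-free "hardware" are all invented for illustration; the DMA engine is simulated in software so the example is self-contained. The same completed transfer can be detected either by polling the status register or by an interrupt handler:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical device model (an assumption, not a real device): a DMA
 * controller exposes a status register that the hardware sets when the
 * transfer completes.  The "hardware" here is simulated in software. */
typedef struct {
    volatile uint32_t status;     /* bit 0 = transfer complete */
    uint8_t buffer[64];           /* destination of the DMA write */
} dma_dev;

enum { DMA_DONE = 1u << 0 };

/* Simulate the DMA engine copying data and raising the completion bit. */
static void dma_start(dma_dev *dev, const uint8_t *src, size_t n) {
    memcpy(dev->buffer, src, n);  /* stands in for the hardware copy */
    dev->status |= DMA_DONE;      /* hardware sets status when done */
}

/* Completion detected by polling: spin on the status register. */
static int wait_by_polling(dma_dev *dev) {
    int wasted_reads = 0;
    while (!(dev->status & DMA_DONE))
        wasted_reads++;           /* each iteration is a bus read that
                                     learned nothing new */
    return wasted_reads;
}

/* Completion detected by interrupt: the handler runs once, no spinning. */
static int transfer_complete = 0;
static void dma_isr(dma_dev *dev) {
    if (dev->status & DMA_DONE) {
        dev->status &= ~DMA_DONE; /* acknowledge the completion */
        transfer_complete = 1;
    }
}
```

Either notification path works with the same DMA transfer, which is the point: DMA answers *who copies the data*, while polling versus interrupts answers *how completion is noticed*.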

Third, the tradeoffs between polling and interrupts will vary depending on the hardware. In general, because an interrupt involves saving and restoring processor state, it has higher latency (saving and restoring state takes time). When I/O events are extremely frequent and of a limited number of types, the overhead of repeatedly reading a limited number of I/O registers will be less than the overhead of context switching. (Obviously, increasing the number of events polled--in terms of unique I/O registers or memory accesses--decreases the relative disadvantage of interrupts.)
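The tradeoff above can be put into a back-of-the-envelope cost model. The cycle counts below are made-up assumptions for illustration, not measurements of any real hardware:

```c
#include <assert.h>

/* Assumed costs (illustrative only):                                    */
#define INT_OVERHEAD_CYCLES 500  /* save + restore state per interrupt   */
#define POLL_READ_CYCLES     50  /* one uncached I/O register read       */

/* Total notification cost for `events` events delivered by interrupt. */
static long interrupt_cost(long events) {
    return events * INT_OVERHEAD_CYCLES;
}

/* Total notification cost for `polls` register reads (many of which may
 * find nothing new when events are rare). */
static long polling_cost(long polls) {
    return polls * POLL_READ_CYCLES;
}
```

Under these numbers polling wins whenever fewer than 500 / 50 = 10 reads are needed per event (i.e., events are frequent), and interrupts win when events are rare enough that most poll reads come back empty.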

With polling, the polling thread takes up processor resources that could otherwise run other code, and reading an I/O device register (or ordinary memory location) when the polled event has not happened (the probed value has not changed) uses bandwidth that could otherwise serve productive purposes. Excessive polling can also reduce the availability of buffers between processors and I/O devices, delaying fast I/O communication behind slow communication (and in some cases the I/O bus is optimized for larger packets, so small read responses--which communicate no new information--waste bandwidth).

However, hardware can optimize interrupts in several ways. It can coalesce interrupts at the I/O device or at an intermediate point between I/O devices and the processor. At the processor, it is possible to fold interrupts together so that instead of immediately returning from an interrupt (with the overhead of restoring state) a pending interrupt is handled. Some processors also have shadow registers that are used to avoid the need to save and restore state on interrupts. (Software can also combine the use of interrupts and polling by performing some polling while in the interrupt handler. This can reduce the number of interrupts.)
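The interrupt-plus-polling combination mentioned at the end of that paragraph is, for example, the idea behind Linux's NAPI for network drivers: an interrupt masks further interrupts and switches to polling until the device queue is drained (or a budget is exhausted). A minimal simulated sketch, with an invented `nic_dev` device model:

```c
#include <assert.h>

/* Hypothetical NIC model (simulated; field names are assumptions). */
typedef struct {
    int pending;      /* packets waiting in the device queue */
    int irq_enabled;  /* whether the device may raise interrupts */
} nic_dev;

static int packets_handled = 0;

/* Interrupt handler: mask interrupts, then poll the queue.  Returns the
 * number of packets processed in this pass. */
static int nic_isr(nic_dev *dev, int budget) {
    dev->irq_enabled = 0;          /* mask further interrupts */
    int done = 0;
    while (dev->pending > 0 && done < budget) {
        dev->pending--;            /* "process" one packet */
        packets_handled++;
        done++;
    }
    if (dev->pending == 0)
        dev->irq_enabled = 1;      /* queue drained: rearm interrupts */
    return done;
}
```

One interrupt thus amortizes its state-save/restore cost over many packets; under sustained load the driver stays in the cheap polling mode, and interrupts return only when traffic goes quiet.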

In theory, hardware could also reduce the overhead of polling with multithreading and throttling the execution of a polling thread that is not performing useful work. (Even with efficient polling--where every read indicates an I/O has completed--multithreading can allow work to be done while waiting for the read of the I/O device register to be completed--this tends to have relatively high latency.)

In a very theoretical sense, I/O device register values could be pushed into cache when they change (a variant on DMA). This could reduce the wait time for reading such values (helping both interrupt handlers and polling) and facilitate hardware thread throttling (because the processor could quickly determine that no change had occurred). However, this is incompatible with how most ISAs are defined and with how I/O devices are implemented, and it would greatly increase the complexity of the system.

Best Answer from Computer Science Stack Exchange

Question Source : http://cs.stackexchange.com/questions/10238
