
Conference Paper

Worst-Case Efficient External-Memory Priority Queues

MPS-Authors

Brodal, Gerth Stølting
Algorithms and Complexity, MPI for Informatics, Max Planck Society;

Katajainen, Jyrki
Max Planck Society;

Citation

Brodal, G. S., & Katajainen, J. (1998). Worst-Case Efficient External-Memory Priority Queues. In S. Arnborg & L. Ivansson (Eds.), Proceedings of the 6th Scandinavian Workshop on Algorithm Theory (SWAT-98) (pp. 107-118). Berlin, Germany: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-380E-9
Abstract
A priority queue $Q$ is a data structure that maintains a collection of elements, each with an associated priority drawn from a totally ordered universe, under the operations {\sc Insert}, which inserts an element into $Q$, and {\sc DeleteMin}, which deletes an element with the minimum priority from $Q$. In this paper a priority-queue implementation is given which is efficient with respect to the number of block transfers, or I/Os, performed between the internal and external memories of a computer. Let $B$ and $M$ denote the capacities of a block and of the internal memory, respectively, both measured in elements, and let $N$ denote the number of elements stored in the data structure. The developed data structure handles any intermixed sequence of {\sc Insert} and {\sc DeleteMin} operations such that in every disjoint interval of $B$ consecutive priority-queue operations at most $c \log_{M/B} \frac{N}{M}$ I/Os are performed, for some positive constant $c$. These I/Os are divided evenly among the operations: if $B \geq c \log_{M/B} \frac{N}{M}$, one I/O is necessary for every $B/(c\log_{M/B} \frac{N}{M})$th operation, and if $B < c \log_{M/B} \frac{N}{M}$, $\frac{c}{B}\log_{M/B} \frac{N}{M}$ I/Os are performed per operation. Moreover, every operation requires $O(\log_2 N)$ comparisons in the worst case. The best earlier solutions can only handle a sequence of $S$ operations with $O(\sum_{i=1}^{S}\frac{1}{B}\log_{M/B}\frac{N_{i}}{M})$ I/Os, where $N_{i}$ denotes the number of elements stored in the data structure prior to the $i$th operation, and give no guarantee for the performance of the individual operations.
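
To make the I/O bound concrete, the minimal sketch below evaluates the per-operation cost in the two cases distinguished in the abstract. It illustrates the stated bound only, not the paper's data structure; the function name, the default constant c = 1.0, and the example parameter values are assumptions made for this sketch.

```python
import math

def ios_per_operation(N, M, B, c=1.0):
    """Per-operation I/O cost implied by the abstract's bound of at most
    c * log_{M/B}(N/M) I/Os per B consecutive priority-queue operations.

    N = elements stored, M = internal-memory capacity, B = block capacity
    (all measured in elements). c = 1.0 is a placeholder; the paper only
    asserts the existence of some positive constant c.
    """
    assert N > M and M > B > 0, "illustrative regime: N > M > B"
    L = c * math.log(N / M, M / B)  # c * log_{M/B}(N/M)
    if B >= L:
        # One I/O for every floor(B / L)-th operation
        # (the floor is taken here for the illustration).
        return 1.0 / math.floor(B / L)
    # Otherwise, ceil(L / B) I/Os on every operation.
    return math.ceil(L / B)

# Example with hypothetical disk-like parameters.
print(ios_per_operation(N=10**9, M=10**6, B=10**3))
```

For instance, with the hypothetical values N = 10^9, M = 10^6, and B = 10^3, the bound amortizes to one I/O per 1000 operations.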