 Tue, Jan 18
Thu, Jan 20
[lecture notes]

Lazy binary and lazy Fibonacci counters
after [Bentley Saxe 1980]
Decomposable searching problems: the Bentley–Saxe logarithmic method; local rebuilding; lazy deamortization; O(P(n) log n / n) worst-case insertion time; anti-data structures for invertible queries; weak deletions; (lazy) global rebuilding; O(P(n) log n / n) worst-case insertion time and O(P(n)/n + D(n)) worst-case deletion time.
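The logarithmic method has a very short implementation; here is a minimal Python sketch, assuming the "static structure" is simply a sorted array and the decomposable search problem is membership (partial answers combine with "or"). All identifiers are invented for illustration.

```python
# A sketch of the Bentley–Saxe logarithmic method: maintain O(log n) static
# blocks whose sizes are distinct powers of two; an insertion behaves like a
# binary-counter increment, merging blocks until it finds an empty slot.
import bisect

class LogMethod:
    def __init__(self):
        self.blocks = []  # blocks[i] is None or a sorted list of size 2^i

    def insert(self, x):
        carry = [x]
        i = 0
        while i < len(self.blocks) and self.blocks[i] is not None:
            # "rebuild": merge the carry with the occupied block
            carry = sorted(carry + self.blocks[i])
            self.blocks[i] = None
            i += 1
        if i == len(self.blocks):
            self.blocks.append(None)
        self.blocks[i] = carry

    def contains(self, x):
        # decomposability: OR the answers from all static blocks
        for b in self.blocks:
            if b is not None:
                j = bisect.bisect_left(b, x)
                if j < len(b) and b[j] == x:
                    return True
        return False
```

Each element is rebuilt O(log n) times over its lifetime, which is where the O((P(n)/n) log n) amortized insertion bound comes from.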
 Lars Arge and
Jeffrey Scott Vitter.
Optimal external memory interval management.
SIAM J. Computing 32(6):1488–1508, 2003.
[Among other results, describes weight-balanced B-trees, a prime example of a data structure that directly supports insertions and weak deletions.]

Jon L. Bentley. Decomposable searching problems.
Information Processing Letters 8:244–251, 1979.

Jon L. Bentley. Multidimensional binary search trees used for associative
searching. Communications of the ACM 18:509–517, 1975.
[Introduces kd-trees, a prime example of a data structure that easily supports weak deletions, but not insertions or real deletions.]
 Jon L. Bentley and James B. Saxe*.
Decomposable searching problems: 1. Static-to-dynamic transformations.
J. Algorithms 1(4):301–358, 1980.
[Introduces the logarithmic method for adding insertion to any static data structure for a decomposable search problem.]

Michael J. Clancy* and Donald E. Knuth. A Programming and Problem-Solving Seminar. Stanford University Computer Science Department Report STAN-CS-77-606, April 1977.
[One of the earliest examples of nonstandard number systems being used to design efficient dynamic data structures—in this case, finger search trees.]
 Kurt Mehlhorn.
Lower bounds on the efficiency of transforming static data structures
into dynamic data structures. Mathematical Systems Theory
15:1–16, 1981.
 Kurt Mehlhorn.
Data Structures and Algorithms III: Multidimensional
Searching and Computational Geometry. EATCS Monographs,
Springer, 1984.
[Begins with a thorough overview of dynamization techniques for decomposable search problems. Still one of the best references for this material.]
 Kurt Mehlhorn
and Mark H. Overmars*.
Optimal dynamization of decomposable search problems.
Information Processing Letters 12:93–98, 1981.
 Mark H. Overmars*.
The Design of Dynamic Data Structures.
Lecture Notes in Computer Science 156, Springer, 1983.
[Mark's PhD thesis. Describes several families of methods for transforming static data structures into fully dynamic data structures, along with several geometric applications. Still one of the best references for this material.]

Mark H. Overmars*
and
Jan van Leeuwen.
Two general methods for dynamizing decomposable searching problems.
Computing 26:155–166, 1981.
[Describes a lazy version of Bentley–Saxe to modify data
structures that support weak deletions and insertions, so that they support
real deletions.]

Mark H. Overmars*
and
Jan van Leeuwen.
Worstcase optimal insertion and deletion methods for
decomposable searching problems.
Information Processing Letters 12(4):168–173, 1981.
[Introduces the lazy local rebuilding technique for deamortizing Bentley–Saxe.]

Himanshu Suri and Victor Vazquez. Combination Pizza Hut and Taco Bell. Shut Up, Dude, 2010.
[They're at the Pizza Hut. They're at the Taco Bell. They're at the combination Pizza Hut and Taco Bell. Nearest neighbor searching is a decomposable search problem over an idempotent but non-inversive semigroup.]
 Tue, Jan 25

No class today — Jeff is at SODA
 Thu, Jan 27
[lecture notes]

A well-balanced tree?
[Andersson,
J. Alg. 1999]
Scapegoat trees: O(log n) worst-case search and amortized update time using O(1) extra space
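As a concrete illustration of the rebalancing scheme, here is a Python sketch of a scapegoat tree with weight-balance parameter α = 2/3: nodes store no balance information, and an insertion that lands too deep walks back up the search path, finds a weight-unbalanced ancestor (the scapegoat), and rebuilds that subtree perfectly. This follows the standard textbook description; all identifiers are invented, and it is not the actual code from any of the papers below.

```python
import math

class _Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class ScapegoatTree:
    ALPHA = 2 / 3  # weight-balance parameter

    def __init__(self):
        self.root, self.n = None, 0

    def _size(self, v):
        return 0 if v is None else 1 + self._size(v.left) + self._size(v.right)

    def _flatten(self, v, out):  # in-order traversal into a list
        if v is not None:
            self._flatten(v.left, out)
            out.append(v)
            self._flatten(v.right, out)

    def _build(self, nodes, lo, hi):  # perfectly balanced rebuild
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        v = nodes[mid]
        v.left = self._build(nodes, lo, mid)
        v.right = self._build(nodes, mid + 1, hi)
        return v

    def search(self, key):
        v = self.root
        while v is not None and v.key != key:
            v = v.left if key < v.key else v.right
        return v is not None

    def insert(self, key):
        self.n += 1
        path, v = [], self.root
        while v is not None:
            path.append(v)
            v = v.left if key < v.key else v.right
        new = _Node(key)
        if not path:
            self.root = new
        elif key < path[-1].key:
            path[-1].left = new
        else:
            path[-1].right = new
        path.append(new)
        # Too deep? Walk up to the scapegoat: the first ancestor that is
        # alpha-weight-unbalanced, and rebuild its entire subtree.
        if len(path) - 1 > math.log(self.n, 1 / self.ALPHA):
            child_size = 1
            for i in range(len(path) - 2, -1, -1):
                v = path[i]
                size = self._size(v)
                if child_size > self.ALPHA * size:  # v is the scapegoat
                    nodes = []
                    self._flatten(v, nodes)
                    sub = self._build(nodes, 0, len(nodes))
                    if i == 0:
                        self.root = sub
                    else:
                        p = path[i - 1]
                        if p.left is v:
                            p.left = sub
                        else:
                            p.right = sub
                    break
                child_size = size

    def height(self):
        def h(v):
            return -1 if v is None else 1 + max(h(v.left), h(v.right))
        return h(self.root)
```

Even for the worst case of sorted insertions, the tree height stays logarithmic, at the cost of occasional whole-subtree rebuilds.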
 Arne Andersson*.
Improving
partial rebuilding by using simple balance criteria.
Proc. Workshop on Algorithms and Data Structures, 393–402.
Lecture Notes in Computer Science 382, Springer, 1989.
[The first discovery of scapegoat trees.]
 Arne Andersson.
General balanced trees.
J. Algorithms 30:1–28, 1999.
[The journal version of the previous paper.]
 Arne Andersson and
Tony W. Lai*.
Fast updating of wellbalanced trees.
Proc. 2nd Scandinavian Workshop on Algorithm Theory 111–121.
Lecture Notes in Computer Science 447, Springer, 1990.
[Scapegoat trees with depth lg n + c require only
O(log² n/c) amortized time per insertion or deletion, for any constant c.
The amortized update time can be reduced to O(log n/c) using a
standard indirection trick: replace the leaves of the scapegoat tree
with perfectly balanced subtrees of size Θ(log n).]
 Igal Galperin* and
Ronald L. Rivest.
Scapegoat trees.
Proc. 4th Annual ACM–SIAM Symposium on Discrete Algorithms,
165–174, 1993.
[The second independent discovery of scapegoat
trees and the origin of the name.]
A SelfAdjusting Search Tree
by Jorge Stolfi (1987)
Splay trees: rotations, roller-coasters and zig-zags, weighted amortized analysis, O(log n) amortized time, static optimality theorem, static and dynamic finger properties, working set property, dynamic optimality and other splay tree conjectures
 Ranjan Chaudhuri
and Hartmut Höft.
Splaying a search tree in preorder takes linear time.
International J. Mathematics and Mathematical Sciences
14(3):545–551, 1991. Also appeared in SIGACT News
24(2):88–93, 1993.
 Richard Cole,
Bud Mishra,
Jeanette Schmidt, and
Alan Siegel.
On the dynamic finger conjecture for splay trees. Part I: Splay
sorting log n-block sequences. SIAM J.
Computing 30:1–43, 2000.
 Richard Cole.
On the dynamic finger conjecture for splay trees. Part II:
The proof. SIAM J. Computing 30:44–85, 2000.
 Amr Elmasry.
On the sequential access theorem and deque conjecture for splay trees.
Theoretical Computer Science 314(3):459–466, 2004.
[Best upper bound known for the sequential access
theorem: Accessing all n elements of a splay tree in increasing order
requires at most 4.5n rotations.]
 Seth Pettie.
Splay trees, DavenportSchinzel sequences, and the deque conjecture.
Proc. 19th Annual ACM–SIAM Symposium on Discrete Algorithms 1115–1124,
2008.
[Proves that n deque operations on an n-node splay
tree require at most O(n α*(n)) time, where α*(n) is the
iterated inverse Ackermann function. The paper includes
a thorough survey of other splay tree results, up to 2008.]

Daniel D. Sleator
and
Robert E. Tarjan.
Self-adjusting binary search trees.
J. ACM 32:652–686, 1985.
[HTML demo and C code]
[The original splay tree paper.]
 Rajamani Sundar*. On the deque conjecture for the splay algorithm.
Combinatorica 12(1):95–124, 1992.
[Proves that n deque operations on an n-node splay
tree require at most O(n α(n)) time, where α(n) is the inverse
Ackermann function.]
 Robert E. Tarjan.
Data Structures and Network Algorithms.
CBMSNSF Regional Conference Series in Applied Mathematics 44,
SIAM, 1983.
[Includes a chapter on splay trees. Yes, it
took less time to publish an entire 130-page book than one
35-page journal paper.]
 Robert E. Tarjan.
Sequential access in splay trees takes linear time.
Combinatorica 5:367–378, 1985.
[Established the sequential access theorem:
Accessing all n elements of a splay tree in increasing order
requires O(n) time.]
 Tue, Feb 1
Thu, Feb 3

GreedyFuture in action
[Demaine et al., SODA 2009]
Lower bounds for dynamic binary search trees: arborally satisfied sets, independent rectangle lower bound, Wilber's interleave and alternation lower bounds, SignedGreedy lower bound, GreedyFuture
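The geometric view is easy to experiment with. The sketch below (all names invented) runs the greedy point-set algorithm described by Lucas, Munro, and Demaine et al.: after each access (key, time), it adds a point in the current row for every earlier point whose rectangle with the access point is empty; a brute-force checker then verifies arboral satisfaction directly. Both routines are quadratic-to-cubic in the number of points, so this is only for small experiments.

```python
def greedy_points(accesses):
    """Run the greedy algorithm in the geometric (point-set) view: the
    access at time t to key x is the point (x, t), and we add the points
    (y, t) needed to satisfy every empty rectangle with an earlier point."""
    points = set()
    for t, x in enumerate(accesses, start=1):
        row = {x}
        for (px, pt) in list(points):
            if px == x:
                continue  # points on a common vertical line are satisfied
            lo, hi = min(px, x), max(px, x)
            empty = not any(
                lo <= qx <= hi and pt <= qt <= t and (qx, qt) != (px, pt)
                for (qx, qt) in points
            )
            if empty:
                row.add(px)  # satisfy the rectangle with a point in row t
        points |= {(y, t) for y in row}
    return points

def arborally_satisfied(points):
    """Check the defining property: every pair of points not on a common
    horizontal or vertical line has a third point in its closed rectangle."""
    for p in points:
        for q in points:
            if p[0] != q[0] and p[1] != q[1]:
                xlo, xhi = min(p[0], q[0]), max(p[0], q[0])
                tlo, thi = min(p[1], q[1]), max(p[1], q[1])
                if not any(
                    r != p and r != q and
                    xlo <= r[0] <= xhi and tlo <= r[1] <= thi
                    for r in points
                ):
                    return False
    return True
```

By the equivalence theorem, the number of points greedy adds is (up to constants) the cost of the conjecturally optimal offline BST on the same access sequence.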

Erik D. Demaine, Dion Harmon*,
John Iacono,
Daniel Kane*, and
Mihai Pătraşcu.
The geometry
of binary search trees. Proc. 20th Annual ACM–SIAM
Symposium on Discrete Algorithms 496–505, 2009.
[Introduces arborally satisfied sets, and proves
their equivalence to dynamic binary search trees. See also Derryberry
et al. 2005.]
 Jonathan Derryberry*,
Daniel D. Sleator,
and Chengwen Chris Wang*.
A lower bound framework for binary search trees with rotations.
Technical Report CMUCS05187, Carnegie Mellon University,
November 22, 2005.
[A variant of the geometric view of dynamic binary
search trees, developed independently from Demaine et al. 2009.]
 George F. Georgakopoulos.
Splay trees: a reweighting lemma and a proof of competitiveness
vs. dynamic balanced trees.
J. Algorithms 51:64–76, 2004.
[Defines a class of so-called parametrically
balanced dynamic binary search trees and proves that
splay trees are O(1)-competitive within this class.]
 Dion Harmon*.
New Bounds on Optimal Binary Search Trees.
Ph.D. dissertation, Department of Mathematics, MIT, June 2006.
[Describes the results in Demaine et al. 2009 in
more detail, using slightly different terminology.]
 John Iacono.
Key-independent optimality.
Algorithmica 42:3–10, 2005.
 Joan M. Lucas.
Canonical forms for competitive binary search tree algorithms.
Tech. Rep. DCSTR250, Rutgers University, 1988.
[Introduces an offline dynamic binary search
tree algorithm, called GreedyFuture
by Demaine et al.; see also Munro 2000.]
 J. Ian Munro.
On the competitiveness of linear search.
Proc. 8th Annual European Symposium on Algorithms
338–345. Lecture Notes in Computer Science 1879, Springer, 2000.
[Independently describes the offline
dynamic binary search tree algorithm called IAN (“Introduce As Necessary”)
in Harmon's thesis and GreedyFuture
by Demaine et al.; see also Lucas 1988.]
 Robert E. Wilber*.
Lower bounds for accessing binary search trees with rotations.
SIAM J. Computing 18(1):56–67, 1989.
[Proves two important lower bounds on the total
access cost of dynamic binary search trees.]
Representation of a preferred path in a zipper tree
[Bose et al., SODA 2008]
O(log log n)-competitive binary search trees:
tango trees, multi-splay trees, chain-splay trees, skip-splay trees, poketrees, zipper trees
 Avrim Blum,
Shuchi Chawla*, and
Adam Kalai*.
Static optimality and dynamic searchoptimality in lists and trees.
Algorithmica 36(3):249–260, 2003.
[Describes a simple but computationally expensive
dynamic binary search tree that is 14-competitive if the cost of
rotations is ignored.]
 Prosenjit Bose,
Karim Douïeb*,
and Stefan Langerman.
Dynamic optimality for skip lists and B-trees.
Proc. 19th Annual ACM–SIAM Symposium on Discrete Algorithms
1106–1114, 2008.
[Proves that among “weakly bounded” skip lists,
the working set bound is a lower bound on the access time, and describes
a “weakly bounded” deterministic self-adjusting skip list whose access
time is within a constant factor of the working set bound.]
 Prosenjit Bose,
Karim Douïeb,
Vida Dujmović, and
Rolf Fagerberg.
An
O(log log n)-competitive binary search tree with optimal
worst-case access times.
Proc. 12th Scandinavian Symposium and Workshops on Algorithm
Theory, 38–49.
Lecture Notes in Computer Science 6139, Springer, 2010.
[Zipper trees, an O(log log n)-competitive
dynamic binary search tree.]

Erik D. Demaine, Dion Harmon*,
John Iacono, and
Mihai Pătraşcu.
Dynamic
optimality—almost. SIAM J. Computing
37(1):240–251, 2007.
[Tango trees, the first O(log log n)-competitive
dynamic binary search tree. Only competitive for searches, not for
insertions or deletions.]
 Jonathan Derryberry*
and
Daniel D. Sleator.
Skip-splay: Toward achieving the unified bound in the BST model.
Proc. Workshop on Algorithms and Data Structures, 194–205.
Lecture Notes in Computer Science 5664, Springer, 2009.
[Another O(log log n)-competitive dynamic binary
search tree.]
 George F. Georgakopoulos.
Chain-splay trees, or, how to achieve and prove
log log n-competitiveness by splaying.
Information Processing Letters 106(1):37–43, 2008.
[Another O(log log n)-competitive dynamic binary
search tree, based on splaying.]
 John Iacono.
Alternatives to splay trees with O(log n) worst-case access times.
Proc. 12th Annual ACM–SIAM Symposium on Discrete Algorithms,
516–522, 2001.
[Defines the unified bound, and describes a dynamic
dictionary (but not a binary search tree) that satisfies it.]
 Jussi Kujala* and Tapio Elomaa.
Poketree: A dynamically competitive data structure with good
worst-case performance. Proc. 17th International Symposium
on Algorithms and Computation, 277–288. Lecture Notes in
Computer Science 4288, Springer, 2006.
[An O(log log n)-competitive dynamic data structure
with O(log n) worst-case access time, but strictly speaking not a
dynamic binary search tree.]

Chengwen Chris Wang*,
Jonathan Derryberry*, and
Daniel D. Sleator.
O(log log n)-competitive binary search trees.
Proc. 17th Annual ACM–SIAM Symposium on Discrete Algorithms,
374–383, 2006.
[Multi-splay trees, the first O(log log n)-competitive
dynamic binary search tree that competitively supports insertions and
deletions.]

Chengwen Chris Wang*.
Multi-Splay Trees.
Ph.D. dissertation, Department of Computer Science,
Carnegie Mellon University, July 2006.
 Tue, Feb 8
Thu, Feb 10

A multilevel partition of an edge-weighted binary tree, and the resulting topology tree.
[Frederickson 1997]
Dynamic forest maintenance:
Link-cut trees; Cut, Join, and FindRoot in O(log n) amortized time; path queries and updates in O(log n) amortized time via lazy propagation; Euler-tour trees; subtree queries and updates in O(log n) amortized time; sketches of topology trees, top trees, rake-compress trees, and self-adjusting top trees.
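As a toy illustration of the Euler-tour representation, the sketch below stores each tree's Euler tour as a plain Python list, so Link and Cut become list splits and concatenations (in linear time); the real data structure stores each tour in a balanced binary search tree so that the same splits and joins run in O(log n). All identifiers are invented, and Cut assumes {u, v} is an existing tree edge.

```python
class EulerTourForest:
    def __init__(self, vertices):
        # Each isolated vertex v has the one-element tour [v].
        self.tours = {v: [v] for v in vertices}   # representative -> tour
        self.where = {v: v for v in vertices}     # vertex -> representative

    def _reroot(self, tour, u):
        # Rotate the Euler tour so that it starts and ends at u.
        i = tour.index(u)
        return tour[i:] + tour[1:i] + ([u] if i > 0 else [])

    def connected(self, u, v):
        return self.where[u] == self.where[v]

    def link(self, u, v):
        # Reroot both tours, then splice:  tour(u) + tour(v) + [u].
        assert not self.connected(u, v)
        tu = self._reroot(self.tours.pop(self.where[u]), u)
        tv = self._reroot(self.tours.pop(self.where[v]), v)
        tour = tu + tv + [u]
        self.tours[u] = tour
        for x in set(tour):
            self.where[x] = u

    def cut(self, u, v):
        # Remove tree edge {u, v}: after rerooting at u, the subtree under
        # v is the contiguous segment between the first and last copy of v.
        tour = self._reroot(self.tours.pop(self.where[u]), u)
        i = tour.index(v)
        j = len(tour) - 1 - tour[::-1].index(v)
        sub, rest = tour[i:j + 1], tour[:i] + tour[j + 2:]
        self.tours[u], self.tours[v] = rest, sub
        for x in set(sub):
            self.where[x] = v
        for x in set(rest):
            self.where[x] = u
```

Unlike link-cut trees, this representation makes subtree operations easy (a subtree is one contiguous interval of the tour) but path operations hard, which is exactly the trade-off discussed in the papers below.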
 Umut A. Acar*,
Guy E. Blelloch,
Robert Harper,
Jorge L. Vittes*, and
Shan Leung Maverick Woo*.
Dynamizing static algorithms, with applications to dynamic trees
and history independence. Proc. 15th Annual ACM–SIAM
Symposium on Discrete Algorithms, 531–540, 2004.
[Rake-compress trees.]

Stephen Alstrup,
Jacob Holm*,
Kristian de Lichtenberg*,
and
Mikkel Thorup.
Maintaining information in fully dynamic trees with top trees.
ACM Transactions on Algorithms 1(2):243–264, 2005.
 Greg Frederickson.
Data structures for online updating of minimum spanning trees,
with applications. SIAM J. Computing 14:781–798, 1985.
[Topology trees.]
 Greg Frederickson.
A data structure for dynamically maintaining rooted trees.
J. Algorithms 24:37–65, 1997.
[Directed topology trees.]
 Andrew V. Goldberg,
Michael D. Grigoriadis, and
Robert E. Tarjan.
Use of dynamic trees in a network simplex algorithm for the maximum
flow problem. Mathematical Programming 50(1–3):277–290, 1991.
[Added subtree operations to link-cut trees.]
 Monika Rauch Henzinger
and
Valerie King.
Randomized fully dynamic graph algorithms with polylogarithmic time
per operation.
J. ACM 46:502–516, 1999.
[Euler-tour trees; see also Tarjan 1997.]
 Gary L. Miller and
John H. Reif.
Parallel tree contraction and its application.
Proc. 26th Annual IEEE Symposium on Foundations of Computer
Science, 478–489, 1985.
[Introduces the rake and compress operations
in the context of parallel algorithms.]
 Gary L. Miller and
John H. Reif.
Parallel tree contraction part 1: Fundamentals.
Randomness and Computation, 47–72.
Silvio Micali, editor.
Advances in Computing Research 5, JAI Press, 1989.
[Full version of the previous paper.]

Daniel D. Sleator*
and
Robert E. Tarjan.
A data structure for dynamic trees.
J. Computer and System Sciences 26:362–391, 1983.
[The original formulation of link-cut trees
via biased trees, with O(log n) worst-case operation time.
Uses dynamic trees to obtain the first maximumflow algorithm that
runs in O(mn log n) time, based on blocking flows.]
 Robert E. Tarjan.
Data Structures and Network Algorithms.
CBMSNSF Regional Conference Series in Applied Mathematics 44,
SIAM, 1983.
[Includes a chapter on link-cut trees, reformulated
to use splay trees.]
 Robert E. Tarjan.
Efficiency of the primal network simplex algorithm for the minimumcost
circulation problem. Mathematics of Operations Research
16(2):272–291, 1991.
[Added directed-path operations to link-cut trees.]
 Robert E. Tarjan.
Dynamic trees as search trees via euler tours, applied to the
network simplex algorithm. Mathematical Programming
78(2):169–177, 1997.
[Euler-tour trees; see also Henzinger and King 1999.]
 Robert E. Tarjan
and Uzi Vishkin. An efficient parallel biconnectivity algorithm.
SIAM J. Computing 14(4):862–874, 1985.
[Introduces the Euler tour technique in the
context of parallel algorithms.]
 Robert E. Tarjan
and
Renato Werneck*.
Selfadjusting top trees.
Proc. 16th Annual ACM–SIAM Symposium on Discrete Algorithms,
813–822, 2005.
 Robert E. Tarjan
and
Renato Werneck*.
Dynamic trees in practice.
J. Experimental Algorithmics 14, article 4.5, 2009.
[Compares several dynamic forest structures, including Euler-tour trees, rake-compress trees, splay-based link-cut trees, top trees, self-adjusting top trees, and a brute-force link-cut tree with only parent pointers. For small graphs (up to a few thousand vertices), brute force wins. For larger graphs, link-cut trees win in applications with more links and cuts; there is no clear winner in applications with more queries. Really, ACM, what is this "Article 4.5" nonsense?]
 Tue, Feb 15
Thu, Feb 17

An edge pivots into a planar shortest-path tree as the source node moves along an edge.
After [Chambers Cabello 2007]
Applications of dynamic trees:
Maximum flows via network simplex; parametric and multiple-source shortest paths; dynamic graph connectivity
 Glencora Borradaile*
and Philip Klein.
An O(n log n) algorithm for maximum st-flow in a directed planar graph.
J. ACM 56(2), article 9, 2009.
[Does what it says on the tin.]
 Sergio Cabello
and
Erin W. Chambers*.
Multiple source shortest paths in a genus g graph.
Proc. 18th Annual ACM–SIAM Symposium on Discrete Algorithms
89–97, 2007.
[Generalizes Klein's multiple-source shortest-path
algorithm for planar graphs to graphs embedded on
more general surfaces.]
 David Eppstein.
Dynamic generators of topologically embedded graphs.
Proc. 14th Annual ACM–SIAM Symposium on Discrete Algorithms
599–608, 2003.
[Uses Thorup's dynamic connectivity data structure
to dynamically maintain the generators of a fundamental group of a
surface represented by a rotation system of a graph. Introduces
treecotree decompositions for surface graphs. Those two sentences
should make sense if you took my computational topology class last
year.]
 Jeff Erickson.
Maximum flows and parametric shortest paths in planar graphs.
Proc. 21st Annual ACM–SIAM Symposium on Discrete Algorithms
794–804, 2010.
[Expresses and reanalyzes Borradaile and Klein's
planar maximumflow algorithm in terms of parametric shortest paths.
Hey, it is my class.]
 Andrew V. Goldberg,
Michael D. Grigoriadis, and
Robert E. Tarjan.
Use of dynamic trees in a network simplex algorithm for the maximum
flow problem. Mathematical Programming 50(1–3):277–290, 1991.
[Used dynamic trees to obtain another maximumflow
algorithm that runs in O(mn log n) time, based on network simplex,
using a pivot rule of Goldfarb and Hao.]
 Monika Rauch Henzinger
and
Valerie King.
Randomized fully dynamic graph algorithms with polylogarithmic time
per operation.
J. ACM 46:502–516, 1999.
[Edge insertions and edge deletions in
O(log^{3} n) amortized expected time, and connectivity
queries in O(log n/log log n) time.]
 Jacob Holm*,
Kristian de Lichtenberg*,
and Mikkel Thorup.
Polylogarithmic deterministic fullydynamic algorithms
for connectivity, minimum spanning tree, 2edge, and
biconnectivity. J. ACM 48(4):723–760, 2001.
[Edge insertions and edge deletions in
O(log^{2} n) time and connectivity queries in
O(log n/log log n) time.]
 Raj D. Iyer, Jr.*,
David Karger,
Hariharan S. Rahul*, and
Mikkel Thorup.
An experimental study of polylogarithmic, fully dynamic,
connectivity algorithms.
J. Experimental Algorithmics 6, article 4, 2001.
 Philip Klein.
Multiplesource shortest paths in planar graphs.
Proc. 16th Annual ACM–SIAM Symposium on Discrete Algorithms
146–155, 2005.
[Implicitly computes all shortest-path trees rooted
on a single face of a planar graph in O(n log n) time. Stores
both the shortest path tree and the dual cotree in dynamic tree data
structures.]
 Robert E. Tarjan.
Dynamic trees as search trees via euler tours, applied to the
network simplex algorithm. Mathematical Programming
78(2):169–177, 1997.
[Simplifies the O(mn log n)time networksimplex
maximumflow algorithm of Goldberg, Grigoriadis, and Tarjan using
Eulertour trees.]
 Mikkel Thorup.
Nearoptimal fullydynamic graph connectivity.
Proc. 32nd Annual ACM Symposium on Theory of Computing,
343–350, 2000.
[Edge insertions and edge deletions in
O(log n (log log n)^{3}) time and
connectivity queries in O(log n/log log log n)
time, almost matching the Ω(log n) lower bound.]
 Tue, Feb 22
Thu, Feb 24

Reducing dynamic prefix-sum to dynamic connectivity
[Demaine Pătraşcu 2006]
Lower bounds for dynamic connectivity:
Cell probe model, prefix sums, reductions, time trees, nondeterministic encoding
 Michael L. Fredman and
Monika Rauch Henzinger.
Lower bounds for fully dynamic connectivity problems in graphs.
Algorithmica 22(3):351–362, 1998.
[Among other results, observes an easy reduction
from parity prefix sums to dynamic reachability. See also
Miltersen et al. 1994.]
 Michael L. Fredman and
Michael E. Saks.
The cell probe complexity of dynamic data structures.
Proc. 21st ACM Symposium on Theory of Computing, 345–354, 1989.
[Introduces the chronogram technique for proving
cell-probe lower bounds. Proves an Ω(log n/log log n)
lower bound for dynamic parity prefix sum.]
 Peter Bro Miltersen.
Cell probe complexity: A survey.
Manuscript, 2000.
Invited talk/paper at Advances in Data Structures, a
preconference workshop of FSTTCS'99.
[Powerpoint slides]
 Peter Bro Miltersen,
Sairam Subramanian*,
Jeffrey Scott Vitter,
and Roberto Tamassia,
Complexity models for incremental computation.
Theoretical Computer Science 130(1):203–236, 1994.
[Among other results, observes an easy reduction
from parity prefix sums to dynamic reachability. See also Fredman
and Henzinger 1998.]
 Mihai Pătraşcu.
Lower bounds for dynamic connectivity.
Encyclopedia of Algorithms, 463–477.
Springer, 2008.
[Overview of the author's two Ω(log n) cell-probe
lower bound proofs.]
 Mihai Pătraşcu.
Unifying the landscape of cellprobe lower bounds.
To appear in SIAM J. Computing, 2011.
[Proves several cell-probe lower bounds, including
an Ω(log n/log log n) lower bound for dynamic connectivity,
via reductions from lopsided set disjointness.]

Mihai Pătraşcu*
and Erik D. Demaine.
Logarithmic lower bounds in the cell-probe model.
SIAM J. Computing 35(4):932–963, 2006.
[Proves an Ω(log n) cell-probe lower bound for
dynamic connectivity (by reduction from partial sums) using the
“time tree” technique.]
 Mihai Pătraşcu* and
Corina E. Tarniţă*.
On dynamic bitprobe complexity.
Theoretical Computer Science 380, 2007.
[Reproves the Ω(log n) cell-probe lower bound for
dynamic connectivity (by reduction from partial sums) using a
modification of Fredman and Saks' chronogram technique.]
 Andrew ChiChih Yao.
Should tables be sorted? J. ACM 28(3):615–628, 1981.
[Formally introduces the cell-probe model of
computation. The answer is “Not if you can hash.”]
 Tue, Mar 15
Thu, Mar 17

Bit masks used to solve QBF in polynomial time, on a unit-cost RAM with multiplication.
Unrealistic integer RAMs: Solving anything in #PSPACE in polynomial time and exponential space (sic)
 Alberto Bertoni, Giancarlo Mauri, Nicoletta Sabadini.
A characterization of the class of functions computable in polynomial
time on Random Access Machines.
Proc. 13th Annual ACM Symposium on Theory of Computing
168–176, 1981.
 Peter van Emde Boas.
Machine models and simulations.
Handbook of Theoretical Computer Science, Volume A:
Algorithms and Complexity, 1–66.
Jan van Leeuwen, editor, MIT Press, 1990.
 Juris Hartmanis and Janos Simon.
On the power of multiplication in random access machines.
15th Annual IEEE Symposium on Switching and Automata Theory
13–23, 1974.
 Vaughan R. Pratt, Michael O. Rabin, and Larry J. Stockmeyer.
A characterization of the power of vector machines.
Proc. 6th Annual ACM Symposium on Theory of Computing
122–134, 1974.
Predecessor search over a finite universe
[Mihai Pătraşcu]
Part of an exponential tree, with the root on the left and children on the right
[Andersson Thorup 2007]
Realistic integer RAMs: transdichotomous RAM, word RAM, AC⁰ RAM
Ordered dictionaries: van Emde Boas trees, x-fast tries, y-fast tries via indirection, fusion trees, exponential search trees
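A compact Python sketch of a van Emde Boas tree over the universe {0, …, 2^w − 1}, following the usual textbook conventions: the minimum of each substructure is not stored recursively, and clusters are kept in a dictionary (hashing), so the space is O(n) rather than O(u). Insert and successor recurse on half the word length, giving O(log w) time. This is an illustration, not van Emde Boas's original formulation; all names are invented.

```python
class VEB:
    def __init__(self, w):
        self.w = w
        self.half = w // 2          # low half: w//2 bits; high half: the rest
        self.min = self.max = None
        self.clusters = {}          # high part -> VEB(w//2)
        self.summary = None         # VEB over the non-empty cluster indices

    def insert(self, x):
        if self.min is None:
            self.min = self.max = x
            return
        if x == self.min:
            return
        if x < self.min:
            self.min, x = x, self.min   # new minimum; old one recurses down
        if x > self.max:
            self.max = x
        if self.w > 1:
            h, l = x >> self.half, x & ((1 << self.half) - 1)
            if h not in self.clusters:
                self.clusters[h] = VEB(self.half)
                if self.summary is None:
                    self.summary = VEB(self.w - self.half)
                self.summary.insert(h)
            self.clusters[h].insert(l)

    def successor(self, x):
        """Smallest stored value > x, or None if there is none."""
        if self.w == 1:
            return 1 if x == 0 and self.max == 1 else None
        if self.min is not None and x < self.min:
            return self.min
        h, l = x >> self.half, x & ((1 << self.half) - 1)
        c = self.clusters.get(h)
        if c is not None and c.max is not None and l < c.max:
            # successor lives in x's own cluster
            return (h << self.half) | c.successor(l)
        # otherwise jump to the minimum of the next non-empty cluster
        nh = self.summary.successor(h) if self.summary is not None else None
        if nh is None:
            return None
        return (nh << self.half) | self.clusters[nh].min
```

The one-level recursion on min/max is exactly what keeps every operation to a single recursive call, the key to the O(log w) bound.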
 Arne Andersson.
Faster deterministic sorting and searching in linear space.
Proc. 37th Annual IEEE Symposium on Foundations of Computer
Science 135–141, 1996.
[Dynamic ordered dictionary with O(√log n)
query and amortized update time using O(n) space, for any word
size w, via exponential trees.]
 Rene De La Briandais.
File searching using variable length keys.
Proc. Western Joint Computer Conference, 295–298, 1959.
[Describes a tree data structure similar to tries,
but with (sparse) highdegree nodes represented by linked lists.]
 Arne Andersson,
Peter Bro Miltersen,
and
Mikkel Thorup.
Fusion trees can be implemented with AC⁰ instructions only.
Theoretical Computer Science 215(1–2):337–344, 1999.
[...but they may require AC⁰ instructions that
are not standard C operations.]

Arne Andersson and
Mikkel Thorup.
Dynamic ordered sets with exponential search trees.
J. ACM 54(3), article 13, 2007.
[Combined journal version of Andersson's FOCS 1996
paper and Andersson and Thorup's STOC 2000 paper.]
 Arne Andersson and
Mikkel Thorup.
Tight(er) worst-case bounds on dynamic searching and priority queues.
Proc. 32nd Annual ACM Symposium on Theory of Computing,
335–342, 2000.
[Dynamic ordered dictionary with
optimal worst-case update and query time
O(√(log n/log log n)), using O(n) space,
for any word size w, using only C instructions (including
multiplication), via worst-case efficient exponential trees.]
 Paul Beame and Faith E. Fich.
Optimal bounds for the predecessor problem and related problems.
J. Computer and System Sciences 65(1):38–72, 2002.
[Dynamic ordered dictionary with optimal
query and amortized update time
O(min{log w/log log w,
√(log n/log log n)}), using Andersson's exponential
search trees, as well as matching cellprobe lower bounds. The paper
includes an impressively thorough survey of previous upper and lower
bounds for ordered dictionaries and several related data structure
problems.]
 Peter van Emde Boas.
An O(n log log n) online algorithm for the Insert-Extract
Min problem.
Technical Report TR 74221, Computer Science Department,
Cornell University, December 1974.
[The original version of “stratified trees”, which
required O(u log log u) space.]
 Peter van Emde Boas.
Preserving order in a forest in less than logarithmic time.
16th Annual IEEE Symposium on Foundations of Computer Science
75–84, 1975.
[An updated version of “stratified trees” that uses
only O(u) space, by way of indirection. Apparently the idea of separately
storing the min and max element of every chunk was developed later.]
 Peter van Emde Boas, R. Kaas, and E. Zijlstra.
Design and implementation of an efficient priority queue.
Mathematical Systems Theory 10:99–127, 1977.
[The first formal publication of the original version
of “stratified trees”, which required O(u log log u) space.
I never knew that the uptree data structure for disjoint sets was
called a “Tritter tree”!]
 Edward Fredkin. Trie memory.
Communications of the ACM 3:490–499, 1962.
[Fredkin pronounced it “tree”, as in
“retrieval”.]
 Michael L. Fredman and Dan E. Willard.
BLASTING through the information theoretic barrier with FUSION TREES.
Proc. 22nd Annual ACM Symposium on Theory of Computing
1–7, 1990.
[The conference version of the original fusion tree
paper. Dynamic ordered dictionary with O(log n/log w) query
and (amortized) update time using O(n) space. If you cite this paper,
please capitalize the title words correctly. Thank you.]
 Michael L. Fredman and Dan E. Willard.
Surpassing the information theoretic bound with fusion trees.
J. Computer and System Sciences 47(3):424–436, 1993.
[The journal version of the original fusion tree
paper. Apparently the referees did not have a sense of humor.]
 Michael L. Fredman and Dan E. Willard.
Transdichotomous algorithms for minimum spanning trees and shortest paths.
J. Computer and System Sciences 48:533–551, 1994.
 Mihai Pătraşcu and
Mikkel Thorup.
Timespace tradeoffs for predecessor search.
Proc. 38th Annual ACM Symposium on Theory of Computing,
232–240, 2006.
 Mihai Pătraşcu and
Mikkel Thorup.
Randomization does not help searching predecessors.
Proc. 18th Annual ACM/SIAM Symposium on Discrete Algorithms
555–564, 2007.
[Tight cellprobe lower bounds for static ordered
dictionaries, for any number of integers with any number of bits each,
any word size, and any space bound.]
 Dan E. Willard.
Log-logarithmic worst-case range queries are possible in space Θ(N).
Information Processing Letters 17(2):81–89, 1983.
[x-fast tries and y-fast tries: queries in
O(log w) time using O(n) space. The paper only describes the
static data structure, because dynamic perfect hash tables were not
yet known.]
 Dan E. Willard.
New trie data structures which support very fast search operations.
J. Computer and System Sciences 28:379–394, 1984.
[p-fast tries and q-fast tries: both queries and
updates in O(√w) time using O(n) space.]
 Tue, Mar 22
Thu, Mar 24

Spring break: No class
 Tue, Mar 29

Sorting a stack of Indecks notched cards.
From
The Last Whole Earth Catalog (1975)
Representing 94, without and with a Single Figure notch.
[Anderson 1949]
Fast integer sorting: radix sort, packed sorting, Batcher's bitonic parallel mergesort, signature sort
 Susanne Albers
and
Torben Hagerup.
Improved parallel integer sorting without concurrent writing.
Information and Computation 136(1):25–51, 1997.
[Packed sorting: Sorts n b-bit integers in O(n)
time, using word size w = Ω(b log n log log n).]
 Arne Andersson,
Torben Hagerup,
Stefan Nilsson*.
Blasting past fusion trees.
Technical Report LUCSTR:94135, Department of Computer Science,
Lund University, 1994.
[O(n log log n) time for all w ≥ log n.
Requires hashing (and therefore randomization) to work in only O(n)
space. See also Han and Shen 1995.]

Arne Andersson,
Torben Hagerup,
Stefan Nilsson*, and
Rajeev Raman.
Sorting in linear time?
J. Computer and System Sciences 57(1):74–93, 1998.
[Signature Sort: O(n) expected time for all w ≥
Ω(log^{2+ε} n), using a "full instruction set". Also:
O(n log log n) time for all w, using a restricted instruction
set (and hashing to get O(n) space).]
 Kenneth E. Batcher.
Bitonic sorting.
Goodyear Aerospace Report GER-11869, 1964.
[Batcher got his PhD from UIUC in 1964!]
 Kenneth E. Batcher.
Sorting networks and their applications.
Proc. AFIPS Spring Joint Computer Conference, Vol. 32, 307–314, 1968.
[Bitonic and even-odd sorting networks. Bitonic
mergesort is a key component in Albers and Hagerup's packed sorting
algorithm. Batcher got his PhD from UIUC in 1964!]
 Amir M. BenAmram and
Zvi Galil.
When can we sort in o(n log n) time?
J. Computer and System Sciences 54:345–370, 1997.
[If and only if (1) either w is not enormous or we
allow nonuniform algorithms, and (2) either memory is huge or we allow
double-precision multiplication.]

Leslie J. Comrie. The Hollerith and Powers tabulating machines.
Transactions of the Office Machinery Users' Association,
25–37, 1929–30.
[Possibly the first published description of
radix sort. Comrie was the first person to use punchedcard
equipment for scientific (rather than business and statistical)
purposes.]
 Yijie Han.
Fast integer sorting in linear space.
Proc. 17th Annual Symposium on Theoretical Aspects of Computer
Science, 242–253.
Lecture Notes in Computer Science 1770, Springer, 2000.
[O(n (log log n)^{3/2}) time, without
hashing.]
 Yijie Han.
Improved fast integer sorting in linear space.
Proc. 12th Annual ACM–SIAM Symposium on Discrete Algorithms,
793–796, 2001.
[O(n log log n log log log n)
time for all w, and O(n log log n) for all w ≥
Ω(log^{2+ε} n), without hashing.]
 Yijie Han.
Deterministic sorting in O(n log log n) time and linear space.
J. Algorithms 50(1):96–105, 2004.
[Avoids hashing.]
 Yijie Han and
Xiaojun Shen.
Conservative algorithms for parallel and sequential integer sorting.
Proc. 1995 International Computing and Combinatorics
Conference, 324–333. Lecture Notes in Computer
Science 959, Springer, 1995.
[O(n log log n) time for all w ≥ log n.
Requires hashing (and therefore randomization) to work in only O(n)
space. See also Andersson, Nilsson, and Hagerup 1994.]
 Yijie Han and
Mikkel Thorup.
Integer sorting in O(n √log log n) expected time and
linear space.
Proc. 43rd Symposium on Foundations of Computer Science,
135–144, 2002.

Herman Hollerith.
Complete specification: Improvements in the methods of and apparatus
for compiling statistics. UK Patent № 327, January 8, 1889.
[Includes a description of multikey sorting, using
an algorithm similar to radix sort.
“The most complicated combinations can readily be counted with
comparatively few counters or relays by first assorting the cards
according to the first items entering into the combinations, then
resorting each group according to the second item entering into the
combination, and so on, and finally counting on a few counters the
last item of the combinations for each group of cards.”]
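Hollerith's column-by-column card sorting survives as textbook radix sort. A minimal sketch of the least-significant-digit variant for non-negative integers (the function name and `base` parameter are illustrative, not from any of the papers above):

```python
def radix_sort(nums, base=10):
    # LSD radix sort: one stable bucket pass per digit position,
    # least significant digit first -- one pass per card column.
    if not nums:
        return nums
    passes = len(str(max(nums)))  # digit positions needed
    for p in range(passes):
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // base**p) % base].append(x)
        # Stability of the bucket order preserves earlier passes.
        nums = [x for b in buckets for x in b]
    return nums
```

Each pass is stable, so after processing digit p every prefix of digits up to p is in order; the quoted patent describes the most-significant-digit-first variant, which instead recurses within each group.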

David G. Kirkpatrick and Stefan Reisch*.
Upper bounds for sorting integers on random access machines.
Theoretical Computer Science 28(3):263–276, 1984.
[How to sort n integers in O(n log(w/log n))
time.]
 Stefan Nilsson*.
Radix Sorting & Searching. PhD thesis, Department
of Computer Science, Lund University, 1996.
 Stefan Nilsson.
The fastest sorting algorithm?
Dr. Dobb's Journal 311:38–45, 2000.
[Includes C source
code for Signature Sort.]
 Mikkel Thorup.
Randomized sorting in O(n log log n) time and linear space
using addition, shift, and bitwise boolean operations.
J. Algorithms, 42(2):205–230, 2002.
 Dan E. Willard.
Log-logarithmic worst-case range queries are possible in space Θ(N).
Information Processing Letters 17(2):81–89, 1983.
[How to sort n integers in O(n log w) time,
via y-fast tries.]
 Thu, Mar 31

Priority queues: atomic heaps, equivalence with sorting, black-box O(1)-time insertion, black-box melding
 Miklós Ajtai, Michael L. Fredman, and János Komlós.
Hash functions for priority queues.
Information and Control 63(3):217–225, 1984.
[Θ(w^{n−1}) space is necessary and sufficient for
O(1)-time priority queues via hashing.]

Stephen Alstrup,
Thore Husfeldt,
Theis Rauhe, and
Mikkel Thorup.
Black box for constant-time insertion in priority queues.
ACM Transactions on Algorithms 1(1):102–106, 2005.
[Any priority queue that supports INSERT,
DELETE, and FINDMIN in t(n)
time can be transformed into a priority queue that supports
INSERT and FINDMIN in O(1)
time and DELETE in O(t(n)) time.]
 Arne Andersson and
Mikkel Thorup.
Dynamic ordered sets with exponential search trees.
J. ACM 54(3), article 13, 2007.
[Exponential trees: A sorting algorithm that runs in
nS(n) time can be used as a black box to implement a priority queue
supporting all operations in O(S(n) log log n) time.]
 Peter van Emde Boas, R. Kaas, and E. Zijlstra.
Design and implementation of an efficient priority queue.
Mathematical Systems Theory 10:99–127, 1977.
[Any priority queue that supports INSERT,
DELETE, and FINDMIN in t(n)
amortized time can be modified to support INSERT,
DELETE, FINDMIN, and
MELD in O(t(n)α(n)) amortized time, where α(n) is the
inverse Ackermann function. The new meldable priority queue is a
union-find structure whose nodes are black-box non-meldable priority
queues.]
 Michael L. Fredman and Dan E. Willard.
Surpassing the information theoretic bound with fusion trees.
J. Computer and System Sciences 47(3):424–436, 1993.
[Atomic heaps: maintain O(log² n) items in O(1) time
per operation. The main component of fusion trees and AF-heaps.]
 Michael L. Fredman and Dan E. Willard.
Transdichotomous algorithms for minimum spanning trees and shortest paths.
Proc. 31st Annual IEEE Symposium on Foundations of Computer
Science, 719–725, 1990.
[Minimum spanning trees in O(m) time via atomic heaps.
Dijkstra's shortest path algorithm in O(m + n
log n/log log n) time via atomic Fibonacci heaps.]

Ran Mendelson*,
Robert E. Tarjan,
Mikkel Thorup, and
Uri Zwick.
Melding priority queues.
ACM Transactions on Algorithms 2(4):535–556, 2006.
[Improves the analysis of van Emde Boas, Kaas, and
Zijlstra's priority queue reduction. Any priority queue that supports
INSERT, DELETE, and
FINDMIN in t(n) amortized time can be
modified to support INSERT,
FINDMIN, and MELD in O(1)
amortized time, and DELETE in O(t(n) + α(n)) amortized
time, where α(n) is the inverse Ackermann function.]

Mikkel Thorup.
Equivalence between priority queues and sorting.
J. ACM 54(6), article 28, 2007.
[A sorting algorithm that runs in nS(n) time can
be used as a black box to implement a priority queue that supports
INSERT and DELETE in O(S(n)) time and
FINDMIN in O(1) time, on an integer
RAM with multiplication. (The reverse reduction is trivial.)
The results are slightly weaker for AC⁰ RAMs.]
 Tue, Apr 5

A phylogeny of Christian churches reconstructed from memetic analysis (with the Episcopalian meme given a weight of 8)
[Lord and Price 2001]
Least common ancestors: reduction from range minimum queries, reduction to ±1 range minimum queries, induction, the four-Russians trick, heavy-light tree decomposition (again), alphabetic coding, longest common prefix (again), rake-compress trees (again)
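The reduction to ±1 range minimum queries can be sketched directly: an Euler tour of the tree turns each LCA query into a range-minimum query over a depth array whose adjacent entries differ by exactly ±1. A toy version, where a naive linear scan stands in for the four-Russians-preprocessed O(1) RMQ, and `children` is an assumed adjacency map:

```python
def euler_tour(root, children):
    # Record each node (with its depth) every time the DFS enters
    # or returns to it; depths change by exactly +-1 per step.
    tour, depths, first = [], [], {}
    def dfs(v, d):
        first.setdefault(v, len(tour))
        tour.append(v); depths.append(d)
        for c in children.get(v, []):
            dfs(c, d + 1)
            tour.append(v); depths.append(d)
    dfs(root, 0)
    return tour, depths, first

def lca(u, v, tour, depths, first):
    # LCA(u, v) is the shallowest node visited between the first
    # occurrences of u and v in the tour.
    i, j = sorted((first[u], first[v]))
    k = min(range(i, j + 1), key=depths.__getitem__)  # naive O(n) RMQ
    return tour[k]
```

With O(n)-time ±1-RMQ preprocessing (four-Russians blocks plus a sparse table over block minima), each query above becomes O(1).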
 Alfred V. Aho,
John E. Hopcroft,
and Jeffrey D. Ullman.
On finding lowest common ancestors in trees.
SIAM J. Computing 5(1):115–132, 1976.

Stephen Alstrup,
Cyril Gavoille,
Haim Kaplan, and
Theis Rauhe.
Nearest common ancestors: A survey and a new algorithm for a
distributed environment. Theory of Computing Systems
37(3):441–456, 2004.

Michael A. Bender and
Martin FarachColton.
The LCA problem revisited.
Proc. 4th Latin American Symposium on Theoretical
Informatics, 88–94. Lecture Notes in Computer
Science 1776, Springer, 2000.
 Michael A. Bender and
Martin FarachColton.
The level-ancestor problem simplified.
Theoretical Computer Science 321(1):5–12, 2004.
 Omer Berkman*,
Dany Breslauer*,
Zvi Galil,
Baruch Schieber*,
and
Uzi Vishkin. Highly parallelizable problems.
Proc. 21st Annual ACM Symposium on Theory of Computing,
309–319, 1989.
 Omer Berkman* and
Uzi Vishkin.
Recursive star-tree parallel data structure.
SIAM J. Computing 22(2):221–242, 1993.
 Johannes Fischer.
Optimal succinctness for
range minimum queries.
Proc. 9th Latin American Symposium on Theoretical
Informatics, 158–169.
Lecture Notes in Computer Science 6034, Springer, 2010.
 Johannes Fischer* and
Volker Heun.
Theoretical and practical improvements on the RMQ-problem, with
applications to LCA and LCE.
Proc. 17th Annual Symposium on Combinatorial Pattern
Matching, 36–48. Lecture Notes in Computer
Science 4009, Springer, 2006.
 Johannes Fischer and
Volker Heun.
Space-efficient preprocessing schemes for range minimum queries on static arrays.
To appear in SIAM J. Computing, 2011.
 Dov Harel*.
A linear time algorithm for the lowest common ancestors problem
(extended abstract).
Proc. 21st Annual IEEE Symposium on Foundations of Computer
Science, 308–319, 1980.
 Dov Harel* and
Robert E. Tarjan.
Fast algorithms for finding nearest common ancestors.
SIAM J. Computing 13(2):338–355, 1984.
 Harold N. Gabow,
Jon Louis Bentley, and
Robert E. Tarjan.
Scaling and related techniques for geometry problems.
Proc. 16th ACM Symposium on Theory of Computing,
135–143, 1984.
 Baruch Schieber*
and
Uzi Vishkin.
On finding lowest common ancestors: Simplification and parallelization.
SIAM J. Computing 17(6):1253–1262, 1988.