If I have a static set of $n$ items in a database that are all queried with uniform probability, it makes sense to put them in a balanced binary search tree. That way any given search takes, on average, $O(\log n)$ time.
What happens, though, if the items are not queried uniformly? As a silly example, say that one item is queried 1000 times more frequently than any other item. Would it make sense to check for that item first, independently, before looking in a search tree of the other items (the so-called LenPEG method)?
For concreteness, let's say the items are queried with a likelihood that is static and drawn from a Poisson distribution. What would be the optimal way to structure the data so that the average search time is minimized?
Asked By : Hooked
Answered By : HEKTO
The optimal binary search tree is exactly what you need: given static access probabilities for the keys, it is the binary search tree that minimizes the expected search cost. Knuth's dynamic-programming algorithm constructs it in $O(n^2)$ time; a sketch is given below.

Welcome to the site!
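For illustration, here is a minimal Python sketch of the classic dynamic program for an optimal BST (the function name `optimal_bst` and the example frequencies are made up for this answer, not taken from the question). It fills a table `cost[i][j]` with the minimal expected cost of a tree over `keys[i..j]`, trying every key in the range as the root:

```python
def optimal_bst(keys, freq):
    """Minimal expected search cost of an optimal BST.

    keys -- keys in sorted order
    freq -- freq[i] is the relative access frequency of keys[i]
    Returns (expected cost, root) where root[i][j] is the index of
    the key chosen as root for the subproblem keys[i..j].
    """
    n = len(keys)
    cost = [[0.0] * n for _ in range(n)]   # cost[i][j]: best cost for keys[i..j]
    root = [[0] * n for _ in range(n)]

    # Prefix sums so that sum(freq[i..j]) == pref[j + 1] - pref[i].
    pref = [0.0]
    for f in freq:
        pref.append(pref[-1] + f)

    for i in range(n):                      # base case: single-key trees
        cost[i][i] = freq[i]
        root[i][i] = i

    for length in range(2, n + 1):          # subranges by increasing size
        for i in range(n - length + 1):
            j = i + length - 1
            weight = pref[j + 1] - pref[i]  # every key in the range pays one
            best = float("inf")             # extra comparison at the root
            for r in range(i, j + 1):       # try each key as the root
                left = cost[i][r - 1] if r > i else 0.0
                right = cost[r + 1][j] if r < j else 0.0
                if left + right + weight < best:
                    best = left + right + weight
                    root[i][j] = r
            cost[i][j] = best
    return cost[0][n - 1], root


# Example echoing the question: one key queried 1000x more often.
keys = list(range(8))
freq = [1.0] * 8
freq[5] = 1000.0
cost, root = optimal_bst(keys, freq)
print(cost, root[0][7])  # the hot key (index 5) ends up at the root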
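```

Running it on frequencies mirroring the question puts the hot key at the root, so it is found in a single comparison. This straightforward version is $O(n^3)$; Knuth's observation that the optimal root for keys $i..j$ lies between the optimal roots for $i..j-1$ and $i+1..j$ restricts the inner loop and cuts the total work to $O(n^2)$.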
Question Source : http://cs.stackexchange.com/questions/29814