Authors:
Kamel Aouiche and Daniel Lemire
Affiliation:
LICEF, University of Quebec at Montreal, Canada
Keyword(s):
Probabilistic estimation, skewed distributions, sampling, hashing.
Related Ontology Subjects/Areas/Topics:
Data Warehouses and OLAP; Databases and Information Systems Integration; Enterprise Information Systems
Abstract:
Even if storage were infinite, a data warehouse could not materialize all possible views due to the running time and update requirements. Therefore, it is necessary to estimate quickly, accurately, and reliably the size of views. Many available techniques make particular statistical assumptions, and their error can be quite large. Unassuming techniques exist, but they typically assume independent hashing, for which there is no known practical implementation. We adapt an unassuming estimator due to Gibbons and Tirthapura: its theoretical bounds do not make impractical assumptions. We compare this technique experimentally with stochastic probabilistic counting, LOGLOG probabilistic counting, and multifractal statistical models. Our experiments show that we can reliably and accurately (within 10%, 19 times out of 20) estimate view sizes over large data sets (1.5 GB) within minutes, using almost no memory. However, only GIBBONS-TIRTHAPURA provides universally tight estimates irrespective of the size of the view. For large views, probabilistic counting has a small edge in accuracy, whereas the competitive sampling-based method (multifractal) we tested is an order of magnitude faster but can sometimes provide poor estimates (relative error of 100%). In our tests, LOGLOG probabilistic counting is not competitive. Experimental validation on the US Census 1990 data set and on the Transaction Processing Performance Council (TPC-H) data set is provided.
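To illustrate the kind of estimator the abstract refers to, the following is a minimal sketch of a Gibbons-Tirthapura-style adaptive-sampling distinct-count estimator. It is not the authors' implementation: the function name, the memory_budget and hash_bits parameters, and the simple randomized hash are illustrative assumptions; the paper's analysis relies on k-wise independent hash families rather than the ad hoc hashing used here.

```python
import random

def gibbons_tirthapura_estimate(stream, memory_budget=1024, hash_bits=32, seed=0):
    """Hedged sketch of a Gibbons-Tirthapura-style distinct-count estimator.

    Keep a buffer of items whose hash value has at least `level` trailing
    zero bits; when the buffer exceeds the memory budget, raise the level
    and prune. The estimate is |buffer| * 2**level.
    """
    rng = random.Random(seed)
    mask = (1 << hash_bits) - 1
    # Illustrative randomized hash; the paper assumes k-wise independent hashing.
    a = rng.randrange(1, 1 << hash_bits, 2)  # odd multiplier
    b = rng.randrange(1 << hash_bits)

    def h(x):
        return (a * hash(x) + b) & mask

    def trailing_zeros(v):
        if v == 0:
            return hash_bits
        return (v & -v).bit_length() - 1

    level = 0
    buffer = {}  # item -> number of trailing zeros of its hash value
    for item in stream:
        t = trailing_zeros(h(item))
        if t >= level:
            buffer[item] = t
            # On overflow, keep only items whose hash has more trailing zeros.
            while len(buffer) > memory_budget:
                level += 1
                buffer = {x: z for x, z in buffer.items() if z >= level}
    return len(buffer) * (2 ** level)

# Example: estimate the number of distinct values in a skewed synthetic stream.
if __name__ == "__main__":
    data = [i % 50_000 for i in range(1_000_000)]
    print(gibbons_tirthapura_estimate(data))  # roughly 50,000
```

Applied to view-size estimation, the "items" would be the tuples (or group-by keys) of a candidate view, so the distinct count approximates the number of rows the materialized view would contain.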