

Add a TABLESAMPLE clause to SELECT statements that allows
user to specify random BERNOULLI sampling or block level
SYSTEM sampling. Implementation allows for extensible
sampling functions to be written, using a standard API.
Basic version follows SQLStandard exactly. Usable
concrete use cases for the sampling API follow in later patches.
Getting a random sample of a table looks potentially interesting, but how does it work?
Let's make some random table:
create table test (
    id serial primary key,
    some_timestamp timestamptz,
    some_text text
);
insert into test (some_timestamp, some_text)
    select
        now() - random() * '1 year'::interval,
        'depesz #' || i
    from
        generate_series(1,100000) i;
INSERT 0 100000
The table is around 6MB:
                   List of relations
 Schema | Name | Type  | Owner  |  Size   | Description 
 public | test | table | depesz | 5920 kB | 
(1 row)
Tablesample has two modes: SYSTEM and BERNOULLI.
Before we go any further, we need to know how large the table is, in pages:
select relpages from pg_class where relname = 'test';
 relpages 
----------
      736
(1 row)
OK. So, we have 736 pages and 100,000 rows, which means that on average a single page holds ~136 rows.
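The arithmetic above can be sketched quickly. This is just a check of the numbers from the post (736 pages, 100,000 rows, a 10-row target); the percentage is what TABLESAMPLE expects as its argument:

```python
# Numbers taken from the post: table size in pages and rows.
total_rows = 100_000
total_pages = 736

# Average rows per 8 kB page.
rows_per_page = total_rows / total_pages
print(f"average rows per page: {rows_per_page:.1f}")  # ~135.9

# TABLESAMPLE takes a percentage, so 10 rows out of 100,000 is:
target_rows = 10
pct = target_rows / total_rows * 100
print(f"sample percentage for ~{target_rows} rows: {pct}")  # 0.01
```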
Let's say we'd like to get just 10 rows. 10 rows out of 100,000 means we want 0.0001 of the table, i.e. 0.01%, so:
explain analyze select * from test tablesample system ( 0.01 );
                                                 QUERY PLAN                                                  
 Sample Scan (system) on test  (cost=0.00..0.08 rows=8 width=44) (actual time=0.016..0.021 rows=136 loops=1)
 Planning time: 0.102 ms
 Execution time: 0.045 ms
(3 rows)
That's too much: 136 rows instead of 10. Why is that?
Well, the SYSTEM TABLESAMPLE method randomly picks a single page and returns all rows from it. This means it will be fast (pick a random value from 1-736, load that page (8kB), return all rows from it).
But it can't return less data than a single page.
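The page-level behavior can be modeled with a small toy sketch (this is not PostgreSQL's actual implementation, just an illustration of the granularity): each page is independently included with the given probability, and an included page contributes every one of its rows, so the sample size is always a multiple of the page size.

```python
import random

def system_sample(pages, pct, seed=None):
    """Toy model of SYSTEM sampling: each page is chosen with
    probability pct/100, and every row on a chosen page is returned."""
    rng = random.Random(seed)
    out = []
    for page in pages:
        if rng.random() < pct / 100:
            out.extend(page)  # whole page or nothing
    return out

# 736 pages of 136 rows each, roughly like the example table.
pages = [[(p, r) for r in range(136)] for p in range(736)]
sample = system_sample(pages, 0.01, seed=1)

# The sample size is always a whole number of pages' worth of rows.
print(len(sample) % 136 == 0)  # True
```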
Of course, we can then use secondary randomization:
explain analyze with x as (select * from test tablesample system ( 0.01 ))
select * from x order by random() limit 10;
                                                     QUERY PLAN                                                      
 Limit  (cost=0.39..0.41 rows=8 width=44) (actual time=0.088..0.090 rows=10 loops=1)
   CTE x
     ->  Sample Scan (system) on test  (cost=0.00..0.08 rows=8 width=44) (actual time=0.015..0.027 rows=136 loops=1)
   ->  Sort  (cost=0.31..0.33 rows=8 width=44) (actual time=0.087..0.087 rows=10 loops=1)
         Sort Key: (random())
         Sort Method: top-N heapsort  Memory: 25kB
         ->  CTE Scan on x  (cost=0.00..0.19 rows=8 width=44) (actual time=0.017..0.048 rows=136 loops=1)
 Planning time: 0.136 ms
 Execution time: 0.115 ms
(9 rows)
Usually using “order by random()” is slow, but here we're ordering only 136 rows, so it's fast enough.
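The same two-step trick can be sketched outside the database: oversample at page granularity, then pick exactly N rows at random from the oversampled set (here `random.sample` stands in for `ORDER BY random() LIMIT n`):

```python
import random

def pick_exactly(oversampled_rows, n, seed=None):
    """Second stage of the trick: take exactly n rows at random from
    a too-large sample, like ORDER BY random() LIMIT n does in SQL."""
    rng = random.Random(seed)
    return rng.sample(oversampled_rows, n)

page = list(range(136))            # one whole page from SYSTEM sampling
ten = pick_exactly(page, 10, seed=42)
print(len(ten))  # 10
```

This stays cheap precisely because the second stage only ever sorts one page's worth of rows, not the whole table.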
There is a second method – BERNOULLI – that can return a smaller number of rows:
explain analyze select * from test tablesample bernoulli ( 0.01 );
                                                   QUERY PLAN                                                    
 Sample Scan (bernoulli) on test  (cost=0.00..736.08 rows=8 width=44) (actual time=0.465..2.742 rows=10 loops=1)
 Planning time: 0.107 ms
 Execution time: 2.758 ms
(3 rows)
Looks great – the number of rows is what I wanted (it will not always be exactly the given percentage, as it's random). But notice what happens when I add more data:
insert into test (some_timestamp, some_text)
    select
        now() - random() * '1 year'::interval,
        'depesz #' || i
    from
        generate_series(100001, 1000000) i;
INSERT 0 900000
The table is now roughly 10 times larger. Times:
explain analyze select * from test tablesample system ( 0.001 );
                                                  QUERY PLAN                                                  
 Sample Scan (system) on test  (cost=0.00..0.10 rows=10 width=25) (actual time=0.029..0.042 rows=136 loops=1)
 Planning time: 0.125 ms
 Execution time: 0.061 ms
(3 rows)
explain analyze select * from test tablesample bernoulli ( 0.001 );
                                                     QUERY PLAN                                                     
 Sample Scan (bernoulli) on test  (cost=0.00..7353.10 rows=10 width=25) (actual time=2.779..27.288 rows=15 loops=1)
 Planning time: 0.112 ms
 Execution time: 27.312 ms
(3 rows)
Time for the SYSTEM tablesample is more or less the same. But in the case of BERNOULLI it's 10x longer. Why? Basically, BERNOULLI has to seq scan the whole table, picking rows using some math so that roughly the requested number of rows is returned.
This means that while it is more precise, and gives exactly the same chance to each row, it is slower.
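A toy model makes the cost difference visible (again, a sketch of the idea, not PostgreSQL's sampler): Bernoulli sampling has to visit every row and keep each one independently with probability p, so its work grows with the table, while the number of rows returned only hovers around p·n:

```python
import random

def bernoulli_sample(rows, pct, seed=None):
    """Toy model of BERNOULLI sampling: every row must be visited
    (hence the sequential scan), and each row is kept independently
    with probability pct/100."""
    rng = random.Random(seed)
    return [r for r in rows if rng.random() < pct / 100]

rows = range(1_000_000)
kept = bernoulli_sample(rows, 0.001, seed=7)
# Expected count is 0.001% of 1,000,000 = 10 rows; the actual
# number varies from run to run, just like in the EXPLAIN output.
print(len(kept))
```

Doubling the table doubles the rows visited, which matches the ~10x execution time seen above after the table grew ~10x.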
In most cases, I think, SYSTEM sampling will be best, but it has to be understood that it works at the page level.
You also have to remember that TABLESAMPLE is applied before any WHERE conditions, so this query:
explain analyze select * from test TABLESAMPLE SYSTEM ( 1 ) where id < 10;
will usually not return any rows – it first picks 1% of the pages, and only then filters them by id < 10. We would have to randomly pick page number 1 (among others) for the filter to find any matching rows. All in all, I think it is useful. Thanks.
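The "sample first, filter second" order can be quantified with a quick simulation. Assuming the rows with id < 10 all sit on the first page (true for a freshly loaded serial table), the query only finds matches when SYSTEM(1) happens to include that one page, i.e. about 1% of the time:

```python
import random

def hit_rate(pct, trials=10_000, seed=3):
    """Estimate how often SYSTEM(pct) followed by 'id < 10' returns
    anything: it requires the one page holding ids 1..9 to be sampled,
    which happens with probability pct/100 per query."""
    rng = random.Random(seed)
    hits = sum(rng.random() < pct / 100 for _ in range(trials))
    return hits / trials

print(hit_rate(1))  # close to 0.01
```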

