SQLite Commands and Concepts

Deepak Pant
Mar 7, 2021

TL;DR: sqlite3 can be an OK choice when the database is smaller than ~10 GB or ~100M rows and your web app or workload does not involve a lot of concurrency.

Installation

pip3 install sqlalchemy   # optional: SQLAlchemy, a Python SQL toolkit/ORM that works with SQLite
sudo apt-get install sqlite3
sudo apt install sqlitebrowser

Turn on headers

sqlite> .headers on
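This is often paired with column mode so the output lines up nicely:

sqlite> .mode column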

Create/open a database

sqlite3 db_name.db

List all tables and their schemas

.schema
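Two related dot-commands: .tables lists just the table names, and .schema accepts a table name if you only want one definition:

.tables
.schema stock_price_minute_data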

Understand your query bottleneck

The following can help you understand the bottleneck and whether indexes are actually used during query processing:

sqlite> EXPLAIN QUERY PLAN <QUERY>
sqlite> PRAGMA index_list('stock_price_minute_data');
sqlite> PRAGMA index_info('sqlite_autoindex_stock_price_minute_data_1');
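For example, a sketch of checking a query against the stock_price_minute_data table (the SELECT here is made up for illustration). In the plan output, SEARCH ... USING INDEX means an index is used; a plain SCAN means a full table scan:

sqlite> EXPLAIN QUERY PLAN SELECT * FROM stock_price_minute_data WHERE ticker = 'AAPL';
-- look for: SEARCH stock_price_minute_data USING INDEX ... (index used)
-- versus:   SCAN stock_price_minute_data                   (no index helped)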

Read a SQL script and execute it

.read <my_sql_script.sql>

Run VACUUM (compaction)

Useful when you have dropped a lot of tables or rows and want to reclaim the storage:

$ sqlite3 /path/to/db/foo.db 'VACUUM;'

Attach multiple databases to one connection (command)

ATTACH '/home/dpant/stock_trade/dbf/ticker.db' AS tickerdb;
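Once attached, tables in the second database are addressed as tickerdb.<table> and can even be joined against tables in the main database. The tickers table and symbol column below are assumptions for illustration:

SELECT p.*
  FROM main.stock_price_minute_data AS p
  JOIN tickerdb.tickers AS t ON t.symbol = p.ticker;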

Show all the attached databases

.databases

Pros of SQLite

An in-process database with no server to set up; very easy to start with.

A decent choice when the db size is < 10 GB.

Is sqlite3 right and adequate for you?

The official "Appropriate Uses For SQLite" page (https://www.sqlite.org/whentouse.html) covers this question in depth.

Some shortcomings of SQLite

Easily corruptible as the db file grows. My db got corrupted when the file size grew past 30 GB, and opening a big database file in sqlitebrowser also corrupted it. If you choose to use SQLite for large databases, please back up frequently!

You can't change a column's datatype after the table is created. The workaround is to create a new table with the desired schema and copy the data over.

You can't drop a constraint after applying it to a table; the same rebuild-and-copy workaround applies:

INSERT INTO <new_table> SELECT *  FROM <old_table>;
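A fuller sketch of that rebuild-and-copy workaround (the table and column names are made up for illustration):

BEGIN;
CREATE TABLE trades_new (ticker TEXT, ts INTEGER, price REAL);  -- new schema: changed type / without the constraint
INSERT INTO trades_new SELECT * FROM trades;
DROP TABLE trades;
ALTER TABLE trades_new RENAME TO trades;
COMMIT;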

Concurrency & write locks

Write ops lock the entire database, making it unavailable even for reads. If you are doing bulk writes for an extended period of time, your db will be unavailable for that whole period.
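Worth knowing: SQLite's write-ahead-log journal mode lets readers keep working while a single writer is active, and busy_timeout makes lock contention wait instead of failing immediately:

PRAGMA journal_mode=WAL;
PRAGMA busy_timeout=5000;  -- wait up to 5 s for locks instead of erroring right away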

Storage: I found SQLite uses more storage than Postgres, and most of the difference comes from index size. For the same schema and data, indexes took roughly 35% of the file in sqlite3 versus about 25% in Postgres.

Word of caution about composite index

If you create a composite index on (column1, column2), the order of the columns matters. Your query will only use the index if it filters on the leading column:

select * from table_name where column1 = 'x'
OR
select * from table_name where column1 = 'x' and column2 = 'y'

Queries like the following will not use the index:

select * from table_name where column2 = 'y'
OR
select * from table_name where column1 = 'x' OR column2 = 'y'
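You can verify this yourself with EXPLAIN QUERY PLAN; the table and index below are made up for illustration. SEARCH ... USING INDEX in the plan means the index is used, SCAN means it is not:

CREATE TABLE t (column1 TEXT, column2 TEXT, payload TEXT);
CREATE INDEX idx_c1_c2 ON t (column1, column2);
EXPLAIN QUERY PLAN SELECT * FROM t WHERE column1 = 'x';  -- SEARCH t USING INDEX idx_c1_c2
EXPLAIN QUERY PLAN SELECT * FROM t WHERE column2 = 'y';  -- SCAN t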

Performance

For details on sqlite3 performance there are good write-ups out there; the gist is that SQLite is not much slower than server databases, but in practice it becomes more unstable (frequent db corruption) as the database grows.

Insertions (write performance) get slower as the table grows (> 5 GB), especially once it has an index (including the primary key). A Stack Overflow answer I came across reaches the same conclusion I did.

If you want to do a bulk insert, drop all indexes first (including the primary key, which creates an implicit index) and then insert. Dropping the primary-key index itself is not allowed, so create a new table without it, copy the data over, and delete the old table, as sketched below.
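A sketch of that flow (names are made up; new_rows stands in for your bulk data source, and wrapping the inserts in one big transaction matters almost as much as dropping the indexes):

DROP INDEX IF EXISTS idx_c1_c2;                 -- secondary indexes can be dropped directly
CREATE TABLE t_bulk AS SELECT * FROM t;         -- CREATE TABLE ... AS copies columns but carries no PK or indexes
BEGIN;
INSERT INTO t_bulk SELECT * FROM new_rows;
COMMIT;
DROP TABLE t;
ALTER TABLE t_bulk RENAME TO t;
CREATE INDEX idx_c1_c2 ON t (column1, column2); -- rebuild indexes once at the end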

Use sqlite3_analyzer to see the exact storage usage of a sqlite3 database

sudo apt-get install lib32z1

Then download the sqlite3_analyzer binary from the official SQLite website.
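Usage is just pointing it at the database file; it prints a per-table and per-index breakdown of pages and bytes:

$ ./sqlite3_analyzer /path/to/db/foo.db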

SQLite3 query optimizer overview

Read the SQLite Query Optimizer Overview document (https://www.sqlite.org/optoverview.html), which gives much more insight into how queries are processed. It is an amazing read.
