Optimizing SQL Queries - PostgreSQL best practices


Optimizing SQL queries is essential for improving the performance of a database system. In PostgreSQL, like in any other relational database, the way data is structured and indexed plays a crucial role in query optimization. This article will outline the best practices for structuring data, indexing, and query optimization.

So let's start our journey with the basics! 😉


Structuring Data

The foundation of query optimization begins with how the data is structured. Here are some best practices:

  • Normalization: Ensure that your database design adheres to normalization rules up to 3NF to eliminate redundant data and ensure data integrity.
  • Appropriate Data Types: Choose the most appropriate data types for each column. For example, use INTEGER or BIGINT for large numbers instead of VARCHAR.
  • Use of Constraints: Apply constraints such as NOT NULL, UNIQUE, and CHECK to enforce data integrity and improve query performance.
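As a small sketch of these points, here is a hypothetical table (the payments table and its columns are illustrative, not part of the schema used later in this article) that combines appropriate data types with constraints:

```sql
-- Hypothetical example: precise types plus constraints that guard integrity
CREATE TABLE payments (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    order_id INT NOT NULL,
    amount NUMERIC(10, 2) NOT NULL CHECK (amount > 0),  -- exact numeric, not VARCHAR
    currency CHAR(3) NOT NULL,                          -- fixed-width ISO 4217 code
    paid_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```

NUMERIC is generally preferred over MONEY for amounts, since MONEY's output formatting depends on the locale settings of the server.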


Indexing Best Practices

Indexes are critical for improving the performance of read operations. Here are some guidelines for effective indexing:

  • Primary Key Index: Always define a primary key for each table; PostgreSQL automatically creates a unique index for it.
  • Index Columns Used in WHERE Clauses: Create indexes on columns that are frequently used in WHERE clauses to speed up query execution.
  • Partial Indexes: Use partial indexes when you only need to index a subset of rows, such as CREATE INDEX ON orders (order_date) WHERE status = 'SHIPPED';.
  • Composite Indexes: For queries that filter on multiple columns, consider using composite indexes.
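The last two points can be sketched like this, assuming an orders table shaped like the one used in the examples later in this article:

```sql
-- Partial index: only SHIPPED rows are indexed, keeping the index small
CREATE INDEX idx_orders_shipped_date ON orders (order_date)
    WHERE status = 'SHIPPED';

-- Composite index for queries filtering on both customer_id and order_date;
-- column order matters: putting customer_id first lets the same index also
-- serve queries that filter on customer_id alone
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);
```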

Avoiding Indexing Pitfalls

While indexes can improve performance, they can also lead to pitfalls if not used wisely:

  • Over-Indexing: Creating too many indexes can slow down write operations, as each index needs to be updated on INSERT, UPDATE, or DELETE.
  • Unused Indexes: Periodically review and remove indexes that are not used by the query planner.
  • Index Maintenance: Rebuild indexes periodically to deal with bloat using REINDEX.
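A sketch of how to put the last two points into practice using PostgreSQL's statistics views (idx_scan counts how often the planner has used each index since statistics were last reset, so check it against a representative workload period):

```sql
-- List indexes the planner has never used in this database
SELECT schemaname, relname AS table_name, indexrelname AS index_name
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname;

-- Rebuild a bloated index without blocking writes (PostgreSQL 12+)
REINDEX INDEX CONCURRENTLY idx_order_date;
```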

Indexing with Joined Tables

When dealing with joined tables, indexing becomes even more crucial:

  • Foreign Key Indexes: Index foreign key columns to speed up joins and enforce referential integrity.
  • Indexing Join Columns: Index columns that are used in JOIN conditions to improve join performance.


Here’s an example of creating an index on a join column:

-- Assuming we have two tables, orders and customers, joined on customer_id
CREATE INDEX idx_customer_id ON orders (customer_id);

-- A query that benefits from the above index
SELECT orders.*
FROM orders
JOIN customers ON orders.customer_id = customers.id
WHERE customers.last_name = 'Smith';

In this example, the index idx_customer_id helps to quickly locate orders for customers with the last name ‘Smith’, making the join operation more efficient.

This section has laid the groundwork for structuring data and indexing in PostgreSQL. In the following sections, we will delve into using EXPLAIN ANALYZE to further optimize queries and table structures.

Using EXPLAIN ANALYZE for Query Optimization

The EXPLAIN ANALYZE command in PostgreSQL is a powerful tool for understanding how the query planner processes a query and identifying areas for optimization. Let’s explore how to use it effectively.


When you prepend EXPLAIN ANALYZE to a query, PostgreSQL provides a detailed execution plan along with actual runtime statistics. Here’s an example:

EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31';

The output will show the query plan, estimated costs, and actual execution time for each step. Analyzing this output helps you identify bottlenecks and areas for improvement.
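One caveat worth noting: EXPLAIN ANALYZE actually executes the statement, so for data-modifying queries you can wrap it in a transaction and roll back (the DELETE below is just an illustration):

```sql
BEGIN;
EXPLAIN ANALYZE
DELETE FROM orders WHERE order_date < '2020-01-01';  -- really runs, but...
ROLLBACK;                                            -- ...the changes are discarded
```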

Example output

EXPLAIN ANALYZE
SELECT o.order_id, c.first_name, c.last_name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.order_date BETWEEN '2024-01-01' AND '2024-03-31';


Hash Join  (cost=100.00..200.00 rows=1000 width=64) (actual time=5.123..10.456 rows=500 loops=1)
  Hash Cond: (o.customer_id = c.id)
  ->  Seq Scan on orders o  (cost=0.00..50.00 rows=1000 width=32) (actual time=0.012..2.345 rows=1000 loops=1)
        Filter: ((order_date >= '2024-01-01'::date) AND (order_date <= '2024-03-31'::date))
        Rows Removed by Filter: 5000
  ->  Hash  (cost=50.00..50.00 rows=1000 width=32) (actual time=5.100..5.100 rows=1000 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 64kB
        ->  Seq Scan on customers c  (cost=0.00..50.00 rows=1000 width=32) (actual time=0.010..2.345 rows=1000 loops=1)
Planning Time: 0.123 ms
Execution Time: 10.789 ms


  • The query involves a hash join between the orders and customers tables.
  • The estimated total cost is about 200 units.
  • The actual execution time is approximately 10.8 milliseconds.
  • The filter condition on order_date is applied during the scan.
  • The query planner chose a hash join strategy.

Query Optimization Examples

Let’s consider two scenarios: a simple query and a more complex one involving multiple joins and filters.

  1. Simple Query Optimization: Suppose we have an orders table with an index on the order_date column. We want to retrieve orders placed between January 1, 2024, and March 31, 2024. The initial query might look like this:
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31';

Using EXPLAIN ANALYZE, we can verify that the index is being used efficiently. If not, we might need to reindex or consider other optimizations.

  2. Complex Query Optimization: Imagine a more intricate scenario involving four joined tables: orders, customers, products, and order_items. We want to retrieve details of orders placed by customers with the last name ‘Smith’ for a specific product category. The query might look like this:
SELECT o.order_id, c.first_name, c.last_name, p.product_name
FROM orders o
JOIN customers c ON o.customer_id = c.id
JOIN order_items oi ON o.order_id = oi.order_id
JOIN products p ON oi.product_id = p.id
WHERE c.last_name = 'Smith'
  AND p.category = 'Electronics';

Using EXPLAIN ANALYZE, we can analyze the execution plan, check for sequential scans, and ensure that indexes are utilized efficiently. Adjusting indexes, rewriting the query, or denormalizing data may be necessary for optimal performance.
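Assuming the tables are shaped like those used elsewhere in this article, the supporting indexes for this query might be sketched as follows (the index names are illustrative):

```sql
-- Indexes supporting the filters and joins in the query above
CREATE INDEX idx_customers_last_name  ON customers (last_name);
CREATE INDEX idx_products_category    ON products (category);
CREATE INDEX idx_order_items_order    ON order_items (order_id);
CREATE INDEX idx_order_items_product  ON order_items (product_id);
```

Whether the planner actually uses each index depends on table sizes and statistics, which is exactly what EXPLAIN ANALYZE lets you verify.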

Practical Optimization

Consider the following steps when optimizing queries:

  • Review Execution Plans: Regularly analyze execution plans to identify inefficiencies.
  • Index Maintenance: Rebuild indexes periodically to prevent bloat.
  • Analyze Real Data: Use real data and representative workloads for accurate optimization.
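For the first point, the extended form of EXPLAIN is often helpful; the options below are standard PostgreSQL syntax:

```sql
-- BUFFERS shows shared-buffer hits vs. disk reads; VERBOSE adds output columns
EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT *
FROM orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31';
```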

Understanding and Utilizing EXPLAIN ANALYZE Output

The EXPLAIN ANALYZE command in PostgreSQL provides valuable insights into how the query planner processes a query. By analyzing its output, you can identify bottlenecks, optimize query performance, and make informed decisions. Let’s dive into the details of reading and interpreting the EXPLAIN ANALYZE output.

Anatomy of EXPLAIN ANALYZE Output

When you execute a query with EXPLAIN ANALYZE, PostgreSQL generates an execution plan that consists of several components:

  1. Plan Tree: The execution plan is represented as a tree structure. Each node in the tree corresponds to an operation (e.g., scan, join, filter) performed during query execution.
  2. Cost Estimates: PostgreSQL estimates the cost of each operation based on factors like I/O, CPU, and memory usage.
  3. Actual Execution Times: The actual time spent executing each operation during query execution.
  4. Statistics: Information about the number of rows processed, memory usage, and other relevant metrics.

Common Output Columns

Let’s focus on some key columns you’ll encounter in the EXPLAIN ANALYZE output:

  • Node Type: Describes the type of operation (e.g., Seq Scan, Index Scan, Hash Join).
  • Relation Name: The name of the table or index being accessed.
  • Startup Cost: The estimated cost before starting the operation.
  • Total Cost: The total estimated cost for the entire query.
  • Actual Rows: The actual number of rows processed.
  • Actual Time: The actual time spent executing the operation.

Interpreting the Output

  1. Sequential Scan (Seq Scan):
    • Indicates a full table scan.
    • Check if an index can be used instead.
    • Consider adding an index on frequently filtered columns.
    • Look for high Seq Scan costs.
  2. Index Scan (Index Only Scan):
    • Indicates an index lookup.
    • Verify that the index is being used efficiently.
    • Check for index bloat and fragmentation.
  3. Join Strategies (Nested Loop, Hash Join, Merge Join):
    • Understand the join type used.
    • Nested Loop: Good for small tables.
    • Hash Join: Suitable for large tables.
    • Merge Join: Requires sorted input.
  4. Filter Conditions (Filter):
    • Identify expensive filter conditions.
    • Optimize WHERE clauses and JOIN conditions.
    • Use appropriate indexes.
  5. Sort Operations (Sort):
    • Look for sorting steps in the plan.
    • Consider adding indexes or adjusting query order.
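As a sketch of the last point, an index that matches the ORDER BY can let the planner skip the Sort node entirely (table and column names assumed from the earlier examples):

```sql
-- Without an index on order_date, this query needs an explicit Sort step
CREATE INDEX idx_orders_date_desc ON orders (order_date DESC);

-- The planner can now return the ten newest orders straight from the index
EXPLAIN ANALYZE
SELECT *
FROM orders
ORDER BY order_date DESC
LIMIT 10;
```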

Using the Output for Optimization

  1. Identify Costly Nodes:
    • Focus on nodes with high startup or total costs.
    • Optimize these operations first.
  2. Check Index Usage:
    • Ensure indexes are being utilized.
    • Verify that the most selective index is chosen.
  3. Analyze Actual Times:
    • Compare actual execution times to estimates.
    • Investigate significant discrepancies.
  4. Experiment with Changes:
    • Rewrite queries, adjust indexes, or denormalize data.
    • Rerun EXPLAIN ANALYZE to validate improvements.

Example 1: Worst Possible EXPLAIN ANALYZE Output

Consider the following query:

EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31';

Output (Worst Case):

Seq Scan on orders  (cost=0.00..15629.12 rows=230412 width=422) (actual time=13.024..13.024 rows=1000 loops=1)
  Filter: ((order_date >= '2024-01-01'::date) AND (order_date <= '2024-03-31'::date))
  Rows Removed by Filter: 5000
Planning Time: 0.123 ms
Execution Time: 13.024 ms


  • The query performs a sequential scan on the entire orders table.
  • The estimated cost is high (15629.12).
  • The actual execution time is 13.024 milliseconds.
  • The filter condition on order_date is applied during the scan.
  • This is inefficient due to the lack of an appropriate index.

Example 2: Best Optimized EXPLAIN ANALYZE Output

After creating an index on the order_date column:

CREATE INDEX idx_order_date ON orders (order_date);

Now let’s re-run the query:

EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31';

Output (Best Case):

Index Scan using idx_order_date on orders  (cost=0.14..8.16 rows=1000 width=422) (actual time=0.012..0.012 rows=1000 loops=1)
  Index Cond: ((order_date >= '2024-01-01'::date) AND (order_date <= '2024-03-31'::date))
Planning Time: 0.123 ms
Execution Time: 0.012 ms


  • The query now performs an index scan using idx_order_date.
  • The estimated cost is low (8.16).
  • The actual execution time is significantly faster (0.012 ms).
  • The index condition directly filters the relevant rows.

By optimizing the query with an appropriate index, we achieved a substantial improvement in execution time.

Mastering the art of reading EXPLAIN ANALYZE output empowers you to fine-tune your queries, optimize execution plans, and achieve better database performance. Regularly analyze query plans, experiment with changes, and keep learning from the output to become an effective query optimizer.

Refactoring a Poorly Optimized Table Structure

In this section, we’ll examine a poorly optimized table structure and the corresponding queries. We’ll then apply the knowledge from the previous sections to refactor the table structure, use indexes effectively, and optimize the queries with EXPLAIN ANALYZE.

Identifying the Issues

Consider a table order_details that has been poorly designed with non-normalized data, no primary key, and no indexes:

CREATE TABLE order_details (
    order_id INT,
    product_id INT,
    product_name VARCHAR(255),
    quantity INT,
    unit_price MONEY,
    customer_id INT,
    customer_name VARCHAR(255),
    order_date DATE
);

The table contains redundant data (product_name, customer_name), and there’s no clear primary key. Queries on this table are slow, especially those involving joins and filters.

Example of a Bad Query

Here’s an example of a query that performs poorly due to the table’s structure:

SELECT customer_name, SUM(quantity * unit_price) AS total_spent
FROM order_details
WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31'
GROUP BY customer_name;

Without proper indexing, this query results in a full table scan, which is inefficient.

Applying Best Practices and Indexing

To improve the structure, we normalize the table and create appropriate indexes:

-- Normalized tables
CREATE TABLE customers (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255)
);

CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255),
    price MONEY
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INT REFERENCES customers(id),
    order_date DATE
);

CREATE TABLE order_items (
    order_id INT REFERENCES orders(id),
    product_id INT REFERENCES products(id),
    quantity INT,
    PRIMARY KEY (order_id, product_id)
);

-- Indexes
CREATE INDEX idx_order_date ON orders (order_date);
CREATE INDEX idx_customer_id ON orders (customer_id);

Optimizing the Query

Now, let’s rewrite the original query to utilize the new structure and indexes:

SELECT c.name, SUM(oi.quantity * p.price) AS total_spent
FROM customers c
JOIN orders o ON c.id = o.customer_id
JOIN order_items oi ON o.id = oi.order_id
JOIN products p ON oi.product_id = p.id
WHERE o.order_date BETWEEN '2024-01-01' AND '2024-03-31'
GROUP BY c.name;

Using EXPLAIN ANALYZE, we can confirm that the query now uses the indexes efficiently, resulting in a significant performance improvement.

By refactoring the table structure and applying indexing best practices, we’ve transformed a poorly optimized setup into one that supports efficient queries. This demonstrates the importance of thoughtful database design and the power of tools like EXPLAIN ANALYZE in optimizing SQL queries.

In the next section, we’ll discuss how to approach designing a database schema with optimization in mind from the start.

Designing a Database Schema with Performance Optimization in Mind

When starting the design of a database schema, it’s crucial to consider performance optimization from the outset. This proactive approach can save significant time and resources later on. Here are key considerations for designing an optimized database schema:

Understand the Data and its Usage

  • Data Modeling: Begin with a clear model of the data entities, their relationships, and the business rules that govern them.
  • Workload Analysis: Understand the types of queries that will be run against the database, their frequency, and their performance requirements.

Apply Normalization Prudently

  • Normalization: Apply normalization principles to eliminate redundancy and ensure data integrity, but also consider where denormalization might be beneficial for performance.
  • Balance: Find the right balance between normalization for data integrity and denormalization for query efficiency.

Indexing Strategy

  • Selective Indexing: Plan your indexes based on the queries you expect to run. Not every column needs an index.
  • Index Types: Understand the different types of indexes (B-tree, hash, GIN, GiST) and choose the one that fits the use case.
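A brief sketch of when the non-default index types apply (the documents table is hypothetical):

```sql
-- B-tree (the default): equality and range comparisons
CREATE INDEX idx_orders_by_date ON orders (order_date);

-- Hash: equality-only lookups
CREATE INDEX idx_orders_status_hash ON orders USING HASH (status);

-- GIN: containment queries on composite values such as jsonb or full-text search
CREATE INDEX idx_documents_body ON documents
    USING GIN (to_tsvector('english', body));
```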

Design for Scalability

  • Scalability: Design tables and relationships that can scale with the growth of data.
  • Partitioning: Consider partitioning large tables to improve manageability and performance.
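Declarative range partitioning, available since PostgreSQL 10, can be sketched like this (the table and partition names are illustrative):

```sql
-- Parent table partitioned by order date
CREATE TABLE orders_partitioned (
    id BIGSERIAL,
    customer_id INT NOT NULL,
    order_date DATE NOT NULL
) PARTITION BY RANGE (order_date);

-- One partition per quarter; queries filtering on order_date
-- only touch the relevant partition (partition pruning)
CREATE TABLE orders_2024_q1 PARTITION OF orders_partitioned
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');
```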

Anticipate Future Changes

  • Flexibility: Allow for flexibility in your schema design to accommodate future changes without significant rework.
  • Metadata: Store metadata effectively to aid in future optimizations and maintenance.

Use Constraints and Relationships Wisely

  • Constraints: Implement constraints such as PRIMARY KEY, FOREIGN KEY, UNIQUE, and CHECK to enforce data integrity.
  • Relationships: Define clear relationships between tables using foreign keys to ensure referential integrity and optimize joins.


Example of creating a well-structured database schema considering the above points:

-- Products table with a primary key and price index
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    category VARCHAR(255) NOT NULL,
    price NUMERIC(10, 2)
);
CREATE INDEX idx_products_price ON products (price);

-- Customers table with a primary key
CREATE TABLE customers (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

-- Orders table with foreign keys and an index on the order_date
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_id INT REFERENCES customers(id),
    order_date DATE NOT NULL,
    status VARCHAR(50)
);
CREATE INDEX idx_orders_order_date ON orders (order_date);

-- Order items table with a composite primary key
CREATE TABLE order_items (
    order_id INT REFERENCES orders(id),
    product_id INT REFERENCES products(id),
    quantity INT NOT NULL,
    PRIMARY KEY (order_id, product_id)
);

In this schema, we’ve considered the need for efficient querying by indexing critical columns and ensuring that relationships are properly defined with foreign keys. The structure is normalized, but with an eye towards the types of queries that will be run, ensuring that performance is a key consideration.

Designing a database schema with optimization in mind involves understanding the data, applying normalization judiciously, planning an indexing strategy, designing for scalability, anticipating future changes, and using constraints and relationships wisely. By considering these factors from the beginning, you can create a robust, efficient, and scalable database schema.



Conclusion

As we wrap up our exploration of SQL optimization, it’s clear that the path to swift and efficient database queries is both an art and a science. We’ve delved into the intricacies of data structuring, the strategic use of indexes, and the insightful depths of EXPLAIN ANALYZE. These are more than just techniques; they’re the tools that empower us to craft queries that not only solve problems but do so with optimal performance.

Optimization is not a one-off task but a continuous process of refinement and learning. As databases grow and requirements evolve, so too must our approach to managing and querying them. Staying informed and adaptable is key to keeping our databases running smoothly.

Whether you’re a seasoned SQL pro or just starting out, remember that every query is an opportunity to apply these principles. By doing so, you’ll ensure that your databases are not just operational, but are performing at their best.

Let’s carry forward the lessons learned here, applying them to our future work with the confidence that comes from a solid foundation in SQL optimization.
