

You can take it for a spin right away by deploying a new database on guepard and selecting PostgreSQL 18.

PostgreSQL 18 removes a critical limitation in multicolumn B-tree index usage. Prior versions required equality predicates on leading columns for efficient index utilization. The new skip scan capability allows PostgreSQL to traverse distinct values in omitted prefix columns, enabling index access patterns previously impossible without query restructuring.
This matters significantly for analytical workloads where column filter combinations vary across queries. Consider a three-column index on (tenant_id, status, created_at). Queries filtering only on status and created_at can now leverage this index effectively, rather than falling back to sequential scans or requiring additional single-column indexes.
```sql
CREATE INDEX idx_events_composite ON event_logs (tenant_id, status, created_at);

-- Skip scan enables index usage without a tenant_id filter
EXPLAIN (ANALYZE, BUFFERS)
SELECT status, COUNT(*) AS event_count
FROM event_logs
WHERE status IN ('failed', 'pending')
  AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY status;
```
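If you want to confirm that the composite index is actually being chosen, a quick local experiment is to discourage sequential scans and inspect the plan. This is only a rough sketch: plan shape, row counts, and the index search counter depend entirely on your data.

```sql
-- Illustrative only: plan output varies with table size and data distribution
SET enable_seqscan = off;   -- discourage seq scans for this experiment only

EXPLAIN (ANALYZE, COSTS OFF)
SELECT status, COUNT(*) AS event_count
FROM event_logs
WHERE status IN ('failed', 'pending')
  AND created_at >= NOW() - INTERVAL '7 days'
GROUP BY status;
-- Look for a scan on idx_events_composite. In PostgreSQL 18 the index scan
-- node also reports how many separate index searches were performed; a value
-- greater than one typically reflects the skip scan descending the index once
-- per distinct tenant_id value.

RESET enable_seqscan;
```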
Buffer usage statistics are now included in EXPLAIN ANALYZE output by default, removing the need for explicit BUFFERS directives. Additional metrics include index lookup counts, and verbose mode exposes CPU usage alongside WAL write statistics.
This streamlines performance analysis workflows. Every query execution plan now includes I/O statistics automatically, providing immediate visibility into cache hit ratios and disk access patterns without configuration changes.
```sql
EXPLAIN ANALYZE
SELECT u.username, COUNT(o.id) AS order_count
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE o.created_at >= '2025-10-01'
GROUP BY u.username
HAVING COUNT(o.id) > 5;

-- Output now automatically includes:
--   Buffers: shared hit=156 read=8
--   I/O Timings: shared read=0.124
-- Planning:
--   Buffers: shared hit=22 read=3
--   I/O Timings: shared read=0.089
```
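The terser pre-18 output is still available when you want it, since BUFFERS is now a default rather than a requirement. A minimal sketch (the query itself is just an example):

```sql
-- Explicitly opt out of buffer counters for a compact plan
EXPLAIN (ANALYZE, BUFFERS OFF)
SELECT COUNT(*)
FROM orders
WHERE created_at >= '2025-10-01';
```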
Generated columns in PostgreSQL 18 default to virtual computation rather than physical storage. Values are calculated on read, eliminating storage overhead and write amplification during INSERT and UPDATE operations. The STORED keyword remains available when physical materialization is required for indexing.
```sql
-- Virtual by default, computed on demand
ALTER TABLE invoices
ADD COLUMN total_with_tax DECIMAL(12,2)
GENERATED ALWAYS AS (subtotal + (subtotal * tax_rate));

-- For full-text search, physical storage required
ALTER TABLE documents
ADD COLUMN search_vector tsvector
GENERATED ALWAYS AS (to_tsvector('english', title || ' ' || content)) STORED;

CREATE INDEX idx_documents_search ON documents USING GIN(search_vector);
```
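Because virtual columns are computed at read time, they always reflect the current base-column values with no backfill or table rewrite. A small sketch, assuming invoices has an invoice_id key (that column name is illustrative):

```sql
-- No stored value to refresh: the next read recomputes total_with_tax
UPDATE invoices
SET tax_rate = 0.21
WHERE invoice_id = 42;   -- invoice_id is an assumed column name

SELECT invoice_id, subtotal, tax_rate, total_with_tax
FROM invoices
WHERE invoice_id = 42;
```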
Native UUIDv7 support provides RFC-compliant time-ordered unique identifiers. Unlike random UUIDv4 values, UUIDv7 incorporates timestamps, resulting in sequential distribution that improves B-tree index locality and reduces page splits during insertion. This translates directly to better write throughput and cache efficiency.
```sql
CREATE TABLE api_requests (
    request_id       UUID PRIMARY KEY DEFAULT uuidv7(),
    endpoint         TEXT NOT NULL,
    response_time_ms INT,
    created_at       TIMESTAMPTZ DEFAULT NOW()
);

-- Extract timestamp component for analysis
SELECT request_id,
       uuid_extract_timestamp(request_id) AS embedded_timestamp,
       created_at
FROM api_requests
WHERE created_at >= NOW() - INTERVAL '1 hour'
ORDER BY request_id;
```
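A quick way to sanity-check the generated identifiers is with the built-in UUID inspection functions:

```sql
-- Inspect a freshly generated identifier
SELECT id,
       uuid_extract_version(id)   AS version,       -- 7
       uuid_extract_timestamp(id) AS generated_at   -- embedded wall-clock time
FROM (SELECT uuidv7() AS id) AS t;
```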
The WITHOUT OVERLAPS constraint and PERIOD clause enable schema-level enforcement of temporal integrity. These features prevent overlapping time ranges in unique and primary keys, and validate temporal relationships across foreign keys. Applications modeling time-based state transitions benefit from database-enforced correctness rather than application-layer validation.
```sql
-- btree_gist supplies GiST operator classes for the scalar key column
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE server_allocations (
    server_id         INT,
    customer_id       INT,
    allocation_period tstzrange NOT NULL,
    PRIMARY KEY (server_id, allocation_period WITHOUT OVERLAPS),
    CHECK (NOT isempty(allocation_period))
);

-- Database prevents overlapping allocations
INSERT INTO server_allocations VALUES
    (101, 1001, '[2025-01-01, 2025-06-30]'),
    (101, 1002, '[2025-07-01, 2025-12-31]');  -- Success: no overlap
```
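The foreign-key side works through the PERIOD clause: a referencing row's range must be covered by the referenced allocation's period. A minimal sketch, assuming a hypothetical server_usage table:

```sql
-- Hypothetical child table: each usage record must fall inside an
-- allocation period that exists for the same server
CREATE TABLE server_usage (
    server_id    INT,
    usage_period tstzrange NOT NULL,
    cpu_hours    NUMERIC,
    FOREIGN KEY (server_id, PERIOD usage_period)
        REFERENCES server_allocations (server_id, PERIOD allocation_period)
);
```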
The RETURNING clause now exposes both pre-modification and post-modification row values through OLD and NEW table references. This eliminates the need for triggers or additional queries when capturing state changes during UPDATE, DELETE, or MERGE operations.
```sql
UPDATE subscription_tiers
SET max_api_calls = max_api_calls * 1.5,
    last_updated  = NOW()
WHERE tier_name = 'professional'
RETURNING tier_name,
          old.max_api_calls AS previous_limit,
          new.max_api_calls AS updated_limit,
          new.last_updated;

-- Output captures both states:
-- tier_name    | previous_limit | updated_limit | last_updated
-- -------------+----------------+---------------+------------------------
-- professional | 10000          | 15000         | 2025-10-20 14:32:18+00
```
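The same mechanism applies to DELETE (where new values are NULL) and to MERGE. A small sketch of a hypothetical cleanup that captures the removed rows without a trigger:

```sql
-- Hypothetical cleanup: capture what was removed in a single round trip
DELETE FROM subscription_tiers
WHERE tier_name = 'legacy'
RETURNING old.tier_name,
          old.max_api_calls AS removed_limit,
          old.last_updated  AS last_change_before_removal;
```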
PostgreSQL 18 supports OAuth 2.0 bearer token authentication, enabling integration with identity providers like Google, Azure AD, or Auth0. Connections authenticate using access tokens rather than database passwords, supporting both device authorization flows for CLI tooling and direct token validation for applications.
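On the server side this is wired up through pg_hba.conf and a token validator module. The fragment below is only a rough sketch; the issuer, scope, and validator library name are placeholders to swap for your identity provider's values.

```ini
# pg_hba.conf -- rough sketch; issuer and scope are placeholders
host  all  all  0.0.0.0/0  oauth  issuer="https://login.example.com"  scope="openid"

# postgresql.conf -- a validator module maps accepted tokens to database roles
oauth_validator_libraries = 'my_oauth_validator'
```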
Adding NOT NULL constraints to large tables no longer requires full-table validation at creation time. The NOT VALID modifier enforces the constraint on new rows immediately while deferring validation of existing data, enabling safer schema migrations on production systems.
```sql
-- Add constraint without blocking writes
ALTER TABLE telemetry_data
ADD CONSTRAINT telemetry_device_id_not_null
CHECK (device_id IS NOT NULL) NOT VALID;

-- Validate in a maintenance window; validation does not block concurrent reads or writes
BEGIN;
SET LOCAL statement_timeout = '2h';
ALTER TABLE telemetry_data VALIDATE CONSTRAINT telemetry_device_id_not_null;
COMMIT;
```

Insert-specific autovacuum thresholds provide finer control over maintenance scheduling. The autovacuum_vacuum_max_threshold parameter caps dead tuple accumulation per table, and cumulative autovacuum timing is now tracked in pg_stat_all_tables, improving visibility into maintenance overhead.
```sql
-- Increase workers for high-traffic periods
ALTER SYSTEM SET autovacuum_max_workers = 12;
SELECT pg_reload_conf();

-- Configure per-table thresholds for write-heavy tables
ALTER TABLE metrics_hourly SET (autovacuum_vacuum_max_threshold = 50000);
ALTER TABLE log_entries SET (autovacuum_vacuum_max_threshold = 100000);

-- Monitor autovacuum efficiency
SELECT schemaname,
       relname,
       n_tup_ins + n_tup_upd + n_tup_del AS total_modifications,
       autovacuum_count,
       ROUND(total_autovacuum_time::numeric / NULLIF(autovacuum_count, 0), 2) AS avg_vacuum_ms
FROM pg_stat_user_tables
WHERE autovacuum_count > 0
ORDER BY total_autovacuum_time DESC
LIMIT 10;
```

GIN indexes, commonly used for JSONB and full-text search, now support parallel construction. Multiple worker processes distribute table scanning and index building, significantly reducing reindexing time for large datasets.
```sql
-- Set parallel workers for index builds
SET max_parallel_maintenance_workers = 4;

-- GIN index creation now parallelizes automatically
CREATE INDEX CONCURRENTLY idx_products_metadata
ON products USING GIN(metadata jsonb_path_ops);

-- Monitor parallel worker utilization
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE query LIKE '%CREATE INDEX%'
  AND backend_type = 'parallel worker';
```

PostgreSQL 18 introduces asynchronous I/O for read operations, replacing the legacy synchronous model where queries blocked on individual read completions. The new system issues multiple read requests concurrently, continuing execution while awaiting results. This reduces CPU idle time and improves throughput, particularly on network-attached storage with non-trivial latency.
The io_method parameter (startup-only) controls I/O handling:
```ini
# postgresql.conf
io_method = 'worker'      # Default: background I/O worker pool
io_method = 'io_uring'    # Linux-only: kernel ring buffer interface
io_method = 'sync'        # Legacy synchronous behavior
```

Important: Our platform currently maintains io_method = 'sync' while we integrate asynchronous I/O into our storage backend. Sequential scan and prefetch optimizations for PostgreSQL 18 are actively being implemented. Performance benchmarks on guepard.run will not reflect PostgreSQL 18's asynchronous I/O improvements until this integration completes.
We're prioritizing prefetch and sequential scan corrections for the next release, with full async I/O backend integration following thereafter. Avoid using guepard.run for PostgreSQL 18 I/O performance evaluation until these updates are deployed.
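You can check which mode a given instance is actually running with a one-line query:

```sql
SHOW io_method;
-- Returns 'sync' on guepard.run today; a stock PostgreSQL 18 install
-- reports 'worker' unless overridden at server start.
```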
The enhanced pg_stat_io view complements asynchronous I/O by exposing granular I/O activity breakdown across backend types, objects, and contexts. This visibility is critical for identifying bottlenecks and understanding storage access patterns.
```sql
-- Analyze I/O patterns by operation type
SELECT backend_type,
       object,
       context,
       reads,
       writes,
       ROUND(read_time::numeric / NULLIF(reads, 0), 3)   AS avg_read_ms,
       ROUND(write_time::numeric / NULLIF(writes, 0), 3) AS avg_write_ms
FROM pg_stat_io
WHERE reads > 0 OR writes > 0
ORDER BY reads + writes DESC
LIMIT 15;

-- Expected output structure:
-- backend_type      | object   | context   | reads | writes | avg_read_ms
-- ------------------+----------+-----------+-------+--------+------------
-- client backend    | relation | normal    |  2847 |    156 |       0.342
-- autovacuum        | relation | vacuum    |  1203 |     89 |       0.287
-- background writer | relation | bulkwrite |     0 |   4521 |        NULL
```

PostgreSQL 18 represents collaborative work across the PostgreSQL community. Neon (now part of Databricks) contributed to this release through team members Heikki Linnakangas and Matthias van de Meent.
PostgreSQL 18 is available in preview on guepard.run. Production deployments should await stability assessment and complete platform integration of asynchronous I/O capabilities.
But until then, feel free to experiment with PostgreSQL 18 for free here.