At ColoClue we need a database that registers the resources members of the organization use. For example, we need to register which port on a network switch is in use by which member. The same goes for power bar ports, rack space, access cards, cross-connect links in patch panels, and more.
Years ago, the contents of this database often conflicted with the real-world situation. ColoClue regularly suffered from conflicting records with overlapping time ranges for a specific switch port or power bar port. This cannot happen in the real world: only one cable can physically go into one switch port. These overlapping records meant that someone had to go to the data centre to check what the real situation was at that moment. This is far from ideal.
Also, because invoicing the use of resources happens only once a month, ColoClue has to keep a history of mutations to the database to invoice correctly. For example, when someone starts using a new power bar port on the 2nd of the month and stops using it before the invoices for that month are created, ColoClue still has to charge for it. So the database needs to keep track of history.
This is like the classic book lending problem for libraries, which is a more generic example. When someone borrows a book from a library, the library needs to register which person has the book. Moreover, only one person can borrow that specific book for a given period of time. And the library also needs to keep a history of these interactions with the book.
This can be done in a relational database with a table design like this:
CREATE TABLE book_lending (
id serial4 NOT NULL
, person_id int4 NOT NULL
, book_id int4 NOT NULL
, start_datetime timestamptz NOT NULL
, end_datetime timestamptz NULL
);
With this table the combination of person and book can be registered. For each lending action the start date and time are registered, and when the book is returned, that date and time are registered as the end. When the end_datetime column is empty, the person is still lending the book.
This is a simple table design for registering the lending process. What it does not do, however, is prevent the software that uses this table from inserting overlapping time ranges. So it is still possible to end up with records whose ranges overlap. The obvious solution is to add constraints that prevent this. This general problem occurs in a lot of software development projects.
Almost every database offers check constraints to prevent this. However, these constraints can become quite complex for situations like this, especially when taking into account the removal_dt column for soft deletes described in my article about tracking history in a database. They are also difficult to debug. That is why in many software projects this kind of constraint checking is written in the middleware layer, especially when the software has to be database independent.
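For illustration, with the first table design such an application-level check typically boils down to a query like this (a sketch; the parameter placeholders are hypothetical):

```sql
-- Sketch: overlap test an application would run before inserting a new
-- lending of :book_id over [:new_start, :new_end). A NULL end_datetime
-- means "still lent out", so it is treated as infinity.
SELECT count(*)
FROM book_lending
WHERE book_id = :book_id
  AND start_datetime < COALESCE(:new_end, 'infinity'::timestamptz)
  AND COALESCE(end_datetime, 'infinity'::timestamptz) > :new_start;
-- A count above zero means the requested period overlaps an existing record.
```

Note that a plain CHECK constraint cannot express this inside the database, because a CHECK constraint can only see the row being checked, not the other rows in the table; without exclusion constraints you would need a trigger.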
At ColoClue, however, the database was accessed from different sources. Regardless of that, the best solution is to implement the check constraints directly in the database itself. This ensures that overlapping time ranges are impossible, no matter from which source the database is modified.
PostgreSQL has a very elegant solution for this: the tstzrange column type in combination with a GiST-based exclusion constraint.
GiST index support is built into PostgreSQL and is often used in geographical databases. The btree_gist extension, available in almost any PostgreSQL installation, adds GiST support for ordinary data types such as integers. It can be enabled by executing the following once on the current database:
CREATE extension btree_gist;
Using this, the table design looks like this:
CREATE TABLE book_lending (
id serial4 NOT NULL
, person_id int4 NOT NULL
, book_id int4 NOT NULL
, during tstzrange NOT NULL
, removal_dt timestamptz NULL
);
ALTER TABLE book_lending
ADD CONSTRAINT gist_book_lending EXCLUDE
USING GIST (
book_id WITH =
, during WITH &&)
WHERE (removal_dt IS NULL)
;
The added constraint is only checked when the removal_dt column is empty: soft-deleted data does not need to be checked, which also keeps the index smaller and faster. Furthermore, the operators determine how each column is compared. For the book_id column this is simple: the = operator checks whether two rows refer to the same book. The && operator checks whether their during time ranges overlap. A new row is rejected only when both conditions hold for an existing row.
The tstzrange data type is specific to PostgreSQL. It is handled a bit differently from standard data types when querying and inserting data. Because it is a time range, it uses parentheses and square brackets around the start and end times of the range: a square bracket means the bound is inclusive, a parenthesis means it is exclusive. Inserting a person lending a book looks like this:
INSERT INTO book_lending (person_id, book_id, during)
VALUES (1, 134, tstzrange('2025-09-01 12:00:00', null, '[)'));
| id | person_id | book_id | during | removal_dt |
|---|---|---|---|---|
| 1 | 1 | 134 | ["2025-09-01 12:00:00+00",) | |
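The bound flags can be inspected directly with the containment operator @>, which tests whether a range contains a given timestamp (a quick sketch):

```sql
-- '[)' means: lower bound inclusive, upper bound exclusive.
SELECT tstzrange('2025-09-01 00:00:00+00', '2025-09-02 00:00:00+00', '[)')
       @> '2025-09-01 00:00:00+00'::timestamptz;  -- true: start is included
SELECT tstzrange('2025-09-01 00:00:00+00', '2025-09-02 00:00:00+00', '[)')
       @> '2025-09-02 00:00:00+00'::timestamptz;  -- false: end is excluded
```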
Now, if someone else tries to lend the book an hour later, an error occurs:
INSERT INTO book_lending (person_id, book_id, during)
VALUES (2, 134, tstzrange('2025-09-01 13:00:00', null, '[)'));
SQL Error [23P01]: ERROR: conflicting key value violates exclusion constraint "gist_book_lending"
Detail: Key (book_id, during)=(134, ["2025-09-01 13:00:00+00",)) conflicts with existing key (book_id, during)=(134, ["2025-09-01 12:00:00+00",)).
If the first person returns the book, we handle that by reusing the original lending date and time, taking lower(during), the lower bound of the time range:
UPDATE book_lending
SET during = tstzrange(lower(during), '2025-09-16 10:00:00', '[)')
WHERE id = 1;
| id | person_id | book_id | during | removal_dt |
|---|---|---|---|---|
| 1 | 1 | 134 | ["2025-09-01 12:00:00+00","2025-09-16 10:00:00+00") | |
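This range-based history also serves the invoicing use case from the introduction: the @> operator finds who held a resource at any given moment (a sketch, using the example data above):

```sql
-- Who was lending book 134 on 10 September? @> tests whether the
-- during range contains the given timestamp.
SELECT person_id
FROM book_lending
WHERE book_id = 134
  AND during @> '2025-09-10 12:00:00+00'::timestamptz
  AND removal_dt IS NULL;
-- Returns person_id 1, whose range covers 1-16 September.
```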
If the second person now lends the book, the constraint still goes off when there is an overlap in the time range. So this does not work:
INSERT INTO book_lending (person_id, book_id, during)
VALUES (2, 134, tstzrange('2025-09-16 09:59:59', NULL, '[)'));
SQL Error [23P01]: ERROR: conflicting key value violates exclusion constraint "gist_book_lending"
Detail: Key (book_id, during)=(134, ["2025-09-16 09:59:59+00",)) conflicts with existing key (book_id, during)=(134, ["2025-09-01 12:00:00+00","2025-09-16 10:00:00+00")).
But this works, because the upper bound of the existing range is exclusive: the old range runs until, not until and including, 10:00:00, so a new range starting at exactly that timestamp does not overlap:
INSERT INTO book_lending (person_id, book_id, during)
VALUES (2, 134, tstzrange('2025-09-16 10:00:00', NULL, '[)'));
These checks are exactly what the book lending constraint problem needs, solved with only a few SQL commands.
Drawbacks
- GiST indexes are a little slower than standard B-tree indexes. I did not benchmark this against hand-written check constraints, but I have never had performance problems with it.
- Getting the start and end timestamps out of the during column can be a bit more complicated, especially when using an ORM. This is easily solved by creating a view that exposes the lower and upper bounds of the time range as separate columns:
CREATE VIEW vw_book_lending AS
SELECT id, person_id, book_id, LOWER(during) AS start_timestamp, UPPER(during) AS end_timestamp, removal_dt
FROM book_lending;
SELECT * FROM vw_book_lending;
| id | person_id | book_id | start_timestamp | end_timestamp | removal_dt |
|---|---|---|---|---|---|
| 1 | 1 | 134 | 2025-09-01 12:00:00.000 +0000 | 2025-09-16 10:00:00.000 +0000 | |
| 6 | 2 | 134 | 2025-09-16 10:00:00.000 +0000 | | |
Conclusion
- Advanced PostgreSQL data types are underrated; they provide strong capabilities.
- With GiST and an exclusion constraint you can enforce complex logic at the database level. The error messages even show the conflicting record, so you do not have to search for it in the table contents.
- By enforcing the constraint at the database level, it no longer matters from which source the database is accessed: data integrity is always enforced.