postgresql

Michael Paquier: Postgres 12 highlight - Functions for partitions

planet postgresql - 2019-02-08 (Fri) 12:50:47

Partitions in Postgres are a relatively recent concept, introduced in version 10 and improved considerably over the last few years. It is possible, though complicated, to gather information about them with queries on the system catalogs, but these are not always straightforward. For example, getting a full partition tree requires a WITH RECURSIVE query when working with partitions spanning multiple layers.

Postgres 12 is coming with improvements in this regard, via two commits. The first one introduces a new system function to easily get information about a full partition tree:

commit: d5eec4eefde70414c9929b32c411cb4f0900a2a9
author: Michael Paquier <michael@paquier.xyz>
date: Tue, 30 Oct 2018 10:25:06 +0900
Add pg_partition_tree to display information about partitions

This new function is useful to display a full tree of partitions with a partitioned table given in output, and avoids the need of any complex WITH RECURSIVE query when looking at partition trees which are deep multiple levels. It returns a set of records, one for each partition, containing the partition's name, its immediate parent's name, a boolean value telling if the relation is a leaf in the tree and an integer telling its level in the partition tree with given table considered as root, beginning at zero for the root, and incrementing by one each time the scan goes one level down.

Author: Amit Langote
Reviewed-by: Jesper Pedersen, Michael Paquier, Robert Haas
Discussion: https://postgr.es/m/8d00e51a-9a51-ad02-d53e-ba6bf50b2e52@lab.ntt.co.jp
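
As a quick, hedged illustration (the table names are made up, not from the commit), the function can be called like this on a small two-level partition tree:

CREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);
CREATE TABLE child_0_100 PARTITION OF parent_tab FOR VALUES FROM (0) TO (100);
CREATE TABLE child_100_200 PARTITION OF parent_tab FOR VALUES FROM (100) TO (200);

SELECT relid, parentrelid, isleaf, level FROM pg_partition_tree('parent_tab');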

The second function is able to find the top-most parent of a partition tree:

commit: 3677a0b26bb2f3f72d16dc7fa6f34c305badacce
author: Michael Paquier <michael@paquier.xyz>
date: Fri, 8 Feb 2019 08:56:14 +0900
Add pg_partition_root to display top-most parent of a partition tree

This is useful when looking at partition trees with multiple layers, and combined with pg_partition_tree, it provides the possibility to show up an entire tree by just knowing one member at any level.

Author: Michael Paquier
Re[...]
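
Continuing the hypothetical example above, the root can then be obtained from any member of the tree:

SELECT pg_partition_root('child_0_100');
-- returns parent_tab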
Category: postgresql

Vasilis Ventirozos: Managing xid wraparound without looking like a (mail) chimp

planet postgresql - 2019-02-07 (Thu) 05:07:00
My colleague Payal came across an outage that happened to Mailchimp's Mandrill app yesterday; the link can be found HERE.
Since this was PostgreSQL related, I wanted to post about the technical aspect of it.
According to the company:

“Mandrill uses a sharded Postgres setup as one of our main datastores,”
the email explains.
“On Sunday, February 3, at 10:30pm EST, 1 of our 5 physical Postgres instances saw a significant spike in writes.” 
The email continues:
“The spike in writes triggered a Transaction ID Wraparound issue. When this occurs, database activity is completely halted. The database sets itself in read-only mode until offline maintenance (known as vacuuming) can occur.”

So, let's see what that "transaction ID wraparound issue" is and how someone could prevent similar outages from ever happening.

PostgreSQL uses MVCC to control transaction visibility, basically by comparing transaction IDs (XIDs). A row with an insert XID greater than the current transaction XID shouldn't be visible to the current transaction. But since transaction IDs are not unlimited, a cluster will eventually run out after 2^32 transactions (4+ billion), causing transaction ID wraparound: the transaction counter wraps around to zero, and all past transactions would suddenly appear to be in the future.

This is taken care of by vacuum, which marks rows as frozen, indicating that they were inserted by a transaction that committed far enough in the past to be visible to all current and future transactions. To control this behavior, Postgres has a configuration parameter called autovacuum_freeze_max_age, which defaults to 200,000,000 transactions, a very conservative default that must be tuned in larger production systems.

It sounds complicated, but it's relatively easy not to get to that point; for most people, just having autovacuum on will prevent this situation from ever happening. You can simply schedule manual vacuums by getting a list of the tables "closest" to autovacuum_freeze_max_age with a simple query like this:


SELECT 'vacuum analyze ' || c.oid::regcl[...]
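
The query above is cut off; as a hedged sketch of the same idea (not the author's exact query), tables can be listed by transaction ID age like this:

SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;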
Category: postgresql

Mark Wong: PDXPUG: February Meetup: Temporal Databases: Theory and Postgres

planet postgresql - 2019-02-07 (Thu) 04:51:29

2019 February 21 Meeting (Note: Back to third Thursday this month!)

Location:

PSU Business Accelerator
2828 SW Corbett Ave · Portland, OR
Parking is open after 5pm.

Speaker: Paul Jungwirth

Temporal databases let you record history: either a history of the database (what the table used to say), a history of the thing itself (what it used to be), or both at once. The theory of temporal databases goes back to the 90s, but standardization has only just begun with some modest recommendations in SQL:2011, and database products (including Postgres) are still missing major functionality.

This talk will cover how temporal tables are structured, how they are queried and updated, what SQL:2011 offers (and doesn’t), what functionality Postgres has already, and what remains to be built.

Paul started programming on a Tandy 1000 at age 8 and hasn’t been able to stop since. He helped build one of the Mac’s first web servers in 1994 and has founded software companies in politics and technical hiring. He works as an independent consultant specializing in Rails, Postgres, and Chef.

Category: postgresql

Bruce Momjian: Expanding Permission Letters

planet postgresql - 2019-02-07 (Thu) 03:30:01

Thanks to a comment on my previous blog post by Kaarel, the situation around simply displaying the Postgres permission letters is not quite as dire as I showed. There is a function, aclexplode(), which expands the access control list (ACL) syntax used by Postgres into a table with full text descriptions. This function exists in all supported versions of Postgres. However, it was only recently documented in this commit based on this email thread, and will appear in the Postgres 12 documentation.

Since aclexplode() exists (undocumented) in all supported versions of Postgres, it can be used to provide more verbose output of the pg_class.relacl permission letters. Here it is used with the test table created in the previous blog entry:

SELECT relacl FROM pg_class WHERE relname = 'test';

                         relacl
--------------------------------------------------------
 {postgres=arwdDxt/postgres,bob=r/postgres,=r/postgres}

SELECT a.*
FROM pg_class, aclexplode(relacl) AS a
WHERE relname = 'test'
ORDER BY 1, 2;

 grantor | grantee | privilege_type | is_grantable
---------+---------+----------------+--------------
      10 |       0 | SELECT         | f
      10 |      10 | SELECT         | f
      10 |      10 | UPDATE         | f
      10 |      10 | DELETE         | f
      10 |      10 | INSERT         | f
      10 |      10 | REFERENCES     | f
      10 |      10 | TRIGGER        | f
      10 |      10 | TRUNCATE       | f
      10 |   16388 | SELECT         | f
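
Since grantor and grantee are numeric role OIDs, a small follow-up query (my own hedged sketch, not from the post) can resolve them to role names, mapping OID 0 to PUBLIC:

SELECT coalesce(g.rolname, 'PUBLIC') AS grantee,
       a.privilege_type,
       a.is_grantable
FROM pg_class c
CROSS JOIN LATERAL aclexplode(c.relacl) AS a
LEFT JOIN pg_roles g ON g.oid = a.grantee
WHERE c.relname = 'test'
ORDER BY 1, 2;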

Continue Reading »

Category: postgresql

Peter Eisentraut: PostgreSQL with passphrase-protected SSL keys under systemd

planet postgresql - 2019-02-06 (Wed) 21:34:56

PostgreSQL supports SSL, and SSL private keys can be protected by a passphrase. Many people choose not to use passphrases with their SSL keys, and that’s perhaps fine. This blog post is about what happens when you do have a passphrase.

If you have SSL enabled and a key with a passphrase and you start the server, the server will stop to ask for the passphrase. This happens automatically from within the OpenSSL library. Stopping to ask for a passphrase obviously prevents automatic starts, restarts, and reboots, but we’re assuming here that you have made that tradeoff consciously.

When you run PostgreSQL under systemd, which is very common nowadays, there is an additional problem. Under systemd, the server process does not have terminal access, and so it cannot ask for any passphrases. By default, the startup will fail in such setups.

As of PostgreSQL 11, it is possible to configure an external program to obtain the SSL passphrase, using the configuration setting ssl_passphrase_command. As a simple example, you can set

ssl_passphrase_command = 'echo "secret"'

and it will apply the passphrase that the program prints out. You can use this to fetch the passphrase from a file or other secret store, for example.
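
For example, a hypothetical setup (the file path is assumed, not from the post) could read the passphrase from a file that only the database server user can read:

ssl_passphrase_command = 'cat /etc/postgresql/server-key.pass'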

But what if you still want the security of having to manually enter the password? Systemd has a facility that lets services prompt for passwords. You can use that like this:

ssl_passphrase_command = '/bin/systemd-ask-password "%p"'

Except that this doesn’t actually work, because non-root processes are not permitted to use the systemd password system; see this bug. But there are workarounds.

One workaround is to use sudo. So you use

ssl_passphrase_command = 'sudo /bin/systemd-ask-password "%p"'

and then put something like this into /etc/sudoers:

postgres ALL=(root) NOPASSWD: /bin/systemd-ask-password

A more evil workaround (discussed in the above-mentioned bug report) is to override the permissions on the socket file underlying this mechanism. Add this to the postgresql service unit:

ExecStartPre=+/bin/setfa[...]
Category: postgresql

Jignesh Shah: PGConf.RU 2019: Slides from my sessions

planet postgresql - 2019-02-06 (Wed) 19:17:00
It was my first visit to Moscow for PGConf.RU 2019. Enjoyed meeting the strong community of PostgreSQL in Russia!


Slides from my sessions:

1. Deep Dive into the RDS PostgreSQL Universe



2. Tips and Tricks for Amazon RDS for PostgreSQL


This blog represents my own view points and not of my employer, Amazon Web Services.


Category: postgresql

James Coleman: We scale both vertically and horizontally with dozens of PostgreSQL clusters across multiple…

planet postgresql - 2019-02-06 (Wed) 00:03:47

We scale both vertically and horizontally with dozens of PostgreSQL clusters across multiple datacenters.

Category: postgresql

Hans-Juergen Schoenig: Implementing “AS OF”-queries in PostgreSQL

planet postgresql - 2019-02-05 (Tue) 23:01:09

Over the years many people have asked for “timetravel” or “AS OF”-queries in PostgreSQL. Oracle has provided this kind of functionality for quite some time already. However, in the PostgreSQL world “AS OF timestamp” is not directly available. The question now is: How can we implement this vital functionality in user land and mimic Oracle functionality?

Implementing “AS OF” and timetravel in user land

Let us suppose we want to version a simple table consisting of just three columns: id, some_data1 and some_data2. To do this we first have to install the btree_gist module, which adds some valuable operators we will need to manage time travel. The table storing the data will need an additional column to handle the validity of a row. Fortunately PostgreSQL supports “range types”, which allow storing ranges in an easy and efficient way. Here is how it works:

CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE t_object
(
    id          int8,
    valid       tstzrange,
    some_data1  text,
    some_data2  text,
    EXCLUDE USING gist (id WITH =, valid WITH &&)
);

Mind the last line here: “EXCLUDE USING gist” will ensure that if the “id” is identical, the period (“valid”) must not overlap. The idea is to ensure that the same “id” only has one entry at a time. PostgreSQL will automatically create a GiST index to enforce this. The feature is called an “exclusion constraint”. If you are looking for more information about this feature consider checking out the official documentation (https://www.postgresql.org/docs/current/ddl-constraints.html).
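
As a hypothetical illustration (the values are made up), a second row for the same id with an overlapping validity range is rejected:

INSERT INTO t_object VALUES (1, '[2019-01-01,)', 'foo', 'bar');
INSERT INTO t_object VALUES (1, '[2019-02-01,)', 'baz', 'qux');
-- fails: conflicting key value violates the exclusion constraint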

If you want to filter on some_data1 and some_data2 consider creating indexes. Remember, missing indexes are in many cases the root cause of bad performance:

CREATE INDEX idx_some_index1 ON t_object (some_data1);
CREATE INDEX idx_some_index2 ON t_object (some_data2);

A view makes it super easy to extract data from the underlying table:

CREATE VIEW t_object_recent AS SELECT id, some_data1, some_data2 FROM t[...]
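
The view definition is truncated above; as a rough sketch of the idea (not the author's exact code), a point-in-time lookup against the valid range could look like this:

SELECT id, some_data1, some_data2
FROM t_object
WHERE valid @> '2019-02-01 00:00:00+00'::timestamptz;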
Category: postgresql

Avinash Kumar: Use pg_repack to Rebuild PostgreSQL Database Objects Online

planet postgresql - 2019-02-05 (Tue) 10:14:49

In this blog post, we’ll look at how to use pg_repack to rebuild PostgreSQL database objects online.

We’ve seen a lot of questions regarding the options available in PostgreSQL for rebuilding a table online. We created this blog post to explain the pg_repack extension, available in PostgreSQL for this requirement. pg_repack is a well-known extension that was created and is maintained as an open source project by several authors.

There are three main reasons why you need to use pg_repack in a PostgreSQL server:
  1. Reclaim free space from a table to disk, after deleting a huge chunk of records
  2. Rebuild a table to re-order the records and shrink/pack them into a smaller number of pages. This may let a query fetch just one page (or fewer than n pages) instead of n pages from disk. In other words, less I/O and more performance.
  3. Reclaim free space from a table that has grown in size with a lot of bloat due to improper autovacuum settings.

You might have already read our previous articles that explained what bloat is, and discussed the internals of autovacuum. After reading these articles, you can see there is an autovacuum background process that removes dead tuples from a table and allows the space to be re-used by future updates/inserts on that table. Over a period of time, tables that receive the most updates or deletes may have a lot of bloated space due to poorly tuned autovacuum settings. This leads to slow-performing queries on these tables. Rebuilding the table is the best way to avoid this.
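
As a hedged sketch of basic usage (the database and table names are assumed, as is an installed pg_repack package), the extension is created in the target database and the command-line tool then rebuilds a table online:

-- inside the target database:
CREATE EXTENSION pg_repack;

-- then from the shell:
-- pg_repack --dbname=mydb --table=public.t1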

Why is just autovacuum not enough for tables with bloat?

We have discussed several parameters that change the behavior of an autovacuum process in this blog post. There cannot be more than autovacuum_max_workers autovacuum processes running in a database cluster at a time. At the same time, due to untuned autovacuum settings and no manual vacuuming of the database as a weekly or monthly job, many tables can be skipped by autovacuum. We have discussed in this post that the default autovacuum settings run autova[...]
Category: postgresql

Christophe Pettus: “Breaking PostgreSQL at Scale” at FOSDEM 2019

planet postgresql - 2019-02-05 (Tue) 06:00:49

The slides for my talk, “Breaking PostgreSQL at Scale” at FOSDEM 2019 are available.

Category: postgresql

Bruce Momjian: Permission Letters

planet postgresql - 2019-02-05 (Tue) 05:00:01

If you have looked at Postgres object permissions in the past, I bet you were confused. I get confused, and I have been at this for a long time.

The way permissions are stored in Postgres is patterned after the long directory listing of Unix-like operating systems, e.g., ls -l. Just like directory listings, the Postgres system stores permissions using single-letter indicators. r is used for read (SELECT) in both systems, while w is used for write permission in ls, and UPDATE in Postgres. The other nine letters used by Postgres don't correspond to any directory listing permission letters, e.g., d is DELETE permission. The full list of Postgres permission letters is in the GRANT documentation page; the other letters are:

D -- TRUNCATE
x -- REFERENCES
t -- TRIGGER
X -- EXECUTE
U -- USAGE
C -- CREATE
c -- CONNECT
T -- TEMPORARY
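
As a hedged reconstruction (not necessarily the post's exact example), the letters show up concatenated per role in pg_class.relacl after a GRANT:

GRANT SELECT ON test TO bob;

SELECT relacl FROM pg_class WHERE relname = 'test';
-- {postgres=arwdDxt/postgres,bob=r/postgres}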

Continue Reading »

Category: postgresql

Amit Jain: An Overview of the Index Changes in PostgreSQL 11

planet postgresql - 2019-02-05 (Tue) 02:23:33

The right application of indexes can make queries blazing fast.

Indexes use pointers to access data pages in a speedy fashion.

Major changes happened to indexes in PostgreSQL 11; lots of much-awaited patches have been released.

Let's have a look at some of the great features of this release.

Parallel B-TREE Index Builds

PostgreSQL 11 introduced an infrastructure patch to enable parallel index creation.

For now, it can only be used with B-Tree indexes.

Building a parallel B-Tree index is two to three times faster than doing the same thing without parallel workers (a serial build).

In PostgreSQL 11 parallel index creation is on by default.

There are two important parameters:

  • max_parallel_workers - Sets the maximum number of workers that the system can support for parallel queries.
  • max_parallel_maintenance_workers - Controls the maximum number of worker processes which can be used to CREATE INDEX.

Let's check it with an example:

severalnines=# CREATE TABLE test_btree AS SELECT generate_series(1,100000000) AS id;
SELECT 100000000
severalnines=# SET maintenance_work_mem = '1GB';
severalnines=# \timing
severalnines=# CREATE INDEX q ON test_btree (id);
TIME: 25294.185 ms (00:25.294)

Let's try it with 8-way parallel work:

severalnines=# SET maintenance_work_mem = '2GB';
severalnines=# SET max_parallel_workers = 16;
severalnines=# SET max_parallel_maintenance_workers = 8;
severalnines=# \timing
severalnines=# CREATE INDEX q1 ON test_btree (id);
TIME: 11001.240 ms (00:11.001)

We can see the performance difference with the parallel workers: more than 60% faster with just a small change. maintenance_work_mem can also be increased to get more performance.

ALTER TABLE also helps to increase parallel workers. The syntax below can be used to increase parallel workers along with max_parallel_maintenance_workers. This bypasses the cost model completely.

ALTER TABLE test_btree SET (parallel_workers = 24);

Tip: RESET these settings to their defaults once the index build is completed, to prevent adverse query plans.
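
A hypothetical sketch of resetting the storage parameter once the build is done:

ALTER TABLE test_btree RESET (parallel_workers);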

CREATE INDEX with the CONCU

[...]
Category: postgresql

Brandur Leach: SortSupport: Sorting in Postgres at Speed

planet postgresql - 2019-02-05 (Tue) 01:56:52

Most often, there’s a trade-off involved in optimizing software. The cost of better performance is the opportunity cost of the time that it took to write the optimization, and the additional cost of maintenance for code that becomes more complex and more difficult to understand.

Many projects prioritize product development over improving runtime speed. Time is spent building new things instead of making existing things faster. Code is kept simpler and easier to understand so that adding new features and fixing bugs stays easy, even as particular people rotate in and out and institutional knowledge is lost.

But that’s certainly not the case in all domains. Game code is often an interesting read because it comes from an industry where speed is a competitive advantage, and it’s common practice to optimize liberally even at some cost to modularity and maintainability. One technique for that is to inline code in critical sections even to the point of absurdity. CryEngine, open-sourced a few years ago, has a few examples of this, with “tick” functions like this one that are 800+ lines long with 14 levels of indentation.

Another common place to find optimizations is in databases. While games optimize because they have to, databases optimize because they’re an example of software that’s extremely leveraged – if there’s a way to make running select queries or building indexes 10% faster, it’s not an improvement that affects just a couple users, it’s one that’ll potentially invigorate millions of installations around the world. That’s enough of an advantage that the enhancement is very often worth it, even if the price is a challenging implementation or some additional code complexity.

Postgres contains a wide breadth of optimizations, and happily they’ve been written conscientiously so that the source code stays readable. The one that we’ll look at today is SortSupport, a technique for localizing the information needed to compare data into places where it can be accessed very quickly, thereby making sorting data much fas

[...]
Category: postgresql

Hubert 'depesz' Lubaczewski: Converting list of integers into list of ranges

planet postgresql - 2019-02-05 (Tue) 00:13:19
Yesterday someone on irc asked: i've a query that returns sequential numbers with gaps (generate_series + join) and my question is: can i somehow construct ranges out of the returned values? sort of range_agg or something? There was no further discussion, aside from me saying sure you can. not trivial task, but possible. you'd need … Continue reading "Converting list of integers into list of ranges"
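
The post body is only excerpted here; as a hedged sketch of one common approach (not necessarily the solution depesz describes), consecutive values can be grouped by subtracting a row number and then aggregated into ranges:

WITH vals AS (
    SELECT x FROM generate_series(1, 20) AS x WHERE x NOT IN (5, 6, 13)
)
SELECT min(x) AS range_start, max(x) AS range_end
FROM (SELECT x, x - row_number() OVER (ORDER BY x) AS grp FROM vals) s
GROUP BY grp
ORDER BY range_start;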
Category: postgresql

Paul Ramsey: Dr. JTS comes to Crunchy

planet postgresql - 2019-02-04 (Mon) 22:00:00

Today’s an exciting day in the Victoria office of Crunchy Data: our local staff count goes from one to two, as Martin Davis joins the company!

This is kind of a big deal, because this year Martin and I will be spending much of our time on the core computational geometry library that powers PostGIS, the GEOS library, and the JTS library from which it derives its structure.

Why is that a big deal? Because GEOS, JTS and other language ports provide the computational geometry algorithms underneath most of the open source geospatial ecosystem – so improvements in our core libraries ripple out to help a huge swathe of other software.

JTS came first, initially as a project of the British Columbia government. GEOS is a C++ port of JTS. There are also JavaScript and .NET ports (JSTS and NTS).

Each of those libraries has developed a rich downline of other libraries and projects that depend on them. On the desktop, on the web, in the middleware, JTS and GEOS power all of it.

So we know that work on JTS and GEOS on our side is going to benefit far more than just PostGIS.

I’ve already spent a decent amount of time on bringing the GEOS library up to date with the changes in JTS over the past few months, and trying to fulfill the “maintainer” role, merging pull requests and closing some outstanding tickets.

As Martin starts adding to JTS, I now feel more confident in my ability to bring those changes into the C++ world of GEOS as they land.

Without pre-judging what will get first priority, topics of overlay robustness, predicate performance, and geometry cleaning are near the top of our list.

Our spatial customers at Crunchy process a lot of geometry, so ensuring that PostGIS (GEOS) operations are robust and high performance is a big win for PostgreSQL and for our customers as well.

Category: postgresql

Andreas Scherbaum: How long will a 64 bit Transaction-ID last in PostgreSQL?

planet postgresql - 2019-02-03 (Sun) 22:34:00

Andreas 'ads' Scherbaum

At FOSDEM someone asked how long 64 bit Transaction-IDs will last.

To refresh: PostgreSQL is currently using 32 bits for the TXID, which is good for around 4 billion transactions:

fosdem=# SELECT 2^32;
  ?column?
------------
 4294967296
(1 row)

That will not last very long if you have a busy database doing many writes over the day. MVCC keeps the new and old versions of a row in the table, and the TXID increases with every transaction. At some point the 4 billion transactions are reached, the TXID overruns, and starts again at the beginning. The way transactions work in PostgreSQL, suddenly all data in your database would become invisible. No one wants that!

To limit this problem, PostgreSQL has a number of mechanisms in place:

  • PostgreSQL splits transaction IDs in half: 2 billion in the past are visible, 2 billion in the future are not visible - all visible rows must live in the 2 billion in the past, at all times.
  • Old, deleted row versions are eventually removed by VACUUM (or autovacuum), and their XIDs are no longer used.
  • Old row versions which are still live are marked as "frozen" in the table, and assigned a special XID - the previously used XID is no longer needed. The problem here is that every single table in every database must be vacuumed before the 2 billion threshold is reached.
  • PostgreSQL uses lazy XIDs, where a "real" transaction ID is only assigned if the transaction changes something on disk - if a transaction is read only, and does not change anything, no transaction ID is consumed.
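
As a hedged back-of-the-envelope calculation (the throughput figure is assumed, not from the post): even at a sustained one million transactions per second, a 64 bit counter would last roughly 584,000 years:

SELECT 2^64 / 1000000 / (60 * 60 * 24 * 365) AS years;
-- roughly 584942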


Continue reading "How long will a 64 bit Transaction-ID last in PostgreSQL?"
Category: postgresql

Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 12 – Allow COPY FROM to filter data using WHERE conditions

planet postgresql - 2019-02-03 (Sun) 17:47:51
On 19th of January 2019, Tomas Vondra committed patch: Allow COPY FROM to filter data using WHERE conditions   Extends the COPY FROM command with a WHERE condition, which allows doing various types of filtering while importing the data (random sampling, condition on a data column, etc.). Until now such filtering required either preprocessing of … Continue reading "Waiting for PostgreSQL 12 – Allow COPY FROM to filter data using WHERE conditions"
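
As a minimal hedged sketch of the new syntax (table, columns, and file path are made up):

COPY people (id, name, age)
FROM '/tmp/people.csv'
WITH (FORMAT csv)
WHERE age > 21;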
Category: postgresql

James Coleman: PostgreSQL at Scale: Database Schema Changes Without Downtime

planet postgresql - 2019-02-02 (Sat) 08:01:28

Braintree Payments uses PostgreSQL as its primary datastore. We rely heavily on the data safety and consistency guarantees a traditional relational database offers us, but these guarantees come with certain operational difficulties. To make things even more interesting, we allow zero scheduled functional downtime for our main payments processing services.

Several years ago we published a blog post detailing some of the things we had learned about how to safely run DDL (data definition language) operations without interrupting our production API traffic.

Since that time PostgreSQL has gone through quite a few major upgrade cycles — several of which have added improved support for concurrent DDL. We’ve also further refined our processes. Given how much has changed, we figured it was time for a blog post redux.

In this post we’ll address the following topics:

First, some basics

For all code and database changes, we require that:

  • Live code and schemas be forward-compatible with updated code and schemas: this allows us to roll out deploys gradually across a fleet of application servers and database clusters.
  • New code and schemas be backward-compatible with live code and schemas: this allows us to roll back any change to the previous version in the event of unexpected errors.

For all DDL operations we require that:

  • Any exclusive locks acquired on tables or indexes be held for at most ~2 seconds (see the lock_timeout sketch after this list).
  • Rollback strategies do not involve reverting the database schema to its previous version.
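
As a hedged aside (not code from the post), one common way to enforce such a lock-acquisition limit is to wrap each DDL statement in a lock_timeout, so the statement gives up instead of queueing behind long-running transactions:

SET lock_timeout = '2s';
ALTER TABLE payments ADD COLUMN note text;  -- hypothetical table and column
RESET lock_timeout;
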
Transactionality

PostgreSQL supports transactional DDL. In most cases, you can execute multiple DDL statements inside an explicit database transaction and take an “all or nothing” approach to a set of changes. However, running multiple DDL statements inside a transaction has one serious downside: if you alter multiple objects, you’ll need to acquire exclusive locks on all of those objects in a single

[...]
Category: postgresql

Michael Paquier: Two-phase commit and temporary objects

planet postgresql - 2019-02-01 (Fri) 17:01:39

A couple of weeks ago a bug popped up on the community mailing lists about the use of temporary objects in two-phase commit. After discussions, the result is the following commit:

commit: c5660e0aa52d5df27accd8e5e97295cf0e64f7d4
author: Michael Paquier <michael@paquier.xyz>
date: Fri, 18 Jan 2019 09:21:44 +0900
Restrict the use of temporary namespace in two-phase transactions

Attempting to use a temporary table within a two-phase transaction is forbidden for ages. However, there have been uncovered grounds for a couple of other object types and commands which work on temporary objects with two-phase commit. In short, trying to create, lock or drop an object on a temporary schema should not be authorized within a two-phase transaction, as it would cause its state to create dependencies with other sessions, causing all sorts of side effects with the existing session or other sessions spawned later on trying to use the same temporary schema name. Regression tests are added to cover all the grounds found, the original report mentioned function creation, but monitoring closer there are many other patterns with LOCK, DROP or CREATE EXTENSION which are involved. One of the symptoms resulting in combining both is that the session which used the temporary schema is not able to shut down completely, waiting for being able to drop the temporary schema, something that it cannot complete because of the two-phase transaction involved with temporary objects. In this case the client is able to disconnect but the session remains alive on the backend-side, potentially blocking connection backend slots from being used. Other problems reported could also involve server crashes. This is back-patched down to v10, which is where 9b013dc has introduced MyXactFlags, something that this patch relies on.

Reported-by: Alexey Bashtanov
Author: Michael Paquier
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/5d910e2e-0db8-ec06-dd5f-baec420513c3@imap.cc
Backpatch-through: 10
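
As a hedged illustration of the kind of pattern that is now rejected (object and transaction names made up, and assuming max_prepared_transactions > 0):

BEGIN;
CREATE TABLE pg_temp.my_temp_tab (a int);
PREPARE TRANSACTION 'txn_with_temp_objects';
-- fails: two-phase commit is not allowed after operating on temporary objects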

In PostgreSQL, temporary objects are as

[...]
Category: postgresql

Keith Fiske: Managing Transaction ID Exhaustion (Wraparound) in PostgreSQL

planet postgresql - 2019-01-31 (Thu) 21:14:00

One of the most critical topics to understand when administering a PostgreSQL database is the concept of transaction IDs (TXID) and the fact that they can be exhausted if not monitored properly. However, this blog post isn't going to go into the details of what TXID exhaustion actually is. The Routine Vacuuming section of the documentation is probably one of the most important to read and understand, so I will refer you there. What this blog post is going to cover is an easy way to monitor for it and what can be done to prevent it from ever being a problem.
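
The post is only excerpted here; as a hedged sketch of the kind of monitoring it advocates (the 2 billion figure is the approximate hard limit, and the exact thresholds are assumed), the age of each database's datfrozenxid can be watched with a query like:

SELECT datname,
       age(datfrozenxid) AS xid_age,
       round(100.0 * age(datfrozenxid) / 2000000000, 2) AS pct_towards_wraparound
FROM pg_database
ORDER BY 2 DESC;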

Category: postgresql
