Planet PostgreSQL

Subscribe to the Planet PostgreSQL feed

Bruce Momjian: Permission Letters

2019-02-05 (Tue) 05:00:01

If you have looked at Postgres object permissions in the past, I bet you were confused. I get confused, and I have been at this for a long time.

The way permissions are stored in Postgres is patterned after the long directory listing of Unix-like operating systems, e.g., ls -l. Just like directory listings, the Postgres system stores permissions using single-letter indicators. r is used for read (SELECT) in both systems, while w is used for write permission in ls, and UPDATE in Postgres. The other nine letters used by Postgres don't correspond to any directory listing permission letters, e.g., d is DELETE permission. The full list of Postgres permission letters is in the GRANT documentation page; the other letters are:

  • D -- TRUNCATE
  • x -- REFERENCES
  • t -- TRIGGER
  • X -- EXECUTE
  • U -- USAGE
  • C -- CREATE
  • c -- CONNECT
  • T -- TEMPORARY
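For illustration, here is a hedged sketch of how those letters show up in practice; the table name, role names, and the abbreviated \dp output below are hypothetical, not taken from the post:

GRANT SELECT, UPDATE, DELETE ON accounts TO bob;
\dp accounts
-- Access privileges (output abbreviated):
--   alice=arwdDxt/alice   -- alice, the owner, holds all table privileges
--   bob=rwd/alice         -- bob was granted SELECT, UPDATE, DELETE by alice

The aclitem format is grantee=letters/grantor, so the single letters above map directly to the list of privileges.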

Continue Reading »

Category: postgresql

Amit Jain: An Overview of the Index Changes in PostgreSQL 11

2019-02-05 (Tue) 02:23:33

The right application of indexes can make queries blazing fast.

Indexes use pointers to access data pages in a speedy fashion.

Major changes to indexes landed in PostgreSQL 11, and many long-awaited patches have been released.

Let's have a look at some of the great features of this release.

Parallel B-TREE Index Builds

PostgreSQL 11 introduced an infrastructure patch to enable parallel index creation.

For now, it can only be used with B-Tree indexes.

Building a B-Tree index in parallel is two to three times faster than a serial build.

In PostgreSQL 11 parallel index creation is on by default.

There are two important parameters:

  • max_parallel_workers - Sets the maximum number of workers that the system can support for parallel queries.
  • max_parallel_maintenance_workers - Controls the maximum number of worker processes which can be used to CREATE INDEX.

Let's check it with an example:

severalnines=# CREATE TABLE test_btree AS SELECT generate_series(1,100000000) AS id;
SELECT 100000000
severalnines=# SET maintenance_work_mem = '1GB';
severalnines=# \timing
severalnines=# CREATE INDEX q ON test_btree (id);
TIME: 25294.185 ms (00:25.294)

Let's try it with 8-way parallel work:

severalnines=# SET maintenance_work_mem = '2GB';
severalnines=# SET max_parallel_workers = 16;
severalnines=# SET max_parallel_maintenance_workers = 8;
severalnines=# \timing
severalnines=# CREATE INDEX q1 ON test_btree (id);
TIME: 11001.240 ms (00:11.001)

We can see the performance difference with parallel workers: the build is roughly 2.3 times faster with just a small change. maintenance_work_mem can also be increased to get more performance.

ALTER TABLE can also be used to set the number of parallel workers for a table. The syntax below, used along with max_parallel_maintenance_workers, bypasses the cost model completely.

ALTER TABLE test_btree SET (parallel_workers = 24);

Tip: RESET the setting to its default once the index build is completed to prevent adverse query plans.
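As a sketch of that tip, one way to clear the per-table setting afterwards (test_btree is the table from the example above):

ALTER TABLE test_btree RESET (parallel_workers);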

CREATE INDEX with the CONCU

[...]
Category: postgresql

Brandur Leach: SortSupport: Sorting in Postgres at Speed

2019-02-05 (Tue) 01:56:52

Most often, there’s a trade off involved in optimizing software. The cost of better performance is the opportunity cost of the time that it took to write the optimization, and the additional cost of maintenance for code that becomes more complex and more difficult to understand.

Many projects prioritize product development over improving runtime speed. Time is spent building new things instead of making existing things faster. Code is kept simpler and easier to understand so that adding new features and fixing bugs stays easy, even as particular people rotate in and out and institutional knowledge is lost.

But that’s certainly not the case in all domains. Game code is often an interesting read because it comes from an industry where speed is a competitive advantage, and it’s common practice to optimize liberally even at some cost to modularity and maintainability. One technique for that is to inline code in critical sections even to the point of absurdity. CryEngine, open-sourced a few years ago, has a few examples of this, with “tick” functions like this one that are 800+ lines long with 14 levels of indentation.

Another common place to find optimizations is in databases. While games optimize because they have to, databases optimize because they’re an example of software that’s extremely leveraged – if there’s a way to make running select queries or building indexes 10% faster, it’s not an improvement that affects just a couple users, it’s one that’ll potentially invigorate millions of installations around the world. That’s enough of an advantage that the enhancement is very often worth it, even if the price is a challenging implementation or some additional code complexity.

Postgres contains a wide breadth of optimizations, and happily they’ve been written conscientiously so that the source code stays readable. The one that we’ll look at today is SortSupport, a technique for localizing the information needed to compare data into places where it can be accessed very quickly, thereby making sorting data much fas

[...]
Category: postgresql

Hubert 'depesz' Lubaczewski: Converting list of integers into list of ranges

2019-02-05 (Tue) 00:13:19
Yesterday someone on irc asked: i've a query that returns sequential numbers with gaps (generate_series + join) and my question is: can is somehow construct ranges out of the returned values? sort of range_agg or something? There was no further discussion, aside from me saying sure you can. not trivial task, but possible. you'd need … Continue reading "Converting list of integers into list of ranges"
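The post goes on to build a solution; as a rough sketch of the idea (not depesz's actual query), the classic trick is that consecutive values share the same value-minus-row_number offset, which can then be grouped:

WITH vals (v) AS (
    VALUES (1), (2), (3), (7), (8), (12)
)
SELECT min(v) AS range_start, max(v) AS range_end
FROM (
    SELECT v, v - row_number() OVER (ORDER BY v) AS grp
    FROM vals
) AS s
GROUP BY grp
ORDER BY range_start;
-- range_start | range_end
--           1 |         3
--           7 |         8
--          12 |        12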
Category: postgresql

Paul Ramsey: Dr. JTS comes to Crunchy

2019-02-04 (Mon) 22:00:00

Today’s an exciting day in the Victoria office of Crunchy Data: our local staff count goes from one to two, as Martin Davis joins the company!

This is kind of a big deal, because this year Martin and I will be spending much of our time on the core computational geometry library that powers PostGIS, the GEOS library, and the JTS library from which it derives its structure.

Why is that a big deal? Because GEOS, JTS and other language ports provide the computational geometry algorithms underneath most of the open source geospatial ecosystem – so improvements in our core libraries ripple out to help a huge swathe of other software.

JTS came first, initially as a project of the British Columbia government. GEOS is a C++ port of JTS. There are also JavaScript and .Net ports (JSTS and NTS).

Each of those libraries has developed a rich downline of other libraries and projects that depend on them. On the desktop, on the web, in the middleware, JTS and GEOS power all of it.

So we know that work on JTS and GEOS on our side is going to benefit far more than just PostGIS.

I’ve already spent a decent amount of time on bringing the GEOS library up to date with the changes in JTS over the past few months, and trying to fulfill the “maintainer” role, merging pull requests and closing some outstanding tickets.

As Martin starts adding to JTS, I now feel more confident in my ability to bring those changes into the C++ world of GEOS as they land.

Without pre-judging what will get first priority, topics of overlay robustness, predicate performance, and geometry cleaning are near the top of our list.

Our spatial customers at Crunchy process a lot of geometry, so ensuring that PostGIS (GEOS) operations are robust and high performance is a big win for PostgreSQL and for our customers as well.

Category: postgresql

Andreas Scherbaum: How long will a 64 bit Transaction-ID last in PostgreSQL?

2019-02-03 (Sun) 22:34:00

Andreas 'ads' Scherbaum

At FOSDEM someone asked how long 64 bit Transaction-IDs will last.

To refresh: PostgreSQL is currently using 32 bits for the TXID, and is good for around 4 billion transactions:

fosdem=# SELECT 2^32;
  ?column?
------------
 4294967296
(1 row)

That will not last very long if you have a busy database doing many writes over the day. MVCC keeps the new and old versions of a row in the table, and the TXID increases with every transaction. At some point the 4 billion transactions are reached, the TXID wraps around, and starts again at the beginning. Because of the way transaction visibility works in PostgreSQL, all data in your database would suddenly become invisible. No one wants that!

To limit this problem, PostgreSQL has a number of mechanisms in place:

  • PostgreSQL splits the transaction ID space in half: 2 billion in the past are visible, 2 billion in the future are not visible - all visible rows must live in the 2 billion in the past, at all times.
  • Old, deleted row versions are eventually removed by VACUUM (or autovacuum), and their XIDs are no longer used.
  • Old row versions which are still live are marked as "frozen" in the table, and assigned a special XID - the previously used XID is no longer needed. The problem here is that every single table in every database must be vacuumed before the 2 billion threshold is reached.
  • PostgreSQL uses lazy XIDs, where a "real" transaction id is only assigned if the transaction changes something on disk - if a transaction is read only, and does not change anything, no transaction id is consumed.
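Putting the original question into numbers, here is a rough back-of-the-envelope sketch (the sustained rate of one million write transactions per second is an assumption for illustration, not a figure from the post):

-- 2^64 transaction IDs consumed at 1,000,000 transactions per second, in years:
SELECT 2^64 / (1000000.0 * 60 * 60 * 24 * 365.25) AS years;
-- roughly 584,542 years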

Continue reading "How long will a 64 bit Transaction-ID last in PostgreSQL?"
Category: postgresql

Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 12 – Allow COPY FROM to filter data using WHERE conditions

2019-02-03 (Sun) 17:47:51
On 19th of January 2019, Tomas Vondra committed patch: Allow COPY FROM to filter data using WHERE conditions   Extends the COPY FROM command with a WHERE condition, which allows doing various types of filtering while importing the data (random sampling, condition on a data column, etc.). Until now such filtering required either preprocessing of … Continue reading "Waiting for PostgreSQL 12 – Allow COPY FROM to filter data using WHERE conditions"
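For reference, a minimal sketch of what the new syntax looks like (the table, columns, and file path are hypothetical):

COPY measurements (city, reading, taken_at)
    FROM '/tmp/measurements.csv'
    WITH (FORMAT csv)
    WHERE reading > 0;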
Category: postgresql

James Coleman: PostgreSQL at Scale: Database Schema Changes Without Downtime

2019-02-02 (Sat) 08:01:28

Braintree Payments uses PostgreSQL as its primary datastore. We rely heavily on the data safety and consistency guarantees a traditional relational database offers us, but these guarantees come with certain operational difficulties. To make things even more interesting, we allow zero scheduled functional downtime for our main payments processing services.

Several years ago we published a blog post detailing some of the things we had learned about how to safely run DDL (data definition language) operations without interrupting our production API traffic.

Since that time PostgreSQL has gone through quite a few major upgrade cycles — several of which have added improved support for concurrent DDL. We’ve also further refined our processes. Given how much has changed, we figured it was time for a blog post redux.

In this post we’ll address the following topics:

First, some basics

For all code and database changes, we require that:

  • Live code and schemas be forward-compatible with updated code and schemas: this allows us to roll out deploys gradually across a fleet of application servers and database clusters.
  • New code and schemas be backward-compatible with live code and schemas: this allows us to roll back any change to the previous version in the event of unexpected errors.

For all DDL operations we require that:

  • Any exclusive locks acquired on tables or indexes be held for at most ~2 seconds.
  • Rollback strategies do not involve reverting the database schema to its previous version.
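One common way to enforce the first requirement above is to cap how long a DDL statement may wait for its lock, so it fails fast instead of queueing behind running queries. A hedged sketch, not necessarily Braintree's tooling; the table and column names are hypothetical:

SET lock_timeout = '2s';
ALTER TABLE payments ADD COLUMN settled_at timestamptz;  -- aborts if the lock cannot be acquired within 2 seconds
RESET lock_timeout;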
Transactionality

PostgreSQL supports transactional DDL. In most cases, you can execute multiple DDL statements inside an explicit database transaction and take an “all or nothing” approach to a set of changes. However, running multiple DDL statements inside a transaction has one serious downside: if you alter multiple objects, you’ll need to acquire exclusive locks on all of those objects in a single

[...]
Category: postgresql

Michael Paquier: Two-phase commit and temporary objects

2019-02-01 (Fri) 17:01:39

A couple of weeks ago a bug popped up on the community mailing lists about the use of temporary objects in two-phase commit. After discussions, the result is the following commit:

commit: c5660e0aa52d5df27accd8e5e97295cf0e64f7d4
author: Michael Paquier <michael@paquier.xyz>
date: Fri, 18 Jan 2019 09:21:44 +0900

Restrict the use of temporary namespace in two-phase transactions

Attempting to use a temporary table within a two-phase transaction is
forbidden for ages.  However, there have been uncovered grounds for a
couple of other object types and commands which work on temporary
objects with two-phase commit.  In short, trying to create, lock or drop
an object on a temporary schema should not be authorized within a
two-phase transaction, as it would cause its state to create
dependencies with other sessions, causing all sorts of side effects with
the existing session or other sessions spawned later on trying to use
the same temporary schema name.

Regression tests are added to cover all the grounds found, the original
report mentioned function creation, but monitoring closer there are many
other patterns with LOCK, DROP or CREATE EXTENSION which are involved.

One of the symptoms resulting in combining both is that the session
which used the temporary schema is not able to shut down completely,
waiting for being able to drop the temporary schema, something that it
cannot complete because of the two-phase transaction involved with
temporary objects.  In this case the client is able to disconnect but
the session remains alive on the backend-side, potentially blocking
connection backend slots from being used.  Other problems reported could
also involve server crashes.

This is back-patched down to v10, which is where 9b013dc has introduced
MyXactFlags, something that this patch relies on.

Reported-by: Alexey Bashtanov
Author: Michael Paquier
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/5d910e2e-0db8-ec06-dd5f-baec420513c3@imap.cc
Backpatch-through: 10

In PostgreSQL, temporary objects are as

[...]
Category: postgresql

Keith Fiske: Managing Transaction ID Exhaustion (Wraparound) in PostgreSQL

2019-01-31 (Thu) 21:14:00

One of the most critical topics to understand when administering a PostgreSQL database is the concept of transaction IDs (TXID) and the fact that they can be exhausted if not monitored properly. However, this blog post isn't going to go into the details of what TXID exhaustion actually is. The Routine Vacuuming section of the documentation is probably one of the most important to read and understand, so I will refer you there. What this blog post is going to cover is an easy way to monitor for it and what can be done to prevent it from ever being a problem.
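As a starting point for that kind of monitoring, here is a hedged sketch (a commonly used query, not necessarily the one the post goes on to use) showing how far each database has advanced toward the ~2 billion wraparound limit:

SELECT datname,
       age(datfrozenxid) AS oldest_xid_age,
       round(age(datfrozenxid) / 2000000000.0 * 100, 2) AS pct_towards_wraparound
FROM pg_database
ORDER BY oldest_xid_age DESC;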

Category: postgresql

Bruce Momjian: Limiting Superuser Activity

2019-01-31 (Thu) 02:15:01

This interesting email thread explores the question of how much you can prevent or detect unauthorized database superuser activity. The main conclusions from the thread are:

  • It is impossible to restrict database administrator access without hindering their ability to perform their jobs
  • Monitoring superuser activity is the most reasonable way to detect and hopefully discourage unauthorized activity
  • Monitoring includes:
    • Assign a separate account to each administrator for auditing purposes; do not use generic/shared accounts
    • Use an auditing tool to record database activity, e.g., pgAudit
    • Use syslog to send database logs to a computer not under database administrators' control
    • Record all shell command activity in a similar way

There is also a helpful summary email.

Category: postgresql

Liaqat Andrabi: Webinar: Banking on Postgres – Financial Application Considerations [Follow up]

2019-01-31 (Thu) 00:34:37

The demand for PostgreSQL within the financial industry has been rapidly increasing in recent years, mainly due to reduced licensing costs, a better choice of open source tools, and the robust enterprise features that PostgreSQL provides.

2ndQuadrant hosted the “Banking on Postgres” webinar to discuss attributes of financial databases based on Postgres, configuration processes, hardware needs, availability, backups, and more.

The webinar was presented by Shaun Thomas, Principal Consultant at 2ndQuadrant. Those who weren’t able to attend the live event can now view the recording here.

For any questions or comments regarding Postgres-BDR, please send an email to info@2ndQuadrant.com.

Category: postgresql

Pavel Stehule: plpgsql_check - new report for code coverage ratio calculation

2019-01-30 (Wed) 18:11:00
A few months ago I integrated a profiler into plpgsql_check.

The result of profiling is line oriented:

postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬───────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source                                                        │
╞════════╪══════════╪═══════════════════════════════════════════════════════════════╡
│      1 │          │                                                               │
│      2 │          │ declare result int = 0;                                       │
│      3 │    0.075 │ begin                                                         │
│      4 │    0.202 │ for i in 1..$1 loop                                           │
│      5 │    0.005 │ select result + i into result; select result + i into result; │
│      6 │          │ end loop;                                                     │
│      7 │        0 │ return result;                                                │
│      8 │          │ end;                                                          │
└────────┴──────────┴───────────────────────────────────────────────────────────────┘
(9 rows)

This format is quite readable, but it is not practical for calculating code coverage metrics. So this week I wrote a new function that produces a report based on statements:

CREATE OR REPLACE FUNCTION public.fx1(a integer)
RETURNS integer
LANGUAGE plpgsql
1 AS $function$
2 begin
3 if a > 10 then
4 raise notice 'ahoj';
5 return -1;
6 else
7 raise notice 'nazdar';
8 return 1;
9 end if;
10 end;
11 $function$

postgres=# select stmtid, parent_stmtid, parent_note, lineno, exec_stmts, stmtname
from plpgsql_profiler_function_statements_tb('fx1');
┌────────┬───────────────┬─────────────┬────────┬────────────┬─────────────────┐
│ stmtid │ parent_stmtid │ parent_note │ lineno │ exec[...]
Category: postgresql

Dave Cramer: A Guide to Building an Active-Active PostgreSQL Cluster

2019-01-29 (Tue) 23:39:33

One of the toughest challenges facing database engineers today is ensuring their data is always accessible so they can meet the high-availability  requirements for their applications.

While this problem may seem trivial in the world of applications where one can have many instances behind geographically distributed load balancers, in the database world where there is only one version of the data globally, the problem faces many obstacles.

PostgreSQL replication has advanced considerably in recent major releases, including continuous improvements to streaming replication and the addition of logical replication in PostgreSQL 10. While these capabilities represent important enhancements to PostgreSQL, enabling users to address a wide variety of clustered architectures, they do not (easily) address the use cases where the application requires access to an updatable database in two or more geographic regions - often referred to as an "active-active" cluster.

More specifically, an active-active cluster is one where the application can write to any instance in the cluster and the data will be written to all of the instances in the cluster, enabling each instance in the cluster to be used to:


  • Provide near zero downtime as the new instance is already in a read/write state; there is no need to reconfigure it.
  • Provide near zero downtime upgrades from one version to another
  • Improve latency for users in geographically distributed clusters. By providing an instance physically closer to the user, latency is reduced.

While there are a number of proprietary solutions that attempt to address active-active PostgreSQL requirements, this post and a series to follow provides users with potential reference architectures and configurations that enable active-active PostgreSQL configurations using entirely open source software.

This post is of course only one approach to deploying an active-active PostgreSQL cluster.  There are other ways to deploy an active-active setup with PostgreSQL.  I will cover some ways to do this in the f

[...]
Category: postgresql

Kaarel Moppel: Major feature update for the pgwatch2 Postgres monitoring tool

2019-01-29 (Tue) 18:13:23

Again I could make good (hopefully) use of the “slow time” around the turn of the year…and managed to push out another set of features for our Open Source PostgreSQL monitoring tool called pgwatch2 – so a quick overview on changes in this post. Continuing the tradition I’m calling it “Feature Pack 4” as it’s mostly about new features. Git and Docker images carry version number 1.5.0. As our last pgwatch2-related blogpost covered only 1.4.0, I’ll include here also most important stuff from 1.4.5 minor feature release.

Highlight – Monitoring Postgres with Postgres

This is the biggest one this time – finally, and quite appropriately for “Postgres-minded” people, there’s now a chance to store all the gathered metrics in Postgres! This of course doesn’t necessarily mean that Postgres is best for storing time-series data, although it performs very nicely thanks to JSONB…but in general it’s a good compromise – more disk space (~3-4x) at comparable query times to InfluxDB…but with the full power of SQL! Meaning some saved time learning a new (and quite limited) query language. And after all, only a few people are running dozens and dozens of databases, so performance is mostly not an issue. And on the plus side we can now ask questions that were previously plainly not possible (no joins, remember) or were only possible by storing some extra columns of data (de-normalizing).

The new functionality is designed for the latest Postgres version, 11, but as people run all kinds of different versions and might not want to set up a new cluster, there is also a legacy mode, though it will cost more IO. In total there are 4 different “schema modes” so that people can optimize their IO based on needs:

  • a separate normal table for each distinct metric (legacy mode)
  • a separate partitioned table for each distinct metric + weekly partitions
  • a separate partitioned table for each distinct metric + separate sub-table for each distinct monitored host + monthly partitions. Best for monitoring 50+ DB-s
  • custom mode – all data inserted into a si
[...]
Category: postgresql

Bruce Momjian: Postgres Encryption Maze

2019-01-28 (Mon) 23:30:01

This wide-ranging email thread covers many of the challenges of adding encryption to Postgres. There is discussion of:

  • The need to understand the threats you are protecting against, "For anyone to offer a proper solution, you need to say what purpose your encryption will serve."
  • The need for layers of protection
  • The questionable usefulness of storage encryption, "Thus, unless you move your DB server on a regular basis, I can't see the usefulness of whole database encryption (WDE) on a static machine."
  • The value of encrypting network storage, "Having the 'disk' associated with a specific server encrypted can provide some level of protection from another machine which also has access to the underlying infrastructure from being able to access that data."
  • Credit Card industry requirements, "Non-Public Information (NPI) data should not be logged nor stored on a physical device in non-encrypted mode."
  • The limits of per-column encryption, "It is extremely unlikely you just want all the data in the database encrypted." (These five emails from another thread, 1, 2, 3, 4, 5, also discuss this topic.)
  • The many other database products that support built-in column-level encryption

As you can see, the discussion was all over the map. The Postgres project probably needs to do a better job communicating about these options and their benefits.

Category: postgresql

Sebastian Insausti: PostgreSQL High Availability with Master-Slave & Master-Master Architectures

2019-01-28 (Mon) 16:23:15

Below is an excerpt from our whitepaper “PostgreSQL Management and Automation with ClusterControl” which can be downloaded for free.

Database servers can work together to allow a second server to take over quickly if the primary server fails (high availability), or to allow several computers to serve the same data (load balancing).

For HA configuration we can have several architectures, but the basic ones would be master-slave and master-master architectures.

PostgreSQL Master-Slave Architectures

These architectures enable us to maintain a master database with one or more standby servers ready to take over operations if the primary server fails. These standby databases will remain synchronized (or almost synchronized) with the master.

The replication between the master and the slaves can be made via SQL statements (logical standbys) or via the internal data structure modifications (physical standbys). PostgreSQL uses a stream of write-ahead log (WAL) records to keep the standby databases synchronized. If the main server fails, the standby contains almost all of the data of the main server, and can be quickly made the new master database server. This can be synchronous or asynchronous and can only be done for the entire database server.

Setting up streaming replication is a task that requires some steps to be followed thoroughly. For those steps and some more background on this subject, please see: Become a PostgreSQL DBA - How to Setup Streaming Replication for High Availability.

From version 10, PostgreSQL includes the option to setup logical replication.
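As a minimal hedged sketch of that version 10 feature (the publication, subscription, table names, and connection string below are hypothetical):

-- on the publishing server:
CREATE PUBLICATION my_pub FOR TABLE customers, orders;
-- on the subscribing server:
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher.example.com dbname=app user=repl'
    PUBLICATION my_pub;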

Logical replication allows a database server to send a stream of data modifications to another server. PostgreSQL logical replication constructs a stream of logical data modifications from the WAL. Logical replication allows the data changes from individual tables to be replicated. It doesn’t require a particular server to be designated as a ma

[...]
Category: postgresql

Adrien Nayrat: pg_sampletolog: An extension to log a sample of statements

2019-01-28 (Mon) 15:00:00
This article will introduce you to an extension that I developed in order to log a sample of statements. When a DBA is faced with a performance problem, he will inspect the logs, but also the pg_stat_statements view. An expensive query will appear in pg_stat_statements, and in the logs if the query exceeds log_min_duration_statement. Thus, the DBA can replay the query and obtain its execution plan to investigate. To go even further, it is possible to enable the auto_explain extension.
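For context, a hedged sketch of the standard knobs the post mentions, set per session for experimentation (the one-second thresholds are arbitrary examples):

SET log_min_duration_statement = '1s';     -- log any statement slower than one second
LOAD 'auto_explain';                       -- load the extension for this session
SET auto_explain.log_min_duration = '1s';  -- also log the execution plan of slow statements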
Category: postgresql

Dan Langille: Fixing a sequence value

2019-01-28 (Mon) 03:58:59
Yesterday I copied data from the old production server to the new production server. One thing I missed, but did think about at the time, was updating the sequence used by the table in question. Looking at the table definition: The report_log_id_seq value will be wrong. When the reports run, they will use values for [...]
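A hedged sketch of the usual fix (report_log_id_seq comes from the post; the table and column names here are guesses for illustration):

SELECT setval('report_log_id_seq', (SELECT max(id) FROM report_log));
-- the next nextval() call will then return max(id) + 1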
Category: postgresql

Dan Langille: Moving the email notifications to the new server

2019-01-27 (Sun) 05:43:54
Ever since the new server went into production, sometime in 2017, the notification emails, which tell you what has changed, have been coming from the old server. I never did that changeover. The old server was still processing incoming commits, so the notifications were accurate. However, new accounts would not get notifications and changes to [...]
Category: postgresql
