planet postgresql

Subscribe to the Planet PostgreSQL feed
Planet PostgreSQL
Updated: 2 hours 28 minutes ago

Joshua Drake: PostgresConf 2019: CFP and Early Bird closing soon

2019-01-10(木) 03:04:00
PostgresConf hopes everyone had a great holiday season, and we wish everyone a fantastic 2019.

With January upon us, we start conference season. This year, PostgresConf 2019 will be in NYC at the Sheraton New York Times Square March 18-22, 2019.

If you have a story about lessons learned with Postgres, a cool feature that you've found a creative way of using, or just some practical knowledge you'd like to share with your fellow Postgres users, submit a talk. Remember, there are many people who are new to Postgres, so introduction-level talks are welcome. The CFP ends this Friday, so get your talks submitted soon at:

Also, the Early Bird window for PostgresConf 2019 will be ending next Friday, January 18. These tickets are substantially discounted if you purchase them early. On top of the Early Bird discount, members of the NYC Postgres User Group can use a promo code for an additional 15% off: just use the code 2019_PUG_NYC when buying your tickets. Lastly, if you are already in the NYC area, we look forward to seeing the community at the Winter Party on the 10th! We hope to see you at PostgresConf 2019 and remember:

Category: postgresql

Tomas Vondra: Sequential UUID Generators on SSD

2019-01-09(水) 22:00:01

After I shared the sequential UUID benchmarks a couple of weeks ago, one of the points raised in the feedback was the choice of storage. I’d intentionally used a fairly weak storage system (RAID10 on three 7.2k SATA drives) because I wanted to demonstrate the benefits. But a couple of readers suggested that using SSDs might significantly reduce the difference between regular and sequential UUIDs, due to SSDs handling random I/O much better than rotational storage. My hypothesis was that while using SSDs may reduce the gap, it certainly won’t eliminate it entirely, because the amplification (both in terms of the number of I/O requests and WAL volume) is independent of the storage system. But the only way to verify this is to repeat the tests, this time on SSDs. So here we go …

I’m not going to repeat the explanation of how sequential UUIDs work – you can find that in the previous post. I’ll just show results on SSDs, with a short discussion.

The tests were done on two different systems I always use for benchmarking. Both systems use SSDs, but of quite different types / generations. This adds a bit more variability, which seems like a good thing in this sort of test anyway.

i5-2500K with 6 x S3700 SSD

This first system is fairly small (and somewhat old-ish), with a relatively old Intel i5-2500K CPU with just 4 cores, 8GB of RAM and 6 Intel S3700 SSDs in RAID0.

The following chart illustrates the throughput for various test scales:

The degradation is clearly not as bad as with rotational storage – on the largest scale it drops “only” to ~50% (instead of ~20%). And for the medium scale the drop is even smaller.

But how is that possible? The initial hypothesis was that the I/O amplification (the increase in the number of random I/O requests) is the same independently of the storage system – if each INSERT means you need to do 3 random I/O writes, that does not depend on the storage system type. Of course, SSDs do not behave simply as a “faster” rotational device – that’s an oversimplification. For example, most SSDs are interna

Category: postgresql

Michael Paquier: Postgres 12 highlight - pg_promote

2019-01-09(水) 14:31:42

The following commit has been merged into Postgres 12 a couple of months ago, easing failover control flow:

commit: 10074651e3355e2405015f6253602be8344bc829
author: Michael Paquier <>
date: Thu, 25 Oct 2018 09:46:00 +0900

Add pg_promote function

This function is able to promote a standby with this new SQL-callable
function. Execution access can be granted to non-superusers so that
failover tools can observe the principle of least privilege.

Catalog version is bumped.

Author: Laurenz Albe
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion:

Promotion is a process which can be used on a standby server to end recovery and allow it to begin read-write operations, bumping this standby server to a new timeline on the way. This operation can be done using a couple of options:

  • pg_ctl promote, which by default waits for the standby to finish the promotion before exiting.
  • Define promote_trigger_file in postgresql.conf and create that file; it is detected by the startup process, which then finishes recovery (or trigger_file in recovery.conf up to v11, recovery parameters having been merged into postgresql.conf in v12 and newer versions).

The commit mentioned above offers a third way to trigger a promotion, with a SQL-callable function. This presents a huge advantage compared to the two other methods: there is no need to connect to the standby's physical host to trigger the promotion, as everything can be done from a backend session. Note however that this requires a standby server able to accept read-only connections and operations.

By default pg_promote() waits for the promotion to complete before returning its result to the caller, waiting for a maximum of 60 seconds, which is the same default as the promote mode of pg_ctl. However, it is possible to enforce both the wait mode and the timeout value by specifying the wait mode as a boolean for the first argument, and the timeout as an integer in seconds for the second.
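As a sketch of what this looks like in practice (run on the standby itself; the role name below is made up for illustration):

```sql
-- Promote this standby, waiting at most 60 seconds for completion
-- (both arguments shown are also the defaults):
SELECT pg_promote(true, 60);

-- Grant execution to a non-superuser failover tool, per the commit's intent
-- ('failover_tool' is a hypothetical role name):
GRANT EXECUTE ON FUNCTION pg_promote TO failover_tool;
```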

Category: postgresql

pgCMH - Columbus, OH: Monitoring your DB with VividCortex

2019-01-09(水) 14:00:00

The Jan meeting will be held at 18:00 EST on Tues, the 22nd. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea of the amount of food needed.


CoverMyMeds’ very own CJ will be presenting this month. He’s going to tell us all about how CoverMyMeds uses VividCortex to assist with monitoring PostgreSQL and maintaining the high performance of the database.


CoverMyMeds has graciously agreed to validate your parking if you use their garage so please park there:

You can safely ignore any sign saying to not park in the garage as long as it’s after 17:30 when you arrive.

Park in any space that is not marked ‘24 hour reserved’.

Once parked, take the elevator/stairs to the 3rd floor to reach the Miranova lobby. Once in the lobby, the elevator bank is in the back (West side) of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. (If the elevator won’t let you pick the 11th floor, contact Doug or CJ (info below)). Once you exit the elevator, look to your left and right; one side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space:

The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079 or CJ at +1.740.407.7043

Category: postgresql

Richard Yen: Zombies!! Dealing with a Case of Stuck TransactionIDs

2019-01-09(水) 06:45:00
The Story

Postgres is up and running, and things are humming along. Replication is working, vacuums are running, and there are no idle transactions in sight. You poke around your logs and make sure things are clean, but you notice a little warning:

2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT HINT: Close open transactions soon to avoid wraparound problems.

Strange. There are no open transactions. You look in pg_stat_activity once, twice, three times–nothing. What’s up with that? How long has this been happening? You go back to your logs and throw the whole thing into grep and find:

$ cat db1.log | grep "WARNING"
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:38 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:55 GMT WARNING: oldest xmin is far in the past
2019-01-03 07:51:55 GMT WARNING: oldest xmin is far in the past
...
$ cat db1.log | grep "HINT"
2019-01-03 07:51:38 GMT HINT: Close open transactions soon to avoid wraparound problems.
2019-01-03 07:51:38 GMT HINT: Close open transactions soon to avoid wraparound problems.
2019-01-03 07:51:38 GMT HINT: Close open transactions soon to avoid wraparound problems.
2019-01-03 07:51:38 GMT HINT: Close open transactions soon to avoid wraparound problems.
...

Uh oh. What’s going on? And the pressure is on–nobody wants to be stuck dealing with wraparound problems. The adrenaline rush hits, and you’re frantically looking for the silver bullet to kill off this problem. Kill. Maybe restart the database? No, you can’t do that–it’s a production server. You’ll get chewed out! You look everywh
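The excerpt cuts off here, but for a stuck xmin with no open transactions the usual suspects are forgotten prepared transactions and stale replication slots. A couple of diagnostic queries against the standard catalog views (not taken from the post itself) that help track down the holder:

```sql
-- Two-phase-commit transactions that were prepared but never committed or rolled back:
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;

-- Replication slots can also pin an old xmin:
SELECT slot_name, active, xmin, catalog_xmin
FROM pg_replication_slots;
```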

Category: postgresql

Liaqat Andrabi: Webinar: pglogical and Postgres-BDR Update [Follow Up]

2019-01-09(水) 03:30:52

Since the release of v3, Postgres-BDR has evolved into the go-to clustering technology built specially for businesses that require geographically distributed databases with multiple masters.

To get an update on Postgres-BDR’s development, new features, and future roadmap, 2ndQuadrant held the pglogical and Postgres-BDR Update webinar as part of its PostgreSQL webinar series.

The webinar was presented by Simon Riggs, Founder & CEO of 2ndQuadrant, who is also a major contributor to the open source PostgreSQL project. Those who weren’t able to attend the live event can now view the recording here.

For any questions or comments regarding Postgres-BDR, please send an email to

Category: postgresql

Sebastian Insausti: Deploying and Managing PostgreSQL 11: New in ClusterControl 1.7.1

2019-01-08(火) 21:16:44

A few days ago, a new version of ClusterControl, 1.7.1, was released. It includes several new features, one of the main ones being support for PostgreSQL 11.

To install PostgreSQL 11 manually, we must first add the repositories or download the necessary packages for the installation, install them and configure them correctly, depending on our infrastructure. All these steps take time, so let’s see how we could avoid this.

In this blog, we will see how to deploy this new PostgreSQL version with a few clicks using ClusterControl, and how to manage it. As a prerequisite, please install the 1.7.1 version of ClusterControl on a dedicated host or VM.

Deploy PostgreSQL 11

To perform a new installation from ClusterControl, simply select the “Deploy” option and follow the instructions that appear. Note that if you already have a PostgreSQL 11 instance running, you need to select ‘Import Existing Server/Database’ instead.

ClusterControl Deploy Option

When selecting PostgreSQL, we must specify User, Key or Password and port to connect by SSH to our PostgreSQL hosts. We also need the name for our new cluster and if we want ClusterControl to install the corresponding software and configurations for us.

ClusterControl Deploy Information 1

Please check the ClusterControl user requirement for this task here.

ClusterControl Deploy Information 2

After setting up the SSH access information, we must define the database user, version and datadir (optional). We can also specify which repository to use. In this case, we want to deploy PostgreSQL 11, so just select it and continue.

In the next step, we need to add our servers to the cluster we are going to create.

ClusterControl Deploy Information 3

When adding our servers, we can enter IP or hostname.

In the last step, we can choose if our replication will be Synchronous or Asynchronous.

ClusterControl Deploy Information 4

We can monitor the status of the creation of our new cluster from the ClusterControl activity monitor.

ClusterControl Activity[...]
Category: postgresql

Gregory Stark: Monitoring Postgres with Prometheus

2019-01-07(月) 23:05:00
I'm glad people found my presentation at Lisbon on monitoring Postgres using Prometheus last October interesting. The slides are now uploaded to the conference web site. Sorry for the delay.

Now it's a new year, and it's time to work on improving monitoring in Postgres.

As an aside, I experimented a bit with the build process for this slide deck. It's a Gitlab project, and I set up a Gitlab CI pipeline to run LaTeX to build the beamer presentation and serve it from Gitlab Pages. So you can see the most recent version of the slide deck at any time, and it's automatically rebuilt each time I do a git push. You can also see the source code, and feel free to submit issues if you spot any errors in the slides or even just suggestions on things that were unclear.

But now the real question. I want to improve the monitoring situation in Postgres. I have all kinds of grand plans, but would be interested to hear people's feelings about what the top priority and most practical changes they want to see in Postgres are.

Personally I think the most important first step is to implement native Prometheus support in Postgres -- probably a background worker that would start up and expose all of pg_stats directly from shared memory to Prometheus without having to start an SQL session with all its transaction overhead. That would make things more efficient but also more reliable during outages. It would also make it possible to export data for all databases instead of having the agent reconnect for each database!

I have future thoughts about distributed tracing, structured logging, and pg_stats changes to support application profiling, but they are subjects for further blog posts. I have started organizing my ideas as issues; feel free to comment on them or create new issues [...]
Category: postgresql

Douglas Hunley: pgBackRest 2.08 and macOS Mojave

2019-01-04(金) 22:30:44

The team has released pgBackRest 2.08 today. As part of a continuing effort, more bits have been moved from Perl to C. Sadly, this adds a new wrinkle for those of us on macOS: when compiling, you now get:

gcc -I. -I../libc -std=c99 -D_POSIX_C_SOURCE=200112L -O2 -Wfatal-errors -Wall -Wextra -Wwrite-strings -Wswitch-enum -Wconversion -Wformat=2 -Wformat-nonliteral -Wno-clobbered -Wno-missing-field-initializers -Wstrict-prototypes -Wpointer-arith -Wvla `xml2-config --cflags` `perl -MExtUtils::Embed -e ccopts` -DWITH_PERL -DNDEBUG -c common/lock.c -o common/lock.o
warning: unknown warning option '-Wno-clobbered'; did you mean '-Wno-consumed'? [-Wunknown-warning-option]
common/lock.c:76:21: warning: implicit declaration of function 'flock' is invalid in C99 [-Wimplicit-function-declaration]
    if (flock(result, LOCK_EX | LOCK_NB) == -1)
        ^
common/lock.c:76:21: warning: this function declaration is not a prototype [-Wstrict-prototypes]
common/lock.c:76:35: fatal error: use of undeclared identifier 'LOCK_EX'
    if (flock(result, LOCK_EX | LOCK_NB) == -1)
                      ^
3 warnings and 1 error generated.
make: *** [common/lock.o] Error 1

To fix this, you will need to edit src/Makefile and change line 12 from:

CSTD = -std=c99 -D_POSIX_C_SOURCE=200112L



Then, you can follow the other steps on my previous post and everything should compile and function properly.


Category: postgresql

damien clochard: 8 Anonymization Strategies with PostgreSQL

2019-01-04(金) 08:17:36

Data anonymization is a complex topic, but PostgreSQL has a lot of interesting features to tackle this challenge! Here is an overview of different approaches and how to implement them directly within a PostgreSQL database.

Over the last few months I have been working on a project called PostgreSQL Anonymizer. It led me to try various techniques to remove personal data for different purposes: development, CI, functional testing, Analytics, etc.

See my previous article “Introducing PostgreSQL Anonymizer” for more details about that project…

So far, I have found 8 different ways to anonymize a dataset. Here’s a quick tour with practical examples. All the queries in the article will use a simplified table (see below) and should work with any current version of PostgreSQL (from 9.4 to 11).

CREATE TABLE people (
  id SERIAL,
  name TEXT NOT NULL,
  address TEXT,
  age INTEGER,
  salary INTEGER,
  phone TEXT
);

0. Sampling

Sampling is not Anonymization! But when you need to remove personal data from a database, most of the time you don’t need to publish all the rows. The anonymization process will be faster and you will limit the risk of unintended disclosure.

So before going any further it is important to note that PostgreSQL provides a feature called TABLESAMPLE that will reduce the size of your dataset.

For example, if you want to work only on 20% of a table:
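The sample query itself would be along these lines (a sketch; TABLESAMPLE is available from PostgreSQL 9.5 onward):

```sql
-- Keep roughly 20% of the rows, sampling row by row:
SELECT * FROM people TABLESAMPLE BERNOULLI (20);
```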


And if you want to extract a subset among several tables while maintaining referential integrity, pg_sample is your friend!

1. Suppression

This is the most obvious way to get rid of personal identifiers: just wipe a column by replacing all values with NULL (aka “Nullification”) or with a constant value (“Static Substitution”).

Example:

UPDATE people SET name = '<CONFIDENTIAL>';
UPDATE people SET address = NULL;

This is simple and effective. For useless or highly sensitive data fields, it may be the best option.

But of course it will break integrity constraints (PRIMARY, UNIQUE, NOT NULL, etc.). An

Category: postgresql

Ibrar Ahmed: Upcoming Webinar Friday 1/4: High-Performance PostgreSQL, Tuning and Optimization Guide

2019-01-04(金) 01:42:06

Please join Percona’s Senior Software Engineer, Ibrar Ahmed, as he presents his High-Performance PostgreSQL, Tuning and Optimization Guide on Friday, January 4th, at 8:00 AM PDT (UTC-7) / 11:00 AM EDT (UTC-4).

Register Now

PostgreSQL is one of the leading open-source databases. Out of the box, the default PostgreSQL configuration is not tuned for any particular workload; thus, even a system with minimal resources can run it. By the same token, PostgreSQL does not give optimum performance on high-performance machines, because it is not using all the available resources. PostgreSQL provides a system where you can tune your database according to your workload and your machine’s specifications. In addition to PostgreSQL itself, we can also tune our Linux box so that the database load can work optimally.

In this webinar on High-Performance PostgreSQL, Tuning and Optimization, we will learn how to tune PostgreSQL and we’ll see the results of that tuning. We will also touch on tuning some Linux kernel parameters.
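To give a flavor of what such tuning involves (illustrative values only, not recommendations from the webinar), here are a few of the most commonly adjusted postgresql.conf parameters:

```
# Illustrative starting points for a dedicated server with 16GB of RAM
shared_buffers = 4GB          # a common rule of thumb is ~25% of RAM
effective_cache_size = 12GB   # what the planner assumes the OS will cache
work_mem = 32MB               # per sort/hash operation, per backend
maintenance_work_mem = 512MB  # used by VACUUM and index builds
```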


Category: postgresql

Liaqat Andrabi: PostgreSQL 9.3 EOL – Why is it Important to Upgrade?

2019-01-03(木) 21:39:11

After the final release of patch 9.3.25 on November 8th, 2018, PostgreSQL 9.3 is no longer supported. Therefore it’s time for all users of PG 9.3 to upgrade their databases to a newer supported version. The benefits of having a supported version are many, and that’s what Craig Ringer talks about in the Q&A session below:

Why is it important to upgrade your PostgreSQL database to the latest version?

Craig: Always update to the latest minor version. 2ndQuadrant’s 24/7 support services often help customers who could’ve avoided experiencing a production outage or fault simply by updating every minor version or two. The PostgreSQL community (including 2ndQuadrant) releases minor point releases conservatively, and for good reasons. Keep up to date on those patches.

Note: A “minor” PostgreSQL release, i.e. a maintenance release, is an increment in the last part of the version number, whether it is a two number version like 10.x or a three-number version like 9.6.x. See “version policy” on the PostgreSQL website.

For major release version updates keeping up is not that important. It’s often best to stay on the latest minor release of your current major version and skip a major PostgreSQL release or two before updating. Update if the new major releases will solve problems you are facing, your current version is approaching the community end-of-life date, or you’re planning system changes like OS upgrades or server migrations anyway.

You’ll miss out on performance improvements, data integrity protection enhancements and improvements in monitoring and diagnostics offered by new major versions. But any change has a risk, and a well-planned upgrade requires testing and preparation, which has costs. So seeking a middle ground is usually wise.

Note that it’s usually harder to upgrade from a very old PostgreSQL to the latest, and it may need more planning, so it’s wise to start planning well before the End of Life (EoL) date. That way you maintain continuous coverage for security updates and any serious bugs that may be discovered

Category: postgresql

Pavel Golub: Keyword mess

2019-01-03(木) 17:59:48

Once I received a mail with a question:
Can you tell me why I can’t select the column `references`?

=# \d v_table_relation
                       View "public.v_table_relation"
   Column   |                Type                 | Collation | Nullable | Default
------------+-------------------------------------+-----------+----------+---------
 schema     | information_schema.sql_identifier   |           |          |
 table      | information_schema.sql_identifier   |           |          |
 columns    | information_schema.sql_identifier[] |           |          |
 references | jsonb[]                             |           |          |

=# select * from v_table_relation;
  schema   |   table    |     columns     |                                              references
-----------+------------+-----------------+-------------------------------------------------------------------------------------------------------
 public    | a          | {b,c}           | {}
 public    | a2         | {b,c}           | {"{\"toTable\": \"a\", \"toSchema\": \"public\", \"toColumns\": [\"b\", \"c\"], \"fromColumns\": [\"b\", \"c\"]}"}
 workspace | t_employee | {id,name,state} | {}
(3 rows)

=# select references from v_table_relation;
ERROR: syntax error at or near "references"
LINE 1: select references from v_table_relation;

Well, the quick answer is: because REFERENCES is a keyword, you should use double quotes around it, e.g.

=# select "references" from v_table_relation;

But why do you sometimes need to explicitly quote identifiers, while at other times it works without them? And how do you know which keywords are used in PostgreSQL? And which keywords may be used as object names? And so on…

As usual one may find the answers in the PostgreSQL manual:

There we have a detailed table that lists all tokens that are key words in the SQL standard and in PostgreSQL. From the manual you will learn about reserved and non-reserved tokens, SQL standard compatibility, and much more that is out of the scope of this post.

There are two more ways to learn which tokens can or cannot be used in certain situations. The first is for true programmers, and the second is for true admins. Choose your pill, Neo.

Investigating the sources

PostgreSQL uses an LALR(1)* parser to work with SQL statements. The implementation of the grammar (the parser itself) may be found in gram.y and impleme
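For the "true admins" route, PostgreSQL also exposes the keyword list at runtime through the built-in pg_get_keywords() function; a quick check for our problem column:

```sql
-- catcode 'R' means fully reserved; such words must be quoted as identifiers:
SELECT word, catcode, catdesc
FROM pg_get_keywords()
WHERE word = 'references';
```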

Category: postgresql

Josh Williams: Switching PostgreSQL WAL-based Backup Options

2019-01-03(木) 09:00:00

Photo by Paul Hudson · CC BY 2.0, modified

I was woken up this morning. It happens every morning, true, but not usually by a phone call requesting help with a PostgreSQL database server that was running out of disk space.

It turns out that one of the scripts we’re in the process of retiring, but still had in place, got stuck in a loop and filled most of the available space with partial, incomplete base backups. So, since I’m awake, I might as well talk about Postgres backup options. I don’t mean for it to be a gripe session, but I’m tired and it kind of is.

For this particular app, since it resides partially on AWS we looked specifically at options that are able to work natively with S3. We’ve currently settled on pgBackRest. There’s a bunch of options out there, which doesn’t make the choice easy. But I suppose that’s the nature of things these days.

At first we’d tried out pghoard. It looks pretty good on the tin, especially with its ability to connect to multiple cloud storage services beyond S3: Azure, Google, Swift, etc. Having options is always nice. And for the most part it works well, apart from a couple of idiosyncrasies.

We had the most trouble with the encryption feature. It didn’t have any problem on the encryption side. But for some reason on the restore the process would hang and eventually fail out without unpacking any data. Having a backup solution is a pretty important thing, but it doesn’t mean anything unless we can get the data back from it. So this was a bit of a sticking point. We probably could have figured out how to get it functioning, and at least been a good citizen and reported it upstream to get it resolved in the source. But we kind of just needed it working, and giving something else a shot is a quicker path to that goal. Sorry, pghoard devs.

The other idiosyncratic behavior that is probably worth mentioning is that it does its own scheduling. The base backups, for instance, happen at a fixed hour interval set in the configuration file, starting from when the service is first s

Category: postgresql

Craig Kerstiens: Fun with SQL: Self joins

2019-01-03(木) 00:46:00

Various families have various traditions in the US around Christmas time. Some will play games like white elephant, where you get a mix of decent gifts as well as gag gifts… you then draw numbers and get to pick from existing presents that have been opened (“stealing” from someone else) or open an unopened one. The game is entertaining both in trying to get something you want, and in sticking Aunt Jennifer with the stuffed poop emoji with a Santa hat on it.

Other traditions are a bit simpler. One that my partner’s family follows is drawing names for one person you buy a gift for. This is nice because you can put a bit of effort into that one person without being too overwhelmed tracking down things for multiple people. Each year we draw names for the next year. And by now you’re probably thinking: what does any of this have to do with SQL? Well, normally when we draw names we write them on a piece of paper, someone takes a picture, then that gets texted around to other family members. At least for me, every October I’m scrolling back through text messages to try to recall who it is I’m supposed to buy for. This year I took a little time to put everyone’s name in a SQL database and write a simple query for easier recall.

A simple but workable schema

I could very much over-engineer this and record things like who has whom in which year to see how many repeats happen, etc. But I really just want an easy place to save the data and not forget. For that reason I’m simplifying the schema as much as possible:

CREATE TABLE giftnames (
  id serial,
  name text,
  assignment int
);

From here I populated family members names first:

INSERT INTO giftnames (name) VALUES ('Craig Kerstiens');
INSERT INTO giftnames (name) VALUES ('Aunt Jennifer');
INSERT INTO giftnames (name) VALUES ('Uncle Joe');

Then as we draw names of who each person gets I’m going to go through and update those values within the table as well:

UPDATE giftnames set assignment = 2 where id = 1;
UPDATE giftnames set assignment = 3 where id = 2;
UPDATE gift[...]
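With the assignments stored, recalling who you buy for is exactly a self join: the table is joined to itself, matching each person's assignment to the id of the person they drew. A sketch consistent with the schema above:

```sql
-- Each row pairs a giver with the recipient they drew:
SELECT giver.name AS giver, recipient.name AS recipient
FROM giftnames AS giver
JOIN giftnames AS recipient ON recipient.id = giver.assignment;
```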
Category: postgresql

Bruce Momjian: Fourteen Authentication Methods

2019-01-02(水) 23:45:01

Postgres supports fourteen authentication methods — that might seem like a lot, but Postgres is used in many environments, and it has to support whatever methods are being used in those environments. The fourteen methods can seem confusing, but they are easier to understand in categories:

  • absolute (trust, reject): always allow or reject
  • password (SCRAM-SHA-256, MD5, password): compare a user-supplied password with something stored in the database
  • external password (LDAP, pam, RADIUS, BSD): compare to a password stored outside the database
  • trusted network (peer, ident): rely on the network connection to authenticate
  • trusted tokens (GSS, SSPI): use possession of a token generated by a trusted key distribution server
  • certificate authority (cert): uses access to the private key of a certificate signed by a trusted certificate authority

So, there is one absolute and five conditional classes of authentication.
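In pg_hba.conf these method names appear in the last column; an illustrative (not prescriptive) mix drawn from the categories above might look like:

```
# TYPE    DATABASE  USER      ADDRESS         METHOD
local     all       postgres                  peer           # trusted network
host      all       all       127.0.0.1/32    scram-sha-256  # password
host      all       all       10.0.0.0/8      reject         # absolute
hostssl   all       all       0.0.0.0/0       cert           # certificate authority
```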

Category: postgresql

Nikolay Shaplov: Using vim for postgres development

2019-01-02(水) 01:07:32
I've written a wiki article about configuring vim for postgres development.

I actually started to restore my vim configuration (I lost it some time ago) and found that this information is scattered across the web. It is better to keep it in one place.

I've described in the article all the options and plugins that I used for pg development.

If you have your own vim experience and know some tools that can make development and following the codestyle easier, please add them there.

PS. I've already written about it to the mailing list. Just want to write about it in the blog too...

Category: postgresql

Viorel Tabara: An Overview of Just-in-Time Compilation (JIT) for PostgreSQL

2019-01-01(火) 19:58:00

Historically, PostgreSQL has provided compilation features in the form of ahead-of-time compilation for PL/pgSQL functions, and version 10 introduced expression compilation. None of those generate machine code, though.

JIT for SQL was discussed many years ago, and for PostgreSQL the feature is the result of a substantial code change.

To check if the PostgreSQL binary was built with LLVM support, use the pg_config command to display the compile flags and look for --with-llvm in the output. Example for the PGDG RPM distribution:

omiday ~ $ /usr/pgsql-11/bin/pg_config --configure
'--enable-rpath' '--prefix=/usr/pgsql-11' '--includedir=/usr/pgsql-11/include' '--mandir=/usr/pgsql-11/share/man' '--datadir=/usr/pgsql-11/share' '--enable-tap-tests' '--with-icu' '--with-llvm' '--with-perl' '--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-gssapi' '--with-includes=/usr/include' '--with-libraries=/usr/lib64' '--enable-nls' '--enable-dtrace' '--with-uuid=e2fs' '--with-libxml' '--with-libxslt' '--with-ldap' '--with-selinux' '--with-systemd' '--with-system-tzdata=/usr/share/zoneinfo' '--sysconfdir=/etc/sysconfig/pgsql' '--docdir=/usr/pgsql-11/doc' '--htmldir=/usr/pgsql-11/doc/html' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'

Why LLVM JIT?

It all started about two years ago, as explained in Andres Freund’s post, when expression evaluation and tuple deforming proved to be the roadblocks in speeding up large queries. After adding the JIT implementation, “expression evaluation itself is more than ten times faster than before,” in Andres’ words. Further, the Q&A section ending his post explains the choice of LLVM over other implementations.
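To see the JIT at work on a PostgreSQL 11 build with LLVM support, a quick sketch (the cost threshold is lowered only to force JIT on a toy query):

```sql
SET jit = on;
SET jit_above_cost = 0;  -- default is 100000; 0 forces JIT for demonstration
EXPLAIN (ANALYZE) SELECT sum(i) FROM generate_series(1, 1000000) AS t(i);
-- The plan output ends with a "JIT:" section listing functions compiled and timings.
```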

Category: postgresql

REGINA OBE: Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 PostGIS 2.5 on Yum

2019-01-01(火) 15:48:00

In a previous article, Using pg_upgrade to upgrade PostGIS without installing older version, I demonstrated a trick for upgrading to a newer PostgreSQL instance from PostGIS 2.2 - 2.whatever without having to install the older version. Unfortunately that trick does not work if coming from PostGIS 2.1, because in PostGIS 2.2 we renamed a C lib function that backed SQL functions in 2.1.

Fear not. There is still a way to upgrade from 2.1 to 2.5 without installing an older version of PostGIS in your new PostgreSQL instance. To do so, you need to add one step: remove the functions in 2.1 that are backed by this renamed lib function. In upcoming PostGIS 3.0, we've added this function back and made it throw an error, so that even coming from PostGIS 2.1 you can upgrade just the same as you do from later versions.

Continue reading "Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum"
Category: postgresql

Avinash Kumar: Great things that happened with PostgreSQL in the Year 2018

2019-01-01(火) 07:28:26

In this blog post, we’ll look back at what’s been going on in the world of PostgreSQL in 2018.

Before we start talking about the good things that happened in PostgreSQL in 2018, we hope you had a wonderful year, and we wish you a happy and prosperous 2019.

PostgreSQL has been a choice for those who are looking for a completely community-driven open source database that is feature-rich and extensible. We have seen tremendously great things happening in PostgreSQL for many years, with 2018 being a prime example. As you can see from the following snippet of the DB-Engines rankings, PostgreSQL topped the chart for growth in popularity in the year 2018 compared to other databases.

PostgreSQL adoption growth has been increasing year over year, and 2018 has again been one such year as we can see.

Let’s start with a recap of some of the great PostgreSQL events, and look at what we should take away from 2018 in the PostgreSQL space.

PostgreSQL 11 Released

PostgreSQL 11 was a release that incorporated a lot of features offered in commercial database software governed by an enterprise license. For example, there are times when you are required to enforce the handling of embedded transactions inside a stored procedure in your application code. There are also times when you wish to partition a table with foreign keys or use hash partitioning. This used to require workarounds. The release of PostgreSQL 11 covers these scenarios.

There were many other add-ons as well, such as Just-In-Time compilation, improved query parallelism, partition elimination, etc. You can find out more in our blog post here, or the PostgreSQL 11 release notes (if you have not seen already). Special thanks to everyone involved in such a vibrant PostgreSQL release.

End of Life for PostgreSQL 9.3

9.3.25 was the last minor release for PostgreSQL 9.3 (on November 8, 2018). There will be no more minor releases supported by the community for 9.3. If you are still using PostgreSQL 9.3 (or an earlier major release than 9.3), it is the

Category: postgresql