Jobin Augustine: Installing and Configuring JIT in PostgreSQL 11

planet postgresql - 2018-11-20 (Tue) 10:49:56

Just-in-time (JIT) compilation of SQL statements is one of the highlighted features in PostgreSQL 11. There is great excitement in the community because of the many claims of up to a 30% jump in performance. Not all queries and workloads benefit from JIT compilation, so you may want to test your workload against this new feature.

However, it is important to have a general understanding of what it does and where we can expect performance gains. Installing PostgreSQL 11 with the new JIT compilation feature requires a few extra steps and packages. The time and effort needed to figure this out shouldn't be a reason to shy away from trying this cutting-edge feature and testing a workload against it. This blog post is for those who want to try it.

What is JIT and what does it do in PostgreSQL?

Normal SQL execution in any DBMS software is similar to what an interpreted language does with source code: no machine code gets generated from your SQL statement. But we all know how dramatic the performance gains can be from JIT compilation and execution of the machine code it generates. We saw the magic the Google V8 engine did for the JavaScript language. The quest to do something similar with SQL statements has been around for quite some time. But it is a challenging task.

It is challenging because we don't have the source code (the SQL statement) ready within the PostgreSQL server. The source code that needs to undergo JIT compilation has to come from client connections, and there can be expressions/functions with varying numbers of arguments, dealing with tables with different numbers and types of columns.

Generally, a computer program won't get modified at this level while it is running, so branch predictions are possible. The unpredictable and dynamic nature of SQL statements coming from client connections and hitting the database from time to time leaves no scope for prediction or compilation in advance. That means the JIT compiler should kick in every time the d
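Once a JIT-enabled build is installed, you can see whether JIT kicks in for a given query from psql. This is a hedged sketch, assuming a PostgreSQL 11 server built with LLVM support; the table name and cost settings are illustrative, not from the original post:

```sql
-- Illustrative table just to have something worth scanning
CREATE TABLE jit_demo AS SELECT generate_series(1, 1000000) AS x;

SET jit = on;            -- JIT is off by default in PostgreSQL 11
SET jit_above_cost = 0;  -- force JIT even for cheap queries, for testing

-- With JIT active, the plan output gains a "JIT:" section showing how many
-- functions were generated and the time spent compiling them.
EXPLAIN (ANALYZE) SELECT sum(x) FROM jit_demo;
```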

Category: postgresql

Magnus Hagander: PGConf.EU 2018 - the biggest one yet!

planet postgresql - 2018-11-20 (Tue) 05:01:46

It's now almost a month since PGConf.EU 2018 in Lisbon. PGConf.EU 2018 was the biggest PGConf.EU ever, and as far as I know the biggest PostgreSQL community conference in the world! So it's time to share some of the statistics and feedback.

I'll start with some attendee statistics:

  • 451 registered attendees
  • 2 no-shows
  • 449 actually present attendees

Of these 451 registrations, 47 were sponsor tickets, some of which were used by the sponsors themselves, and some were given away to their customers and partners. Another 4 sponsor tickets went unused.

Another 52 were speakers.

This year we had more cancellations than usual, but thanks to the conference waitlist we managed to refill all those spaces before the event started.

Category: postgresql

Hans-Juergen Schoenig: PostgreSQL: Implicit vs. explicit joins

planet postgresql - 2018-11-19 (Mon) 18:00:36

If you happen to be an SQL developer, you will know that joins are really at the core of the language. Joins come in various flavors: Inner joins, left joins, full joins, natural joins, self joins, semi-joins, lateral joins, and so on. However, one of the most important distinctions is the difference between implicit and explicit joins. Over the years, flame wars have been fought over this issue. Still, not many people know what is really going on. Therefore my post might help to shed some light on the situation.


Using implicit joins

Before we dig into practical examples, it is necessary to create some tables that we can later use to perform our joins:

test=# CREATE TABLE a (id int, aid int);
CREATE TABLE
test=# CREATE TABLE b (id int, bid int);
CREATE TABLE

In the next step some rows are added to those tables:

test=# INSERT INTO a VALUES (1, 1), (2, 2), (3, 3);
INSERT 0 3
test=# INSERT INTO b VALUES (2, 2), (3, 3), (4, 4);
INSERT 0 3

An implicit join is the simplest way to join data. The following example shows an implicit join:

test=# SELECT * FROM a, b WHERE aid = bid;
 id | aid | id | bid
----+-----+----+-----
  2 |   2 |  2 |   2
  3 |   3 |  3 |   3
(2 rows)

In this case, all tables are listed in the FROM clause and are later connected in the WHERE clause. In my experience, an implicit join is the most common way to connect two tables. However, my observation might be heavily biased, because an implicit join is the way I tend to write things in my daily work.

Using explicit joins

The following example shows an explicit join. Some people prefer the explicit join syntax over implicit joins because of readability or for whatever other reason:

test=# SELECT * FROM a JOIN b ON (aid = bid);
 id | aid | id | bid
----+-----+----+-----
  2 |   2 |  2 |   2
  3 |   3 |  3 |   3
(2 rows)

In this case tables are connected directly using an ON-clause. The ON-clause simply contains the conditions we want to use to join those tables together.

Explicit joins

Category: postgresql

Adrien Nayrat: PostgreSQL and heap-only-tuples updates - part 2

planet postgresql - 2018-11-19 (Mon) 16:00:00
Here is a series of articles that will focus on a new feature in version 11. During the development of this version, a feature caught my attention. It can be found in the release notes: Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged (Konstantin Knizhnik). I admit that this is not very explicit, and this feature requires some knowledge about how Postgres works, which I will try to explain through several articles:
Category: postgresql

Regina Obe: PostGIS 2.5.1

planet postgresql - 2018-11-18 (Sun) 09:00:00

The PostGIS development team is pleased to provide bug fix 2.5.1 for the 2.5 stable branch.

Although this release will work for PostgreSQL 9.4 through PostgreSQL 11, to take full advantage of what PostGIS 2.5 offers, you should be running PostgreSQL 11 and GEOS 3.7.0.

Best served with PostgreSQL 11.1 and pgRouting 2.6.1.

WARNING: If compiling with PostgreSQL+JIT, LLVM >= 6 is required

Supported versions for this release are: PostgreSQL 9.4 - PostgreSQL 11, GEOS >= 3.5


Category: postgresql

Pavel Stehule: new update pspg

planet postgresql - 2018-11-17 (Sat) 20:28:00
I released a new update of pspg. It is a bugfix release plus a new HiColor theme.
Category: postgresql

Andreas Scherbaum: Google Code-In 2018 - Halftime

planet postgresql - 2018-11-17 (Sat) 06:05:00

Andreas 'ads' Scherbaum

The PostgreSQL Project participates in Google Code-In (GCI) 2018. This is a program which allows pre-university students to pick up tasks defined by the partnering open source projects, learn about these projects, and also win prizes (certificates, t-shirts, hoodies, and even a trip to Google HQ).


Every project creates a number of different tasks: some technical, some design-based, some about updating documentation or validating bugs - whatever is useful in order to get to know the project better. Students can select tasks and submit their work. Mentors from the project then evaluate the work, and either approve it or send it back to the student because more work is needed.


Now that we are halfway into this year's competition, it's time to run the numbers.


Category: postgresql

Denish Patel: Postgres 11 partitioning

planet postgresql - 2018-11-17 (Sat) 01:14:21

Postgres has supported a table partitioning implementation based on inheritance and triggers for more than a decade now. However, declarative partitioning support was added in the Postgres 10 release in October 2017. Since Postgres 10, built-in declarative partitioning has made it easier to create partitions, but you still needed to manage triggers to handle updates of records on the parent table. Additionally, you couldn't add primary keys and foreign keys on partitioned tables. The recent release of Postgres 11 solves all of these problems.

Postgres 11 adds a lot more partitioning features, making partitioned tables easier to manage than ever! Below is a comparison of partitioning features across Postgres releases:

Feature                                                          | Postgres 11 | Postgres 10 | 9.6
-----------------------------------------------------------------+-------------+-------------+----
Declarative table partitioning                                   | Yes         | Yes         | No
Default partition (stores data that does not match the           |             |             |
  partition key of any other partition)                          | Yes         | No          | No
Partitioning by a HASH key                                       | Yes         | Yes         | No
Support for PRIMARY KEY, FOREIGN KEY, indexes, and triggers      |             |             |
  on partitioned tables                                          | Yes         | No          | No
UPDATE on a partition key (when a partition key is updated on    |             |             |
  a row, the row is moved to the appropriate partition)          | Yes         | No          | No
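As a hedged sketch of the default-partition and partition-key-UPDATE features listed above (table names are illustrative, not from the original post):

```sql
CREATE TABLE measurements (city_id int, reading int)
    PARTITION BY LIST (city_id);
CREATE TABLE measurements_c1 PARTITION OF measurements FOR VALUES IN (1);
-- New in Postgres 11: a DEFAULT partition catches everything else
CREATE TABLE measurements_other PARTITION OF measurements DEFAULT;

-- city_id 42 has no dedicated partition, so the row lands in the default one
INSERT INTO measurements VALUES (42, 100);

-- Also new in 11: updating the partition key moves the row to the
-- matching partition (here, from measurements_other to measurements_c1)
UPDATE measurements SET city_id = 1 WHERE city_id = 42;
```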

Postgres 11 supports RANGE, LIST, and HASH partition types. You can also create sub-partitions on child tables!

Let’s take an example to partition the table using RANGE and LIST partition types.

RANGE Partitioning:

-- create parent table to store SMS campaign subscribers
app=# CREATE TABLE sms_campaign_subscribers (id bigint not null, sms_campaign_id bigint not null) PARTITION BY RANGE (sms_campaign_id);
CREATE TABLE
-- create child table to store the campaign with sms_campaign_id >= 111 and < 112
app=# CREATE TABLE sms_campaign_subscriber_111 PARTITION OF sms_campaign_subscribers FOR VALUES FROM (111) TO (112);
CREATE TABLE
-- Describe parent table
app=# \d+ sms_campaign_subscribers
                 Table "public.sms_campaign_subscribers"
     Column      | Type | Collation | Nullable | Default | Storage | Stats target | Description
-----------------+[...]
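LIST partitioning, including sub-partitioning of a child table by HASH, can be sketched in the same way. This example is illustrative and not from the original post:

```sql
CREATE TABLE sms_events (id bigint NOT NULL, status text NOT NULL)
    PARTITION BY LIST (status);

CREATE TABLE sms_events_sent PARTITION OF sms_events
    FOR VALUES IN ('sent', 'delivered');

-- A child table can itself be partitioned (sub-partitioning), here by HASH
CREATE TABLE sms_events_failed PARTITION OF sms_events
    FOR VALUES IN ('failed')
    PARTITION BY HASH (id);

CREATE TABLE sms_events_failed_0 PARTITION OF sms_events_failed
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE sms_events_failed_1 PARTITION OF sms_events_failed
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);
```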
Category: postgresql

Douglas Hunley: pgBackRest 2.07 and macOS Mojave

planet postgresql - 2018-11-16 (Fri) 20:12:31

pgBackRest 2.07 was announced today. As usual, I immediately downloaded it and tried to get it up and running on my MacBook (currently running Mojave). It wasn’t as straightforward as one might hope, and the online instructions assume a Linux system, so I figured I’d write this up for anyone else attempting the same.

Since this is OSX, we have to do some work to make things right before we even start with the pgBackRest code. First up, get a real OpenSSL install. We’ll use Homebrew for this:

> brew install openssl
[output snipped]
> openssl version -a
LibreSSL 2.6.4
built on: date not available
platform: information not available
options:  bn(64,64) rc4(ptr,int) des(idx,cisc,16,int) blowfish(idx)
compiler: information not available
OPENSSLDIR: "/private/etc/ssl"
> /usr/local/opt/openssl/bin/openssl version -a
OpenSSL 1.0.2p  14 Aug 2018
built on: reproducible build, date unspecified
platform: darwin64-x86_64-cc
options:  bn(64,64) rc4(ptr,int) des(idx,cisc,16,int) idea(int) blowfish(idx)
compiler: clang -I. -I.. -I../include -fPIC -fno-common -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -arch x86_64 -O3 -DL_ENDIAN -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
OPENSSLDIR: "/usr/local/etc/openssl"

As you can see, the default SSL from OSX is in /usr/bin while the newly installed OpenSSL is in /usr/local/opt/openssl. In my testing, this is enough to proceed with pgBackRest, but I prefer to have the openssl binary match the libs, and since I'm a glutton for punishment, I replace the OSX binary with the Homebrew one:

> sudo mv /usr/bin/openssl /usr/bin/openssl.old
> sudo ln -s /usr/local/opt/openssl/bin/openssl /usr/bin
> ls -ld /usr/bin/openssl*
lrwxr-xr-x  1 root  wheel    34 Nov 16 11:39 /usr/bin/openssl -> /usr/local/opt/openssl/bin/openssl*
-rwxr-xr-x  1 root  wheel  1.2M Sep 21 00:16 /usr/bin/openssl.old*

OK, so now we h

Category: postgresql

Achilleas Mantzios: Five Cool Things I Learned at the PostgreSQL Conference Europe 2018

planet postgresql - 2018-11-16 (Fri) 19:00:25

I spent a week in the magnificent city of Lisbon attending the annual European PostgreSQL Conference. This marked the 10th anniversary of the first European PostgreSQL conference and my sixth time attending.

First Impressions

The city was great, the atmosphere was great and it seemed that it would be a very productive and informative week full of interesting conversations with intelligent and friendly people. So basically the very first cool thing I learned in Lisbon is how great Lisbon and Portugal are, but I guess you came here for the rest of the story!

Shared Buffers

We attended the training session “PostgreSQL DBA toolbelt for day-to-day ops” by Kaarel Moppel (Cybertec). One thing I noted was the setting of shared_buffers. Since shared_buffers effectively competes with or complements the system's cache, it shouldn't be set to any value between 25% and 75% of the total RAM available. So while, in general, the recommended setting for typical workloads is 25% of RAM, it could be set to >= 75% for special cases, but not in between.
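Applied to a hypothetical 16GB server, the 25% rule of thumb from the session would look like this (values are illustrative; shared_buffers only changes after a restart):

```sql
ALTER SYSTEM SET shared_buffers = '4GB';  -- 25% of 16GB total RAM
-- shared_buffers requires a server restart to take effect,
-- e.g. pg_ctl restart -D $PGDATA
SHOW shared_buffers;                      -- verify after restarting
```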

Other things we learned in this session:

  • unfortunately, easy online (or offline) activation/enablement of data checksums is not yet in 11 (initdb/logical replication remains the only option)
  • beware of vm.overcommit_memory; you had better disable memory overcommit by setting it to 2, and set vm.overcommit_ratio to about 80.
Advanced Logical Replication

In the talk by Petr Jelinek (2ndQuadrant), one of the original authors of logical replication, we learned about more advanced uses of this exciting new technology:

  • Centralized Data Collection: we may have multiple publishers and then a central system with a subscriber to each of those publishers, making data from various sources available in a central system. (typical use: OLAP)
  • Shared global data, or in other words a central system for maintaining global data and parameters (such as currencies, stocks, market/commodity values, weather, etc.), which publishes to one or more subscribers. The data is then maintained in only one system but available in all subscribers.
  • Log
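The centralized data collection pattern described above maps directly onto the built-in publication/subscription commands. Database names, table names, and connection strings here are illustrative:

```sql
-- On each source (publisher) node:
CREATE PUBLICATION sales_pub FOR TABLE sales;

-- On the central OLAP (subscriber) node, one subscription per source:
CREATE SUBSCRIPTION sales_sub_node1
    CONNECTION 'host=node1 dbname=shop user=repl'
    PUBLICATION sales_pub;
```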
Category: postgresql

Joshua Drake: PostgresConf Silicon Valley 2019 dates and 2018 numbers!

planet postgresql - 2018-11-15 (Thu) 04:45:00
Announcing PostgresConf Silicon Valley 2019, September 18th - 20th at the Hilton San Jose! A perfect pairing of training, breakout sessions, and a fantastic weekend break to enjoy the valley for every speaker, attendee, volunteer, and organizer.
Didn't you just host the community at PostgresConf Silicon Valley, you ask? Why yes, we did! That event was October 15th and 16th of 2018. The event was such an unexpected success that we immediately started working with the hotel to lock in our dates for 2019. We requested mid-October to early November. Unfortunately, the only week they had available was the week of September 15th, 2019. We are again working with the Silicon Valley Postgres Meetup, the fastest growing Postgres meetup in the United States.
As we continue to be the fastest growing, non-profit, inclusive, and volunteer organized event we are providing you the breakdown of the Silicon Valley 2018 financials:

PostgresConf Silicon Valley is much more cost-effective than the "big" conference in Manhattan, and that is exactly what we want for a development or "local" conference. We are targeting 50% growth for 2019, and we want to do so in a way that is inviting to new community members and won't overwhelm them. We succeeded with that in 2018, and we are going to continue the mission of People, Postgres, Data!

Category: postgresql

Robert Treat: The Ghost of phpPgAdmin

planet postgresql - 2018-11-13 (Tue) 11:12:00

TL;DR: This evening I put the final blotches onto a new release of phpPgAdmin, 5.6. This release adds official support for all recent Postgres versions, fixes a number of smaller bugs, and includes several language updates. While I think upstream packagers need not worry about absorbing this release, I've made downloads generally available from the GitHub project page, or you can just pull from git to get the latest code. Note this release is designed to run on PHP 5.6.

Now for the backstory...

After much hoopla a few years back about new admin clients and talk of the pgAdmin rewrite, most of the regular contributors had pretty much moved on from the project, hoping to see a clearly better admin tool surface as a replacement. Instead, I saw multiple projects launch, none of which captured hearts and minds so to speak, and saw the number of pull requests on an ever more abandoned-looking project continue to pile up, not to mention thousands of downloads.

As for me, while not doing much publicly, privately I was still maintaining two private copies of the code: one with support for newer Postgres servers, and one with support for PHP 7, both in rough shape. While my schedule doesn't leave much time for random hacking, about a month ago I saw an upcoming block where I would be conferencing three weeks in a row and suspected I could probably find some time during my travels to do some updates. After a little bit of thought, I decided to do two releases. The first would add support up through Postgres 11, the most recently released version of the server software, and the second would add the aforementioned PHP 7 support. Granted, it's taken longer than I had hoped, probably mostly because that's how software engineering works, but also because I had to literally relearn how we were running this project. I think I've got most of that worked out now.

I suspect the two releases might annoy some people, given that PHP 5.6 is years old and in many people's minds EOL. But it turns out that a lot of

Category: postgresql

Vasilis Ventirozos: Zero downtime upgrade PostgreSQL 10 to 11.

planet postgresql - 2018-11-13 (Tue) 07:17:00
PostgreSQL 11 has been officially released and it's packed with exciting changes. Native declarative partitioning is more robust, parallelism now allows more operations to work in parallel, transactions are now supported in stored procedures, and just-in-time compilation of expressions is just one of the new features. With every major version, Postgres gets better, providing more and more tools to developers and a better quality of life to DBAs, making it harder to resist an upgrade. That said, while new major versions come with all these exciting improvements, they also come with internal system changes, making database clusters incompatible across major versions. This means that upgrades up to Postgres 10, where logical replication was introduced, were impossible without downtime or 3rd-party replication tools.
Story time. Before we get into how to upgrade using logical replication, let's see how upgrading and replication evolved over the years. pg_dump and pg_restore is probably the most traditional way to do an upgrade. This method requires all database writes to be suspended, making it impractical for any reasonably sized production database. pg_upgrade was introduced in Postgres 8.4, and it's still the most common way to do upgrades. It works under the assumption that the internal storage format rarely changes, allowing it to create new system catalog tables and simply reuse the old data files. This means that upgrades are safe and fast. It still requires the database to be down, and by default it will copy the datafiles to a new data directory. This can take a significant amount of time, but it can easily be bypassed by using the hard link option provided by pg_upgrade itself. Hard links are only valid within the same filesystem, and with that in mind, this method not only massively reduces downtime, but also eliminates the need for a second copy of the database cluster. On rare occasions, for example an upgrade changing storage options, like floating point to int64 date/times, pg_upgrade won't[...]
Category: postgresql

Andrew Dunstan: PostgreSQL Buildfarm Client Release 9

planet postgresql - 2018-11-13 (Tue) 04:30:02

Announcing Release 9 of the PostgreSQL Buildfarm client.

Along with numerous fixes of minor bugs and a couple of not so minor bugs, this release has the following features:

  • new command line parameter --run-parallel which runs
    all branches in parallel, possibly across animals as well
  • new config setting max_load_avg inhibits a run if the load average
    is higher than the setting
  • new config option archive_reports saves the given number of generations
    of the report sent to the server
  • new command line parameter --show-error-log which outputs the error
    log if any on stdout
  • automatically rerun 3 hours after a git failure, useful on back
    branches where commits can be infrequent
  • automatically convert old URLs to
  • better logic to detect when temp installs are unnecessary
  • better valgrind processing
  • new module to check core perl code for style and syntax
  • allow upstream repos to be rebased
  • add animal name and branch to verbose traces, useful in parallel runs
  • remove old if $branch eq 'global' processing in config file,
    replaced with a simple global stanza; the legacy use is still supported

If you want to run in parallel and you are just running a single animal, changing --run-all to --run-parallel in the command line should be all you need to do. Parallel runs are not run all at once. By default they are launched every 60 seconds. You can also limit the maximum number of parallel runs. The default is 10. I will be adding some notes to the Buildfarm Howto on how to use this feature.

The max_load_avg setting only works on Unix and requires the installation of the non-standard Perl module Unix::Uptime. If this value is set to a non-zero value and the module is not present, the script will die. The setting is compared to the load average over the last minute and the last 5 minutes. If either is higher, the run is cancelled.

The release can be downloaded from or

Category: postgresql

Joshua Otwell: Care To Know Clauses: All About SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, and LIMIT

planet postgresql - 2018-11-12 (Mon) 22:47:01

SQL is a language of databases and PostgreSQL is our chosen one. Oftentimes, storing data is but one facet of the process. Typically, in any data-centered endeavor, you will: view and read data, take action or implement changes on the data, garner decision-making information (analytics), or manipulate the stored data in some form or fashion.

SQL is composed of a combination of keywords, commands, and clauses. SQL seems simple. Just a few 'easy' commands here and there. No big deal, right?

But, there is more to SQL than meets the eye. SQL can trip you up on those 'easy' queries.

One challenge (that I must routinely revisit) is understanding that SQL execution order is definitely different from that of its syntax.

In this blog post, I visit, at a high level, the major SQL clauses as they apply to PostgreSQL. There are many dialects of SQL, but PostgreSQL's interpretation is the focus here. (Some characteristics of each clause may well apply to other SQL dialects.)

SQL clauses form the foundation for basic, often-used commands and queries. That being said, advanced queries and examples utilizing Window Functions, CTEs, Derived Tables, etc., will not be covered in this post.

As we will see, not all clauses are created equal. Yet, they do operate in tandem, providing query results seamlessly (or not).

Allow me to clarify...

I will periodically make mention of an execution order throughout the blog post as it applies to many of the clauses. But, this is generalized.

To my understanding, more often than not, the optimizer chooses and decides the best query plan for execution.

SELECT - The 'picky' Clause Used to Query the Database

SELECT is one busy clause. It is everywhere. Used more than all the other clauses. Certain clauses you may not need at all. Not so much the case with SELECT, for it is a mandatory clause.

The SELECT clause is typically used for querying the database, containing (at a basic level):

  1. A SELECT list - The columns of data you want.
  2. the source data set(s) - named in the FROM clause. Tables, Vie
Category: postgresql

Adrien Nayrat: PostgreSQL and heap-only-tuples updates - part 1

planet postgresql - 2018-11-12 (Mon) 16:00:00
Here is a series of articles that will focus on a new feature in version 11. During the development of this version, a feature caught my attention. It can be found in the release notes: Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged (Konstantin Knizhnik). I admit that this is not very explicit, and this feature requires some knowledge about how Postgres works, which I will try to explain through several articles:
Category: postgresql

Dimitri Fontaine: Preventing SQL Injections

planet postgresql - 2018-11-10 (Sat) 23:40:01

An SQL injection is a security breach, one made famous by the Exploits of a Mom xkcd comic, in which we read about little Bobby Tables:

PostgreSQL implements a protocol level facility to send the static SQL query text separately from its dynamic arguments. An SQL injection happens when the database server is mistakenly led to consider a dynamic argument of a query as part of the query text. Sending those parts as separate entities over the protocol means that SQL injection is no longer possible.
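Server-side prepared statements make the same separation visible from psql: the query text is prepared once, and Bobby's payload arrives strictly as a parameter value. The table here is illustrative:

```sql
CREATE TABLE students (name text);

PREPARE add_student(text) AS INSERT INTO students VALUES ($1);

-- The classic payload is stored as plain data, never parsed as SQL:
EXECUTE add_student('Robert''); DROP TABLE students;--');

SELECT * FROM students;  -- the table survives, holding one odd-looking row
```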

Category: postgresql

Jan Karremans: Why document databases are old news…

planet postgresql - 2018-11-09 (Fri) 21:57:55

We’re going to store data the way it’s stored naturally in the brain.

This is a phrase heard more often today. This blog post is inspired by a short rant that Babak Tourani (@2ndhalf_oracle) and I had on Twitter today.

The phrase is used by companies like MongoDB or graph database vendors to explain why they choose to store information/data in an unstructured format. It is new, it is cool, hip and happening. All the new compute power and storage techniques enable doing this. How cool is that!!

Well, it is... for the specific use-cases that can benefit from such techniques. Think of analytical challenges, where individual bits of information basically have no meaning. If you are analyzing a big bunch of captured data coming from a single source like a machine, a click-stream, or social media, for instance, one single record basically has no meaning. If that is the case, and it is really not very interesting whether you retain all individual bits of information, because you are interested in "the bigger picture", these solutions can really help you!

How cool is it, actually? When it comes to the other situations where you want to store and process information, where you do care about the individual records (I mean, who wants to repopulate their shopping cart on a web-shop 3 times before all the items stick in the cart), there are some historical things that you should be aware of. Back in the day when computers were invented, all information on computers was stored "the way it's stored naturally in the brain": all we had were documents to store information. This new, cool, hip and happening tech is, if anything, not new at all... Sure, things changed over the last 30 years, and with all the new compute power and storage techniques, the frayed ends of data processing have significantly improved. This makes the execution of data analysis, as described above, actually so much better!! Really, we can do things to data, using these co[...]
Category: postgresql

Magnus Hagander: PGConf.EU 2019 - Dates and location!

planet postgresql - 2018-11-09 (Fri) 18:31:37

It's been over 10 years since PostgreSQL Europe got started in Prato, just outside Florence, and it's time to return to our roots! PostgreSQL Conference Europe 2019 will be held in Milan, Italy, at the Milan Marriott Hotel, on October 15-18, 2019.

More details will be shared as things progress and we are not yet ready to open for sponsorship, call for papers or registrations, but it's time to mark your calendars and block out the week!

Follow us on Twitter at @pgconfeu for notifications of when news is posted, check our website, or subscribe to our RSS feed for the latest news!

We had only one correct guess in our "guess the location" contest at the closing session of this year's conference. This attendee will be contacted personally with information on how to claim their free ticket for next year.

Category: postgresql

11.1, 10.6, 9.6.11, 9.5.15, 9.4.20, 9.3.25 released (2018-11-08)

news - 2018-11-09 (Fri) 11:46:58
Posted by harukat, 2018/11/09 (Fri) 11:46
Category: postgresql