movead li: Parallel Foreign Scan of PostgreSQL

planet postgresql - 2019-08-22(Thu) 16:30:14

The PostgreSQL community is working on adding built-in sharding to Postgres, and it seems that the community is following the FDW-based sharding approach. Parallel Foreign Scan is a very important part of this approach, as it will provide the ability to execute multiple foreign scans in parallel, greatly improving OLAP use cases.

In this blog I want to touch on how parallel foreign scan may be implemented in Postgres. First, I will discuss how parallel scan works in Postgres today. In the second part of the blog, I will introduce Andres's linearized executor design, which gives Postgres the ability to jump among nodes; this is the infrastructure required in order to add parallel foreign scan. In the end, I will share my thoughts on parallel foreign scan and on the current parallel scan mechanism.

1. Parallel Scan in PostgreSQL

PostgreSQL has had Parallel Query since 9.6; it includes Parallel Scan, Parallel Join, and Parallel Aggregation, and support for parallelism has been improved over multiple major releases since.

1.1 What is Parallel Scan

Let's talk about Parallel Query first. According to the documentation, 'PostgreSQL can devise query plans which can leverage multiple CPUs in order to answer queries faster'.

The underlying idea is that the session asks for several worker processes so that multiple CPUs can scan data of the same relation. This can speed up the scan, but there may be some efficiency loss because of parallel process setup and tuple transfer between processes.

1.2 When Does Parallel Scan Work?

Parallel Scan will be used when the relevant GUCs are enabled and the planner decides a parallel scan is worthwhile compared to a sequential scan, index scan (btree), or bitmap heap scan.

The GUC parameters for Parallel Scan are:

  • max_parallel_workers_per_gather must be bigger than 0.
max_parallel_workers_per_gather is a GUC parameter[...]
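As a hedged illustration of the above (the table name big_table is assumed, not from the post), you can check whether the planner chooses a parallel scan with EXPLAIN:

```sql
-- Sketch only: big_table is a placeholder name.
SET max_parallel_workers_per_gather = 4;

-- If the planner considers parallelism worthwhile, the plan shows a
-- Gather node on top of a Parallel Seq Scan.
EXPLAIN SELECT count(*) FROM big_table;
```

The decision is also influenced by cost parameters such as parallel_setup_cost and parallel_tuple_cost, which model the overhead of worker startup and tuple transfer mentioned above.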
Category: postgresql

movead li: An Overview of Logical Replication in PostgreSQL

planet postgresql - 2019-08-22(Thu) 16:30:07

Logical Replication appeared in Postgres 10, and it came along with a number of keywords like 'logical decoding', 'pglogical', 'wal2json', 'BDR', etc. These words puzzled me so much that I decided to start the blog by explaining these terms and describing the relationships between them. I will also mention a new idea for a tool called 'logical2sql', which is designed for synchronizing data from Postgres to databases of different kinds and architectures.

Difference Between Physical And Logical Replication

Replication (or physical replication) is the more traditional data synchronization approach, and it plays an important role in data synchronization for Postgres. It copies data in the WAL record using exact block addresses and byte-by-byte replication, so master and slave will be almost identical, and the 'ctid' (which identifies the location of a data row) is the same. This blog will only briefly discuss physical replication; I have mentioned it here just to differentiate physical and logical replication for the readers.

Physical Replication Schematics

Logical Replication supports table-level data synchronization; in principle it is very different from physical replication. The detailed analysis below will help in understanding this:

Logical Replication: when a WAL record is produced, the logical decoding module parses the record into a 'ReorderBufferChange' struct (you can think of it as HeapTupleData). The pgoutput plugin then filters and reorganizes the ReorderBufferChanges and sends them to the subscription.

The subscription will receive the 'HeapTupleData' and replay the insert, delete, or update on the received data. Notice that physical replication copies data from the WAL record to data pages, while logical replication executes the insert, update, or delete again on the subscription side.
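To make this concrete, here is a minimal sketch of the built-in logical replication workflow (the table name and connection string are placeholders, not from the original post):

```sql
-- On the publisher (t1 is an assumed table):
CREATE PUBLICATION my_pub FOR TABLE t1;

-- On the subscriber (connection string is a placeholder):
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher dbname=mydb'
    PUBLICATION my_pub;
```

Behind the scenes, the subscription's walsender on the publisher uses the pgoutput plugin to decode WAL into the row changes described above.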

Logical Replication Schematics

Concepts for Understanding Logical Replication

A picture to explain everything about logical replication

Logical Decoding is the key

Category: postgresql

pgCMH - Columbus, OH: Wraparound is coming

planet postgresql - 2019-08-22(Thu) 13:00:00

The August meeting will be held at 18:00 EST on Tues, the 27th. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea of the amount of food needed.


Seal Software's very own Arlette will be presenting this month. She's going to tell us all about transaction wraparound: what it is, why it happens, why you should care about it, and how to prevent it. This talk will touch on MVCC, VACUUM FREEZE, and tuning autovacuum.


CoverMyMeds has graciously agreed to validate your parking if you use their garage so please park there:

You can safely ignore any sign saying to not park in the garage as long as it’s after 17:30 when you arrive.

Park in any space that is not marked ‘24 hour reserved’.

Once parked, take the elevator/stairs to the 3rd floor to reach the Miranova lobby. Once in the lobby, the elevator bank is in the back (West side) of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. (If the elevator won’t let you pick the 11th floor, contact Doug or CJ (info below)). Once you exit the elevator, look to your left and right; one side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space:

The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079 or CJ at +1.740.407.7043

Category: postgresql

PHP 7.4.0beta4 released!

planet PHP - 2019-08-22(Thu) 09:00:00
The PHP team is glad to announce the third and last beta release of PHP 7.4: PHP 7.4.0beta4. This continues the PHP 7.4 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 7.4.0beta4 please visit the download page. Please carefully test this version and report any issues found in the bug reporting system. Please DO NOT use this version in production, it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release would be RC 1, planned for September 5th. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Category: php

507 Insufficient Storage

planet PHP - 2019-08-21(Wed) 00:00:00

507 Insufficient Storage is a status code that's introduced by the WebDAV specification. It allows an HTTP server to tell a client that, for example, their PUT or POST operation couldn't succeed, maybe because it's too large to fit on a disk.

Even though it was written for WebDAV, it can also be used outside of WebDAV servers.


HTTP/1.1 507 Insufficient Storage
Content-Type: text/plain

Wow, that's a big file. Can you store it somewhere else? We're pretty cramped here.

Many WebDAV clients handle this status pretty well, and will inform the user that a disk is full.

Should I use this?

Even though I think this status is applicable outside of WebDAV, I think the reality is that the usefulness of this status is somewhat limited.

Because 507 is specifically a server-side (5xx) error, it indicates that the problem lies with the server: the disk being full was presumably not by design or intentional.

When using a status like this for REST APIs, I feel it's more likely that you'll want to return an error when a user runs out of space, intentionally sending back a message telling them 'YOU ran out of quota' instead of 'OUR disk is full'.

If the intent is to inform the user that they exceeded their quota, it’s better to use the more appropriate 413 Payload Too Large.

Category: php

Pavel Stehule: show csv files by pspg pager

planet postgresql - 2019-08-20(Tue) 13:15:00
For a long time I have been thinking about supporting the CSV format inside pspg. Still, I have other tasks and not enough quiet time for this job. But I found a nice application, miller, that can format CSV into MySQL's table format, and that format is already supported by pspg.

mlr --icsv --opprint --barred put '' obce.csv | pspg --force-uniborder
Category: postgresql

Domain Registrars

planet PHP - 2019-08-20(Tue) 06:07:00

A few years ago, I wrote about domain registrars. Realizing how often people still reference that post, and how old it is, I decided to ask what people are using these days.

There was a lot less variety in the responses than last time I asked. In fact, four registrars accounted for almost all replies I received, even via in-person and private conversations.

Here they are.


My personal choice remains Gandi. Their new, JavaScript-heavy website belies the robustness of their platform, but if we judged companies by their websites, most would come up short. (This is why I want to rebuild every website I visit.)

The projects they support can give you a good idea of their principles, which is something that matters to me.

Gandi is committed to being the ethical choice for creating a web presence.

Notable features include:

  • Free SSL certificate
  • Whois privacy
  • DNS management

Gandi is recommended by Derick Rethans, Simon Jones, Seppe Stas, and Graham Christensen.


Like Gandi, Hover is trying to make the web better.

From podcasts to festivals, we’re proud to be patrons of inspiring projects that help fuel the internet.

Most people who recommend Hover cite their excellent customer service as a reason. They also have a helpful article on how to register a domain name.

Notable features include:

  • Whois privacy
  • DNS management

Hover is recommended by Kyle Meyer, Paul Reinheimer, and Tim Cheadle.


iwantmyname has an awkward name but a loyal customer base, and they have a wonderful sense of humor. The one-click setup of popular services like Google and AWS is a convenient way to get started quickly.

Notable features include:

  • Whois privacy
  • DNS management
  • Developer API

iwantmyname is recommended by Aaron Gilmore, Dan Duncan, and Chanpory Rith.


Namecheap continues to be a popular choice, and their mission and values are admirable. They even advertise their support for the EFF and Fight for the Future in the footer of every page.

Their domain search is one of the best I’ve seen, with a beast mode that unlocks the full gamut of search options.

Notable features include:

  • Whois privacy
  • DNS management

Namecheap is recommended by Ben Bodien and Jeff Lupinski.


I hope these recommendations can help you choose the domain registrar that’s best for you, especially if you don’t already own your own domain name(s). As I wrote in a recent article about personal websites on 99U:

Owning your own domain name is important, and if this article can convince you of only one thing, let it be this.

If you have any additions or corrections, please let me know.

Category: php

Denish Patel: Clone schema in Postgres

planet postgresql - 2019-08-20(Tue) 03:35:25

Postgres doesn't have a built-in function to clone a schema. However, many backend developers would like to perform a number of operations at the schema level directly from command-line tools such as psql or database developer tools (DBeaver). One of the common operations is to copy or clone an entire schema. While researching existing solutions, I stumbled upon a postgres-general thread where one of the developers provided a clone_schema.sql function. I uploaded the existing function provided in the thread to my git repo here.

However, when I tried to use it, it threw an error related to sequences. I updated the function to solve the reported error. Additionally, I added support for copying Materialized Views as well. I have uploaded the final working function here.

I have tested the public.clone_schema function on a complex schema, and it works both when copying the schema ONLY and when copying the schema WITH DATA.

You can call function like this to copy schema with data:

select clone_schema('development', 'development_clone', true);

Alternatively, if you want to copy only schema without data:

select clone_schema('development', 'development_clone', false);

Hope this helps someone. If you have a better or easier approach, please don't hesitate to leave a comment so I can improve the process on my end!

Category: postgresql

Paul Ramsey: Waiting for PostGIS 3: Parallelism in PostGIS

planet postgresql - 2019-08-19(Mon) 23:43:36

Parallel query has been a part of PostgreSQL since 2016 with the release of version 9.6 and in theory PostGIS should have been benefiting from parallelism ever since.

In practice, the complex nature of PostGIS has meant that very few queries would parallelize under normal operating configurations -- they could only be forced to parallelize using oddball configurations.

With PostgreSQL 12 and PostGIS 3, parallel query plans will be generated and executed far more often, because of changes to both pieces of software:

  • PostgreSQL 12 includes a new API that allows extensions to modify query plans and add index clauses. This has allowed PostGIS to remove a large number of inlined SQL functions that were previously acting as optimization barriers to the planner.
  • PostGIS 3 has taken advantage of the removal of the SQL inlines to re-cost all the spatial functions with much higher costs. The combination of function inlining and high costs used to cause the planner to make poor decisions, but with the updates in PostgreSQL that can now be avoided.
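A hedged sketch of what this means in practice (the table and column names are assumed, not from the post): with PostgreSQL 12 and PostGIS 3, a query involving a costly spatial function can produce a parallel plan under default settings.

```sql
-- Sketch only: 'roads' is a placeholder table with a geometry column 'geom'.
EXPLAIN
SELECT count(*)
FROM roads
WHERE ST_Length(geom) > 100;
-- With properly costed spatial functions, the plan can now include
-- Gather -> Parallel Seq Scan without oddball configuration.
```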
Category: postgresql

Craig Ringer: Developing on many versions of PostgreSQL at once

planet postgresql - 2019-08-19(Mon) 20:30:05
In a developer’s ideal world there’s only one stable and one development version of the product. Everyone updates immediately to the next stable version and nobody cares about anything older. As far as I can tell this has never actually happened, ever, in the history of anything. # Working on many versions PostgreSQL is a […]
Category: postgresql

WebDB Forum 2019 (2019/9/8-9/9)

www.postgresql.jp news - 2019-08-19(Mon) 09:13:01
WebDB Forum 2019 (2019/9/8-9/9) harukat 2019/08/19 (Mon) - 09:13
Category: postgresql

Johann Oskarsson: Porting a PostgreSQL Extension from Unix to Windows 10

planet postgresql - 2019-08-18(Sun) 23:24:31

2ndQuadrant has written before about how to build your own extension for Windows. This time we take a look at what it takes to port one written by someone else. I am building this for PostgreSQL 11 on Windows using Visual Studio 2019 and clang.

Here we are working with the hashtypes extension. Unfortunately, at the time of writing, version 0.1.5 has not been released with support for PostgreSQL 11. So we build one from git.

git clone https://github.com/adjust/hashtypes.git

The first thing we do is to take a look at the C source files. There are only four of them, common.c, crc32.c, md5.c, and sha.c so we can be reasonably sure the porting effort will be quick.

Then we use the directions from 2ndQuadrant to build our own makefile, called windows.mak. We start by setting the compiler and PostgreSQL directory in variables. Here we are using clang to compile so the flags will be familiar to Unix programmers but alien to those used to cl.

## You will need to change these two settings for your own system.
CC="C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\Llvm\8.0.0\bin\clang.exe"
POSTGRES="C:\Program Files\PostgreSQL\11"

Then we set the compilation flags in CFLAGS.

CFLAGS=-m64 -Wall -Wextra -O3 \
	-I$(POSTGRES)\include\server\port\win32_msvc \
	-I$(POSTGRES)\include\server\port\win32 \
	-I$(POSTGRES)\include\server \
	-I$(POSTGRES)\include

The CFLAGS are set to build 64-bit binaries with -m64; then we add the usual warnings, plus some extra warnings for extra credit, with -Wall and -Wextra. Since we're compiling production code, we add optimization with -O3. The various -I flags are include directories, and we take their names directly from the tutorial by 2ndQuadrant.

The link flags are easy, we just add the PostgreSQL lib directory with -L and link with postgres with -l.

LDFLAGS=-L$(POSTGRES)\lib -lpostgres

Now we can start our make rules. We make a generic rule for the object files.

.c.obj:
	$(CC) $(CF[...]
Category: postgresql

Hans-Juergen Schoenig: Timeseries: EXCLUDE TIES, CURRENT ROW and GROUP

planet postgresql - 2019-08-17(Sat) 16:00:06

Windowing functions and analytics have been around for quite some time, and many people already make use of this awesome stuff in the PostgreSQL world. Timeseries are an especially important area in this context. However, not all features have been widely adopted, and thus many developers have to implement functionality at the application level in a painful way instead of just using some of the more advanced SQL techniques.

The idea of this blog is to demonstrate some of the advanced stuff so that more people out there can make use of PostgreSQL’s true power.

Preparing data for analytics

For the purpose of this post I have created a simple data set:

test=# CREATE TABLE t_demo AS
           SELECT ordinality, day, date_part('week', day) AS week
           FROM generate_series('2020-01-02', '2020-01-15', '1 day'::interval)
                WITH ORDINALITY AS day;
SELECT 14

In PostgreSQL the generate_series function will return one row for each day spanning January 2nd, 2020 to January 15th, 2020. The WITH ORDINALITY clause tells PostgreSQL to add an "id" column to the result set of the function. The date_part function extracts the week number from our date; the purpose of this column is to have a couple of identical values in our timeseries.

In the next list you see the data set we will use:

test=# SELECT * FROM t_demo;
 ordinality |          day           | week
------------+------------------------+------
          1 | 2020-01-02 00:00:00+01 |    1
          2 | 2020-01-03 00:00:00+01 |    1
          3 | 2020-01-04 00:00:00+01 |    1
          4 | 2020-01-05 00:00:00+01 |    1
          5 | 2020-01-06 00:00:00+01 |    2
          6 | 2020-01-07 00:00:00+01 |    2
          7 | 2020-01-08 00:00:00+01 |    2
          8 | 2020-01-09 00:00:00+01 |    2
          9 | 2020-01-10 00:00:00+01 |    2
         10 | 2020-01-11 00:00:00+01 |    2
         11 | 2020-01-12 00:00:00+01 |    2
         12 | 2020-01-13 00:00:00+01 |    3
         13 | 2020-01-14 00:00:00+01 |    3
[...]
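Building on this data set, here is a minimal sketch of the EXCLUDE TIES clause from the title (not the author's exact example): EXCLUDE TIES removes the current row's peers, but not the current row itself, from the window frame.

```sql
-- Rows with the same 'week' are peers; EXCLUDE TIES drops them
-- from the frame while keeping the current row.
SELECT ordinality, week,
       array_agg(ordinality) OVER (
           ORDER BY week
           RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           EXCLUDE TIES
       ) AS frame_members
FROM t_demo;
```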
Category: postgresql

SunshinePHP 2020 CFP Started

php.net - 2019-08-17(Sat) 09:00:01
Category: php

Robert Treat: Charm City Postgres

planet postgresql - 2019-08-16(Fri) 06:17:00

This post marks the official launch of the Charm City Postgres Meetup group. Our goal is to help bring together the growing community of Developers, DBAs, DBREs, and other technologists in the greater Baltimore area who are working with Postgres. We're currently working to organize folks and would encourage interested people to sign up on the meetup site; we have set an initial meetup at Smart Office @ BWI. If you have questions, or want to speak, sponsor, or host a meeting in the future, you can contact me either through the meetup page or find me on the Postgres Team Slack.

Category: postgresql

Ibrar Ahmed: A Faster, Lightweight Trigger Function in C for PostgreSQL

planet postgresql - 2019-08-16(Fri) 00:16:25

We have been writing blog posts about how to write simple extensions in C, and a slightly more complex one by Ibrar, which were well received by the PostgreSQL user community. We then observed that many PostgreSQL users create simple triggers for small auditing requirements, and then feel the pain those triggers inflict on their transactions. So we were discussing how simple, lightweight, and fast a trigger function can be when written in C. Generally, trigger functions are written in high-level languages like PL/pgSQL, but these have a higher overhead during execution, which can impact transactions and thereby application performance.

This blog post is an attempt to create a simple trigger function to address one of the common use-cases of triggers, which is to update auditing columns in a table.

In this post, we are going to introduce SPI (Server Programming Interface) functions for novice users. Towards the end of the blog, we share some of the quick benchmark results for understanding the benefits.

Example of Audit timestamp

Let's proceed by taking up a case: assume that we have a table to hold transaction details, but auditing requirements say that each tuple should carry a timestamp recording when the tuple was inserted and when it was last updated.

CREATE TABLE transdtls(
    transaction_id int,
    cust_id int,
    amount int,
    ...
    insert_ts timestamp,
    update_ts timestamp
);

For demonstration purposes, let's trim away the other columns and create a table with only 3 essential columns.

CREATE TABLE transdtls(
    transaction_id int,
    insert_ts timestamp,
    update_ts timestamp
);

Developing Trigger Function

The trigger function can also be developed and packaged as an extension, which we discussed in a previous blog post here, so we are not going to repeat those steps. The difference is that the files are named "trgr" instead of "addme" as in the previous blog, and the Makefile is modified to refer to the "trgr" files. This need not be the same as the function name "trig_test" in the C source detailed
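Once the extension is installed, attaching the C function to the table follows the usual pattern; a sketch (the trigger name is assumed, the function name trig_test is taken from the post):

```sql
-- On PostgreSQL 11+; use EXECUTE PROCEDURE on older releases.
CREATE TRIGGER transdtls_audit
    BEFORE INSERT OR UPDATE ON transdtls
    FOR EACH ROW
    EXECUTE FUNCTION trig_test();
```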

Category: postgresql

Displaying exif information in WordPress posts

planet PHP - 2019-08-14(Wed) 19:01:00

After discovering that WordPress modifies img tags when rendering a page, it crossed my mind that I could display exif information underneath each image on my new photography blog. The basic process is applicable to any manipulation of the content that you would want to do before it is displayed.

To do this, we add a new filter to the the_content hook:

add_filter('the_content', function ($content) {
    // manipulate $content
    return $content;
});

As with all filters, we have to return data in the same format as the first parameter to our function. For the_content, we receive a string which is the post's article text.

The core operation

In my case, all my <img> tags are immediately followed by a <figcaption> tag as I have a caption under each one. I decided to add the exif data within the <figcaption>.

We use regex to find the tags:

if (!preg_match_all(
    '@img [^>]+><figcaption>[^<]*</figcaption>@',
    $content,
    $matches)) {
    return $content;
}

This will store each <img> tag that's followed by a <figcaption> tag into the $matches variable, so we can now iterate over it:

foreach ($matches[0] as $match) {
    // Find the url in the tag
    if (preg_match('@src="([0-9a-z:/._-]+)@i', $match, $srcList)) {
        $url = $srcList[1];
        if ($url) {
            $postId = attachment_url_to_postid($url);
            if ($postId) {
                $exif = ffp_create_exifcaption($postId);
                $new = str_replace('</figcaption>', $exif . '</figcaption>', $match);
                $content = str_replace($match, $new, $content);
            }
        }
    }
}

This code iterates over each match and finds the url in the src attribute of the image tag. We can then look up the attachment's post ID from that url using the rather helpful $postId = attachment_url_to_postid($url);.

Now that we have an ID, we can create the EXIF caption in a separate function and then finally we append it to the end of the current caption. We do this by simply replacing the closing </figcaption> with our text followed by </figcaption> which is a nice fast operation.

Creating EXIF display string

To create the string of EXIF data, I used a separate function to make it easier.

WordPress has already extracted the EXIF data we require when the image was uploaded into the admin, so we simply need to read it and format it. To get it we do this:

$metadata = wp_get_attachment_metadata($postId);
if (!isset($metadata['image_meta'])) {
    return '';
}
$imagemeta = $metadata['image_meta'] ?? [];

$imagemeta is an array of useful characteristics about the image.

We pull the info we care about into a set of variables like this:

$camera = $imagemeta['camera'] ?? null;
$lens = $imagemeta['lens'] ?? null;
$aperture = $imagemeta['aperture'] ?? null;
$shutter = $imagemeta['shutter_speed'] ?? null;
$iso = $imagemeta['iso'] ?? null;
$focal_length = $imagemeta['focal_length'] ?? null;

if (! ($camera || $lens || $aperture || $shutter || $iso || $focal_length)) {
    return '';
}

(Obviously, if there is no data, there's nothing to do.)

Finally we format it nicely:

$string = '<aside class="exif">';
if ($camera) {
    $string .= '<div><ion-icon name="camera"></ion-icon><span>' . $camera . '</span></div>';
}
if ($lens) {
    $string .= '<div><ion-icon name="radio-button-on"></ion-icon><span>' . $lens . '</span></div>';
}
if ($aperture) {
    $aperture = 'f/' . $aperture;
    $string .= '<div><ion-icon name="aperture"></ion-icon><span>' . $aperture . '</span></div>';
}
if ($shutter) {
    $shutter = '1/' . (int)(1.0 / (float)$shutter);
    $string .= '<div><ion-icon name="timer"></ion-icon><span>' . $shutter . '</span></div>';
}
if ($iso) {
    $string .= '<div><span class="text-icon">ISO</span><span>' . $iso . '</span></div>';
}
if ($focal_length) {
    $focal_length = (string)(round((float)$focal_length)) . 'mm';
    $string .= '<div><ion-icon name="eye"></ion-icon><span>' . $focal_length . '</span></div>';
}
$string .= '</aside>';

return $string;

We wrap our entire EXIF string in an <aside> tag so that we can style appropriately. Then for each item of data, we display an icon followed by the text information. The icons are Io

Truncated by Planet PHP, read more at the original (another 950 bytes)

Category: php

Paul Ramsey: Waiting for PostGIS 3: ST_TileEnvelope(z,x,y)

planet postgresql - 2019-08-14(Wed) 03:21:30

With the availability of the MVT tile format in PostGIS via ST_AsMVT(), more and more people are generating tiles directly from the database. Doing so usually involves a couple of common steps:
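As a hedged sketch of those steps (the table 'roads' and layer name are assumed, not from the post): compute the tile bounds, clip the geometries to them, and aggregate the result into an MVT.

```sql
-- ST_TileEnvelope(z, x, y) returns the tile's bounds in Web Mercator (PostGIS 3).
SELECT ST_AsMVT(q, 'roads') FROM (
    SELECT id,
           ST_AsMVTGeom(geom, ST_TileEnvelope(3, 2, 1)) AS geom
    FROM roads
    WHERE geom && ST_TileEnvelope(3, 2, 1)
) q;
```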

Category: postgresql