
Liaqat Andrabi: Webinar: pglogical and Postgres-BDR Update [Follow Up]

planet postgresql - 2019-01-09 (Wed) 03:30:52

Since the release of v3, Postgres-BDR has evolved into the go-to clustering technology built specifically for businesses that require geographically distributed databases with multiple masters.

To get an update on Postgres-BDR’s development, new features, and future roadmap, 2ndQuadrant held the pglogical and Postgres-BDR Update webinar as part of its PostgreSQL webinar series.

The webinar was presented by Simon Riggs, Founder & CEO of 2ndQuadrant, who is also a major contributor to the open source PostgreSQL project. Those who weren’t able to attend the live event can now view the recording here.

For any questions or comments regarding Postgres-BDR, please send an email to info@2ndQuadrant.com.

Category: postgresql

Sebastian Insausti: Deploying and Managing PostgreSQL 11: New in ClusterControl 1.7.1

planet postgresql - 2019-01-08 (Tue) 21:16:44

A few days ago, a new version of ClusterControl, 1.7.1, was released, bringing several new features, one of the main ones being support for PostgreSQL 11.

To install PostgreSQL 11 manually, we must first add the repositories or download the necessary packages, install them, and configure them correctly, depending on our infrastructure. All of these steps take time, so let’s see how we can avoid them.

In this blog, we will see how to deploy this new PostgreSQL version with a few clicks using ClusterControl, and how to manage it. As a prerequisite, please install ClusterControl 1.7.1 on a dedicated host or VM.

Deploy PostgreSQL 11

To perform a new installation from ClusterControl, simply select the “Deploy” option and follow the instructions that appear. Note that if you already have a PostgreSQL 11 instance running, you need to select the “Import Existing Server/Database” option instead.

ClusterControl Deploy Option

When selecting PostgreSQL, we must specify the user, key or password, and port for connecting to our PostgreSQL hosts over SSH. We also need a name for our new cluster, and to choose whether we want ClusterControl to install the corresponding software and configurations for us.

ClusterControl Deploy Information 1

Please check the ClusterControl user requirements for this task here.

ClusterControl Deploy Information 2

After setting up the SSH access information, we must define the database user, version and datadir (optional). We can also specify which repository to use. In this case, we want to deploy PostgreSQL 11, so just select it and continue.

In the next step, we need to add our servers to the cluster we are going to create.

ClusterControl Deploy Information 3

When adding our servers, we can enter an IP address or hostname.

In the last step, we can choose whether our replication will be synchronous or asynchronous.

ClusterControl Deploy Information 4

We can monitor the status of the creation of our new cluster from the ClusterControl activity monitor.

ClusterControl Activity[...]
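Once the deployment completes, a quick sanity check from psql on the primary (independent of ClusterControl) shows each connected standby and whether it replicates synchronously or asynchronously; a minimal sketch:

-- Run on the primary: one row per connected standby
SELECT application_name, state, sync_state
FROM pg_stat_replication;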
Category: postgresql

Gregory Stark: Monitoring Postgres with Prometheus

planet postgresql - 2019-01-07 (Mon) 23:05:00
I'm glad people found my presentation at Lisbon on monitoring Postgres using Prometheus last October interesting. The slides are now uploaded to the conference web site at https://www.postgresql.eu/events/pgconfeu2018/schedule/session/2166/. Sorry for the delay. Now that it's a new year, it's time to work on improving monitoring in Postgres.

As an aside, I experimented a bit with the build process for this slide deck. It's a Gitlab project and I set up a Gitlab CI pipeline to run LaTeX to build the beamer presentation and serve it from Gitlab Pages. So you can see the most recent version of the slide deck at https://_stark.gitlab.io/monitoring-postgres-pgconf.eu-2018/monitoring.pdf any time, and it's automatically rebuilt each time I do a git push. You can also see the source code at https://gitlab.com/_stark/monitoring-postgres-pgconf.eu-2018 and feel free to submit issues if you spot any errors in the slides, or even just suggestions on things that were unclear.

But now the real question. I want to improve the monitoring situation in Postgres. I have all kinds of grand plans, but I would be interested to hear what people feel are the top-priority and most practical changes they want to see in Postgres.

Personally, I think the most important first step is to implement native Prometheus support in Postgres -- probably a background worker that would start up and expose all of pg_stats directly from shared memory to Prometheus without having to start an SQL session with all its transaction overhead. That would make things more efficient, but also more reliable during outages. It would also make it possible to export data for all databases instead of having the agent reconnect for each database!

I have future thoughts about distributed tracing, structured logging, and pg_stats changes to support application profiling, but they are subjects for further blog posts. I have started organizing my ideas as issues in https://gitlab.com/_stark/postgresql/issues -- feel free to comment on them or create new issues [...]
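For context on the overhead mentioned above: today a typical exporter opens a regular SQL connection and polls the cumulative statistics views on every scrape, along these lines (a sketch of the kind of query involved, not any particular exporter's code):

-- Per-database counters a Prometheus exporter typically scrapes over SQL
SELECT datname, numbackends, xact_commit, xact_rollback, blks_read, blks_hit
FROM pg_stat_database;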
Category: postgresql

Douglas Hunley: pgBackRest 2.08 and macOS Mojave

planet postgresql - 2019-01-04 (Fri) 22:30:44

The team has released pgBackRest 2.08 today. As part of a continuing effort, more bits have been moved from Perl to C. Sadly, this adds a new wrinkle for those of us on OSX, as when compiling, you now get:

gcc -I. -I../libc -std=c99 -D_POSIX_C_SOURCE=200112L -O2 -Wfatal-errors -Wall -Wextra -Wwrite-strings -Wswitch-enum -Wconversion -Wformat=2 -Wformat-nonliteral -Wno-clobbered -Wno-missing-field-initializers -Wstrict-prototypes -Wpointer-arith -Wvla `xml2-config --cflags` `perl -MExtUtils::Embed -e ccopts` -DWITH_PERL -DNDEBUG -c common/lock.c -o common/lock.o
warning: unknown warning option '-Wno-clobbered'; did you mean '-Wno-consumed'? [-Wunknown-warning-option]
common/lock.c:76:21: warning: implicit declaration of function 'flock' is invalid in C99 [-Wimplicit-function-declaration]
        if (flock(result, LOCK_EX | LOCK_NB) == -1)
                    ^
common/lock.c:76:21: warning: this function declaration is not a prototype [-Wstrict-prototypes]
common/lock.c:76:35: fatal error: use of undeclared identifier 'LOCK_EX'
        if (flock(result, LOCK_EX | LOCK_NB) == -1)
                                  ^
3 warnings and 1 error generated.
make: *** [common/lock.o] Error 1

To fix this, you will need to edit src/Makefile and change line 12 from:

CSTD = -std=c99 -D_POSIX_C_SOURCE=200112L

to:

CSTD = -std=c99 -D_DARWIN_C_SOURCE

The reason this works: defining _POSIX_C_SOURCE on macOS hides BSD extensions such as flock() and the LOCK_* constants from the system headers, while _DARWIN_C_SOURCE exposes them again. Then, you can follow the other steps in my previous post and everything should compile and function properly.

Enjoy!

Category: postgresql

damien clochard: 8 Anonymization Strategies with PostgreSQL

planet postgresql - 2019-01-04 (Fri) 08:17:36

Data anonymization is a complex topic, but PostgreSQL has a lot of interesting features to tackle this challenge! Here is an overview of different approaches and how to implement them directly within a PostgreSQL database.

Over the last few months I have been working on a project called PostgreSQL Anonymizer. It led me to try various techniques to remove personal data for different purposes: development, CI, functional testing, Analytics, etc.

See my previous article “Introducing PostgreSQL Anonymizer” for more details about that project…

So far, I have found 8 different ways to anonymize a dataset. Here’s a quick tour with practical examples. All the queries in the article will use a simplified table (see below) and should work with any current version of PostgreSQL (from 9.4 to 11; note that the TABLESAMPLE example below requires 9.5 or later).

CREATE TABLE people (
  id SERIAL,
  name TEXT NOT NULL,
  address TEXT,
  age INTEGER,
  salary INTEGER,
  phone TEXT
);

0. Sampling

Sampling is not anonymization! But when you need to remove personal data from a database, most of the time you don’t need to publish all the rows. Working on a subset makes the anonymization process faster and limits the risk of unintended disclosure.

So before going any further it is important to note that PostgreSQL provides a feature called TABLESAMPLE that will reduce the size of your dataset.

For example, if you want to work only on 20% of a table:

SELECT * FROM people TABLESAMPLE BERNOULLI(20);
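If the sample needs to be reproducible (handy for CI), TABLESAMPLE also accepts a REPEATABLE seed (PostgreSQL 9.5 or later); a minimal sketch:

-- Same 20% sample on every run, thanks to the fixed seed
SELECT * FROM people TABLESAMPLE BERNOULLI (20) REPEATABLE (42);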

And if you want to extract a subset among several tables while maintaining referential integrity, pg_sample is your friend!

1. Suppression

This is the most obvious way to get rid of personal identifiers: just wipe a column by replacing all values with NULL (aka “Nullification”) or with a constant value (“Static Substitution”).

Example:

UPDATE people SET name = '<CONFIDENTIAL>';
UPDATE people SET address = NULL;

This is simple and effective. For useless or highly sensitive data fields, it may be the best option.

But of course it will break integrity constraints (PRIMARY, UNIQUE, NOT NULL, etc.). An

[...]
Category: postgresql

Ibrar Ahmed: Upcoming Webinar Friday 1/4: High-Performance PostgreSQL, Tuning and Optimization Guide

planet postgresql - 2019-01-04 (Fri) 01:42:06

Please join Percona’s Senior Software Engineer, Ibrar Ahmed, as he presents his High-Performance PostgreSQL, Tuning and Optimization Guide on Friday, January 4th, at 8:00 AM PDT (UTC-7) / 11:00 AM EDT (UTC-4).

Register Now

PostgreSQL is one of the leading open-source databases. Out of the box, the default PostgreSQL configuration is not tuned for any particular workload; it is deliberately conservative, so that even a system with minimal resources can run it. The flip side is that PostgreSQL does not deliver optimal performance on powerful machines, because it does not make use of all the available resources. PostgreSQL provides a system where you can tune your database according to your workload and your machine’s specifications. In addition to PostgreSQL, we can also tune our Linux box so that the database load can work optimally.

In this webinar on High-Performance PostgreSQL, Tuning and Optimization, we will learn how to tune PostgreSQL and we’ll see the results of that tuning. We will also touch on tuning some Linux kernel parameters.
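To give a flavor of what such tuning looks like in practice, a few commonly adjusted memory settings can be changed with ALTER SYSTEM. The values below are placeholders, not recommendations; the right numbers depend on your RAM, workload, and storage:

-- Illustrative values only
ALTER SYSTEM SET shared_buffers = '8GB';         -- takes effect only after a restart
ALTER SYSTEM SET effective_cache_size = '24GB';
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();                         -- reloads what can be reloaded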

 

Category: postgresql

Liaqat Andrabi: PostgreSQL 9.3 EOL – Why is it Important to Upgrade?

planet postgresql - 2019-01-03 (Thu) 21:39:11

After the final release of patch 9.3.25 on November 8th, 2018, PostgreSQL 9.3 is no longer supported. It is therefore time for all users of PG 9.3 to upgrade their databases to a newer, supported version. The benefits of running a supported version are many, and that’s what Craig Ringer talks about in the Q&A session below:

Why is it important to upgrade your PostgreSQL database to the latest version?

Craig: Always update to the latest minor version. 2ndQuadrant’s 24/7 support services often help customers who could have avoided a production outage or fault simply by applying each minor update as it was released. The PostgreSQL community (including 2ndQuadrant) releases minor point releases conservatively, and for good reasons. Keep up to date on those patches.

Note: A “minor” PostgreSQL release, i.e. a maintenance release, is an increment in the last part of the version number, whether it is a two-number version like 10.x or a three-number version like 9.6.x. See “version policy” on the PostgreSQL website.
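As a quick aside (not part of Craig's answer): you can check which major and minor version a server is running directly from psql:

SELECT version();          -- full version string
SHOW server_version;       -- e.g. 9.6.11 or 11.1
SHOW server_version_num;   -- machine-readable, e.g. 90611 or 110001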

For major release updates, keeping up is not as important. It’s often best to stay on the latest minor release of your current major version and skip a major PostgreSQL release or two before updating. Upgrade if a new major release will solve problems you are facing, if your current version is approaching the community end-of-life date, or if you’re planning system changes like OS upgrades or server migrations anyway.

You’ll miss out on performance improvements, data integrity protection enhancements and improvements in monitoring and diagnostics offered by new major versions. But any change has a risk, and a well-planned upgrade requires testing and preparation, which has costs. So seeking a middle ground is usually wise.

Note that it’s usually harder to upgrade from a very old PostgreSQL to the latest, and it may need more planning, so it’s wise to start planning well before the End of Life (EoL) date. That way you maintain continuous coverage for security updates and any serious bugs that may be discovered

[...]
Category: postgresql

Pavel Golub: Keyword mess

planet postgresql - 2019-01-03 (Thu) 17:59:48

Once I received a mail with the question:
Can you tell me why I can’t select the column `references`?

=# \d v_table_relation
              View "public.v_table_relation"
   Column   |                Type                 | Collation | Nullable | Default
------------+-------------------------------------+-----------+----------+---------
 schema     | information_schema.sql_identifier   |           |          |
 table      | information_schema.sql_identifier   |           |          |
 columns    | information_schema.sql_identifier[] |           |          |
 references | jsonb[]                             |           |          |

=# select * from v_table_relation ;
  schema   |   table    |     columns     |                           references
-----------+------------+-----------------+--------------------------------------------------------------
 public    | a          | {b,c}           | {}
 public    | a2         | {b,c}           | {"{\"toTable\": \"a\", \"toSchema\": \"public\", \"toColumns\": [\"b\", \"c\"], \"fromColumns\": [\"b\", \"c\"]}"}
 workspace | t_employee | {id,name,state} | {}
(3 rows)

=# select references from v_table_relation;
ERROR:  syntax error at or near "references"
LINE 1: select references from v_table_relation;

 

Well, the quick answer is: because REFERENCES is a keyword, you should use double quotes around it, e.g.

=# select "references" from v_table_relation;

But why do you sometimes need to explicitly quote identifiers, while at other times it works fine without them? How do you know exactly which keywords are used in PostgreSQL? And which keywords may be used as object names? And so on…

As usual one may find the answers in the PostgreSQL manual:
www.postgresql.org/docs/current/sql-keywords-appendix.html

There we have a detailed table that lists all tokens that are key words in the SQL standard and in PostgreSQL. From the manual you will learn about reserved and non-reserved tokens, SQL standard compatibility, and much more that is out of the scope of this post.
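As a quick illustration, the server can also report its keyword list itself through the pg_get_keywords() function; for example, to list the fully reserved keywords (the ones that always need quoting when used as identifiers):

SELECT word, catcode
FROM pg_get_keywords()
WHERE catcode = 'R'
ORDER BY word;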

There are two more ways to know what tokens can or cannot be used in certain situations. First is for true programmers, and the second is for true admins. Choose your pill, Neo.

Investigating the sources

PostgreSQL uses a LALR(1)* parser to work with SQL statements. The implementation of the grammar (the parser itself) may be found in gram.y and impleme

[...]
Category: postgresql

Josh Williams: Switching PostgreSQL WAL-based Backup Options

planet postgresql - 2019-01-03 (Thu) 09:00:00


Photo by Paul Hudson · CC BY 2.0, modified

I was woken up this morning. It happens every morning, true, but not usually by a phone call requesting help with a PostgreSQL database server that was running out of disk space.

It turns out that one of the scripts we’re in the process of retiring, but still had in place, got stuck in a loop and filled most of the available space with partial, incomplete base backups. So, since I’m awake, I might as well talk about Postgres backup options. I don’t mean for it to be a gripe session, but I’m tired and it kind of is.

For this particular app, since it resides partially on AWS, we looked specifically at options that are able to work natively with S3. We’ve currently settled on pgBackRest. There are a bunch of options out there, which doesn’t make the choice easy. But I suppose that’s the nature of things these days.
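For context on how such tools hook into Postgres: WAL archiving is driven by two settings, shown here configured for pgBackRest as a minimal sketch (the stanza name "main" is a placeholder):

ALTER SYSTEM SET archive_mode = 'on';   -- requires a server restart
ALTER SYSTEM SET archive_command = 'pgbackrest --stanza=main archive-push %p';
SELECT pg_reload_conf();                -- archive_command itself only needs a reload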

At first we’d tried out pghoard. It looks pretty good on the tin, especially with its ability to connect to multiple cloud storage services beyond S3: Azure, Google, Swift, etc. Having options is always nice. And for the most part it works well, apart from a couple of idiosyncrasies.

We had the most trouble with the encryption feature. It didn’t have any problem on the encryption side, but for some reason on restore the process would hang and eventually fail without unpacking any data. Having a backup solution is a pretty important thing, but it doesn’t mean anything unless we can get the data back out of it. So this was a bit of a sticking point. We probably could have figured out how to get it functioning, and at least been good citizens and reported it upstream to get it resolved in the source. But we kind of just needed it working, and giving something else a shot was a quicker path to that goal. Sorry, pghoard devs.

The other idiosyncratic behavior that’s probably worth mentioning is that it does its own scheduling. The base backups, for instance, happen at a fixed hour interval set in the configuration file, starting from when the service is first s

[...]
Category: postgresql

Craig Kerstiens: Fun with SQL: Self joins

planet postgresql - 2019-01-03 (Thu) 00:46:00

Various families have various traditions in the US around Christmas time. Some will play games like white elephant, where you get a mix of decent gifts as well as gag gifts… you then draw numbers and get to pick from existing presents that have been opened (“stealing” from someone else) or open an unopened one. The game is entertaining both in trying to get something you want and in trying to stick Aunt Jennifer with the stuffed poop emoji with a Santa hat on it.

Other traditions are a bit simpler. One that my partner’s family follows is drawing names, so each person buys a gift for one other person. This is nice because you can put a bit of effort into that one person without being overwhelmed by tracking down things for multiple people. Each year we draw names for the next year. And by now you’re probably thinking: what does any of this have to do with SQL? Well, normally when we draw names we write them on a piece of paper, someone takes a picture, and that gets texted around to other family members. At least for me, every October I’m scrolling back through text messages trying to recall who it was I’m supposed to buy for. This year I took a little time to put everyone’s name in a SQL database and write a simple query for easier recall.

A simple but workable schema

I could very much over-engineer this and record things like who had whom in which year to see how many repeats happen, etc. But I really just want an easy place to save the data and not forget. For that reason I’m simplifying the schema as much as possible:

CREATE TABLE giftnames (
  id serial,
  name text,
  assignment int
);

From here I populated family members’ names first:

INSERT INTO giftnames (name) VALUES ('Craig Kerstiens');
INSERT INTO giftnames (name) VALUES ('Aunt Jennifer');
INSERT INTO giftnames (name) VALUES ('Uncle Joe');

Then as we draw names of who each person gets I’m going to go through and update those values within the table as well:

UPDATE giftnames set assignment = 2 where id = 1;
UPDATE giftnames set assignment = 3 where id = 2;
UPDATE gift[...]
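The query the title promises is presumably a self join along these lines; a sketch against the schema above, since the excerpt is cut off here:

-- Each person's recipient: join giftnames to itself on assignment = id
SELECT giver.name AS giver, recipient.name AS recipient
FROM giftnames AS giver
JOIN giftnames AS recipient ON giver.assignment = recipient.id;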
Category: postgresql

Bruce Momjian: Fourteen Authentication Methods

planet postgresql - 2019-01-02 (Wed) 23:45:01

Postgres supports fourteen authentication methods — that might seem like a lot, but Postgres is used in many environments, and it has to support whatever methods are being used in those environments. The fourteen methods can seem confusing, but they are easier to understand in categories:

  • absolute: trust, reject (always allow or reject)
  • password: SCRAM-SHA-256, MD5, password (compare a user-supplied password with something stored in the database)
  • external password: LDAP, pam, RADIUS, BSD (compare to a password stored outside the database)
  • trusted network: peer, ident (rely on the network connection to authenticate)
  • trusted tokens: GSS, SSPI (use possession of a token generated by a trusted key distribution server)
  • certificate authority: cert (uses access to the private key of a certificate signed by a trusted certificate authority)

So, there is one absolute class and five conditional classes of authentication.
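On PostgreSQL 10 and later you can see which of these methods a server is actually configured to use by querying its parsed pg_hba.conf; for example:

SELECT line_number, type, database, user_name, address, auth_method
FROM pg_hba_file_rules;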

Category: postgresql

Nikolay Shaplov: Using vim for postgres development

planet postgresql - 2019-01-02 (Wed) 01:07:32
I've written a wiki article about configuring vim for postgres development.

https://wiki.postgresql.org/wiki/Configuring_vim_for_postgres_development

I actually started to restore my vim configuration (I lost it some time ago) and found out that this information is scattered across the web. It is better to keep it in one place.

In the article I've described all the options and plugins that I use for pg development.

If you have your own vim experience and know some tools that can be used to make development and following the codestyle easier, please add them there.

PS. I've already written about it to the mailing list. Just want to write about it in the blog too...

Category: postgresql

Viorel Tabara: An Overview of Just-in-Time Compilation (JIT) for PostgreSQL

planet postgresql - 2019-01-01 (Tue) 19:58:00

Historically, PostgreSQL has provided compilation features in the form of ahead-of-time compilation for PL/pgSQL functions, and version 10 introduced expression compilation. None of those generate machine code, though.

JIT for SQL was discussed many years ago, and for PostgreSQL the feature is the result of a substantial code change.

To check if a PostgreSQL binary was built with LLVM support, use the pg_config command to display the compile flags and look for --with-llvm in the output. Example for the PGDG RPM distribution:

omiday ~ $ /usr/pgsql-11/bin/pg_config --configure
'--enable-rpath' '--prefix=/usr/pgsql-11' '--includedir=/usr/pgsql-11/include' '--mandir=/usr/pgsql-11/share/man' '--datadir=/usr/pgsql-11/share' '--enable-tap-tests' '--with-icu' '--with-llvm' '--with-perl' '--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-gssapi' '--with-includes=/usr/include' '--with-libraries=/usr/lib64' '--enable-nls' '--enable-dtrace' '--with-uuid=e2fs' '--with-libxml' '--with-libxslt' '--with-ldap' '--with-selinux' '--with-systemd' '--with-system-tzdata=/usr/share/zoneinfo' '--sysconfdir=/etc/sysconfig/pgsql' '--docdir=/usr/pgsql-11/doc' '--htmldir=/usr/pgsql-11/doc/html' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'

Why LLVM JIT?

It all started about two years ago, as explained in Andres Freund’s post, when expression evaluation and tuple deforming proved to be the roadblocks in speeding up large queries. After adding the JIT implementation, “expression evaluation itself is more than ten times faster than before,” in Andres’ words. Further, the Q&A section ending his post explains the choice of LLVM over other implementations

[...]
Category: postgresql

REGINA OBE: Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 PostGIS 2.5 on Yum

planet postgresql - 2019-01-01 (Tue) 15:48:00

In a previous article, Using pg_upgrade to upgrade PostGIS without installing an older version, I demonstrated a trick for upgrading to a newer PostgreSQL instance from PostGIS 2.2 - 2.whatever without having to install the older version. Unfortunately that trick does not work if coming from PostGIS 2.1, because in PostGIS 2.2 we renamed a C lib function that backed SQL functions in 2.1.

Fear not. There is still a way to upgrade from 2.1 to 2.5 without installing an older version of PostGIS in your new PostgreSQL instance. To do so, you need to add one step: remove the functions in 2.1 that are backed by this renamed lib function. In the upcoming PostGIS 3.0, we've added this function back and have it throw an error, so that even coming from PostGIS 2.1 you can upgrade just the same as you do from later versions.

Continue reading "Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 PostGIS 2.5 on Yum"
Category: postgresql

Avinash Kumar: Great things that happened with PostgreSQL in the Year 2018

planet postgresql - 2019-01-01 (Tue) 07:28:26

In this blog post, we’ll look back at what’s been going on in the world of PostgreSQL in 2018.

Before we start talking about the good things that happened in PostgreSQL in 2018, we hope you had a wonderful year, and we wish you a happy and prosperous 2019.

PostgreSQL has been a choice for those who are looking for a completely community-driven open source database that is feature-rich and extensible. We have seen tremendously great things happening in PostgreSQL for many years, with 2018 being a prime example. As you can see from the following snippet of the DB-Engines rankings, PostgreSQL topped the chart for growth in popularity in 2018 compared to other databases.

PostgreSQL adoption growth has been increasing year over year, and 2018 has again been one such year as we can see.

Let’s start with a recap of some of the great PostgreSQL events, and look at what we should take away from 2018 in the PostgreSQL space.

PostgreSQL 11 Released

PostgreSQL 11 was a release that incorporated a lot of features previously offered only in commercial database software governed by an enterprise license. For example, there are times when you need to handle transactions inside a stored procedure in your application code. There are also times when you wish to partition a table that has foreign keys, or to use hash partitioning. These used to require workarounds; the release of PostgreSQL 11 covers these scenarios.
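Two quick sketches of those features (illustrative only; the table names are made up):

-- PG 11: a procedure may control transactions inside its body
CREATE PROCEDURE batch_load()
LANGUAGE plpgsql
AS $$
BEGIN
  FOR i IN 1..1000 LOOP
    INSERT INTO t VALUES (i);   -- assumes a table t(x int) exists
    IF i % 100 = 0 THEN
      COMMIT;                   -- not possible in a plain function
    END IF;
  END LOOP;
END;
$$;
CALL batch_load();

-- PG 11: hash partitioning
CREATE TABLE measurements (id int, payload text) PARTITION BY HASH (id);
CREATE TABLE measurements_p0 PARTITION OF measurements
  FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE measurements_p1 PARTITION OF measurements
  FOR VALUES WITH (MODULUS 2, REMAINDER 1);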

There were many other additions as well, such as Just-In-Time compilation, improved query parallelism, and partition elimination. You can find out more in our blog post here, or in the PostgreSQL 11 release notes (if you have not seen them already). Special thanks to everyone involved in such a vibrant PostgreSQL release.

End of Life for PostgreSQL 9.3

9.3.25, released on November 8, 2018, was the last minor release for PostgreSQL 9.3. There will be no more community-supported minor releases for 9.3. If you are still using PostgreSQL 9.3 (or an earlier major release), it is the

[...]
Category: postgresql

David Fetter: psql: A New Edit

planet postgresql - 2019-01-01 (Tue) 04:44:00
Have you ever found yourself in the middle of a long statement in psql and wanted to pull up your favorite editor? Now, you can, using the same shortcut of control-x control-e that you would in bash!

Here's how:

$EDITOR ~/.inputrc

Now add the following lines:

$if psql
    "\C-x\C-e": "\C-e\\e\C-m"
$endif

...save the file, and you're good to go. (The macro jumps to the end of the line, types psql's \e meta-command, and presses Enter, which opens the current query buffer in your editor.)
Category: postgresql

Bruce Momjian: The Future of Major Version Upgrades

planet postgresql - 2019-01-01 (Tue) 02:45:01

Pg_upgrade has matured to become a popular method to perform major version upgrades. This email thread considers what better upgrades would look like. Options include:

  • Migrating analyze statistics
  • Logical replication (requires double the disk space, managing schema changes)
  • Multi-master replication (similar to previous, but also requires conflict resolution)
  • Allowing the new server to read the old server's data files

Upgrade improvements have to be significantly better in at least one of these dimensions: faster, easier, more reliable, or requiring less storage, while not being significantly worse in any of them. For example, a solution that is 50% faster but more complex or less reliable will find it hard to gain acceptance. Of course, if a solution is one hundred times faster, it can afford to be worse in some of those areas.

Category: postgresql

Brian Fehrle: PostgreSQL Replication for Disaster Recovery

planet postgresql - 2018-12-31 (Mon) 19:58:00

With Disaster Recovery, we aim to set up systems to handle anything that could go wrong with our database. What happens if the database crashes? What if a developer accidentally truncates a table? What if we find out some data was deleted last week but we didn’t notice until today? These things happen, and having a solid plan and system in place will make the DBA look like a hero when everyone else’s heart has already stopped as a disaster rears its ugly head.

Any database that has any sort of value should have a way to implement one or more Disaster Recovery options. PostgreSQL has a very solid replication system built in, and it is flexible enough to be set up in many configurations to aid with Disaster Recovery should anything go wrong. We’ll focus on scenarios like those raised above: how to set up our Disaster Recovery options, and the benefits of each solution.

High Availability

With streaming replication in PostgreSQL, High Availability is simple to set up and maintain. The goal is to provide a failover site that can be promoted to master if the main database goes down for any reason, such as hardware failure, software failure, or even network outage. Hosting a replica on another host is great, but hosting it in another data center is even better.

For the specifics of setting up streaming replication, Severalnines has a detailed deep dive available here. The official PostgreSQL Streaming Replication Documentation has detailed information on the streaming replication protocol and how it all works.
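To give a taste of what is involved (a minimal sketch, not a substitute for those guides; the host name and password are placeholders):

-- On the master: a role for the standby to connect as
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';

-- On the standby (PostgreSQL 11 and earlier), recovery.conf then contains:
--   standby_mode = 'on'
--   primary_conninfo = 'host=master.example.com user=replicator password=secret'

-- Verify on the standby that it is replaying WAL:
SELECT pg_is_in_recovery();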

A standard setup looks like this: a master database accepting read/write connections, with a replica database receiving all WAL activity in near real-time and replaying all data change activity locally.

Standard Streaming Replication with PostgreSQL

When the master database becomes unusable, a failover procedure is initiated to bring it offline, and promote the replica database to master, then pointing all connections to the newly promoted host. This can be done by either reconfiguring a load balancer, appli

[...]
Category: postgresql

Bruce Momjian: Zero Downtime Pg_upgrade

planet postgresql - 2018-12-29 (Sat) 00:45:01

Pg_upgrade can upgrade a multi-terabyte system in 3-5 minutes using link mode. There are some things that can be done to make it faster — particularly, migration of analyze statistics from the old to new clusters.

However, even if pg_upgrade could perform an upgrade in zero time, would that be a zero-downtime upgrade? I am not sure, since my previous blog post explains that the work of switching clients from the old cluster to the new one seems to be downtime in the sense that running sessions are either terminated and restarted, or new connections must wait for old connections to complete. Multi-master replication seems to be unique in offering a way for new sessions to start on the new cluster while old sessions finish on the old cluster, but unfortunately it adds complexity.

Category: postgresql

Michael Paquier: ON COMMIT actions with inheritance and partitions

planet postgresql - 2018-12-28 (Fri) 12:45:08

The following bug fix has been committed to the PostgreSQL code tree, addressing an issue present ever since ON COMMIT support for CREATE TABLE was added back in commit ebb5318 in 2002:

commit: 319a8101804f3b62512fdce1a3af1c839344b593
author: Michael Paquier <michael@paquier.xyz>
date: Fri, 9 Nov 2018 10:03:22 +0900

Fix dependency handling of partitions and inheritance for ON COMMIT

This commit fixes a set of issues with ON COMMIT actions when used on
partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a
  table with inheritance children caused a failure at commit time, with
  complains about the children being already dropped as all relations are
  dropped one at the same time.
- Applying ON COMMIT DELETE on a partition relying on a partitioned table
  which uses ON COMMIT DROP would cause the partition truncation to fail
  as the parent is removed first.

The solution to the first problem is to handle the removal of all the
dependencies in one go instead of dropping relations one-by-one, based
on a suggestion from Álvaro Herrera. So instead all the relation OIDs to
remove are gathered and then processed in one round of multiple deletions.

The solution to the second problem is to reorder the actions, with
truncation happening first and relation drop done after. Even if it means
that a partition could be first truncated, then immediately dropped if its
partitioned table is dropped, this has the merit to keep the code simple
as there is no need to do existence checks on the relations to drop.

Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE
does not cascade to its partitions. The ON COMMIT action defined on each
partition gets the priority.

Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10
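For reference while reading the commit message, the feature in question looks like this (a minimal sketch):

-- Rows vanish at every transaction commit
CREATE TEMP TABLE scratch (id int) ON COMMIT DELETE ROWS;
-- The table itself is dropped at commit
CREATE TEMP TABLE scratch2 (id int) ON COMMIT DROP;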

For beginners, ON COMMIT actions can be defined as part of CREATE TABLE on a temporary table t

[...]
Category: postgresql
