Planet PostgreSQL

Liaqat Andrabi: PostgreSQL 9.3 EOL – Why is it Important to Upgrade?

2019-01-03(Thu) 21:39:11

After the final release of patch 9.3.25 on November 8th 2018, PostgreSQL 9.3 is no longer supported. Therefore it’s time for all users of PG 9.3 to upgrade their databases to a newer supported version. The benefits of having a supported version are many and that’s what Craig Ringer talks about in the Q&A session below:

Why is it important to upgrade your PostgreSQL database to the latest version?

Craig: Always update to the latest minor version. 2ndQuadrant’s 24/7 support services often help customers who could’ve avoided experiencing a production outage or fault simply by updating every minor version or two. The PostgreSQL community (including 2ndQuadrant) releases minor point releases conservatively, and for good reasons. Keep up to date on those patches.

Note: A “minor” PostgreSQL release, i.e. a maintenance release, is an increment in the last part of the version number, whether it is a two number version like 10.x or a three-number version like 9.6.x. See “version policy” on the PostgreSQL website.
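As a quick illustration (not part of the original post), you can check which release a server is running directly from psql; the numeric form makes the major/minor split easy to see:

SELECT version();
SHOW server_version_num;   -- e.g. 110001 for 11.1, 90625 for 9.6.25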

For major release version updates, keeping up is not that important. It's often best to stay on the latest minor release of your current major version and skip a major PostgreSQL release or two before updating. Update if a new major release will solve problems you are facing, your current version is approaching the community end-of-life date, or you're planning system changes like OS upgrades or server migrations anyway.

You’ll miss out on performance improvements, data integrity protection enhancements and improvements in monitoring and diagnostics offered by new major versions. But any change has a risk, and a well-planned upgrade requires testing and preparation, which has costs. So seeking a middle ground is usually wise.

Note that it's usually harder to upgrade from a very old PostgreSQL to the latest, and it may need more planning, so it's wise to start planning well before the End of Life (EoL) date. That way you maintain continuous coverage for security updates and any serious bugs that may be discovered.

[...]
Category: postgresql

Pavel Golub: Keyword mess

2019-01-03(Thu) 17:59:48

Once I received an email with the question:
Can you tell me why I can’t select the column `references`?

=# \d v_table_relation
              View "public.v_table_relation"
   Column   |                Type                 | Collation | Nullable | Default
------------+-------------------------------------+-----------+----------+---------
 schema     | information_schema.sql_identifier   |           |          |
 table      | information_schema.sql_identifier   |           |          |
 columns    | information_schema.sql_identifier[] |           |          |
 references | jsonb[]                             |           |          |

=# select * from v_table_relation ;
  schema   |   table    |     columns     |                                                  references
-----------+------------+-----------------+--------------------------------------------------------------------------------------------------------------
 public    | a          | {b,c}           | {}
 public    | a2         | {b,c}           | {"{\"toTable\": \"a\", \"toSchema\": \"public\", \"toColumns\": [\"b\", \"c\"], \"fromColumns\": [\"b\", \"c\"]}"}
 workspace | t_employee | {id,name,state} | {}
(3 rows)

=# select references from v_table_relation;
ERROR: syntax error at or near "references"
LINE 1: select references from v_table_relation;


Well, the quick answer will be: because REFERENCES is a keyword, you should use double quotes around it, e.g.


=# select "references" from v_table_relation;

But why do you sometimes need to explicitly quote identifiers while at other times they work without quotes? And how do you know exactly which keywords are used in PostgreSQL? And which keywords may be used as object names? And so on…

As usual one may find the answers in the PostgreSQL manual:
www.postgresql.org/docs/current/sql-keywords-appendix.html

There we have a detailed table that lists all tokens that are key words in the SQL standard and in PostgreSQL. From the manual you will learn about reserved and non-reserved tokens, SQL standard compatibility, and much more that is out of the scope of this post.
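As a small illustration (not from the post itself), the same keyword list can also be queried from a running server with the pg_get_keywords() function:

SELECT word, catcode, catdesc
FROM pg_get_keywords()
WHERE word = 'references';
-- catcode 'R' marks fully reserved words, 'U' unreserved ones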

There are two more ways to know what tokens can or cannot be used in certain situations. First is for true programmers, and the second is for true admins. Choose your pill, Neo.

Investigating the sources

PostgreSQL uses an LALR(1)* parser to work with SQL statements. The implementation of the grammar (the parser itself) may be found in gram.y and impleme

[...]
Category: postgresql

Josh Williams: Switching PostgreSQL WAL-based Backup Options

2019-01-03(Thu) 09:00:00


Photo by Paul Hudson · CC BY 2.0, modified

I was woken up this morning. It happens every morning, true, but not usually by a phone call requesting help with a PostgreSQL database server that was running out of disk space.

It turns out that one of the scripts we're in the process of retiring, but still had in place, got stuck in a loop and filled most of the available space with partial, incomplete base backups. So, since I'm awake, I might as well talk about Postgres backup options. I don't mean for it to be a gripe session, but I'm tired and it kind of is.

For this particular app, since it resides partially on AWS, we looked specifically at options that are able to work natively with S3. We've currently settled on pgBackRest. There are a bunch of options out there, which doesn't make the choice easy. But I suppose that's the nature of things these days.

At first we’d tried out pghoard. It looks pretty good on the tin, especially with its ability to connect to multiple cloud storage services beyond S3: Azure, Google, Swift, etc. Having options is always nice. And for the most part it works well, apart from a couple idiosyncrasies.

We had the most trouble with the encryption feature. It didn’t have any problem on the encryption side. But for some reason on the restore the process would hang and eventually fail out without unpacking any data. Having a backup solution is a pretty important thing, but it doesn’t mean anything unless we can get the data back from it. So this was a bit of a sticking point. We probably could have figured out how to get it functioning, and at least been a good citizen and reported it upstream to get it resolved in the source. But we kind of just needed it working, and giving something else a shot is a quicker path to that goal. Sorry, pghoard devs.

The other idiosyncratic behavior that is probably worth mentioning is that it does its own scheduling. The base backups, for instance, happen at a fixed hour interval set in the configuration file, starting from when the service is first s

[...]
Category: postgresql

Craig Kerstiens: Fun with SQL: Self joins

2019-01-03(Thu) 00:46:00

Various families have various traditions in the US around Christmas time. Some will play games like white elephant where you get a mix of decent gifts as well as gag gifts… you then draw numbers and get to pick from existing presents that have been opened ("stealing" from someone else) or opening an unopened one. The game is entertaining both for trying to get something you want and for sticking Aunt Jennifer with the stuffed poop emoji with a Santa hat on it.

Other traditions are a bit simpler, one that my partner’s family follows is drawing names for one person you buy a gift for. This is nice because you can put a bit of effort into that one person without having to be too overwhelmed in tracking down things for multiple people. Each year we draw names for the next year. And by now you’re probably thinking what does any of this have to do with SQL? Well normally when we draw names we write them on a piece of paper, someone takes a picture, then that gets texted around to other family members. At least for me every October I’m scrolling back through text messages to try to recall who it was I’m supposed to buy for. This year I took a little time to put everyone’s name in a SQL database and write a simple query for easier recall.

A simple but workable schema

I could very much over-engineer this and record things like who has who on what year to see how many repeats happen, etc. I really just want an easy place to save the data and not forget. For that reason I’m simplifying the schema as much as possible:

CREATE TABLE giftnames ( id serial, name text, assignment int );

From here I populated family members names first:

INSERT INTO giftnames (name) VALUES ('Craig Kerstiens');
INSERT INTO giftnames (name) VALUES ('Aunt Jennifer');
INSERT INTO giftnames (name) VALUES ('Uncle Joe');

Then as we draw names of who each person gets I’m going to go through and update those values within the table as well:

UPDATE giftnames set assignment = 2 where id = 1;
UPDATE giftnames set assignment = 3 where id = 2;
UPDATE gift[...]
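The excerpt cuts off here, but a minimal sketch of the self join the title points toward (assuming the giftnames schema above) could look like this:

SELECT giver.name AS giver, receiver.name AS receiver
FROM giftnames AS giver
JOIN giftnames AS receiver ON giver.assignment = receiver.id;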
Category: postgresql

Bruce Momjian: Fourteen Authentication Methods

2019-01-02(Wed) 23:45:01

Postgres supports fourteen authentication methods — that might seem like a lot, but Postgres is used in many environments, and it has to support whatever methods are being used in those environments. The fourteen methods can seem confusing, but they are easier to understand in categories:

  • absolute (trust, reject): always allow or reject
  • password (SCRAM-SHA-256, MD5, password): compare a user-supplied password with something stored in the database
  • external password (LDAP, pam, RADIUS, BSD): compare to a password stored outside the database
  • trusted network (peer, ident): rely on the network connection to authenticate
  • trusted tokens (GSS, SSPI): use possession of a token generated by a trusted key distribution server
  • certificate authority (cert): use access to the private key of a certificate signed by a trusted certificate authority

So, there is one absolute class of authentication and five conditional ones.
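As a side note (not from the post), on PostgreSQL 10 and later a superuser can check which of these methods a given server is actually configured to use without opening pg_hba.conf:

SELECT line_number, type, database, user_name, auth_method
FROM pg_hba_file_rules;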

Category: postgresql

Nikolay Shaplov: Using vim for postgres development

2019-01-02(Wed) 01:07:32
I've written a wiki article about configuring vim for postgres development.

https://wiki.postgresql.org/wiki/Configuring_vim_for_postgres_development

I actually started to restore my vim configuration (I lost it some time ago) and found that this information is scattered across the web. It is better to keep it in one place.

I've described in the article all options and plugins that I used for pg development.

If you have your own vim experience and know some tools that can make development and following the code style easier, please add them there.

PS. I've already written about it to the mailing list. Just want to write about it in the blog too...

Category: postgresql

Viorel Tabara: An Overview of Just-in-Time Compilation (JIT) for PostgreSQL

2019-01-01(Tue) 19:58:00

Historically, PostgreSQL has provided compilation features in the form of ahead-of-time compilation for PL/pgSQL functions, and version 10 introduced expression compilation. None of those generate machine code, though.

JIT for SQL was discussed many years ago, and for PostgreSQL the feature is the result of a substantial code change.

To check whether a PostgreSQL binary was built with LLVM support, use the pg_config command to display the compile flags and look for --with-llvm in the output. Example for the PGDG RPM distribution:

omiday ~ $ /usr/pgsql-11/bin/pg_config --configure
'--enable-rpath' '--prefix=/usr/pgsql-11' '--includedir=/usr/pgsql-11/include' '--mandir=/usr/pgsql-11/share/man' '--datadir=/usr/pgsql-11/share' '--enable-tap-tests' '--with-icu' '--with-llvm' '--with-perl' '--with-python' '--with-tcl' '--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-gssapi' '--with-includes=/usr/include' '--with-libraries=/usr/lib64' '--enable-nls' '--enable-dtrace' '--with-uuid=e2fs' '--with-libxml' '--with-libxslt' '--with-ldap' '--with-selinux' '--with-systemd' '--with-system-tzdata=/usr/share/zoneinfo' '--sysconfdir=/etc/sysconfig/pgsql' '--docdir=/usr/pgsql-11/doc' '--htmldir=/usr/pgsql-11/doc/html' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'

Why LLVM JIT?

It all started about two years ago, as explained in Andres Freund's post, when expression evaluation and tuple deforming proved to be the roadblocks in speeding up large queries. After adding the JIT implementation, "expression evaluation itself is more than ten times faster than before" in Andres' words. Further, the Q&A section ending his post explains the choice of LLVM over other implementations

[...]
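The excerpt is truncated above; as a small hands-on sketch (not from the post), you can force JIT on a cheap query in PostgreSQL 11 and confirm it in the plan:

SET jit = on;
SET jit_above_cost = 0;   -- JIT everything, regardless of estimated cost
EXPLAIN (ANALYZE) SELECT sum(i) FROM generate_series(1, 1000000) AS s(i);
-- the plan output ends with a "JIT:" section listing compiled functions and timings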
Category: postgresql

REGINA OBE: Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum

2019-01-01(Tue) 15:48:00

In a previous article Using pg upgrade to upgrade PostGIS without installing older version I demonstrated a trick for upgrading to a newer PostgreSQL instance from PostGIS 2.2 - 2.whatever without having to install the older version. Unfortunately that trick does not work if coming from PostGIS 2.1, because in PostGIS 2.2 we renamed a C lib function that backed SQL functions in 2.1.

Fear not. There is still a way to upgrade from 2.1 to 2.5 without installing an older version of PostGIS in your new PostgreSQL instance. To do so, you need to add a step and that is to remove the functions in 2.1 that are backed by this renamed lib function. In upcoming PostGIS 3.0, we've added this function back and have it throw an error so that even coming from PostGIS 2.1, you can upgrade just the same as you do from later versions.

Continue reading "Using pg_upgrade to upgrade PostgreSQL 9.3 PostGIS 2.1 to PostgreSQL 11 2.5 on Yum"
Category: postgresql

Avinash Kumar: Great things that happened with PostgreSQL in the Year 2018

2019-01-01(Tue) 07:28:26

In this blog post, we’ll look back at what’s been going on in the world of PostgreSQL in 2018.

Before we start talking about the good things that have happened in PostgreSQL in 2018, we hope you had a wonderful year and we wish you a happy and prosperous 2019.

PostgreSQL has been a choice for those who are looking for a completely community-driven open source database that is feature-rich and extensible. We have seen tremendously great things happening in PostgreSQL for many years, with 2018 being a prime example. As the DB-Engines rankings show, PostgreSQL topped the chart for growth in popularity in 2018 compared to other databases.

PostgreSQL adoption growth has been increasing year over year, and 2018 has again been one such year as we can see.

Let’s start with a recap of some of the great PostgreSQL events, and look at what we should take away from 2018 in the PostgreSQL space.

PostgreSQL 11 Released

PostgreSQL 11 was a release that incorporated a lot of features offered in commercial database software governed by an enterprise license. For example, there are times when you are required to enforce the handling of embedded transactions inside a stored procedure in your application code. There are also times when you wish to partition a table with foreign keys or use hash partitioning. This used to require workarounds. The release of PostgreSQL 11 covers these scenarios.
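For instance, a minimal sketch of the new embedded transaction control (the table and procedure names here are made up for illustration):

CREATE TABLE batch_log (i int);

CREATE PROCEDURE load_batches() LANGUAGE plpgsql AS $$
BEGIN
  FOR i IN 1..10 LOOP
    INSERT INTO batch_log VALUES (i);
    COMMIT;   -- committing inside a procedure is new in PostgreSQL 11
  END LOOP;
END $$;

CALL load_batches();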

There were many other add-ons as well, such as Just-In-Time compilation, improved query parallelism, partition elimination, etc. You can find out more in our blog post here, or in the PostgreSQL 11 release notes (if you have not seen them already). Special thanks to everyone involved in such a vibrant PostgreSQL release.

End of Life for PostgreSQL 9.3

9.3.25 (released on November 8, 2018) was the last minor release for PostgreSQL 9.3. There will be no more community-supported minor releases for 9.3. If you are still using PostgreSQL 9.3 (or an earlier major release), it is the

[...]
Category: postgresql

David Fetter: psql: A New Edit

2019-01-01(Tue) 04:44:00
Have you ever found yourself in the middle of a long statement in psql and wanted to pull up your favorite editor? Now, you can, using the same shortcut of control-x control-e that you would in bash!

Here's how:

$EDITOR ~/.inputrc

Now add the following lines:

$if psql
    "\C-x\C-e": "\C-e\\e\C-m"
$endif

...save the file, and you're good to go.
Category: postgresql

Bruce Momjian: The Future of Major Version Upgrades

2019-01-01(Tue) 02:45:01

Pg_upgrade has matured to become a popular method to perform major version upgrades. This email thread considers what better upgrades would look like. Options include:

  • Migrating analyze statistics
  • Logical replication (requires double the disk space, managing schema changes)
  • Multi-master replication (similar to previous, but also requires conflict resolution)
  • Allowing the new server to read the old server's data files

Upgrade improvements have to be significantly better in at least one of these measurements: faster, easier, more reliable, less required storage; and not significantly worse in any of them. For example, a solution that is 50% faster but more complex or less reliable will struggle to gain acceptance. Of course, if a solution is one hundred times faster, it can be worse in some of those areas.

Category: postgresql

Brian Fehrle: PostgreSQL Replication for Disaster Recovery

2018-12-31(Mon) 19:58:00

With Disaster Recovery, we aim to set up systems to handle anything that could go wrong with our database. What happens if the database crashes? What if a developer accidentally truncates a table? What if we find out some data was deleted last week but we didn't notice it until today? These things happen, and having a solid plan and system in place will make the DBA look like a hero when a disaster rears its ugly head and everyone else's hearts have already stopped.

Any database that has any sort of value should have a way to implement one or more Disaster Recovery options. PostgreSQL has a very solid replication system built in, and it is flexible enough to be set up in many configurations to aid with Disaster Recovery, should anything go wrong. We'll focus on scenarios like those raised above, how to set up our Disaster Recovery options, and the benefits of each solution.

High Availability

With streaming replication in PostgreSQL, High Availability is simple to set up and maintain. The goal is to provide a failover site that can be promoted to master if the main database goes down for any reason, such as hardware failure, software failure, or even network outage. Hosting a replica on another host is great, but hosting it in another data center is even better.

For specifics for setting up streaming replication, Severalnines has a detailed deep dive available here. The official PostgreSQL Streaming Replication Documentation has detailed information on the streaming replication protocol and how it all works.

A standard setup will look like this: a master database accepting read/write connections, with a replica database receiving all WAL activity in near real time and replaying all data change activity locally.

Standard Streaming Replication with PostgreSQL
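Once such a setup is in place, a quick sanity check from SQL (a sketch, assuming streaming replication is already configured) looks like this:

-- on the master: list connected standbys and how far they have replayed
SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;

-- on the replica: confirm it is in recovery and see the last replayed position
SELECT pg_is_in_recovery(), pg_last_wal_replay_lsn();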

When the master database becomes unusable, a failover procedure is initiated to bring it offline and promote the replica database to master, then point all connections to the newly promoted host. This can be done by either reconfiguring a load balancer, appli

[...]
Category: postgresql

Bruce Momjian: Zero Downtime Pg_upgrade

2018-12-29(Sat) 00:45:01

Pg_upgrade can upgrade a multi-terabyte system in 3-5 minutes using link mode. There are some things that can be done to make it faster — particularly, migration of analyze statistics from the old to new clusters.

However, even if pg_upgrade could perform an upgrade in zero time, would that be a zero-downtime upgrade? I am not sure, since my previous blog post explains that the work of switching clients from the old cluster to the new one seems to be downtime in the sense that running sessions are either terminated and restarted, or new connections must wait for old connections to complete. Multi-master replication seems to be unique in offering a way for new sessions to start on the new cluster while old sessions finish on the old cluster, but unfortunately it adds complexity.

Category: postgresql

Michael Paquier: ON COMMIT actions with inheritance and partitions

2018-12-28(Fri) 12:45:08

The following bug fix has been committed to the PostgreSQL code tree, addressing an issue present since ON COMMIT support for CREATE TABLE was added back in commit ebb5318 from 2002:

commit: 319a8101804f3b62512fdce1a3af1c839344b593
author: Michael Paquier <michael@paquier.xyz>
date: Fri, 9 Nov 2018 10:03:22 +0900

Fix dependency handling of partitions and inheritance for ON COMMIT

This commit fixes a set of issues with ON COMMIT actions when used on partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a table with inheritance children caused a failure at commit time, with complains about the children being already dropped as all relations are dropped one at the same time.
- Applying ON COMMIT DELETE on a partition relying on a partitioned table which uses ON COMMIT DROP would cause the partition truncation to fail as the parent is removed first.

The solution to the first problem is to handle the removal of all the dependencies in one go instead of dropping relations one-by-one, based on a suggestion from Álvaro Herrera. So instead all the relation OIDs to remove are gathered and then processed in one round of multiple deletions.

The solution to the second problem is to reorder the actions, with truncation happening first and relation drop done after. Even if it means that a partition could be first truncated, then immediately dropped if its partitioned table is dropped, this has the merit to keep the code simple as there is no need to do existence checks on the relations to drop. Contrary to a manual TRUNCATE on a partitioned table, ON COMMIT DELETE does not cascade to its partitions. The ON COMMIT action defined on each partition gets the priority.

Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10

For beginners, ON COMMIT actions can be defined as part of CREATE TABLE on a temporary table t

[...]
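The excerpt stops above, but a minimal illustration of the ON COMMIT actions being discussed (not taken from the post) is:

BEGIN;
CREATE TEMP TABLE tab_drop (a int) ON COMMIT DROP;
CREATE TEMP TABLE tab_delete (a int) ON COMMIT DELETE ROWS;
INSERT INTO tab_delete VALUES (1);
COMMIT;
-- tab_drop is gone after commit; tab_delete still exists but is now empty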
Category: postgresql

Dan Langille: procedure OK under 10.6 but starts acting up under 11.1

2018-12-28(Fri) 06:50:35
I originally documented this so I would have a reference while tracking down the problem. I have since solved the issue and you can jump to the bottom of this post to find the solution. What I do not know is why this was not an issue under PostgreSQL 10.6 but was under PostgreSQL 11.1. [...]
Category: postgresql

Claire Giordano: The perks of sharing your Citus open source stories

2018-12-27(Thu) 19:15:00

Most of us who work with open source like working with open source. You get to build on what’s already been built, and you get to focus on inventing new solutions to new problems instead of reinventing the wheel on each project. Plus you get to share your work publicly (which can improve the state of the art in the industry) and you get feedback from developers outside your company. Hiring managers give it a +1 too, since sharing your code will sometimes trigger outside interest in what you’re doing and can be a big boon for recruiting. After all “smart people like to hang out with smart people”.

Open source downloads make it easy to try out new software

One of the (countless) benefits of working with open source is that it’s so much easier to try things out. Even at four o’clock in the morning, when the rest of the world seems like they’re asleep. We’ve come a long way from the years when the only way to try out new software was to secure an enterprise “try & buy” license: through a salesperson, during business hours, and only after you were done playing an annoying game of phone tag.

Today, when you’re hunting for a new way to solve a problem and you want to try out a new technology, that ability to download open source packages and be up and running in minutes takes a lot of friction out of the process.

Sharing your stories about how & what you did contributes to the state of the art, too

And the transparency of the open source culture goes beyond the sharing of source code. Being transparent about both the good and the bad of working with open source can help to promote best practices as well as help to make things better. Lots of you also share your stories about how you solved a problem, built a thing, or achieved an order-of-magnitude efficiency gain. Whether by conference talk, case study interview, or blog post, we love it when users and customers of the Citus database share their stories about what their challenges were and how they solved their problems.

A conference talk from Microsoft: Citus and Postgres [...]
Category: postgresql

Kaarel Moppel: Unearthing some hidden PostgreSQL 11 gems

2018-12-27(Thu) 17:59:22

It's already been a bit over a month since the last Postgres major version was released (and the 1st minor update is out as well), so it's not exactly fresh out of the oven…and as usual there has already been a barrage of articles on the most prominent features. Which is great, as I can save some keyboard strokes on those. But there are of course some other little gems that didn't get any spotlight (by the way, the release notes are about 13 pages long, so a lot of stuff!)…and I luckily had some "aluminum tube time" to take a second look at some of the more interesting/useful little features and improvements. So here are my findings (somewhat logically grouped).

General
  • Add column pg_stat_activity.backend_type

This makes it possible to report the "connected/active users" correctly, as a plain "count(*)" could lie due to the "parallel query" features added two versions ago.
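A sketch of what that looks like in practice (counting only real client sessions):

SELECT count(*) FROM pg_stat_activity WHERE backend_type = 'client backend';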

  • Allow ALTER TABLE to add a column with a non-null default without doing a table rewrite

Adding new columns with DEFAULT values to large and active tables is a classical beginner mistake, basically halting operation. But no more – Postgres is now a lot more beginner-friendly!
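So something like the following (big_table is just a placeholder name) no longer rewrites the whole table on PostgreSQL 11:

ALTER TABLE big_table ADD COLUMN flag boolean NOT NULL DEFAULT false;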

Performance
  • Fill the unused portion of force-switched WAL segment files with zeros for improved compressibility

My favourite low-hanging fruit from this release…and I wonder why it took so long. Why is it cool? Well, one can now set the "archive_timeout" so low that RPOs of 1 second (lower values are currently not allowed) are doable without a replica, with plain WAL-shipping. A great alternative for semi-important systems where losing the last couple of records is not so tragic.

  • Improve performance of monotonically increasing index additions

The most common use case for "serial" ID columns got a 30-50% boost!

  • Allow bitmap scans to perform index-only scans when possible

Bitmap index scans (which are quite different from normal index scans) could get quite costly, so this is very good news.

  • Add support for large pages on Windows

Mostly known as “huge pages”, this feature (at leas

[...]
Category: postgresql

Bruce Momjian: Threaded Postgres

2018-12-26(Wed) 23:15:01

This amazing work by Konstantin Knizhnik produced some experimental numbers on the benefits of moving Postgres from process forking to threading. (The much slower CreateProcess is used on Windows.)

His proof-of-concept showed that you have to get near 100 simultaneous queries before you start to see benefits. A few conclusions from the thread are that threading Postgres would open up opportunities for much simpler optimizations, particularly in parallel query and perhaps a built-in connection pooler. The downside is that some server-side languages like PL/Perl and PL/Python have interpreters that cannot be loaded multiple times into the same executable, making them of limited use in a threaded database server. Languages like PL/Java, that are made to run multiple threads safely, would benefit from threaded Postgres.

Category: postgresql

Bruce Momjian: Why Use Multi-Master?

2018-12-24(Mon) 23:15:01

Multi-master replication sounds great when you first hear about it — identical data is stored on more than one server, and you can query any server. What's not to like? Well, there is actually quite a bit not to like, but it isn't obvious. The crux of the problem relates to the write nature of databases. If this was a web server farm serving static data, using multiple web servers to handle the load is easily accomplished. However, databases, because they are frequently modified, make multi-master configurations problematic.

For example, how do you want to handle a write to one of the database servers in a multi-master setup? Do you lock rows on the other servers before performing the write to make sure they don't make similar conflicting writes (synchronous), or do you tell them later and programmatically or administratively deal with write conflicts (asynchronous)? Locking remote rows before local writes can lead to terrible performance, and telling them later means your data is inconsistent and conflicts need to be resolved.

In practice, few people use synchronous multi-master setups — the slowdown is too dramatic, and the benefits of being able to write to multiple servers are minimal. Remember all the data still must be written to all the servers, so there is no write-scaling benefit. (Read load balancing can be accomplished with streaming replication and Pgpool-II.)

Continue Reading »

Category: postgresql

Pierre-Emmanuel André: Setup a PostgreSQL cluster with repmgr and pgbouncer

2018-12-22(Sat) 09:32:06
Setup a PostgreSQL cluster with repmgr and pgbouncer

Recently I had to set up a PostgreSQL cluster, and one of the prerequisites was to use repmgr.

In this post, I will explain the work I did and how to set up this kind of cluster.

Category: postgresql
