Feed Aggregator

Avinash Kumar: Great things that happened with PostgreSQL in the Year 2018

planet postgresql - 2019-01-01 (Tue) 07:28:26

In this blog post, we’ll look back at what’s been going on in the world of PostgreSQL in 2018.

Before we start talking about the good things that happened in PostgreSQL in 2018, we hope you had a wonderful year, and we wish you a happy and prosperous 2019.

PostgreSQL has been a choice for those who are looking for a completely community-driven open source database that is feature-rich and extensible. We have seen tremendously great things happening in PostgreSQL for many years, with 2018 being a prime example. As you can see in the following snippet from the DB-Engines rankings, PostgreSQL topped the chart for growth in popularity in 2018 compared to other databases.

PostgreSQL adoption has been growing year over year, and as we can see, 2018 was another such year.

Let’s start with a recap of some of the great PostgreSQL events, and look at what we should take away from 2018 in the PostgreSQL space.

PostgreSQL 11 Released

PostgreSQL 11 was a release that incorporated a lot of features otherwise offered only in commercial database software governed by an enterprise license. For example, there are times when you need transaction control inside a stored procedure called from your application code. There are also times when you wish to partition a table that has foreign keys, or to use hash partitioning. These scenarios used to require workarounds; the release of PostgreSQL 11 covers them.

There were many other additions as well, such as Just-In-Time compilation, improved query parallelism, partition elimination, etc. You can find out more in our blog post here, or in the PostgreSQL 11 release notes (if you have not seen them already). Special thanks to everyone involved in such a vibrant PostgreSQL release.
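As a minimal sketch of the two features mentioned above (the table and procedure names here are hypothetical, not taken from the release notes):

CREATE TABLE measurements (id bigint NOT NULL, payload text)
    PARTITION BY HASH (id);
CREATE TABLE measurements_p0 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

CREATE PROCEDURE batch_load()
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO measurements VALUES (1, 'one');
    COMMIT;  -- transaction control inside a procedure, new in PostgreSQL 11
    INSERT INTO measurements VALUES (2, 'two');
END;
$$;

CALL batch_load();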

End of Life for PostgreSQL 9.3

9.3.25, released on November 8, 2018, was the last minor release of PostgreSQL 9.3. There will be no more minor releases of 9.3 supported by the community. If you are still using PostgreSQL 9.3 (or a major release earlier than 9.3), it is the

[...]
Category: postgresql

David Fetter: psql: A New Edit

planet postgresql - 2019-01-01 (Tue) 04:44:00
Have you ever found yourself in the middle of a long statement in psql and wanted to pull up your favorite editor? Now you can, using the same control-x control-e shortcut that you would use in bash!

Here's how:

$EDITOR ~/.inputrc

Now add the following lines:

$if psql
    "\C-x\C-e": "\C-e\\e\C-m"
$endif

...save the file, and you're good to go.
Category: postgresql

Bruce Momjian: The Future of Major Version Upgrades

planet postgresql - 2019-01-01 (Tue) 02:45:01

Pg_upgrade has matured into a popular method for performing major version upgrades. This email thread considers what better upgrades would look like. Options include:

  • Migrating analyze statistics
  • Logical replication (requires double the disk space, managing schema changes)
  • Multi-master replication (similar to previous, but also requires conflict resolution)
  • Allowing the new server to read the old server's data files

Upgrade improvements have to be significantly better in at least one of these measurements: faster, easier, more reliable, or less required storage, while not being significantly worse in any of them. For example, a solution that is 50% faster but more complex or less reliable will find it hard to gain acceptance. Of course, if a solution is one hundred times faster, it can afford to be worse in some of those areas.

Category: postgresql

Brian Fehrle: PostgreSQL Replication for Disaster Recovery

planet postgresql - 2018-12-31 (Mon) 19:58:00

With Disaster Recovery, we aim to set up systems to handle anything that could go wrong with our database. What happens if the database crashes? What if a developer accidentally truncates a table? What if we find out some data was deleted last week but we didn’t notice it until today? These things happen, and having a solid plan and system in place will make the DBA look like a hero when a disaster rears its ugly head and everyone else’s heart has already stopped.

Any database that has any sort of value should have one or more Disaster Recovery options in place. PostgreSQL has a very solid replication system built in, and it is flexible enough to be set up in many configurations to aid with Disaster Recovery, should anything go wrong. We’ll focus on scenarios like the ones raised above, how to set up our Disaster Recovery options, and the benefits of each solution.

High Availability

With streaming replication in PostgreSQL, High Availability is simple to set up and maintain. The goal is to provide a failover site that can be promoted to master if the main database goes down for any reason, such as hardware failure, software failure, or even network outage. Hosting a replica on another host is great, but hosting it in another data center is even better.

For specifics on setting up streaming replication, Severalnines has a detailed deep dive available here. The official PostgreSQL Streaming Replication Documentation has detailed information on the streaming replication protocol and how it all works.

A standard setup looks like this: a master database accepting read/write connections, and a replica database receiving all WAL activity in near real-time, replaying all data change activity locally.

Standard Streaming Replication with PostgreSQL
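As a rough sketch of such a setup (the host name and user are hypothetical; this reflects the PostgreSQL 11-era configuration, where the replica still uses a recovery.conf file, and the master also needs a replication entry in pg_hba.conf):

On the master, in postgresql.conf:

wal_level = replica
max_wal_senders = 5

On the replica, in recovery.conf:

standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'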

When the master database becomes unusable, a failover procedure is initiated to bring it offline and promote the replica database to master, after which all connections are pointed to the newly promoted host. This can be done by either reconfiguring a load balancer, appli

[...]
Category: postgresql

Bruce Momjian: Zero Downtime Pg_upgrade

planet postgresql - 2018-12-29 (Sat) 00:45:01

Pg_upgrade can upgrade a multi-terabyte system in 3-5 minutes using link mode. There are some things that can be done to make it faster — particularly, migration of analyze statistics from the old cluster to the new one.
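For reference, a link-mode invocation looks roughly like this (a sketch; the binary and data directories are hypothetical):

$ pg_upgrade --link \
      -b /usr/pgsql-10/bin  -B /usr/pgsql-11/bin \
      -d /var/lib/pgsql/10/data  -D /var/lib/pgsql/11/data

Link mode hard-links the data files into the new cluster instead of copying them, which is what makes multi-terabyte upgrades this fast.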

However, even if pg_upgrade could perform an upgrade in zero time, would that be a zero-downtime upgrade? I am not sure, since my previous blog post explains that the work of switching clients from the old cluster to the new one seems to be downtime: running sessions are either terminated and restarted, or new connections must wait for old connections to complete. Multi-master replication seems to be unique in offering a way for new sessions to start on the new cluster while old sessions finish on the old cluster, but unfortunately it adds complexity.

Category: postgresql

Michael Paquier: ON COMMIT actions with inheritance and partitions

planet postgresql - 2018-12-28 (Fri) 12:45:08

The following bug fix has been committed to the PostgreSQL code tree, addressing an issue present ever since ON COMMIT support for CREATE TABLE was added back in commit ebb5318 from 2002:

commit: 319a8101804f3b62512fdce1a3af1c839344b593
author: Michael Paquier <michael@paquier.xyz>
date: Fri, 9 Nov 2018 10:03:22 +0900

Fix dependency handling of partitions and inheritance for ON COMMIT

This commit fixes a set of issues with ON COMMIT actions when used on
partitioned tables and tables with inheritance children:
- Applying ON COMMIT DROP on a partitioned table with partitions or on a
  table with inheritance children caused a failure at commit time, with
  complaints about the children being already dropped as all relations
  are dropped at the same time.
- Applying ON COMMIT DELETE on a partition relying on a partitioned
  table which uses ON COMMIT DROP would cause the partition truncation
  to fail as the parent is removed first.

The solution to the first problem is to handle the removal of all the
dependencies in one go instead of dropping relations one-by-one, based
on a suggestion from Álvaro Herrera. So instead all the relation OIDs to
remove are gathered and then processed in one round of multiple
deletions. The solution to the second problem is to reorder the actions,
with truncation happening first and relation drop done after. Even if it
means that a partition could be first truncated, then immediately
dropped if its partitioned table is dropped, this has the merit to keep
the code simple as there is no need to do existence checks on the
relations to drop. Contrary to a manual TRUNCATE on a partitioned table,
ON COMMIT DELETE does not cascade to its partitions. The ON COMMIT
action defined on each partition gets the priority.

Author: Michael Paquier
Reviewed-by: Amit Langote, Álvaro Herrera, Robert Haas
Discussion: https://postgr.es/m/68f17907-ec98-1192-f99f-8011400517f5@lab.ntt.co.jp
Backpatch-through: 10
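As a minimal sketch of the failing scenarios described above (table names are hypothetical; declarative partitioning requires PostgreSQL 10 or newer):

BEGIN;
CREATE TEMP TABLE parent (id int)
    PARTITION BY RANGE (id)
    ON COMMIT DROP;
CREATE TEMP TABLE child
    PARTITION OF parent FOR VALUES FROM (1) TO (10)
    ON COMMIT DELETE ROWS;
INSERT INTO parent VALUES (5);
COMMIT;  -- before the fix, the commit-time cleanup could fail here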

For beginners, ON COMMIT actions can be defined as part of CREATE TABLE on a temporary table t

[...]
Category: postgresql

Dan Langille: procedure OK under 10.6 but starts acting up under 11.1

planet postgresql - 2018-12-28 (Fri) 06:50:35
I originally documented this so I would have a reference while tracking down the problem. I have since solved the issue and you can jump to the bottom of this post to find the solution. What I do not know is why this was not an issue under PostgreSQL 10.6 but was under PostgreSQL 11.1. [...]
Category: postgresql

Claire Giordano: The perks of sharing your Citus open source stories

planet postgresql - 2018-12-27 (Thu) 19:15:00

Most of us who work with open source like working with open source. You get to build on what’s already been built, and you get to focus on inventing new solutions to new problems instead of reinventing the wheel on each project. Plus, you get to share your work publicly (which can improve the state of the art in the industry), and you get feedback from developers outside your company. Hiring managers give it a +1 too, since sharing your code will sometimes trigger outside interest in what you’re doing and can be a big boon for recruiting. After all, “smart people like to hang out with smart people”.

Open source downloads make it easy to try out new software

One of the (countless) benefits of working with open source is that it’s so much easier to try things out, even at four o’clock in the morning, when the rest of the world seems to be asleep. We’ve come a long way from the years when the only way to try out new software was to secure an enterprise “try & buy” license: through a salesperson, during business hours, and only after you were done playing an annoying game of phone tag.

Today, when you’re hunting for a new way to solve a problem and you want to try out a new technology, that ability to download open source packages and be up and running in minutes takes a lot of friction out of the process.

Sharing your stories about how & what you did contributes to the state of the art, too

And the transparency of the open source culture goes beyond the sharing of source code. Being transparent about both the good and the bad of working with open source helps promote best practices and makes things better. Lots of you also share your stories about how you solved a problem, built a thing, or created an order-of-magnitude efficiency gain. Whether by conference talk, case study interview, or blog post, we love it when users and customers of the Citus database share their stories about what their challenges were and how they solved their problems.

A conference talk from Microsoft: Citus and Postgres [...]
Category: postgresql

Kaarel Moppel: Unearthing some hidden PostgreSQL 11 gems

planet postgresql - 2018-12-27 (Thu) 17:59:22

It’s already been a bit over a month since the last Postgres major version was released (and the first minor update is also out), so it’s not exactly fresh out of the oven… and as usual there has already been a barrage of articles on the most prominent features, which is great, as I can save some keyboard strokes on those. But there are of course some other little gems that didn’t get any spotlight (by the way, the release notes are about 13 pages long, so there’s a lot of stuff!)… and luckily I now had some “aluminum tube time” to take a second look at some of the more interesting/useful little features and improvements. So here are my findings (somewhat logically grouped).

General
  • Add column pg_stat_activity.backend_type

This makes it possible to report the “connected/active users” correctly, as a plain count(*) could lie due to the “parallel query” features added two versions ago.
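For example, a more accurate session count might look like this sketch, filtering on the new column:

SELECT count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend';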

  • Allow ALTER TABLE to add a column with a non-null default without doing a table rewrite

Adding new columns with DEFAULT values to large and active tables is a classic beginner mistake, basically halting operations. But no more: Postgres is now a lot more beginner-friendly!
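A sketch of what is now a metadata-only change in PostgreSQL 11 (the table name is hypothetical):

ALTER TABLE big_and_busy_table
    ADD COLUMN is_archived boolean NOT NULL DEFAULT false;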

Performance
  • Fill the unused portion of force-switched WAL segment files with zeros for improved compressibility

My favourite low-hanging fruit from this release… and I wonder why it took so long. Why is it cool? Well, one can now set archive_timeout so low that an RPO of 1 second (lower values are currently not allowed) is doable without a replica, with plain WAL shipping. A great alternative for semi-important systems where losing a couple of the last records is not so tragic.
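A sketch of the relevant postgresql.conf settings (the archive command and path are hypothetical; the zero-filled segments are what make the compression step effective):

archive_mode = on
archive_timeout = 1
archive_command = 'gzip < %p > /mnt/wal_archive/%f.gz'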

  • Improve performance of monotonically increasing index additions

The most common use case, “serial” ID columns, got a 30-50% boost!

  • Allow bitmap scans to perform index-only scans when possible

Bitmap index scans (which are quite different from normal index scans) could get quite costly, so this is very good news.

  • Add support for large pages on Windows

Mostly known as “huge pages”, this feature (at leas

[...]
Category: postgresql

Bruce Momjian: Threaded Postgres

planet postgresql - 2018-12-26 (Wed) 23:15:01

This amazing work by Konstantin Knizhnik produced some experimental numbers on the benefits of moving Postgres from process forking to threading. (The much slower CreateProcess is used on Windows.)

His proof-of-concept showed that you have to get near 100 simultaneous queries before you start to see benefits. A few conclusions from the thread are that threading Postgres would open up opportunities for much simpler optimizations, particularly in parallel query and perhaps a built-in connection pooler. The downside is that some server-side languages like PL/Perl and PL/Python have interpreters that cannot be loaded multiple times into the same executable, making them of limited use in a threaded database server. Languages like PL/Java, that are made to run multiple threads safely, would benefit from threaded Postgres.

Category: postgresql

Bruce Momjian: Why Use Multi-Master?

planet postgresql - 2018-12-24 (Mon) 23:15:01

Multi-master replication sounds great when you first hear about it — identical data is stored on more than one server, and you can query any server. What's not to like? Well, there is actually quite a bit not to like, but it isn't obvious. The crux of the problem relates to the write nature of databases. If this were a web server farm serving static data, using multiple web servers to handle the load would be easily accomplished. However, because databases are frequently modified, multi-master configurations are problematic.

For example, how do you want to handle a write to one of the database servers in a multi-master setup? Do you lock rows on the other servers before performing the write to make sure they don't make similar conflicting writes (synchronous), or do you tell them later and programmatically or administratively deal with write conflicts (asynchronous)? Locking remote rows before local writes can lead to terrible performance, and telling them later means your data is inconsistent and conflicts need to be resolved.

In practice, few people use synchronous multi-master setups — the slowdown is too dramatic, and the benefit of being able to write to multiple servers is minimal. Remember, all the data still must be written to all the servers, so there is no write-scaling benefit. (Read load balancing can be accomplished with streaming replication and Pgpool-II.)

[...]

Category: postgresql

Pierre-Emmanuel André: Setup a PostgreSQL cluster with repmgr and pgbouncer

planet postgresql - 2018-12-22 (Sat) 09:32:06

Recently I had to set up a PostgreSQL cluster, and one of the prerequisites was to use repmgr.

In this post, I will explain the work I did and how to set up this kind of cluster.

Category: postgresql

Avinash Kumar: Backup and Restore a PostgreSQL Cluster With Multiple Tablespaces Using pg_basebackup

planet postgresql - 2018-12-22 (Sat) 02:35:59

pg_basebackup is a widely used PostgreSQL backup tool that allows us to take an ONLINE and CONSISTENT file-system-level backup. These backups can be used for point-in-time recovery or to set up a slave/standby. You may want to refer to our previous blog posts, PostgreSQL Backup Strategy, Streaming Replication in PostgreSQL, and Faster PITR in PostgreSQL, where we describe how we used pg_basebackup for different purposes. In this post, I’ll demonstrate the steps to restore a backup taken using pg_basebackup when we have many tablespaces that store databases or their underlying objects.

A simple backup can be taken using the following syntax.

Tar and Compressed Format:

$ pg_basebackup -h localhost -p 5432 -U postgres -D /backupdir/latest_backup -Ft -z -Xs -P

Plain Format:

$ pg_basebackup -h localhost -p 5432 -U postgres -D /backupdir/latest_backup -Fp -Xs -P

Using the tar and compressed format is advantageous when you wish to use less disk space to back up and store all tablespaces, the data directory, and WAL segments, with everything in just one directory (the backup target directory).

A plain format backup, on the other hand, stores a copy of the data directory as-is in the target directory. When you have one or more non-default tablespaces, the tablespaces are stored in separate directories, usually the same as their original locations, unless you use --tablespace-mapping to modify the destination for storing the tablespaces backup.
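For example, a sketch of remapping a tablespace while taking a plain-format backup (the paths are hypothetical):

$ pg_basebackup -h localhost -p 5432 -U postgres -D /backupdir/latest_backup -Fp -Xs -P \
      --tablespace-mapping=/var/lib/pgsql/ts_data1=/backupdir/ts_data1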

PostgreSQL supports the concept of tablespaces. In simple words, a tablespace helps us maintain multiple locations to scatter databases or their objects. In this way, we can distribute the IO and balance the load across multiple disks.

To understand what happens when we backup a PostgreSQL cluster that contains multiple tablespaces, let’s consider the following example. We’ll take these steps:

  • Create two tablespaces in an existing master-slave replication setup (a sketch follows this list).
  • Take a backup and see what is inside the backup directory.
  • Restore the backup.
  • Conclude our findings.
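A sketch of the first step, creating the tablespaces (names and locations are hypothetical; the directories must already exist and be owned by the postgres user):

CREATE TABLESPACE ts_data1 LOCATION '/mnt/disk1/pg_ts_data1';
CREATE TABLESPACE ts_data2 LOCATION '/mnt/disk2/pg_ts_data2';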
Create 2 tablespaces and take a backup (ta[...]
Category: postgresql

Bruce Momjian: Compiled PL/pgSQL?

planet postgresql - 2018-12-22 (Sat) 01:12:18

PL/pgSQL aligns well with SQL. When first executed in a session, the PL/pgSQL source is compiled to an abstract syntax tree, which is then executed every time the PL/pgSQL function is called in that session. Other languages have different compile behavior:

  • PL/Perl compiles to bytecode on first call, similar to PL/pgSQL
  • PL/Python loads pyc bytecode files
  • SPI loads machine-language instruction files (object files)

This email thread covers some of the details. Keep in mind that most server-side functions spend the majority of their time running SQL queries, so the method of compilation is often insignificant.

Category: postgresql

Hans-Juergen Schoenig: A PostgreSQL story about “NULL IS NULL = NULL AND NOT NULL”

planet postgresql - 2018-12-21 (Fri) 17:59:22

After years of software development, some might still wonder: What is a NULL value? What does it really mean, and what is its purpose? The general rule is: NULL basically means “undefined”. Many books state that NULL means “empty”, but I think that is not the ideal way to see things: if your wallet is empty, your financial situation is perfectly defined (= you are broke). But “undefined” is different. It means that we don’t know the value. If we don’t know how much cash you have, you might still be a millionaire. So using the word “unknown” to describe NULL in SQL is really better than using the word “empty”, which can be pretty misleading in my judgement.

NULL values in PostgreSQL: Basic rules

First of all: NULL is a super useful thing in SQL, and people should be aware of the details associated with it. Before digging deeper into NULL, it is necessary to take a look at the most basic rules. The following example shows a mistake commonly made by many developers:

test=# SELECT 10 = NULL;
 ?column?
----------

(1 row)

Many people assume that the output of this query is “false”, which is not correct. The result is NULL. Why is that? Suppose you have 10 bucks in your left pocket, and nobody knows how much cash you have in your right pocket. Is the amount of cash in both pockets the same? We don’t know. It might very well be so, but we simply don’t know. Thus the result of this query has to be NULL.

Let us try something else:

test=# SELECT NULL = NULL;
 ?column?
----------

(1 row)

The same is true for this query. The result has to be NULL. We don’t know how much cash is in your left pocket, and we have no idea how much there is in your right one. Is it identical? Again: we have absolutely no idea – the result is undefined.

To figure out whether a value actually is NULL, we have to use the following syntax:

test=# SELECT NULL IS NULL;
 ?column?
----------
 t
(1 row)

In this case the result is true because “IS” actually checks whether the value is indeed NULL. Consequently, the next query is going to return false:

test=[...]
Category: postgresql

Michael Paquier: Postgres 12 highlight - wal_sender_timeout now user-settable

planet postgresql - 2018-12-21 (Fri) 14:04:27

The following commit has reached PostgreSQL 12, which brings more flexibility in managing replication with standbys distributed geographically:

commit: db361db2fce7491303f49243f652c75c084f5a19
author: Michael Paquier <michael@paquier.xyz>
date: Sat, 22 Sep 2018 15:23:59 +0900

Make GUC wal_sender_timeout user-settable

Being able to use a value that can be changed on a connection basis is
useful with clusters distributed geographically, and makes failure
detection more flexible. A note is added in the documentation about the
use of "options" in primary_conninfo, which can be hard to grasp for
newcomers with the need of two single quotes when listing a set of
parameters.

Author: Tsunakawa Takayuki
Reviewed-by: Masahiko Sawada, Michael Paquier
Discussion: https://postgr.es/m/0A3221C70F24FB45833433255569204D1FAAD3AE@G01JPEXMBYT05

For some deployments, it matters to be able to change wal_sender_timeout depending on the standby and the latency with its primary (or another standby when dealing with a cascading instance). For example, a shorter timeout for a standby close to its primary allows faster problem detection and failover, while a longer timeout can be helpful for a standby in a remote location to judge its health correctly. In Postgres 11 and older versions (ever since wal_sender_timeout was introduced in 9.1), this parameter could only be set at server level, being marked as PGC_SIGHUP in its GUC properties. Changing the value of this parameter does not need an instance restart, and the new value can be reloaded to all the sessions connected, including WAL senders.
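As a sketch of the per-standby usage this enables, set from the standby’s primary_conninfo (the host and user are hypothetical; note the doubled single quotes mentioned in the commit message, and that the value is in milliseconds):

primary_conninfo = 'host=primary.example.com user=replicator options=''-c wal_sender_timeout=5000'''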

The thread related to the above commit also discussed whether this parameter should be changed into a backend-level parameter, which would have the following properties:

  • Reload does not work on it. Once this parameter is changed at connection time it can never change.
  • Changing this parameter at server level will make all new connections use the new value.
  • Role-level configuration is not possible.

Still, for default values, it is a h

[...]
Category: postgresql

Ibrar Ahmed: Benchmark PostgreSQL With Linux HugePages

planet postgresql - 2018-12-21 (Fri) 03:13:13

The Linux kernel provides a wide range of configuration options that can affect performance. It’s all about getting the right configuration for your application and workload. Just like any other database, PostgreSQL relies on the Linux kernel to be optimally configured. Poorly configured parameters can result in poor performance. Therefore, it is important that you benchmark database performance after each tuning session to avoid performance degradation. In one of my previous posts, Tune Linux Kernel Parameters For PostgreSQL Optimization, I described some of the most useful Linux kernel parameters and how they may help you improve database performance. Now I am going to share my benchmark results after configuring Linux HugePages with different PostgreSQL workloads. I have performed a comprehensive set of benchmarks for many different PostgreSQL load sizes and different numbers of concurrent clients.

Benchmark Machine
  • Supermicro server:
    • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
    • 2 sockets / 28 cores / 56 threads
    • Memory: 256GB of RAM
    • Storage: SAMSUNG SM863 1.9TB Enterprise SSD
    • Filesystem: ext4/xfs
  • OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic
  • PostgreSQL: version 11
Linux Kernel Settings

I used the default kernel settings without any optimization/tuning, except for disabling Transparent HugePages. Transparent HugePages are enabled by default and allocate a page size that may not be recommended for database usage. Databases generally need fixed-size HugePages, which Transparent HugePages do not provide. Hence, disabling this feature and defaulting to classic HugePages is always recommended.
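For example, Transparent HugePages can be disabled at runtime like this (a sketch; making the change persist across reboots depends on your distribution):

$ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]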

PostgreSQL Settings

I have used consistent PostgreSQL settings for all the benchmarks, in order to record the different PostgreSQL workloads under different settings of Linux HugePages. Here are the PostgreSQL settings used for all benchmarks:

shared_buffers = '64GB'
work_mem = '1GB'
random_page_cost = '1'
maintenance_work_mem = '2GB'
synchronous_commit = 'on'
seq_page_cost = '1'
max_wal_size = '100GB'
checkpoint_time[...]
Category: postgresql

Luca Ferrari: PGVersion: a class to manage PostgreSQL Version (strings) within a Perl 6 Program

planet postgresql - 2018-12-20 (Thu) 09:00:00

While writing a program in Perl 6, I needed to correctly parse and analyze different PostgreSQL version strings. I wrote a simple and minimal class for the purpose, and refactored it so it can escape into the wild.

PGVersion: a class to manage PostgreSQL Version (strings) within a Perl 6 Program

As you probably already know, PostgreSQL has changed its version numbering scheme from a major.major.minor approach to a concise major.minor one. Both are simple enough to be evaluated with a regular expression, but I found myself writing the same logic over and over, so I decided to write a minimal class to do the job for me and provide several pieces of information.
Oh, and this is Perl 6 (which I’m still learning!).
The class is named Fluca1978::Utils::PostgreSQL::PGVersion and is released as-is under the BSD Licence.

Quick, show me something!

OK, here is how it works:

use Fluca1978::Utils::PostgreSQL::PGVersion;

for <10.1 11beta1 11.1 9.6.5 6.11> {
    my $v = PGVersion.new: :version-string( $_ );
    say "PostgreSQL version is $v";
    say "or for short { $v.gist }";
    say "and if you want a detailed version:\n{ $v.Str( True ) }";
    say "URL to download: { $v.http-download-url }";
    say '~~~~' x 10;
}

The above simple loop provides the following output:

% perl6 -Ilib usage-example-pgversion.pl
PostgreSQL version is v10.1
or for short 10.1
and if you want a detailed version:
10.1 (Major: 10,...
Category: postgresql

Ajay Kulkarni: How we are building a self-sustaining open-source business in the cloud era

planet postgresql - 2018-12-20 (Thu) 03:31:25

Today, we are announcing that we have started developing new open-code (or source-available) features that will be made available under the (also new) Timescale License (TSL). You will be able to follow the development of these features on our GitHub.

Some of these new features (“community features”) will be free (i.e., available at no charge) for all users, except for the <0.0001% who just offer a hosted “database-as-a-service” version of TimescaleDB. Other features (“enterprise features”) will require a commercial relationship with TimescaleDB to unlock and use. And to be clear, we will also continue to invest in the core of TimescaleDB, which will remain open-source under the Apache 2 license.

Going forward, all new features will be categorized into one of the three following buckets: open-source, community, and enterprise.

Software licenses like the TSL are the new reality for open-source businesses like Timescale. This is because the migration of software workloads to the cloud has changed the open-source software industry. Public clouds currently dominate the space, enabling some of them to treat open-source communities and businesses as free R&D for their own proprietary services.

While this behavior (also known as “strip mining”) is legally permissible within the boundaries of classic open-source licenses, we believe it is not good for the open-source community because it is not sustainable. It makes it impossible to build the independent businesses required to support large-scale open-source projects (without being tied to the whims of a large corporate sponsor with varied interests). We ultimately believe that a self-sustaining business builds better open-source software.

(The new Confluent Community License is another example of an open-source business enacting a new license to counteract strip mining by some of the public clouds.)

In this post we explain what we are doing to address this challenge. We also explain what this might mean for you, an existing or potential TimescaleDB user.

W[...]
Category: postgresql

Bruce Momjian: WAL Directory Location

planet postgresql - 2018-12-20 (Thu) 01:45:01

You might have noticed that initdb has a -X/--waldir option to relocate the write-ahead log (pg_wal) directory. This allows the WAL I/O traffic to be on a device different from the default data directory. (This is similar to the use of tablespaces.) If you wish to move the pg_wal (or older pg_xlog) directory to a different device after running initdb, you can simply shut down the database server, move pg_wal to a new location, and create a symbolic link from the data directory to the new pg_wal location.
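A sketch of both approaches (the paths are hypothetical). At initdb time:

$ initdb -D /var/lib/pgsql/data -X /mnt/fast_ssd/pg_wal

Or after the fact, with the server shut down:

$ pg_ctl -D /var/lib/pgsql/data stop
$ mv /var/lib/pgsql/data/pg_wal /mnt/fast_ssd/pg_wal
$ ln -s /mnt/fast_ssd/pg_wal /var/lib/pgsql/data/pg_wal
$ pg_ctl -D /var/lib/pgsql/data start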

Category: postgresql
