Feed aggregator

Pavel Stehule: plpgsql_check can detect bad default volatility flag

planet postgresql - 2018-11-29 (Thu) 01:20:00
A common performance problem with plpgsql functions, when they are called from more complex queries, is the default VOLATILE flag: it prevents the planner from applying more aggressive optimizations to the function call. plpgsql_check can now detect this issue:

CREATE OR REPLACE FUNCTION public.flag_test1(integer)
RETURNS integer
LANGUAGE plpgsql
STABLE
AS $function$
begin
return $1 + 10;
end;
$function$;

CREATE OR REPLACE FUNCTION public.flag_test2(integer)
RETURNS integer
LANGUAGE plpgsql
VOLATILE
AS $function$
begin
return (select * from fufu where a = $1 limit 1);
end;
$function$;

postgres=# select * from plpgsql_check_function('flag_test1(int)', performance_warnings => true);
┌────────────────────────────────────────────────────────────────────┐
│ plpgsql_check_function │
╞════════════════════════════════════════════════════════════════════╡
│ performance:00000:routine is marked as STABLE, should be IMMUTABLE │
└────────────────────────────────────────────────────────────────────┘
(1 row)

postgres=# select * from plpgsql_check_function('flag_test2(int)', performance_warnings => true);
┌──────────────────────────────────────────────────────────────────────┐
│ plpgsql_check_function │
╞══════════════════════════════════════════════════════════════════════╡
│ performance:00000:routine is marked as VOLATILE, should be STABLE │
└──────────────────────────────────────────────────────────────────────┘
(1 row)
Category: postgresql

Bruce Momjian: Data Storage Options

planet postgresql - 2018-11-29 (Thu) 00:30:01

Since I have spent three decades working with relational databases, you might think I believe all storage requires relational storage. However, I have used enough non-relational data stores to know that each type of storage has its own benefits and costs.

It is often complicated to know which data store to use for your data. Let's look at the different storage levels, from simplest to most complex:

  1. Flat files: Flat files are exactly what they sound like — an unstructured stream of bytes stored in a file. There is no structure defined in the file itself — structure must be implemented in the application that reads the file. This is obviously the simplest way to store data, and works well for small data volumes when only a single user and a few well-coordinated applications need access. File locking is required to serialize multi-user access. Changes typically require a complete file rewrite.
  2. Word processing documents: This is similar to flat files, but defines structure in the data file, e.g., highlighting, sections. The same flat file limitations apply.
  3. Spreadsheet: This is similar to word processing documents, but adds more capabilities, including computations and the definition of relationships between data elements. Data is more atomized in this format than in the previous one.
  4. NoSQL stores: This removes many of the limitations from previous data stores. Multi-user access is supported, including locking, and modification of single data elements does not require rewriting all data.
  5. Relational databases: This is the most complex data storage option. Rigid structure is enforced internally, though unstructured options exist. Data access occurs using a declarative language that is dynamically optimized based on that structure. Multi-user and multi-application access is efficient.

You might think that since relational databases have the most features, everything should use them. However, with features come complexity and rigidity. Therefore, all levels are valid for some use cases:

  • Flat files are ideal for read-onl
[...]
Category: postgresql

Alexey Lesovsky: Global shortcuts and PostgreSQL queries.

planet postgresql - 2018-11-28 (Wed) 19:42:00
Using your favorite hotkeys on queries in Linux

One of my colleagues often talks about using hotkeys for his favourite SQL queries and commands in iTerm2 (e.g. for checking current activity or viewing lists of the largest tables).
Usually, I listen to this with only half an ear, because iTerm2 is available only for macOS and I am a committed Linux user. When this topic came up again, I thought this function might be achievable not only through iTerm2 but through an alternative tool or a desktop environment setting.
Being a long-time KDE user, I opened “System settings” and went through everything related to the keyboard, input, hotkeys and so on. What I found is a number of settings that allow emulating text input from the keyboard. Using this feature I configured bindings for my favourite queries. Now, to execute a query I don’t need to search for it in my query collection, copy and paste… I just press a hotkey and the query appears in the active window, whether that is a psql console, a work chat, a text editor or something else.

Here is how these bindings are configured:
Open the “System settings” application and go to “Shortcuts”. There is a “Custom Shortcuts” menu. Here, optionally, we can create a dedicated group for our shortcuts; I named my group “PostgreSQL hot queries”. When creating a shortcut, select “Global shortcut” and then “Send Keyboard Input”.

Now we need to set up the new shortcut and give it a name and description. And here is the most interesting part. From various Linux users I have sometimes heard that KDE must have been written by aliens; that statement was never completely clear to me, since I had never had serious issues with KDE. Now, after configuring these shortcuts, I tend to agree with it more and more.

Ok, here we go: next we should type the text of the query that should appear when the hotkey is pressed. The catch is that, instead of plain query text, you have to input alien-looking sequences of symbols.

Check out the screenshot with example of query that should show current activit[...]
Category: postgresql

Viorel Tabara: What's New in PostgreSQL 11

planet postgresql - 2018-11-28 (Wed) 19:34:22

PostgreSQL 11 was released on October 18th, 2018, on schedule, marking the 23rd anniversary of the increasingly popular open source database.

While a complete list of changes is available in the usual Release Notes, it is worth checking out the revamped Feature Matrix page, which, just like the official documentation, has received a makeover since its first version, making it easier to spot changes before diving into the details.

For example, on the Release Notes page the “Channel binding for SCRAM authentication” item is buried under Source Code, while the matrix has it under the Security section. For the curious, here’s a screenshot of the interface:

PostgreSQL Feature Matrix

Additionally, the Bucardo Postgres Release Notes page linked above is handy in its own way, making it easy to search for a keyword across all versions.

What’s New?

With literally hundreds of changes, I will go through the differences listed in the Feature Matrix.

Covering Indexes for B-trees (INCLUDE)

CREATE INDEX received the INCLUDE clause, which allows indexes to include non-key columns. Its use case, frequent identical queries, is well described in Tom Lane’s commit from November 22nd, which updates the development documentation (meaning that the current PostgreSQL 11 documentation doesn’t include it yet); for the full text refer to section 11.9. Index-Only Scans and Covering Indexes in the development version.
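As a sketch of the feature (the table and column names below are made up for illustration), a non-key column carried in the index can satisfy an index-only scan without widening the search key:

```sql
-- Hypothetical table: look up orders by customer and return the total.
CREATE TABLE orders (id bigint PRIMARY KEY, customer_id int, total numeric);

-- customer_id is the search key; total is stored as a non-key column, so
-- "SELECT total FROM orders WHERE customer_id = $1" can be answered by an
-- index-only scan while the key itself stays narrow.
CREATE INDEX orders_customer_idx ON orders (customer_id) INCLUDE (total);
```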

Parallelized CREATE INDEX for B-tree Indexes

As the name suggests, this feature is only implemented for B-tree indexes, and from Robert Haas’ commit log we learn that the implementation may be refined in the future. As noted in the CREATE INDEX documentation, while both parallel and concurrent index creation take advantage of multiple CPUs, in the CONCURRENT case only the first table scan is performed in parallel.

Related to this new feature are the configuration parameters maintenance_work_mem and max_parallel_maintenance_workers.
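A minimal sketch of steering the parallel build (the table name is hypothetical; parameter names per the PostgreSQL 11 documentation):

```sql
-- Session-level cap on workers available for maintenance commands:
SET max_parallel_maintenance_workers = 4;

-- Or pin the per-table storage parameter, which also drives parallel builds:
ALTER TABLE measurements SET (parallel_workers = 4);

CREATE INDEX measurements_ts_idx ON measurements (ts);
```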

Lastly, the number of parallel workers can be set per tabl

[...]
Category: postgresql

Minutes of the 155th Board Meeting (2018-11)

www.postgresql.jp news - 2018-11-28 (Wed) 19:22:52
Minutes of the 155th Board Meeting (2018-11), posted by anzai, 2018/11/28 (Wed) 19:22
Category: postgresql

Hans-Juergen Schoenig: Transactions in PostgreSQL: READ COMMITTED vs. REPEATABLE READ

planet postgresql - 2018-11-28 (Wed) 18:00:11

The ability to run transactions is the core of every modern relational database system. The idea behind a transaction is to allow users to control the way data is written to PostgreSQL. However, a transaction is not only about writing – it is also important to understand the implications on reading data for whatever purpose (OLTP, data warehousing, etc.).

Understanding transaction isolation levels

One important aspect of transactions in PostgreSQL and therefore in all other modern relational databases is the ability to control when a row is visible to a user and when it is not. The ANSI SQL standard proposes 4 transaction isolation levels (READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ and SERIALIZABLE) to allow users to explicitly control the behavior of the database engine. Unfortunately the existence of transaction isolation levels is still not as widely known as it should be, and therefore I decided to blog about this important topic to give more PostgreSQL users the chance to apply this very important, yet under-appreciated feature.

The two most commonly used transaction isolation levels are READ COMMITTED and REPEATABLE READ. In PostgreSQL READ COMMITTED is the default isolation level and should be used for normal OLTP operations. In contrast to other systems such as DB2 or Informix, PostgreSQL does not provide support for READ UNCOMMITTED, which I personally consider to be a thing of the past anyway.

What READ COMMITTED does

In READ COMMITTED mode, every SQL statement will see changes which have already been committed (e.g. new rows added to the database) by some other transactions. In other words: If you run the same SELECT statement multiple times within the same transaction, you might see different results. This is something you have to take into account when writing an application.
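A two-session sketch of this behavior (the table name is hypothetical; session B commits between session A's two statements):

```sql
-- Session A (default READ COMMITTED):
BEGIN;
SELECT count(*) FROM accounts;   -- returns, say, 10

-- Meanwhile, session B runs and commits:
--   INSERT INTO accounts VALUES (...); COMMIT;

SELECT count(*) FROM accounts;   -- now returns 11: the committed row is visible
COMMIT;

-- Under REPEATABLE READ, both SELECTs inside the transaction would return 10.
```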

However, within a statement the data you see is constant – it does not change. A SELECT statement (or any other statement) will not see changes committed WHILE the statement is running. Within an SQL statement, data and t

[...]
Category: postgresql

Pavel Stehule: Orafce - simple thing that can help

planet postgresql - 2018-11-28 (Wed) 17:23:00
I merged a small patch into the master branch of Orafce. It shows the breadth of PostgreSQL's possibilities and can reduce the work necessary for a migration from Oracle to Postgres.

One small/big difference between Oracle and other databases is the meaning of an empty string. There are lots of situations where Oracle treats an empty string as NULL, and NULL as an empty string. I don't know any other database that does this.

Orafce provides the native types (not domain types) varchar2 and nvarchar2, so it is possible to define custom operators for them. I implemented the || concat operator as NULL-safe for these types. Now it is possible to write:
postgres=# select null || 'xxx'::varchar2 || null;
┌──────────┐
│ ?column? │
╞══════════╡
│ xxx      │
└──────────┘
(1 row)

When you port an application from Oracle to Postgres, it is good to disallow empty strings in Postgres. One possible solution is the generic C trigger function replace_empty_string(). This trigger function checks every text-type field in stored rows and replaces empty strings with NULLs. Of course, you should fix any check like colname = '' or colname <> '' in your application and use only colname IS [NOT] NULL. Then the code will be the same on Oracle and PostgreSQL, and you can use automatic translation via ora2pg.
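A sketch of wiring that trigger up (the table name is hypothetical; replace_empty_string() is the generic trigger function mentioned above):

```sql
CREATE TRIGGER customers_no_empty_strings
  BEFORE INSERT OR UPDATE ON customers
  FOR EACH ROW
  EXECUTE PROCEDURE replace_empty_string();
```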
Category: postgresql

Pavel Stehule: PostgreSQL 12 - psql - csv output

planet postgresql - 2018-11-28 (Wed) 14:51:00
After some years and a long discussion, the psql console has gained a great feature: CSV output (implemented by Daniel Vérité).

Usage is very simple: just use the --csv option.

[pavel@nemesis postgresql.master]$ psql --csv -c "select * from pg_namespace limit 10" postgres
oid,nspname,nspowner,nspacl
99,pg_toast,10,
10295,pg_temp_1,10,
10296,pg_toast_temp_1,10,
11,pg_catalog,10,"{postgres=UC/postgres,=U/postgres}"
2200,public,10,"{postgres=UC/postgres,=UC/postgres}"
11575,information_schema,10,"{postgres=UC/postgres,=U/postgres}"
Category: postgresql

Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 12 – Add CSV table output mode in psql.

planet postgresql - 2018-11-28 (Wed) 11:49:05
On 26th of November 2018, Tom Lane committed patch: Add CSV table output mode in psql.   "\pset format csv", or --csv, selects comma-separated values table format. This is compliant with RFC 4180, except that we aren't too picky about whether the record separator is LF or CRLF; also, the user may choose a field … Continue reading "Waiting for PostgreSQL 12 – Add CSV table output mode in psql."
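Besides the --csv switch, the format can also be toggled inside a session; a sketch (quoting kicks in for fields containing the separator, per RFC 4180):

```
postgres=# \pset format csv
Output format is csv.
postgres=# SELECT 1 AS a, 'x,y' AS b;
a,b
1,"x,y"
```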
Category: postgresql

Stefan Fercot: Combining pgBackRest and Streaming Replication

planet postgresql - 2018-11-28 (Wed) 09:00:00

pgBackRest is a well-known powerful backup and restore tool. It offers a lot of possibilities.

While pg_basebackup is commonly used to setup the initial database copy for the Streaming Replication, it could be interesting to reuse a previous database backup (eg. taken with pgBackRest) to perform this initial copy.

Furthermore, the --delta option provided by pgBackRest can help us to re-synchronize an old secondary server without having to rebuild it from scratch.
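For instance, resynchronizing an out-of-date standby might look like this (the stanza name is hypothetical; run as the postgres user with the standby's server stopped):

```
$ pgbackrest --stanza=demo --delta restore
```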

To reduce the load on the primary server during a backup, pgBackRest even allows taking backups from a standby server.

We’ll see in this blog post how to do that.

For the purpose of this post, we’ll use 2 nodes called primary and secondary. Both are running on CentOS 7.

We’ll cover some pgBackRest tips but won’t go deeper in the PostgreSQL configuration, nor in the Streaming Replication best practices.

Installation

On both primary and secondary server, install PostgreSQL and pgBackRest packages directly from the PGDG yum repositories:

$ sudo yum install -y https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
$ sudo yum install -y postgresql11-server postgresql11-contrib pgbackrest

Check that pgBackRest is correctly installed:

$ pgbackrest
pgBackRest 2.07 - General help

Usage:
    pgbackrest [options] [command]

Commands:
    archive-get     Get a WAL segment from the archive.
    archive-push    Push a WAL segment to the archive.
    backup          Backup a database cluster.
    check           Check the configuration.
    expire          Expire backups that exceed retention.
    help            Get help.
    info            Retrieve information about backups.
    restore         Restore a database cluster.
    stanza-create   Create the required stanza data.
    stanza-delete   Delete a stanza.
    stanza-upgrade  Upgrade a stanza.
    start           Allow pgBackRest processes to run.
    stop            Stop pgBackRest processes from running.
    version         Get version.

Use 'pgbackrest[...]
Category: postgresql

Craig Kerstiens: How Postgres is more than a relational database: Extensions

planet postgresql - 2018-11-28 (Wed) 05:51:00

Postgres has been a great database for decades now, and has really come into its own in the last ten years. Databases more broadly have gotten their own share of attention as well. First we had NoSQL, which started mostly with document databases and key/value stores; then there was NewSQL, which expanded things to distributed and graph databases; and all of these models, from document to distributed to relational, were not mutually exclusive. Postgres itself went from simply a relational database (which already had geospatial capabilities) to a multi-modal database by adding support for JSONB.

But to me the most exciting part about Postgres isn’t how it continues to advance itself; rather, it is how Postgres has shifted from simply a relational database to more of a data platform. The largest driver for this shift is Postgres extensions. Postgres extensions, in simplified terms, are lower-level APIs within Postgres that allow developers to change or extend its functionality. These extension hooks allow Postgres to be adapted for new use cases without requiring upstream changes to the core database. This is a win in two ways:

  1. The Postgres core can continue to move at a very safe and stable pace, ensuring a solid foundation and not risking your data.
  2. Extensions themselves can move quickly to explore new areas without the same review process or release cycle allowing them to be agile in how they evolve.
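Enabling an extension in a database is a single statement; a sketch using a contrib module:

```sql
-- trigram similarity support, shipped in Postgres contrib
CREATE EXTENSION IF NOT EXISTS pg_trgm;

SELECT similarity('postgres', 'postgresql');
```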

Okay, plug-ins and frameworks aren’t new when it comes to software, so what is so great about extensions for Postgres? Well, they may not be new to software, but they’re not new to Postgres either; Postgres has had extensions for as long as I can remember. In Postgres 9.1 we got a new syntax to make CREATE EXTENSION easy, and since that time the ecosystem around extensions has grown. We have a full directory of extensions at PGXN. Older forks that were based on older versions are actively working on catching up to a modern release, presumably to become pure extensions. By being a pure exten

[...]
Category: postgresql

Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 12 – Integrate recovery.conf into postgresql.conf

planet postgresql - 2018-11-28 (Wed) 05:29:53
On 25th of November 2018, Peter Eisentraut committed patch: Integrate recovery.conf into postgresql.conf   recovery.conf settings are now set in postgresql.conf (or other GUC sources). Currently, all the affected settings are PGC_POSTMASTER; this could be refined in the future case by case.   Recovery is now initiated by a file recovery.signal. Standby mode is initiated … Continue reading "Waiting for PostgreSQL 12 – Integrate recovery.conf into postgresql.conf"
Category: postgresql

Tomas Vondra: Sequential UUID Generators

planet postgresql - 2018-11-28 (Wed) 01:10:15

UUIDs are a popular identifier data type – they are unpredictable, and/or globally unique (or at least very unlikely to collide) and quite easy to generate. Traditional primary keys based on sequences won’t give you any of that, which makes them unsuitable for public identifiers, and UUIDs solve that pretty naturally.

But there are disadvantages too – they may make the access patterns much more random compared to traditional sequential identifiers, cause WAL write amplification etc. So let’s look at an extension generating “sequential” UUIDs, and how it can reduce the negative consequences of using UUIDs.

Let’s assume we’re inserting rows into a table with a UUID primary key (so there’s a unique index), and the UUIDs are generated as random values. In the table itself the rows may simply be appended at the end, which is very cheap. But what about the index? For indexes, ordering matters, so the database has little choice about where to insert the new item – it has to go into a particular place in the index. As the UUID values are random, the location will be random too, with uniform distribution over all index pages.

This is unfortunate, as it works against adaptive cache management algorithms – there is no set of “frequently” accessed pages that we could keep in memory. If the index is larger than memory, the cache hit ratio (both for page cache and shared buffers) is doomed to be poor. And for small indexes, you probably don’t care that much.

Furthermore, this random write access pattern inflates the amount of generated WAL, due to having to perform full-page writes every time a page is modified for the first time after a checkpoint. (There is a feedback loop, as FPIs increase the amount of WAL, triggering checkpoints more often – which then results in more FPIs generated, …)
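The “sequential” idea the extension implements can be sketched in plain SQL (assuming the pgcrypto extension for gen_random_uuid(); the extension does this natively and more efficiently): overwrite the first bytes of a random UUID with a coarse, slowly increasing time-based prefix, so consecutive inserts land close together in the index:

```sql
-- 16-bit hex prefix that increments once a minute and wraps after ~45 days;
-- the remaining bits stay random, preserving unpredictability within a block
SELECT overlay(gen_random_uuid()::text
               placing lpad(to_hex((extract(epoch from now())::int / 60) % 65536), 4, '0')
               from 1 for 4)::uuid AS sequential_uuid;
```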

Of course, UUIDs influence read patterns too. Applications typically access a fairly limited subset of recent data. For example, an e-commerce site mostly cares about orders from the last couple of days, rarely accessing data beyond this

[...]
Category: postgresql

Joshua Drake: Breaking down the walls of exclusivity

planet postgresql - 2018-11-27 (Tue) 06:03:00
When considering a conference about Postgres, you should pick one that is focused on building the community. PostgresConf is all about building the community, and we even captured it on video!

PostgresConf embraces a holistic view of what community is. We want everyone to feel welcome and encouraged to give back to PostgreSQL.org. However, that is not the only opportunity to give back to the Postgres community. We all have different talents, and some of those don't extend to writing patches or Docbook XML.

Giving back

When considering who is part of the community and who is contributing to it, we want to introduce you to a couple of fantastic organizers of our conference: Debra Cerda and Viral Shah. Some in the community will know Debra. She has been in the community for years and is one of the primary organizers of Austin Postgres.
Debra Cerda

Debra is our Attendee and Speaker Liaison as well as our Volunteer Coordinator. She is also a key asset in the development and performance of our Career Fair.
Viral Shah

Viral is our on-site logistics lead and is part of the volunteer acquisition team. It is Viral who works with the hotel, using a fine-tooth comb to make sure everything is on target, on budget, and executed with extreme efficiency.
Without her amazing attention to detail and dedication to service we wouldn't be able to deliver the level of conference our community has come to expect from PostgresConf.

Building relationships

There are a lot of reasons to go to a conference. You may be looking for education on a topic, a sales lead, or possibly just to experience a central gathering of top talent, products, and services. All of these reasons are great, but we find that the most important one is building relationships. The following are two exceptional examples of community projects.
Our first example is ZomboDB. No, they are not a sponsor (yet!) but they have a fantastic Open Source extension to Postgres that integrates Elasticsearch into Postgres. 
Ou[...]
Category: postgresql

Bruce Momjian: First Wins, Last Wins, Huh?

planet postgresql - 2018-11-26 (Mon) 22:30:01

Someone recently pointed out an odd behavior in Postgres's configuration files. Specifically, they mentioned that the last setting for a variable in postgresql.conf is the one that is honored, while the first matching connection line in pg_hba.conf is honored. They are both configuration files in the cluster's data directory, but they behave differently. It is clear why they behave differently — because the order of lines in pg_hba.conf is significant, and more specific lines can be placed before more general lines (see the use of reject lines). Still, it can be confusing, so I wanted to point it out.
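A side-by-side sketch of the two rules (the values and networks below are made up):

```
# postgresql.conf: the LAST occurrence of a setting wins
work_mem = '4MB'
work_mem = '64MB'    # this value takes effect

# pg_hba.conf: the FIRST matching line wins,
# so the specific reject rule must come before the general rule
host  all  all  10.0.0.0/24  reject
host  all  all  0.0.0.0/0    md5
```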

Category: postgresql

Adrien Nayrat: PostgreSQL and heap-only-tuples updates - part 3

planet postgresql - 2018-11-26 (Mon) 16:00:00
Here is a series of articles focusing on a new feature in version 11. During the development of this version, one feature caught my attention. It can be found in the release notes: https://www.postgresql.org/docs/11/static/release-11.html "Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged (Konstantin Knizhnik)". I admit that this is not very explicit, and the feature requires some knowledge of how Postgres works, which I will try to explain through several articles:
Category: postgresql

Stefan Fercot: PostgreSQL 12 preview - recovery.conf disappears

planet postgresql - 2018-11-26 (Mon) 09:00:00

PostgreSQL needs some infrastructure changes to allow more dynamic reconfiguration around recovery, e.g. to change primary_conninfo at runtime.

The first step, mostly to avoid having to duplicate the GUC logic, results in the following patch.

On 25th of November 2018, Peter Eisentraut committed Integrate recovery.conf into postgresql.conf:

recovery.conf settings are now set in postgresql.conf (or other GUC sources). Currently, all the affected settings are PGC_POSTMASTER; this could be refined in the future case by case.

Recovery is now initiated by a file recovery.signal. Standby mode is initiated by a file standby.signal. The standby_mode setting is gone. If a recovery.conf file is found, an error is issued.

The trigger_file setting has been renamed to promote_trigger_file as part of the move.

The documentation chapter "Recovery Configuration" has been integrated into "Server Configuration".

pg_basebackup -R now appends settings to postgresql.auto.conf and creates a standby.signal file.

Author: Fujii Masao <masao.fujii@gmail.com>
Author: Simon Riggs <simon@2ndquadrant.com>
Author: Abhijit Menon-Sen <ams@2ndquadrant.com>
Author: Sergei Kornilov <sk@zsrv.org>
Discussion: https://www.postgresql.org/message-id/flat/607741529606767@web3g.yandex.ru/

Let’s compare a simple example between PostgreSQL 11 and 12.

Initialize replication on v11

With a default postgresql11-server installation on CentOS 7, let’s start archiving on our primary server:

$ mkdir /var/lib/pgsql/11/archives
$ echo "archive_mode = 'on'" >> /var/lib/pgsql/11/data/postgresql.conf
$ echo "archive_command = 'cp %p /var/lib/pgsql/11/archives/%f'" >> /var/lib/pgsql/11/data/postgresql.conf
# systemctl start postgresql-11.service

Check that the archiver process is running:

$ psql -c "SELECT pg_switch_wal();"
 pg_switch_wal
---------------
 0/16AC7D0
(1 row)

$ ps -ef |grep postgres|grep archiver
... postgres: archiver last was 000000010000000000000001

$ ls -l /var/lib/pgsql/11/archives/
total 16384
-rw-------. 1 postgres po[...]
Category: postgresql

Pavel Stehule: plpgsql_check can be used as profiler

planet postgresql - 2018-11-26 (Mon) 02:05:00
Today I integrated profiling functionality into plpgsql_check. When you enable profiling, no further configuration is needed.
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
┌────────┬──────────┬────────────────────────────────────────────────────────────────┐
│ lineno │ avg_time │ source                                                         │
╞════════╪══════════╪════════════════════════════════════════════════════════════════╡
│      1 │          │                                                                │
│      2 │          │ declare result int = 0;                                        │
│      3 │    0.075 │ begin                                                          │
│      4 │    0.202 │ for i in 1..$1 loop                                            │
│      5 │    0.005 │ select result + i into result; select result + i into result;  │
│      6 │          │ end loop;                                                      │
│      7 │        0 │ return result;                                                 │
│      8 │          │ end;                                                           │
└────────┴──────────┴────────────────────────────────────────────────────────────────┘
(9 rows)
In this case the function profile is stored in session memory, and when the session is closed, the profile is lost.

It is also possible to load plpgsql_check via the shared_preload_libraries config option. In this case the profile is stored in shared memory and is "pseudo" persistent: it is cleared when a profile reset is requested or when PostgreSQL is restarted.

There is another good PLpgSQL profiler. I designed the integrated plpgsql_check profiler because I wanted to collect different runtime data and to use the profiler for calculating test coverage. Moreover, this profiler can be used without any special PostgreSQL configuration, which can be useful when restarting the server is not an option.
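Enabling the profiler for the current session is a one-liner (the GUC name is taken from the plpgsql_check documentation; fx(int) is the function profiled in the example above):

```sql
LOAD 'plpgsql_check';
SET plpgsql_check.profiler TO on;

-- run the function, then read the collected profile:
SELECT fx(100);
SELECT lineno, avg_time, source FROM plpgsql_profiler_function_tb('fx(int)');
```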
Category: postgresql

Regina Obe: PostGIS 2.3.8, 2.4.6

planet postgresql - 2018-11-24 (Sat) 09:00:00

The PostGIS development team is pleased to provide bug fix releases 2.3.8 and 2.4.6 for the 2.3 and 2.4 stable branches.

Category: postgresql

Brian Fehrle: Cloud Backup Options for PostgreSQL

planet postgresql - 2018-11-23 (Fri) 19:58:00

As with any other component of a business, databases are extremely important to its inner workings.

Whether it’s the core of the business or just another component, databases should be backed up regularly, and stored in safe locations for possible future recovery.

Should I Backup To The Cloud?

A general rule is to have at least 3 copies of anything of value and to store those backups in different locations. Backups on the same drive are useless if the drive itself dies, same host backups are also at risk if the host goes down, and same building backups are also in danger if the building burns down (drastic and unlikely, but possible).

Related resources:
  • ClusterControl for PostgreSQL
  • Top Backup Tools for PostgreSQL
  • How to Minimize RPO for Your PostgreSQL Databases Using Point in Time Recovery
  • Using Barman to Backup PostgreSQL - An Overview
  • Using pg_dump and pg_dumpall to Backup PostgreSQL

Cloud backups offer an easy solution for the need of off-site backups without having to spin up new hardware in a secondary location. There are many different cloud services that offer backup storage, and choosing the right one will depend on backup needs, size requirements, cost, and security.

The benefits of having cloud backups are many, but mainly revolve around having these backups stored in a different location than the main database, allowing us to have a safety net in the case of a disaster recovery. While we won’t go into detail about how to set up each of these backup options, we will explore some different ideas and configurations for backups.

There are some downsides to storing backups in the cloud, starting with the transfer. If the backups for the database are extremely large, it could take a long time to do the actual upload, and could even have increased costs if the cloud service charges for bandwidth transfer. Compression is highly suggested to keep time and costs low.

Security could be another concern with hosting backups in the cloud, while some companies have strict guidelines for where their data

[...]
Category: postgresql
