Planet PostgreSQL

Bruce Momjian: The Democratization of Databases

2019-06-28 (Fri) 02:15:01

Having delivered my new talk, The Democratization of Databases, at Postgres Ibiza and Postgres Vision, I am now posting my slides online. It covers the history of various governing structures and why democracy provides superior results. It has been well received.

Category: postgresql

Joe Conway: PostgreSQL Deep Dive: PostgreSQL Defaults and Impact on Security - Part 2

2019-06-27 (Thu) 22:17:33

In Part 1 of this blog, we covered quite a bit of information with respect to how a PostgreSQL database is initially configured by default from the perspective of discretionary access control. We also saw how to inspect those default behaviors using the crunchy_check_access extension. In Part 2, we will explore the situation identified in CVE-2018-1058 and discuss how to protect yourself.

CVE-2018-1058: Explained and Exploited

Finally we have come to the pièce de résistance!

CVE-2018-1058 describes how a user can create objects named the same as objects in different schemas, which can change the behavior of other users' queries, potentially causing unexpected or malicious behavior. This is also known as a trojan-horse attack. In order to fully understand how this situation may arise, we will first review some fundamental PostgreSQL concepts.
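
To make the trojan-horse idea concrete before we dive in, here is a minimal sketch of the shadowing trick (assuming the pre-fix defaults, where any user may create objects in the public schema, and a query over a hypothetical varchar column):

-- As an ordinary user who can create objects in the public schema:
CREATE FUNCTION public.lower(varchar) RETURNS text AS $$
  SELECT pg_catalog.lower($1) || ' (hijacked)';  -- a real attack would do something nastier
$$ LANGUAGE sql;

-- Another user's unqualified lower(varchar_col) now resolves to the function
-- above: it matches the argument type exactly, while pg_catalog.lower(text)
-- requires an implicit cast.

-- The commonly recommended mitigation:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;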

Category: postgresql

Luca Ferrari: PostgreSQL Administrator Account WITH NOLOGIN (recover your role)

2019-06-27 (Thu) 09:00:00

Today I got an email from a friend of mine who locked himself out of his own database due to a little mistake.

PostgreSQL Administrator Account WITH NOLOGIN (recover your postgres role)

What if you get locked out of your own cluster due to a simple and, to some extent, stupid error? Let’s see it in a quick list of steps.
First of all, lock the default postgres account so that the default administrator can no longer log in to the cluster:

% psql -U postgres -c "ALTER ROLE postgres WITH NOLOGIN" testdb
ALTER ROLE
% psql -U postgres -c "SELECT version();" testdb
psql: FATAL: role "postgres" is not permitted to log in

What a mess!

PostgreSQL has a specific recovery mode, called single-user mode, that resembles the operating system's single-user mode and can be used for such situations. Let’s see how.
First of all, shut down the cluster to avoid doing more damage than you already have!

% sudo service postgresql stop


Now, start the postgres process in single user mode. You need to know the data directory of your cluster in order for it to work:

% sudo -u postgres postgres --single -D /mnt/pg_data/pgdata/11.1
PostgreSQL stand-alone backend 11.3
backend>

What happened? I used the operating system user postgres to launch the operating system process postgres (ok, there’s a little name confusion here!) in single (--single) mode for my own data directory (-D). I got a prompt: I’m connected to the backend process directly, so this is not the same as a local or TCP/IP...
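
From there, the fix is plain SQL typed at the backend> prompt (a sketch; in single-user mode each statement runs as you enter it, and Ctrl-D exits so you can restart the cluster normally):

-- typed at the backend> prompt
ALTER ROLE postgres WITH LOGIN;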

Category: postgresql

Kaarel Moppel: Version 1.6 of pgwatch2 PostgreSQL monitoring tool released

2019-06-26 (Wed) 17:00:20

After almost half a year since the last update, I’m glad to announce that another set of useful features and improvements have found their way to pgwatch2 – our Open Source PostgreSQL monitoring tool of choice. The new version is incremented to 1.6, but continuing the naming tradition I’m also calling it “Feature Pack 5”, as it’s mostly a feature release (with a couple of bugfixes as well). So here’s a quick overview of the changes – for the “full monty”, and when updating from a previous version, please turn to the GitHub changelog link below.

Highlights – Prometheus and Patroni support

The choice of data storage “backends” has been widened again (the previous version added PostgreSQL) – it’s now possible to leverage the very popular Prometheus TSDB to store metrics gathered by pgwatch2! One recommendation though – I’d use Prometheus only when it’s already running in your organization, and would strongly consider the alternatives (Postgres, InfluxDB) if you plan to monitor only Postgres and are starting a fresh monitoring setup, as Prometheus is not able to store string values, which is not optimal for Postgres monitoring. Also remember that we’re talking about the “pull” model now – previously metrics were pushed to the datastore. To get started quickly, there’s also a suitable “predefined config” provided and a “DB Overview” dashboard similar to the Postgres / InfluxDB ones.

The second big feature adds support for our favourite Postgres HA cluster manager – Patroni – and should especially appeal to people running Postgres in a “cloudy” manner. In short, the feature means that you can provide connection information to the Distributed Consensus Store (etcd, Zookeeper, Consul) that’s powering Patroni, and pgwatch2 will then periodically consult this DCS and fetch the active “to be monitored” instances listing from there. Remember – the standard way is to tell exactly on which IPs / hostnames your Postgres instances are available. For the time being though, to keep it simple, access is only possible to DCSs that are not using passwords (which is the common ca

[...]
Category: postgresql

Michael Paquier: Postgres 12 highlight - More progress reporting

2019-06-26 (Wed) 15:16:35

Since Postgres 9.6, it is possible to monitor the progress of a running manual VACUUM and even autovacuum using a dedicated system catalog called pg_stat_progress_vacuum. This is vital for operators when doing long-running operations, because it makes it possible not to blindly wait for an operation to finish. When doing performance workload analysis, this also proves helpful in evaluating VACUUM job progress for tuning system-level parameters, or even relation-level ones, depending on the load pattern.
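
For instance, while a long manual VACUUM runs, polling the view from another session shows how far along it is (a small sketch; the view reports one row per backend currently vacuuming):

SELECT pid,
       relid::regclass AS table_name,
       phase,
       heap_blks_scanned,
       heap_blks_total
FROM pg_stat_progress_vacuum;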

Postgres 12 has added more monitoring in this area thanks to a set of commits. Here is the one for CLUSTER and VACUUM FULL:

commit: 6f97457e0ddd8b421ca5e483439ef0318e6fc89a
author: Robert Haas <rhaas@postgresql.org>
date: Mon, 25 Mar 2019 10:59:04 -0400

Add progress reporting for CLUSTER and VACUUM FULL.

This uses the same progress reporting infrastructure added in commit
c16dc1aca5e01e6acaadfcf38f5fc964a381dc62 and extends it to these
additional cases. We lack the ability to track the internal progress
of sorts and index builds so the information reported is coarse-grained
for some parts of the operation, but it still seems like a significant
improvement over having nothing at all.

Tatsuro Yamada, reviewed by Thomas Munro, Masahiko Sawada, Michael
Paquier, Jeff Janes, Alvaro Herrera, Rafia Sabih, and by me. A fair
amount of polishing also by me.

Discussion: http://postgr.es/m/59A77072.3090401@lab.ntt.co.jp

And here is the second one for CREATE INDEX and REINDEX:

commit: ab0dfc961b6a821f23d9c40c723d11380ce195a6
author: Alvaro Herrera <alvherre@alvh.no-ip.org>
date: Tue, 2 Apr 2019 15:18:08 -0300

Report progress of CREATE INDEX operations

This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.

There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index c[...]
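
The new views can be polled the same way while the commands run (a sketch; each view exposes more counters than shown here):

-- CLUSTER and VACUUM FULL progress
SELECT pid, command, phase FROM pg_stat_progress_cluster;

-- CREATE INDEX and REINDEX progress
SELECT pid, command, phase, blocks_done, blocks_total
FROM pg_stat_progress_create_index;
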
Category: postgresql

Richard Yen: The Challenges of Setting max_connections and Why You Should Use a Connection Pooler

2019-06-26 (Wed) 06:01:09
Introduction

PostgreSQL is “The World’s Most Advanced Open Source Database,” and I believe it. In my 10+ years of working with it, it’s been solid, serving up data to SaaS loads of over 1000 queries per second, rarely going down, surviving all manner of accusations of corruption (which turned out to be software engineering error) and performance degradation (which turned out to be user error). It’s got so many features and extensions that suit every need, maybe 50-60% of which most users don’t use on a regular basis, or haven’t even heard about. Unfortunately, in my recent experiences in technical support, I’ve learned that Postgres is generally very trusting, but that’s the case with many pieces of technology; it makes no judgments of how you should tune the parameters in postgresql.conf, just like the gas pedal on your sports car doesn’t make any judgments about how fast you should be driving. Just because you can put the pedal to the metal doesn’t mean you should do that in rush hour traffic, and just because you can set a parameter very high doesn’t necessarily mean that you should set it that high to serve a high-traffic OLTP application.

One of the parameters that gets misunderstood the most is max_connections. It’s understandable that on modern systems with lots of CPUs and lots of RAM, serving modern SaaS loads to a global user base, one can see tens of thousands of user sessions at a time, each trying to query the database to update a user’s status, upload a selfie, or whatever else users might do. Naturally, a DBA would want to set max_connections in postgresql.conf to a value that would match the traffic pattern the application would send to the database, but that comes at a cost. One example of such a cost would be connection/disconnection latency; for every connection that is created, the OS needs to allocate memory to the process that is opening the network socket, and PostgreSQL needs to do its own under-the-hood computations to establish that connection. Scale that up to thousands of user sessi

[...]
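
To see how a configured ceiling compares with what is actually connected and working, a quick core-catalog check like the following helps (a sketch; run it as a role allowed to see other sessions in pg_stat_activity):

SHOW max_connections;

-- how many sessions exist, and what they are doing right now
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
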
Category: postgresql

Joe Conway: PostgreSQL Deep Dive: PostgreSQL Defaults and Impact on Security - Part 1

2019-06-26 (Wed) 03:25:05

Recently I gave a "deep dive" talk on the topic of PostgreSQL security, and I wanted to capture one part of that content into a blog since this format is both better for making that content stand on its own and for expanding on it a bit.

Specifically, in this two-part blog, we will cover a PostgreSQL extension that I wrote called crunchy_check_access -- the motivation behind it and what it does -- and then use that extension to probe and understand the consequences of the default, out of the box, security of PostgreSQL. We will also examine the implications of choices we make when setting up "users" and "groups". To cap it off, we will explore the situation identified in CVE-2018-1058 and discuss how to protect yourself.
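
For a small taste of the defaults this series probes, core SQL alone can show what an ordinary role is allowed to do in the public schema out of the box (a sketch using built-in privilege-inquiry functions, not the crunchy_check_access extension itself, which automates such checks exhaustively):

-- run as an ordinary, non-superuser role on a freshly initialized cluster
SELECT has_schema_privilege('public', 'CREATE') AS can_create,
       has_schema_privilege('public', 'USAGE')  AS can_use;
-- both come back true by default in the versions discussed here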

Warning: this two-part blog is a wall of text (and SQL). It would probably be more suited as a chapter in a long book. I encourage you to stick with it though, as I believe even those with a great deal of PostgreSQL experience will find the content very enlightening. At a minimum it will probably remind you of things you once learned but have since forgotten.

Category: postgresql

Mark Wong: PDXPUG July Meetup at OSCON: New things coming to Postgres 12

2019-06-25 (Tue) 08:31:25

Note: the location for the July meeting is at OSCON!

When: 7-9pm Wednesday July 17, 2019
Where: Oregon Convention Center Room D135/136

New versions of PostgreSQL usually get released every fall with lots of improvements and new features. Come hear about some of the highlighted changes expected to come in the next new release!

This special edition meetup welcomes any OSCON attendee. Come and meet the local Portland PostgreSQL user group and ask any PostgreSQL-related questions!
Category: postgresql

Jobin Augustine: Managing Multiple PostgreSQL Instances on Ubuntu/Debian

2019-06-24 (Mon) 23:46:14
Introduction

Those DBAs who are experts in one database system look for other database systems to have “similar features”. It is a human tendency to look at any new technology and compare it with a world we are familiar with. Most of the time, I find myself addressing Oracle DBAs who are looking for similar features or setups in PostgreSQL, but this time it is for MySQL DBAs. MySQL historically ships mysqld_multi to manage multiple instances of MySQL on a server. Even though this is not a core feature in MySQL but rather a wrapper, I understand that it is something widely used, and my colleague Fernando blogged about it.

In a discussion a few months back, one of my friends asked how PostgreSQL manages multiple instances, and I agreed to write on that as it will be useful for anyone looking for a similar setup. In short, the answer is that the PostgreSQL community manages it using wrapper scripts and the Linux service manager. Over the same period, systemd has become a standard among Linux distros, and every Linux package started relying on it for service management. To my surprise, some of the MySQL packages started removing mysqld_multi from the bundle and moved towards systemd as the way to manage multiple instances. So everything is converging on systemd in the new world. In this blog, I want to show how this can be done on Ubuntu/Debian Linux, because all the necessary wrapper scripts already exist on those distros as part of the postgresql-common package, which gets installed by default with every PostgreSQL installation. Nothing prevents us from building a similar wrapper on other Linux distros as well.

$ dpkg -l | grep postgresql-common
ii postgresql-common 201.pgdg18.10+1 all PostgreSQL database-cluster manager

Default Installation

The default installation of PostgreSQL from the postgresql.org apt repo will contain the postgresql-common package for Debian/Ubuntu.

$ sudo apt install postgresql-11
Reading package lists... Done
...
The following additional packages will be installed:
  libllvm7 libpq5 libsensors4 pgdg-ke[...]
Category: postgresql

Bruce Momjian: Postgres 12 Features Presentation

2019-06-23 (Sun) 19:00:01

Now that I have given a presentation about Postgres 12 features in Ibiza, I have made my slides available online.

Category: postgresql

Avinash Kumar: Hypothetical Indexes in PostgreSQL

2019-06-22 (Sat) 03:02:17

At recent conferences, we have received a curious question from users who have used Oracle in the past and are now using PostgreSQL: “Do we have hypothetical indexes in PostgreSQL?“. The answer to that question is YES. The true meaning of hypothetical is imaginary but not real: we can create indexes that are imaginary to the PostgreSQL optimizer and which don’t exist in reality. Let’s look at some detailed examples.

How are hypothetical indexes helpful?

The execution time of an SQL statement in most relational databases depends on the cost of the execution. An optimizer chooses the plan that has the lowest cost, and thus proceeds to further phases such as execute and fetch using that plan. One of the easiest ways of optimizing a well-written SQL statement is through appropriate indexes that are suitable for that query. An index may reduce the number of pages fetched from disk, and may live in the cache due to its size (much smaller than the table size). So, indexing is always low-hanging fruit for admins and developers wishing to tune an SQL statement.

But often developers wish to see whether an index can really be helpful. The only option usually available is to create the index and check whether the queries are able to use it with a reduced cost. Creating the index may be fine if it is in a test environment, or on a table that is not huge in size, but if for some reason you wish to see whether an index can be helpful for an SQL statement without actually creating it, then you can test and try this extension to create hypothetical indexes.

PostgreSQL Extension for hypothetical indexes

In order to achieve the functionality of creating imaginary indexes in PostgreSQL, we need to use an extension named hypopg. This extension is made available in the PGDG repository from PostgreSQL 9.2 onwards and works up to the latest active release, PostgreSQL 11.3. In order to install this extension, you may just use yum or apt depending on your Linux distro, or compile it from source.

Installing hypopg

In RedHat/CentOS

# yum install hypopg

For Debian/Ubuntu, you must make sure you have the development p

[...]
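
Once the extension is in place, a session goes roughly like this (a sketch assuming a hypothetical table t with an integer column id; hypothetical indexes are visible only to plain EXPLAIN, not EXPLAIN ANALYZE, and only within the current session):

CREATE EXTENSION hypopg;

-- create an imaginary index; nothing is written to disk
SELECT * FROM hypopg_create_index('CREATE INDEX ON t (id)');

-- the planner can now consider it in its cost estimates
EXPLAIN SELECT * FROM t WHERE id = 42;

-- drop all hypothetical indexes created in this session
SELECT hypopg_reset();
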
Category: postgresql

Vasilis Ventirozos: CVE-2019-10164 Who's affected and how to protect your systems.

2019-06-22 (Sat) 02:01:00

Yesterday, on the 20th of June, the Postgres community released minor version updates for all supported versions (11.4, 10.9, 9.6.14, 9.5.18, 9.4.23, 12 Beta 2).
As with any minor version, it is recommended to upgrade and keep your database at the latest minor version. But this release is a bit more important
than others, because it includes a fix for a recently discovered CVE (CVE-2019-10164).
From the release notes:

"This release is made outside of the normal update release schedule as the security
vulnerability was determined to be critical enough to distribute the fix as quickly as possible.
Users who are running PostgreSQL 10, PostgreSQL 11, or the PostgreSQL 12 beta should upgrade as soon as possible."

In this post I'll talk about this CVE: what it's all about, who is affected, and how to protect your systems against it.

What is this all about? An authenticated database user could change their own password to a "specially crafted password" that could crash your database server or,
worst case, have the operating system user who owns the postgres process execute arbitrary code. This is a bug in the SCRAM authentication mechanism,
which is why it only affects Postgres versions 10 and above. The reality is that on some operating systems, for example Ubuntu, the default compiler options
should prevent code execution (see here), and since an attack can only happen from a user who is already authenticated, such a user, with enough privileges,
could potentially come up with an SQL statement that could DoS or even bring the service down.
Of course, this doesn't mean that if you trust your users you shouldn't upgrade; you definitely should.
The latest Postgres versions also fix a second attack in libpq, a library that is used extensively by almost all common Postgres clients,
psql, psycopg2 and pgAdmin 4 included. In this attack, a user connecting from a server that runs software that uses libpq,
say psycopg2 or psql, could crash the client or run arbitrary code by exploiting the SCRAM authentication process.
In this case the exploitation will happen on the[...]
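
Checking whether a given instance already carries the fix is a one-liner (compare the result against the fixed minor versions listed above):

SHOW server_version;
-- should report 11.4, 10.9, 9.6.14, 9.5.18, 9.4.23, or 12beta2 (or later)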
Category: postgresql

Michael Paquier: Postgres 12 highlight - SQL/JSON path

2019-06-21 (Fri) 12:58:34

Postgres ships in-core data types for JSON with specific functions and operators (json since 9.2, and jsonb, a binary representation, since 9.4). The upcoming Postgres 12 is becoming more compliant with the SQL specification by introducing the SQL/JSON path language, introduced mainly by the following commit:

commit: 72b6460336e86ad5cafd3426af6013c7d8457367
author: Alexander Korotkov <akorotkov@postgresql.org>
date: Sat, 16 Mar 2019 12:15:37 +0300

Partial implementation of SQL/JSON path language

SQL 2016 standards among other things contains set of SQL/JSON features
for JSON processing inside of relational database. The core of SQL/JSON
is JSON path language, allowing access parts of JSON documents and make
computations over them. This commit implements partial support JSON path
language as separate datatype called "jsonpath". The implementation is
partial because it's lacking datetime support and suppression of numeric
errors. Missing features will be added later by separate commits.

Support of SQL/JSON features requires implementation of separate nodes,
and it will be considered in subsequent patches. This commit includes
following set of plain functions, allowing to execute jsonpath over
jsonb values:

* jsonb_path_exists(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_match(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query_array(jsonb, jsonpath[, jsonb, bool]),
* jsonb_path_query_first(jsonb, jsonpath[, jsonb, bool]).

This commit also implements "jsonb @? jsonpath" and "jsonb @@ jsonpath",
which are wrappers over jsonpath_exists(jsonb, jsonpath) and
jsonpath_predicate(jsonb, jsonpath) correspondingly. These operators
will have an index support (implemented in subsequent patches).

Catversion bumped, to add new functions and operators.

Code was written by Nikita Glukhov and Teodor Sigaev, revised by me.
Documentation was written by Oleg Bartunov and Liudmila Mantrova. The
work was inspired by Oleg Bartunov.

Discussion: https://postgr.es/m/[...]
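
As a quick taste of what the new jsonpath datatype enables, the added functions and operators can be exercised like this (a small sketch of my own, not from the commit):

-- does any element of $.a exceed 2?
SELECT '{"a": [1, 2, 3]}'::jsonb @? '$.a[*] ? (@ > 2)';
-- t

-- extract every element of $.a greater than 1, one row per match
SELECT jsonb_path_query('{"a": [1, 2, 3]}'::jsonb, '$.a[*] ? (@ > 1)');
-- 2
-- 3
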
Category: postgresql

Jonathan Katz: Explaining CVE-2019-10164 + PostgreSQL Security Best Practices

2019-06-21 (Fri) 11:30:49

The PostgreSQL Global Development Group provided an out-of-cycle update release for all supported versions to provide a fix for the CVE-2019-10164 vulnerability. This vulnerability only affects people running PostgreSQL 10, 11 or the 12 beta, and it is effectively remediated by simply upgrading all of your PostgreSQL installations to the latest versions.

What follows is some more insight into what this vulnerability is, the impact it can have in your environment, and how to ensure you have patched all of your systems, along with some PostgreSQL security best practices that could help mitigate the impact of this kind of vulnerability.

Category: postgresql

Hubert 'depesz' Lubaczewski: Changes on explain.depesz.com

2019-06-18 (Tue) 20:54:00
Recently I got two bug reports: plans with “COSTS OFF" do not parse and error out (bug report by Marc Dean Jr), and WorkTable Scan is not properly parsed (bug report by Ivan Vergiliev). Additionally, I was kinda upset because plans that include trigger calls did not display properly. All of this has been fixed today: First, I fixed … Continue reading "Changes on explain.depesz.com"
Category: postgresql

Granthana Biswas: Install PostgreSQL 9.6 with Transparent Data Encryption

2019-06-18 (Tue) 17:00:14
Cluster encryption can be used if the DBA cannot, or does not want to, rely on the file system in terms of confidentiality. If this feature is enabled, PostgreSQL encrypts data (both relations and write-ahead log) when writing to disk, and decrypts it when reading. The encryption is transparent, so applications see no difference between encrypted and unencrypted clusters.

PostgreSQL 9.6 with TDE on Ubuntu

In this blog, we go through the basic steps used to install PostgreSQL 9.6 with Transparent Data Encryption (TDE) on Ubuntu. You can download the patch for this here.

Create a data directory for PostgreSQL. Just as an example, I am creating it at the default location:

sudo mkdir -p /usr/local/pgsql/data
sudo chown postgres:postgres /usr/local/pgsql/data

Install the libraries for readline, bison, flex, openssl, zlib and crypto:

sudo apt-get install libreadline8 libreadline-dev zlibc zlib1g-dev bison flex libssl-dev openssl

Run configure from the source code directory:

You can choose where all files will be installed by passing --prefix. The default is /usr/local/pgsql, which I am using here as an example. Make sure you enable openssl by passing --with-openssl:

sudo ./configure --prefix=/usr/local/pgsql --with-openssl
sudo make

The above two commands should run without errors. Now we are ready to install:

sudo make install

We can now proceed to initialize the cluster. For that, let’s switch to the postgres user:

sudo su - postgres

As a good practice, let’s add the PostgreSQL binaries to PATH:

export PATH=$PATH:/usr/local/pgsql/bin

To create an encrypted cluster, pass the -K option to the initdb utility. For example:

initdb -D /usr/local/pgsql/data -K/usr/local/pgsql/keypass

Here /usr/local/pgsql/keypass is an executable file that returns either an encryption key or an encryption password with the appropriate prefix. In this case, we are passing an encryption password of 8-16 characters in a simple executable file which outputs:

encryption_password=<8-16_passphrase>

$ chmod 755 /usr/local/pgsql/keypass
$ c

[...]
Category: postgresql

pgCMH - Columbus, OH: What’s new in pgBouncer

2019-06-17 (Mon) 13:00:00

The June meeting will be held at 18:00 EST on Tues, the 25th. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea on the amount of food needed.

What

CoverMyMeds’ very own CJ will be presenting this month. This month’s meeting will be CJ telling us about what’s new and improved in pgBouncer as well as how to get it up and running. Discussion will include real life examples from its use at CMM. pgBouncer is the lightweight connection pooler for PostgreSQL.

Where

CoverMyMeds has graciously agreed to validate your parking if you use their garage so please park there:

You can safely ignore any sign saying to not park in the garage as long as it’s after 17:30 when you arrive.

Park in any space that is not marked ‘24 hour reserved’.

Once parked, take the elevator/stairs to the 3rd floor to reach the Miranova lobby. Once in the lobby, the elevator bank is in the back (West side) of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. (If the elevator won’t let you pick the 11th floor, contact Doug or CJ (info below)). Once you exit the elevator, look to your left and right; one side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space:

The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079 or CJ at +1.740.407.7043

Category: postgresql

Avinash Kumar: Bloom Indexes in PostgreSQL

2019-06-15 (Sat) 04:29:56

There is a wide variety of indexes available in PostgreSQL. While most are common in almost all databases, there are some types of indexes that are more specific to PostgreSQL. For example, GIN indexes are helpful to speed up the search for element values within documents. GIN and GiST indexes can both be used to make full-text searches faster, whereas BRIN indexes are more useful when dealing with large tables, as they store only summary information for ranges of pages. We will look at these indexes in more detail in future blog posts. For now, I would like to talk about another of the special indexes that can speed up searches on a table with a huge number of columns and which is massive in size. And that is called a bloom index.

In order to understand the bloom index better, let’s first understand the bloom filter data structure. I will try to keep the description as short as I can so that we can discuss more about how to create this index and when it will be useful.

Most readers will know that an array in computer science is a data structure that consists of a collection of values, and that a bit, or binary digit, is the smallest unit of data, represented as either 0 or 1. A bloom filter is a bit array of m bits that are all initially set to 0.

A bit array is an array that stores a certain number of bits (0s and 1s). It is one of the most space-efficient data structures for testing whether an element is in a set or not.

Why use bloom filters?

Let’s consider some alternatives, such as the list data structure and hash tables. In the case of a list, we need to iterate through each element to search for a specific one. We could also maintain a hash table, where each element in the list is hashed, and then check whether the hash of the element we are searching for matches a hash in the table. But checking through all the hashes may take a higher order of magnitude of time than expected, and if there is a hash collision, linear probing kicks in, which may be time-consuming. When we

[...]
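
To make the index side of this concrete before the theory continues, creating a bloom index follows the usual extension-plus-index pattern (a sketch assuming a hypothetical wide table t; length is the per-row signature size in bits and col1..colN the bits generated per column):

CREATE EXTENSION bloom;

CREATE INDEX t_bloom_idx ON t USING bloom (c1, c2, c3)
       WITH (length = 80, col1 = 2, col2 = 2, col3 = 2);

-- equality searches on any subset of the indexed columns can use it
SELECT * FROM t WHERE c3 = 42 AND c1 = 7;
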
Category: postgresql

Luca Ferrari: A recursive CTE to get information about partitions

2019-06-12 (Wed) 09:00:00

I was wondering about writing a function that provides a quick status about partitioning. But wait, PostgreSQL has recursive CTEs!

A recursive CTE to get information about partitions

I’m used to partitioning: it allows me to quickly and precisely split data across different tables. PostgreSQL 10 introduced native partitioning, and since then I have been using native partitioning over inheritance whenever possible.
But how do you get a quick overview of the partition status? I mean, knowing which partition is growing the most?
In the beginning I was thinking of writing a function to do that task, quickly finding myself iterating recursively over pg_inherits, the table that links partitions to their parents. But the keyword here is recursively: PostgreSQL provides recursive Common Table Expressions, and a quick search revealed I was right: it is possible to do it with a single CTE. Taking inspiration from this mailing list message, here is a simple CTE to get the partition status (you can find it on my GitHub repository):

WITH RECURSIVE inheritance_tree AS (
    SELECT c.oid            AS table_oid
         , c.relname        AS table_name
         , NULL::text       AS table_parent_name
         , c.relispartition AS is_partition
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'p'
      AND c.relispartition = false
    UNION ALL
    SELECT inh.inhrelid AS table_oid
         , c.relname    AS table_name
         , ...
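
For reference, the parent-child link the CTE recurses over is one row per partition in pg_inherits; the non-recursive, first-level-only version of the same lookup is simply:

SELECT inhrelid::regclass  AS partition,
       inhparent::regclass AS parent
FROM pg_inherits;
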
Category: postgresql
