
Richard Yen: The Curious Case of Split WAL Files

planet postgresql - 2019-01-23 (Wed) 05:00:00
Introduction

I’ve come across this scenario maybe twice in the past eight years of using Streaming Replication in Postgres: replication on a standby refuses to proceed because a WAL file has already been removed from the primary server, so it falls back to Plan B and tries to fetch the relevant WAL file using restore_command (assuming that archiving is set up and working properly). But upon replay of that fetched file, we hear another croak: “No such file or directory.” Huh? But the file is there; ls shows it in the archive. On and on it goes, never actually making progress in replication. What’s up with that? Here’s a snippet from the log:

2017-07-19 16:35:19 AEST [111282]: [96-1] user=,db=,client= (0:00000)LOG: restored log file "0000000A000000510000004D" from archive
scp: /archive/xlog//0000000A000000510000004E: No such file or directory
2017-07-19 16:35:20 AEST [114528]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 51/4D000000 on timeline 10
2017-07-19 16:35:20 AEST [114528]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000A000000510000004D has already been removed
scp: /archive/xlog//0000000B.history: No such file or directory
scp: /archive/xlog//0000000A000000510000004E: No such file or directory
2017-07-19 16:35:20 AEST [114540]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 51/4D000000 on timeline 10
2017-07-19 16:35:20 AEST [114540]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000A000000510000004D has already been removed
scp: /archive/xlog//0000000B.history: No such file or directory
scp: /archive/xlog//0000000A000000510000004E: No such file or directory
2017-07-19 16:35:25 AEST [114550]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 51/4D000000 on timeline 10
2017-07-19 16:35:25 AEST [114550]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive[...]
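For context, a standby in this kind of setup typically uses a recovery.conf whose restore_command pulls segments from the archive over scp, roughly like the sketch below (the primary host name is illustrative; the archive path mirrors the log above):

# recovery.conf on the standby (pre-v12 layout, matching the era of the log above)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
restore_command = 'scp archive-host:/archive/xlog/%f "%p"'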
Category: postgresql

Jonathan Katz: Scheduling Backups En Masse with the Postgres Operator

planet postgresql - 2019-01-23 (Wed) 01:27:00

An important part of running a production PostgreSQL database system (and for that matter, any database software) is to ensure you are prepared for disaster. There are many ways to go about preparing your system for disaster, but one of the simplest and most effective ways to do this is by taking periodic backups of your database clusters.

How does one typically go about setting up a periodic backup? If you’re running PostgreSQL on a Linux-based system, the solution is often to use cron, setting up a crontab entry similar to this in your superuser account:

# take a daily base backup at 1am to a mount point on an external disk
# using pg_basebackup
0 1 * * * /usr/bin/env pg_basebackup -D /your/external/mount/

However, if you’re managing tens, if not hundreds or thousands, of PostgreSQL databases, this very quickly becomes an onerous task and you will need some automation to help you scale your disaster recovery safely and efficiently.

Automating Periodic Backups

The Crunchy PostgreSQL Operator, an application for managing PostgreSQL databases in a Kubernetes-based environment, is designed for managing thousands of PostgreSQL databases from a single interface to help with challenges like the above. One of the key features of the PostgreSQL Operator is its use of Kubernetes Labels to apply commands across many PostgreSQL databases. Later in this article, we will see how we can take advantage of labels in order to set backup policies across many clusters.
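As a rough illustration of how label selectors enable that (this assumes the Operator exposes each cluster as a pgclusters custom resource and that a backup-policy label has already been applied; the label name and value are illustrative, not from this article):

# list every cluster that carries the hypothetical nightly backup policy label
kubectl get pgclusters -l backup-policy=nightly -o name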

Category: postgresql

Bruce Momjian: Insufficient Passwords

planet postgresql - 2019-01-22 (Tue) 00:45:01

As I already mentioned, passwords were traditionally used to prove identity electronically, but they are showing their weakness as computing power has increased and attack vectors have expanded. Basically, user passwords have several restrictions:

  • must be simple enough to remember
  • must be short enough to type repeatedly
  • must be complex enough to not be easily guessed
  • must be long enough to not be easily cracked (discovered by repeated password attempts) or the number of password attempts must be limited

As you can see, the simple/short and complex/long requirements are at odds, so there is always a tension between them. Users often choose simple or short passwords, and administrators often add password length and complexity requirements to counteract that, though there is a limit to the length and complexity that users will accept. Administrators can also add delays or a lockout after unsuccessful authorization attempts to reduce the cracking risk. Logging of authorization failures can sometimes help too.

While Postgres records failed login attempts in the server logs, it doesn't provide any of the other administrative tools for password control. Administrators are expected to use external authentication services like LDAP or PAM, which have password management features.
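For example, delegating password checking to an LDAP directory is a one-line change in pg_hba.conf; a minimal simple-bind sketch (the server name and DN components are illustrative):

# pg_hba.conf: authenticate database users against an LDAP directory
host  all  all  0.0.0.0/0  ldap ldapserver=ldap.example.com ldapprefix="cn=" ldapsuffix=",dc=example,dc=com"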

Continue Reading »

Category: postgresql

Hans-Juergen Schoenig: pg_permission: Inspecting your PostgreSQL security system

planet postgresql - 2019-01-22 (Tue) 00:34:21

Security is a super important topic. This is not only true in the PostgreSQL world – it holds true for pretty much any modern IT system. Databases, however, have special security requirements. More often than not, confidential data is stored, and therefore it makes sense to ensure that the data is protected properly. Security first!

PostgreSQL: Listing all permissions

Gaining an overview of all permissions granted to users in PostgreSQL can be quite difficult. However, if you want to secure your system, gaining an overview is really everything – it can be quite easy to forget a permission here and there, and fixing things can then be a painful task. To make life easier, Cybertec has implemented pg_permission (https://github.com/cybertec-postgresql/pg_permission). There are a couple of things that can be achieved with pg_permission:

  • Gain a faster overview and list all permissions
  • Compare your “desired state” to what you got
  • Instantly fix errors

In short: pg_permission can do more than just list what is there. However, let us get started with the simple case – listing all permissions. pg_permission provides a couple of views that can be accessed directly once the extension has been deployed. Here is an example:

test=# \x
Expanded display is on.
test=# SELECT * FROM all_permissions WHERE role_name = 'workspace_owner';
-[ RECORD 1 ]-------------------------------------------------------
object_type | TABLE
role_name   | workspace_owner
schema_name | public
object_name | b
column_name |
permission  | SELECT
granted     | t
-[ RECORD 2 ]-------------------------------------------------------
object_type | TABLE
role_name   | workspace_owner
schema_name | public
object_name | b
column_name |
permission  | INSERT
granted     | t
-[ RECORD 3 ]-------------------------------------------------------
object_type | TABLE
role_name   | workspace_owner
schema_name | public
object_name | b
column_name |
permission  | UPDATE
granted     | f

The easiest way is to use the “all_permissions” view to gain an

[...]
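(The excerpt above is cut off.) The “Instantly fix errors” point refers to these views being updatable; a sketch of what fixing the missing UPDATE privilege from RECORD 3 might look like, assuming the behavior described in the pg_permission README:

-- assumption: flipping "granted" on all_permissions issues the corresponding GRANT
UPDATE all_permissions
   SET granted = true
 WHERE role_name = 'workspace_owner'
   AND object_type = 'TABLE'
   AND schema_name = 'public'
   AND object_name = 'b'
   AND permission = 'UPDATE';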
Category: postgresql

Luca Ferrari: PostgreSQL to Microsoft SQL Server Using TDS Foreign Data Wrapper

planet postgresql - 2019-01-18 (Fri) 09:00:00

I needed to push data from a Microsoft SQL Server 2005 to our beloved database, so why not use an FDW for the purpose? It was not as simple as with other FDWs, but it works!

PostgreSQL to Microsoft SQL Server Using TDS Foreign Data Wrapper

At work I needed to push data out from a Microsoft SQL Server 2005 to a PostgreSQL 11 instance. Foreign Data Wrappers were my first thought! Perl to the rescue was my second, but since I had some time, I decided to investigate the first way first.


The scenario was the following:

  • CentOS 7 machine running PostgreSQL 11, it is not my preferred setup (I do prefer either FreeBSD or Ubuntu), but I have to deal with that;
  • Microsoft SQL Server 2005 running on a Windows Server, surely not something I like to work with and to which I had to connect via remote desktop (argh!).
First Step: get TDS working

After a quick research on the web, I discovered that MSSQL talks the Tabular Data Stream protocol (TDS for short), so I didn’t need to install an ODBC stack on my Linux box. And luckily, there are binaries for CentOS:

$ sudo yum install freetds
$ sudo yum install freetds-devel freetds-doc

freetds comes along with a tsql terminal command that is meant to be a diagnostic tool, so it is nothing as complete as a psql terminal. You should really test your connectivity with tsql before proceeding further, since it can save you hours of debugging when things do not work.
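A quick connectivity test looks roughly like this (the host, user, and password here are placeholders, not values from the post):

$ tsql -H mssql.example.com -p 1433 -U sqluser -P 'secret'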
Thanks to a pragmatic test with tsql I discovered that I needed to open port 1433 (default MSSQL port) on our...

Category: postgresql

Venkata Nagothi: An Overview of JSON Capabilities Within PostgreSQL

planet postgresql - 2019-01-18 (Fri) 05:19:55
What is JSON?

JSON stands for “JavaScript Object Notation”, a type of data format popularly used by web applications. This means the data is transmitted between web applications and servers in that format. JSON was introduced as an alternative to the XML format. In the “good old days” the data used to get transmitted in XML format, which is a heavyweight format compared to JSON. Below is an example of a JSON-formatted string:

{ "ID":"001","name": "Ven", "Country": "Australia", "city": "Sydney", "Job Title":"Database Consultant"}

A JSON string can contain another JSON object within itself, as shown below:

{ "ID":"001", "name": "Ven", "Job Title":"Database Consultant", "Location":{"Suburb":"Dee Why","city": "Sydney","State":"NSW","Country": "Australia"}}

Modern-day web and mobile applications mostly generate the data in JSON format, also termed “JSON bytes”, which is picked up by the application servers and sent across to the database. The JSON bytes are in turn processed, broken down into separate column values, and inserted into an RDBMS table.
Example:

{ "ID":"001","name": "Ven", "Country": "Australia", "city": "Sydney", "Job Title":"Database Consultant"}

The above JSON data is converted to SQL like the statement below:

INSERT INTO test (id, name, country, city, job_title) VALUES (001, 'Ven', 'Australia', 'Sydney', 'Database Consultant');

When it comes to storing and processing JSON data, there are various NoSQL databases supporting it, the most popular being MongoDB. When it comes to RDBMS databases, until recently JSON strings were treated as normal text and there were no data types which specifically recognize, store or process JSON-format strings. PostgreSQL, the most popular open-source RDBMS, has come up with a JSON data type which turned out to be highly beneficial for performance, functionality and scalability when it comes to handling JSON data.
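For contrast with the column-by-column INSERT shown earlier, here is a minimal sketch of keeping the same document intact in a single column of the native jsonb type (the table name is illustrative):

-- store the document as-is and query it in place
CREATE TABLE test_json (id serial PRIMARY KEY, doc jsonb);

INSERT INTO test_json (doc)
VALUES ('{"ID":"001","name":"Ven","Country":"Australia","city":"Sydney","Job Title":"Database Consultant"}');

-- pull individual fields back out of the stored document
SELECT doc->>'name' AS name FROM test_json WHERE doc->>'city' = 'Sydney';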

PostgreSQL + JSON

The PostgreSQL database has become more and more popular ever since the JSON data type was introduced. In fact, P

[...]
Category: postgresql

Michael Paquier: Postgres 12 highlight - SKIP_LOCKED for VACUUM and ANALYZE

planet postgresql - 2019-01-17 (Thu) 16:25:09

The following commit has been merged into Postgres 12, adding a new option for VACUUM and ANALYZE:

commit: 803b1301e8c9aac478abeec62824a5d09664ffff
author: Michael Paquier <michael@paquier.xyz>
date: Thu, 4 Oct 2018 09:00:33 +0900
Add option SKIP_LOCKED to VACUUM and ANALYZE

When specified, this option allows VACUUM to skip the work on a
relation if there is a conflicting lock on it when trying to open it
at the beginning of its processing. Similarly to autovacuum, this
comes with a couple of limitations while the relation is processed
which can cause the process to still block:
- when opening the relation indexes.
- when acquiring row samples for table inheritance trees, partition
  trees or certain types of foreign tables, and that a lock is taken
  on some leaves of such trees.

Author: Nathan Bossart
Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada
Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com
Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org
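For reference, on Postgres 12 the new option is used like this (the table names are illustrative):

-- skip any relation whose lock cannot be acquired immediately
VACUUM (SKIP_LOCKED) table_one, table_two;
ANALYZE (SKIP_LOCKED, VERBOSE) table_one;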

Postgres 11 extended VACUUM so that multiple relations can be specified in a single command, processing each relation one at a time. However, if VACUUM gets stuck on a relation which is locked for one reason or another for a long time, it is up to the application layer which triggered the VACUUM to notice that and unblock the situation. SKIP_LOCKED brings more control here by immediately skipping any relation that cannot be locked at the beginning of VACUUM or ANALYZE processing, meaning that the processing will finish in a timely manner at the cost of potentially doing nothing, which can also be dangerous if a table keeps accumulating bloat and is never cleaned up. As mentioned in the commit message, there are some limitations similar to autovacuum:

  • Relation indexes may need to be locked, which would cause the processing to still block when working on them.
  • The list of relations part of a partition or inheritance tree to process is built at the beginning of VACUUM or
[...]
Category: postgresql

Bruce Momjian: Removable Certificate Authentication

planet postgresql - 2019-01-17 (Thu) 03:00:01

I mentioned previously that it is possible to implement certificate authentication on removable media, e.g., a USB memory stick. This blog post shows how it is done. First, root and server certificates and key files must be created:

$ cd $PGDATA

# create root certificate and key file
$ openssl req -new -nodes -text -out root.csr -keyout root.key -subj "/CN=root.momjian.us"
$ chmod og-rwx root.key
$ openssl x509 -req -in root.csr -text -days 3650 -extfile /etc/ssl/openssl.cnf -extensions v3_ca -signkey root.key -out root.crt

# create server certificate and key file
$ openssl req -new -nodes -text -out server.csr -keyout server.key -subj "/CN=momjian.us"
$ chmod og-rwx server.key
$ openssl x509 -req -in server.csr -text -days 365 -CA root.crt -CAkey root.key -CAcreateserial -out server.crt
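The excerpt stops here; a plausible continuation (not taken from the post; the /media/usb mount point is purely illustrative) is to create the client certificate with its key on the removable media and point the client at it:

# create a client certificate and key file directly on the USB stick;
# for cert authentication the CN must match the database user name
$ openssl req -new -nodes -text -out /media/usb/postgresql.csr -keyout /media/usb/postgresql.key -subj "/CN=postgres"
$ chmod og-rwx /media/usb/postgresql.key
$ openssl x509 -req -in /media/usb/postgresql.csr -text -days 365 -CA root.crt -CAkey root.key -CAcreateserial -out /media/usb/postgresql.crt

# connect using the certificate and key stored on the removable media
$ psql "host=momjian.us dbname=postgres user=postgres sslmode=verify-ca sslcert=/media/usb/postgresql.crt sslkey=/media/usb/postgresql.key sslrootcert=root.crt"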

Continue Reading »

Category: postgresql

Craig Kerstiens: Contributing to Postgres

planet postgresql - 2019-01-16 (Wed) 02:48:00

About once a month I get this question: “How do I contribute to Postgres?”. PostgreSQL is a great database with a solid code base, and for many of us, contributing back to open source is a worthwhile cause. The thing about contributing back to Postgres is that you generally don’t just jump right in and commit code on day one. So figuring out where to start can be a bit overwhelming. If you’re considering getting more involved with Postgres, here are a few tips that you may find helpful.

Follow what’s happening

The number one way to familiarize yourself with the Postgres development and code community is to subscribe to the mailing lists. Even if you’re not considering contributing back, the mailing lists can be a great place to level up your knowledge and skills around Postgres. Fair warning: the mailing lists can be very active. But that’s ok, as you don’t necessarily need to read every email as it happens—daily digests work just fine. There is a long list of mailing lists you can subscribe to, but here are a few I think you should know about:

  • pgsql-general - This is the most active of mailing lists where you’ll find questions about working with Postgres and troubleshooting. It’s a great place to start to chime in and help others as you see questions.
  • pgsql-hackers - Where core development happens. A must read to follow along for a few months before you start contributing yourself
  • pgsql-announce - Major announcements about new releases and happenings with Postgres.
  • pgsql-advocacy - If you’re more interested in the evangelism side this one is worth a subscription.

Following along with these lists will definitely prepare you to contribute code in the future, and will give you the opportunity to chime in on the discussions.

Familiarize yourself with the process

As you read along with the mailing lists, the docs will become one of your best friends for understanding how certain things work. The Postgres docs are rich with detail on how things work, so when you have questions it is best to check there before asking what may have already

[...]
Category: postgresql

Joshua Drake: CFP extended until Friday!

planet postgresql - 2019-01-15 (Tue) 23:30:00


We’ve had a great response to our PostgresConf US 2019 call for proposals, with over 170 potential presentations -- thank you to everyone who has submitted so far! As has become a tradition among Postgres Conferences, we are extending our deadline by one week to allow those final submissions to trickle in!

The new deadline is Friday, January 18th: submit now!
We accept all topics that relate to People, Postgres, Data, including any Postgres-related topic, such as open source technologies (Linux, Python, Ruby, Golang, PostGIS).

Talks in especially high demand are sessions related to Regulated Industries, including healthtech, fintech, govtech, etc., particularly use cases and case studies.
Interested in attending this year’s conference? We’ve expanded our offerings, with trainings and tutorials open to everyone who purchases a Platinum registration. There are no separate fees for Monday’s trainings (but seating will be first come, first served).

Don’t forget that Early Bird registration ends this Friday, January 18. Tickets are substantially discounted when purchased early.
Register for PostgresConf 2019
Interested in an AWESOME international Postgres Conference opportunity? Consider attending PgConf Russia.




Category: postgresql

Peter Eisentraut: Maintaining feature branches and submitting patches with Git

planet postgresql - 2019-01-15 (Tue) 21:41:26

I have developed a particular Git workflow for maintaining PostgreSQL feature branches and submitting patches to the pgsql-hackers mailing list and commit fests. Perhaps it’s also useful to others.

This workflow is useful for features that take a long time to develop, will be submitted for review several times, and will require a significant amount of changes over time. In simpler cases, it’s probably too much overhead.

You start as usual with a new feature branch off master

git checkout -b reindex-concurrently master

and code away. Make as many commits as you like for every change you make. Never rebase this branch. Push it somewhere else regularly for backup.

When it’s time to submit your feature for the first time, first merge in the current master branch, fix any conflicts, run all the tests:

git checkout master
git pull
make world
make check-world
git checkout reindex-concurrently
git merge master    # possibly conflict resolution
make world
make check-world

(The actual commands are something like make world -j4 -k and make check-world -Otarget -j4, but I’ll leave out those options in this post for simplicity.)

(Why run the build and tests on the master branch before merging? That ensures that if the build or tests fail later in your branch, it’s because of your code and not because of a bad situation in master or a system problem. It happens.)

Now your code is good to go. But you can’t submit it like that; you need to squash it into a single patch. At the same time, you don’t want to mess up your development history by overwriting your feature branch. I create what I call a separate “submission” branch for this, like so:

git checkout -b _submission/reindex-concurrently-v1 master

The naming here is just so that this shows up sorted separately in my git branch list. Then

git merge --squash reindex-concurrently

This effectively makes a copy of your feature branch as a single staged commit but doesn’t commit it yet. Now commit it:

git commit

At the first go-round, you should now write a commit message for your feature patch. Then c

[...]
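(The post is truncated here.) For reference, a squashed commit like the one above is typically turned into a mailable patch file with git format-patch; a sketch, not necessarily the author’s exact next step:

# produce a single [PATCH v1] file from the squashed commit on the submission branch
git format-patch -v1 master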
Category: postgresql

Bruce Momjian: Three Factors of Authentication

planet postgresql - 2019-01-15 (Tue) 06:00:01

Traditionally, passwords were used to prove identity electronically. As computing power has increased and attack vectors expanded, passwords are proving insufficient. Multi-factor authentication uses more than one authentication factor to strengthen authentication checking. The three factors are:

  1. What you know, e.g., password, PIN
  2. What you have, e.g., cell phone, cryptographic hardware
  3. What you are, e.g., fingerprint, iris pattern, voice

Postgres supports the first option, "What you know," natively using local and external passwords. It supports the second option, "What you have," using cert authentication. If the private key is secured with a password, that adds a second required factor for authentication. Cert only supports private keys stored in the file system, like a local file system or a removable USB memory stick.
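For reference, cert authentication of that kind is enabled with a pg_hba.conf entry along these lines (a minimal sketch):

# pg_hba.conf: require an SSL client certificate whose CN matches the database user
hostssl  all  all  0.0.0.0/0  cert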

One enhanced authentication method allows access to private keys stored on PIV devices, like the YubiKey. There are two advantages of using a PIV device compared to cert:

  • Requires a PIN, like a private-key password, but locks the device after three incorrect PIN entries (File-system-stored private keys protected with passwords can be offline brute-force attacked.)
  • While the private key can be used to decrypt and sign data, it cannot be copied from the PIV device, unlike one stored in a file system

Continue Reading »

Category: postgresql

Achilleas Mantzios: One Security System for Application, Connection Pooling and PostgreSQL - The Case for LDAP

planet postgresql - 2019-01-14 (Mon) 20:54:48

Traditionally, the typical application consists of the following components:

In this simple case, a basic setup would suffice:

  • the application uses a simple local authentication mechanism for its users
  • the application uses a simple connection pool
  • there is a single user defined for database access

However, as the organization evolves and gets larger more components are added:

  • more tenant apps or instances of the app accessing the database
  • more services and systems accessing the database
  • central authentication/authorization (AA) for all (or most) services
  • separation of components for easier future scaling

In the above scheme, all concerns are separated into individual components, each component serving a specialized purpose. However, the connection pool still uses a single dedicated database user, as in the simpler setup we saw above.

Besides the new components, also new requirements arrive:

  • better fine grained control of what users can do on the database level
  • auditing
  • better, more useful system logging

We can always implement all three with more application code or more layers in the application, but this is just cumbersome and hard to maintain.

In addition, PostgreSQL offers such a rich set of solutions in the aforementioned areas (security, Row Level Security, auditing, etc.) that it makes perfect sense to move all those services to the database layer. In order to get those services directly from the database, we must forget about having a single user in the database and use real individual users instead.
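A minimal sketch of what “real individual users” can look like on the database side (the role names are illustrative; authentication itself is delegated to LDAP later in this setup):

-- a group role carries the privileges; each real person gets an individual login role
CREATE ROLE app_readwrite NOLOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_readwrite;

CREATE ROLE alice LOGIN;   -- password checking handled by LDAP via pg_hba.conf
GRANT app_readwrite TO alice;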

This takes us to a scheme like the below:

In our use case we will describe a typical enterprise setup consisting of the above scheme where we use:

  • Wildfly app server (examples shown for version 10)
  • LDAP Authentication/Authorization Service
  • pgbouncer connection pooler
  • PostgreSQL 10

It seems like a typical setup, since jboss/wildfly has supported LDAP authentication and authorization for many years, and PostgreSQL has supported LDAP for many years as well.

However pgbouncer only started support for LDA

[...]
Category: postgresql

Claire Giordano: 10 Most Popular Citus Data Blog Posts in 2018, ft. Postgres

planet postgresql - 2019-01-14 (Mon) 00:42:00

Seasons each have a different feel, a different rhythm. Temperature, weather, sunlight, and traditions—they all vary by season. For me, summer usually includes a beach vacation. And winter brings the smell of hot apple cider on the stove, days in the mountains hoping for the next good snowstorm—and New Year’s resolutions. Somehow January is the time to pause and reflect on the accomplishments of the past year, to take stock in what worked, and what didn’t. And of course there are the TOP TEN LISTS.

Spoiler alert, yes, this is a Top 10 list. If you’re a regular on the Citus Data blog, you know our Citus database engineers love PostgreSQL. And one of the open source responsibilities we take seriously is the importance of sharing learnings, how-to’s, and expertise. One way we share learnings is by giving lots of conference talks (seems like I have to update our Events page every week with new events.) And another way we share our learnings is with our blog.

So just in case you missed any of our best posts from last year, here is the TOP TEN list of the most popular Citus Data blogs published in 2018. Enjoy.

The Postgres 10 feature you didn’t know about: CREATE STATISTICS

BY SAMAY SHARMA | Postgres stores a lot of statistics about your data in order to effectively retrieve results when you query your database. In this post, Samay dives deep into some of the statistics PostgreSQL stores and how you can leverage CREATE STATISTICS to improve query performance when different columns are related.

In Postgres, the planner relies on collected statistics to estimate how many rows will be returned after executing a certain part of the plan, which then influences which type of scan or join algorithm will be used. Before Postgres 10, there wasn’t an easy way to tell the planner to collect statistics which capture the relationship between columns. But since the release of Postgres 10, there’s a feature built to solve exactly this problem. And this feature has gotten even better in PostgreSQL 11, too. All about C

[...]
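(The excerpt is cut off above.) For reference, the feature being described looks roughly like this; the table and column names are illustrative, not taken from the post:

-- tell the planner that city and zip_code are related, so it stops treating them as independent
CREATE STATISTICS city_zip_stats (dependencies) ON city, zip_code FROM addresses;
ANALYZE addresses;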
Category: postgresql

Abdul Yadi: pgqr: a QR Code Generator

planet postgresql - 2019-01-12 (Sat) 21:23:03

Related to my post https://abdulyadi.wordpress.com/2015/11/14/extension-for-qr-code-bitmap/: I have repackaged the module and made it available on GitHub: https://github.com/AbdulYadi/pgqr.

This project adds two pieces of functionality to the QR code generator from the repository https://github.com/swex/QR-Image-embedded:

  1. In-memory monochrome bitmap construction (1 bit per pixel).
  2. Wrap the whole package as PostgreSQL extension.

This project has been compiled successfully in Linux against PostgreSQL version 11.
$ make clean
$ make
$ make install

On successful compilation, install the extension in your PostgreSQL database from psql:
test=# CREATE EXTENSION pgqr;

Function pgqr has 4 parameters:

  1. t text: text to be encoded.
  2. correction_level integer: 0 to 3.
  3. model_number integer: 0 to 2.
  4. scale integer: pixels for each dot.

Let us create a QR code:
test=# SELECT pgqr('QR Code with PostgreSQL', 0, 0, 4);
The output is a monochrome bitmap ready for display.

 

 

Category: postgresql

Quinn Weaver:

planet postgresql - 2019-01-12 (Sat) 07:28:00
In the Bay Area? This Tuesday, 2019-01-15, SFPUG features David Fetter's talk on ASSERTIONs. RSVPs close Monday at noon, so don't hesitate! Thanks to Heap for hosting at their FiDi office.
Category: postgresql

Shaun M. Thomas: PG Phriday: PgBouncer or Bust

planet postgresql - 2019-01-12 (Sat) 02:00:27

What is the role of PgBouncer in a Postgres High Availability stack? What even is PgBouncer at the end of the day? Is it a glorified traffic cop, or an integral component critical to the long-term survival of a Postgres deployment?

When we talk about Postgres High Availability, a lot of terms might spring to mind. Replicas, streaming, disaster recovery, fail-over, automation; it’s a ceaseless litany of architectural concepts and methodologies. The real question is: how do we get from Here to There?

The Importance of Proxies

It’s no secret that the application stack must communicate with the database. Regardless of how many layers of decoupling, queues, and atomicity our implementation has, data must eventually be stored for reference. But where is that endpoint? Presuming that the write target is Postgres, what ensures the data reaches that desired terminus?

Consider this diagram:

Managed Proxy Layer

In this case, it doesn’t matter what type of Standby we’re using. It could be a physical streaming replica, some kind of logical copy, or a fully configured BDR node. Likewise, the Failover Mechanism is equally irrelevant. Whether we rely on repmgr, Patroni, Stolon, Pacemaker, or a haphazard collection of ad-hoc scripts, the important part is that we separate the application from the database through some kind of proxy.

Patroni relies on HAProxy and Stolon has its own proxy implementation, but what about the others? Traditionally PgBouncer fills this role. Without this important component, applications must connect directly to either the Primary or post-promotion Standby. If we’re being brutally honest, the application layer can’t be trusted with that kind of responsibility.

But why is that? Simply stated, we don’t know what the application layer is. In reality, an application is anything capable of connecting to the database. That could be the official software, or it could be a report, or a maintenance script, or a query tool, or any number of other access vectors. Which database node are they connecting to, and doe

[...]
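(The excerpt ends here.) As a rough illustration of the indirection being described: with a PgBouncer configuration like the sketch below, applications only ever see the pooler, so after a failover only the pooler’s target has to change (hostnames and paths are illustrative):

; pgbouncer.ini
[databases]
appdb = host=current-primary.example.com port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt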
Category: postgresql

Laurenz Albe: pgbouncer authentication made easy

planet postgresql - 2019-01-10 (Thu) 18:00:45

pgbouncer is the most widely used connection pooler for PostgreSQL.
This blog will provide a simple cookbook for configuring user authentication with pgbouncer.

I wrote this cookbook using Fedora Linux and installed pgbouncer using the PGDG Linux RPM packages available from the download site.
But it should work pretty similarly anywhere.

 

What is a connection pooler?

Setting max_connections to a high value can impact performance and can even bring your database to its knees if all these connections become active at the same time.

Also, if database connections are short-lived, a substantial amount of your database resources can be wasted just opening database connections.

To mitigate these two problems, we need a connection pooler. A connection pooler is a proxy between the client and the database: clients connect to the connection pooler, which handles the SQL requests via a relatively stable set of persistent database connections (the “connection pool”).

Since clients connect to pgbouncer, it will have to be able to authenticate them, so we have to configure it accordingly.

 

The very simple method (authentication file)

This method is useful if the number of database users is small and passwords don’t change frequently.

For that, we create a configuration file userlist.txt in the pgbouncer configuration directory (on my system /etc/pgbouncer).
The file contains the database users and their passwords, so that pgbouncer can authenticate the client without resorting to the database server.
It looks like this:

"laurenz" "md565b6fad0e85688f3f101065bc39552df" "postgres" "md553f48b7c4b76a86ce72276c5755f217d"

You can write the file by hand using the information from the pg_shadow catalog table, or you can create it automatically.
For that to work, you need

    • the PostgreSQL command line client psql
      If it is not on your PATH, you’ll have to use the absolute path (something like /usr/pgsql-11/bin/psql or "C:\Program Files\PostgreSQL\11\bin\psql").
    • write access to the pgbouncer configuration file, which may require th
[...]
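(The excerpt is cut off above.) One way the automatic creation can look, as a sketch rather than the author’s exact command (the output path is illustrative):

# dump role names and password hashes from pg_shadow in userlist.txt format
psql -Atq -U postgres -d postgres \
     -c "SELECT '\"' || usename || '\" \"' || passwd || '\"' FROM pg_shadow" \
     > /etc/pgbouncer/userlist.txt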
Category: postgresql

Magnus Hagander: Nordic PGDay 2019 - Schedule announced and registration open

planet postgresql - 2019-01-10 (Thu) 17:41:25

Nordic PGDay 2019 is happy to announce our schedule has now been published.

Our registration has also been opened. Seats are limited so we recommend that you register early if you are interested! Early Bird discount is available until February 17th or for the first 40 tickets.

We look forward to seeing you in Copenhagen!

Category: postgresql

Joshua Drake: PostgresConf 2019: CFP and Early Bird closing soon

planet postgresql - 2019-01-10 (Thu) 03:04:00
PostgresConf hopes everyone had a great holiday season and we hope everyone has a fantastic 2019.

With January upon us, we start conference season. This year, PostgresConf 2019 will be in NYC at the Sheraton New York Times Square March 18-22, 2019.

If you have a story about lessons learned with Postgres, a cool feature that you've found a creative way of using, or just some practical knowledge you'd like to share with your fellow Postgres users, submit a talk. Remember, there are many people who are new to Postgres, so introductory-level talks are welcome. The CFP ends this Friday so get your talks submitted soon at:
https://postgresconf.org/conferences/2019#callforpapers

Also, the Early Bird window for PostgresConf 2019 will be ending next Friday, January 18. Tickets are substantially discounted if you purchase them early. On top of the Early Bird discount, there is a promo code for members of the NYC Postgres User Group for an additional 15% off. Just use the code 2019_PUG_NYC when buying your tickets.
https://postgresconf.org/conferences/2019/buytickets
Lastly, if you are already in the NYC area we look forward to seeing the community at the Winter Party on the 10th! https://www.meetup.com/postgres-nyc/events/257567648/
We hope to see you at PostgresConf 2019 and remember:


Category: postgresql
