
Regina Obe: PostGIS 2.5.0rc2

planet postgresql - 2018-09-16 (Sun) 09:00:00

The PostGIS development team is pleased to release PostGIS 2.5.0rc2.

Although this release will work for PostgreSQL 9.4 and above, to take full advantage of what PostGIS 2.5 offers you should be running PostgreSQL 11beta3+ and GEOS 3.7.0, both of which were released recently.

Best served with PostgreSQL 11beta3.

2.5.0rc2

Changes since PostGIS 2.5.0rc1 release are as follows:

  • 4162, ST_DWithin documentation examples for storing geometry and radius in table (Darafei Praliaskouski, github user Boscop).
  • 4163, MVT: Fix resource leak when the first geometry is NULL (Raúl Marín)
  • 4172, Fix memory leak in lwgeom_offsetcurve (Raúl Marín)
  • 4164, Parse error on incorrectly nested GeoJSON input (Paul Ramsey)
  • 4176, ST_Intersects supports GEOMETRYCOLLECTION (Darafei Praliaskouski)
  • 4177, Postgres 12 disallows variable length arrays in C (Laurenz Albe)
  • 4160, Use qualified names in topology extension install (Raúl Marín)
  • 4180, installed liblwgeom includes sometimes getting used instead of source ones (Regina Obe)

View all closed tickets for 2.5.0.

After installing the binaries or after running pg_upgrade, make sure to do:

ALTER EXTENSION postgis UPDATE;

If you use the other extensions packaged with PostGIS, make sure to upgrade those as well:

ALTER EXTENSION postgis_sfcgal UPDATE;
ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.
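To confirm the updates actually took effect, you can compare the packaged and installed versions; one way (a sketch using the standard catalog view) is:

```sql
-- List packaged vs. installed versions of the PostGIS extensions
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name LIKE 'postgis%';

-- PostGIS's own detailed version report
SELECT postgis_full_version();
```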

Category: postgresql

Bruce Momjian: Postgres 11 Features Presentation

planet postgresql - 2018-09-15 (Sat) 03:30:01

Now that I have given a presentation about Postgres 11 features in New York City, I have made my slides available online.


Avinash Kumar: PostgreSQL Webinar Wed Oct 10th – Enterprise-Grade PostgreSQL: Built on Open Source Tools

planet postgresql - 2018-09-15 (Sat) 02:23:22

Please join Percona’s PostgreSQL Support Technical Lead,  Avinash Vallarapu; Senior Support Engineer, Fernando Laudares; and Senior Support Engineer, Jobin Augustine, on Wednesday, October 10th, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4), as they demonstrate an enterprise-grade PostgreSQL® environment built using a combination of open source tools and extensions.

Register Now

“We built our application on top of PostgreSQL. It works great but only now that we took it to the market and it became a hit we realize how much it relies on the database. How can we “harden” PostgreSQL? How can we make the solution we built around PostgreSQL enterprise-grade?”

“I migrated from a proprietary database software to PostgreSQL. I am curious to know whether I can get the same features I used to have in the proprietary database software.”

You’ll find the answer to these questions and more in a series of blog posts we will be publishing on this topic, which will be followed by a live demo we planned for our webinar on October 10th, 2018.

The market coined the term “enterprise grade” or “enterprise ready” to differentiate products and service offerings for licensed database software. For example: there may be a standard database software or an entry-level package that delivers the core functionality and basic features. Likewise, there may be an enterprise version, a more advanced package which goes beyond the essentials to include features and tools indispensable for running critical solutions in production. With such a differentiation found in commercial software, we may wonder whether a solution built on top of an open source database like PostgreSQL can satisfy all the enterprise requirements.

It starts with building a secure PostgreSQL environment: tuning the database for the production workload, building a high availability strategy that avoids single points of failure, scaling PostgreSQL using connection poolers to avoid excessive use of server resources, and finally load balancing reads between the master and all

[...]

Sebastian Insausti: How to Deploy PostgreSQL for High Availability

planet postgresql - 2018-09-14 (Fri) 17:56:00
Introduction

Nowadays, high availability is a requirement for many systems, no matter what technology is used. This is especially important for databases, as they store the data that applications rely upon. There are different ways to replicate data across multiple servers, and to fail traffic over when, for example, a primary server stops responding.

Architecture

There are several architectures for PostgreSQL high availability, but the basic ones would be master-slave and master-master architectures.

Master-Slave

This may be the most basic HA architecture we can set up, and it is often the easiest to set up and maintain. It is based on one master database with one or more standby servers. These standby databases remain synchronized (or almost synchronized) with the master, depending on whether the replication is synchronous or asynchronous. If the main server fails, the standby contains almost all of the data of the main server and can quickly be turned into the new master database server.

We can have two categories of standby databases, based on the nature of the replication:

  • Logical standbys - replication between the master and the slaves is done via SQL statements.
  • Physical standbys - replication between the master and the slaves is done via internal data structure modifications.

In the case of PostgreSQL, a stream of write-ahead log (WAL) records is used to keep the standby databases synchronized. This can be synchronous or asynchronous, and the entire database server is replicated.

From version 10, PostgreSQL includes a built-in option to set up logical replication, which is based on constructing a stream of logical data modifications from the information in the WAL. This replication method allows data changes from individual tables to be replicated without needing to designate a master server. It also allows data to flow in multiple directions.
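In PostgreSQL 10 terms, that built-in logical replication is driven by publications and subscriptions; a minimal sketch (the table name and connection string here are hypothetical) looks like:

```sql
-- On the publishing server: publish changes to one table
CREATE PUBLICATION sensors_pub FOR TABLE sensor_readings;

-- On the subscribing server: pull those changes in
CREATE SUBSCRIPTION sensors_sub
    CONNECTION 'host=pub.example.com dbname=metrics user=repuser'
    PUBLICATION sensors_pub;
```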

But a master-slave setup is not enough to effectively ensure high availability, as we also need to handle failures. To handle failures, we need to be able t

[...]

Liaqat Andrabi: Webinar : Database Security in PostgreSQL [Follow Up]

planet postgresql - 2018-09-13 (Thu) 21:26:04

Database security is an increasingly critical topic for any business handling personal data. A data breach can have serious ramifications for an organization, especially if the proper security protocols are not in place.

There are many ways to harden your database. As an example, PostgreSQL addresses security using firewalls, encryption, and authentication levels, among other measures.

2ndQuadrant hosted a webinar on Database Security in PostgreSQL to highlight security concepts, features and architecture. The webinar was presented by Kirk Roybal, Principal Consultant at 2ndQuadrant – the recording is now available here.

Some of the questions that Kirk responded to are listed below:

Q1: What are your thoughts on performance of row-level security vs. doing that filtering via WHERE at the application level and how that affects development? I.E. now that you’re filtering via DB capabilities you lose the visibility of that being done at the application level – it becomes a bit of a black box of “it just works” for the development team.

A1: The PostgreSQL query parser is involved in evaluating the constraint either way. Since this is mostly dependent on PostgreSQL, there will be very little or no measurable difference in performance. Putting the security in the database has the advantage of being modifiable without changes to the application layer.
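As an illustration of the database-side approach Kirk describes, a row-level security policy can be as small as this (the table and column names are hypothetical):

```sql
-- Only let a role see the rows it owns
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

CREATE POLICY accounts_owner_only ON accounts
    USING (owner = current_user);
```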

Q2: Do you have any suggestions for encrypting data at rest?

A2: PostgreSQL provides pgcrypto as an extension. PostgreSQL also allows you to create your own datatypes, operators and aggregates. Put the two together and you have encryption at rest.
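A minimal sketch of that combination, using pgcrypto's symmetric PGP functions (assuming a hypothetical secrets table with a bytea payload column; the passphrase is obviously a placeholder and real key handling is omitted):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Encrypt on the way in...
INSERT INTO secrets (payload)
VALUES (pgp_sym_encrypt('card number 1234', 'not-a-real-passphrase'));

-- ...and decrypt on the way out
SELECT pgp_sym_decrypt(payload, 'not-a-real-passphrase') FROM secrets;
```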

Q3: Is it possible to configure Azure AD authentication too?

A3: Yes, if you create a bare Linux machine, you can configure anything you want.

Q4: Do you support performance tuning on AWS RDS Postgres?

A4: Yes, we do provide the Performance Tuning service for RDS. Because of the closed nature of the system, however, there might be some advanced settings that we won’t be able to tune.

Q5: What are the main differences between the PostgreS

[...]

Christophe Pettus: “Securing PostgreSQL” at PDXPUG PostgreSQL Day 2018

planet postgresql - 2018-09-13 (Thu) 12:22:26

The slides from my presentation, Securing PostgreSQL, given at PDXPUG PostgreSQL Day 2018, are now available.


Joe Abbate: The Future of Pyrseas, revisited

planet postgresql - 2018-09-13 (Thu) 10:55:02

Over two years ago, I lamented that this blog had remained silent for too long and about the lack of development activity on the Pyrseas project. I also announced immediate and longer term plans. Pyrseas 0.8 was finally released before the end of last year.

From time to time we get interest in Pyrseas, sometimes from people who use it in unexpected ways. However, my own interest in developing it further has declined considerably, as have my actual commits. I probably spend less than four hours a week on it. Therefore, I’ve decided to put the project on life support. I will attempt to notify owners of open issues via GitHub of this status. I may continue to work on some issues or enhancements, but on an even more reduced scale. If requested, I’ll make maintenance releases as needed. Should any enterprising developer want to take it over, I’ll gladly consider handing over the reins.

Thanks to everyone who has contributed, inquired or shown interest in Pyrseas over the past eight years. In particular, I’d like to extend my gratitude to Daniele Varrazzo (not only for his contributions to Pyrseas but also for his work on Psycopg2), Roger Hunwicks and Josep Martínez Vila.


Bruce Momjian: Multi-Host Pg_dump

planet postgresql - 2018-09-13 (Thu) 02:00:01

You have probably looked at logical dumps as supported by pg_dump and restores by pg_restore or, more simply, psql. What you might not have realized is how many options there are for dumping and restoring when multiple computers are involved.

The simplest case is dumping and restoring on the same server:

$ pg_dump -h localhost -Fc test > /home/postgres/dump.sql
$ pg_restore -h localhost -d test < /home/postgres/dump.sql
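The post goes on to cover multi-host variations; one common pattern (a sketch, not necessarily the exact commands from the article) streams a plain dump from one host straight into another with no intermediate file:

```
# Dump from host1 and restore into host2 in a single pipeline
$ pg_dump -h host1 test | psql -h host2 test
```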

Continue Reading »


Ajay Kulkarni: Announcing TimescaleDB 1.0: First enterprise-ready time-series database to support full SQL & scale

planet postgresql - 2018-09-12 (Wed) 21:45:40
Announcing TimescaleDB 1.0: The first enterprise-ready time-series database to support full SQL and scale

Over 1M downloads; production deployments at Comcast, Bloomberg, Cray, and more; native Grafana integration; first-class Prometheus support; and dozens of new features signify positive momentum for TimescaleDB and the future of the time-series market

Today, we are excited to officially announce the first release candidate for TimescaleDB 1.0.

If you work in the software industry, you already know that 1.0 announcements generally signify that your product is “production-ready.”

Ironically, just last week we got this question on Twitter from a TimescaleDB user, who founded a weather mapping company:

@TimescaleDB Any reason why Timescale is not yet version 1.0.0? We use it for a few months under heavy load and so far no problems.

 — @ilblog

Yes, our 1.0 release is a little overdue as we’ve actually been production-ready for quite some time now.

Today, just a year and a half after our launch in April 2017, businesses large and small all over the world trust TimescaleDB for powering mission-critical applications including industrial data analysis, complex monitoring systems, operational data warehousing, financial risk management, geospatial asset tracking, and more.

“At Bloomberg, we have millions of data feeds and trillions of data points dating back over 100 years. My team and I have been extremely pleased with TimescaleDB’s capability to accommodate our workload while simplifying geo-financial analytics and data visualization. If you are looking to support large scale time-series datasets, then TimescaleDB is a good fit.” (Erik Anderson, Lead Software Engineer at Bloomberg)

From 0 to over 1 million downloads in less than 18 months

Since our launch, we’ve experienced some significant momentum:

[...]

pgCMH - Columbus, OH: Inside PG 11

planet postgresql - 2018-09-12 (Wed) 13:00:00

The Sep meeting will be held at 18:00 EST on Tues, the 25th. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea on the amount of food needed.

What

CoverMyMeds’ very own Andy will be presenting this month. He’s going to tell us all about the upcoming major PostgreSQL release 11. You’ll definitely want to attend this one to find out all that’s new and cool in the upcoming release!

Where

CoverMyMeds has graciously agreed to validate your parking if you use their garage so please park there:

You can safely ignore any sign saying to not park in the garage as long as it’s after 17:30 when you arrive.

Park in any space that is not marked ‘24 hour reserved’.

Once parked, take the elevator/stairs to the 3rd floor to reach the Miranova lobby. Once in the lobby, the elevator bank is in the back (West side) of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. (If the elevator won’t let you pick the 11th floor, contact Doug or CJ (info below)). Once you exit the elevator, look to your left and right; one side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space:

The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079 or CJ at +1.740.407.7043


Dimitri Fontaine: PostgreSQL 11 and Just In Time Compilation of Queries

planet postgresql - 2018-09-12 (Wed) 01:34:00

PostgreSQL 11 is brewing and will be released soon. In the meantime, testing it with your own application is a great way to make sure the community catches all the remaining bugs before the dot-zero release.

One of the big changes in the next PostgreSQL release is the result of Andres Freund’s work on the query executor engine. Andres has been working on this part of the system for a while now, and in the next release we are going to see a new component in the execution engine: a JIT expression compiler!

Benchmarks and TPC-H

Benchmarks are a great tool to show where performance improvements provide a benefit. The JIT expression compiler currently works best in the following situation:

  • the query contains several complex expressions, such as aggregates.
  • the query reads a fair amount of data but isn’t starved on IO resources.
  • the query is complex enough to warrant spending JIT efforts on it.

A query that fetches some information over a primary key surrogate id would not be a good candidate to see the improvements given by the new JIT infrastructure in PostgreSQL.

The TPC-H benchmark Q1 query is a good candidate for measuring the impact of the new executor stack at its best, so that’s the one we’re using here.

The specifications of the benchmark are available in a 137-page PDF document named TPC Benchmark™ H. Each query in this specification comes with a business question, so here’s Q1:

Pricing Summary Report Query (Q1)

This query reports the amount of business that was billed, shipped, and returned.

The Pricing Summary Report Query provides a summary pricing report for all lineitems shipped as of a given date. The date is within 60 - 120 days of the greatest ship date contained in the database. The query lists totals for extended price, discounted extended price, discounted extended price plus tax, average quantity, average extended price, and average discount. These aggregates are grouped by RETURNFLAG and LINESTATUS, and listed in ascending order of RETURNFLAG and LINESTATUS. A count of the number of lineitems

[...]
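For reference, Q1 looks roughly like this (paraphrased from the TPC-H specification, with the date parameter fixed at 90 days; consult the spec for the authoritative text):

```sql
SELECT
    l_returnflag,
    l_linestatus,
    sum(l_quantity) AS sum_qty,
    sum(l_extendedprice) AS sum_base_price,
    sum(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
    sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
    avg(l_quantity) AS avg_qty,
    avg(l_extendedprice) AS avg_price,
    avg(l_discount) AS avg_disc,
    count(*) AS count_order
FROM lineitem
WHERE l_shipdate <= date '1998-12-01' - interval '90' day
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;
```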

Haroon .: PostgreSQL for IoT Data Retention and Archiving

planet postgresql - 2018-09-11 (Tue) 16:08:38

We do understand that the IoT revolution is resulting in enormous amounts of data. With brisk data growth, where the data is mostly append-only time series, relational databases and DBAs have a rather tough task to store, maintain, archive, and in some cases get rid of old data efficiently. In my previous posts, I talked about various strategies and techniques around better scalability for data storage using PostgreSQL and the Postgres-BDR extension. Data retention is becoming ever more important, so let’s see what PostgreSQL 10 and above have to offer to efficiently manage your data retention needs.

PostgreSQL has supported time-based partitioning in some form for quite some time. However, it wasn’t part of core PostgreSQL. PostgreSQL 10 made a major improvement in this area by introducing declarative partitioning.

As Postgres-BDR runs as an extension on top of PostgreSQL 10 and above, we get all partitioning features and improvements of PostgreSQL 10 and above in Postgres-BDR as well. So from the implementation perspective in the IoT domain, you can now create and manage partitions over time and space in PostgreSQL for multimaster environments using Postgres-BDR extension.

So let’s see how easy it is to implement data retention policies with PostgreSQL’s partitioning improvements.

CREATE TABLE iot_temperature_sensor_data (
    ts timestamp without time zone,
    device_id text,
    reading float
) PARTITION BY RANGE (ts);

PARTITION BY RANGE tells PostgreSQL that we are partitioning this table by range using the column ts.

Let’s create some partitions:

CREATE TABLE iot_temperature_sensor_data_2018_february
    PARTITION OF iot_temperature_sensor_data
    FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');

CREATE TABLE iot_temperature_sensor_data_2018_march
    PARTITION OF iot_temperature_sensor_data
    FOR VALUES FROM ('2018-03-01') TO ('2018-04-01');

CREATE TABLE iot_temperature_sensor_data_2018_april
    PARTITION OF iot_temperature_sensor_data
    FOR VALUES FROM ('2018-04-01') TO ( [...]
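The retention side of the policy then becomes a matter of detaching and dropping the oldest partition once it ages out, for example:

```sql
-- Remove February's data once it falls outside the retention window
ALTER TABLE iot_temperature_sensor_data
    DETACH PARTITION iot_temperature_sensor_data_2018_february;
DROP TABLE iot_temperature_sensor_data_2018_february;
```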

[Press Release] PostgreSQL 11 Released

www.postgresql.jp news - 2018-09-11 (Tue) 11:43:54
[Press Release] PostgreSQL 11 Released, posted by anzai, 2018/09/11 (Tue) 11:43

Abdul Yadi: pgsocket: Extension for Simple TCP/IP Socket Client

planet postgresql - 2018-09-11 (Tue) 11:05:32

pgsocket is an extension for the PostgreSQL server to send bytes to a remote TCP/IP socket server. This first version provides only a single function for one-way sends of data as a bytea.

This extension is compiled in Linux against PostgreSQL version 10.

Download source code from https://github.com/AbdulYadi/pgsocket. Build in Linux as usual:
$ make clean
$ make
$ make install

On successful compilation, install the extension in the PostgreSQL environment:
CREATE EXTENSION pgsocket;

Let us send the message "Hello" to, for example, a host with IP address nnn.nnn.nnn.nnn on port 9090 with a 30-second send timeout:
SELECT pgsocketsend('nnn.nnn.nnn.nnn', 9090, 30, (E'\\x' || encode('Hello', 'hex'))::bytea);

Or using a host name instead of an IP address:
SELECT pgsocketsend('thesocketserver', 9090, 30, (E'\\x' || encode('Hello', 'hex'))::bytea);


Bruce Momjian: Monitoring Complexity

planet postgresql - 2018-09-11 (Tue) 03:15:01

I have always had trouble understanding the many monitoring options available in Postgres. I was finally able to collect all popular monitoring tools into a single chart (slide 96). It shows the various levels of monitoring: OS, process, query, parser, planner, executor. It also separates instant-in-time reporting and across-time analysis options.


Paul Ramsey: Parallel PostGIS and PgSQL 11

planet postgresql - 2018-09-11 (Tue) 01:00:00

A little under a year ago, with the release of PostgreSQL 10, I evaluated the parallel query infrastructure and how well PostGIS worked with it.

The results were less than stellar for my example data, which was small-but-not-too-small: under default settings of PostgreSQL and PostGIS, parallel behaviour did not occur.

However, unlike in previous years, as of PostgreSQL 10 it was possible to get parallel plans by changing PostGIS settings only. This was a big improvement over PostgreSQL 9.6, in which substantial changes to the PostgreSQL default settings were needed to force parallel plans.

PostgreSQL 11 promises more improvements to parallel query:

  • Parallelized hash joins
  • Parallelized CREATE INDEX for B-tree indexes
  • Parallelized CREATE TABLE .. AS, CREATE MATERIALIZED VIEW, and certain queries using UNION

With the exception of CREATE TABLE ... AS, none of these is going to affect spatial parallel query. However, there have also been some non-headline changes that have improved parallel planning and thus spatial queries.

TL;DR:

PostgreSQL 11 has slightly improved parallel spatial query:

  • Costly spatial functions on the query target list (aka, the SELECT ... line) will now trigger a parallel plan.
  • Under default PostGIS costings, parallel plans do not kick in as soon as they should.
  • Parallel aggregates parallelize readily under default settings.
  • Parallel spatial joins require higher costings on functions than they probably should, but will kick in if the costings are high enough.
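The costings referred to above are ordinary function COST settings; as a sketch, a cost can be raised manually like this (the value 10000 is illustrative, not a recommendation from the post):

```sql
-- Raise the planner's cost estimate for a spatial function so that
-- parallel plans become attractive sooner
ALTER FUNCTION ST_Intersects(geometry, geometry) COST 10000;
```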
Setup

In order to run these tests yourself, you will need:

  • PostgreSQL 11
  • PostGIS 2.5

You’ll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best.

The setup instructions show where to download the Canadian polling division data used for the testing:

  • pd a table of ~70K polygons
  • pts a table of ~70K points
  • pts_10 a table of ~700K points
  • pts_100 a table of ~7M points

We will work with the default configuration parameters and just mess

[...]

Regina Obe: PGOpen 2018 Data Loading Presentation Slides

planet postgresql - 2018-09-09 (Sun) 13:16:00

At PGOpen 2018 in San Francisco, we gave a talk on 10 ways to load data into Postgres. This is one of the rare talks where we didn't talk much about PostGIS. However, we did showcase the tools ogr_fdw, ogr2ogr, and shp2pgsql, which are commonly used for loading spatial data but are equally good for loading non-spatial data. Below are the slide links.

Continue reading "PGOpen 2018 Data Loading Presentation Slides"

REGINA OBE: pgAdmin4 now offers PostGIS geometry viewer

planet postgresql - 2018-09-09 (Sun) 06:13:00

pgAdmin4 version 3.3, released this week, comes with a PostGIS geometry viewer. You will be able to see the graphical output of your query directly in pgAdmin, provided you output a geometry or geography column. If your column is of SRID 4326 (WGS 84 lon/lat), pgAdmin will automatically display it against an OpenStreetMap background.

We have Xuri Gong to thank for working on this as a PostGIS/pgAdmin Google Summer of Code (GSOC) project. We'd like to thank Victoria Rautenbach and Frikan Erwee for mentoring.

Continue reading "pgAdmin4 now offers PostGIS geometry viewer"

Avinash Kumar: Setting up Streaming Replication in PostgreSQL

planet postgresql - 2018-09-08 (Sat) 03:00:58

Configuring replication between two databases is considered a key strategy for achieving high availability during disasters, and it provides fault tolerance against unexpected failures. PostgreSQL satisfies this requirement through streaming replication. We shall talk about another option, logical replication and logical decoding, in a future blog post.

Streaming replication works on log shipping. Every transaction in postgres is written to a transaction log called WAL (write-ahead log) to achieve durability. A slave uses these WAL segments to continuously replicate changes from its master.

There are three mandatory processes, wal sender, wal receiver, and startup, that play a major role in achieving streaming replication in Postgres.

A wal sender process runs on the master, whereas the wal receiver and startup processes run on the slave. When you start replication, the wal receiver process sends the master the LSN (Log Sequence Number) up to which WAL data has been replayed on the slave. The wal sender process on the master then sends the WAL data from that LSN up to the latest LSN to the slave. The wal receiver writes the WAL data sent by the wal sender to WAL segments, and the startup process on the slave replays the data written to those segments. At that point, streaming replication has begun.

Note: Log Sequence Number, or LSN, is a pointer to a location in the WAL.

Steps to set up streaming replication between a master and one slave

Step 1:

Create a user on the master that the slave will connect as for streaming the WALs. This user must have the REPLICATION role.

CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replicator';

Step 2:

The following parameters on the master are considered as mandatory when setting up streaming replication.

  • archive_mode : Must be set to ON to enable archiving of WALs.
  • wal_level : Must be at least set to hot_standby  until version 9.5 or replica  in the later versions.
  • max_wal_senders : Must be set t
[...]
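Put together, a primary's postgresql.conf fragment for the parameters above might look like this (values are illustrative, and the archive_command destination is a placeholder):

```
# postgresql.conf on the master
archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'
wal_level = replica          # hot_standby on 9.5 and earlier
max_wal_senders = 10
```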

Bruce Momjian: Signing Rows

planet postgresql - 2018-09-07 (Fri) 22:30:01

With the RSA keys created in my previous blog entry, we can now properly sign rows to provide integrity and non-repudiation, which we did not have before. To show this, let's create a modified version of the previous schema by renaming the last column to signature:

CREATE TABLE secure_demo2 (
    id SERIAL,
    car_type TEXT,
    license TEXT,
    activity TEXT,
    event_timestamp TIMESTAMP WITH TIME ZONE,
    username NAME,
    signature BYTEA
);
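Bruce signs rows with the RSA keys from his previous entry, computed outside the database. As a simplified stand-in for illustration only, pgcrypto's hmac() can populate the same column; note that an HMAC gives integrity but not the non-repudiation that asymmetric keys provide, and the key here is a placeholder:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Integrity-only sketch: HMAC over a canonical concatenation of the columns
INSERT INTO secure_demo2 (car_type, license, activity, event_timestamp, username, signature)
VALUES ('sedan', 'ABC-123', 'entry', current_timestamp, current_user,
        hmac('sedan|ABC-123|entry', 'demo-only-secret', 'sha256'));
```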

Continue Reading »

