Feed Aggregator

PHP 7.3.8 Release Announcement

php.net - 2019-08-01 (Thu) 17:19:59
Category: php

Abbas Butt: How to display encoded byte stream of a varchar column in PostgreSQL

planet postgresql - 2019-08-01 (Thu) 15:53:00
CREATE TABLE varchar_demo(description varchar(200));

INSERT INTO varchar_demo VALUES('こんにちは');
INSERT INTO varchar_demo VALUES('ABCDE');

SELECT * FROM varchar_demo;
 description 
-------------
 こんにちは
 ABCDE
(2 rows)


CREATE OR REPLACE FUNCTION dump(columnname TEXT, tablename TEXT,
                                rowid INT) RETURNS TEXT
LANGUAGE plpgsql
AS
$function$
DECLARE 
  hex_str TEXT;
  hex_byte TEXT;
  query TEXT;
  i INT;
  len INT;
BEGIN
  query := format('SELECT octet_length(%I) FROM %I offset %s
                   LIMIT 1', columnname, tablename, rowid);
  EXECUTE query INTO len;
  hex_str := 'Len=' || len || ' ';
  FOR i in 1..len LOOP
    -- convert_to() yields the value's bytes in the given encoding; a direct
    -- %I::bytea cast would fail, since stock PostgreSQL has no text-to-bytea cast.
    query := format('SELECT to_hex(get_byte(convert_to(%I, ''UTF8''), %s))
                     FROM %I OFFSET %s LIMIT 1',
                     columnname, i - 1, tablename, rowid);
    EXECUTE query INTO hex_byte;
    hex_str := hex_str || hex_byte || ',';
  END LOOP;
  RETURN hex_str;
END;
$function$;


select dump('description', 'varchar_demo', 0);
                         dump                         
------------------------------------------------------
 Len=15 e3,81,93,e3,82,93,e3,81,ab,e3,81,a1,e3,81,af,
(1 row)

select dump('description', 'varchar_demo', 1);
         dump          
-----------------------
 Len=5 41,42,43,44,45,
(1 row)
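For a quick cross-check without writing a function, the encoded byte stream can also be inspected directly (a sketch, assuming a UTF-8 server encoding; convert_to() returns a bytea, which psql renders in hex by default):

```sql
-- Show each row's byte length and its encoded bytes
SELECT description,
       octet_length(description) AS len,
       convert_to(description, 'UTF8') AS bytes
FROM varchar_demo;
```

For the rows above, the first value should show 15 bytes (each of the 5 Japanese characters occupies 3 bytes in UTF-8) and the second 5 bytes.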

Category: postgresql

PHP 7.1.31 Released

planet PHP - 2019-08-01 (Thu) 09:00:00
The PHP development team announces the immediate availability of PHP 7.1.31. This is a security release. All PHP 7.1 users are encouraged to upgrade to this version. For source downloads of PHP 7.1.31 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Category: php

PHP 7.3.8 Release Announcement

planet PHP - 2019-08-01 (Thu) 09:00:00
The PHP development team announces the immediate availability of PHP 7.3.8. This is a security release which also contains several bug fixes. All PHP 7.3 users are encouraged to upgrade to this version. For source downloads of PHP 7.3.8 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Category: php

PHP 7.2.21 Released

planet PHP - 2019-08-01 (Thu) 09:00:00
The PHP development team announces the immediate availability of PHP 7.2.21. This is a security release which also contains several minor bug fixes. All PHP 7.2 users are encouraged to upgrade to this version. For source downloads of PHP 7.2.21 please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Category: php

Sven Klemm: OrderedAppend: An optimization for range partitioning

planet postgresql - 2019-08-01 (Thu) 02:00:07

With this feature, we’ve seen up to 100x performance improvements for certain queries.

In our previous post on implementing constraint exclusion, we discussed how TimescaleDB leverages PostgreSQL’s foundation and expands on its capabilities to improve performance. Continuing with the same theme, in this post we will discuss how we’ve added support for ordered appends which optimize a large range of queries, particularly those that are ordered by time.

We’ve seen performance improvements up to 100x for certain queries after applying this feature, so we encourage you to keep reading!

Optimizing Appends for large queries

PostgreSQL represents how plans should be executed using “nodes”. There are a variety of different nodes that may appear in an EXPLAIN output, but we want to focus specifically on Append nodes, which essentially combine the results from multiple sources into a single result.

PostgreSQL has two standard Append nodes that you will commonly find in an EXPLAIN output:

  • Append: appends results of child nodes to return a unioned result
  • MergeAppend: merge output of child nodes by sort key; all child nodes must be sorted by that same sort key; accesses every chunk when used in TimescaleDB

When MergeAppend nodes are used with TimescaleDB, every chunk must be accessed to determine whether it holds keys that need to be merged. This is obviously less efficient, since it requires touching every chunk.

To address this issue, with the release of TimescaleDB 1.2 we introduced OrderedAppend as an optimization for range partitioning. The purpose of this feature is to optimize a large range of queries, particularly those that are ordered by time and contain a LIMIT clause. This optimization takes advantage of the fact that we know the range of time held in each chunk, and can stop accessing chunks once we’ve found enough rows to satisfy the LIMIT clause. As mentioned above, with this optimization we see performance improvements of up to 100x.
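As a sketch of the query shape this optimization targets (the hypertable and column names here are hypothetical, not from the post):

```sql
-- With range partitioning on "time", an ordered scan with a LIMIT can stop
-- after reading only the newest chunk(s) instead of merge-sorting all of them.
SELECT time, device_id, temperature
FROM conditions
ORDER BY time DESC
LIMIT 10;
```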

[...]
Category: postgresql

Jobin Augustine: PostgreSQL: Simple C extension Development for a Novice User (and Performance Advantages)

planet postgresql - 2019-07-31 (Wed) 23:15:10

One of the great features of PostgreSQL is its extendability. My colleague and senior PostgreSQL developer Ibar has blogged about developing an extension with much broader capabilities including callback functionality. But in this blog post, I am trying to address a complete novice user who has never tried but wants to develop a simple function with business logic. Towards the end of the blog post, I want to show how lightweight the function is by doing simple benchmarking which is repeatable and should act as a strong justification for why end-users should do such development.

Generally, PostgreSQL and extension developers work on a PostgreSQL source build. For a novice user, that may not be required, instead, dev/devel packages provided for the Linux distro would be sufficient. Assuming that you have installed PostgreSQL already, the following steps can get you the additional development libraries required.

On Ubuntu/Debian

$ sudo apt install postgresql-server-dev-11

On RHEL/CentOS

sudo yum install postgresql11-devel

The next step is to add the PostgreSQL binary path to your environment, to ensure that pg_config is in the path. On my Ubuntu laptop, this is how:

export PATH=/usr/lib/postgresql/11/bin:$PATH

The above paths may vary according to the environment.

Please make sure that pg_config executes without specifying the full path:

$ pg_config

The PostgreSQL installation provides a build infrastructure for extensions, called PGXS, so that simple extension modules can be built against an already-installed server. It automates common build rules for simple server extension modules.

$ pg_config --pgxs
/usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk

Now let’s create a directory for development. I am going to develop a simple extension addme with a function addme to add 2 numbers.

$ mkdir addme

Now we need to create a Makefile which builds the extension. Luckily, we can use all PGXS macros.

MODULES = addme
EXTENSION = addme
DATA = addme--0.0.1.sql
PG_CO[...]
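For orientation, the behavior the addme extension implements can be sketched in plain SQL (the post builds it as a C function; this SQL version only illustrates the semantics, not the C/PGXS mechanism):

```sql
-- Hypothetical pure-SQL stand-in for the extension's addme(int, int)
CREATE OR REPLACE FUNCTION addme(a int, b int) RETURNS int
LANGUAGE sql IMMUTABLE
AS $$ SELECT a + b $$;

SELECT addme(2, 3);  -- 5
```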
Category: postgresql

PHP fwdays'20

php.net - 2019-07-31 (Wed) 22:50:41
Category: php

Haroon .: PostgreSQL: Regular expressions and pattern matching

planet postgresql - 2019-07-31 (Wed) 21:31:25
A regular expression is a special text string used to describe a search pattern. PostgreSQL’s regular expressions support three separate approaches to pattern matching:

  • POSIX-style regular expressions (BREs and EREs)
  • the SIMILAR TO operator added in SQL:1999
  • the SQL LIKE operator

There are more advanced techniques for advanced pattern matching requirements, but those will very likely […]
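The three approaches can be contrasted on a single string (these operators are standard PostgreSQL; the comments show what each predicate evaluates to):

```sql
SELECT 'abc' ~ '^a.c$'        AS posix_regex,   -- true
       'abc' SIMILAR TO 'a_c' AS similar_to,    -- true
       'abc' LIKE 'a%'        AS like_op;       -- true
```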
Category: postgresql

php Central Europe 2019

php.net - 2019-07-31 (Wed) 15:29:41
Category: php

Paul Ramsey: Waiting for PostGIS 3: ST_AsMVT Performance

planet postgresql - 2019-07-31 (Wed) 07:39:17

Vector tiles are the new hotness, allowing large amounts of dynamic data to be sent for rendering right on web clients and mobile devices, and making very beautiful and highly interactive maps possible.

Since the introduction of ST_AsMVT(), people have been generating their tiles directly in the database more and more, and as a result wanting tile generation to go faster and faster.

Every tile generation query has to carry out the following steps:

  • Gather all the relevant rows for the tile
  • Simplify the data appropriately to match the resolution of the tile
  • Clip the data to the bounds of the tile
  • Encode the data into the MVT protobuf format

For PostGIS 3.0, performance of tile generation has been vastly improved.

Category: postgresql

Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 13 – Add support for --jobs in reindexdb

planet postgresql - 2019-07-31 (Wed) 07:04:31
On 27th of July 2019, Michael Paquier committed patch: Add support for --jobs in reindexdb. When doing a schema-level or a database-level operation, a list of relations to build is created, which gets processed in parallel using multiple connections, based on the recent refactoring for parallel slots in src/bin/scripts/. System catalogs are processed first … Continue reading "Waiting for PostgreSQL 13 – Add support for --jobs in reindexdb"
Category: postgresql

Paul Ramsey: Simple SQL GIS

planet postgresql - 2019-07-31 (Wed) 05:16:14

And, late on a Friday afternoon, the plaintive cry was heard!

Anyone got a KML/Shapefile of B.C. elxn boundaries that follows the water (Elections BC's KML has ridings going out into the sea)

— Chad Skelton (@chadskelton) November 16, 2012

And indeed, into the sea they do go!

And ‘lo, the SQL faeries were curious, and gave it a shot!

##### Commandline OSX/Linux #####

# Get the Shape files
# http://www.elections.bc.ca/index.php/voting/electoral-maps-profiles/
wget http://www.elections.bc.ca/docs/map/redis08/GIS/ED_Province.exe

# Exe? No prob, it's actually a self-extracting ZIP
unzip ED_Province

# Get a PostGIS database ready for the data
createdb ed_clip
psql -c "create extension postgis" -d ed_clip

# Load into PostGIS
# The .prj says it is "Canada Albers Equal Area", but they
# lie! It's actually BC Albers, EPSG:3005
shp2pgsql -s 3005 -i -I ED_Province ed | psql -d ed_clip

# We need some ocean! Use Natural Earth...
# http://www.naturalearthdata.com/downloads/
wget http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_ocean.zip
unzip ne_10m_ocean.zip

# Load the Ocean into PostGIS!
shp2pgsql -s 4326 -i -I ne_10m_ocean ocean | psql -d ed_clip

# OK, now we connect to PostGIS and start working in SQL
psql -e ed_clip

-- How big is the Ocean table?
SELECT Count(*) FROM ocean;
-- Oh, only 1 polygon. Well, that makes it easy...

-- For each electoral district, we want to difference away the ocean.
-- The ocean is one big polygon, this will take a while (if we
-- were being more subtle, we'd first clip the ocean down to
-- a reasonable area around BC.)
CREATE TABLE ed_clipped AS
SELECT
  CASE WHEN ST_Intersects(o.geom, ST_Transform(e.geom,4326))
       THEN ST_Difference(ST_Transform(e.geom,4326), o.geom)
       ELSE ST_Transform(e.geom,4326)
  END AS geom,
  e.edabbr, e.edname
FROM ed e, ocean o;

-- Check our geometry types...
SELECT DISTINCT ST_GeometryType(geom) FROM ed_clipped;

-- Oh, they are heterogeneous. Let's force them all multi U[...]
Category: postgresql

Avinash Kumar: Using plpgsql_check to Find Compilation Errors and Profile Functions

planet postgresql - 2019-07-31 (Wed) 03:02:26

There is always a need for profiling tools in databases for admins or developers. While it is easy to understand where an SQL statement is spending more time using EXPLAIN or EXPLAIN ANALYZE in PostgreSQL, the same would not work for functions. Recently, Jobin published a blog post where he demonstrated how plprofiler can be useful in profiling functions. plprofiler builds call graphs and creates flame graphs which make the report very easy to understand. Similarly, there is another interesting project called plpgsql_check which can be used for a similar purpose as plprofiler, while it also looks at code and points out compilation errors. Let us see all of that in action, in this blog post.

Installing plpgsql-check

You could use yum on RedHat/CentOS to install this extension from the PGDG repository. Steps to perform a source installation on Ubuntu/Debian are also shown in the following steps.

On RedHat/CentOS

$ sudo yum install plpgsql_check_11

On Ubuntu/Debian

$ sudo apt-get install postgresql-server-dev-11 libicu-dev gcc make
$ git clone https://github.com/okbob/plpgsql_check.git
$ cd plpgsql_check/
$ make && make install

Creating and enabling this extension

There are 3 advantages of using plpgsql_check:
  1. Checking for compilation errors in a function code
  2. Finding dependencies in functions
  3. Profiling functions
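As a quick illustration of the first advantage, the checker can be invoked directly once the extension is created (a sketch; the function name myfunc here is hypothetical):

```sql
CREATE EXTENSION plpgsql_check;

-- Report compilation and semantic problems in a PL/pgSQL function
SELECT * FROM plpgsql_check_function('myfunc()');
```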

When using plpgsql_check for the first 2 requirements, you may not need to add any entry to shared_preload_libraries. However, if you need to use it for profiling functions (3), then you should add appropriate entries to shared_preload_libraries so that it can load both plpgsql and plpgsql_check. Due to dependencies, plpgsql must come before plpgsql_check in the shared_preload_libraries config, as in the following example:

shared_preload_libraries = plpgsql, plpgsql_check

Any change to shared_preload_libraries requires a restart. You may see the following error when you do not have plpgsql before plpgsql_check in the shared_preload_libraries config.

$ g[...]
Category: postgresql

504 Gateway Timeout

planet PHP - 2019-07-31 (Wed) 00:00:00

504 Gateway Timeout is a status a proxy might emit when it’s not getting a response from an upstream server.

This is fairly close to 502 Bad Gateway, except that 502 should be returned when a proxy got an invalid response, and 504 typically when the proxy didn’t get a response at all (or got one too late).

Example

HTTP/1.1 504 Gateway Timeout
Content-Type: text/html

<h1>Timeout</h1>
<p>We did not get a timely response from our upstream application server :(</p>

References
Category: php

Viorel Tabara: Cloud Vendor Deep-Dive: PostgreSQL on AWS Aurora

planet postgresql - 2019-07-30 (Tue) 23:56:33

How deep should we go with this? I’ll start by saying that as of this writing, I could locate only 3 books on Amazon about PostgreSQL in the cloud, and 117 discussions on PostgreSQL mailing lists about Aurora PostgreSQL. That doesn’t look like a lot, and it leaves me, the curious PostgreSQL end user, with the official documentation as the only place where I could really learn some more. As I don’t have the ability, nor the knowledge to adventure myself much deeper, there is AWS re:Invent 2018 for those who are looking for that kind of thrill. I can settle for Werner’s article on quorums.

To get warmed up, I started from the Aurora PostgreSQL homepage where I noted that the benchmark showing that Aurora PostgreSQL is three times faster than a standard PostgreSQL running on the same hardware dates back to PostgreSQL 9.6. As I’ve learned later, 9.6.9 is currently the default option when setting up a new cluster. That is very good news for those who don’t want to, or cannot upgrade right away. And why only 99.99% availability? One explanation can be found in Bruce Momjian’s article.

Compatibility

According to AWS, Aurora PostgreSQL is a drop-in replacement for PostgreSQL, and the documentation states:

The code, tools, and applications you use today with your existing MySQL and PostgreSQL databases can be used with Aurora.

That is reinforced by Aurora FAQs:

It means that most of the code, applications, drivers and tools you already use today with your PostgreSQL databases can be used with Aurora with little or no change. The Amazon Aurora database engine is designed to be wire-compatible with PostgreSQL 9.6 and 10, and supports the same set of PostgreSQL extensions that are supported with RDS for PostgreSQL 9.6 and 10, making it easy to move applications between the two engines.

“most” in the above text suggests that there isn’t a 100% guarantee in which case those seeking certainty should consider purchasing technical support fro

[...]
Category: postgresql

Ibrar Ahmed: Parallelism in PostgreSQL

planet postgresql - 2019-07-30 (Tue) 22:31:52

PostgreSQL is one of the finest object-relational databases, and its architecture is process-based instead of thread-based. While almost all current database systems utilize threads for parallelism, PostgreSQL’s process-based architecture was implemented prior to POSIX threads. PostgreSQL launches a “postmaster” process on startup, and after that spawns a new process whenever a new client connects to PostgreSQL.

Before version 9.6 there was no parallelism within a single connection. It is true that queries from different clients could run concurrently because of the process architecture, but a single query could not gain any performance benefit from that. In other words, a single query ran serially and had no parallelism. This was a huge limitation, because a single query could not utilize multiple cores. Parallelism in PostgreSQL was introduced in version 9.6: a single query can be split across multiple workers that scan the data and utilize the multiple cores in a system. This gives PostgreSQL intra-query parallelism.

Parallelism in PostgreSQL was implemented as part of multiple features which cover sequential scans, aggregates, and joins.
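A minimal way to see this in action (a sketch; the table name t_large is hypothetical, and whether workers are actually used depends on table size, costs, and settings):

```sql
SET max_parallel_workers_per_gather = 2;
EXPLAIN SELECT count(*) FROM t_large;
-- A typical parallel plan shape:
--  Finalize Aggregate
--    ->  Gather (Workers Planned: 2)
--          ->  Partial Aggregate
--                ->  Parallel Seq Scan on t_large
```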

Components of Parallelism in PostgreSQL

There are three important components of parallelism in PostgreSQL: the leader process itself, the gather node, and the workers. Without parallelism the process itself handles all the data; however, when the planner decides that a query or part of it can be parallelized, it adds a Gather node within the parallelizable portion of the plan and makes Gather the root node of that subtree. Query execution starts at the leader process level, and all the serial parts of the plan are run by the leader. However, if parallelism is enabled and permissible for any part (or the whole) of the query, then a Gather node with a set of workers is allocated for it. Workers are the processes that run in parallel, each executing the part of the tree (the partial plan) that needs to be parallelized. The relation’s blocks are divided amongst the workers su

[...]
Category: postgresql

Interview with Jason McCreary

planet PHP - 2019-07-30 (Tue) 20:30:00
Category: php

Paul Ramsey: PostGIS Overlays

planet postgresql - 2019-07-30 (Tue) 17:00:00

One question that comes up often during our PostGIS training is “how do I do an overlay?” The terminology can vary: sometimes they call the operation a “union” sometimes an “intersect”. What they mean is, “can you turn a collection of overlapping polygons into a collection of non-overlapping polygons that retain information about the overlapping polygons that formed them?”

So an overlapping set of three circles becomes a non-overlapping set of 7 polygons.

Calculating the overlapping parts of a pair of shapes is easy, using the ST_Intersection() function in PostGIS, but that only works for pairs, and doesn’t capture the areas that have no overlaps at all.

How can we handle multiple overlaps and get out a polygon set that covers 100% of the area of the input sets? By taking the polygon geometry apart into lines, and then building new polygons back up.

Let’s construct a synthetic example: first, generate a collection of random points, using a Gaussian distribution, so there’s more overlap in the middle. The crazy math in the SQL below just converts the uniform random numbers from the random() function into normally distributed numbers.

CREATE TABLE pts AS
WITH rands AS (
  SELECT generate_series AS id,
         random() AS u1,
         random() AS u2
  FROM generate_series(1,100)
)
SELECT id,
       ST_SetSRID(ST_MakePoint(
         50 * sqrt(-2 * ln(u1)) * cos(2*pi()*u2),
         50 * sqrt(-2 * ln(u1)) * sin(2*pi()*u2)), 4326) AS geom
FROM rands;

The result looks like this:

Now, we turn the points into circles, big enough to have overlaps.

CREATE TABLE circles AS SELECT id, ST_Buffer(geom, 10) AS geom FROM pts;

Which looks like this:

Now it’s time to take the polygons apart. In this case we’ll take the exterior ring of the circles, using ST_ExteriorRing(). If we were dealing with complex polygons with holes, we’d have to use ST_DumpRings(). Once we have the rings, we want to make sure that everywhere rings cross, the lines are broken, so that no lines cross; they only touch at their e

[...]
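The "lines back to polygons" step the truncated text is describing is, in standard PostGIS practice, a combination of noding and polygonization (a hedged sketch using the circles table from the post, not necessarily the post's exact code):

```sql
-- ST_Union nodes the boundary lines, splitting them where they cross;
-- ST_Polygonize then rebuilds non-overlapping polygons from the line-work.
CREATE TABLE overlay AS
SELECT (ST_Dump(ST_Polygonize(lines.geom))).geom AS geom
FROM (
  SELECT ST_Union(ST_ExteriorRing(geom)) AS geom
  FROM circles
) AS lines;
```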
Category: postgresql

Hans-Juergen Schoenig: Combined indexes vs. separate indexes in PostgreSQL

planet postgresql - 2019-07-29 (Mon) 19:01:56

A “composite index”, also known as a “concatenated index”, is an index on multiple columns of a table. Many people wonder what is more beneficial: separate indexes or a composite index? Whenever we do training, consulting or support, this question is high up on the agenda, and many people keep asking it. Therefore, I decided to shed some light on the question.

Which indexes shall one create?

To discuss the topic on a more practical level, I created a table consisting of three columns. Then I loaded 1 million rows and added a composite index covering all three columns:

test=# CREATE TABLE t_data (a int, b int, c int);
CREATE TABLE
test=# INSERT INTO t_data SELECT random()*100000, random()*100000, random()*100000 FROM generate_series(1, 1000000);
INSERT 0 1000000
test=# CREATE INDEX idx_data ON t_data(a, b, c);
CREATE INDEX

The layout of the table is therefore as follows:

test=# \d t_data
              Table "public.t_data"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           |          |
 b      | integer |           |          |
 c      | integer |           |          |
Indexes:
    "idx_data" btree (a, b, c)

Let us run ANALYZE now to ensure that optimizer statistics are there. Usually autovacuum will kick in and create statistics for your table, but just to be sure, running ANALYZE does not hurt in this case.

test=# ANALYZE t_data;
ANALYZE

PostgreSQL will rearrange filters for you

The first important thing to observe is that PostgreSQL will try to rearrange the filters in your query for you. The following query filters on all indexed columns:

test=# explain SELECT * FROM t_data WHERE c = 10 AND b = 20 AND a = 10;
                     QUERY PLAN
---------------------------------------------------
 Index Only Scan [...]
Category: postgresql

Pages