Feed Aggregator

Sebastian Insausti: How to Deploy Highly Available PostgreSQL with Single Endpoint for WordPress

planet postgresql - 2019-04-01 (Mon) 23:00:27

WordPress is open source software you can use to create your website, blog, or application. There are many designs and features/plugins you can add to your WordPress installation. WordPress itself is free software, but there are many commercial plugins to improve it, depending on your requirements.

WordPress makes it easy for you to manage your content and it’s really flexible. Create drafts, schedule publication, and look at your post revisions. Make your content public or private, and secure posts and pages with a password.


To run WordPress you need at least PHP 5.2.4, MySQL 5.0 (or MariaDB), and Apache or Nginx. Some of these minimum versions have reached EOL and may expose your site to security vulnerabilities, so you should install the latest versions available for your environment.

As we can see, WordPress currently supports only the MySQL and MariaDB database engines. WPPG is a plugin based on the PG4WP plugin that makes it possible to install and use WordPress with a PostgreSQL database as the backend. It works by replacing calls to MySQL-specific functions with generic calls that map them to other database functions, and by rewriting SQL queries on the fly when needed.
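To make this concrete, here is an illustrative sketch (not taken from the plugin's source) of the kind of rewrite such a layer performs, turning a MySQL-flavoured query into PostgreSQL-compatible SQL:

-- MySQL-flavoured SQL that WordPress might issue:
--   SELECT * FROM `wp_posts` LIMIT 0, 10;
-- and a PostgreSQL-compatible rewrite such a layer would need to produce:
SELECT * FROM wp_posts LIMIT 10 OFFSET 0;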

For this blog, we’ll install one application server with WordPress 5.1.1 and HAProxy 1.5.18 on the same host, and two PostgreSQL 11 database nodes (master-standby). The operating system for all nodes will be CentOS 7. To deploy the databases and the load balancer we’ll use the ClusterControl system.

This is a basic environment. You can improve it by adding more high availability features as you can see here. So, let’s start.

Database Deployment

First, we need to install our PostgreSQL database. For this, we’ll assume you have ClusterControl installed.

To perform a deployment from ClusterControl, simply select the option

[...]
Category: postgresql

Regina Obe: SQL Server on Linux

planet postgresql - 2019-04-01 (Mon) 22:31:00

Today is April 1st. Having no ideas for an April Fools' joke for today, I dug up one of our old April Fools' posts, and it was pretty scary how the joke is just about true now. Yes, SQL Server now really does run on Linux and is on its 2017 edition, but it is still poor competition for PostgreSQL.

A goody from our old joke archives

CatchMe - Microsoft SQL Server for Unix and Linux
Category: postgresql

Devrim GÜNDÜZ: End of the naming game: The PostgreSQL project changes its name

planet postgresql - 2019-04-01 (Mon) 18:36:00
I started using PostgreSQL around September 1998. The first problem I had was pronouncing it, and even putting the right capital letters in the right places.

Was it PostGreSQL? PostgresSQL? PoStGreSQL? PostgreySQL?

Recently Craig also wrote about the same problem.

Starting today, the PostgreSQL Global Development Group (abbreviated as PGDG) announced that the project name will be written as PostgresQL. This will (hopefully) solve the problems, and will also help us drop the "QL" in 2024. Starting with v12, all packages will also "provide" postgresXY as the package name, for a smooth change in 2024. Meanwhile, as of today, the project will accept "Postgre" as an alias for those who did not want to learn the name of the software they are using. I heard rumours that people also say "Orac" or "SQ Serv" or "MyS", so we will now be free to drop the SQL in our name, too.

Thanks to everyone who made this change. This was a real blocker for the community, and it will also help newbies in the PostgreSQL Facebook group -- they will now be free to use "Postgre" from now on.
Category: postgresql

Raghavendra Rao: Install PL/Java 1.5.2 in PostgreSQL 11

planet postgresql - 2019-04-01 (Mon) 08:09:14
PostgreSQL 11 includes several procedural languages in the base distribution: PL/pgSQL, PL/Tcl, PL/Perl, and PL/Python. In addition, there are a number of procedural languages that are developed and maintained outside the core PostgreSQL distribution, like PL/Java (Java), PL/Lua (Lua), PL/R (R), PL/sh (Unix Shell), and PL/v8 (JavaScript). In this post, we are going to see...
Category: postgresql

PSR-14: Being a good Provider

planet PHP - 2019-03-31 (Sun) 01:00:00
PSR-14: Being a good Provider

As mentioned back in part 1, PSR-14 splits the core mediator into two objects: The Dispatcher and the Provider. The Dispatcher is fairly straightforward and most implementations will be fairly simple and fairly similar.

Providers are the exact opposite. A Listener Provider has one requirement: it maps the Event object into an iterable of callables, in the order it chooses. How it does that is left up to the Provider to define, and there are dozens of possible ways.

Continue reading this post on SteemIt.

Larry 30 March 2019 - 11:00am
Category: php

Andrew Dunstan: Where and when you need a root.crt file

planet postgresql - 2019-03-30 (Sat) 04:09:50

This is something people seem to get confused about quite often. A root.crt file is used to validate a TLS (a.k.a. SSL) certificate presented by the other end of a connection. It is usually the public certificate of the Certificate Authority (CA) that signed the presented certificate, and is used to validate that signature. If a non-root CA was used to sign the other end’s TLS certificate, the root.crt file must contain at least the root of the CA chain, and enough other elements of the chain that together with the certificate can connect the root to the signing CA.

In the simple and most common case where client certificates are not being used, only the client needs a root.crt file, to validate the server’s TLS certificate, if using 'verify-ca' or 'verify-full' ssl mode. The server doesn’t need and can’t use a root.crt file when client certificates are not being used.

On the other hand, if you are using client certificates, the server will also need a root.crt file to validate the client certificates. There is no requirement that the same root.crt be used for both sides. It would be perfectly possible for the server’s certificate to be signed by one CA and the client certificates by another.

If more than one CA is used in a certain context, i.e. if the client connects to servers with certificates signed by more than one CA, or if the server accepts connections from clients with certificates signed by more than one CA, then the certificates of all the CAs can be placed in the root.crt file, one after the other. The connection will succeed as long as one of the certificates (or certificate chains) in the file is that of the relevant signing authority.
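As a minimal sketch of where each file comes into play (host name and file paths here are placeholders, not from the post): the server only needs a CA file when it validates client certificates, while the client points at its own root.crt to validate the server.

-- Server side, only needed when client certificates are in use
-- (SSL settings are reloadable on PostgreSQL 10+; older releases need a restart):
ALTER SYSTEM SET ssl_ca_file = 'root.crt';
SELECT pg_reload_conf();

-- Client side, for 'verify-ca' or 'verify-full', as a libpq connection string:
--   psql "host=db.example.com sslmode=verify-full sslrootcert=root.crt"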

Category: postgresql

Craig Kerstiens: A health checkup playbook for your Postgres database

planet postgresql - 2019-03-30 (Sat) 01:59:00

I talk with a lot of folks that set their database up, start working with it, and then are surprised by issues that suddenly crop up out of nowhere. The reality is, so many of us don’t want to have to be a DBA; we would rather build features and just have the database work. But the truth is that a database is a living, breathing thing. As the data itself changes, the right way to query it and how it behaves change too. Making sure your database is healthy and performing at its maximum level doesn’t require constant giant overhauls. In fact, you can view it much as you approach personal health: regular check-ups allow you to make small but important adjustments without dramatic, life-altering changes, keeping you on the right path.

After years of running and managing literally millions of Postgres databases, here’s my breakdown of what your regular Postgres health check should look like. Consider running this on a monthly basis so you can make small tweaks and adjustments and avoid drastic changes.

Cache rules everything around me

For many applications, not all the data is accessed all the time. Instead, certain datasets are accessed over and over for some period of time, and then the data you’re accessing changes. Postgres, in fact, is quite good at keeping frequently accessed data in memory.

Your cache hit ratio tells you how often your data is served from memory vs. having to go to disk. Serving from memory is orders of magnitude faster than going to disk, so the more you can keep in memory the better. Of course you could provision an instance with as much memory as you have data, but you don’t necessarily have to. Instead, watching your cache hit ratio and ensuring it is at about 99% is a good target for proper performance.

You can monitor your cache hit ratio with:

SELECT sum(heap_blks_read) as heap_read,
       sum(heap_blks_hit) as heap_hit,
       sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;

Be careful of dead tuples

Under the cov

[...]
Category: postgresql

Hubert 'depesz' Lubaczewski: Waiting for PostgreSQL 12 – REINDEX CONCURRENTLY

planet postgresql - 2019-03-30 (Sat) 00:00:02
On 29th of March 2019, Peter Eisentraut committed patch: REINDEX CONCURRENTLY   This adds the CONCURRENTLY option to the REINDEX command. A REINDEX CONCURRENTLY on a specific index creates a new index (like CREATE INDEX CONCURRENTLY), then renames the old index away and the new index in place and adjusts the dependencies, and then drops … Continue reading "Waiting for PostgreSQL 12 – REINDEX CONCURRENTLY"
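For illustration, the new PostgreSQL 12 syntax looks like this (index and table names are placeholders):

-- Rebuild a single index without blocking concurrent writes:
REINDEX INDEX CONCURRENTLY my_index;
-- Or rebuild all indexes on a table:
REINDEX TABLE CONCURRENTLY my_table;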
Category: postgresql

Ibrar Ahmed: PostgreSQL: Access ClickHouse, One of the Fastest Column DBMSs, With clickhousedb_fdw

planet postgresql - 2019-03-29 (Fri) 23:01:12

Database management systems are meant to house data but, occasionally, they may need to talk with another DBMS, for example to access an external server hosting a different DBMS. With heterogeneous environments becoming more and more common, a bridge between the servers is established. We call this bridge a “Foreign Data Wrapper” (FDW). PostgreSQL completed its support of SQL/MED (SQL Management of External Data) with release 9.3 in 2013. A foreign data wrapper is a shared library that is loaded by a PostgreSQL server. It enables the creation of foreign tables in PostgreSQL that act as proxies for another data source.

When you query a foreign table, Postgres passes the request to the associated foreign data wrapper. The FDW creates the connection and retrieves or updates the data in the external data store. Since the PostgreSQL planner is involved in all of this as well, it may perform certain operations, like aggregates or joins, on the data when it is retrieved from the data source. I cover some of these later in this post.

ClickHouse Database

ClickHouse is an open source column-oriented database management system that claims to be 100–1,000x faster than traditional approaches, capable of processing more than a billion rows in less than a second.

clickhousedb_fdw

clickhousedb_fdw is an open source project – GPLv2 licensed – from Percona. Here’s the link for GitHub project repository:

https://github.com/Percona-Lab/clickhousedb_fdw

It is an FDW for ClickHouse that allows you to SELECT from, and INSERT INTO, a ClickHouse database from within a PostgreSQL v11 server.

The FDW supports advanced features like aggregate pushdown and joins pushdown. These significantly improve performance by utilizing the remote server’s resources for these resource intensive operations.
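As a rough sketch of what a setup could look like (the server and table names are invented, and the exact OPTIONS are assumptions; check the project README for the real ones), the usual FDW ritual applies:

CREATE EXTENSION clickhousedb_fdw;

-- Connection options below are illustrative only:
CREATE SERVER clickhouse_svr
  FOREIGN DATA WRAPPER clickhousedb_fdw
  OPTIONS (dbname 'test_database');

CREATE USER MAPPING FOR CURRENT_USER SERVER clickhouse_svr;

-- A foreign table acting as a proxy for a ClickHouse table:
CREATE FOREIGN TABLE ontime_ft (
  "Year"       smallint,
  "FlightDate" date
) SERVER clickhouse_svr;

SELECT count(*) FROM ontime_ft;  -- executed on the ClickHouse side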

If you would like to follow this post and try the FDW between Postgres and ClickHouse, you can download and set up the ontime dataset for ClickHouse. After following the instructions, test that you have the desired data. The ClickHouse

[...]
Category: postgresql

Andreas 'ads' Scherbaum: PostgreSQL Europe Community User Group Recognition Guidelines

planet postgresql - 2019-03-29 (Fri) 21:00:00

Over the past months, a great number of PostgreSQL User Groups and Meetups showed up all over Europe. It’s good to see that interest in PostgreSQL is growing!

Some of the user groups approached the PostgreSQL Europe board and asked for support: mostly for swag, but also for sending speakers, or other kinds of support. We are happy to help!

In order to handle all of these requests, the PostgreSQL Europe board created a set of guidelines for user group meetings. The current version can be found on the PostgreSQL Europe website, under “Community”, and then “Community User Group Recognition Guidelines”. User groups which approach the PostgreSQL Europe board for support are expected to comply with these guidelines. Every user group is self-certified under these guidelines. If you have reason to believe that a self-certified status for a user group is not correct, please contact the PostgreSQL Europe board under “Contact”.

Category: postgresql

PSR-14: All about Events

planet PHP - 2019-03-29 (Fri) 01:03:00
PSR-14: All about Events

In the last installment we discussed the overall structure of PSR-14. Today I'd like to dive into the key aspect of any event system: The Events themselves.

Different event systems structure events in different ways. Some require that an event be an object. For others, it's any arbitrary value. For still others, it's any number of arbitrary values, depending on the Event. Some really, really want pass-by-reference arrays.

For PSR-14, we chose to standardize on an object and always an object. There were three main reasons for that:

Continue reading this post on SteemIt.

Larry 28 March 2019 - 11:03am
Category: php

Using loops instead of higher order functions

planet PHP - 2019-03-29 (Fri) 00:24:00

I came across a great article from Kristian Poslek, titled “One reduce() to rule them all”. It explains the reduce() function quite well.

I often feel, though, that higher order functions are used in many cases where simple loops might be more legible.

The article ends with an example of a function that takes an array, and returns the maximum, minimum and average values. This is the sample from the article:

const data = [115, 26, 99];
const callbackFunction = function(accumulator, currentElement, currentIndex, array) {
  // Get the maximum by checking first if there is a maximum from the previous step
  const maximum = accumulator.maximum
    // If there is, then check if the current element is higher than the previous maximum
    ? (accumulator.maximum < currentElement ? currentElement : accumulator.maximum)
    // If there isn't, use the current element right away
    : currentElement;
  // Get the minimum by checking first if there is a minimum from the previous step
  const minimum = accumulator.minimum
    // If there is, then check if the current element is lower than the previous minimum
    ? (accumulator.minimum > currentElement ? currentElement : accumulator.minimum)
    // If there isn't, use the current element right away
    : currentElement;
  // Get the average by checking if we're at the last step (where we can finally calculate the average)
  const average = currentIndex === array.length - 1
    ? (accumulator.average + currentElement) / array.length
    // If we're not at the last step, check if there even is a value from the previous step
    : (accumulator.average ? accumulator.average + currentElement : currentElement);
  // Return the value for the next element
  return { maximum, minimum, average };

Truncated by Planet PHP, read more at the original (another 3827 bytes)

Category: php

Daniel Vérité: Text search: a custom dictionary to avoid long words

planet postgresql - 2019-03-28 (Thu) 21:51:02

The full text search is based on transforming the initial text into a tsvector. For example:

test=> select to_tsvector('english', 'This text is being processed.');
      to_tsvector
----------------------
 'process':5 'text':2

This result is a sorted list of lexemes, with their relative positions in the initial text, obtained by this process:

Raw text => Parser => Dictionaries (configurable) => tsvector

When there is enough data, we tend to index these vectors with a GIN or GIST index to speed up text search queries.

In SQL we can inspect the intermediate results of this process with the ts_debug function:

test=> select * from ts_debug('english', 'This text is being processed.');
   alias   |   description   |   token   |  dictionaries  |  dictionary  |  lexemes
-----------+-----------------+-----------+----------------+--------------+-----------
 asciiword | Word, all ASCII | This      | {english_stem} | english_stem | {}
 blank     | Space symbols   |           | {}             |              |
 asciiword | Word, all ASCII | text      | {english_stem} | english_stem | {text}
 blank     | Space symbols   |           | {}             |              |
 asciiword | Word, all ASCII | is        | {english_stem} | english_stem | {}
 blank     | Space symbols   |           | {}             |              |
 asciiword | Word, all ASCII | being     | {english_stem} | english_stem | {}
 blank     | Space symbols   |           | {}             |              |
 asciiword | Word, all ASCII | processed | {english_stem} | english_stem | {process}
 blank     | Space symbols   | .         | {}             |              |

The parser breaks down the text into tokens (token column), each token being associated with a type (alias and description columns). Then, depending on their types, these tokens are submitted as input to the dictionaries mapped to those types, and each dictionary may produce one lexeme, or several, or zero to eliminate the term from the output vector.

In the above example, spaces and punctuation are eliminated be

[...]
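As an aside, you can probe a single dictionary directly with ts_lexize, which is handy when developing a custom dictionary like the one this post describes; a quick sketch:

test=> select ts_lexize('english_stem', 'processed');
 ts_lexize
-----------
 {process}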
Category: postgresql

Minutes of the 159th Board Meeting (2019-3)

www.postgresql.jp news - 2019-03-28 (Thu) 19:06:44
Minutes of the 159th Board Meeting (2019-3) anzai 2019/03/28 (Thu) - 19:06
Category: postgresql

Avinash Kumar: PostgreSQL Upgrade Using pg_dump/pg_restore

planet postgresql - 2019-03-28 (Thu) 03:09:33

In this blog post, we will explore pg_dump/pg_restore, one of the most commonly used options for performing a PostgreSQL upgrade. It is important to understand the scenarios under which the pg_dump and pg_restore utilities will be helpful.

This post is the second of our Upgrading or Migrating Your Legacy PostgreSQL to Newer PostgreSQL Versions series where we’ll be exploring different methods available to upgrade your PostgreSQL databases.

About pg_dump

pg_dump is a utility to perform a backup of a single database. You cannot back up multiple databases unless you do so using separate commands in parallel. If your upgrade plan needs global objects to be copied over, pg_dump needs to be supplemented by pg_dumpall. To know more about pg_dumpall, you may refer to our previous blog post.

pg_dump formats

pg_dump can produce dumps in multiple formats – plain text and custom format – each with its own advantages. When you use pg_dump with the custom format (-Fc), you must use pg_restore to restore the dump.

If the dump is taken using a plain-text format, pg_dump generates a script file of multiple SQL commands. It can be restored using psql.

A custom format dump, however, is compressed and is not human-readable.

A dump taken in plain text format may be slightly larger in size when compared to a custom format dump.

At times, you may wish to perform schema changes in your target PostgreSQL database before restore, for example, table partitioning. Or you may wish to restore only a selected list of objects from a dump file.

In such cases, you cannot restore a selected list of tables from a plain format dump of a database. If you take the database dump in custom format,  you can use pg_restore, which will help you choose a specific set of tables for restoration.

Steps involved in upgrade

The most important point to remember is that both dump and restore should be performed using the latest binaries. For example, if we need to migrate from version 9.3 to version 11, we should be using the pg_dump binary of PostgreSQL 11 to connect

[...]
Category: postgresql

Paul Ramsey: GeoJSON Features from PostGIS

planet postgresql - 2019-03-27 (Wed) 22:00:00

Every once in a while, someone comes to me and says:

Sure, it’s handy to use ST_AsGeoJSON to convert a geometry into a JSON equivalent, but all the web clients out there like to receive full GeoJSON Features and I end up writing boilerplate to convert database rows into GeoJSON. Also, the only solution I can find on the web is scary and complex. Why don’t you have a row_to_geojson function?

And the answer (still) is that working with rows is fiddly and I don’t really feel like it.

However! It turns out that, with the tools for JSON manipulation already in PostgreSQL and a little scripting it’s possible to make a passable function to do the work.

Start with a simple table.

DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable (
  pk SERIAL PRIMARY KEY,
  name TEXT,
  size DOUBLE PRECISION,
  geom GEOMETRY
);
INSERT INTO mytable (name, size, geom) VALUES
  ('Peter', 1.0, 'POINT(2 34)'),
  ('Paul', 2.0, 'POINT(5 67)');

You can convert any row into a JSON structure using the to_jsonb() function.

SELECT to_jsonb(mytable.*) FROM mytable;

{"pk": 1, "geom": "010100000000000000000000400000000000004140", "name": "Peter", "size": 1}
{"pk": 2, "geom": "010100000000000000000014400000000000C05040", "name": "Paul", "size": 2}

That’s actually all the information we need to create a GeoJSON feature; it just needs to be re-arranged. So let’s make a little utility function to re-arrange it.

CREATE OR REPLACE FUNCTION rowjsonb_to_geojson(
  rowjsonb JSONB,
  geom_column TEXT DEFAULT 'geom')
RETURNS TEXT AS
$$
DECLARE
  json_props jsonb;
  json_geom jsonb;
  json_type jsonb;
BEGIN
  IF NOT rowjsonb ? geom_column THEN
    RAISE EXCEPTION 'geometry column ''%'' is missing', geom_column;
  END IF;
  json_geom := ST_AsGeoJSON((rowjsonb ->> geom_column)::geometry)::jsonb;
  json_geom := jsonb_build_object('geometry', json_geom);
  json_props := jsonb_build_object('properties', rowjsonb - geom_column);
  json_type := jsonb_build_object('type', 'Feature');
  RETURN (json_type || json_geom || json_props)::text;
END;
$$ LANGUAGE 'plpgsql[...]
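Assuming the function above (truncated here) is created, usage is then a one-liner; the output below is illustrative, with whitespace added:

SELECT rowjsonb_to_geojson(to_jsonb(mytable.*)) FROM mytable;

-- {"type": "Feature", "geometry": {"type": "Point", "coordinates": [2, 34]},
--  "properties": {"pk": 1, "name": "Peter", "size": 1}}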
Category: postgresql

PSR-14: A Major Event in PHP

planet PHP - 2019-03-27 (Wed) 00:44:00
PSR-14: A Major Event in PHP

The PHP Framework Interoperability Group (PHP-FIG, https://www.php-fig.org/) has released a number of new specifications in the last year. The latest, PSR-14 (http://www.php-fig.org/psr/psr-14/), covers Event Dispatching. Like many PSRs it's a fairly small spec, at the end of the day, but intended to be high-impact.

In this series of posts I want to cover what PSR-14 is and does (and what it isn't and doesn't), and how to best leverage it in your projects as it gets deployed more widely.

Continue reading this post on SteemIt.

Larry 26 March 2019 - 10:44am
Category: php

417 Expectation Failed

planet PHP - 2019-03-27 (Wed) 00:00:00

A server emits 417 Expectation Failed when it encounters an Expect header that it doesn’t understand or doesn’t support.

A client can use the Expect header to tell the server that it requires a certain behavior from a server.

The only expectation that (as far as I know) is standardized is 100-continue.

This looks like this:

POST /foo/bar HTTP/1.1
Content-Type: application/gzip
Content-Length: 12345678765
Expect: 100-continue

...

This request intends to upload a large file. The client tells the server that it expects the server to first respond with a 100 Continue response.

If a server supports this, it will first return this 100 Continue response, which tells a client that the request was understood, supported and probably acceptable and it can continue uploading.

If the server does not support this feature, it must respond with 417:

HTTP/1.1 417 Expectation Failed
Content-Type: text/plain

We don't support 100-continue

If a server sees a different type of expectation (a new future standard, or some custom extension), it should always return 417.

Category: php

Paul Ramsey: Notes for FDW in PostgreSQL 12

planet postgresql - 2019-03-26 (Tue) 22:00:00

TL;DR: There are some changes in PostgreSQL 12 that FDW authors might be surprised by! Super technical, not suitable for ordinary humans.

OK, so I decided to update my two favourite extension projects (pgsql-http and pgsql-ogr-fdw) yesterday to support PostgreSQL 12 (the version currently under development, likely to be released in the fall).

Fixing up pgsql-http was pretty easy, involving just one internal function signature change.

Fixing up pgsql-ogr-fdw involved some time in the debugger wondering what had changed.

Your Slot is Empty

When processing an FDW insert/update/delete, your code is expected to take a TupleTableSlot as input and use the data in that slot to apply the insert/update/delete operation to your backend data store, whatever that may be (OGR in my case). The data lived in the tts_values array, and the null flags in tts_isnull.

In PostgreSQL 12, the slot arrives at your ExecInsert/ExecUpdate/ExecDelete callback function empty! The tts_values array is populated with Datum values of 0, yet the tts_isnull array is full of true values. There’s no data to pass back to the FDW source.

What gives?!?

Andres Freund has been slowly laying the groundwork for pluggable storage in PostgreSQL, and one of the things that work has affected is TupleTableSlot. Now when you get a slot, it might not have been fully populated yet, and that is what is happening in the FDW code.

The short-term fix is just to force the slot to populate by calling slot_getallattrs, and then go on with your usual work. That’s what I did. A more future-proof way would be to use slot_getattr and only retrieve the attributes you need (assuming you don’t just need them all).

Your VarLena might have a Short Header

Varlena types are the variable size types, like text, bytea, and varchar. Varlena types store their length and some extra information in a header. The header is potentially either 4 bytes or 1 byte long. Practically it is almost always a 4 byte header. If you call the standard VARSIZE and VARDATA macros on a varlena,

[...]
Category: postgresql
