425 Too Early

planet PHP - 2019-05-15 (Wed) 00:00:00

When an HTTP client makes a connection to an HTTPS server, it uses TLS to create a secure connection. TLS can have a bit of a complicated ‘handshake’ to establish the connection. Because there’s a bunch of back and forth, this can take a long time, especially when there’s a lot of latency between server and client.

There are ways for clients to optimize this setup by sending some data very early in the process, before the full TLS connection is completely set up. In TLS 1.3 this is known as ‘early data’ or 0-RTT.

In some cases this can cause security problems, because early data is vulnerable to replay attacks. In those cases, the server can tell the client to retry a specific HTTP request after the TLS connection has been fully set up.

In these situations, it will return the 425 Too Early status code.

Normally, you would never have to deal with this status code as a developer, unless you are creating HTTP(S) servers from scratch.
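For what it’s worth, the client side of this amounts to one simple rule: retry the same request after the handshake finishes, without early data. A minimal sketch of that decision logic (RFC 8470 semantics; the function name is my own, not from any library):

```python
def should_retry_request(status_code: int, sent_as_early_data: bool) -> bool:
    """Decide whether to resend a request that got a 425 response.

    Per RFC 8470, a 425 Too Early response means the server refused to
    process a request sent as TLS 1.3 early data (0-RTT). The client
    should retry the identical request after the handshake completes,
    this time without early data.
    """
    return status_code == 425 and sent_as_early_data


# A request sent as early data and rejected with 425 should be retried...
assert should_retry_request(425, sent_as_early_data=True)
# ...but a 425 for a request that was not sent early signals a different problem.
assert not should_retry_request(425, sent_as_early_data=False)
```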

Category: php


planet PHP - 2019-05-14 (Tue) 22:42:00

It was 6 years ago when I was last looking for a change, after being a freelancer for a very long time. The idea was simple. I was tired of being the accountant, salesperson, consultant, developer, collections, sysadmin, and more. As a freelance “developer” I had to be all these things to support my family and live in the manner to which I was accustomed. But I was growing tired of it all, and wanted to have a little more fun by doing the part I enjoyed most…consulting.

A good friend had been working at a well-known company for about a year and was very happy doing it. He also had grown tired of being a freelance developer, and a job at the company was his answer. So, when I saw an open consulting position on their website, I applied for it.

About a week later I received a call, then went through the typical round of interviews and questions. I was hired!

It was an exciting time, filled with learning new systems, people, and experiences. I was suddenly thrust into meetings with very large companies, and large teams of developers, who needed my help. There were new problems to solve on a weekly basis, and with each problem came new challenges. The number of things I learned during my six years of consulting at the company was mind-blowing, and with each day I discovered there was more and more I didn’t know. I basically went from knowing a bunch of things to village idiot overnight when I was hired.

“I went from knowing a bunch of things to village idiot overnight when I was hired.”

As I transitioned from one customer to another, it also led to traveling quite a bit. I spent half of each year away from home as I went onsite to meet new teams, learn network and application infrastructures, and build relationships with hundreds of people.

I continued to learn a great deal, and with each engagement, I spent less time on search engines and could draw from my own knowledge more often. (Of course, there was still a bunch of searching, but it was less. I’m still the village idiot learning daily.)

As a user group organizer and speaker, I’ve always enjoyed teaching and sharing, and it was wonderful that my employer encouraged this activity. So I tended to share my knowledge with anyone who would listen, speaking at conferences and user groups, and sharing online through blog posts, podcasts, and videos, as well as through code in online source code repositories.

Through the process, I also did a fair amount of evangelism around products, libraries, and frameworks I believed in and witnessed some real growth from these efforts which drove me to do more.

However, as times change and acquisitions happen, so do the directions companies take. For good, or bad, companies are forced to make decisions and make changes to help them move forward and grow. I’ve witnessed and lived through some events these past couple years that have left me feeling dissatisfied and a little disconnected from the things I’ve come to hold dear.

This doesn’t mean the company is bad. It simply means our paths have diverged for the time being. Therefore, I will be leaving my current employer, as it is time once again for a change.

Category: php

Interview with Omni Adams

planet PHP - 2019-05-14 (Tue) 22:05:00

@omnicolor Show Notes Audio

This episode is sponsored by

The post Interview with Omni Adams appeared first on Voices of the ElePHPant.

Category: php

Umair Shahid: Postgres is the coolest database – Reason #3: No vendor lock-in

planet postgresql - 2019-05-14 (Tue) 17:12:31
You buy a cool new technology for your organization in order to cut operational costs. It works really well for you, and incrementally, your entire business starts to rely on this tech for its day to day operations. You have successfully made it an essential component of your business. There are some issues now and […]
Category: postgresql

Avinash Kumar: PostgreSQL Minor Versions Released May 9, 2019

planet postgresql - 2019-05-14 (Tue) 01:13:52

Usually, the PostgreSQL community releases minor patches on the Thursday of the second week of the second month of each quarter. This can vary depending on the nature of the fixes: for example, the schedule may be moved up if the fixes include critical responses to security vulnerabilities or bugs that might affect data integrity. The previous minor release happened on February 14, 2019. In that release, the fsync failure issue was corrected, along with some other enhancements and fixes.

The latest minor release, published on May 9, 2019, has some security fixes and bug fixes for the following PostgreSQL major versions:

PostgreSQL 11 (11.3)
PostgreSQL 10 (10.8)
PostgreSQL 9.6 (9.6.13)
PostgreSQL 9.5 (9.5.17)
PostgreSQL 9.4 (9.4.22)

Let’s take a look at some of the security fixes in this release.

Security Fixes

CVE-2019-10130 (applicable to PostgreSQL 9.5, 9.6, 10, and 11)

When you ANALYZE a table in PostgreSQL, statistics of all the database objects are stored in pg_statistic. The query planner uses this statistical data, and it may contain some sensitive values, for example, the min and max values of a column. Some of the planner’s selectivity estimators apply user-defined operators to values found in pg_statistic. There was a security fix in the past, CVE-2017-7484, that restricted a leaky user-defined operator from disclosing some of the entries of a data column.

Starting from PostgreSQL 9.5, tables in PostgreSQL have not only the SQL-standard privilege system but also row security policies. To keep it short and simple, you can restrict a user so that they can only access specific rows of a table. We call this RLS (Row-Level Security).

CVE-2019-10130 concerns a user who has SQL permission to read a column but is forbidden from reading some rows due to an RLS policy, and who could discover the restricted rows through a leaky operator. With this patch, leaky operators are only applied to statistical data when there is no relevant RLS policy. We shall see more about RLS in our future blog posts.

CVE-2019-10129 (Applic[...]
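The fix can be summarized as a predicate the planner applies before handing pg_statistic values to an operator. A rough sketch of that rule in Python (names are mine; the real check lives in PostgreSQL’s C code):

```python
def may_examine_stats(operator_is_leakproof: bool,
                      user_bypasses_rls: bool) -> bool:
    """Simplified post-CVE-2019-10130 rule: the planner may pass
    pg_statistic entries to an operator only if the operator is
    marked LEAKPROOF, or the user could read every row anyway
    (i.e. no relevant RLS policy restricts them)."""
    return operator_is_leakproof or user_bypasses_rls


# A non-LEAKPROOF operator no longer sees statistics for a table
# whose rows are partially hidden from the current user by RLS.
assert not may_examine_stats(operator_is_leakproof=False,
                             user_bypasses_rls=False)
```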
Category: postgresql

Bruce Momjian: Draft of Postgres 12 Release Notes

planet postgresql - 2019-05-13 (Mon) 02:45:01

I have completed the draft version of the Postgres 12 release notes. Consisting of 186 items, this release makes big advances in partitioning, query optimization, and index performance. Many long-awaited features, like REINDEX CONCURRENTLY, multi-variate most-frequent-value statistics, and common table expression inlining, are included in this release.

The release notes will be continually updated until the final release, which is expected in September or October of this year.

Category: postgresql

Jobin Augustine: pgBackRest – A Great Backup Solution and a Wonderful Year of Growth

planet postgresql - 2019-05-10 (Fri) 23:14:20

pgBackRest addresses many of the must-have features you’ll want in a PostgreSQL backup solution. I have been a great fan of the pgBackRest project for quite some time, and it keeps getting better. Historically it was written in Perl, and over the last year the project has been making steady progress converting to native C code. At the time of writing, the latest version is 2.13, and it still depends on a long list of Perl libraries. If you’ve never tried pgBackRest, now is a great time to do it. This post should help you set up a simple backup with a local backup repository.


The pgBackRest project packages are maintained in the PGDG repository. If you have already added the PGDG repository to your package manager, installation is a breeze.

On RHEL/CentOS:

$ sudo yum install pgbackrest

On Ubuntu/Debian

$ sudo apt install pgbackrest

This will fetch all the required Perl libraries too.

pgBackRest is now a native executable (version 2):

$ file /usr/bin/pgbackrest
/usr/bin/pgbackrest: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=5e3f6123d02e0013b53f6568f99409378d43ad89, not stripped

Some of the other changes DBAs should keep in mind are:

  1. The thread-max option is no longer valid – use process-max instead.
  2. The archive-max-mb option is no longer valid; it has been replaced by the archive-push-queue-max option, which has different semantics.
  3. The default for backup-user (deprecated in favor of the new repo-host-user option) has changed from backrest to pgbackrest.
  4. The configuration file has moved from /etc/pgbackrest.conf to /etc/pgbackrest/pgbackrest.conf.
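As a minimal illustration of the new configuration location, a simple local-repository setup might look like this (the repository path, stanza name, and data directory below are examples, not defaults you must use):

```ini
# /etc/pgbackrest/pgbackrest.conf -- the new default location
[global]
# where the local backup repository lives (hypothetical path)
repo1-path=/var/lib/pgbackrest
# process-max replaces the removed thread-max option
process-max=4

# "demo" is an example stanza name; pg1-path points at your data directory
[demo]
pg1-path=/var/lib/pgsql/11/data
```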
Building from source

Depending on our environment and the version we need, we may want to build pgBackRest from source. Building pgBackRest from source on Debian/Ubuntu is already covered in the official documentation. Below I’ve provided the steps to follow for the Red Hat family.

Get the tarball of the latest release:

curl -LO https://github.com/pgbackrest/pgback[...]
Category: postgresql

php[world] 2019 Call for Speakers

php.net - 2019-05-10 (Fri) 22:18:25
Category: php

Laurenz Albe: PostgreSQL v12 new feature: optimizer support for functions

planet postgresql - 2019-05-10 (Fri) 17:00:18
© Laurenz Albe 2019


PostgreSQL commit 74dfe58a5927b22c744b29534e67bfdd203ac028 has added “support functions”, exciting new functionality that allows the optimizer some insight into functions. This article will discuss how this improves query planning in PostgreSQL v12. If you are willing to write C code, you can also use this functionality for your own functions.

Functions as “black boxes”

Up to now, the PostgreSQL optimizer couldn’t really do a lot about functions. No matter how much it knew about the arguments of a function, it didn’t have the faintest clue about the function result. This also applied to built-in functions: no information about them was “wired into” the optimizer.

Let’s look at a simple example:

EXPLAIN SELECT * FROM unnest(ARRAY[1,2,3]);

                         QUERY PLAN
-------------------------------------------------------------
 Function Scan on unnest  (cost=0.00..1.00 rows=100 width=4)
(1 row)

PostgreSQL knows exactly that the array contains three elements. Still, it has no clue how many rows unnest will return, so it estimates an arbitrary 100 result rows. If this function invocation is part of a bigger SQL statement, the wrong result count can lead to a bad plan. The most common problem is that PostgreSQL will select bad join strategies based on wrong cardinality estimates. If you have ever waited for a nested loop join to finish that got 10000 instead of 10 rows in the outer relation, you know what I’m talking about.
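To see why a wrong row estimate hurts, consider a deliberately simplified cost model (the numbers and function are purely illustrative, not PostgreSQL’s actual costing code):

```python
def nested_loop_cost(outer_rows: int, inner_scan_cost: float) -> float:
    """Very rough model: a nested loop rescans the inner side once
    per outer row, so its cost grows linearly with the outer row count."""
    return outer_rows * inner_scan_cost


# The planner picked a nested loop expecting 10 outer rows...
assert nested_loop_cost(10, 25.0) == 250.0
# ...but with 10000 actual rows the same plan costs 1000x more,
# which is when you sit waiting for the join to finish.
assert nested_loop_cost(10000, 25.0) == 250000.0
```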

There is the option to specify COST and ROWS on a function to improve the estimates. But you can only specify a constant there, which often is not good enough.

There were many other ways in which optimizer support for functions was lacking. This situation has been improved with support functions.

Support function syntax

The CREATE FUNCTION statement has been extended like this:

CREATE FUNCTION name (...) RETURNS ... SUPPORT supportfunction AS ...

This way a function gets a “support function” that knows about the function and can

Category: postgresql

11.3, 10.8, 9.6.13, 9.5.17, 9.4.22 Released (2019-05-09)

www.postgresql.jp news - 2019-05-10 (Fri) 09:00:12
harukat, 2019-05-10 (Fri) 09:00
Category: postgresql

php|architect: Serverless PHP With Bref, Part One

phpdeveloper.org - 2019-05-10 (Fri) 06:00:01

By Rob Allen. In recent years, a different way to build applications has arisen, called serverless computing. This term is not a good name; I’m reminded of the adage that there are two hard problems in programming: naming things, cache invalidation, and off-by-one errors. Serverless computing as...

Category: php

PHP – Code Wall: Setting Up Laravel 5.8 With Authentication & Role Based Access

phpdeveloper.org - 2019-05-10 (Fri) 06:00:01

Setting up your application with authentication & role-based access is much better fleshed out at the very beginning of a new project. There’s nothing worse than being 1 month deep into development and the client then requires login functionality and role-based access, trust me, I’...

Category: php

Laravel News: PhpStorm 2019.1.2 Released With Blade Debugging Fixes

phpdeveloper.org - 2019-05-10 (Fri) 06:00:01
The PhpStorm IDE v2019.1.2 was released yesterday with support for composer execution using Docker and a Blade template debugging fix. Visit Laravel News for the full post. The post PhpStorm 2019.1.2 Released With Blade Debugging ...
Category: php

Derick Rethans: PHP Internals News: Episode 9: Bundled Extensions

phpdeveloper.org - 2019-05-10 (Fri) 06:00:01
PHP Internals News: Episode 9: Bundled Extensions London, UK Thursday, May 9th 2019, 09:09 BST In this ninth episode of "PHP Internals News" I talk to Kalle Nielsen (Twitter, GitHub) about bundled extensions, and his work on the Windows por...
Category: php

Ibrar Ahmed: Improving OLAP Workload Performance for PostgreSQL with ClickHouse Database

planet postgresql - 2019-05-10 (Fri) 03:00:55

No database management system is optimized for every workload. Database systems are designed for specific loads and thereby give better performance for those workloads. Similarly, some kinds of queries work better on some database systems and worse on others. This is the era of specialization, where a product is designed for a specific requirement or a specific set of requirements. We cannot get the best performance for everything from a single database system. PostgreSQL is one of the finest object-relational databases, and it performs really well in OLTP workloads, but I have observed that its performance is not as good as some other database systems for OLAP workloads. ClickHouse is one example that outperforms PostgreSQL for OLAP.

ClickHouse is an open source, column-oriented database management system which claims to be 100–1,000x faster than traditional approaches, capable of processing more than a billion rows in less than a second.

Foreign Data Wrapper (Clickhousedb_fdw)

If we have a kind of workload that PostgreSQL is not optimized for, what is the solution? Fortunately, PostgreSQL provides a mechanism to handle such a workload while still using PostgreSQL: you create a data proxy within PostgreSQL that internally calls and fetches data from a different database system. This is called a Foreign Data Wrapper (FDW) and is based on SQL/MED. Percona provides a foreign data wrapper for the ClickHouse database system, which is available at Percona’s GitHub project repository.

Benchmark Machine
  • Supermicro server:
    • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
    • 2 sockets / 28 cores / 56 threads
    • Memory: 256GB of RAM
    • Storage: Samsung  SM863 1.9TB Enterprise SSD
    • Filesystem: ext4/xfs
  • OS: Linux smblade01 4.15.0-42-generic #45~16.04.1-Ubuntu
  • PostgreSQL: version 11
Benchmark Workload

Ontime (On Time Reporting Carrier On-Time Performance) is an openly available dataset I have used for the benchmark. It has a table size of 85GB with 109 columns of different types, and its designed queries more

Category: postgresql

PHP Architect: Serverless PHP With Bref, Part 1

planet PHP - 2019-05-09 (Thu) 19:02:00

I've written a two-part series on Serverless PHP on AWS Lambda using Matthieu Napoli's Bref for php[architect].

Part one has been published in the May 2019 issue and if you're not already a subscriber, you should be!

If you just want to learn about Bref though, then my introduction to Bref is available for free, just for you!

Category: php

Sebastian Insausti: How to Deploy PostgreSQL to a Docker Container Using ClusterControl

planet postgresql - 2019-05-09 (Thu) 18:55:18

Docker has become the most common tool to create, deploy, and run applications using containers. It allows us to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. Docker could be considered a virtual machine, but instead of creating a whole virtual operating system, it allows applications to use the same Linux kernel as the system they're running on, and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.

In this blog, we’ll see how we can easily deploy a PostgreSQL setup via Docker, and how to turn our setup into a primary/standby replication setup by using ClusterControl.

How to Deploy PostgreSQL with Docker

First, let’s see how to deploy PostgreSQL with Docker manually by using a PostgreSQL Docker Image.

The image is available on Docker Hub and you can find it from the command line:

$ docker search postgres
NAME       DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
postgres   The PostgreSQL object-relational database sy…   6519    [OK]

We’ll take the first result, the official one, so we need to pull the image:

$ docker pull postgres

And run the node containers, mapping a local port to the database port inside each container:

$ docker run -d --name node1 -p 6551:5432 postgres
$ docker run -d --name node2 -p 6552:5432 postgres
$ docker run -d --name node3 -p 6553:5432 postgres

After running these commands, you should have this Docker environment created:

$ docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED
51038dbe21f8   postgres   "docker-entrypoint.s…"   About an hour[...]
Category: postgresql

Interview with Michael Moussa

planet PHP - 2019-05-09 (Thu) 18:09:00
Category: php