414 URI Too Long

planet PHP - 2019-03-06(水) 00:00:00

The URI or path of an HTTP request has no hard limit on how long it's allowed to be.

However, browsers and search engines do have limits, and on the server side it's a good idea to limit the length of the URI to combat certain denial-of-service attacks or bugs.

Based on the limits of browsers, it's a good idea to try not to exceed 2000 bytes for the URI.

When a client does exceed it, the appropriate status code to return is 414 URI Too Long.

Example

HTTP/1.1 414 URI Too Long
Content-Type: text/html

<p>Insufficient level of conciseness in request</p>
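As a sketch of how a server might enforce this (a minimal framework-free WSGI app; the 2000-byte limit comes from the post, everything else is illustrative):

```python
# Reject over-long URIs with 414, as recommended above.
# MAX_URI_BYTES is the 2000-byte guideline from the post.
MAX_URI_BYTES = 2000

def app(environ, start_response):
    # Reassemble the request URI from path and query string
    uri = environ.get("PATH_INFO", "") + (
        "?" + environ["QUERY_STRING"] if environ.get("QUERY_STRING") else ""
    )
    if len(uri.encode("utf-8")) > MAX_URI_BYTES:
        start_response("414 URI Too Long", [("Content-Type", "text/html")])
        return [b"<p>Insufficient level of conciseness in request</p>"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello"]
```

In a real deployment the front-end proxy (nginx, Apache) usually enforces this limit before the application is ever reached.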
Category: php

Retiring PHP's Mirror Program

planet PHP - 2019-03-05(火) 23:15:00
Retiring PHP's Mirror Program
London, UK Tuesday, March 5th 2019, 14:15 GMT

The PHP.net website has for the last 20 years made use of an extensive network of mirrors to make the PHP documentation available and to distribute source tarballs. These mirrors have been maintained by members and companies in the PHP ecosystem for many valuable years. However, the administration of the mirror system has often been haphazard, with few contributors helping out; PHP is Open Source, and this is simply how these things can go.

Maintaining the mirrors is no longer sustainable, and it also hinders moving the PHP.net website fully to HTTPS. Because the PHP.net team has no access to the mirror servers, we also can't make sure the mirrors are up to date, and some mirrors are still running PHP 5.3.

A mirror system is likely no longer necessary: unlike 20 years ago, it is not nearly as hard to set up a distributed caching system. As a matter of fact, part of the PHP.net website, through http://www.php.net/, already sits behind a Content Delivery Network (CDN) from Myra, which is sponsored by long-time PHP contributor Sascha Schumann.

With these preliminaries out of the way, I would therefore like to announce the discontinuation of PHP.net's mirroring program. Instead of having mirrors, we are moving all of PHP.net to HTTPS (getting rid of https://secure.php.net) and putting it behind Myra's CDN, with the same local content delivery opportunities but significantly lower administration requirements.

Watch this space for further developments!

To end this post, I would very much like to thank all the mirror maintainers for their dedication, time, and bandwidth over all these years. Thanks!

Category: php

Croissants in Québec

planet PHP - 2019-03-05(火) 16:00:00
Category: php

10 Years of thePHP.cc

planet PHP - 2019-03-05(火) 16:00:00
Category: php

Richard Yen: I Fought the WAL, and the WAL Won: Why hot_standby_feedback can be Misleading

planet postgresql - 2019-03-05(火) 09:15:00

When I first got involved in managing a Postgres database, I was quickly introduced to the need for replication. My first project was to get our databases up on Slony, which was the hot new replication technology, replacing our clunky DRBD setup and allowing near-realtime read-only copies of the database. Of course, with time and scale, Slony had a hard time keeping up with the write traffic, especially since it suffered from write amplification (each write ultimately becomes two or more writes to the database, because of all the under-the-hood work involved). When Postgres Streaming Replication came out in v9.0, everyone felt like they had struck gold. Streaming Replication was fast, and it took advantage of an already-existing feature in Postgres: the WAL stream.

Many years have passed since v9.0 (we're coming up on v12 very soon). More features have been added, like Hot Standby and Logical Replication, and some two-way master-master replication extensions have been created. This has been quite a path of growth, especially since I remember someone saying at a BOF at PGCon, circa 2010, that Postgres' roadmap would not include replication.

With all the improvements to Streaming Replication over the years, I think one of the most misunderstood features is hot_standby_feedback, and I hope to clarify that here.

With Streaming Replication, users are able to stand up any number of standby servers with clones of the primary, and they are free to throw all sorts of load at them. Some will send read-only traffic for their OLTP apps, huge cronjobs, and long-running reporting queries, all without affecting write traffic on the primary. However, some will occasionally see that their queries get aborted for some reason, and in their logs they might see something like:

ERROR: canceling statement due to conflict with recovery

That’s an unfortunate reality that nobody likes. Nobody wants their queries canceled on them, just like nobody likes to order a pastrami sandwich, only to be told 10 minutes later
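For context, the settings usually involved in these cancellations (parameter names from the standard PostgreSQL configuration; the values shown are illustrative, not from the post) are set in postgresql.conf on the standby:

```
# postgresql.conf on the standby (illustrative values)
hot_standby_feedback = on            # standby reports its oldest xmin to the primary,
                                     # so vacuum won't remove rows its queries still need
max_standby_streaming_delay = 30s    # how long WAL apply may wait before canceling
                                     # conflicting standby queries
```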

Category: postgresql

Stephen Frost: How to setup Windows Active Directory with PostgreSQL GSSAPI Kerberos Authentication

planet postgresql - 2019-03-05(火) 01:38:24

PostgreSQL provides a bevy of authentication methods, letting you pick the one that makes the most sense for your environment. One setup I have found customers wanting is Windows Active Directory with PostgreSQL's GSSAPI authentication interface, using Kerberos. I've put together this guide to help you take advantage of this setup in your own environment.
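As a sketch of what the end result involves (syntax from standard pg_hba.conf documentation; the realm and network here are placeholders, not from the post), a GSSAPI entry might look like:

```
# pg_hba.conf (illustrative entry, not from the post):
# accept GSSAPI/Kerberos-authenticated connections, stripping the realm
# so the AD principal "alice@EXAMPLE.COM" maps to database role "alice"
host    all    all    0.0.0.0/0    gss    include_realm=0    krb_realm=EXAMPLE.COM
```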

Category: postgresql

Bruce Momjian: Corporate Backing

planet postgresql - 2019-03-05(火) 01:15:01

Postgres has long lived in the shadow of proprietary and other open source databases. We kind of got used to that, though we had early support from Fujitsu and NTT. In recent years, Postgres has become more noticed, and the big companies promoting Postgres have become somewhat of a flood.

Even with IBM having DB2 and Microsoft having SQL Server, they still support Postgres.

It is odd having multi-billion-dollar companies asking how they can help the Postgres community, but I guess we will have to get used to it. These companies support the community to varying degrees, but we certainly appreciate all the help we receive. Just having these companies list us as supported is helpful.

Category: postgresql

Alexey Lesovsky: pgCenter’s wait event profiler

planet postgresql - 2019-03-04(月) 21:17:00
As you might know, in the last pgCenter release a new tool was added: a wait event profiler. In this post, I'd like to explore this tool and propose some use cases for it.

First of all, what are “wait events”? The official PostgreSQL documentation doesn't explain wait events (it just provides a list of them all). In short, wait events are points in time where backends have to wait until a particular event occurs: obtaining locks, IO, inter-process communication, interacting with a client, or something else. Stats about wait events are provided by the pg_stat_activity view in the wait_event_type and wait_event columns.

Using EXPLAIN we can always see what a query does, but EXPLAIN is aimed at the query planner and doesn't show the time a query spends stuck waiting. For that, you can use pgCenter's wait event profiler.

How does it work? First, you need to know the PID of the backend to profile. It can be found using pg_stat_activity, or, if you're connected to Postgres directly, with pg_backend_pid(). Next, in a second terminal, run 'pgcenter profile' and pass the backend PID as an argument. That’s it: pgCenter connects to Postgres and starts collecting wait event stats from pg_stat_activity. When the query finishes, pgCenter shows you the distribution of wait events, like this:

$ pgcenter profile -U postgres -P 19241
LOG: Profiling process 19241 with 10ms sampling
------ ------------ -----------------------------
% time      seconds wait_event                    query: update t1 set a = a + 100;
------ ------------ -----------------------------
 72.15    30.205671 IO.DataFileRead
 20.10     8.415921 Running
  5.50     2.303926 LWLock.WALWriteLock
  1.28     0.535915 IO.DataFileWrite
  0.54     0.225117 IO.WALWrite
  0.36     0.152407 IO.WALInitSync
  0.03     0.011429 IO.WALInitWrite
  0.03     0.011355 LWLock.WALBufMappingLock
------ ------------ -----------------------------
 99.99    41.861741
In this example, a massive UPDATE is profiled. The query took around 40 second[...]
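The aggregation behind such a report can be sketched language-agnostically: sample wait_event at a fixed interval and compute the time share per event. This is a simplified, hypothetical model of the technique, not pgCenter's actual code:

```python
from collections import Counter

def profile_distribution(samples, interval_s=0.01):
    """Given a list of sampled wait_event values (None = running on CPU),
    return (event, seconds, percent) tuples, largest share first."""
    counts = Counter("Running" if s is None else s for s in samples)
    total = sum(counts.values())
    return [
        (event, n * interval_s, 100.0 * n / total)
        for event, n in counts.most_common()
    ]
```

With a 10ms sampling interval, 3000 samples of IO.DataFileRead would be reported as roughly 30 seconds spent in that wait event.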
Category: postgresql

Hand-written service containers

planet PHP - 2019-03-04(月) 19:45:00

You say "convention over configuration;" I hear "ambient information stuck in someone's head." You say "configuration over hardcoding;" I hear "information in a different language that must be parsed, can be malformed, or not exist."

— Paul Snively (@paul_snively) March 2, 2019

Dependency injection is very important. Dependency injection containers are too. The trouble is with the tools that let us define services in a meta-language and rely on conventions to work well. This extra layer requires the "ambient information" Paul speaks about in his tweet, and easily lets us make mistakes that we wouldn't make if we just wrote out the code for instantiating our services.

Please consider this article to be a thought experiment. If its conclusions are convincing to you, decide for yourself if you want to make it a coding experiment as well.

The alternative: a hand-written service container

I've been using hand-written service containers for workshop projects, and it turns out that it's very nice to work with them. A hand-written service container would look like this:

final class ServiceContainer
{
    public function finalizeInvoiceController(): FinalizeInvoiceController
    {
        return new FinalizeInvoiceController(
            new InvoiceService(
                new InvoiceRepository(
                    $this->dbConnection()
                )
            )
        );
    }

    private function dbConnection(): Connection
    {
        static $connection;

        return $connection ?: $connection = new Connection(/* ... */);
    }
}

The router/dispatcher/controller listener, or any kind of middleware you have for processing an incoming web request, could retrieve a controller from the service container and call its main method. Simplified, the code would look like this:

$serviceContainer = new ServiceContainer();

if ($request->getUri() === '/finalize-invoice') {
    return $serviceContainer->finalizeInvoiceController()->__invoke($request);
}

// and so on

We see the power of dependency injection here: the service won't have to fetch its dependencies, it will get them injected. The controller here is a so-called "entry point" for the service container, because it's a public service that can be requested from it. All the dependencies of an entry-point service (and the dependencies of its dependencies, and so on), will be private services, which can't be fetched directly from the container.
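For readers outside PHP, the same pattern can be sketched in Python (the class names are hypothetical stand-ins mirroring the invoice example): a public entry-point method wires dependencies with plain constructor calls, while a private, memoized method plays the role of a shared service.

```python
# Tiny stand-in classes so the sketch is self-contained
class Connection:
    pass

class InvoiceRepository:
    def __init__(self, connection):
        self.connection = connection

class InvoiceService:
    def __init__(self, repository):
        self.repository = repository

class FinalizeInvoiceController:
    def __init__(self, service):
        self.service = service

class ServiceContainer:
    def finalize_invoice_controller(self):
        # Entry point: a new controller graph per request
        return FinalizeInvoiceController(
            InvoiceService(InvoiceRepository(self._db_connection()))
        )

    def _db_connection(self):
        # Private shared service: created once, reused afterwards
        if not hasattr(self, "_connection"):
            self._connection = Connection()
        return self._connection
```

The compiler/interpreter itself now guarantees what a meta-language container must re-verify: a missing constructor argument is an immediate error, not a runtime surprise.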

There are many things that I like about a hand-written dependency injection container. Every one of these advantages can show how many modern service containers have to reinvent features that you already have in the programming language itself.

No service ID naming conventions

For starters, service containers usually allow you to request services using a method like get(string $id). The hand-written container doesn't have such a generic service getter. This means you don't have to think about what the ID of every service you define should be. You don't have to come up with arbitrary naming conventions, and you don't have to deal with inconsistent naming schemes in a single legacy project.

The name of a service is just the name of its factory method. Choosing a service name is therefore the same as choosing a method name. But since every method in your service container is going to create and return an object of a given type, why not use that type's name as the name of the method? In fact, this is what most service containers have also started doing at some point: they recommend using the name of the class you want to instantiate.

Type-safe, with full support for static analysis

Several years ago I was looking for a way to check the quality of the Symfony service definitions that I wrote in Yaml. So I created a tool for validating service definitions created with the Symfony Dependency Injection Component. It would inspect the service definitions and find out if they had the right number of constructor arguments, if the class names they referenced actually existed, etc. This tool helped me catch several issues that I would otherwise only have found by clicking through the entire web application.

Instead of doing complicated and incomplete analysis after writing service definitions

Truncated by Planet PHP, read more at the original (another 13461 bytes)

Category: php

Dave Conlin: Index-only scans in Postgres

planet postgresql - 2019-03-04(月) 19:04:25

Index-only scans can be a really effective way to speed up table reads that hit an index. Of course, they’re not a silver bullet to all your performance problems, but they are a very welcome and useful part of the toolbox.

In order to understand index-only scans, and why (and when) they're valuable, let's recap how a “normal” index scan works.

Index scans

An index is just a collection of references to the rows in a table, stored in a data structure (usually a B-tree) based on their values in the indexed columns.

An index scan reads through the index, uses it to quickly look up the rows that match your filter (something like WHERE x > 10), and returns them in the order they're stored in the index.

Postgres then goes to look up the data in these rows from the table, in the heap, where it would have found them if it had done a sequential scan.

It checks that they are visible to the current transaction — for example they haven’t been deleted or replaced by a newer version — and passes them on to the next operation.


It’s a bit like using an index in a book. Instead of starting at page one and turning over the pages until you find the ones that deal with, say, soil erosion, you skip to “s” in the index, look up “soil erosion” and turn to the listed pages to read about where all the dirt is going.

Enter index-only scans

Index-only scans start off like index scans, but they get all their column information from the index, obviating the need to go back to the table to fetch the row data — the second step in the index scan process.

Returning to our book example, if we want to produce a list of topics in the book, ordered by the number of pages they appear on, then all that information is stored in the book’s index, so we can do so purely from reading the index without ever actually turning to the pages in question.
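As a hedged sketch (hypothetical table and index continuing the book analogy, not from the post), an index-only scan is possible when the query reads only columns stored in the index:

```sql
-- Hypothetical schema, for illustration only
CREATE TABLE books (topic text, page_count int, author text);
CREATE INDEX books_topic_idx ON books (topic, page_count);

-- Both referenced columns are in the index, so Postgres may use an
-- Index Only Scan (subject to the visibility map being up to date):
EXPLAIN SELECT topic, page_count FROM books WHERE topic = 'soil erosion';

-- "author" is not in the index, so the heap must be visited:
EXPLAIN SELECT topic, author FROM books WHERE topic = 'soil erosion';
```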

As you can imagine, under the right circumstances this can be an incredibly fast way for Postgres to access the table data. In pgMustard, we suggest considering an index-only scan to improve slow index

Category: postgresql

LPI-Japan releases an updated version (Ver2.0.0) of its freely available "Open Source Database Standard Textbook - PostgreSQL-"

www.postgresql.jp news - 2019-03-04(月) 16:08:17
Posted by anzai, 2019/03/04 (Mon) 16:08
Category: postgresql

Luca Ferrari: Running pgbackrest on FreeBSD

planet postgresql - 2019-03-04(月) 09:00:00

I tend to use FreeBSD as my PostgreSQL base machine, and getting software running on it is not always as simple as it sounds. In this post I share some advice on running pgbackrest on FreeBSD 12.

Running pgbackrest on FreeBSD

pgbackrest is an amazing tool for backup and recovery of a PostgreSQL database. However, and this is not a critique at all, it has some Linux-isms that make it difficult to run on FreeBSD. I tried to install and run it on FreeBSD 12, stopping immediately at the compilation part. So I opened an issue to get some help, and then experimented a little more to see if I could at least compile it.

My first attempt was to cross-compile: I created the executable (pgbackrest has a single executable) on a Linux machine, then moved it to the FreeBSD machine along with all the ldd libraries (placed into /compat/linux/lib64). But libpthread.so.0 prevented the command from starting:

% ./pgbackrest
./pgbackrest: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

So I switched back to native compilation and, as described in the issue, made a few small changes to client.c and the Makefile. Once it compiled (using gmake, of course), I made a few more changes to the Makefile to compile and install it the FreeBSD way (i.e., under /usr/local/bin). The full diff is the following (some changes are not shown in the issue):

% git diff
diff --git a/src/Makefile b/src/Makefile
index 73672bff..0472c7f1 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -8,7 +8,7 @@
 CC=gcc
 # Compile using C99 and Posix 2001...
Category: postgresql

Bulgaria PHP Conference 2019

php.net - 2019-03-03(日) 20:37:07
Category: php

User Roles and Access Control (ACL) in Laravel

planet PHP - 2019-03-03(日) 11:37:00

It's been over a year since I covered how to protect admin panel routes in Laravel using Gates. Some people kept reminding me about my promise to cover ACL and user roles, and I kept putting off fulfilling that promise.

Finally I ran into it on one of my projects, and that was the sign I was waiting for to continue giving back to the community I learned so much from.

What is ACL

Although some computer science theorists like to throw baffling definitions of the term at people (looking at you, MSDN), in reality it's pretty simple and straightforward. ACL stands for Access Control List, and it specifies what users are allowed to do.

There are three entities in the ACL:

  • User Role: e.g. admin, editor, reader
  • Object: e.g. blog post
  • Operation: create, edit, read, etc.
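The three entities above can be sketched as a minimal, hypothetical ACL model (not Laravel code; role, object, and operation names are taken from the examples in the list):

```python
# Map role -> object type -> allowed operations
ACL = {
    "admin":  {"blog_post": {"create", "edit", "read", "delete"}},
    "editor": {"blog_post": {"create", "edit", "read"}},
    "reader": {"blog_post": {"read"}},
}

def is_allowed(role, obj, operation):
    """True if the role may perform the operation on the object type."""
    return operation in ACL.get(role, {}).get(obj, set())
```

Laravel's Gates and policies wrap the same idea in framework machinery, but the underlying check is always this lookup.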

Continue reading
Category: php

Pavel Stehule: compiled dll of plpgsql 1.6 for PostgreSQL 10, 11

planet postgresql - 2019-03-03(日) 01:39:00
Adam Bartoszewicz prepared the DLL. Please read his message.

Thank you, Adam
Category: postgresql

Pavel Stehule: pspg is available in Fedora 29, Fedora 30 repository

planet postgresql - 2019-03-03(日) 00:31:00
If you use a fresh Fedora distribution, you can install the pspg pager very simply:

dnf install pspg

after this:
export PSQL_PAGER=pspg #for Postgres 11
export PAGER=pspg
psql dbname
Category: postgresql

elein mustain: JOIN LATERAL

planet postgresql - 2019-03-02(土) 05:13:05

The primary feature of a LATERAL join is that it enables a subquery to access elements of the main query, which can be very powerful.

Several common uses of LATERAL are:

  • denormalizing arrays into parent-child tables
  • aggregating across several tables
  • generating rows or actions

Note, however, that the subquery will execute for each main query row, since the values used in the subquery will change. This can make the query slower.


SELECT <target list>
FROM <table>,
LATERAL (<subquery using table.column>) AS foo;

Here are three examples of using LATERAL. Obviously there are many more:


In the Normalization example we have a table (denorm) containing ids and an array of other ids. We want to flatten the arrays, creating parent and child tables. This is also a good example of using a function as a subquery.
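A minimal sketch of this idea (the denorm table name comes from the text; the column names are hypothetical): unnest() used as a LATERAL subquery flattens each parent row's array into child rows:

```sql
-- Hypothetical columns on the denorm table described above
CREATE TABLE denorm (id int, child_ids int[]);
INSERT INTO denorm VALUES (1, ARRAY[10, 11]), (2, ARRAY[20]);

-- Each parent row is joined to the elements of its own array:
SELECT d.id AS parent_id, c.child_id
FROM denorm d,
     LATERAL unnest(d.child_ids) AS c(child_id);
-- yields (1,10), (1,11), (2,20)
```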


Activity log

The Activity log example captures the pg_stat_activity item for the current command, for auditing, spying, or review. There will be lots of garbage collection and room for further analysis in the slog table.

Logging and auditing are usually done by triggers or rules. In this case we want to grab the pg_stat_activity data in the middle of the query. The lateral join is implicitly on pg_backend_pid().

As you will see, the lateral join is not appropriate for UPDATEs and INSERTs. The slog() function can be called in the FROM clause in those cases.



The Aggregation example examines people, books, and checkouts. Filling in the fields is amusing, but not the point of explaining LATERAL. At the end we will look at various uses of LATERAL while executing aggregates.


Drop all example tables:


I hope your examination of LATERAL give

Category: postgresql

Pavel Golub: Choose plpgsql variable names wisely

planet postgresql - 2019-03-01(金) 23:04:56

Pavel Stehule recently wrote the post “Don’t use SQL keywords as PLpgSQL variable names”, describing the situation where internal stored routine variable names match PostgreSQL keywords.

But the problem is not only with keywords. Consider:

CREATE TABLE human (
    name varchar,
    email varchar
);

CREATE FUNCTION get_user_by_mail(email varchar)
RETURNS varchar
LANGUAGE plpgsql
AS $$
DECLARE
    human varchar;
BEGIN
    SELECT name FROM human WHERE email = email INTO human;
    RETURN human;
END
$$;

SELECT get_user_by_mail('foo@bar');


column reference "email" is ambiguous
LINE 1: SELECT name FROM human WHERE email = email
                                             ^
DETAIL: It could refer to either a PL/pgSQL variable or a table column.

OK, at least we have no hidden error like in Pavel’s case. Let’s try to fix it by specifying an alias for the table name:

CREATE FUNCTION get_user_by_mail(email varchar)
RETURNS varchar
LANGUAGE plpgsql
AS $$
DECLARE
    human varchar;
BEGIN
    SELECT name FROM human u WHERE u.email = email INTO human;
    RETURN human;
END
$$;


column reference "email" is ambiguous
LINE 1: SELECT name FROM human u WHERE u.email = email
                                                 ^
DETAIL: It could refer to either a PL/pgSQL variable or a table column.

This seems better, but the parser still cannot distinguish the variable name from the column name. Of course, we may use variable placeholders instead of names. So the quick and dirty fix is:

CREATE FUNCTION get_user_by_mail(email varchar)
RETURNS varchar
LANGUAGE plpgsql
AS $$
DECLARE
    human varchar;
BEGIN
    SELECT name FROM human u WHERE u.email = $1 INTO human;
    RETURN human;
END
$$;

In addition, note that the human variable doesn’t produce an error, even though it shares its name with the target table. I personally do not like using $1 placeholders in code, so my suggestion would be (if, of course, you don’t want to change the parameter name):

CREATE FUNCTION get_user_by_mail(email varchar)
RETURNS varchar
LANGUAGE plpgsql
AS $$
DECLA[...]
Category: postgresql

Rafia Sabih: Using parallelism for queries from PL functions in PostgreSQL 10

planet postgresql - 2019-03-01(金) 19:09:00
Intra-query parallelism was introduced in PostgreSQL in version 9.6. The benefits of parallel scans and joins were talked about, and significant improvements in the benchmark queries at higher scale factors were highlighted. However, one area remained devoid of the benefits: queries from procedural language functions. Precisely, if you fire a query from a PL/pgSQL function, then it cannot use parallel scans or joins for that query, even though the query is capable of using them otherwise. Have a look at an example yourself,

-- creating and populating the table
create table foo (i int, j int);
insert into foo values (generate_series(1,500), generate_series(1,500));

-- for experimental purposes we are forcing parallelism by setting relevant parameters
set parallel_tuple_cost = 0;
set parallel_setup_cost = 0;
alter table foo set (parallel_workers = 4);
set max_parallel_workers_per_gather = 4;

-- executing the query as an SQL statement
explain analyse select * from foo where i <= 150;
 Gather  (cost=0.00..4.56 ...) (actual time=0.217..5.614 ...)
   Workers Planned: 4
   Workers Launched: 4
   ->  Parallel Seq Scan on foo  (cost=0.00..4.56 ...) (actual time=0.004..0.018 ...)
         Filter: (i <= 150)
         Rows Removed by Filter: 70
-- executing the query from a PLpgSQL function in v 9.6
explain analyse select total();
Query Text: SELECT count(*) FROM foo where i <=150
Aggregate  (cost=9.25..9.26 ...)
  ->  Seq Scan on foo  (cost=0.00..8.00 ...)
Query Text: explain analyse select total();
Result  (cost=0.00..0.26 ...)
To your relief, the feature was then added in version 10. Have a look:

-- executing the query from a PLpgSQL function in v 10
explain analyse select total();
Query Text: SELECT count(*) FROM foo where i <=150
Finalize Aggregate  (cost=4.68..4.69 ...)
  ->  Gather  (cost=4.66..4.67 ...)
        Workers Planned: 4
        ->  Partial Aggregate  (cost=4.66..4.67 ...)
              ->  Parallel Seq Scan on foo  (cost=0.00..4.56 ...)
                    Filter: (i <= 150)
This extends the [...]
Category: postgresql

Pavel Stehule: don't use SQL keywords as PLpgSQL variable names

planet postgresql - 2019-03-01(金) 16:22:00
Yesterday I had the chance to see a strange runtime error:

CREATE OR REPLACE FUNCTION fx()
RETURNS integer
LANGUAGE plpgsql
AS $function$
DECLARE offset int DEFAULT 0;
BEGIN
  RETURN offset + 1;
END;
$function$;

postgres=# SELECT fx();
ERROR: query "SELECT offset + 1" returned 0 columns
CONTEXT: PL/pgSQL function fx() line 4 at RETURN

What is the problem? At first glance, the RETURN returns one column, so the error message is strange.

But any PL/pgSQL expression is a SQL expression; more precisely, it is executed as a SQL SELECT statement. So "SELECT offset + 1" is parsed with OFFSET as a clause, and really does return 0 columns.

The basic issue is the bad variable name: it is the same as a SQL reserved keyword, and OFFSET is an unfortunate choice.

I wrote a new check for plpgsql_check that raises a warning in this situation.
Category: postgresql