Feed aggregator

Blog archive in space

planet PHP - 2019-06-18 (Tue) 06:14:00

I’ve been writing articles on this blog for about 13 years, and for a while now I’ve marked all of the 400-ish articles with geo tags.

This blog is Jekyll-based. To add Geo tags, all I had to do was add the information to the ‘front-matter’. Here’s the header of a sample post:

title: "Browser tabs are probably the wrong metaphor" date: "2019-06-11 21:14:00 UTC" tags: - browsers - ux geo: [43.660773, -79.429926] location: "Bloor St W, Toronto, Canada"

I thought it would be neat to grab all these posts and plot them on a map, so next to my ‘time-based’ archive, I can look at a ‘space-based’ one.

This is how that looks:

The archive of this blog in space!

Want to check it out? Browse this interactive map

To generate this map, I did two things. First, I generated a .kml file. The process for this is basically the same as generating an Atom feed for your Jekyll blog. This is how mine looks:

---
layout: null
---
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
  <Document>
    <name>{{ site.title }}</name>
    <description>
      This map contains a list of locations where I wrote an article on this blog.
    </description>
    <Folder>
      <name>Posts</name>
      {% for post in site.posts %}{% if post.geo %}
      <Placemark>
        <name>{{ post.title | xml_escape }}</name>
        <Point>
          <coordinates>
            {{ post.geo[1] }},{{ post.geo[0] }},0
          </coordinates>
        </Point>
        <description>https://evertpot.com{{post.url}}</description>
        <atom:link type="text/html" rel="alternate" href="https://evertpot.com{{ post.url }}"/>
      </Placemark>
      {% endif %}{% endfor %}
    </Folder>
  </Document>
</kml>

Lastly, I needed to generate a map page and use the Google maps API to pull in the .kml:

--- layout:

Truncated by Planet PHP, read more at the original (another 1400 bytes)

Category: php

pgCMH - Columbus, OH: What’s new in pgBouncer

planet postgresql - 2019-06-17 (Mon) 13:00:00

The June meeting will be held at 18:00 EST on Tues, the 25th. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea on the amount of food needed.

What

CoverMyMeds’ very own CJ will be presenting this month, telling us what’s new and improved in pgBouncer as well as how to get it up and running. Discussion will include real-life examples from its use at CMM. pgBouncer is the lightweight connection pooler for PostgreSQL.

Where

CoverMyMeds has graciously agreed to validate your parking if you use their garage, so please park there:

You can safely ignore any sign saying not to park in the garage, as long as it’s after 17:30 when you arrive.

Park in any space that is not marked ‘24 hour reserved’.

Once parked, take the elevator/stairs to the 3rd floor to reach the Miranova lobby. Once in the lobby, the elevator bank is in the back (West side) of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. (If the elevator won’t let you pick the 11th floor, contact Doug or CJ (info below)). Once you exit the elevator, look to your left and right; one side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space:

The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079 or CJ at +1.740.407.7043

Category: postgresql

Solving problems and failure with PHP

planet PHP - 2019-06-16 (Sun) 05:19:00

Imagine living in a 500-square-foot store in a strip mall. The back half of the business was as expected, with a bathroom, 2 small offices, and a work area. The front was a bedroom barely large enough to hold a bed, and a living room barely able to contain a couch and TV. The only thing separating the living room from the sidewalk, and the busy main street, was paper taped onto the floor-to-ceiling windows. And behind that, some vertical blinds to make it more home-like.

In 1996, that was my life. I was broke, and could no longer afford an apartment, so I moved into the front half of my failing business. I had one employee, who believed in me so much they were willing to donate their spare time to help me because I couldn’t afford to pay them.

Up to that point in my life, I had never made more than $9,000 in a single year. I was a failure, and couldn’t find a way out. I was living by eating a single Subway $5 foot-long sub…each day…for weeks, because that is all I could afford. And friends contributed cigarettes to keep that habit alive.

“I was living on a single Subway $5 foot-long sub…each day”

To top things off, I was experiencing anxiety attacks multiple times each day. After a couple of trips to the emergency room, convinced I’d had a heart attack, I finally gave up going there because the bill was already thousands of dollars and would continue dragging my credit rating even farther down.

But then something happened that changed my life. A nurse in the emergency room was asking me general health questions, such as age, height, weight, and how much I smoked/drank. (I answered 2 packs of cigarettes and 2 pots of coffee a day.) She looked at me with caring eyes and asked, “Do you think God intended you to put that much poison into your body?”

For some reason, I’d never thought of my bad habits in that manner, and it made sense to me. So, at that moment I quit smoking and stopped drinking coffee. This caused me to suffer from bronchial spasms severe enough that I could see my chest quivering through my shirt, and even more anxiety attacks over the following month.

I moved in with family at the age of 30 and started searching for a job. In northeast Ohio, that is no small task. That area of the country has been abandoned for so long that the population of Youngstown, Ohio has declined from 160,000 in the ’70s to only 60,000-ish in 2017. (http://worldpopulationreview.com/us-cities/youngstown-oh-population/)

Finally, I found a job selling cars for about a year, which paid fairly well. And luckily a friend of my mom offered me a job as a service person with a cabinetry company, which was the best job I’d ever had to that point. I loved it and thrived.

As one part of the job, I generated my own reports to allow me to grow quickly over a couple of years from District manager to Area manager. As I was being considered for Regional manager, the company offered me a job in Florida generating reports for the entire country. This meant I needed to move to Florida. I took it, and in 2000 I moved to West Palm Beach.

This was when I was introduced, through necessity, to programming, as the events of 9/11 caused me to lose my job. In 2002 I started learning to program with PHP and accepted funding from Florida to get some training to learn system administration.

After a job as a system admin, I decided I liked web programming more and focused on finding a new job doing that.

Over the following years, I continued gaining skills and moved from one job to the next to ensure my level of compensation kept up with those newly honed skills. I also took up long distance ultra-running, and Judo, as I continued to improve my life.

Today, as a senior/architect-level web developer, who has also worked as a consultant and now as a developer advocate, I’ve been through a lot over the past 21 years and had many amazing accomplishments.

Maybe I would have achieved these things regardless of the technology used. But PHP enabled me to do it more easily than, I think, any other programming/scripting language would have. Looking back, it was the approachability of PHP that allowed me to start solving problems immediately, and it has allowed me to continue growing my skills as PHP itself continued to mature.

You may ask, “Why are you sharing this?” Or you may get the impression I’m bragging, and perhaps that is a little true. But most of all I wish to share 3 thoughts, which is why I am sharing my story in such an open way.

#1 – If you are down on your luck and struggling to get by, know that as long as you continue to push forward, great things will eventually happ

Truncated by Planet PHP, read more at the original (another 1121 bytes)

Category: php

I was wrong about PSR-11

planet PHP - 2019-06-16 (Sun) 04:43:00

Back in January 2017, the PHP Framework Interoperability Group (FIG) reviewed and passed PSR-11, the "Container Interface" specification. It was a very simplistic 2-method interface for Dependency Injection Containers, which had been worked on for some time by a small group. (This was before FIG had formal Working Groups, but "container-interop" was one of the effectively proto-Working Groups that were floating about.)

PSR-11 passed overwhelmingly, 23 to 1 out of the FIG member projects at the time. The lone holdout was Drupal, for which at the time I was the voting representative.

Two and a half years later, I will say I was wrong, and PSR-11 has been a net-win for PHP.

Continue reading this post on SteemIt.

Larry 15 June 2019 - 2:43pm
Category: php

Avinash Kumar: Bloom Indexes in PostgreSQL

planet postgresql - 2019-06-15 (Sat) 04:29:56

There is a wide variety of indexes available in PostgreSQL. While most are common to almost all databases, there are some types of indexes that are more specific to PostgreSQL. For example, GIN indexes are helpful for speeding up searches for element values within documents. GIN and GiST indexes can both be used to make full-text searches faster, whereas BRIN indexes are more useful when dealing with large tables, as they only store the summary information of a page. We will look at these indexes in more detail in future blog posts. For now, I would like to talk about another of the special indexes, one that can speed up searches on a table that has a huge number of columns and is massive in size. And that is called a bloom index.

In order to understand the bloom index better, let’s first understand the bloom filter data structure. I will try to keep the description as short as I can, so that we can then discuss how to create this index and when it will be useful.

Most readers will know that an array in computer science is a data structure that consists of a collection of values or variables, whereas a bit, or binary digit, is the smallest unit of data, represented as either 0 or 1. A bloom filter is a bit array of m bits that are all initially set to 0.

A bit array is an array that stores a certain number of bits (0s and 1s). It is one of the most space-efficient data structures for testing whether an element is in a set or not.

Why use bloom filters?

Let’s consider some alternatives, such as a list data structure and hash tables. A list needs to be iterated element by element to search for a specific value. We could also try to maintain a hash table, where each element in the list is hashed, and then check whether the hash of the element we are searching for matches a hash in the table. But checking through all the hashes may be a higher order of magnitude than expected, and if there is a hash collision, linear probing may be time-consuming. When we

[...]
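
The article is truncated here, but as a minimal sketch of where it is heading: bloom indexes are provided by PostgreSQL’s contrib bloom extension (the table and column names below are made up for illustration):

CREATE EXTENSION bloom;

-- A wide table where queries may filter on arbitrary column combinations
CREATE TABLE t_wide (c1 int, c2 int, c3 int, c4 int, c5 int);

-- A single bloom index covers all the columns; length is the signature
-- size in bits, colN the number of bits generated per column
CREATE INDEX idx_wide_bloom ON t_wide
    USING bloom (c1, c2, c3, c4, c5)
    WITH (length = 80, col1 = 2);

-- The index match is lossy: candidate rows are rechecked against the
-- actual predicate, so false positives only cost extra heap visits
SELECT * FROM t_wide WHERE c2 = 5 AND c4 = 7;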
Category: postgresql

Interview with John Kelly

planet PHP - 2019-06-15 (Sat) 01:40:00
Category: php

Interview with Vesna Kovach

planet PHP - 2019-06-14 (Fri) 11:30:00
Category: php

PHP 7.4.0 alpha 1 Released

php.net - 2019-06-13 (Thu) 20:24:11
Category: php

PHP 7.4.0 alpha 1 Released

planet PHP - 2019-06-13 (Thu) 09:00:00
The PHP team is glad to announce the release of the first PHP 7.4.0 version, PHP 7.4.0 Alpha 1. This starts the PHP 7.4 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 7.4.0 Alpha 1 please visit the download page. Please test this version carefully and report any issues found in the bug reporting system. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be Alpha 2, planned for June 27. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Category: php

Luca Ferrari: A recursive CTE to get information about partitions

planet postgresql - 2019-06-12 (Wed) 09:00:00

I was wondering about writing a function that provides a quick status about partitioning. But wait, PostgreSQL has recursive CTEs!


I’m used to partitioning: it allows me to quickly and precisely split data across different tables. PostgreSQL 10 introduced native partitioning, and since then I’ve been using native partitioning over inheritance whenever possible.
But how can you get a quick overview of the partition status? I mean, knowing which partition is growing the most?
In the beginning I was thinking of writing a function to do that task, quickly finding myself iterating recursively over pg_inherits, the table that links partitions to their parents. But the keyword here is recursively: PostgreSQL provides recursive Common Table Expressions, and a quick search revealed I was right: it is possible to do it with a single CTE. Taking inspiration from this mailing list message, here is a simple CTE to get the partition status (you can find it on my GitHub repository):

WITH RECURSIVE inheritance_tree AS (
    SELECT c.oid            AS table_oid
         , c.relname        AS table_name
         , NULL::text       AS table_parent_name
         , c.relispartition AS is_partition
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'p'
      AND c.relispartition = false
    UNION ALL
    SELECT inh.inhrelid AS table_oid
         , c.relname    AS table_name
         , ...
Category: postgresql

Browser tabs are probably the wrong metaphor

planet PHP - 2019-06-12 (Wed) 06:14:00

Back when Internet Explorer was dominant, and every developer I knew installed Firefox on every family member’s (and their dog’s) desktop, I remember a big selling point for convincing people to use Firefox was ‘Tabs’.

Firefox may not have been the first browser to introduce tabs, but in my experience it was the number one selling point to get people to switch, especially those who otherwise didn’t care about Internet Explorer being dominant and stagnating the web.

Firefox 1.5 featuring tabs. Via: netfaqs.com

Since then, pretty much every desktop browser has adopted the same basic UI with only mild variations.

One thing that kind of interests me is that every now and then I spot someone with a ridiculous number of tabs open. On Firefox this is still somewhat manageable, because tabs don’t shrink below a certain size, but show arrows on each side to scroll through them, along with a menu.

Firefox 66 with a lot of tabs

On Chrome, though, having a lot of tabs open makes the tab bar completely unusable.

Chrome with a lot of tabs

The surprising thing to me is that I see this a lot on family members’ and friends’ screens.

When I first saw this, I admit I may have made a little fun of people, maybe commenting on their poor organizational skills. But I soon realized this pattern was common enough that it’s hard to blame the user, and I’ve started to feel that for many (if not most) browser users, the tab is just a bad UI.

When interviewing people with a ton of tabs on their screen and asking them why, the most common responses can be somewhat categorized as:

  • I need to get back to that tab later.
  • I just keep opening new tabs and forget about the old ones.

When asking the first group for more information, I realized that a lot of people use tabs as a sort of bookmark: something to get back to later. One person I talked to was afraid of restarting their computer, because of the risk of not being able to get back to their open tabs.

I tend to use tabs for ‘currently active work’ and tend to keep them somewhat organized, and I’m sure there are plenty of people like myself. But for all the people that accumulate hundreds of tabs, it feels like the time is right for a better paradigm that combines bookmarks, tabs and history.

Perhaps organizing sessions into interactive timelines and grouping things together based on the user’s behavior might be a better approach. I believe that anything that requires active management and organizing would probably not work, though. I don’t think I would remove the tab altogether, but just show me the last few things, treat them as ephemeral, and provide an option to explore my current and previous session(s).

I’m sure experiments are out there, but so far it doesn’t seem like major browser vendors have had the guts to release something new. This is a bit surprising to me because, despite fierce competition and users with strong opinions, everyone seems to be doing very similar things and converging on Chrome.

Perhaps part of the issue is that everyone just wants to cater to the largest possible audience, making everyone risk-averse.

Anyway, this is all speculation with no data from a non-expert. Curious what your thoughts are, or if you know of any experiments that solve this problem.

Wanna respond? Reply to this tweet

Category: php

Jeff McCormick: What's New in Crunchy PostgreSQL Operator 4.0

planet postgresql - 2019-06-12 (Wed) 00:27:48

Crunchy Data is pleased to release PostgreSQL Operator 4.0.

Category: postgresql

430 Would Block

planet PHP - 2019-06-12 (Wed) 00:00:00

If you look at lists of HTTP status codes, you might notice that there’s a gap between 429 Too Many Requests and 431 Request Header Fields Too Large.

I find this interesting, so I did some digging, and it turns out that around the same time as 429 and 431, another status code was proposed that never made it into a standard: 430 Would Block.

The draft specification has a few solutions to make HTTP/1.1 pipelining usable. HTTP/1.1 pipelining is a feature that allows a browser to send multiple requests over a single TCP connection without waiting for each response.

This could potentially be a major optimization, but adoption has been problematic. Pipelining support did exist in a number of clients and servers, but it was often behind a flag that was disabled by default. Since then, HTTP/2 was introduced, which solves this entirely, and various clients such as curl have removed HTTP/1.1 pipelining support altogether; it’s unlikely this feature will ever come back.

The 430 Would Block status code was a code that a server could use to reject a pipelined request when one of the requests would block subsequent ones later in the pipeline.

Anyway, I wrote this mostly for historical interest’s sake. Don’t use this.

Category: php

Xdebug Update: May 2019

planet PHP - 2019-06-11 (Tue) 17:17:00
London, UK Tuesday, June 11th 2019, 09:17 BST

This is another of the monthly update reports in which I explain what happened with Xdebug development in the past month. It will be published on the first Tuesday after the 5th of each month. Patreon supporters will get it earlier, on the first of each month. You can become a patron here to support my work on Xdebug. More supporters means that I can dedicate more of my time to improving Xdebug.

In May, I worked on Xdebug for 32 hours, and did the following things:

2.7.2 Release

I made the 2.7.2 release available at the start of the month. This release addressed a few bugs:

  • Issue #1488: Rewrite DBGp 'property_set' to always use eval

  • Issue #1586: error_reporting()'s return value is incorrect during debugger's 'eval' command

  • Issue #1615: Turn off Zend OPcache when remote debugger is turned on

  • Issue #1656: remote_connect_back alters header if multiple values are present

  • Issue #1662: __debugInfo should not be used for user-defined classes

The first issue had been lingering since Xdebug introduced support for PHP 7. PHP 7 changed the way variables are handled in the engine, which means it is a lot harder to obtain a zval structure that is modifiable. Xdebug used that existing functionality in the step debugger to modify a variable's contents, but only if a variable type was explicitly set as well. Because it is no longer possible to retrieve this zval structure for modification, Xdebug switched from direct modification to calling the engine's internal eval_string function to set new values for variables.

Xdebug's wrapper around the engine's eval_string function is also used when running the DBGp eval command. IDEs use this for implementing watch statements. Because Xdebug shouldn't leak warning or error messages during the use of DBGp protocol commands, Xdebug's wrapper sets error_reporting to 0. However, that means if you would run error_reporting() through the DBGp protocol with the eval command, it would always return 0. The second bug (#1586) fixed this, so that running error_reporting() with eval now returns the correct value.

The third issue in the list addresses a problem with Zend OPcache's optimiser turned on. With optimisations turned on, it is possible that variables no longer exist, or that useless statements are removed, to make your code run faster. However, this is highly annoying when you are debugging, because you can no longer reliably inspect what is going on. By turning off code optimisation when Xdebug's step debugger is active, normality is restored.

The last two items in the 2.7.2 release are minor bug fixes.

Resolving Breakpoints

The fine folks at JetBrains have looked at my implementation of issue #1388: Support 'resolved' flag for breakpoints. They found that although the implemented functionality works, it does not yet handle the resolving of breakpoints that are set in a scope that is currently being executed (i.e., when the function, method, or closure is currently active). I have briefly looked at solving this problem, but have not yet found a good solution. In addition, I intend to change the line searching algorithm to scan at most 5 lines in each direction instead of 10. This should prevent unnecessary jumping around, and unintended breaks.

PHP 7.4 Support

The rest of th

Truncated by Planet PHP, read more at the original (another 1793 bytes)

Category: php

Hans-Juergen Schoenig: Tech preview: How PostgreSQL 12 handles prepared plans

planet postgresql - 2019-06-11 (Tue) 17:00:50

PostgreSQL 12 is just around the corner, and therefore we already want to present some of the new features we like. One important new feature gives users and devops the chance to control the behavior of the PostgreSQL optimizer. Prepared plans are always a major concern (people moving from Oracle seem to be especially concerned), and therefore it makes sense to discuss the way plans are handled in PostgreSQL 12.

Firing up a PostgreSQL test database

To start I will create a simple table consisting of just two fields:

db12=# CREATE TABLE t_sample (id serial, name text);
CREATE TABLE

Then some data is loaded:

db12=# INSERT INTO t_sample (name) SELECT 'hans' FROM generate_series(1, 1000000);
INSERT 0 1000000
db12=# INSERT INTO t_sample (name) SELECT 'paul' FROM generate_series(1, 2);
INSERT 0 2

Note that 1 million names are identical (“hans”) and just two people are called “paul”. The distribution of data is therefore quite special, which has a major impact as you will see later in this post.
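
To see the skew for yourself, here is a quick illustrative query (not from the original post, but grounded in the data loaded above):

db12=# SELECT name, count(*) FROM t_sample GROUP BY name;
 name |  count
------+---------
 hans | 1000000
 paul |       2
(2 rows)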

To show how plans can change depending on the setting, an index on “name” is defined as shown in the next listing:

db12=# CREATE INDEX idx_name ON t_sample (name);
CREATE INDEX

The PostgreSQL query optimizer at work

Let us run a simple query and see what happens:

db12=# explain SELECT count(*) FROM t_sample WHERE name = 'hans';
                                      QUERY PLAN
------------------------------------------------------------------
 Finalize Aggregate  (cost=12656.23..12656.24 rows=1 width=8)
   ->  Gather  (cost=12656.01..12656.22 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=11656.01..11656.02 rows=1 width=8)
               ->  Parallel Seq Scan on t_sample  (cost=0.00..10614.34 rows=416668 width=0)
                     Filter: (name = 'hans'::text)
(6 rows)

In this case PostgreSQL decided to ignore the index and go for a sequential scan. It has even seen that the table is already quite large and opted for a parallel query. Still, what we see is a sequential scan. All data in the table has to be

[...]
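
The article is truncated here, but the PostgreSQL 12 feature it is building toward is the new plan_cache_mode setting, which controls whether prepared statements get generic or custom (per-parameter-value) plans. The PREPARE/EXECUTE experiment below is my own illustration of that setting, not code from the post:

PREPARE sample_stmt (text) AS
    SELECT count(*) FROM t_sample WHERE name = $1;

-- New in PostgreSQL 12; accepted values are auto (the default),
-- force_custom_plan and force_generic_plan
SET plan_cache_mode = 'force_custom_plan';

-- With a custom plan the optimizer sees the actual parameter value,
-- so the rare 'paul' can pick the index while the common 'hans'
-- keeps the sequential scan
EXECUTE sample_stmt('paul');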
Category: postgresql

Luca Ferrari: Checking the sequences status on a single pass

planet postgresql - 2019-06-11 (Tue) 09:00:00

It is quite simple to wrap a couple of queries in a function to have a glance at all the sequences and their cycling status.


The catalog pg_sequence keeps track of the definition of a single sequence, including the increment value and boundaries. Combined with pg_class and a few other functions, it is possible to create a very simple administrative function to keep track of the overall sequence status.
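
As a minimal sketch of the kind of catalog join such a function builds on (the catalog columns are real; the output aliases are mine):

SELECT c.relnamespace::regnamespace::text || '.' || c.relname AS seq_name
     , s.seqincrement
     , s.seqmax  AS lim
     , s.seqcycle
FROM pg_sequence s
JOIN pg_class c ON c.oid = s.seqrelid;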

I’ve created a seq_check() function that provides an output as follows:

testdb=# select * from seq_check() ORDER BY remaining;
        seq_name         | current_value |    lim     | remaining
-------------------------+---------------+------------+------------
 public.persona_pk_seq   |       5000000 | 2147483647 |     214248
 public.root_pk_seq      |         50000 | 2147483647 | 2147433647
 public.students_pk_seq  |             7 | 2147483647 | 2147483640
(3 rows)

As you can see, the function provides the current value of the sequence, the maximum value (lim), and how many values the sequence can still provide before it overflows or cycles. For example, persona_pk_seq has only 214248 values left to provide. Combined with the current value, 5000000, this hints that the sequence probably has too large an increment interval.

The code of the function is as follows:

CREATE OR REPLACE FUNCTION seq_check()
RETURNS TABLE( seq_name text,
               current_value bigint,
               lim...
Category: postgresql

Paul Ramsey: Parallel PostGIS and PgSQL 12 (2)

planet postgresql - 2019-06-08 (Sat) 01:00:00

In my last post I demonstrated that PostgreSQL 12 with PostGIS 3 will provide, for the first time, automagical parallelization of many common spatial queries.

This is huge news, as it opens up the possibility of extracting more performance from modern server hardware. Commenters on the post immediately began conjuring images of 32-core machines reducing their query times to milliseconds.

So, the next question is: how much more performance can we expect?

To investigate, I acquired a 16 core machine on AWS (m5d.4xlarge), and installed the current development snapshots of PostgreSQL and PostGIS, the code that will become versions 12 and 3 respectively, when released in the fall.

How Many Workers?

The number of workers assigned to a query is determined by PostgreSQL: the system looks at a given query, and the size of the relations to be processed, and assigns workers proportional to the log of the relation size.

For parallel plans, the “explain” output of PostgreSQL will include a count of the number of workers planned and assigned. That count is exclusive of the leader process, and the leader process actually does work outside of its duties in coordinating the query, so the number of CPUs actually working is more than the num_workers, but slightly less than num_workers+1. For these graphs, we’ll assume the leader fully participates in the work, and that the number of CPUs in play is num_workers+1.
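
As an illustrative check (the table name here is hypothetical), the worker counts show up directly in the plan output:

EXPLAIN ANALYZE
SELECT count(*) FROM some_big_table;
-- The plan tree will contain lines such as:
--   Workers Planned: 4
--   Workers Launched: 4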

Forcing Workers

PostgreSQL’s automatic calculation of the number of workers could be a blocker to performing analysis of parallel performance, but fortunately there is a workaround.

Tables support a “storage parameter” called parallel_workers. When a relation with parallel_workers set participates in a parallel plan, the value of parallel_workers overrides the automatically calculated number of workers.

ALTER TABLE pd SET ( parallel_workers = 8);

In order to generate my data, I re-ran my queries, upping the number of parallel_workers on my tables for each run.

Setup

Before running the tests, I set all the global limits o

[...]
Category: postgresql

International PHP Conference 2019 - Fall Edition

php.net - 2019-06-07 (Fri) 21:37:00
Category: php
