Feed Aggregator

REGINA OBE: Coming PostGIS 2.5 ST_OrientedEnvelope

planet postgresql - 2018-06-19(Tue) 06:42:00

PostGIS 2.5 is just around the corner. One of the new functions coming is ST_OrientedEnvelope. This is something I've been waiting for, for years. It is the true minimum bounding rectangle, as opposed to ST_Envelope, which is an axis-aligned bounding rectangle.

Below is a pictorial view showing the difference between the two.

Continue reading "Coming PostGIS 2.5 ST_OrientedEnvelope"
Category: postgresql

Sebastian Insausti: A Performance Cheat Sheet for PostgreSQL

planet postgresql - 2018-06-18(Mon) 18:17:42

Performance is one of the most important and most complex tasks when managing a database. It can be affected by the configuration, the hardware or even the design of the system. By default, PostgreSQL is configured with compatibility and stability in mind, since the performance depends a lot on the hardware and on our system itself. We can have a system with a lot of data being read but the information does not change frequently. Or we can have a system that writes continuously. For this reason, it is impossible to define a default configuration that works for all types of workloads.

In this blog, we will see how one goes about analyzing the workload, or queries, that are running. We shall then review some basic configuration parameters to improve the performance of our PostgreSQL database. The list of PostgreSQL parameters is extensive, so we will only touch on some of the key ones. However, one can always consult the official documentation to delve into the parameters and configurations that seem most important or useful in our environment.

EXPLAIN

One of the first steps we can take to understand how to improve the performance of our database is to analyze the queries that are made.

PostgreSQL devises a query plan for each query it receives. To see this plan, we will use EXPLAIN.

The structure of a query plan is a tree of plan nodes. The nodes in the lower level of the tree are scan nodes. They return raw rows from a table. There are different types of scan nodes for different methods of accessing the table. The EXPLAIN output has a line for each node in the plan tree.

world=# EXPLAIN SELECT * FROM city t1,country t2 WHERE id>100 AND t1.population>700000 AND t2.population<7000000;
                                QUERY PLAN
--------------------------------------------------------------------------
 Nested Loop  (cost=0.00..734.81 rows=50662 width=144)
   ->  Seq Scan on city t1  (cost=0.00..93.19 rows=347 width=31)
         Fil[...]
Category: postgresql

Hans-Juergen Schoenig: PostgreSQL clusters: Monitoring performance

planet postgresql - 2018-06-18(Mon) 17:30:06

When people talk about database performance monitoring, they usually think of inspecting one PostgreSQL database server at a time. While this is certainly useful, it can also be quite beneficial to inspect the status of an entire database cluster, or to inspect a set of servers working together, at once. Fortunately there are easy means to achieve that with PostgreSQL. This post outlines how it works.

pg_stat_statements: The best tool to monitor PostgreSQL performance

If you want to take a deep look at PostgreSQL performance, there is really no way around pg_stat_statements. It offers a lot of information and is really easy to use.

To install pg_stat_statements, the following steps are necessary:

  • run “CREATE EXTENSION pg_stat_statements” in your desired database
  • add the following line to postgresql.conf:
    • shared_preload_libraries = ‘pg_stat_statements’
  • restart PostgreSQL
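
As a quick check that the extension is collecting data, a query along these lines lists the most expensive statements (a sketch; on PostgreSQL 10/11 the timing column is called total_time):

SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;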

Once this is done, PostgreSQL will already be busy collecting data on your database hosts. However, how can we create a “clusterwide pg_stat_statements” view so that we can inspect an entire set of servers at once?

Using pg_stat_statements to check an entire database cluster

Our goal is to show data from a list of servers in a single view. One way to do that is to make use of PostgreSQL’s foreign data wrapper infrastructure. We can simply connect to all servers in the cluster and unify the data in a single view.

Let us assume we have 3 servers, a local machine, “a_server”, and “b_server”. Let us get started by connecting to the local server to run the following commands:

CREATE USER dbmonitoring LOGIN PASSWORD 'abcd' SUPERUSER;
GRANT USAGE ON SCHEMA pg_catalog TO dbmonitoring;
GRANT ALL ON pg_stat_statements TO dbmonitoring;

In the first step I created a simple user to do the database monitoring. Of course you can handle users and so on differently but it seems like an attractive idea to use a special user for that purpose.

The next command enables the postgres_fdw extension, which is necessary to connect to those remote serve

[...]
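
The excerpt stops short of the postgres_fdw setup itself. A minimal sketch of the approach described above could look like the following; the server name, connection options, column list, and view definition here are assumptions for illustration, not the author's exact commands:

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- one foreign server and user mapping per remote machine in the cluster
CREATE SERVER a_server FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'a_server', dbname 'postgres');
CREATE USER MAPPING FOR dbmonitoring SERVER a_server
    OPTIONS (user 'dbmonitoring', password 'abcd');

-- expose the remote pg_stat_statements as a local foreign table
CREATE FOREIGN TABLE pg_stat_statements_a_server (
    queryid    bigint,
    query      text,
    calls      bigint,
    total_time double precision
) SERVER a_server OPTIONS (schema_name 'public', table_name 'pg_stat_statements');

-- the "clusterwide" view is then just a UNION ALL over all servers
CREATE VIEW pg_stat_statements_cluster AS
    SELECT 'local' AS node, query, calls, total_time FROM pg_stat_statements
    UNION ALL
    SELECT 'a_server', query, calls, total_time FROM pg_stat_statements_a_server;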
Category: postgresql

Tatsuo Ishii: Even more fine load balancing control with Pgpool-II 4.0

planet postgresql - 2018-06-15(Fri) 13:36:00
A while back, I posted an article titled "More load balancing fine control" to explain a new feature of the upcoming Pgpool-II 4.0. Today I would like to talk about yet another new load balancing feature in Pgpool-II.

Pgpool-II can already control read query load balancing at several levels of granularity:
(See the document for more details).
  • Whether to enable load balancing or not (load_balance_mode)
  • By database (database_redirect_preference_list)
  • By application name (app_name_redirect_preference_list)
  • By functions used in the query (white_function_list, black_function_list)
  • Statement level (/*NO LOAD BALANCE*/ comment)
The last method allows statement-level control of load balancing, but it requires rewriting the query, which is often impossible if you are using commercial software.

A new configuration parameter called "black_query_pattern_list" allows you to disable load balancing for queries specified by the parameter. The parameter takes a regular expression string. If a query matches the expression, load balancing is disabled for that query: the query is sent to the primary (master) PostgreSQL server.

Here are some examples.

We have:
black_query_pattern_list = 'SELECT \* FROM t1\;;'
in pgpool.conf. Note that some special characters must be escaped with '\' (backslash) characters.

So far, no SELECTs have been issued to either the primary or the standby.

test=# show pool_nodes;
 node_id | hostname | port  | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | last_status_change 
---------+----------+-------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | /tmp     | 11002 | up     | 0.500000  | primary | 0          | false             | 0                 | 2018-06-15 11:57:25
 1       | /tmp     | 11003 | up     | 0.500000  | standby | 0          | true              | 0                 | 2018-06-15 11:57:25
(2 rows)

If the following query is issued, then the "select_cnt" column of the standby should be incremented, since standb[...]
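
Presumably the truncated example continues along these lines (an illustration, not the original post's queries): a statement matching the pattern goes to the primary, while any other read-only statement is load balanced as usual.

test=# SELECT * FROM t1;  -- matches black_query_pattern_list, so it is sent to the primary
test=# SELECT * FROM t2;  -- no match: load balancing applies and the standby's select_cnt grows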
Category: postgresql

Marco Slot: Scalable incremental data aggregation on Postgres and Citus

planet postgresql - 2018-06-15(Fri) 01:36:00

Many companies generate large volumes of time series data from events happening in their application. It’s often useful to have a real-time analytics dashboard to spot trends and changes as they happen. You can build a real-time analytics dashboard on Postgres by constructing a simple pipeline:

  1. Load events into a raw data table in batches
  2. Periodically aggregate new events into a rollup table
  3. Select from the rollup table in the dashboard

For large data streams, Citus (an open source extension to Postgres that scales out Postgres horizontally) can scale out each of these steps across all the cores in a cluster of Postgres nodes.

One of the challenges of maintaining a rollup table is tracking which events have already been aggregated—so you can make sure that each event is aggregated exactly once. A common technique to ensure exactly-once aggregation is to run the aggregation for a particular time period after that time period is over. We often recommend aggregating at the end of the time period for its simplicity, but then you cannot provide any results before the time period is over, and backfilling is complicated.

Building rollup tables in a new and different way

In this blog post, we’ll introduce a new approach to building rollup tables which addresses the limitations of using time windows. When you load older data into the events table, the rollup tables will automatically be updated, which enables backfilling and late arrivals. You can also start aggregating events from the current time period before the current time period is over, giving a more real time view of the data.

We assume all events have an identifier which is drawn from a sequence and provide a simple SQL function that enables you to incrementally aggregate ranges of sequence values in a safe, transactional manner.
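
The post goes on to define that function. As a rough, single-node sketch of the idea (the bookkeeping table and names below are my own, and the original handles concurrently in-flight inserts more carefully):

CREATE TABLE rollup_state (
    rollup_name        text PRIMARY KEY,
    last_aggregated_id bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION claim_rollup_window(p_rollup text, p_new_max bigint,
    OUT window_start bigint, OUT window_end bigint)
LANGUAGE plpgsql AS $$
BEGIN
    -- lock the bookkeeping row so concurrent rollup runs serialize on it
    SELECT last_aggregated_id INTO window_start
    FROM rollup_state WHERE rollup_name = p_rollup FOR UPDATE;

    window_end := greatest(window_start, p_new_max);

    UPDATE rollup_state SET last_aggregated_id = window_end
    WHERE rollup_name = p_rollup;
END;
$$;

-- a caller claims a window and, in the same transaction, aggregates only the
-- events with window_start < event_id <= window_end into the rollup table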

We tested this approach for a CDN use case and found that a 4-node Citus database cluster can simultaneously:

  • Ingest and aggregate over a million rows per second
  • Keep the rollup table up-to-date within ~10s
  • Answer analytical queries in u
[...]
Category: postgresql

gabrielle roth: PDXPUG: June meeting

planet postgresql - 2018-06-14(Thu) 22:35:12

When: 6-8pm Thursday June 21, 2018
Where: iovation
Who: Mark Wong
What: Intro to OmniDB with PostgreSQL

OmniDB is an open source browser-based app designed to access and manage many different Database Management systems, e.g. PostgreSQL, Oracle and MySQL. OmniDB can run either as an App or via Browser, combining the flexibility needed for various access paths with a design that puts security first.

OmniDB’s main objective is to offer a unified workspace with all the functionality needed to manipulate different DBMSs. It is built with simplicity in mind, designed to be a fast and lightweight browser-based application.

Get a tour of OmniDB with PostgreSQL!

 

Mark leads the 2ndQuadrant performance practice as a Performance Consultant for English Speaking Territories, based out of Oregon in the USA. He is a long-time contributor to PostgreSQL, co-organizer of the Portland PostgreSQL User Group, and serves as a Director and Treasurer for the United States PostgreSQL Association.


If you have a job posting or event you would like me to announce at the meeting, please send it along. The deadline for inclusion is 5pm the day before the meeting.

Our meeting will be held at iovation, on the 3rd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry! For access to the 3rd floor of the plaza, please either take the lobby stairs to the third floor or take the plaza elevator (near Subway and Rabbit’s Cafe) to the third floor. There will be signs directing you to the meeting room. All attendees must check in at the iovation front desk.

See you there!

Category: postgresql

Andrew Dunstan: Road test your patch in one command

planet postgresql - 2018-06-14(Thu) 21:39:04

If you have Docker installed on your development machine, there is a simple way to road test your code using the buildfarm client, in a nicely contained environment.

These preparatory steps only need to be done once. First clone the repository that has the required container definitions:

git clone https://github.com/PGBuildFarm/Dockerfiles.git bf-docker

Then build a container image to run the command (in this example we use the file based on Fedora 28):

cd bf-docker
docker build --rm=true -t bf-f28 -f Dockerfile.fedora-28 .

Make a directory to contain all the build artefacts:

mkdir buildroot-f28

That’s all the preparation required. Now you can road test your code with this command:

docker run -v buildroot-f28:/app/buildroot \
    -v /path/to/postgres/source:/app/pgsrc bf-f28 \
    run_build.pl --config=build-fromsource.conf

The config file can be customized if required, but this is a pretty simple way to get started.

Category: postgresql

Regina Obe: Unpivoting data using JSON functions

planet postgresql - 2018-06-14(Thu) 16:12:00

Most of our use cases for the built-in json support in PostgreSQL are not about implementing schemaless storage, but about remolding data. Remolding can take the form of restructuring data into json documents suitable for web maps, javascript charting web apps, or datagrids. It also has uses beyond just outputting data in json form. In addition, the functions are useful for unraveling json data into a more meaningful relational form.

One of the common cases where we use json support is what we call UNPIVOTING data. We demonstrated this in our Postgres Vision 2018 presentation, in slide 23. This trick won't work in other relational databases that support JSON, because it also relies on a long-existing PostgreSQL feature: the ability to treat a row as a data field.
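
As a small illustration of that trick (the table and column names here are invented for the example, not taken from the talk): row_to_json() turns each row into a document, and json_each_text() then unpivots that document into one (key, value) pair per column.

SELECT t.id, j.key AS measure, j.value
FROM   survey_results AS t,
       json_each_text(row_to_json(t)) AS j
WHERE  j.key <> 'id';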

Continue reading "Unpivoting data using JSON functions"
Category: postgresql

Haroon .: Using Window Functions for Time Series IoT Analytics in Postgres-BDR

planet postgresql - 2018-06-14(Thu) 01:35:45

The Internet of Things tends to generate large volumes of data at great velocity. Oftentimes this data is collected from geographically distributed sources and aggregated at a central location for data scientists to perform their magic, i.e. find patterns and trends and make predictions.

Let’s explore what the IoT Solution using Postgres-BDR has to offer for Data Analytics. Postgres-BDR is offered as an extension on top of PostgreSQL 10 and above. It is not a fork. Therefore, we get the power of the complete set of analytic functions that PostgreSQL has to offer. In particular, I am going to play around with PostgreSQL’s Window Functions here to analyze a sample of time series data from temperature sensors.

Let’s take an example of IoT temperature sensor time series data spread over a period of 7 days. In a typical scenario, temperature sensors send readings every minute; some cases could be even more frequent. For the sake of simplicity, however, I am going to use one reading per day. The objective is to use PostgreSQL’s Window Functions for running analytics, and that would not change with a higher frequency of data points.

Here’s our sample data. Again, the number of table fields in a real world IoT temperature sensor would be higher, but our fields of interest in this case are restricted to timestamp of the temperature recording, device that reported it and the actual reading.

CREATE TABLE iot_temperature_sensor_data (
    ts        timestamp without time zone,
    device_id text,
    reading   float
);

and we add some random timeseries temperature sensor reading data for seven consecutive days

         ts          |            device_id             | reading
---------------------+----------------------------------+---------
 2017-06-01 00:00:00 | ff0d1c4fd33f8429b7e3f163753a9cb0 |      10
 2017-06-02 00:00:00 | d125efa43a62af9f50c1a1edb733424d |       9
 2017-06-03 00:00:00 | 0b4ee949cc1ae588dd092a23f794177c |       1
 2017-06-04 00:00:00 | 8b9cef086e02930a808ee97b87b07b03 |       3
 2017-06-05 00:00:00 [...]
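
The excerpt cuts off before the analysis itself. As one example of the kind of window-function query the post builds toward (a sketch against the table above, not the author's exact queries):

-- 3-day moving average and day-over-day change of the readings
SELECT ts::date AS day,
       reading,
       avg(reading) OVER (ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg_3d,
       reading - lag(reading) OVER (ORDER BY ts) AS change_vs_previous_day
FROM   iot_temperature_sensor_data
ORDER  BY ts;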
Category: postgresql

php[world] 2018 - Call for Speakers

php.net - 2018-06-13(Wed) 21:00:51
Category: php

Pavel Stehule: Article about PostgreSQL 11

planet postgresql - 2018-06-13(Wed) 17:52:00
My new article is in Czech, but Google Translate can help.
Category: postgresql

Craig Kerstiens: Configuring memory for Postgres

planet postgresql - 2018-06-13(Wed) 01:44:00

work_mem is perhaps the most confusing setting within Postgres. work_mem is a configuration within Postgres that determines how much memory can be used during certain operations. At its surface, the work_mem setting seems simple: after all, work_mem just specifies the amount of memory available to be used by internal sort operations and hash tables before writing data to disk. And yet, leaving work_mem unconfigured can bring on a host of issues. What perhaps is more troubling, though, is when you receive an out of memory error on your database and you jump in to tune work_mem, only for it to behave in an un-intuitive manner.

Setting your default memory

The work_mem value defaults to 4MB in Postgres, and that’s likely a bit low. This means that each Postgres activity (each join, some sorts, etc.) can consume 4MB before it starts spilling to disk. When Postgres starts writing temp files to disk, obviously things will be much slower than in memory. You can find out if you’re spilling to disk by searching for temporary file within your PostgreSQL logs when you have log_temp_files enabled. If you see temporary file, it can be worth increasing your work_mem.
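
For reference, a minimal way to turn on that logging and to try a larger work_mem for a single session might look like this (a sketch; the values are only illustrative):

-- log every temporary file that gets written (the setting is a size threshold in kB; 0 = log all)
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();

-- raise work_mem only for the current session while testing a query
SET work_mem = '64MB';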

On Citus Cloud (our fully-managed database as a service that scales out Postgres horizontally), we automatically tune work_mem based on the overall memory available to the box. Our tuning is based on the years of experience of what we’ve seen work for a variety of production Postgres workloads, coupled with statistics to compute variations based on cluster sizing.

It’s tough to get the value of work_mem exactly right, but if you’re looking for a one-size-fits-all answer, a sane default is often something like 64 MB.

It’s not just about the memory for queries

Let’s use an example to explore how to think about optimizing your work_mem setting.

Say you have a certain amount of memory, say 10 GB. If you have 100 running Postgres queries, and each of those queries has a 10 MB connection overhead, then 100*10 MB (1 GB) of memory is taken up by the 100 connections—whi

[...]
Category: postgresql

LaravelConf Taiwan 2018

php.net - 2018-06-11(Mon) 20:00:00
Category: php

Vincenzo Romano: Temporary functions (kind of) without schema qualifiers

planet postgresql - 2018-06-11(Mon) 19:40:04

In a previous article of mine I’ve been bitten by the “temporary function issue” (which isn’t an issue at all, of course).

I needed a way to use the now() function in different ways to do a sort of “time travel” over a history table.

I’ve found a few easy ways to accomplish the same task and have now distilled one that seems to me to be the “best one“. I will make use of a temporary table.

The trick is that a new function, called mynow(), will access a table without an explicit schema qualifier, relying instead on a controlled search_path. This approach opens the door to controlled table masking thanks to the temporary schema each session has. Let’s see this function first.

create or replace function mynow( out ts timestamp )
language plpgsql as $l0$
begin
  select * into ts from mytime;
  if not found then
    ts := now();
  end if;
end;
$l0$;

If you put a timestamp in the table mytime, then that timestamp will be used as the current time. If there's nothing in there, then the normal function output will be used.

First of all, you need to create an always-empty non-temporary table like this in the public schema:

create table mytime ( ts timestamp );

So the function will start working right away with the default timeline. If I inserted a timestamp straight in there, I’d set the function’s behavior for all sessions at once. But this isn’t normally the objective.

As soon as a user needs a different reference time for its time travelling, she needs to do the following:

set search_path = pg_temp,"$user", public;                  -- 1st
create table mytime ( like public.mytime including all );   -- 2nd
insert into mytime values ( '2000-01-01' );                 -- 3rd

That’s it. More or less.

The first statement alters the search_path setting so the pg_temp schema becomes the first one to be searched in and, thus, the “default” schema.

The second one creates a temporary table (re-read the previous sentence, please!) “just like the public one“. Please note the schema qualifying used with the like predicate.

The third one will insert into that temporary table (you

[...]
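
To illustrate the effect (my own example, not part of the excerpt): in the session that created and populated the temporary mytime, mynow() now returns the inserted timestamp, while every other session still sees only the empty public.mytime and gets the real now().

-- session with the temporary mytime populated
select mynow();   -- 2000-01-01 00:00:00

-- any other session, which sees only the empty public.mytime
select mynow();   -- falls back to now()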
Category: postgresql

Laurenz Albe: Adding an index can decrease SELECT performance

planet postgresql - 2018-06-11(Mon) 17:30:07
© Laurenz Albe 2018

 

We all know that you have to pay a price for a new index you create — data modifying operations will become slower, and indexes use disk space. That’s why you try to have no more indexes than you actually need.

But most people think that SELECT performance will never suffer from a new index. The worst that can happen is that the new index is not used.

However, this is not always true, as I have seen more than once in the field. I’ll show you such a case and tell you what you can do about it.

An example

We will experiment with this table:

CREATE TABLE skewed (
    sort        integer NOT NULL,
    category    integer NOT NULL,
    interesting boolean NOT NULL
);
INSERT INTO skewed
SELECT i, i%1000, i>50000
FROM generate_series(1, 1000000) i;
CREATE INDEX skewed_category_idx ON skewed (category);
VACUUM (ANALYZE) skewed;

We want to find the first twenty interesting rows in category 42:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM skewed
WHERE interesting AND category = 42
ORDER BY sort
LIMIT 20;

This performs fine:

                              QUERY PLAN
--------------------------------------------------------------------
 Limit  (cost=2528.75..2528.80 rows=20 width=9) (actual time=4.548..4.558 rows=20 loops=1)
   Buffers: shared hit=1000 read=6
   ->  Sort  (cost=2528.75..2531.05 rows=919 width=9) (actual time=4.545..4.549 rows=20 loops=1)
         Sort Key: sort
         Sort Method: top-N heapsort  Memory: 25kB
         Buffers: shared hit=1000 read=6
         ->  Bitmap Heap Scan on skewed  (cost=19.91..2504.30 rows=919 width=9) (actual time=0.685..4.108 rows=950 loops=1)
               Recheck Cond: (category = 42)
               Filter: interesting
               Rows Removed by Filter: 50
               Heap Blocks: exact=1000
               Buffers: shared hit=1000 read=6
               ->  Bitmap Index Scan on skewed_category_idx  (cost=0.00..19.68 rows=967 width=0) [...]
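
The excerpt breaks off here. The effect the title refers to can be reproduced by adding an index that matches the ORDER BY clause; the planner may then prefer walking that index and filtering rows as it goes, which for this skewed data means reading far more rows than the bitmap scan above (a sketch of the experiment, not the article's exact steps):

CREATE INDEX skewed_sort_idx ON skewed (sort);

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM skewed
WHERE interesting AND category = 42
ORDER BY sort
LIMIT 20;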
Category: postgresql

Regina Obe: PostgresVision 2018 Slides and Impressions

planet postgresql - 2018-06-09(Sat) 17:56:00

Leo and I attended PostgresVision 2018 which ended a couple of days ago.

We gave a talk on spatial extensions with main focus being PostGIS. Here are links to our slides PostgresVision2018_SpatialExtensions HTML version PDF.

Unfortunately there are no slides of the pgRouting part, except the one that says PGRouting Live Demos because Leo will only do live demos. He has no fear of his demos not working.

Side note: if you are on Windows and use the PostGIS bundle, all the extensions listed in the PostGIS box of the spatial extensions diagram, as well as pointcloud, pgRouting, and ogr_fdw, are included in the bundle.

Continue reading "PostgresVision 2018 Slides and Impressions"
Category: postgresql

Jignesh Shah: Setting up PostgreSQL 11 Beta 1 in Amazon RDS Database Preview Environment

planet postgresql - 2018-06-08(Fri) 16:00:00

PostgreSQL 11 Beta 1 has been out for more than a couple of weeks. The best way to experience it is to try out the new version and test drive it yourself.

Rather than building it directly from source, I nowadays take the easy way out and deploy it in the cloud. Fortunately, it is already available in Amazon RDS Database Preview Environment.



For this post I am going to use the AWS CLI, since the command line is easy to understand and to copy/paste, and it is also easier to script for repetitive testing. To use the Database Preview Environment, the endpoint has to be modified to use https://rds-preview.us-east-2.amazonaws.com/ instead of the default for the region.

Because there might be multiple PostgreSQL 11 beta releases, it is important to understand which build version is being deployed. I can always leave it to the default, which would typically be the latest preferred version, but a lot of the time I want to be sure which version I am deploying. The command to get all the versions of PostgreSQL 11 is describe-db-engine-versions.

$ aws rds describe-db-engine-versions --engine postgres --db-parameter-group-family postgres11 --endpoint-url https://rds-preview.us-east-2.amazonaws.com/
{
    "DBEngineVersions": [
        {
            "Engine": "postgres",
            "DBParameterGroupFamily": "postgres11",
            "SupportsLogExportsToCloudwatchLogs": false,
            "SupportsReadReplica": true,
            "DBEngineDescription": "PostgreSQL",
            "EngineVersion": "11.20180419",
            "DBEngineVersionDescription": "PostgreSQL 11.20180419 (68c23cba)",
            "ValidUpgradeTarget": [
                {
                    "Engine": "postgres",
                    "IsMajorVersionUpgrade": false,
                    "AutoUpgrade": false,
                    "EngineVersion": "11.20180524"
                }
            ]
        },
        {
            "Engine": "postgres",
            "DBParameterGroupFamily": "postgres11",
            "SupportsLogExportsToCloudwatch[...]
Category: postgresql

Craig Kerstiens: Citus what is it good for? OLTP? OLAP? HTAP?

planet postgresql - 2018-06-08(Fri) 05:09:00

Earlier this week as I was waiting to begin a talk at a conference, I chatted with someone in the audience that had a few questions. They led off with this question: is Citus a good fit for X? The heart of what they were looking to figure out: is the Citus distributed database a better fit for analytical (data warehousing) workloads, or for more transactional workloads, to power applications? We hear this question quite a lot, so I thought I’d elaborate more on the use cases that make sense for Citus from a technical perspective.

Before I dig in, if you’re not familiar with Citus: we transform Postgres into a distributed database that allows you to scale your Postgres database horizontally. Under the covers, your data is sharded across multiple nodes; meanwhile, everything still appears as a single node to your application. Because the database still looks like a single-node database, your application doesn’t need to know about the sharding. We do this as a pure extension to Postgres, which means you get all the power and flexibility that’s included within Postgres, such as JSONB, PostGIS, rich indexing, and more.

OLAP - Data warehousing as it was 5 years ago

Once upon a time (when Citus Data was first started ~7 years ago), we focused on building a fast database to power analytics. Analytical workloads often had different needs and requirements around them. Transactions weren’t necessarily needed. Data was often loaded in bulk as opposed to single row inserts and updates. Analytics workloads have evolved and moved from pure-OLAP to a mix of OLTP and OLAP, and so Citus has too.

Data warehousing for slower storage and massive exploration

Going down the data warehousing rabbit hole, you’ll find use cases for storing hundreds of terabytes of data. This can range from historical audit logs, analytics data, to event systems. Much of the data is seldom accessed, but one day you may need it, so you want to retain it. This data is typically used internally by a data analyst that has some mix of defined reports (maybe they run them month

[...]
Category: postgresql
