Joshua Drake: Breaking down the walls of exclusivity

planet postgresql - 2018-11-27(火) 06:03:00
When considering a conference about Postgres, you should pick one that is focused on building the community. PostgresConf is all about building the community, and we even captured it on video!

PostgresConf embraces a holistic view of what community is. We want everyone to feel welcome and encouraged to give back. However, writing code is not the only opportunity to give back to the Postgres community. We all have different talents, and some of those don't extend to writing patches or DocBook XML.

Giving back

When considering who is part of the community and who is contributing to it, we want to introduce you to a couple of fantastic organizers of our conference: Debra Cerda and Viral Shah. Some in the community will know Debra. She has been in the community for years and is one of the primary organizers of Austin Postgres.
Debra Cerda

Debra is our Attendee and Speaker Liaison as well as our Volunteer Coordinator. She is also a key asset in the development and execution of our Career Fair.
Viral Shah

Viral is our on-site logistics lead and is part of the volunteer acquisition team. It is Viral who goes over everything with the hotel with a fine-tooth comb to make sure everything is on target, on budget, and executed with extreme efficiency.
Without her amazing attention to detail and dedication to service we wouldn't be able to deliver the level of conference our community has come to expect from PostgresConf.

Building relationships

There are a lot of reasons to go to a conference. You may be looking for education on a topic, a sales lead, or possibly just to experience a central location of top talent, products, and services. All of these reasons are awesome, but we find that the most important reason is to build relationships. The following are two exceptional examples of community projects.
Our first example is ZomboDB. No, they are not a sponsor (yet!), but they have a fantastic open source extension that integrates Elasticsearch into Postgres.
Category: postgresql

Bruce Momjian: First Wins, Last Wins, Huh?

planet postgresql - 2018-11-26(月) 22:30:01

Someone recently pointed out an odd behavior in Postgres's configuration files. Specifically, they mentioned that the last setting for a variable in postgresql.conf is the one that is honored, while in pg_hba.conf it is the first matching connection line that is honored. They are both configuration files in the cluster's data directory, but they behave differently. It is clear why they behave differently: the order of lines in pg_hba.conf is significant, so more specific lines can be placed before more general lines (see the use of reject lines). Still, it can be confusing, so I wanted to point it out.
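A minimal illustration of the two behaviors (the settings and addresses below are made up): work_mem ends up at 64MB because the last duplicate wins, while a connection from 10.0.0.5 is rejected because the reject line matches first.

```conf
# postgresql.conf -- duplicate settings: the LAST one wins
work_mem = '4MB'
work_mem = '64MB'            # this is the value that takes effect

# pg_hba.conf -- the FIRST matching line wins
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   10.0.0.5/32    reject
host    all       all   10.0.0.0/8     md5
```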

Category: postgresql

Adrien Nayrat: PostgreSQL and heap-only-tuples updates - part 3

planet postgresql - 2018-11-26(月) 16:00:00
Here is a series of articles that will focus on a new feature in version 11. During the development of this version, a feature caught my attention. It can be found in the release notes: Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged (Konstantin Knizhnik). I admit that this is not very explicit, and this feature requires some knowledge of how Postgres works, which I will try to explain through several articles:
Category: postgresql

Stefan Fercot: PostgreSQL 12 preview - recovery.conf disappears

planet postgresql - 2018-11-26(月) 09:00:00

PostgreSQL needs some infrastructure changes to allow more dynamic reconfiguration around recovery, e.g. to change primary_conninfo at runtime.

The first step, mostly to avoid having to duplicate the GUC logic, results in the following patch.

On 25th of November 2018, Peter Eisentraut committed Integrate recovery.conf into postgresql.conf:

recovery.conf settings are now set in postgresql.conf (or other GUC sources). Currently, all the affected settings are PGC_POSTMASTER; this could be refined in the future case by case. Recovery is now initiated by a file recovery.signal. Standby mode is initiated by a file standby.signal. The standby_mode setting is gone. If a recovery.conf file is found, an error is issued. The trigger_file setting has been renamed to promote_trigger_file as part of the move. The documentation chapter "Recovery Configuration" has been integrated into "Server Configuration". pg_basebackup -R now appends settings to and creates a standby.signal file. Author: Fujii Masao <> Author: Simon Riggs <> Author: Abhijit Menon-Sen <> Author: Sergei Kornilov <> Discussion:

Let’s compare a simple example between PostgreSQL 11 and 12.

Initialize replication on v11

With a default postgresql11-server installation on CentOS 7, let’s start archiving on our primary server:

$ mkdir /var/lib/pgsql/11/archives
$ echo "archive_mode = 'on'" >> /var/lib/pgsql/11/data/postgresql.conf
$ echo "archive_command = 'cp %p /var/lib/pgsql/11/archives/%f'" \
    >> /var/lib/pgsql/11/data/postgresql.conf
# systemctl start postgresql-11.service

Check that the archiver process is running:

$ psql -c "SELECT pg_switch_wal();"
 pg_switch_wal
---------------
 0/16AC7D0
(1 row)

$ ps -ef |grep postgres|grep archiver
... postgres: archiver last was 000000010000000000000001

$ ls -l /var/lib/pgsql/11/archives/
total 16384
-rw-------. 1 postgres po[...]
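For comparison, on PostgreSQL 12 the former recovery.conf settings go directly into postgresql.conf, and standby mode is requested with an empty standby.signal file. A rough sketch (paths and connection values are illustrative):

```shell
$ cat >> /var/lib/pgsql/12/data/postgresql.conf <<EOF
primary_conninfo = 'host=primary user=replication'
restore_command = 'cp /var/lib/pgsql/12/archives/%f %p'
EOF
$ touch /var/lib/pgsql/12/data/standby.signal
# systemctl start postgresql-12.service
```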
Category: postgresql

Pavel Stehule: plpgsql_check can be used as profiler

planet postgresql - 2018-11-26(月) 02:05:00
Today I integrated profiling functionality into plpgsql_check. Once profiling is enabled, no further configuration is needed.
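For reference, a function matching the profile below could look like this (reconstructed from the source column of the output; not part of the original post):

```sql
CREATE OR REPLACE FUNCTION fx(int) RETURNS int AS $$
declare result int = 0;
begin
  for i in 1..$1 loop
    select result + i into result;
  end loop;
  return result;
end;
$$ LANGUAGE plpgsql;
```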
postgres=# select lineno, avg_time, source from plpgsql_profiler_function_tb('fx(int)');
│ lineno │ avg_time │ source                         │
│      1 │          │                                │
│      2 │          │ declare result int = 0;        │
│      3 │    0.075 │ begin                          │
│      4 │    0.202 │ for i in 1..$1 loop            │
│      5 │    0.005 │ select result + i into result; │
│      6 │          │ end loop;                      │
│      7 │        0 │ return result;                 │
│      8 │          │ end;                           │
(9 rows)
In this case, the function profile is stored in session memory, and when the session is closed, the profile is lost.

It is possible to load plpgsql_check via the shared_preload_libraries config option. In this case, the profile is stored in shared memory and is "pseudo" persistent: it is cleared when a profile reset is requested or when PostgreSQL is restarted.

There is another good PL/pgSQL profiler. I designed the integrated plpgsql_check profiler because I wanted to collect data beyond run time, and I wanted to use this profiler for calculating test coverage. Moreover, this profiler can be used without any special PostgreSQL configuration, which can be useful in cases where restarting the server is not possible.
Category: postgresql

Regina Obe: PostGIS 2.3.8, 2.4.6

planet postgresql - 2018-11-24(土) 09:00:00

The PostGIS development team is pleased to provide bug fix releases 2.3.8 and 2.4.6 for the 2.3 and 2.4 stable branches.

Category: postgresql

Brian Fehrle: Cloud Backup Options for PostgreSQL

planet postgresql - 2018-11-23(金) 19:58:00

As with any other component of a business, databases are extremely important to its inner workings.

Whether it’s the core of the business or just another component, databases should be backed up regularly, and stored in safe locations for possible future recovery.

Should I Backup To The Cloud?

A general rule is to have at least 3 copies of anything of value and to store those backups in different locations. Backups on the same drive are useless if the drive itself dies, same host backups are also at risk if the host goes down, and same building backups are also in danger if the building burns down (drastic and unlikely, but possible).


Cloud backups offer an easy solution for the need of off-site backups without having to spin up new hardware in a secondary location. There are many different cloud services that offer backup storage, and choosing the right one will depend on backup needs, size requirements, cost, and security.

The benefits of having cloud backups are many, but mainly revolve around having these backups stored in a different location than the main database, allowing us to have a safety net in the case of a disaster recovery. While we won’t go into detail about how to set up each of these backup options, we will explore some different ideas and configurations for backups.

There are some downsides to storing backups in the cloud, starting with the transfer. If the backups for the database are extremely large, it could take a long time to do the actual upload, and could even have increased costs if the cloud service charges for bandwidth transfer. Compression is highly suggested to keep time and costs low.

Security could be another concern with hosting backups in the cloud, while some companies have strict guidelines for where their data

Category: postgresql

Regina Obe: PostGIS 2.2.8 EOL

planet postgresql - 2018-11-22(木) 09:00:00

The PostGIS development team is pleased to provide bug fix release 2.2.8 for the 2.2 stable branch.

This is the End-of-Life and final release for the PostGIS 2.2 series.

We encourage you to upgrade to a newer minor PostGIS version. Refer to our Version compatibility and EOL Policy for details on versions you can upgrade to.

This release supports PostgreSQL 9.1-9.6.


Category: postgresql

Richard Yen: PgBouncer Pro Tip: Use auth_user

planet postgresql - 2018-11-22(木) 08:00:00

Anyone running a database in a production environment with over a hundred users should seriously consider employing a connection pooler to keep resource usage under control. PgBouncer is one such tool, and it’s great because it’s lightweight and yet has a handful of nifty features for DBAs that have very specific needs.

One of these nifty features that I want to share is the auth_user and auth_query combo that serves as an alternative to the default authentication process that uses userlist.txt. “What’s wrong with userlist.txt?” you may ask. For starters, it makes user/role administration a little tricky. Every time you add a new user to PG, you need to add it to userlist.txt in PgBouncer. And every time you change a password, you have to change it in userlist.txt as well. Multiply that by the 30+ servers you’re managing, and you’ve got a sysadmin’s nightmare on your hands. With auth_user and auth_query, you can centralize the password management and take one item off your checklist.

What’s auth_user?

In the [databases] section of your pgbouncer.ini, you would typically specify a user= and password= with which PgBouncer will connect to the Postgres database. If left blank, the user/password are taken from the connection string (i.e., psql -U <username> <database>). When this happens, PgBouncer will look up the provided username/password in userlist.txt to verify that the credentials are correct, and then the username/password are sent to Postgres for an actual database login.

When auth_user is provided, PgBouncer will still read in credentials from the connection string, but instead of comparing against userlist.txt, it logs in to Postgres with the specified auth_user (preferably a non-superuser) and runs auth_query to pull the corresponding md5 password hash for the desired user. The validation is performed at this point, and if correct, the specified user is allowed to log in.
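As a sketch, the relevant pgbouncer.ini pieces might look like this (the database, host, and role names are illustrative; the query shown is PgBouncer's documented default auth_query):

```ini
[databases]
; no user=/password= here; clients authenticate with their own credentials
mydb = host=127.0.0.1 port=5432 dbname=mydb auth_user=pgbouncer_auth

[pgbouncer]
auth_type = md5
; PgBouncer logs in as auth_user and runs this to fetch the stored hash
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename = $1
```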

An Example

Assuming Postgres is installed and running, you can get the auth_user and auth_query

Category: postgresql

Jobin Augustine: Installing and Configuring JIT in PostgreSQL 11

planet postgresql - 2018-11-20(火) 10:49:56

Just-in-time (JIT in PostgreSQL) compilation of SQL statements is one of the highlighted features in PostgreSQL 11. There is great excitement in the community because of the many claims of up to a 30% jump in performance. Not all queries and workloads get the benefit of JIT compilation. So you may want to test your workload against this new feature.

However, it is important to have a general understanding of what it does and where we can expect performance gains. Installing PostgreSQL 11 with the new JIT compilation feature requires a few extra steps and packages. Taking the time and effort to figure out how to do this shouldn’t be a reason to shy away from trying this cutting-edge feature and testing a workload against it. This blog post is for those who want to try it.

What JIT Is and What It Does in PostgreSQL

Normal SQL execution in any DBMS software is similar to what an interpreted language does with source code: no machine code gets generated from your SQL statement. But we all know how dramatic the performance gains from JIT compilation and execution of the generated machine code can be. We saw the magic the Google V8 engine did for the JavaScript language. The quest to do something similar with SQL statements has been around for quite some time. But it is a challenging task.

It is challenging because we don’t have the source code (SQL statement) ready within the PostgreSQL server. The source code that needs to undergo JIT compilation comes from client connections, and there could be expressions/functions with different numbers of arguments, dealing with tables with different numbers and types of columns.

Generally, a computer program won’t get modified at this level while it is running, so branch predictions are possible. The unpredictable and dynamic nature of SQL statements coming from client connections and hitting the database from time to time leaves no scope for prediction or compilation in advance. That means the JIT compiler should kick in every time the d
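A quick way to check whether JIT actually kicks in for a given query (a sketch; assumes a PostgreSQL 11 server built with LLVM support, where jit is off by default, and uses an artificially low cost threshold for demonstration):

```sql
SET jit = on;                 -- off by default in PostgreSQL 11
SET jit_above_cost = 10;      -- force JIT even for cheap queries (demo only)
-- when JIT was used, the plan output ends with a "JIT:" section
EXPLAIN (ANALYZE) SELECT sum(i) FROM generate_series(1, 1000000) AS s(i);
```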

Category: postgresql

Magnus Hagander: PGConf.EU 2018 - the biggest one yet!

planet postgresql - 2018-11-20(火) 05:01:46

It's now almost a month since PGConf.EU 2018 in Lisbon. PGConf.EU 2018 was the biggest PGConf.EU ever, and as far as I know the biggest PostgreSQL community conference in the world! So it's time to share some of the statistics and feedback.

I'll start with some attendee statistics:

  • 451 registered attendees
  • 2 no-shows
  • 449 attendees actually present

Of these 451 registrations, 47 were sponsor tickets, some of which were used by sponsors themselves and some given away to their customers and partners. Another 4 sponsor tickets went unused.

Another 52 were speakers.

This year we had more cancellations than usual, but thanks to the conference waitlist we managed to re-fill all those spaces before the event started.

Category: postgresql

Hans-Juergen Schoenig: PostgreSQL: Implicit vs. explicit joins

planet postgresql - 2018-11-19(月) 18:00:36

If you happen to be an SQL developer, you will know that joins are really at the core of the language. Joins come in various flavors: Inner joins, left joins, full joins, natural joins, self joins, semi-joins, lateral joins, and so on. However, one of the most important distinctions is the difference between implicit and explicit joins. Over the years, flame wars have been fought over this issue. Still, not many people know what is really going on. Therefore my post might help to shed some light on the situation.


Using implicit joins

Before we dig into practical examples, it is necessary to create some tables that we can later use to perform our joins:

test=# CREATE TABLE a (id int, aid int);
CREATE TABLE
test=# CREATE TABLE b (id int, bid int);
CREATE TABLE

In the next step some rows are added to those tables:

test=# INSERT INTO a VALUES (1, 1), (2, 2), (3, 3);
INSERT 0 3
test=# INSERT INTO b VALUES (2, 2), (3, 3), (4, 4);
INSERT 0 3

An implicit join is the simplest way to join data. The following example shows an implicit join:

test=# SELECT * FROM a, b WHERE aid = bid;
 id | aid | id | bid
----+-----+----+-----
  2 |   2 |  2 |   2
  3 |   3 |  3 |   3
(2 rows)

In this case, all tables are listed in the FROM clause and are later connected in the WHERE clause. In my experience, an implicit join is the most common way to connect two tables. However, my observation might be heavily biased, because an implicit join is the way I tend to write things in my daily work.

Using explicit joins

The following example shows an explicit join. Some people prefer the explicit join syntax over implicit joins because of readability or for whatever other reason:

test=# SELECT * FROM a JOIN b ON (aid = bid);
 id | aid | id | bid
----+-----+----+-----
  2 |   2 |  2 |   2
  3 |   3 |  3 |   3
(2 rows)

In this case tables are connected directly using an ON-clause. The ON-clause simply contains the conditions we want to use to join those tables together.
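For plain inner joins the planner treats both spellings identically, which is easy to verify yourself (a sketch; plan details depend on data and settings):

```sql
EXPLAIN SELECT * FROM a, b WHERE aid = bid;
EXPLAIN SELECT * FROM a JOIN b ON (aid = bid);
-- for a plain inner join, both statements produce the same plan
```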

Explicit joins

Category: postgresql

Adrien Nayrat: PostgreSQL and heap-only-tuples updates - part 2

planet postgresql - 2018-11-19(月) 16:00:00
Here is a series of articles that will focus on a new feature in version 11. During the development of this version, a feature caught my attention. It can be found in the release notes: Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged (Konstantin Knizhnik). I admit that this is not very explicit, and this feature requires some knowledge of how Postgres works, which I will try to explain through several articles:
Category: postgresql

Regina Obe: PostGIS 2.5.1

planet postgresql - 2018-11-18(日) 09:00:00

The PostGIS development team is pleased to provide bug fix release 2.5.1 for the 2.5 stable branch.

Although this release will work for PostgreSQL 9.4 thru PostgreSQL 11, to take full advantage of what PostGIS 2.5 offers, you should be running PostgreSQL 11 and GEOS 3.7.0.

Best served with PostgreSQL 11.1 and pgRouting 2.6.1.

WARNING: If compiling with PostgreSQL+JIT, LLVM >= 6 is required.

Supported PostgreSQL versions for this release are: PostgreSQL 9.4 - PostgreSQL 11. GEOS >= 3.5 is required.


Category: postgresql

Pavel Stehule: new update pspg

planet postgresql - 2018-11-17(土) 20:28:00
I released a new update of pspg. It is a bugfix release plus a new HiColor theme.
Category: postgresql

Andreas Scherbaum: Google Code-In 2018 - Halftime

planet postgresql - 2018-11-17(土) 06:05:00

Andreas 'ads' Scherbaum

The PostgreSQL Project participates in Google Code-In (GCI) 2018. This is a program which allows pre-university students to pick up tasks defined by the partnering open source projects, learn about these projects, and also win prizes (certificates, t-shirts, hoodies, and even a trip to Google HQ).


Every project creates a number of different tasks: some technical, some design-based, some about updating documentation or validating bugs; whatever is useful in order to get to know the project better. Students can select tasks and submit their work. Mentors from the project then evaluate the work, and either approve it or send it back to the student because more work is needed.


Now that we are halfway into this year's competition, it's time to run the numbers.


Category: postgresql

Avinash Kumar: Newly-Released PostgreSQL Minor Versions: Time to Update!

planet postgresql - 2018-11-17(土) 02:25:42

In this blog post we’ll look at what the newly-released PostgreSQL minor versions contain. You probably want to update your current versions and use these updates.

You might already have seen that updates for the supported PostgreSQL versions were released on November 8, 2018. PostgreSQL releases minor versions with several bug fixes and feature enhancements each quarter. An important point to note is that PostgreSQL 9.3 got its final minor version release (9.3.25) this quarter, and is no longer supported.

We always recommend that you keep your PostgreSQL databases updated to the latest minor versions. Applying a minor release might need a restart after installing the new binaries. The following is the sequence of steps you should follow to upgrade to the latest minor versions:

  1. Shutdown the PostgreSQL database server
  2. Install the updated binaries
  3. Restart your PostgreSQL database server
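On a CentOS/RHEL system with the PGDG packages, those three steps might look like the following (package and service names are illustrative and vary by platform and major version):

```shell
# 1. Shut down the PostgreSQL database server
sudo systemctl stop postgresql-11.service
# 2. Install the updated binaries
sudo yum update postgresql11-server
# 3. Restart the PostgreSQL database server
sudo systemctl start postgresql-11.service
```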

Most of the time, in a master-slave (replication) setup, you can choose to update the minor versions in a rolling fashion: perform the update on one server after another, but not all at once. Rolling updates avoid taking downtime for both reads and writes simultaneously. However, we recommend that you shut down, update and restart them all at once while you are performing the updates.

One of the most important fixes is a security fix: CVE-2018-16850. The bug allowed an attacker with CREATE privileges on some non-temporary schema or TRIGGER privileges on some table to create a malicious trigger that, when dumped and restored using pg_dump/pg_restore, would result in additional SQL statements being executed. This applies to PostgreSQL 10 and 11 versions.

Before proceeding further, let’s look at the list of minor versions released this quarter.

  • PostgreSQL 11.1
  • PostgreSQL 10.6
  • PostgreSQL 9.6.11
  • PostgreSQL 9.5.15
  • PostgreSQL 9.4.20
  • PostgreSQL 9.3.25

Now, let us look into the benefits you should see by updating your Postgres versions with the latest minor versions.

PostgreSQL 11.1

PostgreSQL 11.0 was released on October 18, 2018. You might w

Category: postgresql

Denish Patel: Postgres 11 partitioning

planet postgresql - 2018-11-17(土) 01:14:21

Postgres has supported a table partitioning implementation based on inheritance and triggers for more than a decade now. Declarative partitioning support was added in the Postgres 10 release in October 2017. Since Postgres 10, Postgres supports built-in declarative partitioning, so it became easier to create partitions, but you still needed to manage triggers to route updated records on the parent table. Additionally, you couldn't add primary keys and foreign keys on partitioned tables. The recent release of Postgres 11 solves all of these problems.

Postgres 11 adds a lot more partitioning features to make managing partitioned tables easier than ever! Below is a comparison of partitioning features across Postgres releases:

Feature                                                      | Postgres 11 | Postgres 10 | 9.6
-------------------------------------------------------------+-------------+-------------+----
Declarative table partitioning                               | Yes         | Yes         | No
Default partition (stores data that does not match the
partition key of any other partition)                        | Yes         | No          | No
Partitioning by a HASH key                                   | Yes         | No          | No
PRIMARY KEY, FOREIGN KEY, indexes, and triggers on
partitioned tables                                           | Yes         | No          | No
UPDATE on a partition key (when a partition key is updated
on a row, the row is moved to the appropriate partition)     | Yes         | No          | No

Postgres 11 supports RANGE, LIST and HASH partition types. You can also create sub-partitions on child tables!

Let’s take an example to partition the table using RANGE and LIST partition types.

RANGE Partitioning:

-- create parent table to store SMS campaign subscribers
app=# CREATE TABLE sms_campaign_subscribers (id bigint not null, sms_campaign_id bigint not null) PARTITION BY RANGE (sms_campaign_id);
CREATE TABLE
-- create child table to store campaigns with sms_campaign_id >= 111 and < 112
app=# CREATE TABLE sms_campaign_subscriber_111 PARTITION OF sms_campaign_subscribers FOR VALUES FROM (111) TO (112);
CREATE TABLE
-- Describe parent table
app=# \d+ sms_campaign_subscribers
Table "public.sms_campaign_subscribers"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
-----------------+[...]
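A LIST counterpart could look like this (a sketch; the table and key names are illustrative, not from the original post):

```sql
-- LIST partitioning: route rows by discrete values of a key
CREATE TABLE events (id bigint NOT NULL, region text NOT NULL)
    PARTITION BY LIST (region);

CREATE TABLE events_emea PARTITION OF events
    FOR VALUES IN ('eu', 'me', 'africa');

-- rows with any other region land here (default partitions are new in 11)
CREATE TABLE events_other PARTITION OF events DEFAULT;
```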
Category: postgresql

Douglas Hunley: pgBackRest 2.07 and macOS Mojave

planet postgresql - 2018-11-16(金) 20:12:31

pgBackRest 2.07 was announced today. As usual, I immediately downloaded it and tried to get it up and running on my MacBook (currently running Mojave). It wasn’t as straightforward as one might hope, and the online instructions assume a Linux system, so I figured I’d write this up for anyone else attempting the same.

Since this is OSX, we have to do some work to make things right before we even start with the pgBackRest code. First up, get a real OpenSSL install. We’ll use Homebrew for this:

> brew install openssl
[output snipped]
> openssl version -a
LibreSSL 2.6.4
built on: date not available
platform: information not available
options: bn(64,64) rc4(ptr,int) des(idx,cisc,16,int) blowfish(idx)
compiler: information not available
OPENSSLDIR: "/private/etc/ssl"
> /usr/local/opt/openssl/bin/openssl version -a
OpenSSL 1.0.2p 14 Aug 2018
built on: reproducible build, date unspecified
platform: darwin64-x86_64-cc
options: bn(64,64) rc4(ptr,int) des(idx,cisc,16,int) idea(int) blowfish(idx)
compiler: clang -I. -I.. -I../include -fPIC -fno-common -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -arch x86_64 -O3 -DL_ENDIAN -Wall -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
OPENSSLDIR: "/usr/local/etc/openssl"

As you can see, the default SSL from OSX is in /usr/bin while the newly installed OpenSSL is in /usr/local/opt/openssl. In my testing, this is enough to proceed with pgBackRest but I prefer to have the openssl binary match the libs and I’m a glutton for punishment, so I replace the OSX binary with the Homebrew one:

> sudo mv /usr/bin/openssl /usr/bin/openssl.old
> sudo ln -s /usr/local/opt/openssl/bin/openssl /usr/bin
> ls -ld /usr/bin/openssl*
lrwxr-xr-x 1 root wheel 34 Nov 16 11:39 /usr/bin/openssl -> /usr/local/opt/openssl/bin/openssl*
-rwxr-xr-x 1 root wheel 1.2M Sep 21 00:16 /usr/bin/openssl.old*

OK, so now we h

Category: postgresql

Achilleas Mantzios: Five Cool Things I Learned at the PostgreSQL Conference Europe 2018

planet postgresql - 2018-11-16(金) 19:00:25

I spent a week in the magnificent city of Lisbon attending the annual European PostgreSQL Conference. This marked the 10th anniversary since the first European PostgreSQL conference and my sixth time attending.

First Impressions

The city was great, the atmosphere was great and it seemed that it would be a very productive and informative week full of interesting conversations with intelligent and friendly people. So basically the very first cool thing I learned in Lisbon is how great Lisbon and Portugal are, but I guess you came here for the rest of the story!

Shared Buffers

We attended the training session “PostgreSQL DBA toolbelt for day-to-day ops” by Kaarel Moppel (Cybertec). One thing I noted was the setting of shared_buffers. Since shared_buffers competes with (or complements) the system's cache, it shouldn't be set to any value between 25% and 75% of the total RAM available. So while, in general, the recommended setting for typical workloads is 25% of RAM, it could be set to >= 75% for special cases, but not in between.

Other things we learned in this session:

  • unfortunately, easy online (or offline) activation of data checksums did not make it into 11 (initdb/logical replication remains the only option)
  • beware of memory overcommit: disable it by setting vm.overcommit_memory to 2, and set vm.overcommit_ratio to about 80
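On Linux, that advice translates to something like the following in /etc/sysctl.conf (values as given in the talk; verify against your own workload before applying with sysctl -p):

```conf
vm.overcommit_memory = 2    # strict accounting: no memory overcommit
vm.overcommit_ratio = 80
```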
Advanced Logical Replication

In the talk by Petr Jelinek (2ndQuadrant), one of the original authors of logical replication, we learned about more advanced uses of this exciting new technology:

  • Centralized Data Collection: we may have multiple publishers and a central system with a subscriber to each of those publishers, making data from various sources available in one place (typical use: OLAP)
  • Shared global data, or in other words a central system for maintaining global data and parameters (such as currencies, stocks, market/commodity values, weather, etc.) which publishes to one or more subscribers. These data are then maintained in only one system but available in all subscribers.
  • Log
Category: postgresql