Planet PostgreSQL

Subscribe to the Planet PostgreSQL feed
Updated: 2 hours 39 minutes ago

Sebastian Insausti: Top GUI Tools for PostgreSQL

2019-01-26 (Sat) 04:49:42

Managing databases from the command line does come with a learning curve to get the most out of it.

The command line can sometimes be arduous and the display may not be optimal for what you are doing.

Browsing through databases and tables, checking indexes or user privileges, monitoring, managing, and even coding can get really messy when trying to handle it through the console.

It’s not that you don't need to know the command-line commands (that's definitely a must), but there are some tools that can help you speed up many of the daily DBA tasks.

Let's look at what these tools are about and review some of them.

What is a GUI Tool?

A GUI, or Graphical User Interface, is software that simplifies users' tasks through graphical icons and visual indicators; actions are performed by interacting with graphical elements.

Why Should I Use a GUI Tool?

Using a GUI is not a must, but it can be useful. One of the main advantages of GUIs is that they are generally easier to learn than a long list of commands, and a single action in the GUI may translate into several commands behind the scenes.

Another advantage could be that the GUI is more friendly than the command line, and in most cases, you don't need any programming or sysadmin knowledge to use it.

But you should be careful when performing a task from the GUI: clicking the wrong button can cause a big issue, like dropping a table, so use this kind of tool with care.

Top GUI Tools for PostgreSQL

Now, let's see some of the most common GUI tools for PostgreSQL.

Note that, for the installation examples, we'll test them on Ubuntu 18.04 Bionic.

pgAdmin

pgAdmin is one of the most popular Open Source administration and development platforms for PostgreSQL.

It's designed to meet the needs of both novice and experienced PostgreSQL users alike, providing a powerful graphical interface that simplifies the creation, maintenance and use of database objects.

It's supported on Linux, Mac OS X, and Windows. It supports all PostgreSQL

[...]
Category: postgresql

Shaun M. Thomas: PG Phriday: Terrific Throughput Tracking

2019-01-26 (Sat) 02:00:24

Postgres has a lot of built-in information functions that most people don’t know about. Some of these are critical components to identifying replication lag, but what if we could use them for something else, like throughput?

This man’s one simple trick can track actual database throughput; DBAs hate him!

Everybody Knows

Let’s take a look at a common query we might use to track replication lag between a Postgres 11 Primary and one or more Replica nodes.

SELECT client_addr,
       pg_wal_lsn_diff(pg_current_wal_lsn(), sent_lsn) AS sent_lag,
       pg_wal_lsn_diff(pg_current_wal_lsn(), write_lsn) AS write_lag,
       pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn) AS flush_lag,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag
  FROM pg_stat_replication;

This gives us the number of bytes the Primary has transmitted to remote servers, given a number of different metrics. How much have we sent? Has that data been written to disk? How much has been synced and is thus safe from crashes? How much has actually been replayed on the Postgres data files? For more information on the pg_stat_replication view, check out the documentation.

If we were using slots instead, we might do this:

SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS restart_lag,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS flush_lag
  FROM pg_replication_slots;

There is somewhat less information in pg_replication_slots, as it only really tells us the minimum LSN position the Primary needs to retain for that replica. Of course, if that stops advancing while the Primary keeps working, that’s lag we can see even while the replica is disconnected.

This is especially important with replication slots, since they necessarily mean the Primary is retaining WAL files a replica may need. We don’t want the replay lag v

[...]
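The throughput idea hinted at in the title can be sketched with the same functions (a hypothetical sketch, not necessarily the author's method; the sampling table and schedule are made up): record pg_current_wal_lsn() periodically and diff consecutive samples to see how many bytes of WAL the Primary generates per interval.

-- assumed helper table for periodic WAL-position samples
CREATE TABLE wal_progress (
    sample_time timestamptz PRIMARY KEY DEFAULT now(),
    lsn         pg_lsn NOT NULL DEFAULT pg_current_wal_lsn()
);

-- run this from cron or any scheduler, e.g. once a minute
INSERT INTO wal_progress DEFAULT VALUES;

-- bytes of WAL generated between consecutive samples (rough write throughput)
SELECT sample_time,
       pg_size_pretty(pg_wal_lsn_diff(lsn, lag(lsn) OVER (ORDER BY sample_time)))
           AS wal_since_previous_sample
  FROM wal_progress
 ORDER BY sample_time DESC
 LIMIT 10;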
Category: postgresql

Bruce Momjian: Pooler Authentication

2019-01-26 (Sat) 00:15:01

One frequent complaint about connection poolers is the limited number of authentication methods they support. While some of this is caused by the large amount of work required to support all 14 Postgres authentication methods, the bigger reason is that only a few authentication methods allow for the clean passing of authentication credentials through an intermediate server.

Specifically, all the password-based authentication methods (scram-sha-256, md5, password) can easily pass credentials from the client through the pooler to the database server. (This is not possible using SCRAM with channel binding.) Many of the other authentication methods, e.g. cert, are designed to prevent man-in-the-middle attacks and therefore actively thwart passing through of credentials. For these, effectively, you have to set up two sets of credentials for each user — one for client to pooler, and another from pooler to database server, and keep them synchronized.

A pooler built into Postgres would have fewer authentication pass-through problems, though internal poolers have some downsides too, as I already stated.

Category: postgresql

Vasilis Ventirozos: PostgreSQL 12: WHERE in COPY FROM

2019-01-25 (Fri) 19:44:00
Five days ago a commit made it into 12devel that implements a WHERE clause for COPY FROM.
Today we're gonna see how it works, and how someone could achieve the same thing using file_fdw.

To begin with, let's create a table and put some data in it.

create table test (
    id int,
    date timestamp without time zone,
    name text
);

insert into test (id,date) select generate_series(1,1000000), '2015-01-01';
insert into test (id,date) select generate_series(1000001,2000000), '2016-01-01';
insert into test (id,date) select generate_series(2000001,3000000), '2017-01-01';
insert into test (id,date) select generate_series(3000001,4000000), '2018-01-01';
insert into test (id,date) select generate_series(4000001,5000000), '2019-01-01';
Now, let's make this a bit larger than 170MB, dump the data to CSV, and truncate the table:
monkey=# insert into test select * from test;
INSERT 0 5000000
monkey=# insert into test select * from test;
INSERT 0 10000000
monkey=# select count(*) from test;
  count
----------
 20000000
(1 row)

monkey=# copy test to '/home/vasilis/test.csv' with csv;
COPY 20000000

monkey=# truncate test;
TRUNCATE TABLE

vasilis@Wrath > ls -lh ~/test.csv
-rw-r--r-- 1 vasilis vasilis 759M Jan 25 12:24 /home/vasilis/test.csv
Our test file is about 750MB. Now, with an empty table, let's import only the rows that are up to 2 years old:
monkey=# copy test from '/home/vasilis/test.csv' with csv where date >= now() - INTERVAL '2 year';
COPY 8000000
Time: 17770.267 ms (00:17.770)
It worked, awesome!
Now, let's compare to the alternative, file_fdw:
monkey=# CREATE EXTENSION file_fdw;
CREATE EXTENSION
Time: 7.288 ms

monkey=# CREATE SERVER pgtest FOREIGN DATA WRAPPER file_fdw;
CREATE SERVER
Time: 3.957 ms

monkey=# create foreign table file_test (id int, date timestamp without time zone, name text) server pgtest options (filename '/home/vasilis/test.csv', format 'csv');
CREATE FOREIGN TABLE
Time: 16.463 ms

monkey=# truncate test;
TRUNCATE TABLE
Time: 15.123 ms
monkey=# insert into test select * from file_test [...]
Category: postgresql

Jehan-Guillaume (ioguix) de Rorthais: Build a PostgreSQL Automated Failover in 5 minutes

2019-01-25 (Fri) 06:20:00

I’ve been working with Vagrant to quickly build a fresh test cluster for PAF development. Combined with virsh snapshot-related commands, I save a lot of time during my many tests by quickly rolling back the whole cluster to its initial state.

Continue Reading »
Category: postgresql

Robert Haas: How Much maintenance_work_mem Do I Need?

2019-01-25 (Fri) 02:04:00
While I generally like PostgreSQL's documentation quite a bit, there are some areas where it is not nearly specific enough for users to understand what they need to do. The documentation for maintenance_work_mem is one of those places. It says, and I quote, "Larger settings might improve performance for vacuuming and for restoring database dumps," but that isn't really very much help, because if it might improve performance, it also might not improve performance, and you might like to know which is the case before deciding to raise the value, so that you don't waste memory.  TL;DR: Try maintenance_work_mem = 1GB.  Read on for more specific advice.
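As a concrete starting point, the setting can be raised for a single session before a heavy maintenance operation and reset afterwards (a minimal sketch; the table name is a placeholder):

SET maintenance_work_mem = '1GB';   -- the TL;DR value suggested above
VACUUM VERBOSE my_big_table;        -- my_big_table is a placeholder name
RESET maintenance_work_mem;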

Read more »
Category: postgresql

Umur Cubukcu: Microsoft Acquires Citus Data: Creating the World’s Best Postgres Experience Together

2019-01-25 (Fri) 01:41:00

Today, I’m very excited to announce the next chapter in our company’s journey: Microsoft has acquired Citus Data.

When we founded Citus Data eight years ago, the world was different. Clouds and big data were newfangled. The common perception was that relational databases were, by design, scale up only—limiting their ability to handle cloud scale applications and big data workloads. This brought the rise of Hadoop and all the other NoSQL databases people were creating at the time. At Citus Data, we had a different idea: that we would embrace the relational database, while also extending it to make it horizontally scalable, resilient, and worry-free. That instead of re-implementing the database from scratch, we would build upon PostgreSQL and its open and extensible ecosystem.

Fast forward to 2019 and today’s news: we are thrilled to join a team who deeply understands databases and is keenly focused on meeting customers where they are. Both Citus and Microsoft share a mission of openness, empowering developers, and choice. And we both love PostgreSQL. We are excited about joining forces, and the value that doing so will create: Delivering to our community and our customers the world’s best PostgreSQL experience.

As I reflect on our Citus Data journey, I am very proud of what our team has accomplished. We created Citus to transform PostgreSQL into a distributed database—giving developers game-changing performance improvements and delivering queries that are magnitudes faster than proprietary implementations of Postgres. We packaged Citus as an open source extension of PostgreSQL—so you could always stay current with the latest version of Postgres, unlike all forks of databases prior to it. We launched our Citus Cloud database as a service and grew it to power billions of transactions every day—creating the world’s first horizontally scalable relational database that you can run both on premises, and as a fully-managed service on the cloud.

The most fulfilling part of this journey has been seeing all the things our

[...]
Category: postgresql

Jeff McCormick: What's New in Crunchy PostgreSQL Operator 3.5

2019-01-25 (Fri) 01:12:00

Crunchy Data is happy to announce the release of the open source  PostgreSQL Operator 3.5 for Kubernetes project, which you can find here: https://github.com/CrunchyData/postgres-operator/

This latest release provides further feature enhancements designed to support users intending to deploy large-scale PostgreSQL clusters on Kubernetes, with enterprise high-availability and disaster recovery requirements.

When combined with the Crunchy PostgreSQL Container Suite, the PostgreSQL Operator provides an open source, Kubernetes-native PostgreSQL-as-a-Service capability.

Read on to see what is new in PostgreSQL Operator 3.5.

Category: postgresql

Robert Haas: Who Contributed to PostgreSQL Development in 2018?

2019-01-24 (Thu) 03:57:00
This is my third annual post on who contributes to PostgreSQL development.  I have been asked a few times to include information on who employs these contributors, but I have chosen not to do that, partly but not only because I couldn't really vouch for the accuracy of any such information, nor would I be able to make it complete.  The employers of several people who contributed prominently in 2018 are unknown to me.
Read more »
Category: postgresql

Bruce Momjian: Synchronizing Authentication

2019-01-24 (Thu) 02:00:01

I have already talked about external password security. What I would like to talk about now is keeping an external-password data store synchronized with Postgres.

Synchronizing the password is not the problem (the password is only stored in the external password store), but what about the existence of the user? If you create a user in LDAP or PAM, you would like that user to also be created in Postgres. Another synchronization problem is role membership. If you add or remove someone from a role in LDAP, it would be nice if the user's Postgres role membership were also updated.

ldap2pg can do this in batch mode. It will compare LDAP and Postgres and modify Postgres users and role membership to match LDAP. This email thread talks about a custom solution that instantly creates users in Postgres when they are created in LDAP, rather than waiting for a periodic run of ldap2pg.
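For illustration, such synchronization ultimately boils down to ordinary SQL statements like the following (a hypothetical sketch; the role and group names are made up):

-- user found in LDAP but missing in Postgres
CREATE ROLE alice LOGIN;
-- LDAP group membership mapped to Postgres role membership
GRANT reporting TO alice;
-- user removed from the LDAP group
REVOKE reporting FROM bob;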

Category: postgresql

Richard Yen: The Curious Case of Split WAL Files

2019-01-23 (Wed) 05:00:00
Introduction

I’ve come across this scenario maybe twice in the past eight years of using Streaming Replication in Postgres: replication on a standby refuses to proceed because a WAL file has already been removed from the primary server, so it executes Plan B by attempting to fetch the relevant WAL file using restore_command (assuming that archiving is set up and working properly), but upon replay of that fetched file, we hear another croak: “No such file or directory.” Huh? The file is there? ls shows that it is in the archive. On and on it goes, never actually progressing in the replication effort. What’s up with that? Here’s a snippet from the log:

2017-07-19 16:35:19 AEST [111282]: [96-1] user=,db=,client= (0:00000)LOG: restored log file "0000000A000000510000004D" from archive
scp: /archive/xlog//0000000A000000510000004E: No such file or directory
2017-07-19 16:35:20 AEST [114528]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 51/4D000000 on timeline 10
2017-07-19 16:35:20 AEST [114528]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000A000000510000004D has already been removed
scp: /archive/xlog//0000000B.history: No such file or directory
scp: /archive/xlog//0000000A000000510000004E: No such file or directory
2017-07-19 16:35:20 AEST [114540]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 51/4D000000 on timeline 10
2017-07-19 16:35:20 AEST [114540]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 0000000A000000510000004D has already been removed
scp: /archive/xlog//0000000B.history: No such file or directory
scp: /archive/xlog//0000000A000000510000004E: No such file or directory
2017-07-19 16:35:25 AEST [114550]: [1-1] user=,db=,client= (0:00000)LOG: started streaming WAL from primary at 51/4D000000 on timeline 10
2017-07-19 16:35:25 AEST [114550]: [2-1] user=,db=,client= (0:XX000)FATAL: could not receive[...]
Category: postgresql

Jonathan Katz: Scheduling Backups En Masse with the Postgres Operator

2019-01-23 (Wed) 01:27:00

An important part of running a production PostgreSQL database system (and for that matter, any database software) is to ensure you are prepared for disaster. There are many ways to go about preparing your system for disaster, but one of the simplest and most effective ways to do this is by taking periodic backups of your database clusters.

How does one typically go about setting up a periodic backup? If you’re running PostgreSQL on a Linux-based system, the solution is often to use cron, setting up a crontab entry similar to this in your superuser account:

# take a daily base backup at 1am to a mount point on an external disk
# using pg_basebackup
0 1 * * * /usr/bin/env pg_basebackup -D /your/external/mount/

However, if you’re managing tens, if not hundreds or thousands, of PostgreSQL databases, this very quickly becomes an onerous task and you will need some automation to help you scale your disaster recovery safely and efficiently.

Automating Periodic Backups

The Crunchy PostgreSQL Operator, an application for managing PostgreSQL databases in a Kubernetes-based environment, is designed to manage thousands of PostgreSQL databases from a single interface and help with challenges like the above. One of the key features of the PostgreSQL Operator is to utilize Kubernetes Labels to apply commands across many PostgreSQL databases. Later in this article, we will see how we can take advantage of labels in order to set backup policies across many clusters.

Category: postgresql

Bruce Momjian: Insufficient Passwords

2019-01-22 (Tue) 00:45:01

As I already mentioned, passwords were traditionally used to prove identity electronically, but they are showing their weakness as computing power has increased and attack vectors have expanded. Basically, user passwords have several restrictions:

  • must be simple enough to remember
  • must be short enough to type repeatedly
  • must be complex enough to not be easily guessed
  • must be long enough to not be easily cracked (discovered by repeated password attempts) or the number of password attempts must be limited

As you can see, the simple/short and complex/long requirements are at odds, so there is always a tension between them. Users often choose simple or short passwords, and administrators often add password length and complexity requirements to counteract that, though there is a limit to the length and complexity that users will accept. Administrators can also add delays or a lockout after unsuccessful authorization attempts to reduce the cracking risk. Logging of authorization failures can sometimes help too.

While Postgres records failed login attempts in the server logs, it doesn't provide any of the other administrative tools for password control. Administrators are expected to use an external authentication service like LDAP or PAM, which have password management features.

Continue Reading »

Category: postgresql

Hans-Juergen Schoenig: pg_permission: Inspecting your PostgreSQL security system

2019-01-22 (Tue) 00:34:21

Security is a super important topic. This is not only true in the PostgreSQL world – it holds true for pretty much any modern IT system. Databases, however, have special security requirements. More often than not, confidential data is stored, and therefore it makes sense to ensure that data is protected properly. Security first!

PostgreSQL: Listing all permissions

Gaining an overview of all permissions granted to users in PostgreSQL can be quite difficult. However, if you want to secure your system, gaining an overview is really everything – it can be quite easy to forget a permission here and there, and fixing things can be a painful task. To make life easier, Cybertec has implemented pg_permission (https://github.com/cybertec-postgresql/pg_permission). There are a couple of things which can be achieved with pg_permission:

  • Gain a faster overview and list all permissions
  • Compare your “desired state” to what you got
  • Instantly fix errors

In short: pg_permission can do more than just list what is there. However, let us get started with the simple case – listing all permissions. pg_permission provides a couple of views which can be accessed directly once the extension has been deployed. Here is an example:

test=# \x
Expanded display is on.
test=# SELECT * FROM all_permissions WHERE role_name = 'workspace_owner';
-[ RECORD 1 ]-------------------------------------------------------
object_type | TABLE
role_name   | workspace_owner
schema_name | public
object_name | b
column_name |
permission  | SELECT
granted     | t
-[ RECORD 2 ]-------------------------------------------------------
object_type | TABLE
role_name   | workspace_owner
schema_name | public
object_name | b
column_name |
permission  | INSERT
granted     | t
-[ RECORD 3 ]-------------------------------------------------------
object_type | TABLE
role_name   | workspace_owner
schema_name | public
object_name | b
column_name |
permission  | UPDATE
granted     | f

The easiest way is to use the “all_permissions” view to gain an

[...]
Category: postgresql

Luca Ferrari: PostgreSQL to Microsoft SQL Server Using TDS Foreign Data Wrapper

2019-01-18 (Fri) 09:00:00

I needed to push data from a Microsoft SQL Server 2005 to our beloved database, so why not use an FDW for the purpose? It was not as simple as with other FDWs, but it works!

PostgreSQL to Microsoft SQL Server Using TDS Foreign Data Wrapper

At work I needed to push data out from a Microsoft SQL Server 2005 to a PostgreSQL 11 instance. Foreign Data Wrappers was my first thought! Perl to the rescue was my second, but since I had some time, I decided to investigate the first way first.


The scenario was the following:

  • CentOS 7 machine running PostgreSQL 11, it is not my preferred setup (I do prefer either FreeBSD or Ubuntu), but I have to deal with that;
  • Microsoft SQL Server 2005 running on a Windows Server, surely not something I like to work with and to which I had to connect via remote desktop (argh!).
First Step: get TDS working

After a quick research on the web, I discovered that MSSQL talks the Tabular Data Stream protocol (TDS for short), so I don’t need to install an ODBC stack on my Linux box. And luckily, there are binaries for CentOS:

$ sudo yum install freetds
$ sudo yum install freetds-devel freetds-doc

freetds comes along with a tsql terminal command that is meant to be a diagnostic tool, so it is nothing as complete as a psql terminal. You should really test your connectivity with tsql before proceeding further, since it can save you hours of debugging when things do not work.
Thanks to a pragmatic test with tsql I discovered that I needed to open port 1433 (default MSSQL port) on our...
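For context, the PostgreSQL side of such a setup typically ends up looking roughly like this (a hedged sketch: the server address, credentials, and table are placeholders, and the option names follow the tds_fdw documentation rather than this post):

-- assumes the tds_fdw extension is available on the PostgreSQL 11 host
CREATE EXTENSION tds_fdw;

CREATE SERVER mssql_svr
  FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername '192.168.1.10', port '1433', database 'mydb');  -- placeholders

CREATE USER MAPPING FOR CURRENT_USER
  SERVER mssql_svr
  OPTIONS (username 'sa', password 'secret');  -- placeholders

CREATE FOREIGN TABLE mssql_orders (id int, created_at timestamp without time zone)
  SERVER mssql_svr
  OPTIONS (table_name 'dbo.orders');  -- placeholder table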

Category: postgresql

Venkata Nagothi: An Overview of JSON Capabilities Within PostgreSQL

2019-01-18 (Fri) 05:19:55
What is JSON?

JSON stands for “JavaScript Object Notation”, which is a data format popularly used by web applications. This means the data is transmitted between web applications and servers in that format. JSON was introduced as an alternative to the XML format. In the “good old days” the data used to get transmitted in XML format, which is a heavyweight data type compared to JSON. Below is an example of a JSON-formatted string:

{ "ID":"001","name": "Ven", "Country": "Australia", "city": "Sydney", "Job Title":"Database Consultant"}

A JSON string can contain another JSON object within itself, as shown below:

{ "ID":"001", "name": "Ven", "Job Title":"Database Consultant", "Location":{"Suburb":"Dee Why","city": "Sydney","State":"NSW","Country": "Australia"}}

Modern-day web and mobile applications mostly generate data in JSON format, also termed “JSON bytes”, which is picked up by the application servers and sent across to the database. The JSON bytes are in turn processed, broken down into separate column values, and inserted into an RDBMS table.
Example:

{ "ID":"001","name": "Ven", "Country": "Australia", "city": "Sydney", "Job Title":"Database Consultant"}

The above JSON data is converted to an SQL statement like the one below:

Insert into test (id, name, country, city, job_title) values (001, 'Ven', 'Australia', 'Sydney', 'Database Consultant');

When it comes to storing and processing JSON data, there are various NoSQL databases that support it, the most popular being MongoDB. When it comes to RDBMS databases, until recent times, JSON strings were treated as normal text and there were no data types which specifically recognize, store, or process JSON-format strings. PostgreSQL, the most popular open-source RDBMS, has come up with a JSON data type which turned out to be highly beneficial for performance, functionality, and scalability when it comes to handling JSON data.
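A small sketch of what that looks like in practice (the table and column names here are made up for illustration):

-- store the JSON document natively instead of splitting it into columns
CREATE TABLE customer_docs (id serial PRIMARY KEY, doc jsonb);

INSERT INTO customer_docs (doc)
VALUES ('{"ID":"001","name":"Ven","Country":"Australia","city":"Sydney","Job Title":"Database Consultant"}');

-- query fields directly from the stored JSON
SELECT doc->>'name' AS name, doc->>'city' AS city
  FROM customer_docs
 WHERE doc->>'Country' = 'Australia';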

PostgreSQL + JSON

The PostgreSQL database has become more and more popular ever since the JSON data type was introduced. In fact, P

[...]
Category: postgresql

Michael Paquier: Postgres 12 highlight - SKIP_LOCKED for VACUUM and ANALYZE

2019-01-17 (Thu) 16:25:09

The following commit has been merged into Postgres 12, adding a new option for VACUUM and ANALYZE:

commit: 803b1301e8c9aac478abeec62824a5d09664ffff
author: Michael Paquier <michael@paquier.xyz>
date: Thu, 4 Oct 2018 09:00:33 +0900

Add option SKIP_LOCKED to VACUUM and ANALYZE

When specified, this option allows VACUUM to skip the work on a
relation if there is a conflicting lock on it when trying to open it
at the beginning of its processing.  Similarly to autovacuum, this
comes with a couple of limitations while the relation is processed
which can cause the process to still block:
- when opening the relation indexes.
- when acquiring row samples for table inheritance trees, partition
  trees or certain types of foreign tables, and that a lock is taken
  on some leaves of such trees.

Author: Nathan Bossart
Reviewed-by: Michael Paquier, Andres Freund, Masahiko Sawada
Discussion: https://postgr.es/m/9EF7EBE4-720D-4CF1-9D0E-4403D7E92990@amazon.com
Discussion: https://postgr.es/m/20171201160907.27110.74730@wrigleys.postgresql.org
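In practice the new option boils down to this (a minimal sketch; the table names are placeholders):

-- skip any relation whose lock cannot be acquired immediately
VACUUM (SKIP_LOCKED) table_a, table_b;
ANALYZE (SKIP_LOCKED) table_a;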

Postgres 11 extended VACUUM so that multiple relations can be specified in a single query, processing each relation one at a time. However, if VACUUM gets stuck on a relation which is locked for one reason or another for a long time, it is up to the application layer which triggered VACUUM to notice that and unblock the situation. SKIP_LOCKED brings more control here by immediately skipping any relation that cannot be locked at the beginning of VACUUM or ANALYZE processing, meaning that the processing will finish in a timely manner at the cost of potentially doing nothing, which can also be dangerous if a table keeps accumulating bloat and is not cleaned up. As mentioned in the commit message, there are some limitations similar to autovacuum:

  • Relation indexes may need to be locked, which would cause the processing to still block when working on them.
  • The list of relations part of a partition or inheritance tree to process is built at the beginning of VACUUM or
[...]
Category: postgresql

Bruce Momjian: Removable Certificate Authentication

2019-01-17 (Thu) 03:00:01

I mentioned previously that it is possible to implement certificate authentication on removable media, e.g., a USB memory stick. This blog post shows how it is done. First, root and server certificates and key files must be created:

$ cd $PGDATA

# create root certificate and key file
$ openssl req -new -nodes -text -out root.csr -keyout root.key -subj "/CN=root.momjian.us"
$ chmod og-rwx root.key
$ openssl x509 -req -in root.csr -text -days 3650 -extfile /etc/ssl/openssl.cnf -extensions v3_ca -signkey root.key -out root.crt

# create server certificate and key file
$ openssl req -new -nodes -text -out server.csr -keyout server.key -subj "/CN=momjian.us"
$ chmod og-rwx server.key
$ openssl x509 -req -in server.csr -text -days 365 -CA root.crt -CAkey root.key -CAcreateserial -out server.crt

Continue Reading »

Category: postgresql

Craig Kerstiens: Contributing to Postgres

2019-01-16 (Wed) 02:48:00

About once a month I get this question: “How do I contribute to Postgres?”. PostgreSQL is a great database with a solid code base, and for many of us, contributing back to open source is a worthwhile cause. The thing about contributing back to Postgres is that you generally don’t just jump right in and commit code on day one. So figuring out where to start can be a bit overwhelming. If you’re considering getting more involved with Postgres, here are a few tips that you may find helpful.

Follow what’s happening

The number one way to familiarize yourself with the Postgres development and code community is to subscribe to the mailing lists. Even if you’re not considering contributing back, the mailing lists can be a great place to level up your knowledge and skills around Postgres. Fair warning: the mailing lists can be very active. But that’s ok, as you don’t necessarily need to read every email as it happens—daily digests work just fine. There is a long list of mailing lists you can subscribe to, but here are a few I think you should know about:

  • pgsql-general - This is the most active of mailing lists where you’ll find questions about working with Postgres and troubleshooting. It’s a great place to start to chime in and help others as you see questions.
  • pgsql-hackers - Where core development happens. A must-read to follow along with for a few months before you start contributing yourself.
  • pgsql-announce - Major announcements about new releases and happenings with Postgres.
  • pgsql-advocacy - If you’re more interested in the evangelism side this one is worth a subscription.

Following along to these lists will definitely prepare you to contribute code in the future. And will give you the opportunity to chime in to the discussions.

Familiarize yourself with the process

As you are reading along with the mailing lists, the docs will become one of your best friends for understanding how certain things work. The Postgres docs are rich with how things work, so when you have questions it's best to check there before asking what may have already

[...]
Category: postgresql

Joshua Drake: CFP extended until Friday!

2019-01-15 (Tue) 23:30:00


We’ve had a great response to our PostgresConf US 2019 call for proposals with over 170 potential presentations -- thank you to everyone who has submitted so far! As with what has become a tradition among Postgres Conferences, we are extending our deadline by one week to allow those final opportunities to trickle in!

The new deadline is Friday, January 18th, submit now!
We accept all topics that relate to People, Postgres, Data including any Postgres related topic, such as open source technologies (Linux, Python, Ruby, Golang, PostGIS).

Talks in especially high demand are sessions related to Regulated Industries, including healthtech, fintech, govtech, etc., particularly use cases and case studies.
Interested in attending this year’s conference? We’ve expanded our offerings, with trainings and tutorials open to everyone who purchases a Platinum registration. No separate fees for Monday’s trainings (but it will be first come, first served for seating).

Don’t forget that Early Bird registration ends this Friday, January 18. Tickets are substantially discounted when purchased early.
Register for PostgresConf 2019

Interested in an AWESOME international Postgres Conference opportunity? Consider attending PgConf Russia.




Category: postgresql
