<?xml version="1.0" encoding="UTF-8" ?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" version="2.0"><channel><title>David Youatt | CrunchyData Blog</title>
<atom:link href="https://www.crunchydata.com/blog/author/david-youatt/rss.xml" rel="self" type="application/rss+xml" />
<link>https://www.crunchydata.com/blog/author/david-youatt</link>
<image><url>https://www.crunchydata.com/build/_assets/default.png-W4XGD4DB.webp</url>
<title>David Youatt | CrunchyData Blog</title>
<link>https://www.crunchydata.com/blog/author/david-youatt</link>
<width>256</width>
<height>256</height></image>
<description>PostgreSQL experts from Crunchy Data share advice, performance tips, and guides on successfully running PostgreSQL and Kubernetes solutions</description>
<language>en-us</language>
<pubDate>Wed, 30 Sep 2020 05:00:00 EDT</pubDate>
<dc:date>2020-09-30T09:00:00.000Z</dc:date>
<dc:language>en-us</dc:language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item><title><![CDATA[ Synchronous Replication in PostgreSQL ]]></title>
<link>https://www.crunchydata.com/blog/synchronous-replication-in-postgresql</link>
<description><![CDATA[ PostgreSQL has supported streaming replication and hot standbys since version 9.0 (2010), and synchronous replication since version 9.1 (2011). ]]></description>
<content:encoded><![CDATA[ <p>PostgreSQL has supported <a href=/blog/wheres-my-replica-troubleshooting-streaming-replication-synchronization-in-postgresql>streaming replication</a> and hot standbys since version 9.0 (2010), and synchronous replication since version 9.1 (2011). Streaming replication (and in this case we're referring to "binary" streaming replication, not "logical") sends the <a href=/blog/how-to-recover-when-postgresql-is-missing-a-wal-file>PostgreSQL WAL</a> stream over a network connection from the primary to a replica. By default, streaming replication is asynchronous: the primary does not wait for a replica to indicate that it wrote the data. With synchronous replication, the primary will wait for any or all replicas (depending on the synchronous replication mode) to confirm that they received and wrote the data.<p>Depending on your business requirements, you may only need the default asynchronous behavior, or you may need to configure one or more synchronous replicas. Fortunately, PostgreSQL lets you choose, and provides options for tuning consistency versus performance (latency) to match your requirements.<p>See <a href=https://www.postgresql.org/docs/current/runtime-config-replication.html>the PostgreSQL documentation</a> for more details on streaming replication.<h2 id=preparation><a href=#preparation>Preparation</a></h2><p>Name your instances. Life will be simpler. You do that by setting a configuration parameter in each instance's <code>postgresql.conf</code> file. We'll see later why it makes things easier, using the <code>pg_stat_replication</code> view's contents on the primary. 
Since you use a replica's name to configure it as synchronous, giving each replica a unique name lets you configure individual replicas as synchronous or asynchronous.<p>For example, on one of the replicas, in <code>postgresql.conf</code>:<p>(Note: If your cluster is managed or created by Patroni or <a href=https://www.crunchydata.com/products/crunchy-high-availability-postgresql>Crunchy HA PostgreSQL</a>, it will manage the contents of <code>postgresql.conf</code>, so make the changes in the Patroni config, which generates the <code>postgresql.conf</code> file used by the servers in your cluster.)<pre><code class=language-ini>cluster_name = 'replica2' # added to process titles if nonempty
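# and on the other replica, a different unique name, e.g.:
# cluster_name = 'replica1'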
</code></pre><h2 id=creating-a-replica><a href=#creating-a-replica>Creating a Replica</a></h2><p>The first step in creating a replica is to clone the primary. There are several ways to do that, but the most straightforward is to use <code>pg_basebackup</code>, which clones a running primary PostgreSQL instance. For the simple test case in this article, create two replicas as the postgres user:<pre><code class=language-shell>/usr/lib/postgresql/12/bin/pg_basebackup -Xs -D ~/12/replica1 -R -p 5433 -h localhost -U replicant
/usr/lib/postgresql/12/bin/pg_basebackup -Xs -D ~/12/replica2 -R -p 5443 -h localhost -U replicant
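# The replicant role is assumed to already exist on the primary with the
# REPLICATION privilege; if not, create it first (the password is a placeholder):
#   psql -p 5433 -U postgres -c "CREATE ROLE replicant WITH LOGIN REPLICATION PASSWORD 'secret'"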
</code></pre><p>These connect to the primary as a client, as the user replicant, which must exist in the primary DB instance and have the <code>REPLICATION</code> privilege.<p>Note the <code>-Xs</code> argument, which opens a second connection to the primary to stream the WAL generated while the backup runs. The <code>-R</code> argument tells <code>pg_basebackup</code> to write the recovery configuration into the new replica's data directory.<p>Other ways to clone a primary are:<ul><li>pgBackRest, to back up the primary and restore to the new replica location.<li>OS-level backups or snapshots, but you must be certain that you get a consistent copy of the primary. A safe way to do this is to stop the instance first.</ul><h2 id=example-replica-settings><a href=#example-replica-settings>Example replica settings</a></h2><p>(Note that synchronous replication is independent of replication slots. Either can be used with or without the other.)<p>On a replica:<p>Note that nothing in the replica's configuration, including the recovery section, indicates whether it is sync or async. Whether a replica is synchronous or asynchronous is determined by the primary's configuration.<p>The following was generated by <code>pg_basebackup</code> when it cloned the primary to create the replica.<pre><code class=language-shell>cat /var/lib/postgresql/12/replica2/postgresql.auto.conf
</code></pre><p>Do not edit this file manually!<ul><li>It will be overwritten by the <code>ALTER SYSTEM</code> command.<li>On PostgreSQL versions prior to 12, this information is stored in the <code>recovery.conf</code> file, which also contains <code>standby_mode = on</code>; in PG 12 that setting is replaced by the <code>standby.signal</code> file.</ul><pre><code class=language-ini># Recovery settings generated by pgBackRest restore on 2020-02-13 13:11:08
recovery_target_timeline = 'latest'
recovery_target_action = 'promote'
primary_conninfo = 'passfile=''/var/lib/postgresql/.pgpass'' port=5433 host=''localhost'' user=''replicant'''
</code></pre><h2 id=make-a-replica-synchronous><a href=#make-a-replica-synchronous>Make a replica synchronous</a></h2><p>Once you have streaming replication working, on the primary add a replica name to <code>synchronous_standby_names</code> in <code>postgresql.conf</code>:<pre><code class=language-ini>synchronous_standby_names = 'replica2'
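# Multiple names may be listed; the first running standby listed has priority:
# synchronous_standby_names = 'replica2, replica1'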
</code></pre><p>and tell the PostgreSQL primary to reload the config (e.g. <code>pg_ctl reload -D $PGDATA</code>, or whatever method your HA tooling uses). Now the instance named <code>replica2</code> is a synchronous replica. That's all. Really.<p>Changing a replica from synchronous back to the default asynchronous is similar. Just remove that replica's name from the list in <code>synchronous_standby_names</code> in the primary's config and tell the primary to reload its configuration.<h2 id=reviewing-and-checking-the-current-replication-cluster><a href=#reviewing-and-checking-the-current-replication-cluster>Reviewing and checking the current replication cluster</a></h2><p>Back on the primary, look at the <code>pg_stat_replication</code> view:<pre><code class=language-text>  pid  | usesysid | usename  | application_name | client_addr | client_hostname | client_port |         backend_start         | backend_xmin |   state   |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state |          reply_time
-------+----------+----------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+------------+------------+------------+------------+-----------+-----------+------------+---------------+------------+-------------------------------
 28488 |       10 | replicant | walreceiver      | 127.0.0.1   |                 |       43221 | 2020-08-25 08:25:22.658642-07 |              | streaming | 0/2E000060 | 0/2E000060 | 0/2E000060 | 0/2E000060 |           |           |            |             0 | async      | 2020-08-25 08:25:22.400688-07
 15936 |    24794 | replicant | replica2         | 127.0.0.1   |                 |       50772 | 2020-08-25 08:25:06.760228-07 |              | streaming | 0/2E000060 | 0/2E000060 | 0/2E000060 | 0/2E000060 |           |           |            |             0 | sync       | 2020-08-25 08:25:56.915357-07
(2 rows)
</code></pre><ul><li>Note that we have one sync and one async replica.<li>Note that one of the replicas has <code>cluster_name</code> unset/defaulting, so <code>walreceiver</code> is its name.<li>Recall this setting in <code>postgresql.conf</code> on the primary:</ul><pre><code class=language-ini>synchronous_standby_names = 'replica2'
</code></pre><p>Life will be much easier if you set <code>cluster_name</code> on every replica; if it's left unset, all the replicas report the default name <code>walreceiver</code>.<h2 id=changing-a-replica-tofrom-synchronous><a href=#changing-a-replica-tofrom-synchronous>Changing a replica to/from synchronous</a></h2><p>Changing a replica from synchronous to asynchronous, or vice versa, is easy. Just remove the replica name from (or add it to) <code>synchronous_standby_names</code> in the primary's <code>postgresql.conf</code> and tell PostgreSQL to reload the configuration; no DB restart needed.<h2 id=how-synchronous-is-it-waiting-for-storage><a href=#how-synchronous-is-it-waiting-for-storage>How synchronous is it? Waiting for storage</a></h2><p>How synchronous is it? The value of <code>synchronous_commit</code> on the primary determines this.<pre><code class=language-ini>synchronous_commit = on  # the default
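# the other possible values, roughly from least to most durable:
# synchronous_commit = off | local | remote_write | on | remote_apply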
</code></pre><p>In order of increasing "safety" (durability), and increasing latency, the values of <code>synchronous_commit</code> on the primary are:<ul><li>When we set <code>synchronous_commit = off</code>, a <code>COMMIT</code> does not wait for the transaction record to be flushed to disk.<li>When we set <code>synchronous_commit = local</code>, a <code>COMMIT</code> waits until the transaction record is flushed to the local disk.<li>When we set <code>synchronous_commit = remote_write</code>, a <code>COMMIT</code> will wait until the server(s) specified by <code>synchronous_standby_names</code> confirm that the transaction record was written to the operating system, but not necessarily flushed to disk on the replica.<li>When we set <code>synchronous_commit = on</code>, a <code>COMMIT</code> will wait until the server(s) specified by <code>synchronous_standby_names</code> confirm that the transaction record was safely flushed to disk.<ul><li>Note: When <code>synchronous_standby_names</code> is empty, this setting behaves the same as <code>synchronous_commit = local</code>.</ul><li>When we set <code>synchronous_commit = remote_apply</code>, a <code>COMMIT</code> will wait until the server(s) specified by <code>synchronous_standby_names</code> confirm that the transaction record was applied to the replica's database.</ul><p>How much data might not be replicated if a replica loses connectivity with the primary while using the faster, less durable options? That depends on two more PostgreSQL settings: <code>wal_writer_delay</code> and <code>wal_writer_flush_after</code>. The first flushes WAL after a specified time period (200ms by default); the second flushes once a specified amount of WAL has been written since the last flush. 
If you set <code>synchronous_commit</code> to <code>off</code>, these two settings limit how much WAL can remain unflushed.<h2 id=setting-synchronous-behavior-in-a-session-client><a href=#setting-synchronous-behavior-in-a-session-client>Setting synchronous behavior in a session (client)</a></h2><p>Since synchronous commit behavior is a property of the transaction, a client can change it for a whole session, or during a session, setting a different value for each statement if desired. You can set the synchronous behavior at any of these levels:<ul><li>Single statement / transaction - <code>SET LOCAL synchronous_commit =</code><li>Session - <code>SET synchronous_commit =</code><li>User - <code>ALTER USER someuser SET synchronous_commit =</code><li>Database - <code>ALTER DATABASE somedb SET synchronous_commit =</code><li>And of course cluster-wide, by updating <code>postgresql.conf</code></ul><h2 id=adding-priority-or-quorum-to-the-list-of-synchronous-replicas><a href=#adding-priority-or-quorum-to-the-list-of-synchronous-replicas>Adding priority or quorum to the list of synchronous replicas</a></h2><p>In addition to specifying how synchronous a remote replica is, you can also create a list of synchronous replicas by priority - <code>FIRST</code> - or a quorum of replicas - <code>ANY</code>.<p>Quorum is an important aspect of distributed computing. You may already know what it is, but if not, here's a simplified explanation. In this case we are concerned with consistency of the DB data across multiple DB nodes. When a number of nodes - that you choose - all have the same data committed, the cluster is considered to be in a consistent state. That number can be all the nodes in the cluster, or a subset of them; it is the quorum number, and it is chosen according to your business requirements for data consistency. In this case, the nodes "vote" for quorum by replying to the primary that they have received and applied the replicated data. 
It's common to have an odd number of nodes in a cluster and to define quorum as a majority of nodes with consistent copies of the DB data (e.g. 2 of 3 nodes, or 3 of 5). Quorum is used for other purposes in distributed computing too; a common case is electing a new primary from a cluster of nodes when the current primary fails or is unavailable.<p>There are more options in the primary's <code>postgresql.conf</code> setting of <code>synchronous_standby_names</code> to support priority and simple quorum.<p>For the priority case, the <code>FIRST</code> keyword:<pre><code class=language-ini>synchronous_standby_names = 'FIRST num (standby_name [, ...])'
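# e.g. wait for the two highest-priority standbys of the three listed:
# synchronous_standby_names = 'FIRST 2 (replica1, replica2, replica3)'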
</code></pre><p>A synchronous commit will wait for replies from <code>num</code> standbys, taken in the priority order in which they are listed.<p>For the quorum case, the <code>ANY</code> keyword:<pre><code class=language-ini>synchronous_standby_names = 'ANY num (standby_name [, ...])'
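# e.g. wait for replies from any two of the three listed standbys:
# synchronous_standby_names = 'ANY 2 (replica1, replica2, replica3)'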
</code></pre><p>The same rules as above apply. So, for example, setting <code>synchronous_standby_names = 'ANY 2 (*)'</code> will cause synchronous commit to wait for replies from any 2 standby servers. Double-check your syntax and test that the settings implement your business rules for consistency.<h2 id=configure-so-you-dont-wait-forever><a href=#configure-so-you-dont-wait-forever>Configure so you don't wait forever</a></h2><p>With synchronous replication, you've built in a dependency: a transaction is not committed on the primary until it's written to the synchronous replica. So, depending on the configuration options above, your primary can hang forever if the replica (or quorum of replicas) is not reachable. Obviously, you're dependent on the connection between primary and replicas.<p>If you have a single synchronous replica and it is unavailable, your primary will block, waiting for it to return. To avoid that, have at least two replicas and use the <code>FIRST</code> or <code>ANY</code> options to <code>synchronous_standby_names</code> described above. 
You could disable synchronous replication by commenting out <code>synchronous_standby_names</code>, but then, of course, you don't have a synchronous replica.<h2 id=the-tldr><a href=#the-tldr>The TL;DR</a></h2><p>To convert a streaming binary replica to synchronous, add its name to the primary's <code>postgresql.conf</code> setting <code>synchronous_standby_names</code>, and reload the primary.<p>To convert a synchronous replica to asynchronous, remove its name from the primary's <code>postgresql.conf</code> setting <code>synchronous_standby_names</code>, and reload the primary.<p>Both of these are much easier if you've given each replica a unique name via the <code>cluster_name</code> setting in its <code>postgresql.conf</code>.<h2 id=references><a href=#references>References</a></h2><p>As always, the PostgreSQL documentation is the place to look for more information - <a href=https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION>https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION</a> ]]></content:encoded>
<category><![CDATA[ Production Postgres ]]></category>
<author><![CDATA[ David.Youatt@crunchydata.com (David Youatt) ]]></author>
<dc:creator><![CDATA[ David Youatt ]]></dc:creator>
<guid isPermaLink="false">https://blog.crunchydata.com/blog/synchronous-replication-in-postgresql</guid>
<pubDate>Wed, 30 Sep 2020 05:00:00 EDT</pubDate>
<dc:date>2020-09-30T09:00:00.000Z</dc:date>
<atom:updated>2020-09-30T09:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ How to Recover When PostgreSQL is Missing a WAL File ]]></title>
<link>https://www.crunchydata.com/blog/how-to-recover-when-postgresql-is-missing-a-wal-file</link>
<description><![CDATA[ Creation and clean up of WAL files in the primary's pg_wal folder (pg_xlog prior to PG10) is a normal part of PostgreSQL operation. The WAL files on the primary are used to ensure data consistency during crash recovery. Use of write-ahead logs (also called redo logs or transaction logs in other products) is common for data stores that must provide durability and consistency of data when writing to storage. The same technique is used in modern journaling and log-structured filesystems. ]]></description>
<content:encoded><![CDATA[ <p>Creation and cleanup of WAL files in the primary's <code>pg_wal</code> folder (<code>pg_xlog</code> prior to PG10) is a normal part of PostgreSQL operation. The WAL files on the primary are used to ensure data consistency during crash recovery. Use of write-ahead logs (also called redo logs or transaction logs in other products) is common for data stores that must provide durability and consistency of data when writing to storage. The same technique is used in modern journaling and log-structured filesystems.<p>As the DB operates, blocks of data are first written serially and synchronously as WAL files, then some time later, usually a very short time later, written to the DB data files. Once the data contained in these WAL files has been flushed out to its final destination in the data files, the WAL files are no longer needed by the primary. At some point, depending on your configuration, the primary will remove or recycle the WAL files whose data has been committed to the DB. This is necessary to keep the primary's disk from filling up. However, these WAL files are also what streaming replicas read when they are replicating data from the primary. If the replica is able to keep up with the primary, using these WAL files generally isn't an issue.<p>If the replica falls behind or is disconnected from the primary for an extended period of time, the primary may have already removed or recycled the WAL file(s) that a replica needs (but see <a href=#streaming-replication-slots>Streaming Replication Slots</a> below). A replica can fall behind on a primary with a high write rate. How far the replica falls behind will depend on network bandwidth from the primary, as well as storage performance on the replica.<p>To account for this possibility, we recommend keeping secondary copies of the WAL files in another location. 
This is known as <em>WAL archiving</em> and is done by ensuring <code>archive_mode</code> is turned on and a value has been set for <code>archive_command</code>. Both are set in the <code>postgresql.conf</code> file.<p>Whenever the primary generates a WAL file, this command is run to make a secondary copy of it. Until that <code>archive_command</code> succeeds, the primary will keep that WAL file, so you must monitor for this command failing; otherwise the primary's disk may fill. Once WAL archiving is in place, you can then configure your replicas to use that secondary location for WAL replay if they ever lose their connection to the primary.<p>This process is explained more in the <a href=https://www.postgresql.org/docs/current/continuous-archiving.html>PostgreSQL documentation</a>.<p>Note that configuration details for creating a cluster have changed starting in PostgreSQL 12. In particular, the <code>recovery.conf</code> file on a replica instance no longer exists and those configuration lines are now part of <code>postgresql.conf</code>. If you're using PostgreSQL 12 or newer, read the documentation carefully and note the new files <code>recovery.signal</code> and <code>standby.signal</code>.<p>Crunchy Data provides the pgBackRest tool, which offers full WAL archiving functionality as well as binary backup management. We do not recommend the simple copy mechanism given as an example in the documentation, since it does not provide the resiliency typically required for production databases. 
pgBackRest provides full, differential and incremental backups as well as integrated WAL file management.<p>WAL archiving and backups are typically used together, since this provides <dfn>point-in-time recovery</dfn> (<abbr>PITR</abbr>), where you can restore a backup to any specific point in time as long as you have the full WAL stream available between all backups.<p>pgBackRest is an integral part of the <a href=https://www.crunchydata.com/products/crunchy-high-availability-postgresql>Crunchy HA</a> and <a href=https://github.com/CrunchyData/postgres-operator>Crunchy PostgreSQL Operator</a> products, and is what we recommend for a binary backup and archive tool.<p>See <a href=https://pgbackrest.org/>https://pgbackrest.org/</a> for more information on pgBackRest.<h2 id=environment><a href=#environment>Environment</a></h2><p>The most common case where PostgreSQL won't start because it can't find a WAL file is in a replicated cluster where a replica has been disconnected from the cluster for some time. Most of the rest of this article discusses ways to diagnose and recover this case.<p>For most of this article we will discuss the cluster case, with:<ul><li><p>A cluster of 2 or more PostgreSQL hosts<li><p>WAL archiving via the PostgreSQL <code>archive_command</code> configuration, plus binary backups, preferably with pgBackRest<li><p>On replicas, a <code>restore_command =</code> line in <code>recovery.conf</code> (or <code>postgresql.auto.conf</code> in PG 12 and newer; see also <code>standby.signal</code> and <code>recovery.signal</code>) that pulls WAL files from a binary backup location and applies them to the replica.<li><p>If using pgBackRest, the line may look like:</ul><pre><code class=language-ini>restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'
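# the matching WAL archiving setting on the primary would look like:
# archive_command = 'pgbackrest --stanza=demo archive-push %p'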
</code></pre><ul><li>It's common to use both PostgreSQL binary streaming replication <em>and</em> WAL file archiving together.<li>It's common to use a tool like pgBackRest for both binary backups and WAL archiving. Combining the two gives you the opportunity to restore a DB to a specific point in time (PITR).</ul><h2 id=symptoms><a href=#symptoms>Symptoms</a></h2><ul><li>A replica doesn't start completely and won't accept read-only connections.<li>If using Crunchy HA (or Patroni), <code>patronictl list</code> may show no leader, unknown lag for a member, or cluster members stopped.<li>This is a common symptom when the primary has already recycled/removed the requested WAL file:</ul><pre><code class=language-txt>2020-03-13 09:32:22.572 EDT [101800] ERROR: requested WAL segment 00000002000000050000007C has already been removed
</code></pre><ul><li>You may see log entries in the <code>pg_log</code> logs like:</ul><pre><code class=language-txt>2020-04-17 14:29:49.479 P00 INFO: unable to find 0000001600000000000000C2 in the archive,
</code></pre><p>and/or<pre><code class=language-txt>2020-04-17 14:29:49 EDT [379]: [6-1], , FATAL: requested timeline 23 does not contain minimum recovery point 0/C2A56FC0 on timeline 22
</code></pre><p>and especially these:<pre><code class=language-txt>2020-04-17 14:29:49 EDT [376]: [5-1], , LOG: database system is shut down
</code></pre><pre><code class=language-txt>2020-04-17 14:29:49 EDT [459]: [1-1], , LOG: database system was interrupted while in recovery at log time 2020-04-17 14:19:28 EDT
</code></pre><pre><code class=language-txt>2020-04-17 14:29:49 EDT [459]: [2-1], , HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
</code></pre><h2 id=common-causes><a href=#common-causes>Common Causes</a></h2><p>The underlying cause of a replica getting out of sync and unable to replay WAL (either via streaming replication or from the WAL archive / backup) is almost always an infrastructure issue, most often network connectivity interruptions. To have an HA cluster you must have reliable network connectivity between the cluster members and from each of the cluster members to the WAL archive (backup) location.<p>It's worth a reminder that time synchronization across cluster member hosts is critically important for correct cluster operation. Always check and confirm that all nodes in the cluster have an NTP service running (e.g. <code>ntpd</code> or <code>chronyd</code>), and that the nodes are correctly synced to each other and to a master time source.<p>It is common to use the same tools for binary backups and WAL archiving. A common configuration in an HA cluster is to use pgBackRest as both the backup tool and the WAL archiving and playback tool. With pgBackRest and other binary backup tools, you will likely have a backup schedule that does a full backup of the cluster primary server periodically, with differential or incremental backups in between. Along with the backup schedule are backup retention settings. For example, you may have configured your cluster to retain the last three full backups and the last three incremental or differential backups.<p>In addition to the backups, pgBackRest will retain the WAL files that are needed to do a point-in-time recovery from your full, differential and incremental backups. 
So if a replica has been disconnected long enough (several backup cycles) for the archived WAL files it needs to be past their retention period, then when it reconnects, PostgreSQL will be far behind the current state and will attempt to restore archived WAL files that no longer exist. In that case, you will need to reset or reinitialize the replica from current backups and the WAL files that are relative to them.<p>If the DB data disk fills completely on the replica, the replica will stop accepting and applying WAL updates from the primary. If this isn't caught and repaired for some time, the primary may have removed older WAL files. The length of time will depend on the configuration and the change rate on the primary. See the <a href=https://www.postgresql.org/docs/current/runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER>documentation for <code>wal_keep_segments</code></a> to retain more old WAL files.<p>Another failure mode is when you have intermittent network connectivity among hosts, and the cluster fails over and fails back several times. For each failover, a replica is promoted to primary (failover or switchover), and the WAL <em>timeline</em> is incremented. If one of the replicas can't communicate with the cluster for some time, its local DB will be based on an earlier timeline, and when it attempts to restore a WAL file from that earlier timeline, the WAL file won't be found in the archive, which defaults to the current primary's timeline. Note that there is an option to specify the timeline in the <code>recovery.conf</code> file, but you probably want to fix the replica to be on the current primary's timeline. See <a href=https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-TIMELINES>here for more information</a>.<p>This is simplified when using pgBackRest (the recommended method, and the one used by Crunchy HA) with the backup-standby option enabled. 
With this option enabled, backups are done from a replica/standby host rather than the primary. There is more explanation <a href=https://pgbackrest.org/faq.html#backup-standby>in the pgBackRest documentation.</a><hr><p>Check and confirm that the former primary was properly re-synced to the new primary. This should have been done automatically by your cluster software, and involves cloning the new primary's data directory to the new replica using a tool like pg_rewind or pgBackRest's delta restore (or full restore). If the new replica (former primary) is logging messages about being on an older timeline, the re-sync may not have happened or may not have been done correctly.<h2 id=repairing--fixing-a-replica><a href=#repairing--fixing-a-replica>Repairing / Fixing a Replica</a></h2><p>It's likely that the best and only option will be to restore the replica from the backup/archive server. We recommend pgBackRest. Crunchy Data products, including the Crunchy PostgreSQL Operator, Crunchy HA and others, include and use pgBackRest.<ul><li>pgBackRest restore<ul><li>Full restore vs. delta restore: a pgBackRest delta restore can save time and resources. It checks the PostgreSQL destination and restores only the files that are needed and don't already exist in the destination.</ul></ul><p>If you're using Crunchy HA or the Crunchy PostgreSQL Operator, use the HA layer tools rather than pgBackRest directly.<p>First use<pre><code class=language-shell>patronictl reinit
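# e.g. (config path, cluster and member names here are placeholders):
#   patronictl -c /etc/patroni/patroni.yml reinit my_cluster replica2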
</code></pre><p>to restore a cluster member node (PostgreSQL instance) from the backup/archive system.<h2 id=repairing--fixing-a-standalone-or-failed-primary><a href=#repairing--fixing-a-standalone-or-failed-primary>Repairing / Fixing a Standalone or Failed Primary</a></h2><ul><li><p>This one is potentially more serious, and it is a reminder of why you want regular, reliable and carefully tested backups (especially in the standalone server case), and why a properly managed replica and backups are a very good idea for data you like.<li><p>This can be caused by broken or failing hardware, misconfigured storage, power failures, or mistakenly using OS-level commands to modify the DB directory contents.<li><p>Avoid the temptation to just run <code>pg_resetwal</code> (<code>pg_resetxlog</code> in versions prior to PG 10). More on this below; please read the man page and/or the PostgreSQL docs on the risks of <code>pg_resetwal</code>.<li><p>The first thing to do is properly and completely stop the failing standalone or primary.<li><p>If your primary is part of a cluster, and your cluster uses software to manage auto-failover, like Crunchy HA, Patroni, or pacemaker/corosync, your cluster control software should detect a failure and properly fail over to one of the replicas. Occasionally it doesn't, and you will have to manually stop the failed primary/standalone. If you're not using software to manage your cluster, failover is a manual process, where you first shut down the failed primary, then promote one of the replicas to be the new primary. 
If you are using software for auto-failover and it hasn't automatically failed over, <strong>first, shut down the failed primary instance</strong>, preferably using the same mechanism that was used to start it: <code>patronictl</code> if it's using Patroni or Crunchy HA, <code>pcs</code> or <code>crmsh</code> if you're using pcs/corosync/pacemaker clustering, or an OS-level utility like systemd's <code>systemctl</code>, the legacy service command, or <code>pg_ctl</code>. Even if those complete successfully, check to be certain that all the PostgreSQL processes have stopped. Occasionally, none of them stops all the PostgreSQL processes; in that case, you need to use the Linux kill command. To stop the postmaster, the three signals that correspond to "smart", "fast" and "immediate" shutdown are <code>SIGTERM</code>, <code>SIGINT</code> and <code>SIGQUIT</code>. The very last option to use is <code>SIGKILL</code> (<code>kill -9</code>); avoid that unless it's a true emergency.<li><p>Once all the PostgreSQL processes are no longer running, make a copy of the DB's data directory using your favorite OS-level snapshot utility (<code>tar</code>, <code>cpio</code>, <code>rsync</code>) or your infrastructure's disk snapshotting tool. This is a safe copy in case attempts to repair the DB cause more damage.<li><p>To get the DB back to healthy, the best option is to recreate the DB from a good backup. If you have a good replica (now likely the primary), make the failed primary a replica of the new primary, after fixing the underlying cause.<li><p>In the worst case, where you have only the broken DB directory, you may be able to get it working again, but you will likely lose data. First make an OS-level copy of the DB directory. Avoid using OS-level commands to move or delete DB files; you will likely make things worse.<li><p>It may be tempting to just run <code>pg_resetwal</code>. 
While it may be possible to recover from some errors using <code>pg_resetwal</code>, you will likely lose data in the process. You can also do more damage to the DB. If you need to minimize data loss, don't have a replica to restore or a good backup, and don't have detailed knowledge of PostgreSQL, it's time to ask for help. It may be possible to recover most of your DB, but read the documentation and note this advice from it: "It should be used only as a last resort, when the server will not start due to such corruption."</ul><h2 id=streaming-replication-slots><a href=#streaming-replication-slots>Streaming Replication Slots</a></h2><p>Streaming replication slots are a feature available since PostgreSQL 9.4. Using replication slots will cause the primary to retain WAL files until it has been notified that the replica has received them.<p>There is a tradeoff, though. If the replica is unavailable for any reason, WAL files will accumulate on the primary until it can deliver them. On a busy database, with replicas unavailable, disk space can be consumed very quickly as unreplicated WAL files accumulate. This can also happen on a busy DB with relatively slow infrastructure (network and storage). Streaming replication slots are a very useful feature; if you do use them, carefully monitor disk space on the primary.<p>There is much more on <a href=https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS>replication slots and replication configuration</a> generally in the PostgreSQL documentation.<h2 id=multi-datacenter><a href=#multi-datacenter>Multi-datacenter</a></h2><p>If you have deployed HA across multiple data centers, there's another layer of complexity. Crunchy HA provides example Ansible files for several MDC configurations.
In the case where the primary data center fails and the DR data center has been promoted to primary, extra steps are required when you recover the original failed primary. In particular, when you start the recovered original primary, be certain that you start it as a DR data center. You will likely need to copy (with rsync or another reliable mechanism) the pgBackRest repo from the new primary data center (old DR data center) when recovering the original primary data center.<h2 id=what-not-to-do--dont-do-this><a href=#what-not-to-do--dont-do-this>What not to do / Don't do this</a></h2><ul><li>Again, avoid just running <code>pg_resetwal</code> (<code>pg_resetxlog</code>). You may need to use it as a last resort, but don't start by running it.<li>Do not try to cherry-pick or copy individual files from a backup / archive repo directly to the PostgreSQL <code>pg_wal</code> (<code>pg_xlog</code> in earlier versions) directory. That is part of the database runtime that is managed by PostgreSQL. Deleting or copying things there can break your DB. Let the tools do their jobs.<li>Never use OS tools to manually delete, modify or add files to the PostgreSQL <code>pg_wal</code> (<code>pg_xlog</code>) directory.<li>Never use OS tools to manually delete, modify or add files to the pgBackRest repo. The pgBackRest repo contains state and metadata about current backups and which WAL files depend on which backups. (Note that listing or viewing the files in a pgBackRest repo can be helpful to diagnose replica and restore issues. It's also OK to copy the repo in its entirety as long as you use a copy utility that verifies that the files have been copied correctly, for example rsync.)</ul> ]]></content:encoded>
<category><![CDATA[ Production Postgres ]]></category>
<author><![CDATA[ David.Youatt@crunchydata.com (David Youatt) ]]></author>
<dc:creator><![CDATA[ David Youatt ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/how-to-recover-when-postgresql-is-missing-a-wal-file</guid>
<pubDate>Fri, 19 Jun 2020 05:00:00 EDT</pubDate>
<dc:date>2020-06-19T09:00:00.000Z</dc:date>
<atom:updated>2020-06-19T09:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ How To Improve PgBouncer Security with TLS/SSL ]]></title>
<link>https://www.crunchydata.com/blog/improving-pgbouncer-security-with-tlsssl</link>
<description><![CDATA[ PgBouncer is a commonly deployed and recommended connection pooler for PostgreSQL and supports a number of authentication methods including TLS/SSL client certificate authentication. ]]></description>
<content:encoded><![CDATA[ <p>PgBouncer is a commonly deployed and recommended connection pooler for PostgreSQL. It supports a number of authentication methods including TLS/SSL client certificate authentication.<p>Since PgBouncer is located logically between the client and PostgreSQL, you have the option of using TLS and cert authentication from client to PgBouncer and from PgBouncer to PostgreSQL. In this brief blog post, we’ll describe securing the client-to-PgBouncer transport first, then build on that to use client certificate authentication to PgBouncer.<p>A central part of this is TLS and tools for creating and maintaining keys, certificates, signing requests, signing and more. For this post we use the widely used open source software OpenSSL, but any utilities that produce valid keys and certificates could be used.<p>The client certificates will need to be signed by the same CA (certificate authority) that signed the PgBouncer certificate. For testing and for this article we’ll use self-signed certificates, but for production you should at least create a local CA, or preferably, use a public CA, though the latter can get expensive if you have many client certificates. Both <strong>PgBouncer</strong> and <strong>PostgreSQL</strong> have a configuration option that determines the level of root certificate verification, ranging from no verification to strict verification. This accommodates a range of uses, from self-signed certificates for internal use to more secure environments that must use certs signed by a public CA.<p>Testing for this post was done with PgBouncer 1.12.0 on Linux.<h2 id=creating-a-tls-certificate-for-pgbouncer><a href=#creating-a-tls-certificate-for-pgbouncer>Creating a TLS certificate for PgBouncer</a></h2><p>We’ll use <code>openssl</code> to create a certificate for PgBouncer, to enable TLS transport security.
Here are the steps:<ol><li><p>Generate a private key (you must provide a passphrase). A 2048-bit key is the sensible minimum today.<pre><code class=language-shell>openssl genrsa -des3 -out server.key 2048
</code></pre><li><p>Remove the passphrase (but remember it).<pre><code class=language-shell>openssl rsa -in server.key -out server.key
</code></pre><li><p>Set appropriate permission and owner on the private key file.<pre><code class=language-shell>chmod 400 server.key
chown postgres:postgres server.key
</code></pre><li><p>Create the server certificate signing request. Note that this is where the process differs depending on whether you use self-signed certificates (like here) or create a <abbr>CSR</abbr> (<dfn>certificate signing request</dfn>) and send it to a CA to be signed; the CA will return the signed certificate and the root (or intermediate) certificates for your new cert, while the private key stays with you. Note the <code>-x509</code> below that produces a self-signed certificate instead of a CSR, and <code>-subj</code> is a shortcut to avoid prompting for the info and typing it interactively.</ol><ul><li><p>Creating a self-signed cert with the <code>-x509</code> argument. You probably don't want to do this in production:<pre><code class=language-shell>openssl req -new -key server.key -days 3650 -out server.crt -x509 -subj '/C=US/ST=Washington/L=Redmond/O=Crunchy Data/CN=crunchy-testuser1/emailAddress=testuser1@example.com'
</code></pre><li><p>or instead, to obtain a CA-signed certificate, generate a <abbr>CSR</abbr> (<dfn>certificate signing request</dfn>) and send the <code>.csr</code> file to your CA to be signed:<pre><code class=language-shell>openssl req -new -key server.key -out server.csr -subj '/C=US/ST=Washington/L=Redmond/O=Crunchy Data/CN=crunchy-testuser1/emailAddress=testuser1@example.com'
</code></pre><p>Your CA will return the signed certificate to you.<p>Change the <code>-subj</code> arg details for your environment of course, and note that the <code>CN=</code> part of the cert needs to be the hostname of your PgBouncer host. You can use <abbr>SAN</abbr> (<dfn>Subject Alternative Names</dfn>) to define more than one hostname in the cert, but that's outside the scope of this post.</ul><p>At this point, you have a signed certificate, its private key and a root certificate from the signing CA, or your self-signed cert.<h2 id=configuring-pgbouncer-to-use-tls-transport-security-prerequisite-for-cert-authentication><a href=#configuring-pgbouncer-to-use-tls-transport-security-prerequisite-for-cert-authentication>Configuring PgBouncer to use TLS transport security (prerequisite for cert authentication)</a></h2><p>Once you have a signed certificate for PgBouncer, configuring for TLS transport security is pretty straightforward.<p>You need to set these options in your <code>pgbouncer.ini</code> file (<code>/etc/pgbouncer/pgbouncer.ini</code> on Linux):<pre><code class=language-ini>    client_tls_sslmode = require
    client_tls_ca_file = /etc/pgbouncer/root.crt
    client_tls_key_file = /etc/pgbouncer/server.key
    client_tls_cert_file = /etc/pgbouncer/server.crt
    client_tls_ciphers = normal
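    ;; assumption, not part of the original config: once basic TLS works,
    ;; the allowed ciphers and protocol versions can be tightened; see the
    ;; PgBouncer docs for accepted values, e.g.
    ;; client_tls_ciphers = secure
    ;; client_tls_protocols = secure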
</code></pre><p>Note that there are stricter checking options for <code>client_tls_sslmode</code>, but even the <code>require</code> value will not allow non-TLS/SSL connections from clients. And we can restrict the allowed TLS cipher suites and versions with <code>client_tls_ciphers</code>, but one step at a time.<p>For now, set <code>auth_type</code> to something other than cert, for example <code>md5</code>.<p>Restart PgBouncer using whatever your platform requires, for example <code>systemctl restart pgbouncer</code> on a Linux system that uses systemd.<p>And let’s test that it’s working. Or not. Here’s a simple test using psql with <code>auth_type = md5</code> in <code>pgbouncer.ini</code>:<pre><code class=language-shell>bash> psql "sslmode=require host=localhost port=6432"
Password for user testuser1:
psql (12.1)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

testuser1=# \q
</code></pre><p>If the client requests an SSL connection, it succeeds.<pre><code class=language-shell>bash> psql "host=localhost port=6432"
psql: ERROR:  SSL required
bash>
</code></pre><p>If the client does not request an SSL connection, it fails.<p>OK, encrypted TLS connections between client and PgBouncer are working, and using a fairly secure cipher suite and TLS version. With <code>client_tls_sslmode = require</code>, if the client doesn’t request a TLS/SSL connection, it’s denied. So we have the transport layer from client to PgBouncer using TLS.<p>Note that this process is almost identical to configuring PostgreSQL to use TLS/SSL transport security. Later, in a follow-up blog post, we’ll see how to configure PgBouncer to use TLS/SSL from PgBouncer to the DB. But first, let’s continue to configure TLS client cert authentication to PgBouncer.<h2 id=client-certs-and-options-for-cas><a href=#client-certs-and-options-for-cas>Client Certs and options for CAs</a></h2><p>At this point, we have TLS transport encryption between client and PgBouncer configured and working. What's next is to create and deploy client certificates and enable cert authentication in PgBouncer.<p>These steps closely mirror the procedure described in an earlier blog post on how to <a href=/blog/ssl-certificate-authentication-postgresql-docker-containers>set up TLS authentication within Docker containers</a>.<p>An important consideration and a choice to make when doing this is what you will use for a CA (<a href=https://en.wikipedia.org/wiki/Certificate_authority>Certificate Authority</a>). There are at least three options, depending on your needs and requirements for security.<ol><li><p>The simplest is to not have a CA and use self-signed certificates. This requires that you use one of the less strict verification options for <code>client_tls_sslmode</code>. It's also not recommended for use in production.
For this method, we create a self-signed cert for the PgBouncer server, then use its private key to sign the client certs, and the PgBouncer cert is also the root CA cert.<li><p>Build and manage your own local CA.<p>For help building a local, private CA, see one of these:<ul><li>the open source <em>minica</em> project <a href=https://github.com/jsha/minica>https://github.com/jsha/minica</a><li>the new open source <em>mkcert</em> project <a href=https://github.com/FiloSottile/mkcert>https://github.com/FiloSottile/mkcert</a><li>Vault (if you’re already using it) can be a local CA<li>Most Linux distros have tools and instructions for creating and managing a local CA, see for example the easy-rsa package, also see the official Ubuntu docs here: <a href=https://help.ubuntu.com/lts/serverguide/certificates-and-security.html>https://help.ubuntu.com/lts/serverguide/certificates-and-security.html</a></ul><li><p>Use a public CA, though this can get expensive and complicated to administer if you have many clients (or clients <em>and</em> servers).</ol><p>The high-level steps are:<ol><li><p>Create the client certificate and Certificate Signing Request. The key thing here is that the CN (Common Name) in the client cert must be a valid user in the PostgreSQL instance.<pre><code class=language-shell>openssl req -newkey rsa:4096 -keyout testuser1_key.pem -out testuser1_csr.pem -nodes -days 365 -subj "/CN=testuser1"
</code></pre><li><p>Sign the client certificate with the root certificate that's installed in PgBouncer. For the example here, we're using self-signed certs, so we'll use the private key for the PgBouncer server certificate to sign the client cert.<pre><code class=language-shell>openssl x509 -req -in testuser1_csr.pem -CA server.crt -CAkey server.key -out testuser1_cert.pem -set_serial 01 -days 365
</code></pre><li><p>Install the signed client cert on the client(s).<pre><code class=language-shell>cp server.crt ~/.postgresql/root.crt
cp testuser1_cert.pem ~/.postgresql/postgresql.crt
cp testuser1_key.pem ~/.postgresql/postgresql.key
chmod 400 ~/.postgresql/postgresql.key
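# optional sanity check, assuming the self-signed setup from this post
# (server.crt doubles as the CA that signed the client cert):
openssl verify -CAfile ~/.postgresql/root.crt ~/.postgresql/postgresql.crt
# libpq can also read these paths from the PGSSLROOTCERT, PGSSLCERT
# and PGSSLKEY environment variables instead of ~/.postgresql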
</code></pre><p>Note that there are a number of <a href=https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLMODE>connection parameters you can set for SSL/TLS</a> and related <a href=https://www.postgresql.org/docs/current/libpq-envars.html>environment variables</a>.<p>Also note that these are for clients that use the <code>libpq</code> PostgreSQL library. If your client does not use it, see the docs for your client for TLS/SSL support and location of client certs and keys.<li><p>Update the PgBouncer config to use certificate authentication with an acceptable level of signature verification. Edit your <code>pgbouncer.ini</code> file and set<pre><code class=language-ini>auth_type = cert
;; required for cert auth
client_tls_sslmode = verify-full
</code></pre><p>Restart PgBouncer<li><p>Test<ol><li><p>Simple psql test<pre><code class=language-shell>testuser1-bash> psql -U testuser1 -p 6432 -h crunchy-testuser1
psql (12.1)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

testuser1=# \q
</code></pre><li><p>Test with <code>PGSSLMODE=require</code> vs <code>PGSSLMODE=verify-ca</code> vs <code>PGSSLMODE=verify-full</code> client-side environment variables.<li><p>Try different DB username (fails)<pre><code class=language-shell>testuser1-bash> psql -U postgres -p 6432 -h crunchy-testuser1
psql: error: could not connect to server: ERROR:  TLS certificate name mismatch
</code></pre><li><p>Copy client cert and key from user <code>testuser1</code> to different user's <code>~/.postgresql</code> and try connecting as that user (fails)<pre><code class=language-shell>testuser-bash> PGSSLMODE=verify-full psql -h crunchy-testuser1 -p 6432
psql: error: could not connect to server: ERROR:  TLS certificate name mismatch
</code></pre><p>but note this: logged in as user <code>testuser</code>, with the key and cert for user <code>testuser1</code> installed:<pre><code class=language-shell>testuser-bash> PGSSLMODE=verify-full psql -U testuser1 -h crunchy-testuser1 -p 6432
psql (12.1)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

testuser1=#
</code></pre></ol></ol><p>The moral for this test: Protect your key and password.<h3 id=note-on-this-test-environment><a href=#note-on-this-test-environment>Note on this test environment</a></h3><p>For this article, we're using a common configuration, where PgBouncer and PostgreSQL are both running on the same host, and we use only local connections (Unix domain sockets) from PgBouncer to PostgreSQL.<p>Here's the <code>pgbouncer.ini</code> from the test environment:<pre><code class=language-ini>[databases]
;; Three DB's, with PgBouncer connecting only on local/UDS
testuser1 =
template1 =
postgres =
[users]
[pgbouncer]
logfile = /var/log/postgresql/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
listen_addr = *
listen_port = 6432
unix_socket_dir = /var/run/postgresql
client_tls_sslmode = verify-full
client_tls_ca_file = /etc/pgbouncer/root.crt
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt
client_tls_ciphers = normal
auth_type = cert
auth_file = /etc/pgbouncer/userlist.txt
admin_users = testuser1
</code></pre><p>Contents of <code>/etc/pgbouncer/userlist.txt</code>:<pre><code class=language-shell>bash> cat /etc/pgbouncer/userlist.txt
"testuser1" "&#60hashed password from pg_shadow for user testuser1 here>"
"testuser2" "&#60hashed password from pg_shadow for user testuser2 here>"
</code></pre><p>And contents of <code>${PGDATA}/pg_hba.conf</code>. Note that this PostgreSQL instance is listening only on <code>localhost</code> and on a local Unix domain socket:<pre><code class=language-ini># Database administrative login by Unix domain socket
local   all             postgres                                peer
# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   all             testuser1                               md5
local   all             testuser2                               md5
# "local" is for Unix domain socket connections only
local   all             all                                     peer
#
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
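#
# sketch, not used in this local-only setup: if PgBouncer ran on another
# host and this DB server had SSL enabled, cert auth from PgBouncer could
# use a hostssl line like the following (user and address are placeholders)
# hostssl all           pgbouncer       192.0.2.0/24            cert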
</code></pre><p>Note that for user <code>postgres</code>, <code>peer</code> auth works because PgBouncer is running as user <code>postgres</code> in this environment.<p>For authenticating from PgBouncer to PostgreSQL, we're still relying on passwords (md5 hashed in this case).<h2 id=connection-from-pgbouncer-to-postgresql><a href=#connection-from-pgbouncer-to-postgresql>Connection from PgBouncer to PostgreSQL</a></h2><p>At this point, we have a secure transport <em>from the app client to PgBouncer</em>, the client-supplied DB username (which can be specified in the client) must match the CN in the client certificate, and, with <code>verify-full</code>, the hostname parameter of the client's connection string must match the CN in the server (PgBouncer) certificate. On the client/app side, you can adjust how strict validation is using the <code>PGSSLMODE</code> environment variable for apps that use <code>libpq</code>.<p>But we're still relying on the credentials in the <code>userlist.txt</code> file named by PgBouncer's <code>auth_file</code> parameter, together with PostgreSQL's <code>pg_hba.conf</code> file, to authenticate from PgBouncer to PostgreSQL.<p>In a follow-up blog post, we will describe how to reduce the overhead of managing passwords in the <code>auth_file</code> using the method described in Doug Hunley's post here: <a href=https://hunleyd.github.io/posts/pgbouncer-and-auth-pass-thru/>https://hunleyd.github.io/posts/pgbouncer-and-auth-pass-thru/</a><h2 id=certificate-authentication-from-pgbouncer-to-postgresql><a href=#certificate-authentication-from-pgbouncer-to-postgresql>Certificate Authentication from PgBouncer to PostgreSQL</a></h2><p>Another common PgBouncer configuration is where the PgBouncer service does not reside on the DB server.
It might be on a dedicated server, or there may be a PgBouncer service on each of several app or web servers.<p>In that case you will have one or more TCP connections from each PgBouncer service to the PostgreSQL server that can be secured using TLS transport as well as configured to use cert auth to the PostgreSQL DB. The configuration is very similar to the description above, but in this case you configure each PgBouncer service to have a client cert and use <code>hostssl ... cert</code> authentication in the PostgreSQL server, after you have enabled SSL in the DB server and configured TLS certs for it and the PgBouncer clients.<p>That's a topic for another blog post.<p>To prepare, and to enable SSL/TLS in the PostgreSQL DB, start with <a href=https://www.postgresql.org/docs/current/ssl-tcp.html>https://www.postgresql.org/docs/current/ssl-tcp.html</a> and <a href=https://www.crunchydata.com/blog/ssl-certificate-authentication-postgresql-docker-containers>https://www.crunchydata.com/blog/ssl-certificate-authentication-postgresql-docker-containers</a>. ]]></content:encoded>
<category><![CDATA[ Security ]]></category>
<author><![CDATA[ David.Youatt@crunchydata.com (David Youatt) ]]></author>
<dc:creator><![CDATA[ David Youatt ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/improving-pgbouncer-security-with-tlsssl</guid>
<pubDate>Mon, 16 Dec 2019 04:00:00 EST</pubDate>
<dc:date>2019-12-16T09:00:00.000Z</dc:date>
<atom:updated>2019-12-16T09:00:00.000Z</atom:updated></item></channel></rss>