<?xml version="1.0" encoding="UTF-8" ?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" version="2.0"><channel><title>David Steele | CrunchyData Blog</title>
<atom:link href="https://www.crunchydata.com/blog/author/david-steele/rss.xml" rel="self" type="application/rss+xml" />
<link>https://www.crunchydata.com/blog/author/david-steele</link>
<image><url>https://www.crunchydata.com/build/_assets/default.png-W4XGD4DB.webp</url>
<title>David Steele | CrunchyData Blog</title>
<link>https://www.crunchydata.com/blog/author/david-steele</link>
<width>256</width>
<height>256</height></image>
<description>PostgreSQL experts from Crunchy Data share advice, performance tips, and guides on successfully running PostgreSQL and Kubernetes solutions</description>
<language>en-us</language>
<pubDate>Fri, 09 Jun 2023 09:00:00 EDT</pubDate>
<dc:date>2023-06-09T13:00:00.000Z</dc:date>
<dc:language>en-us</dc:language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item><title><![CDATA[ pgBackRest File Bundling and Block Incremental Backup ]]></title>
<link>https://www.crunchydata.com/blog/pgbackrest-file-bundling-and-block-incremental-backup</link>
<description><![CDATA[ pgBackRest has some new features that allow you to bundle files in your backup repo and store block incremental backups. These features can really help with storage efficiency and performance. David has some sample code to help you get started. ]]></description>
<content:encoded><![CDATA[ <p>Crunchy Data is proud to support the pgBackRest project, an essential production grade backup tool used in our <a href=https://crunchybridge.com/>fully managed</a> and <a href=https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes>self managed</a> Postgres products. pgBackRest is also available as an open source project.<p><a href=https://github.com/pgbackrest/>pgBackRest</a> provides:<ul><li>Full, differential, and incremental backups<li>Checksum validation of backup integrity<li>Point-in-Time recovery</ul><p>pgBackRest recently released v2.46 with support for block incremental backup, which saves space in the repository by storing only changed parts of files. File bundling, released in v2.39, combines smaller files together for speed and cost savings, especially on object stores.<p>Efficiently storing backups is a major priority for the pgBackRest project but we also strive to balance this goal with backup and restore performance. The file bundling and block incremental backup features improve backup and, in many cases, restore performance while also saving space in the repository.<p>In this blog we will provide working examples to help you get started with these exciting features.<h4 id=file-bundling><a href=#file-bundling>File bundling</a></h4><ul><li>combines smaller files together<li>improves speed on object stores like S3, Azure, GCS</ul><h4 id=block-incremental-backup><a href=#block-incremental-backup>Block incremental backup</a></h4><ul><li>saves space by storing only changed file parts<li>improves efficiency of delta restore</ul><h3 id=sample-repository-set-up><a href=#sample-repository-set-up>Sample repository set up</a></h3><p>To demonstrate these features we will create two repositories. The first repository will use defaults. The second will have file bundling and block incremental backup enabled.<p>Configure both repositories:<pre><code class=language-ini>[global]
log-level-console=info
start-fast=y

repo1-path=/var/lib/pgbackrest/1
repo1-retention-full=2

repo2-path=/var/lib/pgbackrest/2
repo2-retention-full=2
repo2-bundle=y
repo2-block=y
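# Optional bundle tuning; the values shown are the documented defaults
# (illustrative only, the defaults usually suffice):
# repo2-bundle-size=20MiB
# repo2-bundle-limit=2MiB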

[demo]
pg1-path=/var/lib/postgresql/12/demo
</code></pre><p>Create the stanza on both repositories:<pre><code class=language-shell>pgbackrest --stanza=demo stanza-create
</code></pre><p>The block incremental backup feature is best demonstrated with a larger dataset. In particular, we would prefer to have at least one table that is near the maximum segment size of 1GB. This can be accomplished by creating data with <code>pgbench</code>:<pre><code class=language-shell>/usr/lib/postgresql/12/bin/pgbench -i -s 65
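# At scale factor 65 pgbench_accounts holds 6.5 million rows (roughly 850MB on
# disk, so close to the 1GB segment limit). The table's underlying file can be
# located with, e.g.: psql -c "SELECT pg_relation_filepath('pgbench_accounts')"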
</code></pre><p>PostgreSQL splits tables into segment files of 1GB each, so the main table that <code>pgbench</code> created above will be contained in a single file. The format PostgreSQL uses to store tables on disk will be important in the examples below.<h3 id=file-bundling-1><a href=#file-bundling-1>File bundling</a></h3><p>File bundling stores data in the repository more efficiently by combining smaller files together. This results in fewer files overall in the backup which improves the speed of all repository operations, especially on object stores like S3, Azure, and GCS. There may also be cost savings on repositories that have a cost per operation since there will be fewer lists, deletes, etc.<p>To demonstrate this we'll make a backup on repo1, which does not have bundling enabled:<pre><code class=language-shell>pgbackrest --stanza=demo --type=full --repo=1 backup
</code></pre><p>Now we check the number of files in repo1 for the latest backup:<pre><code class=language-shell>$ find /var/lib/pgbackrest/1/backup/demo/latest/ -type f | wc -l

991
</code></pre><p>This is pretty normal for a small database without bundling enabled since each file is stored separately. There are also a few metadata files that pgBackRest uses to track the backup.<p>Now we'll perform the same actions on repo2, which has file bundling enabled:<pre><code class=language-shell>$ pgbackrest --stanza=demo --type=full --repo=2 backup
$ find /var/lib/pgbackrest/2/backup/demo/latest/ -type f | wc -l

7
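
# The on-disk footprint of each backup can be compared with standard tools, e.g.:
$ du -sh /var/lib/pgbackrest/1/backup/demo/latest/ /var/lib/pgbackrest/2/backup/demo/latest/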
</code></pre><p>This time there are far fewer files. The small files have been bundled together and zero-length files are stored only in the manifest.<p>The <code>repo-bundle-size</code> option can be used to control the maximum size of bundles before compression and other operations are applied. The <code>repo-bundle-limit</code> option limits the size of files that will be added to bundles; files larger than this limit are stored individually. It is not a good idea to set these options too large because any failure in the bundle on backup or restore will require the entire bundle to be retried. The goal of file bundling is to combine small files -- there is very seldom any benefit in combining larger files.<h3 id=block-incremental-backup-1><a href=#block-incremental-backup-1>Block incremental backup</a></h3><p>Block incremental backup saves space in the repository by storing only the parts of the file that have changed since the last backup. The block size depends on the file size and when the file was last modified, i.e. larger, older files will get larger block sizes. Blocks are compressed and encrypted into super blocks that can be retrieved independently to make restore more efficient.<p>To demonstrate the block incremental feature, we need to make some changes to the database. With <code>pgbench</code> we can update 100 random rows in the main table, which is about 1GB in size.<pre><code class=language-shell>/usr/lib/postgresql/12/bin/pgbench -n -b simple-update -t 100
</code></pre><p>On repo1 the time to make an incremental backup is very similar to making a full backup. As previously discussed, PostgreSQL breaks tables up into 1GB segments so in our case the main table consists of a single file that contains most of the data in our database.<pre><code class=language-shell>$ pgbackrest --stanza=demo --type=incr --repo=1 backup

&#60...>
INFO: backup command end: completed successfully (12525ms)
</code></pre><p>Here we can see that the incremental backup is nearly as large as the full backup, 52.8MB vs 55.5MB. This is expected since the bulk of the database is contained in a single file and by default incremental backups copy the entire file if any part of the file has changed.<pre><code class=language-shell>$ pgbackrest --stanza=demo --repo=1 info

full backup: 20230520-082323F
    database size: 995.7MB, database backup size: 995.7MB
    repo1: backup size: 55.5MB

incr backup: 20230520-082323F_20230520-082934I
    database size: 995.7MB, database backup size: 972.8MB
    repo1: backup size: 52.8MB
</code></pre><p>However, on repo2 with block incremental enabled, the backup is significantly faster.<pre><code class=language-shell>$ pgbackrest --stanza=demo --type=incr --repo=2 backup

&#60...>
INFO: backup command end: completed successfully (3589ms)
</code></pre><p>The backup is also much smaller: 943KB vs 52.8MB on the repo without block incremental enabled. This is a more than 50x improvement in backup size! Note that the block incremental backup feature also works with differential backups.<pre><code class=language-shell>$ pgbackrest --stanza=demo --repo=2 info

full backup: 20230520-082438F
    database size: 995.7MB, database backup size: 995.7MB
    repo2: backup size: 56MB

incr backup: 20230520-082438F_20230520-083027I
    database size: 995.7MB, database backup size: 972.8MB
    repo2: backup size: 943.3KB
</code></pre><p>The block incremental feature also improves the efficiency of the delta restore command. Here we stop the cluster and perform a delta restore back to the full backup in repo 1:<pre><code class=language-shell>$ pg_ctlcluster 12 demo stop
$ pgbackrest --stanza=demo --delta --repo=1 --set=20230526-053458F restore

&#60...>
INFO: restore command end: completed successfully (3697ms)
</code></pre><p>As we saw above the main table is contained in a single file, so the restore must copy and decompress the entire file from repo 1 (compressed size 30.4MB) because it was changed since the full backup.<p>To test a delta restore of the full backup in repo 2 we need to first restore the cluster to the most recent backup in repo 2:<pre><code class=language-shell>pgbackrest --stanza=demo --delta --repo=2 restore
</code></pre><p>And then perform a delta restore back to the full backup in repo 2:<pre><code class=language-shell>$ pgbackrest --stanza=demo --delta --repo=2 --set=20230526-053406F restore

&#60...>
INFO: restore command end: completed successfully (1536ms)
</code></pre><p>This is noticeably faster even on our fairly small demo database. When storage latency is high (e.g. S3) the performance improvement will be more pronounced. With block incremental enabled, delta restore only had to copy 3.5MB of the main table file from repo 2, as compared to 30.4MB from repo 1.<p>It is best to avoid long chains of block incremental backups since they can have a negative impact on restore performance. In this case pgBackRest may be forced to pull from many backups to restore a file.<h3 id=conclusion><a href=#conclusion>Conclusion</a></h3><p>Block incremental and file bundling both help make backup and restore more efficient and they are a powerful combination when used together. In general you should consider enabling both on all your repositories, with the caveat that these features are not backward compatible with older versions of pgBackRest. ]]></content:encoded>
<category><![CDATA[ Production Postgres ]]></category>
<author><![CDATA[ David.Steele@crunchydata.com (David Steele) ]]></author>
<dc:creator><![CDATA[ David Steele ]]></dc:creator>
<guid isPermalink="false">1ee4ea5d69bfce3b2b37a41a5a897d1d5e93da72894bdd89457f623ac4b791d2</guid>
<pubDate>Fri, 09 Jun 2023 09:00:00 EDT</pubDate>
<dc:date>2023-06-09T13:00:00.000Z</dc:date>
<atom:updated>2023-06-09T13:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Introducing pgBackRest Multiple Repository Support ]]></title>
<link>https://www.crunchydata.com/blog/introducing-pgbackrest-multiple-repository-support</link>
<description><![CDATA[ The pgBackRest team is pleased to announce the introduction of multiple repository support in v2.33. Backups already provide redundancy by creating an offline copy of your PostgreSQL cluster that can be used in disaster recovery. ]]></description>
<content:encoded><![CDATA[ <p>The pgBackRest team is pleased to announce the introduction of multiple repository support in v2.33. Backups already provide redundancy by creating an offline copy of your PostgreSQL cluster that can be used in disaster recovery. Multiple repositories allow you to have copies of your backups and WAL archives in separate locations to increase your redundancy and provide even more protection for your data. This feature is the culmination of many months of hard work, so let's delve into why we think multiple repositories are so important and how they can help preserve your data.<p>If you are unfamiliar with <a href=https://pgbackrest.org/>pgBackRest</a>, general repository configuration, or configuring PostgreSQL to work with pgBackRest, please read the <a href=https://pgbackrest.org/user-guide-rhel.html#quickstart>pgBackRest Quick Start</a> before proceeding.<h2 id=configuration><a href=#configuration>Configuration</a></h2><p>Up to four repositories may be configured and each one can be any repo type, e.g. S3 or Posix. The configuration below defines two repositories. <code>repo1</code> is Posix and stored on a locally-mounted NFS volume; the repository should always be located off the PostgreSQL server in case disaster strikes. <code>repo2</code> is stored on Azure.<pre><code class=language-shell>$ cat /etc/pgbackrest/pgbackrest.conf

[demo]
pg1-path=/var/lib/postgresql/13/demo

[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2

repo2-type=azure
repo2-azure-account=pgbackrest
repo2-azure-container=demo-container
repo2-azure-key=YXpLZXk=
repo2-path=/demo-repo
repo2-retention-full=8
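
# Up to four repositories may be configured; a third could be added in the
# same way (hypothetical S3 example):
# repo3-type=s3
# repo3-s3-bucket=demo-bucket
# repo3-s3-endpoint=s3.amazonaws.com
# repo3-s3-region=us-east-1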
</code></pre><p>Note that the retention has been configured differently on each repository. <code>repo1</code> has a shorter retention to save space but still provide several backups and plenty of WAL on local storage that is both fast and cheap (in terms of bandwidth cost) to access. <code>repo2</code> has a longer retention and is stored in Azure where storage is cheap but retrieving data is both slower and more expensive than <code>repo1</code>. <code>repo2</code> is both a fail-safe in case something goes wrong with <code>repo1</code> and a resource to restore backups from remote sites.<p>pgBackRest will treat <code>repo1</code> with higher priority than <code>repo2</code> for certain commands like restore and archive-get. In general, the lower-numbered repositories should be faster and/or cheaper than the higher-numbered repositories.<h2 id=usage><a href=#usage>Usage</a></h2><p>Now that the repositories are configured, run <code>stanza-create</code>. This will initialize the <code>demo</code> stanza in each repository.<pre><code class=language-shell>$ pgbackrest --stanza=demo stanza-create

&#60...>
INFO: stanza-create for stanza 'demo' on repo1
INFO: stanza-create for stanza 'demo' on repo2
&#60...>

</code></pre><p>Once the repositories have been initialized it is a good idea to run <code>check</code> to ensure that everything is working. Note that WAL segments are being pushed to both repositories.<pre><code class=language-shell>$ pgbackrest --stanza=demo check

&#60...>
INFO: check repo1 configuration (primary)
INFO: check repo2 configuration (primary)
INFO: check repo1 archive for WAL (primary)
INFO: WAL segment 000000010000000000000003 successfully archived to '/var/lib/pgbackrest/archive/demo/13-1/0000000100000000/000000010000000000000003-c981b4ddc8c1437c539eda05427a6aa454a0923e.gz' on repo1
INFO: check repo2 archive for WAL (primary)
INFO: WAL segment 000000010000000000000003 successfully archived to '/demo-repo/archive/demo/13-1/0000000100000000/00000001000000000000000
</code></pre><p>Now a backup should be run for each repository. Most commands operate automatically on all repos but backup requires the repo to be specified. Each repo is likely to have different retention and backup schedules so backups should be run independently.<pre><code class=language-shell>pgbackrest --stanza=demo --repo=1 backup
pgbackrest --stanza=demo --repo=2 backup
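
# Each repo typically gets its own schedule, e.g. via cron (illustrative):
# 30 06 * * 0   pgbackrest --stanza=demo --repo=1 --type=full backup
# 30 06 * * 1-6 pgbackrest --stanza=demo --repo=1 --type=diff backup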
</code></pre><p>Info for the repositories is shown in a unified fashion with the organizing unit being the stanza. This way all the backups for a stanza can be seen together and the most recent backup easily identified since the list is sorted oldest to newest.<pre><code class=language-shell>$ pgbackrest info

stanza: demo
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 000000010000000000000003/000000010000000000000006

        full backup: 20210331-155726F
            timestamp start/stop: 2021-03-31 15:57:26 / 2021-03-31 15:57:30
            wal start/stop: 000000010000000000000005 / 000000010000000000000005
            database size: 23.1MB, database backup size: 23.1MB
            repo1: backup set size: 2.8MB, backup size: 2.8MB

        full backup: 20210331-155736F
            timestamp start/stop: 2021-03-31 15:57:36 / 2021-03-31 15:57:41
            wal start/stop: 000000010000000000000006 / 000000010000000000000006
            database size: 23.1MB, database backup size: 23.1MB
            repo2: backup set size: 2.8MB, backup size: 2.8MB
</code></pre><p>It is also possible to get info for a single repository by specifying the <code>--repo</code> option.<pre><code class=language-shell>$ pgbackrest --repo=2 info

stanza: demo
    status: ok
    cipher: none

    db (current)
        wal archive min/max (13): 000000010000000000000003/000000010000000000000006

        full backup: 20210331-155736F
            timestamp start/stop: 2021-03-31 15:57:36 / 2021-03-31 15:57:41
            wal start/stop: 000000010000000000000006 / 000000010000000000000006
            database size: 23.1MB, database backup size: 23.1MB
            repo2: backup set size: 2.8MB, backup size: 2.8MB

</code></pre><p>When restoring, pgBackRest will automatically select the best backup from the repositories based on your criteria. Here is an example of time-based recovery:<pre><code class=language-shell>pgbackrest --stanza=demo --delta --type=time --target="2021-03-31 15:57:31-04" restore

&#60...>
INFO: repo1: restore backup set 20210331-155726F
&#60...>
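# To force a particular repository instead, specify --repo explicitly:
$ pgbackrest --stanza=demo --delta --repo=2 restore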
</code></pre><p>In this case only the backup from <code>repo1</code> was valid because of the time restriction. Even if a later time was specified that would seem to favor the later backup in <code>repo2</code>, the same backup from <code>repo1</code> will be selected. This is because pgBackRest tends to prefer the repo with higher priority, i.e. repo1 over repo2, under the assumption that it will be faster and/or cheaper than a repository with lower priority.<p>If you want to restore from a specific repository then simply specify the preferred repository using <code>--repo</code>. The <code>archive-get</code> command generated for recovery will always search all repositories in priority order for WAL segments.<h2 id=conclusion><a href=#conclusion>Conclusion</a></h2><p>Multiple repositories allow for more redundancy as well as cost savings and performance improvements by allowing a repository to be located close to the PostgreSQL clusters while also having a repository located safely out in the cloud or in another data center.<p>For more information about multiple repository support, see the <a href=https://pgbackrest.org/user-guide-rhel.html#multi-repo>User Guide</a>. ]]></content:encoded>
<category><![CDATA[ Production Postgres ]]></category>
<author><![CDATA[ David.Steele@crunchydata.com (David Steele) ]]></author>
<dc:creator><![CDATA[ David Steele ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/introducing-pgbackrest-multiple-repository-support</guid>
<pubDate>Fri, 09 Apr 2021 05:00:00 EDT</pubDate>
<dc:date>2021-04-09T09:00:00.000Z</dc:date>
<atom:updated>2021-04-09T09:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ pgBackRest - Reliable PostgreSQL Backup & Restore ]]></title>
<link>https://www.crunchydata.com/blog/pgbackrest-reliable-postgresql-backup-restore</link>
<description><![CDATA[ Learn about pgBackRest, an open source tool for performing PostgreSQL backup and restore in an efficient and effective way. ]]></description>
<content:encoded><![CDATA[ <p>In our ongoing series of blog posts designed to help you better run, manage, and support PostgreSQL, today we have a post discussing pgBackRest, a powerful open source tool for managing backups and restores of PostgreSQL databases...<h2 id=introduction><a href=#introduction>Introduction</a></h2><p><a href=https://www.pgbackrest.org/>pgBackRest</a> aims to be a simple, reliable backup and restore system that can seamlessly scale up to the largest databases and workloads.<p>Instead of relying on traditional backup tools like tar and rsync, pgBackRest implements all backup features internally and uses a custom protocol for communicating with remote systems. Removing reliance on tar and rsync allows for better solutions to database-specific backup challenges. The custom remote protocol allows for more flexibility and limits the types of connections that are required to perform a backup which increases security.<h2 id=features><a href=#features>Features</a></h2><h3 id=multi-process-backup--restore><a href=#multi-process-backup--restore>Multi-process Backup &#38 Restore</a></h3><p>Compression is usually the bottleneck during backup operations but, even with now ubiquitous multi-core servers, most database backup solutions are still single-process. pgBackRest solves the compression bottleneck with multi-processing.<p>Utilizing multiple cores for compression makes it possible to achieve 1TB/hr raw throughput even on a 1Gb/s link. More cores and a faster network lead to even higher throughput.<h3 id=local-or-remote-operation><a href=#local-or-remote-operation>Local or Remote Operation</a></h3><p>A custom protocol allows pgBackRest to backup, restore, and archive locally or remotely via SSH with minimal configuration. 
An interface to query PostgreSQL is also provided via the protocol layer so that remote access to PostgreSQL is never required, which enhances security.<h3 id=full-incremental--differential-backups><a href=#full-incremental--differential-backups>Full, Incremental, &#38 Differential Backups</a></h3><p>Full, differential, and incremental backups are supported. pgBackRest is not susceptible to the time resolution issues of rsync, making differential and incremental backups completely safe.<h3 id=backup-from-a-standby-cluster><a href=#backup-from-a-standby-cluster>Backup from a Standby Cluster</a></h3><p>Performing backups on a standby host greatly reduces CPU and IO load on the master host. pgBackRest copies the majority of the files from the standby and only a few from the master, while still producing a backup exactly as if it were performed entirely on the master.<h3 id=backup-rotation--archive-expiration><a href=#backup-rotation--archive-expiration>Backup Rotation &#38 Archive Expiration</a></h3><p>Retention policies can be set for full and differential backups to create coverage for any timeframe. The WAL archive can be maintained for all backups or strictly for the most recent backups. In the latter case WAL required to make older backups consistent will be maintained in the archive.<h3 id=backup-integrity><a href=#backup-integrity>Backup Integrity</a></h3><p>Checksums are calculated for every file in the backup and rechecked during a restore. After a backup finishes copying files, it waits until every WAL segment required to make the backup consistent reaches the repository.<p>Backups in the repository are stored in the same format as a standard PostgreSQL cluster (including tablespaces). If compression is disabled and hard links are enabled it is possible to snapshot a backup in the repository and bring up a PostgreSQL cluster directly on the snapshot. 
This is advantageous for terabyte-scale databases that are time-consuming to restore in the traditional way.<p>All operations utilize file and directory level fsync to ensure durability.<h3 id=backup-resume><a href=#backup-resume>Backup Resume</a></h3><p>An aborted backup can be resumed from the point where it was stopped. Files that were already copied are compared with the checksums in the manifest to ensure integrity. Since this operation can take place entirely on the backup server, it reduces load on the database server and saves time since checksum calculation is faster than compressing and retransmitting data.<h3 id=streaming-compression--checksums><a href=#streaming-compression--checksums>Streaming Compression &#38 Checksums</a></h3><p>Compression and checksum calculations are performed in stream while files are being copied to the repository, whether the repository is located locally or remotely.<p>If the repository is on a backup server, compression is performed on the database server and files are transmitted in a compressed format and simply stored on the backup server. When repository compression is disabled, a lower level of compression is applied to the network transfer to make efficient use of available bandwidth while keeping CPU cost to a minimum.<h3 id=delta-restore><a href=#delta-restore>Delta Restore</a></h3><p>The manifest contains checksums for every file in the backup so that during a restore it is possible to use these checksums to speed processing enormously. On a delta restore any files not present in the backup are first removed and then checksums are taken for the remaining files. Files that match the backup are left in place and the rest of the files are restored as usual. 
Multi-processing can lead to a dramatic reduction in restore times.<h3 id=advanced-archiving><a href=#advanced-archiving>Advanced Archiving</a></h3><p>Dedicated commands are included for both pushing WAL to the archive and retrieving WAL from the archive.<p>The push command automatically detects WAL segments that are pushed multiple times and de-duplicates when the segment is identical, otherwise an error is raised. The push and get commands both ensure that the database and repository match by comparing PostgreSQL versions and system identifiers. This precludes the possibility of misconfiguring the WAL archive location.<p>Asynchronous archiving allows compression and transfer to be offloaded to another process which maintains a continuous connection to the remote server, improving throughput significantly. This can be a critical feature for databases with extremely high write volume.<h3 id=selective-restore><a href=#selective-restore>Selective Restore</a></h3><p>Selected databases can be restored from a cluster backup to save space when not all the databases are required. WAL replay during restore takes place for all databases so some space will be used, but generally far less than if the unneeded databases were restored completely. After recovery completes the unrestored databases will not be accessible but can be dropped in the usual way.<h3 id=tablespace--link-support><a href=#tablespace--link-support>Tablespace &#38 Link Support</a></h3><p>Tablespaces are fully supported and on restore tablespaces can be remapped to any location. It is also possible to remap all tablespaces to one location with a single command, which is useful for development restores.<p>File and directory links are supported for any file or directory in the PostgreSQL cluster. 
When restoring it is possible to restore all links to their original locations, remap some or all links, or restore some or all links as normal files or directories within the cluster directory.<h2 id=compatibility-with-postgresql--83><a href=#compatibility-with-postgresql--83>Compatibility with PostgreSQL >= 8.3</a></h2><p>pgBackRest includes support for versions down to 8.3, since older versions of PostgreSQL are still regularly utilized.<h2 id=additional-resources><a href=#additional-resources>Additional Resources</a></h2><ul><li>Download pgBackRest <a href=https://github.com/pgbackrest/pgbackrest/releases>here</a>.<li>Documentation for pgBackRest can be found <a href=https://www.pgbackrest.org/>here</a>.<li>User Guide for pgBackRest is <a href=https://www.pgbackrest.org/user-guide.html>here</a>.</ul><p>Photo Credit: <a href=https://commons.wikimedia.org/wiki/User:Evan-Amos>Evan-Amos</a> ]]></content:encoded>
<category><![CDATA[ Production Postgres ]]></category>
<author><![CDATA[ David.Steele@crunchydata.com (David Steele) ]]></author>
<dc:creator><![CDATA[ David Steele ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/pgbackrest-reliable-postgresql-backup-restore</guid>
<pubDate>Tue, 20 Sep 2016 05:00:00 EDT</pubDate>
<dc:date>2016-09-20T09:00:00.000Z</dc:date>
<atom:updated>2016-09-20T09:00:00.000Z</atom:updated></item></channel></rss>