PostgreSQL Build Farm Log

Details for system "bowerbird" failure at stage recoveryCheck, snapshot taken 2018-03-10 20:30:20

System Information

Farm member:  bowerbird
Branch:       HEAD
OS:           Windows 10 Pro
Compiler:     Visual Studio 2012
Architecture: x86_64
Owner:        andrew [ a t ] dunslane.net


Stage Logs

Total run time: 01:07:15

SCM-checkout             00:00:03
configure                00:01:50
make                     00:11:00
check                    00:03:36
make-install             00:00:44
check-pg_upgrade         00:07:51
initdb-check             00:00:28
pg_archivecleanup-check  00:00:02
pg_basebackup-check      00:04:58
pg_config-check          00:00:01
pg_controldata-check     00:00:24
pg_ctl-check             00:02:12
pg_dump-check            00:03:58
pg_rewind-check          00:09:30
pgbench-check            00:02:51
scripts-check            00:07:46
recovery-check           00:10:01

Configuration summary

$Script_Config = {
                   'steps_completed' => [
                                          'SCM-checkout',
                                          'Configure',
                                          'Make',
                                          'Check',
                                          'Install',
                                          'pg_upgradeCheck',
                                          'initdbCheck',
                                          'pg_archivecleanupCheck',
                                          'pg_basebackupCheck',
                                          'pg_configCheck',
                                          'pg_controldataCheck',
                                          'pg_ctlCheck',
                                          'pg_dumpCheck',
                                          'pg_rewindCheck',
                                          'pgbenchCheck',
                                          'scriptsCheck'
                                        ],
                   'bf_perl_version' => '5.16.2',
                   'config_opts' => {
                                      'asserts' => 1,
                                      'krb5' => undef,
                                      'iconv' => 'c:\\prog\\3p64',
                                      'python' => 'c:\\python27',
                                      'ldap' => 0,
                                      'xslt' => 'c:\\prog\\3p64',
                                      'tap_tests' => 1,
                                      'perl' => 'c:\\perl64',
                                      'openssl' => 'c:\\prog\\3p64\\openssl',
                                      'xml' => 'c:\\prog\\3p64',
                                      'tcl' => undef,
                                      'integer_datetimes' => 1,
                                      'zlib' => 'c:\\prog\\3p64',
                                      'nls' => undef
                                    },
                   'target' => 'https://buildfarm.postgresql.org/cgi-bin/pgstatus.pl',
                   'make_jobs' => undef,
                   'web_script_version' => 'REL_6.1',
                   'current_ts' => 1520717856,
                   'script_version' => 'REL_6.1',
                   'git_ignore_mirror_failure' => 1,
                   'build_env' => {
                                    'Framework35Version' => 'v3.5',
                                    'Platform' => 'x64',
                                    'VisualStudioVersion' => '11.0',
                                    'VCINSTALLDIR' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC',
                                    'DevEnvDir' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\Common7\\IDE',
                                    'FrameworkVersion32' => 'v4.0.30319',
                                    'TEMPDIR' => 'c:/prog/bf/root/tmp',
                                    'FrameworkDIR64' => 'C:\\windows\\Microsoft.NET\\Framework64',
                                    'VSINSTALLDIR' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 11.0',
                                    'LIBPATH' => 'C:\\windows\\Microsoft.NET\\Framework64\\v4.0.30319i;C:\\windows\\Microsoft.NET\\Framework64\\v3.5;C:\\windows\\Microsoft.NET\\Framework\\v4.0.30319;C:\\windows\\Microsoft.NET\\Framework\\v3.5;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\LIB\\amd64;C:\\Program Files (x86)\\Windows Kits\\8.0\\References\\CommonConfiguration\\Neutral;C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v8.0\\ExtensionSDKs\\Microsoft.VCLibs\\11.0\\References\\CommonConfiguration\\neutral',
                                    'WindowsSdkDir' => 'C:\\Program Files (x86)\\Windows Kits\\8.0',
                                    'PERL5LIB' => 'c:/prog/bf/p5lib',
                                    'PATH' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\Common7\\IDE\\CommonExtensions\\Microsoft\\TestWindow;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\Common7\\IDE;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\BIN\\x86_amd64;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\BIN;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\Common7\\Tools;C:\\windows\\Microsoft.NET\\Framework\\v4.0.30319;C:\\windows\\Microsoft.NET\\Framework\\v3.5;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\VCPackages;C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\Team Tools\\Performance Tools;C:\\Program Files (x86)\\Windows Kits\\8.0\\bin\\x86;C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v8.0A\\bin\\NETFX 4.0 Tools;C:\\Perl64\\Bin;c:\\prog\\3p64\\bin;c:\\prog\\git\\cmd;c:\\prog\\git\\bin;c:\\perl64\\bin;c:\\perl64\\bin;C:\\ProgramData\\Oracle\\Java\\javapath;c:\\Program Files (x86)\\Intel\\iCLS Client\\;c:\\Program Files\\Intel\\iCLS Client\\;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files (x86)\\Intel\\OpenCL SDK\\2.0\\bin\\x86;C:\\Program Files (x86)\\Intel\\OpenCL SDK\\2.0\\bin\\x64;C:\\Program Files (x86)\\Windows Live\\Shared;C:\\prog\\Git\\cmd;C:\\Program Files\\Microsoft SQL Server\\110\\Tools\\Binn\\;C:\\vagrant\\vagrant\\bin;C:\\Program Files (x86)\\Microsoft SQL Server\\110\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\110\\DTS\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\110\\Tools\\Binn\\ManagementStudio\\;C:\\Program Files (x86)\\Microsoft SQL Server\\110\\DTS\\Binn\\;C:\\Program Files\\jEdit;C:\\Program Files (x86)\\GtkSharp\\2.12\\bin;%USERPROFILE%\\.dnx\\bin;C:\\Program Files\\Microsoft DNX\\Dnvm\\;C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\Binn\\;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files (x86)\\Skype\\Phone\\;C:\\Users\\pgrunner\\AppData\\Local\\Microsoft\\WindowsApps',
                                    'INCLUDE' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\INCLUDE;C:\\Program Files (x86)\\Windows Kits\\8.0\\include\\shared;C:\\Program Files (x86)\\Windows Kits\\8.0\\include\\um;C:\\Program Files (x86)\\Windows Kits\\8.0\\include\\winrt',
                                    'WindowsSdkDir_old' => 'C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v8.0A',
                                    'ExtensionSdkDir' => 'C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v8.0\\ExtensionSDKs',
                                    'LIB' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC\\LIB\\amd64;C:\\Program Files (x86)\\Windows Kits\\8.0\\lib\\win8\\um\\x64',
                                    'FrameworkVersion64' => 'v4.0.30319',
                                    'FrameworkDIR32' => 'C:\\windows\\Microsoft.NET\\Framework',
                                    'FrameworkVersion' => 'v4.0.30319',
                                    'INSTALLDIR' => 'C:\\Program Files (x86)\\Microsoft Visual Studio 11.0\\VC',
                                    'FrameworkDir' => 'C:\\windows\\Microsoft.NET\\Framework'
                                  },
                   'git_use_workdirs' => 1,
                   'base_port' => 5678,
                   'force_every' => {},
                   'ccache_failure_remove' => undef,
                   'locales' => [
                                  'C'
                                ],
                   'trigger_include' => undef,
                   'optional_steps' => {},
                   'invocation_args' => [
                                          '--skip-steps',
                                          'ecpg-check',
                                          '--verbose',
                                          '--config',
                                          'bowerbird.conf',
                                          'HEAD'
                                        ],
                   'modules' => [
                                  'TestUpgrade'
                                ],
                   'keep_error_builds' => 0,
                   'use_vpath' => undef,
                   'orig_env' => {
                                   'USERPROFILE' => 'xxxxxx',
                                   'COMMONPROGRAMFILES' => 'xxxxxx',
                                   'VBOX_MSI_INSTALL_PATH' => 'xxxxxx',
                                   'APPDATA' => 'xxxxxx',
                                   'PROGRAMFILES' => 'xxxxxx',
                                   'OS' => 'xxxxxx',
                                   'LOCALAPPDATA' => 'xxxxxx',
                                   'PATHEXT' => 'xxxxxx',
                                   'PROGRAMW6432' => 'xxxxxx',
                                   'VS120COMNTOOLS' => 'xxxxxx',
                                   'PROMPT' => 'xxxxxx',
                                   'ONLINESERVICES' => 'xxxxxx',
                                   'VS140COMNTOOLS' => 'xxxxxx',
                                   'NUMBER_OF_PROCESSORS' => 'xxxxxx',
                                   'FP_NO_HOST_CHECK' => 'xxxxxx',
                                   'COMMONPROGRAMFILES(X86)' => 'xxxxxx',
                                   'PATH' => 'c:\\perl64\\bin;C:\\ProgramData\\Oracle\\Java\\javapath;c:\\Program Files (x86)\\Intel\\iCLS Client\\;c:\\Program Files\\Intel\\iCLS Client\\;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files (x86)\\Intel\\OpenCL SDK\\2.0\\bin\\x86;C:\\Program Files (x86)\\Intel\\OpenCL SDK\\2.0\\bin\\x64;C:\\Program Files (x86)\\Windows Live\\Shared;C:\\prog\\Git\\cmd;C:\\Program Files\\Microsoft SQL Server\\110\\Tools\\Binn\\;C:\\vagrant\\vagrant\\bin;C:\\Program Files (x86)\\Microsoft SQL Server\\110\\Tools\\Binn\\;C:\\Program Files\\Microsoft SQL Server\\110\\DTS\\Binn\\;C:\\Program Files (x86)\\Microsoft SQL Server\\110\\Tools\\Binn\\ManagementStudio\\;C:\\Program Files (x86)\\Microsoft SQL Server\\110\\DTS\\Binn\\;C:\\Program Files\\jEdit;C:\\Program Files (x86)\\GtkSharp\\2.12\\bin;%USERPROFILE%\\.dnx\\bin;C:\\Program Files\\Microsoft DNX\\Dnvm\\;C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\Binn\\;C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\;C:\\Program Files (x86)\\Skype\\Phone\\;C:\\Users\\pgrunner\\AppData\\Local\\Microsoft\\WindowsApps',
                                   'OPENSSL_CONF' => 'xxxxxx',
                                   'USERDOMAIN' => 'xxxxxx',
                                   'PROGRAMDATA' => 'xxxxxx',
                                   'PROCESSOR_ARCHITECTURE' => 'xxxxxx',
                                   'PSMODULEPATH' => 'xxxxxx',
                                   'TEMP' => 'xxxxxx',
                                   'PROCESSOR_REVISION' => 'xxxxxx',
                                   'SYSTEMDRIVE' => 'xxxxxx',
                                   'SYSTEMROOT' => 'xxxxxx',
                                   'COMMONPROGRAMW6432' => 'xxxxxx',
                                   'COMSPEC' => 'xxxxxx',
                                   'WINDIR' => 'xxxxxx',
                                   'VS110COMNTOOLS' => 'xxxxxx',
                                   'PROCESSOR_LEVEL' => 'xxxxxx',
                                   'PUBLIC' => 'xxxxxx',
                                   'USERNAME' => 'xxxxxx',
                                   'PROCESSOR_IDENTIFIER' => 'xxxxxx',
                                   'PLATFORM' => 'xxxxxx',
                                   'PROGRAMFILES(X86)' => 'xxxxxx',
                                   'GTK_BASEPATH' => 'xxxxxx',
                                   'COMPUTERNAME' => 'xxxxxx',
                                   'ALLUSERSPROFILE' => 'xxxxxx',
                                   'TMP' => 'xxxxxx'
                                 },
                   'scm' => 'git',
                   'git_keep_mirror' => 1,
                   'extra_config' => {
                                       'DEFAULT' => [
                                                      'log_line_prefix = \'%m [%p:%l] %q%a \'',
                                                      'log_connections = \'true\'',
                                                      'log_disconnections = \'true\'',
                                                      'log_statement = \'all\'',
                                                      'fsync = off'
                                                    ]
                                     },
                   'module_versions' => {
                                          'PGBuild::SCM' => 'REL_6.1',
                                          'PGBuild::Modules::TestUpgrade' => 'REL_6.1',
                                          'PGBuild::Options' => 'REL_6.1',
                                          'PGBuild::Utils' => 'REL_6.1',
                                          'PGBuild::WebTxn' => 'REL_6.1'
                                        },
                   'wait_timeout' => undef,
                   'mail_events' => {
                                      'green' => [],
                                      'change' => [],
                                      'fail' => [],
                                      'all' => []
                                    },
                   'scmrepo' => undef,
                   'use_git_cvsserver' => undef,
                   'tar_log_cmd' => undef,
                   'make' => 'make',
                   'core_file_glob' => 'core*',
                   'alerts' => {},
                   'build_root' => '/prog/bf/root',
                   'git_ignore_errors' => 1,
                   'using_msvc' => 'true',
                   'config_env' => {
                                     'CC' => 'ccache gcc'
                                   },
                   'trigger_exclude' => qr/(?^:^doc\/|\.po$)/,
                   'use_default_ccache_dir' => 1,
                   'scm_url' => undef,
                   'upgrade_target' => 'https://buildfarm.postgresql.org/cgi-bin/upgrade.pl',
                   'animal' => 'bowerbird'
                 };
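The 'invocation_args' entry in the configuration summary above records how this run was launched. A hypothetical reconstruction of the command line (assuming the standard buildfarm client entry point, run_build.pl, run from a directory containing bowerbird.conf):

```shell
# Hypothetical reconstruction of this run's invocation, based solely on
# the 'invocation_args' recorded in the configuration summary above.
# run_build.pl is the standard PostgreSQL buildfarm client script.
perl run_build.pl \
    --skip-steps ecpg-check \
    --verbose \
    --config bowerbird.conf \
    HEAD
```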

Files changed this run

src/bin/psql/tab-complete.c
Git log

Change Set for this build

b6e132ddc8 Sat Mar 10 18:18:21 2018 UTC  In psql, restore old behavior of Query_for_list_of_functions.

Files changed since last success

not recorded

Log

Last file mtime in snapshot: Sat Mar 10 18:18:21 2018 GMT
===================================================
[16:30:02] t/001_stream_rep.pl .................. ok   148130 ms
[16:31:03] t/002_archiving.pl ................... ok    61253 ms
[16:34:35] t/003_recovery_targets.pl ............ ok   211573 ms
[16:35:56] t/004_timeline_switch.pl ............. ok    80426 ms
[16:37:05] t/005_replay_delay.pl ................ ok    69472 ms
Bailout called.  Further testing stopped:  system pg_ctl failed
FAILED--Further testing stopped: system pg_ctl failed
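The bailout above comes from the recovery TAP suite, which stopped after t/005_replay_delay.pl when a pg_ctl call failed. On an MSVC animal such as this one, that stage can in principle be re-run by hand from the source tree (a sketch; assumes the MSVC tooling of this era and that tap_tests is enabled, as in the configuration above):

```shell
rem Hypothetical manual re-run of the failing stage on an MSVC build;
rem src/tools/msvc/vcregress.pl drives the TAP suites on Windows.
cd src\tools\msvc
perl vcregress.pl recoverycheck
```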


================== pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_master.log ===================
2018-03-10 16:27:57.234 EST [1716] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:27:57.236 EST [1716] LOG:  listening on IPv4 address "127.0.0.1", port 58233
2018-03-10 16:27:57.266 EST [1716] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:27:57.266 EST [1716] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:27:57.266 EST [1716] DEBUG:  disabling huge pages
2018-03-10 16:27:57.395 EST [6560] LOG:  database system was shut down at 2018-03-10 16:27:56 EST
2018-03-10 16:27:57.395 EST [6560] DEBUG:  checkpoint record is at 0/15CF4D8
2018-03-10 16:27:57.396 EST [6560] DEBUG:  redo record is at 0/15CF4D8; shutdown true
2018-03-10 16:27:57.422 EST [6560] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:27:57.422 EST [6560] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:27:57.422 EST [6560] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:27:57.422 EST [6560] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:27:57.422 EST [6560] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:27:57.422 EST [6560] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:27:57.422 EST [6560] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:27:57.422 EST [6560] DEBUG:  starting up replication slots
2018-03-10 16:27:57.424 EST [6560] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:27:57.424 EST [6560] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:27:57.583 EST [6980] DEBUG:  autovacuum launcher started
2018-03-10 16:27:57.591 EST [1716] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:27:57.619 EST [1716] LOG:  database system is ready to accept connections
2018-03-10 16:27:57.634 EST [6964] DEBUG:  logical replication launcher started
2018-03-10 16:27:57.836 EST [12484] 001_stream_rep.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:27:57.837 EST [12484] 001_stream_rep.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:27:57.837 EST [12484] 001_stream_rep.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:27:57.891 EST [11560] DEBUG:  performing replication slot checkpoint
2018-03-10 16:27:58.351 EST [10368] 001_stream_rep.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_10368" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:27:58.393 EST [10368] 001_stream_rep.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:27:58.393 EST [10368] 001_stream_rep.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_10368" 0/2000000 TIMELINE 1
2018-03-10 16:27:58.400 EST [10368] 001_stream_rep.pl DEBUG:  standby "001_stream_rep.pl" has now caught up with primary
2018-03-10 16:28:08.809 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:28:08.820 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:28:08.821 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:28:08.821 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:28:08.822 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:28:08.824 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:28:08.824 EST [12484] 001_stream_rep.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:28:08.835 EST [12484] 001_stream_rep.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:28:08.836 EST [12484] 001_stream_rep.pl DEBUG:  file "postmaster.pid" excluded from backup
2018-03-10 16:28:28.415 EST [6756] standby_1 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:28:28.415 EST [6756] standby_1 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:28:28.415 EST [6756] standby_1 DEBUG:  standby "standby_1" has now caught up with primary
2018-03-10 16:28:49.670 EST [1716] LOG:  received fast shutdown request
2018-03-10 16:28:49.690 EST [1716] LOG:  aborting any active transactions
2018-03-10 16:28:49.691 EST [6964] DEBUG:  logical replication launcher shutting down
2018-03-10 16:28:49.691 EST [6980] DEBUG:  autovacuum launcher shutting down
2018-03-10 16:28:49.693 EST [1716] LOG:  background worker "logical replication launcher" (PID 6964) exited with exit code 1
2018-03-10 16:28:49.696 EST [11560] LOG:  shutting down
2018-03-10 16:28:49.712 EST [11560] DEBUG:  performing replication slot checkpoint
2018-03-10 16:28:49.883 EST [1716] LOG:  database system is shut down
2018-03-10 16:29:11.457 EST [6508] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:29:11.459 EST [6508] LOG:  listening on IPv4 address "127.0.0.1", port 58233
2018-03-10 16:29:11.489 EST [6508] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:29:11.489 EST [6508] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:29:11.489 EST [6508] DEBUG:  disabling huge pages
2018-03-10 16:29:11.614 EST [14040] LOG:  database system was shut down at 2018-03-10 16:28:49 EST
2018-03-10 16:29:11.614 EST [14040] DEBUG:  checkpoint record is at 0/3000060
2018-03-10 16:29:11.615 EST [14040] DEBUG:  redo record is at 0/3000060; shutdown true
2018-03-10 16:29:11.615 EST [14040] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:29:11.615 EST [14040] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:29:11.615 EST [14040] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:29:11.615 EST [14040] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:29:11.615 EST [14040] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:29:11.615 EST [14040] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:11.615 EST [14040] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:11.615 EST [14040] DEBUG:  starting up replication slots
2018-03-10 16:29:11.616 EST [14040] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:11.616 EST [14040] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:29:11.767 EST [11968] DEBUG:  autovacuum launcher started
2018-03-10 16:29:11.778 EST [6508] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:29:11.809 EST [6508] LOG:  database system is ready to accept connections
2018-03-10 16:29:11.826 EST [13896] DEBUG:  logical replication launcher started
2018-03-10 16:29:11.950 EST [13568] standby_1 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:11.950 EST [13568] standby_1 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:29:11.951 EST [13568] standby_1 DEBUG:  standby "standby_1" has now caught up with primary
2018-03-10 16:29:26.652 EST [9580] 001_stream_rep.pl LOG:  statement: CREATE TABLE tab_int AS SELECT generate_series(1,1002) AS a
2018-03-10 16:29:26.836 EST [12756] 001_stream_rep.pl LOG:  statement: SELECT pg_current_wal_insert_lsn()
2018-03-10 16:29:27.051 EST [11392] 001_stream_rep.pl LOG:  statement: SELECT '0/3027588' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';
2018-03-10 16:29:28.686 EST [8924] 001_stream_rep.pl LOG:  statement: SHOW transaction_read_only
2018-03-10 16:29:28.687 EST [8924] 001_stream_rep.pl LOG:  statement: SHOW port;
2018-03-10 16:29:28.909 EST [11788] 001_stream_rep.pl LOG:  statement: SHOW transaction_read_only
2018-03-10 16:29:28.909 EST [11788] 001_stream_rep.pl LOG:  statement: SHOW port;
2018-03-10 16:29:29.077 EST [11456] 001_stream_rep.pl LOG:  statement: SHOW port;
2018-03-10 16:29:29.477 EST [6508] LOG:  received fast shutdown request
2018-03-10 16:29:29.505 EST [6508] LOG:  aborting any active transactions
2018-03-10 16:29:29.505 EST [13896] DEBUG:  logical replication launcher shutting down
2018-03-10 16:29:29.505 EST [11968] DEBUG:  autovacuum launcher shutting down
2018-03-10 16:29:29.508 EST [6508] LOG:  background worker "logical replication launcher" (PID 13896) exited with exit code 1
2018-03-10 16:29:29.555 EST [4248] LOG:  shutting down
2018-03-10 16:29:29.630 EST [4248] DEBUG:  performing replication slot checkpoint
2018-03-10 16:29:29.899 EST [6508] LOG:  database system is shut down
2018-03-10 16:29:30.067 EST [12024] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:29:30.069 EST [12024] LOG:  listening on IPv4 address "127.0.0.1", port 58233
2018-03-10 16:29:30.099 EST [12024] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:29:30.099 EST [12024] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:29:30.099 EST [12024] DEBUG:  disabling huge pages
2018-03-10 16:29:30.232 EST [2860] LOG:  database system was shut down at 2018-03-10 16:29:29 EST
2018-03-10 16:29:30.232 EST [2860] DEBUG:  checkpoint record is at 0/3027588
2018-03-10 16:29:30.233 EST [2860] DEBUG:  redo record is at 0/3027588; shutdown true
2018-03-10 16:29:30.233 EST [2860] DEBUG:  next transaction ID: 0:557; next OID: 16387
2018-03-10 16:29:30.233 EST [2860] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:29:30.233 EST [2860] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:29:30.233 EST [2860] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:29:30.234 EST [2860] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:29:30.234 EST [2860] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:30.234 EST [2860] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:30.234 EST [2860] DEBUG:  starting up replication slots
2018-03-10 16:29:30.234 EST [2860] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:30.234 EST [2860] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:29:30.421 EST [13320] DEBUG:  autovacuum launcher started
2018-03-10 16:29:30.432 EST [12024] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:29:30.484 EST [12024] LOG:  database system is ready to accept connections
2018-03-10 16:29:30.505 EST [6376] DEBUG:  logical replication launcher started
2018-03-10 16:29:30.565 EST [13488] standby_1 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:30.565 EST [13488] standby_1 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:29:30.566 EST [13488] standby_1 DEBUG:  standby "standby_1" has now caught up with primary
2018-03-10 16:29:30.696 EST [12156] 001_stream_rep.pl LOG:  statement: SELECT pg_create_physical_replication_slot('standby_1');
2018-03-10 16:29:31.871 EST [912] standby_1 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:31.872 EST [912] standby_1 LOG:  received replication command: START_REPLICATION SLOT "standby_1" 0/3000000 TIMELINE 1
2018-03-10 16:29:31.872 EST [912] standby_1 DEBUG:  standby "standby_1" has now caught up with primary
2018-03-10 16:29:33.332 EST [13420] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NULL AND catalog_xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_1';
		
2018-03-10 16:29:33.601 EST [12996] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_1'
2018-03-10 16:29:34.219 EST [2836] 001_stream_rep.pl LOG:  statement: CREATE TABLE replayed(val integer);
2018-03-10 16:29:34.386 EST [5684] 001_stream_rep.pl LOG:  statement: INSERT INTO replayed(val) SELECT coalesce(max(val),0) + 1 AS newval FROM replayed RETURNING val
2018-03-10 16:29:34.627 EST [7136] 001_stream_rep.pl LOG:  statement: SELECT pg_current_wal_insert_lsn()
2018-03-10 16:29:34.761 EST [14288] 001_stream_rep.pl LOG:  statement: SELECT '0/303D608' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';
2018-03-10 16:29:36.438 EST [2836] 001_stream_rep.pl LOG:  statement: INSERT INTO replayed(val) SELECT coalesce(max(val),0) + 1 AS newval FROM replayed RETURNING val
2018-03-10 16:29:36.604 EST [8024] 001_stream_rep.pl LOG:  statement: SELECT pg_current_wal_insert_lsn()
2018-03-10 16:29:36.833 EST [7204] 001_stream_rep.pl LOG:  statement: SELECT '0/303D670' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';
2018-03-10 16:29:37.826 EST [5792] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NOT NULL AND catalog_xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_1';
		
2018-03-10 16:29:38.068 EST [13472] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_1'
2018-03-10 16:29:38.595 EST [10600] 001_stream_rep.pl LOG:  statement: do $$
	begin
	  for i in 10000..11000 loop
	    -- use an exception block so that each iteration eats an XID
	    begin
	      insert into tab_int values (i);
	    exception
	      when division_by_zero then null;
	    end;
	  end loop;
	end$$;
2018-03-10 16:29:38.781 EST [13568] 001_stream_rep.pl LOG:  statement: VACUUM;
2018-03-10 16:29:40.101 EST [7988] 001_stream_rep.pl LOG:  statement: CHECKPOINT;
2018-03-10 16:29:40.146 EST [12712] DEBUG:  performing replication slot checkpoint
2018-03-10 16:29:40.767 EST [11820] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin <> '559'
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_1';
		
2018-03-10 16:29:41.005 EST [11812] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_1'
2018-03-10 16:29:42.809 EST [13280] 001_stream_rep.pl LOG:  statement: INSERT INTO replayed(val) SELECT coalesce(max(val),0) + 1 AS newval FROM replayed RETURNING val
2018-03-10 16:29:43.042 EST [5700] 001_stream_rep.pl LOG:  statement: SELECT pg_current_wal_insert_lsn()
2018-03-10 16:29:43.160 EST [8644] 001_stream_rep.pl LOG:  statement: SELECT '0/30C33B8' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';
2018-03-10 16:29:44.255 EST [13680] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NULL AND catalog_xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_1';
		
2018-03-10 16:29:44.512 EST [7472] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_1'
2018-03-10 16:29:46.110 EST [13460] 001_stream_rep.pl LOG:  statement: INSERT INTO tab_int VALUES (11000);
2018-03-10 16:29:46.262 EST [11172] 001_stream_rep.pl LOG:  statement: INSERT INTO replayed(val) SELECT coalesce(max(val),0) + 1 AS newval FROM replayed RETURNING val
2018-03-10 16:29:46.432 EST [11388] 001_stream_rep.pl LOG:  statement: SELECT pg_current_wal_insert_lsn()
2018-03-10 16:29:46.645 EST [12412] 001_stream_rep.pl LOG:  statement: SELECT '0/30C5078' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';
2018-03-10 16:29:50.318 EST [12024] LOG:  received immediate shutdown request
2018-03-10 16:29:50.350 EST [912] standby_1 WARNING:  terminating connection because of crash of another server process
2018-03-10 16:29:50.350 EST [912] standby_1 DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:29:50.350 EST [912] standby_1 HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:29:50.357 EST [13320] WARNING:  terminating connection because of crash of another server process
2018-03-10 16:29:50.357 EST [13320] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:29:50.357 EST [13320] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:29:50.463 EST [12024] LOG:  database system is shut down
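The repeated `SELECT xmin ...` and `replay_lsn` probes in the primary log above come from the test harness polling until a condition becomes true (the recovery TAP tests use a Perl helper, `poll_query_until`, for this). A minimal sketch of that poll-until-true pattern, in Python for illustration only — the `run_query` callback here is a hypothetical stand-in for issuing the SQL probe:

```python
import time

def poll_until(run_query, timeout=180.0, interval=0.5):
    """Re-run a boolean probe until it returns True or the timeout expires.

    Mirrors the repeated identical statements seen in the log: each log
    line is one iteration of a loop like this.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if run_query():
            return True
        time.sleep(interval)
    return False

# Demo: a fake probe that becomes true on the third attempt.
attempts = {"n": 0}
def fake_query():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert poll_until(fake_query, timeout=5.0, interval=0.01)
```

Each poll iteration shows up in the server log as a separate `LOG:  statement:` line, which is why the same slot query appears many times above.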


================== pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_1.log ===================
2018-03-10 16:28:27.802 EST [8356] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:28:27.808 EST [8356] LOG:  listening on IPv4 address "127.0.0.1", port 58234
2018-03-10 16:28:27.972 EST [8356] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:28:27.972 EST [8356] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:28:27.973 EST [8356] DEBUG:  disabling huge pages
2018-03-10 16:28:28.092 EST [10388] LOG:  database system was interrupted; last known up at 2018-03-10 16:27:58 EST
2018-03-10 16:28:28.092 EST [10388] LOG:  entering standby mode
2018-03-10 16:28:28.092 EST [10388] DEBUG:  backup time 2018-03-10 16:27:58 EST in file "backup_label"
2018-03-10 16:28:28.093 EST [10388] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:28:28.093 EST [10388] DEBUG:  backup timeline 1 in file "backup_label"
2018-03-10 16:28:28.093 EST [10388] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:28:28.093 EST [10388] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:28:28.093 EST [10388] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:28:28.093 EST [10388] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:28:28.093 EST [10388] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:28:28.093 EST [10388] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:28:28.093 EST [10388] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:28:28.093 EST [10388] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:28:28.093 EST [10388] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:28:28.093 EST [10388] DEBUG:  starting up replication slots
2018-03-10 16:28:28.142 EST [10388] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:28:28.143 EST [10388] DEBUG:  initializing for hot standby
2018-03-10 16:28:28.143 EST [10388] LOG:  redo starts at 0/2000028
2018-03-10 16:28:28.143 EST [10388] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:28:28.143 EST [10388] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 556 latestCompletedXid 555 oldestRunningXid 556
2018-03-10 16:28:28.143 EST [10388] DEBUG:  end of backup reached
2018-03-10 16:28:28.143 EST [10388] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:28:28.149 EST [10388] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:28:28.149 EST [10388] LOG:  invalid record length at 0/3000060: wanted 24, got 0
2018-03-10 16:28:28.239 EST [8356] LOG:  database system is ready to accept read only connections
2018-03-10 16:28:28.414 EST [14148] 001_stream_rep.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:28:28.414 EST [14148] 001_stream_rep.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:28:28.414 EST [14148] 001_stream_rep.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:28:28.415 EST [13788] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:28:28.574 EST [9064] 001_stream_rep.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_9064" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:28:28.628 EST [9064] 001_stream_rep.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:28:28.629 EST [9064] 001_stream_rep.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_9064" 0/2000000 TIMELINE 1
2018-03-10 16:28:29.698 EST [9064] 001_stream_rep.pl DEBUG:  standby "001_stream_rep.pl" has now caught up with primary
2018-03-10 16:28:39.553 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:28:39.555 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:28:39.555 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:28:39.555 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:28:39.555 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:28:39.556 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:28:39.556 EST [14148] 001_stream_rep.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:28:39.557 EST [14148] 001_stream_rep.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:28:39.557 EST [14148] 001_stream_rep.pl DEBUG:  file "postmaster.pid" excluded from backup
2018-03-10 16:28:49.793 EST [10388] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:28:49.793 EST [10388] CONTEXT:  WAL redo at 0/3000060 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/3000060; tli 1; prev tli 1; fpw true; xid 0:556; oid 12359; multi 1; offset 0; oldest xid 549 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
2018-03-10 16:28:49.818 EST [13788] LOG:  replication terminated by primary server
2018-03-10 16:28:49.818 EST [13788] DETAIL:  End of WAL reached on timeline 1 at 0/30000D0.
2018-03-10 16:28:49.818 EST [13788] FATAL:  could not send end-of-streaming message to primary: no COPY in progress
2018-03-10 16:28:49.818 EST [10388] LOG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:49.998 EST [5636] 001_stream_rep.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:28:49.998 EST [5636] 001_stream_rep.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:28:49.998 EST [5636] 001_stream_rep.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:28:50.000 EST [9652] DEBUG:  performing replication slot checkpoint
2018-03-10 16:28:50.145 EST [8464] 001_stream_rep.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_8464" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:28:50.192 EST [8464] 001_stream_rep.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:28:50.192 EST [8464] 001_stream_rep.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_8464" 0/3000000 TIMELINE 1
2018-03-10 16:28:50.192 EST [8464] 001_stream_rep.pl DEBUG:  standby "001_stream_rep.pl" has now caught up with primary
2018-03-10 16:28:50.909 EST [12788] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:50.909 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:51.987 EST [2860] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:51.998 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:53.081 EST [2724] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:53.087 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:54.159 EST [10048] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:54.159 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:55.222 EST [12916] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:55.222 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:56.303 EST [12100] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:56.305 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:56.339 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:28:56.340 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:28:56.340 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:28:56.341 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:28:56.341 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:28:56.341 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:28:56.341 EST [5636] 001_stream_rep.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:28:56.342 EST [5636] 001_stream_rep.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:28:56.342 EST [5636] 001_stream_rep.pl DEBUG:  file "postmaster.pid" excluded from backup
2018-03-10 16:28:57.393 EST [12268] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:57.394 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:58.456 EST [9904] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:58.456 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:28:59.519 EST [10112] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:28:59.520 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:00.597 EST [11800] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:00.597 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:01.675 EST [10936] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:01.675 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:02.753 EST [11980] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:02.753 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:03.832 EST [14184] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:03.832 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:04.894 EST [1032] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:04.894 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:05.956 EST [11436] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:05.957 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:07.019 EST [7196] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:07.019 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:08.082 EST [13344] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:08.082 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:09.160 EST [10684] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:09.160 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:10.222 EST [12656] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:10.222 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:11.285 EST [1904] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:11.286 EST [10388] DEBUG:  invalid record length at 0/30000D0: wanted 24, got 0
2018-03-10 16:29:11.950 EST [13076] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:29:26.590 EST [7072] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:26.591 EST [7072] standby_2 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:29:26.598 EST [7072] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:29:27.337 EST [10136] 001_stream_rep.pl LOG:  statement: SELECT pg_last_wal_replay_lsn()
2018-03-10 16:29:27.508 EST [3184] 001_stream_rep.pl LOG:  statement: SELECT '0/3027588' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_2';
2018-03-10 16:29:27.927 EST [11496] 001_stream_rep.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:29:28.402 EST [7524] 001_stream_rep.pl LOG:  statement: INSERT INTO tab_int VALUES (1)
2018-03-10 16:29:28.403 EST [7524] 001_stream_rep.pl ERROR:  cannot execute INSERT in a read-only transaction
2018-03-10 16:29:28.403 EST [7524] 001_stream_rep.pl STATEMENT:  INSERT INTO tab_int VALUES (1)
2018-03-10 16:29:28.856 EST [3444] 001_stream_rep.pl LOG:  statement: SHOW transaction_read_only
2018-03-10 16:29:29.310 EST [2328] 001_stream_rep.pl LOG:  statement: SHOW port;
2018-03-10 16:29:29.661 EST [10388] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:29.661 EST [10388] CONTEXT:  WAL redo at 0/3027588 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/3027588; tli 1; prev tli 1; fpw true; xid 0:557; oid 16387; multi 1; offset 0; oldest xid 549 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
2018-03-10 16:29:29.790 EST [13076] LOG:  replication terminated by primary server
2018-03-10 16:29:29.790 EST [13076] DETAIL:  End of WAL reached on timeline 1 at 0/30275F8.
2018-03-10 16:29:29.790 EST [13076] FATAL:  could not send end-of-streaming message to primary: no COPY in progress
2018-03-10 16:29:29.817 EST [10388] LOG:  invalid record length at 0/30275F8: wanted 24, got 0
2018-03-10 16:29:29.898 EST [12100] FATAL:  could not connect to the primary server: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:29:30.302 EST [10388] DEBUG:  invalid record length at 0/30275F8: wanted 24, got 0
2018-03-10 16:29:30.565 EST [7836] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:29:30.865 EST [8356] LOG:  received fast shutdown request
2018-03-10 16:29:30.896 EST [8356] LOG:  aborting any active transactions
2018-03-10 16:29:30.896 EST [7836] FATAL:  terminating walreceiver process due to administrator command
2018-03-10 16:29:30.956 EST [9652] LOG:  shutting down
2018-03-10 16:29:30.974 EST [9652] DEBUG:  performing replication slot checkpoint
2018-03-10 16:29:31.227 EST [8356] LOG:  database system is shut down
2018-03-10 16:29:31.442 EST [6736] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:29:31.444 EST [6736] LOG:  listening on IPv4 address "127.0.0.1", port 58234
2018-03-10 16:29:31.474 EST [6736] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:29:31.474 EST [6736] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:29:31.474 EST [6736] DEBUG:  disabling huge pages
2018-03-10 16:29:31.609 EST [9912] LOG:  database system was shut down in recovery at 2018-03-10 16:29:31 EST
2018-03-10 16:29:31.610 EST [9912] LOG:  entering standby mode
2018-03-10 16:29:31.610 EST [9912] DEBUG:  checkpoint record is at 0/3027588
2018-03-10 16:29:31.611 EST [9912] DEBUG:  redo record is at 0/3027588; shutdown true
2018-03-10 16:29:31.611 EST [9912] DEBUG:  next transaction ID: 0:557; next OID: 16387
2018-03-10 16:29:31.611 EST [9912] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:29:31.611 EST [9912] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:29:31.611 EST [9912] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:29:31.611 EST [9912] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:29:31.611 EST [9912] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:31.611 EST [9912] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:31.611 EST [9912] DEBUG:  starting up replication slots
2018-03-10 16:29:31.632 EST [9912] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:29:31.633 EST [9912] DEBUG:  initializing for hot standby
2018-03-10 16:29:31.633 EST [9912] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:29:31.634 EST [9912] LOG:  redo starts at 0/30275F8
2018-03-10 16:29:31.634 EST [9912] LOG:  consistent recovery state reached at 0/3027958
2018-03-10 16:29:31.634 EST [9912] LOG:  invalid magic number 0000 in log segment 000000010000000000000003, offset 163840
2018-03-10 16:29:31.731 EST [6736] LOG:  database system is ready to accept read only connections
2018-03-10 16:29:31.872 EST [12784] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:29:31.874 EST [4224] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:31.874 EST [4224] standby_2 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:29:31.881 EST [4224] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:29:31.978 EST [1440] 001_stream_rep.pl LOG:  statement: SELECT pg_create_physical_replication_slot('standby_2');
2018-03-10 16:29:33.330 EST [14280] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:33.330 EST [14280] standby_2 LOG:  received replication command: START_REPLICATION SLOT "standby_2" 0/3000000 TIMELINE 1
2018-03-10 16:29:33.331 EST [14280] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:29:33.834 EST [11580] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NULL AND catalog_xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:33.975 EST [10800] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_2'
2018-03-10 16:29:34.992 EST [11560] 001_stream_rep.pl LOG:  statement: SELECT pg_last_wal_replay_lsn()
2018-03-10 16:29:35.112 EST [11232] 001_stream_rep.pl LOG:  statement: SELECT '0/303D608' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_2';
2018-03-10 16:29:35.390 EST [10340] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 1
2018-03-10 16:29:35.860 EST [9984] 001_stream_rep.pl LOG:  statement: ALTER SYSTEM SET hot_standby_feedback = on;
2018-03-10 16:29:36.023 EST [6736] LOG:  received SIGHUP, reloading configuration files
2018-03-10 16:29:36.024 EST [6736] LOG:  parameter "hot_standby_feedback" changed to "on"
2018-03-10 16:29:37.074 EST [7340] 001_stream_rep.pl LOG:  statement: SELECT pg_last_wal_replay_lsn()
2018-03-10 16:29:37.193 EST [3400] 001_stream_rep.pl LOG:  statement: SELECT '0/303D670' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_2';
2018-03-10 16:29:37.455 EST [12912] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 2
2018-03-10 16:29:38.207 EST [12796] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NOT NULL AND catalog_xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:38.358 EST [13476] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_2'
2018-03-10 16:29:41.309 EST [1208] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin <> '560'
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:41.985 EST [4380] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_2'
2018-03-10 16:29:42.239 EST [13172] 001_stream_rep.pl LOG:  statement: ALTER SYSTEM SET hot_standby_feedback = off;
2018-03-10 16:29:42.403 EST [6736] LOG:  received SIGHUP, reloading configuration files
2018-03-10 16:29:42.405 EST [6736] LOG:  parameter "hot_standby_feedback" changed to "off"
2018-03-10 16:29:43.453 EST [10136] 001_stream_rep.pl LOG:  statement: SELECT pg_last_wal_replay_lsn()
2018-03-10 16:29:43.646 EST [2960] 001_stream_rep.pl LOG:  statement: SELECT '0/30C33B8' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_2';
2018-03-10 16:29:43.800 EST [4700] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 3
2018-03-10 16:29:44.742 EST [7116] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NULL AND catalog_xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:44.988 EST [5524] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_2'
2018-03-10 16:29:46.784 EST [12592] 001_stream_rep.pl LOG:  statement: SELECT pg_last_wal_replay_lsn()
2018-03-10 16:29:46.914 EST [8580] 001_stream_rep.pl LOG:  statement: SELECT '0/30C5078' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_2';
2018-03-10 16:29:47.169 EST [10820] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 4
2018-03-10 16:29:48.738 EST [2080] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NOT NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:48.896 EST [9428] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_2'
2018-03-10 16:29:49.640 EST [8852] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:29:49.640 EST [8852] standby_2 LOG:  received replication command: START_REPLICATION SLOT "standby_2" 0/3000000 TIMELINE 1
2018-03-10 16:29:49.643 EST [8852] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:29:49.673 EST [6516] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:49.971 EST [14172] 001_stream_rep.pl LOG:  statement: 
			SELECT xmin IS NULL
			FROM pg_catalog.pg_replication_slots
			WHERE slot_name = 'standby_2';
		
2018-03-10 16:29:50.136 EST [3816] 001_stream_rep.pl LOG:  statement: SELECT plugin, slot_type, datoid, database, active, active_pid, xmin, catalog_xmin, restart_lsn FROM pg_catalog.pg_replication_slots WHERE slot_name = 'standby_2'
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:29:50.351 EST [12784] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:29:50.351 EST [9912] LOG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:50.462 EST [9008] FATAL:  could not connect to the primary server: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:29:50.850 EST [9912] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:51.927 EST [11484] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:51.928 EST [9912] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:53.006 EST [13052] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58233?
2018-03-10 16:29:53.006 EST [9912] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:53.177 EST [6736] LOG:  received immediate shutdown request
2018-03-10 16:29:53.209 EST [8852] standby_2 WARNING:  terminating connection because of crash of another server process
2018-03-10 16:29:53.209 EST [8852] standby_2 DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:29:53.209 EST [8852] standby_2 HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:29:53.391 EST [8924] [unknown] FATAL:  the database system is shutting down
2018-03-10 16:29:53.603 EST [6736] LOG:  database system is shut down
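The long burst of `could not connect to the primary server: ... Connection refused` lines in the standby_1 log above is the standby's startup process re-launching the walreceiver roughly once per second while the primary is down for restart. A sketch of that fixed-interval reconnect loop, in Python for illustration under stated assumptions — `connect` is a hypothetical callback, and the interval and attempt cap are illustrative, not the server's actual settings:

```python
import time

def reconnect_with_retry(connect, max_attempts=20, interval=1.0):
    """Call connect() until it succeeds, sleeping a fixed interval
    between attempts; re-raise the last error once attempts run out.

    Each failed attempt corresponds to one FATAL 'could not connect'
    line in the standby log above.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(interval)

# Demo: fail twice, then succeed on the third attempt.
state = {"tries": 0}
def flaky_connect():
    state["tries"] += 1
    if state["tries"] < 3:
        raise ConnectionError("Connection refused")
    return "connected"

print(reconnect_with_retry(flaky_connect, interval=0.01))
```

Once the primary is back up, the log switches from the refused-connection lines to `started streaming WAL from primary`, matching the successful final attempt in this sketch.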


================== pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_2.log ===================
2018-03-10 16:29:26.067 EST [13972] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:29:26.068 EST [13972] LOG:  listening on IPv4 address "127.0.0.1", port 58235
2018-03-10 16:29:26.099 EST [13972] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:29:26.099 EST [13972] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:29:26.099 EST [13972] DEBUG:  disabling huge pages
2018-03-10 16:29:26.205 EST [13008] LOG:  database system was interrupted while in recovery at log time 2018-03-10 16:27:57 EST
2018-03-10 16:29:26.205 EST [13008] HINT:  If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
2018-03-10 16:29:26.206 EST [13008] LOG:  entering standby mode
2018-03-10 16:29:26.206 EST [13008] DEBUG:  backup time 2018-03-10 16:28:28 EST in file "backup_label"
2018-03-10 16:29:26.206 EST [13008] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:29:26.206 EST [13008] DEBUG:  backup timeline 1 in file "backup_label"
2018-03-10 16:29:26.206 EST [13008] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:29:26.206 EST [13008] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:29:26.206 EST [13008] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:29:26.206 EST [13008] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:29:26.206 EST [13008] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:29:26.206 EST [13008] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:29:26.206 EST [13008] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:29:26.206 EST [13008] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:26.206 EST [13008] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:26.206 EST [13008] DEBUG:  starting up replication slots
2018-03-10 16:29:26.254 EST [13008] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:29:26.255 EST [13008] DEBUG:  initializing for hot standby
2018-03-10 16:29:26.255 EST [13008] LOG:  redo starts at 0/2000028
2018-03-10 16:29:26.256 EST [13008] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:29:26.256 EST [13008] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 556 latestCompletedXid 555 oldestRunningXid 556
2018-03-10 16:29:26.256 EST [13008] DEBUG:  end of backup reached
2018-03-10 16:29:26.256 EST [13008] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:29:26.261 EST [13008] DEBUG:  end of backup reached
2018-03-10 16:29:26.355 EST [13008] LOG:  consistent recovery state reached at 0/3000060
2018-03-10 16:29:26.356 EST [13008] LOG:  invalid record length at 0/3000060: wanted 24, got 0
2018-03-10 16:29:26.432 EST [13972] LOG:  database system is ready to accept read only connections
2018-03-10 16:29:26.591 EST [12652] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:29:26.605 EST [13008] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:26.605 EST [13008] CONTEXT:  WAL redo at 0/3000060 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/3000060; tli 1; prev tli 1; fpw true; xid 0:556; oid 12359; multi 1; offset 0; oldest xid 549 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
2018-03-10 16:29:28.195 EST [11460] 001_stream_rep.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:29:28.551 EST [9440] 001_stream_rep.pl LOG:  statement: INSERT INTO tab_int VALUES (1)
2018-03-10 16:29:28.552 EST [9440] 001_stream_rep.pl ERROR:  cannot execute INSERT in a read-only transaction
2018-03-10 16:29:28.552 EST [9440] 001_stream_rep.pl STATEMENT:  INSERT INTO tab_int VALUES (1)
2018-03-10 16:29:29.661 EST [13008] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:29.661 EST [13008] CONTEXT:  WAL redo at 0/3027588 for XLOG/CHECKPOINT_SHUTDOWN: redo 0/3027588; tli 1; prev tli 1; fpw true; xid 0:557; oid 16387; multi 1; offset 0; oldest xid 549 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
2018-03-10 16:29:31.148 EST [12652] LOG:  replication terminated by primary server
2018-03-10 16:29:31.148 EST [12652] DETAIL:  End of WAL reached on timeline 1 at 0/3028000.
2018-03-10 16:29:31.148 EST [12652] FATAL:  could not send end-of-streaming message to primary: no COPY in progress
2018-03-10 16:29:31.161 EST [13008] LOG:  invalid magic number 0000 in log segment 000000010000000000000003, offset 163840
2018-03-10 16:29:31.226 EST [11860] FATAL:  could not connect to the primary server: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:29:31.661 EST [13008] DEBUG:  invalid magic number 0000 in log segment 000000010000000000000003, offset 163840
2018-03-10 16:29:31.874 EST [11716] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:29:32.189 EST [13972] LOG:  received fast shutdown request
2018-03-10 16:29:32.209 EST [13972] LOG:  aborting any active transactions
2018-03-10 16:29:32.209 EST [11716] FATAL:  terminating walreceiver process due to administrator command
2018-03-10 16:29:32.241 EST [6592] LOG:  shutting down
2018-03-10 16:29:32.312 EST [6592] DEBUG:  performing replication slot checkpoint
2018-03-10 16:29:32.743 EST [13972] LOG:  database system is shut down
2018-03-10 16:29:32.911 EST [12760] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:29:32.913 EST [12760] LOG:  listening on IPv4 address "127.0.0.1", port 58235
2018-03-10 16:29:32.943 EST [12760] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:29:32.943 EST [12760] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:29:32.943 EST [12760] DEBUG:  disabling huge pages
2018-03-10 16:29:33.064 EST [12124] LOG:  database system was shut down in recovery at 2018-03-10 16:29:32 EST
2018-03-10 16:29:33.065 EST [12124] LOG:  entering standby mode
2018-03-10 16:29:33.065 EST [12124] DEBUG:  checkpoint record is at 0/3027588
2018-03-10 16:29:33.066 EST [12124] DEBUG:  redo record is at 0/3027588; shutdown true
2018-03-10 16:29:33.066 EST [12124] DEBUG:  next transaction ID: 0:557; next OID: 16387
2018-03-10 16:29:33.066 EST [12124] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:29:33.066 EST [12124] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:29:33.066 EST [12124] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:29:33.066 EST [12124] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:29:33.066 EST [12124] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:33.066 EST [12124] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:33.066 EST [12124] DEBUG:  starting up replication slots
2018-03-10 16:29:33.084 EST [12124] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:29:33.085 EST [12124] DEBUG:  initializing for hot standby
2018-03-10 16:29:33.085 EST [12124] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:29:33.086 EST [12124] LOG:  redo starts at 0/30275F8
2018-03-10 16:29:33.086 EST [12124] LOG:  consistent recovery state reached at 0/3029988
2018-03-10 16:29:33.086 EST [12124] LOG:  invalid record length at 0/3029988: wanted 24, got 0
2018-03-10 16:29:33.184 EST [12760] LOG:  database system is ready to accept read only connections
2018-03-10 16:29:33.330 EST [11344] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:29:35.668 EST [9580] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 1
2018-03-10 16:29:36.133 EST [6300] 001_stream_rep.pl LOG:  statement: ALTER SYSTEM SET hot_standby_feedback = on;
2018-03-10 16:29:36.316 EST [12760] LOG:  received SIGHUP, reloading configuration files
2018-03-10 16:29:36.318 EST [12760] LOG:  parameter "hot_standby_feedback" changed to "on"
2018-03-10 16:29:37.696 EST [14220] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 2
2018-03-10 16:29:42.590 EST [12156] 001_stream_rep.pl LOG:  statement: ALTER SYSTEM SET hot_standby_feedback = off;
2018-03-10 16:29:42.684 EST [12760] LOG:  received SIGHUP, reloading configuration files
2018-03-10 16:29:42.686 EST [12760] LOG:  parameter "hot_standby_feedback" changed to "off"
2018-03-10 16:29:44.037 EST [12344] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 3
2018-03-10 16:29:45.217 EST [4908] 001_stream_rep.pl LOG:  statement: ALTER SYSTEM SET hot_standby_feedback = on;
2018-03-10 16:29:45.369 EST [12760] LOG:  received SIGHUP, reloading configuration files
2018-03-10 16:29:45.370 EST [12760] LOG:  parameter "hot_standby_feedback" changed to "on"
2018-03-10 16:29:47.350 EST [4824] 001_stream_rep.pl LOG:  statement: SELECT 1 FROM replayed WHERE val = 4
2018-03-10 16:29:47.600 EST [14160] 001_stream_rep.pl LOG:  statement: ALTER SYSTEM SET hot_standby_feedback = off;
2018-03-10 16:29:47.745 EST [12760] LOG:  received fast shutdown request
2018-03-10 16:29:47.772 EST [12760] LOG:  aborting any active transactions
2018-03-10 16:29:47.772 EST [11344] FATAL:  terminating walreceiver process due to administrator command
2018-03-10 16:29:48.216 EST [6136] LOG:  shutting down
2018-03-10 16:29:48.271 EST [6136] DEBUG:  performing replication slot checkpoint
2018-03-10 16:29:48.587 EST [12760] LOG:  database system is shut down
2018-03-10 16:29:49.193 EST [12544] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:29:49.196 EST [12544] LOG:  listening on IPv4 address "127.0.0.1", port 58235
2018-03-10 16:29:49.229 EST [12544] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:29:49.229 EST [12544] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:29:49.229 EST [12544] DEBUG:  disabling huge pages
2018-03-10 16:29:49.379 EST [5668] LOG:  database system was shut down in recovery at 2018-03-10 16:29:48 EST
2018-03-10 16:29:49.379 EST [5668] LOG:  entering standby mode
2018-03-10 16:29:49.380 EST [5668] DEBUG:  checkpoint record is at 0/30C3260
2018-03-10 16:29:49.381 EST [5668] DEBUG:  redo record is at 0/30C3228; shutdown false
2018-03-10 16:29:49.381 EST [5668] DEBUG:  next transaction ID: 0:1562; next OID: 24579
2018-03-10 16:29:49.381 EST [5668] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:29:49.381 EST [5668] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:29:49.381 EST [5668] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:29:49.381 EST [5668] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:29:49.381 EST [5668] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:29:49.381 EST [5668] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:29:49.381 EST [5668] DEBUG:  starting up replication slots
2018-03-10 16:29:49.397 EST [5668] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:29:49.398 EST [5668] DEBUG:  initializing for hot standby
2018-03-10 16:29:49.398 EST [5668] LOG:  redo starts at 0/30C3228
2018-03-10 16:29:49.399 EST [5668] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:29:49.399 EST [5668] CONTEXT:  WAL redo at 0/30C3228 for Standby/RUNNING_XACTS: nextXid 1562 latestCompletedXid 1561 oldestRunningXid 1562
2018-03-10 16:29:49.400 EST [5668] LOG:  consistent recovery state reached at 0/30C5078
2018-03-10 16:29:49.400 EST [5668] LOG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:49.499 EST [12544] LOG:  database system is ready to accept read only connections
2018-03-10 16:29:49.641 EST [2216] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:29:53.210 EST [2216] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:29:53.219 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:53.391 EST [11580] FATAL:  could not connect to the primary server: FATAL:  the database system is shutting down
2018-03-10 16:29:53.709 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:54.771 EST [5208] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58234?
2018-03-10 16:29:54.772 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:55.834 EST [12836] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58234?
2018-03-10 16:29:55.834 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:56.928 EST [12040] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58234?
2018-03-10 16:29:56.928 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:58.022 EST [3612] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58234?
2018-03-10 16:29:58.022 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:59.100 EST [10808] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 58234?
2018-03-10 16:29:59.100 EST [5668] DEBUG:  invalid record length at 0/30C5078: wanted 24, got 0
2018-03-10 16:29:59.901 EST [12544] LOG:  received immediate shutdown request
2018-03-10 16:29:59.963 EST [12544] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/002_archiving_master.log ===================
2018-03-10 16:30:25.820 EST [5344] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:30:25.823 EST [5344] LOG:  listening on IPv4 address "127.0.0.1", port 53709
2018-03-10 16:30:25.852 EST [5344] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:30:25.852 EST [5344] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:30:25.852 EST [5344] DEBUG:  disabling huge pages
2018-03-10 16:30:26.004 EST [10460] LOG:  database system was shut down at 2018-03-10 16:30:25 EST
2018-03-10 16:30:26.005 EST [10460] DEBUG:  checkpoint record is at 0/15CF4D8
2018-03-10 16:30:26.005 EST [10460] DEBUG:  redo record is at 0/15CF4D8; shutdown true
2018-03-10 16:30:26.023 EST [10460] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:30:26.023 EST [10460] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:30:26.023 EST [10460] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:30:26.023 EST [10460] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:30:26.023 EST [10460] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:30:26.023 EST [10460] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:30:26.023 EST [10460] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:30:26.023 EST [10460] DEBUG:  starting up replication slots
2018-03-10 16:30:26.024 EST [10460] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:30:26.024 EST [10460] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:30:26.212 EST [888] DEBUG:  autovacuum launcher started
2018-03-10 16:30:26.257 EST [5344] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:30:26.287 EST [5344] LOG:  database system is ready to accept connections
2018-03-10 16:30:26.373 EST [5616] DEBUG:  logical replication launcher started
2018-03-10 16:30:26.524 EST [9332] 002_archiving.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:30:26.524 EST [9332] 002_archiving.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:30:26.524 EST [9332] 002_archiving.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:30:26.627 EST [848] DEBUG:  performing replication slot checkpoint
        1 file(s) copied.
2018-03-10 16:30:26.889 EST [3088] DEBUG:  archived write-ahead log file "000000010000000000000001"
2018-03-10 16:30:27.149 EST [8700] 002_archiving.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_8700" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:30:27.197 EST [8700] 002_archiving.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:30:27.197 EST [8700] 002_archiving.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_8700" 0/2000000 TIMELINE 1
2018-03-10 16:30:27.206 EST [8700] 002_archiving.pl DEBUG:  standby "002_archiving.pl" has now caught up with primary
2018-03-10 16:30:37.606 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:30:37.608 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:30:37.608 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:30:37.608 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:30:37.608 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:30:37.608 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:30:37.609 EST [9332] 002_archiving.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:30:37.610 EST [9332] 002_archiving.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:30:37.610 EST [9332] 002_archiving.pl DEBUG:  file "postmaster.pid" excluded from backup
        1 file(s) copied.
2018-03-10 16:30:38.099 EST [3088] DEBUG:  archived write-ahead log file "000000010000000000000002"
        1 file(s) copied.
2018-03-10 16:30:38.152 EST [3088] DEBUG:  archived write-ahead log file "000000010000000000000002.00000028.backup"
2018-03-10 16:30:56.137 EST [1440] 002_archiving.pl LOG:  statement: CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a
2018-03-10 16:30:56.361 EST [11996] 002_archiving.pl LOG:  statement: SELECT pg_current_wal_lsn();
2018-03-10 16:30:56.619 EST [5660] 002_archiving.pl LOG:  statement: SELECT pg_switch_wal()
2018-03-10 16:30:57.068 EST [13920] 002_archiving.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(1001,2000))
        1 file(s) copied.
2018-03-10 16:30:57.094 EST [3088] DEBUG:  archived write-ahead log file "000000010000000000000003"
2018-03-10 16:30:57.909 EST [5344] LOG:  received immediate shutdown request
2018-03-10 16:30:57.933 EST [888] WARNING:  terminating connection because of crash of another server process
2018-03-10 16:30:57.933 EST [888] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:30:57.933 EST [888] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:30:57.937 EST [5344] LOG:  archiver process (PID 3088) exited with exit code 1
2018-03-10 16:30:58.043 EST [5344] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/002_archiving_standby.log ===================
2018-03-10 16:30:55.415 EST [8024] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:30:55.417 EST [8024] LOG:  listening on IPv4 address "127.0.0.1", port 53710
2018-03-10 16:30:55.448 EST [8024] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:30:55.448 EST [8024] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:30:55.448 EST [8024] DEBUG:  disabling huge pages
2018-03-10 16:30:55.573 EST [3816] LOG:  database system was interrupted; last known up at 2018-03-10 16:30:26 EST
2018-03-10 16:30:55.574 EST [3816] LOG:  entering standby mode
2018-03-10 16:30:55.574 EST [3816] DEBUG:  backup time 2018-03-10 16:30:27 EST in file "backup_label"
2018-03-10 16:30:55.574 EST [3816] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:30:55.574 EST [3816] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:30:55.721 EST [3816] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:30:55.725 EST [3816] DEBUG:  got WAL segment from archive
2018-03-10 16:30:55.725 EST [3816] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:30:55.725 EST [3816] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:30:55.725 EST [3816] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:30:55.725 EST [3816] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:30:55.725 EST [3816] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:30:55.725 EST [3816] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:30:55.725 EST [3816] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:30:55.725 EST [3816] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:30:55.725 EST [3816] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:30:55.725 EST [3816] DEBUG:  starting up replication slots
2018-03-10 16:30:55.775 EST [3816] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:30:55.776 EST [3816] DEBUG:  initializing for hot standby
2018-03-10 16:30:55.776 EST [3816] LOG:  redo starts at 0/2000028
2018-03-10 16:30:55.776 EST [3816] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:30:55.776 EST [3816] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 556 latestCompletedXid 555 oldestRunningXid 556
2018-03-10 16:30:55.776 EST [3816] DEBUG:  end of backup reached
2018-03-10 16:30:55.776 EST [3816] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:30:55.811 EST [3816] LOG:  consistent recovery state reached at 0/20000F8
The system cannot find the file specified.
2018-03-10 16:30:55.821 EST [3816] LOG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:55.830 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
2018-03-10 16:30:55.869 EST [8024] LOG:  database system is ready to accept read only connections
The system cannot find the file specified.
2018-03-10 16:30:55.922 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.016 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.111 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.203 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.318 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.407 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.533 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The system cannot find the file specified.
2018-03-10 16:30:56.598 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The process cannot access the file because it is being used by another process.
        0 file(s) copied.
2018-03-10 16:30:56.688 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
The process cannot access the file because it is being used by another process.
        0 file(s) copied.
2018-03-10 16:30:56.803 EST [3816] DEBUG:  invalid record length at 0/3000060: wanted 24, got 0
        1 file(s) copied.
2018-03-10 16:30:57.095 EST [3816] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:30:57.100 EST [3816] DEBUG:  got WAL segment from archive
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
2018-03-10 16:30:57.497 EST [7612] 002_archiving.pl LOG:  statement: SELECT '0/3027498'::pg_lsn <= pg_last_wal_replay_lsn()
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
2018-03-10 16:30:57.748 EST [13480] 002_archiving.pl LOG:  statement: SELECT count(*) FROM tab_int
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the file specified.
The system cannot find the path specified.
        [last message repeated 43 more times]
2018-03-10 16:31:02.498 EST [8024] LOG:  received immediate shutdown request
The system cannot find the path specified.
2018-03-10 16:31:02.654 EST [8024] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_master.log ===================
2018-03-10 16:31:28.714 EST [11336] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:31:28.716 EST [11336] LOG:  listening on IPv4 address "127.0.0.1", port 53544
2018-03-10 16:31:28.746 EST [11336] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:31:28.746 EST [11336] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:31:28.746 EST [11336] DEBUG:  disabling huge pages
2018-03-10 16:31:28.903 EST [5324] LOG:  database system was shut down at 2018-03-10 16:31:28 EST
2018-03-10 16:31:28.904 EST [5324] DEBUG:  checkpoint record is at 0/15CF4D8
2018-03-10 16:31:28.904 EST [5324] DEBUG:  redo record is at 0/15CF4D8; shutdown true
2018-03-10 16:31:28.917 EST [5324] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:31:28.917 EST [5324] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:31:28.917 EST [5324] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:31:28.917 EST [5324] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:31:28.917 EST [5324] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:31:28.917 EST [5324] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:31:28.917 EST [5324] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:31:28.917 EST [5324] DEBUG:  starting up replication slots
2018-03-10 16:31:28.918 EST [5324] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:31:28.918 EST [5324] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:31:29.140 EST [7132] DEBUG:  autovacuum launcher started
2018-03-10 16:31:29.196 EST [11336] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:31:29.227 EST [11336] LOG:  database system is ready to accept connections
2018-03-10 16:31:29.258 EST [10912] DEBUG:  logical replication launcher started
2018-03-10 16:31:29.798 EST [14176] 003_recovery_targets.pl LOG:  statement: CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a
2018-03-10 16:31:30.037 EST [6756] 003_recovery_targets.pl LOG:  statement: SELECT pg_current_wal_lsn();
2018-03-10 16:31:30.271 EST [11728] 003_recovery_targets.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:31:30.272 EST [11728] 003_recovery_targets.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:31:30.272 EST [11728] 003_recovery_targets.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:31:30.392 EST [11488] DEBUG:  performing replication slot checkpoint
        1 file(s) copied.
2018-03-10 16:31:30.516 EST [12340] DEBUG:  archived write-ahead log file "000000010000000000000001"
2018-03-10 16:31:33.180 EST [5660] 003_recovery_targets.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_5660" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:31:33.217 EST [5660] 003_recovery_targets.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:31:33.217 EST [5660] 003_recovery_targets.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_5660" 0/2000000 TIMELINE 1
2018-03-10 16:31:33.232 EST [5660] 003_recovery_targets.pl DEBUG:  standby "003_recovery_targets.pl" has now caught up with primary
2018-03-10 16:31:44.271 EST [11728] 003_recovery_targets.pl DEBUG:  file "pg_internal.init" excluded from backup
2018-03-10 16:31:44.674 EST [11728] 003_recovery_targets.pl DEBUG:  file "pg_internal.init" excluded from backup
2018-03-10 16:31:44.675 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:31:44.677 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:31:44.677 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:31:44.677 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:31:44.678 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:31:44.678 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:31:44.678 EST [11728] 003_recovery_targets.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:31:44.679 EST [11728] 003_recovery_targets.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:31:44.679 EST [11728] 003_recovery_targets.pl DEBUG:  file "postmaster.pid" excluded from backup
        1 file(s) copied.
2018-03-10 16:31:45.152 EST [12340] DEBUG:  archived write-ahead log file "000000010000000000000002"
        1 file(s) copied.
2018-03-10 16:31:45.203 EST [12340] DEBUG:  archived write-ahead log file "000000010000000000000002.00000028.backup"
2018-03-10 16:31:51.285 EST [9936] 003_recovery_targets.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(1001,2000))
2018-03-10 16:31:51.598 EST [7308] 003_recovery_targets.pl LOG:  statement: SELECT pg_current_wal_lsn(), txid_current();
2018-03-10 16:31:51.739 EST [9888] 003_recovery_targets.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(2001,3000))
2018-03-10 16:31:51.906 EST [12460] 003_recovery_targets.pl LOG:  statement: SELECT pg_current_wal_lsn();
2018-03-10 16:31:52.128 EST [11608] 003_recovery_targets.pl LOG:  statement: SELECT now()
2018-03-10 16:31:52.381 EST [3184] 003_recovery_targets.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(3001,4000))
2018-03-10 16:31:52.541 EST [12236] 003_recovery_targets.pl LOG:  statement: SELECT pg_current_wal_lsn();
2018-03-10 16:31:52.932 EST [12468] 003_recovery_targets.pl LOG:  statement: SELECT pg_create_restore_point('my_target');
2018-03-10 16:31:52.932 EST [12468] 003_recovery_targets.pl LOG:  restore point "my_target" created at 0/30322F0
2018-03-10 16:31:52.932 EST [12468] 003_recovery_targets.pl STATEMENT:  SELECT pg_create_restore_point('my_target');
2018-03-10 16:31:53.664 EST [6624] 003_recovery_targets.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(4001,5000))
2018-03-10 16:31:53.876 EST [1656] 003_recovery_targets.pl LOG:  statement: SELECT pg_current_wal_lsn()
2018-03-10 16:31:54.145 EST [1788] 003_recovery_targets.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(5001,6000))
2018-03-10 16:31:54.321 EST [7104] 003_recovery_targets.pl LOG:  statement: SELECT pg_switch_wal()
        1 file(s) copied.
2018-03-10 16:31:54.956 EST [12340] DEBUG:  archived write-ahead log file "000000010000000000000003"
2018-03-10 16:32:29.381 EST [13376] DEBUG:  autovacuum: processing database "postgres"
2018-03-10 16:33:29.474 EST [4308] DEBUG:  autovacuum: processing database "postgres"
2018-03-10 16:34:17.299 EST [11336] LOG:  received immediate shutdown request
2018-03-10 16:34:17.350 EST [7132] WARNING:  terminating connection because of crash of another server process
2018-03-10 16:34:17.350 EST [7132] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:34:17.350 EST [7132] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:34:17.354 EST [11336] LOG:  archiver process (PID 12340) exited with exit code 1
2018-03-10 16:34:17.430 EST [11336] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_1.log ===================
2018-03-10 16:32:08.747 EST [7108] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:32:08.749 EST [7108] LOG:  listening on IPv4 address "127.0.0.1", port 53545
2018-03-10 16:32:08.779 EST [7108] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:32:08.779 EST [7108] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:32:08.779 EST [7108] DEBUG:  disabling huge pages
2018-03-10 16:32:08.885 EST [11780] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:32:08.885 EST [11780] LOG:  entering standby mode
2018-03-10 16:32:08.885 EST [11780] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:32:08.886 EST [11780] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:32:08.886 EST [11780] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:32:09.045 EST [11780] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:32:09.054 EST [11780] DEBUG:  got WAL segment from archive
2018-03-10 16:32:09.054 EST [11780] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:32:09.054 EST [11780] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:32:09.054 EST [11780] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:32:09.054 EST [11780] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:32:09.054 EST [11780] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:32:09.054 EST [11780] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:32:09.054 EST [11780] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:32:09.054 EST [11780] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:32:09.054 EST [11780] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:32:09.054 EST [11780] DEBUG:  starting up replication slots
2018-03-10 16:32:09.107 EST [11780] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:32:09.108 EST [11780] DEBUG:  initializing for hot standby
2018-03-10 16:32:09.109 EST [11780] LOG:  redo starts at 0/2000028
2018-03-10 16:32:09.109 EST [11780] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:32:09.109 EST [11780] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:32:09.109 EST [11780] DEBUG:  end of backup reached
2018-03-10 16:32:09.109 EST [11780] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:32:09.117 EST [11780] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:32:09.117 EST [11780] LOG:  recovery stopping after reaching consistency
2018-03-10 16:32:09.117 EST [11780] LOG:  recovery has paused
2018-03-10 16:32:09.117 EST [11780] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:32:09.199 EST [7108] LOG:  database system is ready to accept read only connections
2018-03-10 16:32:09.359 EST [14184] 003_recovery_targets.pl LOG:  statement: SELECT '0/15F6998'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:32:10.656 EST [12328] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:32:11.848 EST [7108] LOG:  received immediate shutdown request
2018-03-10 16:32:11.938 EST [7108] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_2.log ===================
2018-03-10 16:32:25.998 EST [12692] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:32:26.000 EST [12692] LOG:  listening on IPv4 address "127.0.0.1", port 53546
2018-03-10 16:32:26.030 EST [12692] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:32:26.030 EST [12692] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:32:26.030 EST [12692] DEBUG:  disabling huge pages
2018-03-10 16:32:26.142 EST [12760] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:32:26.142 EST [12760] LOG:  entering standby mode
2018-03-10 16:32:26.142 EST [12760] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:32:26.143 EST [12760] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:32:26.143 EST [12760] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:32:26.288 EST [12760] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:32:26.305 EST [12760] DEBUG:  got WAL segment from archive
2018-03-10 16:32:26.305 EST [12760] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:32:26.305 EST [12760] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:32:26.305 EST [12760] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:32:26.305 EST [12760] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:32:26.305 EST [12760] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:32:26.305 EST [12760] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:32:26.305 EST [12760] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:32:26.305 EST [12760] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:32:26.306 EST [12760] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:32:26.306 EST [12760] DEBUG:  starting up replication slots
2018-03-10 16:32:26.357 EST [12760] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:32:26.366 EST [12760] DEBUG:  initializing for hot standby
2018-03-10 16:32:26.366 EST [12760] LOG:  redo starts at 0/2000028
2018-03-10 16:32:26.366 EST [12760] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:32:26.366 EST [12760] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:32:26.366 EST [12760] DEBUG:  end of backup reached
2018-03-10 16:32:26.366 EST [12760] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:32:26.373 EST [12760] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:32:26.462 EST [12692] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:32:26.551 EST [12760] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:32:26.556 EST [12760] DEBUG:  got WAL segment from archive
2018-03-10 16:32:26.566 EST [12760] LOG:  recovery stopping after commit of transaction 558, time 2018-03-10 16:31:51.599193-05
2018-03-10 16:32:26.566 EST [12760] LOG:  recovery has paused
2018-03-10 16:32:26.566 EST [12760] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:32:26.688 EST [8104] 003_recovery_targets.pl LOG:  statement: SELECT '0/3012C90'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:32:26.912 EST [7340] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:32:27.063 EST [12692] LOG:  received immediate shutdown request
2018-03-10 16:32:27.190 EST [12692] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_3.log ===================
2018-03-10 16:32:41.671 EST [9320] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:32:41.672 EST [9320] LOG:  listening on IPv4 address "127.0.0.1", port 53547
2018-03-10 16:32:41.706 EST [9320] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:32:41.706 EST [9320] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:32:41.706 EST [9320] DEBUG:  disabling huge pages
2018-03-10 16:32:41.845 EST [13580] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:32:41.846 EST [13580] LOG:  entering standby mode
2018-03-10 16:32:41.846 EST [13580] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:32:41.846 EST [13580] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:32:41.846 EST [13580] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:32:41.993 EST [13580] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:32:42.044 EST [13580] DEBUG:  got WAL segment from archive
2018-03-10 16:32:42.044 EST [13580] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:32:42.044 EST [13580] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:32:42.044 EST [13580] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:32:42.044 EST [13580] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:32:42.044 EST [13580] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:32:42.044 EST [13580] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:32:42.044 EST [13580] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:32:42.044 EST [13580] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:32:42.044 EST [13580] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:32:42.044 EST [13580] DEBUG:  starting up replication slots
2018-03-10 16:32:42.093 EST [13580] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:32:42.094 EST [13580] DEBUG:  initializing for hot standby
2018-03-10 16:32:42.094 EST [13580] LOG:  redo starts at 0/2000028
2018-03-10 16:32:42.094 EST [13580] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:32:42.094 EST [13580] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:32:42.094 EST [13580] DEBUG:  end of backup reached
2018-03-10 16:32:42.094 EST [13580] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:32:42.100 EST [13580] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:32:42.205 EST [9320] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:32:42.408 EST [13580] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:32:42.416 EST [13580] DEBUG:  got WAL segment from archive
2018-03-10 16:32:42.530 EST [13580] LOG:  recovery stopping before commit of transaction 560, time 2018-03-10 16:31:52.38407-05
2018-03-10 16:32:42.530 EST [13580] LOG:  recovery has paused
2018-03-10 16:32:42.530 EST [13580] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:32:42.690 EST [12228] 003_recovery_targets.pl LOG:  statement: SELECT '0/30227A0'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:32:42.948 EST [14068] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:32:43.108 EST [9320] LOG:  received immediate shutdown request
2018-03-10 16:32:43.221 EST [9320] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_4.log ===================
2018-03-10 16:32:57.094 EST [7668] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:32:57.095 EST [7668] LOG:  listening on IPv4 address "127.0.0.1", port 53548
2018-03-10 16:32:57.125 EST [7668] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:32:57.125 EST [7668] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:32:57.125 EST [7668] DEBUG:  disabling huge pages
2018-03-10 16:32:57.227 EST [11428] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:32:57.227 EST [11428] LOG:  entering standby mode
2018-03-10 16:32:57.228 EST [11428] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:32:57.228 EST [11428] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:32:57.228 EST [11428] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:32:57.372 EST [11428] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:32:57.381 EST [11428] DEBUG:  got WAL segment from archive
2018-03-10 16:32:57.381 EST [11428] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:32:57.381 EST [11428] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:32:57.381 EST [11428] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:32:57.381 EST [11428] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:32:57.381 EST [11428] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:32:57.381 EST [11428] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:32:57.381 EST [11428] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:32:57.381 EST [11428] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:32:57.381 EST [11428] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:32:57.381 EST [11428] DEBUG:  starting up replication slots
2018-03-10 16:32:57.438 EST [11428] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:32:57.442 EST [11428] DEBUG:  initializing for hot standby
2018-03-10 16:32:57.443 EST [11428] LOG:  redo starts at 0/2000028
2018-03-10 16:32:57.443 EST [11428] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:32:57.443 EST [11428] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:32:57.443 EST [11428] DEBUG:  end of backup reached
2018-03-10 16:32:57.443 EST [11428] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:32:57.451 EST [11428] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:32:57.552 EST [7668] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:32:57.664 EST [11428] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:32:57.669 EST [11428] DEBUG:  got WAL segment from archive
2018-03-10 16:32:57.690 EST [11428] LOG:  recovery stopping at restore point "my_target", time 2018-03-10 16:31:52.932882-05
2018-03-10 16:32:57.691 EST [11428] LOG:  recovery has paused
2018-03-10 16:32:57.691 EST [11428] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:32:57.752 EST [12412] 003_recovery_targets.pl LOG:  statement: SELECT '0/3032288'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:32:57.940 EST [12580] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:32:58.094 EST [7668] LOG:  received immediate shutdown request
2018-03-10 16:32:58.190 EST [7668] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_5.log ===================
2018-03-10 16:33:12.126 EST [12944] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:33:12.127 EST [12944] LOG:  listening on IPv4 address "127.0.0.1", port 53549
2018-03-10 16:33:12.157 EST [12944] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:33:12.157 EST [12944] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:33:12.157 EST [12944] DEBUG:  disabling huge pages
2018-03-10 16:33:12.266 EST [13336] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:33:12.266 EST [13336] LOG:  entering standby mode
2018-03-10 16:33:12.266 EST [13336] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:33:12.267 EST [13336] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:33:12.267 EST [13336] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:33:12.420 EST [13336] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:33:12.428 EST [13336] DEBUG:  got WAL segment from archive
2018-03-10 16:33:12.428 EST [13336] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:33:12.428 EST [13336] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:33:12.428 EST [13336] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:33:12.428 EST [13336] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:33:12.428 EST [13336] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:33:12.428 EST [13336] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:33:12.428 EST [13336] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:33:12.428 EST [13336] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:33:12.428 EST [13336] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:33:12.428 EST [13336] DEBUG:  starting up replication slots
2018-03-10 16:33:12.485 EST [13336] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:33:12.486 EST [13336] DEBUG:  initializing for hot standby
2018-03-10 16:33:12.486 EST [13336] LOG:  redo starts at 0/2000028
2018-03-10 16:33:12.486 EST [13336] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:33:12.486 EST [13336] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:33:12.486 EST [13336] DEBUG:  end of backup reached
2018-03-10 16:33:12.486 EST [13336] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:33:12.517 EST [13336] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:33:12.577 EST [12944] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:33:12.709 EST [13336] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:33:12.715 EST [13336] DEBUG:  got WAL segment from archive
2018-03-10 16:33:12.774 EST [13336] LOG:  recovery stopping after WAL location (LSN) "0/3041DC0"
2018-03-10 16:33:12.774 EST [13336] LOG:  recovery has paused
2018-03-10 16:33:12.774 EST [13336] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:33:12.847 EST [3228] 003_recovery_targets.pl LOG:  statement: SELECT '0/3041DC0'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:33:13.101 EST [6468] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:33:13.240 EST [12944] LOG:  received immediate shutdown request
2018-03-10 16:33:13.348 EST [12944] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_6.log ===================
2018-03-10 16:33:28.611 EST [9096] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:33:28.613 EST [9096] LOG:  listening on IPv4 address "127.0.0.1", port 53550
2018-03-10 16:33:28.643 EST [9096] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:33:28.643 EST [9096] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:33:28.643 EST [9096] DEBUG:  disabling huge pages
2018-03-10 16:33:28.756 EST [8412] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:33:28.756 EST [8412] LOG:  entering standby mode
2018-03-10 16:33:28.756 EST [8412] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:33:28.757 EST [8412] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:33:28.757 EST [8412] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:33:28.898 EST [8412] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:33:28.906 EST [8412] DEBUG:  got WAL segment from archive
2018-03-10 16:33:28.906 EST [8412] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:33:28.906 EST [8412] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:33:28.906 EST [8412] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:33:28.906 EST [8412] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:33:28.907 EST [8412] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:33:28.907 EST [8412] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:33:28.907 EST [8412] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:33:28.907 EST [8412] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:33:28.907 EST [8412] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:33:28.907 EST [8412] DEBUG:  starting up replication slots
2018-03-10 16:33:28.954 EST [8412] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:33:28.956 EST [8412] DEBUG:  initializing for hot standby
2018-03-10 16:33:28.956 EST [8412] LOG:  redo starts at 0/2000028
2018-03-10 16:33:28.956 EST [8412] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:33:28.956 EST [8412] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:33:28.956 EST [8412] DEBUG:  end of backup reached
2018-03-10 16:33:28.956 EST [8412] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:33:28.961 EST [8412] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:33:29.071 EST [9096] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:33:29.143 EST [8412] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:33:29.149 EST [8412] DEBUG:  got WAL segment from archive
2018-03-10 16:33:29.179 EST [8412] LOG:  recovery stopping before commit of transaction 560, time 2018-03-10 16:31:52.38407-05
2018-03-10 16:33:29.179 EST [8412] LOG:  recovery has paused
2018-03-10 16:33:29.179 EST [8412] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:33:29.310 EST [8008] 003_recovery_targets.pl LOG:  statement: SELECT '0/30227A0'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:33:29.630 EST [4936] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:33:29.768 EST [9096] LOG:  received immediate shutdown request
2018-03-10 16:33:29.865 EST [9096] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_7.log ===================
2018-03-10 16:33:44.471 EST [5024] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:33:44.475 EST [5024] LOG:  listening on IPv4 address "127.0.0.1", port 53551
2018-03-10 16:33:44.505 EST [5024] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:33:44.505 EST [5024] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:33:44.505 EST [5024] DEBUG:  disabling huge pages
2018-03-10 16:33:44.691 EST [11348] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:33:44.693 EST [11348] LOG:  entering standby mode
2018-03-10 16:33:44.693 EST [11348] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:33:44.694 EST [11348] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:33:44.694 EST [11348] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:33:44.844 EST [11348] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:33:44.848 EST [11348] DEBUG:  got WAL segment from archive
2018-03-10 16:33:44.848 EST [11348] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:33:44.848 EST [11348] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:33:44.848 EST [11348] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:33:44.848 EST [11348] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:33:44.848 EST [11348] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:33:44.848 EST [11348] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:33:44.848 EST [11348] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:33:44.848 EST [11348] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:33:44.848 EST [11348] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:33:44.848 EST [11348] DEBUG:  starting up replication slots
2018-03-10 16:33:44.908 EST [11348] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:33:44.909 EST [11348] DEBUG:  initializing for hot standby
2018-03-10 16:33:44.910 EST [11348] LOG:  redo starts at 0/2000028
2018-03-10 16:33:44.910 EST [11348] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:33:44.910 EST [11348] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:33:44.910 EST [11348] DEBUG:  end of backup reached
2018-03-10 16:33:44.910 EST [11348] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:33:44.917 EST [11348] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:33:45.012 EST [5024] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:33:45.122 EST [11348] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:33:45.126 EST [11348] DEBUG:  got WAL segment from archive
2018-03-10 16:33:45.135 EST [11348] LOG:  recovery stopping after commit of transaction 558, time 2018-03-10 16:31:51.599193-05
2018-03-10 16:33:45.135 EST [11348] LOG:  recovery has paused
2018-03-10 16:33:45.135 EST [11348] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:33:45.236 EST [12404] 003_recovery_targets.pl LOG:  statement: SELECT '0/3012C90'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:33:45.901 EST [13484] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:33:46.071 EST [5024] LOG:  received immediate shutdown request
2018-03-10 16:33:46.370 EST [5024] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_8.log ===================
2018-03-10 16:34:00.629 EST [13584] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:34:00.630 EST [13584] LOG:  listening on IPv4 address "127.0.0.1", port 53552
2018-03-10 16:34:00.660 EST [13584] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:34:00.660 EST [13584] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:34:00.660 EST [13584] DEBUG:  disabling huge pages
2018-03-10 16:34:00.794 EST [10620] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:34:00.795 EST [10620] LOG:  entering standby mode
2018-03-10 16:34:00.795 EST [10620] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:34:00.795 EST [10620] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:34:00.795 EST [10620] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:34:00.951 EST [10620] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:34:00.955 EST [10620] DEBUG:  got WAL segment from archive
2018-03-10 16:34:00.955 EST [10620] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:34:00.955 EST [10620] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:34:00.955 EST [10620] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:34:00.955 EST [10620] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:34:00.955 EST [10620] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:34:00.955 EST [10620] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:34:00.955 EST [10620] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:34:00.955 EST [10620] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:34:00.955 EST [10620] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:34:00.955 EST [10620] DEBUG:  starting up replication slots
2018-03-10 16:34:01.003 EST [10620] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:34:01.004 EST [10620] DEBUG:  initializing for hot standby
2018-03-10 16:34:01.004 EST [10620] LOG:  redo starts at 0/2000028
2018-03-10 16:34:01.004 EST [10620] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:34:01.004 EST [10620] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:34:01.004 EST [10620] DEBUG:  end of backup reached
2018-03-10 16:34:01.004 EST [10620] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:34:01.033 EST [10620] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:34:01.103 EST [13584] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:34:01.214 EST [10620] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:34:01.218 EST [10620] DEBUG:  got WAL segment from archive
2018-03-10 16:34:01.240 EST [10620] LOG:  recovery stopping at restore point "my_target", time 2018-03-10 16:31:52.932882-05
2018-03-10 16:34:01.240 EST [10620] LOG:  recovery has paused
2018-03-10 16:34:01.240 EST [10620] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:34:01.308 EST [3228] 003_recovery_targets.pl LOG:  statement: SELECT '0/3032288'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:34:01.511 EST [13876] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:34:01.669 EST [13584] LOG:  received immediate shutdown request
2018-03-10 16:34:01.772 EST [13584] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_9.log ===================
2018-03-10 16:34:16.051 EST [12900] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:34:16.052 EST [12900] LOG:  listening on IPv4 address "127.0.0.1", port 53553
2018-03-10 16:34:16.082 EST [12900] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:34:16.082 EST [12900] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:34:16.083 EST [12900] DEBUG:  disabling huge pages
2018-03-10 16:34:16.194 EST [14108] LOG:  database system was interrupted; last known up at 2018-03-10 16:31:32 EST
2018-03-10 16:34:16.194 EST [14108] LOG:  entering standby mode
2018-03-10 16:34:16.194 EST [14108] DEBUG:  backup time 2018-03-10 16:31:33 EST in file "backup_label"
2018-03-10 16:34:16.194 EST [14108] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:34:16.194 EST [14108] DEBUG:  backup timeline 1 in file "backup_label"
        1 file(s) copied.
2018-03-10 16:34:16.387 EST [14108] LOG:  restored log file "000000010000000000000002" from archive
2018-03-10 16:34:16.396 EST [14108] DEBUG:  got WAL segment from archive
2018-03-10 16:34:16.397 EST [14108] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:34:16.397 EST [14108] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:34:16.397 EST [14108] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:34:16.397 EST [14108] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:34:16.397 EST [14108] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:34:16.397 EST [14108] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:34:16.397 EST [14108] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:34:16.397 EST [14108] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:34:16.397 EST [14108] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:34:16.397 EST [14108] DEBUG:  starting up replication slots
2018-03-10 16:34:16.441 EST [14108] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:34:16.443 EST [14108] DEBUG:  initializing for hot standby
2018-03-10 16:34:16.443 EST [14108] LOG:  redo starts at 0/2000028
2018-03-10 16:34:16.443 EST [14108] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:34:16.443 EST [14108] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:34:16.443 EST [14108] DEBUG:  end of backup reached
2018-03-10 16:34:16.443 EST [14108] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:34:16.450 EST [14108] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:34:16.598 EST [12900] LOG:  database system is ready to accept read only connections
        1 file(s) copied.
2018-03-10 16:34:16.725 EST [14108] LOG:  restored log file "000000010000000000000003" from archive
2018-03-10 16:34:16.729 EST [14108] DEBUG:  got WAL segment from archive
2018-03-10 16:34:16.758 EST [14108] LOG:  recovery stopping after WAL location (LSN) "0/3041DC0"
2018-03-10 16:34:16.758 EST [14108] LOG:  recovery has paused
2018-03-10 16:34:16.758 EST [14108] HINT:  Execute pg_wal_replay_resume() to continue.
2018-03-10 16:34:16.804 EST [12548] 003_recovery_targets.pl LOG:  statement: SELECT '0/3041DC0'::pg_lsn <= pg_last_wal_replay_lsn()
2018-03-10 16:34:16.975 EST [7788] 003_recovery_targets.pl LOG:  statement: SELECT count(*) FROM tab_int
2018-03-10 16:34:17.133 EST [12900] LOG:  received immediate shutdown request
2018-03-10 16:34:17.226 EST [12900] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_master.log ===================
2018-03-10 16:34:57.959 EST [11392] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:34:57.961 EST [11392] LOG:  listening on IPv4 address "127.0.0.1", port 53266
2018-03-10 16:34:57.991 EST [11392] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:34:57.991 EST [11392] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:34:57.991 EST [11392] DEBUG:  disabling huge pages
2018-03-10 16:34:58.141 EST [6516] LOG:  database system was shut down at 2018-03-10 16:34:57 EST
2018-03-10 16:34:58.142 EST [6516] DEBUG:  checkpoint record is at 0/15CF4D8
2018-03-10 16:34:58.143 EST [6516] DEBUG:  redo record is at 0/15CF4D8; shutdown true
2018-03-10 16:34:58.162 EST [6516] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:34:58.162 EST [6516] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:34:58.162 EST [6516] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:34:58.162 EST [6516] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:34:58.162 EST [6516] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:34:58.162 EST [6516] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:34:58.162 EST [6516] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:34:58.162 EST [6516] DEBUG:  starting up replication slots
2018-03-10 16:34:58.163 EST [6516] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:34:58.163 EST [6516] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:34:58.331 EST [4120] DEBUG:  autovacuum launcher started
2018-03-10 16:34:58.346 EST [11392] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:34:58.382 EST [11392] LOG:  database system is ready to accept connections
2018-03-10 16:34:58.403 EST [2500] DEBUG:  logical replication launcher started
2018-03-10 16:34:58.597 EST [8228] 004_timeline_switch.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:34:58.597 EST [8228] 004_timeline_switch.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:34:58.597 EST [8228] 004_timeline_switch.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:34:58.662 EST [12864] DEBUG:  performing replication slot checkpoint
2018-03-10 16:34:59.131 EST [11872] 004_timeline_switch.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_11872" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:34:59.196 EST [11872] 004_timeline_switch.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:34:59.197 EST [11872] 004_timeline_switch.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_11872" 0/2000000 TIMELINE 1
2018-03-10 16:34:59.208 EST [11872] 004_timeline_switch.pl DEBUG:  standby "004_timeline_switch.pl" has now caught up with primary
2018-03-10 16:35:05.551 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:35:05.552 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:35:05.552 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:35:05.552 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:35:05.552 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:35:05.552 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:35:05.552 EST [8228] 004_timeline_switch.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:35:05.554 EST [8228] 004_timeline_switch.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:35:05.554 EST [8228] 004_timeline_switch.pl DEBUG:  file "postmaster.pid" excluded from backup
2018-03-10 16:35:28.804 EST [648] standby_1 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:35:28.804 EST [648] standby_1 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:35:28.804 EST [648] standby_1 DEBUG:  standby "standby_1" has now caught up with primary
2018-03-10 16:35:43.909 EST [7764] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:35:43.909 EST [7764] standby_2 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:35:43.909 EST [7764] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:35:43.980 EST [13788] 004_timeline_switch.pl LOG:  statement: CREATE TABLE tab_int AS SELECT generate_series(1,1000) AS a
2018-03-10 16:35:44.116 EST [11028] 004_timeline_switch.pl LOG:  statement: SELECT pg_current_wal_lsn()
2018-03-10 16:35:44.377 EST [8904] 004_timeline_switch.pl LOG:  statement: SELECT '0/3027498' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';
2018-03-10 16:35:44.552 EST [11392] LOG:  received immediate shutdown request
2018-03-10 16:35:44.571 EST [7764] standby_2 WARNING:  terminating connection because of crash of another server process
2018-03-10 16:35:44.571 EST [7764] standby_2 DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:35:44.571 EST [7764] standby_2 HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:44.572 EST [648] standby_1 WARNING:  terminating connection because of crash of another server process
2018-03-10 16:35:44.572 EST [648] standby_1 DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:35:44.572 EST [648] standby_1 HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:44.584 EST [4120] WARNING:  terminating connection because of crash of another server process
2018-03-10 16:35:44.584 EST [4120] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:35:44.584 EST [4120] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:44.741 EST [13744] [unknown] FATAL:  the database system is shutting down
2018-03-10 16:35:44.769 EST [8648] [unknown] FATAL:  the database system is shutting down
2018-03-10 16:35:44.796 EST [11392] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_1.log ===================
2018-03-10 16:35:28.258 EST [11528] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:35:28.259 EST [11528] LOG:  listening on IPv4 address "127.0.0.1", port 53267
2018-03-10 16:35:28.291 EST [11528] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:35:28.291 EST [11528] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:35:28.292 EST [11528] DEBUG:  disabling huge pages
2018-03-10 16:35:28.412 EST [12268] LOG:  database system was interrupted; last known up at 2018-03-10 16:34:58 EST
2018-03-10 16:35:28.412 EST [12268] LOG:  entering standby mode
2018-03-10 16:35:28.412 EST [12268] DEBUG:  backup time 2018-03-10 16:34:59 EST in file "backup_label"
2018-03-10 16:35:28.412 EST [12268] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:35:28.413 EST [12268] DEBUG:  backup timeline 1 in file "backup_label"
2018-03-10 16:35:28.413 EST [12268] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:35:28.413 EST [12268] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:35:28.413 EST [12268] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:35:28.413 EST [12268] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:35:28.413 EST [12268] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:35:28.413 EST [12268] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:35:28.413 EST [12268] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:35:28.413 EST [12268] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:35:28.413 EST [12268] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:35:28.413 EST [12268] DEBUG:  starting up replication slots
2018-03-10 16:35:28.461 EST [12268] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:35:28.463 EST [12268] DEBUG:  initializing for hot standby
2018-03-10 16:35:28.463 EST [12268] LOG:  redo starts at 0/2000028
2018-03-10 16:35:28.464 EST [12268] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:35:28.464 EST [12268] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 556 latestCompletedXid 555 oldestRunningXid 556
2018-03-10 16:35:28.464 EST [12268] DEBUG:  end of backup reached
2018-03-10 16:35:28.464 EST [12268] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:35:28.523 EST [12268] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:35:28.548 EST [12268] LOG:  invalid record length at 0/3000060: wanted 24, got 0
2018-03-10 16:35:28.659 EST [11528] LOG:  database system is ready to accept read only connections
2018-03-10 16:35:28.804 EST [3176] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:44.575 EST [3176] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:35:44.603 EST [12268] LOG:  invalid record length at 0/3027498: wanted 24, got 0
2018-03-10 16:35:44.769 EST [11172] FATAL:  could not connect to the primary server: FATAL:  the database system is shutting down
2018-03-10 16:35:44.943 EST [12268] DEBUG:  invalid record length at 0/3027498: wanted 24, got 0
2018-03-10 16:35:44.944 EST [12268] LOG:  received promote request
2018-03-10 16:35:44.944 EST [12268] LOG:  redo done at 0/3027460
2018-03-10 16:35:44.944 EST [12268] LOG:  last completed transaction was at log time 2018-03-10 16:35:43.986125-05
2018-03-10 16:35:44.944 EST [12268] DEBUG:  resetting unlogged relations: cleanup 0 init 1
2018-03-10 16:35:44.947 EST [12268] LOG:  selected new timeline ID: 2
2018-03-10 16:35:45.193 EST [12268] DEBUG:  could not remove file "pg_wal/000000020000000000000003": No such file or directory
2018-03-10 16:35:45.227 EST [12268] LOG:  archive recovery complete
2018-03-10 16:35:45.232 EST [12268] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:35:45.232 EST [12268] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:35:45.337 EST [13492] DEBUG:  performing replication slot checkpoint
2018-03-10 16:35:45.583 EST [11528] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:35:45.603 EST [12080] DEBUG:  autovacuum launcher started
2018-03-10 16:35:45.617 EST [11528] LOG:  database system is ready to accept connections
2018-03-10 16:35:45.654 EST [6396] DEBUG:  logical replication launcher started
2018-03-10 16:35:46.212 EST [5880] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:35:46.212 EST [5880] standby_2 LOG:  received replication command: TIMELINE_HISTORY 2
2018-03-10 16:35:46.230 EST [1060] 004_timeline_switch.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(1001,2000))
2018-03-10 16:35:46.259 EST [5880] standby_2 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:35:46.263 EST [5880] standby_2 DEBUG:  walsender reached end of timeline at 0/3027498 (sent up to 0/3027498)
2018-03-10 16:35:46.263 EST [5880] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:35:46.300 EST [5880] standby_2 LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:35:46.301 EST [5880] standby_2 LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 2
2018-03-10 16:35:46.302 EST [5880] standby_2 DEBUG:  standby "standby_2" has now caught up with primary
2018-03-10 16:35:46.479 EST [9892] 004_timeline_switch.pl LOG:  statement: SELECT pg_current_wal_lsn()
2018-03-10 16:35:46.598 EST [13004] 004_timeline_switch.pl LOG:  statement: SELECT '0/303A148' <= replay_lsn FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby_2';
2018-03-10 16:35:49.696 EST [11528] LOG:  received immediate shutdown request
2018-03-10 16:35:49.728 EST [5880] standby_2 WARNING:  terminating connection because of crash of another server process
2018-03-10 16:35:49.728 EST [5880] standby_2 DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:35:49.728 EST [5880] standby_2 HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:49.732 EST [12080] WARNING:  terminating connection because of crash of another server process
2018-03-10 16:35:49.732 EST [12080] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:35:49.732 EST [12080] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:49.824 EST [11528] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_2.log ===================
2018-03-10 16:35:43.493 EST [10940] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:35:43.494 EST [10940] LOG:  listening on IPv4 address "127.0.0.1", port 53268
2018-03-10 16:35:43.524 EST [10940] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:35:43.524 EST [10940] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:35:43.525 EST [10940] DEBUG:  disabling huge pages
2018-03-10 16:35:43.646 EST [3796] LOG:  database system was interrupted; last known up at 2018-03-10 16:34:58 EST
2018-03-10 16:35:43.646 EST [3796] LOG:  entering standby mode
2018-03-10 16:35:43.647 EST [3796] DEBUG:  backup time 2018-03-10 16:34:59 EST in file "backup_label"
2018-03-10 16:35:43.647 EST [3796] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:35:43.647 EST [3796] DEBUG:  backup timeline 1 in file "backup_label"
2018-03-10 16:35:43.647 EST [3796] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:35:43.648 EST [3796] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:35:43.648 EST [3796] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:35:43.648 EST [3796] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:35:43.648 EST [3796] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:35:43.648 EST [3796] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:35:43.648 EST [3796] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:35:43.648 EST [3796] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:35:43.648 EST [3796] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:35:43.648 EST [3796] DEBUG:  starting up replication slots
2018-03-10 16:35:43.696 EST [3796] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:35:43.697 EST [3796] DEBUG:  initializing for hot standby
2018-03-10 16:35:43.698 EST [3796] LOG:  redo starts at 0/2000028
2018-03-10 16:35:43.698 EST [3796] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:35:43.698 EST [3796] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 556 latestCompletedXid 555 oldestRunningXid 556
2018-03-10 16:35:43.698 EST [3796] DEBUG:  end of backup reached
2018-03-10 16:35:43.698 EST [3796] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:35:43.703 EST [3796] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:35:43.703 EST [3796] LOG:  invalid record length at 0/3000060: wanted 24, got 0
2018-03-10 16:35:43.786 EST [10940] LOG:  database system is ready to accept read only connections
2018-03-10 16:35:43.909 EST [7284] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:44.573 EST [7284] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:35:44.591 EST [3796] LOG:  invalid record length at 0/3027498: wanted 24, got 0
2018-03-10 16:35:44.742 EST [13548] FATAL:  could not connect to the primary server: FATAL:  the database system is shutting down
2018-03-10 16:35:45.071 EST [3796] DEBUG:  invalid record length at 0/3027498: wanted 24, got 0
2018-03-10 16:35:45.370 EST [10940] LOG:  received fast shutdown request
2018-03-10 16:35:45.401 EST [10940] LOG:  aborting any active transactions
2018-03-10 16:35:45.401 EST [4380] FATAL:  terminating walreceiver process due to administrator command
2018-03-10 16:35:45.452 EST [11608] LOG:  shutting down
2018-03-10 16:35:45.559 EST [10940] LOG:  database system is shut down
2018-03-10 16:35:45.790 EST [8132] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:35:45.797 EST [8132] LOG:  listening on IPv4 address "127.0.0.1", port 53268
2018-03-10 16:35:45.821 EST [8132] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:35:45.821 EST [8132] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:35:45.822 EST [8132] DEBUG:  disabling huge pages
2018-03-10 16:35:45.958 EST [5188] LOG:  database system was shut down in recovery at 2018-03-10 16:35:45 EST
2018-03-10 16:35:45.958 EST [5188] LOG:  entering standby mode
2018-03-10 16:35:45.958 EST [5188] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:35:45.958 EST [5188] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:35:45.958 EST [5188] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:35:45.958 EST [5188] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:35:45.959 EST [5188] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:35:45.959 EST [5188] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:35:45.959 EST [5188] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:35:45.959 EST [5188] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:35:45.959 EST [5188] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:35:45.959 EST [5188] DEBUG:  starting up replication slots
2018-03-10 16:35:45.964 EST [5188] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:35:45.965 EST [5188] DEBUG:  initializing for hot standby
2018-03-10 16:35:45.965 EST [5188] LOG:  redo starts at 0/2000028
2018-03-10 16:35:45.966 EST [5188] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:35:45.966 EST [5188] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 556 latestCompletedXid 555 oldestRunningXid 556
2018-03-10 16:35:45.973 EST [5188] LOG:  consistent recovery state reached at 0/3027498
2018-03-10 16:35:45.973 EST [5188] LOG:  invalid record length at 0/3027498: wanted 24, got 0
2018-03-10 16:35:46.033 EST [8132] LOG:  database system is ready to accept read only connections
2018-03-10 16:35:46.212 EST [13352] LOG:  fetching timeline history file for timeline 2 from primary server
2018-03-10 16:35:46.259 EST [13352] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:35:46.265 EST [13352] LOG:  replication terminated by primary server
2018-03-10 16:35:46.265 EST [13352] DETAIL:  End of WAL reached on timeline 1 at 0/3027498.
2018-03-10 16:35:46.299 EST [13352] DEBUG:  walreceiver ended streaming and awaits new instructions
2018-03-10 16:35:46.300 EST [5188] LOG:  new target timeline is 2
2018-03-10 16:35:46.300 EST [5188] DEBUG:  invalid record length at 0/3027498: wanted 24, got 0
2018-03-10 16:35:46.303 EST [13352] LOG:  restarted WAL streaming at 0/3000000 on timeline 2
2018-03-10 16:35:46.902 EST [10176] 004_timeline_switch.pl LOG:  statement: SELECT count(*) FROM tab_int
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:35:49.729 EST [13352] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:35:49.759 EST [5188] LOG:  invalid record length at 0/303A1F0: wanted 24, got 0
2018-03-10 16:35:49.824 EST [2468] FATAL:  could not connect to the primary server: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:35:50.227 EST [5188] DEBUG:  invalid record length at 0/303A1F0: wanted 24, got 0
2018-03-10 16:35:51.274 EST [3208] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 53267?
2018-03-10 16:35:51.275 EST [5188] DEBUG:  invalid record length at 0/303A1F0: wanted 24, got 0
2018-03-10 16:35:52.337 EST [11928] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 53267?
2018-03-10 16:35:52.412 EST [5188] DEBUG:  invalid record length at 0/303A1F0: wanted 24, got 0
2018-03-10 16:35:53.571 EST [12488] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 53267?
2018-03-10 16:35:53.572 EST [5188] DEBUG:  invalid record length at 0/303A1F0: wanted 24, got 0
2018-03-10 16:35:53.919 EST [8132] LOG:  received immediate shutdown request
2018-03-10 16:35:54.046 EST [8132] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/005_replay_delay_master.log ===================
2018-03-10 16:36:19.901 EST [11312] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:36:19.902 EST [11312] LOG:  listening on IPv4 address "127.0.0.1", port 56907
2018-03-10 16:36:19.933 EST [11312] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:36:19.933 EST [11312] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:36:19.933 EST [11312] DEBUG:  disabling huge pages
2018-03-10 16:36:20.060 EST [7164] LOG:  database system was shut down at 2018-03-10 16:36:19 EST
2018-03-10 16:36:20.061 EST [7164] DEBUG:  checkpoint record is at 0/15CF4D8
2018-03-10 16:36:20.062 EST [7164] DEBUG:  redo record is at 0/15CF4D8; shutdown true
2018-03-10 16:36:20.088 EST [7164] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:36:20.088 EST [7164] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:36:20.088 EST [7164] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:36:20.088 EST [7164] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:36:20.088 EST [7164] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:36:20.088 EST [7164] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:36:20.088 EST [7164] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:36:20.088 EST [7164] DEBUG:  starting up replication slots
2018-03-10 16:36:20.089 EST [7164] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:36:20.089 EST [7164] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:36:20.258 EST [14296] DEBUG:  autovacuum launcher started
2018-03-10 16:36:20.268 EST [11312] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:36:20.304 EST [11312] LOG:  database system is ready to accept connections
2018-03-10 16:36:20.359 EST [11124] DEBUG:  logical replication launcher started
2018-03-10 16:36:20.545 EST [5024] 005_replay_delay.pl LOG:  statement: CREATE TABLE tab_int AS SELECT generate_series(1, 10) AS a
2018-03-10 16:36:20.766 EST [8916] 005_replay_delay.pl LOG:  received replication command: SHOW wal_segment_size
2018-03-10 16:36:20.766 EST [8916] 005_replay_delay.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:36:20.767 EST [8916] 005_replay_delay.pl LOG:  received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup'    NOWAIT  
2018-03-10 16:36:20.823 EST [10400] DEBUG:  performing replication slot checkpoint
2018-03-10 16:36:23.707 EST [13132] 005_replay_delay.pl LOG:  received replication command: CREATE_REPLICATION_SLOT "pg_basebackup_13132" TEMPORARY PHYSICAL RESERVE_WAL
2018-03-10 16:36:23.747 EST [13132] 005_replay_delay.pl LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:36:23.747 EST [13132] 005_replay_delay.pl LOG:  received replication command: START_REPLICATION SLOT "pg_basebackup_13132" 0/2000000 TIMELINE 1
2018-03-10 16:36:23.754 EST [13132] 005_replay_delay.pl DEBUG:  standby "005_replay_delay.pl" has now caught up with primary
2018-03-10 16:36:33.180 EST [8916] 005_replay_delay.pl DEBUG:  file "pg_internal.init" excluded from backup
2018-03-10 16:36:33.189 EST [8916] 005_replay_delay.pl DEBUG:  file "pg_internal.init" excluded from backup
2018-03-10 16:36:33.190 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_dynshmem" excluded from backup
2018-03-10 16:36:33.191 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_notify" excluded from backup
2018-03-10 16:36:33.191 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_replslot" excluded from backup
2018-03-10 16:36:33.191 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_serial" excluded from backup
2018-03-10 16:36:33.192 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_snapshots" excluded from backup
2018-03-10 16:36:33.192 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_stat_tmp" excluded from backup
2018-03-10 16:36:33.192 EST [8916] 005_replay_delay.pl DEBUG:  contents of directory "pg_subtrans" excluded from backup
2018-03-10 16:36:33.193 EST [8916] 005_replay_delay.pl DEBUG:  file "postmaster.opts" excluded from backup
2018-03-10 16:36:33.193 EST [8916] 005_replay_delay.pl DEBUG:  file "postmaster.pid" excluded from backup
2018-03-10 16:36:57.733 EST [6756] standby LOG:  received replication command: IDENTIFY_SYSTEM
2018-03-10 16:36:57.733 EST [6756] standby LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
2018-03-10 16:36:57.734 EST [6756] standby DEBUG:  standby "standby" has now caught up with primary
2018-03-10 16:36:57.781 EST [11344] 005_replay_delay.pl LOG:  statement: INSERT INTO tab_int VALUES (generate_series(11, 20))
2018-03-10 16:36:57.918 EST [14052] 005_replay_delay.pl LOG:  statement: SELECT pg_current_wal_lsn()
2018-03-10 16:37:01.064 EST [11312] LOG:  received immediate shutdown request
2018-03-10 16:37:01.091 EST [6756] standby WARNING:  terminating connection because of crash of another server process
2018-03-10 16:37:01.091 EST [6756] standby DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:37:01.091 EST [6756] standby HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:37:01.094 EST [14296] WARNING:  terminating connection because of crash of another server process
2018-03-10 16:37:01.094 EST [14296] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-03-10 16:37:01.094 EST [14296] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:37:01.204 EST [11312] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/005_replay_delay_standby.log ===================
2018-03-10 16:36:57.278 EST [12940] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:36:57.279 EST [12940] LOG:  listening on IPv4 address "127.0.0.1", port 56908
2018-03-10 16:36:57.311 EST [12940] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:36:57.311 EST [12940] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:36:57.311 EST [12940] DEBUG:  disabling huge pages
2018-03-10 16:36:57.441 EST [9556] LOG:  database system was interrupted; last known up at 2018-03-10 16:36:23 EST
2018-03-10 16:36:57.441 EST [9556] LOG:  entering standby mode
2018-03-10 16:36:57.441 EST [9556] DEBUG:  backup time 2018-03-10 16:36:23 EST in file "backup_label"
2018-03-10 16:36:57.442 EST [9556] DEBUG:  backup label pg_basebackup base backup in file "backup_label"
2018-03-10 16:36:57.442 EST [9556] DEBUG:  backup timeline 1 in file "backup_label"
2018-03-10 16:36:57.442 EST [9556] DEBUG:  checkpoint record is at 0/2000060
2018-03-10 16:36:57.442 EST [9556] DEBUG:  redo record is at 0/2000028; shutdown false
2018-03-10 16:36:57.442 EST [9556] DEBUG:  next transaction ID: 0:557; next OID: 24576
2018-03-10 16:36:57.442 EST [9556] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:36:57.442 EST [9556] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:36:57.442 EST [9556] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:36:57.442 EST [9556] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:36:57.442 EST [9556] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:36:57.442 EST [9556] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:36:57.442 EST [9556] DEBUG:  starting up replication slots
2018-03-10 16:36:57.481 EST [9556] DEBUG:  resetting unlogged relations: cleanup 1 init 0
2018-03-10 16:36:57.484 EST [9556] DEBUG:  initializing for hot standby
2018-03-10 16:36:57.485 EST [9556] LOG:  redo starts at 0/2000028
2018-03-10 16:36:57.485 EST [9556] DEBUG:  recovery snapshots are now enabled
2018-03-10 16:36:57.485 EST [9556] CONTEXT:  WAL redo at 0/2000028 for Standby/RUNNING_XACTS: nextXid 557 latestCompletedXid 556 oldestRunningXid 557
2018-03-10 16:36:57.485 EST [9556] DEBUG:  end of backup reached
2018-03-10 16:36:57.485 EST [9556] CONTEXT:  WAL redo at 0/20000D0 for XLOG/BACKUP_END: 0/2000028
2018-03-10 16:36:57.493 EST [9556] LOG:  consistent recovery state reached at 0/20000F8
2018-03-10 16:36:57.494 EST [9556] LOG:  invalid record length at 0/3000060: wanted 24, got 0
2018-03-10 16:36:57.593 EST [12940] LOG:  database system is ready to accept read only connections
2018-03-10 16:36:57.733 EST [10280] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1
2018-03-10 16:36:58.062 EST [6848] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:36:58.375 EST [468] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:36:58.710 EST [11596] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:36:59.045 EST [5212] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:36:59.380 EST [3232] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:36:59.738 EST [5820] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:37:00.659 EST [7788] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
2018-03-10 16:37:00.912 EST [4876] 005_replay_delay.pl LOG:  statement: SELECT (pg_last_wal_replay_lsn() - '0/3002838'::pg_lsn) >= 0
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2018-03-10 16:37:01.092 EST [10280] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:37:01.122 EST [9556] LOG:  invalid record length at 0/3002870: wanted 24, got 0
2018-03-10 16:37:01.203 EST [11544] FATAL:  could not connect to the primary server: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2018-03-10 16:37:01.591 EST [9556] DEBUG:  invalid record length at 0/3002870: wanted 24, got 0
2018-03-10 16:37:02.669 EST [4380] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 56907?
2018-03-10 16:37:02.670 EST [9556] DEBUG:  invalid record length at 0/3002870: wanted 24, got 0
2018-03-10 16:37:03.794 EST [7748] FATAL:  could not connect to the primary server: could not connect to server: Connection refused (0x0000274D/10061)
		Is the server running on host "127.0.0.1" and accepting
		TCP/IP connections on port 56907?
2018-03-10 16:37:03.795 EST [9556] DEBUG:  invalid record length at 0/3002870: wanted 24, got 0
2018-03-10 16:37:04.087 EST [12940] LOG:  received immediate shutdown request
2018-03-10 16:37:04.188 EST [12940] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/006_logical_decoding_master.log ===================
2018-03-10 16:37:30.811 EST [10204] DEBUG:  registering background worker "logical replication launcher"
2018-03-10 16:37:30.812 EST [10204] LOG:  listening on IPv4 address "127.0.0.1", port 51929
2018-03-10 16:37:30.843 EST [10204] DEBUG:  could not enable Lock Pages in Memory user right
2018-03-10 16:37:30.843 EST [10204] HINT:  Assign Lock Pages in Memory user right to the Windows user account which runs PostgreSQL.
2018-03-10 16:37:30.843 EST [10204] DEBUG:  disabling huge pages
2018-03-10 16:37:30.958 EST [13880] LOG:  database system was shut down at 2018-03-10 16:37:30 EST
2018-03-10 16:37:30.958 EST [13880] DEBUG:  checkpoint record is at 0/15CF4D8
2018-03-10 16:37:30.959 EST [13880] DEBUG:  redo record is at 0/15CF4D8; shutdown true
2018-03-10 16:37:30.983 EST [13880] DEBUG:  next transaction ID: 0:556; next OID: 12359
2018-03-10 16:37:30.983 EST [13880] DEBUG:  next MultiXactId: 1; next MultiXactOffset: 0
2018-03-10 16:37:30.983 EST [13880] DEBUG:  oldest unfrozen transaction ID: 549, in database 1
2018-03-10 16:37:30.983 EST [13880] DEBUG:  oldest MultiXactId: 1, in database 1
2018-03-10 16:37:30.983 EST [13880] DEBUG:  commit timestamp Xid oldest/newest: 0/0
2018-03-10 16:37:30.983 EST [13880] DEBUG:  transaction ID wrap limit is 2147484196, limited by database with OID 1
2018-03-10 16:37:30.983 EST [13880] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:37:30.983 EST [13880] DEBUG:  starting up replication slots
2018-03-10 16:37:30.985 EST [13880] DEBUG:  MultiXactId wrap limit is 2147483648, limited by database with OID 1
2018-03-10 16:37:30.985 EST [13880] DEBUG:  MultiXact member stop limit is now 4294914944 based on MultiXact 1
2018-03-10 16:37:31.147 EST [1436] DEBUG:  autovacuum launcher started
2018-03-10 16:37:31.156 EST [10204] DEBUG:  starting background worker process "logical replication launcher"
2018-03-10 16:37:31.179 EST [10204] LOG:  database system is ready to accept connections
2018-03-10 16:37:31.194 EST [10648] DEBUG:  logical replication launcher started
2018-03-10 16:37:31.532 EST [8316] 006_logical_decoding.pl LOG:  statement: CREATE TABLE decoding_test(x integer, y text);
2018-03-10 16:37:31.538 EST [8316] 006_logical_decoding.pl DEBUG:  building index "pg_toast_16384_index" on table "pg_toast_16384" serially
2018-03-10 16:37:31.923 EST [12884] 006_logical_decoding.pl LOG:  statement: SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');
2018-03-10 16:37:32.530 EST [12884] 006_logical_decoding.pl DEBUG:  searching for logical decoding starting point, starting at 0/15EEF00
2018-03-10 16:37:32.534 EST [12884] 006_logical_decoding.pl LOG:  logical decoding found consistent point at 0/15EEF00
2018-03-10 16:37:32.534 EST [12884] 006_logical_decoding.pl DETAIL:  There are no running transactions.
2018-03-10 16:37:32.534 EST [12884] 006_logical_decoding.pl STATEMENT:  SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');
2018-03-10 16:37:33.250 EST [12376] 006_logical_decoding.pl LOG:  statement: INSERT INTO decoding_test(x,y) SELECT s, s::text FROM generate_series(1,10) s;
2018-03-10 16:37:33.417 EST [6872] 006_logical_decoding.pl LOG:  statement: SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);
2018-03-10 16:37:33.418 EST [6872] 006_logical_decoding.pl LOG:  starting logical decoding for slot "test_slot"
2018-03-10 16:37:33.418 EST [6872] 006_logical_decoding.pl DETAIL:  streaming transactions committing after 0/15EEF38, reading WAL from 0/15EEF00
2018-03-10 16:37:33.418 EST [6872] 006_logical_decoding.pl STATEMENT:  SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);
2018-03-10 16:37:33.419 EST [6872] 006_logical_decoding.pl LOG:  logical decoding found consistent point at 0/15EEF00
2018-03-10 16:37:33.419 EST [6872] 006_logical_decoding.pl DETAIL:  There are no running transactions.
2018-03-10 16:37:33.419 EST [6872] 006_logical_decoding.pl STATEMENT:  SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);
2018-03-10 16:37:34.106 EST [10204] LOG:  received fast shutdown request
2018-03-10 16:37:34.139 EST [10204] LOG:  aborting any active transactions
2018-03-10 16:37:34.142 EST [10648] DEBUG:  logical replication launcher shutting down
2018-03-10 16:37:34.143 EST [1436] DEBUG:  autovacuum launcher shutting down
2018-03-10 16:37:34.145 EST [10204] LOG:  background worker "logical replication launcher" (PID 10648) exited with exit code 1
2018-03-10 16:37:34.281 EST [12308] LOG:  shutting down
2018-03-10 16:37:34.342 EST [12308] DEBUG:  performing replication slot checkpoint
2018-03-10 16:37:34.873 EST [10204] LOG:  database system is shut down


================== pgsql.build/src/test/recovery/tmp_check/log/regress_log_001_stream_rep ===================
# Checking port 58233
# Found free port 58233
Name: master
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/archives
Connection string: port=58233 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_master.log
# Running: initdb -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "pgrunner".
This user must also own the server process.

The database cluster will be initialized with locale "English_United States.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... windows
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -l logfile start

# Running: G:/prog/bf/root/HEAD/pgsql.build/Release/pg_regress/pg_regress --config-auth G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 1716
# Taking pg_basebackup my_backup from node "master"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/backup/my_backup -p 58233 --no-sync
# Backup finished
# Checking port 58234
# Found free port 58234
Name: standby_1
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/archives
Connection string: port=58234 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "master"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_1.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 8356
# Taking pg_basebackup my_backup from node "standby_1"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup -p 58234 --no-sync
# Backup finished
### Stopping node "master" using mode fast
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "master"
# Taking pg_basebackup my_backup_2 from node "standby_1"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/backup/my_backup_2 -p 58234 --no-sync
# Backup finished
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 6508
# Checking port 58235
# Found free port 58235
Name: standby_2
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/archives
Connection string: port=58235 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_2.log
# Initializing node "standby_2" from backup "my_backup" of node "standby_1"
### Enabling streaming replication for node "standby_2"
### Starting node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_2.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_2" is 13972
Waiting for replication conn standby_1's replay_lsn to pass 0/3027588 on master
done
Waiting for replication conn standby_2's replay_lsn to pass 0/3027588 on standby_1
done
standby 1: 1002
ok 1 - check streamed content on standby 1
standby 2: 1002
ok 2 - check streamed content on standby 2
psql:<stdin>:1: ERROR:  cannot execute INSERT in a read-only transaction
ok 3 - read-only queries on standby 1
psql:<stdin>:1: ERROR:  cannot execute INSERT in a read-only transaction
ok 4 - read-only queries on standby 2
# testing connection parameter "target_session_attrs"
ok 5 - connect to node master if mode "read-write" and master,standby_1 listed
ok 6 - connect to node master if mode "read-write" and standby_1,master listed
ok 7 - connect to node master if mode "any" and master,standby_1 listed
ok 8 - connect to node standby_1 if mode "any" and standby_1,master listed
# switching to physical replication slot
### Restarting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_master.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 12024
(standby_1,)
ok 9 - physical slot created on master
### Restarting node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_1.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 6736
(standby_2,)
ok 10 - physical slot created on intermediate replica
### Restarting node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_2.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby_2" is 12760
ok 11 - xmin of non-cascaded slot null with no hs_feedback
ok 12 - catalog xmin of non-cascaded slot null with no hs_feedback
ok 13 - xmin of cascaded slot null with no hs_feedback
ok 14 - catalog xmin of cascaded slot null with no hs_feedback
Waiting for replication conn standby_1's replay_lsn to pass 0/303D608 on master
done
Waiting for replication conn standby_2's replay_lsn to pass 0/303D608 on standby_1
done
# enabling hot_standby_feedback
### Reloading node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/pgdata reload
server signaled
### Reloading node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata reload
server signaled
Waiting for replication conn standby_1's replay_lsn to pass 0/303D670 on master
done
Waiting for replication conn standby_2's replay_lsn to pass 0/303D670 on standby_1
done
ok 15 - xmin of non-cascaded slot non-null with hs feedback
ok 16 - catalog xmin of non-cascaded slot still null with hs_feedback
ok 17 - xmin of cascaded slot non-null with hs feedback
ok 18 - catalog xmin of cascaded slot still null with hs_feedback
# doing some work to advance xmin
# master slot's new xmin 1562, old xmin 559
ok 19 - xmin of non-cascaded slot with hs feedback has changed
ok 20 - catalog xmin of non-cascaded slot still null with hs_feedback unchanged
# standby_1 slot's new xmin 1562, old xmin 560
ok 21 - xmin of cascaded slot with hs feedback has changed
ok 22 - catalog xmin of cascaded slot still null with hs_feedback unchanged
# disabling hot_standby_feedback
### Reloading node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/pgdata reload
server signaled
### Reloading node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata reload
server signaled
Waiting for replication conn standby_1's replay_lsn to pass 0/30C33B8 on master
done
Waiting for replication conn standby_2's replay_lsn to pass 0/30C33B8 on standby_1
done
ok 23 - xmin of non-cascaded slot null with hs feedback reset
ok 24 - catalog xmin of non-cascaded slot still null with hs_feedback reset
ok 25 - xmin of cascaded slot null with hs feedback reset
ok 26 - catalog xmin of cascaded slot still null with hs_feedback reset
# re-enabling hot_standby_feedback and disabling while stopped
### Reloading node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata reload
server signaled
Waiting for replication conn standby_1's replay_lsn to pass 0/30C5078 on master
done
Waiting for replication conn standby_2's replay_lsn to pass 0/30C5078 on standby_1
done
### Stopping node "standby_2" using mode fast
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata -m fast stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_2"
ok 27 - xmin of cascaded slot non-null with postgres shut down
### Starting node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/001_stream_rep_standby_2.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_2" is 12544
ok 28 - xmin of cascaded slot reset after startup with hs feedback reset
### Stopping node "master" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_master_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "master"
### Stopping node "standby_1" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_1_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "standby_2" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_001_stream_rep_standby_2_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_2"


================== pgsql.build/src/test/recovery/tmp_check/log/regress_log_002_archiving ===================
# Checking port 53709
# Found free port 53709
Name: master
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/archives
Connection string: port=53709 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/002_archiving_master.log
# Running: initdb -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "pgrunner".
This user must also own the server process.

The database cluster will be initialized with locale "English_United States.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... windows
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata -l logfile start

# Running: G:/prog/bf/root/HEAD/pgsql.build/Release/pg_regress/pg_regress --config-auth G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata
### Enabling WAL archiving for node "master"
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/002_archiving_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 5344
# Taking pg_basebackup my_backup from node "master"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/backup/my_backup -p 53709 --no-sync
# Backup finished
# Checking port 53710
# Found free port 53710
Name: standby
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_standby_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_standby_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_standby_data/archives
Connection string: port=53710 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/002_archiving_standby.log
# Initializing node "standby" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby"
### Starting node "standby"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_standby_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/002_archiving_standby.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 8024
ok 1 - check content from archives
### Stopping node "master" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_master_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "master"
### Stopping node "standby" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_002_archiving_standby_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby"


================== pgsql.build/src/test/recovery/tmp_check/log/regress_log_003_recovery_targets ===================
# Checking port 53544
# Found free port 53544
Name: master
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/archives
Connection string: port=53544 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_master.log
# Running: initdb -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "pgrunner".
This user must also own the server process.

The database cluster will be initialized with locale "English_United States.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... windows
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata -l logfile start

# Running: G:/prog/bf/root/HEAD/pgsql.build/Release/pg_regress/pg_regress --config-auth G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata
### Enabling WAL archiving for node "master"
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 11336
# Taking pg_basebackup my_backup from node "master"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/backup/my_backup -p 53544 --no-sync
# Backup finished
# Checking port 53545
# Found free port 53545
Name: standby_1
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_1_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_1_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_1_data/archives
Connection string: port=53545 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_1_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_1.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 7108
ok 1 - check standby content for immediate target
### Stopping node "standby_1" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_1_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
# Checking port 53546
# Found free port 53546
Name: standby_2
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_2_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_2_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_2_data/archives
Connection string: port=53546 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_2.log
# Initializing node "standby_2" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_2"
### Starting node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_2_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_2.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_2" is 12692
ok 2 - check standby content for XID
### Stopping node "standby_2" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_2_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_2"
# Checking port 53547
# Found free port 53547
Name: standby_3
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_3_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_3_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_3_data/archives
Connection string: port=53547 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_3.log
# Initializing node "standby_3" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_3"
### Starting node "standby_3"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_3_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_3.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_3" is 9320
ok 3 - check standby content for time
### Stopping node "standby_3" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_3_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_3"
# Checking port 53548
# Found free port 53548
Name: standby_4
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_4_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_4_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_4_data/archives
Connection string: port=53548 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_4.log
# Initializing node "standby_4" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_4"
### Starting node "standby_4"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_4_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_4.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_4" is 7668
ok 4 - check standby content for name
### Stopping node "standby_4" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_4_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_4"
# Checking port 53549
# Found free port 53549
Name: standby_5
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_5_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_5_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_5_data/archives
Connection string: port=53549 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_5.log
# Initializing node "standby_5" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_5"
### Starting node "standby_5"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_5_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_5.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_5" is 12944
ok 5 - check standby content for LSN
### Stopping node "standby_5" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_5_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_5"
# Checking port 53550
# Found free port 53550
Name: standby_6
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_6_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_6_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_6_data/archives
Connection string: port=53550 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_6.log
# Initializing node "standby_6" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_6"
### Starting node "standby_6"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_6_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_6.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_6" is 9096
ok 6 - check standby content for name + XID + time
### Stopping node "standby_6" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_6_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_6"
# Checking port 53551
# Found free port 53551
Name: standby_7
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_7_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_7_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_7_data/archives
Connection string: port=53551 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_7.log
# Initializing node "standby_7" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_7"
### Starting node "standby_7"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_7_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_7.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_7" is 5024
ok 7 - check standby content for time + name + XID
### Stopping node "standby_7" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_7_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_7"
# Checking port 53552
# Found free port 53552
Name: standby_8
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_8_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_8_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_8_data/archives
Connection string: port=53552 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_8.log
# Initializing node "standby_8" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_8"
### Starting node "standby_8"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_8_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_8.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_8" is 13584
ok 8 - check standby content for XID + time + name
### Stopping node "standby_8" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_8_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_8"
# Checking port 53553
# Found free port 53553
Name: standby_9
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_9_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_9_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_9_data/archives
Connection string: port=53553 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_9.log
# Initializing node "standby_9" from backup "my_backup" of node "master"
### Enabling WAL restore for node "standby_9"
### Starting node "standby_9"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_9_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/003_recovery_targets_standby_9.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_9" is 12900
ok 9 - check standby content for XID + time + name + LSN
### Stopping node "standby_9" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_standby_9_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_9"
### Stopping node "master" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_003_recovery_targets_master_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "master"


================== pgsql.build/src/test/recovery/tmp_check/log/regress_log_004_timeline_switch ===================
# Checking port 53266
# Found free port 53266
Name: master
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/archives
Connection string: port=53266 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_master.log
# Running: initdb -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "pgrunner".
This user must also own the server process.

The database cluster will be initialized with locale "English_United States.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... windows
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata -l logfile start

# Running: G:/prog/bf/root/HEAD/pgsql.build/Release/pg_regress/pg_regress --config-auth G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 11392
# Taking pg_basebackup my_backup from node "master"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/backup/my_backup -p 53266 --no-sync
# Backup finished
# Checking port 53267
# Found free port 53267
Name: standby_1
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_1_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_1_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_1_data/archives
Connection string: port=53267 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_1.log
# Initializing node "standby_1" from backup "my_backup" of node "master"
### Enabling streaming replication for node "standby_1"
### Starting node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_1_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_1.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_1" is 11528
# Checking port 53268
# Found free port 53268
Name: standby_2
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/archives
Connection string: port=53268 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_2.log
# Initializing node "standby_2" from backup "my_backup" of node "master"
### Enabling streaming replication for node "standby_2"
### Starting node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_2.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby_2" is 10940
Waiting for replication conn standby_1's replay_lsn to pass 0/3027498 on master
done
### Stopping node "master" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_master_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "master"
### Promoting node "standby_1"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_1_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_1.log promote
waiting for server to promote.... done
server promoted
### Restarting node "standby_2"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/004_timeline_switch_standby_2.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
# Postmaster PID for node "standby_2" is 8132
Waiting for replication conn standby_2's replay_lsn to pass 0/303A148 on standby_1
done
ok 1 - check content of standby 2
### Stopping node "standby_1" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_1_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_1"
### Stopping node "standby_2" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_004_timeline_switch_standby_2_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby_2"


================== pgsql.build/src/test/recovery/tmp_check/log/regress_log_005_replay_delay ===================
# Checking port 56907
# Found free port 56907
Name: master
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/archives
Connection string: port=56907 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/005_replay_delay_master.log
# Running: initdb -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "pgrunner".
This user must also own the server process.

The database cluster will be initialized with locale "English_United States.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... windows
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata -l logfile start

# Running: G:/prog/bf/root/HEAD/pgsql.build/Release/pg_regress/pg_regress --config-auth G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/005_replay_delay_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 11312
# Taking pg_basebackup my_backup from node "master"
# Running: pg_basebackup -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/backup/my_backup -p 56907 --no-sync
# Backup finished
# Checking port 56908
# Found free port 56908
Name: standby
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_standby_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_standby_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_standby_data/archives
Connection string: port=56908 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/005_replay_delay_standby.log
# Initializing node "standby" from backup "my_backup" of node "master"
### Enabling streaming replication for node "standby"
### Starting node "standby"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_standby_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/005_replay_delay_standby.log start
waiting for server to start.... done
server started
# Postmaster PID for node "standby" is 12940
ok 1 - standby applies WAL only after replication delay
### Stopping node "master" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_master_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "master"
### Stopping node "standby" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_005_replay_delay_standby_data/pgdata -m immediate stop
waiting for server to shut down.... done
server stopped
# No postmaster PID for node "standby"


================== pgsql.build/src/test/recovery/tmp_check/log/regress_log_006_logical_decoding ===================
# Checking port 51929
# Found free port 51929
Name: master
Data directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata
Backup directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/backup
Archive directory: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/archives
Connection string: port=51929 host=127.0.0.1
Log file: G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/006_logical_decoding_master.log
# Running: initdb -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata -A trust -N
The files belonging to this database system will be owned by user "pgrunner".
This user must also own the server process.

The database cluster will be initialized with locale "English_United States.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... windows
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok

Sync to disk skipped.
The data directory might become corrupt if the operating system crashes.

Success. You can now start the database server using:

    pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata -l logfile start

# Running: G:/prog/bf/root/HEAD/pgsql.build/Release/pg_regress/pg_regress --config-auth G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata
### Starting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/006_logical_decoding_master.log start
waiting for server to start.... done
server started
# Postmaster PID for node "master" is 10204
ok 1 - Decoding produced 12 rows inc BEGIN/COMMIT
### Restarting node "master"
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata -l G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/log/006_logical_decoding_master.log restart
waiting for server to shut down.... done
server stopped
waiting for server to start....The process cannot access the file because it is being used by another process.
 stopped waiting
pg_ctl: could not start server
Examine the log output.
Bail out!  system pg_ctl failed
### Stopping node "master" using mode immediate
# Running: pg_ctl -D G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata -m immediate stop
pg_ctl: PID file "G:/prog/bf/root/HEAD/pgsql.build/src/test/recovery/tmp_check/t_006_logical_decoding_master_data/pgdata/postmaster.pid" does not exist
Is server running?
Bail out!  system pg_ctl failed