FIPS Product: YES
FIPS Kernel: NO
FIPS Mode: NO
NSS DB directory: sql:/etc/ipsec.d
Initializing NSS
Opening NSS database "sql:/etc/ipsec.d" read-only
NSS initialized
NSS crypto library initialized
FIPS HMAC integrity support [enabled]
FIPS mode disabled for pluto daemon
FIPS HMAC integrity verification self-test FAILED
libcap-ng support [enabled]
Linux audit support [enabled]
Linux audit activated
Starting Pluto (Libreswan Version v3.28-827-gc9aa82b8a6-master-s2 XFRM(netkey) esp-hw-offload FORK PTHREAD_SETSCHEDPRIO NSS (IPsec profile) DNSSEC SYSTEMD_WATCHDOG FIPS_CHECK LABELED_IPSEC SECCOMP LIBCAP_NG LINUX_AUDIT XAUTH_PAM NETWORKMANAGER CURL(non-NSS)) pid:25034
core dump dir: /tmp
secrets file: /etc/ipsec.secrets
leak-detective disabled
NSS crypto [enabled]
XAUTH PAM support [enabled]
| libevent is using pluto's memory allocator
Initializing libevent in pthreads mode: headers: 2.1.8-stable (2010800); library: 2.1.8-stable (2010800)
| libevent_malloc: new ptr-libevent@0x55f21d0a9850 size 40
| libevent_malloc: new ptr-libevent@0x55f21d0aab00 size 40
| libevent_malloc: new ptr-libevent@0x55f21d0aab30 size 40
| creating event base
| libevent_malloc: new ptr-libevent@0x55f21d0aaac0 size 56
| libevent_malloc: new ptr-libevent@0x55f21d0aab60 size 664
| libevent_malloc: new ptr-libevent@0x55f21d0aae00 size 24
| libevent_malloc: new ptr-libevent@0x55f21d09c630 size 384
| libevent_malloc: new ptr-libevent@0x55f21d0aae20 size 16
| libevent_malloc: new ptr-libevent@0x55f21d0aae40 size 40
| libevent_malloc: new ptr-libevent@0x55f21d0aae70 size 48
| libevent_realloc: new ptr-libevent@0x55f21d02e370 size 256
| libevent_malloc: new ptr-libevent@0x55f21d0aaeb0 size 16
| libevent_free: release ptr-libevent@0x55f21d0aaac0
| libevent initialized
| libevent_realloc: new ptr-libevent@0x55f21d0aaed0 size 64
| global periodic timer EVENT_RESET_LOG_RATE_LIMIT enabled with interval of 3600 seconds
| init_nat_traversal() initialized with keep_alive=0s
NAT-Traversal support  [enabled]
| global one-shot timer EVENT_NAT_T_KEEPALIVE initialized
| global one-shot timer EVENT_FREE_ROOT_CERTS initialized
| global periodic timer EVENT_REINIT_SECRET enabled with interval of 3600 seconds
| global one-shot timer EVENT_REVIVE_CONNS initialized
| global periodic timer EVENT_PENDING_DDNS enabled with interval of 60 seconds
| global periodic timer EVENT_PENDING_PHASE2 enabled with interval of 120 seconds
Encryption algorithms:
  AES_CCM_16              IKEv1:     ESP     IKEv2:     ESP     FIPS  {256,192,*128}  aes_ccm, aes_ccm_c
  AES_CCM_12              IKEv1:     ESP     IKEv2:     ESP     FIPS  {256,192,*128}  aes_ccm_b
  AES_CCM_8               IKEv1:     ESP     IKEv2:     ESP     FIPS  {256,192,*128}  aes_ccm_a
  3DES_CBC                IKEv1: IKE ESP     IKEv2: IKE ESP     FIPS  [*192]  3des
  CAMELLIA_CTR            IKEv1:     ESP     IKEv2:     ESP           {256,192,*128}
  CAMELLIA_CBC            IKEv1: IKE ESP     IKEv2: IKE ESP           {256,192,*128}  camellia
  AES_GCM_16              IKEv1:     ESP     IKEv2: IKE ESP     FIPS  {256,192,*128}  aes_gcm, aes_gcm_c
  AES_GCM_12              IKEv1:     ESP     IKEv2: IKE ESP     FIPS  {256,192,*128}  aes_gcm_b
  AES_GCM_8               IKEv1:     ESP     IKEv2: IKE ESP     FIPS  {256,192,*128}  aes_gcm_a
  AES_CTR                 IKEv1: IKE ESP     IKEv2: IKE ESP     FIPS  {256,192,*128}  aesctr
  AES_CBC                 IKEv1: IKE ESP     IKEv2: IKE ESP     FIPS  {256,192,*128}  aes
  SERPENT_CBC             IKEv1: IKE ESP     IKEv2: IKE ESP           {256,192,*128}  serpent
  TWOFISH_CBC             IKEv1: IKE ESP     IKEv2: IKE ESP           {256,192,*128}  twofish
  TWOFISH_SSH             IKEv1: IKE         IKEv2: IKE ESP           {256,192,*128}  twofish_cbc_ssh
  NULL_AUTH_AES_GMAC      IKEv1:     ESP     IKEv2:     ESP     FIPS  {256,192,*128}  aes_gmac
  NULL                    IKEv1:     ESP     IKEv2:     ESP           []
  CHACHA20_POLY1305       IKEv1:             IKEv2: IKE ESP           [*256]  chacha20poly1305
Hash algorithms:
  MD5                     IKEv1: IKE         IKEv2:                 
  SHA1                    IKEv1: IKE         IKEv2:             FIPS  sha
  SHA2_256                IKEv1: IKE         IKEv2:             FIPS  sha2, sha256
  SHA2_384                IKEv1: IKE         IKEv2:             FIPS  sha384
  SHA2_512                IKEv1: IKE         IKEv2:             FIPS  sha512
PRF algorithms:
  HMAC_MD5                IKEv1: IKE         IKEv2: IKE               md5
  HMAC_SHA1               IKEv1: IKE         IKEv2: IKE         FIPS  sha, sha1
  HMAC_SHA2_256           IKEv1: IKE         IKEv2: IKE         FIPS  sha2, sha256, sha2_256
  HMAC_SHA2_384           IKEv1: IKE         IKEv2: IKE         FIPS  sha384, sha2_384
  HMAC_SHA2_512           IKEv1: IKE         IKEv2: IKE         FIPS  sha512, sha2_512
  AES_XCBC                IKEv1:             IKEv2: IKE               aes128_xcbc
Integrity algorithms:
  HMAC_MD5_96             IKEv1: IKE ESP AH  IKEv2: IKE ESP AH        md5, hmac_md5
  HMAC_SHA1_96            IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha, sha1, sha1_96, hmac_sha1
  HMAC_SHA2_512_256       IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha512, sha2_512, sha2_512_256, hmac_sha2_512
  HMAC_SHA2_384_192       IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha384, sha2_384, sha2_384_192, hmac_sha2_384
  HMAC_SHA2_256_128       IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha2, sha256, sha2_256, sha2_256_128, hmac_sha2_256
  HMAC_SHA2_256_TRUNCBUG  IKEv1:     ESP AH  IKEv2:         AH      
  AES_XCBC_96             IKEv1:     ESP AH  IKEv2: IKE ESP AH        aes_xcbc, aes128_xcbc, aes128_xcbc_96
  AES_CMAC_96             IKEv1:     ESP AH  IKEv2:     ESP AH  FIPS  aes_cmac
  NONE                    IKEv1:     ESP     IKEv2: IKE ESP     FIPS  null
DH algorithms:
  NONE                    IKEv1:             IKEv2: IKE ESP AH  FIPS  null, dh0
  MODP1536                IKEv1: IKE ESP AH  IKEv2: IKE ESP AH        dh5
  MODP2048                IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh14
  MODP3072                IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh15
  MODP4096                IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh16
  MODP6144                IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh17
  MODP8192                IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh18
  DH19                    IKEv1: IKE         IKEv2: IKE ESP AH  FIPS  ecp_256, ecp256
  DH20                    IKEv1: IKE         IKEv2: IKE ESP AH  FIPS  ecp_384, ecp384
  DH21                    IKEv1: IKE         IKEv2: IKE ESP AH  FIPS  ecp_521, ecp521
  DH31                    IKEv1: IKE         IKEv2: IKE ESP AH        curve25519
testing CAMELLIA_CBC:
  Camellia: 16 bytes with 128-bit key
  Camellia: 16 bytes with 128-bit key
  Camellia: 16 bytes with 256-bit key
  Camellia: 16 bytes with 256-bit key
testing AES_GCM_16:
  empty string
  one block
  two blocks
  two blocks with associated data
testing AES_CTR:
  Encrypting 16 octets using AES-CTR with 128-bit key
  Encrypting 32 octets using AES-CTR with 128-bit key
  Encrypting 36 octets using AES-CTR with 128-bit key
  Encrypting 16 octets using AES-CTR with 192-bit key
  Encrypting 32 octets using AES-CTR with 192-bit key
  Encrypting 36 octets using AES-CTR with 192-bit key
  Encrypting 16 octets using AES-CTR with 256-bit key
  Encrypting 32 octets using AES-CTR with 256-bit key
  Encrypting 36 octets using AES-CTR with 256-bit key
testing AES_CBC:
  Encrypting 16 bytes (1 block) using AES-CBC with 128-bit key
  Encrypting 32 bytes (2 blocks) using AES-CBC with 128-bit key
  Encrypting 48 bytes (3 blocks) using AES-CBC with 128-bit key
  Encrypting 64 bytes (4 blocks) using AES-CBC with 128-bit key
testing AES_XCBC:
  RFC 3566 Test Case #1: AES-XCBC-MAC-96 with 0-byte input
  RFC 3566 Test Case #2: AES-XCBC-MAC-96 with 3-byte input
  RFC 3566 Test Case #3: AES-XCBC-MAC-96 with 16-byte input
  RFC 3566 Test Case #4: AES-XCBC-MAC-96 with 20-byte input
  RFC 3566 Test Case #5: AES-XCBC-MAC-96 with 32-byte input
  RFC 3566 Test Case #6: AES-XCBC-MAC-96 with 34-byte input
  RFC 3566 Test Case #7: AES-XCBC-MAC-96 with 1000-byte input
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 16)
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 10)
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 18)
testing HMAC_MD5:
  RFC 2104: MD5_HMAC test 1
  RFC 2104: MD5_HMAC test 2
  RFC 2104: MD5_HMAC test 3
8 CPU cores online
starting up 7 crypto helpers
started thread for crypto helper 0
started thread for crypto helper 1
started thread for crypto helper 2
started thread for crypto helper 3
started thread for crypto helper 4
started thread for crypto helper 5
started thread for crypto helper 6
| checking IKEv1 state table
|   MAIN_R0: category: half-open IKE SA flags: 0:
|     -> MAIN_R1 EVENT_SO_DISCARD
|   MAIN_I1: category: half-open IKE SA flags: 0:
|     -> MAIN_I2 EVENT_RETRANSMIT
|   MAIN_R1: category: open IKE SA flags: 200:
|     -> MAIN_R2 EVENT_RETRANSMIT
|     -> UNDEFINED EVENT_RETRANSMIT
|     -> UNDEFINED EVENT_RETRANSMIT
|   MAIN_I2: category: open IKE SA flags: 0:
|     -> MAIN_I3 EVENT_RETRANSMIT
|     -> UNDEFINED EVENT_RETRANSMIT
|     -> UNDEFINED EVENT_RETRANSMIT
|   MAIN_R2: category: open IKE SA flags: 0:
|     -> MAIN_R3 EVENT_SA_REPLACE
|     -> MAIN_R3 EVENT_SA_REPLACE
|     -> UNDEFINED EVENT_SA_REPLACE
|   MAIN_I3: category: open IKE SA flags: 0:
|     -> MAIN_I4 EVENT_SA_REPLACE
|     -> MAIN_I4 EVENT_SA_REPLACE
|     -> UNDEFINED EVENT_SA_REPLACE
|   MAIN_R3: category: established IKE SA flags: 200:
|     -> UNDEFINED EVENT_NULL
|   MAIN_I4: category: established IKE SA flags: 0:
|     -> UNDEFINED EVENT_NULL
|   AGGR_R0: category: half-open IKE SA flags: 0:
|     -> AGGR_R1 EVENT_SO_DISCARD
|   AGGR_I1: category: half-open IKE SA flags: 0:
|     -> AGGR_I2 EVENT_SA_REPLACE
|     -> AGGR_I2 EVENT_SA_REPLACE
| starting up helper thread 3
|   AGGR_R1: category: open IKE SA flags: 200:
| starting up helper thread 2
| status value returned by setting the priority of this thread (crypto helper 2) 22
| crypto helper 2 waiting (nothing to do)
|     -> AGGR_R2 EVENT_SA_REPLACE
|     -> AGGR_R2 EVENT_SA_REPLACE
| starting up helper thread 6
| status value returned by setting the priority of this thread (crypto helper 3) 22
| crypto helper 3 waiting (nothing to do)
| starting up helper thread 0
| starting up helper thread 1
| status value returned by setting the priority of this thread (crypto helper 6) 22
| crypto helper 6 waiting (nothing to do)
|   AGGR_I2: category: established IKE SA flags: 200:
| status value returned by setting the priority of this thread (crypto helper 1) 22
|     -> UNDEFINED EVENT_NULL
|   AGGR_R2: category: established IKE SA flags: 0:
|     -> UNDEFINED EVENT_NULL
| starting up helper thread 4
| starting up helper thread 5
| crypto helper 1 waiting (nothing to do)
| status value returned by setting the priority of this thread (crypto helper 5) 22
| status value returned by setting the priority of this thread (crypto helper 4) 22
|   QUICK_R0: category: established CHILD SA flags: 0:
|     -> QUICK_R1 EVENT_RETRANSMIT
|   QUICK_I1: category: established CHILD SA flags: 0:
|     -> QUICK_I2 EVENT_SA_REPLACE
|   QUICK_R1: category: established CHILD SA flags: 0:
|     -> QUICK_R2 EVENT_SA_REPLACE
|   QUICK_I2: category: established CHILD SA flags: 200:
|     -> UNDEFINED EVENT_NULL
|   QUICK_R2: category: established CHILD SA flags: 0:
|     -> UNDEFINED EVENT_NULL
|   INFO: category: informational flags: 0:
|     -> UNDEFINED EVENT_NULL
|   INFO_PROTECTED: category: informational flags: 0:
|     -> UNDEFINED EVENT_NULL
|   XAUTH_R0: category: established IKE SA flags: 0:
|     -> XAUTH_R1 EVENT_NULL
|   XAUTH_R1: category: established IKE SA flags: 0:
|     -> MAIN_R3 EVENT_SA_REPLACE
|   MODE_CFG_R0: category: informational flags: 0:
|     -> MODE_CFG_R1 EVENT_SA_REPLACE
|   MODE_CFG_R1: category: established IKE SA flags: 0:
|     -> MODE_CFG_R2 EVENT_SA_REPLACE
|   MODE_CFG_R2: category: established IKE SA flags: 0:
|     -> UNDEFINED EVENT_NULL
|   MODE_CFG_I1: category: established IKE SA flags: 0:
|     -> MAIN_I4 EVENT_SA_REPLACE
|   XAUTH_I0: category: established IKE SA flags: 0:
|     -> XAUTH_I1 EVENT_RETRANSMIT
|   XAUTH_I1: category: established IKE SA flags: 0:
|     -> MAIN_I4 EVENT_RETRANSMIT
| checking IKEv2 state table
|   PARENT_I0: category: ignore flags: 0:
|     -> PARENT_I1 EVENT_RETRANSMIT send-request (initiate IKE_SA_INIT)
|   PARENT_I1: category: half-open IKE SA flags: 0:
|     -> PARENT_I1 EVENT_RETAIN send-request (Initiator: process SA_INIT reply notification)
|     -> PARENT_I2 EVENT_RETRANSMIT send-request (Initiator: process IKE_SA_INIT reply, initiate IKE_AUTH)
|   PARENT_I2: category: open IKE SA flags: 0:
|     -> PARENT_I2 EVENT_NULL (Initiator: process INVALID_SYNTAX AUTH notification)
|     -> PARENT_I2 EVENT_NULL (Initiator: process AUTHENTICATION_FAILED AUTH notification)
|     -> PARENT_I2 EVENT_NULL (Initiator: process UNSUPPORTED_CRITICAL_PAYLOAD AUTH notification)
|     -> V2_IPSEC_I EVENT_SA_REPLACE (Initiator: process IKE_AUTH response)
|     -> PARENT_I2 EVENT_NULL (IKE SA: process IKE_AUTH response containing unknown notification)
|   PARENT_I3: category: established IKE SA flags: 0:
|     -> PARENT_I3 EVENT_RETAIN (I3: Informational Request)
|     -> PARENT_I3 EVENT_RETAIN (I3: Informational Response)
|     -> PARENT_I3 EVENT_RETAIN (I3: INFORMATIONAL Request)
|     -> PARENT_I3 EVENT_RETAIN (I3: INFORMATIONAL Response)
|   PARENT_R0: category: half-open IKE SA flags: 0:
|     -> PARENT_R1 EVENT_SO_DISCARD send-request (Respond to IKE_SA_INIT)
|   PARENT_R1: category: half-open IKE SA flags: 0:
|     -> PARENT_R1 EVENT_SA_REPLACE send-request (Responder: process IKE_AUTH request (no SKEYSEED))
|     -> V2_IPSEC_R EVENT_SA_REPLACE send-request (Responder: process IKE_AUTH request)
|   PARENT_R2: category: established IKE SA flags: 0:
|     -> PARENT_R2 EVENT_RETAIN (R2: process Informational Request)
|     -> PARENT_R2 EVENT_RETAIN (R2: process Informational Response)
|     -> PARENT_R2 EVENT_RETAIN (R2: process INFORMATIONAL Request)
|     -> PARENT_R2 EVENT_RETAIN (R2: process INFORMATIONAL Response)
|   V2_CREATE_I0: category: established IKE SA flags: 0:
|     -> V2_CREATE_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IPsec SA)
|   V2_CREATE_I: category: established IKE SA flags: 0:
|     -> V2_IPSEC_I EVENT_SA_REPLACE (Process CREATE_CHILD_SA IPsec SA Response)
|   V2_REKEY_IKE_I0: category: established IKE SA flags: 0:
|     -> V2_REKEY_IKE_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IKE Rekey)
|   V2_REKEY_IKE_I: category: established IKE SA flags: 0:
|     -> PARENT_I3 EVENT_SA_REPLACE (Process CREATE_CHILD_SA IKE Rekey Response)
|   V2_REKEY_CHILD_I0: category: established IKE SA flags: 0:
|     -> V2_REKEY_CHILD_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IPsec Rekey SA)
|   V2_REKEY_CHILD_I: category: established IKE SA flags: 0: <none>
|   V2_CREATE_R: category: established IKE SA flags: 0:
|     -> V2_IPSEC_R EVENT_SA_REPLACE send-request (Respond to CREATE_CHILD_SA IPsec SA Request)
|   V2_REKEY_IKE_R: category: established IKE SA flags: 0:
|     -> PARENT_R2 EVENT_SA_REPLACE send-request (Respond to CREATE_CHILD_SA IKE Rekey)
|   V2_REKEY_CHILD_R: category: established IKE SA flags: 0: <none>
|   V2_IPSEC_I: category: established CHILD SA flags: 0: <none>
|   V2_IPSEC_R: category: established CHILD SA flags: 0: <none>
|   IKESA_DEL: category: established IKE SA flags: 0:
|     -> IKESA_DEL EVENT_RETAIN (IKE_SA_DEL: process INFORMATIONAL)
|   CHILDSA_DEL: category: informational flags: 0: <none>
Using Linux XFRM/NETKEY IPsec interface code on 5.2.11+
| Hard-wiring algorithms
| adding AES_CCM_16 to kernel algorithm db
| adding AES_CCM_12 to kernel algorithm db
| adding AES_CCM_8 to kernel algorithm db
| adding 3DES_CBC to kernel algorithm db
| adding CAMELLIA_CBC to kernel algorithm db
| adding AES_GCM_16 to kernel algorithm db
| adding AES_GCM_12 to kernel algorithm db
| adding AES_GCM_8 to kernel algorithm db
| adding AES_CTR to kernel algorithm db
| adding AES_CBC to kernel algorithm db
| adding SERPENT_CBC to kernel algorithm db
| adding TWOFISH_CBC to kernel algorithm db
| adding NULL_AUTH_AES_GMAC to kernel algorithm db
| adding NULL to kernel algorithm db
| adding CHACHA20_POLY1305 to kernel algorithm db
| adding HMAC_MD5_96 to kernel algorithm db
| adding HMAC_SHA1_96 to kernel algorithm db
| adding HMAC_SHA2_512_256 to kernel algorithm db
| adding HMAC_SHA2_384_192 to kernel algorithm db
| adding HMAC_SHA2_256_128 to kernel algorithm db
| adding HMAC_SHA2_256_TRUNCBUG to kernel algorithm db
| adding AES_XCBC_96 to kernel algorithm db
| adding AES_CMAC_96 to kernel algorithm db
| adding NONE to kernel algorithm db
| net.ipv6.conf.all.disable_ipv6=1 ignore ipv6 holes
| global periodic timer EVENT_SHUNT_SCAN enabled with interval of 20 seconds
| setup kernel fd callback
| add_fd_read_event_handler: new KERNEL_XRM_FD-pe@0x55f21d0b5280
| libevent_malloc: new ptr-libevent@0x55f21d0bc650 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0b04f0 size 16
| add_fd_read_event_handler: new KERNEL_ROUTE_FD-pe@0x55f21d0afb20
| libevent_malloc: new ptr-libevent@0x55f21d0bc6e0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0aaf20 size 16
| global one-shot timer EVENT_CHECK_CRLS initialized
selinux support is enabled.
systemd watchdog not enabled - not sending watchdog keepalives
| unbound context created - setting debug level to 5
| /etc/hosts lookups activated
| /etc/resolv.conf usage activated
| outgoing-port-avoid set 0-65535
| outgoing-port-permit set 32768-60999
| Loading dnssec root key from:/var/lib/unbound/root.key
| No additional dnssec trust anchors defined via dnssec-trusted= option
| Setting up events, loop start
| add_fd_read_event_handler: new PLUTO_CTL_FD-pe@0x55f21d0af870
| libevent_malloc: new ptr-libevent@0x55f21d0c6c50 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c6ce0 size 16
| libevent_realloc: new ptr-libevent@0x55f21d02c6c0 size 256
| libevent_malloc: new ptr-libevent@0x55f21d0c6d00 size 8
| libevent_realloc: new ptr-libevent@0x55f21d0bba50 size 144
| libevent_malloc: new ptr-libevent@0x55f21d0c6d20 size 152
| libevent_malloc: new ptr-libevent@0x55f21d0c6dc0 size 16
| signal event handler PLUTO_SIGCHLD installed
| libevent_malloc: new ptr-libevent@0x55f21d0c6de0 size 8
| libevent_malloc: new ptr-libevent@0x55f21d0c6e00 size 152
| signal event handler PLUTO_SIGTERM installed
| libevent_malloc: new ptr-libevent@0x55f21d0c6ea0 size 8
| libevent_malloc: new ptr-libevent@0x55f21d0c6ec0 size 152
| signal event handler PLUTO_SIGHUP installed
| libevent_malloc: new ptr-libevent@0x55f21d0c6f60 size 8
| libevent_realloc: release ptr-libevent@0x55f21d0bba50
| libevent_realloc: new ptr-libevent@0x55f21d0c6f80 size 256
| libevent_malloc: new ptr-libevent@0x55f21d0bba50 size 152
| signal event handler PLUTO_SIGSYS installed
| created addconn helper (pid:25165) using fork+execve
| forked child 25165
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| pluto_sd: executing action action: reloading(4), status 0
listening for IKE messages
| Inspecting interface lo 
| found lo with address 127.0.0.1
| Inspecting interface eth0 
| found eth0 with address 192.0.2.254
| Inspecting interface eth1 
| found eth1 with address 192.1.2.23
Kernel supports NIC esp-hw-offload
adding interface eth1/eth1 (esp-hw-offload not supported by kernel) 192.1.2.23:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface eth1/eth1 192.1.2.23:4500
adding interface eth0/eth0 (esp-hw-offload not supported by kernel) 192.0.2.254:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface eth0/eth0 192.0.2.254:4500
adding interface lo/lo (esp-hw-offload not supported by kernel) 127.0.0.1:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface lo/lo 127.0.0.1:4500
| no interfaces to sort
| FOR_EACH_UNORIENTED_CONNECTION_... in check_orientations
| add_fd_read_event_handler: new ethX-pe@0x55f21d0b05f0
| libevent_malloc: new ptr-libevent@0x55f21d0c72f0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c7380 size 16
| setup callback for interface lo 127.0.0.1:4500 fd 22
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c73a0
| libevent_malloc: new ptr-libevent@0x55f21d0c73e0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c7470 size 16
| setup callback for interface lo 127.0.0.1:500 fd 21
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7490
| libevent_malloc: new ptr-libevent@0x55f21d0c74d0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c7560 size 16
| setup callback for interface eth0 192.0.2.254:4500 fd 20
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7580
| libevent_malloc: new ptr-libevent@0x55f21d0c75c0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c7650 size 16
| setup callback for interface eth0 192.0.2.254:500 fd 19
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7670
| libevent_malloc: new ptr-libevent@0x55f21d0c76b0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c7740 size 16
| setup callback for interface eth1 192.1.2.23:4500 fd 18
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7760
| libevent_malloc: new ptr-libevent@0x55f21d0c77a0 size 128
| libevent_malloc: new ptr-libevent@0x55f21d0c7830 size 16
| setup callback for interface eth1 192.1.2.23:500 fd 17
| certs and keys locked by 'free_preshared_secrets'
| certs and keys unlocked by 'free_preshared_secrets'
loading secrets from "/etc/ipsec.secrets"
| saving Modulus
| saving PublicExponent
| ignoring PrivateExponent
| ignoring Prime1
| ignoring Prime2
| ignoring Exponent1
| ignoring Exponent2
| ignoring Coefficient
| ignoring CKAIDNSS
| computed rsa CKAID  61 55 99 73  d3 ac ef 7d  3a 37 0e 3e  82 ad 92 c1
| computed rsa CKAID  8a 82 25 f1
loaded private key for keyid: PKK_RSA:AQO9bJbr3
| certs and keys locked by 'process_secret'
| certs and keys unlocked by 'process_secret'
| pluto_sd: executing action action: ready(5), status 0
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.519 milliseconds in whack
| crypto helper 5 waiting (nothing to do)
| status value returned by setting the priority of this thread (crypto helper 0) 22
| crypto helper 0 waiting (nothing to do)
| crypto helper 4 waiting (nothing to do)
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection clear with policy AUTH_NEVER+GROUP+PASS+NEVER_NEGOTIATE
| counting wild cards for (none) is 15
| counting wild cards for (none) is 15
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@(nil): none
| new hp@0x55f21d093d70
added connection description "clear"
| ike_life: 0s; ipsec_life: 0s; rekey_margin: 0s; rekey_fuzz: 0%; keyingtries: 0; replay_window: 0; policy: AUTH_NEVER+GROUP+PASS+NEVER_NEGOTIATE
| 192.1.2.23---192.1.2.254...%group
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0939 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection clear-or-private with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55f21d093d70: clear
added connection description "clear-or-private"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| 192.1.2.23[ID_NULL]---192.1.2.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.157 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection private-or-clear with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55f21d093d70: clear-or-private
added connection description "private-or-clear"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| 192.1.2.23[ID_NULL]---192.1.2.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.143 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection private with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55f21d093d70: private-or-clear
added connection description "private"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP
| 192.1.2.23[ID_NULL]---192.1.2.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.137 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection block with policy AUTH_NEVER+GROUP+REJECT+NEVER_NEGOTIATE
| counting wild cards for (none) is 15
| counting wild cards for (none) is 15
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55f21d093d70: private
added connection description "block"
| ike_life: 0s; ipsec_life: 0s; rekey_margin: 0s; rekey_fuzz: 0%; keyingtries: 0; replay_window: 0; policy: AUTH_NEVER+GROUP+REJECT+NEVER_NEGOTIATE
| 192.1.2.23---192.1.2.254...%group
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0541 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| pluto_sd: executing action action: reloading(4), status 0
listening for IKE messages
| Inspecting interface lo 
| found lo with address 127.0.0.1
| Inspecting interface eth0 
| found eth0 with address 192.0.2.254
| Inspecting interface eth1 
| found eth1 with address 192.1.2.23
| no interfaces to sort
| libevent_free: release ptr-libevent@0x55f21d0c72f0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0b05f0
| add_fd_read_event_handler: new ethX-pe@0x55f21d0b05f0
| libevent_malloc: new ptr-libevent@0x55f21d0c72f0 size 128
| setup callback for interface lo 127.0.0.1:4500 fd 22
| libevent_free: release ptr-libevent@0x55f21d0c73e0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c73a0
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c73a0
| libevent_malloc: new ptr-libevent@0x55f21d0c73e0 size 128
| setup callback for interface lo 127.0.0.1:500 fd 21
| libevent_free: release ptr-libevent@0x55f21d0c74d0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7490
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7490
| libevent_malloc: new ptr-libevent@0x55f21d0c74d0 size 128
| setup callback for interface eth0 192.0.2.254:4500 fd 20
| libevent_free: release ptr-libevent@0x55f21d0c75c0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7580
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7580
| libevent_malloc: new ptr-libevent@0x55f21d0c75c0 size 128
| setup callback for interface eth0 192.0.2.254:500 fd 19
| libevent_free: release ptr-libevent@0x55f21d0c76b0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7670
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7670
| libevent_malloc: new ptr-libevent@0x55f21d0c76b0 size 128
| setup callback for interface eth1 192.1.2.23:4500 fd 18
| libevent_free: release ptr-libevent@0x55f21d0c77a0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7760
| add_fd_read_event_handler: new ethX-pe@0x55f21d0c7760
| libevent_malloc: new ptr-libevent@0x55f21d0c77a0 size 128
| setup callback for interface eth1 192.1.2.23:500 fd 17
| certs and keys locked by 'free_preshared_secrets'
forgetting secrets
| certs and keys unlocked by 'free_preshared_secrets'
loading secrets from "/etc/ipsec.secrets"
| saving Modulus
| saving PublicExponent
| ignoring PrivateExponent
| ignoring Prime1
| ignoring Prime2
| ignoring Exponent1
| ignoring Exponent2
| ignoring Coefficient
| ignoring CKAIDNSS
| computed rsa CKAID  61 55 99 73  d3 ac ef 7d  3a 37 0e 3e  82 ad 92 c1
| computed rsa CKAID  8a 82 25 f1
loaded private key for keyid: PKK_RSA:AQO9bJbr3
| certs and keys locked by 'process_secret'
| certs and keys unlocked by 'process_secret'
loading group "/etc/ipsec.d/policies/block"
loading group "/etc/ipsec.d/policies/private"
loading group "/etc/ipsec.d/policies/private-or-clear"
loading group "/etc/ipsec.d/policies/clear-or-private"
loading group "/etc/ipsec.d/policies/clear"
| 192.1.2.23/32:0->192.1.2.254/32:0 0 sport 0 dport 0 clear
| 192.1.2.23/32:0->192.1.3.254/32:0 0 sport 0 dport 0 clear
| 192.1.2.23/32:0->192.1.3.253/32:0 0 sport 0 dport 0 clear
| 192.1.2.23/32:0->192.1.2.253/32:0 0 sport 0 dport 0 clear
| 192.1.2.23/32:0->192.1.3.209/32:0 0 sport 0 dport 0 clear-or-private
| 192.1.2.23/32:0->192.0.2.0/24:0 0 sport 0 dport 0 private
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| pluto_sd: executing action action: ready(5), status 0
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.446 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| start processing: connection "clear" (in whack_route_connection() at rcv_whack.c:106)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "clear" (in route_group() at foodgroups.c:425)
| start processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in route_group() at foodgroups.c:425)
| could_route called for clear#192.1.2.254/32 (kind=CK_INSTANCE)
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: clear#192.1.2.254/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'clear#192.1.2.254/32' to 'add' for rt_kind 'prospective erouted' using protoports 192.1.2.23/32:0 --0->- 192.1.2.254/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.2.254/32:0
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare 
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT
| popen cmd is 1016 chars long
| cmd(   0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25:
| cmd(  80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA:
| cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P:
| cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=:
| cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V:
| cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| running updown command "ipsec _updown" for verb route 
| command executing route-host
| executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0
| popen cmd is 1014 chars long
| cmd(   0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/:
| cmd(  80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL:
| cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.:
| cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC:
| cmd( 320):OL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE:
| cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2:
| cmd( 480):.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR:
| cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO:
| cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC:
| cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE:
| cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0:
| cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI:
| cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| suspend processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in route_group() at foodgroups.c:429)
| start processing: connection "clear" (in route_group() at foodgroups.c:429)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "clear" (in route_group() at foodgroups.c:425)
| start processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in route_group() at foodgroups.c:425)
| could_route called for clear#192.1.3.254/32 (kind=CK_INSTANCE)
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.3.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.3.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: clear#192.1.3.254/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'clear#192.1.3.254/32' to 'add' for rt_kind 'prospective erouted' using protoports 192.1.2.23/32:0 --0->- 192.1.3.254/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.3.254/32:0
| priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare 
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT
| popen cmd is 1016 chars long
| cmd(   0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25:
| cmd(  80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA:
| cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P:
| cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=:
| cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V:
| cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| running updown command "ipsec _updown" for verb route 
| command executing route-host
| executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0
| popen cmd is 1014 chars long
| cmd(   0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/:
| cmd(  80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL:
| cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.:
| cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC:
| cmd( 320):OL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE:
| cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3:
| cmd( 480):.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR:
| cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO:
| cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC:
| cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE:
| cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0:
| cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI:
| cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| suspend processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in route_group() at foodgroups.c:429)
| start processing: connection "clear" (in route_group() at foodgroups.c:429)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "clear" (in route_group() at foodgroups.c:425)
| start processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in route_group() at foodgroups.c:425)
| could_route called for clear#192.1.3.253/32 (kind=CK_INSTANCE)
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.3.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.3.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: clear#192.1.3.253/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'clear#192.1.3.253/32' to 'add' for rt_kind 'prospective erouted' using protoports 192.1.2.23/32:0 --0->- 192.1.3.253/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.3.253/32:0
| priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare 
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT
| popen cmd is 1016 chars long
| cmd(   0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25:
| cmd(  80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA:
| cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P:
| cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=:
| cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V:
| cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| running updown command "ipsec _updown" for verb route 
| command executing route-host
| executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0
| popen cmd is 1014 chars long
| cmd(   0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/:
| cmd(  80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL:
| cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.:
| cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC:
| cmd( 320):OL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE:
| cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3:
| cmd( 480):.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR:
| cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO:
| cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC:
| cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE:
| cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0:
| cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI:
| cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| suspend processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in route_group() at foodgroups.c:429)
| start processing: connection "clear" (in route_group() at foodgroups.c:429)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "clear" (in route_group() at foodgroups.c:425)
| start processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in route_group() at foodgroups.c:425)
| could_route called for clear#192.1.2.253/32 (kind=CK_INSTANCE)
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: clear#192.1.2.253/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'clear#192.1.2.253/32' to 'add' for rt_kind 'prospective erouted' using protoports 192.1.2.23/32:0 --0->- 192.1.2.253/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.2.253/32:0
| priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare 
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT
| popen cmd is 1016 chars long
| cmd(   0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25:
| cmd(  80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA:
| cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P:
| cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=:
| cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V:
| cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| running updown command "ipsec _updown" for verb route 
| command executing route-host
| executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0
| popen cmd is 1014 chars long
| cmd(   0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/:
| cmd(  80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL:
| cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.:
| cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC:
| cmd( 320):OL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE:
| cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2:
| cmd( 480):.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR:
| cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO:
| cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC:
| cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE:
| cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0:
| cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI:
| cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| suspend processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in route_group() at foodgroups.c:429)
| start processing: connection "clear" (in route_group() at foodgroups.c:429)
| stop processing: connection "clear" (in whack_route_connection() at rcv_whack.c:116)
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 2.34 milliseconds in whack
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.0056 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00343 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00348 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00338 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.0033 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00342 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00346 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00336 milliseconds in signal handler PLUTO_SIGCHLD
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| start processing: connection "private-or-clear" (in whack_route_connection() at rcv_whack.c:106)
| stop processing: connection "private-or-clear" (in whack_route_connection() at rcv_whack.c:116)
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0357 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| start processing: connection "private" (in whack_route_connection() at rcv_whack.c:106)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "private" (in route_group() at foodgroups.c:425)
| start processing: connection "private#192.0.2.0/24" (in route_group() at foodgroups.c:425)
| could_route called for private#192.0.2.0/24 (kind=CK_TEMPLATE)
| FOR_EACH_CONNECTION_... in route_owner
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "private#192.0.2.0/24" unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn block mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "private#192.0.2.0/24" unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: private#192.0.2.0/24 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'private#192.0.2.0/24' to 'add' for rt_kind 'prospective erouted' using protoports 192.1.2.23/32:0 --0->- 192.0.2.0/24:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.0.2.0/24:0
| priority calculation of connection "private#192.0.2.0/24" is 0x1fdfe7
| IPsec Sa SPD priority set to 2088935
| priority calculation of connection "private#192.0.2.0/24" is 0x1fdfe7
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare 
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='private#192.0.2.0/24' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='ID_NULL' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16428' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='ID_NULL' PLUTO_PEER_CLIENT='192.0.2.0/24' PLUTO_PEER_CLIENT_NET='192.0.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUPINSTANCE+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP' PLUTO_CONN_KIND='CK_TEMPLATE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_
| popen cmd is 1076 chars long
| cmd(   0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='private#192.0.2.:
| cmd(  80):0/24' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='ID_NULL' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1:
| cmd( 240):.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCO:
| cmd( 320):L='0' PLUTO_SA_REQID='16428' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEE:
| cmd( 400):R_ID='ID_NULL' PLUTO_PEER_CLIENT='192.0.2.0/24' PLUTO_PEER_CLIENT_NET='192.0.2.0:
| cmd( 480):' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL:
| cmd( 560):='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY=':
| cmd( 640):AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUPINSTANCE+IKEV2_ALLOW+SAREF_TRACK+:
| cmd( 720):IKE_FRAG_ALLOW+ESN_NO+failureDROP' PLUTO_CONN_KIND='CK_TEMPLATE' PLUTO_CONN_ADDR:
| cmd( 800):FAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUT:
| cmd( 880):O_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT:
| cmd( 960):='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_I:
| cmd(1040):N=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| running updown command "ipsec _updown" for verb route 
| command executing route-host
| executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='private#192.0.2.0/24' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='ID_NULL' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16428' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='ID_NULL' PLUTO_PEER_CLIENT='192.0.2.0/24' PLUTO_PEER_CLIENT_NET='192.0.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUPINSTANCE+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP' PLUTO_CONN_KIND='CK_TEMPLATE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFAC
| popen cmd is 1074 chars long
| cmd(   0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='private#192.0.2.0/:
| cmd(  80):24' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL:
| cmd( 160):UTO_MY_ID='ID_NULL' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2:
| cmd( 240):.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL=:
| cmd( 320):'0' PLUTO_SA_REQID='16428' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_:
| cmd( 400):ID='ID_NULL' PLUTO_PEER_CLIENT='192.0.2.0/24' PLUTO_PEER_CLIENT_NET='192.0.2.0' :
| cmd( 480):PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL=':
| cmd( 560):0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AU:
| cmd( 640):THNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUPINSTANCE+IKEV2_ALLOW+SAREF_TRACK+IK:
| cmd( 720):E_FRAG_ALLOW+ESN_NO+failureDROP' PLUTO_CONN_KIND='CK_TEMPLATE' PLUTO_CONN_ADDRFA:
| cmd( 800):MILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_:
| cmd( 880):PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT=':
| cmd( 960):0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=:
| cmd(1040):0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| suspend processing: connection "private#192.0.2.0/24" (in route_group() at foodgroups.c:429)
| start processing: connection "private" (in route_group() at foodgroups.c:429)
| stop processing: connection "private" (in whack_route_connection() at rcv_whack.c:116)
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.609 milliseconds in whack
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00531 milliseconds in signal handler PLUTO_SIGCHLD
| processing signal PLUTO_SIGCHLD
| waitpid returned nothing left to do (all child processes are busy)
| spent 0.00344 milliseconds in signal handler PLUTO_SIGCHLD
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in conn_by_name
| start processing: connection "block" (in whack_route_connection() at rcv_whack.c:106)
| stop processing: connection "block" (in whack_route_connection() at rcv_whack.c:116)
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0366 milliseconds in whack
| processing signal PLUTO_SIGCHLD
| waitpid returned pid 25165 (exited with status 0)
| reaped addconn helper child (status 0)
| waitpid returned ECHILD (no child processes left)
| spent 0.0181 milliseconds in signal handler PLUTO_SIGCHLD
| processing global timer EVENT_SHUNT_SCAN
| expiring aged bare shunts from shunt table
| spent 0.00467 milliseconds in global timer EVENT_SHUNT_SCAN
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
| FOR_EACH_CONNECTION_... in show_connections_status
| FOR_EACH_CONNECTION_... in show_connections_status
| FOR_EACH_STATE_... in show_states_status (sort_states)
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 1.25 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:721)
shutting down
| processing: RESET whack log_fd (was fd@16) (in exit_pluto() at plutomain.c:1825)
| pluto_sd: executing action action: stopping(6), status 0
| certs and keys locked by 'free_preshared_secrets'
forgetting secrets
| certs and keys unlocked by 'free_preshared_secrets'
| start processing: connection "block" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| flush revival: connection 'block' wasn't on the list
| stop processing: connection "block" (in discard_connection() at connections.c:249)
| start processing: connection "private#192.0.2.0/24" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| shunt_eroute() called for connection 'private#192.0.2.0/24' to 'delete' for rt_kind 'unrouted' using protoports 192.1.2.23/32:0 --0->- 192.0.2.0/24:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.0.2.0/24:0
| priority calculation of connection "private#192.0.2.0/24" is 0x1fdfe7
| priority calculation of connection "private#192.0.2.0/24" is 0x1fdfe7
| FOR_EACH_CONNECTION_... in route_owner
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn private-or-clear mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn private#192.0.2.0/24 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "private#192.0.2.0/24" unrouted: NULL
| running updown command "ipsec _updown" for verb unroute 
| command executing unroute-host
| executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='private#192.0.2.0/24' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='ID_NULL' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16428' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='ID_NULL' PLUTO_PEER_CLIENT='192.0.2.0/24' PLUTO_PEER_CLIENT_NET='192.0.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUPINSTANCE+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP' PLUTO_CONN_KIND='CK_TEMPLATE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_
| popen cmd is 1076 chars long
| cmd(   0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='private#192.0.2.:
| cmd(  80):0/24' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='ID_NULL' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1:
| cmd( 240):.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCO:
| cmd( 320):L='0' PLUTO_SA_REQID='16428' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEE:
| cmd( 400):R_ID='ID_NULL' PLUTO_PEER_CLIENT='192.0.2.0/24' PLUTO_PEER_CLIENT_NET='192.0.2.0:
| cmd( 480):' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL:
| cmd( 560):='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY=':
| cmd( 640):AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUPINSTANCE+IKEV2_ALLOW+SAREF_TRACK+:
| cmd( 720):IKE_FRAG_ALLOW+ESN_NO+failureDROP' PLUTO_CONN_KIND='CK_TEMPLATE' PLUTO_CONN_ADDR:
| cmd( 800):FAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUT:
| cmd( 880):O_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT:
| cmd( 960):='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_I:
| cmd(1040):N=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
"private#192.0.2.0/24": unroute-host output: Error: Peer netns reference is invalid.
| flush revival: connection 'private#192.0.2.0/24' wasn't on the list
| stop processing: connection "private#192.0.2.0/24" (in discard_connection() at connections.c:249)
| start processing: connection "private" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| flush revival: connection 'private' wasn't on the list
| stop processing: connection "private" (in discard_connection() at connections.c:249)
| start processing: connection "private-or-clear" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| flush revival: connection 'private-or-clear' wasn't on the list
| stop processing: connection "private-or-clear" (in discard_connection() at connections.c:249)
| start processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in delete_connection() at connections.c:189)
"clear#192.1.2.253/32" 0.0.0.0: deleting connection "clear#192.1.2.253/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0}
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| shunt_eroute() called for connection 'clear#192.1.2.253/32' to 'delete' for rt_kind 'unrouted' using protoports 192.1.2.23/32:0 --0->- 192.1.2.253/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.2.253/32:0
| priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf
| priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.253/32" unrouted: NULL
| running updown command "ipsec _updown" for verb unroute 
| command executing unroute-host
| executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O
| popen cmd is 1018 chars long
| cmd(   0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25:
| cmd(  80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING:
| cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO:
| cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE:
| cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no':
| cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
| flush revival: connection 'clear#192.1.2.253/32' wasn't on the list
| stop processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in discard_connection() at connections.c:249)
| start processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in delete_connection() at connections.c:189)
"clear#192.1.3.253/32" 0.0.0.0: deleting connection "clear#192.1.3.253/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0}
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| shunt_eroute() called for connection 'clear#192.1.3.253/32' to 'delete' for rt_kind 'unrouted' using protoports 192.1.2.23/32:0 --0->- 192.1.3.253/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.3.253/32:0
| priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf
| priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.3.253/32" unrouted: NULL
| running updown command "ipsec _updown" for verb unroute 
| command executing unroute-host
| executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O
| popen cmd is 1018 chars long
| cmd(   0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25:
| cmd(  80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING:
| cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO:
| cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE:
| cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no':
| cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
| flush revival: connection 'clear#192.1.3.253/32' wasn't on the list
| stop processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in discard_connection() at connections.c:249)
| start processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in delete_connection() at connections.c:189)
"clear#192.1.3.254/32" 0.0.0.0: deleting connection "clear#192.1.3.254/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0}
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| shunt_eroute() called for connection 'clear#192.1.3.254/32' to 'delete' for rt_kind 'unrouted' using protoports 192.1.2.23/32:0 --0->- 192.1.3.254/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.3.254/32:0
| priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf
| priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.3.254/32" unrouted: NULL
| running updown command "ipsec _updown" for verb unroute 
| command executing unroute-host
| executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O
| popen cmd is 1018 chars long
| cmd(   0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25:
| cmd(  80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING:
| cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO:
| cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE:
| cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no':
| cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
| flush revival: connection 'clear#192.1.3.254/32' wasn't on the list
| stop processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in discard_connection() at connections.c:249)
| start processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in delete_connection() at connections.c:189)
"clear#192.1.2.254/32" 0.0.0.0: deleting connection "clear#192.1.2.254/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0}
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| shunt_eroute() called for connection 'clear#192.1.2.254/32' to 'delete' for rt_kind 'unrouted' using protoports 192.1.2.23/32:0 --0->- 192.1.2.254/32:0
| netlink_shunt_eroute for proto 0, and source 192.1.2.23/32:0 dest 192.1.2.254/32:0
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| FOR_EACH_CONNECTION_... in route_owner
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private#192.1.3.209/32 mark 0/00000000, 0/00000000
|  conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
|  conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" unrouted: NULL
| running updown command "ipsec _updown" for verb unroute 
| command executing unroute-host
| executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O
| popen cmd is 1018 chars long
| cmd(   0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25:
| cmd(  80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING:
| cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO:
| cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE:
| cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no':
| cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
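
The debug lines above show the full updown invocation being handed to popen() and, for logging, dumped in 80-character slices prefixed with the byte offset into the command string. The following is a minimal sketch of that kind of chunked dump, assuming a generic stderr debug logger rather than pluto's actual logging functions; the helper name dump_cmd_in_chunks is hypothetical.

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: dump a long popen() command in 80-character
 * slices, each prefixed with its byte offset, mirroring the
 * "cmd(   0):", "cmd(  80):" ... lines in the log above. */
static void dump_cmd_in_chunks(const char *cmd)
{
    const size_t chunk = 80;
    size_t len = strlen(cmd);

    fprintf(stderr, "| popen cmd is %zu chars long\n", len);
    for (size_t off = 0; off < len; off += chunk) {
        /* print at most 80 characters starting at 'off' */
        int n = (int)(len - off < chunk ? len - off : chunk);
        fprintf(stderr, "| cmd(%4zu):%.*s:\n", off, n, cmd + off);
    }
}

Applied to the 1018-character command shown above, this produces offsets 0, 80, ..., 960, matching the cmd(...) lines.
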
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid.
| flush revival: connection 'clear#192.1.2.254/32' wasn't on the list
| stop processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in discard_connection() at connections.c:249)
| start processing: connection "clear" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| flush revival: connection 'clear' wasn't on the list
| stop processing: connection "clear" (in discard_connection() at connections.c:249)
| start processing: connection "clear-or-private#192.1.3.209/32" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| flush revival: connection 'clear-or-private#192.1.3.209/32' wasn't on the list
| stop processing: connection "clear-or-private#192.1.3.209/32" (in discard_connection() at connections.c:249)
| start processing: connection "clear-or-private" (in delete_connection() at connections.c:189)
| Deleting states for connection - including all other IPsec SA's of this IKE SA
| pass 0
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| pass 1
| FOR_EACH_STATE_... in foreach_state_by_connection_func_delete
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| free hp@0x55f21d093d70
| flush revival: connection 'clear-or-private' wasn't on the list
| stop processing: connection "clear-or-private" (in discard_connection() at connections.c:249)
| crl fetch request list locked by 'free_crl_fetch'
| crl fetch request list unlocked by 'free_crl_fetch'
shutting down interface lo/lo 127.0.0.1:4500
shutting down interface lo/lo 127.0.0.1:500
shutting down interface eth0/eth0 192.0.2.254:4500
shutting down interface eth0/eth0 192.0.2.254:500
shutting down interface eth1/eth1 192.1.2.23:4500
shutting down interface eth1/eth1 192.1.2.23:500
| FOR_EACH_STATE_... in delete_states_dead_interfaces
| libevent_free: release ptr-libevent@0x55f21d0c72f0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0b05f0
| libevent_free: release ptr-libevent@0x55f21d0c73e0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c73a0
| libevent_free: release ptr-libevent@0x55f21d0c74d0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7490
| libevent_free: release ptr-libevent@0x55f21d0c75c0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7580
| libevent_free: release ptr-libevent@0x55f21d0c76b0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7670
| libevent_free: release ptr-libevent@0x55f21d0c77a0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0c7760
| FOR_EACH_UNORIENTED_CONNECTION_... in check_orientations
| libevent_free: release ptr-libevent@0x55f21d0c6c50
| free_event_entry: release EVENT_NULL-pe@0x55f21d0af870
| libevent_free: release ptr-libevent@0x55f21d0bc6e0
| free_event_entry: release EVENT_NULL-pe@0x55f21d0afb20
| libevent_free: release ptr-libevent@0x55f21d0bc650
| free_event_entry: release EVENT_NULL-pe@0x55f21d0b5280
| global timer EVENT_REINIT_SECRET uninitialized
| global timer EVENT_SHUNT_SCAN uninitialized
| global timer EVENT_PENDING_DDNS uninitialized
| global timer EVENT_PENDING_PHASE2 uninitialized
| global timer EVENT_CHECK_CRLS uninitialized
| global timer EVENT_REVIVE_CONNS uninitialized
| global timer EVENT_FREE_ROOT_CERTS uninitialized
| global timer EVENT_RESET_LOG_RATE_LIMIT uninitialized
| global timer EVENT_NAT_T_KEEPALIVE uninitialized
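
Each "global timer ... uninitialized" line reports a persistent libevent timer being torn down during shutdown. Below is a minimal sketch of the underlying libevent 2.x pattern, assuming a plain libevent program rather than pluto's own timer wrappers; the timer name EXAMPLE_TIMER and the mapping of "uninitialized" to event_del()/event_free() are illustrative assumptions.

#include <event2/event.h>
#include <stdio.h>

static void periodic_cb(evutil_socket_t fd, short what, void *arg)
{
    (void)fd; (void)what;
    printf("periodic timer '%s' fired\n", (const char *)arg);
}

int main(void)
{
    struct event_base *base = event_base_new();

    /* EV_PERSIST makes the timeout re-arm itself, like the
     * "global periodic timer ... enabled with interval" events. */
    struct event *t = event_new(base, -1, EV_PERSIST, periodic_cb,
                                (void *)"EXAMPLE_TIMER");
    struct timeval interval = { .tv_sec = 3600, .tv_usec = 0 };
    event_add(t, &interval);

    /* ... event_base_dispatch(base) would run here ... */

    /* Shutdown: stop the pending event, then free it and the base,
     * roughly what each "global timer ... uninitialized" line reports. */
    event_del(t);
    event_free(t);
    event_base_free(base);
    return 0;
}
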
| libevent_free: release ptr-libevent@0x55f21d0c6d20
| signal event handler PLUTO_SIGCHLD uninstalled
| libevent_free: release ptr-libevent@0x55f21d0c6e00
| signal event handler PLUTO_SIGTERM uninstalled
| libevent_free: release ptr-libevent@0x55f21d0c6ec0
| signal event handler PLUTO_SIGHUP uninstalled
| libevent_free: release ptr-libevent@0x55f21d0bba50
| signal event handler PLUTO_SIGSYS uninstalled
| releasing event base
| libevent_free: release ptr-libevent@0x55f21d0c6f80
| libevent_free: release ptr-libevent@0x55f21d09c630
| libevent_free: release ptr-libevent@0x55f21d0aae00
| libevent_free: release ptr-libevent@0x55f21d0aaed0
| libevent_free: release ptr-libevent@0x55f21d0aae20
| libevent_free: release ptr-libevent@0x55f21d0c6ce0
| libevent_free: release ptr-libevent@0x55f21d0c6dc0
| libevent_free: release ptr-libevent@0x55f21d0aaeb0
| libevent_free: release ptr-libevent@0x55f21d0b04f0
| libevent_free: release ptr-libevent@0x55f21d0aaf20
| libevent_free: release ptr-libevent@0x55f21d0c7830
| libevent_free: release ptr-libevent@0x55f21d0c7740
| libevent_free: release ptr-libevent@0x55f21d0c7650
| libevent_free: release ptr-libevent@0x55f21d0c7560
| libevent_free: release ptr-libevent@0x55f21d0c7470
| libevent_free: release ptr-libevent@0x55f21d0c7380
| libevent_free: release ptr-libevent@0x55f21d02e370
| libevent_free: release ptr-libevent@0x55f21d0c6ea0
| libevent_free: release ptr-libevent@0x55f21d0c6de0
| libevent_free: release ptr-libevent@0x55f21d0c6d00
| libevent_free: release ptr-libevent@0x55f21d0c6f60
| libevent_free: release ptr-libevent@0x55f21d02c6c0
| libevent_free: release ptr-libevent@0x55f21d0aae40
| libevent_free: release ptr-libevent@0x55f21d0aae70
| libevent_free: release ptr-libevent@0x55f21d0aab60
| releasing global libevent data
| libevent_free: release ptr-libevent@0x55f21d0a9850
| libevent_free: release ptr-libevent@0x55f21d0aab00
| libevent_free: release ptr-libevent@0x55f21d0aab30
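
The libevent_malloc/libevent_realloc/libevent_free lines throughout this shutdown come from libevent routing every internal allocation through replacement memory functions supplied by the daemon. libevent exposes this hook as event_set_mem_functions(); the logging wrappers below are an illustrative sketch, not pluto's actual allocator, and the hook is only available when libevent was built without EVENT__DISABLE_MM_REPLACEMENT.

#include <event2/event.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative wrappers: log each allocation and release the way the
 * "libevent_malloc/realloc/free" debug lines above do. */
static void *log_malloc(size_t size)
{
    void *p = malloc(size);
    fprintf(stderr, "| libevent_malloc: new ptr-libevent@%p size %zu\n", p, size);
    return p;
}

static void *log_realloc(void *old, size_t size)
{
    void *p = realloc(old, size);
    fprintf(stderr, "| libevent_realloc: new ptr-libevent@%p size %zu\n", p, size);
    return p;
}

static void log_free(void *p)
{
    fprintf(stderr, "| libevent_free: release ptr-libevent@%p\n", p);
    free(p);
}

int main(void)
{
    /* Must be installed before any other libevent call. */
    event_set_mem_functions(log_malloc, log_realloc, log_free);

    struct event_base *base = event_base_new();
    event_base_free(base);   /* every release shows up as a libevent_free line */
    return 0;
}

Once installed, every allocation and release libevent makes internally (event bases, events, buffers) is reported through these wrappers, which is why each object freed at shutdown appears above as a "release ptr-libevent@..." line.
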