FIPS Product: YES
FIPS Kernel: NO
FIPS Mode: NO
NSS DB directory: sql:/etc/ipsec.d
Initializing NSS
Opening NSS database "sql:/etc/ipsec.d" read-only
NSS initialized
NSS crypto library initialized
FIPS HMAC integrity support [enabled]
FIPS mode disabled for pluto daemon
FIPS HMAC integrity verification self-test FAILED
libcap-ng support [enabled]
Linux audit support [enabled]
Linux audit activated
Starting Pluto (Libreswan Version v3.28-685-gbfd5aef521-master-s2 XFRM(netkey) esp-hw-offload FORK PTHREAD_SETSCHEDPRIO NSS (IPsec profile) DNSSEC FIPS_CHECK LABELED_IPSEC SECCOMP LIBCAP_NG LINUX_AUDIT XAUTH_PAM NETWORKMANAGER CURL(non-NSS)) pid:29473
core dump dir: /tmp
secrets file: /etc/ipsec.secrets
leak-detective enabled
NSS crypto [enabled]
XAUTH PAM support [enabled]
| libevent is using pluto's memory allocator
Initializing libevent in pthreads mode: headers: 2.1.8-stable (2010800); library: 2.1.8-stable (2010800)
| libevent_malloc: new ptr-libevent@0x55c6ab8387f8 size 40
| libevent_malloc: new ptr-libevent@0x55c6ab838cd8 size 40
| libevent_malloc: new ptr-libevent@0x55c6ab838dd8 size 40
| creating event base
| libevent_malloc: new ptr-libevent@0x55c6ab8bbc48 size 56
| libevent_malloc: new ptr-libevent@0x55c6ab8680c8 size 664
| libevent_malloc: new ptr-libevent@0x55c6ab8bbcb8 size 24
| libevent_malloc: new ptr-libevent@0x55c6ab8bbd08 size 384
| libevent_malloc: new ptr-libevent@0x55c6ab8bbc08 size 16
| libevent_malloc: new ptr-libevent@0x55c6ab838908 size 40
| libevent_malloc: new ptr-libevent@0x55c6ab838d38 size 48
| libevent_realloc: new ptr-libevent@0x55c6ab868bc8 size 256
| libevent_malloc: new ptr-libevent@0x55c6ab8bbeb8 size 16
| libevent_free: release ptr-libevent@0x55c6ab8bbc48
| libevent initialized
| libevent_realloc: new ptr-libevent@0x55c6ab8bbc48 size 64
| global periodic timer EVENT_RESET_LOG_RATE_LIMIT enabled with interval of 3600 seconds
| init_nat_traversal() initialized with keep_alive=0s
NAT-Traversal support [enabled]
| global one-shot timer EVENT_NAT_T_KEEPALIVE initialized
| global one-shot timer EVENT_FREE_ROOT_CERTS initialized
| global periodic timer EVENT_REINIT_SECRET enabled with interval of 3600 seconds
| global one-shot timer EVENT_REVIVE_CONNS initialized
| global periodic timer EVENT_PENDING_DDNS enabled with interval of 60 seconds
| global periodic timer EVENT_PENDING_PHASE2 enabled with interval of 120 seconds
Encryption algorithms:
  AES_CCM_16 IKEv1: ESP IKEv2: ESP FIPS {256,192,*128} aes_ccm, aes_ccm_c
  AES_CCM_12 IKEv1: ESP IKEv2: ESP FIPS {256,192,*128} aes_ccm_b
  AES_CCM_8 IKEv1: ESP IKEv2: ESP FIPS {256,192,*128} aes_ccm_a
  3DES_CBC IKEv1: IKE ESP IKEv2: IKE ESP FIPS [*192] 3des
  CAMELLIA_CTR IKEv1: ESP IKEv2: ESP {256,192,*128}
  CAMELLIA_CBC IKEv1: IKE ESP IKEv2: IKE ESP {256,192,*128} camellia
  AES_GCM_16 IKEv1: ESP IKEv2: IKE ESP FIPS {256,192,*128} aes_gcm, aes_gcm_c
  AES_GCM_12 IKEv1: ESP IKEv2: IKE ESP FIPS {256,192,*128} aes_gcm_b
  AES_GCM_8 IKEv1: ESP IKEv2: IKE ESP FIPS {256,192,*128} aes_gcm_a
  AES_CTR IKEv1: IKE ESP IKEv2: IKE ESP FIPS {256,192,*128} aesctr
  AES_CBC IKEv1: IKE ESP IKEv2: IKE ESP FIPS {256,192,*128} aes
  SERPENT_CBC IKEv1: IKE ESP IKEv2: IKE ESP {256,192,*128} serpent
  TWOFISH_CBC IKEv1: IKE ESP IKEv2: IKE ESP {256,192,*128} twofish
  TWOFISH_SSH IKEv1: IKE IKEv2: IKE ESP {256,192,*128} twofish_cbc_ssh
  NULL_AUTH_AES_GMAC IKEv1: ESP IKEv2: ESP FIPS {256,192,*128} aes_gmac
  NULL IKEv1: ESP IKEv2: ESP []
  CHACHA20_POLY1305 IKEv1: IKEv2: IKE ESP [*256] chacha20poly1305
Hash algorithms:
  MD5 IKEv1: IKE IKEv2:
  SHA1 IKEv1: IKE IKEv2: FIPS sha
  SHA2_256 IKEv1: IKE IKEv2: FIPS sha2, sha256
  SHA2_384 IKEv1: IKE IKEv2: FIPS sha384
  SHA2_512 IKEv1: IKE IKEv2: FIPS sha512
PRF algorithms:
  HMAC_MD5 IKEv1: IKE IKEv2: IKE md5
  HMAC_SHA1 IKEv1: IKE IKEv2: IKE FIPS sha, sha1
  HMAC_SHA2_256 IKEv1: IKE IKEv2: IKE FIPS sha2, sha256, sha2_256
  HMAC_SHA2_384 IKEv1: IKE IKEv2: IKE FIPS sha384, sha2_384
  HMAC_SHA2_512 IKEv1: IKE IKEv2: IKE FIPS sha512, sha2_512
  AES_XCBC IKEv1: IKEv2: IKE aes128_xcbc
Integrity algorithms:
  HMAC_MD5_96 IKEv1: IKE ESP AH IKEv2: IKE ESP AH md5, hmac_md5
  HMAC_SHA1_96 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS sha, sha1, sha1_96, hmac_sha1
  HMAC_SHA2_512_256 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS sha512, sha2_512, sha2_512_256, hmac_sha2_512
  HMAC_SHA2_384_192 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS sha384, sha2_384, sha2_384_192, hmac_sha2_384
  HMAC_SHA2_256_128 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS sha2, sha256, sha2_256, sha2_256_128, hmac_sha2_256
  HMAC_SHA2_256_TRUNCBUG IKEv1: ESP AH IKEv2: AH
  AES_XCBC_96 IKEv1: ESP AH IKEv2: IKE ESP AH aes_xcbc, aes128_xcbc, aes128_xcbc_96
  AES_CMAC_96 IKEv1: ESP AH IKEv2: ESP AH FIPS aes_cmac
  NONE IKEv1: ESP IKEv2: IKE ESP FIPS null
DH algorithms:
  NONE IKEv1: IKEv2: IKE ESP AH FIPS null, dh0
  MODP1536 IKEv1: IKE ESP AH IKEv2: IKE ESP AH dh5
  MODP2048 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS dh14
  MODP3072 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS dh15
  MODP4096 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS dh16
  MODP6144 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS dh17
  MODP8192 IKEv1: IKE ESP AH IKEv2: IKE ESP AH FIPS dh18
  DH19 IKEv1: IKE IKEv2: IKE ESP AH FIPS ecp_256, ecp256
  DH20 IKEv1: IKE IKEv2: IKE ESP AH FIPS ecp_384, ecp384
  DH21 IKEv1: IKE IKEv2: IKE ESP AH FIPS ecp_521, ecp521
  DH31 IKEv1: IKE IKEv2: IKE ESP AH curve25519
testing CAMELLIA_CBC:
  Camellia: 16 bytes with 128-bit key
  Camellia: 16 bytes with 128-bit key
  Camellia: 16 bytes with 256-bit key
  Camellia: 16 bytes with 256-bit key
testing AES_GCM_16:
  empty string
  one block
  two blocks
  two blocks with associated data
testing AES_CTR:
  Encrypting 16 octets using AES-CTR with 128-bit key
  Encrypting 32 octets using AES-CTR with 128-bit key
  Encrypting 36 octets using AES-CTR with 128-bit key
  Encrypting 16 octets using AES-CTR with 192-bit key
  Encrypting 32 octets using AES-CTR with 192-bit key
  Encrypting 36 octets using AES-CTR with 192-bit key
  Encrypting 16 octets using AES-CTR with 256-bit key
  Encrypting 32 octets using AES-CTR with 256-bit key
  Encrypting 36 octets using AES-CTR with 256-bit key
testing AES_CBC:
  Encrypting 16 bytes (1 block) using AES-CBC with 128-bit key
  Encrypting 32 bytes (2 blocks) using AES-CBC with 128-bit key
  Encrypting 48 bytes (3 blocks) using AES-CBC with 128-bit key
  Encrypting 64 bytes (4 blocks) using AES-CBC with 128-bit key
testing AES_XCBC:
  RFC 3566 Test Case #1: AES-XCBC-MAC-96 with 0-byte input
  RFC 3566 Test Case #2: AES-XCBC-MAC-96 with 3-byte input
  RFC 3566 Test Case #3: AES-XCBC-MAC-96 with 16-byte input
  RFC 3566 Test Case #4: AES-XCBC-MAC-96 with 20-byte input
  RFC 3566 Test Case #5: AES-XCBC-MAC-96 with 32-byte input
  RFC 3566 Test Case #6: AES-XCBC-MAC-96 with 34-byte input
  RFC 3566 Test Case #7: AES-XCBC-MAC-96 with 1000-byte input
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 16)
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 10)
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 18)
testing HMAC_MD5:
  RFC 2104: MD5_HMAC test 1
  RFC 2104: MD5_HMAC test 2
  RFC 2104: MD5_HMAC test 3
8 CPU cores online
starting up 7 crypto helpers
started thread for crypto helper 0
started thread for crypto helper 1
started thread for crypto helper 2
| starting up helper thread 0
started thread for crypto helper 3
| status value returned by setting the priority of this thread (crypto helper 0) 22
| crypto helper 0 waiting (nothing to do)
| starting up helper thread 3
| starting up helper thread 4
| status value returned by setting the priority of this thread (crypto helper 4) 22
| crypto helper 4 waiting (nothing to do)
started thread for crypto helper 4
started thread for crypto helper 5
| status value returned by setting the priority of this thread (crypto helper 3) 22
| crypto helper 3 waiting (nothing to do)
| starting up helper thread 1
| status value returned by setting the priority of this thread (crypto helper 1) 22
| crypto helper 1 waiting (nothing to do)
| starting up helper thread 6
| status value returned by setting the priority of this thread (crypto helper 6) 22
| crypto helper 6 waiting (nothing to do)
started thread for crypto helper 6
| checking IKEv1 state table
| MAIN_R0: category: half-open IKE SA flags: 0:
| -> MAIN_R1 EVENT_SO_DISCARD
| MAIN_I1: category: half-open IKE SA flags: 0:
| -> MAIN_I2 EVENT_RETRANSMIT
| starting up helper thread 2
| status value returned by setting the priority of this thread (crypto helper 2) 22
| crypto helper 2 waiting (nothing to do)
| MAIN_R1: category: open IKE SA flags: 200:
| starting up helper thread 5
| -> MAIN_R2 EVENT_RETRANSMIT
| status value returned by setting the priority of this thread (crypto helper 5) 22
| crypto helper 5 waiting (nothing to do)
| -> UNDEFINED EVENT_RETRANSMIT
| -> UNDEFINED EVENT_RETRANSMIT
| MAIN_I2: category: open IKE SA flags: 0:
| -> MAIN_I3 EVENT_RETRANSMIT
| -> UNDEFINED EVENT_RETRANSMIT
| -> UNDEFINED EVENT_RETRANSMIT
| MAIN_R2: category: open IKE SA flags: 0:
| -> MAIN_R3 EVENT_SA_REPLACE
| -> MAIN_R3 EVENT_SA_REPLACE
| -> UNDEFINED EVENT_SA_REPLACE
| MAIN_I3: category: open IKE SA flags: 0:
| -> MAIN_I4 EVENT_SA_REPLACE
| -> MAIN_I4 EVENT_SA_REPLACE
| -> UNDEFINED EVENT_SA_REPLACE
| MAIN_R3: category: established IKE SA flags: 200:
| -> UNDEFINED EVENT_NULL
| MAIN_I4: category: established IKE SA flags: 0:
| -> UNDEFINED EVENT_NULL
| AGGR_R0: category: half-open IKE SA flags: 0:
| -> AGGR_R1 EVENT_SO_DISCARD
| AGGR_I1: category: half-open IKE SA flags: 0:
| -> AGGR_I2 EVENT_SA_REPLACE
| -> AGGR_I2 EVENT_SA_REPLACE
| AGGR_R1: category: open IKE SA flags: 200:
| -> AGGR_R2 EVENT_SA_REPLACE
| -> AGGR_R2 EVENT_SA_REPLACE
| AGGR_I2: category: established IKE SA flags: 200:
| -> UNDEFINED EVENT_NULL
| AGGR_R2: category: established IKE SA flags: 0:
| -> UNDEFINED EVENT_NULL
| QUICK_R0: category: established CHILD SA flags: 0:
| -> QUICK_R1 EVENT_RETRANSMIT
| QUICK_I1: category: established CHILD SA flags: 0:
| -> QUICK_I2 EVENT_SA_REPLACE
| QUICK_R1: category: established CHILD SA flags: 0:
| -> QUICK_R2 EVENT_SA_REPLACE
| QUICK_I2: category: established CHILD SA flags: 200:
| -> UNDEFINED EVENT_NULL
| QUICK_R2: category: established CHILD SA flags: 0:
| -> UNDEFINED EVENT_NULL
| INFO: category: informational flags: 0:
| -> UNDEFINED EVENT_NULL
| INFO_PROTECTED: category: informational flags: 0:
| -> UNDEFINED EVENT_NULL
| XAUTH_R0: category: established IKE SA flags: 0:
| -> XAUTH_R1 EVENT_NULL
| XAUTH_R1: category: established IKE SA flags: 0:
| -> MAIN_R3 EVENT_SA_REPLACE
| MODE_CFG_R0: category: informational flags: 0:
| -> MODE_CFG_R1 EVENT_SA_REPLACE
| MODE_CFG_R1: category: established IKE SA flags: 0:
| -> MODE_CFG_R2 EVENT_SA_REPLACE
| MODE_CFG_R2: category: established IKE SA flags: 0:
| -> UNDEFINED EVENT_NULL
| MODE_CFG_I1: category: established IKE SA flags: 0:
| -> MAIN_I4 EVENT_SA_REPLACE
| XAUTH_I0: category: established IKE SA flags: 0:
| -> XAUTH_I1 EVENT_RETRANSMIT
| XAUTH_I1: category: established IKE SA flags: 0:
| -> MAIN_I4 EVENT_RETRANSMIT
| checking IKEv2 state table
| PARENT_I0: category: ignore flags: 0:
| -> PARENT_I1 EVENT_RETRANSMIT send-request (initiate IKE_SA_INIT)
| PARENT_I1: category: half-open IKE SA flags: 0:
| -> PARENT_I1 EVENT_RETAIN send-request (Initiator: process SA_INIT reply notification)
| -> PARENT_I2 EVENT_RETRANSMIT send-request (Initiator: process IKE_SA_INIT reply, initiate IKE_AUTH)
| PARENT_I2: category: open IKE SA flags: 0:
| -> PARENT_I2 EVENT_NULL (Initiator: process INVALID_SYNTAX AUTH notification)
| -> PARENT_I2 EVENT_NULL (Initiator: process AUTHENTICATION_FAILED AUTH notification)
| -> PARENT_I2 EVENT_NULL (Initiator: process UNSUPPORTED_CRITICAL_PAYLOAD AUTH notification)
| -> V2_IPSEC_I EVENT_SA_REPLACE (Initiator: process IKE_AUTH response)
| -> PARENT_I2 EVENT_NULL (IKE SA: process IKE_AUTH response containing unknown notification)
| PARENT_I3: category: established IKE SA flags: 0:
| -> PARENT_I3 EVENT_RETAIN (I3: Informational Request)
| -> PARENT_I3 EVENT_RETAIN (I3: Informational Response)
| -> PARENT_I3 EVENT_RETAIN (I3: INFORMATIONAL Request)
| -> PARENT_I3 EVENT_RETAIN (I3: INFORMATIONAL Response)
| PARENT_R0: category: half-open IKE SA flags: 0:
| -> PARENT_R1 EVENT_SO_DISCARD send-request (Respond to IKE_SA_INIT)
| PARENT_R1: category: half-open IKE SA flags: 0:
| -> PARENT_R1 EVENT_SA_REPLACE send-request (Responder: process IKE_AUTH request (no SKEYSEED))
| -> V2_IPSEC_R EVENT_SA_REPLACE send-request (Responder: process IKE_AUTH request)
| PARENT_R2: category: established IKE SA flags: 0:
| -> PARENT_R2 EVENT_RETAIN (R2: process Informational Request)
| -> PARENT_R2 EVENT_RETAIN (R2: process Informational Response)
| -> PARENT_R2 EVENT_RETAIN (R2: process INFORMATIONAL Request)
| -> PARENT_R2 EVENT_RETAIN (R2: process INFORMATIONAL Response)
| V2_CREATE_I0: category: established IKE SA flags: 0:
| -> V2_CREATE_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IPsec SA)
| V2_CREATE_I: category: established IKE SA flags: 0:
| -> V2_IPSEC_I EVENT_SA_REPLACE (Process CREATE_CHILD_SA IPsec SA Response)
| V2_REKEY_IKE_I0: category: established IKE SA flags: 0:
| -> V2_REKEY_IKE_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IKE Rekey)
| V2_REKEY_IKE_I: category: established IKE SA flags: 0:
| -> PARENT_I3 EVENT_SA_REPLACE (Process CREATE_CHILD_SA IKE Rekey Response)
| V2_REKEY_CHILD_I0: category: established IKE SA flags: 0:
| -> V2_REKEY_CHILD_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IPsec Rekey SA)
| V2_REKEY_CHILD_I: category: established IKE SA flags: 0:
| V2_CREATE_R: category: established IKE SA flags: 0:
| -> V2_IPSEC_R EVENT_SA_REPLACE send-request (Respond to CREATE_CHILD_SA IPsec SA Request)
| V2_REKEY_IKE_R: category: established IKE SA flags: 0:
| -> PARENT_R2 EVENT_SA_REPLACE send-request (Respond to CREATE_CHILD_SA IKE Rekey)
| V2_REKEY_CHILD_R: category: established IKE SA flags: 0:
| V2_IPSEC_I: category: established CHILD SA flags: 0:
| V2_IPSEC_R: category: established CHILD SA flags: 0:
| IKESA_DEL: category: established IKE SA flags: 0:
| -> IKESA_DEL EVENT_RETAIN (IKE_SA_DEL: process INFORMATIONAL)
| CHILDSA_DEL: category: informational flags: 0:
Using Linux XFRM/NETKEY IPsec interface code on 5.1.18-200.fc29.x86_64
| Hard-wiring algorithms
| adding AES_CCM_16 to kernel algorithm db
| adding AES_CCM_12 to kernel algorithm db
| adding AES_CCM_8 to kernel algorithm db
| adding 3DES_CBC to kernel algorithm db
| adding CAMELLIA_CBC to kernel algorithm db
| adding AES_GCM_16 to kernel algorithm db
| adding AES_GCM_12 to kernel algorithm db
| adding AES_GCM_8 to kernel algorithm db
| adding AES_CTR to kernel algorithm db
| adding AES_CBC to kernel algorithm db
| adding SERPENT_CBC to kernel algorithm db
| adding TWOFISH_CBC to kernel algorithm db
| adding NULL_AUTH_AES_GMAC to kernel algorithm db
| adding NULL to kernel algorithm db
| adding CHACHA20_POLY1305 to kernel algorithm db
| adding HMAC_MD5_96 to kernel algorithm db
| adding HMAC_SHA1_96 to kernel algorithm db
| adding HMAC_SHA2_512_256 to kernel algorithm db
| adding HMAC_SHA2_384_192 to kernel algorithm db
| adding HMAC_SHA2_256_128 to kernel algorithm db
| adding HMAC_SHA2_256_TRUNCBUG to kernel algorithm db
| adding AES_XCBC_96 to kernel algorithm db
| adding AES_CMAC_96 to kernel algorithm db
| adding NONE to kernel algorithm db
| net.ipv6.conf.all.disable_ipv6=1 ignore ipv6 holes
| global periodic timer EVENT_SHUNT_SCAN enabled with interval of 20 seconds
| setup kernel fd callback
| add_fd_read_event_handler: new KERNEL_XRM_FD-pe@0x55c6ab8c14c8
| libevent_malloc: new ptr-libevent@0x55c6ab8a4d78 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8c0a28 size 16
| add_fd_read_event_handler: new KERNEL_ROUTE_FD-pe@0x55c6ab8c0918
| libevent_malloc: new ptr-libevent@0x55c6ab86b2b8 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8c1418 size 16
| global one-shot timer EVENT_CHECK_CRLS initialized
selinux support is enabled.
| unbound context created - setting debug level to 5
| /etc/hosts lookups activated
| /etc/resolv.conf usage activated
| outgoing-port-avoid set 0-65535
| outgoing-port-permit set 32768-60999
| Loading dnssec root key from:/var/lib/unbound/root.key
| No additional dnssec trust anchors defined via dnssec-trusted= option
| Setting up events, loop start
| add_fd_read_event_handler: new PLUTO_CTL_FD-pe@0x55c6ab8c1458
| libevent_malloc: new ptr-libevent@0x55c6ab8cd718 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d8a28 size 16
| libevent_realloc: new ptr-libevent@0x55c6ab867d58 size 256
| libevent_malloc: new ptr-libevent@0x55c6ab8d8a68 size 8
| libevent_realloc: new ptr-libevent@0x55c6ab868608 size 144
| libevent_malloc: new ptr-libevent@0x55c6ab868a68 size 152
| libevent_malloc: new ptr-libevent@0x55c6ab8d8aa8 size 16
| signal event handler PLUTO_SIGCHLD installed
| libevent_malloc: new ptr-libevent@0x55c6ab8d8ae8 size 8
| libevent_malloc: new ptr-libevent@0x55c6ab8d8b28 size 152
| signal event handler PLUTO_SIGTERM installed
| libevent_malloc: new ptr-libevent@0x55c6ab8d8bf8 size 8
| libevent_malloc: new ptr-libevent@0x55c6ab8d8c38 size 152
| signal event handler PLUTO_SIGHUP installed
| libevent_malloc: new ptr-libevent@0x55c6ab8d8d08 size 8
| libevent_realloc: release ptr-libevent@0x55c6ab868608
| libevent_realloc: new ptr-libevent@0x55c6ab8d8d48 size 256
| libevent_malloc: new ptr-libevent@0x55c6ab8d8e78 size 152
| signal event handler PLUTO_SIGSYS installed
| created addconn helper (pid:29484) using fork+execve
| forked child 29484
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
listening for IKE messages
| Inspecting interface lo
| found lo with address 127.0.0.1
| Inspecting interface eth0
| found eth0 with address 192.0.2.254
| Inspecting interface eth1
| found eth1 with address 192.1.2.23
Kernel supports NIC esp-hw-offload
adding interface eth1/eth1 (esp-hw-offload not supported by kernel) 192.1.2.23:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface eth1/eth1 192.1.2.23:4500
adding interface eth0/eth0 (esp-hw-offload not supported by kernel) 192.0.2.254:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface eth0/eth0 192.0.2.254:4500
adding interface lo/lo (esp-hw-offload not supported by kernel) 127.0.0.1:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface lo/lo 127.0.0.1:4500
| no interfaces to sort
| FOR_EACH_UNORIENTED_CONNECTION_... in check_orientations
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d9348
| libevent_malloc: new ptr-libevent@0x55c6ab8cd668 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d93b8 size 16
| setup callback for interface lo 127.0.0.1:4500 fd 22
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d93f8
| libevent_malloc: new ptr-libevent@0x55c6ab869518 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d9468 size 16
| setup callback for interface lo 127.0.0.1:500 fd 21
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d94a8
| libevent_malloc: new ptr-libevent@0x55c6ab86b3b8 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d9518 size 16
| setup callback for interface eth0 192.0.2.254:4500 fd 20
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d9558
| libevent_malloc: new ptr-libevent@0x55c6ab868508 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d95c8 size 16
| setup callback for interface eth0 192.0.2.254:500 fd 19
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d9608
| libevent_malloc: new ptr-libevent@0x55c6ab8394e8 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d9678 size 16
| setup callback for interface eth1 192.1.2.23:4500 fd 18
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d96b8
| libevent_malloc: new ptr-libevent@0x55c6ab8391d8 size 128
| libevent_malloc: new ptr-libevent@0x55c6ab8d9728 size 16
| setup callback for interface eth1 192.1.2.23:500 fd 17
| certs and keys locked by 'free_preshared_secrets'
| certs and keys unlocked by 'free_preshared_secrets'
loading secrets from "/etc/ipsec.secrets"
| saving Modulus
| saving PublicExponent
| ignoring PrivateExponent
| ignoring Prime1
| ignoring Prime2
| ignoring Exponent1
| ignoring Exponent2
| ignoring Coefficient
| ignoring CKAIDNSS
| computed rsa CKAID 61 55 99 73 d3 ac ef 7d 3a 37 0e 3e 82 ad 92 c1
| computed rsa CKAID 8a 82 25 f1
loaded private key for keyid: PKK_RSA:AQO9bJbr3
| certs and keys locked by 'process_secret'
| certs and keys unlocked by 'process_secret'
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.532 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection clear with policy AUTH_NEVER+GROUP+PASS+NEVER_NEGOTIATE
| counting wild cards for (none) is 15
| counting wild cards for (none) is 15
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@(nil): none
| new hp@0x55c6ab8da538
added connection description "clear"
| ike_life: 0s; ipsec_life: 0s; rekey_margin: 0s; rekey_fuzz: 0%; keyingtries: 0; replay_window: 0; policy: AUTH_NEVER+GROUP+PASS+NEVER_NEGOTIATE
| 192.1.2.23---192.1.2.254...%group
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0637 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection clear-or-private with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55c6ab8da538: clear
added connection description "clear-or-private"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| 192.1.2.23[ID_NULL]---192.1.2.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.156 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection private-or-clear with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55c6ab8da538: clear-or-private
added connection description "private-or-clear"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| 192.1.2.23[ID_NULL]---192.1.2.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0983 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection private with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55c6ab8da538: private-or-clear
added connection description "private"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP
| 192.1.2.23[ID_NULL]---192.1.2.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0955 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection block with policy AUTH_NEVER+GROUP+REJECT+NEVER_NEGOTIATE
| counting wild cards for (none) is 15
| counting wild cards for (none) is 15
| find_host_pair: comparing 192.1.2.23:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.2.23:500 0.0.0.0:500 -> hp@0x55c6ab8da538: private
added connection description "block"
| ike_life: 0s; ipsec_life: 0s; rekey_margin: 0s; rekey_fuzz: 0%; keyingtries: 0; replay_window: 0; policy: AUTH_NEVER+GROUP+REJECT+NEVER_NEGOTIATE
| 192.1.2.23---192.1.2.254...%group
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.047 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
listening for IKE messages
| Inspecting interface lo
| found lo with address 127.0.0.1
| Inspecting interface eth0
| found eth0 with address 192.0.2.254
| Inspecting interface eth1
| found eth1 with address 192.1.2.23
| no interfaces to sort
| libevent_free: release ptr-libevent@0x55c6ab8cd668
| free_event_entry: release EVENT_NULL-pe@0x55c6ab8d9348
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d9348
| libevent_malloc: new ptr-libevent@0x55c6ab8cd668 size 128
| setup callback for interface lo 127.0.0.1:4500 fd 22
| libevent_free: release ptr-libevent@0x55c6ab869518
| free_event_entry: release EVENT_NULL-pe@0x55c6ab8d93f8
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d93f8
| libevent_malloc: new ptr-libevent@0x55c6ab869518 size 128
| setup callback for interface lo 127.0.0.1:500 fd 21
| libevent_free: release ptr-libevent@0x55c6ab86b3b8
| free_event_entry: release EVENT_NULL-pe@0x55c6ab8d94a8
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d94a8
| libevent_malloc: new ptr-libevent@0x55c6ab86b3b8 size 128
| setup callback for interface eth0 192.0.2.254:4500 fd 20
| libevent_free: release ptr-libevent@0x55c6ab868508
| free_event_entry: release EVENT_NULL-pe@0x55c6ab8d9558
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d9558
| libevent_malloc: new ptr-libevent@0x55c6ab868508 size 128
| setup callback for interface eth0 192.0.2.254:500 fd 19
| libevent_free: release ptr-libevent@0x55c6ab8394e8
| free_event_entry: release EVENT_NULL-pe@0x55c6ab8d9608
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d9608
| libevent_malloc: new ptr-libevent@0x55c6ab8394e8 size 128
| setup callback for interface eth1 192.1.2.23:4500 fd 18
| libevent_free: release ptr-libevent@0x55c6ab8391d8
| free_event_entry: release EVENT_NULL-pe@0x55c6ab8d96b8
| add_fd_read_event_handler: new ethX-pe@0x55c6ab8d96b8
| libevent_malloc: new ptr-libevent@0x55c6ab8391d8 size 128
| setup callback for interface eth1 192.1.2.23:500 fd 17
| certs and keys locked by 'free_preshared_secrets'
forgetting secrets
| certs and keys unlocked by 'free_preshared_secrets'
loading secrets from "/etc/ipsec.secrets"
| saving Modulus
| saving PublicExponent
| ignoring PrivateExponent
| ignoring Prime1
| ignoring Prime2
| ignoring Exponent1
| ignoring Exponent2
| ignoring Coefficient
| ignoring CKAIDNSS
| computed rsa CKAID 61 55 99 73 d3 ac ef 7d 3a 37 0e 3e 82 ad 92 c1
| computed rsa CKAID 8a 82 25 f1
loaded private key for keyid: PKK_RSA:AQO9bJbr3
| certs and keys locked by 'process_secret'
| certs and keys unlocked by 'process_secret'
loading group "/etc/ipsec.d/policies/block"
loading group "/etc/ipsec.d/policies/private"
loading group "/etc/ipsec.d/policies/private-or-clear"
loading group "/etc/ipsec.d/policies/clear-or-private"
loading group "/etc/ipsec.d/policies/clear"
| 192.1.2.23/32->192.1.2.254/32 0 sport 0 dport 0 clear
| 192.1.2.23/32->192.1.3.254/32 0 sport 0 dport 0 clear
| 192.1.2.23/32->192.1.3.253/32 0 sport 0 dport 0 clear
| 192.1.2.23/32->192.1.2.253/32 0 sport 0 dport 0 clear
| 192.1.2.23/32->192.1.3.0/24 0 sport 0 dport 0 clear-or-private
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.319 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| start processing: connection "clear" (in whack_route_connection() at rcv_whack.c:106)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "clear" (in route_group() at foodgroups.c:435)
| start processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in route_group() at foodgroups.c:435)
| could_route called for clear#192.1.2.254/32 (kind=CK_INSTANCE)
| FOR_EACH_CONNECTION_... in route_owner
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn block mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private-or-clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn block mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private-or-clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: clear#192.1.2.254/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'clear#192.1.2.254/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0
| netlink_shunt_eroute for proto 0, and source port 0 dest port 0
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT
| popen cmd is 1016 chars long
| cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25:
| cmd( 80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' :
| cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19:
| cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT:
| cmd( 320):OCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_:
| cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1:
| cmd( 480):.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_:
| cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_:
| cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA:
| cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P:
| cmd( 800):EER_DNS_INFO=''
PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=: | cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V: | cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 | popen cmd is 1014 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/: | cmd( 80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL: | cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.: | cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC: | cmd( 320):OL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE: | cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2: | cmd( 480):.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' 
PLUTO_PEER_PORT='0' PLUTO_PEER_PR: | cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO: | cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC: | cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE: | cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0: | cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI: | cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "clear" (in route_group() at foodgroups.c:435) | start processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for clear#192.1.3.254/32 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: clear#192.1.3.254/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'clear#192.1.3.254/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' 
PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1016 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA: | cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P: | cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' 
PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=: | cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V: | cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 | popen cmd is 1014 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/: | cmd( 80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL: | cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.: | cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC: | cmd( 320):OL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE: | cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3: | cmd( 480):.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR: | 
cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO: | cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC: | cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE: | cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0: | cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI: | cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "clear" (in route_group() at foodgroups.c:435) | start processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for clear#192.1.3.253/32 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: clear#192.1.3.253/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'clear#192.1.3.253/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' 
PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1016 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA: | cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P: | cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' 
PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=: | cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V: | cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 | popen cmd is 1014 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/: | cmd( 80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL: | cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.: | cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC: | cmd( 320):OL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE: | cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3: | cmd( 480):.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR: | 
cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO: | cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC: | cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE: | cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0: | cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI: | cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "clear" (in route_group() at foodgroups.c:435) | start processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for clear#192.1.2.253/32 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: clear#192.1.2.253/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'clear#192.1.2.253/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' 
PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1016 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25: | cmd( 80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTA: | cmd( 720):NCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P: | cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' 
PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=: | cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V: | cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 | popen cmd is 1014 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/: | cmd( 80):32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PL: | cmd( 160):UTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.: | cmd( 240):1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOC: | cmd( 320):OL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PE: | cmd( 400):ER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2: | cmd( 480):.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PR: | 
cmd( 560):OTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_PO: | cmd( 640):LICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC: | cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE: | cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0: | cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI: | cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | stop processing: connection "clear" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 2.95 milliseconds in whack | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00334 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00201 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00196 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00195 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00194 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00194 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 
0.00195 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00195 milliseconds in signal handler PLUTO_SIGCHLD | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in conn_by_name | start processing: connection "private-or-clear" (in whack_route_connection() at rcv_whack.c:106) | stop processing: connection "private-or-clear" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.023 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in conn_by_name | start processing: connection "private" (in whack_route_connection() at rcv_whack.c:106) | stop processing: connection "private" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.027 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in conn_by_name | start processing: connection "block" (in whack_route_connection() at rcv_whack.c:106) | stop processing: connection "block" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.0294 milliseconds in whack | processing signal PLUTO_SIGCHLD | waitpid returned pid 29484 (exited with status 0) | reaped addconn helper child (status 0) | waitpid returned ECHILD (no child processes left) | spent 0.0183 milliseconds in signal handler PLUTO_SIGCHLD | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in show_connections_status | FOR_EACH_CONNECTION_... 
in show_connections_status | FOR_EACH_STATE_... in show_states_status (sort_states) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.682 milliseconds in whack | processing global timer EVENT_SHUNT_SCAN | expiring aged bare shunts from shunt table | spent 0.00402 milliseconds in global timer EVENT_SHUNT_SCAN | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) shutting down | processing: RESET whack log_fd (was fd@16) (in exit_pluto() at plutomain.c:1825) | certs and keys locked by 'free_preshared_secrets' forgetting secrets | certs and keys unlocked by 'free_preshared_secrets' | start processing: connection "block" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | flush revival: connection 'block' wasn't on the list | stop processing: connection "block" (in discard_connection() at connections.c:249) | start processing: connection "private" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | flush revival: connection 'private' wasn't on the list | stop processing: connection "private" (in discard_connection() at connections.c:249) | start processing: connection "private-or-clear" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... 
in foreach_state_by_connection_func_delete | flush revival: connection 'private-or-clear' wasn't on the list | stop processing: connection "private-or-clear" (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.2.253/32" 0.0.0.0: deleting connection "clear#192.1.2.253/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.2.253/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.253/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O | popen cmd is 1018 
chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25: | cmd( 80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING: | cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'clear#192.1.2.253/32' wasn't on the list | stop processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.3.253/32" 0.0.0.0: deleting connection "clear#192.1.3.253/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.3.253/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.253/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O | popen cmd is 1018 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 
80):3/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING: | cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'clear#192.1.3.253/32' wasn't on the list | stop processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.3.254/32" 0.0.0.0: deleting connection "clear#192.1.3.254/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.3.254/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.254/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O | popen cmd is 1018 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' 
PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING: | cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'clear#192.1.3.254/32' wasn't on the list | stop processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.2.254/32" 0.0.0.0: deleting connection "clear#192.1.2.254/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.2.254/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf | priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private#192.1.3.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.254/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='192.1.2.23' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_O | popen cmd is 1018 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25: | cmd( 80):4/32' PLUTO_INTERFACE='eth1' PLUTO_NEXT_HOP='192.1.2.254' PLUTO_ME='192.1.2.23' : | cmd( 160):PLUTO_MY_ID='192.1.2.23' PLUTO_MY_CLIENT='192.1.2.23/32' PLUTO_MY_CLIENT_NET='19: | cmd( 240):2.1.2.23' 
PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_: | cmd( 560):PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_: | cmd( 640):POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING: | cmd( 720):_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. | flush revival: connection 'clear#192.1.2.254/32' wasn't on the list | stop processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... 
in foreach_state_by_connection_func_delete | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | flush revival: connection 'clear' wasn't on the list | stop processing: connection "clear" (in discard_connection() at connections.c:249) | start processing: connection "clear-or-private#192.1.3.0/24" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | flush revival: connection 'clear-or-private#192.1.3.0/24' wasn't on the list | stop processing: connection "clear-or-private#192.1.3.0/24" (in discard_connection() at connections.c:249) | start processing: connection "clear-or-private" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... 
in foreach_connection_by_alias | free hp@0x55c6ab8da538 | flush revival: connection 'clear-or-private' wasn't on the list | stop processing: connection "clear-or-private" (in discard_connection() at connections.c:249) | crl fetch request list locked by 'free_crl_fetch' | crl fetch request list unlocked by 'free_crl_fetch' shutting down interface lo/lo 127.0.0.1:4500 shutting down interface lo/lo 127.0.0.1:500 shutting down interface eth0/eth0 192.0.2.254:4500 shutting down interface eth0/eth0 192.0.2.254:500 shutting down interface eth1/eth1 192.1.2.23:4500 shutting down interface eth1/eth1 192.1.2.23:500 | FOR_EACH_STATE_... in delete_states_dead_interfaces | libevent_free: release ptr-libevent@0x55c6ab8cd668 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8d9348 | libevent_free: release ptr-libevent@0x55c6ab869518 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8d93f8 | libevent_free: release ptr-libevent@0x55c6ab86b3b8 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8d94a8 | libevent_free: release ptr-libevent@0x55c6ab868508 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8d9558 | libevent_free: release ptr-libevent@0x55c6ab8394e8 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8d9608 | libevent_free: release ptr-libevent@0x55c6ab8391d8 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8d96b8 | FOR_EACH_UNORIENTED_CONNECTION_... 
in check_orientations | libevent_free: release ptr-libevent@0x55c6ab8cd718 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8c1458 | libevent_free: release ptr-libevent@0x55c6ab86b2b8 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8c0918 | libevent_free: release ptr-libevent@0x55c6ab8a4d78 | free_event_entry: release EVENT_NULL-pe@0x55c6ab8c14c8 | global timer EVENT_REINIT_SECRET uninitialized | global timer EVENT_SHUNT_SCAN uninitialized | global timer EVENT_PENDING_DDNS uninitialized | global timer EVENT_PENDING_PHASE2 uninitialized | global timer EVENT_CHECK_CRLS uninitialized | global timer EVENT_REVIVE_CONNS uninitialized | global timer EVENT_FREE_ROOT_CERTS uninitialized | global timer EVENT_RESET_LOG_RATE_LIMIT uninitialized | global timer EVENT_NAT_T_KEEPALIVE uninitialized | libevent_free: release ptr-libevent@0x55c6ab868a68 | signal event handler PLUTO_SIGCHLD uninstalled | libevent_free: release ptr-libevent@0x55c6ab8d8b28 | signal event handler PLUTO_SIGTERM uninstalled | libevent_free: release ptr-libevent@0x55c6ab8d8c38 | signal event handler PLUTO_SIGHUP uninstalled | libevent_free: release ptr-libevent@0x55c6ab8d8e78 | signal event handler PLUTO_SIGSYS uninstalled | releasing event base | libevent_free: release ptr-libevent@0x55c6ab8d8d48 | libevent_free: release ptr-libevent@0x55c6ab8bbd08 | libevent_free: release ptr-libevent@0x55c6ab8bbcb8 | libevent_free: release ptr-libevent@0x55c6ab8bbc48 | libevent_free: release ptr-libevent@0x55c6ab8bbc08 | libevent_free: release ptr-libevent@0x55c6ab8d8a28 | libevent_free: release ptr-libevent@0x55c6ab8d8aa8 | libevent_free: release ptr-libevent@0x55c6ab8bbeb8 | libevent_free: release ptr-libevent@0x55c6ab8c0a28 | libevent_free: release ptr-libevent@0x55c6ab8c1418 | libevent_free: release ptr-libevent@0x55c6ab8d9728 | libevent_free: release ptr-libevent@0x55c6ab8d9678 | libevent_free: release ptr-libevent@0x55c6ab8d95c8 | libevent_free: release ptr-libevent@0x55c6ab8d9518 | libevent_free: release 
ptr-libevent@0x55c6ab8d9468 | libevent_free: release ptr-libevent@0x55c6ab8d93b8 | libevent_free: release ptr-libevent@0x55c6ab868bc8 | libevent_free: release ptr-libevent@0x55c6ab8d8bf8 | libevent_free: release ptr-libevent@0x55c6ab8d8ae8 | libevent_free: release ptr-libevent@0x55c6ab8d8a68 | libevent_free: release ptr-libevent@0x55c6ab8d8d08 | libevent_free: release ptr-libevent@0x55c6ab867d58 | libevent_free: release ptr-libevent@0x55c6ab838908 | libevent_free: release ptr-libevent@0x55c6ab838d38 | libevent_free: release ptr-libevent@0x55c6ab8680c8 | releasing global libevent data | libevent_free: release ptr-libevent@0x55c6ab8387f8 | libevent_free: release ptr-libevent@0x55c6ab838cd8 | libevent_free: release ptr-libevent@0x55c6ab838dd8 leak: group instance name, item size: 30 leak: cloned from groupname, item size: 17 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: policy group path, item size: 50 leak detective found 11 leaks, total size 205
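The `route-host`/`unroute-host` invocations in the log above show how pluto drives its updown script: it exports connection parameters as `PLUTO_*` environment variables and runs the command with the verb in `PLUTO_VERB`. A minimal sketch of a custom hook dispatching on that verb, using only variables visible in this log (the function name `updown_sketch` is hypothetical; this is not a substitute for the stock `ipsec _updown`):

```shell
#!/bin/sh
# Hypothetical updown hook: dispatch on PLUTO_VERB, echoing the action
# it would take. Real hooks would call ip(8) here instead of echo.
updown_sketch() {
    case "${PLUTO_VERB}" in
    route-host)
        # pluto is requesting a route toward the peer's client network
        echo "route ${PLUTO_PEER_CLIENT} via ${PLUTO_NEXT_HOP} dev ${PLUTO_INTERFACE}"
        ;;
    unroute-host)
        # teardown counterpart, seen repeatedly during shutdown in the log
        echo "unroute ${PLUTO_PEER_CLIENT} dev ${PLUTO_INTERFACE}"
        ;;
    *)
        echo "unhandled verb: ${PLUTO_VERB}" >&2
        return 1
        ;;
    esac
}

# Example invocation mirroring the route-host call logged above:
PLUTO_VERB=route-host PLUTO_PEER_CLIENT=192.1.2.253/32 \
PLUTO_NEXT_HOP=192.1.2.254 PLUTO_INTERFACE=eth1 updown_sketch
```

The `Error: Peer netns reference is invalid.` lines in the log are the output of the real `ipsec _updown` command captured via the `2>&1` redirection that pluto appends to the popen'd command string.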