FIPS Product: YES
FIPS Kernel: NO
FIPS Mode: NO
NSS DB directory: sql:/etc/ipsec.d
Initializing NSS
Opening NSS database "sql:/etc/ipsec.d" read-only
NSS initialized
NSS crypto library initialized
FIPS HMAC integrity support [enabled]
FIPS mode disabled for pluto daemon
FIPS HMAC integrity verification self-test FAILED
libcap-ng support [enabled]
Linux audit support [enabled]
Linux audit activated
Starting Pluto (Libreswan Version v3.28-685-gbfd5aef521-master-s2 XFRM(netkey) esp-hw-offload FORK PTHREAD_SETSCHEDPRIO NSS (IPsec profile) DNSSEC FIPS_CHECK LABELED_IPSEC SECCOMP LIBCAP_NG LINUX_AUDIT XAUTH_PAM NETWORKMANAGER CURL(non-NSS)) pid:17603
core dump dir: /tmp
secrets file: /etc/ipsec.secrets
leak-detective enabled
NSS crypto [enabled]
XAUTH PAM support [enabled]
| libevent is using pluto's memory allocator
Initializing libevent in pthreads mode: headers: 2.1.8-stable (2010800); library: 2.1.8-stable (2010800)
| libevent_malloc: new ptr-libevent@0x5646827ffab8 size 40
| libevent_malloc: new ptr-libevent@0x564682801508 size 40
| libevent_malloc: new ptr-libevent@0x564682801488 size 40
| creating event base
| libevent_malloc: new ptr-libevent@0x564682800288 size 56
| libevent_malloc: new ptr-libevent@0x564682792258 size 664
| libevent_malloc: new ptr-libevent@0x564682831658 size 24
| libevent_malloc: new ptr-libevent@0x5646828316a8 size 384
| libevent_malloc: new ptr-libevent@0x564682831618 size 16
| libevent_malloc: new ptr-libevent@0x564682801408 size 40
| libevent_malloc: new ptr-libevent@0x564682801388 size 48
| libevent_realloc: new ptr-libevent@0x564682791ee8 size 256
| libevent_malloc: new ptr-libevent@0x564682831858 size 16
| libevent_free: release ptr-libevent@0x564682800288
| libevent initialized
| libevent_realloc: new ptr-libevent@0x564682800288 size 64
| global periodic timer EVENT_RESET_LOG_RATE_LIMIT enabled with interval of 3600 seconds
| init_nat_traversal() initialized with keep_alive=0s
NAT-Traversal support [enabled]
| global one-shot timer EVENT_NAT_T_KEEPALIVE initialized
| global one-shot timer EVENT_FREE_ROOT_CERTS initialized
| global periodic timer EVENT_REINIT_SECRET enabled with interval of 3600 seconds
| global one-shot timer EVENT_REVIVE_CONNS initialized
| global periodic timer EVENT_PENDING_DDNS enabled with interval of 60 seconds
| global periodic timer EVENT_PENDING_PHASE2 enabled with interval of 120 seconds
Encryption algorithms:
  AES_CCM_16          IKEv1: ESP      IKEv2: ESP      FIPS  {256,192,*128}  aes_ccm, aes_ccm_c
  AES_CCM_12          IKEv1: ESP      IKEv2: ESP      FIPS  {256,192,*128}  aes_ccm_b
  AES_CCM_8           IKEv1: ESP      IKEv2: ESP      FIPS  {256,192,*128}  aes_ccm_a
  3DES_CBC            IKEv1: IKE ESP  IKEv2: IKE ESP  FIPS  [*192]          3des
  CAMELLIA_CTR        IKEv1: ESP      IKEv2: ESP            {256,192,*128}
  CAMELLIA_CBC        IKEv1: IKE ESP  IKEv2: IKE ESP        {256,192,*128}  camellia
  AES_GCM_16          IKEv1: ESP      IKEv2: IKE ESP  FIPS  {256,192,*128}  aes_gcm, aes_gcm_c
  AES_GCM_12          IKEv1: ESP      IKEv2: IKE ESP  FIPS  {256,192,*128}  aes_gcm_b
  AES_GCM_8           IKEv1: ESP      IKEv2: IKE ESP  FIPS  {256,192,*128}  aes_gcm_a
  AES_CTR             IKEv1: IKE ESP  IKEv2: IKE ESP  FIPS  {256,192,*128}  aesctr
  AES_CBC             IKEv1: IKE ESP  IKEv2: IKE ESP  FIPS  {256,192,*128}  aes
  SERPENT_CBC         IKEv1: IKE ESP  IKEv2: IKE ESP        {256,192,*128}  serpent
  TWOFISH_CBC         IKEv1: IKE ESP  IKEv2: IKE ESP        {256,192,*128}  twofish
  TWOFISH_SSH         IKEv1: IKE      IKEv2: IKE ESP        {256,192,*128}  twofish_cbc_ssh
  NULL_AUTH_AES_GMAC  IKEv1: ESP      IKEv2: ESP      FIPS  {256,192,*128}  aes_gmac
  NULL                IKEv1: ESP      IKEv2: ESP            []
  CHACHA20_POLY1305   IKEv1:          IKEv2: IKE ESP        [*256]          chacha20poly1305
Hash algorithms:
  MD5       IKEv1: IKE  IKEv2:
  SHA1      IKEv1: IKE  IKEv2:  FIPS  sha
  SHA2_256  IKEv1: IKE  IKEv2:  FIPS  sha2, sha256
  SHA2_384  IKEv1: IKE  IKEv2:  FIPS  sha384
  SHA2_512  IKEv1: IKE  IKEv2:  FIPS  sha512
PRF algorithms:
  HMAC_MD5       IKEv1: IKE  IKEv2: IKE        md5
  HMAC_SHA1      IKEv1: IKE  IKEv2: IKE  FIPS  sha, sha1
  HMAC_SHA2_256  IKEv1: IKE  IKEv2: IKE  FIPS  sha2, sha256, sha2_256
  HMAC_SHA2_384  IKEv1: IKE  IKEv2: IKE  FIPS  sha384, sha2_384
  HMAC_SHA2_512  IKEv1: IKE  IKEv2: IKE  FIPS  sha512, sha2_512
  AES_XCBC       IKEv1:      IKEv2: IKE        aes128_xcbc
Integrity algorithms:
  HMAC_MD5_96             IKEv1: IKE ESP AH  IKEv2: IKE ESP AH        md5, hmac_md5
  HMAC_SHA1_96            IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha, sha1, sha1_96, hmac_sha1
  HMAC_SHA2_512_256       IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha512, sha2_512, sha2_512_256, hmac_sha2_512
  HMAC_SHA2_384_192       IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha384, sha2_384, sha2_384_192, hmac_sha2_384
  HMAC_SHA2_256_128       IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  sha2, sha256, sha2_256, sha2_256_128, hmac_sha2_256
  HMAC_SHA2_256_TRUNCBUG  IKEv1: ESP AH      IKEv2: AH
  AES_XCBC_96             IKEv1: ESP AH      IKEv2: IKE ESP AH        aes_xcbc, aes128_xcbc, aes128_xcbc_96
  AES_CMAC_96             IKEv1: ESP AH      IKEv2: ESP AH      FIPS  aes_cmac
  NONE                    IKEv1: ESP         IKEv2: IKE ESP     FIPS  null
DH algorithms:
  NONE      IKEv1:             IKEv2: IKE ESP AH  FIPS  null, dh0
  MODP1536  IKEv1: IKE ESP AH  IKEv2: IKE ESP AH        dh5
  MODP2048  IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh14
  MODP3072  IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh15
  MODP4096  IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh16
  MODP6144  IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh17
  MODP8192  IKEv1: IKE ESP AH  IKEv2: IKE ESP AH  FIPS  dh18
  DH19      IKEv1: IKE         IKEv2: IKE ESP AH  FIPS  ecp_256, ecp256
  DH20      IKEv1: IKE         IKEv2: IKE ESP AH  FIPS  ecp_384, ecp384
  DH21      IKEv1: IKE         IKEv2: IKE ESP AH  FIPS  ecp_521, ecp521
  DH31      IKEv1: IKE         IKEv2: IKE ESP AH        curve25519
testing CAMELLIA_CBC:
  Camellia: 16 bytes with 128-bit key
  Camellia: 16 bytes with 128-bit key
  Camellia: 16 bytes with 256-bit key
  Camellia: 16 bytes with 256-bit key
testing AES_GCM_16:
  empty string
  one block
  two blocks
  two blocks with associated data
testing AES_CTR:
  Encrypting 16 octets using AES-CTR with 128-bit key
  Encrypting 32 octets using AES-CTR with 128-bit key
  Encrypting 36 octets using AES-CTR with 128-bit key
  Encrypting 16 octets using AES-CTR with 192-bit key
  Encrypting 32 octets using AES-CTR with 192-bit key
  Encrypting 36 octets using AES-CTR with 192-bit key
  Encrypting 16 octets using AES-CTR with 256-bit key
  Encrypting 32 octets using AES-CTR with 256-bit key
  Encrypting 36 octets using AES-CTR with 256-bit key
testing AES_CBC:
  Encrypting 16 bytes (1 block) using AES-CBC with 128-bit key
  Encrypting 32 bytes (2 blocks) using AES-CBC with 128-bit key
  Encrypting 48 bytes (3 blocks) using AES-CBC with 128-bit key
  Encrypting 64 bytes (4 blocks) using AES-CBC with 128-bit key
testing AES_XCBC:
  RFC 3566 Test Case #1: AES-XCBC-MAC-96 with 0-byte input
  RFC 3566 Test Case #2: AES-XCBC-MAC-96 with 3-byte input
  RFC 3566 Test Case #3: AES-XCBC-MAC-96 with 16-byte input
  RFC 3566 Test Case #4: AES-XCBC-MAC-96 with 20-byte input
  RFC 3566 Test Case #5: AES-XCBC-MAC-96 with 32-byte input
  RFC 3566 Test Case #6: AES-XCBC-MAC-96 with 34-byte input
  RFC 3566 Test Case #7: AES-XCBC-MAC-96 with 1000-byte input
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 16)
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 10)
  RFC 4434 Test Case AES-XCBC-PRF-128 with 20-byte input (key length 18)
testing HMAC_MD5:
  RFC 2104: MD5_HMAC test 1
  RFC 2104: MD5_HMAC test 2
  RFC 2104: MD5_HMAC test 3
8 CPU cores online
starting up 7 crypto helpers
started thread for crypto helper 0
started thread for crypto helper 1
started thread for crypto helper 2
| starting up helper thread 0
started thread for crypto helper 3
| starting up helper thread 1
| starting up helper thread 3
started thread for crypto helper 4
| status value returned by setting the priority of this thread (crypto helper 3) 22
| crypto helper 3 waiting (nothing to do)
| status value returned by setting the priority of this thread (crypto helper 1) 22
| crypto helper 1 waiting (nothing to do)
started thread for crypto helper 5
| starting up helper thread 5
| status value returned by setting the priority of this thread (crypto helper 5) 22
| crypto helper 5 waiting (nothing to do)
| status value returned by setting the priority of this thread (crypto helper 0) 22
| starting up helper thread 2
started thread for crypto helper 6
| starting up helper thread 6
| status value returned by setting the priority of this thread (crypto helper 2) 22
| status value returned by setting the priority of this thread (crypto helper 6) 22
| checking IKEv1 state table
| starting up helper thread 4
| status value returned by setting the priority of this thread (crypto helper 4) 22
| crypto helper 0 waiting (nothing to do)
| MAIN_R0: category: half-open IKE SA flags: 0:
| crypto helper 2 waiting (nothing to do)
|   -> MAIN_R1 EVENT_SO_DISCARD
| crypto helper 6 waiting (nothing to do)
| MAIN_I1: category: half-open IKE SA flags: 0:
| crypto helper 4 waiting (nothing to do)
|   -> MAIN_I2 EVENT_RETRANSMIT
| MAIN_R1: category: open IKE SA flags: 200:
|   -> MAIN_R2 EVENT_RETRANSMIT
|   -> UNDEFINED EVENT_RETRANSMIT
|   -> UNDEFINED EVENT_RETRANSMIT
| MAIN_I2: category: open IKE SA flags: 0:
|   -> MAIN_I3 EVENT_RETRANSMIT
|   -> UNDEFINED EVENT_RETRANSMIT
|   -> UNDEFINED EVENT_RETRANSMIT
| MAIN_R2: category: open IKE SA flags: 0:
|   -> MAIN_R3 EVENT_SA_REPLACE
|   -> MAIN_R3 EVENT_SA_REPLACE
|   -> UNDEFINED EVENT_SA_REPLACE
| MAIN_I3: category: open IKE SA flags: 0:
|   -> MAIN_I4 EVENT_SA_REPLACE
|   -> MAIN_I4 EVENT_SA_REPLACE
|   -> UNDEFINED EVENT_SA_REPLACE
| MAIN_R3: category: established IKE SA flags: 200:
|   -> UNDEFINED EVENT_NULL
| MAIN_I4: category: established IKE SA flags: 0:
|   -> UNDEFINED EVENT_NULL
| AGGR_R0: category: half-open IKE SA flags: 0:
|   -> AGGR_R1 EVENT_SO_DISCARD
| AGGR_I1: category: half-open IKE SA flags: 0:
|   -> AGGR_I2 EVENT_SA_REPLACE
|   -> AGGR_I2 EVENT_SA_REPLACE
| AGGR_R1: category: open IKE SA flags: 200:
|   -> AGGR_R2 EVENT_SA_REPLACE
|   -> AGGR_R2 EVENT_SA_REPLACE
| AGGR_I2: category: established IKE SA flags: 200:
|   -> UNDEFINED EVENT_NULL
| AGGR_R2: category: established IKE SA flags: 0:
|   -> UNDEFINED EVENT_NULL
| QUICK_R0: category: established CHILD SA flags: 0:
|   -> QUICK_R1 EVENT_RETRANSMIT
| QUICK_I1: category: established CHILD SA flags: 0:
|   -> QUICK_I2 EVENT_SA_REPLACE
| QUICK_R1: category: established CHILD SA flags: 0:
|   -> QUICK_R2 EVENT_SA_REPLACE
| QUICK_I2: category: established CHILD SA flags: 200:
|   -> UNDEFINED EVENT_NULL
| QUICK_R2: category: established CHILD SA flags: 0:
|   -> UNDEFINED EVENT_NULL
| INFO: category: informational flags: 0:
|   -> UNDEFINED EVENT_NULL
| INFO_PROTECTED: category: informational flags: 0:
|   -> UNDEFINED EVENT_NULL
| XAUTH_R0: category: established IKE SA flags: 0:
|   -> XAUTH_R1 EVENT_NULL
| XAUTH_R1: category: established IKE SA flags: 0:
|   -> MAIN_R3 EVENT_SA_REPLACE
| MODE_CFG_R0: category: informational flags: 0:
|   -> MODE_CFG_R1 EVENT_SA_REPLACE
| MODE_CFG_R1: category: established IKE SA flags: 0:
|   -> MODE_CFG_R2 EVENT_SA_REPLACE
| MODE_CFG_R2: category: established IKE SA flags: 0:
|   -> UNDEFINED EVENT_NULL
| MODE_CFG_I1: category: established IKE SA flags: 0:
|   -> MAIN_I4 EVENT_SA_REPLACE
| XAUTH_I0: category: established IKE SA flags: 0:
|   -> XAUTH_I1 EVENT_RETRANSMIT
| XAUTH_I1: category: established IKE SA flags: 0:
|   -> MAIN_I4 EVENT_RETRANSMIT
| checking IKEv2 state table
| PARENT_I0: category: ignore flags: 0:
|   -> PARENT_I1 EVENT_RETRANSMIT send-request (initiate IKE_SA_INIT)
| PARENT_I1: category: half-open IKE SA flags: 0:
|   -> PARENT_I1 EVENT_RETAIN send-request (Initiator: process SA_INIT reply notification)
|   -> PARENT_I2 EVENT_RETRANSMIT send-request (Initiator: process IKE_SA_INIT reply, initiate IKE_AUTH)
| PARENT_I2: category: open IKE SA flags: 0:
|   -> PARENT_I2 EVENT_NULL (Initiator: process INVALID_SYNTAX AUTH notification)
|   -> PARENT_I2 EVENT_NULL (Initiator: process AUTHENTICATION_FAILED AUTH notification)
|   -> PARENT_I2 EVENT_NULL (Initiator: process UNSUPPORTED_CRITICAL_PAYLOAD AUTH notification)
|   -> V2_IPSEC_I EVENT_SA_REPLACE (Initiator: process IKE_AUTH response)
|   -> PARENT_I2 EVENT_NULL (IKE SA: process IKE_AUTH response containing unknown notification)
| PARENT_I3: category: established IKE SA flags: 0:
|   -> PARENT_I3 EVENT_RETAIN (I3: Informational Request)
|   -> PARENT_I3 EVENT_RETAIN (I3: Informational Response)
|   -> PARENT_I3 EVENT_RETAIN (I3: INFORMATIONAL Request)
|   -> PARENT_I3 EVENT_RETAIN (I3: INFORMATIONAL Response)
| PARENT_R0: category: half-open IKE SA flags: 0:
|   -> PARENT_R1 EVENT_SO_DISCARD send-request (Respond to IKE_SA_INIT)
| PARENT_R1: category: half-open IKE SA flags: 0:
|   -> PARENT_R1 EVENT_SA_REPLACE send-request (Responder: process IKE_AUTH request (no SKEYSEED))
|   -> V2_IPSEC_R EVENT_SA_REPLACE send-request (Responder: process IKE_AUTH request)
| PARENT_R2: category: established IKE SA flags: 0:
|   -> PARENT_R2 EVENT_RETAIN (R2: process Informational Request)
|   -> PARENT_R2 EVENT_RETAIN (R2: process Informational Response)
|   -> PARENT_R2 EVENT_RETAIN (R2: process INFORMATIONAL Request)
|   -> PARENT_R2 EVENT_RETAIN (R2: process INFORMATIONAL Response)
| V2_CREATE_I0: category: established IKE SA flags: 0:
|   -> V2_CREATE_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IPsec SA)
| V2_CREATE_I: category: established IKE SA flags: 0:
|   -> V2_IPSEC_I EVENT_SA_REPLACE (Process CREATE_CHILD_SA IPsec SA Response)
| V2_REKEY_IKE_I0: category: established IKE SA flags: 0:
|   -> V2_REKEY_IKE_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IKE Rekey)
| V2_REKEY_IKE_I: category: established IKE SA flags: 0:
|   -> PARENT_I3 EVENT_SA_REPLACE (Process CREATE_CHILD_SA IKE Rekey Response)
| V2_REKEY_CHILD_I0: category: established IKE SA flags: 0:
|   -> V2_REKEY_CHILD_I EVENT_RETRANSMIT send-request (Initiate CREATE_CHILD_SA IPsec Rekey SA)
| V2_REKEY_CHILD_I: category: established IKE SA flags: 0:
| V2_CREATE_R: category: established IKE SA flags: 0:
|   -> V2_IPSEC_R EVENT_SA_REPLACE send-request (Respond to CREATE_CHILD_SA IPsec SA Request)
| V2_REKEY_IKE_R: category: established IKE SA flags: 0:
|   -> PARENT_R2 EVENT_SA_REPLACE send-request (Respond to CREATE_CHILD_SA IKE Rekey)
| V2_REKEY_CHILD_R: category: established IKE SA flags: 0:
| V2_IPSEC_I: category: established CHILD SA flags: 0:
| V2_IPSEC_R: category: established CHILD SA flags: 0:
| IKESA_DEL: category: established IKE SA flags: 0:
|   -> IKESA_DEL EVENT_RETAIN (IKE_SA_DEL: process INFORMATIONAL)
| CHILDSA_DEL: category: informational flags: 0:
Using Linux XFRM/NETKEY IPsec interface code on 5.1.18-200.fc29.x86_64
| Hard-wiring algorithms
| adding AES_CCM_16 to kernel algorithm db
| adding AES_CCM_12 to kernel algorithm db
| adding AES_CCM_8 to kernel algorithm db
| adding 3DES_CBC to kernel algorithm db
| adding CAMELLIA_CBC to kernel algorithm db
| adding AES_GCM_16 to kernel algorithm db
| adding AES_GCM_12 to kernel algorithm db
| adding AES_GCM_8 to kernel algorithm db
| adding AES_CTR to kernel algorithm db
| adding AES_CBC to kernel algorithm db
| adding SERPENT_CBC to kernel algorithm db
| adding TWOFISH_CBC to kernel algorithm db
| adding NULL_AUTH_AES_GMAC to kernel algorithm db
| adding NULL to kernel algorithm db
| adding CHACHA20_POLY1305 to kernel algorithm db
| adding HMAC_MD5_96 to kernel algorithm db
| adding HMAC_SHA1_96 to kernel algorithm db
| adding HMAC_SHA2_512_256 to kernel algorithm db
| adding HMAC_SHA2_384_192 to kernel algorithm db
| adding HMAC_SHA2_256_128 to kernel algorithm db
| adding HMAC_SHA2_256_TRUNCBUG to kernel algorithm db
| adding AES_XCBC_96 to kernel algorithm db
| adding AES_CMAC_96 to kernel algorithm db
| adding NONE to kernel algorithm db
| net.ipv6.conf.all.disable_ipv6=1 ignore ipv6 holes
| global periodic timer EVENT_SHUNT_SCAN enabled with interval of 20 seconds
| setup kernel fd callback
| add_fd_read_event_handler: new KERNEL_XRM_FD-pe@0x564682831078
| libevent_malloc: new ptr-libevent@0x56468282f858 size 128
| libevent_malloc: new ptr-libevent@0x564682836a78 size 16
| add_fd_read_event_handler: new KERNEL_ROUTE_FD-pe@0x564682836de8
| libevent_malloc: new ptr-libevent@0x5646828051a8 size 128
| libevent_malloc: new ptr-libevent@0x564682837398 size 16
| global one-shot timer EVENT_CHECK_CRLS initialized
selinux support is enabled.
| unbound context created - setting debug level to 5
| /etc/hosts lookups activated
| /etc/resolv.conf usage activated
| outgoing-port-avoid set 0-65535
| outgoing-port-permit set 32768-60999
| Loading dnssec root key from:/var/lib/unbound/root.key
| No additional dnssec trust anchors defined via dnssec-trusted= option
| Setting up events, loop start
| add_fd_read_event_handler: new PLUTO_CTL_FD-pe@0x564682837288
| libevent_malloc: new ptr-libevent@0x564682843138 size 128
| libevent_malloc: new ptr-libevent@0x56468284e3e8 size 16
| libevent_realloc: new ptr-libevent@0x56468284e428 size 256
| libevent_malloc: new ptr-libevent@0x56468284e558 size 8
| libevent_realloc: new ptr-libevent@0x56468284e598 size 144
| libevent_malloc: new ptr-libevent@0x564682792818 size 152
| libevent_malloc: new ptr-libevent@0x56468284e658 size 16
| signal event handler PLUTO_SIGCHLD installed
| libevent_malloc: new ptr-libevent@0x56468284e698 size 8
| libevent_malloc: new ptr-libevent@0x56468284e6d8 size 152
| signal event handler PLUTO_SIGTERM installed
| libevent_malloc: new ptr-libevent@0x56468284e7a8 size 8
| libevent_malloc: new ptr-libevent@0x56468284e7e8 size 152
| signal event handler PLUTO_SIGHUP installed
| libevent_malloc: new ptr-libevent@0x56468284e8b8 size 8
| libevent_realloc: release ptr-libevent@0x56468284e598
| libevent_realloc: new ptr-libevent@0x56468284e8f8 size 256
| libevent_malloc: new ptr-libevent@0x56468284ea28 size 152
| signal event handler PLUTO_SIGSYS installed
| created addconn helper (pid:17754) using fork+execve
| forked child 17754
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
listening for IKE messages
| Inspecting interface lo
| found lo with address 127.0.0.1
| Inspecting interface eth0
| found eth0 with address 192.1.3.209
Kernel supports NIC esp-hw-offload
adding interface eth0/eth0 (esp-hw-offload not supported by kernel) 192.1.3.209:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface eth0/eth0 192.1.3.209:4500
adding interface lo/lo (esp-hw-offload not supported by kernel) 127.0.0.1:500
| NAT-Traversal: Trying sockopt style NAT-T
| NAT-Traversal: ESPINUDP(2) setup succeeded for sockopt style NAT-T family IPv4
adding interface lo/lo 127.0.0.1:4500
| no interfaces to sort
| FOR_EACH_UNORIENTED_CONNECTION_... in check_orientations
| add_fd_read_event_handler: new ethX-pe@0x56468284ee28
| libevent_malloc: new ptr-libevent@0x564682843088 size 128
| libevent_malloc: new ptr-libevent@0x56468284ee98 size 16
| setup callback for interface lo 127.0.0.1:4500 fd 20
| add_fd_read_event_handler: new ethX-pe@0x56468284eed8
| libevent_malloc: new ptr-libevent@0x564682805258 size 128
| libevent_malloc: new ptr-libevent@0x56468284ef48 size 16
| setup callback for interface lo 127.0.0.1:500 fd 19
| add_fd_read_event_handler: new ethX-pe@0x56468284ef88
| libevent_malloc: new ptr-libevent@0x564682806488 size 128
| libevent_malloc: new ptr-libevent@0x56468284eff8 size 16
| setup callback for interface eth0 192.1.3.209:4500 fd 18
| add_fd_read_event_handler: new ethX-pe@0x56468284f038
| libevent_malloc: new ptr-libevent@0x5646827ffe88 size 128
| libevent_malloc: new ptr-libevent@0x56468284f0a8 size 16
| setup callback for interface eth0 192.1.3.209:500 fd 17
| certs and keys locked by 'free_preshared_secrets'
| certs and keys unlocked by 'free_preshared_secrets'
loading secrets from "/etc/ipsec.secrets"
| saving Modulus
| saving PublicExponent
| computed rsa CKAID 1a 15 cc e8 92 73 43 9c 2b f4 20 2a c1 06 6e f2
| computed rsa CKAID 59 b0 ef 45
loaded private key for keyid: PKK_RSA:AQPHFfpyJ
| certs and keys locked by 'process_secret'
| certs and keys unlocked by 'process_secret'
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.416 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection clear with policy AUTH_NEVER+GROUP+PASS+NEVER_NEGOTIATE
| counting wild cards for (none) is 15
| counting wild cards for (none) is 15
| connect_to_host_pair: 192.1.3.209:500 0.0.0.0:500 -> hp@(nil): none
| new hp@0x56468284fef8
added connection description "clear"
| ike_life: 0s; ipsec_life: 0s; rekey_margin: 0s; rekey_fuzz: 0%; keyingtries: 0; replay_window: 0; policy: AUTH_NEVER+GROUP+PASS+NEVER_NEGOTIATE
| 192.1.3.209---192.1.3.254...%group
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.134 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection clear-or-private with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.3.209:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.3.209:500 0.0.0.0:500 -> hp@0x56468284fef8: clear
added connection description "clear-or-private"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 1; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| 192.1.3.209[ID_NULL]---192.1.3.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.123 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection private-or-clear with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.3.209:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.3.209:500 0.0.0.0:500 -> hp@0x56468284fef8: clear-or-private
added connection description "private-or-clear"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 1; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+NEGO_PASS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failurePASS
| 192.1.3.209[ID_NULL]---192.1.3.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.173 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection private with policy AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP
| ike (phase1) algorithm values: AES_GCM_16_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_GCM_16_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_256-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31, AES_CBC_128-HMAC_SHA2_512+HMAC_SHA2_256-MODP2048+MODP3072+MODP4096+MODP8192+DH19+DH20+DH21+DH31
| from whack: got --esp=
| ESP/AH string values: AES_GCM_16_256-NONE, AES_GCM_16_128-NONE, AES_CBC_256-HMAC_SHA2_512_256+HMAC_SHA2_256_128, AES_CBC_128-HMAC_SHA2_512_256+HMAC_SHA2_256_128
| counting wild cards for ID_NULL is 0
| counting wild cards for ID_NULL is 0
| find_host_pair: comparing 192.1.3.209:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.3.209:500 0.0.0.0:500 -> hp@0x56468284fef8: private-or-clear
added connection description "private"
| ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 1; replay_window: 32; policy: AUTHNULL+ENCRYPT+TUNNEL+PFS+OPPORTUNISTIC+GROUP+IKEV2_ALLOW+SAREF_TRACK+IKE_FRAG_ALLOW+ESN_NO+failureDROP
| 192.1.3.209[ID_NULL]---192.1.3.254...%opportunisticgroup[ID_NULL]
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0992 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in foreach_connection_by_alias
| FOR_EACH_CONNECTION_... in conn_by_name
| Added new connection block with policy AUTH_NEVER+GROUP+REJECT+NEVER_NEGOTIATE
| counting wild cards for (none) is 15
| counting wild cards for (none) is 15
| find_host_pair: comparing 192.1.3.209:500 to 0.0.0.0:500 but ignoring ports
| connect_to_host_pair: 192.1.3.209:500 0.0.0.0:500 -> hp@0x56468284fef8: private
added connection description "block"
| ike_life: 0s; ipsec_life: 0s; rekey_margin: 0s; rekey_fuzz: 0%; keyingtries: 0; replay_window: 0; policy: AUTH_NEVER+GROUP+REJECT+NEVER_NEGOTIATE
| 192.1.3.209---192.1.3.254...%group
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.0635 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
listening for IKE messages
| Inspecting interface lo
| found lo with address 127.0.0.1
| Inspecting interface eth0
| found eth0 with address 192.1.3.209
| no interfaces to sort
| libevent_free: release ptr-libevent@0x564682843088
| free_event_entry: release EVENT_NULL-pe@0x56468284ee28
| add_fd_read_event_handler: new ethX-pe@0x56468284ee28
| libevent_malloc: new ptr-libevent@0x564682843088 size 128
| setup callback for interface lo 127.0.0.1:4500 fd 20
| libevent_free: release ptr-libevent@0x564682805258
| free_event_entry: release EVENT_NULL-pe@0x56468284eed8
| add_fd_read_event_handler: new ethX-pe@0x56468284eed8
| libevent_malloc: new ptr-libevent@0x564682805258 size 128
| setup callback for interface lo 127.0.0.1:500 fd 19
| libevent_free: release ptr-libevent@0x564682806488
| free_event_entry: release EVENT_NULL-pe@0x56468284ef88
| add_fd_read_event_handler: new ethX-pe@0x56468284ef88
| libevent_malloc: new ptr-libevent@0x564682806488 size 128
| setup callback for interface eth0 192.1.3.209:4500 fd 18
| libevent_free: release ptr-libevent@0x5646827ffe88
| free_event_entry: release EVENT_NULL-pe@0x56468284f038
| add_fd_read_event_handler: new ethX-pe@0x56468284f038
| libevent_malloc: new ptr-libevent@0x5646827ffe88 size 128
| setup callback for interface eth0 192.1.3.209:500 fd 17
| certs and keys locked by 'free_preshared_secrets'
forgetting secrets
| certs and keys unlocked by 'free_preshared_secrets'
loading secrets from "/etc/ipsec.secrets"
| saving Modulus
| saving PublicExponent
| computed rsa CKAID 1a 15 cc e8 92 73 43 9c 2b f4 20 2a c1 06 6e f2
| computed rsa CKAID 59 b0 ef 45
loaded private key for keyid: PKK_RSA:AQPHFfpyJ
| certs and keys locked by 'process_secret'
| certs and keys unlocked by 'process_secret'
loading group "/etc/ipsec.d/policies/block"
loading group "/etc/ipsec.d/policies/private"
loading group "/etc/ipsec.d/policies/private-or-clear"
loading group "/etc/ipsec.d/policies/clear-or-private"
loading group "/etc/ipsec.d/policies/clear"
| 192.1.3.209/32->192.1.2.254/32 0 sport 0 dport 0 clear
| 192.1.3.209/32->192.1.3.254/32 0 sport 0 dport 0 clear
| 192.1.3.209/32->192.1.3.253/32 0 sport 0 dport 0 clear
| 192.1.3.209/32->192.1.2.253/32 0 sport 0 dport 0 clear
| 192.1.3.209/32->192.1.2.0/24 0 sport 0 dport 0 block
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| FOR_EACH_CONNECTION_... in conn_by_name
| close_any(fd@16) (in whack_process() at rcv_whack.c:700)
| spent 0.428 milliseconds in whack
| accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722)
| FOR_EACH_CONNECTION_... in conn_by_name
| start processing: connection "clear" (in whack_route_connection() at rcv_whack.c:106)
| FOR_EACH_CONNECTION_... in conn_by_name
| suspend processing: connection "clear" (in route_group() at foodgroups.c:435)
| start processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in route_group() at foodgroups.c:435)
| could_route called for clear#192.1.2.254/32 (kind=CK_INSTANCE)
| FOR_EACH_CONNECTION_... in route_owner
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn block#192.1.2.0/24 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn block mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private-or-clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute() for proto 0, and source port 0 dest port 0
| FOR_EACH_CONNECTION_... in route_owner
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn block#192.1.2.0/24 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn block mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn private-or-clear mark 0/00000000, 0/00000000
| conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs
| conn clear-or-private mark 0/00000000, 0/00000000
| route owner of "clear#192.1.2.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL
| route_and_eroute with c: clear#192.1.2.254/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0
| shunt_eroute() called for connection 'clear#192.1.2.254/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0
| netlink_shunt_eroute for proto 0, and source port 0 dest port 0
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf
| netlink_raw_eroute: SPI_PASS
| IPsec Sa SPD priority set to 1564639
| route_and_eroute: firewall_notified: true
| running updown command "ipsec _updown" for verb prepare
| command executing prepare-host
| executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI
| popen cmd is 1020 chars long
| cmd(   0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25:
| cmd(  80):4/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209':
| cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=:
| cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_:
| cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL:
| cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='1:
| cmd( 480):92.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P:
| cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C:
| cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_I:
| cmd( 720):NSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLU:
| cmd( 800):TO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SER:
| cmd( 880):VER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='n:
| cmd( 960):o' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1:
| running updown command "ipsec _updown" for verb route
| command executing route-host
| executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT
| popen cmd is 1018 chars long
| cmd(   0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/:
| cmd(  80):32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' P:
| cmd( 160):LUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='1:
| cmd( 240):92.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PR:
| cmd( 320):OTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUT:
| cmd( 400):O_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192:
| cmd( 480):.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEE:
| cmd(
560):R_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CON: | cmd( 640):N_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INS: | cmd( 720):TANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "clear" (in route_group() at foodgroups.c:435) | start processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for clear#192.1.3.254/32 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.254/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: clear#192.1.3.254/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'clear#192.1.3.254/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' 
PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI | popen cmd is 1020 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):4/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_I: | cmd( 720):NSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLU: | cmd( 800):TO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' 
PLUTO_CFG_SER: | cmd( 880):VER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='n: | cmd( 960):o' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1018 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/: | cmd( 80):32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' P: | cmd( 160):LUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='1: | cmd( 240):92.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PR: | cmd( 320):OTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUT: | cmd( 400):O_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192: | cmd( 480):.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEE: | cmd( 
560):R_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CON: | cmd( 640):N_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INS: | cmd( 720):TANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "clear" (in route_group() at foodgroups.c:435) | start processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for clear#192.1.3.253/32 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: clear#192.1.3.253/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'clear#192.1.3.253/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' 
PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI | popen cmd is 1020 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):3/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_I: | cmd( 720):NSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLU: | cmd( 800):TO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' 
PLUTO_CFG_SER: | cmd( 880):VER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='n: | cmd( 960):o' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1018 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/: | cmd( 80):32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' P: | cmd( 160):LUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='1: | cmd( 240):92.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PR: | cmd( 320):OTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUT: | cmd( 400):O_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192: | cmd( 480):.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEE: | cmd( 
560):R_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CON: | cmd( 640):N_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INS: | cmd( 720):TANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "clear" (in route_group() at foodgroups.c:435) | start processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for clear#192.1.2.253/32 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.253/32" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: clear#192.1.2.253/32 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'clear#192.1.2.253/32' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | netlink_raw_eroute: SPI_PASS | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' 
PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI | popen cmd is 1020 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25: | cmd( 80):3/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_I: | cmd( 720):NSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLU: | cmd( 800):TO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' 
PLUTO_CFG_SER: | cmd( 880):VER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='n: | cmd( 960):o' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1018 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/: | cmd( 80):32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' P: | cmd( 160):LUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='1: | cmd( 240):92.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PR: | cmd( 320):OTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUT: | cmd( 400):O_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192: | cmd( 480):.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEE: | cmd( 
560):R_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CON: | cmd( 640):N_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INS: | cmd( 720):TANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO: | cmd( 800):_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVE: | cmd( 880):R='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no': | cmd( 960): VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "clear" (in route_group() at foodgroups.c:439) | stop processing: connection "clear" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 4.09 milliseconds in whack | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00588 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00275 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.0156 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00282 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.0028 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00251 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00255 
milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00267 milliseconds in signal handler PLUTO_SIGCHLD | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in conn_by_name | start processing: connection "private-or-clear" (in whack_route_connection() at rcv_whack.c:106) | stop processing: connection "private-or-clear" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.0478 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in conn_by_name | start processing: connection "private" (in whack_route_connection() at rcv_whack.c:106) | stop processing: connection "private" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.0224 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... in conn_by_name | start processing: connection "block" (in whack_route_connection() at rcv_whack.c:106) | FOR_EACH_CONNECTION_... in conn_by_name | suspend processing: connection "block" (in route_group() at foodgroups.c:435) | start processing: connection "block#192.1.2.0/24" 0.0.0.0 (in route_group() at foodgroups.c:435) | could_route called for block#192.1.2.0/24 (kind=CK_INSTANCE) | FOR_EACH_CONNECTION_... 
in route_owner | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "block#192.1.2.0/24" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute() for proto 0, and source port 0 dest port 0 | FOR_EACH_CONNECTION_... 
in route_owner | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "block#192.1.2.0/24" 0.0.0.0 unrouted: NULL; eroute owner: NULL | route_and_eroute with c: block#192.1.2.0/24 (next: none) ero:null esr:{(nil)} ro:null rosr:{(nil)} and state: #0 | shunt_eroute() called for connection 'block#192.1.2.0/24' to 'add' for rt_kind 'prospective erouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "block#192.1.2.0/24" is 0x17dfdf | IPsec Sa SPD priority set to 1564639 | priority calculation of connection "block#192.1.2.0/24" is 0x17dfdf | IPsec Sa SPD priority set to 1564639 | route_and_eroute: firewall_notified: true | running updown command "ipsec _updown" for verb prepare | command executing prepare-host | executing prepare-host: PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='block#192.1.2.0/24' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' 
PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16424' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.0/24' PLUTO_PEER_CLIENT_NET='192.1.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+REJECT+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0 | popen cmd is 1014 chars long | cmd( 0):PLUTO_VERB='prepare-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='block#192.1.2.0/: | cmd( 80):24' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' P: | cmd( 160):LUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='1: | cmd( 240):92.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PR: | cmd( 320):OTOCOL='0' PLUTO_SA_REQID='16424' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUT: | cmd( 400):O_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.0/24' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROT: | cmd( 560):OCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLI: | cmd( 640):CY='AUTH_NEVER+GROUPINSTANCE+REJECT+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANC: | cmd( 720):E' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEE: | cmd( 800):R_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0: | cmd( 880):' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' 
VTI_ROUTING='no' VTI: | cmd( 960):_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | running updown command "ipsec _updown" for verb route | command executing route-host | executing route-host: PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='block#192.1.2.0/24' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16424' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.0/24' PLUTO_PEER_CLIENT_NET='192.1.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+REJECT+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 i | popen cmd is 1012 chars long | cmd( 0):PLUTO_VERB='route-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='block#192.1.2.0/24: | cmd( 80):' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLU: | cmd( 160):TO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192: | cmd( 240):.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROT: | cmd( 320):OCOL='0' PLUTO_SA_REQID='16424' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_: | cmd( 400):PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.0/24' PLUTO_PEER_CLIENT_NET='192.1.2: | cmd( 480):.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOC: | cmd( 560):OL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY: | cmd( 
640):='AUTH_NEVER+GROUPINSTANCE+REJECT+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_INSTANCE': | cmd( 720): PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_: | cmd( 800):DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' : | cmd( 880):PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_S: | cmd( 960):HARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: | suspend processing: connection "block#192.1.2.0/24" 0.0.0.0 (in route_group() at foodgroups.c:439) | start processing: connection "block" (in route_group() at foodgroups.c:439) | stop processing: connection "block" (in whack_route_connection() at rcv_whack.c:116) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.994 milliseconds in whack | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00373 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned nothing left to do (all child processes are busy) | spent 0.00207 milliseconds in signal handler PLUTO_SIGCHLD | processing signal PLUTO_SIGCHLD | waitpid returned pid 17754 (exited with status 0) | reaped addconn helper child (status 0) | waitpid returned ECHILD (no child processes left) | spent 0.0213 milliseconds in signal handler PLUTO_SIGCHLD | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_STATE_... in show_traffic_status (sort_states) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.204 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 0.0651 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) | FOR_EACH_CONNECTION_... 
in show_connections_status | FOR_EACH_CONNECTION_... in show_connections_status | FOR_EACH_STATE_... in show_states_status (sort_states) | close_any(fd@16) (in whack_process() at rcv_whack.c:700) | spent 1.07 milliseconds in whack | accept(whackctlfd, (struct sockaddr *)&whackaddr, &whackaddrlen) -> fd@16 (in whack_handle() at rcv_whack.c:722) shutting down | processing: RESET whack log_fd (was fd@16) (in exit_pluto() at plutomain.c:1825) | certs and keys locked by 'free_preshared_secrets' forgetting secrets | certs and keys unlocked by 'free_preshared_secrets' | start processing: connection "block#192.1.2.0/24" 0.0.0.0 (in delete_connection() at connections.c:189) "block#192.1.2.0/24" 0.0.0.0: deleting connection "block#192.1.2.0/24" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'block#192.1.2.0/24' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "block#192.1.2.0/24" is 0x17dfdf | priority calculation of connection "block#192.1.2.0/24" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn block mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn private mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn private-or-clear mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn block#192.1.2.0/24 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "block#192.1.2.0/24" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='block#192.1.2.0/24' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16424' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.0/24' PLUTO_PEER_CLIENT_NET='192.1.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+REJECT+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' 
PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT | popen cmd is 1016 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='block#192.1.2.0/: | cmd( 80):24' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' P: | cmd( 160):LUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='1: | cmd( 240):92.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PR: | cmd( 320):OTOCOL='0' PLUTO_SA_REQID='16424' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUT: | cmd( 400):O_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.0/24' PLUTO_PEER_CLIENT_NET='192.1: | cmd( 480):.2.0' PLUTO_PEER_CLIENT_MASK='255.255.255.0' PLUTO_PEER_PORT='0' PLUTO_PEER_PROT: | cmd( 560):OCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLI: | cmd( 640):CY='AUTH_NEVER+GROUPINSTANCE+REJECT+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_A: | cmd( 720):WAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_P: | cmd( 800):EER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER=: | cmd( 880):'0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' V: | cmd( 960):TI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "block#192.1.2.0/24" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "block#192.1.2.0/24" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "block#192.1.2.0/24" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "block#192.1.2.0/24" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "block#192.1.2.0/24" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'block#192.1.2.0/24' wasn't on the list | stop processing: connection "block#192.1.2.0/24" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "block" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | flush revival: connection 'block' wasn't on the list | stop processing: connection "block" (in discard_connection() at connections.c:249) | start processing: connection "private" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | flush revival: connection 'private' wasn't on the list | stop processing: connection "private" (in discard_connection() at connections.c:249) | start processing: connection "private-or-clear" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... 
in foreach_state_by_connection_func_delete | flush revival: connection 'private-or-clear' wasn't on the list | stop processing: connection "private-or-clear" (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.2.253/32" 0.0.0.0: deleting connection "clear#192.1.2.253/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.2.253/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | priority calculation of connection "clear#192.1.2.253/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.253/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.253/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='192.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 S | popen cmd is 1022 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25: | cmd( 80):3/32' 
PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16420' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.253/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.2.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_G: | cmd( 720):OING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' P: | cmd( 800):LUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_S: | cmd( 880):ERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING=: | cmd( 960):'no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'clear#192.1.2.253/32' wasn't on the list | stop processing: connection "clear#192.1.2.253/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.3.253/32" 0.0.0.0: deleting connection "clear#192.1.3.253/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.3.253/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | priority calculation of connection "clear#192.1.3.253/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.253/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.253/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.253/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='192.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 S | popen cmd is 1022 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):3/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' 
PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16416' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.253/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.3.253' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_G: | cmd( 720):OING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' P: | cmd( 800):LUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_S: | cmd( 880):ERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING=: | cmd( 960):'no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.253/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'clear#192.1.3.253/32' wasn't on the list | stop processing: connection "clear#192.1.3.253/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.3.254/32" 0.0.0.0: deleting connection "clear#192.1.3.254/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... 
in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.3.254/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | priority calculation of connection "clear#192.1.3.254/32" is 0x17dfdf | FOR_EACH_CONNECTION_... in route_owner | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.3.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.3.254/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.254/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='192.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' 
VTI_SHARED='no' SPI_IN=0x0 S | popen cmd is 1022 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.3.25: | cmd( 80):4/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16412' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.3.254/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.3.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_G: | cmd( 720):OING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' P: | cmd( 800):LUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_S: | cmd( 880):ERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING=: | cmd( 960):'no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
"clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.3.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
| flush revival: connection 'clear#192.1.3.254/32' wasn't on the list | stop processing: connection "clear#192.1.3.254/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in delete_connection() at connections.c:189) "clear#192.1.2.254/32" 0.0.0.0: deleting connection "clear#192.1.2.254/32" 0.0.0.0 instance with peer 0.0.0.0 {isakmp=#0/ipsec=#0} | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | shunt_eroute() called for connection 'clear#192.1.2.254/32' to 'delete' for rt_kind 'unrouted' using protoports 0--0->-0 | netlink_shunt_eroute for proto 0, and source port 0 dest port 0 | priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf | priority calculation of connection "clear#192.1.2.254/32" is 0x17dfdf | FOR_EACH_CONNECTION_... 
in route_owner | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear mark 0/00000000, 0/00000000 | conn clear#192.1.2.254/32 mark 0/00000000, 0/00000000 vs | conn clear-or-private mark 0/00000000, 0/00000000 | route owner of "clear#192.1.2.254/32" unrouted: NULL | running updown command "ipsec _updown" for verb unroute | command executing unroute-host | executing unroute-host: PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.254/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209' PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET='192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' PLUTO_PEER='0.0.0.0' PLUTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='192.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_PEER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_CONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_GOING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' PLUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_SERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING='no' VTI_SHARED='no' SPI_IN=0x0 S | popen cmd is 1022 chars long | cmd( 0):PLUTO_VERB='unroute-host' PLUTO_VERSION='2.0' PLUTO_CONNECTION='clear#192.1.2.25: | cmd( 80):4/32' PLUTO_INTERFACE='eth0' PLUTO_NEXT_HOP='192.1.3.254' PLUTO_ME='192.1.3.209': | cmd( 160): PLUTO_MY_ID='192.1.3.209' PLUTO_MY_CLIENT='192.1.3.209/32' PLUTO_MY_CLIENT_NET=: | cmd( 240):'192.1.3.209' PLUTO_MY_CLIENT_MASK='255.255.255.255' PLUTO_MY_PORT='0' PLUTO_MY_: | cmd( 320):PROTOCOL='0' PLUTO_SA_REQID='16408' PLUTO_SA_TYPE='none' 
PLUTO_PEER='0.0.0.0' PL: | cmd( 400):UTO_PEER_ID='(none)' PLUTO_PEER_CLIENT='192.1.2.254/32' PLUTO_PEER_CLIENT_NET='1: | cmd( 480):92.1.2.254' PLUTO_PEER_CLIENT_MASK='255.255.255.255' PLUTO_PEER_PORT='0' PLUTO_P: | cmd( 560):EER_PROTOCOL='0' PLUTO_PEER_CA='' PLUTO_STACK='netkey' PLUTO_ADDTIME='0' PLUTO_C: | cmd( 640):ONN_POLICY='AUTH_NEVER+GROUPINSTANCE+PASS+NEVER_NEGOTIATE' PLUTO_CONN_KIND='CK_G: | cmd( 720):OING_AWAY' PLUTO_CONN_ADDRFAMILY='ipv4' XAUTH_FAILED=0 PLUTO_IS_PEER_CISCO='0' P: | cmd( 800):LUTO_PEER_DNS_INFO='' PLUTO_PEER_DOMAIN_INFO='' PLUTO_PEER_BANNER='' PLUTO_CFG_S: | cmd( 880):ERVER='0' PLUTO_CFG_CLIENT='0' PLUTO_NM_CONFIGURED='0' VTI_IFACE='' VTI_ROUTING=: | cmd( 960):'no' VTI_SHARED='no' SPI_IN=0x0 SPI_OUT=0x0 ipsec _updown 2>&1: "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. 
"clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. "clear#192.1.2.254/32" 0.0.0.0: unroute-host output: Error: Peer netns reference is invalid. | flush revival: connection 'clear#192.1.2.254/32' wasn't on the list | stop processing: connection "clear#192.1.2.254/32" 0.0.0.0 (in discard_connection() at connections.c:249) | start processing: connection "clear" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... in foreach_connection_by_alias | FOR_EACH_CONNECTION_... in conn_by_name | FOR_EACH_CONNECTION_... 
in foreach_connection_by_alias | flush revival: connection 'clear' wasn't on the list | stop processing: connection "clear" (in discard_connection() at connections.c:249) | start processing: connection "clear-or-private" (in delete_connection() at connections.c:189) | Deleting states for connection - including all other IPsec SA's of this IKE SA | pass 0 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | pass 1 | FOR_EACH_STATE_... in foreach_state_by_connection_func_delete | free hp@0x56468284fef8 | flush revival: connection 'clear-or-private' wasn't on the list | stop processing: connection "clear-or-private" (in discard_connection() at connections.c:249) | crl fetch request list locked by 'free_crl_fetch' | crl fetch request list unlocked by 'free_crl_fetch' shutting down interface lo/lo 127.0.0.1:4500 shutting down interface lo/lo 127.0.0.1:500 shutting down interface eth0/eth0 192.1.3.209:4500 shutting down interface eth0/eth0 192.1.3.209:500 | FOR_EACH_STATE_... in delete_states_dead_interfaces | libevent_free: release ptr-libevent@0x564682843088 | free_event_entry: release EVENT_NULL-pe@0x56468284ee28 | libevent_free: release ptr-libevent@0x564682805258 | free_event_entry: release EVENT_NULL-pe@0x56468284eed8 | libevent_free: release ptr-libevent@0x564682806488 | free_event_entry: release EVENT_NULL-pe@0x56468284ef88 | libevent_free: release ptr-libevent@0x5646827ffe88 | free_event_entry: release EVENT_NULL-pe@0x56468284f038 | FOR_EACH_UNORIENTED_CONNECTION_... 
in check_orientations | libevent_free: release ptr-libevent@0x564682843138 | free_event_entry: release EVENT_NULL-pe@0x564682837288 | libevent_free: release ptr-libevent@0x5646828051a8 | free_event_entry: release EVENT_NULL-pe@0x564682836de8 | libevent_free: release ptr-libevent@0x56468282f858 | free_event_entry: release EVENT_NULL-pe@0x564682831078 | global timer EVENT_REINIT_SECRET uninitialized | global timer EVENT_SHUNT_SCAN uninitialized | global timer EVENT_PENDING_DDNS uninitialized | global timer EVENT_PENDING_PHASE2 uninitialized | global timer EVENT_CHECK_CRLS uninitialized | global timer EVENT_REVIVE_CONNS uninitialized | global timer EVENT_FREE_ROOT_CERTS uninitialized | global timer EVENT_RESET_LOG_RATE_LIMIT uninitialized | global timer EVENT_NAT_T_KEEPALIVE uninitialized | libevent_free: release ptr-libevent@0x564682792818 | signal event handler PLUTO_SIGCHLD uninstalled | libevent_free: release ptr-libevent@0x56468284e6d8 | signal event handler PLUTO_SIGTERM uninstalled | libevent_free: release ptr-libevent@0x56468284e7e8 | signal event handler PLUTO_SIGHUP uninstalled | libevent_free: release ptr-libevent@0x56468284ea28 | signal event handler PLUTO_SIGSYS uninstalled | releasing event base | libevent_free: release ptr-libevent@0x56468284e8f8 | libevent_free: release ptr-libevent@0x5646828316a8 | libevent_free: release ptr-libevent@0x564682831658 | libevent_free: release ptr-libevent@0x564682800288 | libevent_free: release ptr-libevent@0x564682831618 | libevent_free: release ptr-libevent@0x56468284e3e8 | libevent_free: release ptr-libevent@0x56468284e658 | libevent_free: release ptr-libevent@0x564682831858 | libevent_free: release ptr-libevent@0x564682836a78 | libevent_free: release ptr-libevent@0x564682837398 | libevent_free: release ptr-libevent@0x56468284f0a8 | libevent_free: release ptr-libevent@0x56468284eff8 | libevent_free: release ptr-libevent@0x56468284ef48 | libevent_free: release ptr-libevent@0x56468284ee98 | libevent_free: release 
ptr-libevent@0x564682791ee8 | libevent_free: release ptr-libevent@0x56468284e7a8 | libevent_free: release ptr-libevent@0x56468284e698 | libevent_free: release ptr-libevent@0x56468284e558 | libevent_free: release ptr-libevent@0x56468284e8b8 | libevent_free: release ptr-libevent@0x56468284e428 | libevent_free: release ptr-libevent@0x564682801408 | libevent_free: release ptr-libevent@0x564682801388 | libevent_free: release ptr-libevent@0x564682792258 | releasing global libevent data | libevent_free: release ptr-libevent@0x5646827ffab8 | libevent_free: release ptr-libevent@0x564682801508 | libevent_free: release ptr-libevent@0x564682801488 leak: group instance name, item size: 19 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: group instance name, item size: 21 leak: cloned from groupname, item size: 6 leak: policy group path, item size: 50 leak detective found 11 leaks, total size 183