
Cybersource Java SimpleOrderAPI SDK 6.2.10

mahendya1002 edited this page Jul 28, 2020 · 3 revisions

Release notes


  1. Connection Pooling
    • Reduced latency in subsequent requests.
    • Reduced CPU usage and round-trips because of fewer new connections and TLS handshakes.
    • Reduced network congestion (fewer TCP connections).
  2. Retry Logic
    • Handles Transient Failures – Caused due to network failure or service not being available.
    • Supports retries in milliseconds.
  3. Monitoring
    • Captures the time a transaction spends in transit.
    • Computes the time taken by the SDK.

Steps to upgrade to 6.2.10 version


To upgrade the cybersource-sdk-java.jar version in your application, follow the steps below.

  1. Download (or refresh your cybersource-sdk-java.jar dependency to) the latest version of the SDK, which is cybersource-sdk-java-6.2.10.jar.

  2. Make sure that you have all the required jars. Alternatively, you can download the zip archive from the releases section of the Cybersource GitHub repository. httpcore-4.4.13.jar and httpclient-4.5.11.jar are required for the connection pooling feature.

  3. To enable connection pooling, add the properties below to your cybs.properties file; otherwise a configuration exception will be thrown. Note: the values below are the sample values that we used during testing:

      useHttpClientWithConnectionPool=true
      maxConnections=1000
      defaultMaxConnectionsPerRoute=1000
      maxConnectionsPerRoute=1000
      connectionRequestTimeoutMs=1000
      connectionTimeoutMs=5000
      socketTimeoutMs=130000
      evictThreadSleepTimeMs=3000
      maxKeepAliveTimeMs=300000

    Note: The properties ending in Ms are specified in milliseconds.

  4. Also change the retryInterval property, which is now specified in milliseconds (prior to the 6.2.10 release it was in seconds). For example, if retryInterval=1, change it to:

    retryInterval=1000
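After changing the properties, it can be useful to sanity-check them before starting the application. The following is a small illustrative sketch (a hypothetical helper, not part of the SDK) that verifies the pooling keys from the steps above are present and that retryInterval looks like a millisecond value rather than a pre-6.2.10 seconds value:

```java
import java.util.Properties;

// Hypothetical helper, NOT part of the Cybersource SDK: checks that the
// pooling-related keys introduced with 6.2.10 are present in cybs.properties.
class CybsPropertiesCheck {

    static final String[] REQUIRED_POOLING_KEYS = {
        "maxConnections", "defaultMaxConnectionsPerRoute", "maxConnectionsPerRoute",
        "connectionRequestTimeoutMs", "connectionTimeoutMs", "socketTimeoutMs",
        "evictThreadSleepTimeMs", "maxKeepAliveTimeMs"
    };

    /** Returns true if pooling is enabled and all required keys are set. */
    static boolean poolingConfigComplete(Properties props) {
        if (!Boolean.parseBoolean(props.getProperty("useHttpClientWithConnectionPool"))) {
            return false; // pooling not enabled, nothing to validate
        }
        for (String key : REQUIRED_POOLING_KEYS) {
            if (props.getProperty(key) == null) {
                return false; // a missing key leads to a config exception in the SDK
            }
        }
        return true;
    }

    /** A retryInterval below 100 almost certainly means old "seconds" units. */
    static boolean retryIntervalLooksLikeMillis(Properties props) {
        long v = Long.parseLong(props.getProperty("retryInterval", "1000"));
        return v >= 100;
    }
}
```

The 100-millisecond threshold is an illustrative assumption; adjust it to whatever minimum retry delay makes sense for your application.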

PoolingHttpClient Introduction


  • Apache PoolingHttpClientConnectionManager manages a pool of client connections and is able to service connection requests from multiple execution threads.
  • Connections are pooled on a per route basis.
  • A request for a route for which the manager already has a persistent connection available in the pool will be serviced by leasing a connection from the pool rather than creating a brand new connection.
  • PoolingHttpClientConnectionManager maintains a maximum limit of connections on a per-route basis and in total.
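As an illustration of what the SDK configures under the hood, here is a hedged sketch using the Apache HttpClient 4.5.x APIs directly. The SDK wires this up for you from cybs.properties, so applications using the SDK do not write this code themselves; the values shown mirror the sample cybs.properties above:

```java
import java.util.concurrent.TimeUnit;

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Illustrative sketch of the Apache HttpClient pooling APIs the SDK builds on.
class PoolingSketch {
    static CloseableHttpClient newPooledClient() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(1000);           // maxConnections
        cm.setDefaultMaxPerRoute(1000); // defaultMaxConnectionsPerRoute

        RequestConfig rc = RequestConfig.custom()
                .setConnectionRequestTimeout(1000) // connectionRequestTimeoutMs
                .setConnectTimeout(5000)           // connectionTimeoutMs
                .setSocketTimeout(130000)          // socketTimeoutMs
                .build();

        return HttpClients.custom()
                .setConnectionManager(cm)
                .setDefaultRequestConfig(rc)
                .evictExpiredConnections()
                .evictIdleConnections(300000, TimeUnit.MILLISECONDS) // maxKeepAliveTimeMs
                .build();
    }
}
```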

Configuration

Below are the properties you should understand in order to use the pooling HTTP client connection properly.

  1. maxConnections: Specifies the maximum number of concurrent, active HTTP connections allowed by the resource instance to be opened with the target service. There is no default value. For applications that create many long-lived connections, increase the value of this parameter.

  2. defaultMaxConnectionsPerRoute: Specifies the default maximum number of connections per (any) route.

  3. maxConnectionsPerRoute: Specifies the maximum number of concurrent, active HTTP connections allowed by the resource instance to the same host or route. In the SDK, all three of the above configs serve the same function, and the same value can be given to each, as there is only one route. Note: This number cannot be greater than maxConnections, and every connection created here also counts toward maxConnections.

  4. connectionRequestTimeoutMs: Time in milliseconds to wait for a connection from the pool. If it times out, an error is thrown: Timeout waiting for connection from pool.

  5. connectionTimeoutMs: Specifies the number of milliseconds to wait while a connection is being established.

  6. socketTimeoutMs: Specifies the time waiting for data – after establishing the connection; maximum time of inactivity between two data packets.

  7. evictThreadSleepTimeMs: Specifies the time duration in milliseconds between "sweeps" by the "idle connection" evictor thread. This thread checks whether any idle/expired/stale connections exist in the pool and evicts them.

  8. maxKeepAliveTimeMs: Specifies the time duration in milliseconds that a connection can be idle before it is evicted from the pool.

  9. staleConnectionCheckEnabled: It determines whether the stale connection check is to be used. Disabling the stale connection check can result in slight performance improvement at the risk of getting an I/O error, when executing a request over a connection that has been closed at the server side. By default it is set to true, which means it is enabled.

  10. validateAfterInactivityMs: By default it is set to 0. Set this value if you decide to disable staleConnectionCheckEnabled for slightly better performance. We recommend a value of 2000ms.

  11. allowRetry: Enables the retry mechanism. By default it is enabled when useHttpClient or useHttpClientWithConnectionPool is set to true.

  12. numberOfRetries: This parameter value should be set between 0 and 5. By default the value for numberOfRetries will be 3.

  13. retryInterval: Specifies the delay between retry attempts. By default, this is set to 1000 milliseconds.

  14. enabledShutdownHook: The connection manager, HTTP client, and idle connection cleaner thread should be closed when the application shuts down, whether abruptly or gracefully. If enabledShutdownHook is true, the hook is registered via the JVM's Runtime.addShutdownHook method. Shutdown hooks are a special construct that allows developers to plug in a piece of code to be executed when the JVM is shutting down, which comes in handy when special clean-up operations are needed at VM shutdown.

       private void addShutdownHook() {
           Runtime.getRuntime().addShutdownHook(this.createShutdownHookThread());
       }
    

    The createShutdownHookThread method calls a static shutdown API to close the connectionManager, httpClient, and IdleCleanerThread. By default this is enabled.
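The hook mechanism itself is plain JDK functionality. A minimal self-contained sketch, independent of the SDK (the class and method names here are illustrative, not SDK APIs):

```java
// Minimal JDK shutdown-hook sketch: register a hook thread that would close
// pooled resources when the JVM exits. Independent of the Cybersource SDK.
class ShutdownHookSketch {
    private final Thread hook;

    ShutdownHookSketch(Runnable cleanup) {
        this.hook = new Thread(cleanup, "cybs-shutdown-hook");
        Runtime.getRuntime().addShutdownHook(this.hook);
    }

    /** Deregister the hook, e.g. if the resources were already closed explicitly. */
    boolean cancel() {
        return Runtime.getRuntime().removeShutdownHook(this.hook);
    }
}
```

removeShutdownHook returns true only if the thread was still registered, which makes double-cleanup easy to guard against.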

Factors to be considered to configure PoolingHttpClientConnection


These values depend on multiple factors such as the transaction rate on a single node, response time, CPU, max and min heap, OS, Java version, etc. The connection pooling configuration should work in most cases; however, every application should thoroughly analyze its own workload before using the pooling HTTP connection flow, in order to configure optimized values for best client performance.

Max # of connections

If your application's expected total TPS is 500 and the number of client nodes deployed is 5, then with equal traffic distribution each node serves 100 TPS.

  • If the response time is ~1 sec: max connections could be 100 plus some buffer.
  • If the response time is ~2 sec: max connections could be 200 plus some buffer.

We need to add some extra buffer for scenarios such as:

  • Client nodes may go down.
  • High response time due to increased load.
  • Other external connection calls, apart from CyberSource, happening at the same time.
  • The gap between the peak-time transaction rate and the average transaction rate.
  • CPU, OS, JVM, free space limit.
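The sizing rule above is essentially Little's law: concurrent connections ≈ per-node TPS × response time, plus a buffer. A small sketch of the arithmetic, where the 25% buffer used in the test is an illustrative assumption, not an SDK recommendation:

```java
// Sketch of the pool-sizing rule: concurrent connections ~= perNodeTps * responseTimeSec,
// plus a buffer fraction for node failures, load spikes, other external calls, etc.
class PoolSizing {
    static int maxConnections(double totalTps, int nodes,
                              double responseTimeSec, double bufferFraction) {
        double perNodeTps = totalTps / nodes;            // e.g. 500 / 5 = 100 TPS per node
        double concurrent = perNodeTps * responseTimeSec; // Little's law
        return (int) Math.ceil(concurrent * (1.0 + bufferFraction));
    }
}
```

For the example in the text, 500 total TPS over 5 nodes at ~1 sec response time gives 100 concurrent connections per node before the buffer.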

Stale connection check

  • HTTP specification permits both the client and the server to terminate a persistent (keep-alive) connection at any time without notice to the counterpart, thus rendering the connection invalid or stale.

  • By default HttpClient performs a check, just prior to executing a request, to determine if connection is valid or not.

  • The stale connection check also runs in the background, and the cost of this operation depends on the number of connections and the JRE used. Since it is a background process, it does not affect in-flight transactions.

  • Below is the average time taken to kill a given number of connections.

        # of Connections    Average Time Taken (ms)
        20                  1-3
        100                 5-7
        166                 12
        323                 14
        474                 24
        615                 39
  • Based on the above data, it is clear that approximately 12-18ms will be required to kill around 200 idle or stale connections.

  • We have created a daemon thread “IdleConnectionCleanerThread” which runs in background when pooling connection manager is initialized.

  • Every evictThreadSleepTimeMs, the cleaner thread evicts all stale/expired connections, as well as connections that have been idle for more than maxKeepAliveTimeMs (300000ms in the sample configuration).

  • The evictThreadSleepTimeMs value should be kept small; the sample value configured in the properties file is 3000ms.

  • If many stale or expired connections exist in the connection pool, a connection request timeout exception may occur when many requests pile up at the same time.
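The evictor described above can be modeled with plain JDK types: a daemon thread that wakes every evictThreadSleepTimeMs and drops pool entries idle longer than maxKeepAliveTimeMs. This is a simplified model of the idea, not the SDK's actual IdleConnectionCleanerThread:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of an idle-connection evictor: on each sweep, entries idle
// longer than maxKeepAliveTimeMs are removed. NOT the SDK's actual thread.
class IdleEvictorModel {
    // connection id -> last-used timestamp (millis)
    final Map<String, Long> lastUsed = new ConcurrentHashMap<>();
    final long maxKeepAliveTimeMs;

    IdleEvictorModel(long maxKeepAliveTimeMs) {
        this.maxKeepAliveTimeMs = maxKeepAliveTimeMs;
    }

    void touch(String connId, long nowMs) {
        lastUsed.put(connId, nowMs);
    }

    /** One sweep: evict idle entries, return how many were removed. */
    int sweep(long nowMs) {
        int evicted = 0;
        for (Iterator<Map.Entry<String, Long>> it = lastUsed.entrySet().iterator(); it.hasNext();) {
            if (nowMs - it.next().getValue() > maxKeepAliveTimeMs) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    /** Background sweeper, analogous to the evictThreadSleepTimeMs cadence. */
    Thread startDaemon(long evictThreadSleepTimeMs) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                sweep(System.currentTimeMillis());
                try { Thread.sleep(evictThreadSleepTimeMs); }
                catch (InterruptedException e) { return; }
            }
        }, "idle-evictor-model");
        t.setDaemon(true); // daemon: will not keep the JVM alive
        t.start();
        return t;
    }
}
```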

Test environment Perf Results


These values are based on our test environment result considering below settings and other factors,

  • OS => Red Hat Enterprise Linux 7.7
  • Single node
  • java version => 1.8
  • Max heap Set => 1GB
  • Min and Max Heap Usage Seen => 183MB, 772MB respectively
  • CPU provided to the app => 4CPU
  • TPS sent => 200
  • We had randomness in Payload and MIDs
  • Response time => 1253ms

Comparison between staleConnectionCheckEnabled and validateAfterInactivityMs performance.

staleConnectionCheckEnabled | validateAfterInactivityMs | TP90-RespTime | TP99-RespTime | CPU Usage | MaxHeap Usage | MinHeap Usage
--------------------------- | ------------------------- | ------------- | ------------- | --------- | ------------- | -------------
FALSE                       | 2000                      | 1231          | 1253          | 14%       | 755mb         | 140mb
FALSE                       | 100                       | 1231          | 1253          | 14%       | 790mb         | 182mb
FALSE                       | 0                         | 1232          | 1254          | 16%       | 764mb         | 135mb

Note: Response time here is not Cybersource response time. We are using a simulator with a delay of ~1000ms.

Test setup on the Client Side

  • We ran 200 TPS of XML payload using JMeter, with a 1000-connection pool on the SDK side, for a 1-hour duration.
  • We set our SDK dummy tenant service to a max heap of 1GB.
  • The host VM had 8 CPUs, but we limited the available CPUs to 4.
  • The dummy service is a very simple Vert.x-based REST service, which does nothing except invoke the Client.runTransaction() method.

Test setup on the Simulator Side

  • A simulator was used; its delays were set to 1000ms.
  • Every 10 seconds, the simulator would start closing all connections for a duration of 1 second.
  • We have ample logging on the client and simulator side and that helped us capture where exactly time was spent.

Note


  • Every time there is a change in TPS/physical machine/external connections, we may need to revisit the configuration.
  • You need to add proper configuration to use HTTP client connection pooling for better performance.
  • Connection pooling properties are enabled and used only if useHttpClientWithConnectionPool=true

Disclaimer


The above configuration settings are tested and recommended based on our test environment results. Please consider all the factors discussed above to set the most optimal connection pooling properties for best performance.