NYCPerformance
As of mid-November 2012 (commit f5685d20d4124310b4cb3fb7f1934a92e7606bb7)
These measurements were made on a New York City graph built from 11 GTFS feeds and OSM data covering the entire transit-served area. Using our profiling tools, we queried an OTP server with 10,500 requests generated by combining a range of routing parameters (time of day, maximum walk distance, transportation modes) with endpoints chosen randomly but located within 2 km of a transit stop. The times reported represent the full round-trip time of a request to the REST API.
Keep in mind that these results are for a very large metropolitan area, were gathered before the "long distance mode" was implemented, and were measured on Amazon EC2 instances rather than on dedicated servers.
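The request mix described above can be approximated with a small load-generation script that fires randomized plan requests at the REST API and records wall-clock round-trip times. The sketch below is illustrative only: the endpoint path, the query parameter names, and the candidate coordinates are assumptions rather than the exact configuration used by the original profiling tools.

```python
# Minimal benchmark sketch (not the original profiling tool): issues randomized
# plan requests against an OTP REST endpoint and records round-trip times.
# The endpoint path and parameter names are assumptions based on the REST API
# of that era; adjust them to match your deployment.
import csv
import random
import time
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8080/opentripplanner-api-webapp/ws/plan"  # assumed path

# Hypothetical (lat, lon) pairs, each chosen to lie within 2 km of a transit stop.
ENDPOINTS = [(40.7484, -73.9857), (40.6892, -74.0445), (40.7527, -73.9772)]

MODES = ["TRANSIT,WALK", "WALK", "BICYCLE"]
MAX_WALK = [500, 1000, 2000]            # metres
TIMES = ["08:00am", "12:00pm", "06:00pm"]

def one_request():
    """Send one randomized plan request and return its round-trip time in seconds."""
    origin, dest = random.sample(ENDPOINTS, 2)
    params = {
        "fromPlace": f"{origin[0]},{origin[1]}",
        "toPlace": f"{dest[0]},{dest[1]}",
        "date": "11-15-2012",
        "time": random.choice(TIMES),
        "mode": random.choice(MODES),
        "maxWalkDistance": random.choice(MAX_WALK),
    }
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()                      # include the full response transfer in the timing
    return time.monotonic() - start

with open("response_times.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["seconds"])
    for _ in range(10500):               # same request count as the test described above
        writer.writerow([f"{one_request():.4f}"])
```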
The server was configured to use the retrying path service backed by the A* algorithm, and the Java heap was set to 6 GiB. An identical run was performed on each of two EC2 instance types for comparison:
m1.large: 7.5 GiB memory and 2 cores (in this case, 2 non-hyperthreaded cores of a shared 4-core E5507). Amazon rates this instance at 2.5 EC2 units per core.
m2.xlarge: 17 GiB memory and 2 cores (in this case, 2 hyperthreaded cores of a shared 4-core X5550). The Java heap was again intentionally limited to 6 GiB, so the main difference is the processor, which is rated at 3.25 EC2 units per core.

In the plots below, we have superimposed the m2.xlarge response times on the m1.large response times for comparison.
Note that this plot uses a logarithmic scale on both axes. Response times for the m2.xlarge instance are roughly half those for the m1.large instance across all path lengths, despite an EC2 rating that is only 30% higher.
The same holds true for bicycle trips. For any path length, responses are returned roughly twice as fast from the m2.xlarge instance.
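The per-path-length comparison behind these plots can be reproduced from the raw timing data by binning requests by path length and comparing the two instances bin by bin. In the sketch below, the input file names and the path_length_m / seconds column names are assumptions, not the actual output format of the profiling tools.

```python
# Sketch of the per-path-length comparison described above. The input files and
# column names (path_length_m, seconds) are assumptions, not the original
# profiler's output format.
import csv
from collections import defaultdict
from statistics import median

def load(path, bin_size_m=1000):
    """Group response times into path-length bins of bin_size_m metres."""
    bins = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            bin_index = int(float(row["path_length_m"]) // bin_size_m)
            bins[bin_index].append(float(row["seconds"]))
    return bins

m1 = load("m1_large.csv")     # hypothetical results from the m1.large run
m2 = load("m2_xlarge.csv")    # hypothetical results from the m2.xlarge run

# Compare median response time in each path-length bin present in both runs.
for b in sorted(m1.keys() & m2.keys()):
    ratio = median(m1[b]) / median(m2[b])
    print(f"{b}-{b + 1} km: m1.large is {ratio:.2f}x slower than m2.xlarge")
```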
For a public-facing server or one behind a load balancer, you may want EC2 instances with per-core computing capacity similar to the test cases above, but with more cores to handle more simultaneous requests. In the 2.5 EC2 units per core category you might choose a c1.xlarge instance with 8 cores; in the 3.25 EC2 units per core category, an m3.xlarge instance with 4 cores.
Unless you are intentionally working with legacy versions of OpenTripPlanner, please consult the current documentation at readthedocs.