<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<TITLE>
Understanding FastCGI Application Performance
</TITLE>
<STYLE TYPE="text/css">
body {
background-color: #FFFFFF;
color: #000000;
}
:link { color: #cc0000 }
:visited { color: #555555 }
:active { color: #000011 }
div.c3 {margin-left: 2em}
h5.c2 {text-align: center}
div.c1 {text-align: center}
</STYLE>
</HEAD>
<BODY>
<DIV CLASS="c1">
<A HREF="http://fastcgi-archives.github.io"><IMG BORDER="0" SRC="../images/fcgi-hd.gif" ALT="[[FastCGI]]"></A>
</DIV>
<BR CLEAR="all">
<DIV CLASS="c1">
<H3>
Understanding FastCGI Application Performance
</H3>
</DIV>
<!--Copyright (c) 1996 Open Market, Inc. -->
<!--See the file "LICENSE" for information on usage and redistribution-->
<!--of this file, and for a DISCLAIMER OF ALL WARRANTIES. -->
<DIV CLASS="c1">
Mark R. Brown<BR>
Open Market, Inc.<BR>
<P>
10 June 1996<BR>
</P>
</DIV>
<P>
</P>
<H5 CLASS="c2">
Copyright © 1996 Open Market, Inc. 245 First Street, Cambridge, MA 02142 U.S.A.<BR>
Tel: 617-621-9500 Fax: 617-621-1703 URL: <A HREF=
"http://www.openmarket.com/">http://www.openmarket.com/</A><BR>
$Id: fcgi-perf.htm,v 1.4 2002/02/25 00:42:59 robs Exp $<BR>
</H5>
<HR>
<UL TYPE="square">
<LI>
<A HREF="#S1">1. Introduction</A>
</LI>
<LI>
<A HREF="#S2">2. Performance Basics</A>
</LI>
<LI>
<A HREF="#S3">3. Caching</A>
</LI>
<LI>
<A HREF="#S4">4. Database Access</A>
</LI>
<LI>
<A HREF="#S5">5. A Performance Test</A>
<UL TYPE="square">
<LI>
<A HREF="#S5.1">5.1 Application Scenario</A>
</LI>
<LI>
<A HREF="#S5.2">5.2 Application Design</A>
</LI>
<LI>
<A HREF="#S5.3">5.3 Test Conditions</A>
</LI>
<LI>
<A HREF="#S5.4">5.4 Test Results and Discussion</A>
</LI>
</UL>
</LI>
<LI>
<A HREF="#S6">6. Multi-threaded APIs</A>
</LI>
<LI>
<A HREF="#S7">7. Conclusion</A>
</LI>
</UL>
<P>
</P>
<HR>
<H3>
<A NAME="S1">1. Introduction</A>
</H3>
<P>
Just how fast is FastCGI? How does the performance of a FastCGI application compare with the performance of
the same application implemented using a Web server API?
</P>
<P>
Of course, the answer is that it depends upon the application. A more complete answer is that FastCGI often
wins by a significant margin, and seldom loses by very much.
</P>
<P>
Papers on computer system performance can be laden with complex graphs showing how this varies with that.
Seldom do the graphs shed much light on <I>why</I> one system is faster than another. Advertising copy is
often even less informative. An ad from one large Web server vendor says that its server "executes web
applications up to five times faster than all other servers," but the ad gives little clue where the
number "five" came from.
</P>
<P>
This paper is meant to convey an understanding of the primary factors that influence the performance of Web
server applications and to show that architectural differences between FastCGI and server APIs often give an
"unfair" performance advantage to FastCGI applications. We run a test that shows a FastCGI
application running three times faster than the corresponding Web server API application. Under different
conditions this factor might be larger or smaller. We show you what you'd need to measure to figure that
out for the situation you face, rather than just saying "we're three times faster" and moving
on.
</P>
<P>
This paper makes no attempt to prove that FastCGI is better than Web server APIs for every application. Web
server APIs enable lightweight protocol extensions, such as Open Market's SecureLink extension, to be
added to Web servers, as well as allowing other forms of server customization. But APIs are not well matched
to mainstream applications such as personalized content or access to corporate databases, because of API
drawbacks including high complexity, low security, and limited scalability. FastCGI shines when used for the
vast majority of Web applications.
</P>
<P>
</P>
<H3>
<A NAME="S2">2. Performance Basics</A>
</H3>
<P>
Since this paper is about performance we need to be clear on what "performance" is.
</P>
<P>
The standard way to measure performance in a request-response system like the Web is to measure peak request
throughput subject to a response time constraint. For instance, a Web server application might be capable of
performing 20 requests per second while responding to 90% of the requests in less than 2 seconds.
</P>
<P>
Response time is a thorny thing to measure on the Web because client communications links to the Internet have
widely varying bandwidth. If the client is slow to read the server's response, response time at both the
client and the server will go up, and there's nothing the server can do about it. For the purposes of
making repeatable measurements the client should have a high-bandwidth communications link to the server.
</P>
<P>
[Footnote: When designing a Web server application that will be accessed over slow (e.g. 14.4 or even 28.8
kilobit/second modem) channels, pay attention to the simultaneous connections bottleneck. Some servers are
limited by design to only 100 or 200 simultaneous connections. If your application sends 50 kilobytes of data
to a typical client that can read 2 kilobytes per second, then a request takes 25 seconds to complete. If your
server is limited to 100 simultaneous connections, throughput is limited to just 4 requests per second.]
</P>
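<P>
If you like, you can reproduce the footnote's arithmetic with a few lines of C. The figures below are the
ones used in the footnote; the program simply computes the throughput ceiling imposed by the
simultaneous-connection limit:
</P>
<DIV CLASS="c3">
<PRE>
/* Back-of-the-envelope check of the connection-limit bottleneck described
 * in the footnote above.  Illustrative only; the numbers are the ones used
 * in the text. */
#include &lt;stdio.h>

int main(void)
{
    double response_kbytes   = 50.0;   /* data sent per request                  */
    double client_kbytes_sec = 2.0;    /* rate at which the client can read it   */
    double max_connections   = 100.0;  /* server's simultaneous-connection limit */

    double seconds_per_request = response_kbytes / client_kbytes_sec;    /* 25 s */
    double requests_per_second = max_connections / seconds_per_request;  /*  4/s */

    printf("Each request holds a connection for %.0f seconds.\n", seconds_per_request);
    printf("Throughput ceiling: %.0f requests per second.\n", requests_per_second);
    return 0;
}
</PRE>
</DIV>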
<P>
Response time is seldom an issue when load is light, but response times rise quickly as the system approaches
a bottleneck on some limited resource. The three resources that typical systems run out of are network I/O,
disk I/O, and processor time. If short response time is a goal, it is a good idea to stay at or below 50% load
on each of these resources. For instance, if your disk subsystem is capable of delivering 200 I/Os per second,
then try to run your application at 100 I/Os per second to avoid having the disk subsystem contribute to slow
response times. Through careful management it is possible to succeed in running closer to the edge, but
careful management is both difficult and expensive so few systems get it.
</P>
<P>
If a Web server application is local to the Web server machine, then its internal design has no impact on
network I/O. Application design can have a big impact on usage of disk I/O and processor time.
</P>
<P>
</P>
<H3>
<A NAME="S3">3. Caching</A>
</H3>
<P>
It is a rare Web server application that doesn't run fast when all the information it needs is available
in its memory. And if the application doesn't run fast under those conditions, the possible solutions are
evident: Tune the processor-hungry parts of the application, install a faster processor, or change the
application's functional specification so it doesn't need to do so much work.
</P>
<P>
The way to make information available in memory is by caching. A cache is an in-memory data structure that
contains information that's been read from its permanent home on disk. When the application needs
information, it consults the cache, and uses the information if it is there. Otherwise it reads the
information from disk and places a copy in the cache. If the cache is full, the application discards some old
information before adding the new. When the application needs to change cached information, it changes both
the cache entry and the information on disk. That way, if the application crashes, no information is lost; the
application just runs more slowly for a while after restarting, because the cache doesn't improve
performance when it is empty.
</P>
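<P>
To make the description concrete, here is a minimal sketch in C of the read-through, write-through cache just
described. The fixed-size table and the disk_read and disk_write routines are placeholders invented for the
sketch, standing in for the application's own permanent storage; a real cache would also want a smarter
replacement policy:
</P>
<DIV CLASS="c3">
<PRE>
/* Sketch of the write-through cache described above.  disk_read() and
 * disk_write() are placeholders for the application's own access to the
 * information's permanent home on disk. */
#include &lt;stddef.h>
#include &lt;string.h>

#define CACHE_SLOTS 256
#define VALUE_LEN   128

extern void disk_read(const char *key, char *value, size_t len);
extern void disk_write(const char *key, const char *value);

struct entry {
    char key[64];
    char value[VALUE_LEN];
    int  in_use;
};

static struct entry cache[CACHE_SLOTS];

static unsigned slot_for(const char *key)
{
    unsigned h = 0;
    while (*key)
        h = h * 31 + (unsigned char) *key++;
    return h % CACHE_SLOTS;
}

/* Read through the cache: use the copy in memory if it is there, otherwise
 * read from disk and keep a copy, discarding whatever held the slot. */
const char *cache_read(const char *key)
{
    struct entry *e = &amp;cache[slot_for(key)];
    if (!e->in_use || strcmp(e->key, key) != 0) {
        disk_read(key, e->value, VALUE_LEN);        /* cache miss */
        strncpy(e->key, key, sizeof e->key - 1);
        e->in_use = 1;
    }
    return e->value;
}

/* Write through: change both the cache entry and the permanent copy,
 * so that a crash loses no information. */
void cache_write(const char *key, const char *value)
{
    struct entry *e = &amp;cache[slot_for(key)];
    strncpy(e->key, key, sizeof e->key - 1);
    strncpy(e->value, value, VALUE_LEN - 1);
    e->in_use = 1;
    disk_write(key, value);
}
</PRE>
</DIV>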
<P>
Caching can reduce both disk I/O and processor time, because reading information from disk uses more processor
time than reading it from the cache. Because caching addresses both of the potential bottlenecks, it is the
focal point of high-performance Web server application design. CGI applications couldn't perform in-memory
caching, because they exited after processing just one request. Web server APIs promised to solve this
problem. But how effective is the solution?
</P>
<P>
Today's most widely deployed Web server APIs are based on a pool-of-processes server model. The Web server
consists of a parent process and a pool of child processes. Processes do not share memory. An incoming request
is assigned to an idle child at random. The child runs the request to completion before accepting a new
request. A typical server has 32 child processes, a large server has 100 or 200.
</P>
<P>
In-memory caching works very poorly in this server model because processes do not share memory and incoming
requests are assigned to processes at random. For instance, to keep a frequently-used file available in memory
the server must keep a file copy per child, which wastes memory. When the file is modified all the children
need to be notified, which is complex (the APIs don't provide a way to do it).
</P>
<P>
FastCGI is designed to allow effective in-memory caching. Requests are routed from any child process to a
FastCGI application server. The FastCGI application process maintains an in-memory cache.
</P>
<P>
In some cases a single FastCGI application server won't provide enough performance. FastCGI provides two
solutions: session affinity and multi-threading.
</P>
<P>
With session affinity you run a pool of application processes and the Web server routes requests to individual
processes based on any information contained in the request. For instance, the server can route according to
the area of content that's been requested, or according to the user. The user might be identified by an
application-specific session identifier, by the user ID contained in an Open Market Secure Link ticket, by the
Basic Authentication user name, or whatever. Each process maintains its own cache, and session affinity
ensures that each incoming request has access to the cache that will speed up processing the most.
</P>
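<P>
The routing itself is configured in the Web server, but the idea behind session affinity is simple enough to
sketch. The function below is purely illustrative: it maps whatever identifies the user onto a fixed pool of
application processes, so that the same user always reaches the same process, and therefore the same cache:
</P>
<DIV CLASS="c3">
<PRE>
/* Illustrative sketch of session affinity: hash whatever identifies the
 * user and use the hash to pick one process out of a fixed-size pool.
 * The same identifier always selects the same process, so that process's
 * cache stays warm for that user. */
static unsigned process_for_user(const char *user_id, unsigned pool_size)
{
    unsigned h = 0;
    while (*user_id)
        h = h * 31 + (unsigned char) *user_id++;
    return h % pool_size;   /* index of the application process to route to */
}
</PRE>
</DIV>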
<P>
With multi-threading you run an application process that is designed to handle several requests at the same
time. The threads handling concurrent requests share process memory, so they all have access to the same
cache. Multi-threaded programming is complex -- concurrency makes programs difficult to test and debug -- but
with FastCGI you can write single-threaded <I>or</I> multi-threaded applications.
</P>
<P>
</P>
<H3>
<A NAME="S4">4. Database Access</A>
</H3>
<P>
Many Web server applications perform database access. Existing databases contain a lot of valuable
information; Web server applications allow companies to give wider access to the information.
</P>
<P>
Access to database management systems, even within a single machine, is via connection-oriented protocols. An
application "logs in" to a database, creating a connection, then performs one or more accesses.
Frequently, the cost of creating the database connection is several times the cost of accessing data over an
established connection.
</P>
<P>
To a first approximation database connections are just another type of state to be cached in memory by an
application, so the discussion of caching above applies to caching database connections.
</P>
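<P>
In FastCGI terms, caching a connection just means opening it before the request loop. The sketch below is
illustrative rather than complete: FCGI_Accept comes from the FastCGI fcgi_stdio library, while db_connect and
db_lookup are placeholders for whatever database client library the application actually uses.
</P>
<DIV CLASS="c3">
<PRE>
/* Sketch of a FastCGI application that caches its database connection:
 * the expensive "log in" happens once at startup, and every request
 * reuses the same connection.  db_connect() and db_lookup() are
 * placeholders for a real database client library. */
#include "fcgi_stdio.h"     /* FastCGI drop-in replacement for stdio.h */
#include &lt;stdlib.h>

extern void *db_connect(const char *database);
extern const char *db_lookup(void *conn, const char *user_id);

int main(void)
{
    void *conn = db_connect("userdb");      /* paid once, not per request */
    if (conn == NULL)
        return 1;

    while (FCGI_Accept() >= 0) {            /* one iteration per request */
        const char *user = getenv("REMOTE_USER");
        const char *row  = db_lookup(conn, user ? user : "anonymous");

        printf("Content-type: text/plain\r\n\r\n");
        printf("Hello, %s\n", row ? row : "stranger");
    }
    return 0;
}
</PRE>
</DIV>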
<P>
But database connections are special in one respect: They are often the basis for database licensing. You pay
the database vendor according to the number of concurrent connections the database system can sustain. A
100-connection license costs much more than a 5-connection license. It follows that caching a database
connection per Web server child process is not just wasteful of the system's hardware resources, it could
break your software budget.
</P>
<P>
</P>
<H3>
<A NAME="S5">5. A Performance Test</A>
</H3>
<P>
We designed a test application to illustrate performance issues. The application represents a class of
applications that deliver personalized content. The test application is quite a bit simpler than any real
application would be, but still illustrates the main performance issues. We implemented the application using
both FastCGI and a current Web server API, and measured the performance of each.
</P>
<P>
</P>
<H4>
<A NAME="S5.1">5.1 Application Scenario</A>
</H4>
<P>
The application is based on a user database and a set of content files. When a user requests a content file,
the application performs substitutions in the file using information from the user database. The application
then returns the modified content to the user.
</P>
<P>
Each request accomplishes the following:
</P>
<P>
</P>
<OL>
<LI>
authentication check: The user id is used to retrieve and check the password.
<P>
</P>
</LI>
<LI>
attribute retrieval: The user id is used to retrieve all of the user's attribute values.
<P>
</P>
</LI>
<LI>
file retrieval and filtering: The request identifies a content file. This file is read and all occurrences
of variable names are replaced with the user's corresponding attribute values. The modified HTML is
returned to the user.<BR>
<BR>
</LI>
</OL>
<P>
Of course, it is fair game to perform caching to shortcut any of these steps.
</P>
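<P>
Step 3 is a simple text filter. A sketch of it in C might look like the following, where lookup_attribute is a
placeholder for the attribute retrieval of step 2 and variables in the content file are assumed to be written
as $NAME:
</P>
<DIV CLASS="c3">
<PRE>
/* Sketch of the filtering step: copy the content file to the response,
 * replacing each variable (written here as $NAME) with the user's
 * attribute value.  lookup_attribute() is a placeholder for the
 * application's attribute retrieval; the $NAME syntax is an assumption. */
#include &lt;stdio.h>
#include &lt;ctype.h>

extern const char *lookup_attribute(const char *user_id, const char *name);

void filter_content(FILE *in, FILE *out, const char *user_id)
{
    int c = getc(in);
    while (c != EOF) {
        if (c != '$') {                 /* ordinary character: copy it */
            putc(c, out);
            c = getc(in);
            continue;
        }
        char name[64];                  /* collect the variable name */
        size_t n = 0;
        c = getc(in);
        while (c != EOF &amp;&amp; (isalnum(c) || c == '_')) {
            if (n &lt; sizeof name - 1)
                name[n++] = (char) c;
            c = getc(in);
        }
        name[n] = '\0';
        const char *value = lookup_attribute(user_id, name);
        fputs(value ? value : "", out);
        /* c now holds the first character after the name (or EOF);
         * the outer loop picks up from there. */
    }
}
</PRE>
</DIV>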
<P>
Each user's database record (including password and attribute values) is approximately 100 bytes long.
Each content file is 3,000 bytes long. Both database and content files are stored on disks attached to the
server platform.
</P>
<P>
A typical user makes 10 file accesses with realistic think times (30-60 seconds) between accesses, then
disappears for a long time.
</P>
<P>
</P>
<H4>
<A NAME="S5.2">5.2 Application Design</A>
</H4>
<P>
The FastCGI application maintains a cache of recently-accessed attribute values from the database. When the
cache misses the application reads from the database. Because only a small number of FastCGI application
processes are needed, each process opens a database connection on startup and keeps it open.
</P>
<P>
The FastCGI application is configured as multiple application processes. This is desirable in order to get
concurrent application processing during database reads and file reads. Requests are routed to these
application processes using FastCGI session affinity keyed on the user id. This way, all of a user's requests
after the first one hit in the application's cache.
</P>
<P>
The API application does not maintain a cache; the API application has no way to share the cache among its
processes, so the cache hit rate would be too low to make caching pay. The API application opens and closes a
database connection on every request; keeping database connections open between requests would result in an
unrealistically large number of database connections open at the same time, and very low utilization of each
connection.
</P>
<P>
</P>
<H4>
<A NAME="S5.3">5.3 Test Conditions</A>
</H4>
<P>
The test load is generated by 10 HTTP client processes. The processes represent disjoint sets of users. A
process makes a request for a user, then a request for a different user, and so on until it is time for the
first user to make another request.
</P>
<P>
For simplicity the 10 client processes run on the same machine as the Web server. This avoids the possibility
that a network bottleneck will obscure the test results. The database system also runs on this machine, as
specified in the application scenario.
</P>
<P>
Response time is not an issue under the test conditions. We just measure throughput.
</P>
<P>
The API Web server in these tests is Netscape 1.1.
</P>
<P>
</P>
<H4>
<A NAME="S5.4">5.4 Test Results and Discussion</A>
</H4>
<P>
Here are the test results:
</P>
<P>
</P>
<DIV CLASS="c3">
<PRE>
FastCGI 12.0 msec per request = 83 requests per second
API 36.6 msec per request = 27 requests per second
</PRE>
</DIV>
<P>
Given the big architectural advantage that the FastCGI application enjoys over the API application, it is not
surprising that the FastCGI application runs a lot faster. To gain a deeper understanding of these results we
measured two more conditions:
</P>
<P>
</P>
<UL>
<LI>
API with sustained database connections. If you could afford the extra licensing cost, how much faster
would your API application run?
<P>
</P>
<PRE>
API 16.0 msec per request = 61 requests per second
</PRE>
Answer: Still not as fast as the FastCGI application.
<P>
</P>
</LI>
<LI>
FastCGI with cache disabled. How much benefit does the FastCGI application get from its cache?
<P>
</P>
<PRE>
FastCGI 20.1 msec per request = 50 requests per second
</PRE>
Answer: A very substantial benefit, even though the database access is quite simple.<BR>
<BR>
</LI>
</UL>
<P>
What these two extra experiments show is that if the API and FastCGI applications are implemented in exactly
the same way -- caching database connections but not caching user profile data -- the API application is
slightly faster. This is what you'd expect, since the FastCGI application has to pay the cost of
inter-process communication not present in the API application.
</P>
<P>
In the real world the two applications would not be implemented in the same way. FastCGI's architectural
advantage results in much higher performance -- a factor of 3 in this test. With a remote database or more
expensive database access the factor would be higher. With more substantial processing of the content files
the factor would be smaller.
</P>
<P>
</P>
<H3>
<A NAME="S6">6. Multi-threaded APIs</A>
</H3>
<P>
Web servers with a multi-threaded internal structure (and APIs to match) are now starting to become more
common. These servers don't have all of the disadvantages described in Section 3. Does this mean that
FastCGI's performance advantages will disappear?
</P>
<P>
A superficial analysis says yes. An API-based application in a single-process, multi-threaded server can
maintain caches and database connections the same way a FastCGI application can. The API-based application
does not pay for inter-process communication, so the API-based application will be slightly faster than the
FastCGI application.
</P>
<P>
A deeper analysis says no. Multi-threaded programming is complex, because concurrency makes programs much more
difficult to test and debug. In the case of multi-threaded programming to Web server APIs, the normal problems
with multi-threading are compounded by the lack of isolation between different applications and between the
applications and the Web server. With FastCGI you can write programs in the familiar single-threaded style,
get all the reliability and maintainability of process isolation, and still get very high performance. If you
truly need multi-threading, you can write multi-threaded FastCGI and still isolate your multi-threaded
application from other applications and from the server. In short, multi-threading makes Web server APIs
unusable for practically all applications, reducing the choice to FastCGI versus CGI. The performance winner in
that contest is obviously FastCGI.
</P>
<P>
</P>
<H3>
<A NAME="S7">7. Conclusion</A>
</H3>
<P>
Just how fast is FastCGI? The answer: very fast indeed. Not because it has some specially-greased path through
the operating system, but because its design is well matched to the needs of most applications. We invite you
to make FastCGI the fast, open foundation for your Web server applications.
</P>
<P>
</P>
<HR>
<A HREF="http://www.openmarket.com/"><IMG SRC="omi-logo.gif" ALT="OMI Home Page"></A>
<ADDRESS>
© 1995, Open Market, Inc. / mbrown@openmarket.com
</ADDRESS>
</BODY>
</HTML>