%z Article
%K Wolman89
%A Barry L. Wolman
%A Thomas M. Olson
%T IOBENCH: a system independent IO benchmark
%J Computer Architecture News
%V 17
%N 5
%D September 1989
%P 55-70
%x IOBENCH is an operating system and processor independent synthetic
%x input/output (IO) benchmark designed to put a configurable IO and
%x processor (CP) load on the system under test. This paper discusses
%x the UNIX versions.
%k IOBENCH, synthetic I/O benchmark, UNIX workload
%s vinton%cello@hplabs.hp.com (Fri Sep 20 12:55:58 PDT 1991)
%z Book
%K Hennessy96
%A John L. Hennessy
%A David A. Patterson
%T Computer Architecture: A Quantitative Approach, 2nd Edition
%I Morgan Kaufmann
%D 1996
%z Article
%K Chen94a
%A P. M. Chen
%A D. A. Patterson
%T A new approach to I/O performance evaluation \- self-scaling I/O benchmarks, predicted I/O performance
%D November 1994
%J ACM Transactions on Computer Systems
%V 12
%N 4
%P 308-339
%x Current I/O benchmarks suffer from several chronic problems: they
%x quickly become obsolete; they do not stress the I/O system; and they
%x do not help much in understanding I/O system performance. We
%x propose a new approach to I/O performance analysis. First, we
%x propose a self-scaling benchmark that dynamically adjusts aspects of
%x its workload according to the performance characteristic of the
%x system being measured. By doing so, the benchmark automatically
%x scales across current and future systems. The evaluation aids in
%x understanding system performance by reporting how performance varies
%x according to each of five workload parameters. Second, we propose
%x predicted performance, a technique for using the results from the
%x self-scaling evaluation to estimate quickly the performance for
%x workloads that have not been measured. We show that this technique
%x yields reasonably accurate performance estimates and argue that this
%x method gives a far more accurate comparative performance evaluation
%x than traditional single-point benchmarks. We apply our new
%x evaluation technique by measuring a SPARCstation 1+ with one SCSI
%x disk, an HP 730 with one SCSI-II disk, a DECstation 5000/200 running
%x the Sprite LFS operating system with a three-disk disk array, a
%x Convex C240 minisupercomputer with a four-disk disk array, and a
%x Solbourne 5E/905 fileserver with a two-disk disk array.
%s toc@hpl.hp.com (Mon Mar 13 10:57:38 1995)
%s wilkes%hplajw@hpl.hp.com (Sun Mar 19 12:38:01 PST 1995)
%s wilkes%cello@hpl.hp.com (Sun Mar 19 12:38:53 PST 1995)
%z InProceedings
%K Ousterhout90
%s wilkes%cello@hplabs.hp.com (Fri Jun 29 20:46:08 PDT 1990)
%A John K. Ousterhout
%T Why aren't operating systems getting faster as fast as hardware?
%C Proceedings USENIX Summer Conference
%c Anaheim, CA
%D June 1990
%P 247-256
%x This paper evaluates several hardware platforms and operating systems using
%x a set of benchmarks that stress kernel entry/exit, file systems, and
%x other things related to operating systems. The overall conclusion is that
%x operating system performance is not improving at the same rate as the base speed of the
%x underlying hardware. The most obvious ways to remedy this situation
%x are to improve memory bandwidth and reduce operating systems'
%x tendency to wait for disk operations to complete.
%o Typical performance of 10-20 MIPS CPUs is only 0.4 times what
%o their raw hardware performance would suggest. HP-UX is
%o particularly bad on the HP 9000/835, at about 0.2x. (Although
%o this measurement discounted a highly-tuned getpid call.)
%k OS performance, RISC machines, HP9000 Series 835 system calls
%z InProceedings
%K McVoy91
%A L. W. McVoy
%A S. R. Kleiman
%T Extent-like Performance from a Unix File System
%C Proceedings USENIX Winter Conference
%c Dallas, TX
%D January 1991
%P 33-43
%z Article
%K Chen93d
%A Peter M. Chen
%A David Patterson
%T Storage performance \- metrics and benchmarks
%J Proceedings of the IEEE
%V 81
%N 8
%D August 1993
%P 1151-1165
%x Discusses metrics and benchmarks used in storage performance evaluation.
%x Describes, reviews, and runs popular I/O benchmarks on three systems. Also
%x describes two new approaches to storage benchmarks: LADDIS and a Self-Scaling
%x benchmark with predicted performance.
%k I/O, storage, benchmark, workload, self-scaling benchmark,
%k predicted performance, disk, performance evaluation
%s staelin%cello@hpl.hp.com (Wed Sep 27 16:21:11 PDT 1995)
%z Article
%K Park90a
%A Arvin Park
%A J. C. Becker
%T IOStone: a synthetic file system benchmark
%J Computer Architecture News
%V 18
%N 2
%D June 1990
%P 45-52
%o this benchmark is useless for all modern systems; it fits
%o completely inside the file system buffer cache. Soon it may even
%o fit inside the processor cache!
%k IOStone, I/O, benchmarks
%s staelin%cello@hpl.hp.com (Wed Sep 27 16:37:26 PDT 1995)
%z Article
%K Fenwick95
%A David M. Fenwick
%A Denis J. Foley
%A William B. Gist
%A Stephen R. VanDoren
%A Daniel Wissell
%T The AlphaServer 8000 series: high-end server platform development
%J Digital Technical Journal
%V 7
%N 1
%D August 1995
%P 43-65
%x The AlphaServer 8400 and the AlphaServer 8200 are Digital's newest high-end
%x server products. Both servers are based on the 300 MHz Alpha 21164
%x microprocessor and on the AlphaServer 8000-series platform architecture.
%x The AlphaServer 8000 platform development team set aggressive system data
%x bandwidth and memory read latency targets in order to achieve high-performance
%x goals. The low-latency criterion was factored into design decisions made at
%x each of the seven layers of platform development. The combination of
%x industry-leading microprocessor technology and a system platform focused
%x on low latency has resulted in a 12-processor server implementation ---
%x the AlphaServer 8400 --- capable of supercomputer levels of performance.
%k DEC Alpha server, performance, memory latency
%s staelin%cello@hpl.hp.com (Wed Sep 27 17:27:23 PDT 1995)
%z Book
%K Toshiba94
%A Toshiba
%T DRAM Components and Modules
%I Toshiba America Electronic Components, Inc.
%P A59-A77, C37-C42
%D 1994
%z Article
%K McCalpin95
%A John D. McCalpin
%T Memory bandwidth and machine balance in current high performance computers
%J IEEE Technical Committee on Computer Architecture Newsletter
%V to appear
%D December 1995
%z Article
%K FSF89
%A Richard Stallman
%Q Free Software Foundation
%T General Public License
%D 1989
%O Included with \*[lmbench]