print("Hello, World!")
One program, benchmarked with hyperfine across a potpourri of programming languages.
Benchmark 1: ./testc
  Time (mean ± σ):      16.8 ms ±   3.4 ms    [User: 16.5 ms, System: 0.6 ms]
  Range (min … max):    13.0 ms …  28.4 ms    191 runs

Benchmark 2: ./testcpp
  Time (mean ± σ):      17.8 ms ±   4.0 ms    [User: 17.2 ms, System: 0.8 ms]
  Range (min … max):    13.5 ms …  29.1 ms    175 runs

Benchmark 3: python test.py
  Time (mean ± σ):     430.7 ms ±  23.4 ms    [User: 426.1 ms, System: 4.0 ms]
  Range (min … max):   388.5 ms … 466.9 ms    10 runs

Benchmark 4: java test
  Time (mean ± σ):      62.3 ms ±  11.7 ms    [User: 57.5 ms, System: 8.0 ms]
  Range (min … max):    49.2 ms …  87.5 ms    41 runs

Benchmark 5: mono test.exe
  Time (mean ± σ):      32.8 ms ±   6.6 ms    [User: 29.9 ms, System: 3.1 ms]
  Range (min … max):    25.9 ms …  48.5 ms    101 runs

Benchmark 6: node test.js
  Time (mean ± σ):      61.7 ms ±   8.4 ms    [User: 57.0 ms, System: 5.5 ms]
  Range (min … max):    50.4 ms …  79.1 ms    53 runs

Benchmark 7: perl test.pl
  Time (mean ± σ):     666.4 ms ±  33.8 ms    [User: 663.9 ms, System: 1.5 ms]
  Range (min … max):   619.0 ms … 725.2 ms    10 runs

Benchmark 8: ruby test.rb
  Time (mean ± σ):     249.1 ms ±  13.7 ms    [User: 243.7 ms, System: 4.9 ms]
  Range (min … max):   229.8 ms … 271.4 ms    12 runs

Benchmark 9: ./testgo
  Time (mean ± σ):      21.4 ms ±   5.3 ms    [User: 21.0 ms, System: 0.8 ms]
  Range (min … max):    16.9 ms …  34.7 ms    153 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark 10: ./testrs
  Time (mean ± σ):      22.0 ms ±   5.4 ms    [User: 21.6 ms, System: 0.6 ms]
  Range (min … max):    14.5 ms …  31.1 ms    167 runs

Benchmark 11: ./tesths
  Time (mean ± σ):      90.7 ms ±   5.8 ms    [User: 89.2 ms, System: 1.5 ms]
  Range (min … max):    82.2 ms … 100.2 ms    33 runs

Benchmark 12: julia test.jl
  Time (mean ± σ):     262.2 ms ±  11.4 ms    [User: 213.5 ms, System: 48.3 ms]
  Range (min … max):   250.8 ms … 291.6 ms    11 runs

Benchmark 13: Rscript test.r
  Time (mean ± σ):      3.086 s ±  0.084 s    [User: 3.065 s, System: 0.020 s]
  Range (min … max):    2.913 s …  3.235 s    10 runs

Benchmark 14: ecl --shell test.lisp
  Time (mean ± σ):     790.7 ms ±  24.2 ms    [User: 1190.7 ms, System: 93.0 ms]
  Range (min … max):   757.7 ms … 831.1 ms    10 runs

Benchmark 15: lua test.lua
  Time (mean ± σ):     137.0 ms ±   9.8 ms    [User: 136.2 ms, System: 0.8 ms]
  Range (min … max):   120.4 ms … 158.4 ms    21 runs

Benchmark 16: elixir test.exs
  Time (mean ± σ):     298.4 ms ±  11.9 ms    [User: 416.0 ms, System: 50.9 ms]
  Range (min … max):   272.6 ms … 319.2 ms    10 runs
Summary
  './testc' ran
    1.06 ± 0.32 times faster than './testcpp'
    1.27 ± 0.41 times faster than './testgo'
    1.31 ± 0.42 times faster than './testrs'
    1.95 ± 0.56 times faster than 'mono test.exe'
    3.67 ± 0.90 times faster than 'node test.js'
    3.70 ± 1.03 times faster than 'java test'
    5.39 ± 1.15 times faster than './tesths'
    8.14 ± 1.76 times faster than 'lua test.lua'
   14.80 ± 3.13 times faster than 'ruby test.rb'
   15.59 ± 3.26 times faster than 'julia test.jl'
   17.74 ± 3.69 times faster than 'elixir test.exs'
   25.60 ± 5.41 times faster than 'python test.py'
   39.61 ± 8.34 times faster than 'perl test.pl'
   47.00 ± 9.71 times faster than 'ecl --shell test.lisp'
  183.44 ± 37.82 times faster than 'Rscript test.r'
¯\_(ツ)_/¯