Lib Perf-O-Rate is the performance measuring utility you always wanted. Well. It's the utility *I* always wanted.
Perf-O-Rate doesn't measure CPU time; what it DOES do is let you request summaries of elapsed real time by tag. (Note that the addon API already gives you a fairly good version of the CPU side of this in the Inspect.AddOn.CPU() output.) The primary usages:
1. /perf -b "expr" will run expr 100 times (once per update, so this will take a bit) and tell you how long it took.
2. In code, perf.hook(func, name) returns a function f such that f(args) is equivalent to func(args), but the real time that passes during the evaluation is accumulated under the tag <name>.
So if you want to find out how much time one of your hooks is consuming, replace it with perf.hook(func, "foo"), then run "/perf foo". Note that you can reset the timer ("/perf -r foo") if you'd like to start over from a given point.
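The wrapping technique above can be sketched like so. This is a Python illustration of the idea, not Perf-O-Rate's actual (Lua) implementation; the names `hook`, `report`, and `reset` here mirror the commands described but are assumptions for the sketch:

```python
import time

# Accumulated wall-clock time per tag, keyed by the name passed to hook().
_totals = {}

def hook(func, name):
    """Return a wrapper for func that behaves identically, but adds the
    real time spent in each call to the running total for <name>."""
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            _totals[name] = _totals.get(name, 0.0) + (time.perf_counter() - start)
    return wrapped

def report(name):
    """Rough analogue of '/perf <name>': total seconds accumulated so far."""
    return _totals.get(name, 0.0)

def reset(name):
    """Rough analogue of '/perf -r <name>': start over from this point."""
    _totals[name] = 0.0
```

The key property is that the wrapper is call-transparent: arguments and return values pass straight through, so the hooked function's behavior is unchanged and only the bookkeeping is added.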
This is young and experimental. Breaking changes could be in the pipeline.
Perf-O-Rate embeds my LibGetOpt, so expr actually supports quoting (only " and \ quoting at this time, mind).
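For reference, quoting of the kind described (double quotes plus backslash escapes) can be sketched as below. This is a hypothetical Python illustration of the technique, not LibGetOpt's actual code:

```python
def tokenize(line):
    """Split a command line on whitespace, honoring "..." quoting
    and backslash escapes, so that '-b "some expr"' yields two tokens."""
    tokens, current = [], []
    in_quotes = escaped = has_current = False
    for ch in line:
        if escaped:
            current.append(ch)          # backslash: take next char literally
            escaped, has_current = False, True
        elif ch == "\\":
            escaped, has_current = True, True
        elif ch == '"':
            in_quotes = not in_quotes   # quotes group, but aren't kept
            has_current = True
        elif ch.isspace() and not in_quotes:
            if has_current:             # whitespace outside quotes ends a token
                tokens.append("".join(current))
                current, has_current = [], False
        else:
            current.append(ch)
            has_current = True
    if has_current:
        tokens.append("".join(current))
    return tokens
```

Tracking `has_current` separately from the buffer lets an explicitly quoted empty string (`""`) survive as a token rather than being dropped.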
BTW, I did some meta-testing on the performance of the hooks themselves. It looks like running a function inside a hook adds on the order of 0.8 milliseconds of overhead, which isn't super fast, but it should be small enough that you can have a few things hooked without suffering horribly.
Added a lap timer and an elapsed-time report feature.
0.3/0.4: ToC updates, improved reporting.
0.5: ToC updates.
0.6: Fixed reset (perf -r) so it works; benchmarks now reset automatically after running so you don't accumulate old statistics.
0.7: Event API changes.