Jan 18: Cost
Them: 'So it's picking this new plan and it's much slower than the old plan, but the new plan has a much lower cost.'
Me: 'Of course it has a lower cost. That's why it's picked it. That's what a cost-based optimiser does.'
Them: 'But it's slower'
Me: 'Well spotted'
Them: 'Well then why is the cost lower?'
Now at this point there are diverging paths you could take and it's utterly valid to find out *why* the calculations haven't delivered the best plan. The cost *is* the result of a calculation and not an arbitrary number plucked from the air. The optimiser selected the 'bad' plan because it had the lowest cost.
To quote Jonathan:
'Cost IS time - but only in theory.'
However, it's equally valid to say: I don't care why the cost is wrong; I *know* the plan that performs most efficiently and that's the plan I want the server to use. In fact, I'd suggest that *most* SQL performance problems I look at have at their very core a plan with a lower cost that doesn't deliver the best response time! My users couldn't care less about optimiser calculations. (I use the cliche carefully - they probably don't care at all.)
What isn't easy for me is to watch people spend a day aiming for the lowest cost, brandish it at me proudly and then be disappointed when it runs more slowly than the previous version!
Them: 'But look at the cost!'
Me: 'Who cares? It's slower.'
My take on it is this. Cost is the output of a model designed to deliver the lowest execution time but sometimes the model gets things wrong and, in the face of actual run times, I'll take reality over the model every time. When people arrive at my desk (perhaps electronically) with a SQL performance problem, one of the first things I say if the conversation begins like that one above is:
"Ignore the cost!"
Jonathan's telling the truth and the true answer to a question is important, as long as you understand what the original question was. I know from experience that many people react to the cost without understanding how the optimiser arrives at that answer. Of course, it could be that I happen to see the problem queries, where the optimiser calculation isn't working out too well, but I come across plenty of them and it would save a lot of time if people didn't spend so long arguing about costs!
Cost is a fundamental metric if you care about the CBO, develop it, write about it (Jonathan) or are trying to work out what the hell it's doing (anyone who has ever looked at a 10053 trace in desperation). But as a performance metric? I'm not sure it's very important at all.
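For anyone who hasn't had that particular pleasure, the usual way into a 10053 trace is something along these lines, remembering that the trace is only written when the statement is hard parsed:

ALTER SESSION SET events '10053 trace name context forever, level 1';
-- run (and hard parse) the statement you're interested in, then switch it off
ALTER SESSION SET events '10053 trace name context off';

The trace file lands in user_dump_dest and shows the calculations behind the cost.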
How about time?
P.S. No more blog posts for a while now. That Hotsos presentation looms large. Although, now I've said that, I'll probably contradict myself ....
P.P.S. I should say that I've missed the main thrust of Jonathan's post and picked up on the bit that suited me. Really, it was about this ...
"As I’ve pointed out in the past, “Cost is Time”. The cost of a query represents the optimizer’s estimate of how long it will take that query to run – so it is perfectly valid to compare the cost of two queries to see which one the optimizer thinks will be faster but, thanks to limitations and defects in the optimizer it may not be entirely sensible to do so.
The point I want to address in this post though is the comment that 'it’s valid to compare the cost of different plans, but not to compare the cost of two different queries'."
'Valid' and 'sensible'. I liked that and I hope it's what people absorbed.
[Late update: I happened to look at AskTom today. I don't look as often now that the pace of interesting follow-ups has slowed down. That's no bad thing. Lots of things have been answered (for now) and I still search on old threads constantly. Anyway, I should have known that this subject would crop up there.]
Good posts, both yours, Doug, and Jonathan's.
Due to the complexities of code, design and the CBO, we are all (although some folks more than others...) trying to figure out WHY the optimizer doesn't consistently give us the most efficient execution choice.
I've rarely found a query where, if I dug deep enough into the objects' column statistics or took apart the dynamic sampling choices, I couldn't figure out WHY the CBO made the plan choice it offered me.
Now, was it the best execution choice? Not always, but it was the lowest cost. I would align closer with Jonathan that it was due to design, coding issues, CBO limitations or a combination of any/all, but I also have to agree with you - how much time should I really allocate to the task when the bottom line is, "which one performs better?"
Picking our battles sometimes has to do with deciding how much is worth the fight...
Interesting comment, Kellyn, because it made me realise I might not have emphasised the right points sufficiently.
I'm definitely not saying that the cost and cardinality of individual plan lines aren't of great interest to me in working out *why* we're not getting the desired plan; I was only really talking about this waving around of the final cost of the entire plan as though it was some sort of magic metric.
In fact, whilst Time might be the best metric from a user perspective, Cardinalities are what I tend to focus on, particularly estimated versus actual cardinalities.
I would say that what they should be looking at is not whether the cost is going down, but whether the cardinality estimates are getting more accurate.
If the cardinality estimates are accurate then 90% (at least?) of the time the plan will be a good one.
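Just to illustrate what I mean, here's a rough sketch of the sort of check I have in mind - the table and predicate are entirely made up:

-- Ask the optimiser what it expects (9.2 onwards)
EXPLAIN PLAN FOR
SELECT * FROM orders WHERE status = 'OPEN';

-- The Rows column here is the estimated cardinality
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- ... and this is the actual cardinality to compare it against
SELECT COUNT(*) FROM orders WHERE status = 'OPEN';

If those two numbers are wildly different, the plan built on that estimate is suspect, whatever its cost.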
Here's a tuning task from yesterday for a SQL statement returning several hundred thousand rows. The plan's forty lines so I'll just paste the top line (hope the formatting's ok)
Based on these estimates, how likely is it that the plan's going to be good for several hundred thousand rows?
But add in a bit of subquery factoring and dynamic sampling:
But ooh - nine times the cost!
Tuning task = manual tuning task, not one of those new-fangled automatic ones. This is 9i, after all.
And of course the other advantage of concentrating on accurate cardinalities is sensible PGA operations - of which a recent reminder is Randolf Geist's post on hash aggregation:
Agreed. I sent a link to that post round work straight away and was going to blog about it here too.
Ultimately, my favourite tools are SQL monitoring, /*+ gather_plan_statistics */ with ALLSTATS LAST and anything that shows me actual versus estimated cardinalities. The overall cost of an execution plan, mmmm, not sure at all.
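For anyone who hasn't tried that combination, it's roughly this (10g onwards - the table and predicate are only examples):

-- The hint collects run-time row counts as the statement executes
SELECT /*+ gather_plan_statistics */ *
FROM   my_table
WHERE  created_date > SYSDATE - 7;

-- Then, in the same session, report estimated (E-Rows) against actual (A-Rows)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));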
> The overall cost of an execution plan not sure at all.
> /*+ gather_plan_statistics */
Massive fan - shame that I'm back on 9i for the moment.
And, as you mentioned the other day, what about when your query takes ten hours to run?
> /*+ gather_plan_statistics */
> Massive fan - shame that I'm back on 9i for the moment.
You certainly know that you can get the same information in 9i by setting STATISTICS_LEVEL = ALL? Of course it introduces more overhead than "GATHER_PLAN_STATISTICS" but it is nevertheless very helpful.
Way back I started to develop an anonymous PL/SQL block that emulated DBMS_XPLAN.DISPLAY_CURSOR in 9i. If you happen to have a copy of SQLTools++, it actually uses that PL/SQL block as its built-in functionality to get cardinality feedback in 9i, but I have to admit I never used or tested it extensively.
It can be used as quite a good starting point, I think, because it contains the queries required to pull the information from V$SQL_PLAN_STATISTICS_ALL; the remaining part of the code is about emulating the formatting of DBMS_XPLAN.DISPLAY_CURSOR.
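The core of it isn't much more than this sort of query - a minimal sketch, with the binds standing in for the HASH_VALUE and ADDRESS of your cursor from V$SQL:

-- 9i: collect run-time row counts for the session (this is the extra overhead)
ALTER SESSION SET statistics_level = ALL;

-- ... execute the statement you are interested in, then ...

-- Estimated versus actual rows per plan line for that cursor
SELECT id,
       operation,
       options,
       object_name,
       cardinality      AS est_rows,
       last_output_rows AS act_rows
FROM   v$sql_plan_statistics_all
WHERE  hash_value = :hash_value
AND    address    = :address
ORDER  BY id;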
Sure. It's just not as nice and convenient. And that convenience is a big deal.
> Sure. It's just not as nice and convenient. And that convenience is a big deal.
Yes, of course that is very true. And that is exactly the reason I did this for SQLTools++ - so you can get it there with (almost) the same convenience for 9i, too
I tried fixing the formatting, but not sure whether I made it any better really!