Conversation

@bsanchez-the-roach (Contributor) commented Oct 31, 2025

DOC-15152

This is a first draft, I definitely want a review for accuracy since I'm still pretty new to this product.

There's an unfinished section at the very bottom, I've left a note there and am looking for some guidance.

Happy to iterate on this more, I just want to get eyes on it.

netlify bot commented Oct 31, 2025

Deploy Preview for cockroachdb-api-docs canceled.

🔨 Latest commit: c273994
🔍 Latest deploy log: https://app.netlify.com/projects/cockroachdb-api-docs/deploys/690cd5924a60580008deea6d

netlify bot commented Oct 31, 2025

Deploy Preview for cockroachdb-interactivetutorials-docs canceled.

🔨 Latest commit: c273994
🔍 Latest deploy log: https://app.netlify.com/projects/cockroachdb-interactivetutorials-docs/deploys/690cd592755ed6000816f8af

github-actions bot commented

Files changed:

@bsanchez-the-roach bsanchez-the-roach marked this pull request as draft October 31, 2025 16:16
netlify bot commented Oct 31, 2025

Netlify Preview

🔨 Latest commit: c273994
🔍 Latest deploy log: https://app.netlify.com/projects/cockroachdb-docs/deploys/690cd592c67cbd0008792256
😎 Deploy Preview: https://deploy-preview-20893--cockroachdb-docs.netlify.app

@rytaft (Contributor) left a comment

This is really great! Thank you for doing this! I left a few suggestions, and I bet @yuzefovich may have some more.


- [Understand how the cost-based optimizer chooses query plans]({% link {{page.version.version}}/cost-based-optimizer.md %}) based on table statistics, and how those statistics are refreshed.

## Query plan regressions vs. suboptimal plans
Contributor commented:

This section seems a bit too focused on the technicality of what the Insights page currently supports. I think it's worth mentioning that the Insights page can help, but I'm not sure you need to distinguish between plan regressions vs. suboptimal plans.

Member commented:

I agree with Becca on this. This section seems confusing to me in its current form. "Slow execution" and "suboptimal plan" insights might be good starting points for troubleshooting unsatisfactory latency for a given query, yet neither necessarily confirms or disproves that the query has experienced a query plan regression.

Perhaps a better way to include the information about the insights would be a single sentence in the "Before you begin" section indicating that the "suboptimal plan" insight might help with identifying or understanding a query plan regression. I'd probably omit the mention of the "slow execution" insight altogether, since it doesn't give much useful signal for query plan regressions; after all, the execution time exceeding the threshold controlled via the cluster setting could be the best we can do.

2. If you've already identified specific time intervals in Step 1, you can use the time interval selector to create a custom time interval. Click **Apply**.
3. If there is only one plan in the resulting table, there was only one plan used for this statement fingerprint during this time interval, and therefore a query plan regression could not have occurred. If there are multiple plans listed in the resulting table, the query plan changed within the given time interval. By default, the table is sorted from most recent to least recent query plan. Compare the **Average Execution Time** of the different plans.

If a plan in the table has a significantly higher average execution time than the one that preceded it, it's possible that this is a query plan regression. It's also possible that the increase in latency is coincidental, or that the plan change was not the actual cause. For example, if the average execution time of the latest query plan is significantly higher than the average execution time of the previous query plan, this could be explained by a significant increase in the **Average Rows Read** column.
Contributor commented:
An increase in Average Rows Read could indicate a query plan regression, since it's possible that the bad query plan is scanning more rows than it should.

But as I think you're intending to show, an increase in Average Rows Read could also indicate that more data was added to the table. It's probably worth mentioning both possibilities here.

Member commented:
To me it seems more likely that a significant increase (like an order of magnitude growth) in Average Rows Read is actually due to a plan regression, rather than due to the table size growth, since we're comparing two plans for the given query fingerprint that presumably were executed close - time-wise - to each other. I agree though that both are possibilities.


1. In the **Explain Plans** tab, click on the Plan Gist of the more recent plan to see it in more detail.
2. Click on **All Plans** above to return to the list of plans.
3. Click on the Plan Gist of the previous plan to see it in more detail. Compare the two plans to understand what changed. Do the plans use different indexes? Are they scanning the different portions of the table? Do they use different join strategies?
Contributor commented:
nit: the different portions -> different portions
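As a supplement to the console workflow described in the steps above, a plan gist copied from the **Explain Plans** tab can also be decoded directly in SQL. This is a sketch; the gist string below is a placeholder, not a real gist:

```sql
-- Decode a plan gist into a human-readable plan outline.
-- 'AgHQ...' is a placeholder; paste an actual gist copied from the console.
SELECT crdb_internal.decode_plan_gist('AgHQ...');
```

Decoding both gists this way makes it easier to diff the two plans side by side in a terminal.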


#### Determine if a literal in the SQL statement has changed

[NOTE FROM BRANDON: I need more information on this case, mainly how to identify that this is the case, and what to do about it.]
Contributor commented:
I'm not sure there is a good way to determine this without collecting a conditional statement bundle for a slow execution of the statement fingerprint (unless the DB operator happens to know that the application is using a new value for a particular placeholder). Maybe @yuzefovich has another idea?

@yuzefovich (Member) commented Nov 4, 2025:
Oof, yeah, this is a hard one. The tutorial so far assumes that there is a single good plan for a query fingerprint that might have regressed, but it's actually possible that multiple plans are good, depending on the values of placeholders ("literals").

Here is an example of two different optimal plans (although they do look similar):

CREATE TABLE small (k INT PRIMARY KEY, v INT);
CREATE TABLE large (k INT PRIMARY KEY, v INT, INDEX (v));
INSERT INTO small SELECT i, i FROM generate_series(1, 10) AS g(i);
INSERT INTO large SELECT i, 1 FROM generate_series(1, 10000) AS g(i);
ANALYZE small;
ANALYZE large;
-- this scans `large` on the _left_ side of merge join
EXPLAIN SELECT * FROM small INNER JOIN large ON small.v = large.v AND small.v = 1;
-- this scans `large` on the _right_ side of merge join
EXPLAIN SELECT * FROM small INNER JOIN large ON small.v = large.v AND small.v = 2;

Complicating things is that we deal with query fingerprints internally, so all such constants are removed from our observability tooling. If there was an escalation saying that a particular query fingerprint is occasionally slow, similar to Becca I'd have asked for a conditional statement bundle, and then I'd play around locally with different values of placeholders to see whether multiple plans could be chosen based on concrete placeholder values. But so far we've used statement bundles mostly as internal (to Queries team in particular and Cockroach Labs support in general) tooling, so I'd probably not mention going down this route.

Instead, I'd consider suggesting looking into application side to see whether the literal has changed or something like that.


If you suspect that the query plan change is the cause of the latency increase, and you suspect that the query plan changed due to a changed query literal, [what should you do]
Contributor commented:
what should you do

The likely problem is that the query stats don't accurately reflect how this value is represented in the data. This can be fixed by running ANALYZE <table> to refresh the stats for the table. It's also possible that a good index isn't available, which could be fixed by checking the index recommendations displayed by EXPLAIN-ing the query or shown on the Insights page. If none of these options fixes the issue, a more drastic redesign of the schema or application may be needed.
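The remediation steps mentioned in the comment above can be sketched in SQL. This is a minimal illustration; `users` and the query are hypothetical stand-ins for the affected table and statement fingerprint:

```sql
-- Refresh table statistics so the optimizer costs plans against current data.
-- `users` is a hypothetical table name.
ANALYZE users;

-- Re-check the chosen plan; recent CockroachDB versions also surface index
-- recommendations in EXPLAIN output when a better index appears to exist.
EXPLAIN SELECT id, name FROM users WHERE city = 'amsterdam';
```

If the refreshed statistics restore the old plan, the stale-stats hypothesis is confirmed; if EXPLAIN instead suggests a new index, that points at the missing-index case.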

@yuzefovich (Member) left a comment
Nice, glad to see this work!




One way of tracking down query plan regressions is to identify SQL statements whose executions are relatively high in latency. Use one or both of the following methods to identify queries that might be associated with a latency increase.

#### Use workload insights
Member commented:
As I mentioned in another comment, my understanding of "slow execution" and "suboptimal plan" insights is that they cannot really be used to find or troubleshoot query plan regressions, so I'd remove "Use workload insights" approach altogether.

That said, it might be worth reaching out to TSEs / EEs to check whether their experience matches my understanding.

3. Among the resulting Statement Fingerprints, look for those with high latency. Click on the column headers to sort the results by **Statement Time** or **Max Latency**.
4. Click on the Statement Fingerprint to go to the page that details the statement and its executions.
{{site.data.alerts.callout_success}}
Look for statements whose **Execution Count** is high. Statements that are run once, such as import statements, aren't likely to be the cause of increased latency due to query plan regressions.
Member commented:
nit: capitalize IMPORT and perhaps link to the IMPORT docs page.


#### Determine if the table indexes changed

1. Look at the **Used Indexes** column for the older and the newer query plans. If these aren't the same, it's likely that the creation or deletion of an index resulted in a change to the statement's query plan.
2. In the **Explain Plans** tab, click on the Plan Gist of the more recent plan to see it in more detail. Identify the table used in the initial "scan" step of the plan.
Member commented:
nit: s/table/tables/ - it's possible that we have initial scans of multiple tables.


If you were unable to identify a specific moment in time when the latency increased, you won't have a specific "before" and "after" to compare. If this is the case, it would still be useful to have a vague sense of the time of the increase (using the methods in Step 1), even if that range is many hours long. You can then use the above methods (in Step 3) to compare query plans on a rolling basis by changing the custom time interval to consecutive hour-long intervals. This might help you discover the specific time interval in which a sudden latency increase occurred.
{{site.data.alerts.end}}

### Step 4. Understand why the query plan changed
@bsanchez-the-roach (Contributor, author) commented:
Are the three things I have listed here (changing table indexes, changing table statistics, changing query literals) the only three I should mention? How about more broad schema changes? Or changed cluster settings?

@bsanchez-the-roach bsanchez-the-roach marked this pull request as ready for review November 6, 2025 17:23
3. In your SQL client, run `SHOW INDEXES FROM <table_name>;` for each of those tables.
4. Make sure that the query plan is using a table index that makes sense, given the query and the table's full set of indexes.
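Steps 3 and 4 above can be sketched as follows; the table name `rides` and the index name are hypothetical placeholders for the tables identified in the plan's initial scan:

```sql
-- Step 3: list the indexes available on a table used by the plan's initial scan.
SHOW INDEXES FROM rides;

-- Step 4: to test whether a different index would be a reasonable choice,
-- force it with an index hint and compare the resulting plan.
-- `rides_city_idx` is a hypothetical index name.
EXPLAIN SELECT * FROM rides@rides_city_idx WHERE city = 'new york';
```

Comparing the forced-index plan against the optimizer's default choice can help confirm whether the index created or dropped by the schema change is actually responsible for the regression.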

It's possible that the new index is well-chosen but that the schema change triggered a statistics refresh that is the root problem. It's also possible that the new index is not ideal. Think about how and when this table gets queried, to determine if the index should be reconsidered. [Check the **Insights** page for index recommendations]({% link {{ page.version.version }}/ui-insights-page.md %}#suboptimal-plan), and read more about [secondary index best practices]({% link {{ page.version.version }}/schema-design-indexes.md %}#best-practices).
@bsanchez-the-roach (Contributor, author) commented:

Generally in this section, is linking to solutions sufficient (as in: read this page to learn how to refresh table stats, or read this page to learn about index recommendations), or should those solutions be described in full on this page?
