Troubleshooting
Understanding the Components #
The secret to troubleshooting a Vitess cluster well comes from knowing how all the components are wired together.
Of these connections, the most important one is the query serving path:
- The application sends a request to a vtgate.
- The vtgate forwards that request to one or more vttablets.
- The vttablets in turn send the request to MySQL.
If there is any kind of problem with serving queries, these are the components to drill into.
VTGates and vttablets connect to the global and cell-specific toposervers. They use these toposervers to broadcast their state as well as to discover configuration changes. Additionally, vtgates receive health information from the vttablets they are connected to, and use this information to direct traffic in real time. If there is any kind of problem with configuration, these are the areas to focus on.
The first step to troubleshooting is to have an established baseline. It is recommended that monitoring is set up for all the components as directed in the monitoring section. When there is an incident, the change in the graphs will likely help with identifying the root cause.
Query Serving #
Most of the focus of Vitess monitoring centers around the cost of a query. Although not always true, we use the following rules of thumb to get a rough estimate:
- In MySQL, the time taken by a query roughly translates to its actual cost. More often than not, the limiting factor is the number of disk IOPS.
- In Vitess components, the payload contributes to the cost. The limiting factor is most often the CPU, followed by memory.
- The cost of parsing and analyzing the input query has started to become significant. This is typically driven by the size and complexity of the query.
Let us now go through some fire drills.
Too many connections #
If you see an error that contains the string `Too many connections (errno 1040) (sqlstate 08004)`, it means that the maximum number of connections allowed by MySQL has been exceeded.
The remedy is to further increase the `max_connections` setting in MySQL. Alternatively, you can reduce the pool sizes. As recommended before, the `max_connections` value should be set to about double the number of connections allocated to the pools.
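As a quick check before changing anything, you can compare current connection usage against the configured limit and then raise it. This is a minimal sketch that assumes direct MySQL access; the host name, user, and new limit are placeholders.

```sh
# Compare current usage against the configured limit.
mysql -h mysql-host -u admin -p -e "
  SHOW GLOBAL STATUS LIKE 'Threads_connected';
  SHOW GLOBAL STATUS LIKE 'Max_used_connections';
  SHOW GLOBAL VARIABLES LIKE 'max_connections';"

# Raise the limit at runtime; also persist it in my.cnf so it survives a restart.
mysql -h mysql-host -u admin -p -e "SET GLOBAL max_connections = 2000;"
```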
Elevated Query Latency #
This is one of the most common problems. It can show up as just an elevated latency with no external impact. Sometimes, the problem can be more acute where queries return errors due to timeouts. There can be multiple root causes.
Approach 1: Validate resource usage and remove bottlenecks #
Check vtgate CPU usage: If the vtgate CPU is too high or is getting throttled, it could be the root cause. Bringing up another vtgate to take on additional load or increasing the CPU quota of the vtgate should resolve the problem.
Check vttablet CPU usage: If vttablet CPU usage is maxed out, the immediate solution is to increase the quota. If the vttablet is a replica, then bringing up more replicas should help distribute the load. If it is a primary, then you need to plan on splitting the shard into smaller parts, or on sharding the keyspace if it is currently unsharded.
Check MySQL resource usage: If MySQL is resource constrained, for example IOPS, then you need to provision more. Also, you will then need to add more replicas or plan a reshard or split just like in the case of a vttablet. More often than not, MySQL will run into resource constraints before vttablet does.
Check network packet limits: Some cloud providers limit network capacity by limiting the number of packets per second. Check to see if this limit has been reached. If so, you will need to request a quota increase from the cloud provider.
If none of the processes are resource constrained, the next step is to check connection pool waits in vttablet. If there are long waits, then we know that the connection pool is under-provisioned. Restarting the vttablets with increased pool sizes should fix the problem.
If latency continues to be elevated after increasing the pool size, it most likely means that the contention has shifted from vttablet to MySQL. In that case, it is likely that MySQL is the one resource constrained. You may have to roll back the pool size increase and instead look at increasing resources for MySQL.
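To look for the pool waits mentioned above, you can sample the counters that vttablet exports on its `/debug/vars` page. The host, port, and exact counter names (for example `ConnPoolWaitCount` and `ConnPoolWaitTime`) are assumptions that vary by deployment and Vitess version.

```sh
# Sample the pool wait counters from vttablet's /debug/vars endpoint.
# A steadily growing wait count suggests the pool is under-provisioned.
curl -s http://vttablet-host:15100/debug/vars | grep -iE 'WaitCount|WaitTime'
```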
Approach 2: Find the problematic query #
Typically, resource constraints are encountered during the early stages of a rollout where things are not tuned well. Once the system is well tuned and running smoothly, the root cause is likely to be related to how the application sends queries.
We first need to determine if this is an overall increase in application traffic or a change in specific query patterns.
Inspect the per-table and per-user graphs to see if the increase is uniform across all tables or is limited to only a specific table.
If the increase is across the board, then it is likely a general overload. The mitigation is to increase provisioning or address the root cause of the overload if it was not expected.
If the increase is on a specific table, then this data itself is sufficient to determine the root cause. If not, we may need to drill down further to identify if a specific query is causing the problem. There are two approaches:
- Inspect the `/queryz` page and look at the stats for all queries of that table. It is very likely that the problematic ones have already risen to the top of the page and may be color-coded red. If not, taking a few snapshots of the page and comparing the stats should help identify the problematic query.
- Stream `/debug/querylog` in real time, filtering out unwanted tables and observing the results to identify the problematic query, as sketched below.
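As a rough sketch of the second approach, the streamed query log can be filtered from the shell. The host, port, and table name below are placeholders.

```sh
# Stream the live query log from vttablet and narrow it down to one table.
# -N disables curl's buffering so entries appear as they are logged.
curl -s -N http://vttablet-host:15100/debug/querylog | grep 'my_hot_table'
```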
Once the query is identified, the remedy depends on the situation. It could be a rewrite of the query to be more efficient, the creation of an additional index, or it could be the shutdown of an abusive batch process.
The above guidelines have not required you to inspect MySQL. Over time, Vitess has evolved by improving its observability every time there was an incident. However, there may still be situations where the above approach is insufficient. If so, you will need to resort to looking inside MySQL to find out the root cause.
The actual identification of the root cause may not be as straightforward as described above. Sometimes, an incident is caused by multiple factors. In such cases, using first principles of troubleshooting and understanding how the components communicate with each other may be the only way to get to the bottom of a problem. If you have exhausted all the recommendations given so far and still have not found the root cause, you may have to directly troubleshoot the problem at the MySQL level.
Elevated Error Rates #
The analysis for elevated error rates for read queries follows steps similar to elevated latency. You should essentially use the same drill down approach to identify the root cause.
Transaction timeouts #
Transaction timeouts manifest as the following errors:
ERROR 1317 (HY000): vtgate: http://sougou-lap1:15001/: vttablet: rpc error: code = Aborted desc = transaction 1610909864463057369: ended at 2021-01-17 10:58:49.155 PST (exceeded timeout: 30s) (CallerID: userData1)
If you see such errors, you may have to do one of the following:
- Increase the transaction timeout in vttablet by setting a higher value for `queryserver-config-transaction-timeout` (see the sketch below).
- Refactor the application code to finish the transaction sooner.
It is recommended to minimize long-running transactions in MySQL, because MySQL's efficiency drops drastically as the number of concurrent transactions increases.
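A minimal sketch of the first option above, assuming the double-dash flag spelling; whether the value is a duration (such as 60s) or a plain number of seconds depends on the Vitess version.

```sh
# Add this flag to your existing vttablet invocation (all other flags unchanged),
# then restart vttablet for it to take effect.
vttablet ...existing-flags... --queryserver-config-transaction-timeout=60s
```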
Transaction connection limit errors #
If you encounter errors that contain the text `transaction pool connection limit exceeded`, it means that your connection pool for transactions is full and Vitess timed out waiting for a connection. This issue can have multiple root causes.
If your transaction load is just spiky, you may only need to increase the pool timeout so that transactions wait longer for a connection. This is controlled by the `queryserver-config-txpool-timeout` flag in vttablet; the default value is one second.
It is also possible that you have underprovisioned the transaction pool size. If so, you can increase the size by changing the value of `queryserver-config-transaction-cap`. Note that it is risky to increase the pool size beyond the low hundreds, because MySQL performance can drastically deteriorate if too many concurrent transactions are opened.
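A sketch of both adjustments, with placeholder values and the same caveats about flag spelling and units as above.

```sh
# Let transactions wait a little longer for a pool slot, and allow a larger
# (but still modest) number of concurrent transactions.
vttablet ...existing-flags... \
  --queryserver-config-txpool-timeout=2s \
  --queryserver-config-transaction-cap=100
```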
Another possibility is that the application is unnecessarily keeping transactions open for too long, thereby causing the pool to fill up. To identify this, you can look at the vttablet logs. Every time the pool gets full, the list of transactions that were occupying those connections is printed out. Looking at those transactions should help you identify the root cause. This logging is throttled to prevent log spam.
Frequent application crashes can also leave transactions open in the pool until timeout. Vitess tries to detect this situation and proactively rolls back such transactions, but it is not always reliable. If this is the case, the log file will contain many unfinished transactions that were rolled back.
If you are starting to see these errors due to steady organic growth, it may be time to split the database or reshard.
Errant GTIDs #
An errant GTID incident is one where the MySQL instances of a shard do not agree on the events in the binlogs. This means that the data on those instances has potentially diverged. Errant GTIDs are often introduced due to operator errors. For example, someone could use DBA privileges to write data to a replica.
In Vitess, you can get into an errant GTID situation if a primary is network partitioned and you make a decision to proceed forward with an `EmergencyReparentShard`. This will essentially cause the old primary to get indefinitely stuck due to transactions waiting for a semi-sync ack. The natural instinct would be to restart that server. However, such a restart will cause MySQL to go into recovery mode and complete those pending transactions, thereby causing divergence.
`vtorc` can be used to detect errant GTIDs. You can also set up your own monitoring to detect this situation by verifying that the GTIDs of the replicas are always a subset of the primary's.
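A minimal sketch of such a check, using the standard MySQL `GTID_SUBSET` and `GTID_SUBTRACT` functions; the host names are placeholders and authentication flags are omitted.

```sh
# Verify that the replica's executed GTIDs are a subset of the primary's,
# and list any errant GTIDs that exist only on the replica.
primary_gtids=$(mysql -h primary-host -N -e "SELECT @@GLOBAL.gtid_executed")
mysql -h replica-host -N -e "
  SELECT GTID_SUBSET(@@GLOBAL.gtid_executed, '$primary_gtids') AS is_subset,
         GTID_SUBTRACT(@@GLOBAL.gtid_executed, '$primary_gtids') AS errant_gtids;"
```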
If an errant GTID is detected, the first task is to identify the GTIDs that have diverged. If the divergence did not cause any data skew, you could choose to create dummy transactions with those extra GTIDs on instances that do not contain them. If you are running `vtorc`, you can use it to perform this fix.
If the data has diverged, you have to make a decision about which copy is authoritative. Once that is decided, it is recommended that you first take a backup of the instance that you have determined to be authoritative. Following this, you can bring down the non-authoritative instances and restart them with empty data directories. This will trigger the restore workflow, which will start from the authoritative backup. Once restored, MySQL will point itself at the current primary and catch up to a consistent state.
Replica vttablet not receiving queries #
A number of reasons can cause a vtgate to not send queries to a replica vttablet. The first step is to visit the `/debug/status` page of vtgate and look at the `Health Check Cache` section.
If the vttablet entry is present, but is color coded red and displays an error message, it means that vtgate is seeing the vttablet as unhealthy. Once you fix the error that causes the problem, traffic should resume to the vttablet. There can be many reasons for the unhealthiness:
- vttablet may not be reachable: troubleshoot connectivity from the vtgate machine to the vttablet, make sure firewall rules allow access, ports are reachable, etc.
- vttablet is reporting itself as unhealthy: Fix the root cause. For example, MySQL replication may be lagging too much, or vttablet may have trouble connecting to the MySQL instance.
If the vttablet entry is absent, then it means that vtgate has not discovered the vttablet yet. Check to see if vtgate is having trouble connecting to the topo server. If there is a problem, there should be errors in the log file like `cannot get tablets`. If such errors are found, fix the root cause and verify again.
It may be worth proactively monitoring `TopologyWatcherErrors` and `TopologyWatcherOperations`. Alerting on errors can help identify these problems early.
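A quick way to spot-check these counters, assuming the vtgate web port shown in the error example earlier in this guide:

```sh
# Dump the topology watcher counters from vtgate's /debug/vars endpoint.
curl -s http://vtgate-host:15001/debug/vars | grep TopologyWatcher
```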
If there are no topo errors in vtgate, check to see if the tablet record has been created by vttablet, using the `vtctlclient ListAllTablets` command. If the tablet record is absent or does not contain the correct host and port info, check the vttablet logs to see if it has trouble connecting to the topo and is unable to publish its existence. If there are errors, fixing the issue should resolve the problem.
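For example, assuming a vtctld reachable at a placeholder address and a cell named zone1:

```sh
# List the tablet records stored in the topo and confirm the replica is present
# with the correct host and port.
vtctlclient -server vtctld-host:15999 ListAllTablets zone1
```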
Read-only errors #
If you see the error string `The MySQL server is running with the --read-only option so it cannot execute this statement (errno 1290) (sqlstate HY000)` while trying to write to the primary, it likely means that a previous `PlannedReparentShard` operation failed in the middle.
Re-executing `PlannedReparentShard` against that primary should fix the problem. If this operation fails with an error saying that there is no current primary, you may have to issue an `EmergencyReparentShard` to safely elect one.
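A sketch of the re-execution, with a placeholder vtctld address, keyspace/shard, and tablet alias; the exact flag names (for example `-new_primary` versus `-new_master`) depend on the Vitess version.

```sh
# Retry the planned reparent toward the intended primary.
vtctlclient -server vtctld-host:15999 PlannedReparentShard \
  -keyspace_shard=commerce/0 -new_primary=zone1-0000000101
```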
If `vtorc` is running, no action is needed, because `vtorc` will notice this state and fix it in a safe manner.
This error can also be encountered if a new primary has been elected, but the older vttablet continues to think that it is still the primary. If this is the situation, it is transient and will heal itself as long as the components are able to communicate with each other. In the meantime, the older vttablet will be in read-only mode, and vtgates that try to send writes to it will fail.
Eventually, the new primary will inform the vtgates of its existence, and they will start sending traffic to the new primary instead of the old one. The old primary will also eventually notice that a new primary was elected, and will then demote itself to a replica.
Local TopoServer of a Cell is getting overloaded #
This situation can happen if a large number of vtgates continuously spam the local toposerver to check for changes in the list of tablet servers. If this is the case, you may have to reduce the polling frequency of the vtgates by increasing the `-tablet_refresh_interval` value.
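A sketch of the adjustment, with an arbitrarily chosen interval and all other vtgate flags omitted.

```sh
# Poll the cell toposerver for tablet changes less frequently.
vtgate ...existing-flags... -tablet_refresh_interval=5m
```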
Global Topo or Cell Topo is down #
Vitess servers are built to survive topo outages on the order of several minutes. All Vitess servers cache the necessary information to serve traffic. If there is an outage, they use the cached information to continue serving traffic.
However, during such an outage, you may not be able to perform cluster maintenance operations like a reparent, resharding, or bringing up new servers.
Topo Complete Data Loss #
In the unforeseen circumstance of a total data loss of the topo servers, a Vitess cluster can be restored to an operational state by performing the following actions:
- Bring up a brand new set of empty topo servers.
- Recreate the cell info as before.
- Restart all the vttablets.
- Upload the VSchema for the recreated keyspaces.
If you are in the middle of a reshard, make sure you restart the source vttablets first. This order will ensure that they get marked as serving. Any shards that get added later that overlap with existing keyranges will be marked as non-serving.
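A sketch of the recovery sequence, assuming etcd as the toposerver; the addresses, cell name, keyspace name, and file name are placeholders, and the exact flag spellings vary by Vitess version.

```sh
# 1. Recreate the cell info in the new, empty topo.
vtctlclient -server vtctld-host:15999 AddCellInfo \
  -root /vitess/zone1 -server_address etcd-zone1:2379 zone1

# 2. Restart every vttablet so it republishes its tablet record
#    (systemd shown only as an example of however you manage the processes).
sudo systemctl restart 'vttablet@*'

# 3. Re-upload the VSchema for each recreated keyspace.
vtctlclient -server vtctld-host:15999 ApplyVSchema \
  -vschema_file=commerce_vschema.json commerce
```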