A brief discussion on the performance issues of MySQL paging limit

MySQL paging queries are usually implemented with LIMIT. LIMIT accepts one or two integer arguments: with two arguments, the first specifies the offset of the first row to return and the second the maximum number of rows to return. The offset of the first row is 0. For compatibility with PostgreSQL, MySQL also supports the LIMIT # OFFSET # form.
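For example, the following two statements are equivalent ways of fetching 2 rows starting at offset 50000 (a minimal illustration; tableName is the same placeholder table used in the examples below):

-- Two equivalent LIMIT forms [SQL]
SELECT * FROM tableName ORDER BY id LIMIT 50000, 2;
SELECT * FROM tableName ORDER BY id LIMIT 2 OFFSET 50000;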

Problem:

For small offsets, querying directly with LIMIT is not a problem. However, as the amount of data grows, the offset in the LIMIT clause becomes larger and larger and each further page gets noticeably slower, because MySQL still has to read and discard all the rows before the offset.

Optimization idea: Avoid scanning too many records when the data volume is large

Solution: the subquery paging method or the JOIN paging method. JOIN paging and subquery paging are basically on the same level of efficiency and take roughly the same amount of time.

Here is an example. A MySQL primary key is usually an auto-incrementing numeric column, in which case the following methods can be used for optimization.

Take a table with 60,000 records in a real production environment as an example to compare the query time before and after optimization:

-- Traditional LIMIT, scans the data file [SQL]
SELECT * FROM tableName ORDER BY id LIMIT 50000,2;
Affected rows: 0
Time: 0.171s

-- Subquery method, index scan [SQL]
SELECT * FROM tableName
WHERE id >= (SELECT id FROM tableName ORDER BY id LIMIT 50000 , 1)
LIMIT 2;
Affected rows: 0
Time: 0.035s

-- JOIN paging method [SQL]
SELECT * FROM tableName AS t1
JOIN (SELECT id FROM tableName ORDER BY id LIMIT 50000, 1) AS t2
WHERE t1.id >= t2.id ORDER BY t1.id LIMIT 2;
Affected rows: 0
Time: 0.036s

As you can see, performance improves several-fold after optimization: from 0.171s with the traditional LIMIT down to roughly 0.035s.

Optimization principle:

The subquery runs against the index, while the plain query runs against the data file. In general, the index file is much smaller than the data file, so operations on it are more efficient. Because the traditional query has to return every column, it must cross and read a large number of data blocks just to skip past the first 50,000 rows; the optimized query first locates the boundary id using only the index, and then reads the full rows directly from that position, which naturally improves efficiency greatly. Therefore, instead of using LIMIT with a large offset directly, the idea is to first obtain the id at the offset position, and then use LIMIT size to fetch the data from that id onward.
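The same principle can also be applied as two separate statements issued from application code. Below is a minimal sketch reusing the tableName example above; the id value 50001 in the second statement is a hypothetical result of the first statement, not a real value:

-- Step 1: locate the boundary id using only the index [SQL]
SELECT id FROM tableName ORDER BY id LIMIT 50000, 1;

-- Step 2: assuming step 1 returned 50001, fetch the page starting from that id [SQL]
SELECT * FROM tableName WHERE id >= 50001 ORDER BY id LIMIT 2;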

In an actual project, you can handle paging with a strategy-pattern-like approach. For example, with 100 records per page: if the requested page number is below 100, use the most basic LIMIT paging; if the page number is 100 or more, switch to the subquery paging method, as sketched below.
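A sketch of that threshold strategy (the page numbers 10 and 500 and the page size of 100 are illustrative assumptions; tableName is still the placeholder table from above):

-- Page 10, offset 900, below the threshold: basic LIMIT paging [SQL]
SELECT * FROM tableName ORDER BY id LIMIT 900, 100;

-- Page 500, offset 49900, above the threshold: subquery paging [SQL]
SELECT * FROM tableName
WHERE id >= (SELECT id FROM tableName ORDER BY id LIMIT 49900, 1)
ORDER BY id LIMIT 100;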

The above is the full content of this article. I hope it will be helpful for everyone’s study. I also hope that everyone will support 123WORDPRESS.COM.

You may also be interested in:
  • Solution to data duplication when using limit+order by in MySql paging
  • Why does MySQL paging become slower and slower when using limit?
  • MySQL optimization query_cache_limit parameter description
  • Detailed explanation of the pitfalls of mixing MySQL order by and limit
  • Reasons and optimization solutions for slow MySQL limit paging with large offsets
  • Mysql sorting and paging (order by & limit) and existing pitfalls
  • MySQL uses limit to implement paging example method
  • How to use MySQL limit and solve the problem of large paging
  • MySQL limit performance analysis and optimization
  • Analysis of the Poor Performance Caused by Large Offset of LIMIT in MySQL Query
  • Why does using limit in MySQL affect performance?
