Sharing ideas on processing tens of millions of rows in a single MySQL table

Project Background

The project works with a single table holding tens of millions of rows. In the morning, field A has to be updated; in the afternoon, once the crawler team finishes crawling the specifications and pictures, the picture and specification fields have to be updated as well. The data was fetched page by page, and with a table this large, deep pagination makes processing slower and slower:

select a,b,c from db.tb limit 10000 offset 9000000

With LIMIT ... OFFSET, MySQL has to scan past and discard all of the skipped rows (9,000,000 here) before returning the page, so each query gets slower as the offset grows. But time is limited. Is there a better way to solve this problem?

Improvement ideas

Is there a way to update the data without deep pagination?
Yes: use the auto-increment id column.

Observe data characteristics

This table has an auto-increment id column, which is also its primary key. Querying and updating rows through an indexed column is the ideal access pattern:

select a,b,c from db.tb where id=9999999;
update db.tb set a=x where id=9999999;
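To confirm that such statements really hit the primary key index, EXPLAIN is a quick check (this check is my addition, not from the original article); for a primary-key equality lookup like the one above, MySQL should report key: PRIMARY with access type const.

explain select a, b, c from db.tb where id=9999999;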

Multi-processing

Each process handles the data within a certain id range, which avoids deep pagination and lets multiple processes work on the data at the same time. Faster queries mean faster overall processing.
Here is the task allocation function I wrote, for reference:

def mission_handler(all_missions, worker_mission_size):
    """
    The task list is calculated based on the total number of tasks and the number of tasks of each worker. The task list elements are (task start id, task end id).
    Example: The total number of tasks is 100, and the number of tasks for each worker is 40. Then the task list is: [(1, 40), (41, 80), (81, 100)]
    :param all_missions: total number of missions :param worker_mission_size: maximum number of missions for each worker :return: [(start_id, end_id), (start_id, end_id), ...]
    """
    worker_mission_ids = []
    current_id = 0
    while current_id <= all_missions:
        # Clamp both ends of the range so they never exceed all_missions
        start_id = all_missions if current_id + 1 >= all_missions else current_id + 1
        end_id = all_missions if current_id + worker_mission_size >= all_missions else current_id + worker_mission_size
        if start_id == end_id:
            # The previous range already ends at this id, so we are done
            if worker_mission_ids[-1][1] == start_id:
                break
        worker_mission_ids.append((start_id, end_id))
        current_id += worker_mission_size

    return worker_mission_ids

Assume the maximum id in the table is 100 and we want each process to handle 40 ids. The task list is then:

>>> mission_handler(100, 40)
[(1, 40), (41, 80), (81, 100)]

So,
process 1 only needs to handle rows with id between 1 and 40;
process 2 only needs to handle rows with id between 41 and 80;
process 3 only needs to handle rows with id between 81 and 100.
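Each worker can then read its slice with a plain indexed range scan instead of an OFFSET. As a sketch (reusing the a, b, c columns from the earlier queries), process 2's query might look like:

select a, b, c from db.tb where id between 41 and 80;

Dispatching the ranges to a pool of worker processes then looks like this: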

from concurrent.futures import ProcessPoolExecutor


def main():
    # Maximum value of the auto-increment id
    max_id = 30000000
    # Data volume processed by a single worker
    worker_mission_size = 1000000
    # Use multiple processes to process the missions
    missions = mission_handler(max_id, worker_mission_size)
    workers = []
    executor = ProcessPoolExecutor()
    for idx, mission in enumerate(missions):
        start_id, end_id = mission
        workers.append(executor.submit(data_handler, start_id, end_id, idx))


def data_handler(start_id, end_id, worker_id):
    pass
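Note that main() above only submits the jobs; it never waits for them. A minimal way to collect the results (assuming data_handler returns the status string shown later) is concurrent.futures.as_completed, added at the end of main():

from concurrent.futures import as_completed

# Block until every worker finishes and print its status string
for future in as_completed(workers):
    print(future.result())

Also run main() under an if __name__ == '__main__': guard; ProcessPoolExecutor requires it on platforms that spawn worker processes (e.g. Windows and macOS).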

Summary of ideas

  1. Avoid deep pagination by using the auto-increment id to query and update data
  2. Process the data with multiple processes

Data processing skills

Record the ids of rows that were processed successfully and of those that failed, for later follow-up:

-- Use another table to record the processing status
insert into db.tb_handle_status(row_id, success) values (999, 0);
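The article does not show the definition of tb_handle_status; a minimal sketch of what such a status table could look like (only row_id and success come from the insert above, the rest is assumed):

-- Hypothetical status table: one row per processed id
create table db.tb_handle_status (
    id int unsigned auto_increment primary key,
    row_id int unsigned not null,     -- id of the processed row in db.tb
    success tinyint(1) not null,      -- 1 = success, 0 = failure
    created_at timestamp not null default current_timestamp
);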

Catch exceptions inside the loop so that one bad row cannot make the whole program exit abnormally:

def data_handler(start_id, end_id, worker_id):
    # Database connection
    conn, cursor = mysql()
    current_id = start_id
    try:
        while current_id <= end_id:
            try:
                # TODO data processing code
                pass
            except Exception as e:
                # TODO record the failure, then move on to the next id
                current_id += 1
                continue
            else:
                # No exception; continue with the next id
                current_id += 1
    except Exception as e:
        return 'worker_id({}): result({})'.format(worker_id, False)
    finally:
        # Release the database resources
        cursor.close()
        conn.close()

    return 'worker_id({}): result({})'.format(worker_id, True)
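The mysql() helper used above is not shown in the article; a minimal sketch using PyMySQL (all connection parameters are placeholders) could be:

import pymysql


def mysql():
    # Open one connection per worker process; connections must not be
    # shared across processes.
    conn = pymysql.connect(
        host='127.0.0.1',   # placeholder host
        user='user',        # placeholder credentials
        password='password',
        database='db',
        charset='utf8mb4',
    )
    return conn, conn.cursor()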

Use batched updates to write back to the database whenever possible:

sql = """update db.tb set a=%s, b=%s where id=%s"""
values ​​= [
            ('a_value', 'b_value', 9999),
            ('a_value', 'b_value', 9998),
            ...
         ]
# Batch submission to reduce network io and lock acquisition frequency cursor.executemany(sql, values)
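executemany still sends everything it is given in one go, so with very large value lists it can help to commit in fixed-size chunks; a small sketch of that (the chunk size is an arbitrary assumption):

BATCH_SIZE = 1000  # assumed chunk size; tune to the workload

for i in range(0, len(values), BATCH_SIZE):
    cursor.executemany(sql, values[i:i + BATCH_SIZE])
    # Commit each chunk so a failure only rolls back one batch
    conn.commit()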

