This topic describes the test model used for the Tablestore performance test.

  • Table schema

    Primary key column name | Type   | Encoding method                | Length
    ------------------------|--------|--------------------------------|-------
    userid                  | string | 4-Byte-Hash + Long.toHexString | 20
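
    The exact hash function behind "4-Byte-Hash + Long.toHexString" is not specified in this topic. The sketch below illustrates one way such a 20-character key could be built: a 4-byte hash rendered as an 8-character hex prefix (to spread keys evenly across partitions), followed by a zero-padded 12-character hex form of the raw ID. CRC32 and the class name UserIdEncoder are illustrative assumptions, not the actual implementation.

    ```java
    import java.util.zip.CRC32;

    // Hypothetical sketch of the userid encoding described in the schema table.
    // CRC32 stands in for the unspecified 4-byte hash.
    public class UserIdEncoder {
        public static String encode(long userId) {
            // 4-byte hash as an 8-char hex prefix spreads keys across partitions.
            CRC32 crc = new CRC32();
            crc.update(Long.toString(userId).getBytes());
            String hashPrefix = String.format("%08x", (int) crc.getValue());
            // Remaining 12 chars: zero-padded hex of the raw ID (Long.toHexString).
            String hexId = String.format("%12s", Long.toHexString(userId)).replace(' ', '0');
            return hashPrefix + hexId; // 8 + 12 = 20 characters
        }
    }
    ```

    A hashed prefix of this kind prevents sequentially assigned user IDs from all landing in the same partition, which matters for the partition-count comparisons later in this topic.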
  • Attribute column

    Attribute column name | Type   | Length
    ----------------------|--------|-------
    field0                | string | 100
    field1                | string | 100
    field2                | string | 100
    field3                | string | 100
    field4                | string | 100
  • Number of partitions

    The automatic load balancing feature of Tablestore dynamically splits table partitions based on the data volume and access load of each partition. This process requires no manual intervention. In this test, performance data is collected for tables with 1, 4, and 16 partitions.

    By default, a new table has a single data partition. To manually split a new table, submit a ticket.

  • Test cases
    Each runner starts N threads, creates a com.alicloud.openservices.tablestore.SyncClient instance for each thread, and then calls Tablestore API operations from that thread.
    • The test cases include:

      • Random write: The test calls SyncClient.putRow. Each request writes one row of data. The test runs for 1 hour.
      • Batch write: The test calls SyncClient.batchWriteRow. Each request writes 200 rows of data. The test runs for 1 hour.
      • Random read: The test first calls BatchWriteRow to write 20 GB of data to each partition, and then calls SyncClient.getRow. Each request reads one row of data. The test runs for 30 minutes.
      • Random range read: The test first calls BatchWriteRow to write 20 GB of data to each partition, and then calls SyncClient.getRange. Each request reads 100 rows of data. The test runs for 30 minutes.
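
      The per-thread runner loop shared by these test cases can be sketched as follows. The real test issues requests through com.alicloud.openservices.tablestore.SyncClient; a plain Runnable stands in here so the sketch is self-contained, and the class name LoadRunner is an illustrative assumption.

      ```java
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.atomic.AtomicLong;

      // Hypothetical sketch of the runner: N threads, one request loop per
      // thread, each loop issuing requests until the duration elapses.
      public class LoadRunner {
          public static long run(int threads, long durationMillis, Runnable request) {
              ExecutorService pool = Executors.newFixedThreadPool(threads);
              AtomicLong completed = new AtomicLong();
              long deadline = System.currentTimeMillis() + durationMillis;
              for (int i = 0; i < threads; i++) {
                  pool.execute(() -> {
                      // In the real test, each thread creates its own SyncClient here.
                      while (System.currentTimeMillis() < deadline) {
                          request.run(); // e.g. SyncClient.putRow(...) in the random write case
                          completed.incrementAndGet();
                      }
                  });
              }
              pool.shutdown();
              try {
                  pool.awaitTermination(durationMillis + 5000, TimeUnit.MILLISECONDS);
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
              }
              return completed.get(); // total requests issued across all threads
          }
      }
      ```

      One SyncClient per thread avoids contention on a shared client; the aggregate request count divided by the duration gives the throughput figures reported for each test case.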

      All test cases send requests directly to the internal network address of the Tablestore instance to avoid the impact caused by the network environment.

      This performance test is not a stress test of the service limits, and it does not trigger throttling measures on the Tablestore server. The automatic load balancing feature of Tablestore guarantees horizontal scaling of the service capacity provided by a single table. A large-scale performance test may trigger backend throttling and incur high fees. If you plan to perform a large-scale performance test, submit a ticket so that the test can be run in a cost-effective manner.

      BatchWriteRow operations of Tablestore are processed concurrently by partition, and the rows written to each partition are persisted in a single disk write. We recommend that you aggregate BatchWriteRow requests by partition key value to reduce the number of disk writes per BatchWriteRow request and effectively improve write performance.
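
      The aggregation advice above can be sketched as a simple grouping step before issuing BatchWriteRow. The partition-routing rule (first character of the encoded key) and the class name BatchAggregator are illustrative assumptions; rows are represented as plain key strings to keep the sketch self-contained.

      ```java
      import java.util.ArrayList;
      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;

      // Sketch: group pending rows by partition key value so that each
      // BatchWriteRow request touches as few partitions (and therefore
      // as few disk writes) as possible.
      public class BatchAggregator {
          public static Map<String, List<String>> groupByPartitionKey(List<String> rowKeys) {
              Map<String, List<String>> batches = new HashMap<>();
              for (String key : rowKeys) {
                  // Assumption for illustration: the first character of the
                  // encoded primary key determines the target partition.
                  String partition = key.substring(0, 1);
                  batches.computeIfAbsent(partition, k -> new ArrayList<>()).add(key);
              }
              return batches; // issue one BatchWriteRow per group (chunked to 200 rows)
          }
      }
      ```

      Sending one request per group instead of mixing keys from many partitions in a single request means each BatchWriteRow resolves to fewer per-partition disk writes.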

      No data is written during the random read and random range read test cases, so their cache hit rates increase as the tests proceed. In low-stress scenarios, the cache hit rate increases slowly, and the results depend heavily on disk I/O capability. In high-stress scenarios, the cache hit rate increases quickly, and the results depend less on disk I/O capability.

      The BatchWriteRow and GetRange test cases consume a large amount of network bandwidth. If the read or write performance of your Tablestore instance is lower than expected, check whether your network bandwidth is saturated.

      The read performance of Tablestore is significantly affected by the data volume and the cache hit rate, so the results of the GetRow and GetRange test cases may not be reproducible in every scenario. Use the data in this report as a reference for similar scenarios. If your actual read throughput, write throughput, or latency differs greatly from the data in this report, contact Tablestore technical support to analyze the causes.