Columnar storage delivers high-performance data analysis and querying by storing data column by column. As a cloud-native data warehouse, ByteHouse adopts a columnar storage design to ensure read and write performance, support transactional consistency, and handle large-scale data computing, giving users a fast analysis experience and the capacity to process massive datasets in support of enterprise digital transformation.
Introduction to columnar storage
Columnar storage in an analytical database is a physical storage layout that organizes data by column rather than by row. Its main advantage is better performance for data analysis and queries, especially on large datasets.
Here are some of the key features of columnar storage:
Data compression: Because values in the same column tend to follow similar or identical patterns (e.g., dates, times, addresses), columnar storage compresses more efficiently and saves storage space (see the sketch after this list).
Data filtering performance: Columnar storage makes it very efficient to read only the columns a query needs. For complex queries that touch only some of a table's columns, this significantly reduces disk IO and improves query performance (also illustrated in the sketch below).
Compute locality: Because data is stored by column, certain computations (such as arithmetic or statistical functions) can operate on contiguous data directly in memory without frequent disk access, improving computational efficiency.
Data independence: Columnar storage allows columns in a table to be updated independently, which makes incremental updates and data maintenance much simpler and more efficient.
Data sharding and distributed processing: Due to the nature of columnar storage, it is well suited for distributed computing environments. Data can be sharded by columns and distributed to different compute nodes for parallel processing, enabling distributed processing and analysis of large-scale data.
Flexible data model: Columnar systems typically support multiple data models, such as row, column, and key-value layouts, allowing them to adapt to different data processing needs.
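To make the compression and column-pruning points concrete, here is a minimal Python sketch (not ByteHouse code; the sample table is invented for illustration). It compresses the same records row-wise and column-wise, then answers a single-column query by decompressing only that column:

```python
import zlib

# Invented sample table: per-column values are repetitive, rows are not.
rows = [(f"2024-01-{d:02d}", "Beijing", d * 10) for d in range(1, 29)]

# Row layout: values from different columns interleaved.
row_bytes = "|".join(",".join(map(str, r)) for r in rows).encode()

# Column layout: each column stored contiguously.
col_bytes = [("|".join(map(str, c))).encode() for c in zip(*rows)]

print("row-wise compressed:   ", len(zlib.compress(row_bytes)), "bytes")
print("column-wise compressed:", sum(len(zlib.compress(c)) for c in col_bytes), "bytes")

# Column pruning: a query over the numeric column decompresses only
# that column instead of decoding every full row.
compressed_cols = [zlib.compress(c) for c in col_bytes]
amounts = zlib.decompress(compressed_cols[2]).decode().split("|")
print("sum =", sum(int(v) for v in amounts))
```

Because the city column is constant and the dates share a long common prefix, the column-wise layout compresses noticeably better on this data, and the final query never touches the other two columns.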
ByteHouse's columnar storage design
ByteHouse is a cloud-native data warehouse that provides an ultra-fast analysis experience. It supports both real-time and offline analysis of massive data, offers convenient elastic scaling and strong analysis performance, and ships rich enterprise-grade features that help customers with digital transformation.
Generally, transactional databases use row storage to support transactions and highly concurrent reads and writes, while analytical databases use columnar storage to reduce IO and facilitate compression. ByteHouse uses columnar storage to ensure read/write performance and support transactional consistency, making it well suited to large-scale data computing.
Data layout
Table data is physically divided into multiple parts according to the partition key and stored under a unified logical path in cloud storage. The size of each part is bounded by limits on data volume and row count.
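A rough illustration of this layout in Python (a simplified sketch, not ByteHouse's actual on-disk format; the `MAX_ROWS_PER_PART` limit and function names are invented):

```python
from collections import defaultdict

MAX_ROWS_PER_PART = 3  # invented cap; real limits also bound byte size

def build_parts(rows, partition_key):
    """Group rows by partition key, then cut each group into bounded parts."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[partition_key(row)].append(row)
    parts = []
    for key, prows in partitions.items():
        for i in range(0, len(prows), MAX_ROWS_PER_PART):
            parts.append((key, prows[i:i + MAX_ROWS_PER_PART]))
    return parts

rows = [{"date": "2024-01-01", "v": i} for i in range(5)] + \
       [{"date": "2024-01-02", "v": i} for i in range(2)]
for key, part in build_parts(rows, lambda r: r["date"]):
    print(key, len(part), "rows")
```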
Part delta
When a part is first built, it is a single part data file stored in a mixed row-column layout. As DML operations, data dictionaries, bitmap indexes, and so on are built, incremental data accumulates for the part. This incremental data can be stored in either of two ways:
1. Rewrite the part data on every build.
2. Generate incremental (delta) data and asynchronously merge it into one large part file in the background.
Option 1 can affect the availability of the entire cluster:
1. Each build of a DML data dictionary may involve full I/O over every part of the table, which is costly.
2. Building DML and similar operations takes a long time to complete, which is not user-friendly.
For these reasons, we use option 2.
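The following Python sketch shows the idea behind option 2 (class and method names are invented; ByteHouse's actual mechanism is more involved): writes append cheap delta files, and a background merge later folds them into a new base part.

```python
class Part:
    def __init__(self, base_rows):
        self.base = list(base_rows)   # immutable base data
        self.deltas = []              # incremental data from DML, indexes, etc.

    def append_delta(self, delta_rows):
        # Cheap: only the delta is written; the base part is untouched.
        self.deltas.append(list(delta_rows))

    def merge(self):
        # Background compaction: fold all deltas into one new base.
        merged = self.base[:]
        for d in self.deltas:
            merged.extend(d)
        self.base, self.deltas = merged, []

part = Part(base_rows=[1, 2, 3])
part.append_delta([4, 5])   # a DML operation produces a small delta
part.append_delta([6])
part.merge()                # runs asynchronously in the real system
print(part.base)            # [1, 2, 3, 4, 5, 6]
```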
Part file contents
The part data is divided into two sections:
The first is meta information for the entire part, such as the row count, the schema, and the offset of each column's data within the data file; this is persisted and cached by the compute nodes.
The second is the actual data, including column .bin data, column .mrk data, map key .bin and map key index data, dictionary data, bitmap index data, and so on. This data is laid out in the part's data file at the offsets recorded in the meta information.
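The toy Python sketch below mimics this two-section layout (the serialization format and stream names are invented, not ByteHouse's real format): meta records each stream's offset and length, and a reader fetches only the byte range it needs.

```python
import io
import json

def write_part(streams):
    """Pack named byte streams into one data blob, recording (offset, length)."""
    data, meta = io.BytesIO(), {}
    for name, payload in streams.items():
        meta[name] = (data.tell(), len(payload))
        data.write(payload)
    return meta, data.getvalue()

def read_stream(meta, data, name):
    # A compute node caches `meta`, then fetches only the needed byte range.
    offset, length = meta[name]
    return data[offset:offset + length]

meta, data = write_part({
    "city.bin": b"Beijing|Shanghai",
    "city.mrk": b"\x00\x10",
    "city.dict": b"0:Beijing,1:Shanghai",
})
print(json.dumps(meta))
print(read_stream(meta, data, "city.bin"))
```

Because the meta section is small, compute nodes can persist and cache it, then issue range reads against cloud storage for just the streams a query touches.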
Compaction
ByteHouse supports splitting a part file into multiple smaller files: the maximum size and maximum number of rows of a part are configurable, and every part must satisfy these limits.
Compaction in ByteHouse is performed globally, which is consistent with the global block ID mentioned earlier.
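The packing aspect of compaction can be sketched in Python as follows (the `MAX_ROWS` cap is invented; real compaction also bounds part size in bytes, merges the underlying column files, and coordinates through global block IDs):

```python
MAX_ROWS = 100  # invented cap; real configs also bound part size in bytes

def compact(parts):
    """Greedily merge small parts into larger ones without exceeding MAX_ROWS.

    `parts` is a list of per-part row counts; a real implementation would
    merge the underlying column files and assign new global block IDs.
    """
    merged, current = [], 0
    for n in sorted(parts):
        if current + n > MAX_ROWS and current > 0:
            merged.append(current)
            current = 0
        current += n
    if current:
        merged.append(current)
    return merged

print(compact([10, 20, 30, 60, 90]))  # -> [60, 60, 90]
```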
Beyond columnar storage, ByteHouse also optimizes metadata management, its self-developed table engines, and other technologies to deliver an even faster analysis experience.