Package org.rocksdb
Interface AdvancedMutableColumnFamilyOptionsInterface<T extends AdvancedMutableColumnFamilyOptionsInterface<T>>
-
- All Known Subinterfaces:
MutableColumnFamilyOptionsInterface<T>
- All Known Implementing Classes:
ColumnFamilyOptions, MutableColumnFamilyOptions.MutableColumnFamilyOptionsBuilder, Options
public interface AdvancedMutableColumnFamilyOptionsInterface<T extends AdvancedMutableColumnFamilyOptionsInterface<T>>

Advanced Column Family Options which are mutable. Taken from include/rocksdb/advanced_options.h and MutableCFOptions in util/cf_options.h.
-
-
Method Summary
All methods are abstract instance methods.

- long arenaBlockSize(): The size of one block in arena memory allocation.
- long blobCompactionReadaheadSize(): Get compaction readahead for blob files.
- CompressionType blobCompressionType(): Get the compression algorithm in use for large values stored in blob files.
- long blobFileSize(): The size limit for blob files.
- int blobFileStartingLevel(): Get the starting LSM tree level to enable blob files.
- double blobGarbageCollectionAgeCutoff(): Get cutoff in terms of blob file age for garbage collection.
- double blobGarbageCollectionForceThreshold(): Get the current value of blobGarbageCollectionForceThreshold().
- boolean enableBlobFiles(): When set, large values (blobs) are written to separate blob files, and only pointers to them are stored in SST files.
- boolean enableBlobGarbageCollection(): Query whether garbage collection of blobs is enabled. Blob GC is performed as part of compaction.
- double experimentalMempurgeThreshold(): Threshold used in the MemPurge (memtable garbage collection) feature.
- long hardPendingCompactionBytesLimit(): All writes are stopped if the estimated bytes needed for compaction exceed this threshold.
- long inplaceUpdateNumLocks(): Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.
- int level0SlowdownWritesTrigger(): Soft limit on the number of level-0 files.
- int level0StopWritesTrigger(): Maximum number of level-0 files.
- double maxBytesForLevelMultiplier(): The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
- int[] maxBytesForLevelMultiplierAdditional(): Different max-size multipliers for different levels.
- long maxSequentialSkipInIterations(): An iterator's Next() call sequentially skips over keys with the same user-key unless this option is set.
- long maxSuccessiveMerges(): Maximum number of successive merge operations on a key in the memtable.
- int maxWriteBufferNumber(): Returns the maximum number of write buffers.
- long memtableHugePageSize(): Page size for huge page TLB for bloom in memtable.
- double memtablePrefixBloomSizeRatio(): If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom for the memtable with the size of write_buffer_size * memtable_prefix_bloom_size_ratio.
- boolean memtableWholeKeyFiltering(): Returns whether the whole key bloom filter is enabled in the memtable.
- long minBlobSize(): Get the size of the smallest value to be stored separately in a blob file.
- boolean paranoidFileChecks(): After writing every SST file, reopen it and read all the keys.
- long periodicCompactionSeconds(): Get the periodic compaction seconds.
- PrepopulateBlobCache prepopulateBlobCache(): Get the prepopulate blob cache option.
- boolean reportBgIoStats(): Determine whether IO stats in compactions and flushes are being measured.
- T setArenaBlockSize(long arenaBlockSize): The size of one block in arena memory allocation.
- T setBlobCompactionReadaheadSize(long blobCompactionReadaheadSize): Set compaction readahead for blob files.
- T setBlobCompressionType(CompressionType compressionType): Set the compression algorithm to use for large values stored in blob files.
- T setBlobFileSize(long blobFileSize): Set the size limit for blob files.
- T setBlobFileStartingLevel(int blobFileStartingLevel): Set the starting LSM tree level at which to enable blob files.
- T setBlobGarbageCollectionAgeCutoff(double blobGarbageCollectionAgeCutoff): Set cutoff in terms of blob file age for garbage collection.
- T setBlobGarbageCollectionForceThreshold(double blobGarbageCollectionForceThreshold): If the ratio of garbage in the oldest blob files exceeds this threshold, targeted compactions are scheduled to force garbage collecting the blob files in question, assuming they are all eligible based on the value of blobGarbageCollectionAgeCutoff().
- T setEnableBlobFiles(boolean enableBlobFiles): When set, large values (blobs) are written to separate blob files, and only pointers to them are stored in SST files.
- T setEnableBlobGarbageCollection(boolean enableBlobGarbageCollection): Enable/disable garbage collection of blobs.
- T setExperimentalMempurgeThreshold(double experimentalMempurgeThreshold): Threshold used in the MemPurge (memtable garbage collection) feature.
- T setHardPendingCompactionBytesLimit(long hardPendingCompactionBytesLimit): All writes are stopped if the estimated bytes needed for compaction exceed this threshold.
- T setInplaceUpdateNumLocks(long inplaceUpdateNumLocks): Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.
- T setLevel0SlowdownWritesTrigger(int level0SlowdownWritesTrigger): Soft limit on the number of level-0 files.
- T setLevel0StopWritesTrigger(int level0StopWritesTrigger): Maximum number of level-0 files.
- T setMaxBytesForLevelMultiplier(double multiplier): The ratio between the total size of level-(L+1) files and the total size of level-L files for all L.
- T setMaxBytesForLevelMultiplierAdditional(int[] maxBytesForLevelMultiplierAdditional): Different max-size multipliers for different levels.
- T setMaxSequentialSkipInIterations(long maxSequentialSkipInIterations): An iterator's Next() call sequentially skips over keys with the same user-key unless this option is set.
- T setMaxSuccessiveMerges(long maxSuccessiveMerges): Maximum number of successive merge operations on a key in the memtable.
- T setMaxWriteBufferNumber(int maxWriteBufferNumber): The maximum number of write buffers that are built up in memory.
- T setMemtableHugePageSize(long memtableHugePageSize): Page size for huge page TLB for bloom in memtable.
- T setMemtablePrefixBloomSizeRatio(double memtablePrefixBloomSizeRatio): If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom for the memtable with the size of write_buffer_size * memtable_prefix_bloom_size_ratio.
- T setMemtableWholeKeyFiltering(boolean memtableWholeKeyFiltering): Enable the whole key bloom filter in the memtable.
- T setMinBlobSize(long minBlobSize): Set the size of the smallest value to be stored separately in a blob file.
- T setParanoidFileChecks(boolean paranoidFileChecks): After writing every SST file, reopen it and read all the keys.
- T setPeriodicCompactionSeconds(long periodicCompactionSeconds): Files older than this value will be picked up for compaction and re-written to the same level as they were before.
- T setPrepopulateBlobCache(PrepopulateBlobCache prepopulateBlobCache): Set the prepopulate blob cache option.
- T setReportBgIoStats(boolean reportBgIoStats): Measure IO stats in compactions and flushes, if true.
- T setSoftPendingCompactionBytesLimit(long softPendingCompactionBytesLimit): All writes are slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold.
- T setTargetFileSizeBase(long targetFileSizeBase): The target file size for compaction.
- T setTargetFileSizeMultiplier(int multiplier): Defines the size ratio between a level-L file and a level-(L+1) file.
- T setTtl(long ttl): Non-bottom-level files older than TTL will go through the compaction process.
- long softPendingCompactionBytesLimit(): All writes are slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold.
- long targetFileSizeBase(): The target file size for compaction.
- int targetFileSizeMultiplier(): Defines the size ratio between a level-(L+1) file and a level-L file.
- long ttl(): Get the TTL for non-bottom-level files that will go through the compaction process.
-
Method Detail
-
setMaxWriteBufferNumber
T setMaxWriteBufferNumber(int maxWriteBufferNumber)
The maximum number of write buffers that are built up in memory, so that when one write buffer is being flushed to storage, new writes can continue in another write buffer. Default: 2- Parameters:
maxWriteBufferNumber- maximum number of write buffers.- Returns:
- the instance of the current options.
-
maxWriteBufferNumber
int maxWriteBufferNumber()
Returns the maximum number of write buffers.- Returns:
- maximum number of write buffers.
- See Also:
setMaxWriteBufferNumber(int)
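As a minimal configuration sketch (assuming the rocksdbjni artifact is on the classpath and its native library loads), each setter returns the options object itself, so calls chain fluently:

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class WriteBufferConfig {
    public static void main(String[] args) {
        RocksDB.loadLibrary(); // load the native JNI library once per process

        // try-with-resources: ColumnFamilyOptions owns native memory
        try (ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                 .setMaxWriteBufferNumber(4)) { // allow up to 4 memtables
            System.out.println(cfOpts.maxWriteBufferNumber());
        }
    }
}
```

The fluent return value is what makes the `T` self-type parameter of this interface useful: `ColumnFamilyOptions` setters return `ColumnFamilyOptions`, not the raw interface.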
-
setInplaceUpdateNumLocks
T setInplaceUpdateNumLocks(long inplaceUpdateNumLocks)
Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.- Parameters:
inplaceUpdateNumLocks- the number of locks used for inplace updates.- Returns:
- the reference to the current options.
- Throws:
java.lang.IllegalArgumentException- thrown on 32-Bit platforms while overflowing the underlying platform specific value.
-
inplaceUpdateNumLocks
long inplaceUpdateNumLocks()
Number of locks used for inplace update. Default: 10000 if inplace_update_support = true, else 0.- Returns:
- the number of locks used for inplace update.
-
setMemtablePrefixBloomSizeRatio
T setMemtablePrefixBloomSizeRatio(double memtablePrefixBloomSizeRatio)
If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom for the memtable with the size of write_buffer_size * memtable_prefix_bloom_size_ratio. If it is larger than 0.25, it is sanitized to 0.25. Default: 0 (disabled)- Parameters:
memtablePrefixBloomSizeRatio- the ratio of memtable used by the bloom filter, 0 means no bloom filter- Returns:
- the reference to the current options.
-
memtablePrefixBloomSizeRatio
double memtablePrefixBloomSizeRatio()
If prefix_extractor is set and memtable_prefix_bloom_size_ratio is not 0, create a prefix bloom for the memtable with the size of write_buffer_size * memtable_prefix_bloom_size_ratio. If it is larger than 0.25, it is sanitized to 0.25. Default: 0 (disabled)- Returns:
- the ratio of memtable used by the bloom filter
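The sizing rule can be made concrete with a plain-Java sketch of the arithmetic described above (the 0.25 clamp and the write_buffer_size product); the helper names are illustrative, not part of the RocksDB API:

```java
public class MemtableBloomSizing {
    /** Clamp the configured ratio as the option text describes:
     *  values above 0.25 are sanitized down to 0.25. */
    static double sanitizeRatio(double ratio) {
        return Math.min(ratio, 0.25);
    }

    /** Approximate prefix-bloom budget: write_buffer_size * ratio. */
    static long bloomBytes(long writeBufferSize, double ratio) {
        return (long) (writeBufferSize * sanitizeRatio(ratio));
    }

    public static void main(String[] args) {
        long writeBufferSize = 64L << 20; // a 64 MB memtable
        System.out.println(bloomBytes(writeBufferSize, 0.1)); // ~6.4 MB
        System.out.println(bloomBytes(writeBufferSize, 0.5)); // clamped to 0.25
    }
}
```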
-
setExperimentalMempurgeThreshold
T setExperimentalMempurgeThreshold(double experimentalMempurgeThreshold)
Threshold used in the MemPurge (memtable garbage collection) feature. A value of 0.0 corresponds to no MemPurge, a value of 1.0 will trigger a MemPurge as often as possible. Default: 0.0 (disabled)- Parameters:
experimentalMempurgeThreshold- the threshold used by the MemPurge decider.- Returns:
- the reference to the current options.
-
experimentalMempurgeThreshold
double experimentalMempurgeThreshold()
Threshold used in the MemPurge (memtable garbage collection) feature. A value of 0.0 corresponds to no MemPurge, a value of 1.0 will trigger a MemPurge as often as possible. Default: 0.0 (disabled)- Returns:
- the threshold used by the MemPurge decider
-
setMemtableWholeKeyFiltering
T setMemtableWholeKeyFiltering(boolean memtableWholeKeyFiltering)
Enable the whole key bloom filter in the memtable. Note this will only take effect if memtable_prefix_bloom_size_ratio is not 0. Enabling whole key filtering can potentially reduce CPU usage for point lookups. Default: false (disabled)- Parameters:
memtableWholeKeyFiltering- true if whole key bloom filter is enabled in memtable- Returns:
- the reference to the current options.
-
memtableWholeKeyFiltering
boolean memtableWholeKeyFiltering()
Returns whether the whole key bloom filter is enabled in the memtable.- Returns:
- true if whole key bloom filter is enabled in memtable
-
setMemtableHugePageSize
T setMemtableHugePageSize(long memtableHugePageSize)
Page size for huge page TLB for bloom in memtable. If ≤ 0, allocate from malloc rather than from the huge page TLB. Huge pages must be reserved for the allocation to succeed, for example: sysctl -w vm.nr_hugepages=20. See the Linux kernel doc Documentation/vm/hugetlbpage.txt.- Parameters:
memtableHugePageSize- The page size of the huge page tlb- Returns:
- the reference to the current options.
-
memtableHugePageSize
long memtableHugePageSize()
Page size for huge page TLB for bloom in memtable. If ≤ 0, allocate from malloc rather than from the huge page TLB. Huge pages must be reserved for the allocation to succeed, for example: sysctl -w vm.nr_hugepages=20. See the Linux kernel doc Documentation/vm/hugetlbpage.txt.- Returns:
- The page size of the huge page tlb
-
setArenaBlockSize
T setArenaBlockSize(long arenaBlockSize)
The size of one block in arena memory allocation. If ≤ 0, a proper value is automatically calculated (usually 1/10 of write_buffer_size). There are two additional restrictions on the specified size: (1) the size should be in the range [4096, 2 << 30] and (2) it should be a multiple of the CPU word size (which helps with memory alignment). The size is automatically checked and adjusted to conform to these restrictions. Default: 0- Parameters:
arenaBlockSize- the size of an arena block- Returns:
- the reference to the current options.
- Throws:
java.lang.IllegalArgumentException- thrown on 32-Bit platforms while overflowing the underlying platform specific value.
-
arenaBlockSize
long arenaBlockSize()
The size of one block in arena memory allocation. If ≤ 0, a proper value is automatically calculated (usually 1/10 of write_buffer_size). There are two additional restrictions on the specified size: (1) the size should be in the range [4096, 2 << 30] and (2) it should be a multiple of the CPU word size (which helps with memory alignment). The size is automatically checked and adjusted to conform to these restrictions. Default: 0- Returns:
- the size of an arena block
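One plausible reading of the documented adjustment (clamp into [4096, 2 << 30], then round to a multiple of the CPU word size) can be sketched in plain Java. The exact rounding RocksDB applies internally may differ; this helper is illustrative only:

```java
public class ArenaBlockSize {
    static final long MIN = 4096L;
    static final long MAX = 2L << 30; // 2 GiB upper bound from the docs
    static final long WORD = 8L;      // assume a 64-bit CPU word

    /** Clamp into [4096, 2 << 30] and round up to a multiple of WORD. */
    static long sanitize(long requested) {
        long size = Math.max(MIN, Math.min(MAX, requested));
        long rem = size % WORD;
        return rem == 0 ? size : size + (WORD - rem);
    }

    public static void main(String[] args) {
        System.out.println(sanitize(1000)); // below range: raised to 4096
        System.out.println(sanitize(8195)); // rounded up to the next word multiple
    }
}
```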
-
setLevel0SlowdownWritesTrigger
T setLevel0SlowdownWritesTrigger(int level0SlowdownWritesTrigger)
Soft limit on the number of level-0 files. Writes are slowed down at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.- Parameters:
level0SlowdownWritesTrigger- The soft limit on the number of level-0 files- Returns:
- the reference to the current options.
-
level0SlowdownWritesTrigger
int level0SlowdownWritesTrigger()
Soft limit on the number of level-0 files. Writes are slowed down at this point. A value < 0 means that no write slowdown will be triggered by the number of files in level-0.- Returns:
- The soft limit on the number of level-0 files
-
setLevel0StopWritesTrigger
T setLevel0StopWritesTrigger(int level0StopWritesTrigger)
Maximum number of level-0 files. We stop writes at this point.- Parameters:
level0StopWritesTrigger- The maximum number of level-0 files- Returns:
- the reference to the current options.
-
level0StopWritesTrigger
int level0StopWritesTrigger()
Maximum number of level-0 files. We stop writes at this point.- Returns:
- The maximum number of level-0 files
-
setTargetFileSizeBase
T setTargetFileSizeBase(long targetFileSizeBase)
The target file size for compaction. This targetFileSizeBase determines a level-1 file size. The target file size for level L can be calculated as targetFileSizeBase * (targetFileSizeMultiplier ^ (L-1)). For example, if targetFileSizeBase is 2MB and targetFileSizeMultiplier is 10, then each file on level-1 will be 2MB, each file on level-2 will be 20MB, and each file on level-3 will be 200MB. By default, targetFileSizeBase is 64MB.- Parameters:
targetFileSizeBase- the base target size of a level-1 file.- Returns:
- the reference to the current options.
- See Also:
setTargetFileSizeMultiplier(int)
-
targetFileSizeBase
long targetFileSizeBase()
The target file size for compaction. This targetFileSizeBase determines a level-1 file size. The target file size for level L can be calculated as targetFileSizeBase * (targetFileSizeMultiplier ^ (L-1)). For example, if targetFileSizeBase is 2MB and targetFileSizeMultiplier is 10, then each file on level-1 will be 2MB, each file on level-2 will be 20MB, and each file on level-3 will be 200MB. By default, targetFileSizeBase is 64MB.- Returns:
- the base target size of a level-1 file.
- See Also:
targetFileSizeMultiplier()
-
setTargetFileSizeMultiplier
T setTargetFileSizeMultiplier(int multiplier)
targetFileSizeMultiplier defines the size ratio between a level-L file and level-(L+1) file. By default target_file_size_multiplier is 1, meaning files in different levels have the same target.- Parameters:
multiplier- the size ratio between a level-(L+1) file and level-L file.- Returns:
- the reference to the current options.
-
targetFileSizeMultiplier
int targetFileSizeMultiplier()
targetFileSizeMultiplier defines the size ratio between a level-(L+1) file and level-L file. By default targetFileSizeMultiplier is 1, meaning files in different levels have the same target.- Returns:
- the size ratio between a level-(L+1) file and level-L file.
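The per-level formula given above (targetFileSizeBase * targetFileSizeMultiplier ^ (L-1)) can be checked with a short plain-Java helper; the method name is illustrative, not part of the RocksDB API:

```java
public class TargetFileSizes {
    /** Target file size for level L (L >= 1):
     *  targetFileSizeBase * targetFileSizeMultiplier^(L-1). */
    static long targetFileSize(long base, int multiplier, int level) {
        long size = base;
        for (int i = 1; i < level; i++) {
            size *= multiplier;
        }
        return size;
    }

    public static void main(String[] args) {
        long base = 2L << 20; // 2 MB, the example value from the docs
        for (int level = 1; level <= 3; level++) {
            System.out.println("L" + level + ": "
                + targetFileSize(base, 10, level));
        }
        // Reproduces the worked example: 2 MB, 20 MB, 200 MB.
    }
}
```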
-
setMaxBytesForLevelMultiplier
T setMaxBytesForLevelMultiplier(double multiplier)
The ratio between the total size of level-(L+1) files and the total size of level-L files for all L. Default: 10- Parameters:
multiplier- the ratio between the total size of level-(L+1) files and the total size of level-L files for all L.- Returns:
- the reference to the current options.
- See Also:
MutableColumnFamilyOptionsInterface.setMaxBytesForLevelBase(long)
-
maxBytesForLevelMultiplier
double maxBytesForLevelMultiplier()
The ratio between the total size of level-(L+1) files and the total size of level-L files for all L. Default: 10- Returns:
- the ratio between the total size of level-(L+1) files and
the total size of level-L files for all L.
- See Also:
MutableColumnFamilyOptionsInterface.maxBytesForLevelBase()
-
setMaxBytesForLevelMultiplierAdditional
T setMaxBytesForLevelMultiplierAdditional(int[] maxBytesForLevelMultiplierAdditional)
Different max-size multipliers for different levels. These are multiplied by max_bytes_for_level_multiplier to arrive at the max-size of each level. Default: 1- Parameters:
maxBytesForLevelMultiplierAdditional- The max-size multipliers for each level- Returns:
- the reference to the current options.
-
maxBytesForLevelMultiplierAdditional
int[] maxBytesForLevelMultiplierAdditional()
Different max-size multipliers for different levels. These are multiplied by max_bytes_for_level_multiplier to arrive at the max-size of each level. Default: 1- Returns:
- The max-size multipliers for each level
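How the additional per-level multipliers compose with max_bytes_for_level_multiplier can be sketched in plain Java. The exact indexing RocksDB uses internally may differ slightly; here each level's cap is assumed to be the previous cap times the global multiplier times that level's additional factor:

```java
import java.util.Arrays;

public class LevelMaxBytes {
    /** Sketch: caps[i] = caps[i-1] * multiplier * additional[i-1]. */
    static long[] levelMaxBytes(long base, double multiplier, int[] additional) {
        long[] caps = new long[additional.length];
        caps[0] = base;
        for (int i = 1; i < additional.length; i++) {
            caps[i] = (long) (caps[i - 1] * multiplier * additional[i - 1]);
        }
        return caps;
    }

    public static void main(String[] args) {
        // With every additional multiplier at its default of 1, the caps
        // grow purely by max_bytes_for_level_multiplier (10x per level).
        long base = 256L << 20; // assume max_bytes_for_level_base = 256 MB
        System.out.println(Arrays.toString(
            levelMaxBytes(base, 10.0, new int[] {1, 1, 1})));
    }
}
```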
-
setSoftPendingCompactionBytesLimit
T setSoftPendingCompactionBytesLimit(long softPendingCompactionBytesLimit)
All writes will be slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold. Default: 64GB- Parameters:
softPendingCompactionBytesLimit- The soft limit to impose on compaction- Returns:
- the reference to the current options.
-
softPendingCompactionBytesLimit
long softPendingCompactionBytesLimit()
All writes will be slowed down to at least delayed_write_rate if the estimated bytes needed for compaction exceed this threshold. Default: 64GB- Returns:
- The soft limit to impose on compaction
-
setHardPendingCompactionBytesLimit
T setHardPendingCompactionBytesLimit(long hardPendingCompactionBytesLimit)
All writes are stopped if the estimated bytes needed for compaction exceed this threshold. Default: 256GB- Parameters:
hardPendingCompactionBytesLimit- The hard limit to impose on compaction- Returns:
- the reference to the current options.
-
hardPendingCompactionBytesLimit
long hardPendingCompactionBytesLimit()
All writes are stopped if the estimated bytes needed for compaction exceed this threshold. Default: 256GB- Returns:
- The hard limit to impose on compaction
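The write-stall options above are typically tuned together. A minimal configuration sketch (assuming the rocksdbjni artifact is on the classpath; the trigger values are illustrative, not recommendations):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class WriteStallConfig {
    public static void main(String[] args) {
        RocksDB.loadLibrary(); // load the native JNI library

        try (ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                 .setLevel0SlowdownWritesTrigger(20)             // start slowing writes
                 .setLevel0StopWritesTrigger(36)                 // stop writes entirely
                 .setSoftPendingCompactionBytesLimit(64L << 30)  // 64 GB soft limit
                 .setHardPendingCompactionBytesLimit(256L << 30) // 256 GB hard limit
        ) {
            System.out.println(cfOpts.level0StopWritesTrigger());
        }
    }
}
```

The soft limits throttle writers gradually; the hard limits stop them outright, so the soft value should always be reached first.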
-
setMaxSequentialSkipInIterations
T setMaxSequentialSkipInIterations(long maxSequentialSkipInIterations)
An iterator's Next() call sequentially skips over keys with the same user-key unless this option is set. This number specifies the number of keys (with the same user-key) that will be sequentially skipped before a reseek is issued. Default: 8- Parameters:
maxSequentialSkipInIterations- the number of keys that can be skipped in an iteration.- Returns:
- the reference to the current options.
-
maxSequentialSkipInIterations
long maxSequentialSkipInIterations()
An iterator's Next() call sequentially skips over keys with the same user-key unless this option is set. This number specifies the number of keys (with the same user-key) that will be sequentially skipped before a reseek is issued. Default: 8- Returns:
- the number of keys that can be skipped in an iteration.
-
setMaxSuccessiveMerges
T setMaxSuccessiveMerges(long maxSuccessiveMerges)
Maximum number of successive merge operations on a key in the memtable. When a merge operation is added to the memtable and the maximum number of successive merges is reached, the value of the key will be calculated and inserted into the memtable instead of the merge operation. This will ensure that there are never more than max_successive_merges merge operations in the memtable. Default: 0 (disabled)- Parameters:
maxSuccessiveMerges- the maximum number of successive merges.- Returns:
- the reference to the current options.
- Throws:
java.lang.IllegalArgumentException- thrown on 32-Bit platforms while overflowing the underlying platform specific value.
-
maxSuccessiveMerges
long maxSuccessiveMerges()
Maximum number of successive merge operations on a key in the memtable. When a merge operation is added to the memtable and the maximum number of successive merges is reached, the value of the key will be calculated and inserted into the memtable instead of the merge operation. This will ensure that there are never more than max_successive_merges merge operations in the memtable. Default: 0 (disabled)- Returns:
- the maximum number of successive merges.
-
setParanoidFileChecks
T setParanoidFileChecks(boolean paranoidFileChecks)
After writing every SST file, reopen it and read all the keys. Default: false- Parameters:
paranoidFileChecks- true to enable paranoid file checks- Returns:
- the reference to the current options.
-
paranoidFileChecks
boolean paranoidFileChecks()
After writing every SST file, reopen it and read all the keys. Default: false- Returns:
- true if paranoid file checks are enabled
-
setReportBgIoStats
T setReportBgIoStats(boolean reportBgIoStats)
Measure IO stats in compactions and flushes, if true. Default: false- Parameters:
reportBgIoStats- true to enable reporting- Returns:
- the reference to the current options.
-
reportBgIoStats
boolean reportBgIoStats()
Determine whether IO stats in compactions and flushes are being measured.- Returns:
- true if reporting is enabled
-
setTtl
T setTtl(long ttl)
Non-bottom-level files older than TTL will go through the compaction process. This requires MutableDBOptionsInterface.maxOpenFiles() to be set to -1. Enabled only for level compaction for now. Default: 0 (disabled) Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
ttl- the time-to-live.- Returns:
- the reference to the current options.
-
ttl
long ttl()
Get the TTL for non-bottom-level files that will go through the compaction process. See setTtl(long).- Returns:
- the time-to-live.
-
setPeriodicCompactionSeconds
T setPeriodicCompactionSeconds(long periodicCompactionSeconds)
Files older than this value will be picked up for compaction and re-written to the same level as they were before. One main use of the feature is to make sure a file goes through compaction filters periodically. Users can also use the feature to clear up SST files using an old format. A file's age is computed by looking at the file_creation_time or creation_time table properties, in that order, if they have valid non-zero values; if not, the age is based on the file's last modified time (given by the underlying Env). Supported in Level and FIFO compaction. In FIFO compaction, this option has the same meaning as TTL and whichever is stricter will be used. Pre-requisite: max_open_files == -1. Unit: seconds. Example: 7 days = 7 * 24 * 60 * 60. Values: 0 turns off periodic compactions; UINT64_MAX - 1 (i.e. 0xfffffffffffffffe) lets RocksDB control this feature as needed. For now, RocksDB will change this value to 30 days (i.e. 30 * 24 * 60 * 60) so that every file goes through the compaction process at least once every 30 days if not compacted sooner. In FIFO compaction, since the option has the same meaning as ttl, when this value is left at its default and ttl is left at 0, 30 days will be used; otherwise, min(ttl, periodic_compaction_seconds) will be used. Default: 0xfffffffffffffffe (allow RocksDB to auto-tune) Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
periodicCompactionSeconds- the periodic compaction in seconds.- Returns:
- the reference to the current options.
-
periodicCompactionSeconds
long periodicCompactionSeconds()
Get the periodicCompactionSeconds. See setPeriodicCompactionSeconds(long).- Returns:
- the periodic compaction in seconds.
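The unit arithmetic and the auto-tune sentinel described above can be checked in plain Java; note the sentinel does not fit in a signed long as a positive value, so it should be compared with ==, not ordering operators. The names below are illustrative, not part of the RocksDB API:

```java
public class PeriodicCompactionValues {
    // Sentinel from the docs: UINT64_MAX - 1 lets RocksDB auto-tune.
    static final long AUTO_TUNE = 0xfffffffffffffffeL;

    /** Seconds in a number of days, e.g. the 7-day example from the docs. */
    static long days(int n) {
        return n * 24L * 60L * 60L;
    }

    public static void main(String[] args) {
        System.out.println(days(7));  // the 7-day example
        System.out.println(days(30)); // the auto-tuned 30-day period
        // As a signed Java long the sentinel prints as -2; compare with ==.
        System.out.println(AUTO_TUNE == 0xfffffffffffffffeL);
    }
}
```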
-
setEnableBlobFiles
T setEnableBlobFiles(boolean enableBlobFiles)
When set, large values (blobs) are written to separate blob files, and only pointers to them are stored in SST files. This can reduce write amplification for large-value use cases at the cost of introducing a level of indirection for reads. See also the options min_blob_size, blob_file_size, blob_compression_type, enable_blob_garbage_collection, and blob_garbage_collection_age_cutoff. Default: false Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
enableBlobFiles- true iff blob files should be enabled- Returns:
- the reference to the current options.
-
enableBlobFiles
boolean enableBlobFiles()
When set, large values (blobs) are written to separate blob files, and only pointers to them are stored in SST files. This can reduce write amplification for large-value use cases at the cost of introducing a level of indirection for reads. See also the options min_blob_size, blob_file_size, blob_compression_type, enable_blob_garbage_collection, and blob_garbage_collection_age_cutoff. Default: false Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Returns:
- true if blob files are enabled
-
setMinBlobSize
T setMinBlobSize(long minBlobSize)
Set the size of the smallest value to be stored separately in a blob file. Values which have an uncompressed size smaller than this threshold are stored alongside the keys in SST files in the usual fashion. A value of zero for this option means that all values are stored in blob files. Note that enable_blob_files has to be set in order for this option to have any effect. Default: 0 Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
minBlobSize- the size of the smallest value to be stored separately in a blob file- Returns:
- the reference to the current options.
-
minBlobSize
long minBlobSize()
Get the size of the smallest value to be stored separately in a blob file. Values which have an uncompressed size smaller than this threshold are stored alongside the keys in SST files in the usual fashion. A value of zero for this option means that all values are stored in blob files. Note that enable_blob_files has to be set in order for this option to have any effect. Default: 0 Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Returns:
- the current minimum size of value which is stored separately in a blob
-
setBlobFileSize
T setBlobFileSize(long blobFileSize)
Set the size limit for blob files. When writing blob files, a new file is opened once this limit is reached. Note that enable_blob_files has to be set in order for this option to have any effect. Default: 256 MB Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
blobFileSize- the size limit for blob files- Returns:
- the reference to the current options.
-
blobFileSize
long blobFileSize()
The size limit for blob files. When writing blob files, a new file is opened once this limit is reached.- Returns:
- the current size limit for blob files
-
setBlobCompressionType
T setBlobCompressionType(CompressionType compressionType)
Set the compression algorithm to use for large values stored in blob files. Note that enable_blob_files has to be set in order for this option to have any effect. Default: no compression Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
compressionType- the compression algorithm to use.- Returns:
- the reference to the current options.
-
blobCompressionType
CompressionType blobCompressionType()
Get the compression algorithm in use for large values stored in blob files. Note that enable_blob_files has to be set in order for this option to have any effect.- Returns:
- the current compression algorithm
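Since the blob options above only take effect once enable_blob_files is set, they are usually configured as a group. A minimal sketch (assuming the rocksdbjni artifact is on the classpath; the threshold and size values are illustrative):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.RocksDB;

public class BlobConfig {
    public static void main(String[] args) {
        RocksDB.loadLibrary(); // load the native JNI library

        try (ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
                 .setEnableBlobFiles(true)     // required for the rest to take effect
                 .setMinBlobSize(4096)         // values >= 4 KB go to blob files
                 .setBlobFileSize(256L << 20)  // roll a new blob file every 256 MB
                 .setBlobCompressionType(CompressionType.ZSTD_COMPRESSION)
        ) {
            System.out.println(cfOpts.enableBlobFiles());
        }
    }
}
```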
-
setEnableBlobGarbageCollection
T setEnableBlobGarbageCollection(boolean enableBlobGarbageCollection)
Enable/disable garbage collection of blobs. Blob GC is performed as part of compaction. Valid blobs residing in blob files older than a cutoff get relocated to new files as they are encountered during compaction, which makes it possible to clean up blob files once they contain nothing but obsolete/garbage blobs. See also blob_garbage_collection_age_cutoff below. Default: false- Parameters:
enableBlobGarbageCollection- the new enabled/disabled state of blob garbage collection- Returns:
- the reference to the current options.
-
enableBlobGarbageCollection
boolean enableBlobGarbageCollection()
Query whether garbage collection of blobs is enabled. Blob GC is performed as part of compaction. Valid blobs residing in blob files older than a cutoff get relocated to new files as they are encountered during compaction, which makes it possible to clean up blob files once they contain nothing but obsolete/garbage blobs. See also blob_garbage_collection_age_cutoff. Default: false- Returns:
- true if blob garbage collection is currently enabled.
-
setBlobGarbageCollectionAgeCutoff
T setBlobGarbageCollectionAgeCutoff(double blobGarbageCollectionAgeCutoff)
Set cutoff in terms of blob file age for garbage collection. Blobs in the oldest N blob files will be relocated when encountered during compaction, where N = garbage_collection_cutoff * number_of_blob_files. Note that enable_blob_garbage_collection has to be set in order for this option to have any effect. Default: 0.25- Parameters:
blobGarbageCollectionAgeCutoff- the new age cutoff- Returns:
- the reference to the current options.
-
blobGarbageCollectionAgeCutoff
double blobGarbageCollectionAgeCutoff()
Get cutoff in terms of blob file age for garbage collection. Blobs in the oldest N blob files will be relocated when encountered during compaction, where N = garbage_collection_cutoff * number_of_blob_files. Note that enable_blob_garbage_collection has to be set in order for this option to have any effect. Default: 0.25- Returns:
- the current age cutoff for garbage collection
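The cutoff arithmetic above (N = garbage_collection_cutoff * number_of_blob_files) can be illustrated with a small plain-Java helper; the method name is illustrative, not part of the RocksDB API:

```java
public class BlobGcCutoff {
    /** N = garbage_collection_cutoff * number_of_blob_files:
     *  the oldest N blob files are in scope for relocation. */
    static int oldestFilesInScope(double cutoff, int numBlobFiles) {
        return (int) (cutoff * numBlobFiles);
    }

    public static void main(String[] args) {
        // With the default cutoff of 0.25 and 16 blob files on disk,
        // the 4 oldest files are considered during compaction.
        System.out.println(oldestFilesInScope(0.25, 16));
    }
}
```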
-
setBlobGarbageCollectionForceThreshold
T setBlobGarbageCollectionForceThreshold(double blobGarbageCollectionForceThreshold)
If the ratio of garbage in the oldest blob files exceeds this threshold, targeted compactions are scheduled in order to force garbage collecting the blob files in question, assuming they are all eligible based on the value of blobGarbageCollectionAgeCutoff() above. This option is currently only supported with leveled compactions. Note that enableBlobGarbageCollection() has to be set in order for this option to have any effect. Default: 1.0 Dynamically changeable through the SetOptions() API- Parameters:
blobGarbageCollectionForceThreshold- new value for the threshold- Returns:
- the reference to the current options
-
blobGarbageCollectionForceThreshold
double blobGarbageCollectionForceThreshold()
Get the current value of blobGarbageCollectionForceThreshold().- Returns:
- the current threshold at which garbage collection of blobs is forced
-
setBlobCompactionReadaheadSize
T setBlobCompactionReadaheadSize(long blobCompactionReadaheadSize)
Set compaction readahead for blob files. Default: 0 Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
blobCompactionReadaheadSize- the compaction readahead for blob files- Returns:
- the reference to the current options.
-
blobCompactionReadaheadSize
long blobCompactionReadaheadSize()
Get compaction readahead for blob files.- Returns:
- the current compaction readahead for blob files
-
setBlobFileStartingLevel
T setBlobFileStartingLevel(int blobFileStartingLevel)
Set the starting LSM tree level at which to enable blob files. Default: 0 Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
blobFileStartingLevel- the starting level to enable blob files- Returns:
- the reference to the current options.
-
blobFileStartingLevel
int blobFileStartingLevel()
Get the starting LSM tree level to enable blob files. Default: 0- Returns:
- the current LSM tree level to enable blob files.
-
setPrepopulateBlobCache
T setPrepopulateBlobCache(PrepopulateBlobCache prepopulateBlobCache)
Set the prepopulate blob cache option. Default: disabled Dynamically changeable through RocksDB.setOptions(ColumnFamilyHandle, MutableColumnFamilyOptions).- Parameters:
prepopulateBlobCache- the prepopulate blob cache option- Returns:
- the reference to the current options.
-
prepopulateBlobCache
PrepopulateBlobCache prepopulateBlobCache()
Get the prepopulate blob cache option. Default: disabled- Returns:
- the current prepopulate blob cache option.
-
-