tip: 298
title: Reformat manifest
author: halibobo1205@gmail.com
discussions to: https://github.com/tronprotocol/TIPs/issues/298
status: Final
type: Standards Track
category: Core
created: 2021-07-16
This TIP describes the levelDB Manifest file reformatting function.
When the c++ version of levelDB has been running for a long time, the manifest file may grow very large, resulting in high memory usage and a long start-up time when the database is restarted. For this situation, the java version of levelDB is used to reformat (regularize) the manifest file before the database is started.
Optimize the start-up of the c++ version of levelDB, reducing both memory usage and start-up time.
- Check the manifest size.
- Bulk load the manifest using the java version.
- Reopen with leveldbjni.
While the c++ version of levelDB is running, the compaction process keeps adding and deleting files, so the set of levelDB sst files keeps changing and the manifest keeps growing (levelDB rewrites this file only when the database is opened; at runtime it only appends to it). Once the manifest grows to a certain size, restarting the database may hit an OOM and fail to open the database.
- Check the manifest size.
- Bulk load the manifest using the java version.
- Reopen with leveldbjni.
1. Check the manifest size.
// Read "CURRENT" file, which contains a pointer to the current manifest file
File currentFile = new File(dir, Filename.currentFileName());
if (!currentFile.exists()) {
return false;
}
String currentName = com.google.common.io.Files.asCharSource(currentFile, UTF_8).read();
if (currentName.isEmpty() || currentName.charAt(currentName.length() - 1) != '\n') {
return false;
}
currentName = currentName.substring(0, currentName.length() - 1);
File current = new File(dir, currentName);
if (!current.isFile()) {
return false;
}
long maxSize = options.maxManifestSize();
if (maxSize < 0) {
return false;
}
logger.info("CurrentName {}/{},size {} kb.", dir, currentName, current.length() / 1024);
return current.length() >= maxSize * 1024 * 1024;
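Note that current.length() is the manifest size in bytes while maxManifestSize() is configured in MB, hence the * 1024 * 1024 conversion; a negative maxManifestSize() disables the check, so the reformatting step is skipped entirely.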
2. Bulk load the manifest using the java version.
// open file channel
try (FileInputStream fis = new FileInputStream(new File(databaseDir, currentName));
     FileChannel fileChannel = fis.getChannel()) {
  // read log edit log
  Long nextFileNumber = null;
  Long lastSequence = null;
  Long logNumber = null;
  Long prevLogNumber = null;
  Builder builder = new Builder(this, current);

  LogReader reader = new LogReader(fileChannel, throwExceptionMonitor(), true, 0);
  VersionEdit edit;
  for (Slice record = reader.readRecord(); record != null; record = reader.readRecord()) {
    // read version edit
    edit = new VersionEdit(record);

    // verify comparator
    // todo implement user comparator
    String editComparator = edit.getComparatorName();
    String userComparator = internalKeyComparator.name();
    checkArgument(editComparator == null || editComparator.equals(userComparator),
        "Expected user comparator %s to match existing database comparator ", userComparator, editComparator);

    // apply edit
    builder.apply(edit);
    if (builder.batchSize > options.maxBatchSize()) {
      Version version = new Version(this);
      builder.saveTo(version);
      // Install recovered version
      finalizeVersion(version);
      appendVersion(version);
      builder = new Builder(this, current);
    }

    // save edit values for verification below
    logNumber = coalesce(edit.getLogNumber(), logNumber);
    prevLogNumber = coalesce(edit.getPreviousLogNumber(), prevLogNumber);
    nextFileNumber = coalesce(edit.getNextFileNumber(), nextFileNumber);
    lastSequence = coalesce(edit.getLastSequenceNumber(), lastSequence);
  }
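The Builder.apply method shown next accumulates the added and deleted file records from each edit and maintains the batchSize counter that the recovery loop above compares against maxBatchSize.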
/**
 * Apply the specified edit to the current state.
 */
public void apply(VersionEdit edit)
{
  // Update compaction pointers
  for (Entry<Integer, InternalKey> entry : edit.getCompactPointers().entrySet()) {
    Integer level = entry.getKey();
    InternalKey internalKey = entry.getValue();
    versionSet.compactPointers.put(level, internalKey);
  }

  // Delete files
  for (Entry<Integer, Long> entry : edit.getDeletedFiles().entries()) {
    Integer level = entry.getKey();
    Long fileNumber = entry.getValue();
    if (!versionSet.options.fast()) {
      levels.get(level).deletedFiles.add(fileNumber);
      batchSize++;
    }
    // todo missing update to addedFiles?
    // missing update to addedFiles for open db to release resource
    if (levels.get(level).addedFilesMap.remove(fileNumber) != null) {
      batchSize--;
    }
  }

  // Add new files
  for (Entry<Integer, FileMetaData> entry : edit.getNewFiles().entries()) {
    Integer level = entry.getKey();
    FileMetaData fileMetaData = entry.getValue();

    // We arrange to automatically compact this file after
    // a certain number of seeks. Let's assume:
    //   (1) One seek costs 10ms
    //   (2) Writing or reading 1MB costs 10ms (100MB/s)
    //   (3) A compaction of 1MB does 25MB of IO:
    //         1MB read from this level
    //         10-12MB read from next level (boundaries may be misaligned)
    //         10-12MB written to next level
    // This implies that 25 seeks cost the same as the compaction
    // of 1MB of data. I.e., one seek costs approximately the
    // same as the compaction of 40KB of data. We are a little
    // conservative and allow approximately one seek for every 16KB
    // of data before triggering a compaction.
    int allowedSeeks = (int) (fileMetaData.getFileSize() / 16384);
    if (allowedSeeks < 100) {
      allowedSeeks = 100;
    }
    fileMetaData.setAllowedSeeks(allowedSeeks);

    if (levels.get(level).deletedFiles.remove(fileMetaData.getNumber())) {
      batchSize--;
    }
    //levels.get(level).addedFiles.add(fileMetaData);
    levels.get(level).addedFilesMap.put(fileMetaData.getNumber(), fileMetaData);
    batchSize++;
  }
}
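Installing an intermediate Version whenever batchSize exceeds maxBatchSize bounds how many pending file records the builder holds at once, which is what keeps memory usage low while replaying a very large manifest; batchSize is decremented when an added file is later deleted (or a deleted file re-added), so records that cancel each other out are not accumulated.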
3. When the manifest reaches a certain size, open the database with bulk loading.
public void open() throws IOException {
  DB database = factory.open(this.srcDbPath.toFile(), this.options);
  database.close();
}
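For illustration, the snippet below is a minimal sketch of how the three steps could be chained before normal start-up. The ManifestReformat class, the openWithReformat and manifestTooBig helpers, and the hard-coded threshold are assumptions made for this example (the actual implementation drives the threshold from options.maxManifestSize()); Iq80DBFactory and JniDBFactory are the standard factories of the pure-java levelDB port and of leveldbjni.

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

public class ManifestReformat {

  // Assumed threshold in bytes; the TIP exposes this as options.maxManifestSize() in MB.
  private static final long MAX_MANIFEST_BYTES = 128L * 1024 * 1024;

  // Hypothetical entry point: reformat the manifest if needed, then open the
  // database with the c++ engine via leveldbjni for normal operation.
  public static DB openWithReformat(File dbDir, Options options) throws IOException {
    if (manifestTooBig(dbDir)) {
      // Steps 2 and 3: open and close with the pure-java implementation; bulk
      // loading the old manifest and reopening leaves a compact manifest behind.
      DB javaDb = org.iq80.leveldb.impl.Iq80DBFactory.factory.open(dbDir, options);
      javaDb.close();
    }
    return org.fusesource.leveldbjni.JniDBFactory.factory.open(dbDir, options);
  }

  // Simplified form of the step-1 check: resolve CURRENT to the active
  // MANIFEST-* file and compare its size against the threshold.
  private static boolean manifestTooBig(File dbDir) throws IOException {
    File current = new File(dbDir, "CURRENT");
    if (!current.isFile()) {
      return false;
    }
    String manifestName =
        new String(Files.readAllBytes(current.toPath()), StandardCharsets.UTF_8).trim();
    File manifest = new File(dbDir, manifestName);
    return manifest.isFile() && manifest.length() >= MAX_MANIFEST_BYTES;
  }
}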
A huge manifest costs a lot of memory and start-up time when the database is reopened.