BMF - Cross-platform, multi-language, customizable video processing framework with strong GPU acceleration

BMF (Babit Multimedia Framework) is a cross-platform, multi-language, customizable multimedia processing framework developed by ByteDance. With over 4 years of testing and improvements, BMF has been tailored to adeptly tackle challenges in our real-world production environments. It is currently widely used in ByteDance's video streaming, live transcoding, cloud editing and mobile pre/post processing scenarios. More than 2 billion videos are processed by the framework every day.

Here are some key features of BMF:

  • Cross-Platform Support: Native compatibility with Linux, Windows, and macOS, as well as optimization for both x86 and ARM CPUs.

  • Easy to use: BMF provides Python, Go, and C++ APIs, allowing developers the flexibility to code in their favorite languages.

  • Customizability: Developers can enhance the framework's features by adding their own modules independently, thanks to BMF's decoupled architecture.

  • High performance: BMF has a powerful scheduler and strong support for heterogeneous acceleration hardware. Moreover, NVIDIA has been cooperating with us to develop a highly optimized GPU pipeline for video transcoding and AI inference.

  • Efficient data conversion: BMF offers seamless data format conversions across popular frameworks (FFmpeg/Numpy/PyTorch/OpenCV/TensorRT), conversion between hardware devices (CPU/GPU), and color space and pixel format conversion.

BMFLite is a cross-platform, lightweight, and efficient client-side multimedia processing framework. So far, BMFLite client-side algorithms are used in apps such as Douyin and Xigua, serving more than one billion users across live streaming, video playback, pictures, cloud gaming, and other scenarios, and processing videos and pictures trillions of times every day.

Dive deeper into BMF's capabilities on our website for more details.

Quick Experience

In this section, we will directly showcase the capabilities of the BMF framework across six dimensions: Transcode, Edit, Meeting/Broadcaster, GPU acceleration, AI Inference, and client-side Framework. For all the demos below, corresponding implementations and documentation are available on Google Colab, so you can experience them hands-on.

Transcode

This demo describes step-by-step how to use BMF to develop a transcoding program, including video transcoding, audio transcoding, and image transcoding. In it, you can familiarize yourself with how to use BMF and how to use FFmpeg-compatible options to achieve the capabilities you need.
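To give a flavor of the API, below is a minimal video transcoding sketch using BMF's Python interface (the option names follow BMF's documented transcode examples, but the file paths and codec parameters are illustrative and may need adjusting for your environment):

```python
import bmf

def transcode(input_path, output_path):
    # Build a graph: decode the input, then encode video and audio
    # with FFmpeg-compatible options.
    graph = bmf.graph()
    streams = graph.decode({'input_path': input_path})
    bmf.encode(
        streams['video'],
        streams['audio'],
        {
            'output_path': output_path,
            'video_params': {'codec': 'h264', 'crf': 23, 'preset': 'veryfast'},
            'audio_params': {'codec': 'aac', 'bit_rate': 128000},
        },
    ).run()

if __name__ == '__main__':
    transcode('input.mp4', 'output.mp4')
```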

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Edit

The Edit Demo will show you how to implement a high-complexity audio and video editing pipeline through the BMF framework. We have implemented two Python modules, video_concat and video_overlay, and combined various atomic capabilities to construct a complex BMF Graph.
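As a rough sketch of how such a pipeline is assembled, the snippet below calls one of the custom Python modules by name from the graph builder (this assumes the bmf.module() builder call for multi-input modules; the option keys passed to 'video_overlay' are illustrative placeholders, not the demo's exact parameters):

```python
import bmf

graph = bmf.graph()
main_video = graph.decode({'input_path': 'clip.mp4'})
logo = graph.decode({'input_path': 'logo.png'})

# Invoke the custom Python module 'video_overlay' by name; the option keys
# below are placeholders for illustration only.
overlaid = bmf.module(
    [main_video['video'], logo['video']],
    'video_overlay',
    {'x': 50, 'y': 50},
)

bmf.encode(overlaid, main_video['audio'], {'output_path': 'edit_out.mp4'}).run()
```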

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Meeting/Broadcaster

This demo uses the BMF framework to construct a simple broadcast service. The service provides an API that enables dynamic video source pulling, video layout control, audio mixing, and ultimately streaming the output to an RTMP server. This demo showcases BMF's modularity, multi-language development, and the ability to adjust the pipeline dynamically.

Below is a screen recording demonstrating the operation of the broadcaster:

GPU acceleration

GPU Video Frame Extraction

The video frame extraction acceleration demo shows:

  1. BMF's flexible capabilities:

    • Multi-language programming: modules written in different languages work together in the demo
    • Easy extensibility: new C++ and Python modules are added with little effort
    • Full compatibility with FFmpeg capabilities
  2. Quick enablement of hardware acceleration, with CPU/GPU pipeline support (see the sketch after this list)

    • Heterogeneous pipelines are supported in BMF, such as processing split between CPU and GPU
    • Convenient hardware color space conversion in BMF
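For example, hardware acceleration is typically enabled with a small change to the decoder/encoder options (a minimal sketch following BMF's GPU documentation; the exact option values depend on your FFmpeg build and driver setup):

```python
import bmf

graph = bmf.graph()

# Decode with NVDEC: the 'hwaccel' option keeps decoded frames in GPU memory.
video = graph.decode({
    'input_path': 'input.mp4',
    'video_params': {'hwaccel': 'cuda'},
})['video']

# Encode with NVENC, consuming the GPU frames directly. CPU and GPU modules
# can be mixed in the same graph to form a heterogeneous pipeline.
bmf.encode(video, None, {
    'output_path': 'output.mp4',
    'video_params': {'codec': 'h264_nvenc'},
}).run()
```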

If you want to run a quick experiment, you can try it on Colab: Open In Colab

GPU Video Transcoding and Filtering

The GPU transcoding and filter module demo shows:

  1. Common video/image filters in BMF accelerated by GPU
  2. How to write GPU modules in BMF

The demo builds a transcoding pipeline that runs fully on the GPU:

decode->scale->flip->rotate->crop->blur->encode
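A condensed sketch of such a pipeline is shown below (the GPU module names such as 'scale_gpu' and their option keys are illustrative placeholders; the demo's actual modules and parameters may be named differently):

```python
import bmf

graph = bmf.graph()

# Decode with NVDEC so frames land directly in GPU memory.
video = graph.decode({
    'input_path': 'input.mp4',
    'video_params': {'hwaccel': 'cuda'},
})['video']

# Chain GPU filter modules; every stage consumes and produces GPU frames,
# so no host<->device copies occur between decode and encode.
# NOTE: module names and option keys below are placeholders for illustration.
video = video.module('scale_gpu', {'size': '1280x720'})
video = video.module('flip_gpu', {'direction': 'h'})
video = video.module('rotate_gpu', {'angle': 90})
video = video.module('crop_gpu', {'x': 0, 'y': 0, 'width': 640, 'height': 360})
video = video.module('blur_gpu', {'op': 'gblur', 'sigma': 0.8})

# Encode with NVENC, consuming the GPU frames directly.
bmf.encode(video, None, {
    'output_path': 'output.mp4',
    'video_params': {'codec': 'h264_nvenc'},
}).run()
```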

If you want to run a quick experiment, you can try it on Colab: Open In Colab

AI inference

LLM preprocessing

This demo is a prototype of how to build video preprocessing for LLM training data at ByteDance, where such pipelines serve billions of clip-processing requests each day.

The input video is split at scene changes, subtitles in the video are detected and cropped out by an OCR module, and video quality is assessed by the aesthetic module provided by BMF. The finalized video clips are then encoded as output.
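Conceptually, the pipeline can be assembled along the following lines (all module names and options in this sketch are hypothetical placeholders used to illustrate the structure, not the demo's actual modules):

```python
import bmf

graph = bmf.graph()
video = graph.decode({'input_path': 'source.mp4'})['video']

# Hypothetical module names, for illustration only.
clips = video.module('scene_split', {'threshold': 0.3})           # split at scene changes
clips = clips.module('ocr_subtitle_crop', {'lang': 'en'})         # detect and crop subtitles
clips = clips.module('aesthetic_assessment', {'min_score': 0.5})  # keep high-quality clips

# Encode the finalized clips as output (simplified to a single file here).
bmf.encode(clips, None, {'output_path': 'processed_clips.mp4'}).run()
```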

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Deoldify

This demo shows how to integrate state-of-the-art AI algorithms into the BMF video processing pipeline. The well-known open-source colorization algorithm DeOldify is wrapped as a BMF Python module in fewer than 100 lines of code. The final effect is illustrated below, with the original video on the left and the colorized video on the right.
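To give an idea of what such a wrapper looks like, here is the general skeleton of a BMF Python module, following BMF's module interface (shown as a pass-through sketch; the real DeOldify module additionally converts each packet to a frame, runs the colorization model, and repacks the result inside process()):

```python
from bmf import Module, Packet, Timestamp, ProcessResult

class deoldify_module(Module):
    def __init__(self, node, option=None):
        self.node_ = node
        self.option_ = option
        # The real module would load the DeOldify model here.

    def process(self, task):
        for (input_id, input_packets) in task.get_inputs().items():
            output_packets = task.get_outputs()[input_id]
            while not input_packets.empty():
                pkt = input_packets.get()
                if pkt.timestamp == Timestamp.EOF:
                    # Propagate end-of-stream and mark the task as done.
                    output_packets.put(Packet.generate_eof_packet())
                    task.timestamp = Timestamp.DONE
                else:
                    # Pass-through placeholder: the real module would
                    # colorize the frame here before forwarding it.
                    output_packets.put(pkt)
        return ProcessResult.OK
```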

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Super Resolution

This demo implements the super-resolution inference process of Real-ESRGAN as a BMF module, showcasing a BMF pipeline that combines decoding, super-resolution inference and encoding.

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Video Quality Score

This demo shows how to invoke our aesthetic assessment model using BMF. Our deep learning model, Aesmode, achieves a binary classification accuracy of 83.8% on the AVA dataset, reaching academic state-of-the-art levels, and can be used directly to evaluate the aesthetic quality of videos via frame extraction.

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Face Detect With TensorRT

This demo shows an end-to-end face detection pipeline based on TensorRT acceleration, which internally uses a TensorRT-accelerated ONNX model to process the input video. It applies the NMS algorithm to filter out duplicate candidate boxes and produce the final output, enabling efficient face detection.
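As background on the post-processing step, here is a compact NumPy sketch of greedy non-maximum suppression (illustrative only; the demo ships its own implementation, which may differ in details such as box format or thresholds):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop any remaining box whose IoU
    with it exceeds the threshold, then repeat on the survivors."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of box i with all remaining candidates.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes that overlap less than the threshold.
        order = order[1:][iou <= iou_threshold]
    return keep
```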

If you want to run a quick experiment, you can try it on Colab: Open In Colab

Client-side Framework

Edge AI models

This case illustrates the procedure for integrating an external algorithm module into the BMFLite framework and managing its execution.


Real-time denoise

This example implements a denoising algorithm as a BMF module, showcasing a BMF pipeline that combines video capture, noise reduction, and rendering.



License

The project is licensed under the Apache 2.0 License. Third-party components and dependencies remain under their own licenses.

Contributing

Contributions are welcome. Please follow the contribution guidelines.

We use GitHub issues to track and resolve problems. If you have any questions, please feel free to join the discussion and work with us to find a solution.

Acknowledgment

The decoder, encoder, and filter reference the FFmpeg command-line tool. They are wrapped as BMF's built-in modules under the LGPL license.

The project also draws inspiration from other popular frameworks, such as ffmpeg-python and mediapipe. Our website is built with Docsy, based on Hugo.

Here, we'd like to express our sincerest thanks to the developers of the above projects!
