We Enable Scalable, Fast and Cost-Effective Datacenters
The MangoBoost Data Processing Unit (DPU) enables faster and more scalable datacenters. Our DPU significantly improves application performance and reduces the CPU and server costs of infrastructure processing. We provide fully programmable solutions that cover many market-critical services.
Seamless DPU Deployment with Our SDK
The MangoBoost Software Development Kit (SDK) maximizes the potential of the MangoBoost DPU. With the SDK, developers can easily and quickly offload and accelerate datacenter workloads on the DPU. The MangoBoost SDK provides user-friendly APIs, libraries, and an industry-standard runtime environment.
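As an illustration only, the sketch below shows what offloading a service through a DPU SDK could look like; the module name mangoboost_sdk and every function in it are hypothetical placeholders, not the actual MangoBoost API.

```python
# Hypothetical sketch: module and function names are illustrative placeholders,
# not the real MangoBoost SDK interface.
import mangoboost_sdk as mb

# Discover DPUs attached to this host and open the first one.
devices = mb.list_devices()
dpu = mb.open(devices[0])

# Offload an NVMe-over-Fabrics target service to the DPU,
# freeing host CPU cores for application work.
service = dpu.load_service("nvme_of_target")
service.configure(transport="rdma", port=4420)
service.start()
```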

We Accelerate Key Datacenter Workloads
Modern datacenters are increasingly transitioning toward virtualized and distributed models designed to support a variety of workloads such as big data, AI, and cloud applications. The MangoBoost DPU provides comprehensive support for the complex requirements and services of today's evolving datacenter landscape.

Virtualization
SR-IOV, ATS/ATC
Vhost-NVMe Acceleration
VirtIO Acceleration
vDPA Support

Storage
Data Deduplication
In-Line Compression
Thin Provisioning
LVM/RAID

AI
Large-Scale DNN Training
Device Orchestration
Pre/Post-Processing Acceleration
MPI Acceleration

Disaggregation
NVMe over RDMA
NVMe over TCP
GPU over Fabric

Security
Root of Trust
Crypto Acceleration
Network Security

Network
SDN Acceleration
P4 Support
Full TCP Acceleration
RDMA (RoCE v2)
We Provide Customer-Optimized DPUs
With our innovative composable DPU architecture, we rapidly adapt the design to customers' service requests.
By composing only the functions each customer needs, we deliver best-fit products with optimized performance and cost.

Full-Stack Service Demo Using Optimized DPUs
Efficient neural network computation is crucial for AI applications. Specialized hardware, such as GPUs with NVLink, supports high-performance computation. However, the performance bottleneck has shifted to other system overheads, collectively known as the "AI tax," including tasks such as data processing and network protocol handling. Traditional CPU-centric servers struggle with the AI tax, compromising performance and scalability.
The MangoBoost DPU offloads AI tax operations so GPUs can focus on computation. Our demonstration with the NVIDIA Triton Inference Server shows that the MangoBoost DPU delivers scalable AI serving systems.
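For context, a minimal Triton client sketch is shown below; it assumes a Triton server at localhost:8000 serving a model named "resnet50" with these tensor names, all of which are illustrative rather than taken from the demo. Each request involves input preparation and HTTP/network handling on the client and server, the kind of per-request "AI tax" that grows with serving scale.

```python
# Minimal sketch of a Triton inference request (server address, model name,
# and tensor names are assumptions for illustration).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a random input batch; in a real pipeline, decoding and resizing
# images here is part of the pre-processing overhead.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(image.shape), "FP32")
inp.set_data_from_numpy(image)
out = httpclient.InferRequestedOutput("output__0")

result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("output__0").shape)
```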
Disaggregated storage is an increasingly popular storage architecture that effectively resolves the severe underutilization of storage devices in conventional architectures. However, disaggregated storage comes with a "storage tax" in the form of high CPU overhead for managing storage stacks and fast network transfers.
The MangoBoost DPU efficiently resolves this challenge by offloading heavy storage tax operations, including RoCEv2-based RDMA and NVMe-oF initiator and target processing, to the DPU. Our evaluation with the SPECstorage benchmark demonstrates that the MangoBoost DPU is a perfect fit for disaggregated storage solutions.
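To make the setup concrete, the sketch below attaches a remote NVMe namespace over RDMA from a host using the standard nvme-cli tool; the target address and NQN are assumptions, and with the protocol work offloaded to a DPU the host still sees an ordinary NVMe-oF connection.

```python
# Minimal sketch (assumes nvme-cli is installed and a reachable NVMe-oF target;
# the address and NQN below are illustrative placeholders).
import subprocess

TARGET_ADDR = "192.168.100.10"                             # assumed target IP
TARGET_NQN = "nqn.2024-01.io.example:disaggregated-pool"   # assumed target NQN


def connect_nvme_of_rdma(addr: str, nqn: str, svcid: str = "4420") -> None:
    """Attach a remote NVMe namespace over RDMA using nvme-cli."""
    subprocess.run(
        ["nvme", "connect", "-t", "rdma", "-a", addr, "-s", svcid, "-n", nqn],
        check=True,
    )


if __name__ == "__main__":
    connect_nvme_of_rdma(TARGET_ADDR, TARGET_NQN)
```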
VM- and container-based services are popular, and efficient packet processing is essential for handling their high network traffic. Datacenters manage this traffic with network functions such as NAT while striving for line-rate performance. However, running such functions on CPUs incurs an unaffordably large overhead.
The MangoBoost DPU resolves this issue with flexible and fast packet-processing offload. Users can freely configure the hardware with P4 or offload OVS and VXLAN, running network functions at high throughput and low latency.
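As a simple illustration of the kind of network function involved, the sketch below installs a NAT-style OpenFlow rule with standard Open vSwitch tooling; the bridge name, port numbers, and addresses are assumptions, and when OVS offload is enabled such rewrites can be executed in DPU hardware rather than on the host CPU.

```python
# Minimal sketch (assumes Open vSwitch is installed with a bridge "br0" and two
# ports; port numbers and IP addresses are illustrative placeholders).
import subprocess


def add_snat_flow(bridge: str = "br0") -> None:
    """Install a source-NAT style flow that rewrites the source IP of matching packets."""
    flow = (
        "in_port=1,ip,nw_src=10.0.0.0/24,"
        "actions=mod_nw_src:203.0.113.10,output:2"
    )
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)


if __name__ == "__main__":
    add_snat_flow()
```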
Executive Team
We bring together an outstanding team with vast industrial and academic experience
Advisory Team
Our advisory team includes world-class experts
Our DPU Makes All Devices Smart
