DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Making large AI models cheaper, faster and more accessible
Docs for torchpipe: https://github.com/torchpipe/torchpipe
Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-like MLLM on an RTX 3090/4090 with 24 GB.
Bridge the gap between deep learning training and serving
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
A curated list of awesome projects and papers for distributed training or inference
PaddlePaddle's large-model development suite, providing a full-workflow development toolchain for large language models, cross-modal large models, biocomputing large models, and more.
Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)*
Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.
FTPipe and related pipeline model parallelism research.
pipeDejavu: Hardware-aware Latency Predictable, Differentiable Search for Faster Config and Convergence of Distributed ML Pipeline Parallelism
Official implementation of DynPartition: Automatic Optimal Pipeline Parallelism of Dynamic Neural Networks over Heterogeneous GPU Systems for Inference Tasks
Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.
Model parallelism for NN architectures with skip connections (e.g., ResNets, UNets)
Implementation of autoregressive language model using improved Transformer and DeepSpeed pipeline parallelism.
Development of Project HPGO | Hybrid Parallelism Global Orchestration
An Efficient Pipelined Data Parallel Approach for Training Large Model
A GPipe implementation in PyTorch
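Several of the projects above (GPipe, DeepSpeed's pipeline engine, Chimera) build on the same core idea: split a model into sequential stages and feed micro-batches through them so the stages can work concurrently and activation memory stays bounded. A minimal single-process sketch of that micro-batching scheme, using a toy two-stage model (the stage sizes and `num_microbatches` are illustrative assumptions, not taken from any listed repo):

```python
import torch
import torch.nn as nn

# Toy two-stage model; in a real pipeline each stage would live on its own GPU.
stage1 = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
stage2 = nn.Sequential(nn.Linear(32, 4))

def pipeline_forward(x: torch.Tensor, num_microbatches: int = 4) -> torch.Tensor:
    # GPipe's core trick: split the mini-batch into micro-batches so that,
    # with stages on separate devices, stage1 can start on micro-batch k+1
    # while stage2 processes micro-batch k.
    outputs = []
    for mb in x.chunk(num_microbatches):
        outputs.append(stage2(stage1(mb)))
    # Reassemble the full mini-batch output.
    return torch.cat(outputs)

x = torch.randn(8, 16)
y = pipeline_forward(x)
print(y.shape)  # torch.Size([8, 4])
```

Run sequentially like this the schedule gains nothing; the payoff comes when the stages sit on different devices and the micro-batch loop overlaps their compute, which is what libraries such as torchgpipe/`torch.distributed.pipelining` and DeepSpeed automate.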