A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
stable diffusion webui colab
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
MQTT gateway for ESP8266 or ESP32 with bidirectional 433 MHz/315 MHz/868 MHz, infrared communications, BLE, Bluetooth, beacon detection, Mi Flora, Mi Jia, LYWSD02, LYWSD03MMC, Mi Scale, TPMS, and BBQ thermometer compatibility, plus LoRa.
BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM)
Meshtastic device firmware
Industrial IoT Messaging and Device Management Platform
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
ChirpStack Network Server is an open-source LoRaWAN network server.
LoRA & DreamBooth training scripts & GUI using kohya-ss's trainer, for diffusion models.
LLM fine-tuning with PEFT
Firefly: a training toolkit for large models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other LLMs
Using low-rank adaptation to quickly fine-tune diffusion models.
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
WiFi & BLE driven passenger flow metering with cheap ESP32 boards
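Several of the repositories above (loralib, PEFT, the diffusion trainers) implement LoRA, i.e. low-rank adaptation. As a minimal sketch of the idea, not tied to any specific repository's API: instead of updating a full weight matrix W, one trains two small factors B and A whose product forms a low-rank update. All names and shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8  # hypothetical sizes; r << d_out, d_in

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection, small init
B = np.zeros((d_out, r))                   # trainable up-projection, zero init

def forward(x):
    # y = W x + (alpha / r) * B A x ; only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer initially matches the base layer.
assert np.allclose(forward(x), W @ x)
```

Because B starts at zero, fine-tuning begins exactly at the pretrained model, and the trainable parameter count drops from d_out * d_in to r * (d_out + d_in).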