Tether has rolled out a new AI framework designed to bring large language model training onto consumer devices, including smartphones and a range of non-Nvidia GPUs. The system is part of its QVAC initiative, which centres on running and refining AI models locally rather than through cloud-based infrastructure.
The framework combines Microsoft's BitNet architecture with LoRA fine-tuning techniques to reduce the compute and memory required for model training. By storing weights in a 1-bit format, BitNet cuts VRAM usage substantially compared with conventional 16-bit approaches, making deployment on constrained hardware far more practical.
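To illustrate why these two techniques shrink the footprint, the sketch below runs the back-of-the-envelope arithmetic: weight storage at 16-bit versus 1-bit precision, and the small number of trainable parameters LoRA adds on top of a frozen base model. The hidden size, LoRA rank, and number of adapted matrices are assumptions chosen for the example, not figures from Tether's framework.

```python
# Illustrative memory arithmetic for 1-bit weights plus LoRA.
# The layer shapes and LoRA rank below are assumed for the example,
# not taken from Tether's implementation.

def weight_bytes(n_params: int, bits_per_weight: float) -> float:
    """Bytes needed to store n_params weights at a given precision."""
    return n_params * bits_per_weight / 8

N = 1_000_000_000  # a 1B-parameter model, the size Tether reports tuning on phones

fp16_gb = weight_bytes(N, 16) / 1e9    # conventional 16-bit weights
one_bit_gb = weight_bytes(N, 1) / 1e9  # BitNet-style 1-bit weights

print(f"16-bit weights: {fp16_gb:.2f} GB")     # 2.00 GB
print(f" 1-bit weights: {one_bit_gb:.3f} GB")  # 0.125 GB

# LoRA freezes the base weights and trains only two small low-rank
# matrices (d x r and r x d) per adapted weight matrix, so the
# trainable-parameter count stays tiny relative to the model.
d, r, n_adapted = 2048, 8, 64  # hidden size, rank, adapted matrices (assumed)
lora_params = n_adapted * 2 * d * r
print(f"LoRA trainable params: {lora_params:,}")        # 2,097,152
print(f"Share of base model: {lora_params / N:.4%}")
```

The 16x drop in weight storage, together with a trainable-parameter count well under one percent of the model, is what makes on-device fine-tuning of this scale plausible on a smartphone's memory budget.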
Tether reported that it successfully fine-tuned models of up to one billion parameters on smartphones in under two hours, with smaller models requiring only minutes. The system also scales to larger configurations, supporting models of up to 13 billion parameters on mobile devices.
The framework is compatible with a wide range of hardware, spanning desktop chips from AMD, Intel and Apple as well as mobile GPUs from Qualcomm and Apple, enabling both training and inference across platforms. It additionally supports LoRA fine-tuning on non-Nvidia systems, extending functionality beyond the typical Nvidia-centric AI hardware stack.
This release builds on Tether’s ongoing development of QVAC, which has included tools for local model execution and fine-tuning across consumer hardware. The initiative reflects a broader effort to prioritise on-device AI processing, with an emphasis on reducing dependence on centralised cloud services.
The post Tether Unveils AI Framework to Train LLMs on Smartphones and Consumer Hardware appeared first on Crypto News Australia.

