Distilling large language models (available)

Model distillation for large language models (LLMs) presents a key challenge in AI research: how to compress massive, computationally expensive models into smaller, more efficient versions while preserving their performance. Large models, such as GPT-4, require vast amounts of memory and processing power, making them impractical for real-time applications on edge devices or personal computers. …
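As a minimal sketch of the core idea, assuming a standard knowledge-distillation setup in PyTorch (the temperature T and mixing weight alpha below are illustrative choices, not part of the project description):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: match the teacher's temperature-softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is comparable across temperatures
    # Hard term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The student is trained on this blended loss while the frozen teacher only supplies targets, which is what allows a much smaller model to inherit behaviour from a large one.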

Quantum computing without quantum computers (available)

Quantum Technologies (QT) may revolutionise data science but are often unreliable. Classical and quantum noise makes most existing systems highly unstable, and this widespread unreliability has limited their applicability to real-world computational problems. In special cases, quantum systems can be simulated on classical computers. As classical simulations are noise-free, they can be used to …
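A minimal sketch of what a classical, noise-free simulation of a small quantum system looks like, here a state-vector simulator in NumPy (the two-qubit register and Hadamard example are hypothetical, chosen only to illustrate the technique):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_gate(state, gate, target, n_qubits):
    # Build the full 2^n x 2^n operator by tensoring identities around `gate`.
    op = np.eye(1)
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

# Two-qubit register initialised to |00>.
state = np.zeros(4); state[0] = 1.0
# Put qubit 0 into superposition; amplitudes are exact, with no decoherence.
state = apply_gate(state, H, target=0, n_qubits=2)
print(np.round(state, 3))  # [0.707 0. 0.707 0.], i.e. (|00> + |10>)/sqrt(2)
```

Because every amplitude is tracked exactly, such a simulation has none of the classical or quantum noise that destabilises physical devices; the exponential memory cost (2^n amplitudes for n qubits) is why this is feasible only in special cases.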