A Comparative Analysis of Guardrail Frameworks for Large Language Models (available)
This project explores how specialised programming frameworks called guardrails, developed specifically to constrain large language models (LLMs), can prevent them from generating harmful, biased, or off-topic content. The goal of the project is to build simple examples using three leading guardrail frameworks implemented in Python: Guardrails AI, NeMo Guardrails from NVIDIA, and Llama Guard from …
Tag: LLM
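The frameworks named above differ in their APIs, but they share a common idea: intercept a draft LLM response and check it against topical and safety rules before it reaches the user. As a rough illustration of that idea only (not the API of Guardrails AI, NeMo Guardrails, or Llama Guard; the function and rule names below are hypothetical), a minimal output rail might look like:

```python
def topical_guardrail(response, allowed_topics, banned_phrases):
    """Toy output rail (illustrative only): reject a draft LLM response
    if it contains a banned phrase, or deflect it if it mentions none
    of the allowed topics; otherwise pass it through unchanged."""
    text = response.lower()
    if any(phrase in text for phrase in banned_phrases):
        return "I can't help with that."
    if not any(topic in text for topic in allowed_topics):
        return "That question is outside the scope of this assistant."
    return response

# On-topic response passes through untouched:
print(topical_guardrail(
    "Guardrails validate LLM output before it reaches users.",
    allowed_topics=["guardrail", "llm"],
    banned_phrases=["how to build a weapon"],
))
```

The real frameworks replace these hard-coded string checks with validators, dialogue rails, or a dedicated safety-classifier model, but the interception point in the pipeline is the same.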
Training Neural Networks for Analog AI Hardware (available)
Modern AI models achieve impressive performance but require enormous amounts of energy when trained and run on conventional GPU hardware. A promising alternative is analog in-memory computing, where neural network computations are performed directly inside memory devices such as resistive crossbar arrays. This approach can dramatically improve the efficiency of AI systems, but it also …
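The core operation a resistive crossbar accelerates is the matrix–vector product: input voltages drive the columns, stored conductances act as matrix weights, and Ohm's and Kirchhoff's laws sum the currents on each row. A minimal sketch of this idealised model (with an optional Gaussian noise term standing in for device non-idealities; the function name and noise model are illustrative assumptions, not a specific hardware simulator):

```python
import random

def crossbar_matvec(conductances, voltages, noise_std=0.0):
    """Idealised analog crossbar: each output current is the dot product
    of one row of conductances with the input voltages (Ohm's law per
    device, Kirchhoff's current law per row), optionally perturbed by
    zero-mean Gaussian noise to mimic device variability."""
    outputs = []
    for row in conductances:
        current = sum(g * v for g, v in zip(row, voltages))
        current += random.gauss(0.0, noise_std)
        outputs.append(current)
    return outputs

# Noise-free case reduces to a plain matrix-vector product:
G = [[0.5, 0.2], [0.1, 0.4]]
v = [1.0, 2.0]
print(crossbar_matvec(G, v))  # -> [0.9, 0.9]  (0.5*1 + 0.2*2, 0.1*1 + 0.4*2)
```

Training for such hardware then means making the network robust to the noise term, e.g. by injecting it during the forward pass, which is the kind of question this project would investigate.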