πŸ”’ LFM2-24B-A2B and LocalCowork: Instant, Private On-Device AI Workflows πŸš€



Summary

Liquid AI has released LFM2-24B-A2B, a model designed for local, low-latency tool dispatch, alongside LocalCowork, an open-source desktop agent application. Together they provide a deployable architecture for running enterprise workflows entirely on-device, eliminating reliance on external APIs. LFM2-24B-A2B uses a Sparse Mixture-of-Experts (MoE) architecture, activating approximately 2 billion of its 24 billion parameters per token. LocalCowork executes tools offline via the Model Context Protocol (MCP) and logs every action to a local audit trail. Liquid AI evaluated the model on a workload of 100 tool-selection prompts and 50 multi-step chains, averaging 385 milliseconds per tool-selection response. By executing complex tasks locally, the system offers a practical path for privacy-sensitive environments and a notable step forward in on-device AI.

INSIGHTS


LFM2-24B-A2B: A Revolution in On-Device AI
Liquid AI has unveiled LFM2-24B-A2B, a model engineered for low-latency tool dispatch and execution directly on local hardware. It represents a significant advance in on-device AI, offering a deployable architecture for enterprise workflows where data privacy and immediate response times are paramount. At its core, LFM2-24B-A2B uses a Sparse Mixture-of-Experts (MoE) architecture: the model contains 24 billion parameters in total, yet activates only approximately 2 billion parameters per token during inference. This design lets the model retain a large overall capacity while drastically reducing the compute required for each generation step, enabling rapid and efficient tool dispatch across a wide range of applications. The team tested the model against a defined hardware and software stack, emphasizing reliability and accuracy.
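The sparse-MoE idea described above can be illustrated with a minimal sketch: a router scores all experts for a token, but only the top-k actually run, so the active parameter count per token is a small fraction of the layer's total. This is a toy illustration of the general technique, not Liquid AI's implementation; all names and sizes here are made up.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through a sparse Mixture-of-Experts layer.

    Only the top-k experts (by gate score) execute, so the number of
    active parameters is a small fraction of the layer's total.
    """
    logits = x @ gate_w                       # gate score for each expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over selected experts only
    # Weighted sum of the chosen experts' outputs; the rest stay idle.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 8, 16
gate_w = rng.normal(size=(d, num_experts))
# Each "expert" here is a tiny linear map; a real model would use MLP blocks.
expert_ws = [rng.normal(size=(d, d)) for _ in range(num_experts)]
experts = [lambda x, w=w: x @ w for w in expert_ws]

y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(y.shape)  # β†’ (8,)
```

With 16 experts and k=2, only 2/16 of the expert parameters touch each token, which is the same ratio-of-active-to-total idea behind activating ~2B of 24B parameters.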

LocalCowork: The Open-Source Desktop AI Agent
LocalCowork is a fully offline desktop AI agent and the practical implementation of the LFM2-24B-A2B model. Designed for privacy-sensitive environments, it uses the Model Context Protocol (MCP) to execute pre-built tools without any reliance on cloud APIs or data egress. Every action taken by the agent is logged to a local audit trail, providing full traceability and accountability. The system incorporates 75 tools spread across 14 MCP servers, handling diverse tasks including filesystem operations, Optical Character Recognition (OCR), and security scanning. The initial demo focuses on a curated subset of 20 tools across 6 servers, each tested to achieve over 80% single-step accuracy and verified multi-step chain participation. This controlled scope allows precise evaluation of the model's capabilities and performance.
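The dispatch-and-audit pattern described above can be sketched in a few lines: run a locally registered tool, then append a structured record of the action to an append-only local log. This is a simplified stand-in, not LocalCowork's actual code; the registry layout, log filename, and tool names are all hypothetical.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit_log.jsonl")  # hypothetical local audit-trail file

def dispatch_tool(server: str, tool: str, args: dict, registry: dict):
    """Execute a locally registered tool and append the action to the audit log."""
    entry = {"ts": time.time(), "server": server, "tool": tool, "args": args}
    result = registry[(server, tool)](**args)        # runs entirely on-device
    entry["status"] = "ok"
    with AUDIT_LOG.open("a") as f:                   # every action is logged
        f.write(json.dumps(entry) + "\n")
    return result

# A toy registry standing in for MCP servers and their tools.
registry = {
    ("filesystem", "read_text"): lambda path: Path(path).read_text(),
}

Path("note.txt").write_text("hello")
out = dispatch_tool("filesystem", "read_text", {"path": "note.txt"}, registry)
print(out)  # β†’ hello
```

Because the log is plain JSON Lines on local disk, an auditor can replay exactly which tools ran, with which arguments, without any data ever leaving the machine.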

Performance Benchmarks and Ecosystem Expansion
Liquid AI conducted extensive testing of the LFM2-24B-A2B model, evaluating it against a workload of 100 single-step tool-selection prompts and 50 multi-step chains, each requiring 3 to 6 discrete tool executions. Across these trials, the model averaged approximately 385 milliseconds per tool-selection response, a latency well suited to human-in-the-loop applications where immediate feedback is crucial. The open-source nature of LocalCowork, coupled with the robust toolset and published performance metrics, fosters an expanding ecosystem; the GitHub repository and technical documentation are available for further exploration.
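A per-response latency figure like the 385 ms average above is typically obtained by timing each call over the full prompt set and taking the mean. The sketch below shows one simple way to do that, assuming the model call is wrapped in a function; the stub dispatcher here is a placeholder, not the real inference path.

```python
import statistics
import time

def mean_latency_ms(dispatch, prompts):
    """Time each tool-selection call and return the mean latency in milliseconds."""
    times_ms = []
    for prompt in prompts:
        t0 = time.perf_counter()
        dispatch(prompt)                      # local model call; stubbed below
        times_ms.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(times_ms)

# Stub standing in for a local tool-selection call.
def fake_dispatch(prompt):
    time.sleep(0.001)                         # pretend inference latency
    return "selected_tool"

avg_ms = mean_latency_ms(fake_dispatch, ["prompt"] * 20)
print(f"{avg_ms:.1f} ms per tool-selection response")
```

Using `time.perf_counter` (a monotonic, high-resolution clock) rather than wall-clock time avoids skew from system clock adjustments during the run.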

This article is AI-synthesized from public sources and may not reflect original reporting.