Liquid AI has released LFM2-VL, a new generation of vision-language foundation models designed for efficient deployment across a wide range of devices, from smartphones and laptops to wearables and embedded systems.
The models promise low-latency performance, strong accuracy, and flexibility for real-world applications.
LFM2-VL builds on the company's existing LFM2 architecture, extending it to multimodal processing that supports both text and image inputs at variable resolutions.
According to Liquid AI, the models deliver up to twice the GPU inference speed of comparable vision-language models, while maintaining competitive performance on common benchmarks.
"Efficiency is our product," Liquid AI co-founder and CEO Ramin Hasani wrote in a post announcing the new model family.
Two variants to meet different needs
The release includes two model sizes:
- LFM2-VL-450M: a highly efficient model with fewer than half a billion parameters (internal settings), aimed at highly resource-constrained environments.
- LFM2-VL-1.6B: a more capable model that remains lightweight enough for single-GPU and on-device deployment.
Both variables treat images in original decisions up to 512 x 512 pixels, and avoid unnecessary distortion or height.
For larger images, the system applies an unexploited correction and adds a mini image of the global context, allowing the model to capture both precise details and the broader scene.
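The patching scheme described above can be sketched in a few lines. This is a hypothetical illustration of the general tile-plus-thumbnail approach, not Liquid AI's actual preprocessing code; the function name and return format are assumptions.

```python
import math

def plan_patches(width, height, tile=512):
    """Sketch of tile-plus-thumbnail patching: images at or under `tile` px
    per side pass through at native resolution; larger images are split into
    non-overlapping tiles, and a downscaled thumbnail would be appended so
    the model also sees the global scene."""
    if width <= tile and height <= tile:
        # Small enough to process whole, at native resolution.
        return {"tiles": [(0, 0, width, height)], "thumbnail": False}
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    tiles = []
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * tile, r * tile
            # Edge tiles are clipped to the image bounds.
            tiles.append((x0, y0, min(x0 + tile, width), min(y0 + tile, height)))
    return {"tiles": tiles, "thumbnail": True}

print(plan_patches(400, 300))                   # one native-resolution tile, no thumbnail
print(len(plan_patches(1024, 1024)["tiles"]))   # 4 tiles, plus a thumbnail
```

A 1024 x 1024 input thus becomes four 512 x 512 tiles plus one low-resolution overview, so fine detail and scene-level context both reach the model.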
Liquid AI's background
Liquid AI was founded by former researchers from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) with the goal of building AI architectures that go beyond the widely used transformer model.
The company's flagship innovation, the Liquid Foundation Models (LFMs), is built on principles from dynamical systems, signal processing, and numerical linear algebra, producing general-purpose AI models capable of handling text, video, audio, time series, and other sequential data.
Unlike traditional architectures, Liquid's approach aims to deliver competitive or superior performance using far fewer computational resources, allowing real-time adaptability during inference while maintaining low memory requirements. This makes LFMs well suited both to large-scale enterprise use cases and to resource-constrained edge deployments.
In July 2025, the company expanded its platform strategy with the launch of the Liquid Edge AI Platform (LEAP), a cross-platform SDK designed to make it easier for developers to run small language models directly on mobile and embedded devices.
LEAP offers OS-agnostic support for iOS and Android, integration with both Liquid's own models and other open-source SLMs, and a built-in library of small models as compact as 300MB, small enough for modern phones with minimal RAM.
Its companion app, Apollo, lets developers test models in a fully offline setting, in line with Liquid AI's emphasis on privacy-preserving, low-latency inference. Together, LEAP and Apollo reflect the company's commitment to decentralizing AI execution, reducing dependence on cloud infrastructure, and enabling developers to build models optimized for real-world environments.
Technical design and speed/quality tradeoffs
LFM2-VL uses a modular architecture combining a language model backbone, a SigLIP2 NaFlex vision encoder, and a multimodal projector.
The projector includes a two-layer MLP connector with pixel unshuffle, which reduces the number of image tokens and improves throughput.
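Pixel unshuffle is the standard space-to-depth rearrangement: neighboring feature vectors in the spatial grid are merged into fewer, wider vectors, cutting the token count before the tokens reach the language model. A minimal sketch of the operation (on plain nested lists rather than real tensors, and not Liquid AI's implementation):

```python
def pixel_unshuffle(feat, r=2):
    """Rearrange an H x W grid of feature vectors into an (H/r) x (W/r) grid,
    concatenating each r x r block of vectors into one longer vector.
    Token count drops by a factor of r*r; channel depth grows by r*r."""
    h, w = len(feat), len(feat[0])
    assert h % r == 0 and w % r == 0, "grid must divide evenly by r"
    out = []
    for i in range(0, h, r):
        row = []
        for j in range(0, w, r):
            merged = []
            for di in range(r):
                for dj in range(r):
                    merged.extend(feat[i + di][j + dj])  # concatenate the r x r block
            row.append(merged)
        out.append(row)
    return out

# A 4x4 grid of 3-dim feature vectors becomes a 2x2 grid of 12-dim vectors:
grid = [[[float(i), float(j), 0.0] for j in range(4)] for i in range(4)]
out = pixel_unshuffle(grid)
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 12
```

With r=2, 16 image tokens collapse into 4, which is where the throughput gain comes from.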
Users can adjust parameters such as the maximum number of image tokens or patches, allowing them to balance speed and quality depending on the deployment scenario. The training process involved approximately 100 billion multimodal tokens, sourced from open datasets and in-house synthetic data.
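One way such a speed/quality knob could work is a simple token budget: drop tiles once the image-token cap is reached. This is a hypothetical sketch of the tradeoff, not the model's documented behavior; the function and parameter names are assumptions.

```python
def image_token_budget(num_tiles, tokens_per_tile, max_image_tokens):
    """Hypothetical speed/quality knob: keep only as many image tiles as
    fit under the token budget. Fewer tokens means faster inference but
    less visual detail reaching the language model."""
    kept = min(num_tiles, max_image_tokens // tokens_per_tile)
    return kept, kept * tokens_per_tile

# 9 tiles at 64 tokens each, capped at 256 image tokens: keep 4 tiles.
print(image_token_budget(9, 64, 256))  # (4, 256)
```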
Performance and benchmarks
The models achieve competitive results across a range of vision-language evaluations. LFM2-VL-1.6B scores well on RealWorldQA (65.23), InfoVQA (58.68), and OCRBench (742), and maintains solid results on multimodal reasoning tasks.

In inference testing, LFM2-VL achieved the fastest GPU processing times in its class when tested on a standard workload of a 1024 x 1024 image and a short prompt.

Licensing and availability
LFM2-VL models are now available on Hugging Face, along with example fine-tuning code in Colab. They are compatible with Hugging Face transformers and TRL.
The models are released under a custom "LFM1.0" license. Liquid AI has described the license as based on Apache 2.0 principles, but the full text has not yet been published.
The company has indicated that commercial use will be permitted under certain conditions, with different terms for companies above $10 million in annual revenue.
With LFM2-VL, Liquid AI aims to make high-performance multimodal AI more accessible for on-device and resource-constrained deployments, without sacrificing capability.