Adobe Develops SlimLM That Can Process Documents Locally on Devices Without Internet Connectivity


Adobe researchers have published a paper that details a new artificial intelligence (AI) model capable of processing documents locally on a device. Published last week, the paper highlights that the researchers experimented with existing large language models (LLMs) and small language models (SLMs) to determine how a model's size can be reduced while keeping its document-processing capability and inference speed high. As a result of these experiments, the researchers developed an AI model dubbed SlimLM that can run entirely on a smartphone and process documents.

Adobe Researchers Develop SlimLM

AI-powered document processing, which allows a chatbot to answer user queries about a document's content, is an important use case of generative AI. Many companies, including Adobe, have tapped into this application and released tools that offer this functionality. However, all such tools share one issue — the AI processing takes place in the cloud. Server-side processing of data raises data privacy concerns and makes handling documents that contain sensitive information risky.

The risk mainly stems from fears that the company offering the solution might train its AI on the data, or that a data breach could leak the sensitive information. As a solution, Adobe researchers published a paper on the preprint server arXiv detailing a new AI model that can carry out document processing entirely on the device.

Dubbed SlimLM, the AI model's smallest variant contains just 125 million parameters, which makes it small enough to be integrated into a smartphone. The researchers claim that it can operate locally, without needing Internet connectivity. As a result, users can process even the most sensitive documents without worry, as the data never leaves the device.
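To illustrate what on-device document processing looks like in practice, below is a minimal, hypothetical Python sketch. SlimLM itself is not publicly available through this article, so a generic small open model ("distilgpt2") and the file name "contract.txt" are stand-in assumptions; the point is simply that the document and the generated answer never leave the local machine once the model weights are present.

```python
# Hypothetical sketch: answering a question about a local document with a small
# causal language model, entirely offline once the model weights are on the device.
# "distilgpt2" is a placeholder small model (~82M parameters), not SlimLM.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "distilgpt2"  # placeholder; a real on-device model would ship with the app

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

document = open("contract.txt").read()   # sensitive file stays on the device
question = "What is the termination notice period?"

prompt = f"Document:\n{document}\n\nQuestion: {question}\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)

# Generation runs locally; no document text is sent to a remote server.
output_ids = model.generate(**inputs, max_new_tokens=64)
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                          skip_special_tokens=True)
print(answer)
```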

In the paper, the researchers highlighted that they conducted several experiments on a Samsung Galaxy S24 to find the balance between model size, context length, and inference speed. After optimising it, the team pre-trained the model on SlimPajama-627B, a large text corpus, and fine-tuned it on DocAssist, a dataset built for document-assistance tasks.
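The paper's recipe is the familiar pretrain-then-fine-tune workflow. The following sketch shows only the fine-tuning stage on instruction-style document data using Hugging Face Trainer; the model name, file name "doc_assist_style.jsonl", record format, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of fine-tuning a small causal LM on document-assistance data.
# All names and hyperparameters below are assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "distilgpt2"                      # stand-in for a ~125M-parameter base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Assumed JSONL file with {"text": "..."} records pairing documents with answers.
data = load_dataset("json", data_files="doc_assist_style.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slim-doc-model",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # yields a compact model that could then be exported to a mobile runtime
```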

Notably, arXiv is a preprint repository where papers are published without peer review. As such, the validity of the claims made in the research paper cannot be ascertained. However, if they hold up, the AI model could be shipped with Adobe's platforms in the future.
