{"id":109173,"date":"2024-10-01T16:18:58","date_gmt":"2024-10-01T09:18:58","guid":{"rendered":"https:\/\/hotvideos24.online\/?p=109173"},"modified":"2024-10-01T16:18:58","modified_gmt":"2024-10-01T09:18:58","slug":"liquid-ai-debuts-new-lfm-based-models-that-seem-to-outperform-most-traditional-large-language-models","status":"publish","type":"post","link":"https:\/\/hotvideos24.online\/?p=109173","title":{"rendered":"Liquid AI debuts new LFM-based models that seem to outperform most traditional large language models"},"content":{"rendered":"<div>\n<p>Artificial intelligence startup and MIT spinoff <a href=\"https:\/\/www.liquid.ai\">Liquid AI Inc.<\/a> today launched its first set of generative AI models, and they\u2019re notably different from competing models because they\u2019re built on a fundamentally new architecture.<\/p>\n<p><a href=\"https:\/\/www.liquid.ai\/liquid-foundation-models\">The new models<\/a> are being called \u201cLiquid Foundation Models,\u201d or LFMs, and they\u2019re said to deliver impressive performance that\u2019s on a par with, or even superior to, some of the best large language models available today.<\/p>\n<p>The Boston-based startup was founded by a team of researchers from the Massachusetts Institute of Technology, including Ramin Hasani, Mathias Lechner, Alexander Amini and Daniela Rus. 
They\u2019re said to be pioneers in the concept of \u201cliquid neural networks,\u201d which is a class of AI models that\u2019s quite different from the Generative Pre-trained Transformer-based models we know and love today, such as OpenAI\u2019s GPT series and Google LLC\u2019s Gemini models.<\/p>\n<p>The company\u2019s mission is to create highly capable and efficient general-purpose models that can be used by organizations of all sizes. To do that, it\u2019s building LFM-based AI systems that can work at every scale, from the network edge to enterprise-grade deployments.<\/p>\n<h3>What are LFMs?<\/h3>\n<p>According to Liquid, its LFMs represent a new generation of AI systems that are designed with both performance and efficiency in mind. They use minimal system memory while delivering exceptional computing power, the company explains.<\/p>\n<p>They\u2019re grounded in dynamical systems, numerical linear algebra and signal processing. That makes them ideal for handling various types of sequential data, including text, audio, images, video and signals.<\/p>\n<p>Liquid AI first made headlines in December when it <a href=\"https:\/\/siliconangle.com\/2023\/12\/06\/liquid-ai-raises-37-6m-build-liquid-neural-networks\/\">raised $37.6 million in seed funding<\/a>. At the time, it explained that its LFMs are based on a newer, Liquid Neural Network architecture that was originally developed at MIT\u2019s Computer Science and Artificial Intelligence Laboratory. LNNs are based on the concept of artificial neurons, or nodes for transforming data.<\/p>\n<p>Whereas traditional deep learning models need thousands of neurons to perform computing tasks, LNNs can achieve the same performance with significantly fewer. 
They do this by combining those neurons with innovative mathematical formulations, enabling them to do much more with less.<\/p>\n<p>The startup says its LFMs retain this adaptable and efficient capability, which enables them to perform real-time adjustments during inference without the enormous computational overheads associated with traditional LLMs. As a result, they can handle up to 1 million tokens efficiently without any noticeable impact on memory usage.<\/p>\n<p>Liquid AI is launching a family of three models, including LFM-1B, which is a dense model with 1.3 billion parameters, designed for resource-constrained environments. Slightly more powerful is LFM-3B, which has 3.1 billion parameters and is aimed at edge deployments, such as mobile applications, robots and drones. Finally, there\u2019s LFM-40B, which is a vastly more powerful \u201cmixture of experts\u201d model with 40.3 billion parameters, designed to be deployed on cloud servers in order to handle the most complex use cases.<\/p>\n<p>The startup reckons its new models have already shown \u201cstate-of-the-art results\u201d across a number of important AI benchmarks, and it believes they are shaping up to be formidable competitors to existing generative AI models such as ChatGPT.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-672451\" src=\"https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/66f9a9b9624c365c96251a0c_desktop-graph-p-1080.png\" alt=\"\" width=\"1080\" height=\"526\" srcset=\"https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/66f9a9b9624c365c96251a0c_desktop-graph-p-1080.png 1080w, https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/66f9a9b9624c365c96251a0c_desktop-graph-p-1080-300x146.png 300w, https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/66f9a9b9624c365c96251a0c_desktop-graph-p-1080-768x374.png 
768w, https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/66f9a9b9624c365c96251a0c_desktop-graph-p-1080-800x390.png 800w\" sizes=\"auto, (max-width: 1080px) 100vw, 1080px\"\/><\/p>\n<p>Whereas traditional LLMs see a sharp increase in memory usage when performing long-context processing, the LFM-3B model notably maintains a much smaller memory footprint (above), which makes it an excellent choice for applications that require large amounts of sequential data to be processed. Example use cases might include chatbots and document analysis, the company said.<\/p>\n<h3>Strong performance on benchmarks<\/h3>\n<p>In terms of their performance, the LFMs delivered some impressive results, with LFM-1B outperforming transformer-based models in the same size category. Meanwhile, LFM-3B stands up well against models such as Microsoft Corp.\u2019s Phi-3.5 and Meta Platforms Inc.\u2019s Llama family. As for LFM-40B, it can reportedly outperform even larger models while striking an unmatched balance between performance and efficiency.<\/p>\n<p>Liquid AI said the LFM-1B model put in an especially dominant performance on benchmarks such as MMLU and ARC-C, setting a new standard for 1B-parameter models.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-672452\" src=\"https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/image_2024-10-01_070501875.png\" alt=\"\" width=\"971\" height=\"587\" srcset=\"https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/image_2024-10-01_070501875.png 971w, https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/image_2024-10-01_070501875-300x181.png 300w, https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/image_2024-10-01_070501875-768x464.png 768w, 
https:\/\/d15shllkswkct0.cloudfront.net\/wp-content\/blogs.dir\/1\/files\/2024\/09\/image_2024-10-01_070501875-800x484.png 800w\" sizes=\"auto, (max-width: 971px) 100vw, 971px\"\/><\/p>\n<p>The company is making its models available in early access via platforms such as Liquid Playground, Lambda \u2013 via its Chat and application programming interfaces \u2013 and Perplexity Labs. That will give organizations a chance to integrate its models into various AI systems and see how they perform in different deployment scenarios, including edge devices and on-premises environments.<\/p>\n<p>One of the things it\u2019s working on now is optimizing the LFM models to run on specific hardware built by Nvidia Corp., Advanced Micro Devices Inc., Apple Inc., Qualcomm Inc. and Cerebras Systems Inc., so users will be able to squeeze even more performance out of them by the time they reach general availability.<\/p>\n<p>The company says it will release a series of technical blog posts that take a deep dive into the mechanics of each model ahead of their official launch. In addition, it\u2019s encouraging red-teaming, inviting the AI community to test its LFMs to the limit, to see what they can and cannot yet do.<\/p>\n<h5>Image: SiliconANGLE\/Microsoft Designer<\/h5>\n<\/div>\n<p><a href=\"https:\/\/siliconangle.com\/2024\/09\/30\/liquid-ai-debuts-new-lfm-based-models-seem-outperform-traditional-llms\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence startup and MIT spinoff Liquid AI Inc. 
today launched its first set of generative AI models, and they\u2019re notably different from competing models because they\u2019re built on a &hellip; <a href=\"https:\/\/hotvideos24.online\/?p=109173\" class=\"more-link\">Read More<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8630],"tags":[],"class_list":["post-109173","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"_links":{"self":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/posts\/109173","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=109173"}],"version-history":[{"count":0,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/posts\/109173\/revisions"}],"wp:attachment":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=109173"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=109173"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=109173"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}