{"id":107792,"date":"2024-09-28T04:56:54","date_gmt":"2024-09-27T21:56:54","guid":{"rendered":"https:\/\/hotvideos24.online\/?p=107792"},"modified":"2024-09-28T04:56:54","modified_gmt":"2024-09-27T21:56:54","slug":"google-and-meta-update-their-ai-models-amid-the-rise-of-alphachip","status":"publish","type":"post","link":"https:\/\/hotvideos24.online\/?p=107792","title":{"rendered":"Google and Meta update their AI models amid the rise of \u201cAlphaChip\u201d"},"content":{"rendered":"<p> <script async src=\"https:\/\/pagead2.googlesyndication.com\/pagead\/js\/adsbygoogle.js?client=ca-pub-3711241968723425\"\r\n     crossorigin=\"anonymous\"><\/script>\r\n<ins class=\"adsbygoogle\"\r\n     style=\"display:block\"\r\n     data-ad-format=\"fluid\"\r\n     data-ad-layout-key=\"-fb+5w+4e-db+86\"\r\n     data-ad-client=\"ca-pub-3711241968723425\"\r\n     data-ad-slot=\"7910942971\"><\/ins>\r\n<script>\r\n     (adsbygoogle = window.adsbygoogle || []).push({});\r\n<\/script><br \/>\n<\/p>\n<div itemprop=\"articleBody\">\n<figure class=\"intro-image intro-left\">\n  <img decoding=\"async\" src=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/news_gauntlet_2-800x450.jpg\" alt=\"Cyberpunk concept showing a man running along a futuristic path full of monitors.\"\/><figcaption class=\"caption\">\n<div class=\"caption-text\"><a href=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/news_gauntlet_2.jpg\" class=\"enlarge-link\" data-height=\"675\" data-width=\"1200\">Enlarge<\/a> <span class=\"sep\">\/<\/span> There&#8217;s been a lot of AI news this week, and covering it sometimes feels like running through a hall full of danging CRTs, just like this Getty Images illustration.<\/div>\n<\/figcaption><\/figure>\n<aside id=\"social-left\" class=\"social-left\" aria-label=\"Read the comments or share this article\">\n<\/aside>\n<p><!-- cache hit 18:single\/related:24436f6072debc07869ab64a94ccbf34 --><!-- empty --><\/p>\n<p>It&#8217;s been a wildly busy week 
in AI news thanks to OpenAI, including a controversial <a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/09\/ai-superintelligence-looms-in-sam-altmans-new-essay-on-the-intelligence-age\/\">blog post<\/a> from CEO Sam Altman, the <a href=\"https:\/\/arstechnica.com\/ai\/2024\/09\/talking-to-chatgpt-for-the-first-time-is-a-surreal-experience\/\">wide rollout<\/a> of Advanced Voice Mode, 5GW <a href=\"https:\/\/arstechnica.com\/tech-policy\/2024\/09\/openai-asked-us-to-approve-energy-guzzling-5gw-data-centers-report-says\/\">data center rumors<\/a>, <a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/09\/openais-murati-shocks-with-sudden-departure-announcement\/\">major staff<\/a> shake-ups, and dramatic <a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/09\/openai-plans-tectonic-shift-from-nonprofit-to-for-profit-giving-altman-equity\/\">restructuring plans<\/a>.<\/p>\n<p>But the rest of the AI world doesn&#8217;t march to the same beat, doing its own thing and <a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/09\/ai-hosting-platform-surpasses-1-million-models-for-the-first-time\/\">churning out<\/a> new AI models and research by the minute. 
Here&#8217;s a roundup of some other notable AI news from the past week.<\/p>\n<h2>Google Gemini updates<\/h2>\n<figure class=\"image shortcode-img center large\" style=\"width:100%\"><a href=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/Gemini__BlogHero.jpg\" class=\"enlarge\" data-height=\"357\" data-width=\"1200\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/Gemini__BlogHero-640x190.jpg\" width=\"640\" height=\"190\" srcset=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/Gemini__BlogHero.jpg 2x\"\/><\/a><figcaption class=\"caption\"\/><\/figure>\n<p>On Tuesday, Google <a href=\"https:\/\/developers.googleblog.com\/en\/updated-gemini-models-reduced-15-pro-pricing-increased-rate-limits-and-more\/\">announced<\/a> updates to its Gemini model lineup, including the release of two new production-ready models that iterate on past releases: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002. The company reported improvements in overall quality, with notable gains in math, long context handling, and vision tasks. Google claims a 7 percent increase in performance on the <a href=\"https:\/\/arxiv.org\/abs\/2406.01574\">MMLU-Pro<\/a> benchmark and a 20 percent improvement in math-related tasks. But as you know if you&#8217;ve been reading Ars Technica for a while, AI benchmarks typically <a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/03\/the-ai-wars-heat-up-with-claude-3-claimed-to-have-near-human-abilities\/\">aren&#8217;t as useful<\/a> as we would like them to be.<\/p>\n<p>Along with model upgrades, Google introduced substantial price reductions for Gemini 1.5 Pro, cutting input token costs by 64 percent and output token costs by 52 percent for prompts under 128,000 tokens. 
As AI researcher Simon Willison <a href=\"https:\/\/simonwillison.net\/2024\/Sep\/24\/gemini-models\/\">noted<\/a> on his blog, &#8220;For comparison, GPT-4o is currently $5\/[million tokens] input and $15\/m output and Claude 3.5 Sonnet is $3\/m input and $15\/m output. Gemini 1.5 Pro was already the cheapest of the frontier models and now it&#8217;s even cheaper.&#8221;<\/p>\n<p>Google also increased rate limits, with Gemini 1.5 Flash now supporting 2,000 requests per minute and Gemini 1.5 Pro handling 1,000 requests per minute. Google reports that the latest models offer twice the output speed and three times lower latency compared to previous versions. These changes may make it easier and more cost-effective for developers to build applications with Gemini than before.<\/p>\n<h2>Meta launches Llama 3.2<\/h2>\n<figure class=\"image shortcode-img center large\" style=\"width:100%\"><a href=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/llama32.jpg\" class=\"enlarge\" data-height=\"675\" data-width=\"1200\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/llama32-640x360.jpg\" width=\"640\" height=\"360\" srcset=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/llama32.jpg 2x\"\/><\/a><figcaption class=\"caption\"\/><\/figure>\n<p>On Wednesday, Meta <a href=\"https:\/\/ai.meta.com\/blog\/llama-3-2-connect-2024-vision-edge-mobile-devices\/\">announced<\/a> the release of Llama 3.2, a significant update to its open-weights AI model lineup that we have <a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/07\/the-first-gpt-4-class-ai-model-anyone-can-download-has-arrived-llama-405b\/\">covered extensively<\/a> in the past. The new release includes vision-capable large language models (LLMs) in 11B and 90B parameter sizes, as well as lightweight text-only models of 1B and 3B parameters designed for edge and mobile devices. 
Meta claims the vision models are competitive with leading closed-source models on image recognition and visual understanding tasks, while the smaller models reportedly outperform similar-sized competitors on various text-based tasks.<\/p>\n<p>Willison ran some experiments with the smaller 3.2 models and <a href=\"https:\/\/simonwillison.net\/2024\/Sep\/25\/llama-32\/\">reported impressive results<\/a> for the models&#8217; size. AI researcher Ethan Mollick <a href=\"https:\/\/x.com\/emollick\/status\/1839480234623611002\">showed off<\/a> running Llama 3.2 on his iPhone using an app called PocketPal.<\/p>\n<p>Meta also introduced the first official &#8220;<a href=\"https:\/\/github.com\/meta-llama\/llama-stack\">Llama Stack<\/a>&#8221; distributions, created to simplify development and deployment across different environments. As with previous releases, Meta is making the models available for free download, with license restrictions. The new models support long context windows of up to 128,000 tokens.<\/p>\n<h2>Google\u2019s AlphaChip AI speeds up chip design<\/h2>\n<figure class=\"image shortcode-img center large\" style=\"width:100%\"><a href=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/google_alphachip.jpg\" class=\"enlarge\" data-height=\"668\" data-width=\"1200\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/google_alphachip-640x356.jpg\" width=\"640\" height=\"356\" srcset=\"https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2024\/09\/google_alphachip.jpg 2x\"\/><\/a><figcaption class=\"caption\"\/><\/figure>\n<p>On Thursday, Google DeepMind <a href=\"https:\/\/deepmind.google\/discover\/blog\/how-alphachip-transformed-computer-chip-design\/\">announced<\/a> what appears to be a significant advancement in AI-driven electronic chip design, AlphaChip. 
It began as a <a href=\"https:\/\/arxiv.org\/pdf\/2004.10746\">research project<\/a> in 2020 and is now a reinforcement learning method for designing chip layouts. Google has reportedly used AlphaChip to create &#8220;superhuman chip layouts&#8221; in the last three generations of its <a href=\"https:\/\/en.wikipedia.org\/wiki\/Tensor_Processing_Unit\">Tensor Processing Units<\/a> (TPUs), which are chips similar to GPUs designed to accelerate AI operations. Google claims AlphaChip can generate high-quality chip layouts in hours, compared to weeks or months of human effort. (Reportedly, Nvidia has <a href=\"https:\/\/www.businessinsider.com\/nvidia-uses-ai-to-produce-its-ai-chips-faster-2024-2\">also been using AI<\/a> to help design its chips.)<\/p>\n<p>Notably, Google also released a <a href=\"https:\/\/github.com\/google-research\/circuit_training\/?tab=readme-ov-file#PreTrainedModelCheckpoint\">pre-trained checkpoint<\/a> of AlphaChip on GitHub, sharing the model weights with the public. The company reported that AlphaChip&#8217;s impact has already extended beyond Google, with chip design companies like <a href=\"https:\/\/www.mediatek.com\/products\/smartphones\/dimensity-5g\">MediaTek<\/a> adopting and building on the technology for their chips. According to Google, AlphaChip has sparked a new line of research in AI for chip design, potentially optimizing every stage of the chip design cycle from computer architecture to manufacturing.<\/p>\n<p>That wasn&#8217;t everything that happened, but those are some major highlights. 
With the AI industry showing no signs of slowing down at the moment, we&#8217;ll see how next week goes.<\/p>\n<\/div>\n<p><a href=\"https:\/\/arstechnica.com\/information-technology\/2024\/09\/major-ai-updates-from-meta-and-google-and-a-new-era-for-ai-designed-chips\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>There&#8217;s been a lot of AI news this week, and covering it sometimes feels like running through a hall full of dangling CRTs, just like this Getty Images &hellip; <a href=\"https:\/\/hotvideos24.online\/?p=107792\" class=\"more-link\">Read 
More<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8630],"tags":[],"class_list":["post-107792","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"_links":{"self":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/posts\/107792","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=107792"}],"version-history":[{"count":0,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=\/wp\/v2\/posts\/107792\/revisions"}],"wp:attachment":[{"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=107792"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=107792"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hotvideos24.online\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=107792"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}