H2O.ai Releases New Language Model H2O-Danube-1.8B for Mobile

H2O-Danube-1.8B is a super-tiny LLM designed to run on smartphones, laptops, desktops and IoT devices, spurring growth in natural language applications and further democratizing AI

H2O.ai, the open source leader in Generative AI and machine learning and maker of Enterprise h2oGPTe, announced the release of H2O-Danube-1.8B, an open source natural language model with 1.8 billion parameters. Despite being trained on significantly less data than comparable models, benchmark results show that H2O-Danube-1.8B achieves highly competitive performance across a wide range of natural language tasks.

"We are excited to release H2O-Danube-1.8B as a portable LLM on small devices like your smartphone, something that Anthropic is not offering today. The proliferation of smaller, lower-cost hardware and more efficient training now allows modestly-sized models to be accessible to a wider audience. With an Apache 2.0 license for commercial use and versatile capabilities, we believe H2O-Danube-1.8B will be a game changer for mobile offline applications," said Sri Ambati, CEO and co-founder of H2O.ai.

As detailed in the arXiv technical report, H2O-Danube-1.8B was trained on 1 trillion tokens collected from diverse web sources, using techniques refined from models like Llama 2 and Mistral. Despite the relatively limited training data, benchmark results show H2O-Danube-1.8B performs on par with or better than other models in the 1-2 billion parameter size class across tasks like common sense reasoning, reading comprehension, summarization and translation.
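A minimal sketch of text generation with the base model via the transformers library follows; the repo ID "h2oai/h2o-danube-1.8b-base" is an assumption based on the announcement, so verify it against H2O.ai's Hugging Face organization page.

# Minimal sketch: load the base model and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2o-danube-1.8b-base"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("The Danube is a river that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))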

H2O.ai also announced the release of H2O-Danube-1.8B-Chat, a version of the model fine-tuned specifically for conversational applications. Building on the base H2O-Danube-1.8B model, the chat version was tuned with supervised learning on dialog datasets, followed by reinforcement learning from human preferences. Initial benchmark results show state-of-the-art performance compared to existing chat models with fewer than 2 billion parameters.
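A minimal sketch of querying the chat-tuned variant through the tokenizer's chat template, again assuming the repo ID ("h2o-danube-1.8b-chat" under the h2oai organization) rather than taking it from the announcement:

# Minimal sketch: prompt the chat-tuned variant via its chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2o-danube-1.8b-chat"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize why small LLMs matter."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(prompt_ids, max_new_tokens=100)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))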

Both the base H2O-Danube-1.8B model and chat-tuned version are available immediately from Hugging Face. H2O.ai will be releasing additional tools to simplify using the models in applications, as well as exploring potential future model scaling.
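For offline use, both checkpoints can be fetched ahead of time with huggingface_hub; the repo IDs below are the same assumptions as in the sketches above.

# Minimal sketch: download both checkpoints for offline use.
from huggingface_hub import snapshot_download

for repo_id in ("h2oai/h2o-danube-1.8b-base", "h2oai/h2o-danube-1.8b-chat"):
    local_dir = snapshot_download(repo_id)
    print(f"{repo_id} -> {local_dir}")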

"We are committed to advancing AI for responsible progress. H2O-Danube-1.8B raises the bar through its impressive performance combined with an open license enabling broad access," said Sri Ambati, CEO and co-founder of H2O.ai.

About H2O.ai

Founded in 2012, H2O.ai is at the forefront of the AI movement to democratize Generative AI. H2O.ai’s open source Generative AI and Enterprise h2oGPTe, combined with Document AI and the award-winning AutoML platform Driverless AI, have transformed more than 20,000 global organizations, including over half of the Fortune 500 and household brands such as AT&T, Commonwealth Bank of Australia, PayPal, Chipotle, ADP, Workday, Progressive Insurance and AES. H2O.ai’s AI for Good program supports nonprofit groups, foundations and communities in their efforts to advance education, healthcare, and environmental conservation, including identifying areas vulnerable to natural disasters and protecting endangered species.

H2O.ai has a vibrant community of 2 million data scientists worldwide, and aims to bring together the world’s top data scientists with customers to co-create GenAI applications that are usable by, and valuable to, everyone. Business users can now leverage the power of LLMs to enhance productivity with enterprise applications.
