Events

Sparsity In Deep Neural Nets

In this talk, we will introduce a novel approach to sparsity in deep neural networks.
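For readers unfamiliar with the topic, the sketch below shows magnitude-based weight pruning, a standard baseline way to induce sparsity in a network's weights. It is only illustrative and is not the novel method presented in the talk; the function name, the NumPy formulation, and the 90% sparsity level are all assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude; everything at or below it is pruned
    # (ties at the threshold may push sparsity slightly above the target).
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a random 256x256 layer to ~90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
w_sparse = magnitude_prune(w, sparsity=0.9)
print(f"zero fraction: {np.mean(w_sparse == 0):.2%}")
```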

Apr 1, 2024

Sparsity In Deep Neural Nets

Large Language Models (LLMs) have captured the attention of the tech world with their remarkable common-sense reasoning and generalizability. However, their sheer size and the need to send data to remote servers make them resource-intensive and slow, which is problematic for mobile and wearable devices such as smart glasses and smartwatches. Moreover, on-device computing offers a way to address privacy concerns by keeping sensitive data, such as text messages or photos, on the device itself. To tackle these challenges, we've developed a more compact language model, ranging from 0.5B to 1.4B parameters. This model is designed to run on-device, delivering competitive performance on grounded conversational tasks while keeping latency and memory usage in check.
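To get intuition for why the 0.5B to 1.4B parameter range is a practical target for wearables, here is a back-of-envelope estimate of the weights-only memory footprint at several precisions. The bit-widths shown and the weights-only accounting (ignoring activations and the KV cache) are assumptions for illustration, not details from the work itself.

```python
def weights_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Weights-only memory footprint in GB; activations and KV cache not included."""
    return num_params * bits_per_param / 8 / 1e9

for params in (0.5e9, 1.4e9):
    for bits in (16, 8, 4):  # e.g. fp16, int8, int4 storage
        gb = weights_memory_gb(params, bits)
        print(f"{params / 1e9:.1f}B params @ {bits}-bit: {gb:.2f} GB")
```

Even under these rough assumptions, a 1.4B-parameter model at 16-bit precision needs about 2.8 GB for weights alone, which makes clear why much larger models quickly exceed the memory budget of a watch or a pair of glasses.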

Apr 1, 2024
