Last year, Microsoft coined the term small language model (SLM) with the release of its Phi model, which, at fewer than 2B parameters, outperformed much larger LLMs on coding and math benchmarks. Small foundation models, roughly in the 1B-5B parameter range, are a key requirement for the viability of decentralized AI and unlock promising scenarios for on-device AI. Decentralizing multi-hundred-billion-parameter models is nearly impossible today and will remain so for a while. Small foundation models, however, should be able to run on much of today's Web3 infrastructure. Pushing the SLM agenda is essential to building real value at the intersection of Web3 and AI.
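To make the on-device point concrete, here is a minimal sketch of running a sub-2B-parameter model on commodity hardware. It assumes the Hugging Face transformers library (a recent version with native Phi support) and the public microsoft/phi-1_5 checkpoint; the memory figure in the comments is a back-of-the-envelope estimate, not a measurement.

```python
# Minimal sketch: loading and sampling from a ~1.3B-parameter SLM locally.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-1_5"  # ~1.3B parameters

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision: roughly 2.6 GB of weights
)

# Phi models were trained heavily on code, so a code prompt plays to their strengths.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At this scale the weights fit comfortably in the memory of a consumer GPU or a recent laptop, which is precisely what makes SLMs plausible candidates for decentralized and on-device deployment, in contrast to multi-hundred-billion-parameter models.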