Stability AI, the company behind the AI image generator Stable Diffusion, has released an open-source suite of large language models collectively known as StableLM. In a Wednesday post, the company announced that the models are now available for developers to use and adapt via GitHub.
Like its competitor ChatGPT, StableLM is designed to generate text and code efficiently. It is trained on a more extensive version of the Pile, a dataset that draws on a variety of sources including Wikipedia, Stack Exchange, and PubMed. Stability AI says the StableLM models available now range from 3 billion to 7 billion parameters, with 15 billion to 65 billion parameter models to follow.
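For developers who want to poke at the checkpoints directly rather than wait for a demo, a minimal sketch of loading one with the Hugging Face transformers library might look like the following. (The repo id "stabilityai/stablelm-tuned-alpha-7b" and the hardware assumptions are illustrative, not confirmed by Stability AI's announcement.)

```python
# A minimal sketch, assuming the checkpoint is published on the Hugging Face
# Hub under the repo id below and that a GPU with enough memory for a 7B
# model in fp16 (roughly 16 GB) is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-tuned-alpha-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.to("cuda")

# Generate a short completion from a plain-text prompt.
prompt = "How do I make a peanut butter sandwich?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```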
StableLM builds on the open-source language models Stability AI developed in collaboration with EleutherAI, and it continues the company's mission of making AI tools more accessible, as it did with Stable Diffusion. The company released its text-to-image AI in several forms, including a public demo, a software beta, and a full download of the model, letting developers play with the tool and build various integrations.
We could even see StableLM follow the same path as Meta's LLaMA model, which leaked online in March. My colleague James Vincent pointed out that the release of Stable Diffusion led to "both more good and worse stuff happening." We'll likely see a similar dynamic with AI text generation: more stuff, more of the time.
You can try out a demo of StableLM's fine-tuned chat model on Hugging Face. When I asked it how to make a peanut butter sandwich, it gave me an extremely complicated and nonsensical recipe, and it suggested I add "a funny drawing" to a sympathy card. Stability AI warns, however, that although the datasets it uses should "steer language models towards 'safer distributions' of text," not all biases and toxicity can be mitigated through fine-tuning.