The Petal tool is a decentralized platform built around the BLOOM-176B language model. It loads and serves specific blocks of the model for inference and fine-tuning tasks. Single-batch inference runs at roughly one second per step (token), while parallel inference can reach hundreds of tokens per second. Unlike a typical language model API, Petal exposes fine-tuning and sampling methods, lets you execute custom paths through the model, and gives you access to its hidden states, all through a flexible PyTorch API. The tool is developed as part of the BigScience research workshop.
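To make the PyTorch API more concrete, here is a minimal sketch of client-side generation through the public swarm. The class name `DistributedBloomForCausalLM` and the `bigscience/bloom-petals` model ID follow the project's early published examples and may change between releases, so treat this as illustrative rather than a definitive interface.

```python
# Minimal sketch of client-side inference with the Petals PyTorch API.
# Assumes `pip install petals` and a connection to the public swarm; the
# model ID and class names reflect early published examples and may differ
# in newer releases.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # 176B-parameter BLOOM served by the swarm

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
# Only a small part of the model is loaded locally; the remaining blocks
# run on remote peers in the decentralized network.
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

# Greedy generation: each new token is one distributed forward pass (~1 s).
inputs = tokenizer("A quick test of distributed inference:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```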