The supervised fine-tuning phase employed Low-Rank Adaptation (LoRA) to efficiently adapt the base DeepSeek-R1-Distill-Qwen-7B model for extraction tasks.
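For concreteness, here is a minimal sketch of what LoRA fine-tuning of that base model typically looks like with Hugging Face's PEFT library; the rank, alpha, and target modules below are illustrative assumptions, not values taken from the paper:

    # Minimal LoRA setup sketch; hyperparameters are assumed, not the paper's.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

    # LoRA freezes the 7B base weights and trains small low-rank adapter
    # matrices injected into the attention projections, so only a tiny
    # fraction of the parameters are updated.
    config = LoraConfig(
        r=16,                                 # adapter rank (assumed)
        lora_alpha=32,                        # scaling factor (assumed)
        target_modules=["q_proj", "v_proj"],  # projections to adapt (assumed)
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()        # typically well under 1% of total

Only the small adapter weights are trained; the base model stays frozen.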
So this is bolted on top of a model that cost six figures.
And DeepSeek is based on Llama, which cost more than six figures to train.
I'm not aware of any large-parameter LLM that isn't based on one that was absurdly expensive to train.
DeepSeek is trained from scratch. Only some variants (the R1 distills) used other LLMs.
This is a megaphone made from string, a squirrel, and a megaphone.