Alleged Leaked Google Doc bullish on Open Source LLMs

DTP #3

In the last issue we went over how open-source alternatives to proprietary LLMs like ChatGPT and Bard can be a more accessible option for organisations. To name a few examples:

  • StableLM

  • XLNet

  • BERT

  • Flan

  • GPT-Neo

  • GPT-J

OpenAI's GPT and Google's Bard have been at the forefront of this technology, but their proprietary nature has limited their accessibility.

This has led to the rise of open-source alternatives such as the ones above, which offer similar functionality while being more transparent and customizable.

On a related note, the blog SemiAnalysis published a document that it says was leaked from inside Google. On its provenance, in their words:

“The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google.”

Here’s a quick rundown:


Leaked Google Doc: “We have no moat”

The document’s main argument is that despite the ongoing competition between OpenAI and Google to create the most advanced language models, the progress made in the open-source community is quickly surpassing their efforts.

“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”

As highlighted in the doc:

"Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime."

In short: while Google and OpenAI still hold a slight edge in model quality, open-source models are faster to iterate on, more customizable, and increasingly able to solve the same problems at a fraction of the cost.

The doc suggests that companies like Google and OpenAI should prioritize learning from and collaborating with open-source developers, reconsider where their real value add lies, and focus on small model variants that can be iterated on quickly.

The section “What We Missed” offers insights on LoRA (low-rank adaptation), a technique that makes it possible to fine-tune large models within just a few hours.

LoRA works by representing model updates as low-rank factorizations, which reduces the size of the update matrices by a factor of up to several thousand. This allows model fine-tuning at a fraction of the cost and time. Being able to personalise a language model in a few hours on consumer hardware is a big deal, particularly for aspirations that involve incorporating new and diverse knowledge in near real-time.
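To make the low-rank idea concrete, here is a minimal PyTorch sketch of a LoRA-style adapter wrapped around a frozen linear layer. This is an illustrative toy, not the reference implementation; the class name, the rank r, and the scaling factor alpha are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update (B @ A).

    Hypothetical sketch for illustration; real LoRA implementations
    (e.g. in Hugging Face's peft library) add more machinery.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        # Only these two small factors are trained during fine-tuning.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # shape (r, d_in)
        self.B = nn.Parameter(torch.zeros(d_out, r))        # shape (d_out, r)
        # Zero-initialising B means B @ A = 0, so training starts from the
        # unmodified base model.
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to applying (W + scale * B @ A) without ever
        # materialising the full-size update matrix.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Rough parameter arithmetic for a 4096x4096 projection at rank r = 8:
#   full update: 4096 * 4096        ~ 16.8M trainable params
#   LoRA update: 8 * (4096 + 4096)  ~ 65K trainable params (~256x smaller)
# With lower ranks and larger layers, the reduction reaches the
# "several thousand"-fold figure mentioned above.
```

Because only the two small factors receive gradients, the optimizer state and the fine-tuning compute shrink along with the update matrices, which is what puts per-user fine-tuning within reach of consumer hardware.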

The document also argues that competing with open source is a losing proposition:

“This recent progress has direct, immediate implications for our business strategy. Who would pay for a Google product with usage restrictions if there is a free, high quality alternative without them?

“And we should not expect to be able to catch up. The modern internet runs on open source for a reason. Open source has some significant advantages that we cannot replicate.”

The document's closing analysis presents intriguing insights on strategy.

Google has struggled to safeguard its competitive advantages from contenders like OpenAI, and as the broader research community increasingly collaborates in the open, that task is likely to become even harder. And on OpenAI:

“And in the end, OpenAI doesn’t matter. They are making the same mistakes we are in their posture relative to open source, and their ability to maintain an edge is necessarily in question.”

The document ends with a timeline examining the pace of open-source model development.

This timeline describes the development of open-source alternatives to ChatGPT and Bard, two popular language models developed by OpenAI and Google, respectively. It starts with Meta's release of LLaMA, whose weights were quickly leaked to the public, setting off a race among developers to fine-tune and optimize the model for various purposes.

As the timeline progresses, we see the development of models that are faster, cheaper, and more capable than their predecessors, culminating in the launch of models like Koala and Open Assistant, which achieve performance on par with ChatGPT using only open data and tools.

Read the whole post here.

See you next week,

Mukundan

Do you have a unique perspective on developing and managing data science and AI talent? We want to hear from you! Reach out to us by replying to this email.
