
Google and OpenAI are Walmarts besieged by fruit stalls

Image credit: Tim Boyle/Getty Images

OpenAI may be synonymous with machine learning now, and Google is doing its best to pick itself up off the floor, but both may soon face a new threat: the rapid proliferation of open-source projects that push the state of the art and leave the deep-pocketed but unwieldy companies in the dust. This Zerg-like threat may not be existential, but it will certainly keep the dominant players on the defensive.

The idea isn’t exactly new – in the fast-moving AI community, this kind of disruption is expected on a weekly basis – but the situation was put into perspective by a widely shared document purported to have originated within Google. “We have no moat, and neither does OpenAI,” the note says.

I won’t bore the reader with a long summary of this perfectly readable and interesting piece, but the gist is this: while GPT-4 and other proprietary models have gotten the lion’s share of attention and actual revenue, the head start they’ve gained with their funding and infrastructure is looking slimmer every day.

While the pace of OpenAI’s releases may seem frenetic by the standards of regular major software releases, GPT-3, ChatGPT, and GPT-4 were certainly hot on each other’s heels if you compare them to versions of iOS or Photoshop. But they still occur on the scale of months and years.

What the note points out is that in March a foundation language model from Meta, called LLaMA, leaked in rather rough form. Within weeks, people tinkering on laptops and penny-a-minute servers had added core capabilities like instruction tuning, multiple modalities, and reinforcement learning from human feedback. OpenAI and Google probably poked around the code too, but they couldn’t possibly replicate the level of collaboration and experimentation happening in subreddits and Discords.

Could it really be that the titanic computational problem that seemed to present an insurmountable obstacle—a moat—to challengers is already a relic of another era of AI development?

Sam Altman has already noted that we should expect diminishing returns when we throw parameters at the problem. Bigger isn’t always better, but few would have guessed that smaller was instead.

GPT-4 is a Walmart, and nobody actually likes Walmart

The business paradigm pursued by OpenAI and others right now is a direct descendant of the SaaS model. You have high-value software or services, and you offer carefully gated access to it through an API or something similar. It’s a simple and proven approach that makes perfect sense when you’ve invested hundreds of millions in developing a single monolithic but versatile product like a large language model.

If GPT-4 generalizes well to answering questions about precedents in contract law, great – never mind that a large part of its “intellect” is dedicated to being able to imitate the style of every author who has ever published a work in English. GPT-4 is like a Walmart: no one actually wants to go there, so the company makes sure there’s no other option.

But customers are starting to wonder: why am I walking through 50 aisles of junk to buy a few apples? Why am I enlisting the services of the largest and most general AI model ever created if all I want is to apply a little intelligence to matching the language of this contract against a couple of hundred others? At the risk of torturing the metaphor (to say nothing of the reader), if GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?

It didn’t take long in the AI world for someone to get a large language model running, admittedly in very truncated form, on (fittingly) a Raspberry Pi. For a company like OpenAI, its jockey Microsoft, Google, or anyone else in the AI-as-a-service world, that effectively undermines the whole premise of their business: that these systems are so hard to build and run that they have to do it for you. In fact, it’s starting to look like these companies chose and engineered a version of AI that fit their existing business model, not the other way around!

Once upon a time, you had to offload the computation involved in word processing to a mainframe – your terminal was just a screen. Of course, that was a different era, and we have long since been able to fit the whole application on a personal computer. That process has repeated many times since, as our devices have exponentially increased their capacity for computation. These days, when something has to be done on a supercomputer, everyone understands it’s just a matter of time and optimization.

For Google and OpenAI, that time came much faster than expected. And they weren’t the ones doing the optimizing – and at this rate, they may never be.

Now, that doesn’t mean they’re out of luck. Google didn’t get where it is by being the best – not for a long time, at least. Being a Walmart has its perks. Companies don’t want to go hunting for the bespoke solution that does the task they need 30% faster if they can get a decent price from their existing supplier and not rock the boat too much. Never underestimate the value of inertia in business!

Sure, people are iterating on LLaMA so fast they’re running out of camelids to name the variants after. (By the way, I want to thank the developers for an excuse to scroll through hundreds of pictures of cute, tawny vicuñas instead of working.) But few enterprise IT departments are going to patch together an implementation of Stability’s in-progress open source derivative of a quasi-legally leaked Meta model over OpenAI’s simple, effective API. They have a business to run!

But at the same time, I stopped using Photoshop for image editing and creation years ago because open source alternatives like Gimp have become so incredibly good. At that point, the argument flips: pay how much for Photoshop? No, we have a business to run!

What Google’s anonymous authors are clearly worried about is that the distance from the first situation to the second is going to be a lot shorter than anyone thought, and there doesn’t seem to be a damn thing anyone can do about it.

Except, the note claims: embrace it. Open up, publish, collaborate, share, compromise. As they conclude:

Google should establish itself as a leader in the open source community, and take the lead by collaborating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means giving up some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.

