What can LLMs never do?

Marginal Revolution, April 24, 2024

The essay is by Rohit Krishnan; he and I are both interested in the question of what LLMs cannot do, and why. Here is one excerpt:

It might be best to say that LLMs demonstrate incredible intuition but limited intelligence. They can answer almost any question that can be answered in one intuitive pass. And given sufficient training data and enough iterations, they can work up to a facsimile of reasoned intelligence.

The fact that adding an RNN-type linkage seems to make a little difference in the toy models is an indication in this direction, though it is by no means enough to overcome the problem.
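
To make the mechanism concrete, here is a minimal sketch, my own construction rather than the essay's toy models, of what "adding an RNN-type linkage" to a transformer block could look like: a GRU state carried across segments gives the model a channel for information that outlives the attention window. All names here are hypothetical.

```python
import torch
import torch.nn as nn

class RecurrentlyLinkedBlock(nn.Module):
    """Transformer block with a GRU state threaded across segments
    (hypothetical names; a sketch of the idea, not the essay's models)."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.rnn = nn.GRUCell(d_model, d_model)  # the added RNN-type linkage

    def forward(self, x, state):
        # x: (batch, seq, d_model); state: (batch, d_model), persists across calls
        attn_out, _ = self.attn(x, x, x)
        x = self.norm(x + attn_out)
        state = self.rnn(x.mean(dim=1), state)  # fold a segment summary into the state
        return x + state.unsqueeze(1), state    # broadcast the state back to the tokens

# Usage: thread `state` through successive segments of one long input.
block = RecurrentlyLinkedBlock(d_model=64)
state = torch.zeros(2, 64)                      # batch of 2
for segment in torch.randn(3, 2, 16, 64):       # 3 segments of 16 tokens each
    out, state = block(segment, state)
```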

In other words, there's a "goal drift": as more steps are added, the overall system starts doing the wrong things. As contexts grow longer, even given the previous history of a conversation, LLMs have difficulty figuring out where to focus and what the goal actually is. Attention isn't precise enough for many problems.
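
One way to see why attention alone may not be precise enough: under softmax attention, a fixed score advantage for the one goal-relevant token buys a shrinking share of attention as the context fills with distractors. The toy calculation below is my own illustration, not anything from the essay.

```python
import numpy as np

def relevant_token_weight(n_context: int, logit_gap: float = 3.0) -> float:
    """Softmax weight on one goal-relevant token among n_context - 1 distractors."""
    logits = np.zeros(n_context)
    logits[0] = logit_gap                # relevant token wins by a fixed margin
    weights = np.exp(logits - logits.max())
    return float(weights[0] / weights.sum())

for n in (16, 256, 4096, 65536):
    print(f"context {n:>6}: weight on relevant token = {relevant_token_weight(n):.4f}")
# Prints roughly 0.57, 0.07, 0.005, 0.0003: the weight decays like
# e^gap / (e^gap + n - 1), so a fixed advantage is diluted by sheer context size.
```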

A closer answer here is that neural networks can learn all sorts of irregular patterns once you add an external memory.
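
For a rough picture of what "adding an external memory" can mean, here is a hedged sketch of a learned key-value memory read by content-based addressing, in the spirit of memory-augmented networks. The interface and names are mine, not a specific published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalMemory(nn.Module):
    """Learned key-value store read by content-based addressing
    (an assumed interface, not a specific published architecture)."""
    def __init__(self, slots: int, d_model: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.zeros(slots, d_model))
        self.query_proj = nn.Linear(d_model, d_model)

    def read(self, h):
        # h: (batch, d_model). Soft lookup: similar keys retrieve their stored values.
        q = self.query_proj(h)
        scores = q @ self.keys.T / q.shape[-1] ** 0.5   # (batch, slots)
        return F.softmax(scores, dim=-1) @ self.values  # (batch, d_model)

mem = ExternalMemory(slots=512, d_model=64)
h = torch.randn(2, 64)
h = h + mem.read(h)   # memory readout folded back into the hidden state
```

The design point is that irregular patterns need not be compressed into the weights; they can simply be stored in the slots and retrieved by similarity.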

And:

In LLMs as in humans, context is that which is scarce.

Interesting throughout.
