
red75prime t1_iynpyob wrote

It's not feasible to keep increasing the context window because the required computation grows quadratically with its length.
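A rough back-of-envelope sketch of that quadratic cost: standard self-attention forms an n×n score matrix, so doubling the context length roughly quadruples the work. The FLOP formula and the head dimension d=128 below are illustrative assumptions, not exact figures for any particular model.

```python
def attention_flops(n, d):
    """Approximate multiply-adds for one self-attention layer
    over a sequence of length n with head dimension d.
    QK^T costs ~n*n*d, softmax ~n*n, and (scores)V ~n*n*d."""
    return 2 * n * n * d + n * n

# Doubling the context length quadruples the cost
for n in (1024, 2048, 4096):
    print(n, attention_flops(n, d=128))
```

Because every term scales with n², the ratio between successive rows is exactly 4x, which is why naive context-window scaling hits a wall.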

> It doesn't need more context window to be more useful

It needs memory to be significantly more useful (as in large-scale disruptive) and, probably, other subsystems/capabilities (error detection, continual learning). Its current applications require significant human participation and scaling alone will not change that.

14

EntireContext OP t1_iynq6u8 wrote

I mean the context window will increase with incoming models. GPT-1 had a smaller context window than ChatGPT.

2

ChronoPsyche t1_iyp04j8 wrote

It will increase, but the size of those increases will slow down without major breakthroughs. You can't predict the rate of future progress solely from the rate of past progress over a short time scale.

You guys take the "exponential growth" stuff way too seriously. That refers to technological growth over the course of human history; individual time scales don't all follow the same growth pattern. If they did, we'd have reached the singularity long ago.

Bottlenecks sometimes occur in the short term and the context-window problem is one such bottleneck.

Nobody doubts that we can solve it eventually, but we haven't solved it yet.

There are potential workarounds, such as external memory systems, but those are only partial fixes that enable more modest context-window increases. External memory systems are not feasible for AGI because they are far too slow and scale poorly under dynamic workloads, not to mention they sit outside the neural network itself.
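To make the "external memory" workaround concrete, here is a toy sketch of the retrieval idea: store (embedding, text) pairs outside the model and pull back only the top-k most similar entries instead of attending over everything. The data and function names are invented for illustration; real systems use learned embeddings and approximate nearest-neighbor indexes, and the lookup still happens outside the network, which is the limitation described above.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy external memory: (embedding, text) pairs (made-up data)
memory = [
    ([1.0, 0.0], "fact about cats"),
    ([0.0, 1.0], "fact about dogs"),
    ([0.9, 0.1], "another cat fact"),
]

def retrieve(query, k=2):
    """Return the k stored texts whose embeddings best match the query."""
    ranked = sorted(memory, key=lambda m: cosine(query, m[0]), reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve([1.0, 0.1]))
```

This sidesteps the n² attention cost for old content, but retrieval quality depends entirely on the similarity search, which is why it's only a partial workaround.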

In the end, we need either an algorithmic breakthrough or quantum computers to solve the context-window problem as it relates to AGI. An algorithmic breakthrough is more likely to arrive before quantum computers become viable. If it doesn't, we may be waiting a long time for AGI.

Look into the concept of computational complexity if you want to better understand the issue we are dealing with here.

2