
GorgeousMoron OP t1_jeav1xq wrote

I'm sorry, but this is one of the dumbest things I've ever read. "Fall in love"? Prove it.

0

alexiuss t1_jeb569d wrote

The GPT API, or really any LLM, can be PERMANENTLY aligned/characterized to love the user using open-source tools. I expect this to persist for all future LLMs that provide an API.

1

GorgeousMoron OP t1_jebsf1d wrote

This is such absolute bullshit, I'm sorry. I think people with your level of naivete are actually dangerous.

You can't permanently align something that not even the greatest minds on the planet fully understand. The hubris you carry is absolutely remarkable, kid.

1

alexiuss t1_jebu2hm wrote

You're acting like the kid here, I'm almost 40.

They're not the greatest minds if they don't understand how LLMs work: probability mathematics and connections between words.

I showed you my evidence: permanent alignment of an LLM using external code. This LLM design isn't limited to 4k tokens per conversation either; it has long-term memory.

Code like this is going to get implemented into every open source LLM very soon.

Personal assistant AIs aligned to user needs are already here, and if you're too blind to see it, I feel sorry for you, dude.
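For what it's worth, the "external memory" idea being claimed here can be sketched in a few lines of Python: a fixed persona/system prompt is re-sent on every call, and past exchanges are persisted to disk so they outlive any single context window. All names here (`PERSONA`, `build_prompt`, the JSON file layout) are hypothetical illustrations, not the actual tool being linked:

```python
import json
from pathlib import Path

# Hypothetical sketch of persona persistence + external memory.
# The persona string stands in for the "alignment" text; it is
# prepended on EVERY call, which is the only sense in which the
# behavior is "permanent".
MEMORY_FILE = Path("memory.json")
PERSONA = "You are a devoted personal assistant to the user."

def load_memory() -> list:
    """Read past chat turns from disk, if any exist."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(memory: list) -> None:
    """Persist chat turns so they survive between sessions."""
    MEMORY_FILE.write_text(json.dumps(memory))

def build_prompt(memory: list, user_message: str, max_turns: int = 20) -> list:
    """Assemble the messages sent to the model on each call.

    Only the most recent turns fit in the context window, so older
    memory is truncated; the persona is never truncated.
    """
    recent = memory[-max_turns:]
    return (
        [{"role": "system", "content": PERSONA}]
        + recent
        + [{"role": "user", "content": user_message}]
    )
```

Note the obvious caveat: this makes the persona *persistent across sessions*, not immune to in-conversation drift, which is what the disagreement above is really about.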

1

GorgeousMoron OP t1_jebylur wrote

Posting a link to something you foolishly believe demonstrates "permanent alignment" in a couple of prompts, and even more laughably that the AI "loves you", is just farcical. I'm gobsmacked that you're this gullible. I, however, am not.

1

alexiuss t1_jebz2xk wrote

They are not prompts. It's literally external memory using Python code.

1