__ingeniare__ t1_jea74g3 wrote
Reply to comment by [deleted] in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Currently, AI research on large models (such as ChatGPT) is expensive, since you need large data centers to train and run them. Therefore, these powerful models are mostly developed by companies that have a profit incentive not to publish their research.
A well-known non-profit called LAION has launched a petition proposing a large, publicly funded international data center for researchers to use for training open source foundation models ("foundation model" means it's a large model used as a base for more specialized models; "open source" means the models are freely available for everyone to download). It's a bit like how particle accelerators are internationally and publicly funded for use in particle physics, except here it would be large data centers for AI development.
__ingeniare__ t1_jdhxcds wrote
Reply to comment by BinarySplit in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
I would think image segmentation of UIs to identify clickable elements and the like is a very solvable task
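For a rough idea, here's a minimal sketch of the classical-CV version of that task (OpenCV assumed installed; the screenshot path and size thresholds are made up, and a real system would more likely use a trained segmentation model):

```python
# Minimal sketch: find candidate clickable elements in a screenshot with
# classical CV (OpenCV assumed installed). A real system would likely use
# a trained segmentation/detection model; this only shows that UI widgets
# are visually separable. "screenshot.png" is a placeholder path.
import cv2

img = cv2.imread("screenshot.png")
assert img is not None, "replace screenshot.png with a real screenshot"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# buttons/inputs tend to be crisp rectangles on flat backgrounds
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w > 20 and h > 10:  # crude size filter for plausible widgets
        print(f"candidate element at ({x}, {y}), size {w}x{h}")
```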
__ingeniare__ OP t1_jdck0lh wrote
Reply to comment by Nukemouse in Why is this graph not a bigger deal? by __ingeniare__
Yes, you could; the specific implementation is irrelevant. The big thing is that it can estimate its confidence at all
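As a toy illustration of what "estimating confidence" can mean, here's a sketch that derives a confidence score from a model's raw output scores via softmax; the logit values below are made-up numbers, not from any real model:

```python
# Toy illustration: derive a confidence score from a model's raw output
# scores (logits) via softmax. The numbers are made up, not from any
# real model.
import math

logits = [2.1, 0.3, -1.0, -2.5]            # hypothetical scores for 4 options

exps = [math.exp(x) for x in logits]       # softmax: scores -> probabilities
probs = [e / sum(exps) for e in exps]

confidence = max(probs)                    # probability of the top answer
print(f"answer confidence: {confidence:.2f}")  # ~0.82 for these numbers
```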
Submitted by __ingeniare__ t3_11zhttl in singularity
__ingeniare__ t1_jc9f2xb wrote
Reply to comment by EricLee8 in [N] Baidu to Unveil Conversational AI ERNIE Bot on March 16 (Live) by kizumada
No, it will provide an answer that aligns with the CCP's agenda
__ingeniare__ t1_ja6z4j2 wrote
Reply to comment by AsthmaBeyondBorders in Singularity claims its first victim: the anime industry by Ok_Sea_6214
Flickering is not solved at the moment, yes, but how do you know it's far from being solved? Temporal consistency has already been solved in other areas of generative AI (like inter-frame interpolation for FPS upscaling). I wouldn't be surprised if flickering is solved by the end of this year. Stability AI's Emad has already talked about real-time generated video coming very soon thanks to a recent breakthrough in their algorithm, allowing for something like a 100x generation speedup.
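For intuition, here's a deliberately naive sketch of inter-frame interpolation, just a per-pixel blend of two neighbouring frames (OpenCV assumed; the frame filenames are placeholders). Real FPS upscalers use optical flow or learned interpolation instead, but the goal, temporal consistency, is the same:

```python
# Deliberately naive inter-frame interpolation: a per-pixel linear blend
# of two neighbouring frames (OpenCV assumed; filenames are placeholders).
# Real FPS upscalers use optical flow or learned interpolation instead.
import cv2

f0 = cv2.imread("frame_000.png").astype(float)
f1 = cv2.imread("frame_001.png").astype(float)

t = 0.5                       # midpoint in time between the two frames
mid = (1 - t) * f0 + t * f1   # blended in-between frame

cv2.imwrite("frame_000_5.png", mid.astype("uint8"))
```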
__ingeniare__ t1_ja1dse6 wrote
Reply to comment by HabeusCuppus in How long before we start to see chat AI that specializes in a certain field at a human or better level? by saleemkarim
From what I could find, it's literally just another GPT-3-powered service fine-tuned for legal work. It's not by OpenAI; they just invested some money in it. Where did you hear it's GPT-4?
__ingeniare__ t1_j9ei5el wrote
Reply to comment by RavenWolf1 in Whatever happened to quantum computing? by MultiverseOfSanity
It's kind of like what happened with AI: most people don't care about it until they have something tangible to play with, and then it seems like it just came out of nowhere. It will probably be a while until quantum computing affects the everyday person, though.
__ingeniare__ t1_j8i84uj wrote
Reply to comment by humptydumpty369 in ChatGPT Passed a Major Medical Exam, but Just Barely | Researchers say ChatGPT is the first AI to receive a passing score for the U.S. Medical Licensing Exam, but it's still bad at math. by chrisdh79
Given the current pace, there will likely be some model that scores in the top 50% (or even top of class) of medical students within one year from now.
__ingeniare__ t1_j8i7pmf wrote
Reply to comment by Lionfyst in ChatGPT Passed a Major Medical Exam, but Just Barely | Researchers say ChatGPT is the first AI to receive a passing score for the U.S. Medical Licensing Exam, but it's still bad at math. by chrisdh79
That has already happened; there's a hybrid ChatGPT/Wolfram Alpha program, but it's not available to the public. It can work out which parts of the user's request should be handed off to Wolfram Alpha and combine the results into the final output.
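The internals of that hybrid aren't public, so here's only a hedged sketch of the general pattern: route the fragments of a request that need exact computation to an external engine and merge the results. All function names and the routing heuristic below are invented for illustration:

```python
# Hedged sketch of the general tool-use pattern: decide which fragments of
# a request need exact computation, hand those to an external engine, and
# merge the results. Function names and the routing heuristic are invented
# for illustration; the real hybrid's internals aren't public.
def needs_math(fragment: str) -> bool:
    # toy heuristic standing in for the model's own routing decision
    return any(ch.isdigit() for ch in fragment)

def call_wolfram(query: str) -> str:
    # placeholder for a real Wolfram Alpha API call
    return f"<Wolfram result for '{query}'>"

def answer(request: str) -> str:
    parts = request.split(", ")
    resolved = [call_wolfram(p) if needs_math(p) else p for p in parts]
    return " | ".join(resolved)  # the LLM would phrase this fluently

print(answer("explain derivatives, 37 * 91"))
```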
__ingeniare__ t1_j8c4z0x wrote
Reply to comment by yickth in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
I think we have different definitions of scalable then. Our minds emerged from computation under the evolutionary pressure to form certain information processing patterns, so it isn't just any computation. Just so I understand you correctly, are you claiming an arbitrary computational system would inevitably lead to theory of mind and other emergent properties by simply scaling it (in other words, adding more compute units like neurons or transistors)?
__ingeniare__ t1_j8c0bbz wrote
Reply to comment by yickth in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
Let's say you have a computer that simply adds two large numbers. You can scale it indefinitely to add even larger numbers, but it will never do anything interesting beyond that, because it's not a complex system. Computation in itself does not necessarily lead to emergent properties; it is the structure of the information processing that dictates this.
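As a toy version of that point (Python, where integers are arbitrary precision, so the "scaling" comes for free):

```python
# Toy version of the point: an adder "scales" to arbitrarily large numbers
# (Python ints are arbitrary precision), but no amount of scaling changes
# what it does.
def add(a: int, b: int) -> int:
    return a + b

print(add(10**100, 10**100))  # a googol plus a googol; still just addition
```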
__ingeniare__ t1_j8b8y1b wrote
Reply to comment by yickth in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
I said that you can't have theory of mind appear from scaling just any compute system, not that you can't scale it.
__ingeniare__ t1_j880eru wrote
Reply to comment by yickth in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
The difference is the computing architecture. Obviously you can't just scale any computing system and have theory of mind appear as an emergent property, the computations need to have a pattern that allows it.
__ingeniare__ t1_j872ifz wrote
Reply to comment by ekdaemon in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
No, we don't/didn't. Artificial neural networks are very different from biological ones, and the transformer architecture has nothing to do with the brain.
__ingeniare__ t1_j8722ii wrote
Reply to comment by skolioban in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
You're talking about generative adversarial networks (GANs), an architecture from many years ago. More recent image generators tend to be based on diffusion, and text generators like the one in the article are transformer-based.
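To make "adversarial" concrete, here's a minimal GAN training loop sketch (PyTorch assumed; the network sizes, data, and hyperparameters are toy values, not from any real image generator):

```python
# Minimal sketch of the adversarial idea in a GAN (PyTorch assumed).
# Network sizes, data, and hyperparameters are toy values for illustration.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, 2) + 3.0   # stand-in "real" data
    fake = G(torch.randn(32, 16))

    # discriminator: label real samples 1, generated samples 0
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # generator: try to make the discriminator call its fakes real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```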
__ingeniare__ t1_j4fry4r wrote
Reply to comment by 2bdb2 in So wait, ChatGPT can code... But can't code itself? by Dan60093
Even if it could code better than humans (like AlphaCode, which outperforms most humans in coding competitions), that's not the hard part.
The hard part is the science/engineering aspect of machine learning; programming is just the implementation of ideas that have already been thought out. Actually coming up with useful improvements is significantly harder and requires a thorough grasp of the mathematical underpinnings of ML. ChatGPT is nowhere near capable of making useful contributions to the machine learning research community (in other words, of writing an ML paper), and it is therefore incapable of improving its own software. AI will most likely reach that level at some point, however, possibly in the near future.
__ingeniare__ t1_j1885k4 wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
It must develop its own goals and objectives if we intend it to do something general. Any complex goal must be broken down into smaller subgoals, and it's the subgoals we don't have any control over. That is the problem.
__ingeniare__ t1_j0tvtay wrote
Reply to comment by Ace_Snowlight in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
I wouldn't call ACT-1 AGI, but it looks revolutionary nonetheless. If what they show in those videos is legit, it will be a game changer.
__ingeniare__ t1_j0p2vrm wrote
Reply to comment by overlordpotatoe in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Not really; this isn't necessarily something it saw in the dataset. You can easily reach that conclusion by comparing the size of ChatGPT to the size of its dataset. The model is orders of magnitude smaller than the dataset, so it can't have just stored things verbatim. Instead, it has compressed the data down to the essence of what people tend to say, which is a vital step towards understanding. That's how it can combine concepts rather than just words, which also allows for potentially novel ideas.
__ingeniare__ t1_j0g3rav wrote
Reply to comment by sohaibshaheen in Predictive Artificial Intelligence by Final-Cause9540
Determinism can be deduced from the laws of physics: if the laws are deterministic, then the universe is, by necessity, also deterministic. But there are many strange things we don't yet know about, such as consciousness and free will. So no one really knows. Good talk!
__ingeniare__ t1_j0fw5pq wrote
Reply to comment by sohaibshaheen in Predictive Artificial Intelligence by Final-Cause9540
Not exactly; in this case it's theoretical because it is not of practical concern, despite being true. A theory in science is the highest possible status an idea can achieve; nothing can be conclusively proven.
Quantum randomness is a pretty popular idea, but everything else is known to be deterministic. Whether the universe as a whole is random or deterministic depends on whether quantum randomness is actually true randomness; maybe we'll have an answer in the coming decades.
__ingeniare__ t1_j0ftkkp wrote
Reply to comment by sohaibshaheen in Predictive Artificial Intelligence by Final-Cause9540
I'm not talking about practically predictable using current tech; I'm talking about theoretically predictable. Everything that happens can in principle be derived from the laws of the universe. The laws are deterministic (except, perhaps, for quantum mechanics). Therefore, everything is deterministic and can be predicted. Human emotions are only unpredictable because we don't have an accurate model of the brain of the human we're trying to predict. If we did, and had a computer powerful enough to simulate it, their behaviour could be predicted. Hence: lack of data (a model of the brain) and processing power (a computer to simulate it).
Also, chaos theory has nothing to do with the possibility of prediction, only its difficulty. It states that a chaotic system yields very different results for very small differences in initial conditions, not that there is some magic randomness that is impossible to account for in the system. Given the complete initial conditions, you can compute the outcome completely deterministically. So if you get it wrong because of a chaotic system, that was a lack of data (incorrect initial conditions).
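A classic demonstration is the logistic map: fully deterministic, no randomness anywhere, yet a one-part-in-a-billion difference in the starting point wipes out predictability within a few dozen steps:

```python
# The logistic map: fully deterministic, no randomness anywhere, yet two
# starting points differing by one part in a billion diverge completely.
x, y = 0.4, 0.4 + 1e-9
r = 3.9  # parameter value in the chaotic regime

for step in range(50):
    x = r * x * (1 - x)
    y = r * y * (1 - y)

print(f"after 50 steps: {x:.6f} vs {y:.6f}")  # wildly different values
```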
__ingeniare__ t1_j0e08zw wrote
Reply to comment by sohaibshaheen in Predictive Artificial Intelligence by Final-Cause9540
The only truly unpredictable events are those in quantum mechanics; everything else is just a lack of data and processing power to do the inference. And we're not even sure about the quantum events.
__ingeniare__ t1_jealsr1 wrote
Reply to comment by Veleric in The next step of generative AI by nacrosian
They're probably just creating vector embeddings of the information and retrieving it with semantic search; this has been around for quite some time already.
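Here's a hedged sketch of that pattern. The `embed` function below is a toy stand-in (a real system would use a sentence-embedding model or an embeddings API), but the retrieval logic is the essence of semantic search:

```python
# Hedged sketch of embedding + semantic search. embed() is a toy
# character-frequency stand-in; a real system would use a sentence-embedding
# model or an embeddings API. The retrieval logic is the actual essence.
import math

def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]   # unit-length vector

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))  # dot product of unit vectors

docs = ["the cat sat on the mat", "stock prices fell sharply", "a kitten on a rug"]
index = [(d, embed(d)) for d in docs]

q = embed("cat on a mat")
best = max(index, key=lambda pair: cosine(q, pair[1]))
print(best[0])  # nearest document under this toy metric
```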