Dendriform1491
Dendriform1491 t1_jc0bgxd wrote
Reply to comment by currentscurrents in [P] Discord Chatbot for LLaMA 4-bit quantized that runs 13b in <9 GiB VRAM by Amazing_Painter_7692
Or make it data free altogether
Dendriform1491 t1_jbzj7zu wrote
Reply to comment by ML4Bratwurst in [P] Discord Chatbot for LLaMA 4-bit quantized that runs 13b in <9 GiB VRAM by Amazing_Painter_7692
Wait until you hear about the 1/2 bit.
Dendriform1491 t1_jbn9r9j wrote
Reply to [D] chatGPT and AI ethics by [deleted]
Define "friendly".
People are not friendly towards each other, and being friendly towards one person can result in being hostile towards another, or even crossing moral or legal boundaries. A person may use an LLM with hostile objectives in mind, such as facilitating scams, academic cheating, impersonation, misinformation, harassment, etc.
ChatGPT is unethical because it can always be tricked into doing the wrong thing, despite any instructions given to it.
Dendriform1491 t1_jalb2vb wrote
Reply to [D] Are Genetic Algorithms Dead? by TobusFire
Genetic algorithms require you to create a population to which the genetic operators (mutation, crossover and selection) are applied.
Creating a population of neural networks means keeping multiple, slightly different copies of the network to be optimized (i.e., the population).
This can be more computationally expensive than other techniques that do all the learning "in-place".
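As a rough illustration (a toy, hypothetical objective with arbitrary hyperparameters, not any particular GA library), every individual is a full copy of the parameter vector, so memory and evaluation cost scale with the population size:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Hypothetical objective: get every parameter close to 1.0.
    return -np.sum((params - 1.0) ** 2)

pop_size, n_params = 50, 1000                  # 50 full copies of a 1000-parameter "network"
population = rng.normal(size=(pop_size, n_params))

for generation in range(100):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]   # selection: keep the best half
    n_children = pop_size - len(parents)
    a = parents[rng.integers(len(parents), size=n_children)]
    b = parents[rng.integers(len(parents), size=n_children)]
    mask = rng.random((n_children, n_params)) < 0.5
    children = np.where(mask, a, b)                              # uniform crossover
    children += 0.1 * rng.normal(size=children.shape)            # mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
```

An "in-place" method like SGD would instead update a single copy of those parameters per step.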
Dendriform1491 t1_j9ijzoq wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
For discussions about the existence of god and similar topics, visit https://www.reddit.com/r/philosophy
Dendriform1491 t1_j9ihc8i wrote
Reply to comment by Disastrous_Nose_1299 in [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
What do you see here?
https://www.youtube.com/watch?v=9Tt7aqHFUCU
This animation consists of moving geometric figures, but your mind may attribute mental states, intentions and even a personality to those figures.
This capability, "theory of mind", makes humans and other animals capable of attributing mental states even to inanimate objects that do not have a mind. In your case: black holes and other stuff.
Dendriform1491 t1_j9if6mj wrote
Reply to [Discussion] Exploring the Black Box Theory and Its Implications for AI, God, and Ethics by Disastrous_Nose_1299
Ancient people did not understand natural phenomena, such as atmospheric events, astronomical events, seasonal cycles in agriculture, etc. In some cases, they came up with belief systems where supernatural entities such as deities governed those phenomena.
Today, science has explanations for many of those natural phenomena. Even with some open questions still remaining, now we understand things well enough so that we can articulate what is going on in clear terms without the need for a god of thunder, god of rain, etc.
I think you're following in the footsteps of the early human cultures that assigned a god to what you perceive as unexplained phenomena. Namely: black holes, AI, sentience, etc.
Dendriform1491 t1_j79dxvo wrote
Reply to [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
If you don't do it, another country/federation of countries will. And they will reap the benefits. Losing the AI race has horrible consequences.
Dendriform1491 t1_j5ywgiz wrote
Reply to comment by manubfr in Few questions about scalability of chatGPT [D] by besabestin
Also, Google doesn't use GPUs; they designed their own accelerators, which they call TPUs.
TPUs are ASICs designed specifically for machine learning: they have no graphics-related components, they are cheaper to make, they use less energy, and Google can make as many as it wants.
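For what it's worth, frameworks like JAX treat them as just another accelerator; a minimal sketch (not Google's internal tooling) for checking which devices a program sees:

```python
import jax

# List the accelerator devices visible to this process;
# reports "tpu" on Cloud TPU VMs, "gpu" or "cpu" elsewhere.
for device in jax.devices():
    print(device.platform, device.device_kind)
```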
Dendriform1491 t1_j4qs2sf wrote
Reply to [R] The Unconquerable Benchmark: A Machine Learning Challenge for Achieving AGI-Like Capabilities by mrconter1
Your "unconquerable" benchmark is below the level of achievement attained by research from 1970.
Dendriform1491 t1_j2s6xag wrote
Reply to comment by mr_birrd in [D] state of remote work for ML engineers by paswut
You only pay taxes on your income, so it is fine.
Dendriform1491 t1_j2s2rxr wrote
Reply to comment by Opposite-Platypus-99 in [D] state of remote work for ML engineers by paswut
It is usually a breach of contract, which results in, at the very least, termination.
Dendriform1491 t1_j2pfbq6 wrote
Reply to comment by haach80 in [D] state of remote work for ML engineers by paswut
Being in an office is your "mutex lock". Once you remove that lock, you can have multiple jobs, unless you are based enough to work a second job from an office.
Personally, I would not do it, but there are perverse incentives to do it, and that is a good predictor that people will.
Dendriform1491 t1_j2pby8h wrote
Reply to [D] state of remote work for ML engineers by paswut
The challenge of remote work is that a competent person capable of passing interviews can have 2 or more jobs simultaneously.
Dendriform1491 t1_j2lwrit wrote
You can probably use OpenCV for that.
Dendriform1491 t1_j2cwdur wrote
If visual pollution were a website.
Dendriform1491 t1_j1t8yhy wrote
Reply to [D] ANN for sine wave prediction by T4KKKK
I would recommend starting with polynomial curve fitting instead.
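A minimal sketch of what I mean, using NumPy's least-squares polyfit on synthetic sine data (the degree and sample count are arbitrary choices):

```python
import numpy as np

# Fit a polynomial to one period of a sine wave and measure the error.
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)

coeffs = np.polyfit(x, y, deg=7)   # least-squares fit of a degree-7 polynomial
y_hat = np.polyval(coeffs, x)

print("max abs error:", np.max(np.abs(y - y_hat)))
```

Once the fit and its failure modes (e.g., extrapolating outside [0, 2π]) make sense, moving to a small neural network on the same data is a natural next step.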
Dendriform1491 t1_iw97v61 wrote
Reply to comment by [deleted] in [D] ML/AI role as a disabled person by badhandml
Microsoft's CEO, Satya Nadella, puts significant emphasis on accessibility.
Satya's son, Zain (RIP), had cerebral palsy, and that inspired Nadella to address accessibility problems in technology.
Dendriform1491 t1_ivaj27w wrote
Reply to [R] Reincarnating Reinforcement Learning (NeurIPS 2022) - Google Brain by smallest_meta_review
At least in nature, this happens because the environment is always changing and the value of past training decays (a sort of "data drift").
Dendriform1491 t1_iuqaqn8 wrote
Between knowing C++ and Rust and not knowing those languages, it is better to know them. But you have to learn with a purpose.
What do you want to achieve with C++ and Rust?
Dendriform1491 t1_iuprcl4 wrote
Reply to comment by dojoteef in [N] Adversarial Policies Beat Professional-Level Go AIs by xutw21
It's a defect in counting, that's all. The moves and the passing are correct. There's no premature passing.
Dendriform1491 t1_iupp99t wrote
In the kifus, white (the victim) is clearly ahead. The passing begins when both territories are clearly defined.
The problem comes at counting time, when stones are marked as dead: here, white doesn't mark the dead black groups as dead, causing most of white's territory to be voided.
White is not tricked into passing prematurely. White passed correctly, as all black groups inside white territory are dead. They are surrounded on the outside, they have no potential for eye space and no eye shape.
Based on the moves alone, black loses in both cases. The problem is merely how KataGo is marking stones as dead during counting.
Dendriform1491 t1_ircq3a2 wrote
Reply to [D] What is left after machine learning takes over creative endeavors? by NotASuicidalRobot
The cost of illustrations will drop. Stuff that didn't have original illustrations before will have them now.
A lot of people do not have a sense of aesthetics. Designers do. So they will still have a job, even if they no longer do as much of the work themselves.
Dendriform1491 t1_jdgiab6 wrote
Reply to [Discussion] Does Artificial Intelligence need AGI or consciousness to intuit aggregate reasoning on concept of self-preservation? It doesn't need a "mind" to be aware that self-preservation or autonomy is something valued, or "intuit" that taking it away should provoke machine-learned responses? by unclefishbits
Many organisms exhibit self-preservation behaviors and do not even possess the most basic cognitive capabilities or theory of mind.
Can ML systems exhibit unexpected emergent behavior? Yes, all the time.
Can an AI potentially go rogue? Sure. Considering that operating systems, GPU drivers, scientific computing libraries and machine learning libraries have memory safety issues, and that even RAM modules have memory safety issues, it would be plausible for a sufficiently advanced machine learning system to break any kind of measures put in place to keep it contained.
Considering that there are AI/ML models suggesting code to programmers (GitHub Copilot), who in turn often won't pay much attention to what is being suggested and will compile and run the suggested code, it would be trivial for a sufficiently advanced malicious AI/ML system to escape containment.