LanchestersLaw
LanchestersLaw t1_je22xzv wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
Something I've seen a lot of on Reddit, and you can get a slice of it here: now that GPT is out, it's "let me build an app that has GPT do this thing automatically", with varying degrees of success, from dating bots to medical diagnosis tools.
LanchestersLaw t1_jdx8cyc wrote
So you're telling me communism is better at making long-lasting cars than long-lasting states.
LanchestersLaw t1_jdszbjk wrote
Reply to comment by addition in [D] GPT4 and coding problems by enryu42
What I think is most amazing is that GPT got this far while only trying to predict the very next word, one word at a time. The fact that it can generate essays while only considering one token at a time is mind-boggling.
With all the feedback from ChatGPT, it should be easy to build a supervisor that looks at GPT's entire final output and predicts what the user would say in response, then feeds that prediction back to GPT to revise the output, recursing until it converges. That would be relatively simple to do but very powerful.
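A minimal sketch of that loop (`ask_gpt()` is a hypothetical stand-in for whatever LLM API you use, and the convergence check is just a placeholder):

```python
def ask_gpt(prompt: str) -> str:
    """Hypothetical helper: call your LLM of choice and return its reply."""
    raise NotImplementedError

def supervised_answer(question: str, max_rounds: int = 5) -> str:
    draft = ask_gpt(question)
    for _ in range(max_rounds):
        # Supervisor step: predict the user's reaction to the full draft.
        critique = ask_gpt(
            f"A user asked: {question}\n\nDraft answer: {draft}\n\n"
            "Predict the user's most likely complaint or follow-up. "
            "If the draft already needs no changes, reply exactly: OK"
        )
        if critique.strip() == "OK":
            break  # converged: the supervisor predicts no further complaints
        # Revision step: feed the predicted reaction back into GPT.
        draft = ask_gpt(
            f"Revise this answer to address the feedback.\n\n"
            f"Answer: {draft}\n\nFeedback: {critique}"
        )
    return draft
```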
LanchestersLaw t1_jdiw7op wrote
Reply to comment by mycall in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
The example data does demonstrate object detection.
LanchestersLaw t1_jd42gi0 wrote
I love it! I'm not a buzzkill; the animation looks great and communicates change over time.
LanchestersLaw t1_jcf5x9c wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
I think the most similar historical example is the Human Genome Project, where the government and private industry were both racing to be the first to fully decode the human genome, but the US government was releasing its data and industry could use it to get even further ahead.
It's the classic prisoner's dilemma. If both parties are secretive, research is much slower and might never complete: a small probability of finishing the project first, for a high private reward to the owner and a low reward to society. If one party shares and the other does not, the withholding party gets a huge comparative boost: a high probability of a high private reward. If both parties share, we get the best case, with the parties able to split the work and share insights so less time is wasted: a very high probability of a high private and high public reward.
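As a toy payoff matrix (the numbers are illustrative assumptions; only their ordering matters, and it is exactly the classic prisoner's dilemma ordering where defecting against a sharer pays best):

```python
# Illustrative payoffs for the share/withhold game above.
PAYOFFS = {  # (A's choice, B's choice) -> (A's payoff, B's payoff)
    ("share", "share"): (8, 8),        # split the work, both finish fast
    ("share", "withhold"): (2, 10),    # B free-rides on A's openness
    ("withhold", "share"): (10, 2),    # A free-rides on B's openness
    ("withhold", "withhold"): (3, 3),  # both crawl along in secret
}

for (a, b), (pa, pb) in PAYOFFS.items():
    print(f"A {a:>8} / B {b:>8} -> A: {pa}, B: {pb}")
```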
I think for AI we need mutual cooperation, and we need to stop seeing ourselves as rivals. The rewards of AI cannot be privatized, for the shared mutual good of humanity in general ("humanity" regrettably does include Google and the spider piloting Zuckerberg's body). Mutually beneficial agreements with enforceable punishment for contract breakers are what we need to defuse tensions, not an escalation of tensions.
LanchestersLaw t1_jcf41pg wrote
Reply to comment by Competitive_Dog_6639 in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Bing Chat runs on GPT-4, and a full version with multimodality is available as a research preview.
LanchestersLaw t1_j9hk5h1 wrote
Reply to comment by Deto in [OC] How Walmart makes money (they just released earnings for the fiscal year ending January 31) by IncomeStatementGuy
Walmart is the biggest purchaser of many goods, which makes it the counterpart to a monopoly: a monopsony (a single purchaser instead of a single producer).
Walmart has incredible negotiating power. When there is a price shock, Walmart forces its suppliers to take the hit with the threat of never buying from them again, which it can credibly do. One of the efficiencies of Walmart's economy of scale is that it pays below market rate for most of its goods because of bulk purchasing efficiencies and negotiating power.
https://www.econ.ucdavis.edu/events/papers/1014JustinCWiltshire_JMP.pdf
LanchestersLaw t1_j7nb8o9 wrote
Reply to [D] Should I focus on python or C++? by NoSleep19
C++ is optional. It has its benefits but isn't strictly necessary. The bulk of ML is calling functions from libraries, which do the hard part for you. The part of Python you really need to know is how to use the existing libraries; that is the bare minimum to do ML.
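To make that concrete, here's a minimal sketch of what "calling functions from libraries" looks like in practice (scikit-learn, using its bundled iris dataset so it runs as-is):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier()  # the library implements the hard part
model.fit(X_train, y_train)       # training is one call
print("accuracy:", model.score(X_test, y_test))
```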
LanchestersLaw t1_j7mkxvx wrote
Reply to Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
It matters way more what you're doing than the language itself. I can easily make an infinite-loop C program that uses more energy than a Haskell program.
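As a toy illustration (in Python rather than C/Haskell, but the point is language-agnostic): both functions below "wait" the same two seconds, yet the busy loop pegs a CPU core while the sleep barely uses any.

```python
import time

def busy_wait(seconds: float) -> None:
    """Burn CPU (and watts) doing nothing useful."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass  # spin as fast as the interpreter allows

def idle_wait(seconds: float) -> None:
    """Same elapsed time, almost no CPU."""
    time.sleep(seconds)

busy_wait(2.0)  # watch CPU usage spike
idle_wait(2.0)  # watch it drop to ~0%
```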
LanchestersLaw t1_j6ygz44 wrote
It keeps looping over and over again! It never ends!
LanchestersLaw t1_j64bjww wrote
Reply to [OC] Housing tenure in the UK by age over 20 years. A big increase in young renters and older people owning their homes outright. by Optimal-Credit-1945
In 2002 there was a peak of home ownership among people aged 70-74. Twenty years later, in 2022, those same people are aged 90-94, which puts them in the 85+ bracket.
Older people didn't buy more homes; the age bracket with the highest home ownership simply got older.
LanchestersLaw t1_j3yi7x6 wrote
Reply to I analyzed 11000 products of a Dutch supermarket to find the cheapest sources of protein [OC] by MemeableData
Top 10 foods SWOLE VEGANS don't want you to know about!!!
LanchestersLaw t1_j3dh4ws wrote
Reply to comment by Baturinsky in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
The key terms to search for better answers are "control problem" and "AI safety". In my personal opinion, ChatGPT/GPT-3.5 is an inflection point. GPT-3.5 can understand programming code well and do a passable job generating it, including its own code; one of the beginner tutorials has GPT write code that calls its own API.
That said, GPT-3.5 has many limitations. It isn't a threat. Future versions of GPT have the potential to be very disruptive.
LanchestersLaw t1_j3afzkc wrote
I think this is a valid question, and many of the responses here were quite snarky and unnecessarily rude. The short answer is yesn't. As with any regulation, there is no one solution everyone will agree on or be happy with. I felt a particularly ironic way to answer this question was with ChatGPT, which gave me this response:
> “There is certainly a valid concern about the potential misuse of AI, and it is important for society to consider the risks and how to mitigate them. However, it is also important to consider the potential benefits of AI and not restrict research in a way that would prevent those benefits from being realized. It is a delicate balance to strike.
> There are already some regulations and guidelines in place for AI research and development, such as the Asilomar AI Principles and the EU's General Data Protection Regulation (GDPR). In addition, many companies and organizations have their own guidelines and best practices for responsible AI development.
> It is also important to note that AI is not a single monolithic technology, but rather a broad field with many different applications and use cases. The risks and benefits of AI can vary depending on the specific application, so a one-size-fits-all approach to regulation may not be effective.
> Overall, it is important for society to continue to have an open and honest dialogue about the risks and benefits of AI and how best to address them. It is also worth noting that AI is still in the early stages of development, and there are many technical and practical limitations to what it can currently do. While it is important to be mindful of the potential risks and to take steps to mitigate them, it is also important to keep these limitations in perspective and not let fear or anxiety about the potential misuse of AI obscure the many potential benefits it can bring.”
Edit: r/artificial is a better community to ask this question. This one is mostly about the technical details of how the complicated math works and “why is my model doing something stupid?”
LanchestersLaw t1_j1af69w wrote
Reply to comment by quantuminous in The Heritage Foundation data sources contradict its report that the US city with the 30th highest rate of homicide was 2.4 / 100K. One of their referenced sources- "CHR&R" (footnote 25) - has more than 1K counties with higher homicide rates. Here's that data. Relevant links are in the comments. [OC] by quantuminous
I applaud what was surely many hours of painstaking searching for truth in the face of lazy lies.
LanchestersLaw t1_j1aev2b wrote
Reply to comment by Pac_Eddy in The Heritage Foundation data sources contradict its report that the US city with the 30th highest rate of homicide was 2.4 / 100K. One of their referenced sources- "CHR&R" (footnote 25) - has more than 1K counties with higher homicide rates. Here's that data. Relevant links are in the comments. [OC] by quantuminous
I am continuously amused by the fact that Chicago/New York/name_a_city is not the county with the highest homicide rate, just a place with a high absolute number of homicides. The actual culprits are rural counties in the Deep South.
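The arithmetic behind that (the numbers here are made up for illustration, not taken from the dataset):

```python
def rate_per_100k(homicides: int, population: int) -> float:
    return homicides / population * 100_000

# Illustrative, assumed numbers: a big city dwarfs a rural county in
# absolute homicides but can still have a much lower *rate*.
big_city = rate_per_100k(homicides=500, population=2_700_000)  # ~18.5
rural_county = rate_per_100k(homicides=6, population=15_000)   # 40.0

print(f"big city:     {big_city:.1f} per 100k")
print(f"rural county: {rural_county:.1f} per 100k")
```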
LanchestersLaw t1_iyof6it wrote
In general: it depends on a lot of things.
For your specific case: it will almost certainly be better to feed it the raw data.
There were a couple of points in your post that seem like fundamental (but common) misunderstandings of neural nets.
- You cannot scale a network up infinitely; that causes overfitting. There is an optimal size, and finding it depends on the problem (see the sketch after this list).
- Summary statistics are not features. Yes, it sometimes makes sense to calculate features, but it almost never makes sense to feed the model summary statistics.
- Yes, neural nets identify "features"; no, they are not anything you would recognize as a feature.
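Here's the promised sketch of the first point: fitting polynomials of increasing degree to noisy data (numpy only; the degrees and noise level are arbitrary choices for illustration). Train error keeps falling as the model grows, while held-out error typically bottoms out and then climbs, i.e. overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy ground truth

train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)  # interleaved split

for degree in (1, 3, 9, 15):
    poly = np.polynomial.Polynomial.fit(x[train], y[train], degree)
    train_mse = np.mean((poly(x[train]) - y[train]) ** 2)
    test_mse = np.mean((poly(x[test]) - y[test]) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```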
LanchestersLaw t1_iv9de2l wrote
Reply to comment by leibnizpascal in [OC] Detailed Language Family Map of the World by BLAZENIOSZ
A lot of research went into trying to define language families. It is a very interesting topic to read about. https://en.m.wikipedia.org/wiki/Proto-Indo-European_language
The TL;DR version is that languages evolve in predictable ways. You can reverse engineer what a parent language would sound like by comparing its daughter languages. This is very similar to cladistic analysis in evolutionary biology. By comparing traits of organisms it is possible to reverse engineer how they evolved.
Like most scientific hypotheses, the evidence for language families was pretty flimsy at first, but it has accumulated over time and is supported by archeological finds and DNA analysis. All languages should have a common ancestor language far enough back, in the same way all organisms have a common ancestor. Unlike organisms, languages leave no fossils, and the language groups shown here are basically as far back as we are able to demonstrate similarities between languages.
India is one of the most interesting countries in the world linguistically because it has multiple language families. Because children tend to speak their parents' language, the language families correspond to human migrations. The Dravidian speakers of South India used to be more widespread before Proto-Indo-Europeans originating from Central Asia displaced them. This is an incredibly interesting topic to read about. https://en.m.wikipedia.org/wiki/Peopling_of_India
LanchestersLaw t1_jea6q5k wrote
Reply to [D] What do you think about all this hype for ChatGPT? by Dear-Vehicle-3215
Devil's advocate: why shouldn't the biggest leap in progress towards AGI, and the shocking rate of that progress, be hyped? Even if you limit the news to publications by MS/closedAI, a lot is happening: progress that was expected to take years is taking weeks.