imyourzer0 t1_je61740 wrote

I certainly don't know this bit, but I would assume that more complex molecules (which, from what you're saying, we know less about) are exponentially less likely. I say this mostly because the probability of finding element 1 and element 2 together at some point in the universe can't be greater than the probability of finding just 1 or just 2 on its own. So, once you've dealt with all the combinations of two or three elements, whatever's left is unlikely to severely tilt the scales, unless that numbers game really reverses under some conditions. But I take your point that if we can't describe larger molecules well, it's hard to say whether something more has its finger on the scale. Thanks for the answers!
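
To put the hand-waving in symbols (just a sketch, assuming the constituent elements turn up independently, which real chemistry surely violates):

```latex
% Co-occurrence can never be more probable than either event alone, and if the
% k constituent elements occurred (roughly) independently with probabilities
% p_i < 1, the joint probability would shrink multiplicatively -- i.e. roughly
% exponentially as k (molecular complexity) grows.
\[
  P(E_1 \cap E_2) \;\le\; \min\{P(E_1),\, P(E_2)\},
  \qquad
  P\!\left(\bigcap_{i=1}^{k} E_i\right) \;\approx\; \prod_{i=1}^{k} p_i .
\]
```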

1

imyourzer0 t1_je4sktv wrote

Ah, right! I think I mushed two YouTube videos from one channel together in my memory. The second part of my question was really what I was interested in, though. Given some distribution of the elements across the universe, can we estimate the prevalence of the compounds they form based on the elements' reactivities? For instance, would this predict that hydrocarbons should be common, since hydrogen is extremely prevalent and carbon is highly reactive?
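
To make that concrete, here's a toy Python sketch of the kind of estimate I have in mind. The abundance and reactivity numbers are made up for illustration, and the scoring rule (multiply abundance times reactivity per atom) is just a placeholder heuristic, not a real chemistry model:

```python
# Hypothetical toy heuristic: score a compound by multiplying, for each atom,
# a rough cosmic abundance by a crude "reactivity" factor. All numbers below
# are placeholders, not measured values.
abundance = {"H": 0.74, "He": 0.24, "O": 0.010, "C": 0.005, "N": 0.001}
reactivity = {"H": 1.0, "He": 0.0, "O": 0.9, "C": 0.8, "N": 0.7}  # assumed scale

def prevalence_score(formula):
    """formula: dict of element -> atom count, e.g. {"C": 1, "H": 4} for methane."""
    score = 1.0
    for element, count in formula.items():
        score *= (abundance[element] * reactivity[element]) ** count
    return score

print(prevalence_score({"C": 1, "H": 4}))   # CH4 (methane)
print(prevalence_score({"H": 2, "O": 1}))   # H2O (water)
```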

1

imyourzer0 t1_je3zxzb wrote

I recall reading elsewhere that the elements themselves (not compounds, per se) are distributed throughout the observable universe according to Zipf’s law (or something like it), so that they get less common as atomic number increases. So, would it be a reasonable extrapolation, then, to estimate compounds’ prevalence from the chemical reactivities of their constituent elements?
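
For what I mean by "Zipf’s law (or something like it)": the usual form says the frequency of the r-th most common item falls off as a power of its rank. Whether elemental abundances truly follow this, rather than just declining steeply with atomic number, is the part I'm unsure about:

```latex
% Zipf-type law: the abundance of the element with rank r (1 = most common)
% falls off as a power of the rank, with exponent s near 1 in the classic form.
\[
  f(r) \;\propto\; \frac{1}{r^{s}}, \qquad s \approx 1 .
\]
```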

2

imyourzer0 t1_iy5unj3 wrote

I certainly wouldn’t advise anyone to ignore new methods. That’s a point well taken. I’m only saying that when you have a working method, and its assumptions about your data can be validated (either logically or with tests), you don’t need to start looking for SOTA methods.

Really, the goal is solving a problem, and to do that, you want to find the most appropriate method—not just the newest method. It’s one thing to “keep up with the Joneses” so that when problems arise you know as many of the available tools as possible, but picking an algorithm usually doesn’t depend on whether that algorithm is new.

3

imyourzer0 t1_iy35ydo wrote

I don’t know why people worry so much about the state of the art. Sometimes the right tool has already existed for a while. In a lot of cases, PCA works just fine, or at least well enough that something much more recent won’t give you a much better answer. Like another commenter has already said, depending on the assumptions you can make (or are willing to make), the best choice needn’t be at the bleeding edge.
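
For what it’s worth, a minimal PCA sketch with scikit-learn (assuming that’s the toolchain; the data here is just random noise for illustration) is only a few lines:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 samples, 10 features (toy data)

pca = PCA(n_components=2)               # keep the top 2 principal components
X_reduced = pca.fit_transform(X)        # project the data onto those components

print(X_reduced.shape)                  # (200, 2)
print(pca.explained_variance_ratio_)    # share of variance captured by each component
```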

38