newappeal t1_ivosk9i wrote
Reply to comment by PurpleSunCraze in If the Human Genome Project represents a map of the genome of a few individuals, why is this relevant to humans as a whole if everybody has different genetics? by bjardd
Every organism that exists or ever existed came to be through the interaction of its genome and its environment, which is essentially a huge complex of chemical reactions. So if you can edit genomes and control an organism's local environment (both possible), then you can produce at least anything that has ever existed and unfathomably many things that never have. That doesn't mean literally anything imaginable, but it does mean many, many things.
However, the ability to grow organisms with arbitrary characteristics requires biochemical knowledge far beyond what we have now. The technical limitations we currently face are nothing compared to the knowledge gap.
Moreover, the hypothetical scenario I'm talking about here involves creating an artificial genome in an artificial cell and then growing a macroscopic organism with an arbitrary body plan from it. That's theoretically possible, for sure, because the natural equivalent - growing a macroscopic organism from a single cell - happens literally all the time.
But what you're describing with this hypothetical full-body, genetic-level sex change of an adult human doesn't really make sense from a technical perspective. I mean, sure, it's theoretically possible to completely deconstruct a human body to the molecular level and then construct a new one, but that has nothing to do with genetic engineering. Remember that we're not talking about growing an organism from a single germline cell in this case - we're talking about restructuring every single somatic cell in a fully developed organism. The composition and structure of tissues and organs are not determined by their cells' current genetic makeup (even if we include epigenetics); rather, they are the result of biochemical changes across the organism's entire developmental history. Simply swapping out every single cell's DNA (even if we could do that) would not cause the organism to suddenly transform into the organism it would have been if it had had that genome from the start.
Here's an analogy: If you change the blueprints of a house before the house is built, then you change the house. But if you change the blueprints after construction, the house doesn't change. All you would do is cause problems for anyone who wanted to repair or remodel the house, because the plans wouldn't match the actual house. Can you tear down the house and rebuild it a different way? Sure. But that's a fundamentally different process.
newappeal t1_ivop1tf wrote
Reply to comment by Envenger in How does extracting venom from animals help us create antidotes? by asafen
It uses the same adaptive immunity that mammals (us included) use to develop antibodies against pathogens. A key bit of information that seems to be missing from all the answers here is that most venoms are proteins, which is what mammalian immune systems usually produce antibodies against. (That's not to say we can only develop antibodies against proteins in particular - the key metric is the size of the molecule. Larger molecules have - by virtue of being large and structurally diverse - more unique structures than smaller ones, so chemical interactions between large molecules can be more specific and therefore stronger than those between small molecules.)
So armed with the knowledge that all or most mammals have similar immune systems that can develop antibodies against virtually any protein, and that venoms are proteins, it stands to reason that you can make antibodies to venom in most mammals. We use mammals like goats, rabbits, sheep, and horses to make other antibodies for scientific research, too.
Edit: A bit of a primer on poisons might be helpful here. As noted above, we can develop antibodies against large poisonous biomolecules, whether they are enzymes that directly interfere with our biochemistry (making us sick) or receptors on viral particles that act during one step of a longer process that in the end interferes with our biochemistry (and thus makes us sick). But some poisonous molecules are small, so we cannot develop antibodies against them. Arsenic (as arsenate), cyanide, heavy metals (lead, cadmium), radioactive iodine, and mustard gas are examples of such poisons. They act by displacing small biomolecules (arsenate replaces phosphate; heavy metals replace other metal cofactors like iron, copper, and cobalt; radioactive iodine replaces stable iodine), competing for an enzyme's binding site (cyanide outcompetes oxygen), or reacting irreversibly with a biomolecule (mustard gas reacts with DNA). Because they are small, these poisons look a lot like other chemical species that occur frequently in biology (which is precisely why things like arsenic and lead are toxic), so antibodies against them would cause autoimmunity. Venom enzymes, being large, have unique structures that occur nowhere else in the target organism's own biology, and so they can be uniquely identified and bound by antibodies.
newappeal t1_iv0pel7 wrote
Reply to comment by ElegantEpitome in Do spiders always build their own webs, or do they sometimes live in a web vacated by another spider? by GoodAndBluts
Not sure if you're actually asking for a simplified explanation, but here's one anyway:
Spider webs are made up of complicated parts, and different spiders use different parts. When a spider eats another spider's web, it breaks the complicated parts (big molecules) into simple ones (small molecules) that it can use to rebuild its own web parts. This is exactly how our own bodies process the food we eat.
It's like how you couldn't build one model of car using only fully assembled parts from a different model of car, but if you disassembled all the parts and melted down and recast the metal, you could make virtually any car.
newappeal t1_iv0ovwe wrote
Reply to comment by Rabwull in What does it mean to have 2% Neanderthal DNA when all humans presumably share basically 100% of our DNA with them? by The_Imperial_Moose
I too will be using this analogy in the future. Thank you for adding this, u/iayork!
newappeal t1_iuwownz wrote
Reply to What does it mean to have 2% Neanderthal DNA when all humans presumably share basically 100% of our DNA with them? by The_Imperial_Moose
The "basically 100%" figure is a nucleotide-for-nucleotide comparison of the genomes. You line up a human and Neanderthal genome, count how many nucleotides have the same identity (A/T/C/G) and divide that by the total length of the genome. (Because the genomes are not exactly the same length, the metric would have to be more nuanced than that, but this imperfect definition is fine for illustrative purposes.) This measure is agnostic to the actual genetic history of each species or individual being compared, but it is broadly reflective of time since the last common ancestor.
The "2%" figure is based on heritage. Here, we're comparing loci (regions of the genome; genes are loci, but "locus" is a more general term than "gene") instead of individual nucleotides. We probe the human genome for long sequences that as a whole resemble a sequence at the same location in the Neanderthal genome, count all those up and then either divide that count by the number of loci examined, or divide the base-pair length of all the like loci by the base-pair length of each genome. Loci determined to be Neanderthal in origin (and determining whether a shared locus was transferred from H. sapiens to H. neanderthalensis or the other way around is its own problem) do not necessarily have 100% sequence identity with the ancestral Neanderthal strain - indeed, we would not expect them to - but they are more similar to Neanderthals than other regions of the genome. A higher similarity indicates more recent divergence from Neanderthals, through horizontal gene transfer (mating and recombination) rather than through common descent from humans' and Neanderthals' last common ancestor.
newappeal t1_iuw9yvs wrote
Reply to comment by Live-Goose7887 in How can I predict whether a salt will retain its paramagnetism in solution? by cmlynarski
>It just depends on whether the metal ion's spin state changes when it is aquated
How might this occur? Are there ionic species that actually form new molecular orbitals with water in solution?
newappeal t1_iuw9rw4 wrote
Reply to comment by DudoVene in How can I predict whether a salt will retain its paramagnetism in solution? by cmlynarski
You're thinking of electric dipoles, which are different from magnetic dipoles. Electricity and magnetism are, of course, just two manifestations of the same underlying phenomenon, but electric dipole moments and magnetic moments of molecules nonetheless differ in their cause and behavior. The former come from the distribution of electron density in a molecule, while the latter arise from unpaired electrons within atomic or molecular orbitals, independent of the shape of those orbitals or the atoms' electronegativities.
newappeal t1_iuw8fxa wrote
Reply to comment by HyroDaily in Do spiders always build their own webs, or do they sometimes live in a web vacated by another spider? by GoodAndBluts
> Different species have different combinations of web material, so surely there would be some incompatible combinations?
"Redigesting webs" would almost certainly involve catabolizing web proteins down to their component amino acids, absorbing those nutrients like those from any other source, and then re-synthesizing new web proteins. Therefore, interspecies differences in web composition wouldn't prevent a spider from digesting and remobilizing nutrients from another spider's web, as long as it could digest the web components in the first place.
newappeal t1_iu98ju7 wrote
Reply to comment by SmorgasConfigurator in Is an ionic bond really stronger than a covalent bond??? by jeez-gyoza
Yes, I certainly don't think your fundamental point is wrong. I don't think it fully clarifies the commenter's particular misunderstanding, but it is relevant and a good contribution.
To summarize where I think OP's confusion arises from: ionic "bond strength" figures are actually molar lattice energies and therefore reflect the strength of multiple bonds. Molar energies for covalent bonds reflect the strength of individual bonds. I've added a top-level comment explaining that point, which is how it was clarified to me in undergrad chemistry.
newappeal t1_iu94o1q wrote
I don't really think the comment about context-dependency answers your question. They're right that energy is measured relatively, but that doesn't change the fact that there is a standard definition for measuring bond energies, and it's the one you're most likely to encounter when looking up this topic. They've also missed an important point: what we usually cite as the "bond strength" of an ionic interaction is actually the strength of multiple ionic interactions, while that of a covalent bond is for one discrete bond.
Context is still important, though - namely, the context in which ionic "bonds" form compared to that in which covalent bonds do. Covalent bonding is a fundamentally quantum-mechanical phenomenon that occurs between pairs of electrons. This means that if two electrons participate in one bond, they cannot form another without that first one being broken. Bonds do interact, and electrons do move around, but we can still model a given molecule as having a discrete number of covalent bonds. Consequently, each atom has a characteristic maximum number of covalent bonds that it can (stably) form.
Ionic "bonds", however, are longer-range electrostatic interactions that do not involve significant overlap of electron energy orbitals. You don't need to invoke quantum mechanics to understand ionic bonding (at least at a basic level), since ionic interactions can be accurately modeled as classic charged-particle interactions of the sort you learn in an introductory physics course on electromagnetism. In contrast to the situation of covalent bonding, the number of ionic "bonds" that an ion can participate in is limited only by the available space around it. This is why we do not refer to ionic solids as "molecules" - in an ionic lattice, there is no clear way to define where one "molecule" ends and another begins. The formula "NaCl" is a molar ratio expressing the fact that a lattice of sodium and chloride will have a 1:1 ratio of those two ions.
How, then, should we describe the strength of a typical sodium-chloride interaction? The most common method is not to describe the potential energy between a single sodium ion and a single chloride ion in a vacuum, but instead to talk about the energy per mole of lattice. Note that this is different from how we define covalent bond energies, which are per mole of bonds. We define the molar lattice energy as the energy required to completely dissociate a mole of an ionic solid. For NaCl, this value is 786 kJ/mol (though it is usually expressed as a negative number, reflecting the energy liberated when the lattice forms from gaseous ions), which is higher in magnitude than all but the strongest covalent bonds. It's certainly higher than any single covalent bond, but it doesn't really reflect the strength of one bond - it's more like six bonds added together. Each individual Na-Cl electrostatic interaction is weaker than a typical covalent bond, but the summed strength of all the electrostatic interactions between a sodium ion and the six chloride ions immediately surrounding it in a sodium chloride lattice is far higher.
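As a back-of-the-envelope check on that claim (dividing the lattice energy evenly among the six nearest neighbors is my simplification and ignores longer-range terms in the lattice sum; the C-C value is a typical textbook bond energy):

```python
# Rough comparison of a molar lattice energy vs. a single covalent bond.
# Dividing evenly by the six nearest neighbors is a simplification that
# ignores longer-range electrostatic terms in the real lattice sum.
lattice_energy_NaCl = 786   # kJ per mole of NaCl lattice
nearest_neighbors = 6       # Cl- ions directly surrounding each Na+
C_C_bond = 348              # kJ/mol, a typical covalent C-C bond energy

per_contact = lattice_energy_NaCl / nearest_neighbors
print(per_contact)          # ~131 kJ/mol: one Na-Cl contact is weaker...
print(lattice_energy_NaCl > C_C_bond)  # ...but the sum is far stronger: True
```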
Note that I'm simplifying a lot here. Ionic and covalent interactions are both emergent phenomena of the same underlying quantum-mechanical processes. These are human-made definitions, and there may be ambiguous edge cases between them. But if we take concrete examples or ask specific questions like, "If ionic bonds are said to be stronger than covalent bonds, then why are ionic bonds (but not covalent ones) often easily broken in water?", we can find some relevant differences between these bonding behaviors that allow us to answer those questions.
newappeal t1_iu909ym wrote
Reply to comment by SmorgasConfigurator in Is an ionic bond really stronger than a covalent bond??? by jeez-gyoza
>But, thankfully for us, inside our bodies, near hemoglobin especially, that bond can be broken at very reasonable energies.
I'm not quite sure if this is what you're implying, but hemoglobin does not break the double bond in molecular oxygen. In mammals, oxygen (as an intact diatomic molecule) binds hemoglobin in the blood, then binds myoglobin in target tissues, and then is released into solution, where it is reduced to water in the mitochondria. And while I don't want to get bogged down in the definition of "reasonable" energy levels, I will point out that that redox reaction involves free electrons. Nonetheless, you are certainly correct that the bonds in molecular oxygen can be considered weak in many everyday contexts - that fact is synonymous with the fact that oxygen is highly reactive in many common chemical environments on Earth.
newappeal t1_isspgr0 wrote
Reply to comment by NakoL1 in How does vaccinating trees work? by ra3_14
In addition to acting through a different mechanism, it also has a slightly different function from mammalian immune memory. Systemic Acquired Resistance is first and foremost a method of raising the immune response throughout the plant in response to a local infection. This is predicated on the fact that plants grow in a modular, only semi-deterministic manner with different organs living relatively independently of each other (at least compared to how most animals' bodies work) and the fact that they don't have a circulatory system that rapidly transports substances between organs.
Such a system wouldn't be useful in mammals, because a pathogen in the bloodstream is going to travel just as fast as any endogenous signal, and (since mammals are motile) an infection at one location in the body is unlikely to be followed by a subsequent infection at another location. In contrast, sessile organisms beset by e.g. a fungal pathogen can expect to be infected at multiple locations over a fairly short period of time.
newappeal t1_isbi1rp wrote
Reply to comment by regular_modern_girl in Why do people take iodine pills for radiation exposure? by Furrypocketpussy
>To clarify, I guess what I meant by “distinguish” is whether or not different isotopes behave fundamentally differently as far as biochemistry is concerned
I would maintain that if the isotopes are incorporated at different rates (as they indeed are), then they behave differently by definition. I'm not sure what "fundamentally" means here - if you mean "substantially" in the sense of having biological relevance, then I would say no, they do not. But "biological relevance" itself has no objective definition. I could say that they form the same sort of chemical bonds, but that's not actually entirely true, just mostly true, of isotopes.
>or is it just a side effect of getting carbon from CO2 in the air versus carbonic acid/carbonate in the water?
I should specify that the CO2 ultimately comes from the atmosphere in both cases. The difference is that the carboxylation reaction in C3 plants uses carbon dioxide directly, whereas in C4 plants, CO2 first reacts with water to form bicarbonate before being conjugated to an organic molecule. The relevant factors for fractionation are therefore the diffusion rates of 13CO2 and 12CO2 in the gaseous state and the preferences of the relevant enzymes for each carbon isotope.
The underlying physical principles are the same here as in the case of neutron-free hydrogen vs. deuterium; the difference is just one of degree. Carbon-13 is about 8.4% heavier than carbon-12, while deuterium is twice as heavy as hydrogen. Moreover, hydrogen atoms (of all isotopes) are commonly transferred between compounds individually, whereas single carbon atoms are not transferred on their own in biological reactions. (In the specific case of carbon fixation, the carbon makes up a minority of the mass of the molecule that actually participates in the reaction.) The discrimination between hydrogen and deuterium in chemical and physical processes is therefore as high as it could possibly be for stable isotopes, and the differences in rates between them are therefore maximal compared to other elements. These discrepancies in rates, which differ in relative magnitude and direction for different processes, are enough to upset the balance of biological systems if they are supplied with too much deuterated water. However, I also can't say for certain that a biological system supplied with only 13C wouldn't suffer a similar fate. After all, we're comparing the partitioning of naturally occurring ratios of stable carbon isotopes to the extreme hypothetical of exposing an organism to pure heavy water.
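To put numbers on that difference of degree (the masses below are rounded standard isotope masses in unified atomic mass units):

```python
# Relative mass differences between stable isotopes, using rounded
# standard isotope masses (unified atomic mass units).
m_12C, m_13C = 12.000, 13.003
m_1H, m_2H = 1.008, 2.014

print((m_13C - m_12C) / m_12C)  # ~0.084: carbon-13 is ~8.4% heavier
print((m_2H - m_1H) / m_1H)     # ~1.0: deuterium is ~twice as heavy
```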
newappeal t1_isbbnbh wrote
Reply to comment by regular_modern_girl in Why do people take iodine pills for radiation exposure? by Furrypocketpussy
>Is that really biological systems distinguishing, though, or is that just human researchers looking at isotope ratios and using them to determine where a given element in a biochemical context came from?
Those are the same thing. The biochemical discrimination is the mechanism that causes the difference in isotope ratios.
However, if you're asking whether the discrimination is teleological in nature - i.e. that it has a biological "purpose" that has been acted upon by selection pressure - then the answer is no, it is not "intentional", but rather a correlate of the differences in photosynthetic strategies that were directly selected for.
Edit: To specifically address the last bit: they're both getting the same atmospheric carbon, just different isotope fractionations of it.
newappeal t1_isb99lc wrote
Reply to comment by regular_modern_girl in Why do people take iodine pills for radiation exposure? by Furrypocketpussy
>So essentially, biology really doesn’t distinguish between isotopes, and it usually doesn’t matter unless it’s a heavier isotope of hydrogen, or a given isotope is giving off ionizing radiation.
It's true that biology doesn't distinguish between isotopes on a biochemically relevant level (except, as you mentioned, in extreme examples like an organism exposed solely to heavy water), but isotopic discrimination is strong enough that it can be used to track large-scale, long-term changes in biogeochemical cycling. One prominent example is the use of isotopic signatures to differentiate between CO2 of organic vs. non-organic origin, which is a crucial piece of evidence showing that modern CO2 concentrations are rising due to the burning of fossil fuels and not volcanic activity. Plants which use C3 photosynthesis also have a different 13C:12C ratio than those which use C4 photosynthesis, since the former's carboxylation reaction takes gaseous carbon dioxide as an input and the latter's takes aqueous bicarbonate.
newappeal t1_irn3ki4 wrote
Reply to comment by dan_dares in Could CRISPR transform a mouse stem cell to a human stem cell? by scrooch
>If we presume that you could wave a wand and change the DNA using CRISPR all at once
If you wanted to swap out one cell's genome for another, you could just physically remove and replace the nucleus - a technique that has been used to clone mammals. It's usually done with an oocyte and somatic nucleus from the same species, but it is apparently possible to create a viable embryo from a fusion of cellular material from two closely related species, as was done with domesticated cows and Bos gaurus. (Linked paper is referenced here on Wikipedia)
While I obviously can't say for sure without empirical evidence, I'm quite confident that the mouse oocyte proteome is different enough from a human one that gene transcription would not be properly induced to create a human stem cell.
Finally, chromosomal structure and epigenetic modifications haven't been mentioned here. CRISPR/Cas9 gene editing doesn't provide the tools to restructure chromosomes, either in terms of the grouping of DNA into chromosomes or the packing of DNA into chromatin, nor does any other technology I'm aware of. I'm sure some researchers have been able to induce particular chromatin modifications, but much of how epigenetic regulation works remains unknown.
newappeal t1_irf95t8 wrote
Reply to comment by -Metacelsus- in How common are genetic mutations in conception? by soygang
>for example about 30% of autism cases are caused by de novo mutations
The linked paper appears to say that 30% of de novo mutations in people with ASD contribute to autism, not that 30% of instances of ASD are caused by de novo mutations. In my opinion, the claim they make in the abstract is not the same as the one they make in the paper itself (see page 219, the fourth page of the article, for the 30% figure in context).
newappeal t1_irb4i7b wrote
Reply to comment by _What_How_Why in How do scientists determine what genes are responsible for certain traits/attributes? by [deleted]
Before CRISPR/Cas9, it was actually quite hard to disable a specific gene at will. There are some proteins that can bind to and cut specific DNA sequences, causing function-disrupting mutations, but these are not very accurate and are only available for a relatively small subset of sequences.
It's much easier to induce random mutations and then find a gene that got knocked out, resulting in a noticeable phenotypic change in the organism. Random mutations can be introduced with chemical treatments, radiation, particle bombardment (e.g. gold nanoparticles, which can also introduce foreign DNA), or biological systems (e.g. viral vectors in animals, Agrobacterium tumefaciens in plants). Nowadays, many model organisms (e.g. Drosophila, mice, Arabidopsis) have mutant libraries available, which contain specimens (seeds for plants; frozen embryos for animals, or at least for mice) that each have a knockout in one gene, and you can order these for your research. A "saturated" library has at least one knockout line available for every single putative gene - putative because some genes are predicted from sequences but have not yet been confirmed to actually be functional genes.
newappeal t1_j41annc wrote
Reply to How are there more genetic differences between two of us than between us and Neanderthals? by bookposting5
You're right to be confused, because that passage is extremely ambiguous. It looks like the author omitted whatever Pääbo said before “Our job is to find out which of those 30,000 are most important, because they tell us what makes us uniquely human”, where he presumably clarified what "those 30,000 [things]" actually are, and I haven't found a reference to the 30,000 figure in Pääbo's major publications.
Therefore, I'll have to make my best guess about what the 30,000 and 3 million figures signify. The first refers to variation between groups (all humans vs. all Neanderthals), while the second refers to variation within a group (between individual humans). The number of differences between two humans has a straightforward interpretation: it's simply the number of positions at which the two genomes differ. This intra-group variation is important to keep in mind, because it complicates the matter of calculating inter-group variation: if all humans are somewhat different from each other, and all Neanderthals are too, how can we compare the two groups?
Generally, we calculate the genetic variation between populations (which are different species in this case) by comparing the DNA sequences that are conserved (i.e. the same) within each population. For example, we'd find all the nucleotides that are shared among most or all humans and those that are shared among most or all Neanderthals, and then we'd compare those sets to each other and see how many differences there are between them. Those differences probably contain important mutations that, so to speak, make humans humans and Neanderthals Neanderthals. To use more technical vocabulary, we would say that these differences are alleles which are fixed (i.e. shared by all members) in each population.
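Here's a minimal sketch of that comparison in Python. The sequences are toy data, and "fixed" is simplified to "identical in every sampled individual" - real analyses use allele-frequency thresholds and account for sampling error:

```python
# Toy sketch: find positions fixed (identical) within each population
# but different between the populations. Sequences are invented.
humans = ["ATCGGA", "ATCGGA", "ATCGGA"]
neanderthals = ["ATTGGA", "ATTGGA", "ATTGGA"]

def fixed_allele(population, pos):
    """Return the allele at pos if shared by all members, else None."""
    alleles = {seq[pos] for seq in population}
    return alleles.pop() if len(alleles) == 1 else None

fixed_differences = [
    pos for pos in range(len(humans[0]))
    if (h := fixed_allele(humans, pos)) is not None
    and (n := fixed_allele(neanderthals, pos)) is not None
    and h != n
]
print(fixed_differences)  # [2]: the only fixed inter-group difference
```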