# Wikipedia:Reference desk/Science

Welcome to the science reference desk.

Main page: Help searching Wikipedia

How can I get my question answered?

• Provide a short header that gives the general topic of the question.
• Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
• Post your question to only one desk.
• Don't post personal contact information – it will be removed. All answers will be provided here.
• Specific questions that are likely to produce reliable sources will tend to get clearer answers.
• Note:
• We don't answer (and may remove) questions that require medical diagnosis or legal advice.
• We don't answer requests for opinions, predictions or debate.
• We are not a substitute for actually doing any original research required, or as a free source of ideas.

How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines


# October 13

## Instead of alpha decay or beta decay, why don't decays by a proton and a neutrino, or by a neutron and a neutrino, also occur?

After all, the spins add up to an integer in the two imaginary cases I've suggested, just as in the case of alpha and beta decay. Is there a reason known why they don't occur? Maybe it is that there aren't many protons or neutrons running around freely inside a heavy nucleus compared to the number of alpha particles? 144.35.45.38 (talk) 05:15, 13 October 2017 (UTC)

You mean like neutron emission or proton emission? Such things do happen, they are just rare. Dragons flight (talk) 06:39, 13 October 2017 (UTC)
Indeed, if one were to read the article Radioactive decay and specifically the section Radioactive decay#Types of decay both proton emission and neutron emission are listed, as well as included in a nice little diagram along the right sidebar. --Jayron32 10:41, 13 October 2017 (UTC)
It gets even better. Neutrino astronomy is there as well and the 2002 Nobel Prize in Physics was given for the advances in that. --Kharon (talk) 23:47, 14 October 2017 (UTC)
Why do you think that a neutrino would be created in such an event? According to the current understanding of particle physics, a single neutrino (or antineutrino) can be created due to an interaction involving a W boson (a W boson can decay to a neutrino and a charged lepton, or a charged lepton can "decay" to a neutrino and a W boson), or a pair of neutrinos can be created due to an interaction involving a Z boson. Note that particles whose mass-energy exceeds the available energy can occur as intermediates (see virtual particle). In ordinary beta decay, a down quark inside a neutron becomes an up quark and a W- boson, and the W- boson subsequently decays into an electron and an antineutrino. In proton emission or neutron emission the whole proton or neutron is ejected from the nucleus; no neutrino is created.
By the way, for every electromagnetic decay of an excited state with an energy of at least twice the mass of the lightest neutrino, there is a competing process: the emission of a virtual Z boson that creates a neutrino–antineutrino pair. These neutrino pair creations are just rare compared to photon emissions (gamma radiation in nuclear physics, but also emission of light from atomic or molecular state transitions), and neutrinos are hard to measure anyway, so this effect probably won't be experimentally accessible any time soon.
Icek~enwiki (talk) 20:21, 15 October 2017 (UTC)

## What's the highest sulfur fossil fuel ever made?

Processed fuel and fractional distillation products count (e.g. gasoline, diesel). Sagittarian Milky Way (talk) 07:49, 13 October 2017 (UTC)

Depends on what you mean by "made"; probably some bituminous coal mined somewhere. Up to 4% [1] 2606:A000:4C0C:E200:9480:46FD:8725:3114 (talk) 08:19, 13 October 2017 (UTC)
Otherwise, synthetic fuel made from lignite using the coal liquefaction process by IG Farben during WW2 is a likely candidate; however, lignite has a relatively low sulfur content (for coal) -- up to about 1%.[2] 2606:A000:4C0C:E200:9480:46FD:8725:3114 (talk) 09:51, 13 October 2017 (UTC)
"Typical Sulfur Content in Coal: Anthracite Coal : 0.6 - 0.77 weight %; Bituminous Coal : 0.7 - 4.0 weight %; Lignite Coal : 0.4 weight %". Classification of Coal. Alansplodge (talk) 09:59, 13 October 2017 (UTC)
Sulphur in Assam coal says: "Sulphur in these coals generally occurs in the range of 2.7-7.8%" (p. 87).
Prior to 1993, diesel fuel had relatively high sulfur content, but I can't find anything above 4%; however, there is a "global sulfur cap of 3.50 weight percent".[3] 107.15.152.93 (talk) 10:26, 13 October 2017 (UTC)
Bunker C usually has a very high sulfur content. 2601:646:8E01:7E0B:756C:F81D:F1A7:3FB4 (talk) 10:36, 13 October 2017 (UTC)
Close, but no cigar -- Bunker C fuel oil has 2.4% sulfur.[4] 2606:A000:4C0C:E200:9480:46FD:8725:3114 (talk) 11:03, 13 October 2017 (UTC)
• It should be noted that the presence of sulfur is not desirable; it isn't added to fuel or increased intentionally in any way. Fuel producers would rather it weren't there at all, as it doesn't add meaningfully to the fuel's value; indeed, the sulfur oxides produced through combustion can be highly corrosive, so there's no economic reason why a fuel producer would add sulfur. It's merely an impurity already present. --Jayron32 10:38, 13 October 2017 (UTC)
• Density and Sulfur content of selected crude oils --Guy Macon (talk) 16:22, 13 October 2017 (UTC)
(None of which exceeds 7.8%; and, crude oil isn't really a fuel). 2606:A000:4C0C:E200:9480:46FD:8725:3114 (talk) 16:49, 13 October 2017 (UTC)
So oil tankers don't run on their own supply? Sagittarian Milky Way (talk) 17:24, 13 October 2017 (UTC)
Typically, no. The large diesel engines of oil tankers are typically fueled with Bunker fuel. Crude oil is a mixture of long-chain and short-chain hydrocarbons. The long-chain hydrocarbons have a high boiling point, high flash point, and high viscosity, and are difficult to ignite. The short-chain hydrocarbons have a low boiling point, low flash point, and low viscosity, and tend to ignite too soon.
The large gas turbines used in many power plants will run off of crude oil just fine. This is usually uneconomical, because you can refine out the gasoline/petrol and sell it for a high price. In certain circumstances, however, such as right at the well head, it may be more convenient to burn crude than to import some other kind of fuel. --Guy Macon (talk) 17:48, 13 October 2017 (UTC)
And finally, petroleum coke, which has a sulphur content of between 4% and 7% as well as all kinds of other nasty stuff - see Comparative Properties of Bituminous Coal and Petroleum Coke as Fuels in Cement Kilns. For more on petroleum coke or "petcoke", see also China Is Quietly Burning A Fuel Dirtier Than Coal -- And Buying It From The US. Alansplodge (talk) 11:08, 14 October 2017 (UTC)
Personally, I prefer Petroleum Pepsi. 2606:A000:4C0C:E200:7595:47BF:7C36:8BA6 (talk) 17:06, 14 October 2017 (UTC)
Better than petroleum heroin. Sagittarian Milky Way (talk) 19:39, 14 October 2017 (UTC)
Wouldn't that look like black tar ? StuRat (talk) 23:52, 14 October 2017 (UTC)

# October 14

## Schroedinger's mathematician

I was just glancing at [5] - apparently you can entangle millions of atoms with a single photon. I'm not entirely sure, but I think they are boasting that the atoms are entangled in independent groups in this case, i.e. it is not a Schroedinger's cat state, but one where many pairs can be separately read without disturbing the others (I think!). But it intrigues me that they thought a Schroedinger's cat state was a possibility.

1) what is the closest we've actually come to Schroedinger's cat?

2) is it conceivable to genuinely make a macroscopic Schroedinger-cat state in space, with a well shielded capsule that would seem far from interacting with Earth? (Even if some bulk parameters remained observable, there might be multiple internal ways to explain them, for something less than a true superposition of all states but still useful?)

3) Is this a valid analogy for quantum computing? You send a mathematician up in one of those space capsules with a generous library, tell him to use a genuinely random number generator to decide what general methods to use to go about proving an unknown problem he has some interest in and what papers to read about it. The mission might be two months but he is instructed to break radio silence and phone home early if he has come up with a good proof. The theory is that the mathematician should be exceptionally lucky in his choices and phone home with unexpectedly high frequency, as viewed from Earth at least. OTOH I wonder if they could be an actual hazard for a mission to Mars if the probe called home early with bad news. But can the odds of calling home be changed by quantum effects at all? (I think they must be in QM computing?) Wnt (talk) 10:11, 14 October 2017 (UTC)

• 1,2) I'm no physics whiz but seems to me that Schroedinger's cat is more of a philosophical concept (Copenhagen interpretation) than a physical one, and that it's out of fashion these days. Certainly the classic experiment with a cat and a particle detector is buildable, though keeping it in a superposition, free of quantum decoherence from entanglement with the whole world, is beyond imagining. OTOH there is a developing theory[6] that entanglement explains spacetime, gravity, etc.

3) I (maybe wrongly) think of QC like this: flip 1000 coins so each lands in a classical state <Heads|Tails> with 50-50 probability. The number of heads for the whole ensemble will follow a bell-shaped probability distribution (binomial distribution, to be precise). Quantum "bits" (qubits), by contrast, have complex-valued probability amplitudes rather than real-valued probabilities like 0.5, so combining them gives wave-like behaviour showing constructive and destructive interference. A quantum computation is basically an experiment concocted to transform a state representing a problem into a state where the potential solutions interfere in such a way that the incorrect ones cancel each other out, leaving a measurable peak at the correct solution. Shor's algorithm for integer factoring is the most famous example.

Scott Aaronson's book "Quantum Computing Since Democritus" should be a good semi-popular-level introduction (disclaimer: I haven't seen the actual book, just online excerpts, but I liked them). You might also like his blog, "Shtetl-Optimized", which discusses these subjects a lot. 173.228.123.121 (talk) 20:44, 14 October 2017 (UTC)
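The coin-versus-qubit contrast described above can be sketched numerically. This is a rough illustration with made-up amplitudes, not part of the original thread: head-counts of fair coins add like probabilities, while two equal-and-opposite complex amplitudes for the same outcome cancel.

```python
import math
import random
from collections import Counter

random.seed(0)

# Classical: toss 1000 fair coins per trial; the head-count across many
# trials follows a binomial distribution peaking near 500.
trials = [sum(random.random() < 0.5 for _ in range(1000)) for _ in range(2000)]
mode = max(Counter(trials).items(), key=lambda kv: kv[1])[0]
print(mode)  # somewhere close to 500

# Quantum flavour: two paths to the same outcome carry amplitudes, which
# can cancel (destructive interference) instead of adding like probabilities.
path1 = 1 / math.sqrt(2)
path2 = -1 / math.sqrt(2)
probability = abs(path1 + path2) ** 2
print(probability)  # 0.0 -- adding the classical probabilities would give 1.0
```

The cancellation in the last two lines is the effect a quantum algorithm is engineered to aim at the wrong answers.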

For what it's worth, I think that our current understanding of the collapse of superposition necessitating an observer is a load of codswallop, or at the very least fundamentally flawed. Think about it: if one of the central tenets of quantum mechanics, wave-particle duality, is correct, then every particle is acting as an "observer" of every other particle, and no part of the universe can be completely isolated from any other part (perhaps singularities are an exception). This means that superposition should be impossible in the first place, which is contrary to empirical evidence. The only reasonable conclusion that I can draw is that collapse is not an absolute result, but is instead determined by statistical factors. Perhaps the probability of collapse is determined by the butterfly effect: the greater the potential influence of a particular state on the environment, the more likely the collapse. However, I can't figure out how potential influence would be determined, unless the system somehow exchanges information with the future of every possible outcome, much like how a driver watches the road at night, only able to see the road directly in front, lit up by the headlights, deciding where to turn to keep on the road. Plasmic Physics (talk) 23:41, 14 October 2017 (UTC)
Ah, now you're getting to the advanced aspect of the model. The putative outcome is that we have a mathematician who comes up with a result thanks to many years of work ... done in a much shorter period of time. Did he experience this time? As who? The many-worlds interpretation is an obvious way to go, yet the premise is that somehow we get to pick the "right" world when the signal is sent, which seems absurd, if it weren't already being done by quantum computers.
The most accessible approach to conscious quantum parallel computing presently would seem to be precognition, a phenomenon that is at best difficult to control or study systematically. I would throw out an anecdote that "technical" precognition, like selecting which of a thousand files will be found to contain a keyword, or doing a web search using unrelated terms, seems to cause significant pain related to blood flow at the past end somewhere vaguely near Broca's area (this not being correlated to the level of detail or the time differential) ... but how to prove such a thing, and what collateral damage is done by the witch in the meanwhile? I suspect the essence of free will and qualia involve the choice of which solution to a causal loop "really" applies; it is thus an external interface for the universe. So ... if the mathematician is conscious, does this selection of a reality mean that he breaks the quantum parallel computing scheme? Well I just don't know. That's the fun stuff past the edge of the world we know. Wnt (talk) 03:45, 15 October 2017 (UTC)
Wnt, re the mathematician, I believe you're thinking of quantum postselection. Aaronson's informal description is: 1) Write down your question (anything in NP). 2) Generate some random bits to guess a possible answer. 3) Check whether the answer is right. 4) If the answer is wrong, kill yourself! There will be a branch of the many-worlds interpretation in which you survived, which means in that universe you guessed the answer on the first try, lucky you! He joked about opening a crisis hotline for depressed complexity theorists, where he would explain to them that if suicide was really the answer to life's problems, that would give a way to solve NP-hard problems in polynomial time, which is widely thought to be impossible, so they shouldn't give up.

Plasmic Physics, the conscious observer theory is the Copenhagen interpretation which I think is now mostly thought of as quaint. See: interpretations of quantum mechanics. 173.228.123.121 (talk) 04:48, 15 October 2017 (UTC)

Precognition. Yes; however, in this instance it does not require sentience/consciousness. A superpositioned particle simply follows the timeline which results in the largest change in entropy of the system, as long as the change in entropy associated with that specific timeline is different enough. If the associated change in entropy is not sufficiently unique, then the particle has a higher chance of remaining in a superposition. However, the particle doesn't "know" which timeline gives the largest change in entropy without actually having followed that timeline, which means that it has, is and will have travelled them all. Please forgive the confusing wording; I don't think that the correct parlance has been invented yet. Plasmic Physics (talk) 05:45, 15 October 2017 (UTC)
Regarding 3), it won't quite work that way (the probability of a phone call won't increase by merely isolating the mathematician). But you should have a look at Grover's algorithm. The random number generator would initialize the system to the state ${\displaystyle |s\rangle }$ and the work of the mathematician corresponds to the operator ${\displaystyle U_{\omega }}$. Then a quantum circuit implementing ${\displaystyle U_{s}}$ should be applied to the mathematician's output (all while keeping the system, including the mathematician, isolated). You'd need to repeat that ${\displaystyle O({\sqrt {N}})}$ times as opposed to ${\displaystyle O(N)}$ times for the classical case.
Icek~enwiki (talk) 08:52, 15 October 2017 (UTC)
This is the real deal, but I have to admit I don't necessarily understand this operation or how to apply it to the mathematician. I would suppose N is all the ways that he can be given random cues to start the work. But I am not clear what ${\displaystyle U_{s}}$ is when you apply it to his output. Is it conceivable to apply it to an entire written proof he might generate, or is this just some kind of flag to indicate which initial conditions made him say he had the answer? And if you process the output this way, then reestablish contact with the mathematician, is the mathematician you contact the one who came up with the proof? Wnt (talk) 03:03, 16 October 2017 (UTC)

N is the number of possible random inputs as you say.
Let's write the initial state similar to the article:
${\displaystyle |s_{M}\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}|x\rangle |M\rangle .}$
Here ${\displaystyle x}$ is the random cue and ${\displaystyle M}$ is the mathematician (together with his library and everything he needs) in his initial state. The initial state is created by sending entangled photons (different polarization states stand for 0 and 1, and the bits make up the number ${\displaystyle x}$) to optical detectors attached to the isolated capsule.
Now the mathematician reads the random cue from a display inside his isolated capsule. Then he starts working on the problem, using the random cue as a guide. To make it simple, he has a clock that tells him when it's time to stop. If he has solved the problem within the time limit, he pushes a button which causes a mirror that causes a phase shift of 180 degrees to be placed in an optical path; if he hasn't solved the problem, he pushes a different button which causes a mirror that doesn't cause a phase shift to be placed in the same spot. A certain time after the mathematician got the stop signal from the clock (enough time to let the mathematician push the right button), the devices attached to the isolated capsule create photons with the same polarizations that have been measured just before the number ${\displaystyle x}$ was displayed for the mathematician. One of the photons is bounced off the mirror that is either phase-shifting or not. Then the state looks (slightly simplified; now there are actual photons somewhere representing ${\displaystyle x}$ while before the information was in the detectors or already on the display of the isolated capsule) like this:
${\displaystyle U_{\omega }|s_{M}\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}\phi _{x}|x\rangle |M_{x}\rangle .}$
Here ${\displaystyle \phi _{x}=1}$ if the mathematician hasn't been able to solve the problem with random cue ${\displaystyle x}$ and ${\displaystyle \phi _{x}=-1}$ if the mathematician has been able to solve it. ${\displaystyle M_{x}}$ symbolizes the mathematician and his study after the experience of attempting to solve the problem with random cue ${\displaystyle x}$.
Now, applying the operator ${\displaystyle U_{s}}$ basically leaves the mathematician's state as it is and works only on the photons. We send the photons through the quantum circuit that implements ${\displaystyle U_{s}}$ and then we have as the new state:
${\displaystyle U_{s}U_{\omega }|s_{M}\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}\phi _{x}(2|s\rangle {\frac {1}{\sqrt {N}}}\sum _{x'=0}^{N-1}\langle x'|x\rangle -|x\rangle )|M_{x}\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}\phi _{x}({\frac {2}{N}}\sum _{y=0}^{N-1}|y\rangle -|x\rangle )|M_{x}\rangle }$
After this quantum circuit, the photons are sent to the optical detectors again, the mathematician gets a random cue again, performs his work and we get the state
${\displaystyle U_{\omega }U_{s}U_{\omega }|s_{M}\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}\phi _{x}({\frac {2}{N}}\sum _{y=0}^{N-1}\phi _{y}|y\rangle |M_{xy}\rangle -\phi _{x}|x\rangle |M_{xx}\rangle )}$
Here ${\displaystyle M_{xy}}$ symbolizes that the mathematician has attempted to do his work with cue ${\displaystyle x}$ and cue ${\displaystyle y}$, in that order.
After sending the photons through the quantum circuit again and letting the mathematician do his work again, we have the following state:
${\displaystyle U_{\omega }U_{s}U_{\omega }U_{s}U_{\omega }|s_{M}\rangle ={\frac {1}{\sqrt {N}}}\sum _{x=0}^{N-1}\phi _{x}({\frac {4}{N^{2}}}\sum _{y=0}^{N-1}\phi _{y}\sum _{z=0}^{N-1}\phi _{z}|z\rangle |M_{xyz}\rangle -{\frac {2}{N}}\sum _{y=0}^{N-1}\phi _{y}^{2}|y\rangle |M_{xyy}\rangle -{\frac {2}{N}}\sum _{y=0}^{N-1}\phi _{x}\phi _{y}|y\rangle |M_{xxy}\rangle +\phi _{x}^{2}|x\rangle |M_{xxx}\rangle )}$
Going on with this iteration, we reach a point when the state of the photons is very close to ${\displaystyle |a\rangle }$, assuming there is only a single ${\displaystyle a}$ for which ${\displaystyle \phi _{a}=-1}$. The state of the mathematician, however, is a superposition of having tried to solve the problem with various random cues.
So in the end, we get the answer ${\displaystyle a}$ for the right cue, and in the mathematician's history there will be in general various cues, but they include ${\displaystyle a}$. In fact, while doing his work, if the mathematician receives a cue that he received before and for which he solved the problem, he can use his time in other ways and just press the right button at the end.
Icek~enwiki (talk) 20:37, 16 October 2017 (UTC)
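The iteration described above can be mimicked classically for the photon part alone. This toy sketch (not part of the original explanation; N and the "lucky" cue a are assumed values) tracks only the N real amplitudes: the phase flip plays the role of ${\displaystyle U_{\omega }}$ and the reflection about the mean plays the role of ${\displaystyle U_{s}}$.

```python
import math

N = 64          # number of possible random cues
a = 42          # assumed: the single cue for which the problem gets solved
amps = [1 / math.sqrt(N)] * N   # uniform initial state |s>

# Standard Grover schedule: about (pi/4) * sqrt(N) iterations.
iterations = round(math.pi / 4 * math.sqrt(N))
for _ in range(iterations):
    # U_w: flip the phase of the solution's amplitude.
    amps[a] = -amps[a]
    # U_s: reflect every amplitude about the mean amplitude.
    mean = sum(amps) / N
    amps = [2 * mean - x for x in amps]

# Probability of measuring the lucky cue a; it grows close to 1.
print(amps[a] ** 2)
```

After the ~6 iterations this gives for N = 64, the probability of reading out the right cue is above 99%, versus 1/64 initially, which is the ${\displaystyle O({\sqrt {N}})}$ versus ${\displaystyle O(N)}$ speed-up Icek mentions.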
This is a great explanation, very clear. One thing that kind of amazes me about it though is that the mathematician sends out a known, non-quantum set of bits for a, which leaves his enclosure, then they get mirrored and come right back to him as qubits. And the qubits after a few iterations are (usually) going to provide the right conventional bits by incredible luck - even from the mathematician's point of view, I think! Now... is there anything about this process that requires that the mathematician is the one in the small enclosure and the ${\displaystyle U_{s}}$ part is in the rest of the world? Could you have a tiny perfectly isolated quantum device that implements U_s and an ordinary mathematician in an ordinary university sends his findings into it and gets oracular answers back based on the superposition of all possible worlds out here in the "normal" universe? Wnt (talk) 20:12, 17 October 2017 (UTC)

## colds and endogenous opiates

One of the painful lessons I learned about colds when I was young is that they seem -- for me -- to involve production of a fairly large amount of some kind of endogenous opiate during the "sniffle" phase. I concluded this because back then I suffered some very sore throats after colds on account of burning myself with hot soup without realizing it. More recently, though exerting extra care with hot food, I managed to do something to a back muscle that left it sore for a couple of days when that phase of a cold ended. And, of course, there is the observation that for a few days it is possible to sniffle huge amounts of crap and feel/hear wheezing in the lungs with no instinctive reaction to cough, which suggests a powerful cough suppressant activity.

But for something that seems so important to the course of the cold, I didn't quickly find much discussion of it (mostly gene therapy papers with endogenous opiates in adenoviruses!), though I understand endorphins are part of a generalized stress response, so I just wanted to check if there is a medical term for this sort of suppressive effect I'm not thinking of. I mean, do other people even have this response? Wnt (talk) 22:24, 14 October 2017 (UTC)

When you feel pain in one place, it does seem to make less severe pains elsewhere even less noticeable, but I'm not sure of the mechanism. It might be entirely within the brain, which has a limited ability to pay attention to different pains. I suppose that makes sense, as there's little survival advantage to being worried about your aching back whilst being mauled by wild animals. Best to only concern yourself with the most immediate threat, which presumably is causing the most severe pain. (Note that there's no reason to assume that this phenomenon is exclusive to humans.) StuRat (talk) 23:47, 14 October 2017 (UTC)
Stress-induced analgesia sounds like what you're describing, a form of hypoalgesia.[1] StuRat is suggesting counterstimulation (a page that could do with some work!). Klbrain (talk) 23:58, 14 October 2017 (UTC)
I think I've experienced this - I remember being momentarily freaked out by a prank back as an undergrad and not noticing I'd misplaced a bit of skin on one malleolus the size of a dime until I happened to spot the blood. But the sort of hypoalgesia during a "fight or flight" seems hard to relate mentally to the first days of a common cold type infection. Hypoalgesia seemed like a great keyword ... but didn't bring anything up with rhinovirus or coronavirus or "common cold". I should point out though that none of the generic mechanisms seem to match what happens with a cold as I experience it -- because the cough suppression and hypoalgesia stop early on during the cold, even though there are still plenty of distracting, unpleasant stimuli, and not obviously less stress. Wnt (talk) 03:01, 15 October 2017 (UTC)

References

1. ^ Butler, Ryan K.; Finn, David P. (1 July 2009). "Stress-induced analgesia". Progress in Neurobiology. 88 (3): 184–202. doi:10.1016/j.pneurobio.2009.04.003. Retrieved 14 October 2017.

# October 15

## Confirmation of Special relativity

General Relativity had the famous confirmation with the bending of light during a solar eclipse. Was there any similar moment for Special Relativity? Bubba73 You talkin' to me? 00:47, 15 October 2017 (UTC)

Is the Wikipedia article titled Tests of special relativity insufficient for your purposes? The most important and earliest such experiment is probably the Michelson–Morley experiment, which while it predates special relativity by some decades, provided the results to verify it. --Jayron32 00:55, 15 October 2017 (UTC)
Well, it looks like there was already some experimental evidence in favor of it when it came out, but it wasn't until the 1930s that there was a confirming experiment. Bubba73 You talkin' to me? 03:51, 16 October 2017 (UTC)
Do you mean the Ives–Stilwell experiment? I thought all those outcomes could have been predicted from the Lorentz transformation, which however wasn't really an explanatory theory. Maybe I'm wrong about that though. The SR chapter[7] of the Feynman Lectures on Physics has a brief historical treatment if that's of any interest. 173.228.123.121 (talk) 06:34, 16 October 2017 (UTC)
Just because it happened before the theory was formalized does NOT mean that the results could not be used to confirm it. --Jayron32 10:55, 16 October 2017 (UTC)
History of special relativity may also be of interest. --47.138.160.139 (talk) 01:45, 17 October 2017 (UTC)

# October 16

## Holographic interactions

In several sci-fi works, like Blade Runner 2049 or Halo, holograms of humans can interact with people both visually and aurally, despite not having any organs or sensors to receive and analyze visual and audio cues. How is that? Unlike modern virtual assistants, it seems holograms per se can't process any data, due to their nature. 212.180.235.46 (talk) 07:58, 16 October 2017 (UTC)

You're talking about works of fiction. If the author doesn't give an explanation, you're free to make up your own. --69.159.60.147 (talk) 10:48, 16 October 2017 (UTC)
The image of a person on Skype has no actual eyes or ears and yet one can talk to the image as if it were the actual person. Dmcq (talk) 12:01, 16 October 2017 (UTC)
In the classic science fiction series Red Dwarf, the holograms could not interact with real matter: hologram Arnold Rimmer could not hold objects, could walk through walls, and (if memory serves), due to his nature, he was a complete and utter smeg-head. Of course, it would be wholly unfair to compare Red Dwarf to other science fiction, in terms of scientific accuracy, literary merit, and cultural impact; it’d be a no-contest win.
Nimur (talk) 15:24, 16 October 2017 (UTC)
I'm pretty sure that Arnold Judas Rimmer BSC was a total smeg-head before he ever became a hologram. Iapetus (talk) 16:22, 16 October 2017 (UTC)
There are already virtual CAD systems that even allow limited interaction with a hologram-like augmented and/or virtual visual representation of objects. Developers are also trying to implement haptic/tactile feedback into these virtual systems! Everything is still in an early stage though, and you always need interactive bridge devices like VR goggles, pointers, gloves and the like to use these systems. It's highly doubtful that there will ever be a "holodesk" which will not need such "adapters", and on top of that these systems will certainly have a lot more limitations than their sci-fi counterparts. So a lot of sci-fi "products" are actually branded wrong, since they contain so much clearly made-up fantasy in their elements and mechanics. Warp drives, wormholes, artificial gravity, teleportation... never gonna happen! --Kharon (talk) 16:38, 16 October 2017 (UTC)
Some of those seem doable:
1) Artificial gravity just requires spinning the ship, but it needs to be a large ship to avoid nausea induced by a noticeably variable (apparent) gravity field. Also spinning the ship introduces lots of new problems with docking, solar panels and communications arrays and telescopes tracking their targets, external maintenance, etc. Doable, but tricky. (One solution is a rotating part and a stationary part, but linking those parts together isn't easy.)
2) Teleportation seems possible, although again not in the way they show. It would require scanning an object down to the molecular level, transmitting that info, creating a copy in that exact configuration at the remote site, then destroying the original, if you want to avoid having a clone. See Think Like a Dinosaur for a realistic treatment of the problem. Certainly there could never be transporters with equipment at one side only.
3) The time it will take to get humans to other stars does seem to be a profoundly unsolvable problem. Even if you have a ship that can go close to light speed, it would still take around a year to get to that speed at 1 g and another year to slow down at the other end. So, add almost 2 years to the travel time for that, and then we have the nearest stars being over 4 years away at the speed of light (although little time would pass for them during this period). So now we're up to maybe 6 Earth years to get there, and another 6 to come back. Would people really want to sign up to not see their loved ones for 12 years, minimum, then being a decade younger than those loved ones when, and if, they came back ? I'd have to say robotic ships seems a lot more practical. They can accelerate faster, and not worry about coming back. Even in our own solar system we've sent robotic ships all over the place but never sent people further than the Moon. Maybe if we ever found a planet we could easily colonize then sending people might make sense, but I'm skeptical of that, too, considering how none of the planets, dwarf planets, and moons in our solar system seem particularly close to being able to support a self-sustaining colony. StuRat (talk) 20:06, 16 October 2017 (UTC)
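The numbers behind points 1) and 3) can be checked with a quick back-of-envelope calculation (a sketch using the standard centripetal and constant-proper-acceleration formulas; the 100 m ship radius is an assumed figure, not from the discussion):

```python
import math

# 1) Spin gravity: centripetal acceleration a = omega^2 * r.
g = 9.81            # m/s^2
r = 100.0           # assumed ship radius in metres
omega = math.sqrt(g / r)          # angular speed (rad/s) giving 1 g at the rim
period = 2 * math.pi / omega      # seconds per revolution
print(f"1 g at r = {r:.0f} m: one revolution every {period:.1f} s "
      f"({60 / period:.1f} rpm)")

# 3) Constant 1 g trip to Alpha Centauri: accelerate half way, brake half way.
# Units: light-years and years, so c = 1 and 1 g is about 1.03 ly/yr^2.
c = 1.0
a = 1.03            # 1 g in ly/yr^2
d = 4.37            # distance to Alpha Centauri in ly
x = d / 2           # each leg covers half the distance
t_half = math.sqrt((x / c) ** 2 + 2 * x / a)         # Earth-frame time per leg
tau_half = (c / a) * math.acosh(a * x / c ** 2 + 1)  # ship proper time per leg
print(f"Earth time: {2 * t_half:.1f} yr, ship time: {2 * tau_half:.1f} yr")
```

This reproduces the figures above: roughly 6 Earth years one way at 1 g, with noticeably less time elapsing aboard the ship, and a modest ~3 rpm spin sufficing for 1 g at a 100 m radius.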
The visual and audio interaction just requires cameras, microphones, and speakers. In Star Trek: Voyager, they had a "portable holographic emitter", and as long as it had those items, and the ability to display a hologram, that seems possible. However, actually manipulating objects in the real world is another matter ("photonic matter", to be specific). For that, you would need a robot, or at least a robotic arm. A more realistic version of the holographic doctor on that show might have had him do all the "bedside manner" human interactions (except touching), like asking patients about their symptoms, while robotic arms do all the physical operations, like surgical procedures.
As for Blade Runner, we could give them the benefit of the doubt and assume that the locations were all hooked up with microphones, cameras, speakers, and holographic emitters. StuRat (talk) 20:41, 16 October 2017 (UTC)
...never gonna happen! --Kharon (talk) 01:41, 17 October 2017 (UTC)
We're already headed towards every public place being filled with microphones, cameras, and speakers. StuRat (talk) 02:02, 17 October 2017 (UTC)
I was going to link smart dust. That said, it seems conceivable to me that "holograms" (sensu lato) could have direct sensory capabilities by some more integrated means, since interfering lasers are inherently capable of measuring distances very precisely and have been used to detect very small vibrations (i.e. spying by bouncing off windows). But this depends on the specifics of how you make a seemingly 3D free floating hologram far from an emitter, which is the more difficult technical question. Wnt (talk) 19:55, 17 October 2017 (UTC)

# October 17

## Kilonova and superheavy elements

The kilonova from a neutron star merger was just announced on Monday as part of a gravitational wave detection. I read an article on astronomy.com which said the kilonova produced Earth-masses' worth of gold, platinum and uranium, which are heavy elements. It makes me wonder whether the neutron star merger could also produce transactinide elements such as moscovium (element 115), or even elements beyond oganesson like feynmanium (element 137). Based on the amount of gold produced in that kilonova, over 10 Earth masses, it could produce an asteroid's mass worth of moscovium, for example. You will strike gold with me if you think so. PlanetStar 00:16, 17 October 2017 (UTC)

Some points:
1) Those huge masses are likely mixed in with even huger masses of "junk" elements, so it's not like there will be solid gold planets spit out. After all, there's enough gold in seawater to pave the streets with gold, but it's diluted by a huge amount of water, etc., and hence useless to us.
2) Isotopes of moscovium lists all half-lives less than a second, so I wouldn't expect any to be left, say, a day after the event.
3) Island of stability might mean there are some stable heavy elements created by such an event. If so, this could be quite interesting. We should look for their atomic spectra in light coming from the event. StuRat (talk) 01:27, 17 October 2017 (UTC)
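Point 2 above can be made quantitative with simple exponential decay. A sketch, assuming a half-life of about 0.65 seconds (roughly that of Mc-290, the longest-lived known moscovium isotope):

```python
def fraction_remaining(t_seconds, half_life_seconds):
    # N(t)/N0 = 2^(-t/T) for a half-life T
    return 2.0 ** (-t_seconds / half_life_seconds)

# One day is about 133,000 half-lives: the surviving fraction underflows
# to zero in floating point, and is physically zero for any real sample.
after_one_day = fraction_remaining(86400, 0.65)
```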
Nucleosynthesis#Explosive nucleosynthesis is a good start for your reading, I would then follow the links from there for more details on various questions you may have. --Jayron32 01:30, 17 October 2017 (UTC)
Pretty much anything that can exist, would be expected to be created in some quantity in such events. However, as a general rule the superheavy elements are also very unstable. Many will radioactively decay within very short periods of time, preventing them from being seen or used elsewhere. Dragons flight (talk) 11:16, 17 October 2017 (UTC)

## Moving from warfarin to heparin + surgery

Why would a surgeon move a patient to heparin from warfarin (which was the usual treatment) pre- and post-surgery? As far as I know, both increase bleeding (and both work as blood thinners).--Dikipewia (talk) 00:37, 17 October 2017 (UTC)

The patient should ask. If the doctor can't give a good reason, it might be good to get a 2nd opinion. I've seen doctors change meds "for no apparent reason" way too often. Any change in medication should be discussed with the patient, and a reason given. I wonder if there's a "patient bill of rights" item somewhere that lists "The patient has the right to be informed of any change in medication, given a reason for the change, and refuse the change, if they so choose". StuRat (talk) 01:19, 17 October 2017 (UTC)
It's not a real ongoing case. It just appears to be normal praxis, see [[8]]. I just want to know the rationale behind it.--Dikipewia (talk) 01:48, 17 October 2017 (UTC)
Did you mean "praxis" or "practice" ? StuRat (talk) 02:05, 17 October 2017 (UTC)
Yes. Indeed.Dikipewia (talk) 15:24, 17 October 2017 (UTC)
Warfarin is generally discontinued for surgery due to the bleeding risk. There is a lot of literature about "bridging" the period when warfarin is discontinued with heparin or other anticoagulants. One fairly recent study [9] of atrial fibrillation patients was of the opinion it was unnecessary to have any anticoagulant. But this is a big topic and it would really take a lot more effort than I'm willing to give it to see how general and agreed-upon that conclusion actually is. Wnt (talk) 11:39, 17 October 2017 (UTC)
I know that some doctors are against or don't see the necessity of this bridging treatment.
However, if they choose a bridging mechanism, why would another anticoagulant be different? During surgery, what makes the anticoagulant warfarin unsafe and the anticoagulant heparin safe? Both seem to act in the same way, as blood thinners that reduce coagulation to avoid blood clots. Wouldn't this imply that both increase bleeding risk? Dikipewia (talk) 15:24, 17 October 2017 (UTC)
Warfarin is a vitamin K antagonist while heparin activates antithrombin on binding. Warfarin's effect should be more long lasting (I think) and heparin's can rapidly be reversed with protamine sulfate. Again, this is an area where a great deal is known but I don't know much at all, but I think this is at least part of the answer. Wnt (talk) 19:58, 17 October 2017 (UTC)

## Clean air in the UK

Is it just a question of cars? Couldn't it be that the air on a really small place is contaminated by a local industry? Could that be more unhealthy than London?--Hofhof (talk) 00:53, 17 October 2017 (UTC)

See air pollution in the United Kingdom. Unfortunately, that doesn't seem to consider any source other than vehicles. Here's a more even treatment of the source, but alas a bit dated (2001): [10]. Note that vehicles are somewhat unique in that they pollute most where the most people are, whereas factories or power stations can be located where the prevailing winds will blow the smoke clear of the cities. StuRat (talk) 00:55, 17 October 2017 (UTC)
• There are many sources of air pollution. Historically, London had life-threatening pollution ("killer fogs", mostly from coal fires) prior to the advent of automobiles. As other sources were addressed, car pollution became relatively more important. Modern monitoring methods do a fairly good job of identifying sources. -Arch dude (talk) 00:58, 17 October 2017 (UTC)
But the question remains: air pollution monitoring covers things like nitrogen dioxide and ozone. But what if I'm close to a chemical plant? Could this chemical plant contaminate more than anything you find in London?--Hofhof (talk) 01:07, 17 October 2017 (UTC)
A UK chemical plant shouldn't release many chemicals into the air normally, due to regulations, but there's always the risk of a Bhopal disaster event. StuRat (talk) 01:17, 17 October 2017 (UTC)
"Shouldn't" is not an exact synonym for "doesn't", mind you. --Jayron32 01:27, 17 October 2017 (UTC)
List of active coal fired power stations in the United Kingdom does show they are rapidly reducing reliance on this dirty energy source. StuRat (talk) 01:17, 17 October 2017 (UTC)
That's why there are so many record-high chimneys in industrial areas! As long as anyone pollutes at a safe distance from any detector, nature, livestock or human population, they can contaminate almost as much as they want without direct, local consequences. Cars emit right where they are, so there are direct, local consequences. --Kharon (talk) 01:40, 17 October 2017 (UTC)
"The Government announced in November 2015 that the UK will phase out coal-fired power generation by 2025" UK COAL PLANT CLOSURES - A STRUCTURAL SHIFT AWAY FROM COAL. Alansplodge (talk) 12:53, 17 October 2017 (UTC)
The mayor of London has recently called for a ban on domestic wood-burning stoves - [11] - and there have been concerns about the amount of methane produced by cows - [12]. Pollution is a highly complex issue, with no easy answers. Wymspen (talk) 10:21, 17 October 2017 (UTC)
Stand next to a wood burning grill and you are breathing more dangerous particulates than you would ever encounter in normal London air. So, yes, local pollution can be intense. However, humans rarely spend much time near intense pollution sources (e.g. wood fires) but many people breathe city air all the time, so the cumulative effect of the latter is often more important. Beyond a certain scale, all industries have regulations for the quantity and type of pollutants they can legally emit into the air. Often there are inspections to show that they have the right kind of mitigation procedures (e.g. the right type of burners, smokestacks, etc.) to mitigate any expected air pollution. For very large scale industries there is also routine monitoring of local air quality. Power plants and industrial activity are a source of air pollution in the UK. (So is agriculture, for some pollutants like ammonia.) However, cars get a lot of attention in the UK because they are a major source of pollution, and they operate in close proximity to people. The growth of relatively more-polluting small diesel engines (roughly 50% of UK transport) and the relatively less stringent emissions standards (compared to, for example, the US) has made air pollution from the transportation sector a more prominent problem in the UK than in most other developed countries. Dragons flight (talk) 11:09, 17 October 2017 (UTC)
The Department for Environment, Food and Rural Affairs web page, Causes of air pollution, says that "In all except worst-case situations, industrial and domestic pollutant sources, together with their impact on air quality, tend to be steady or improving over time. However, traffic pollution problems are worsening world-wide". A more detailed breakdown linked from that page is What are the causes of air Pollution. Alansplodge (talk) 12:47, 17 October 2017 (UTC)

## Voltage drop

I am experimenting with conductive ink, which I applied to a strip, at the ends of which I placed terminals (nuts and bolts) and I measure a resistance of about 45 Ω over those terminals. But I also want to measure the voltage drop at different points.
So I stacked two 3 V batteries, resulting in about 6.5 V. When I connect two copper wires to the poles I measure a slightly lower voltage at the other ends of those wires. But then when I connect those ends to the terminals I measure only 0.37 V over the terminals. That is about 1/20 of what it should be.
If I place the leads of the multimeter further in, I get an increasingly lower voltage, which is as expected (the voltage drop), but the initial voltage just can't be right, can it?
To make sure, I measured the resistance over a terminal, but that is 0.2 Ω or less. And the wires are well connected to the terminals (a firm pull doesn't pull the wires out). So what may cause this? DirkvdM (talk) 15:55, 17 October 2017 (UTC)

• This would be much simpler with a diagram, because while I think I understood it (and find it as puzzling as you) I may have missed something; or better yet, a photograph. The best guess I have that matches all the symptoms is that one of the copper wires, or its connection with the battery, has a resistance ~ 10kΩ, but that sounds unlikely (this is too low for a broken cable or faulty connection). You could try measuring the resistance of those.
BTW: if you do any kind of experiment, take a lot of photographs - do not spend time choosing good angles/lighting or sorting them out afterwards, just take tons of crappy shots with your cell phone and dump them into a date-named folder on a hard drive. In this day and age it is pretty much free to do that, and once in a blue moon you will be able to retrieve the one photograph from two years ago that shows a crucial point of the setup that you did not realize was crucial back then. This is of course in addition to keeping a lab book, but the amount of lab-book writing required to capture as much information is just enormous. TigraanClick here to contact me 16:12, 17 October 2017 (UTC)
You should check the voltage of the pair of batteries when connected. If it is still 6.5 V, then measure the voltage drop on your power supply wires. There could be a problem in those wires. If the voltage from the battery pair is very low it could mean that the battery is flat, or has a very high internal resistance. Or perhaps they are connected back to front. Graeme Bartlett (talk) 21:11, 17 October 2017 (UTC)
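Treating the circuit as a simple voltage divider puts a number on the missing resistance. This is a sketch under the assumption that the whole drop comes from one unknown series resistance (battery internal resistance, a lead, or a terminal contact):

```python
def implied_series_resistance(v_source, v_load, r_load):
    # From the voltage divider V_load = V_source * R_load / (R_load + R_series),
    # solve for the unknown R_series.
    return r_load * (v_source / v_load - 1)
```

Plugging in the reported values (6.5 V source, 0.37 V measured over the 45 Ω strip) gives roughly 750 Ω of unaccounted-for series resistance, which is far too much for good copper leads and points at the battery stack or a connection.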

## Flu vaccine and disease prevention (US vs. Europe)

Having lived in both US and Europe, I am acutely aware of their differing policies regarding flu vaccines. The US recommends a flu vaccine for all healthy adults (not otherwise excluded by allergies or other concerns). In Europe, the flu vaccine is only recommended for at risk populations (e.g. children, elderly, etc.) As a consequence, the US vaccinates ~50% of the population each year, while the coverage in European countries is much, much lower. I was wondering if there was any research comparing the effects of these diverging policies in terms of the relative incidence of flu-related disease, lost productivity, death, etc. from the US and Europe. I would be particularly interested to know if the much higher vaccination rates in the US can be shown to have appreciable herd immunity related benefits, or is ~50% not high enough to see benefits in the unvaccinated populations. Dragons flight (talk) 16:01, 17 October 2017 (UTC)

Something else you might want to look at is what happens with strains which the vaccinations don't cover. That is, do those strains spread more when vaccinations occur for the other strains, because people who would have contracted another strain and stayed home now go out and catch the unvaccinated strains ? StuRat (talk) 16:10, 17 October 2017 (UTC)
• The thing about herd immunity is that it is an abrupt transition between "everyone infectable will get it" and "herd immunity works" (based on a few more or less realistic assumptions - large population (often an OK assumption), probability that person A will catch the disease from person B if infected more or less the same for all A and B (pretty much never the case) - but still a good first approximation). Our article cites [13] (which I have not checked) and says that the herd immunity threshold (= level of vaccination, basically) to stop influenza from propagating is 33 to 44%, meaning a 50% vaccination rate would indeed provide herd immunity, but that it would not take a large drop in the vaccination rate to lose it. I will note that this press article leaves one with the impression that the threshold for herd immunity for flu would be somewhere in the 80-90% range, but I would rather trust the NCBI source. TigraanClick here to contact me 16:27, 17 October 2017 (UTC)
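The quoted threshold follows from the standard well-mixed-population formula, threshold = 1 − 1/R0. A minimal sketch, using assumed basic reproduction numbers for seasonal flu of roughly 1.5 to 1.8, which reproduce the cited 33-44% range:

```python
def herd_immunity_threshold(r0):
    # Classic well-mixed-population result: 1 - 1/R0
    return 1 - 1 / r0

low = herd_immunity_threshold(1.5)   # about 0.33
high = herd_immunity_threshold(1.8)  # about 0.44
```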
The mathematics of these types of functions is actually fairly well studied; there's math like bifurcation theory or even the famous Mandelbrot set which is based on iterative functions that have two states, a "stable" state that collapses back to a single, small value, and an "unstable" state that runs away to infinity. The Mandelbrot set is just the limit between the stable state of the function and the "runaway" state of the function. Disease immunity follows similar behavior: at some value of vaccination, the infection rate always drops back to a small, stable value, whereas at any vaccination rate below that threshold, the infection rate skyrockets to essentially "everybody". While each disease has its own characteristic function that describes its transition, there's usually some "tipping point" between "herd immunity" and "everyone gets sick". LOTS of natural systems obey this kind of mathematics, such as population dynamics. --Jayron32 16:40, 17 October 2017 (UTC)
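The tipping-point behavior described above can be seen in a toy generation-by-generation epidemic model. This is an illustrative sketch, not a fitted model; the population size, seed count, and R0 values are assumptions:

```python
def outbreak_size(r0, vaccinated, pop=1_000_000, seed=10):
    # Discrete-generation SIR-style sketch: each generation, every case
    # infects r0 * (susceptible fraction) new people on average.
    susceptible = pop * (1 - vaccinated) - seed
    infected = float(seed)
    total = float(seed)
    while infected >= 1:
        new = min(infected * r0 * susceptible / pop, susceptible)
        susceptible -= new
        total += new
        infected = new
    return total / pop  # final attack rate, as a fraction of the population

# With R0 = 1.5, a 50% vaccination rate is above the 1 - 1/R0 threshold
# and the outbreak fizzles; at 10% coverage a large fraction is infected.
```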
I have been imagining that within the US there might be enough state-to-state or city-to-city variation in vaccination rates that herd immunity was not necessarily an all-or-nothing proposition for the whole US. Also, with flu, the immunity rate should be higher than the vaccination rate if past exposure to similar strains provides some protection. Which is of course all a way of saying "it's complicated". Dragons flight (talk) 17:41, 17 October 2017 (UTC)
The U.S. should be vaccinating more widely than it is, for example with universal free vaccinations. I mean, when you antagonize a country that likely has ready resort to pandemic flu strains, and other fun creative projects, it would be a good idea to practice eradicating flu on a yearly basis. Wnt (talk) 20:03, 17 October 2017 (UTC)