Wikipedia:Reference desk/Mathematics



The Wikipedia Reference Desk covering the topic of mathematics.

Welcome to the mathematics reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Provide a short header that gives the general topic of the question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Post your question to only one desk.
  • Don't post personal contact information – it will be removed. All answers will be provided here.
  • Specific questions that are likely to produce reliable sources will tend to get clearer answers.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we’ll help you past the stuck point.
    • We are not a substitute for actually doing any original research required, nor are we a free source of ideas.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
 
 
See also:
Help desk
Village pump
Help manual


September 14

Applying the symbol "approaches the limit", ≐.

What exactly does the symbol "approaches the limit", ≐, mean? Say

Does this mean f'(x)≐ or maybe f'(x)= (h≐0)? 166.186.169.75 (talk) 01:38, 14 September 2017 (UTC)

The limit symbol is part of the expression on the left-hand side of the equation. Put simply, it means "as h gets close to 0, the expression within the limit gets close to ". For a precise definition, see our article on limits.--73.58.152.212 (talk) 03:10, 14 September 2017 (UTC)
I am fairly familiar with the concept of limits and even approaching a limit. My question is how does that particular notation fit into that derivative example? 166.186.168.71 (talk) 13:09, 14 September 2017 (UTC)
List of mathematical symbols#Symbols based on equality says that ≐ means "is defined as; is equal by definition to". It doesn't mention anything about limits. Loraof (talk) 15:51, 14 September 2017 (UTC)
But if you Google ≐, the general consensus seems to be that it means "approaching the limit". 166.186.168.12 (talk) 02:18, 16 September 2017 (UTC)
I think in this context it means approximate equality or limiting value, see Approximation#Unicode. Different authors may use the same symbol for different things, so it's difficult to say what the exact meaning is intended to be, especially since ≐ isn't exactly standard. Note also that equating the expression with f'(x) assumes that f is differentiable at x in the first place; it's possible for the limit to exist even if the derivative does not. In terms of numerical analysis, the expression gives a better approximation for the derivative for 'nice' functions than the usual one. See also Symmetric derivative. --RDBury (talk) 20:25, 14 September 2017 (UTC)
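To illustrate RDBury's numerical-analysis point, here is a minimal Python sketch (the function, point, and step sizes are my own choices) comparing the usual one-sided difference quotient with the symmetric one:

    import math

    f, x = math.sin, 1.0
    exact = math.cos(x)  # the true derivative of sin at x = 1
    for h in (1e-2, 1e-4, 1e-6):
        forward = (f(x + h) - f(x)) / h              # one-sided quotient, error of order h
        symmetric = (f(x + h) - f(x - h)) / (2 * h)  # symmetric quotient, error of order h^2
        print(h, abs(forward - exact), abs(symmetric - exact))

For a smooth function the symmetric quotient's error shrinks roughly quadratically in h rather than linearly, which is what "better approximation for 'nice' functions" refers to above.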

From a computational perspective, how do mathematicians succeed in proving theorems?

This question might seem ill-defined, but I will try my best. It's known there's a strong connection between the P=NP problem and proving mathematical theorems; verifying a suggested proof is in NP, and if P=NP (with an acceptably small polynomial), there's a fast polynomial algorithm for automatic theorem proving. Assuming P!=NP, how come human mathematicians succeed in proving such intricate theorems, without resorting to brute force to scan all possible strings that are hypothetical proofs? Thanks, 77.127.95.225 (talk) 06:46, 14 September 2017 (UTC)

The literature is sometimes confusing on this point, but P and NP apply to classes of problems rather than particular problems, so individual problems in the class may be easy even if the class as a whole is difficult. The difficulty of a class generally refers to the worst cases, so even if a particular class is considered NP-hard, it may be that a randomly selected instance is solvable in polynomial time in the majority of cases. That being said, there are many different levels of complexity other than P and NP, and NP describes the complexity of solving problems in Boolean logic (see Boolean satisfiability problem), which is a very small part of mathematics as a whole. The class of problems which consists of proving general mathematical theorems is undecidable (see Undecidable problem), which means there are instances which can't be solved under any reasonable model of computation. In contrast, problems in an NP class are solvable, even if it's impractical to solve some of them because it takes such a long time. Just as in the NP case though, just because a class as a whole is undecidable doesn't mean that all instances must be difficult; perhaps most instances are easily solvable even if there are occasional instances which are impossible to solve. This is what happens in mathematics: some statements are easy to prove or disprove, while some are difficult or impossible. Mathematics is full of unsolved problems, most of which can be formulated as proving a theorem, but that doesn't mean that many other problems can't be solved easily. --RDBury (talk) 18:32, 14 September 2017 (UTC)
PS. Automated theorem provers use a variety of heuristics to prove (or disprove) statements. But, as predicted by undecidability, they don't always produce a result. Most, such as Coq, are interactive so that humans can guide the process using experience and intuition while the software takes care of gritty details. --RDBury (talk) 19:03, 14 September 2017 (UTC)
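To make the search-versus-verification gap concrete, here is a toy Python sketch (the formula and all names are my own, purely for illustration): checking whether a given assignment satisfies a CNF formula is a quick pass over the clauses, while the naive search may examine all 2^n assignments.

    from itertools import product

    # A toy CNF formula over variables 0..3. Each clause is a tuple of
    # (variable index, required truth value) literals; a clause is satisfied
    # if at least one of its literals matches the assignment.
    clauses = [((0, True), (1, False), (2, True)),
               ((1, True), (2, False), (3, True)),
               ((0, False), (2, True), (3, False))]

    def satisfies(assignment, clauses):   # verification: linear in the formula size
        return all(any(assignment[v] == val for v, val in clause)
                   for clause in clauses)

    def brute_force(n, clauses):          # search: up to 2**n candidate assignments
        for bits in product([False, True], repeat=n):
            if satisfies(bits, clauses):
                return bits
        return None

    print(brute_force(4, clauses))

This is only meant to illustrate why NP-style search looks hard while verification stays easy; as noted above, general theorem proving is even worse (undecidable).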
I interpret the OP's question, as per his last sentence, as "What is the nature of human intuition that allows people to prove theorems even when the universe of possible approaches to try is huge or infinite?" I think it's a fascinating question, and I'm sure I've read discussions of it that give some insights but not a whole lot. (Sometimes people ask chess experts a similar question.) Surprisingly, our article Intuition doesn't seem to help, nor does the article Artificial intuition. The article Cognition, or wikilinks therein, may be of some help. Or maybe the bibliographies of any of the articles I've linked might lead you somewhere useful. Loraof (talk) 21:20, 14 September 2017 (UTC)
To sort of answer the question in the last line, I don't think any computation is involved. The teacher I had for the NP stuff said that when it came out, he wondered how it was possible to do it. But basically first the 3-SAT problem was shown to be in NP and is not known to be in P. Then other problems can be shown to be essentially equivalent to 3-SAT, or to another problem in the NP-complete class, and that proves them to be NP-complete. Bubba73 You talkin' to me? 22:30, 14 September 2017 (UTC)
Another relevant link here might be Problem solving, especially the sections on strategies and methods. The thing with 'interesting' questions is that they are difficult, if not impossible, to answer, at least for current science/philosophy/metaphysics. It should be noted that 'intuition' in the context of solving a mathematical problem has a somewhat different meaning from its usual one. (Plus there is intuitionistic logic, which has yet another meaning.) The article defines intuition as a source of knowledge, but in mathematics this has proven to be notoriously unreliable. Nevertheless, a mathematician can often look at a theorem statement and come up with the outline of a proof without the use of the problem-solving techniques normally taught; turning the outline into an actual proof is often still an issue though. You might then say that the outline comes from intuition, for lack of a better term. In this context, intuition has the rather Zen-like quality that it can be learned but it's impossible to teach, since otherwise it would be a method or strategy. Perhaps the brain is really executing some internal and poorly understood algorithm, or perhaps there is some ineffable quality of the brain which can't be emulated on a computer. My personal point of view is that until you can put either alternative into a falsifiable form it's not something you can decide. I'm pretty sure the first problem to be shown NP-complete was SAT, aka the Boolean satisfiability problem; see Cook–Levin theorem. Proving a problem is in NP is usually relatively easy: basically you just show that if you have a potential solution you can verify it quickly. The proof of the C-L theorem involves emulating a Turing machine with a set of logical statements and is fairly complex. --RDBury (talk) 08:46, 15 September 2017 (UTC)

Name of a particular type of "temporal" averaging function?

I'm thinking of a function that works in the following way: an infinite list of values is processed sequentially, and at each step the "temporal average" (TA) is the sum of the current TA and the next value, divided by two. So for example, once the values 9976, 3008, 4582, 1733, 576 have been fed into the function we have a TA of 2105.5 (whereas the classical average would have been 3975). Obviously, had the values been processed in reverse, the TA would compute to 6457.0625, hence the so-called "temporal" aspect of this function. Now for some reason this seems to me to be a useful metric of sorts, though admittedly I can't really formulate why I think that is at the moment. So my question is whether there is in fact a name for this kind of function, and also whether there are any interesting (or more useful) variants of it? 73.232.241.1 (talk) 07:25, 14 September 2017 (UTC)

You are describing exponential smoothing with a smoothing factor of 1/2. Dragons flight (talk) 07:34, 14 September 2017 (UTC)
Excellent, thank you! 73.232.241.1 (talk) 07:48, 14 September 2017 (UTC)
I'm not getting your result. Can you provide a worked example similar to the one here?
If current value starts at zero, an input of 9976 results in (0+9976)/2 = 4988
Then (4988+3008)/2 = 3998
Then (3998+4582)/2 = 4290
Then (4290+1733)/2 = 3011.5
Lastly (3011.5+576)/2 = 1793.75 which isn't your 2105.5 . Puzzled -- SGBailey (talk) 13:25, 14 September 2017 (UTC)
Ah found it - you don't start with a "pot" of 0, you are initializing your "pot" to the first value and starting with the second value. -- SGBailey (talk) 13:27, 14 September 2017 (UTC)
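For what it's worth, here is the recurrence in a few lines of Python, run with both initialisations discussed above (a sketch; the smoothing factor is 1/2 as identified by Dragons flight):

    values = [9976, 3008, 4582, 1733, 576]

    def smooth(values, alpha=0.5, start=None):
        # Exponential smoothing: s_new = alpha * x + (1 - alpha) * s_old.
        # With alpha = 0.5 this is exactly "average the running value with the next input".
        s = values[0] if start is None else start
        rest = values[1:] if start is None else values
        for x in rest:
            s = alpha * x + (1 - alpha) * s
        return s

    print(smooth(values))             # 2105.5    (initialised to the first value, as in the question)
    print(smooth(values, start=0))    # 1793.75   (initialised to 0, SGBailey's first reading)
    print(smooth(values[::-1]))       # 6457.0625 (the reverse-order figure from the question)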
That set of data would probably be better described using double exponential smoothing, which takes account of the trend. You might be interested in Smoothing; methods like the Kalman filter or the Savitzky–Golay smoothing filter are also quite often used in practice, and you don't have to pick smoothing factors out of the air. Dmcq (talk) 13:47, 14 September 2017 (UTC)
Okay, yeah the Kalman filter may be a better fit for the data I'm working with actually. I appreciate the suggestions. Cheers. 73.232.241.1 (talk) 19:10, 14 September 2017 (UTC)

September 16

Polynomial-time reduction and Congruence relation within the scope of Heyting algebra (bounded lattice)

Let's say A ⊃ B. Does that mean that ℙ(X) ends within ∈ because S means a is an element of the set S? --VoisesinHead (talk) 20:45, 16 September 2017 (UTC)

Could you give some more context to your question please? (Who are ℙ(X) and ∈ and S?) —Kusma (t·c) 20:52, 16 September 2017 (UTC)
Absolutely not. --VoisesinHead (talk) 22:42, 16 September 2017 (UTC)

September 18

Recent result on infinite cardinals

To quote from the aleph number talk page:

 Just ran across this article this weekend: https://www.scientificamerican.com/article/mathematicians-measure-infinities-and-find-theyre-equal/ which seems to imply that all infinite sets have equal cardinality. It's been way too long since I studied any set theory for me to tie that into aleph numbers, but it's probably worth a mention. 38.109.137.2 (talk) 15:12, 18 September 2017 (UTC)
   The title of that article is clumsily worded. No, it doesn't say at all that all infinite sets have the same cardinality. It says that two particular infinite cardinalities, ones important enough to have been given names (𝖕 and 𝖙), turn out to be provably equal, which was contrary to the previous expectation.
   This isn't the right place to go into detail. I'd love to say more at the mathematics reference desk, if you'd care to ask a question there. --Trovatore (talk) 16:57, 18 September 2017 (UTC) 

I lack access to the journal to read the original article, and Google isn't exactly good for searching for the definitions of p and t. I understand the aleph zero and aleph one concepts (cardinality of integers and reals) and kind of get aleph two (cardinality of the set of all ranges on the real number line). Upon further reading of the articles referencing the paper I noted that while it was specifically talking about the p and t infinities, it never really defined them, and I was assuming that they corresponded to aleph numbers. Having looked a bit more into aleph numbers, I note that there appears to be an assumption that there is no "aleph 1.5"; that is, all infinite cardinalities have to fall into an integer-numbered aleph. So what, roughly, are p and t? Are there any repercussions to the P versus NP problem from this? 38.109.137.2 (talk) 20:04, 18 September 2017 (UTC)

Ah, before we get there we have to clear up a misconception. You write, "I understand the aleph zero and aleph one concepts (cardinality of integers and reals)". But in fact ℵ1 is not necessarily the cardinality of the reals. The cardinality of the reals is 2^ℵ0, or 𝖈.
The assertion that these two cardinalities, ℵ1 and 𝖈, are equal is called the continuum hypothesis. It is unknown whether it is true or false. What is known is that it can neither be proved nor disproved in the most common axiomatization of set theory, ZFC. --Trovatore (talk) 20:17, 18 September 2017 (UTC)

OK, no problems there? Cool. I suppose I should go ahead and answer the actual question, then.
First you need to understand the structure P(ω)/Fin. Basically this is the set of all sets of natural numbers, except that you ignore finite differences. So you consider the even numbers, {0, 2, 4, 6, 8, 10, ...}, to be the same as that set with 2 and 4 left out and 3 and 5 added: {0, 3, 5, 6, 8, 10, ...}. All finite sets are the same as the empty set — they're the "zero" of this structure.
And you have the natural ordering on this structure as "almost inclusion" — this is the subset order, except again you ignore finite differences. So A ≤ B if and only if A ⊆ B with possibly finitely many exceptions (that is, A ∖ B is a finite set).
So now 𝖕 is the smallest cardinality of a collection S of elements of P(ω)/Fin for which any finite subcollection has a nonzero lower bound, but S itself does not. (A nonzero lower bound for such a collection is called a pseudointersection, which is what the 𝖕 stands for; the property that any finite subcollection has a pseudointersection is called the strong finite intersection property.)
On the other hand, 𝖙 is the smallest cardinality of a chain of nonzero elements of P(ω)/Fin that has no nonzero lower bound. (The symbol 𝖙 stands for "tower".)
It's easy to see that 𝖕≤𝖙, because any chain as in the definition of 𝖙 is automatically a collection as in the definition of 𝖕.
However, until the recent result, apparently the people most familiar with the subject thought that it was probably consistent with ZFC that 𝖕 is strictly less than 𝖙. I am not one of those people (that is, I am not an expert on this subject), so I can't help you with why they thought that.
It is now known that ZFC proves that 𝖕 and 𝖙 are equal.
By the way, I looked up the definitions and translated it into the stuff about P(ω)/Fin in my head; it seems clearer to me that way. It is possible that I have made a mistake — if anyone sees one please let me know. --Trovatore (talk) 06:11, 19 September 2017 (UTC)
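For readers who prefer symbols, here is one way of writing out the two definitions above (my own transcription of the description into standard notation, so treat it as a sketch rather than the official statement):

    % Almost inclusion on infinite subsets of the natural numbers:
    % A is almost contained in B when A \ B is finite.
    A \subseteq^* B \iff A \setminus B \text{ is finite}

    % p: the least size of a family with the strong finite intersection property
    % (every finite subfamily has an infinite pseudointersection) but no
    % infinite pseudointersection for the whole family.
    \mathfrak{p} = \min \{\, |S| : S \subseteq [\omega]^{\omega},\ S \text{ has the SFIP but no infinite pseudointersection} \,\}

    % t: the least size of a decreasing chain (tower) of infinite sets, ordered by
    % almost inclusion, with no infinite pseudointersection.
    \mathfrak{t} = \min \{\, |T| : T \subseteq [\omega]^{\omega} \text{ is a } \subseteq^*\text{-decreasing chain with no infinite pseudointersection} \,\}

    % The recent theorem: ZFC proves \mathfrak{p} = \mathfrak{t}.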

Triaugmented triangular prism as die

Can a Triaugmented triangular prism be used as a fair 14-sided die? If not, can it at least be used as a fair 7-sided die by printing each number on two faces? NeonMerlin 20:37, 18 September 2017 (UTC)

I see no reason to suppose it would be a fair die. You can use a normal 6-sided die to get 1 to 7 fairly: throw it once to get a, then again to get b; if both are 6, throw them again. Then calculate 6a+b, divide by 7, and add 1 to the remainder to give the result 1 to 7. Only one case in 36 needs another throw, and it is unlikely you'll have to throw too many times. Dmcq (talk) 22:01, 18 September 2017 (UTC)
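Dmcq's scheme in a few lines of Python, just to make the mechanics concrete (a sketch; the function name is mine):

    import random
    from collections import Counter

    def fair_d7():
        """Fair 1-7 from an ordinary six-sided die, rejecting double sixes."""
        while True:
            a, b = random.randint(1, 6), random.randint(1, 6)
            if (a, b) != (6, 6):                # reject 1 outcome in 36 and rethrow
                return (6 * a + b) % 7 + 1      # the 35 remaining outcomes split evenly, 5 per value

    print(Counter(fair_d7() for _ in range(70000)))  # each of 1..7 should appear roughly 10000 times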
A fair die needs to be isohedral. It is not enough that all faces are the same shape; they also need to all lie in the same symmetry orbit. The triaugmented triangular prism is not isohedral. Double sharp (talk) 04:38, 19 September 2017 (UTC)
You can get a fair die for 1 to 7 by having a seven-sided prism with a seven-faced pyramid at each end; each number should then be on one of the triangles at each end and on a rectangle of the prism. It should be easy to see this will choose 1 to 7 fairly. Dmcq (talk) 22:50, 19 September 2017 (UTC)
Provided the bases are regular heptagons, yes. Or use an octahedron and discard one number, when it comes up. StuRat (talk) 23:41, 19 September 2017 (UTC)

Symmetry axes of an ellipsoid

Ellipsoid#Standard equation says

x²/a² + y²/b² + z²/c² = 1
where a, b, c are positive real numbers.
The points (a, 0, 0), (0, b, 0) and (0, 0, c) lie on the surface. The line segments from the origin to these points are called the semi-principal axes of the ellipsoid.

This seems to imply that there are also other axes. Can we say that every chord through the center is an axis of twofold symmetry? (This appears to me to be true for ellipses.) References if possible, please. Loraof (talk) 22:17, 18 September 2017 (UTC)

No, I don't believe so, and I don't believe this applies to a (planar) ellipse, either. Just draw a highly eccentric ellipse aligned with the X and Y axes and a 45-degree line through the center. You will see it is not symmetrical about that line. The same is true in 3D. Here's a rough pic to use to visualize the 2D case: [1]. Chord CW isn't perpendicular to the ellipse boundary, as you see, so if one half of the ellipse is mirrored about it, its tangent would be discontinuous with the original half. StuRat (talk) 22:33, 18 September 2017 (UTC)
You're talking about reflectional symmetry. I'm talking about twofold rotational symmetry—if you rotate an ellipse by 180° it occupies the same position as it originally did. [added subsequently:] But rotational symmetry in 2D is about a point, not a line, so I guess it doesn't work for the ellipse wrt a chord.
So my question remains, slightly clarified: Does the ellipsoid have twofold rotational symmetry? Loraof (talk) 02:32, 19 September 2017 (UTC)
No, the three semi-principal axes are the only axes of twofold rotational symmetry,[1] if the ellipsoid is not a sphere. For example, x=y=z is not an axis of symmetry of . lies on the ellipsoid, but rotating the point 180 degrees about x=y=z will take it off the ellipsoid to . C0617470r (talk) 07:15, 19 September 2017 (UTC)

References

  1. ^ Holden, Alan (1992). The Nature of Solids. Courier Corporation. p. 66. ISBN 9780486270777. 
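A quick numerical check of the point being made above, using the fact that a 180° rotation about a unit axis u sends v to 2(v·u)u − v. The semi-axes (a, b, c) = (1, 2, 3) and the starting point are my own choices, since any non-spherical example will do:

    import numpy as np

    a, b, c = 1.0, 2.0, 3.0                       # semi-axes of x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
    u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # unit vector along the line x = y = z

    def ellipsoid_value(p):
        x, y, z = p
        return x**2 / a**2 + y**2 / b**2 + z**2 / c**2   # equals 1 exactly on the surface

    def rotate_180(p):
        # Rotation by 180 degrees about the axis through the origin with direction u.
        return 2.0 * np.dot(p, u) * u - p

    p = np.array([a, 0.0, 0.0])      # a point on the ellipsoid
    q = rotate_180(p)
    print(ellipsoid_value(p))        # 1.0
    print(ellipsoid_value(q))        # about 0.27 here, not 1, so x = y = z is not a symmetry axis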
The semi-principal axes are actually principal semi-axes. Bo Jacoby (talk) 09:27, 19 September 2017 (UTC).

Thanks. So why are the three axes of rotational symmetry called (at least in the Wikipedia article Ellipsoid) "the principal axes" instead of "the (only) axes"? Loraof (talk) 16:48, 19 September 2017 (UTC)

An ellipsoid (like any solid) has an infinite number of axes passing through the centre. The three axes of the ellipsoid that have rotational symmetry (of infinite order or of order 2) are called the principal axes. No other axis has rotational symmetry of any order. If rotated about any other axis, the ellipsoid would "wobble" and there would be stresses on the axis of rotation. The ellipsoid also has three planes of reflective symmetry. Each of these planes contains two of the principal axes. Dbfirs 18:04, 19 September 2017 (UTC)
So an axis of any solid object is by definition any chord through the center? If so, does the "center" of a solid object here mean the centroid? Loraof (talk) 03:08, 20 September 2017 (UTC)

September 19

Measure Theory

I am taking a course in statistical theory and am having some issues with measure theory. Admittedly these are homework questions (they're from the book Theoretical Statistics by Keener), but I'm not really looking for answers so much as some clarification on notation and whether my thought processes are correct so far. Thanks in advance!

Question 1: μ is a measure on subsets of the natural numbers where , for . I'm asked to compute . Since this is a counting measure, would this simply be ? I think having the x in the integrand is throwing me off.

Question 2: I'm given that, for a set , . I'm assuming the "pound sign" notation refers to the size of the set, correct? So it's asking me to find various measures, one of which is , where is the set of all even numbers. I can see the sequence, for odd, , and for even, this exactly equals . I also see that . Since the limit exists, would this be , and can I apply the same sort of process to other such sets (e.g., the primes, the perfect squares)?

A few hints and such. For #1, you need to be a bit more careful. μ isn't given directly – only by its value on certain subsets of ℕ. But you can use properties of measures to recover it without much hassle. And it's not counting measure; that's the measure m with m({n}) = 1 for all singletons. But you're right that you can evaluate it as a sum; exactly how to do that might depend on what you're supposed to know at this point though, so I'm not 100% sure how you should proceed exactly.
For #2, I think you've basically got it, although it might be kind of a pain to try to come up with exact expressions for every value of n. Still, you should be able to reason out a limit in either of those other 2 cases. --Deacon Vorbis (talk) 02:13, 19 September 2017 (UTC)
For Question 1, I'm pretty sure the sum given is the correct value even if the reason given isn't. On question 2, this is known as the Natural density. It should be easy to show that the natural density of the squares is 0, but the article quotes the Prime Number Theorem (a fairly deep result) to compute the natural density of the primes. --RDBury (talk) 02:22, 19 September 2017 (UTC)
The sum isn't right though; you really need to figure out μ's value on singletons and then use that along with the fact that there's a factor of x in the integrand. Also, I would assume that something like quoting the Prime Number Theorem is fair game. But that's stronger than you need anyway. You can show that the number of primes up to n is no more than about n(1 - 1/2)(1 - 1/3)(1 - 1/5)...(1 - 1/pk) for pk < n, and that this (divided by n) tends to 0 as n goes to ∞. See Euler product and Riemann zeta function for a bit more detail. --Deacon Vorbis (talk) 02:39, 19 September 2017 (UTC)
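If it helps to see the natural-density idea numerically, here is a small Python sketch computing #(A ∩ {1, ..., n})/n for the squares and for the primes at a few cutoffs (the cutoffs are arbitrary):

    import math

    def prime_sieve(limit):
        """Sieve of Eratosthenes: is_prime[k] is True iff k is prime, for k <= limit."""
        is_prime = [False, False] + [True] * (limit - 1)
        for p in range(2, math.isqrt(limit) + 1):
            if is_prime[p]:
                for m in range(p * p, limit + 1, p):
                    is_prime[m] = False
        return is_prime

    limit = 10**6
    is_prime = prime_sieve(limit)
    for n in (10**3, 10**4, 10**5, 10**6):
        square_count = math.isqrt(n)          # number of perfect squares <= n
        prime_count = sum(is_prime[:n + 1])   # number of primes <= n
        print(n, square_count / n, prime_count / n)

The density of the squares drops off quickly (roughly 1/√n), while the density of the primes decreases much more slowly (roughly 1/ln n, per the Prime Number Theorem); both tend to 0.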
On 1, my thinking is that for any measure μ on ℕ,
∫ x dμ(x) = Σ_{n ≥ 1} n·μ({n}) = Σ_{k ≥ 1} μ({k, k+1, k+2, ...}),
which turns out to be the given sum in this case. The second equation follows by expanding out the lhs, rearranging, and applying countable additivity. On 2, it did seem like using the PNT was like using a sledgehammer to swat a fly and I like your argument better. The OP did mention though that it was a course on statistical theory, which I assume would not have number theory as a prerequisite. So while the squares are straightforward, the primes aren't unless you use a strong result like that, and even if it is fair game to use, it wouldn't be fair to assume someone taking that course would be familiar with it. --RDBury (talk) 03:23, 19 September 2017 (UTC)
Oh yeah, that's clever. I guess the boring way I had would have given the same answer had I bothered to crank out the sum, but that's definitely nicer. --Deacon Vorbis (talk) 03:30, 19 September 2017 (UTC)

OP here... the insight is appreciated! Thanks for clarifying my confusion on Question 1; it makes quite a bit of sense now, and RDBury, that is definitely an elegant approach. (I'm shocked I got the right answer for the wrong reason, but am glad to know the proper angle now.) I feel much better about Question 2 as well. Thank you again. 2600:387:A:9:0:0:0:61 (talk) 04:24, 19 September 2017 (UTC)

Tripling odds

If three things each have a 17% chance of happening, how likely is it all three happen? InedibleHulk (talk) 01:17, September 19, 2017 (UTC)

If their occurrence is independent of each other, then it is 0.17 * 0.17 * 0.17. 110.22.20.252 (talk) 01:59, 19 September 2017 (UTC)
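For the record, that product is easy to evaluate (a one-line check, assuming independence as stated):

    print(0.17 ** 3)   # 0.004913..., i.e. about 0.49%, or roughly 1 chance in 204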

Otherwise it is Pr(A) * Pr(B|A) * Pr(C|A,B) = 0.17 * Pr(B|A) * Pr(C|A,B) 110.22.20.252 (talk) 02:01, 19 September 2017 (UTC)

Fortunately, it's not otherwise. Simpler than I'd hoped. Thanks! InedibleHulk (talk) 02:09, September 19, 2017 (UTC)
Or wait, no. It depends. Maybe. If a boy is 10 at some point in 1954, 15 at some other in 1959 and 17 at another in 1961, what are the odds (without knowing anything else) that all of these points occur in November or December? InedibleHulk (talk) 02:20, September 19, 2017 (UTC)
He would have to have reached age 10 between November 1 and December 22 (resulting in his reaching 17 by December 31). Thus he would have to have been born in 1944 between October 20 and December 10. Thus there are 52 admissible birth dates. Since 1944 ( the year when he must have been born) had 366 possible birth dates, the probability is 52/366 = 26/183. Loraof (talk) 02:46, 19 September 2017 (UTC)
Why couldn't he turn 10 between December 22 and 31? InedibleHulk (talk) 03:41, September 19, 2017 (UTC)
Sorry. Striking my post. Too convoluted to explain the nonsense that I had in mind. I'll think about it some more. Loraof (talk) 17:32, 19 September 2017 (UTC)
Let's see if I understand the question correctly: Given that a boy was 10 at some point in 1954, 15 at some point in 1959 and 17 at some point in 1961, what is the probability that he was 10 on some day in November or December of 1954, 15 on some day in November or December of 1959 and 17 on some day in November or December of 1961?
If that's correct, then most of the question is redundant. If the boy was 10 at some day in 1954, he's guaranteed to be 15 at some day in 1959. Similarly, if he's 10 at some day in November or December of 1954, he's guaranteed to be 15 at some day in November or December of 1959.
So the question really comes down to: given that the boy was 10 on some day in 1954, what is the probability that he was 10 on some day in November or December of 1954? To be 10 on some day in 1954, he would have to be born on some day from Jan 2, 1943 to Dec 31, 1944. Since 1944 was a leap year, that gives us 730 possible days. To be 10 on some day in November or December of 1954 requires being born on some day from Nov 2, 1943 to Dec 31, 1944. Again, because of the leap year, that's 426 days. Assuming an a priori uniform distribution on possible birthdays (not true, but eh), that gives us 426/730.--2601:245:C500:A5D6:418F:C405:A22F:8942 (talk) 22:40, 19 September 2017 (UTC)
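The day counts above can be checked by brute force. A Python sketch (it treats a 29 February birthday as being reached on 1 March in non-leap years, a convention that doesn't change the totals):

    from datetime import date, timedelta

    def age(born, on):
        """Completed years of age on a given date."""
        return on.year - born.year - ((on.month, on.day) < (born.month, born.day))

    days_1954 = [date(1954, 1, 1) + timedelta(d) for d in range(365)]
    nov_dec_1954 = [d for d in days_1954 if d.month >= 11]

    ten_in_1954 = ten_in_nov_dec = 0
    birthday = date(1943, 1, 1)
    while birthday <= date(1944, 12, 31):
        if any(age(birthday, d) == 10 for d in days_1954):
            ten_in_1954 += 1
            if any(age(birthday, d) == 10 for d in nov_dec_1954):
                ten_in_nov_dec += 1
        birthday += timedelta(1)

    print(ten_in_nov_dec, ten_in_1954)   # 426 and 730, i.e. 426/730 ≈ 0.584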
I interpret the question differently. Pick a random day in each of the years 1954, 1959, 1961. Ask how old a boy was on those dates. If the answer is 10, 15, 17, then what is the chance that all three days were in November or December? 61/365 is near 17% so that matches the original question better. PrimeHunter (talk) 14:17, 20 September 2017 (UTC)

Type-I and Type-II Errors in The Presence of Autocorrelation

Scheffé (1959) shows that type-I errors increase in the presence of autocorrelated (serially dependent) data sets. In one simulation I regress time series on a set of ordinal numbers, which should be meaningless, yet the regression coefficient is statistically significant about 20% of the time, which I interpret as a type-I error. In another simulation, I split time series into pre-intervention and post-intervention segments, resulting in 7 times as many statistically significant changes in an interrupted time series analysis versus those detected by paired samples t-tests, which I interpret to be type-II errors. The two simulations seem to disagree with each other, while the first agrees with Scheffé (1959). Am I missing something? Can someone shed some light on the matter for me? Schyler (exquirere bonum ipsum) 14:05, 19 September 2017 (UTC)

Fixing the redlink in the OP's post. Loraof (talk) 17:38, 19 September 2017 (UTC)
In general, I'm not even clear on what is confusing you: why should any of this different (and only abstractly described) stuff be the same? Is there some reason you'd expect these two different simulations and different methods to "agree"? And if so, what specifically do you expect them to agree on? Maybe it's obvious to you because you're close to it, but I feel like you're leaving out a lot of potentially important context. For starters, time-series analysis is something I turn to for understanding observational data about the real (non-simulated) world. It can help us conjecture underlying causes, etc. But if I want to understand the results of a (possibly very complicated) simulation scheme, I start by analyzing how it works, not by throwing that out and doing stats on the output. All the underlying causes are right there in the code, right? I say this not to criticize what you're doing, but to point out how much we are missing in this description.
I think we might be able to offer a bit more insight and appropriate references if you can give more detail on what this is all about. What is the nature of simulations A and B? What exactly are the two different methods used on B? I know what you mean by the paired sample t-test, but not what you mean by interrupted time series analysis, as that is a whole field of related methods, not a specific method. SemanticMantis (talk) 21:14, 19 September 2017 (UTC)
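On the first simulation: the inflation of the type-I error rate under positive autocorrelation is easy to reproduce. A Python sketch (the AR(1) coefficient, series length, and replication count are arbitrary choices of mine, not the OP's setup):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, phi, reps = 100, 0.7, 2000       # series length, AR(1) coefficient, replications
    t = np.arange(n)                    # the "ordinal numbers" regressor
    false_positives = 0

    for _ in range(reps):
        e = rng.normal(size=n)
        y = np.empty(n)
        y[0] = e[0]
        for i in range(1, n):           # AR(1) noise: y_i = phi * y_{i-1} + e_i, no real trend
            y[i] = phi * y[i - 1] + e[i]
        if stats.linregress(t, y).pvalue < 0.05:   # OLS slope declared "significant"
            false_positives += 1

    print(false_positives / reps)       # well above the nominal 0.05 when phi > 0

The naive OLS t-test assumes independent errors, so its standard errors are too small for positively autocorrelated data, which is the mechanism behind Scheffé's point.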


September 20

composition of distributions and submersions

I don't really understand the distributions article, but I presume the composition of a distribution with a submersion, T(S), would be a distribution rather than a function? Is there anything roughly (or even precisely?) like an inverse to the distribution T, call it T* rather than T^-1, such that T(S(T*)) would be a function? 144.35.45.72 (talk) 00:41, 20 September 2017 (UTC)

Do we have an article on Cancelling out terms (in basic algebra and in otherwise-divergent results)?

By this I mean the simple mathematical/algebraic concept that opposite terms in an equation can be cancelled out, or that problematic terms in group, gauge or other theories can be cancelled out to avoid divergences.

If a suitable article exists can someone

(and if not, can a basic stub or something be created too?)

I don't want to create an article myself, yet, in case one already exists.

Thanks FT2 (Talk | email) 10:39, 20 September 2017 (UTC)

I guess there should be an article on it, especially seeing as the original book on algebra had a title which translates as 'The Compendious Book on Calculation by Completion and Balancing'. Dmcq (talk) 11:10, 20 September 2017 (UTC)
There should be, but I don't know if there is (I couldn't find it). If not, can you or anyone with the requisite knowledge write it, even if just as a stub, and link it as mentioned? I suspect it's one of those subtle, deceptively non-trivial topics that require more capability than it might seem on the surface to cover even crudely. Thank you. FT2 (Talk | email) 11:52, 20 September 2017 (UTC)
  • Created a basic stub at Cancelling out but definitely needs some work done to make it any use. FT2 (Talk | email) 12:07, 20 September 2017 (UTC)
@FT2: Nice work! --CiaPan (talk) 12:53, 20 September 2017 (UTC)
Huh, strange that nothing like that existed, even as sections at other articles, like Algebra or Equation. But I guess this should probably exist on its own, also. --Deacon Vorbis (talk) 12:54, 20 September 2017 (UTC)

@FT2, Deacon Vorbis, and Dmcq: There exists a Cancellation property article, possibly the two should be linked to each other (but rather not merged). --CiaPan (talk) 13:02, 20 September 2017 (UTC)

(edit conflict) Well, hold on. We do have Cancellation property. I'm guessing this is what you were looking for, although it's written a bit more abstractly than one for basic algebra should be. (You just beat me to the punch here). --Deacon Vorbis (talk) 13:04, 20 September 2017 (UTC)
I was on the fence about merging or not merging, but I think I fell on the not merging side, so I went ahead and added an entry to the disambig page at Cancel. --Deacon Vorbis (talk) 13:24, 20 September 2017 (UTC)