Wikipedia:Reference desk/Mathematics
Welcome to the mathematics reference desk.

August 7
Fourier series question
How do I get the Fourier series expansion for this function? I've tried doing the integrals with Mathematica, but they look unreasonably ugly. Leon (talk) 13:06, 7 August 2018 (UTC)
 It's been a while, but that is not a square-integrable function (over one period, say), so it doesn't have a Fourier series. –Deacon Vorbis (carbon • videos) 14:57, 7 August 2018 (UTC)
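When a function is square-integrable, its coefficients can always be checked numerically even if a CAS produces an ugly closed form. A minimal sketch (helper name is mine), using x² on [−π, π], whose series π²/3 + Σₙ 4(−1)ⁿ cos(nx)/n² is well known:

```python
import numpy as np

def fourier_coefficients(f, n_max, num_points=200000):
    """Midpoint-rule approximation of the Fourier coefficients of f on [-pi, pi]."""
    h = 2 * np.pi / num_points
    x = -np.pi + (np.arange(num_points) + 0.5) * h   # midpoints of the grid
    fx = f(x)
    a0 = np.sum(fx) * h / np.pi
    a = np.array([np.sum(fx * np.cos(n * x)) * h / np.pi for n in range(1, n_max + 1)])
    b = np.array([np.sum(fx * np.sin(n * x)) * h / np.pi for n in range(1, n_max + 1)])
    return a0, a, b

# x^2 has the known series pi^2/3 + sum_n 4(-1)^n cos(nx)/n^2, i.e.
# a0/2 = pi^2/3, a_n = 4(-1)^n/n^2, b_n = 0 (even function).
a0, a, b = fourier_coefficients(lambda x: x * x, 4)
```

Comparing the computed a₀/2, a₁, a₂ against π²/3, −4, 1 confirms the quadrature; for a function that fails to be square-integrable the coefficient integrals diverge instead.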
Distance on an oblate spheroid
On a sphere, the distance between two points along the surface (the great-circle distance) is relatively trivial to compute. But is there a closed-form solution for the same problem on an oblate spheroid like the Earth? For the purposes of this, "closed form" can include elliptic integrals, and I have this feeling those will be necessary. Jasper Deng (talk) 17:54, 7 August 2018 (UTC)
 See Geodesics on an ellipsoid catslash (talk) 19:28, 7 August 2018 (UTC)
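As a concrete starting point: Vincenty's inverse formulae give the classic iterative solution on an oblate spheroid, using nested series in the flattening rather than elliptic integrals (Karney's method, described in that article, is more robust, especially near antipodal points, where the iteration below can fail to converge). A sketch assuming WGS-84 parameters; names are mine:

```python
import math

def vincenty_inverse(lat1, lon1, lat2, lon2, a=6378137.0, f=1/298.257223563):
    """Geodesic distance in metres between two points on an oblate spheroid
    (WGS-84 by default), via Vincenty's inverse method."""
    b = a * (1 - f)                       # semi-minor axis
    L = math.radians(lon2 - lon1)
    U1 = math.atan((1 - f) * math.tan(math.radians(lat1)))  # reduced latitudes
    U2 = math.atan((1 - f) * math.tan(math.radians(lat2)))
    sinU1, cosU1 = math.sin(U1), math.cos(U1)
    sinU2, cosU2 = math.sin(U2), math.cos(U2)
    lam = L                               # longitude difference on auxiliary sphere
    for _ in range(200):                  # may not converge for near-antipodal points
        sinlam, coslam = math.sin(lam), math.cos(lam)
        sin_sigma = math.hypot(cosU2 * sinlam,
                               cosU1 * sinU2 - sinU1 * cosU2 * coslam)
        if sin_sigma == 0:
            return 0.0                    # coincident points
        cos_sigma = sinU1 * sinU2 + cosU1 * cosU2 * coslam
        sigma = math.atan2(sin_sigma, cos_sigma)
        sin_alpha = cosU1 * cosU2 * sinlam / sin_sigma
        cos2_alpha = 1 - sin_alpha ** 2
        cos_2sm = (cos_sigma - 2 * sinU1 * sinU2 / cos2_alpha
                   if cos2_alpha != 0 else 0.0)   # equatorial geodesic
        C = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
        lam_prev = lam
        lam = L + (1 - C) * f * sin_alpha * (
            sigma + C * sin_sigma * (cos_2sm + C * cos_sigma * (-1 + 2 * cos_2sm ** 2)))
        if abs(lam - lam_prev) < 1e-12:
            break
    u2 = cos2_alpha * (a * a - b * b) / (b * b)
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
    d_sigma = B * sin_sigma * (cos_2sm + B / 4 * (
        cos_sigma * (-1 + 2 * cos_2sm ** 2)
        - B / 6 * cos_2sm * (-3 + 4 * sin_sigma ** 2) * (-3 + 4 * cos_2sm ** 2)))
    return b * A * (sigma - d_sigma)
```

Two sanity checks: a quarter turn along the equator should give a·π/2 ≈ 10,018,754 m, and equator to pole the WGS-84 quarter meridian ≈ 10,001,966 m.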
August 8
Higher order derivatives
(refactored from Talk:Lagrangian mechanics and Talk:Lagrange multiplier)
Hi, I've been trying to find an answer to my question but I can't find it anywhere, so I thought I'd ask here...
Given a functional J[q] = ∫ L(q, q̇, t) dt and a constraint of the form g(q, t) = 0, one can form a Lagrangian of the form L′ = L + λg and get the Euler–Lagrange equations from varying the corresponding action, i.e. d/dt(∂L′/∂q̇) − ∂L′/∂q = 0 together with g = 0.
But what if the constraint equation depended on higher-order derivatives, i.e. g(q, q̇, q̈, …, t) = 0?
Can I still form a Lagrangian of the form L′ = L + λg and get the (now higher-order) Euler–Lagrange equations from varying the corresponding action, i.e. ∂L′/∂q − d/dt(∂L′/∂q̇) + d²/dt²(∂L′/∂q̈) − … = 0?
— Preceding unsigned comment added by 2604:3D09:A47F:F630:8C32:405D:8927:B5CD (talk)
 Systems whose constraints depend on velocities and higher-order derivatives are generally nonholonomic systems, where the state of the system depends on the path history. These are generally harder to solve than holonomic systems. But the techniques of variational calculus and Lagrange multipliers should be applicable here, too. Mark viking (talk) 17:41, 8 August 2018 (UTC)
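As a minimal illustration of the multiplier technique itself (a static toy problem, not a nonholonomic dynamical system): stationarity of the Lagrangian for a quadratic objective with one linear constraint reduces to a single linear system. Names are mine:

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# Setting the gradient of the Lagrangian f + lam * g to zero gives
#   2x + lam = 0,  2y + lam = 0,  and the constraint x + y = 1,
# which is the linear (KKT) system below.
K = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(K, rhs)
```

The symmetric point (1/2, 1/2) with multiplier −1 is the expected answer; what makes genuinely nonholonomic problems harder is that the analogous stationarity conditions become differential equations coupled through λ(t) rather than one algebraic system.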
Ok, so they can be harder to solve, but it's theoretically possible using the same outline as the Lagrange multiplier technique for holonomic systems? Thanks for your help. One more question: what exactly makes them harder to solve? Could you give an example, please? — Preceding unsigned comment added by 2604:3D09:A47F:F630:5148:5614:E8C6:8D1B (talk)
 Please sign your posts with ~~~~. See Wikipedia:Signatures. RichardofEarth (talk) 19:21, 8 August 2018 (UTC)
 Also, please take into account that λ is a function, not a number. Ruslik_Zero 20:53, 8 August 2018 (UTC)
Hi, I heard about that but have never seen it in an example. Would I be right to guess that λ is explicitly a function of time when the Lagrangian is explicitly a function of time? Can you give an example, because I have never seen λ as a function of time... thanks — Preceding unsigned comment added by 2604:3d09:a47f:f630:5148:5614:e8c6:8d1b (talk) 22:14, 8 August 2018 (UTC)
August 10
Cartesian axes of a rhomboid graph
How would one describe the Cartesian axes in the chart at the top right of the article Nolan Chart? Would one describe the Personal axis as the x-axis and the Economic axis as the y-axis? Or vice versa, perhaps? I suppose one could say "top quadrant", "right quadrant", "bottom quadrant", "left quadrant", but what terminology could be used for the axes? Would someone more hippie-ish be described as having a higher value or a lower one on the Personal axis than a straight-as-a-banker type? And if one does describe the axes with the terms x-axis and y-axis, how does one explain that they are turned 45° from their usual positions? Khemehekis (talk) 04:59, 10 August 2018 (UTC)
 I would just call them the Personal axis and the Economic axis. Loraof (talk) 20:46, 10 August 2018 (UTC)
 Keep in mind that the labels x and y on the axes, and their orientations, are just conventions so we can talk about them sensibly. Any two perpendicular lines can be chosen as x and y, and it's a matter of turning the piece of paper around and/or turning it over to get the conventional orientation. In applications, any two variables can be used regardless of geometrical interpretation, e.g. time or pH. The hippy vs. banker thing seems to me a matter of interpretation, and I'm not an economist. Similarly, the reason for the angle change is a matter of interpretation, but presumably it's to keep the usual labels of left and right on the political spectrum where you'd think they belong. RDBury (talk) 09:43, 11 August 2018 (UTC)
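The 45° turn is just a change of basis. A small sketch (coordinate labeling is hypothetical) showing a rotation matrix sending the chart's "up" direction onto the diagonal y = x, where the two ideological axes would sit in standard position:

```python
import numpy as np

def rotate(points, degrees):
    """Rotate 2-D points counterclockwise about the origin."""
    t = np.radians(degrees)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return points @ R.T

# The diamond chart's "up" direction (0, 1) lies 45 degrees from each
# ideological axis; rotating by -45 degrees lands it on the line y = x.
p = rotate(np.array([[0.0, 1.0]]), -45.0)
```

Since rotation is invertible, nothing about the data changes; only the labels on the axes do.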
August 11
How many samples do you need to describe a language?
That's a question about linguistics, and besides that, it's also relevant for formal languages. The language or computing ref desks would be valid choices. But the tool is statistics, and I expect a mathematical answer, so here we go.
How broad must a corpus be to describe a language accurately? At a minimum, you could aspire to capture above 99.x% of words and grammar structures. If you limit yourself to describing the language of newspaper articles (just to draw some limitation), how many articles would you need to analyze? How many articles contributing no new words or rules do you need to see before you can safely stop? Doroletho (talk) 13:29, 11 August 2018 (UTC)
 First of all, formal languages are very different from natural languages, so I'm not convinced that an answer for one will be entirely relevant to the other. For natural languages the problem seems similar to the Mark and recapture problem, to which the Lincoln index can be applied in some circumstances. One issue is that word frequency in a natural language is not nearly uniform but follows something akin to Zipf's Law; the articles point to research which allows for this, but I didn't see anything freely available. Another problem is that the number of words different languages have can vary tremendously, with the number of words in some languages being virtually infinite. The article on Inuit languages gives the example 'Tusaatsiarunnanngittualuujunga', which translates to "I cannot hear very well." Yet another issue is that natural languages are constantly changing, so what may have been a sufficient sample 5 years ago might miss a significant portion of the language as used today, at least if you're going for 99%+ coverage.
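The Lincoln index idea can be sketched for vocabulary with two toy samples standing in for two batches of articles (real corpora violate the equal-catchability assumption, as the Zipf's Law point above suggests, so treat this as a lower-bound heuristic):

```python
def lincoln_index(sample1, sample2):
    """Estimate total vocabulary size from two independent word samples:
    N ~ (distinct words in sample 1) * (distinct in sample 2) / overlap."""
    v1, v2 = set(sample1), set(sample2)
    overlap = len(v1 & v2)
    if overlap == 0:
        raise ValueError("no shared words; the estimate is undefined")
    return len(v1) * len(v2) / overlap

# Toy "articles"; each contributes 5 distinct words, sharing {'the', 'sat'}.
est = lincoln_index("the cat sat on the mat".split(),
                    "the dog sat by the door".split())
```

Here 5 × 5 / 2 = 12.5 estimated distinct words; with real text the Zipf-skewed frequencies make common words far more "catchable" than rare ones, biasing the estimate low.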
 The formal language case is complicated by the fact that there are many different classes (see Chomsky hierarchy), and conclusions about one class won't always work for another class. A formal language is a set of strings defined by a rule of undetermined complexity, and you're trying to guess the rule based on a subset, hindered by the fact that you can't say with certainty that a string isn't in the language just because it isn't in the sample. For example, if the sample is {"ababab", "aababb", "aaabbb", "aabbab"}, is the language the set of all strings of 'a' and 'b' of length 6, all such strings that start with 'a' and end with 'b', all strings with 3 'a's and 3 'b's, or strings of length 6 in the Dyck language with brackets replaced by 'a' and 'b'? Any answer could be correct, or "cabcab" might be a valid word as well, in which case none of them are. In addition, the strings in a formal language can be any length and there are usually an infinite number of them. All of this points to the conclusion that statistical analysis alone is insufficient to determine the rule which generates a formal language. As another example, consider the language defined as the set of substrings of the output of some given pseudorandom number generator. The number of strings in the language is actually quite small compared to the number of possible (i.e. random) strings, and the algorithm to generate the strings is probably relatively simple, but the idea is that statistical tests should be unable to distinguish between the pseudorandom and truly random outputs. Fortunately, the rule which defines a formal language is usually known from the start, and you generally don't have to guess what it is based on samples.
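The ambiguity in that example is easy to verify mechanically; a sketch checking each candidate rule against the sample shows all four are simultaneously consistent with it:

```python
def is_dyck(s):
    """Treat 'a' as an opening bracket and 'b' as closing: every prefix must
    have at least as many 'a's as 'b's, and the whole string must balance."""
    depth = 0
    for c in s:
        depth += 1 if c == 'a' else -1
        if depth < 0:
            return False
    return depth == 0

sample = ["ababab", "aababb", "aaabbb", "aabbab"]
rules = {
    "length 6 over {a,b}":  lambda s: len(s) == 6 and set(s) <= {'a', 'b'},
    "starts a, ends b":     lambda s: s[0] == 'a' and s[-1] == 'b',
    "three a's, three b's": lambda s: s.count('a') == 3 and s.count('b') == 3,
    "Dyck word":            lambda s: is_dyck(s),
}
consistent = [name for name, rule in rules.items()
              if all(rule(w) for w in sample)]
```

Every rule accepts every sample string, so no amount of testing against this sample alone can separate the hypotheses; only new strings (or knowledge of the generating rule) can.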
 So all this is a very long way of saying that I doubt there is any single answer which would work for every language, whether natural or formal. Probably the best you could do with a natural language is to use 90% of your sample to generate a word list, then use the remaining 10% as a test to see how complete it is. For formal languages you may not be able to find the specific rule easily, but perhaps there is less specific information you could glean. For example, with a large enough sample size you could determine with reasonable accuracy the number of words of length 1, 2, …, 10, and then from that extrapolate an estimate for the number of words of length n. RDBury (talk) 04:54, 13 August 2018 (UTC)
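The 90/10 split suggested above can be sketched as follows (a toy token list stands in for a real corpus; function name is mine):

```python
def coverage(corpus_tokens, split=0.9):
    """Build a word list from the first `split` fraction of the tokens and
    measure what fraction of the held-out tokens that list covers."""
    cut = int(len(corpus_tokens) * split)
    vocab = set(corpus_tokens[:cut])
    held_out = corpus_tokens[cut:]
    hits = sum(1 for w in held_out if w in vocab)
    return hits / len(held_out)

tokens = ("the cat sat on the mat and the dog sat by the door "
          "while the cat watched the street").split()
cov = coverage(tokens)
```

On this toy corpus the held-out portion is ["the", "street"], of which only "the" was seen before, so coverage is 50%; with a real corpus you would repeat this over many random splits and watch the coverage curve approach your 99.x% target as the corpus grows.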
August 13
What is this relationship called?
Imagine some educational assessment with 5 items each with dichotomous outcomes. There is/are:
 1 way to get 100%,
 5 ways to get an 80%,
 10 ways to get a 60%,
 10 ways to get a 40%,
 5 ways to get a 20%, and
 1 way to get a 0%.
Combining these probabilities with the outcome utilities (the percent scores), the expected utility:
 for 5 correct answers is .03125,
 for 4 correct answers is .125,
 for 3 correct answers is .1875,
 for 2 correct answers is .125,
 for 1 correct answer is .03125, and
 for 0 correct answers is 0.
Moreover, .03125=.5^5, .125=(.5^4)*2, and .1875=(.5^3)+(.5^4).
Now, imagine a similar assessment with 3 items with polytomous outcomes, where there is a 25% probability of getting the answer correct (4 multiple choice answers, 1 is correct). There is/are:
 1 way to get a 100%,
 9 ways to get a 66%,
 27 ways to get a 33%, and
 27 ways to get a 0%.
Again, using the expected utility function, the expected utility:
 for 3 correct answers is .015625,
 for 2 correct answers is .09375,
 for 1 correct answer is .140625, and
 for 0 correct answers is 0.
Moreover, .015625=.25^3, .09375=(.25^2)+(.25^3)*2, and .140625=(.25^2)*2+.25^3.
Is this some fascinating relationship, or am I just playing with numbers? Schyler (exquirere bonum ipsum) 16:03, 13 August 2018 (UTC)
 The number of correct answers is an example of a binomial random variable. In the first case, n = 5, and p = 1/2, and in the second, n = 3, and p = 1/4. –Deacon Vorbis (carbon • videos) 16:09, 13 August 2018 (UTC)
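The listed values can all be reproduced from the binomial probability mass function; note also that the contributions summed over all k come to exactly p, consistent with the np/n point made in the replies. A sketch (helper name is mine):

```python
from math import comb

def utility_contributions(n, p):
    """P(k correct) * (k / n) for each k = 0..n, where the number of correct
    answers is binomial(n, p) and utility is the fractional score k/n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) * (k / n)
            for k in range(n + 1)]

five = utility_contributions(5, 0.5)    # the 5-item dichotomous assessment
three = utility_contributions(3, 0.25)  # the 3-item multiple-choice assessment
```

For n = 5, p = 1/2 this yields 0, .03125, .125, .1875, .125, .03125 for k = 0..5, and for n = 3, p = 1/4 it yields 0, .140625, .09375, .015625 for k = 0..3, matching the numbers above; each sum is p (0.5 and 0.25 respectively), which is the expected fractional score.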
 Great! What about the relationship between the expected utilities and the other parameters? Schyler (exquirere bonum ipsum) 16:12, 13 August 2018 (UTC)
 I'm not sure what you mean by "utility" exactly, but from the article there, the expected number of correct answers is just np. –Deacon Vorbis (carbon • videos) 16:13, 13 August 2018 (UTC)
 The percent correct is the utility, so the expected utility is the probability of a correct answer times the percent correct. For example, the expected utility of getting four correct answers on assessment a is (5/32)*(0.8)=.125. Schyler (exquirere bonum ipsum) 16:21, 13 August 2018 (UTC)
 Expected has a technical meaning (see Expected value), which I mention because it doesn't make sense to talk about the "expected ___" of getting 4 answers right out of 5. There's also a meaning of utility in economics, which may be what you're thinking of here, but I'm not sure. You seem to be wanting to take the expected value of a function of a random variable, where the function is just dividing by the number of questions, in order to convert from a raw score to a fraction. By the linearity of expectation, it doesn't matter what order you do that in. So you can just find the expected number of correct answers (which was np as noted above), and then divide by the total number of questions (n), to get the expected score (p). This just verifies the intuitive result that if you have a probability p of getting each question correct, then you should expect to get a total score of p on the test as well. –Deacon Vorbis (carbon • videos) 16:31, 13 August 2018 (UTC)
 I am wondering about the distribution of expected utilities in the econometric sense. Sorry if that was a problem. The expected utility is n_correct*p(n_correct), not np. I am looking for the probability density function of the expected utility, where the expected utility is a function of n_correct and p(n_correct). Then, the 'expected value' is of less interest than the 'maximum,' although I think they may be the same thing in the end. I want the maximum because I want to maximize my expected utility when taking an assessment, for example. Schyler (exquirere bonum ipsum) 16:45, 13 August 2018 (UTC)
You don't really have an expected value of a single outcome. Expected value is an average (technically, an integral with respect to a probability measure) over all possible outcomes. The only difference you seem to be talking about is fractional score versus number of questions correct, but that's a fairly trivial difference. Moreover, there doesn't seem to be anything to maximize; you just answer what you think has the highest chance of being correct (if you're just guessing, then it doesn't matter), and the expected score is as I noted above. –Deacon Vorbis (carbon • videos) 16:54, 13 August 2018 (UTC)
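A quick simulation backs up the closing point that pure guessing yields an expected fractional score of p (helper name and seed are mine):

```python
import random

def simulate_mean_score(n_questions, p_correct, trials, seed=0):
    """Average fractional score over many simulated guess-only assessments."""
    rng = random.Random(seed)
    total_correct = 0
    for _ in range(trials):
        # each question is answered correctly with probability p_correct
        total_correct += sum(rng.random() < p_correct
                             for _ in range(n_questions))
    return total_correct / (trials * n_questions)

# 3-item, 4-choice assessment from the question: expect a mean score near 0.25
mean_score = simulate_mean_score(3, 0.25, 200000)
```

With 200,000 simulated assessments the sample mean sits within a fraction of a percent of p = 0.25, in line with the linearity-of-expectation argument above.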