The AstroStat Slog » Physics
http://hea-www.harvard.edu/AstroStat/slog
Weaving together Astronomy+Statistics+Computer Science+Engineering+Instrumentation, far beyond the growing borders

News and related stories
http://hea-www.harvard.edu/AstroStat/slog/2009/news-and-related-stories/
Mon, 27 Jul 2009, by hlee

I’m getting behind these days because I am chasing too many rabbits. One of those rabbits is hunting for online lectures useful for everyone. Prof. Feynman’s lectures have a great reputation, but they have been hard to come by. I once listened to a pirated version of a lecture tape with horrible sound quality. Thanks to Bill Gates and Microsoft Research, and although this is belated news, I’m delighted to say that the Feynman lectures are online.

I once described how iconic Prof. Richard Feynman is (see Feynman and Statistics). At last, these lectures are publicly viewable through Project Tuva. Not knowing what Project Tuva is, I naturally checked Wikipedia, where I found that it is related to the WorldWide Telescope, which runs on Silverlight from Microsoft Research. The Virtual Observatory is one of the most sought-after projects in astronomy. Several postings related to Google Sky are available here, but not much has been written about the WorldWide Telescope. I attribute its lack of discussion in the slog to its late debut. Also, the fact that renowned astronomers are working on site for the WorldWide Telescope has put some pressure on me. Please visit worldwidetelescope.org.

[Book] The Physicists
http://hea-www.harvard.edu/AstroStat/slog/2009/book-the-physicists/
Wed, 22 Apr 2009, by hlee

I was reading Lehmann’s memoir about the friends and colleagues who greatly influenced the establishment of his career. I’m happy to know that his meetings with Landau, Courant, and Evans led him to become a statistician; otherwise we, including astronomers, would have had very different textbooks, and statistical thinking would have been different. On the other hand, I was surprised to learn that he chose statistics over physics because of his experience at Cambridge (UK). I had thought that becoming a physicist was more desirable than becoming a statistician during the first half of the 20th century. At least I felt that way, probably because popular science books about physics and physics-related historical events were more widely exposed, so I came to think that physicists were cooler than other types of scientists.

The Physicists by Friedrich Durrenmatt

This short play (wiki link) is very charming and fun to read. Some statisticians would enjoy it, and a subset of readers might embrace physics instead of being repelled by it. At the least, it shows statisticians different aspects of a non-statistical science, beyond genetics, biology, medical science, economics, sociology, agricultural science, psychology, meteorology, and so on, where interdisciplinary collaborations are relatively well established.

The links below for The Physicists and Lehmann’s book are from Amazon.

Reminiscences of a Statistician: The Company I Kept by Erich Lehmann

The following excerpt from Reminiscences…, however, was more interesting for how statistics appeared to the young Lehmann, because I felt the same way before learning statistics and before watching how statistics is used in astronomical data analysis.

… I did not like it (statistics). It was lacking the element that had attracted me to mathematics as a boy: statistics did not possess the beauty that I have found in the integers and later in other parts of mathematics. Instead, ad hoc methods were used to solve problems that were messy and that were based on questionable assumptions that seemed quite arbitrary.

As an aside, I have another post on his article, On the history and use of some standard statistical models.

I’d like to recommend another book, in the hope that someone finds its English translation (I have been searching but keep failing).

Der Teil und das Ganze by Werner Heisenberg.

YES, Heisenberg of the uncertainty principle! My understanding is that the notion of uncertainty differs among physicists, statisticians, and modern astronomers. I think it has evolved without much communication between the fields.

Related to uncertainty, I also want to recommend again Professor Lindley’s insightful paper, discussed in another post, Statistics is the study of uncertainty.

Not many statisticians are exposed to (astro)physics, and vice versa, which is probably the primary reason we waste time explaining λ (Poisson rate parameter vs. wavelength), ν (nuisance parameter vs. frequency), π or φ (pdfs vs. particles), Ω (probability space vs. cosmological constant), and H0 (null hypothesis vs. Hubble constant), to name a few. I hope these general reading recommendations are useful for narrowing the gaps and reducing the wasted time.

systematic errors
http://hea-www.harvard.edu/AstroStat/slog/2009/systematic-errors/
Fri, 06 Mar 2009, by hlee

Ah ha~ I once asked, “what is systematic error?” (see [Q] systematic error). Thanks to L. Lyons’ work discussed in [ArXiv] Particle Physics, I found this paper, titled Systematic Errors, which describes the concept of, and the statistical inference related to, systematic errors in the field of particle physics. Happily, it shares a lot of similarity with high energy astrophysics.

Systematic Errors by J. Heinrich and L. Lyons
in Annu. Rev. Nucl. Part. Sci. (2007) Vol. 57, pp. 145-169 [http://adsabs.harvard.edu/abs/2007ARNPS..57..145H]

The characterization of the two error types, systematic and statistical, is illustrated with a simple physics experiment, the pendulum. The authors describe two distinct sources of systematic errors.

…the reliable assessment of systematics requires much more thought and work than for the corresponding statistical error.
Some errors are clearly statistical (e.g. those associated with the reading errors on T and l), and others are clearly systematic (e.g., the correction of the measured g to its sea level value). Others could be regarded as either statistical or systematic (e.g., the uncertainty in the recalibration of the ruler). Our attitude is that the type assigned to a particular error is not crucial. What is important is that possible correlations with other measurements are clearly understood.

Section 2 contains a very nice review, in English rather than mathematical symbols, of the basics of Bayesian and frequentist statistics for inference in particle physics, with practical accounts. A comparison of the Bayesian and frequentist approaches is provided. (I was happy to see that χ² is said not to belong among the frequentist methods. It is just a popular method in references about data analysis in astronomy, not in modern statistics. If someone insists, statisticians could study the χ² statistic under assumptions and conditions that suit the properties of astronomical data, investigate the efficiency and completeness of the Gaussian approximation to grouped Poisson counts within the χ² minimization process, check the degree of information loss, and so forth.)
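
(To make the last point slightly more concrete, here is a minimal sketch of my own, assuming numpy/scipy, that checks how quickly a grouped Poisson(μ) bin count approaches its Gaussian approximation N(μ, μ) as the counts per bin grow; the μ values are arbitrary illustrations, not from the paper.)

    # Sketch (not from the paper): how well does N(mu, mu) approximate a
    # grouped Poisson(mu) bin count as the counts per bin grow?
    import numpy as np
    from scipy import stats

    for mu in [1, 5, 10, 25, 100]:
        k = np.arange(0, int(mu + 10 * np.sqrt(mu)) + 1)
        poisson_cdf = stats.poisson.cdf(k, mu)
        gauss_cdf = stats.norm.cdf(k + 0.5, loc=mu, scale=np.sqrt(mu))  # continuity corrected
        print(f"mu = {mu:4d}   max |CDF difference| = {np.max(np.abs(poisson_cdf - gauss_cdf)):.3f}")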

To a Bayesian, probability is interpreted as the degree of belief in a statement. …
In contrast, frequentists define probability via a repeated series of almost identical trials;…

Section 3 clarifies the notion of p-values as follows:

It is vital to remember that a p-value is not the probability that the relevant hypothesis is true. Thus, statements such as “our data show that the probability that the standard model is true is below 1%” are incorrect interpretations of p-values.

This reminds me of the “null hypothesis probability” that I often encounter in the astronomical literature or in discussions reporting X-ray spectral fitting results. I believe astronomers using the null hypothesis probability are confusing Bayesian and frequentist concepts: the computation is based on the frequentist idea of a p-value, but the interpretation given to it is Bayesian. A separate posting on the null hypothesis probability will come shortly.

Section 4 describes both Bayesian and frequentist ways to include systematics. Through its parameterization (for the Gaussian case, the parameterization is achieved with additive error terms, or nonzero elements in the full covariance matrix), systematic uncertainty is treated via nuisance parameters in the likelihood for Bayesians and frequentists alike, although the term “nuisance” comes from the frequentist likelihood tradition. Obtaining the posterior distribution of the parameter(s) of interest requires marginalization over the uninteresting parameters, which are the ones regarded as nuisance parameters in frequentist methods.
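
To make the marginalization step concrete, here is a toy sketch of my own (not from the paper), assuming numpy/scipy: a Poisson signal-plus-background model in which the background rate is a nuisance parameter constrained by an off-source measurement, and the posterior for the signal is obtained by summing the gridded joint posterior over the nuisance axis. A frequentist treatment would instead profile the likelihood over the background rather than sum over it.

    # Toy example (not from the paper): marginalize a background nuisance
    # parameter out of a Poisson signal-plus-background posterior on a grid.
    import numpy as np
    from scipy import stats

    n_obs, m_obs, tau = 12, 20, 4.0         # on-source counts, off-source counts, exposure ratio (hypothetical)
    s_grid = np.linspace(0.01, 20.0, 400)   # signal rate, the parameter of interest
    b_grid = np.linspace(0.01, 15.0, 300)   # background rate, the nuisance parameter
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")

    # Poisson likelihoods of the two measurements, flat priors -> unnormalized joint posterior
    joint = stats.poisson.pmf(n_obs, S + B) * stats.poisson.pmf(m_obs, tau * B)

    marginal = joint.sum(axis=1)              # sum over the nuisance (background) axis
    marginal /= np.trapz(marginal, s_grid)    # normalize the marginal posterior of the signal
    s_mean = np.trapz(s_grid * marginal, s_grid)
    print(f"marginal posterior mean of the signal rate: {s_mean:.2f}")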

The miscellaneous section (Sec. 6) is the most useful part for understanding the nature of systematic errors and the strategies for handling them. Instead of copying the whole section, here are two interesting quotes:

When the model under which the p-value is calculated has nuisance parameters (i.e. systematic uncertainties) the proper computation of the p-value is more complicated.

The contribution from a possible systematic can be estimated by seeing the change in the answer a when the nuisance parameter is varied by its uncertainty.

As warned, it is not recommended to combine a calibrated systematic error and an estimated statistical error in quadrature, since we cannot assume those errors are uncorrelated all the time. Setting aside the disputes about choosing a prior distribution, the Bayesian strategy works better, since the posterior distribution is the distribution of the parameter of interest, from which one directly gets the uncertainty in the parameter. Remember, in Bayesian statistics parameters are random, whereas in frequentist statistics observations are random. The χ² method only approximates the uncertainty as Gaussian with respect to the best fit (equivalent to the posterior under a Gaussian likelihood centered at the best fit with a flat prior) and combines different uncertainties in quadrature. Neither strategy is almost always superior to the other in general terms of statistical inference; case by case, however, we can say that one functions better than the other. The issue is how to define a model (a distribution, a distribution family, or a class of functionals) prior to deploying the various methodologies, and therefore understanding systematic errors in terms of the model, the parametrization, the estimating equation, or robustness becomes important. Unfortunately, systematic descriptions of systematic errors from the statistical inference perspective are not present in astronomical publications. Strategies for handling systematic errors with statistical care are really hard to come by.

Still, I think their inclusion of systematic errors is limited to parametric methods; in other words, without a parametrization of the systematic errors, one cannot assess or quantify them properly. So what if such a parametrization of the systematics is not available? I thought that some general semi-parametric methodology could assist in developing methods for incorporating systematic errors into spectral model fitting. Our group has developed a simple semi-parametric way to incorporate systematic errors in X-ray spectral fitting. If you would like to know how it works, please check out my poster in pdf. It may be viewed as too conservative, like a projection, since instead of parameterizing the systematics, the posterior was empirically marginalized over the systematics, i.e., over the hypothetical space formed by a simulated sample of calibration products.

I believe publications about handling systematic errors will enjoy prosperity in astronomy and statistics as long as complex instruments collect data. Beyond combining in quadrature or a Gaussian approximation, systematic errors can be incorporated in more sophisticated fashions, parametrically or nonparametrically. Particularly for the latter, statisticians’ knowledge and contributions are in great demand.

[ArXiv] Particle Physics
http://hea-www.harvard.edu/AstroStat/slog/2009/arxiv-particle-physics/
Fri, 20 Feb 2009, by hlee

[stat.AP:0811.1663]
Open Statistical Issues in Particle Physics by Louis Lyons

My recollection of meeting Prof. L. Lyons is that he is very kind and a good listener. I was delighted to see his introductory article about particle physics and its statistical challenges via an [arxiv:stat] email subscription.

Descriptions of the various particles of modern particle physics are briefly given (I like such brevity and conciseness while still delivering the necessities. If you want more on the physics, find the famous bestselling books like The First Three Minutes, A Brief History of Time, The Elegant Universe, or Feynman’s books, and undergraduate textbooks on modern physics and particle physics). The Large Hadron Collider (LHC hereafter; LHC-related slog postings: LHC first beam, The Banff challenge, Quote of the week, Phystat – LHC 2008) is introduced along with its statistical challenges from the data collecting/processing perspective, since it is expected to collect 10^10 events. Visit the LHC website to find out more about the LHC.

My one-line summary of the article: solving particle physics problems through hypothesis testing, or more broadly, classical statistical inference approaches. I most enjoyed reading sections 5 and 6, particularly the subsection titled Why 5σ? Here are some excerpts I would like to share with you from the article:

It is hoped that the approaches mentioned in this article will be interesting or outrageous enough to provoke some Statisticians either to collaborate with Particle Physicists, or to provide them with suggestions for improving their analyses. It is to be noted that the techniques described are simply those used by Particle Physicists; no claim is made that they are necessarily optimal (Personally, I like such openness and candidness.).

… because we really do consider that our data are representative as samples drawn according to the model we are using (decay time distributions often are exponential; the counts in repeated time intervals do follow a Poisson distribution, etc.), and hence we want to use a statistical approach that allows the data “to speak for themselves,” rather than our analysis being dominated by our assumptions and beliefs, as embodied in Bayesian priors.

Because experimental detectors are so expensive to construct, the time-scale over which they are built and operated is so long, and they have to operate under harsh radiation conditions, great care is devoted to their design and construction. This differs from the traditional statistical approach for the design of agricultural tests of different fertilisers, but instead starts with a list of physics issues which the experiment hopes to address. The idea is to design a detector which will provide answers to the physics questions, subject to the constraints imposed by the cost of the planned detectors, their physical and mechanical limitations, and perhaps also the limited available space. (My personal belief is that what separates the physical sciences from other sciences requiring statistical thinking is that uncontrolled circumstances are quite common in physics and astronomy, whereas many statistical methodologies are developed under assumptions of controllable circumstances, traceable subjects, and the ability to collect additional samples.)

…that nothing was found, it is more useful to quote an upper limit on the sought-for effect, as this could be useful in ruling out some theories.

… the nuisance parameters arise from the uncertainties in the background rate b and the acceptance ε. These uncertainties are usually quoted as σb and σε, and the question arises of what these errors mean. … they would express the width of the Bayesian posterior or of the frequentist interval obtained for the nuisance parameter. … they may involve Monte Carlo simulations, which have systematic uncertainties as well as statistical errors …

Particle physicists usually convert p into the number of standard deviations σ of a Gaussian distribution, beyond which the one-sided tail area corresponds to p. Thus, 5σ corresponds to a p-value of 3e-7. This is done simply because it provides a number that is easier to remember, and not because Gaussians are relevant for every situation.
Unfortunately, p-values are often misinterpreted as the probability of the theory being true, given the data. It sometimes helps colleagues clarify the difference between p(A|B) and p(B|A) by reminding them that the probability of being pregnant, given the fact that you are female, is considerably smaller than the probability of being female, given the fact that you are pregnant.
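
The σ ↔ p conversion quoted above is just a one-sided Gaussian tail area; a quick check, assuming scipy, reproduces the numbers.

    # One-sided Gaussian tail <-> "number of sigma" conversion quoted above.
    from scipy import stats
    print(f"p-value beyond 5 sigma (one-sided): {stats.norm.sf(5.0):.1e}")    # about 3e-7
    print(f"sigma equivalent of p = 3e-7:       {stats.norm.isf(3e-7):.2f}")  # about 5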

… the situation is much less clear for nuisance parameters, where error estimates may be less rigorous, and their distribution is often assumed to be Gaussian (or truncated Gaussian) by default. The effect of these uncertainties on very small p-values needs to be investigated case-by-case.
We also have to remember that p-values merely test the null hypothesis. A more sensitive way to look for new physics is via the likelihood ratio or the differences in χ² for the two hypotheses, that is, with and without the new effect. Thus, a very small p-value on its own is usually not enough to make a convincing case for discovery.

If we are in the asymptotic regime, and if the hypotheses are nested, and if the extra parameters of the larger hypothesis are defined under the smaller one, and in that case do not lie on the boundary of their allowed region, then the difference in χ² should itself be distributed as a χ², with the number of degrees of freedom equal to the number of extra parameters (I have seen many papers in astronomy ignoring these caveats when applying likelihood ratio tests).
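
To spell out how that prescription is used in practice, here is a minimal sketch of my own (the Δχ² value is purely illustrative), assuming scipy, for turning a Δχ² between nested models into a p-value; it is valid only when the stated regularity conditions hold.

    # Wilks-type recipe: Delta chi^2 between nested models with k extra parameters.
    from scipy import stats
    delta_chi2 = 27.0   # chi^2(without the effect) - chi^2(with the effect); illustrative value only
    k_extra = 2         # number of extra parameters in the larger hypothesis
    p_value = stats.chi2.sf(delta_chi2, df=k_extra)
    print(f"p-value for adding the new effect: {p_value:.1e}")
    # If the nesting/boundary/asymptotic conditions fail, chi^2_k is not the right reference distribution.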

The standard method loved by particle physicists (and astronomers alike) is χ². This, however, is only applicable to binned data (i.e., a one- or more-dimensional histogram). Furthermore, it loses its attractive feature that its distribution is model independent when there are not enough data, which is likely to be so in the multi-dimensional case. (High energy astrophysicists deal with low count data in multi-dimensional parameter spaces; the total number of bins is larger than the number of parameters, but to me, binning/grouping seems to be done aggressively to meet a good S/N, so that detailed information about the parameters gets lost.)
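
The loss of that model-independent distribution at low counts is easy to demonstrate; here is a toy simulation of my own (not from the paper), assuming numpy/scipy, comparing the tail of Pearson’s χ² for Poisson bins with the nominal χ² reference.

    # Toy check: Pearson's chi^2 for Poisson bins with known expectation mu,
    # compared against the nominal chi^2 reference distribution.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    nbins, nsim = 20, 20000
    cut = stats.chi2.ppf(0.95, df=nbins)        # nominal 5% rejection cut
    for mu in [1.0, 100.0]:                     # expected counts per bin
        counts = rng.poisson(mu, size=(nsim, nbins))
        chi2_stat = ((counts - mu) ** 2 / mu).sum(axis=1)
        frac = np.mean(chi2_stat > cut)
        print(f"mu per bin = {mu:6.1f}: fraction above the nominal 5% cut = {frac:.3f}")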

…, the σi are supposed to be the true accuracies of the measurements. Often, all that we have available are estimates of their values (I have also noticed that astronomers confuse the true σ and the estimated σ). Problems arise in situations where the error estimate depends on the measured value a (the parameter of interest). For example, in counting experiments with Poisson statistics, it is typical to set the error as the square root of the observed number. Then a downward fluctuation in the observation results in an overestimated weight, and the best fit a is biased downward. If instead the error is estimated as the square root of the expected number a, the combined result is biased upward – the increased error reduces S at large a. (I think astronomers are aware of this problem but have not yet taken action to rectify it. Unfortunately, not all astronomers take the problem seriously, and some blindly apply 3*sqrt(N) as a threshold for 99.7% (two-sided) or 99.9% (one-sided) coverage.)
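
The two biases described in this passage can be reproduced with a small simulation (my own sketch, assuming numpy; the true rate and sample sizes are arbitrary): fit a constant rate to repeated Poisson measurements by χ², with σ² set to the observed counts versus σ² set to the fitted (expected) value.

    # Reproduce the bias: fit a constant rate a to repeated Poisson measurements
    # by chi^2, with two different choices of the error estimate sigma_i.
    import numpy as np

    rng = np.random.default_rng(0)
    a_true, n_meas, nsim = 25.0, 10, 50000               # hypothetical rate and sample sizes
    x = rng.poisson(a_true, size=(nsim, n_meas)).astype(float)
    x = np.clip(x, 1.0, None)                            # avoid zero counts in 1/x weights

    # sigma_i^2 = observed counts  ->  chi^2 minimizer is n / sum(1/x_i)    (biased low)
    a_obs = n_meas / (1.0 / x).sum(axis=1)
    # sigma^2 = expected counts a  ->  chi^2 minimizer is sqrt(mean(x_i^2)) (biased high)
    a_exp = np.sqrt((x ** 2).mean(axis=1))

    print(f"true rate: {a_true}")
    print(f"mean fit with sigma^2 = observed: {a_obs.mean():.2f}  (biased downward)")
    print(f"mean fit with sigma^2 = expected: {a_exp.mean():.2f}  (biased upward)")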

Background estimation, particularly when the observed n is less than the expected background b, is discussed in the context of upper limits derived from both statistical streams – Bayesian and frequentist. The statistical focus of particle physicists’ concerns is on classical inference problems like hypothesis testing or estimating confidence intervals (these intervals need not be closed) under extreme physical circumstances. The author discusses various approaches, with modern touches from both statistical disciplines, to tackle how to obtain upper limits with statistically meaningful and allocatable quantification.

As described, many physicists work on the grand challenge of finding a new particle, but this challenge is put concisely in statistical terms: p-values, upper limits, null hypotheses, test statistics, and confidence intervals with peculiar nuisance parameters, or rather a lack of straightforward priors, which lead to lengthy discussions among scientists and produce various research papers. In contrast, the challenges that astronomers face are not just establishing the existence of new particles but going beyond that, or juxtaposing. Astronomers like to parameterize their observations by selecting suitable source models, from which the collected photons are the result of modification caused by their journey and the obstacles in their path. Such parameterization allows them to explain the driving sources of photon emission/absorption. It also enables them to predict other important features, relating temperature to luminosity, magnitude to metallicity, and many other rules of conversion.

Due to the different objectives (one is finding a needle that looks like hay in a haystack, and the other is defining photon-generating mechanisms, which may lead to finding a new kind of celestial object), this article may not interest astronomers. Yet, given the common ground of physics and statistics, it is a dash of enlightenment to see the various statistical methods applied to physical data analysis toward the goal of refining physics. I recall that my posts on coverage, and the references therein, might be helpful: interval estimation in exponential families and [arxiv] classical confidence interval.

I felt from some papers that some astronomers are not aware of the problems with χ² minimization, nor of the underlying assumptions of the method. This paper conveys some of the dangers of χ² with real examples from physics, which are more convincing for astronomers than statisticians’ hypothetical examples built on controlled Monte Carlo simulations.

And there are more reasons to check this paper out!

Why Gaussianity?
http://hea-www.harvard.edu/AstroStat/slog/2008/why-gaussianity/
Wed, 10 Sep 2008, by hlee

Physicists believe that the Gaussian law has been proved in mathematics while mathematicians think that it was experimentally established in physics — Henri Poincare

I couldn’t help writing down the quote from this article (subscription required).[1]

Why Gaussianity? by Kim, K. and Shevlyakov, G. (2008) IEEE Signal Processing Magazine, Vol. 25(2), pp. 102-113

It’s been a while since my post, signal processing and bootstrap, about the IEEE Signal Processing Magazine, which publishes tutorial-style papers on signal processing research and applications. Because of its tutorial style, the magazine delivers up-to-date information and applications to people in various disciplines (its citation rate is quite high among scientific fields where data are collected via digitization, except astronomy; this statement is based solely on my experience, and no proper test was carried out to check this hypothesis). This provocative title, perhaps, will draw astronomers’ attention to advances in signal processing in the future.

A historical account of the Gaussian distribution, which goes by the name of the normal distribution among statisticians, is given: de Moivre, before Laplace, found the distribution; Laplace, before Gauss, derived its properties. The paper illustrates the derivations by Gauss, Herschel (yes, the astronomer), Maxwell (no need to mention his important contribution), and Landon, along with the following properties (a quick numerical check of the first one is sketched just after the list):

  • the convolution of two Gaussian functions is another Gaussian function
  • the Fourier transform of a Gaussian function is another Gaussian function
  • the CLT
  • maximizing entropy
  • minimizing Fisher information
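
(A quick numerical check of the first property, assuming numpy/scipy; a sketch of my own, not from the paper.)

    # Numerical check of the first listed property: convolving two Gaussian
    # densities gives another Gaussian, with the variances adding.
    import numpy as np
    from scipy import stats

    s1, s2 = 1.0, 2.0
    x = np.linspace(-15.0, 15.0, 4001)
    dx = x[1] - x[0]
    conv = np.convolve(stats.norm.pdf(x, scale=s1),
                       stats.norm.pdf(x, scale=s2), mode="same") * dx
    target = stats.norm.pdf(x, scale=np.hypot(s1, s2))   # N(0, s1^2 + s2^2)
    print(f"max deviation from the predicted Gaussian: {np.max(np.abs(conv - target)):.1e}")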

You will find pros and cons about Gaussianity in the concluding remark.

  1. Wikiquote says it is misattributed. Also, I don’t know French, so my guess could be wrong in matching quotes based on French-to-English translations. Please correct me.
LHC First Beam
http://hea-www.harvard.edu/AstroStat/slog/2008/lhc-first-beam/
Wed, 10 Sep 2008, by hlee

10:00am local time, Sept. 10th, 2008
Like the first light from Fermi (GLAST), the LHC First Beam is also a big moment for particle physicists. Find out more at http://lhc-first-beam.web.cern.ch/lhc-first-beam/Welcome.html, or from your own search (I found interesting debates about a doomsday triggered by the LHC). I’ll wait until good quality data are collected.

A Confession from a former “keV” Junkie: 1. It’s a Plague.
http://hea-www.harvard.edu/AstroStat/slog/2008/a-confession-from-a-former-kev-junkie-1-its-a-plague/
Wed, 10 Sep 2008, by Jaesub

(Inspired by vlk’s “keV vs keV”)

Beside the obvious benefit of confusing the public and colleagues in other fields, the apparently chaotic use of physical units like keV and Kelvin has an addictive convenience beyond a simple matter of convention. Yes, I said “convenience”.

All roads to Rome, All quantities to Energy

In fact, mixing up the units of the physical quantities doesn’t end with energy and temperature.  If I am the one breaking this to you, then I am terribly sorry, but the plague has already spread to pretty much all the physical quantities.

Energy = Temperature = Mass = Length = Time = …

So, there you have it: Energy costs Money since Time is Money.

At the center of this pandemic you find “energy”, which seems to be the culprit linking all of them together. Although, once they are all linked, it doesn’t really matter. This appears to be a gross misuse or misunderstanding of these quantities at best. Now, what drives “normal” physicists to become “keV” junkies, if there is such a thing as a normal physicist? Well, the answer is already in vlk’s slog post about “keV vs keV”.
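
(For the record, the chain above runs through a handful of physical constants such as k_B, c, and h; here is a minimal sketch of the keV-to-Kelvin leg, assuming scipy.constants.)

    # The unit mixing above runs through a few constants; "a temperature of 1 keV"
    # really means k_B * T = 1 keV.  (scipy.constants values, SI units)
    from scipy.constants import k as k_B, eV   # Boltzmann constant [J/K], 1 eV in [J]

    T_per_keV = 1e3 * eV / k_B                 # Kelvin corresponding to k_B*T = 1 keV
    print(f"1 keV  <->  {T_per_keV:.2e} K")    # about 1.2e7 K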

Ice, Water or Vapor

Do you know what it feels like at T=200 K, T=300 K or T=400 K?

Of course, it’s a matter of being frozen to death, being comfortable under the sunshine, or being burnt to death.

Ok, then, now how about T=10,000 K, 10,000,000 K, or 10,000,000,000 K?

Hmm, hmm, interesting. Very hot, extremely hot, and hellish hot? I know I will be dead either way.

A sheer number of zeros is already annoying and begging for another unit or a shorter form, but more importantly, it seems that we, … I mean, normal human beings don’t really have a good “physical” feel for Kelvin, Celsius or Fahrenheit at these high temperatures. Why should we? After all, we use Celsius or Fahrenheit for daily life, and it’s not like we need to burn and destroy a T-1000 to save humanity every day. Even so, a couple of thousand Kelvin of heat might do the job.

Ok, ok, let me think for a moment for a change. Judging from the factor-of-1000 differences, which are a lot more than a factor of 100, there have got to be equally significant differences in the matter at these temperatures. Perhaps changing-their-physical-state kinds of differences?

Hmm, if only we had an easy way to express the state change, or a unit to convey the essence of their differences. A-ha, yes, yes, I knew it: it’s not our fault, not our ignorance or lack of knowledge. It’s the useless unit here that keeps us in the dark. The unit is good at around, say, less than 1000 C, just like T=0 C means ice to water and T=100 C means water to vapor. Beyond that, this K, C, or F is so useless.

De Nile isn’t just a river in Egypt, and it flows in our mind too, quite beautifully I suppose.

Wait a minute, there are more states than solid, liquid and gas?

A lecture note of great utility
http://hea-www.harvard.edu/AstroStat/slog/2008/a-lecture-note-of-great-utility/
Wed, 27 Aug 2008, by hlee

I didn’t realize this post had been sitting for a month, during which I almost neglected the slog. Just as there are great books about probability and information theory for statisticians and engineers, I believe there are great statistical physics books for physicists. On the other hand, relatively few exist that introduce one subject to the other kind of audience. In this regard, I thought this lecture note could be useful.

[arxiv:physics.data-an:0808.0012]
Lectures on Probability, Entropy, and Statistical Physics by Ariel Caticha
Abstract: These lectures deal with the problem of inductive inference, that is, the problem of reasoning under conditions of incomplete information. Is there a general method for handling uncertainty? Or, at least, are there rules that could in principle be followed by an ideally rational mind when discussing scientific matters? What makes one statement more plausible than another? How much more plausible? And then, when new information is acquired how do we change our minds? Or, to put it differently, are there rules for learning? Are there rules for processing information that are objective and consistent? Are they unique? And, come to think of it, what, after all, is information? It is clear that data contains or conveys information, but what does this precisely mean? Can information be conveyed in other ways? Is information physical? Can we measure amounts of information? Do we need to? Our goal is to develop the main tools for inductive inference–probability and entropy–from a thoroughly Bayesian point of view and to illustrate their use in physics with examples borrowed from the foundations of classical statistical physics.

Blackbody Radiation [Eqn]
http://hea-www.harvard.edu/AstroStat/slog/2008/eotw-blackbody/
Wed, 27 Aug 2008, by vlk

Like spherical cows, true blackbodies do not exist. Not because “black objects are dark, duh”, as I’ve heard many people mistakenly say — black here simply refers to the property of the object where no wavelength is preferentially absorbed or emitted, and all the energy input to it is converted into radiation. There are many famous astrophysical cases which are very good approximations to perfect blackbodies — the 2.73 K microwave background radiation left over from the early Universe, for instance. Even the Sun is a good example. So it is often used to model the emission from various objects.

The blackbody spectrum is
$$B_{\nu}(T) = \frac{2 h \nu^3}{c^2} \frac{1}{e^{h \nu / k_B T} - 1} ~~ {\rm [erg~s^{-1}~cm^{-2}~Hz^{-1}~sr^{-1}]} \,,$$
where ν is the frequency in [Hz], h is Planck’s constant, c is the speed of light in vacuum, and k_B is Boltzmann’s constant. The spectrum is interesting in many ways. Its shape is characterized by only one parameter, the radiation temperature T. A spectrum with a higher T is greater in intensity at all frequencies compared to one with a lower T, and the integral over all frequencies scales as σT^4, where $$\sigma \equiv \frac{2\pi^5k_B^4}{15 c^2 h^3}$$ is the Stefan-Boltzmann constant. Other than that, the normalization is detached, so to speak, from T, and differences in source luminosities are entirely attributable to differences in emission surface area.

The general shape of a blackbody spectrum is like a rising parabola at low ν (which led to much hand-wringing in the late 19th century about the Ultraviolet Catastrophe) and an exponential drop at high ν, with a well-defined peak in between. The frequency at which the spectrum peaks is dependent on the temperature, with
$$\nu_{\rm max} = 2.82 \frac{k_B T}{h} \,,$$
or, in terms of wavelength,
$$ \lambda_{\rm max} \approx \frac{2.9\cdot10^7}{T} ~~ {\rm[\AA]} \,,$$
where T is in [degK].
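
As a quick numerical companion to the formulas above, here is a sketch of my own (CGS units, approximate constant values) that evaluates B_ν(T) and checks the peak location against 2.82 k_B T / h:

    # Evaluate B_nu(T) [erg / s / cm^2 / Hz / sr] and check the peak frequency.
    import numpy as np

    h = 6.626e-27       # Planck constant [erg s]
    c = 2.998e10        # speed of light [cm/s]
    k_B = 1.381e-16     # Boltzmann constant [erg/K]

    def planck_nu(nu, T):
        """Blackbody specific intensity at frequency nu [Hz] and temperature T [K]."""
        return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

    T = 5800.0                                  # roughly the solar photosphere
    nu = np.logspace(13, 16, 20000)             # [Hz]
    nu_peak = nu[np.argmax(planck_nu(nu, T))]
    print(f"numerical peak: {nu_peak:.3e} Hz   vs   2.82 k_B T / h = {2.82 * k_B * T / h:.3e} Hz")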

The Banff Challenge [Eqn]
http://hea-www.harvard.edu/AstroStat/slog/2008/eotw-banff-challenge/
Wed, 23 Jul 2008, by vlk

With the LHC coming on line anon, it is appropriate to highlight the Banff Challenge, which was designed as a way to figure out how to place bounds on the mass of the Higgs boson. The equations that were to be solved are quite general, and are in fact the first attempt that I know of where calibration data are directly and explicitly included in the analysis.

The observables are counts N, Y, and Z, with

N ~ Pois(ε λS + λB) ,
Y ~ Pois(ρ λB) ,
Z ~ Pois(ε υ) ,

where λS is the parameter of interest (in this case, the mass of the Higgs boson, but could be the intensity of a source), λB is the parameter that describes the background, ε is the efficiency, or the effective area, of the detector, and υ is a calibrator source with a known intensity.

The challenge was (is) to infer the maximum likelihood estimate of and the bounds on λS, given the observed data, {N, Y, Z}. In other words, to compute

p(λS|N,Y,Z) .

It may look like an easy problem, but it isn’t!
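
To make the setup concrete, here is a toy sketch of my own, not the official challenge data or solution, assuming numpy/scipy, hypothetical observed counts, and known ρ and υ, that evaluates the joint Poisson likelihood on a grid and reads off a crude maximum likelihood estimate of λS:

    # Toy version of the setup above: joint Poisson likelihood for {N, Y, Z}
    # and a crude grid maximum-likelihood estimate of lambda_S.
    import numpy as np
    from scipy import stats

    N_obs, Y_obs, Z_obs = 25, 40, 60      # hypothetical observed counts
    rho, upsilon = 10.0, 100.0            # assumed known scaling and calibrator intensity

    lam_s = np.linspace(0.01, 80.0, 200)  # parameter of interest
    lam_b = np.linspace(0.01, 15.0, 80)   # background nuisance parameter
    eps = np.linspace(0.05, 1.5, 80)      # efficiency nuisance parameter
    LS, LB, EP = np.meshgrid(lam_s, lam_b, eps, indexing="ij")

    loglike = (stats.poisson.logpmf(N_obs, EP * LS + LB)
               + stats.poisson.logpmf(Y_obs, rho * LB)
               + stats.poisson.logpmf(Z_obs, EP * upsilon))

    i, j, k = np.unravel_index(np.argmax(loglike), loglike.shape)
    print(f"grid MLE: lambda_S ~ {lam_s[i]:.1f}, lambda_B ~ {lam_b[j]:.2f}, eps ~ {eps[k]:.2f}")

The actual challenge, of course, asks for bounds on λS as well, which would require profiling or marginalizing over λB and ε rather than just locating the grid maximum.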
