
Bayesian probability theory is used to analyze random data drawn from a Gaussian probability distribution with unknown mean and standard deviation. The analysis begins with the determination of the parameters of a single sample and then proceeds to the joint analysis of a pair of samples.

The Bayesian evidence is used to characterise the probabilities of hypotheses about the relationships between the means and standard deviations of the two samples. Of particular interest is the probability that the two samples have different means.
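As a minimal sketch of this kind of comparison (my own construction, simpler than the essay: the standard deviation is assumed known and equal to one, the priors on the means are flat on [-10, 10], and the data are made up), the Bayes factor for "different means" versus "common mean" can be computed by brute-force integration of the evidence:

```python
import math

# Hedged sketch: two Gaussian samples with KNOWN sigma = 1 (the essay treats
# sigma as unknown); flat prior on each mean over [-10, 10]; synthetic data.
def log_like(data, mu, sigma=1.0):
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)) for x in data)

def evidence(datasets, lo=-10.0, hi=10.0, n=2001):
    # evidence for a single common mean shared by all datasets in the list
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        mu = lo + i * h
        total += math.exp(sum(log_like(d, mu) for d in datasets)) * h / (hi - lo)
    return total

d1 = [0.1, -0.2, 0.05, 0.3]
d2 = [5.1, 4.8, 5.3, 4.9]
# H1: separate means (product of independent evidences); H0: shared mean
bayes_factor = evidence([d1]) * evidence([d2]) / evidence([d1, d2])
```

A Bayes factor much greater than one favours the hypothesis that the means differ; the widely separated synthetic samples above produce a very large factor, despite the Occam penalty the wide prior imposes on the extra parameter.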

This document illustrates how various types of uncertainty affect the forecasting of sonar performance in naval applications. The first type of uncertainty arises from incomplete knowledge of key target kinematic parameters such as range, bearing, depth, heading, and speed.

In general, key sonar performance metrics, such as the probability of detection P(D), depend on each of these kinematic parameters. A second type of uncertainty is caused by the actual oceanographic environment in which the sonar operates. At a conceptual level, a sonar makes a mark on a gram or display when the voltage in a detector circuit exceeds a threshold.

The probabilities with which these marks occur are determined by the statistics of the noise and signal that the sonar actually experiences. The statistical distributions of the signal and noise fields at the sonar receiver are strongly influenced by a nondeterministic component of ocean sound transmission. Numerous examples are presented.
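A toy version of this thresholding picture (my illustration, not the essay's model: a Gaussian detector statistic with unit variance, shifted upward when a signal is present) shows how the threshold trades false alarms against detections:

```python
import math

# Hedged toy model: detector statistic ~ N(0,1) on noise alone and
# N(d, 1) when a signal with deflection d is present (assumed values).
def gauss_tail(x):
    # P(Z > x) for a standard normal Z
    return 0.5 * math.erfc(x / math.sqrt(2.0))

threshold = 3.0               # set high to keep false alarms rare
d = 4.0                       # assumed signal deflection in noise-sigma units
p_false_alarm = gauss_tail(threshold)        # mark on noise alone
p_detect      = gauss_tail(threshold - d)    # mark when signal is present
```

Raising the threshold suppresses false alarms exponentially but also erodes detections unless the signal deflection is large enough to carry the statistic over it.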

The bending of light by gravitational effects as predicted by general relativity is compared to a Newtonian theory. The mechanism of comparison is Bayesian probability theory. Comparisons are presented with and without the effects of measurement error.

The essay begins with a brief overview of Bayes theorem and the associated evidence. Light bending is motivated by dimensional analysis. Finally, evidence computations are presented using a late 20th century data set.
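The headline numbers being compared are standard: general relativity predicts exactly twice the Newtonian deflection for a light ray grazing the Sun. A quick check with textbook constants (my sketch, not the essay's data set):

```python
import math

# Standard SI constants; deflection of a light ray grazing the Sun.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # solar mass, kg
c = 2.998e8          # speed of light, m/s
R = 6.957e8          # solar radius (impact parameter), m

newtonian = 2 * G * M / (c ** 2 * R)   # rad, Newtonian corpuscular value
einstein  = 4 * G * M / (c ** 2 * R)   # rad, general relativity: twice Newton
einstein_arcsec = einstein * (180 / math.pi) * 3600
```

The general-relativistic value comes out near the familiar 1.75 arcseconds, so the two hypotheses are separated by roughly 0.87 arcseconds and the measurement error determines how decisively the evidence can distinguish them.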

The method of least squares refinement with error calculation for finding the best fit to a nonlinear curve in the presence of unknown measurement error is presented. The technique is based on approximating the relationship between the observed data and the model as linear and successively improving the approximation. The results are compared to a Bayesian approach with numerical computations performed by the Metropolis Markov Chain Monte Carlo algorithm. In the numerical example considered here, agreement between the two approaches is found to be good.
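A minimal Metropolis sketch for a nonlinear fit (everything here is hypothetical: an exponential decay model y = a·e^(-bx), synthetic data, and a known noise level — not the essay's example):

```python
import math, random

random.seed(1)
# Synthetic data from y = a*exp(-b*x) plus Gaussian noise (assumed model).
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
true_a, true_b, sigma = 2.0, 1.0, 0.05
ys = [true_a * math.exp(-true_b * x) + random.gauss(0, sigma) for x in xs]

def log_post(a, b):
    if a <= 0 or b <= 0:          # flat prior on positive a, b
        return -math.inf
    return sum(-0.5 * ((y - a * math.exp(-b * x)) / sigma) ** 2
               for x, y in zip(xs, ys))

# Metropolis random walk over (a, b)
a, b = 1.0, 0.5
lp = log_post(a, b)
samples = []
for step in range(20000):
    a2, b2 = a + random.gauss(0, 0.05), b + random.gauss(0, 0.05)
    lp2 = log_post(a2, b2)
    if math.log(random.random()) < lp2 - lp:     # accept/reject
        a, b, lp = a2, b2, lp2
    if step > 5000:                              # discard burn-in
        samples.append((a, b))

a_mean = sum(s[0] for s in samples) / len(samples)
b_mean = sum(s[1] for s in samples) / len(samples)
```

The posterior means recover the generating parameters to within the posterior width, which is the sense in which the MCMC and linearized least-squares answers can be compared.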

The problem of fitting a straight line y = ax + b to measured data when there are errors in the measurements of both x and y is considered. Additionally, the effects of intrinsic dispersion on the fit are accounted for. Astrophysical data describing the relationship between the log of a galaxy's velocity dispersion and the log of the mass of the black hole located at the center of the galaxy are used to illustrate the technique. The unknown parameters are (a, b, σ), where σ is the intrinsic dispersion associated with the black hole mass. The likelihood of the data in the presence of all three types of errors is derived using Bayes theorem and the technique of marginalization. Marginal posterior probability distributions are found for each parameter using brute force computation.
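The likelihood structure can be sketched as follows (made-up data, not the galaxy sample; the key assumption is the standard one that marginalizing the unknown true x out of a Gaussian model gives each point an effective variance a²σ_x² + σ_y² + σ²):

```python
import math

# Hedged sketch with made-up data (not the essay's astrophysical sample).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
sx, sy = 0.1, 0.2                      # measurement errors in x and y

def log_like(a, b, s):
    # effective per-point variance after marginalizing the true x values
    var = a * a * sx * sx + sy * sy + s * s
    return sum(-0.5 * (y - a * x - b) ** 2 / var
               - 0.5 * math.log(2 * math.pi * var) for x, y in zip(xs, ys))

# brute-force marginal posterior for the slope a (flat priors on a, b, s)
a_grid = [1.0 + 0.02 * i for i in range(100)]
post = []
for a in a_grid:
    tot = 0.0
    for bi in range(-20, 21):          # b on [-1, 1]
        for si in range(1, 21):        # s on (0, 1]
            tot += math.exp(log_like(a, bi * 0.05, si * 0.05))
    post.append(tot)
a_map = a_grid[post.index(max(post))]  # mode of the marginal for the slope
```

Summing the posterior over b and s at each value of a is exactly the brute-force marginalization the abstract refers to, just on coarse grids.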

Two examples of the refinement of a prior probability distribution based upon the requirement of minimum information and testable constraints are presented. Information here measures the probabilistic distance between the updated distribution and the prior. Constraints are additional background information not incorporated into the initial choice of the prior; they are imposed by the technique of Lagrange multipliers.

An example of a testable constraint is that the updated prior have a prescribed mean, or a prescribed mean and variance. Information as defined here is just the negative of entropy, so the approaches of minimum information and maximum entropy are equivalent.

If the testable constraint takes the form of imposing moments on the prior, then a fat-tailed prior is turned into a thin-tailed update. This may or may not be desirable. Minimizing information is a way to refine a prior; it is not a replacement for Bayes theorem.

Bayesian probability theory is the logically consistent calculus for inference in the presence of uncertainty. The Bayesian approach requires us to express our knowledge of unknown parameters in a model in the form of a prior probability distribution.

We must also express the relationship between the measurements we make and the parameters in the form of a likelihood function. This likelihood function is a probability distribution with respect to the data but not the parameters. In simple cases involving independent measurements it is the product of the sampling functions.

Bayes theorem tells us that the posterior probability distribution is proportional to the product of the prior and the likelihood. The constant of proportionality is called the evidence and it is a multidimensional integral. Here we present examples in which the number of dimensions is one or two and all necessary computations can either be done analytically or by brute force numerical integration.
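In one dimension the whole machinery fits in a few lines. A hedged sketch (assumed data, a Gaussian likelihood with known unit σ, and a flat prior on the mean over [-10, 10]):

```python
import math

# Hedged 1-D example: posterior ∝ prior × likelihood; the evidence is the
# normalizing integral, computed here by brute-force trapezoidal integration.
data = [1.2, 0.8, 1.1, 0.9]                  # assumed Gaussian sample, sigma = 1

def prior(mu):
    return 1.0 / 20.0 if -10.0 <= mu <= 10.0 else 0.0   # flat on [-10, 10]

def like(mu):
    return math.exp(sum(-0.5 * (x - mu) ** 2 for x in data)) \
           / (2 * math.pi) ** (len(data) / 2)

n, lo, hi = 4001, -10.0, 10.0
h = (hi - lo) / (n - 1)
vals = [prior(lo + i * h) * like(lo + i * h) for i in range(n)]
evidence = h * (sum(vals) - 0.5 * vals[0] - 0.5 * vals[-1])   # trapezoid rule
posterior_mode = lo + vals.index(max(vals)) * h
```

Dividing each grid value of prior × likelihood by the evidence yields the normalized posterior; with a flat prior its mode sits at the sample mean.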

Algorithms for computing the Signal-to-Noise Ratio (SNR) achievable by idealized frequency modulated continuous wave radars optimized for atmospheric sounding are presented. Two cases are considered. In the first, the target has reflectivity characterized by radar cross section σ, and echoes are proportional to σ r^{-4}, where r is the slant range to the target. In the second, the targets are distributed in volume with reflectivity η per unit length. In this case echoes are proportional to η r^{-2}.

In either case target detectability is limited by background noise characterized by the thermal noise spectral density T_{system} k_{Boltzmann} where T_{system} is the radar system noise temperature and k_{Boltzmann} is Boltzmann’s constant. Atmospheric absorption also limits detection range.
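The two scaling laws can be sketched in a few lines (every constant below is a placeholder chosen for illustration, not a value from the essay's algorithms):

```python
# Hedged sketch of the two spreading laws; all constants are assumed.
k_B = 1.380649e-23       # J/K, Boltzmann's constant
T_system = 500.0         # K, assumed radar system noise temperature
P, B = 1.0, 1.0e6        # assumed transmit power (W) and bandwidth (Hz)
alpha_dB_per_km = 0.01   # assumed one-way atmospheric absorption

def absorption(r):
    # two-way atmospheric absorption expressed as a power ratio
    return 10.0 ** (-2 * alpha_dB_per_km * (r / 1000.0) / 10.0)

def snr_point(sigma_rcs, r):
    # point target: echo power falls as r^-4 against thermal noise k_B*T*B
    return P * sigma_rcs * absorption(r) / (r ** 4 * k_B * T_system * B)

def snr_volume(eta, r):
    # range-distributed target: r^-2 spreading
    return P * eta * absorption(r) / (r ** 2 * k_B * T_system * B)

ratio_point  = snr_point(1.0, 1.0e3) / snr_point(1.0, 2.0e3)    # near 16
ratio_volume = snr_volume(1.0, 1.0e3) / snr_volume(1.0, 2.0e3)  # near 4
```

Doubling the range costs roughly 12 dB of SNR for the point target but only about 6 dB for the distributed target, with the small remainder in each case due to the extra absorption path.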

Radar echoes measured on a signal-to-noise ratio scale that are greater than a threshold are shown to follow a Pareto probability distribution. The decay parameter of this distribution is directly related to the spreading loss mechanism coupling the radar transmitter to the target.

The problem is motivated as an example of a selection effect in Bayesian probability theory. Numerical examples are presented using synthetic data and data from the WiPPR radar system.
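One simple way to produce this effect with synthetic data (my sketch, not the WiPPR analysis): targets spread uniformly in range with an r^{-n} spreading law give threshold exceedances with a Pareto tail of index 1/n, i.e. 1/4 for the point-target r^{-4} case.

```python
import math, random

random.seed(0)
# Hedged synthetic example: ranges uniform on [0.01, 1000]; SNR = r^-4.
n_exp = 4
ranges = [random.uniform(0.01, 1000.0) for _ in range(200000)]
snrs = [r ** (-n_exp) for r in ranges]

T = 100.0 ** (-n_exp)       # threshold: exceeded only by the nearest targets
exceedances = [s for s in snrs if s > T]
# maximum-likelihood (Hill) estimate of the Pareto tail index
tail_index = len(exceedances) / sum(math.log(s / T) for s in exceedances)
# expect tail_index near 1/n_exp = 0.25
```

The selection effect in the Bayesian sense is the threshold itself: only exceedances are observed, and their conditional distribution P(S > s | S > T) = (s/T)^(-1/n) is Pareto with the decay parameter set by the spreading exponent.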

LogLinear Group takes your ideas and needs, turning them into results for the real world.

- Inception to representative production
- Custom algorithm development
- Rapid prototyping
- Thorough field testing
- Advanced concept demonstrations

© Copyright 2012 – 2024 LOGLINEAR GROUP, LLC All rights reserved.
