Bayes' Theorem and Bayesian Inference

Error bars live with the model, not with the data!

Example: Poisson data with rate parameter λ:

P(n | λ) = λ^n e^(−λ) / n!,   n = 0, 1, 2, …

⟨Xᵢ⟩ = λ,   σ²(Xᵢ) = λ

[Figure: simulated Poisson counts Xᵢ plotted against i, scattered about the rate λ.]
How to attach error bars to the data points?
The wrong way: assign σ(Xᵢ) = √Xᵢ. Then 1/σ²(Xᵢ) = ∞ when Xᵢ = 0, and the weighted mean X̂ = [ Σᵢ Xᵢ/σ²(Xᵢ) ] / [ Σᵢ 1/σ²(Xᵢ) ] is dragged downward.

Assigning σ(Xᵢ) = √Xᵢ gives a downward bias: points that are lower than average by chance are given smaller error bars, and hence more weight than they deserve.

The right way: take the error bars from the model, σ(Xᵢ) = √λ, evaluated at the fitted value of λ.
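As a quick check of this point, here is a minimal simulation sketch (the rate, sample sizes, and seed are illustrative choices, not from the slides): the "wrong way" weighted mean with σ(Xᵢ) = √Xᵢ comes out systematically below the true rate, while the plain mean, which is what model-based Poisson error bars lead to, does not.

```python
# Sketch: downward bias from assigning sigma(Xi) = sqrt(Xi) to Poisson counts.
import numpy as np

rng = np.random.default_rng(42)
lam_true = 4.0        # illustrative true rate
n_points = 20
n_trials = 2000

wrong_means, plain_means = [], []
for _ in range(n_trials):
    x = rng.poisson(lam_true, n_points).astype(float)
    nz = x[x > 0]                 # the "wrong way" cannot even handle Xi = 0
    w = 1.0 / nz                  # weights 1/sigma^2 with sigma = sqrt(Xi)
    wrong_means.append(np.sum(w * nz) / np.sum(w))
    plain_means.append(x.mean())  # model-based error bars lead to the plain mean

print(f"true rate lambda     : {lam_true}")
print(f"'wrong way' estimate : {np.mean(wrong_means):.2f}  (biased low)")
print(f"plain mean           : {np.mean(plain_means):.2f}")
```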

[Figure: a joint probability density P(X, Y) with its marginal P(X) = ∫ P(X, Y) dY; a slice through P(X, Y) at a fixed value of Y, suitably normalised, is the conditional density P(X | Y).]

Example: Y = 3X with X drawn from a Gaussian. What is P(Y | X = 2)?

[Figure: the line Y = 3X plotted over the Gaussian P(X); once X = 2 is fixed, Y is fixed too, so P(Y | X = 2) is concentrated at Y = 6.]

Conditional Probabilities

[Figure: joint density P(X, Y) with its two marginals P(X) and P(Y).]

Marginal distributions:

P(X) = ∫ P(X, Y) dY

P(Y) = ∫ P(X, Y) dX

Conditional distribution:

P(X | Y) ≡ P(X, Y) / P(Y)

Bayes' Theorem:

P(X | Y) = P(Y | X) P(X) / P(Y)

Applied to data D and models M, the normalising factor is the evidence:

P(D) = ∫ P(D | M) P(M) dM

Comparing two models M₁ and M₂, the posterior odds are the likelihood ratio times the prior odds:

P(M₁ | D) / P(M₂ | D) = [ P(D | M₁) / P(D | M₂) ] × [ P(M₁) / P(M₂) ] ≈ exp( −Δχ²/2 ) × [ P(M₁) / P(M₂) ]

where the last step applies to Gaussian likelihoods.
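A small numeric sketch of this comparison (the two models, the datum, and the equal priors are made-up illustrations): for a single Gaussian datum, the posterior odds of two models with equal prior probability reduce to the likelihood ratio, which equals exp(−Δχ²/2).

```python
# Sketch: posterior odds for two fixed-mean Gaussian models of one datum.
import math

def gaussian_likelihood(x, mu, sigma):
    """P(x | mu, sigma) for a single Gaussian datum."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

x = 0.4                                           # the observed datum (illustrative)
L1 = gaussian_likelihood(x, mu=0.0, sigma=1.0)    # P(D | M1): model mean 0
L2 = gaussian_likelihood(x, mu=1.0, sigma=1.0)    # P(D | M2): model mean 1
prior_odds = 1.0                                  # P(M1) / P(M2)

posterior_odds = (L1 / L2) * prior_odds
delta_chi2 = ((x - 0.0) ** 2 - (x - 1.0) ** 2) / 1.0 ** 2
print(f"likelihood ratio       : {L1 / L2:.3f}")
print(f"exp(-delta_chi2 / 2)   : {math.exp(-0.5 * delta_chi2):.3f}")  # same number
print(f"posterior odds M1 : M2 : {posterior_odds:.3f}")
```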


• Our knowledge of α before measuring X is quantified by the prior p.d.f. P(α).

[Figure: the likelihood P(X | α) as a function of the parameter α.]

Two common choices of prior: uniform in α, or uniform in log α, which corresponds to P(α) ∝ 1/α.

• Different priors P(α) lead to different inferences P( α | X )
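The sketch below illustrates this prior dependence numerically (the grid, the single measurement, and the two priors are arbitrary choices for illustration): the same likelihood combined with a uniform prior and with a P(α) ∝ 1/α prior gives visibly different posterior means and modes.

```python
# Sketch: same likelihood, two different priors, two different posteriors.
import numpy as np

alpha = np.linspace(0.1, 20.0, 2000)           # grid over the parameter alpha
d_alpha = alpha[1] - alpha[0]
X, sigma = 4.0, 2.0                             # one measurement (illustrative)

likelihood = np.exp(-0.5 * ((X - alpha) / sigma) ** 2)

for name, prior in [("uniform in alpha", np.ones_like(alpha)),
                    ("P(alpha) ~ 1/alpha", 1.0 / alpha)]:
    post = likelihood * prior
    post /= post.sum() * d_alpha                # normalise P(alpha | X)
    mean = (alpha * post).sum() * d_alpha
    mode = alpha[np.argmax(post)]
    print(f"{name:20s}: posterior mean = {mean:.2f}, mode = {mode:.2f}")
```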

[Figure: the χ² surface of a model over its parameter space.]

Gaussian Datum with Uniform Prior

Data: X ± σ.   Model parameter: µ.

Likelihood function:

P(X | µ) = ( 1 / √(2π) σ ) exp[ −(X − µ)² / 2σ² ]

Uniform prior: P(µ) = constant.

Posterior:

P(µ | X) = P(X | µ) P(µ) / P(X),   with   P(X) = ∫ P(X | µ) P(µ) dµ

[Figure: the likelihood P(X | µ) peaks at µ = X; with the uniform prior, the posterior P(µ | X) is the same Gaussian, centred on X with width σ.]
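A minimal grid-based sketch of this case (the values of X and σ are illustrative): multiplying the likelihood by a flat prior and normalising reproduces a Gaussian posterior centred on X with width σ.

```python
# Sketch: posterior for mu from one Gaussian datum with a uniform prior.
import numpy as np

X, sigma = 2.0, 0.5                          # the datum (illustrative values)
mu = np.linspace(-2.0, 6.0, 4001)            # grid over the model parameter
d_mu = mu[1] - mu[0]

likelihood = np.exp(-0.5 * ((X - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
prior = np.ones_like(mu)                     # P(mu) = constant

posterior = likelihood * prior
posterior /= posterior.sum() * d_mu          # divide by P(X) = integral of L * prior

mean = (mu * posterior).sum() * d_mu
var = ((mu - mean) ** 2 * posterior).sum() * d_mu
print(f"posterior mean = {mean:.3f} (= X), std = {np.sqrt(var):.3f} (= sigma)")
```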

Gaussian Datum with Gaussian Prior

Likelihood:

L(µ) = P(X | µ) = ( 1 / √(2π) σ ) exp[ −(X − µ)² / 2σ² ]

Gaussian prior:

P(µ) = ( 1 / √(2π) σ₀ ) exp[ −(µ − µ₀)² / 2σ₀² ]

Posterior:

P(µ | X) ∝ P(X | µ) P(µ) ∝ exp[ −(µ − µ_ML)² / 2σ²(µ_ML) ]

with

µ_ML = ( X/σ² + µ₀/σ₀² ) / ( 1/σ² + 1/σ₀² ),   1/σ²(µ_ML) = 1/σ² + 1/σ₀²

The Gaussian prior acts like one more data point.
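The following sketch checks the quoted combination formulae numerically (the datum and the prior values are illustrative): the posterior computed on a grid has the same mean and width as the analytic µ_ML and σ(µ_ML).

```python
# Sketch: Gaussian datum x Gaussian prior, grid result vs analytic formulae.
import numpy as np

X, sigma = 3.0, 1.0            # the datum          (illustrative)
mu0, sigma0 = 0.0, 2.0         # the Gaussian prior (illustrative)

mu = np.linspace(-6.0, 9.0, 6001)
d_mu = mu[1] - mu[0]

posterior = (np.exp(-0.5 * ((X - mu) / sigma) ** 2)        # likelihood
             * np.exp(-0.5 * ((mu - mu0) / sigma0) ** 2))  # prior
posterior /= posterior.sum() * d_mu

grid_mean = (mu * posterior).sum() * d_mu
grid_std = np.sqrt(((mu - grid_mean) ** 2 * posterior).sum() * d_mu)

mu_ml = (X / sigma**2 + mu0 / sigma0**2) / (1 / sigma**2 + 1 / sigma0**2)
sig_ml = np.sqrt(1.0 / (1 / sigma**2 + 1 / sigma0**2))
print(f"grid:     mean = {grid_mean:.3f}, std = {grid_std:.3f}")
print(f"analytic: mean = {mu_ml:.3f}, std = {sig_ml:.3f}")
```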

Gaussian Data with Gaussian Prior

Likelihood:

P(X | µ) = ∏ᵢ P(Xᵢ | µ) = (2π)^(−N/2) [ ∏ᵢ 1/σᵢ ] exp[ −Σᵢ (Xᵢ − µ)² / 2σᵢ² ]

Gaussian prior:

P(µ) = ( 1 / √(2π) σ₀ ) exp[ −(µ − µ₀)² / 2σ₀² ]

Posterior:

P(µ | X) ∝ exp[ −(µ − µ_ML)² / 2σ²(µ_ML) ]

with

µ_ML = ( Σᵢ Xᵢ/σᵢ² + µ₀/σ₀² ) / ( Σᵢ 1/σᵢ² + 1/σ₀² ),   1/σ²(µ_ML) = Σᵢ 1/σᵢ² + 1/σ₀²

Again, the Gaussian prior acts like one more data point.
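One way to see the "one more data point" statement is the sketch below (the data and prior values are illustrative): appending (µ₀, σ₀) to the dataset as an extra measurement and taking the usual inverse-variance weighted mean reproduces the posterior mean and width.

```python
# Sketch: the Gaussian prior enters exactly like one extra data point.
import numpy as np

X = np.array([9.8, 10.4, 10.1, 9.6])        # data            (illustrative)
sig = np.array([0.3, 0.5, 0.4, 0.3])        # their error bars
mu0, sigma0 = 10.0, 1.0                      # Gaussian prior  (illustrative)

# Append the prior as an extra measurement and take the weighted mean.
vals = np.append(X, mu0)
errs = np.append(sig, sigma0)
w = 1.0 / errs**2
mu_ml = np.sum(w * vals) / np.sum(w)
sig_ml = 1.0 / np.sqrt(np.sum(w))
print(f"posterior: mu = {mu_ml:.3f} +/- {sig_ml:.3f}")
```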

Max Likelihood for Gaussian Data

Likelihood of parameters α for a given dataset:

L(α) = ∏_{i=1…N} P(Xᵢ | α)

For Gaussian error distributions:

P(Xᵢ | α) = ( 1 / √(2π) σᵢ ) exp[ −(Xᵢ − µᵢ(α))² / 2σᵢ² ]

so that

L(α) = (2π)^(−N/2) [ ∏ᵢ 1/σᵢ ] exp( −χ²/2 ),   χ² = Σᵢ (Xᵢ − µᵢ(α))² / σᵢ²

−2 ln L = χ² + 2 Σᵢ ln σᵢ + N ln 2π

To maximise L(α), minimise χ² + 2 Σᵢ ln σᵢ. The variance of the estimate follows from the curvature of the log-likelihood near its peak, Var[α_ML] ≈ [ −∂² ln L / ∂α² ]⁻¹.
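As a concrete illustration (the data values and error bars are made up), the sketch below maximises L(α) for a constant model µᵢ(α) = α with fixed σᵢ, in which case minimising −2 ln L is the same as minimising χ², and compares the result with the analytic inverse-variance weighted mean and its variance.

```python
# Sketch: ML fit of a constant to Gaussian data with known error bars.
import numpy as np

X = np.array([10.2, 9.7, 10.5, 9.9, 10.1])   # data         (illustrative)
sig = np.array([0.4, 0.3, 0.5, 0.3, 0.4])    # known error bars

def neg2lnL(alpha):
    chi2 = np.sum(((X - alpha) / sig) ** 2)
    return chi2 + 2 * np.sum(np.log(sig)) + len(X) * np.log(2 * np.pi)

# Scan alpha and take the minimum of -2 ln L (equivalently, of chi^2).
alphas = np.linspace(9.0, 11.0, 20001)
vals = np.array([neg2lnL(a) for a in alphas])
alpha_ml = alphas[np.argmin(vals)]

# Analytic answer: inverse-variance weighted mean, Var = 1 / sum(1/sigma_i^2).
w = 1.0 / sig**2
print(f"scan  : alpha_ML = {alpha_ml:.3f}")
print(f"exact : alpha_ML = {np.sum(w * X) / np.sum(w):.3f} "
      f"+/- {1.0 / np.sqrt(np.sum(w)):.3f}")
```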

Need ML when Parameters alter Error Bars

• Data points Xᵢ with no individual error bars. Model them as Xᵢ = A ± σ, with both the level A and the scatter σ as free parameters:

χ² = Σ_{i=1…N} (Xᵢ − A)² / σ²

[Figure: data points Xᵢ versus i, scattered about the fitted level A with the fitted width ±σ_ML indicated.]

• To find σ, minimising χ² fails! Increasing σ always decreases χ², so χ² alone drives σ → ∞; the 2N ln σ term in the full log-likelihood is what provides a minimum:

−2 ln L = χ² + 2 N ln σ

[Figure: χ², 2N ln σ, and their sum −2 ln L as functions of σ; only the sum has a well-defined minimum, at σ = σ_ML.]
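The sketch below reproduces this failure numerically (the simulated dataset is illustrative): scanning σ, the χ² term alone keeps decreasing, while −2 ln L = χ² + 2N ln σ has a clear minimum at the usual ML estimate σ²_ML = (1/N) Σᵢ (Xᵢ − A)².

```python
# Sketch: chi^2 alone cannot determine sigma, but -2 ln L can.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(100.0, 10.0, size=50)         # simulated data (A = 100, sigma = 10)
N = len(X)

A_ml = X.mean()                              # ML estimate of the level
ss = np.sum((X - A_ml) ** 2)

sigmas = np.linspace(1.0, 40.0, 4000)
chi2 = ss / sigmas**2
neg2lnL = chi2 + 2 * N * np.log(sigmas)

print(f"argmin of chi^2 over the scan: sigma = {sigmas[np.argmin(chi2)]:.1f} "
      "(the largest value scanned: chi^2 alone pushes sigma to infinity)")
print(f"argmin of -2 ln L            : sigma_ML = {sigmas[np.argmin(neg2lnL)]:.2f}")
print(f"analytic sigma_ML            : {np.sqrt(ss / N):.2f}")
```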

Poisson data Xᵢ with rate parameter λ:

P(Xᵢ | λ) = λ^(Xᵢ) e^(−λ) / Xᵢ!

Likelihood for N Poisson data points:

L(λ) = ∏_{i=1…N} P(Xᵢ | λ)

ln L = Σᵢ ( −λ + Xᵢ ln λ − ln Xᵢ! )

Maximum likelihood estimator of λ: setting ∂ ln L / ∂λ = 0 gives

λ_ML = (1/N) Σᵢ Xᵢ

[Figure: a simulated Poisson dataset with Poisson distributions for λ = 1 and λ = 7 overplotted, and the likelihood L(λ) peaking at λ_ML = 4.5.]
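A short sketch with simulated counts (the true rate, sample size, and seed are arbitrary) confirms that a direct scan of ln L(λ) peaks at the sample mean, i.e. λ_ML = (1/N) Σᵢ Xᵢ.

```python
# Sketch: ML estimate of a Poisson rate from a scan of ln L(lambda).
import numpy as np
from math import lgamma                 # ln(X!) = lgamma(X + 1)

rng = np.random.default_rng(0)
X = rng.poisson(4.5, size=20)           # simulated counts (true rate 4.5)

def lnL(lam):
    """ln L(lambda) = sum_i ( -lambda + X_i ln(lambda) - ln(X_i!) )."""
    return sum(-lam + xi * np.log(lam) - lgamma(xi + 1.0) for xi in X)

lams = np.linspace(0.5, 12.0, 5000)
lam_scan = lams[np.argmax([lnL(l) for l in lams])]

print(f"lambda_ML from the ln L scan : {lam_scan:.3f}")
print(f"sample mean (1/N) sum(X_i)   : {X.mean():.3f}")
```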
