
Machine Learning Theory (CS 6783)

Lecture 7 : Rademacher Complexity

Recall the symmetrization bound for the excess risk of empirical risk minimization:
$$\mathbb{E}_S\big[L_D(\hat{y}_{\mathrm{erm}})\big] - \inf_{f\in\mathcal{F}} L_D(f) \;\le\; \frac{2}{n}\,\mathbb{E}_S\mathbb{E}_\epsilon\Big[\sup_{f\in\mathcal{F}} \sum_{t=1}^{n} \epsilon_t\, \ell(f(x_t), y_t)\Big]$$

This motivates defining the empirical Rademacher complexity of a class $G$ on a sample $S = \{z_1, \ldots, z_n\}$ as
$$\hat{R}_S(G) := \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in G} \sum_{t=1}^{n} \epsilon_t\, g(z_t)\Big]$$
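For a finite class, the expectation over signs in this definition can be approximated by Monte Carlo. A minimal sketch (the helper name `empirical_rademacher` and the toy class values are hypothetical, not from the notes):

```python
import numpy as np

def empirical_rademacher(G_values, n_draws=20000, seed=0):
    """Monte Carlo estimate of R_hat_S(G) = (1/n) E_eps[ sup_g sum_t eps_t g(z_t) ].

    G_values: array of shape (|G|, n) with G_values[i, t] = g_i(z_t).
    """
    rng = np.random.default_rng(seed)
    num_g, n = G_values.shape
    eps = rng.choice([-1.0, 1.0], size=(n_draws, n))  # Rademacher sign draws
    # For each sign vector, take the supremum of the correlation over the class,
    # then average over draws.
    sups = (eps @ G_values.T).max(axis=1)
    return sups.mean() / n

# Hypothetical toy class: 3 functions evaluated on a sample of size n = 5
G = np.array([
    [ 1.0,  1.0,  1.0,  1.0,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
    [ 0.5, -0.5,  0.5, -0.5,  0.5],
])
print(empirical_rademacher(G))
```

The estimate should sit below the Massart bound for this class (Proposition 1 (4) below), $\max_g \|g(S)\|_2 \sqrt{2\log 3}/5 \approx 0.66$.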

Proposition 1. For any sample $S = \{z_1, \ldots, z_n\}$ and any classes $G, H$ of functions mapping instances in $\mathcal{Z}$ to reals:

1. If $G \subseteq H$ then $\hat{R}_S(G) \le \hat{R}_S(H)$.
2. $\hat{R}_S(G + H) = \hat{R}_S(G) + \hat{R}_S(H)$, where $G + H := \{z \mapsto g(z) + h(z) : g \in G,\ h \in H\}$.
3. $\hat{R}_S(\mathrm{cvx}(G)) = \hat{R}_S(G)$, where $\mathrm{cvx}(G) := \{z \mapsto \mathbb{E}_{g\sim\pi}[g(z)] : \pi \in \Delta(G)\}$.
4. (Massart's finite lemma) If $|G| < \infty$ then $\hat{R}_S(G) \le \max_{g\in G}\sqrt{\textstyle\sum_{t=1}^n g(z_t)^2}\cdot \dfrac{\sqrt{2\log|G|}}{n}$.

Proof. For (1), since $G \subseteq H$, any supremum over $G$ is at most the supremum over the larger class:
$$\hat{R}_S(G) = \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in G}\sum_{t=1}^{n}\epsilon_t g(z_t)\Big] \le \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in H}\sum_{t=1}^{n}\epsilon_t g(z_t)\Big] = \hat{R}_S(H).$$
For (2), the supremum over the sum class splits into independent suprema:
$$\hat{R}_S(G+H) = \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in G,\,h\in H}\sum_{t=1}^{n}\epsilon_t\big(g(z_t) + h(z_t)\big)\Big] = \hat{R}_S(G) + \hat{R}_S(H).$$
For (3), since $\sum_t \epsilon_t\, \mathbb{E}_{g\sim\pi}[g(z_t)] = \mathbb{E}_{g\sim\pi}\big[\sum_t \epsilon_t g(z_t)\big]$ is linear in $\pi$, the supremum over $\Delta(G)$ is attained at a point mass on some $g \in G$:
$$\hat{R}_S(\mathrm{cvx}(G)) = \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{\pi\in\Delta(G)} \mathbb{E}_{g\sim\pi}\Big[\sum_{t=1}^{n}\epsilon_t g(z_t)\Big]\Big] = \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in G}\sum_{t=1}^{n}\epsilon_t g(z_t)\Big] = \hat{R}_S(G).$$
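Property (3) can be checked per sign-draw: for each fixed $\epsilon$ the objective is linear in $g$, so adjoining a convex mixture of class members never changes the supremum. A small sketch with hypothetical function values:

```python
import numpy as np

# Toy class G (hypothetical values): 2 functions evaluated on n = 4 sample points.
G = np.array([[1.0, -1.0,  1.0, -1.0],
              [0.5,  0.5, -0.5, -0.5]])
mix = 0.3 * G[0] + 0.7 * G[1]      # one element of cvx(G)
G_plus = np.vstack([G, mix])       # G with the mixture adjoined

rng = np.random.default_rng(1)
eps = rng.choice([-1.0, 1.0], size=(5000, 4))   # Rademacher sign draws

# Per-draw suprema: eps @ mix is a convex combination of eps @ G[0] and
# eps @ G[1], so it can never exceed their maximum.
sup_G = (eps @ G.T).max(axis=1)
sup_Gplus = (eps @ G_plus.T).max(axis=1)
print(np.allclose(sup_G, sup_Gplus))   # → True: the sup is unchanged
```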

Contraction (Lipschitz composition): if the loss $\ell$ is $L$-Lipschitz in the prediction, then the Rademacher complexity of the loss class is at most $L$ times that of the class of predictors. That is, $\hat{R}_S(\ell \circ \mathcal{F}) \le L\, \hat{R}_S(\mathcal{F})$.


Proof. Let each $\varphi_t$ be $L$-Lipschitz. Fix $\epsilon_1, \ldots, \epsilon_{n-1}$ and write out the expectation over the last sign $\epsilon_n$ explicitly:
$$\mathbb{E}_{\epsilon_n}\Big[\sup_{g\in G}\sum_{t=1}^{n}\epsilon_t\,\varphi_t(g(z_t))\Big] = \frac{1}{2}\sup_{g,g'\in G}\Big[\sum_{t=1}^{n-1}\epsilon_t\big(\varphi_t(g(z_t)) + \varphi_t(g'(z_t))\big) + \varphi_n(g(z_n)) - \varphi_n(g'(z_n))\Big]$$
$$\le \frac{1}{2}\sup_{g,g'\in G}\Big[\sum_{t=1}^{n-1}\epsilon_t\big(\varphi_t(g(z_t)) + \varphi_t(g'(z_t))\big) + L\,\big|g(z_n) - g'(z_n)\big|\Big]$$
Since the expression is symmetric in $g$ and $g'$, we may drop the absolute value and split the supremum back into two, which reintroduces the expectation over $\epsilon_n$:
$$= \mathbb{E}_{\epsilon_n}\Big[\sup_{g\in G}\,\sum_{t=1}^{n-1}\epsilon_t\,\varphi_t(g(z_t)) + L\,\epsilon_n\, g(z_n)\Big]$$
Repeating the above argument we remove $\varphi_1, \ldots, \varphi_{n-1}$, and so we conclude that
$$\frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in G}\sum_{t=1}^{n}\epsilon_t\,\varphi_t(g(z_t))\Big] \le \frac{L}{n}\,\mathbb{E}_\epsilon\Big[\sup_{g\in G}\sum_{t=1}^{n}\epsilon_t\, g(z_t)\Big] = L\,\hat{R}_S(G).$$
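The contraction bound can be sanity-checked numerically with a 1-Lipschitz map such as $\tanh$. A sketch with a hypothetical finite class (the values in `G` are random, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
G = rng.normal(size=(4, n))                 # 4 hypothetical functions on n points
eps = rng.choice([-1.0, 1.0], size=(20000, n))

def rad_hat(values):
    """MC estimate of (1/n) E_eps[ sup_g sum_t eps_t g(z_t) ] over a finite class."""
    return (eps @ values.T).max(axis=1).mean() / n

r_G = rad_hat(G)
r_phiG = rad_hat(np.tanh(G))                # tanh is 1-Lipschitz, so L = 1
print(r_phiG, r_G)                          # contraction predicts r_phiG <= r_G
```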


Example ($\ell_1$-constrained linear predictors). For $\mathcal{F} = \{x \mapsto f^\top x : \|f\|_1 \le R\}$,
$$\hat{R}_S(\mathcal{F}) = \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{f:\,\|f\|_1\le R} f^\top \sum_{t=1}^{n} \epsilon_t x_t\Big]$$
The $\ell_1$ ball of radius $R$ is the convex hull of the $2d$ scaled signed basis vectors $R\cdot\{e_1, -e_1, e_2, -e_2, \ldots, e_d, -e_d\}$, so by Proposition 1 (3) it suffices to take the supremum over these vertices. Hence by Proposition 1 (4) we have that
$$\hat{R}_S(\mathcal{F}) \le R\, \max_{t} \|x_t\|_\infty \sqrt{\frac{2\log(2d)}{n}}$$
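A quick numeric check of this bound, with a hypothetical design matrix and $R = 1$ (note that the supremum over the $\ell_1$ ball has the closed form $R\,\|\cdot\|_\infty$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, R = 8, 5, 1.0
X = rng.uniform(-1.0, 1.0, size=(n, d))     # rows are hypothetical x_1, ..., x_n
eps = rng.choice([-1.0, 1.0], size=(50000, n))

# Exact sup over the l1 ball: sup_{||f||_1 <= R} f^T v = R * ||v||_inf,
# so the Rademacher complexity is (R/n) E|| sum_t eps_t x_t ||_inf.
rad = R * np.abs(eps @ X).max(axis=1).mean() / n
bound = R * np.abs(X).max() * np.sqrt(2 * np.log(2 * d) / n)
print(rad, bound)                            # rad should not exceed bound
```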

Example applications: Lasso, SVM, ridge regression, logistic regression (including kernel methods). For a loss that is $L$-Lipschitz in the prediction, combining symmetrization with the contraction lemma we have
$$\mathbb{E}_S\big[L_D(\hat{y}_{\mathrm{erm}})\big] - \inf_{f\in\mathcal{F}} L_D(f) \;\le\; \frac{2L}{n}\,\mathbb{E}_S\mathbb{E}_\epsilon\Big[\sup_{f\in\mathcal{F}}\sum_{t=1}^{n}\epsilon_t f(x_t)\Big]$$

subject to $\|f\|_2 \le R$.

This corresponds to the class $\mathcal{F}$ of linear predictors with Hilbert-space norm bounded by $R$.

The $\ell_1$-constrained case (e.g. Lasso) corresponds to linear predictors with $\ell_1$ norm bounded by $1$.
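For the Hilbert-norm-constrained class, Cauchy–Schwarz gives the closed form $\hat{R}_S(\mathcal{F}) = \frac{R}{n}\,\mathbb{E}_\epsilon\big\|\sum_t \epsilon_t x_t\big\|_2 \le \frac{R\,\max_t \|x_t\|_2}{\sqrt{n}}$. A numeric sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, R = 10, 4, 2.0
X = rng.normal(size=(n, d))                  # rows are hypothetical x_1, ..., x_n
eps = rng.choice([-1.0, 1.0], size=(30000, n))

# sup_{||f||_2 <= R} f^T v = R * ||v||_2 by Cauchy-Schwarz.
rad = R * np.linalg.norm(eps @ X, axis=1).mean() / n
bound = R * np.linalg.norm(X, axis=1).max() / np.sqrt(n)
print(rad, bound)                            # rad should not exceed bound
```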


Example (neural networks, layer peeling). Let $\mathcal{F}_i = \big\{x \mapsto \sum_j w^i_j\,\sigma(f_j(x)) : \forall j,\ f_j \in \mathcal{F}_{i-1},\ \|w^i\|_1 \le B_i\big\}$ with $\sigma$ a $1$-Lipschitz activation. Then
$$\hat{R}_S(\mathcal{F}_i) = \frac{1}{n}\,\mathbb{E}_\epsilon\Big[\sup_{\substack{\|w^i\|_1\le B_i \\ \forall j,\ f_j\in\mathcal{F}_{i-1}}} \sum_{t=1}^{n} \epsilon_t \sum_j w^i_j\,\sigma(f_j(x_t))\Big]$$
Since the inner expression is linear in $w^i$, the supremum over the $\ell_1$ ball is attained at a scaled signed basis vector, so
$$= \frac{B_i}{n}\,\mathbb{E}_\epsilon\Big[\sup_{f\in\mathcal{F}_{i-1}} \Big|\sum_{t=1}^{n} \epsilon_t\,\sigma(f(x_t))\Big|\Big] \le \frac{2B_i}{n}\,\mathbb{E}_\epsilon\Big[\sup_{f\in\mathcal{F}_{i-1}} \sum_{t=1}^{n} \epsilon_t\,\sigma(f(x_t))\Big]$$
where the absolute value costs at most a factor of $2$ (splitting into positive and negative parts and using the symmetry of $\epsilon$). Finally, by the contraction lemma with $\sigma$ being $1$-Lipschitz,
$$\hat{R}_S(\mathcal{F}_i) \le 2B_i\,\hat{R}_S(\mathcal{F}_{i-1}).$$
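Unrolling the recursion $\hat{R}_S(\mathcal{F}_i) \le 2B_i\,\hat{R}_S(\mathcal{F}_{i-1})$ over a depth-$k$ network gives $\hat{R}_S(\mathcal{F}_k) \le \big(\prod_{i=1}^{k} 2B_i\big)\hat{R}_S(\mathcal{F}_0)$. A trivial sketch, with hypothetical per-layer norms $B_i$ and an assumed base-class bound:

```python
# Hypothetical per-layer l1 bounds B_i and an assumed bound on R_S(F_0),
# e.g. obtained from the linear-class analysis above.
B = [1.5, 2.0, 1.0]
base = 0.3

bound = base
for Bi in B:
    bound *= 2 * Bi        # one application of R_S(F_i) <= 2 B_i R_S(F_{i-1})
print(bound)               # = 0.3 * (2*1.5) * (2*2.0) * (2*1.0)
```

Note the exponential dependence on depth through the product of the $2B_i$ factors.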

