This lesson wraps up regression theory. Next time we’ll mainly do “practice” rather than “theory” — although we’ll have a little bit of that to discuss.

This course is brought to you free of charge, but we do need donations to keep it alive. You can join the many others whose generous support has made it possible with a donation at Peaseblossom’s Closet.


Tamino,

Just some feedback and a question or two from Lessons 4 and 5 of the course. These lessons are difficult, but they ultimately went well and, in my opinion, were highly informative.

The Einstein summation convention (ESC) took a while to get used to, as this was the first time I had used it, and it required some outside resources, which was to be expected. By the end of Lesson 5 it wasn’t that bad. For anyone who has worked a lot with matrices there is an intuitive basis on which to build an understanding of ESC, but it still gets tricky. The thing is, a number of steps in the notes involve matrix/vector calculations in indicial notation that require correct interpretation to reach the final result, and that interpretation might not be clear to someone without a good understanding of (tensor) index notation.

It would be interesting to hear other students’ comments on the approach with ESC and index notation.

It was useful for me to have the rules for ESC and index notation on hand in one spot as I worked through the equations in the lecture. One paper that was of some limited use for the rules was this one:

http://homepages.engineering.auckland.ac.nz/~pkel015/SolidMechanicsBooks/Part_II/07_3DElasticity/07_3DElasticity_01_3D_Index.pdf

I can see why ESC is a worthwhile method to study, given the notational advantages you mention in the lectures; indeed, it becomes an interesting topic in its own right. Having the same matrix relations in what you describe as “the usual method” alongside the method presented in the slides (Lesson 4) was very useful, because it helped clarify how the ESC and the associated matrix algebra had to be interpreted in order to produce the same result. In a number of instances I found myself writing in the “usual method” results to help get the ESC interpretation right. The linear regression example in Lesson 4 was also very helpful.
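As an illustration of the correspondence discussed above, here is a small sketch (my own, not from the course materials, using NumPy’s `einsum` and made-up example data) that computes the least-squares normal equations both in “the usual method” matrix form and via explicit Einstein-summation index expressions:

```python
import numpy as np

# Made-up data for y = 2 + 3x plus noise (illustration only)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, size=x.size)

# Design matrix X_{ij}: a column of ones and the predictor
X = np.column_stack([np.ones_like(x), x])

# "Usual method": solve (X^T X) beta = X^T y
beta_matrix = np.linalg.solve(X.T @ X, X.T @ y)

# Einstein summation: a repeated index is summed over.
# (X^T X)_{jk} = X_{ij} X_{ik}   (sum over i)
XtX = np.einsum('ij,ik->jk', X, X)
# (X^T y)_j = X_{ij} y_i         (sum over i)
Xty = np.einsum('ij,i->j', X, y)
beta_einsum = np.linalg.solve(XtX, Xty)

print(beta_matrix)                            # close to [2, 3]
print(np.allclose(beta_matrix, beta_einsum))  # True
```

The subscript strings in `einsum` are a direct transcription of the index expressions, which is what makes it handy for checking one’s interpretation of the notation against the matrix result.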

Lesson 5 was very informative regarding the various types of regression and the Theil-Sen example provided. I think, however, there may be an error on slide 30 of the Lesson 5 slides, concerning the Theil-Sen slope confidence interval. After the two indices L and U are calculated, the slide indicates using the Dth and Uth slope estimates as the 1 − alpha confidence interval. In the audio I believe you refer to the Lth and Uth estimates, which seem like the right values. Is that correct? [Response: Yes. My bad.]

One takeaway from Lesson 5 might be that weighted least squares is an effective way to compensate parameter estimates for heteroscedasticity, provided the variances (in the case discussed, measurement errors) are known or can be estimated. Would that be correct, and is that typically how heteroscedasticity is handled? [Response: Yes. In fact you don’t need to know the actual variances; it suffices to know the *relative* variances, so that you get the weights right.]

Personally I am enjoying the lessons very much, and it is proving to be an excellent course on this subject, without doubt!
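The two Lesson 5 points above can be sketched in code. The following is my own illustration, not the course’s: the Theil-Sen interval follows the standard Sen (1968) construction, reading the bounds off the Lth and Uth order statistics of the sorted pairwise slopes (exact index conventions vary between texts and may differ from the slides), and the weighted-least-squares demo checks that rescaling all the weights by a constant leaves the estimates unchanged, so only *relative* variances are needed. All data are made up.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# ---- Theil-Sen slope with approximate CI (Sen 1968 construction) ----
def theil_sen_ci(x, y, z=1.96):
    """Median of pairwise slopes; CI from the Lth and Uth order
    statistics of the sorted slopes. One common index convention."""
    slopes = np.sort([(y[j] - y[i]) / (x[j] - x[i])
                      for i, j in combinations(range(len(x)), 2)])
    N, n = len(slopes), len(x)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # Var of Kendall's S, no ties
    c = z * np.sqrt(var_s)
    L = max(int(np.floor((N - c) / 2)), 0)      # Lth order statistic
    U = min(int(np.ceil((N + c) / 2)), N - 1)   # Uth order statistic
    return np.median(slopes), slopes[L], slopes[U]

x = np.arange(30.0)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, size=x.size)
slope, lo, hi = theil_sen_ci(x, y)
print(slope, (lo, hi))   # slope near 0.5, with lo < slope < hi

# ---- WLS: only the *relative* variances matter ----
def wls(X, yy, w):
    """Weighted least squares: minimize sum_i w_i (y_i - X_i . beta)^2."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ yy)

sigma = 0.5 + 0.1 * x                         # heteroscedastic error sizes
y2 = 1.0 + 0.5 * x + rng.normal(0.0, sigma)
X = np.column_stack([np.ones_like(x), x])
w = 1.0 / sigma**2                            # weights = inverse variances
print(np.allclose(wls(X, y2, w), wls(X, y2, 7.0 * w)))  # True
```

The last line is the point of the response: multiplying every weight by 7 changes nothing, because the constant cancels in the normal equations, so knowing the variances up to an overall factor is enough.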

[Response: Thanks. I *love* the Einstein summation convention, and I think its advantages are legion. But it does take getting used to, as does the index notation for vectors/matrices. Those who stick with it, and get accustomed to it, will reap many benefits.

Perhaps I don’t fully remember how difficult it was for me when I was first introduced to it. That’s a common problem with teaching in general: the teacher thinks it’s “obvious”, but it’s only so if you already know it!]