Welcome to Fitting Statistical Models to Data with Python. This is the third course in the larger Statistics with Python Specialization, with each course building on knowledge from the previous ones to give us a well-rounded view of what statistics is and what statistics can do, while also allowing us to apply what we learn with Python. This particular course has four parts.

In part one, we'll get an overview of what it really means to fit models to data, along with some of the key considerations: the types of variables and the roles they play in a study, how to handle missing data appropriately, how to account for the study design when fitting models, the importance of having clearly defined objectives, and how to measure prediction uncertainty. On the computing side, we'll see an introduction to the landscape of Python statistics, and we'll be using Jupyter notebooks to help guide us through our modeling experience.

In part two, we'll start looking at two basic types of regression, linear and logistic. We'll learn how to fit these models, how to assess how well they fit, and how to interpret the results in the context of the data. We'll be working with the cartwheel dataset, predicting cartwheel distance from a person's height. On the logistic side, where we model a binary outcome variable, we'll predict the probability of completing a cartwheel based on a person's age. We'll get to hear from a couple of colleagues as they discuss the differences between association and causality. We'll also see the importance of data visualization by looking at the Datasaurus Dozen.

In part three, our focus will be on multilevel and marginal models. We'll see how random effects can account for the dependency in our data, and how the GEE, or generalized estimating equation, technique allows us to handle those marginal models.
Some of the examples we'll look at include predicting the probability that a person has never smoked, and examining interviewer effects in a study about people's perception of and trust in police.

In part four, we get to see a broad range of special topics. These will include different sampling designs, and whether or not to include survey weights in our modeling process. We'll see some examples of random forests, and some in-depth case studies on Bayesian techniques using Stan, a language for Bayesian modeling.

Throughout the course, you'll have the opportunity to assess your knowledge in a number of ways, and we'll give you plenty of readings and resources for deeper dives into these different concepts, along with some hands-on opportunities playing with interactive web applets. And of course, there will be a lot of Python-based tutorials to guide us through this modeling experience. You'll be hearing from a number of our team members as they share their insights and perspectives on fitting statistical models to data.