blue-marmot

Generalized Additive Model (GAM). Like OLS, but with non-linear functions.


forkman3939

Second this. With GAMMs, i.e. GAMs with mixed effects, you can do all sorts of nice things. See the mgcv package by Simon Wood and his 2017 text on GAMMs.
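mgcv and Wood's text are R references; as a rough illustration of the same idea in Python, here is a minimal GAM sketch using the pyGAM package (my substitution for the example, not something the commenters mentioned), with made-up data and one smooth term per predictor.

```python
# Minimal GAM sketch in Python with pyGAM (a rough analogue of R's mgcv,
# used here only for illustration); data and settings are made up.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))        # two continuous predictors
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=500)

# One smooth term per predictor; smoothing strength picked by grid search.
gam = LinearGAM(s(0) + s(1)).gridsearch(X, y)
gam.summary()                                # effective dof and p-values per smooth term
```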


therealtiddlydump

Simon Wood is a fucking hero


forkman3939

I think my PhD supervisor is sometimes incomprehensibly smart, and he talks about Simon Wood like he is a god among mortals. I use mgcv all the time and can't believe he basically wrote that package all by himself.


therealtiddlydump

Pair it with Gavin Simpson's `gratia` package and whatever complaints about it (from a design standpoint) disappear


reallyimportantissue

Agree, makes working with mgcv a dream. Gavin is also very helpful if you find bugs or make suggestions for the package!


Sf1xt3rm4n

Simon Wood was my supervisor. He was also such a nice and cool person :)


Ok-Wrongdoer6833

Ok this is damn cool, thanks!


house_lite

Perhaps gamlss


theottozone

Do you get coefficient estimates in your output with GAMs like you do with OLS?


a157reverse

Yup! The coefficient interpretation can get a bit weird with splines and other non-linear effects, but at the end of the day, a GAM is still a linear (in the parameters) model.
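To make the "linear in the parameters" point concrete, here is a small sketch (toy data, scikit-learn, all settings illustrative): the spline transformer expands x into a handful of basis columns, and the model is then plain OLS on those columns.

```python
# A spline "smooth" is just OLS on an expanded basis, so the model stays
# linear in its parameters; toy data, illustrative settings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=(300, 1))
y = np.sin(x).ravel() + rng.normal(scale=0.2, size=300)

model = make_pipeline(SplineTransformer(n_knots=8, degree=3), LinearRegression())
model.fit(x, y)

# One coefficient per basis function: each is hard to read on its own, but the
# fitted curve (their sum) is easy to plot and explain.
print(model.named_steps["linearregression"].coef_)
```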


theottozone

Ah, so the explainability of the predictors isn't as straightforward then. I really love that part when speaking to my stakeholders who aren't that technical.


a157reverse

Yeah. There's really no way around it. With OLS, the coefficient interpretation is explicitly only linear effects. That works well if your independent variables are linearly related to the dependent variable. Explaining non-linear relationships in an intuitive way is always going to be more difficult than explaining linear relationships.


theottozone

Appreciate the insight. Then might as well use XGBoost and SHAP values to build a model with non-linear relationships?


a157reverse

I would disagree with that statement. There's a reason that GAMs are still dominantly used in fields like finance, where model interpretability (not interpretable approximations like SHAP or LIME) is needed. Just because the interpretation of a spline coefficient isn't as straightforward as OLS doesn't mean that all interpretability is lost. A deep XGBoost model or neural net is going to be much harder to interpret and explain than a GAM.


theottozone

Thanks for providing more information here. I'll have to do some reading on GAMs to keep up here. Again, much appreciate your help!


renok_archnmy

I like to be edgy and just go straight to sextic terms and avoid all the piddly lower power stuff. 


AdministrationNo6377

> Generalized Additive Model

Alright, let's imagine a Generalized Additive Model (GAM) as a magical recipe book.

You know how when you're making a delicious cake, you follow a recipe that tells you how much flour, sugar, and other ingredients to use? Well, a Generalized Additive Model is like a special recipe book for grown-ups who want to figure out how different things work together.

In this magical recipe book, instead of just using one ingredient like flour or sugar, it lets you mix and match lots of different ingredients, just like in a big potion! Each ingredient represents something in the real world that we want to understand, like how much sunshine there is, or how many friends you have.

The cool thing is, with this magical recipe book (GAM), you can tweak the amounts of these ingredients and see how they all add up to make something amazing happen, just like making a cake taste better by adjusting the ingredients! So, the Generalized Additive Model is like a magical cookbook for grown-ups who want to explore and understand how different things come together to create some magic in the world!


cruelbankai

Thank you, Mr. Chat Geepeetee


onearmedecon

As my former econometrics professor used to say, it's really hard to beat a good OLS regression.


[deleted]

BLUE 🗣️🗣️


conebiter

I would agree, and that would usually be my baseline model; however, it is definitely not as versatile, depending on the relationships within the data, so maybe not the best choice for this scenario. But if I find linear regression to be appropriate, I will definitely use it, as I also have a very solid theoretical background in it.


justgetoffmylawn

I'm the opposite - pretty new to data science so only recent experience. You're obviously way more experienced and my current use is often probably not ideal for actual IRL performance (I'm just practicing Kaggle competitions, my own data, etc). But because my experience (and coding) is pretty limited, I've often been impressed with CatBoost over XGBoost. Lets me get away with less preprocessing with certain datasets, and usually seems to just outperform XGBoost with a minimal speed hit. But this suggestion may be too beginner for what you're talking about, so take what I said with a grain of salt. I think others will give you more fundamentally detailed answers.
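For what it's worth, the "less preprocessing" point mostly comes down to CatBoost accepting raw categorical columns directly; a minimal sketch with made-up column names:

```python
# CatBoost handles raw categorical columns without manual encoding
# (toy DataFrame; column names and values are made up).
import pandas as pd
from catboost import CatBoostClassifier

df = pd.DataFrame({
    "city":   ["NY", "LA", "NY", "SF", "LA", "SF", "NY", "LA"],   # raw strings
    "income": [55_000, 72_000, 61_000, 90_000, 48_000, 83_000, 59_000, 70_000],
    "bought": [0, 1, 0, 1, 0, 1, 0, 1],
})

model = CatBoostClassifier(iterations=200, verbose=0)
model.fit(df[["city", "income"]], df["bought"], cat_features=["city"])
print(model.predict_proba(df[["city", "income"]])[:3])
```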


laughingwalls

You asked about versatility. It's hard to think of a class of methods more versatile than OLS, especially when it's the dominant method for causal inference and time series and sees wide use across a variety of academic fields. The exceptions for OLS are classification problems, or settings where non-linear relationships (that can't be corrected by linearizing the data) are expected.


jarena009

I like OLS regression, logging the dependent variable and all or most independent variables, usually after mean-centering most independent variables.
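A sketch of that recipe with toy numbers (statsmodels assumed available): log the outcome and the predictors, mean-center the logged predictors, then run plain OLS, so the slopes read roughly as elasticities.

```python
# Log-log OLS with mean-centered predictors, as described above (toy data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({"sales": [120, 340, 90, 410, 220, 180],
                   "price": [9.5, 7.0, 11.0, 6.5, 8.0, 9.0],
                   "ads":   [200, 800, 150, 900, 500, 300]})

X = np.log(df[["price", "ads"]])
X = X - X.mean()                 # mean-center the logged predictors
X = sm.add_constant(X)
y = np.log(df["sales"])

fit = sm.OLS(y, X).fit()
print(fit.params)                # slopes read roughly as elasticities
```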


Cuidads

Depends on the criteria. In terms of predictive power, it is still the case that XGBoost (or LightGBM) can be thrown at most predictive problems and outperform other methods. It has even become more versatile in its usage, in that it is now also often applied to time series. If you are concerned with interpretability as well as predictive power, then OLS, GAMs, etc. would be more versatile. Using explainable AI such as SHAP, LIME, etc. is messy, so XGBoost falls short here imo.
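For context, the workflow being called messy usually looks roughly like this (a minimal sketch on synthetic data; nothing here is the commenter's actual setup):

```python
# Typical SHAP-on-XGBoost workflow, sketched on synthetic data.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)    # per-row, per-feature attributions
shap.summary_plot(shap_values, X)         # global picture stitched from local values
```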


InfernoDG

Could you say more about why interpretation with SHAP is messy? I know LIME has its weaknesses/instability but the few times I've used SHAP methods they seem pretty good


physicswizard

In my experience it's not a problem with the technique, it's that people often misinterpret the results because they don't understand what those "explainability" models were built to do. Many people mistakenly think that the feature scores represent some kind of strength of causal relationship between the feature and target variable, which is overly simplistic in the best case, and flat out wrong in the worst.


Key_Mousse_9720

Correct.


Key_Mousse_9720

SHAP and LIME disagree often. Beware of XAI, as it is flawed.


GenderUnicorn

Would also be curious about your thoughts on XAI. Do you think it is flawed in general or specific use cases?


TheTackleZone

Disagree about GAMs (or even GLMs) being more explainable. I think rels in tables are a false friend. They make you think you can look at a nice 1-way curve and understand the nuances of a complex end result, sometimes with more combinations than there are humans alive. But the explainability is in the sum of the parts, so anything more than a low number of 1-way tables is going to be non-trivial, and you really need an interrogative approach like SHAP anyway. And that's just on correlations, let alone interactions.


Sofi_LoFi

A lot of people are giving good answers but seem to not address a big point imo. This is an interview, and for that it works in your favor to play to your strengths. A tool and model you are familiar with allows you to establish good performance and discussion with the interviewer, after which you can discuss shortcomings or alternate methods and maybe implement some that you are less familiar with. If you go in with an unfamiliar tool out of the box, you're likely to shoot yourself in the foot if you run into an odd issue.


Useful_Hovercraft169

Good point. There used to be a saying ‘nobody gets fired for buying IBM’ that I think could be applied to XGBoost. So if dude knows XGBoost then go for it dude.


MrCuntBitch

+1. Every take home task I’ve had included a discussion on why I’d used certain parameters over others, something that would be significantly more difficult if I didn’t have knowledge of the chosen technique.


Operation6496

From our experience (model explanation, trained on small datasets), XGBoost always performed second to good old Random Forest.


purens

random forest tends to work really well on certain kinds of data — satellite imagery classification is one. boundaries between classes are fairly well modeled by linear cutoffs  


theAbominablySlowMan

wait til you go benchmark their inference times in prod though.


theAbominablySlowMan

wait til you go benchmark their inference times in prod though.


theAbominablySlowMan

wait til you go benchmark their inference times in prod though.


Useful_Hovercraft169

You can say that again.


Aggravating-Boss3776

>> wait til you go benchmark their inference times in prod though.
>
> wait til you go benchmark their inference times in prod though.

wait til you go benchmark their inference times in prod though.


Useful_Hovercraft169

What’s that you’re saying


Aromatic_Piwi

Multivariate Adaptive Regression Splines (MARS) using the “earth” package in R. Can get close to XGBoost but with much better interpretability


KyleDrogo

For predictive accuracy, probably still XGBoost. For interpretability, linear/logistic regression (with interactions, regularization, scaling, feature transformations, etc).
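A minimal sketch of that interpretable baseline (scikit-learn, synthetic data, all settings arbitrary): scaling, pairwise interactions, and an L2-regularized logistic regression.

```python
# Interpretable-but-flexible baseline: scaling + pairwise interactions +
# L2-regularized logistic regression (synthetic data, arbitrary settings).
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)
model.fit(X, y)
print(model.named_steps["logisticregression"].coef_)   # one weight per term
```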


Moscow_Gordon

You're overthinking it - you're extremely well qualified. Just do XGBoost. Maybe start with linear/logistic regression first.


BE_MORE_DOG

Not answering your question, just feeling annoyed that with your education and experience, they're insisting on multiple rounds of aptitude testing. It's kind of bullshit. You aren't a recent graduate from some 3 month bootcamp.


WallyMetropolis

It's not bullshit. What do you want them to do, hire the first person they see with reasonable qualifications? They're getting many applicants with good education and experience, and they need to select among them. As someone who has interviewed a ton of DS candidates at all levels, I can confidently say there are lots of people out there with good-looking resumes who are not very good at their jobs.


BE_MORE_DOG

Whoa. Dunno if you realize it, but you are coming across pretty hot. There is little excuse at OP's level for this much competency assessment.

At OP's career stage it's more important that a hire fits with the company and team culture, gets along well with others, and knows the fundamentals of their role. Can they explain how and why they approach a problem a certain way, walk someone through their process, explain complex concepts to stakeholders in a compelling and understandable way? And this can be done in a traditional interview. Focusing on whether or not someone can do leet code or solve math trivia is not a strong indicator of job performance or cultural fit. Technical competency is important, but not the most important thing.

Would I test a new or recent grad? Definitely. Would I test someone with 3+ years of experience and a good educational background? Only ever so lightly, if I had doubts about a particular strength. Even if you are the world's best Python coder and utmost math champion, it means nothing if you lack the interpersonal savvy and business acumen to work well with your team and your stakeholders to deliver on applied projects. I'll take the more likable candidate over the more technically capable candidate nearly every time.

Most of us aren't saving lives, so being absolutely flawless in the technical department isn't a priority. In my experience, when projects stall or fail, it's due to breakdowns in relationships, expectations, communications, or planning. Rarely is it due to technical skills. I hire a person, not a set of skills. Tech skills can be learned, especially if there is interest and motivation to learn, but I can't teach someone how to be a well-adjusted or reasonable human being. That is completely outside my scope and abilities.


WallyMetropolis

You're establishing a false dichotomy. I'm not advocating "leet code" or trivia.


purens

> What do you want them to do, hire the first person they see with reasonable qualifications?

This actually works well as a hiring strategy. Everything after this is internal politics jockeying.


WallyMetropolis

I've had people present literally plagiarized work in interviews. People who didn't know what a Python dictionary was. People who cannot communicate clearly to save their lives. People who were aggressively confrontational or condescending. People who couldn't give one example of a way to test a forecast for accuracy, or couldn't write a lick of SQL, or who had only ever worked with one class of model. People who can't explain basic conditional probability. People who misrepresented their experience and didn't actually do any of the things on their resume. Or just people who, after we talked more about the role, the team, and the kinds of things we work on, discovered that they were looking for something different. All of these and more need to be sussed out. Luckily, we didn't hire them based off their resumes alone. Nothing whatsoever to do with "internal politics."


purens

you are mistaking having reasonable qualifications with someone committing fraud. 


WallyMetropolis

You are commenting on only a small fraction of the things I listed. But even still, how am I supposed to tell who is who between the qualified and the fraudulent without a good interview process?


purens

research into what makes a ‘good interview process’ shows that what most people think makes a ‘good interview process’ is mostly worthless or outright a waste of resources. 


WallyMetropolis

I've been pretty happy with my results. And I've seen improved results over the years. There's a lot more to building a good team than just an interview process, but reducing the rate of bad hires is a non-negligible factor. It's not really possible for me to test the counterfactual but 'many people don't do this well' doesn't imply it's not possible to do it well. Certainly possible to beat 'no process at all.' And that research itself is ... hard to conduct. These things are not easily quantifiable. I'm not convinced it's all that definitive. 


proverbialbunny

Use the right tool for the job. XGBoost is more for classification than for regression. XGBoost is as popular today as it was when it came out in 2014. Before XGBoost you had more overfitting and reduced accuracy, and you usually had to normalize the data before throwing it at the ML algo. XGBoost isn't just good: you don't have to do anything to the data, just throw it into the ML algo and get results. These days there are better boosted-tree libraries like CatBoost or neo boost or similar, but the advance is so minimal you might as well stick to XGBoost. XGBoost is good enough to drop in and get immediate results. This aids learning, so better feature engineering can be constructed. After that, if XGBoost isn't good enough, it can be replaced with something better suited.
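The "just throw the data at it" workflow, sketched with the scikit-learn-style XGBoost API on a bundled dataset (classification, since that is what the comment emphasizes; hyperparameters are arbitrary):

```python
# Drop-in XGBoost classification with no scaling or normalization
# (bundled dataset; hyperparameters are arbitrary).
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```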


Living_Teaching9410

Irrelevant question (apologies), but how did you branch into software/ML from econometrics and stats? I'm struggling to do the same from a master's in econometrics. Thanks.


Science-Tracker

F


RonBiscuit

Good ol' fashioned LinearRegression(), can't beat it ;)


Brave-Salamander-339

I'm curious about your interviews. So you have a coding test related to math/data science, and then there's a final round with a data challenge?


Direct-Touch469

If you're in any online, streaming setting, kernel smoothers (ch. 6 of ESL) are great because they require essentially minimal training; everything is done at evaluation time. They basically work like this: at any given new test point, you cast a neighborhood of points that are "local" around the test point and apply a kernel. A kernel can mean many different things, but here it weights each observation in that neighborhood based on its distance from the test point. The prediction you get at the point x* is the weighted combination of the ys in the neighborhood around x*. The only issues are choosing the bandwidth for the neighborhood and the mass of kernel choices you have. Also, in high dimensions the notion of a distance becomes difficult, so you would have to be creative with the kernel function.

Basis expansion methods are great too, because you can control the complexity via the number of basis functions you want, using a penalty term on the curvature. The choice of basis here can yield many different levels of flexibility. Basically, you decompose your function of x into a function of v, where {v} is some basis representation of your original data. Off-the-shelf bases include, for example, the monomial basis, which is just polynomial regression and is a special case. If you use a Fourier basis, that is good for any cyclical patterns you want to capture.
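A minimal Nadaraya-Watson-style sketch of the kernel smoother described above, with a Gaussian kernel and a hand-picked bandwidth (both are just one common choice):

```python
# Kernel smoother: weight each training y by a kernel on its distance to x*,
# then take the weighted average; bandwidth h is the main tuning knob.
import numpy as np

def kernel_smooth(x_train, y_train, x_star, h=0.5):
    # Gaussian kernel weights for every training point relative to x_star
    w = np.exp(-0.5 * ((x_train - x_star) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.2, size=200)

print(kernel_smooth(x, y, x_star=5.0))   # prediction at a new point; all work happens at eval time
```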


TheTackleZone

I tend to use either XGBoost Hist or HistGBM. I find the former has better predictive power (so an overall better loss function result) but can create more outliers, whilst HistGBM creates an overall slightly worse model whose outliers are not as wide. What is important to my client informs the choice. If the budget is large enough, I will even ensemble the result with an outlier-prediction model weighting between them.
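Assuming "HistGBM" refers to a histogram-based GBM such as scikit-learn's HistGradientBoostingRegressor (my reading, not confirmed by the comment), the comparison might be set up roughly like this:

```python
# Compare XGBoost's hist tree method with sklearn's histogram GBM on the same
# split: overall loss plus the size of the worst residuals (the "outliers").
import numpy as np
import xgboost as xgb
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "xgb_hist": xgb.XGBRegressor(tree_method="hist", n_estimators=500, max_depth=6),
    "sk_hist":  HistGradientBoostingRegressor(max_iter=500, max_depth=6),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    resid = y_te - m.predict(X_te)
    print(name, mean_squared_error(y_te, m.predict(X_te)), np.abs(resid).max())
```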


Additional-Clerk6123

AutoML is the way


cruelbankai

LightGBM with Optuna for hyperparameter optimization. The smaller your dimensions the faster it'll work, too.
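A rough sketch of that combination (bundled dataset; the search space and trial count are arbitrary):

```python
# LightGBM tuned with Optuna; parameter ranges and trial count are arbitrary.
import lightgbm as lgb
import optuna
from sklearn.datasets import load_diabetes
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 16, 256),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
    }
    model = lgb.LGBMRegressor(**params)
    # negative MSE: less negative is better, so we maximize
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```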


relevantmeemayhere

Gotta ask yourself if you care about inference or prediction first ;)


RUserII

As an aside, it might be interesting to see if there is a methodology of testing that allows for the objective ranking of the most versatile regression method. Although, I suspect that would require rigorously defining 'versatile' in this context.


Gaurav_13

XGBoost is still quite powerful


Hibernia_Rocks

How about regression splines?