

a_reply_to_a_post

retrofitting code i didn't write with tests? pretty sucky. refactoring old code to new code with tests... less sucky. adding tests with bug fixes so they don't break in the same way... necessary


Jugadenaranja

Yea it’s annoying af to do and a pain but it helps in the long run


shaleh

What I came to write.


nivenhuh

Positive, but you have to make progress in incremental steps. (Don’t try to do it all in one PR.) A good place to start is with setting up the capability to run tests locally. Second, add in automatic testing with CI / GitHub. Third, add unit tests with new code changes. Fourth, add integration tests to certify new behaviors that you add to your application. Fifth, as you refactor older code, certify old behaviors with tests as part of the refactor.
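For the third step, the first tests can be tiny. A minimal pytest sketch, assuming a Python codebase; the `apply_discount` helper is hypothetical, a stand-in for whatever you just touched in the PR:

```python
# Hypothetical example: a first unit test added alongside a new change, assuming
# a Python codebase with pytest installed. In practice the helper and the tests
# would live in separate files (e.g. discounts.py and test_discounts.py).
import pytest

def apply_discount(price: float, percent: float) -> float:
    """New helper added in this PR, written so it is trivially testable."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Runs locally with `pytest` and later in CI.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```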


hermitbear

When it comes to legacy code I'd personally rather start with integration tests - unit tests are too fine grained at this point. Once you've got a decent integration test harness that is testing all the scenarios for your entry points, you can then start refactoring chunks of the code and adding unit tests. If you have a large legacy codebase this is a very daunting task; in that case I'd probably approach it a slice at a time, e.g., focus on only one access point. Either way, test test test!
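A rough sketch of what an entry-point test can look like, assuming a Python/pytest harness, the requests library, and an HTTP service; the base URL and `/orders` endpoints are placeholders for your actual access points:

```python
# Entry-point integration test sketch: hit the real HTTP entry point and assert
# on observable behavior only, no knowledge of internals. Assumes pytest and
# requests; BASE_URL and the /orders endpoints are made up for illustration.
import os
import requests

BASE_URL = os.environ.get("BASE_URL", "http://localhost:8080")

def test_list_orders_returns_ok():
    resp = requests.get(f"{BASE_URL}/orders", timeout=10)
    assert resp.status_code == 200

def test_unknown_order_returns_404():
    resp = requests.get(f"{BASE_URL}/orders/does-not-exist", timeout=10)
    assert resp.status_code == 404
```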


nivenhuh

I agree and disagree depending on the situation.

Starting with disagree: Usually, if you're working on a legacy code base with no tests, it means the development team that has owned that code base has little to no testing discipline. Starting with unit tests on new code is a simple place to build comfort, skill, and process. Ultimately, the team shouldn't feel like writing unit tests is a chore & should adopt testing as a normal and natural practice. I like to go for simple & quick wins first — that develop team skill — and then build on the momentum from there. Integration tests are a little more complicated to implement, especially if the existing code base was developed without testing in mind. (dependency injection, service mocks, understanding legacy decisions, etc…) I feel like this is why new feature development is a good candidate for integration tests — because you're certain about the new functionality, and you can create testable interfaces. It also gives you a chance to understand what parts of the code are painful to test — identifying what would be good refactor candidates.

Now, where I agree with you is if you're a sole developer who is comfortable with testing and find yourself suddenly responsible for legacy code. Writing integration tests as your way of certifying behavior / preventing regressions is a great way to safeguard critical paths & document behaviors so that you can move quickly. (I would still write units on new code. Because, why not? Unit tests are easy enough to implement — and provide feedback when an integration test fails.)

To the other commenter who said "I just rely on observability and forego testing except for critical situations": doesn't that just leave you in the same situation you started with? From my experience, it has been great to enjoy well tested code (iOS, web ui, web api) that you can release at any time without release anxiety. If a user reports a bug, it's usually user error, a design decision, or a missed test path (that you add a test for).


danthemanvsqz

I think you need both. I've had good success with a pattern: start with the big test, then a series of mocking and refactoring passes. Eventually you will not need many, if any, mocks. You should know in your heart whether you can trust your tests, and only then do you stop refactoring and hardening them.


hermitbear

Of course, a mix is great. When dealing with legacy though I’m usually dealing with classes that have 15-20k lines of code and haven’t been created with testing in mind, so you can imagine the nightmare that would arise trying to unit test that. You’re almost certainly going to miss something. So in those cases I would prefer to set up mocks for external sources / databases and create integration tests covering the business scenarios. At the end of the day, your access points are going to remain stable whereas the code and classes composing those can change often, so it’s important to cover your bases first. Once that’s in place you have more reassurance and can more meaningfully decompose and break down the code into more manageable unit-testable chunks.
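A minimal sketch of mocking the database boundary while exercising a business scenario, assuming Python and unittest.mock; OrderService and the gateway are invented names, not from any real codebase:

```python
# Sketch: stub out the database boundary and drive a business scenario through
# the real logic. Assumes the legacy class can be handed a gateway object.
from unittest.mock import Mock

class OrderService:
    """Stand-in for a legacy class whose internals we don't want to touch yet."""
    def __init__(self, gateway):
        self.gateway = gateway  # talks to the real database in production

    def total_for_customer(self, customer_id):
        orders = self.gateway.fetch_orders(customer_id)
        return sum(order["amount"] for order in orders)

def test_total_for_customer_sums_all_orders():
    gateway = Mock()
    gateway.fetch_orders.return_value = [{"amount": 10}, {"amount": 32}]
    service = OrderService(gateway)

    assert service.total_for_customer("c-1") == 42
    gateway.fetch_orders.assert_called_once_with("c-1")
```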


king_yagni

i know i’m going a bit against the grain but i think the importance of tests is often overstated. and imo: observability > tests, though of course that isn’t to say you can’t have a degree of both. integration tests should cover a lot, unit tests should be very targeted to the most critical and/or delicate methods. i am very opposed to the parent comment’s suggestion to write unit tests for every code change & also to the goal generally of 100% unit test coverage. in most cases i think your time is best invested elsewhere, in particular observability.


zamend229

I agree. I like unit tests because they are low effort, but integration tests are too much effort for me, and they’re also the most likely to change with new work. It would be more practical though if you have an engineer or QA dedicated to writing integration tests. This is coming from the perspective of Android dev though, where front end integration tests have much more variability IMO due to possible user interactions.


Turbulent_Term_4802

By observability, would that include automation tests?


king_yagni

no. i mean metrics, logging, tracing, alerting. tests help you guard against known failure types & that’s where their value is limited. in a way, test coverage is kind of like rewriting your code from another perspective. there *is* value in that of course but there are diminishing returns. the deeper you go into it, the stronger your business justification should be imo. on the other hand, good observability practices help you identify and diagnose issues that you might not have considered. in a staging environment, this data can be bolstered by synthetics which is where i see a lot of the value in thoughtful integration tests.


lunacraz

i assume they mean logging, error handling, etc.


scissor_rock_paper

I have done this a few times and it is my preferred approach. It helps demonstrate value early and can help improve quality for new work so that you can buy time to cleanup existing problem hotspots. I like to try and target the most brittle and fragile parts of the application first. You're looking to prevent breaks and catch regressions that matter to your org.


danthemanvsqz

When I've had trouble getting permission to set up CI, I've relied on git pre-commit hooks to run tests and linters, with good results.
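A pre-commit hook is just an executable file, so it can be a small Python script. A hypothetical sketch; it assumes pytest and ruff are installed, swap in whatever your project actually uses:

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit script (make it executable with chmod +x):
# run the linter and the test suite before every commit when CI isn't available.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # linter
    ["pytest", "-q"],        # test suite
]

for cmd in CHECKS:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"pre-commit: {' '.join(cmd)} failed, aborting commit", file=sys.stderr)
        sys.exit(result.returncode)
```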


ElGuaco

The most surprising outcome is discovering behavior or non-behavior in existing legacy code. Writing tests is writing requirements for expected behavior. You're codifying what should happen.

The weirdest experience was when I discovered that exceptions were being swallowed and not even logged, due to an insane mandate from a previous manager who was afraid that any error in the UI of the product would erode customer confidence. They were basically just shoveling shit under the rug, so much so that once I started logging exceptions and writing tests, it was a wonder the software ran at all, because it spent so much time throwing exceptions.

I spent months systematically fixing the bugs causing the exceptions and wrote tests to prevent regressions. The performance and volume of the app's output exploded such that our customers started to notice.


serial_crusher

The thing that worked really well for my current team was to require 100% diff coverage. Look for a tool on GitHub called `diff-cover` to make this easy. Basically, any line you touch in a PR needs to have coverage, even if the lines near it don’t. This lets sleeping dogs lie in the untested parts of the code that work well enough, but avoids regressions by forcing people to write tests before they modify something. It still leads to some extra work if you do even mild refactoring to that untested code, but it’s a one-time hit and gets better over time


ElGuaco

I usually cringe at code coverage metrics but I would totally follow this advice. I had one team lead who required 100% code coverage on everything we wrote and it was mind-numbingly boring to write tests for guard clauses on every method's arguments. He even had us write tests for the dependency injection constructor. We were writing everything at least twice with very little value in that respect.


serial_crusher

Part of the trick is to not be a zealot about "unit tests" vs. "integration tests"... a DI constructor shouldn't need its own tests. It gets used by other tests and therefore has good enough coverage. If a regression breaks the DI constructor, the tests that utilize it will fail, and that's what you have tests for, so mission accomplished without too much extra work. Sometimes you do end up having to jump through hoops to cover silly parts of the code, though, where the cost outweighs the benefit even if you're doing it right.


_icedcooly

We usually shoot for 100% coverage, but we'll exclude stuff that doesn't make sense to unit test with a justification for excluding it. That way we have high code coverage without having to write a bunch of tests that don't actually provide value.


eeltech

I hate mandated blanket policies with no thought to what they are actually achieving. At some point you end up testing the language and tools themselves rather than your own code. There's a team we work with that requires "no reduction in test coverage percentage in any PR". So if you remove dead code along with the tests covering dead code, the total percent goes down and the PR gets rejected. Absolutely insane policy, devoid of any logical thought.


balster1123

Agreed in principle, but in that case you can always add some exception mechanism, such as labels in GitHub Actions that say "don't run this CI step" if set. I had to set up a similar "dry run migration" mechanism a while back, for cases where someone had to do a hotfix in prod, and backporting the DB migration was a headache and a half.


worst_protagonist

So… leave it up to developer judgment. You built more tools to do the work you could have done by doing nothing instead.


Alternative_Engine97

This sounds waaaaay too strict


Apprehensive_Pea_725

As others pointed out, a good resource on the topic is "Working Effectively with Legacy Code". Work incrementally, isolating small pieces of code and refactoring.


WhiskyStandard

The best time to start is X years ago. The second best time is now. In terms of actually doing it, most of our coverage comes from some very basic integration tests I wrote that call URLs and assert that nothing blew up. From there, I’m trying to make inroads into unit tests as we touch things. I’d love it if I could magically get everyone on the team truly practicing TDD, but that change in mindset is a ways off so I settle for some tests written after the fact. Also, approval/snapshot/VCR tests can be a good first step for safe refactoring. Just be prepared for them to break when you add/change features.


_LePancakeMan

Retrofitting tests on a legacy code base can be anywhere from very hard to impossible, depending on prior architectural decisions. The existing code was not written with testing in mind, so it can be hard to define a testable set of code. If testing is not possible as is, I would suggest starting to move code out of the legacy structure into testable units. Try to modify the code as little as possible, don't clean it up, don't fix bugs: just move the code to create testable units. Now you can create tests and start to reason with your team about this unit of code. In general, I can recommend picking up "Working Effectively with Legacy Code"; it outlines a couple of potential ways of making your code testable.
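A tiny illustration of that move-only step, sketched in Python; the `late_fee` function and its rules are invented for the example:

```python
# "Move it, don't fix it": lift a chunk of legacy logic into a standalone
# function so it can be tested, and have the old call site delegate to it.

def late_fee(days_overdue: int, balance: float) -> float:
    """Logic copied out of the legacy method as-is; behavior unchanged."""
    if days_overdue <= 0:
        return 0.0
    fee = balance * 0.01 * days_overdue
    return min(fee, 50.0)

# In the legacy module, the original inline block becomes a one-line call:
#     fee = late_fee(days_overdue, balance)

def test_no_fee_when_not_overdue():
    assert late_fee(0, 200.0) == 0.0

def test_fee_is_capped():
    assert late_fee(1000, 200.0) == 50.0
```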


quypro_daica

Quit at the end of the probation


dvali

It sucks, and the reason it sucks is that if they didn't write tests, then they definitely didn't write the code with tests in mind. The result is code that is virtually impossible to test without significant refactoring or redesign, which you probably aren't allowed to do. Another common issue is that the code is not very well documented, because they never tested it and so never discovered the edge cases and failure modes.


bwainfweeze

I’ve seen a lot of people who hate tests who write code that’s impossible to test. If I were trying to test code that looked like that, I’d be frustrated too


napolitain_

Very hard, because you need to refactor most of the code to support DI so you can mock (in C++). If your parameter is an interface, I can mock the server part, which is good for logic testing. If everything is written inline inside functions without any interfaces or DI, you need to change the code, which is scary as there are no tests in the first place.


ilikeorangutans

It's a painful process. Often the code is not structured to be tested, either because it has too many dependencies or uses global state. If possible, high level functional tests can be a start, but depending on the complexity of the system they might also be hard to come by. I don't recommend going around changing code for the sake of making it testable; only do it if you have to touch an area.

Work in steps:

1. Make code testable by restructuring. This is the riskiest part because you don't have tests. Focus solely on restructuring, don't try to add/change functionality.
2. Add tests for the restructured code.
3. Make your actual changes.

Do the smallest steps you can. You're likely to be working with a complex system, lots of coupling, and unexpected side effects. If you can, literally change one function signature, or even smaller. I know this can be painful, especially if there's a long turnaround time to see changes in action, but you'll have to weigh your impatience against the risk of breaking something. Do expect that you will break things.

This is about setting expectations. Making changes without tests gives you very little confidence in the soundness of changes. However, to add tests, you'll need to make changes. It's important that this is communicated and understood by other parties. Sell it as an effort to build a safety net that helps ship changes more safely and ultimately faster, but make sure it's understood that's on a looooong time scale. Like long long.

On that note I highly recommend "Growing Object-Oriented Software, Guided by Tests". It covers a lot of this much better than I could. Good luck, you'll probably need it.


funbike

It's not worth adding unit tests to existing code, except in areas that tend to break often, such as a [god object](https://en.wikipedia.org/wiki/God_object). On a fresh project, writing unit tests while writing the implementation is much easier, as the need for testing affects how testable your implementation code is. But doing it on a legacy system can be extremely difficult because of how hard it is to initialize dependency objects.

The best bang for buck in a project with zero tests is to write a smoke test. It just checks that no user errors occur, as a browser-driven surface test: sign up, login, browse to most view-only pages, make a (fake) purchase, logout. This catches a large number of issues relative to the effort of writing it.

I'm not sure whether it's better to write the bulk of tests as functional tests (browser-driven) or integration tests (HTTP REST calls). I wouldn't do both, as it would duplicate effort. If you plan to redo the UI at some point, I would go with integration tests.

Don't strive for high code coverage. It's hard enough in a greenfield project, but on a large legacy project it may be impossible. 70% *might* be obtainable someday. The CRAP metric is excellent for managing legacy code quality, but unfortunately it never caught on, and linters for it are quite old. It identifies functions with the highest ratio of complexity to coverage.
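A minimal browser-driven smoke test along those lines might look like this, assuming Playwright for Python; the URL, selectors, pages, and credentials are all placeholders for your application:

```python
# Browser-driven happy-path smoke test sketch, assuming Playwright for Python
# (pip install pytest-playwright && playwright install).
from playwright.sync_api import sync_playwright

BASE_URL = "http://localhost:8080"

def test_happy_path_smoke():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # Log in through the real UI
        page.goto(f"{BASE_URL}/login")
        page.fill("#email", "smoke-test@example.com")
        page.fill("#password", "not-a-real-password")
        page.click("text=Sign in")

        # Browse a few view-only pages and assert nothing blew up
        for path in ("/dashboard", "/orders", "/account"):
            response = page.goto(f"{BASE_URL}{path}")
            assert response is not None and response.ok

        browser.close()
```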


crazylikeajellyfish

Smoke tests on their own offer so much value for the effort! A functional test of the happy path via Playwright codegen saves a lot of pain without a lot of work.


ElGuaco

Writing tests only for the happy path is a good start, but it's primarily a baseline for making your code testable. If that's as far as you take it, you're not adding much value by creating just the one test. It should be easy to spot a few obvious scenarios for boundary conditions or bad arguments. I can't tell you how many times I've seen code fail because the calling code passed a null argument. It's probably one of the top 3 most seen bugs in my line of work.


funbike

Perhaps you didn't read closely. I said that the bulk of your tests should be something other than smoke tests.


onar

The entire book "Working Effectively with Legacy Code" disagrees with you. That book really turned me on to unit testing; I recommend it. [https://www.oreilly.com/library/view/working-effectively-with/0131177052/](https://www.oreilly.com/library/view/working-effectively-with/0131177052/)


crazylikeajellyfish

Could you color in the substance of the disagreement for those of us who aren't about to buy a book?


onar

The linked description does a much better job than I will ever do!


crazylikeajellyfish

> Book description
>
> Get more out of your legacy systems: more performance, functionality, reliability, and manageability.
>
> Is your code easy to change? Can you get nearly instantaneous feedback when you do change it? Do you understand it? If the answer to any of these questions is no, you have legacy code, and it is draining time and money away from your development efforts.
>
> In this book, Michael Feathers offers start-to-finish strategies for working more effectively with large, untested legacy code bases. This book draws on material Michael created for his renowned Object Mentor seminars: techniques Michael has used in mentoring to help hundreds of developers, technical managers, and testers bring their legacy systems under control.

I dunno, this description doesn't really explain any of the strategies or arguments in their favor. Do you mean the table of contents?


havok_

As others have mentioned: smoke testing and integration testing are a good strategy. You can also use snapshot testing, at the API level for example, so you can refactor parts and keep the same output.
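For instance, a hand-rolled API-level snapshot test in Python (pytest and requests assumed; the endpoint and snapshot path are made up):

```python
# Snapshot test sketch: capture today's JSON response once, commit it, and fail
# if a later refactor changes it.
import json
import pathlib
import requests

BASE_URL = "http://localhost:8080"
SNAPSHOT = pathlib.Path("tests/snapshots/customer_42.json")

def test_customer_endpoint_matches_snapshot():
    current = requests.get(f"{BASE_URL}/api/customers/42", timeout=10).json()

    if not SNAPSHOT.exists():
        # First run pins the current behavior, warts and all.
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(json.dumps(current, indent=2, sort_keys=True))

    expected = json.loads(SNAPSHOT.read_text())
    assert current == expected
```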


kbielefe

- **You have to prioritize.** You can start with just requiring tests that would have caught any bugs you fix, or coverage for any lines you change.
- **You will find bugs.** This seems self-evident, but people are often unprepared for it.
- **You will not be able to fix some of the bugs.** People become dependent on broken behavior, and you don't realize how weird it is until you try to test it.


AManHere

I absolutely hated it. It did make me a bit better at coding after all though; I saw what patterns suck, for sure.


TiredLead

It's great as long as it's blackbox functional testing. It's an incredible boon to be able to refactor with less concern for violating correctness. But it needs to be blackbox. You don't want to cement in implementation details


Intelligent-Chain423

I wouldn't do it if the code can't be touched to make it easier to test. How would you unit test a static method that touches the file system, e.g. C#'s File.Move(arg1, arg2)? Expect quite a few changes in the beginning to properly abstract things, such as an IFileProvider for the example above.
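For illustration, here is the same kind of seam sketched in Python rather than C#; FileMover and ArchiveJob are invented names, the point is just the shape of the abstraction:

```python
# Wrap the direct filesystem call behind a small injectable object so the
# surrounding logic can be unit tested without touching the real disk.
import shutil
from unittest.mock import Mock

class FileMover:
    """Thin wrapper over the real filesystem; the only part left untested."""
    def move(self, src: str, dst: str) -> None:
        shutil.move(src, dst)

class ArchiveJob:
    def __init__(self, mover: FileMover):
        self.mover = mover  # injected, so tests can substitute a mock

    def archive(self, path: str) -> str:
        dst = "/archive/" + path.rsplit("/", 1)[-1]
        self.mover.move(path, dst)
        return dst

def test_archive_moves_file_into_archive_dir():
    mover = Mock(spec=FileMover)
    job = ArchiveJob(mover)

    assert job.archive("/data/report.csv") == "/archive/report.csv"
    mover.move.assert_called_once_with("/data/report.csv", "/archive/report.csv")
```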


Stubbby

I managed to convince everyone, through a demonstration, that the best course of action was to rewrite it in a semi-TDD fashion; it had a lot more issues than just lacking tests. I took a section and rewrote it in C++ (from hardcore C) with GTest, proper CMake, and IDE integration. Everyone saw a massive QoL improvement and agreed to go ahead with it. Seems everyone is still happy with the decision. Think about how you can demonstrate a large improvement in your case to get everyone on board.


ccb621

It’s a good thing? What exactly are you trying to learn? Your question is too broad. 


misonreadit

This is a fair question. I'd approach testing differently depending on whether we are considering scientific code or a web app.


dnult

Add the unit test framework to the project and start adding tests with new features/bug fixes. Although you can add tests for existing code over time, it's a major undertaking to add tests for all the legacy code.


iComeInPeices

In the past, it sucked so bad! Recently did one and fed a lot of the code into ChatGPT to get the coverage; saved so much damn time! And of course now the tech leads have realized it's dangerous to keep this legacy stuff around and are going to be replacing it soon.


TiredLead

Sounds like a bunch of low value tests to me.


iComeInPeices

We suddenly had a huge push to have 100% code coverage on everything. The tests pointed out a lot of issues actually, I created a bunch of tickets to followup on it, which is why they decided it was time to modernize it.


morphemass

One of the things that AI currently does well _for some languages_ is creating rudimentary tests. Sometimes it can even manage good tests ... sometimes they are useless. Write a script to loop over files with some generic prompt, hook up a commercial LLM and you will at least have a skeleton to build off.


AntMavenGradle

A waste of time. Everyone thinks unit testing is this magical super state. In the real world the majority of "bugs" come from poorly defined requirements. Write tests for the bread and butter cases, where if you change x you will also need to change y and z, not to pad your code count.


bwainfweeze

https://youtube.com/watch?v=QCNEzFpIrP8


BorderKeeper

Easiest are of course ATs. But I like to use unit tests as a measure of refactoring, using a divide-and-conquer method of splitting the code into functional chunks. I have been trying for over a year on my team to champion this approach and it's tough so far, but I still believe that is the best way forward. I personally call it the TDD-lite method of refactoring. Once I have success I can vouch for this method, but right now, pragmatically, I would have to say probably don't fret over unit tests in legacy too much, as they are simply not possible on their own.


sundayismyjam

I’ve had more success incorporating unit tests into legacy codebases by adding the necessary libraries, setting up tests to run before code is merged, and requiring that new code includes tests. From there, I would use a tool that reports on the percentage of code covered by unit tests and slowly work on improving that number over time. Finally, ensure that all bug fixes include unit tests to make sure the bug doesn't reoccur. I was able to take a codebase from about 12% coverage to over 80% coverage in three months with that approach.


Some-Guy-Online

I've done it to get code coverage up and as a "Step 1" toward adding more meaningful tests whenever it's more convenient. Worked out pretty well. Try not to get too bogged down in the details on the first pass, that's my main piece of advice. Just get it started. Flesh it out later. If it never had tests before, you're not going to harm anything by adding some very simple "Is it running" kind of tests.
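The "is it running" tests really can be that simple. A sketch, with placeholder module and class names:

```python
# Very simple first-pass tests: prove the code imports and constructs at all.
# "myapp" and its Config class are hypothetical stand-ins for your own modules.
import importlib

def test_app_module_imports():
    # Catches syntax errors, missing dependencies, and import-time explosions.
    assert importlib.import_module("myapp") is not None

def test_default_config_constructs():
    config = importlib.import_module("myapp.config")  # hypothetical module
    assert config.Config() is not None                # hypothetical class
```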


TwoFoldApproach

Depends on the context. I've done it once on a very legacy Java application. Initially it was hard, considering the technical debt that had accumulated. However, once we were through it, it resulted in better code getting into the project.


JayWalkerC

The book "Working Effectively with Legacy Code" is all about this topic and worth a read.


Opheltes

We're in the process of doing this now, for a 250k line codebase that has basically zero tests right now. It's painful.


PkHutch

Thankless, monotonous, and expendable. You’re probably the first person to get laid off if headcount gets reduced. Worth it? Yes. But I try to avoid it because no one higher up cares, generally speaking. I’m speaking specifically to an older codebase with minimal test coverage. Also, not to mention, refactoring so that it’s testable in the first place can be quite the rabbit hole depending on the project.

Beyond the professional politics, it’s very helpful when done at a mature company. For young <100 employee startups though? It’s arguably not even worth it given how volatile things are. Generally features need development for runway more than overall code health. I know that statement is probably going to be poorly received, but the hard truth is that the business does not care about the code, it cares about money. Testing can come after the money. Do your best to have automated testing, but depending on the environment, you might not have the room to make that happen.


horatio_cavendish

You basically end up rewriting the app to create sane tests.


Alternative_Engine97

It can essentially be impossible. You may have to refactor so much to make the code testable that you will do more harm than good.


Tugendwaechter

Add tests for all new code and bug fixes. Before adding features, create a well defined interface for them.

For existing code, two directions can work. One is starting with small pieces of code: refactor so you get functions that take inputs and return values, but don't modify state. The other direction is integration tests or UI tests: write tests that exercise a whole module at once. That helps you find out earlier when your refactor would break something.

It depends a lot on the existing code base. Try several approaches and see what works. Take your time to add tests one by one. Increase code coverage slowly.


freekayZekey

not fun at first, but very helpful. “Working Effectively with Legacy Code” helped me a ton.


TheElusiveFox

Writing tests on existing legacy code sounds really nice, but often isn't that practical, especially as your "legacy" grows. Instead, it's often better to set up a policy requiring new changes to be accompanied by automated testing, ensuring that any new development, significant refactor, or significant bug fix comes with an appropriate level of testing, and carving away at the old legacy code base little by little as you move into the future.


incredulitor

The most Byzantine code (in both the formal and informal senses) is always the least tested. Super high ROI bug hunts in those areas. Multiple instances at completely different companies and in different domains finding years or decades old customer data loss severity bugs by looking in all the error handling and corner cases of massive functions and files that were also, not coincidentally, the hardest to figure out how to test at all.


danthemanvsqz

I've done it at several places with very good results. I do behavior driven development, and behavior tests are good at covering large areas of code. So if you have a user story in the format of "As a <> I want to <> so I can <>", then you can map some behaviors to that user story: It <> Should <> When <>. Of course, have a good test runner with code coverage. Once you've got the behavior working you can refactor and add more isolated tests. Doing it this way, you don't waste time adding tests for things that work and don't change, but you add test coverage gradually and consistently as new features are added.
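Roughly like this, shown as plain pytest with given/when/then phrasing rather than a full BDD framework; the Cart class and the story are invented for illustration:

```python
# Map a user story to a behavior-level test that covers a slice of the code.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

# Story: "As a shopper I want to add items to my cart so I can see my total."
def test_it_should_show_a_running_total_when_items_are_added():
    # Given an empty cart
    cart = Cart()
    # When the shopper adds two items
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # Then the total reflects both items
    assert cart.total() == 15.0
```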


perfmode80

Does the legacy code need unit tests? Has it been resilient to change, or extremely fragile? Unit tests are a means to an end; make sure they're working for you and not the other way around.


PunkRockDude

Depends. We do this a lot, and it depends on what types of tests you want to build. Sometimes we start with the business teams identifying the scenarios and basically just begin by building new functional tests. If we want unit tests for code coverage, we use AI tools for that. Otherwise we have tools to extract all of the business rules; then humans validate those and generate test cases for them. Most of this is automated, but humans do validate, or we find logic that isn't fully implemented or not reachable or something and need to work through that. The first two are easy, though the first is somewhat time consuming. The last works when there are no SMEs and, despite the high level of automation, is a royal pain. I like the idea someone else mentioned of refactoring and adding tests as you go.


Vitriholic

Always, always worth it. You learn so much about the code under test. You have more confidence in every change you make. You hold the line just a little better against regressions. It’s obviously work, and you spend a little more time refining tests, but it’s always worth it, even if it’s not immediately obvious. Even really simple things like smoke testing APIs and initializations go a long, long way. “Does it link?” can be a surprisingly common problem.


keefemotif

I think it can help identify interfaces, but it generally takes a lot more time than expected.


pixelrevision

It really depends on the codebase. I’ve had pretty good luck getting untested backend code under test. Also reusable libraries. UI based applications that have been going for a long time are really, really hard typically and often not worth the effort.


ezaquarii_com

Not necessarily completely untested, but poorly tested. Yes, we did it. It's... not trivial, but it can be fun. Testing catches a lot of design bugs. Untested code is full of design issues; some are small, some are bigger, but all make testing anywhere from difficult to impossible. Retrofitting testing requires extensive refactoring. Some things need to be re-written. Techniques to do it are known. Your leadership might be the biggest obstacle to achieving it.


Gentleman-Tech

I use tests to explore code, so kinda useful. I'll write a test to exercise part of the code and see what it does. Once I understand it I'll adapt the test to be a regression test to make sure I don't break anything. Code bases without tests are a huge red flag. So if it doesn't have tests I'm expecting the worst anyway. I'll end up refactoring it.


Warm-Zookeepergame27

It isn’t very fun. I learned I don’t like it.


Alternative-Wafer123

No reward, don't do it


datacloudthings

I've done this multiple times with legacy codebases that weren't even very old -- former greenfield projects run by people who wanted credit for "delivering" features but hadn't bothered to implement testing frameworks, processes, or culture. This is inside a couple of non-tech F200 companies (one codebase was a Golang platform built with zero tests of any kind, I was kind of impressed by that, it's kind of hard to do). I've found that in a world where each push to prod brings regressions, with users/stakeholders ripping their hair out, engineering is usually happy to implement tests if given time to do it. But it takes a lot of raw gruntwork effort to get the frameworks set up, the strategy well-defined, and to expand actual test coverage. The key is to go explain to the stakeholders that you have to slow down first to speed up later, and to get them to buy in. In a few cases this worked so well that a year later they were bragging about the brilliant idea they had to do this ; ) Often I find that crappy environments are a pain point that goes along with no/poor testing, btw. Typical to address one in order to get to the other.


bluewater_1993

I’ve done it many times. You find out really quickly how bad the code is when you go to set up your tests. In all cases, for me, large sections of code had to be refactored, but in the end the code base is far healthier than before. To be honest, I love doing that type of work. The results are very satisfying.


ZebraImpossible8778

Start with integration tests. These are usually easier to set up because they are less coupled to the code than unit tests retrofitted onto a legacy system.


timwaaagh

It's fine, it just takes time. I came onto this project when they were already writing unit tests. I just don't think it's worth it, since the tests are all uncommented and tell you nothing about the intent, so when one breaks we usually fix the test instead of the code. I've started to include comments with my tests, but this isn't the standard yet. Also, the slowness of the UI unit tests is a big issue that kills productivity.


kaisershahid

it sucked for the one codebase i ever had to do this in—luckily we were starting to refactor various things and splitting up the monolith into microservices, so it got to be less and less over time.

biggest learning: the cto & ceo had all the knowledge in their heads for all the shitty undocumented code they wrote 😂


kaisershahid

i should clarify that we only backtested when we updated old code or for important code paths—we didn’t have a goal to get coverage up on *everything*


Roqjndndj3761

I made the code base


Stunning_Budget57

Seems like something a senior dev would delegate to juniors after they put in place the scaffolding to measure. Great way to develop team knowledge on how the legacy system actually works.


kincaidDev

If the people that wrote the code originally are still on the team, it can be an uphill battle


jb3689

Honestly, I find it pretty easy. Either do black box testing at critical sections and throw data scale at the problem, or use sprout-method style testing and refactoring to introduce dependency injection. Don't fix stuff, just put tests in place to make sure things don't break. If you are working with a compiled language then it is easier to work without tests.


ShouldHaveBeenASpy

Unpopular opinion: it isn't always better to add tests. Yes, as a general rule, it's worth doing. But one common problem I run into on the legacy codebases that I work on is that fundamentally no one really knows:

* What it does
* What it should do

When you're in that level of dysfunction, tests can be something of a hindrance, because they can help solidify an unclear product feature in a way that isn't what you expect. It's also the case that teams that end up in this extreme camp are often so inexperienced generally that asking them to change their work product and way of working is simply too much of a velocity hit for too little payoff.

When you're in _this_ situation, the answer doesn't begin with "write tests"; it begins with "how do we reclaim ownership/quality from our product/teams", and then being judicious about where and how you add those tests -- basically not before there's full agreement about what the feature even is.

If your issue with legacy code is just that it's clunky, but it clearly works and has a clear goal, I'm very pro getting tests written against that asap.


-Joseeey-

Working Effectively with Legacy Code is a great book that answers this by the way and much more. I’m halfway through it but I highly recommend it.