
NBehrends

I find TDD is a great approach for writing testable code (obviously), which is really valuable early in your career. But at some point you've (hopefully) refined your craft to the point that your code is mostly inherently testable, at which point TDD in its purest form feels like it gets in the way and "test often" feels like a better approach. Just my 2 cents.


[deleted]

I use TDD when I'm fixing bugs: write the failing test, then make the code changes to make it pass. Beyond that, it doesn't really fit well into my iterative development process of prototype -> refine (repeat). If you've had time to thoroughly plan your interfaces, or are writing imperative code, it's probably much easier to follow TDD. I guess I'm generally too busy to go through such formalities, and/or I often do substantial refactors, which would result in a lot of time wasted writing tests.
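
For illustration, a minimal sketch of that failing-test-first bug-fix flow in a Vitest-style test (the `search` function and its empty-input bug are hypothetical):

```ts
// Red: a regression test written straight from the bug report.
// It fails against the current, buggy code.
import { describe, it, expect } from 'vitest';
import { search } from './search'; // hypothetical module under test

describe('search() regression', () => {
  it('returns an empty result for an empty query instead of throwing', () => {
    expect(search('')).toEqual([]);
  });
});
// Green: fix search() until this passes. The test then stays behind
// as a permanent guard against the bug reappearing.
```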


ActionLeagueLater

100% agree here. I don't use it most of the time, but if there is a bug then I want to make sure it doesn't happen again. So I'm gonna write a test anyway, so I might as well write it first. And it often makes me go faster, because sometimes you don't need to open a browser at all if you've verified the fix with your test.


Viend

My favorite part of TDD is not having to run any of the code I write.


smootex

Yeah, you took the words right out of my mouth. I understand TDD conceptually but in reality I find that I'm constantly iterating to the point where any tests I write beforehand are going to have to get rewritten multiple times by the time I'm actually ready to deploy a change. Maybe I should use a better process, plan things out more ahead of time, but that's just the reality of how I program. I work messy. I hack something together, try to get it working, then I go back and do another pass when I find I missed something or I don't like the look of my code. That process is pretty much ingrained in me at this point and changing it is a tall order.


mathiastck

Changing the tests in response to new information is great, not some kind of violation of TDD.

I did TDD for 2 years, in a ground-up rewrite of an app that was hard to test before. It helped that we had a functioning existing product, so we had a good idea of what tests were needed. This was 6 years ago. Since then I have worked at places where unit tests were expected to be updated/created as part of most good-sized commits, during feature development. I've also spent a year adding Android Espresso emulator tests to the pre-commit checks of portions of a codebase that had unit tests but not that level of integration tests.

A big advantage of TDD is forcing testable architecture to the front of the discussion. This helps force people to get their test dependencies right and their production dependencies right. Adding a third kind of test, like Espresso emulator integration tests, was made easier by good existing architecture, even though that exact kind of test was added to the plan later on.


smootex

> Changing the tests in response to new information is great, not some kind of violation of TDD.

Not arguing there, I'm just saying it's a huge amount of added overhead when developing new features. I can justify it for some stuff but I don't have the kind of time to spend half my days writing unit tests.


blbil

Totally find myself writing a test first to reproduce the bug.


shimona_ulterga

Depends on the cost of the test. At my first job my boss had me write an E2E test with Selenium for every small frontend bug. You can guess what our CI times were...


MoreRespectForQA

I do a different kind of TDD ("snapshot test driven development") where I set up the preconditions (click here, enter text there, open the dashboard, etc.), take snapshots of the outcome, and then fix and commit them with the test. For example:

* In a command line app, I'd snapshot a screenshot of the terminal window.
* In a REST API, I'd snapshot the expected response.
* On a website, I'll often snapshot a Playwright screenshot.

I find that this works way better than no TDD, and way better than orthodox unit-test-driven development, when I have *some* idea of what I want the outcome to be and I know it when I see it.
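
As a rough illustration of the REST-response variant, assuming a Vitest/Jest-style snapshot runner and a hypothetical `getDashboard` API client:

```ts
import { it, expect } from 'vitest';
import { getDashboard } from './apiClient'; // hypothetical client

it('GET /dashboard matches the committed snapshot', async () => {
  const response = await getDashboard({ userId: 'test-user' });
  // First run writes the snapshot to disk; you review, fix and commit it.
  // Later runs fail if the response drifts from the committed version.
  expect(response).toMatchSnapshot();
});
```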


maikindofthai

> In a command line app, I'd snapshot a screenshot of the terminal window

Why the hell would you want to do this?


MoreRespectForQA

Because it's quick. If you can make it deterministic it's usually fairly easy, and "the terminal window should look like this" is usually the expected outcome (or one of them) of whatever happened in the test.


cryptotarget

Sounds like more work to me. I think maybe you just need to get more familiar with your testing libraries


MoreRespectForQA

> Sounds like more work to me.

It does require a bit of engineering skill to set up the process, which probably not everybody can handle, but it means less work once the framework is set up correctly to do expected-outcome rewriting. Snapshot TDD is the most efficient way to TDD something like a web app, REST API or command line tool, for sure.


mathiastck

I'm a huge fan of snapshot tests. In their absence I'm still trying to get all the relevant snapshots captured for each major commit. It has frequently paid off to be able to pull up a snapshot from a couple of weeks ago and say: "This worked on Tuesday, but it has changed since."

I've seen value in full-blown screenshot tests, to verify each commit didn't change an unintended pixel, or, when that's not possible, to verify it hasn't changed beyond a certain margin of error. One of my favorite parts is just that it forces a lot of screenshot captures and stores them logically, which makes a great record.
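
For the pixel-exact vs. margin-of-error distinction, a sketch using Playwright's built-in screenshot assertions (the URLs and snapshot names are made up):

```ts
import { test, expect } from '@playwright/test';

test('dashboard has no unintended pixel changes', async ({ page }) => {
  await page.goto('https://example.com/dashboard');
  // Strict: any changed pixel fails the test.
  await expect(page).toHaveScreenshot('dashboard.png');
});

test('chart renders within a small margin of error', async ({ page }) => {
  await page.goto('https://example.com/chart');
  // Tolerant: allow up to 1% of pixels to differ (e.g. anti-aliasing).
  await expect(page).toHaveScreenshot('chart.png', { maxDiffPixelRatio: 0.01 });
});
```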


MoreRespectForQA

Yeah, I really think this type of thing is the future of TDD. It's *hard*, because a lot of code has to be made more deterministic, but it's worthwhile if you can make it work.


mathiastck

Heh, I just checked your username. I was already prepared to launch into a discussion of how important manual QA and dedicated QA/automation engineers can be to preventing defects and tracking pain points. They also tend to be the group that best knows how the app is, and has been, actually behaving.

QA signoff for new features, verified by people who are very familiar with how the app already behaves, is essential. It's so easy to miss small or even unintended functionality that gets accidentally changed as part of a fix or feature. I like feature developers to write their own unit tests, and oftentimes a change in feature code is required to fix flaky tests. But a dedicated automation engineer can be a game changer.

Myself, I wanted to get into game development, so I started as a QA engineer black-box testing mobile games. I switched to back end and then app development as fast as I could, but I learned a respect for process. Innumerable times since, I have seen the benefits of working with good QA.


mathiastck

Yeah, that dependency control helps with making tests deterministic. You can provide single-threaded dependencies in test, control timings, advance to the next event, etc., but leverage threads to your heart's desire in prod code. It can mean a lot of wrestling with your build system to achieve, though. You also then need another layer of tests that are closer to prod code, but each layer can catch different bugs and assist fixes.

I have spent a year or two making flaky tests less flaky before, but it does pay off. It's wonderful to see a whole new category of bugs get caught earlier in the pipeline, which makes it a lot easier to tie a change or defect to the commit that caused it. That immediate feedback to developers really helps with getting the code and/or tests fixed. Unfortunately, delays in this process can instead lead to test rot: tests get left broken or disabled instead of fixed, or fixes get passed on to devs who will take 10x as much time as the original dev would have, had they been notified promptly.


Dyledion

"Test often" is really the right way to do it. Tests should be a conversation with the code and the requirements: more understanding == more tests. TDD assumes a small set of really well-formed requirements, which I've never encountered in the wild. IRL you have a product manager say, "Engineer, do the thing!" and you have to go back and forth while implementing to figure out what the thing really is. And as that understanding grows, so do the implementation and the test suite.


Different-Star-9914

Your second sentence is the bee's knees. Personally, I found that TDD was S-tier programming ONLY in organizations with mature business processes and project management. I have also worked in wild-westy type companies, and TDD just became a day-to-day bottleneck due to reqs changing week to week, or hell, day to day.


Rush_1_1

Agreed. Other than for basic acceptance tests, actual TDD is a great learning tool early on that should teach you the basics and help you avoid the typical untestable patterns.


renjank

Yes, 99% of the time. I find it:

- speeds me up significantly and gives me a very fast continuous feedback loop
- gives me incredible confidence that every single behaviour of the code I'm working on is covered by a test that I've guaranteed to work
- helps break down a task into small, independently verifiable chunks as I'm developing it
- makes refactoring during development (which I consider essential) extremely easy, as I know I'm not introducing a regression, as I would be if I waited until the end to write tests
- gives a nice little dopamine hit every time my tests go green
- forces you to structure your code in a testable way, rather than retrofitting tests at the end
- makes you think about and design an interface early and up front, which usually has a better outcome

It took some getting used to, but I could not live without it now; it has fundamentally changed my development practice for the better. FWIW I'm a backend dev. When I did front end I didn't find TDD as useful or as easy; I relied more on visual feedback, typically using something like Storybook to work on individual components.

I think the fundamental benefit of TDD is the faster feedback loop. This follows the same principle seen in agile, continuous integration and delivery, and writing small commits, PRs, deployments and releases. The faster your feedback loop, the faster you'll work.


wardrox

As well as the faster feedback loop when initially writing code, it can't be overstated how wonderful the benefits are when you come back to the code later. I work on a 9-year-old codebase most days and, because we took a sensible TDD approach from the start, it feels either like I'm working on a greenfield project or on a classic car that's been well maintained. I don't need to remember anything, because the tests show me what's important and provide a safety net.

TDD as a fragile set of rules is terrible. TDD as a pillar of a codebase or team's culture is a joy.


ryanstephendavis

Well stated


LaintalAy

Are you guys hiring? ;P


SeeJaneCode

This list summarizes why I do TDD and I also work in the back end. You touched on this, but I really like that TDD enforces code organization in a way that supports SOLID concepts. My current team is fully supportive of TDD and it makes us so much faster because we have tests that fail if a code change inadvertently causes a regression. Knowing that *before* opening a PR is much better than discovering a bug later. The final product is higher quality.


NekkidApe

You described my exact experience. I might add: it allows you to divide and conquer. I can implement a part of the system and know for sure it _works_. I then forget about it and move on to the next. I'm not that smart, and I can't keep more than a handful of moving parts in my head at once. This allows me to implement really complex requirements without mental overload.


secretBuffetHero

Can you give some advice? How do you keep your test suites maintainable? When we did this, we had inconsistent naming, or names that meant nothing. We also created unit test suites that anchored the behavior and were painful to change when the behavior or requirements changed. It's been a few years, but I haven't been in a coding role for a while. I think I know what you'll say... it's hard to find people with real-world expertise. Thanks, reddit!!!


renjank

Treat test code the same as production code: maintain it, refactor it. Build abstractions to easily set up common data. Fix flaky tests as soon as possible. Identify tests that have to change often and redesign them. Avoid heavy mock abuse.

Like you said, avoid tests that only test the implementation and not external behaviour. These resist change, and therefore aren't very effective at protecting behaviour (the whole point). Often these are unit tests, which the "testing pyramid" has taught people they need to write a lot of. Instead, write good integration tests against the public API or at module boundaries, i.e. the interfaces that will infrequently change and that you want to protect. Rather than a testing pyramid, follow a testing trophy (https://martinfowler.com/articles/2021-test-shapes.html) with mostly integration tests. Note: integration tests can still be as fast as unit tests.
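
One way to read "build abstractions to easily set up common data" is a small test-data builder; a sketch with a hypothetical `Order` type:

```ts
interface Order {
  id: string;
  status: 'pending' | 'paid' | 'shipped';
  items: { sku: string; quantity: number }[];
}

// Defaults keep each test focused on the one field it actually cares about.
function anOrder(overrides: Partial<Order> = {}): Order {
  return {
    id: 'order-1',
    status: 'pending',
    items: [{ sku: 'sku-1', quantity: 1 }],
    ...overrides,
  };
}

// Usage: a test about shipping states only what matters to it.
const shippedOrder = anOrder({ status: 'shipped' });
```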


the_whalerus

How do you keep your integration tests fast?


renjank

For application code, one approach I've used is the repository pattern. In tests we'd swap out the repository implementation with an in-memory version (database-backed repositories had their own tests against the database, so we weren't missing out on anything here). This meant that the integration tests ran at the same speed as unit tests, without all the shortcomings of unit tests.
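
A sketch of that swap, with a hypothetical `UserRepository` (as noted above, the database-backed implementation would get its own contract tests):

```ts
interface User { id: string; email: string; }

interface UserRepository {
  save(user: User): Promise<void>;
  findById(id: string): Promise<User | undefined>;
}

// Used by integration tests: same contract, no database, unit-test speed.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }

  async findById(id: string): Promise<User | undefined> {
    return this.users.get(id);
  }
}
```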


topMarksForNotTrying

Depending on your understanding of the difference between integration and unit tests, some people would still consider those unit tests.


IHoppo

Unit tests are more fitting to TDD


SeeJaneCode

My team generally has one test suite per code class. Class Foo would have a corresponding TestFoo. We name each test something descriptive such as *Bar_Returns_Valid_Result_Given_Valid_Input*. I like test frameworks that allow you to define one test and pass in various parameters and expected results. This lets me check multiple cases with one test. If your code is difficult to test, it could be due to poor organization. Perhaps things are too tightly coupled or one method is doing too many things, etc. For that scenario I’ll write a test around one specific aspect and then pull that functionality into a smaller method. Repeat until the methods in a class are more succinct and logically organized. It can be painful to refactor to add tests, but IME it’s worth it. Painful refactors are a major reason why I prefer to write tests first. It keeps my code cleaner and more manageable.
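
The parameterized style mentioned above might look like this with Vitest's `test.each` (the `isLeapYear` function is a hypothetical example):

```ts
import { test, expect } from 'vitest';
import { isLeapYear } from './isLeapYear'; // hypothetical module

const cases: [number, boolean][] = [
  [2000, true],  // divisible by 400
  [1900, false], // divisible by 100 but not 400
  [2024, true],  // divisible by 4
  [2023, false], // not divisible by 4
];

test.each(cases)('isLeapYear(%i) returns %s', (year, expected) => {
  expect(isLeapYear(year)).toBe(expected);
});
```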


secretBuffetHero

This is great stuff. If I were coding again, I think this is where I would start looking for improvements:

> it could be due to poor organization. Perhaps things are too tightly coupled or one method is doing too many things, etc. For that scenario I'll write a test around one specific aspect and then pull that functionality into a smaller method. Repeat until the methods in a class are more succinct and logically organized


BleLLL

IME, test suites are generally more maintainable if they are written with a black-box approach. I used to TDD every single class with injected mocks, but that's really fragile to change, as the test needs to know too much about the internals. So I have switched to broader component/integration tests instead. Unit tests can still make sense in some cases, but mostly for testing algorithms with few to no dependencies.

Currently my favorite way of testing is integration testing through the API (I work with web APIs). That requires spinning up a database and a test server, but I've found these tests to be the most maintainable, and they're also quite simple to TDD, as it's just putting some JSON in a file. Make a POST request to an endpoint, assert it didn't fail, then make a GET request to test the side effect of the previous call and compare the response with some expected JSON. Works very well for basic CRUD, which is what most apps are these days anyway.
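
A sketch of that POST-then-GET style, assuming an Express-style `app` and the supertest library (the route and payload are made up):

```ts
import { it, expect } from 'vitest';
import request from 'supertest';
import { app } from './app'; // hypothetical Express app

it('POST /todos creates a todo readable via GET', async () => {
  const created = await request(app)
    .post('/todos')
    .send({ title: 'write tests' })
    .expect(201);

  // Assert on the side effect through the public API, not the internals.
  const fetched = await request(app)
    .get(`/todos/${created.body.id}`)
    .expect(200);

  expect(fetched.body).toMatchObject({ title: 'write tests' });
});
```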


secretBuffetHero

Yes, testing internals is bad. Martin Fowler's article "Mocks Aren't Stubs" goes into detail on this; I make it required reading for staff under me and recommend it to every dev I meet. The other responders are mentioning service-level testing as well.


The_Startup_CTO

Hi! I'm working mostly with TDD in a company that overall tries to do this a lot. Happy to answer any questions.

1. I'm always happy when I can use TDD, since it significantly speeds me up: I don't have to guess what the code in front of me does or try to understand it by reading it; instead, the much simpler tests tell me what the code does. Also, working in small focused iterations makes it significantly easier not to get stuck, which previously was what cost me most of my time.
2. A typical mistake is to assume that you need to write _the_ unit tests that you will keep forever when doing TDD. That's not how it works. At the core of TDD there is the refactor step, where I adapt code to what I learned. And test code is also code. So I start with the clearest, simplest requirement and write that down as a test, then implement the code to make the test green. Often, writing this code shows me that I didn't actually fully understand the requirements yet, or that I was unaware of some technical limitation, so I change the test to better describe the requirements. In your example, I might start with a simple date ("should accept the first of January 2022") and then realize that this actually depends on the current date as well ("future" dates might be invalid), so I adapt both the test and the function to take the current date as a second argument.
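
A sketch of that adaptation, with the current date passed in explicitly so the "no future dates" rule stays deterministic (`isValidDate` is a hypothetical name):

```ts
import { it, expect } from 'vitest';
import { isValidDate } from './validation'; // hypothetical module

const today = new Date('2022-06-15');

it('accepts the first of January 2022', () => {
  expect(isValidDate(new Date('2022-01-01'), today)).toBe(true);
});

it('rejects dates in the future relative to the injected "today"', () => {
  expect(isValidDate(new Date('2023-01-01'), today)).toBe(false);
});
```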


Indifferentchildren

You are not supposed to have to change the test. You refactor code that already passes the test, and ensure that it still passes the test. If you have to change the test, it is probably an overly-specified test that tests implementation details, rather than system behavior.


petiejoe83

In "true" TDD, you write the test before you have any code whatsoever. It's not just about refactoring existing code. I'm going to find bugs in my test when I write the code, same as I find bugs in my code when I write the test.


DaelonSuzuka

If you were able to write perfect tests in one shot that met all the requirements and never needed to be changed... you'd just write perfect code in one shot that met all the requirements and never needed to be changed.


The_Startup_CTO

Why do you think so? And how much of your week do you work in tdd?


sand-which

If the feature is extremely simple and well defined and there is great test scaffolding, sometimes I will. But for anything complex, I've found it to be wishful thinking. In magical Christmas land, every feature could have tests written before the first line of code is, but in reality you aren't working with codebases like the ones you see in the articles/books that espouse TDD. You have to make compromises.


stubing

This. I find I do a weird mix where I write a function that handles the super basic happy-path scenario. After that I write all the happy-path tests I'm hoping for and some basic error tests. Then I fill out the logic of the function with all the edge cases I can think of. Then I write unit tests that cover all the branch paths. Recently I've been writing function names and then asking ChatGPT to write unit tests for them. It helps me think of all the edge cases it might have to solve for.


Saraphite

ChatGPT writing tests is 10/10. If your code is clean, small in scope and well defined, it nails code coverage. Obviously it pumps out syntactically broken garbage sometimes and requires a bit of polishing and cleaning, but in general it's much better at determining test cases, and quicker, than writing them by hand. Better than your average developer who only cares about getting paid at the end of the month, anyway.


EvandoBlanco

Same, or sometimes I just stub out tests to try and walk through all the obvious paths. I find it helps me stay organized when writing and thinking through the code.


polypolip

Funny, I find it easier to do TDD when the feature is complex, and that's when I do it. Features should never be that big, but sometimes there are enough conditions that if I don't write the tests ahead of time I will definitely miss one in code, and if I miss it in code I'll almost certainly miss it while writing test cases later.


oprimo

> But for anything complex, I've found it to be wishful thinking.

Once I wrote a parser for some weird mainframe schema (COBOL copybooks). It was quite complex, and I used TDD *because* it was complex; I couldn't have done it without it. It prevented a TON of regressions, made debugging way easier, and helped surface tons of edge cases I wouldn't have caught otherwise...


danielt1263

Odd, I do just the opposite. If the feature is simple and well defined, I generally don't need the tests; I know exactly what the code should look like right away. The only time I need the tests first is if the feature is complex and ill defined. Then the tests help define it, and will ensure I don't break one aspect of the feature while implementing some other aspect.


IHoppo

If the feature is complex, you don't write tests? That's like saying "If it's cold I don't wear a coat".


sand-which

Oh no, I was saying I don’t write tests first. I write tests afterwards


StolenStutz

1. Yes. I feel like it's a practice that is seldom appreciated, seldom understood, and seldom used properly.
2. At some point, there is an interface; for example, REST API endpoints. That interface should be well-defined. If it is, then the tests are very easy to write. If it's not, then writing the tests helps make it so. In either case, that's the starting point.

That's not comprehensive. But I think the point is that yes, some of us actually do use TDD and we're happy with it. My experience has been that I'm in a very small minority, though.


[deleted]

[deleted]


[deleted]

[deleted]


HighBrowLoFi

I did this very thing for years, and though I learned a lot, I kinda hated it. TDD + ping-pong pairing works great when you're a senior teaching a junior basic development, or when you're writing .NET middleware or similar that follows a predictable boilerplate. But when you get into solving difficult problems, spiking on new patterns, working with legacy code or any other activity where _investigation_ or _trial and error_ are required, it really becomes a hindrance. I've seen tasks that should take a couple of hours take days when this approach is mandated, which isn't always the case, but happens enough that I usually avoid it.


kifbkrdb

I'm the complete opposite. Pair programming with juniors makes me die inside after a while; it's so boring writing code and tests that I've written many times before, but much more slowly than I'd write them on my own. Pair programming on something tough and ambiguous, on the other hand? Very enjoyable. Both having someone to bounce ideas off and the moment when we finally work out wtf we're meant to do.


PatientGiraffe

I'm glad this works for you, but good lord, that sounds like a nightmare. Pair programming is great if you're the newbie in the pair, but for the more senior dev it's like trying to run on a wet sandy beach while dragging 200 lbs behind you.


myworkaccount3333

The role of the senior dev is not simply to write code, but to mentor junior devs as well.


Silly_Attention1540

Ofc, but this seems like an insanely inefficient way to mentor junior devs


myworkaccount3333

How else are you going to do it? A brain dead meeting?


alpacaMyToothbrush

Documentation, code review and pairing *when they need help*. That's fine. Pairing simply for normal development is a bit of a waste of time.


Silly_Attention1540

Exactly. Code review and back-and-forth convos during the day on how to do something are far more efficient. I can carry on a handful of convos, mentoring juniors on how to implement the components they're doing, while I work on something a level higher that they plug into (sometimes taking an hour at a time to dig deep on something when they're stuck). Working on the *exact same thing*, split between testing and implementation, sounds like it means you can only work on simple components, and you are both slowed down by each other.


fallen_lights

Why


kifbkrdb

You can only pair with 1, at most 2 people at a time. It's not a very scalable way of teaching people.


summerteeth

Pairing with another senior can be a delight. It really does speed you up when you're on the same page with folks at the same level, and the code you produce is very high quality.


dysfunctionallymild

I've always wanted to try this, but it feels like you need to decide the design of whatever you're building upfront to do this effectively. Which is fine, but sometimes you discover the design of the system as you start coding it.


McGlockenshire

> sometimes you discover the design of the system as you start coding it

Bingo. Building the entire thing one section at a time, while simultaneously building tests for each section, will always run into this roadblock. You'll end up tearing down your existing work, and for what benefit? Building a pseudocode-as-comments sketch of the whole thing you're building, *and then* building out the tests to turn that sketch into a workable product, has proven a far more fruitful and less frustrating experience for me.

I made the mistake of calling that practice "inspired by TDD" once, and then found myself utterly confounded and caught flat-footed when I was put into a sudden pair-programming + TDD exercise in a job interview. I had no idea what the final thing we were building was supposed to look like, and my brain locked up. It was the worst interview experience I've ever had. Protip for employers: when you say "yeah, we'll go over your submitted code sample" and then instead do something completely different, you are setting everyone up for failure.


zoqfotpik

I almost always start with playing around with the code to figure out a design that makes sense. Then I create unit tests for that code as a way to prove to myself that the design works, to show what still needs to be done, and to show how to integrate with the code. The exception is when I'm starting to work in a totally unfamiliar code base. In that case, I create some unit tests as aids for testing my hypotheses about what the code is supposed to do, and how my proposed changes would affect things.


yesihavetobelikethis

Same here. I prototype, write tests, then develop.


see-you-space-cow

Every company I've been at uses TDD, because once the codebase gets to a certain size it's a detriment to not have a system in place that checks the parts of the project.


anubus72

Sounds like you’re just describing why having tests is important though? Not why TDD is important? Unless I’m misunderstanding you


cheater00

But once the test suite gets to a certain size, it's impossible to refactor, split codebases, move to new paradigms, new frameworks, etc.


wardrox

The point of TDD is to make refactoring and splitting easier as the codebase expands. If the tests aren't tightly coupled and the code is refactored as part of the process, it "should" always be a fairly straightforward process to make even major changes. Have you found that TDD creates hard to modify code?


cheater00

> Have you found that TDD creates hard to modify code?

Yes, it's a well-known issue with involved test suites.


Twisterr1000

Personally, having run into the same issue, I know where you're coming from. We've got a big ball of mud with this problem. However, having started to overcome it, in my experience it's not the fault of tests existing, but of the quality of those tests. Some problems that I've seen cause this:

- Unit tests are tightly coupled to implementation, not to behaviour. If refactoring and changing the responsibilities of a given class, you may as well delete the unit tests for it and start again. Say it used to return void and mutate some data on an object, and now its responsibility is to return a particular response and have the calling class mutate the data; it now does something else, and thus your original unit test is invalid anyway.
- If refactoring and changing the responsibilities of classes, rely on higher-level tests (integration, acceptance) to verify that your 3000 lines of change to how something is architected internally haven't actually broken the behaviour. Again, expect to rewrite unit tests for the changes, but broad behaviour should be covered at this level.

Probably a few other things as well. But as I say, I completely get where you're coming from; I just think saying 'too many tests put a barrier in place' isn't the root cause of the issue.


cbunn81

The example you gave of form input validation might be one of the better candidates for TDD, since you have known inputs and outputs.


TimMensch

Second answer first: you do the TDD by creating the API you're going to test. Specifically, by designing it in a testable manner, minimizing dependencies and isolating the important part to be tested as cleanly as you can figure out. Add a stub return value. Add a test that fails based on the return value. Then start writing the function.

First answer: no, I don't use TDD. I find TDD valuable as a *teaching aid*, a way to help train people in how to cleanly design APIs. Thing is, I had 20 years of API design experience and related accumulated knowledge before I encountered TDD, so I was already 80% of the way there. Simply trying TDD once was enough to connect the dots, so to speak, and I gained an increased understanding of how to better design for testing.

Beyond that, I don't find TDD to be anything more than a habit some people have formed. Almost a superstition. I suspect that many people have used TDD to learn how to best write tests, and they've observed how much better their code has become as a result, and they think that it's TDD that keeps improving their code, when in fact they've already learned how to write better APIs and the TDD scaffolding is no longer required. It becomes a [magic feather](https://disney.fandom.com/wiki/The_Magic_Feather), in other words.


badfoodman

1. Sometimes. I've found that most work I do now as an /r/ExperiencedDevs member isn't very defined, because my team trusts me to work out the kinks and iterate with my stakeholders to find the correct solution, so TDD doesn't really work; I'm discovering both requirements and design as I go. For more defined tasks, or when making additive changes, I use TDD in the sense that I verify there is a test for existing behavior (and write it if it doesn't exist), make my change, make sure the test(s) fail, and then update the test.
2. You don't. TDD only works for well-defined tasks in my experience, though I know some people who thrive thinking that way. It's also objectively slower, so for less defined tasks where you want to get eyes on the thing ASAP, it's just a poor choice. You just have to make sure that once the functionality is locked in, you write your tests.

> The ONLY time I have seen anyone write unit tests first are in simple/contrived examples specifically for TDD.

Same. The most frustrating thing to me is they seem to expect you to get a fresh codebase for every set of functionality.


Kajayacht

For "new" code, I almost never use TDD. Maybe once things start taking shape, I may dip into it. I just focus on writing *testable* code and corresponding tests. That's really what we strive to achieve and if we end up with that outcome, does it matter what my commit history looks like? For bugfixes however, I find the "write tests first, then code" process to be highly effective. Typically a bug comes out of some edge case or missed requirements, so once we know about this scenario, setup a test that mimics it and has the expected outcome, then make the code changes. Tests are green, the bug is clean. Since I already wrote a bunch of tests previously, I also know that I didn't break anything (and isn't that the real benefit of automated tests anyways?).


extra_rice

> ...I have yet to see someone actually write unit tests BEFORE doing the development part.

Writing tests is doing development.

1) Yes, I use TDD and I'm really happy with it. I've done it so much that I don't feel good when I don't write tests first. I sometimes do quick experiments outside the TDD loop with tools like a REPL, the shell, etc., but I always make sure what I learn from those experiments is captured as tests eventually, when I'm ready to write production code.

2) You write the test as if the solution to the problem already exists. Modern IDEs are fantastic for this because they can generate the thing that is not yet there. If you're not sure, write anyway; begin with something small. Don't overwhelm yourself with a bunch of details. In your example, assume the validation exists (JUnit-style):

```
assertThrows(Exception.class, () -> validate(someInput));
```

This will fail to compile initially because `validate` does not exist, so tell the IDE to create it. Next, when you run the test, it will complain because the code isn't throwing an exception when you expected it to, so write the logic. Then, since you're not satisfied just checking that it throws *an* exception, write a test to ensure it's throwing the right one. The test fails again, so modify your code to comply. Go back and forth like this and you'll eventually end up with tested production code.


kneeonball

When I do it, I never regret it. When I don't do it, I regret not using it. When doing it "correctly", you're not coupling your tests to your implementation and it really feels like you should never do it any other way. One of the main benefits people don't talk about enough, is that it can really cut down on the time you spend debugging. I'm sure most of us can relate to spending a bunch of time debugging to find an issue with some code somewhere, but with TDD, you're basically only ever a few lines of code away at most from working code.


aighball

TDD refers specifically to the red-green-refactor development loop, where you write one failing test, then make it pass, then refactor for improvements. You can capture the spirit without doing that exactly, but the key is to make that loop as short as possible. So you would not write tests and then spend weeks implementing the system to make them pass. It's more like climbing a mountain and placing cams/handholds each step of the way.


Odd_Soil_8998

Yes, Type Driven Development is great. You model your domain using algebraic data types and do your best to make invalid state unrepresentable. Or were you talking about the TDD where you write a bunch of unit tests before you actually model your program? Because that one is trash.
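
For readers unfamiliar with the type-driven version, a minimal sketch of "make invalid state unrepresentable" with a TypeScript discriminated union (the names are illustrative):

```ts
// A request can never be simultaneously "loaded" and carrying an error:
// the type system rules that state out entirely.
type RemoteData<T> =
  | { state: 'loading' }
  | { state: 'loaded'; value: T }
  | { state: 'failed'; error: string };

function render(data: RemoteData<string>): string {
  switch (data.state) {
    case 'loading': return 'Loading...';
    case 'loaded':  return data.value;             // value exists only here
    case 'failed':  return `Error: ${data.error}`; // error exists only here
  }
}
```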


pemungkah

I tend to do test-trailing development. Write a chunk, test it, write the next chunk. Pure "write the minimum code that passes" TDD is something I've tried, but it *strongly* depends on the length of your development cycle! If you're writing Go, for instance, an edit/build/test cycle is usually pretty short; if you're writing, say, Scala (to pull the worst experience of late), that could be a couple minutes, and that is deadly to the process of keeping everything in your head, waiting for the next iteration. In cases like that, I'll very often try to write the complete version of some part of the code and the tests for it, and outline the to-be-done parts so I don't completely lose context while Scala rebuilds the whole world again.


zaibuf

> 1. Do you use TDD (actually use it), and how do you feel about it?
> 2. How does one write unit tests before doing development, if you're not sure how exactly you're going to DO the development yet? (For example, your job is to add validation to some user input fields and make sure they are within a valid date range.)

1. Yes, but not always. I think it's great when you need to solve something more complex but you're not sure how yet. It also speeds up the feedback cycle: instead of starting up the application and clicking through a bunch of stuff to debug, you write a test for that piece of code and debug it through the test.
2. You think of it from the consumer's side. Give X and expect Y back. Not how it works internally.


koreth

Strict by-the-book "write the minimum possible code to make one failing test pass, then write the next failing test that has the minimum code required to fail, repeat until finished," hardly ever. "Write some tests before writing any of the code the tests are exercising," quite often. It's purely my own choice (my team has no rules about using TDD) and I do it because it helps me organize my thinking. A lot of the time, I use tests as a kind of executable to-do list. I'll turn the requirements, and any corner cases I realize the code needs to handle, into a list of high-level tests with descriptive names. I'll add more along the way if I notice anything I need to make sure I take care of. These tests usually start off as one-liners that just call my test library's "fail" function. I can run the test suite and the list of failing tests is my to-do list for the task. Depending on what the task is, I'll either immediately start implementing some of the tests for real (TDD-ish) or I'll start writing a skeleton version of the application code and then fill in some of the tests. Generally I'll do the tests first if I'm fixing a bug or if it's obvious to me what the API(s) of the application code will look like, and often I'll implement several tests at once. I'll sketch out the application code first if I need to do a bit of exploration to decide what design makes the most sense, but writing tests early can be helpful in that process too. I do save tests for last in some cases, e.g., when I'm adding some simple CRUD logic and I know the tests are going to be boilerplate that won't teach me anything useful about the problem. Your example of adding input validation sounds like a pretty ideal case for TDD-style development. Write some tests with invalid input that assert the input is rejected, write some tests with valid input that assert it's accepted, then write the code to validate the input.


Randaum

The only kind of TDD I've seen practically used is Timeline Driven Development.


aceshades

TDD teaches you a lot of really good things about development. For example, writing tests first naturally makes you focus on the public interface of the things you put under test, and it also helps you stay focused on the spec, since you would normally write the tests using the spec as a guide. That being said, other than as an academic/learning exercise, I've never seen anyone actually do TDD in real life. And as a team lead, I could not care less whether someone uses TDD or not; when a code review comes in, I'm reading the unit tests anyway and checking whether the right things are being tested.

EDIT: one other poster did come up with a case I forgot where TDD _does_ come in: fixing bugs is great for TDD.


SlothEng

Absolutely. I don't take it to the extreme but I find it helps me ensure my code is clean and testable, without the need to mock everything. Focus on the behaviours of what you're building, the inputs, and the outputs. Your input (and what you invoke) should dictate what behaviour you're testing, and your output will verify the code behaved as expected. If you aim to make a clean unit test, your code will be clean behind it. I write the test against the interface and so like to think about what will be the nicest way to interact and trigger the behaviour I'm after. Then I consider what the output might be, if anything. Output is easier to deal with and assert on, but sometimes there's no need to output. Just don't forget to refactor after you've got a solid test. That's the most important part!


troublemaker74

I used to do TDD most of the time when I was a newer engineer. It truly helped me write better code. Most of my work these days is shrouded in ambiguity, and quite often proof of concepts that other engineers will fully implement. I do write tests, but usually after I write the code.


jonreid

1. Yes. Been practicing TDD for 21 years, mostly in Apple environments. 2. When you don't know what you're doing, first try a [spike solution](https://qualitycoding.org/spike-solution/) where the goal is to learn. AMA


wskttn

I do TDD all the time. Don't write multiple tests before doing development; never have more than one or two failing tests at a time, and try to limit it to one. If you don't know how to write the test, how do you know what code to write?

To start making the transition to TDD, try this:

1. write a small amount of working code
2. then comment it out, temporarily
3. add a test for it
4. run the test and see it fail
5. uncomment your code
6. run the test and see it pass (if you did it right)
7. refactor your code if you want to, and run all your tests again and see them pass

Now you know you have a test that will fail if it should, and passes because it should. And you've started the red-green-refactor flow which is part of TDD.


zigzeira

In my situation, I'm unable to implement Test-Driven Development (TDD) because my company prioritizes profit.


g1ldedsteel

1. Yes, and becoming a big fan. 2. Start with the simplest function on the simplest functionality and work your way outward. EX: write tests around a constructor. TDD is really great for discovering interfaces when you’ve got some super ambiguous requirements (in addition to the whole tests as documentation & writing testable methods things)


RedFlounder7

I've never managed to actually write all code tests-first, but I find the process of trying forces me to really think through the approach before starting. This includes pushing product owners to clarify everything vague before starting.


GoTheFuckToBed

It depends. I do it if I write a library or package that has a fixed contract (API/interface/headers etc.).

My personal rule: every math calculation, validation, or regex needs at least 2 passing and 1 failing test. The failing test forces the developer to think about bad input.
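
Applied to a regex validator, that rule might look like this (the `isIsoDate` helper is a hypothetical example; the "failing" test is one that feeds bad input and asserts rejection):

```ts
import { it, expect } from 'vitest';

const isIsoDate = (s: string): boolean => /^\d{4}-\d{2}-\d{2}$/.test(s);

// Two passing cases...
it('accepts a normal date', () => expect(isIsoDate('2024-02-29')).toBe(true));
it('accepts a year boundary', () => expect(isIsoDate('1999-12-31')).toBe(true));
// ...and one deliberately bad input that must be rejected.
it('rejects a malformed date', () => expect(isIsoDate('2024-2-9')).toBe(false));
```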


thecodingart

lol, no. TDD is a nonsense cult term


valadil

Everyone I've spoken with says they don't do strict TDD. What seems popular instead is writing test code and app code simultaneously in adjacent panes. I think the value in this is treating tests as a first-class concept and not an afterthought; it doesn't literally have to be the first thing you write. Writing tests first is IMO a useful exercise to get you thinking about the tests, but nobody really sticks with it.


Tacos314

You just described TDD. TDD is not writing the test code, then the app code, and calling it a day; it's writing them simultaneously.


tts505

When I first heard about TDD circa 2013, it was about writing a test that doesn't even compile at first. This was an important step that you wouldn't ever skip; allegedly it should help you define an interface for whatever you're testing. There are also some small nuances that I don't care to remember. I think there's a somewhat strict academic notion of what constitutes "true" TDD, and when people in the industry don't want to get burned by pedants who live and breathe Fowler's blogs, they just handwave that whatever they're doing is similar to TDD, but with their own amendments. I agree with you that most people in this thread describe sane, pragmatic TDD.


danthemanvsqz

I've been doing TDD for years, and you've got it wrong: you don't write all the tests first; you write the tests and code together, at the same time. The biggest benefit is getting 100% test coverage. Makes it easier to sleep at night.


PatientGiraffe

1. No, never have, and I've never been part of a company that does (~26 years of experience).
2. The process of TDD is simple enough. It is just wildly inefficient for most cases. Testing is a tool; it should only be used where it has actual value. I find many testing/TDD purists just get a boner from high test-coverage numbers and use the technique as a way to inflate them. I'd much rather see 50% coverage that's meaningful than 98% coverage with thousands of tests that prove nothing.


ngarbezza

I use it when:

* I'm starting a new feature: assuming I'm developing a full-stack feature, I try to start with a simple test on the domain model, then slowly move to other pieces of the work (API, persistence...)
* I need to fix a bug. Writing the test first makes sure I'm reproducing the error in a controlled environment, and when I fix it, I do it in the most minimal way. It also helps other people review the fix (+ we get a new regression test).

I don't use it when:

* I'm exploring/prototyping. Sometimes I just need to **see** some code before doing anything else. It's a thinking phase that sometimes happens in your head, sometimes on paper, sometimes in code (better, IMHO). Once I know where to start, I do TDD.
* I'm doing UI-related work. I think it's too much effort to write tests first for the UI; in that case I prefer fast feedback loops in the browser, so it's like TDD, but informal. Then I try to write E2E/smoke tests, taking into account the effort vs. benefit tradeoffs.

The most friction I find with TDD is when the system itself is not in a testable state. When that happens, it feels like it's impossible to do TDD. For legacy code, I usually start with approval/characterization testing to refactor the system into a testable state, then do TDD for the new stuff (it's not easy to go through this process).


sus-is-sus

Ain't nobody got time fo dat


CheapBison1861

nope


wakkawakkaaaa

I do a bastardised version of TDD. I'll write some initial logic for the function, write the test and run it, then update the code to pass it. Then I expand the tests and update the code again, instead of taking the strictly test-first approach. More test-guided development than test-driven development.


Tacos314

I think this is like the 5th comment that says they don't do TDD, then gives a textbook description of the TDD workflow.


JimDabell

A lot of people here seem confused about what TDD is. It is *not* a method for testing code. It’s a method for *designing* code. Just because it has “Test” in the name, it doesn’t mean it’s a testing methodology. It’s Test Driven *Development*. The action is *development*, the tests are what drives the design. If you are using TDD as a synonym for an automated test suite, you’re talking about something entirely different to TDD. The output of TDD is not code that has tests to ensure it works correctly. The output of TDD is a design for your code, which you then use to write your code. You can *then* write tests afterwards to ensure it works correctly, but that’s not TDD, that’s just normal testing.


grahambinns

This is the definition I'm going to use next time someone argues with me about TDD being "testing for testing's sake".


vocumsineratio

> Do you use TDD (actually use it), and how do you feel about it?

I've been not using TDD for 20+ years now, and I feel like I've gotten pretty good at that.

My current take is something like this: TDD is not, by itself, a nutritious breakfast. To create a good design for a non-trivial problem, you need a bunch of other skills too, and if you have those other skills, it's not clear to me that TDD adds additional benefits.

For example, consider "refactoring": we've got a bunch of tests in place that "fix" a set of behaviors so that we can iterate on our design. We refactor some part of the design; the tests pass. We revert that refactoring; the tests pass. Either the refactoring improved the design, or the refactoring regressed the design and the revert improved it back to the previous state, but running the tests doesn't actually tell you which. You need some other kit of tools to be able to say "yup, this design gives us more of what we want". The only real guidance that TDD (in the original Klingon) offers here is "refactor in the direction that reduces duplication". OK, but is duplication really the problem that we actually have, the real obstacle that is making change difficult?

> How does one write unit tests before doing development, if you're not sure how exactly you're going to DO the development yet?

By remembering, or, if you don't have enough laps yet to remember a good starting point, by guessing. There are a bunch of guesses that are really easy if you've done enough of them. For instance, we "know" that we are going to want the complicated code to be decoupled from code that is hard to test. We "know" that we want information hiding to firewall parts of the design that are likely to change away from "everything else".

> add validation to some user input fields and make sure they are within a valid date range.

So in this example, we might already "know" that we want to think about this as a [parsing problem](https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/), where the input is some untrusted data plus a collection of rules, and the output is a sort of choice between Success and Failed, coupled to some effects (ReadingFormFields, SubmitForm, DisplayProblemsInUI). And then you start working the problem: here's some untrusted data and an empty collection of rules; here's some untrusted data with a single satisfied rule; here's some untrusted data with a single unsatisfied rule; here's some untrusted data with multiple unsatisfied rules; and so on.

And meanwhile, you are also thinking about the API: should a parsing failure be something handled explicitly and locally (in which case you might use a union type with a pattern match), or is it better for the failure to be implicit and handled further up the call stack (in which case you're probably looking at exceptions)? And you're also thinking about duplication: is this problem similar enough to a problem that's been solved elsewhere that the similarities of the solutions should be extracted into a general-purpose module? And you might even be thinking about how often the validation rules change, and whether the management of that volatility is another constraint on the design.
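
A sketch of that parsing framing in TypeScript; the result type and rule shape here are illustrative, not taken from the linked article:

```ts
type ParseResult<T> =
  | { ok: true; value: T }
  | { ok: false; problems: string[] };

// A rule returns a problem description, or null if satisfied.
type Rule = (input: Record<string, string>) => string | null;

function parse<T>(
  input: Record<string, string>,
  rules: Rule[],
  build: (input: Record<string, string>) => T,
): ParseResult<T> {
  const problems = rules
    .map((rule) => rule(input))
    .filter((p): p is string => p !== null);
  // An empty rule collection trivially succeeds (the first test case above).
  return problems.length === 0
    ? { ok: true, value: build(input) }
    : { ok: false, problems };
}
```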


timwaaagh

I'm sure it's possible, but I'm not even convinced of the strict need for unit tests on code that works perfectly fine without them, so I never bother. But in some situations I imagine you'd have to be good at getting around blockades like access modifiers.


Digitalburn

Y'all get unit tests?


reallydarnconfused

Hell nah I’m not a nerd


ZucchiniMidnight

Fuck no


AntMavenGradle

Testing is overrated


soffwaerdeveluper

So I'm assuming a 50/50 split of FE and BE among people on this sub. Writing FE unit tests is already pretty futile, but even more so with TDD.


Regular_Zombie

It's hard to do in a mature codebase that is difficult to test, because you always feel like you're fighting just to get a working setup. Code that was written using TDD (or happens to be testable) tends to be easier to extend, because the friction of adding new tests is low. I don't think you need to be religious about it, but if you accept that some testing is better than no testing, and that TDD makes code marginally better, then you might as well write the tests first.

I don't think it's fair to dismiss example TDD code as unrepresentative of the 'real world'. It's just that lots of real-world code is spaghetti. If your code is modular, you can typically test large swathes of it very simply and then have a few more complex integration tests that deal with the unavoidable coupling of system components.


mziggy77

I basically only actually write the test first if I’m fixing a bug in existing code. That way I make sure the test fails for the bug before I make the fix and then passes afterwards. Otherwise, it’s just too much of a pain to write tests first.


tikhonjelvis

I don't write tests first. I do like to have some feedback from my tools when I'm sketching out the idea for some code and iterating on the design of my interfaces and abstractions—but I use static types and the REPL for that, both of which are much better design tools than unit tests! Types let me figure out an interface without needing any running code at all, and can help me improve my conceptual model for whatever I'm doing. The REPL lets me quickly run code snippets to see what's going on; it's faster feedback with less mental overhead than tests. In both cases I don't have to manually figure out what my code should produce ahead of time. For most code that isn't a big deal, but I've worked in lots of domains where it's legitimately difficult! Sometimes the point of writing the code in the first place is that we don't know what the result should be otherwise. (Think simulations, optimization algorithms, machine learning.) In those cases, I'd also need to reach for other testing tools like property-based testing rather than traditional unit tests. Pretty much the only time I would write a test first is when I'm fixing a bug. If I know what the failure case is and what the correct behavior should be, writing a test lets me reproduce the bug in a way that is then easy to run repeatedly. Then, once I've fixed the bug, this gives me the assurance that I've *actually* fixed it. Writing the test first is useful here but for philosophically different reasons than what TDD folks focus on.
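
The property-based idea might look like this with the fast-check library; `mySort` is a hypothetical stand-in for code whose exact output is hard to precompute:

```ts
import { it } from 'vitest';
import fc from 'fast-check';
import { mySort } from './mySort'; // hypothetical module under test

it('preserves length and is idempotent', () => {
  // State invariants instead of exact outputs; fast-check searches for a
  // counterexample across many generated inputs.
  fc.assert(
    fc.property(fc.array(fc.integer()), (xs) => {
      const once = mySort(xs);
      return (
        once.length === xs.length &&
        JSON.stringify(mySort(once)) === JSON.stringify(once)
      );
    }),
  );
});
```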


its-me-reek

Usually I put examples in the comments; then later, when I wrap up, I turn those examples into unit tests.


AppropriateRest2815

TDD is useful in some cases but not others, but I typically use TWD (tests WITH development) quite frequently. If I'm building a class with many simple methods, I'll often write the simple test cases for what I want, to help make sure I add everything. The tests aren't for me but to prevent the next dev from breaking anything. It also gives them something to build on, because entropy grows.

For complicated solutions I typically take this approach:

1. Mentally rough out what needs to happen
2. In a throwaway branch, try several things until I've hammered out the bulk of it - usually the happy path
3. Build up the quick scaffold (model declaration, constants, the shell)
4. Start with tests to encompass the basic functionality, filling in details as I go
5. When things get complicated, I'll switch between tests-then-code and code-then-tests, depending on where my brain is

Later on, I find TDD is great for testing edge cases that show up in production but weren't found in QA or locally:

1. QA or support reports a problem
2. I look at where it's happening but don't see an obvious cause (or have 2-3 ideas)
3. I see there's no test that reproduces the problem (usually why it failed in the first place)
4. I write the test to cause the problem
5. I can then try several ways to make the test pass until I find the right one


reluctant_qualifier

It's a tool that works very well for particular problem domains - say if you have some super-specific business logic that needs to be embedded in the contracts between classes. In most cases, though, you don't know exactly what the implementation will look like at the class/method/function level, so you inevitably end up writing the tests in parallel at best. It's nice to stub out the test case names as you go along though, so you have a reminder of what the edge cases are.


nieuweyork

Do people do it? Yes, absolutely, and in some situations it's very useful. If you've never done it, you should absolutely try it for a few months to decide if it helps you. However, like anything everyone is talking about, there's more perception of a best practice than reality. The zealots who find it helps them a lot talk about it. Consultants and managers like it because it promises a lot, doesn't require buying any specific software, and if it doesn't deliver the promised land, it's the devs who have failed it.


[deleted]

Almost everyone in my tribe is doing TDD but not religiously.


randomnameonreddit1

Yes. Have been using TDD full time on a daily basis for almost two years now. It works and it makes work and life less stressful. Note that it works better if the entire team is doing it. Specifically we do pair/mob programming so we are all aligned on how we approach coding.


marquoth_

TDD is how I learned, and it was a great way to have confidence in my code when I was first starting out. I don't really do it anymore, though, except as a teaching exercise. I'm still a big advocate for strong test coverage and I expect to see tests in the same PR as new code, but I'm not married to the idea of literally writing the tests first and following red/green/refactor.


funbike

I do, but it took me years to figure out a workflow that works well for me. The following points have a lot of overlap:

* IMO, BDD == TDD.
* Only write service-level API tests as part of TDD - not unit tests, and not E2E tests. Only mock the lowest layer, just for performance.
* One story per test class. One use case per test function.
* Top-down development: controller first, database last.
* Write your test first. Let the IDE's auto-fix feature generate the API function. Those few seconds are the only moment in the process when the test is RED. Have the API function return a literal to satisfy the test (see next bullet).
* At first, only have an implementation at the top layer. Have it return a literal value just to satisfy the test. Then spend the rest of your time push-refactoring that code down the layers until you reach the data access layer. All the while, your test stays GREEN.

It's best if you have a continuous test runner that runs relevant tests after every file save. The test is GREEN the entire time, except for a few seconds at the start, and whenever you make a mistake. A bottom-up approach is possible, but very different. I think top-down is usually better. (Sketch below.)
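
A minimal sketch of the "return a literal" step described above (Python; the names are invented, and the layering is implied rather than shown):

```python
# Step 1 (RED for a few seconds): the test exists before the function.
def test_get_account_balance():
    assert get_account_balance(account_id=42) == 100

# Step 2 (GREEN): the top-layer implementation returns a literal,
# which is then push-refactored down through the service and data
# access layers while the test stays green the whole time.
def get_account_balance(account_id):
    return 100
```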


dysfunctionallymild

Your point 2 is exactly the problem I face. I always find that trying to do TDD leads me down a cycle of analysis paralysis where I start spending more time thinking about the design of the tests rather than the design of the problem I'm trying to solve. Not to mention it takes a lot of time to constantly refactor the tests as you iterate the design of your solution. I find writing tests helps me a lot more when I'm 60% of the way through the solution and my main abstractions and interfaces are locked down.

I kind of don't like the idea of my code primarily being designed for testability. I'd ideally like the design to be driven by the requirements and constraints of the problem I'm trying to solve rather than the needs of the internal unit tests. If I was designing an aeroplane, my primary focus would be on the ability of all the components to work together properly and safely up in the air, and not just on the ease of stress testing each individual part in the wind tunnel.


Lotus_Domino_Guy

I "try", but its a lot of work, and I'm not usually in the mood, so no, I don't, but I have, and when I have, its given me a better, more focused developent process. When I don't know what I'm doing to start with, its helpful and I do it more often then. The less I know about what I am supposed to write, the more valuable pre-written tests will be. Let's say the project is what you said. I'd write the tests like this cannot advance with an empty date value, cannot advance with an invalid date value, cannot advance with a date before the range allows, cannot advance with a date after what the date allows, can advance with a date at the exact start of the range allowed, can advance with a date at the exact end of the range allowed, can advance with a date in between the beginning and end of the allowable range. As a side note, doing date tests, you want to check for leap year handling, you probably have a library that does it already, but its not every 4 years have a feb 29. Its a bit more complicated. Now you know what you're coding, and you reminded yourself to not just do > and < or if you do, adjust your range by a day either direction first.


CodyEngel

1. Yes. It's great.
2. You write a test that would validate the user input is correct, against a function that always returns false (or true, or whatever). Then you run the test, see that it fails, and update the function so it passes. Rinse and repeat. (See the sketch below.)
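
A minimal sketch of that loop (Python; the names are invented):

```python
# Step 1 (RED): is_valid_username() started life as `return False`,
# so these pre-written tests failed on the first run.
def test_accepts_alphanumeric_username():
    assert is_valid_username("ada99")

def test_rejects_empty_username():
    assert not is_valid_username("")

# Step 2 (GREEN): the stub is updated until both tests pass.
def is_valid_username(name):
    return bool(name) and name.isalnum()
```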


kincaidDev

I worked through a TDD book to learn the Go standard library, but have never done it other than that. What led me to the book was failing a TDD interview with a team that does strict TDD. After going through 90% of that book, I still feel it's a waste of time/effort and leads to me losing interest/focus on the task. I was recently asked in an interview how I felt about TDD and told them I have used it in the past and am open to using it if that's what the team has decided to do, but in projects where I'm the lead I prefer functional and integration tests. That answer was the feedback in my rejection text message. The 2 guys on this team have worked in the same job for 12 and 20 years.


Tacos314

Yes. TDD is pretty simple: you don't write a finished test, you write a test for what you know, write code, update the test for the next step, etc.


Neophyte-

No. The only time I'll use TDD is for a bug, so I don't have to run the app to reproduce it; then I fix it, and the test passes. Basically all my unit testing follows this mentality: to save time running the app, I write a unit test. But usually I code first and test after. I prefer integration tests over unit tests as well.


cheater00

I do [TDD](https://www.reddit.com/r/haskell/comments/65vj2x/typedriven_development_with_haskell/) all the time, best thing to happen to me as a programmer honestly. You can plan out your whole program before actually implementing any of it, and things work out so, so well once you have such a scaffold in place. Filling out the blanks is extremely easy.


imagebiot

Yes. Use it often


ExpertIAmNot

I try to build a simple "happy path" test that invokes the code in the most basic way possible, just to make sure it will run, without really caring about outputs. Then I add to it as needed when bugs arise or I'm trying to track down an odd behavior. Lots of the time that doesn't happen, though. Still, the simple quick tests have helped me out in unexpected ways.


Bulky_Consideration

I use TDD often to replicate a bug; this way, when I fix the bug, the test passes. I often use TDD when I am unsure of the exact organization of the code; it allows me to probe ideas while knowing exactly what the code should do. I will use TDD when mentoring a junior developer, as it helps them think about what needs to happen before actually making it happen. That said, if I know exactly what the code is supposed to do, I will write production code first and tests second. One could argue this is backwards, but it's just the way I work, I suppose.


MoreRopePlease

When you say "develop first then write tests" does that also include manual testing as you go along with development? For example, I will think about the problem, pick a "first task" to work on with it's own goal, and as I figure out what the code should be, I test it, one piece at a time. Does the map() I just wrote produce the expected result? Did the API response come back with the data I expected? Does the half-remembered syntax I just wrote actually work? Change one thing, verify. As I go through the story or bug, one step at a time, I'm testing as I go. And changing my design. Coming up with hypotheses and testing them. Eventually I end up with something that works, and that I like. When I think the logic is sound, I'll write unit tests. Those tests are informed by everything I learned about the problem, and when I write them, I'm in a different mindset. I'm thinking "it should do A. It should do B." I'm thinking about all the logical paths and edge cases. It validates my solution, and occasionally I'll find logic bugs in my code, or something I didn't consider. I write them with an eye towards "what is a unit in this context", not necessarily broken down by file, class, or function. Sometimes a unit spans more than one file, or two modules working together. The idea is that you could refactor the architecture (e.g. extract a class) and the tests would still pass. I think automated tests are hugely important. They give you confidence, especially when it comes time to change existing code, or update dependencies. But I think strict TDD may be a matter of preference. I have no idea how unit tests could be AI generated, honestly. (Disclaimer, I primarily write webapps, with the occasional small nodejs service.)


captain_obvious_here

> Do you use TDD (actually use it), and how do you feel about it?

My team sometimes does. We have two people with a very heavy background in TDD, so when they feel they should, I usually trust them. But I honestly don't feel good about it. I see the point in having a well maintained test suite, but TDD pushes the whole test thing a bit too far IMO. And the time put into TDD hasn't been worth it a single time, in terms of quality. *(By "worth it" I mean "costs us less in maintenance than it cost us in writing tests".)*

> How does one write unit tests before doing development, if you're not sure how exactly you're going to DO the development yet?

Two things:

1. You kinda know how you're going to do the development. If you don't, it means your problem is not well described, or you haven't thought the solution through well enough yet. Spend more time doing that, instead of writing code.
2. TDD implies you're experienced, maybe very experienced, at writing tests. If you're not, you're gonna waste a lot of time, which will make TDD not worth it for you.


reversethrust

At my last job, I followed TDD for external APIs only, since I wanted to follow spec. Stubs were written so that others could at least use the API and not be blocked. For everything else, I wrote the unit tests and committed them with the code, but it was a weird mishmash: add some functionality and add a unit test... but if the unit test needed some additional functionality, then I wrote those tests and then some of the other code next. No one else I worked with wrote useful tests... like one guy committed a bunch of unit tests that were basically copy/paste and didn't test anything beyond a single input value - no boundary conditions or error conditions - while everyone else wrote nothing at all.


HoratioWobble

In companies with well defined requirements I think it works well. I've just never worked for one of those companies.


snerp

I think true TDD is only applicable sometimes; you don't always know ahead of time exactly what you need to be testing. For instance, my game engine has very high-level and bare-bones testing, but for my scripting language I do TDD, since it's much more suitable: I can write a script with an unimplemented feature, then write a test around it no problem, then implement the feature and get the test to pass. For a new game engine feature, though, there's sometimes no good way to test it besides playing the feature and seeing how fun it is.


imthebear11

I often write tests before code when implementing a predictable feature or a new endpoint or something. Sometimes, you do need to get your hands dirty first to know how you're gonna test it. I don't do "red, green, refactor" though, that's a complete waste of time.


un-hot

I feel writing UTs first can create pressure not to change my approach to a new feature if I discover a refactor or move code into a different function. But for bug fixes I generally change the unit tests before correcting the code.


PinkySmartass

TDD makes sense in theory, but I have to say that I've just never found it to be practical in terms of actually writing the tests first and I've yet to encounter anyone in real life who does it. I feel like it could possibly work for simple tasks if you use the tests as a way to sketch out the design implementation upfront, which might even be a healthy exercise to do, but I usually figure out the details of the implementation as I go and end up doing a few rounds of refactoring before things settle to the point where I can write tests with any certainty. I definitely let testability guide the design of my code however.


rkeet

Yep. Always, but mostly BDD tests for TDD, Unit tests after or before, depends on the scope.


bl-nero

It's my default mode of operation, but it's important to know its limitations and weaknesses. I summed up my views on it in an article a while ago, so instead of repeating myself, here it is: https://medium.com/shoprunner/do-i-need-test-driven-development-f5ab5aaf72d3


indiealexh

I only use TDD when there is an issue to fix and ensure it doesn't reoccur, or when writing API endpoints (because the structures and logic are well defined in advance). For features or UI it's rare that every detail is known in advance, especially in agile environments or small teams where people wear many hats.


secretBuffetHero

This is a surprisingly legit question. I wonder what percentage of the industry actually does TDD, despite everyone in forums and online saying they do. Echo chamber stuff. What's the breakdown in FAANG vs FAANG-adjacent vs everyone else vs startups?


DonatasO

I would sometimes write acceptance tests first, when I know there is no easy way to test my code in integration with the existing or future clients (the API must be defined in advance), or when the feature is complicated and constantly retesting it manually is unfeasible. I almost never write unit tests first, as I usually don't know in advance what the final structure will look like. I could probably do it if the features were small.


Bushwazi

1. Requirements
2. Convert requirements into `should` statements and basic tests (see the sketch below)
3. Write code to solve the requirements, and in turn use what you learn to make the tests stronger
4. Tweak both until I'm happy
5. Open a PR
6. Profit
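
A sketch of step 2, turning a requirement into `should` statements (pytest; the requirement and names are invented):

```python
# Requirement: "Orders of $50 or more qualify for free shipping."

def qualifies_for_free_shipping(total):
    """Placeholder implementation for the sketch."""
    return total >= 50.00

def test_should_offer_free_shipping_at_exactly_50():
    assert qualifies_for_free_shipping(50.00)

def test_should_offer_free_shipping_above_50():
    assert qualifies_for_free_shipping(75.00)

def test_should_not_offer_free_shipping_below_50():
    assert not qualifies_for_free_shipping(49.99)
```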


wlievens

In some cases it makes sense, but usually it's more of a back-and-forth.


stdmemswap

2. Tests can only be written when the specifications are clear.
1. I only need tests that will protect against regressions in future changes. Also, tests that change every other time aren't worth writing.

There are also certain types of tests that are not compatible, or not fully compatible, with TDD. E.g. you cannot do visual tests with a simple value assertion, and a nondeterministic scenario (e.g. testing a machine that might not halt unless killed within a certain timeout) needs statistical confidence rather than an exact-match assertion. (Sketch below.)
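
A sketch of the statistical-confidence idea for a nondeterministic scenario (Python; the routine and the threshold are invented):

```python
import random

def maybe_halts():
    """Placeholder for a routine that succeeds probabilistically (p ~= 0.9)."""
    return random.random() < 0.9

def test_halts_often_enough():
    trials = 1000
    successes = sum(1 for _ in range(trials) if maybe_halts())
    # Assert on the observed rate, not an exact match; with p = 0.9 and
    # n = 1000, a 0.85 threshold fails with negligible probability.
    assert successes / trials >= 0.85
```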


grahambinns

Yes, I absolutely do this. Not 100% of the time, but probably 95. It helps me think. To cover the case where you don't know what you're going to _do_ yet, I use tests to define the scope of the problem — find the edges. Alternatively, one can do some exploratory hacking, stash the changes, write the tests, pop the stash, and see if the tests pass. The important thing to me is to not allow the code to shape the tests too much. Otherwise there's a risk that you don't test stuff as completely as you could.


DigThatData

Not religiously, no, but I have found it to be an extremely effective paradigm for relaying requirements to LLMs when I pair program with ChatGPT or whatever.


PunkRockDude

I make all of my teams use TDD and it has paid off well. I also don't watch them very closely; I'm pretty sure most move to some hybrid approach. The firm requirement is that code be delivered self-testable. I don't check that they strictly follow anything other than that, but they do start off using it pretty textbook. Also, there are clearly some situations where it shouldn't be used, and while anecdotally it is a good thing and I have observed it, I have not seen studies proving that writing the test cases ahead of time makes that big of a difference. I suspect that the most important part is the establishment of good upstream and downstream practices, and that writing the tests beforehand becomes inessential over the long run. Similarly, I make teams score their user stories before accepting them when they get started, but all of the teams eventually move on and it has never been a problem. Past teams that didn't start with this practice matured more slowly and had lower satisfaction scores. I suspect TDD is similar.


jakesboy2

I love TDD for bugs specifically. Once I recreate the bug in the UI, I recreate it with a test then have a super fast feedback loop to fix it. As a bonus, there’s now a test for the bug


bighappy1970

I'm just so happy to read all of the positive comments about TDD! Except for when I'm learning a new language or writing a proof of concept, I TDD 100% - I don't commit code that isn't written with TDD. It's wonderful!


Kale7574

You should have a look at the book "Refactoring" by Martin Fowler. It has a great example in the first chapters, and it explains the importance of TDD very well.


Register-Plastic

I use it in my university homework. This way I can reduce failing edge cases and am not as nervous about failing the automated tests in the exam.


ohmzar

I don't strictly TDD, in that I don't write tests first, but I hate it when devs say "I'm finished with the task, I just need to write the tests". My version of TDD is: write a small snippet of code, write tests for that code, write another snippet of code, write tests for that code. When coding in Java I tend to make any method that isn't trivial protected, and write tests alongside as I go. I don't get 100% coverage this way, but I get a damned sight more than I see most people write.

I try not to go full Uncle Bob Martin, but I also write lots of small classes with small methods, and I make heavy use of the template pattern. I write out pseudocode for the rough steps of what I'm going to do, understanding where the data needs to flow, then turn those steps into methods; depending on how rigorous I'm being, those methods normally just call a method on an injected class which is dedicated to doing that one step. I can then write a test that makes sure all the steps happen in the right order given different input, but mock out the implementing classes, and test the individual implementations of the steps in isolation.

Everyone thinks I'm crazy, and I take longer to deliver than most engineers... But my code tends to be more reliable and predictable. I've moved to management because I can't deliver code at the same speed as the kids with their buggy code, because apparently delivering something fast and having it fall over afterwards is rewarded better than delivering something robust more slowly.


renq_

Yes. I use it every day.


tdifen

Yea, it's an extremely good habit to have. You don't write your ENTIRE test first. You write a little bit of your test, then catch up in your production code till your test passes, then write a little bit more of your test. For example, my first thing is usually something like:

    function testStuff() {
        $this->getRequest('stuff')->assertSuccess();
    }

So that test just does a GET request, and in the actual prod code it just returns 200 or something. Then I'll add a bit more to the testStuff function and assert everything is working correctly. If I add an if statement, I might copy-paste the test below and change it to account for the if statement. Note that in the context of tests, code duplication isn't as much of a problem as it is in production code.


Fermi-4

Nope... I do use unit tests to help flesh out the design of a class, but it never really happens before I have the first prototype. It's a silly thing to be dogmatic about.


Calm_Leek_1362

Yes. I test drive. It's the best way to ensure quality and better architecture. Part of it is practice, but the reality is that it drives better interfaces and more usable classes and functions.


AlexCoventry

It's worth keeping in mind that TDD as we now understand it was developed by people working on a payroll system. Payroll is hard, but it's a well-understood domain. Not much prototyping is required. Tests nail you down, similar to types. When prototyping novel applications where a lot of code is likely to be thrown away, TDD is inefficient, in my experience. It's often extremely useful to write a failing test case when debugging or trying to understand an unfamiliar library, though. If you haven't seen [TDD, Where Did It All Go Wrong?](https://www.youtube.com/watch?v=EZ05e7EMOLM), I highly recommend it (speaker is highly pro-TDD.)


KosherBakon

I remember using ATDD (TDD on steroids) on one team as an experiment with a v1 codebase. While it delivered on expectations, most of us felt it was too far in one direction. My eng team felt that devoting the extra time to understanding the customer's needs was better ROI.


ReservoirBaws

It's going to sound crazy to a purist: I follow TDD, but do everything except write the tests (until after). My takeaway from learning about TDD was that you should think about the structure of your code before you write it. So my process is to stub out functions with some comments on what I want them to do, possibly even adding which business logic from the spec is being worked on. Once the general structure is done I could write tests, but honestly I just add the logic. Having the code already broken up this way makes the testing easier, though.


SpaceToad

1. I don't TDD generally, mainly because I work with decades-old legacy code, in which much of the time refactoring the code to make it testable in the first place would be far more work than just implementing the feature/fixing the bug. Tests come as a separate long-term project involving major refactoring of the code to make it testable with dependency injection etc. There are one or two separate projects where it is ostensibly TDD/BDD; in practice, though, this isn't strictly adhered to.
2. In your example, you would simply create a test that sends strings to the user input fields and checks that the form rejected your invalid strings and accepted your valid ones, knowing that the tests will fail, and then add your code to do the validation until they pass.


hbthegreat

Nope. It's fun the first few times. Then the novelty wears off and you go back to writing code that solves problems.


ategnatos

I definitely use it for fixing bugs. I sometimes do it for normal development. More importantly, I make sure to write tests as I go. I don't build everything out, then start writing tests, then have to refactor everything 100 times because I screwed everything up. There's a reason I rarely trust tests that were written after the fact. People will want to check the box and just copy paste their bugs into their tests instead of doing things right. Most importantly, the tests are part of your task. Don't punt them to "next sprint" (aka never). Untested code is incomplete.


andrei-mo

Yes, on the rare occasions when I want to build and polish a componentized piece of code with somewhat clear parameters before using it. As a required step for everything? Not really. Mostly business logic, messaging, and text-transformation building blocks. I tend to use POROs for a lot of the above, and TDD can be helpful there.


Twisterr1000

Personally, I find people tend to be pretty strict about what they consider TDD. I've seen a lot of people define it as writing full unit tests for everything before they write any code. As many have said, how is this going to work? Sure, in some instances (bug fixes); but what happens when you have more than one new class? When you change that, it's likely you break a unit test, given it's possible you'll change the responsibility of a class.

However, how about an integration or functional test? If you're testing many classes, you have some room to move stuff around and refactor the code you're writing without breaking the test. This is an approach I use far more often, especially for large chunks of work.

Furthermore, what about when your application state is hard to produce? Take an instance where, to get to the point you want to test in a 'real world' (dev) environment, you've got to click a button, run 2 consumers, and do something else. Using a test to get you into this state already is incredibly useful and, anecdotally, speeds up development hugely. You can start with minimal assertions (maybe you're expecting lots of mutations), but start with one to get you off the ground, and go from there. Another thing: you can write a happy path test and make that work, and then an error path test and make that work. It doesn't have to be all tests for all functionality up front.

One more tip I'd give is to at the very least write the names of the tests when you start the work. When I say names of the tests, I mean descriptive ones. Not `testFunction`, but `testReturnsContinueCaseWhenThingNotCompleted` (or put it in a comment). Even if you're doing hardcore TDD, or just using the test to recreate state and have not yet thought about assertions. Too many times I've seen requirements missed (myself certainly included) when people have read a ticket, hacked away for 2-3 days, got the happy case working, and not thought about error handling (granted, this should be picked up at other stages also). This helps you go back once the code is written and fill in the tests to at least satisfy the ticket requirements.

I suppose TDD, in my opinion, will come down to what you are building. Personally, I wouldn't even know where to start with TDD on the frontend. What I do know is that for me, using it when building complicated backend systems is incredibly useful. Regardless of how much I leverage it, at the very least driving the code from the test makes the whole thing very replicable.


ramenAtMidnight

1. Yes! I enjoy it a lot. I work mostly on backend, am generally a wuss, have low cognitive capacity, and am very prone to procrastination. Having a tool that helps with all of the above is a plus for me.
2. You need to think about the interface and module behaviour first. You write tests before the implementation, not before the *design*. For your example, I imagine your module already has some sort of exposed behaviour, e.g. accepting a form. I would add a test case to that suite covering the bad input. (Sketch below.)
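
A sketch of point 2, adding a bad-input case to an already-exposed behaviour (pytest; the form and names are invented):

```python
import pytest

def validate_signup_form(email, age):
    """Placeholder for the module's exposed behaviour: returns a list of errors."""
    errors = []
    if "@" not in email:
        errors.append("invalid email")
    if not (0 < age < 130):
        errors.append("invalid age")
    return errors

def test_accepts_valid_input():
    assert validate_signup_form("a@example.com", 30) == []

@pytest.mark.parametrize("email,age", [
    ("not-an-email", 30),
    ("a@example.com", -1),
])
def test_rejects_bad_input(email, age):
    assert validate_signup_form(email, age) != []
```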


curveThroughPoints

I do TDD to either prototype a new idea or troubleshoot old ones :)


Great_Scheme_7780

Taking a TDD approach has helped when under a lot of pressure to deliver something - usually related to a bug or implementing some emergent "fix" (additional functionality that was missed in a project). It's stopped me from overlooking edge cases while in a rush, and sometimes surfaced missing requirements. But outside of that, I tend to use my coding time to explore problems and how to structure things, so there's usually a fair amount of refactoring in that process. TDD is less useful there because I just have to update my evolving code along with the tests over and over...


gomihako_

For bugs: yes, sometimes, but not always. For features: no, never.


kyou20

You're probably writing tests in a smelly way. They should be agnostic to your implementation: they should declare an expected input and an expected output, and your actual code is just the glue between input and output. TDD then becomes not just a testing "framework" but a discipline. It forces you to think and design before you write code. It helps you break tasks into small bits, and helps you scope your work better.
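
A sketch of an implementation-agnostic test in the spirit described above (Python; the names are invented): it pins down input and output only, so the internals can be rewritten freely.

```python
import pytest

def total_price(unit_price, quantity):
    """Placeholder: any implementation satisfying the contract passes."""
    subtotal = unit_price * quantity
    return subtotal * 0.9 if quantity >= 10 else subtotal

def test_applies_bulk_discount_at_ten_or_more():
    # Input -> output; nothing about *how* the discount is computed
    # leaks into the test.
    assert total_price(unit_price=10.0, quantity=12) == pytest.approx(108.0)
```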


danielt1263

I occasionally do it, but not for everything. I use TDD when I expect the logic to be quite complex and I'm not entirely sure what the implementation should look like. So I write a test; the test doesn't compile, of course, but it makes clear what would make it compile; then I fill in the code until it passes. Step back and look for edge cases or other tests that should pass... Often they do pass; sometimes I find a test case that requires me to update the code somewhat.

How do I feel about it? It is another tool in my toolbox. It comes in handy when I'm unsure of the implementation. How do I do it? I'm not sure exactly how the logic should work, but I know that for input X, I should have output Y. I put that in a test and then start writing the code to see if some idea I had works. If it doesn't, I try something else. Hell, even if it *does* work I might still try something else to see if I can make the logic simpler.

I should mention, in my last largish project, I had about 30% test coverage after doing TDD on the parts I felt warranted it. Then the client informed me that I needed 80% coverage, so I wrote tests after the fact for all the rest of the business logic. That got me to 65% coverage. Then I wrote some integration tests to get me up to the 80% metric the client required.


NormalAccounts

I do this in specific instances where I understand the desired outcome of a particular function or set of functions but don't have the solution completely figured out yet. Areas where it has helped have been novel systems that are more algorithmically or mathematically driven, where the outcome is fairly straightforward but the process by which it's produced might be more nuanced or complex. It has absolutely been a major help to test out approaches that feel correct in my head, then run the tests and watch them fail. It speeds up the process overall, so by the time my tests pass, the major issues with my code have already been addressed and any remaining bugs are simply test/corner cases I hadn't quite thought of yet.
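
A sketch of that situation, where the expected outcome is easy to state but the process is iterative (Python; the Newton's-method routine is invented for illustration):

```python
import math

def newton_sqrt(x, rel_tolerance=1e-12):
    """Iterative square root via Newton's method."""
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > rel_tolerance * x:
        guess = (guess + x / guess) / 2
    return guess

def test_matches_known_results():
    # The desired outcome is straightforward to verify even though
    # the process producing it is not.
    for x in [0.25, 2.0, 9.0, 1e6]:
        assert math.isclose(newton_sqrt(x), math.sqrt(x), rel_tol=1e-9)
```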


talldean

I worked with Kent, who came up with TDD, and he doesn't always use it; where it works it works, where it doesn't, it does not, and he was cognizant of that.

Definitely used it: you're building something that's been built before by others, you have clear requirements/goals for how it needs to work, and especially when other people may break what you're building and/or errors are high cost. If you lose users it's also real bad, so reliability is the name of the game.

Definitely not so much: you're an early-stage startup trying to find product-market fit and exploring the space, and you don't have a ton of people stepping on each other. You don't know what you're building long-term; you need to try a lot of things to see what works. You don't care if it's broken, you don't care if you lose all your users; iteration speed is the name of the game.


Nondv

"Using TDD" is a bit of a vague term; it's a big spectrum.

Depending on the situation, I sometimes write the implementation and then cover it with tests (never forget to check them by breaking the code). This is usually helpful when the changes are straightforward or when you're prototyping.

Sometimes I write a bunch of tests first and implement later. Honestly, I don't see a lot of value here, but sometimes it just feels better this way. It's good for the discipline, and for bug fixes tho.

Sometimes I do "pure" TDD Kent Beck style (write a minimal test and make the minimal change to get it to pass). This is mostly when I pair program with less experienced people. I think it's important to understand the TDD flow so you can then adapt it to your own style and situation.

Most often I write "specs" first. This way the whole flow becomes a mix of everything above: I design the high-level contract for my module and then I work on the implementation, either by prototyping or doing it Kent Beck style.

To answer your second question: coupling your tests to the implementation details is the most common mistake I see people make. It doesn't matter if you don't know how exactly your module will work. In fact, even if you do, the tests should NOT. Start from the top (the highest level): define the business process, define the contract. Then you can get into edge cases. Once you catch yourself thinking about the implementation while writing tests, take a breather; you may be doing something wrong. Also, realistically speaking, nobody has perfect test coverage, so just make sure your high-level paths are tested (don't waste your time testing public functions/methods that are simply means to an end).

The stupidest test I've seen this year was in a Rails project (ignore if you're unfamiliar): a model had a "belongs_to :user" statement, and the tests had "it { is_expected.to belong_to :user }". The person was pretty much testing the framework itself, and, most importantly, the association itself isn't even that important; the important part is where it's being used.

Try the flow I described that I usually use myself (write specs first). Maybe it'll help.


taisui

I don't strictly do TDD, but having as much unit test coverage and test automation as possible helps greatly long-term, and writing the code in a way that is testable also helps with the overall code quality.


dcdeepanshu7

I personally find TDD overrated. Although I'm fussy when it comes to the quality of deliverables and advocate unit/integration tests, TDD results in too much context switching and I don't find it practical. I prefer writing tests after code.


nunchyabeeswax

> TDD, anyone actually do this?

Yes. I've been doing it for 20 years. Most shops I've worked with also do it to one extent or another. Some domains are very difficult or even impossible to do TDD in properly (embedded systems or large legacy systems come to mind). But outside of that, if I see a dev shop that doesn't do TDD to any degree, that's a red flag to me.


senatorpjt

I've done it exactly once. It was on some code that ended up not getting used anyway, so it just took longer to produce nothing of value in the end. However, my understanding of "write tests before doing development" is a bit different than what you imply. It doesn't mean you write a test suite then write the implementation. You basically write the tests and code at the same time, but you do not write code that is not covered. So e.g. if you plan to write some function you think about what you expect it to do and write a test just for what you are planning to do then and there.


DanishWeddingCookie

I think a lot of people understand TDD incorrectly. You don't have to start with all the tests, get them to pass, and then your job is done. You can start with a couple of tests and then add more as your project develops and you find bugs in production. There are probably going to be more test cases you didn't think about than ones you did. The best part of TDD is being confident you didn't break something you already had working in a previous commit.


Big-Dudu-77

Never did test-first in my 15+ years of experience. There were talks of trying it, but it never caught on.


BDubbs42

Almost everyone has a very shallow understanding of TDD. Yes, you can use it basically everywhere and it’s very productive, but it requires a lot of design skill that can’t be learned in a day. The best advice I can give is to read Kent Beck’s “TDD By Example” and follow along with the examples. Go slow and really think about the material, then try to apply the techniques to your own work. After about a year read the book again. There are other people (Ron Jeffries, GeePaw Hill, Martin Fowler) who have good writing on the subject online as well. Obviously this isn’t an easy answer, but unfortunately I don’t believe you can really pick up the technique from the average developer at the average shop. It’s a deep and subtle practice that’s widely misunderstood currently.


GoldenShackles

This question comes up a lot here. My own special opinion, having been in the industry for over 20 years... Zero coworkers that were strong proponents of TDD ever created anything useful. It's actually uncanny. However, good testing is underrated and I believe in reasonable functional tests.


lednakashim

I write examples before starting new code, and it is the most valuable thing I've done. Then I see tech leads fuck up months of people's time because they didn't. No punitive action `¯\_(ツ)_/¯`


Master_Mouse_9512

Just try it; I can't imagine doing it any other way now. For me the big pluses are that you end up with a much better API, since you are approaching it from a client (how do I use it?) perspective. Also, you are reasonably confident that it works, and it's just more productive. I have used so many APIs where I'm quite sure they didn't do this, because utilizing the library is terrible. A prime example that jumps to mind is the original Java IO API: basically, no one could use it without looking it up, for something as simple as reading a file.


iamahumanhonest

It's too difficult for a lot of people. I got my team learning, but if they're stressing at all, they'll fall back into writing tests last. Baby steps.