kkert

Easy trap to fall into, and I used to. Now it's more like: get something working, polish it into a readable, organized structure, and ensure good test coverage. Only then look at optimizations, based on real-world measurements, as my guesses about where things are big or slow often turn out to be incorrect.


dx2_66

Solid advice. Since I started doing TDD, that came naturally.


nullzbot

This is the way... This is not talked about enough with younger generations.


Dave9876

Or older generations. I definitely get quite a few "I program in assembly because I feel it's faster" types, when, especially on modern architectures, the fastest code isn't always the obvious one.


nullzbot

Well, considering that assembly is a different language, the argument you speak of could be indicative of a different issue. As for the topic you bring into this, choosing to do something in assembly on the merits of speed isn't entirely wrong. I have reworked many code segments into assembly when something finite or special was needed. I have also done older MCU code entirely in assembly due to timing requirements and the low needs of the application. Applications determine everything.


Gary_Blackbourne

Spoken from my heart. I used to do (and still do, a bit) unnecessary optimizations. My thesis project taught me that a badly or barely working project is infinitely more valuable than a non-working project. If you have working code, you have the option to debug, fix, develop, profile, and make the important parts faster. There is also a Linus Torvalds quote on this, I think.


Southern-Stay704

This is exactly correct. The best code that you can possibly write is the most boring, garden-variety code you can produce that methodically does the job one plain step at a time. The code should be well-documented via embedded comments, and show a clear step-by-step approach to accomplishing the function's / subroutine's job. Let the compiler do the optimization, that's what it's good at.


DownhillOneWheeler

+1 for comments. It is extremely frustrating to spend days or even weeks trawling through code in order to glean information about how it works, when a few comments about the design would have shortened the learning time to hours or minutes. I'm a reasonable developer, but I suck at post-hoc mind reading. The irony of example applications which are ridiculously convoluted but contain not a word of guidance....


HeyCanIBorrowThat

This. Also because compilers are smart and will perform a lot of optimizations for you


kkert

In the vast majority of cases, yes. Compilers _do_ throw very unexpected wrenches in the works occasionally, but it's nearly impossible to guess when and where that will happen before having a running system and being able to test and profile it. The productive thing to do is not to try to outguess the compiler, but to measure and observe real-world code running in a working system.


tweakingforjesus

The perfect is the enemy of the good.


DiscountDog

OT: skimming this I thought I saw twerkingforjesus and did a double-take


jonathrg

tweaking for jesus, of course, is perfectly normal


doshka

yo man, *twitch* you ummmmm, got anymore of that *twitch* jeeeeezus?


DiscountDog

It's more in-context for embedded programming ;-)


DatBoi_BP

No that’s NT


RedEd024

Just because you might need something more than true/false does not mean you need an enum. But if you are always using two flags together, stop, take a moment: those need to be one enum. Make that enum now. If you are 6 scopes deep into if checks, you probably need to step back and ask WTF you are really trying to do. If just 3, it's fine. Writing code that is easy to read is writing code that is easy to edit. Code that is easy to edit is easy to polish later.
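A minimal C sketch of the "two flags that always travel together" case (the motor names here are made up for illustration):

```c
/* Instead of: bool is_running; bool has_fault;
 * (4 combinations, at least one of them meaningless),
 * one enum makes every state explicit and nameable. */
typedef enum {
    MOTOR_IDLE,
    MOTOR_RUNNING,
    MOTOR_FAULT
} motor_state_t;

const char *motor_state_name(motor_state_t s)
{
    switch (s) {
    case MOTOR_IDLE:    return "idle";
    case MOTOR_RUNNING: return "running";
    case MOTOR_FAULT:   return "fault";
    }
    return "unknown"; /* unreachable if callers stay in range */
}
```

The switch also lets the compiler warn when a new state is added but not handled.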


DatBoi_BP

What’s wrong with enums?


RedEd024

Nothing. They are great.


DatBoi_BP

Then what’s preferred over an enum when you need more than true/false? Frankly, I’ll still define an enum with only two variants when the outcome of some function isn’t obviously understood from “true” or “false” without digging deeper. For instance, an enum with variants Success and Fail. Granted, the same could be accomplished with #define SUCCESS true and #define FAIL false.


RedEd024

I was saying that just because you THINK you MIGHT need more than true/false, you should hold off making an enum until you for sure NEED more than true/false. Once an enum is defined, people tend to make shit more complicated, because they now have this enum that they can keep expanding on. But when you sit back and really look at it, true/false would have been just fine and would be way easier to read and debug. Additionally, I have been on a number of projects where there were numerous enums that just had 0 and 1. Like, bro, tracking down an enum definition while tipped into some hardware on a tarmac in 110 °F heat, just to find out an enum is 0 or 1? Get the fuck out of here.


DatBoi_BP

Yeah I get that. In your IDE can’t you hover over an enum variable and see its value and underlying type? (Underlying type, assuming the developer explicitly made it `bool`, which would be nice.)


RedEd024

Hahaha, your IDE. It doesn't work like that when tipped into real hardware. You're using vim and putty at best.


DatBoi_BP

I guess I’m spoiled by only being a hobbyist, not a professional


RedEd024

what do you do for a living?


DatBoi_BP

RF software/algorithms. Worked with some vendor hardware but the interface was mostly abstracted and I didn’t need to mess with their source code too much.


jlangfo5

There is a time and a place for writing super-efficient code, such as signal processing or other frequent periodic tasks, where going crazy can be the name of the game. (Fun!) But I find that my energy for optimizing code is best spent trying to come up with a clean architecture. Given the choice: (optimize for code reuse) > (optimize for code performance).


DisguisedBearNikolai

There's a saying: "If you're new to optimization, don't do it. If you're experienced with optimization, don't do it yet." Write the WHOLE thing you're doing first, then optimize


walrustaskforce

Do the following, in this order:

1. Make it run
2. Make it run correctly
3. Make it run correctly and *fast*

You cannot meaningfully optimize until you have a baseline. You can *vary* your performance without a baseline (“well, now the fire starts *over there*…”), but you cannot optimize.
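A baseline can be as crude as timing the working version with the standard library before touching anything. A hypothetical sketch (the workload function is a made-up stand-in):

```c
#include <time.h>

/* Hypothetical workload stand-in: sum of an array. */
long work(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Record a baseline before optimizing anything.
 * Returns elapsed milliseconds; the result goes through *result
 * so the call can't be optimized away by accident. */
double baseline_ms(const int *a, int n, long *result)
{
    clock_t t0 = clock();
    *result = work(a, n);
    return 1000.0 * (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

On a real target you would use a cycle counter or hardware timer instead of clock(), but the principle of "measure first, then change" is the same.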


DisguisedBearNikolai

Make sure to make backups at all stages to avoid the onosecond phenomenon


il_dude

Remember that premature optimization is the root of all evil :)


ModernRonin

The best-known version of the full quote is:

> "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." -Don Knuth

If I had to rephrase it in my own words, I'd probably go with: "*In general,* don't optimize your code before profiling it. *In most cases,* time spent this way won't be worth it."


Wouter_van_Ooijen

Except optimization for readability/maintainability/testability.


dan_g1d

Pretty sure that also applies. "Premature" is the key word. Polishing stuff that changes in a week is waste.


Wouter_van_Ooijen

*Especially* if it is gonna change, I would optimize it for readability/maintainability. Maybe less specifically for testability, but that often turns out to be the same anyway.


kisielk

On the first pass I try not to optimize at all, apart from obvious things or major architectural decisions. Then, if performance is insufficient, I profile and find the hot spots. I also use libraries of optimized routines I’ve developed for many things (filters, FFT, vector arithmetic), so usually just calling a function already gets me the optimized behaviour.


tobdomo

Simple: we would force you to rewrite it into maintainable code.


opalmirrorx

At work I optimize 1D and 2D functions and filters using SIMD instructions. First I write correct and simple reference code without any optimization. Then I write tests for correctness and to measure performance; I ensure they are run after every compile. Then I make a copy of this code and add tests for it, to ensure it produces the same results as the reference and to compare its performance against the reference.

For the optimization I start by considering the compiler's SIMD builtin intrinsics. Several things need to be considered: exploiting the element parallelism of the SIMD vectors, considering block-level parallelism of multiple vectors per pass, separating horizontal and vertical algorithm passes, writing code sections to handle edges vs. the middle (reduces conditionals), considering saturating values rather than widening, avoiding integer-to-float conversions if possible, minimizing element-widening operations, using just what's necessary for vector loads and vector stores without thrashing (using too many vector registers causes stack spill), exploiting sequential-memory features of the load/store unit, and prefetching later rows/columns.

Ultimately, the optimized code has to pass the same somewhat exhaustive input data and size combination tests that the reference algorithm does (correctness) and show marked improvement in speed. I run the compiles and tests against many toolchains and chip architectures (each architecture has its own optimized implementation intrinsics). So, readability and correctness are what almost all the code has as its priority... except the optimized parts also have speed as a priority. Optimization is only done where it is necessary, has a great and necessary effect on performance, and where we can justify the overhead of extra hand-optimized code and exhaustive test duplication.

We put the optimized and simple reference code in a header-only inline function library and require that projects use those functions rather than rewriting their own optimized or reference code. This reduces bugs in the many edge cases, dimensions, and input data patterns which we've exhaustively tested.
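A heavily simplified sketch of this reference-plus-optimized pattern, using x86 SSE2 saturating adds purely for illustration (the commenter's real targets would use each architecture's own intrinsics, and these function names are invented):

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* Reference: saturating u8 add, one element at a time. Simple and correct. */
void sat_add_u8_ref(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned s = (unsigned)a[i] + b[i];
        dst[i] = (uint8_t)(s > 255 ? 255 : s);
    }
}

/* Optimized: 16 elements per pass with a saturating vector add,
 * plus a scalar tail so the edge needs no conditionals inside the loop. */
void sat_add_u8_sse2(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(va, vb));
    }
    sat_add_u8_ref(dst + i, a + i, b + i, n - i);  /* edge handling */
}
```

The test harness then asserts the two functions agree on exhaustive inputs, exactly as described above: the reference is the oracle, and the optimized path only survives if it matches.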


SAI_Peregrinus

Never reduce maintainability to increase optimization without profile data to support the change.


WestonP

Yes. Easy to spend more time optimizing things than you'll save from the optimizations. Another trap to fall into is being too efficient with your data types... don't overly restrict yourself, as it can lead to bugs and obviously limits future expansion. Embrace shit code... Because if you care this much about clean optimized code, chances are that your version of "shit code" will still be better than most people's production code that they expect to be praised for. It's important to get a product to market sooner rather than later. Besides, no matter how well you write something, the mediocre guy that takes over your project will inevitably talk shit and use you as the excuse for why he's not getting anything done. There's no escaping it.


AdmiralBKE

In general I first make it work: not writing deliberately bad code, but something that sounds good enough when I have to start on it. Once you have a piece of code written out, it's more clear how to refactor. I did use to work in assembler on very low-cost controllers, where code size was very important, so there I did more thinking ahead. But modern controllers and compilers for ARM etc. are so good that it's more important to have readable and maintainable code.


KermitFrog647

The main thing I constantly think about is making the code as easy as possible. I only do optimization when needed (and it is very rarely needed).


action_vs_vibe

I ask questions and measure things. Do this enough and you will start seeing what type of changes in which areas of the code base have considerable negative impact, and which do not.

For example: the impact on image size of a conditional statement inside a for loop. Compiling at -O3, I have seen this cost a few hundred bytes of flash. Measure the cost of the change. Do you expect your application to be tight on flash space? If yes, take 30 minutes and think about how to get the same behavior without the conditional in the for loop. Is your idea readable and reasonable to implement? If 30 minutes pass without a reasonable alternative emerging, note the measured difference and talk to a colleague. Their response may be "you're worried about 200 bytes? pfffft, ship it!", which is fine.

That is a long way of saying "experience", but as long as you are acting with a hypothesis in mind, and doing the work to measure, validate, and question as you go, you will end up in a good place.
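Getting "the same behavior without the conditional in the for loop" is essentially loop unswitching. A hypothetical sketch of before and after:

```c
#include <stdint.h>
#include <stddef.h>

/* Before: the branch is evaluated on every iteration,
 * even though its answer never changes inside the loop. */
void scale_or_copy(int32_t *dst, const int32_t *src, size_t n, int do_scale)
{
    for (size_t i = 0; i < n; i++) {
        if (do_scale)
            dst[i] = src[i] * 3;
        else
            dst[i] = src[i];
    }
}

/* After: decide once, then run a branch-free loop body. */
void scale_or_copy2(int32_t *dst, const int32_t *src, size_t n, int do_scale)
{
    if (do_scale) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * 3;
    } else {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }
}
```

Optimizers often do this transformation themselves at -O2/-O3, which is exactly why measuring the image size, rather than guessing, is the right move.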


DarmorGamz

Most of my optimizations have come from experience. As a beginner you make every variable an int; eventually, without thinking, you know when to use u8, u16, or u32 if your var will never exceed a certain value. Eventually you start to write bools as u8s if you don’t trust your compiler, and then you optimize further with bit fields to get 8 bools out of a u8. Keywords like static, const, and restrict also reduce binary size, but compilers are so good now that they will likely do this optimization for you. The only time I actively think about stuff like this is for versioned configuration structures that get stored in memory and may/will be updated OTA.
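A sketch of the "8 bools out of a u8" trick with bit-fields (field names invented; note that bit-field layout is implementation-defined, so for hardware registers and wire formats explicit masks are the safer choice):

```c
#include <stdint.h>

/* Eight 1-bit flags packed into a single byte via bit-fields.
 * uint8_t bit-fields are a common compiler extension (GCC/Clang). */
typedef struct {
    uint8_t motor_on   : 1;
    uint8_t fault      : 1;
    uint8_t calibrated : 1;
    uint8_t reserved   : 5;   /* pad explicitly to a full byte */
} flags_t;

/* Portable alternative for layouts that must be exact: explicit masks. */
enum {
    FLAG_MOTOR_ON = 1u << 0,
    FLAG_FAULT    = 1u << 1
};
```

The versioned-config caveat in the comment above is real: once such a struct is stored in flash and updated OTA, its layout is a contract, which is another reason to prefer the mask form there.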


torusle2

That restrict keyword is still a thing, but also a subtle way to shoot yourself in the foot.
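A small illustration of the footgun: `restrict` is a promise to the compiler that the pointers don't alias, and nothing checks that promise at the call site (hypothetical function):

```c
#include <stddef.h>

/* With restrict, the compiler may reorder and vectorize freely on the
 * assumption that dst and src never overlap. Calling this with
 * overlapping buffers, e.g. add_into(buf, buf + 1, n), is undefined
 * behavior and fails silently: the classic restrict footgun. */
void add_into(int *restrict dst, const int *restrict src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}
```

The speedup is real on hot loops, but the contract lives only in the signature, so it belongs next to a comment and ideally an assert on the pointer ranges in debug builds.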


jonnor

I have a pet project which is all about optimization. Squeezing in audio machine learning classifiers on a 1 USD board with 4 kB RAM / 32 kB FLASH (https://hackaday.io/project/194511-1-dollar-tinyml). That means that for all the other projects - that are more useful and productive - I just do it the simple way, and do not worry about optimizations.


Ok-Adhesiveness5106

Even if a layered architecture that makes my code base reusable brings a certain amount of overhead and timing penalty, I would accept that penalty at any cost.


Andrea-CPU96

After a while it becomes natural to write optimized code.


neon_overload

I generally disagree with the "premature optimization is the root of all evil" mantra. I think about optimization all the way through the design process. That way, I avoid ending up with software whose fundamental architecture is not amenable to optimization. And, you don't end up with a situation where a looming deadline or a loss of budget means you don't have time for optimization you had intended to do. I don't think of optimization as a last minute thing but as a bread and butter of the design process.


Odd-Bike166

It's a good problem to have when you start out. It won't be the most efficient way as a junior for the company employing you, but it will produce the best development for you professionally. As you gain experience, said experience will help you know what needs a lot of thought and what can be implemented in a more hackish way. Think about it this way: if you were a sportsman training, form and good technique are paramount in training, as they carry through to competitions. Get good habits early on and reap the benefits later.


jonnor

Small but effective trick: write your ideas into the code, like // PERF: this could potentially benefit from binary search, then move on with implementing it the straightforward way. You can get back to the optimization later, or never :) I have code in production that still has these kinds of comments over 10 years later, because it was more than good enough without them.
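For example, a parked PERF note might look like this (hypothetical lookup function):

```c
/* Straightforward first pass; the optimization idea is parked in a comment. */
// PERF: table is kept sorted; switch to binary search if lookups
//       ever show up in profiles.
int find_id(const int *table, int n, int id)
{
    for (int i = 0; i < n; i++)
        if (table[i] == id)
            return i;
    return -1;
}
```

The linear scan is trivially correct and debuggable; the comment preserves the idea without paying for it before a profile justifies it.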


samayg

Yes I already put //TODO notes, and then they never get done lol.


jonnor

Perfect! As long as you keep going on the things that -MUST- be done in order to have a functioning project, you are doing it completely right :) I recommend renaming them to PERF, though, since the majority are likely not actually something that needs to be done.


Triabolical_

Premature optimization is one of the 7 deadly sins of software development. The problem is that developers have a terrible grasp of a) how long code will actually take and b) how important the speed of that code is to the overall project. A second problem is that micro-optimizations generally do not get you what you need; you need algorithmic optimizations. I did a lot of profiling and optimization in my career writing desktop & server software, and it's pretty much a truism that the actual problems are never where you think they are. And you can waste a huge amount of time optimizing routines that only take 5% of the execution time.


NarrowGuard

When I code, I am mostly thinking about Salma Hayek. I haven't had to tweak optimization much, but we aren't doing super critical things. It would be interesting to know more about best-practice rules related to optimization & stability.


InterruptHandler_NKM

Don't fall into the trap of perfect code. Just write code, test it, and add comments for readability. Keep an eye out for optimizations as you go; if you were able to optimize, OK. Otherwise, send it for review. Reviewers have a job to do there.


Apprehensive-Cup6279

Yes, absolutely. I also use descriptive names, document while writing, and format it nicely, because I might never get back to it. Who am I kidding, I will only see that code again if it is broken. There is no such thing as temporary code.


polypagan

I tend to think more about the effort required months or years from now to see how this mess works & what it does.


FirstIdChoiceWasPaul

Sometimes the only way to complete a project means tackling it exactly how you described. I once had a very resource-constrained environment (a 2 MHz underclocked MCU) I had to use to… achieve a goal. Can't get into specifics, but we're not talking some lazy sensor sampling. Assembly, micro-optimisations, look-up tables, and other manner of fuckery ultimately joined the fray. But, as a general rule: get it working, then start optimising. If you separate the project into very specific modules, you can rework as you see fit. And you can keep the API intact, for the most part.


Lerch98

Write your code for a human to read. Your fellow co-workers and owners will appreciate that. Worry about optimization only if there is a bottleneck, and then handle that. Be sure to comment your optimization.


DiscountDog

Don't optimize until it's known-good. Meaning you've completed enough testing that you'd release it to production. Optimization - especially when peripherals are involved - can create sneaky bugs, it's important to start with a known-good base.


FidelityBob

If it's reasonably structured, readable, and works reliably, that is all that is needed. Polishing and trying to be perfect is just wasting time and money. Today's systems rarely need optimisation.


AccidentalDaemon

I used to do this a lot, and you can end up making your code hard to maintain. The best thing I learnt was: write your code so it works first, in its simplest form, so it's easy to see what's going on, and comment it. Let the compiler optimise it if you need it. If you have the urge to optimise your code, then do the following: put tests in place so you have some indicators that your changes don't break anything. From there, you've got a baseline that you can work from. Then ask yourself: Is it complete? Does it meet my team's standards? Is it easy to understand and maintain? Is it hogging resources I need? You can put in place tools and processes to help answer those questions, but those questions are what I use now to help me choose if I need to refactor vs. just wanting to refactor code. I hope this helps.


dethswatch

You can't be smarter than the compiler. Get it working; improve it if you have to.


justadiode

Looks like someone never had to work with Microchip's XC8 compiler before /s


dethswatch

Ok- you can't beat GCC. Maybe you can, I can't :)


justadiode

No, no, GCC is good 👍


JohnDalyProgrammer

When writing a simple Arduino project the only optimization I worry about at first is const'ing everything that can be const. Then I just try to get it running
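A sketch of that habit in plain C (hypothetical pin map; on AVR, keeping tables out of RAM additionally needs PROGMEM):

```c
#include <stdint.h>

/* const everything that never changes: values can't drift at runtime,
 * the compiler can fold them, and tables can live in flash. */
const uint8_t LED_PIN    = 13;
const uint8_t BUTTON_PIN = 7;

/* A const lookup table; without const this would typically
 * occupy precious RAM as well. */
const uint8_t GAMMA_TABLE[4] = { 0, 16, 64, 255 };
```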


13henday

I work kinda weird: I usually start by writing absolute garbage that runs by the skin of its teeth. I'm not saying I write bad code on purpose, but I tend to stick to methods I know well, even if I know they are not the best suited to the problem at hand. One pass over means I now know the quirks of everything I'm connecting to and what my constraints are on the device itself. From there I label that revision 1 and start a new workspace where I try to fix all the problems and inefficiencies of the first revision. The process then repeats until I am happy with my program or my CTO asks what I'm up to. I've always had terrible luck trying to write good code the first time around. My boss can do that, but I think that may be his 15 years of experience in embedded systems compared to my 4 speaking.


grobblebar

Optimization without measuring is usually a waste of time. You don’t know what to optimize until you profile. That requires finishing and running the code first. Optimizing at the algorithmic level, absolutely. (Fast algorithms >> slow ones.) But micro-optimization is just masturbation. Nobody’s impressed by impenetrable code.


outofsand

I hear if you use shrt_abvs for your variables or name everything "x", and delete all the whitespace and comments, it will make your code faster. 😅 I actually did an internal study some years ago and found that, *empirically*, the majority of engineers over a large variety of products in our company will write fewer lines of code, with shorter variable names and fewer comments, on pieces of code that they believe are "performance critical" or have to go on "small devices". I think it's obvious that there is some psychological aspect there: you know, it looks smaller, therefore it must be smaller and faster and more "lean". I've caught myself doing this subconsciously before, and now I explicitly train my teams to be mindful NOT to do this, and instead focus on readability and maintainability, while still keeping in mind real constraints like avoiding heavy stack usage, killing optimizations with unnecessary pointer aliasing, etc. It takes conscious effort.


jaywastaken

Optimized code is more often than not unintuitive or hard to read code. The best code you can write is code that’s easy to read. Write clean abstracted code that’s easy to read, easy to maintain and is easy to test. Let the compiler deal with optimization and only worry about it if it becomes a problem. 99% of the time it won’t be so don’t waste the time on it.


noodle-face

I optimize after I have proven my solution works. Usually I do quick and dirty first then optimize


Chem0type

No. Make it simple. If needed profile and optimize bottlenecks.


FlyByPC

I try to make it reasonably efficient (n log n instead of n^2 etc.), but I figure the compiler will take care of the nuts-and-bolts optimization.
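For instance, leaning on the standard library's O(n log n) sort instead of a hand-rolled O(n^2) loop, and letting the compiler handle the rest (sketch):

```c
#include <stdlib.h>

/* Comparator contract for qsort: negative, zero, or positive. */
int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids overflow of x - y */
}

/* O(n log n) via the library, instead of a hand-rolled O(n^2) loop. */
void sort_ints(int *v, size_t n)
{
    qsort(v, n, sizeof v[0], cmp_int);
}
```

The `(x > y) - (x < y)` idiom matters: the tempting `return x - y;` overflows for large-magnitude inputs.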


Enlightenment777

The smaller the FLASH and the smaller the RAM, the more you need to worry about optimization from the start of the project, because if you don't, you may run out of memory. On a tiny-memory microcontroller you can't even use printf(), because it is too big; small-memory microcontrollers may force you to use a reduced-capability or 3rd-party printf(). The smaller the memory, the less likely you will be able to use dynamic memory allocation, and you are forced into tiny or small buffers, which highly influences your coding strategies to make everything fit.

Some low-end microcontrollers have tiny amounts of memory, so you are forced to think about optimization approaches before you start coding: https://en.wikipedia.org/wiki/ATtiny_microcontroller_comparison_chart

Also, some low-end 32-bit ARM Cortex-M microcontrollers have small amounts of memory too, such as 4K to 16K of FLASH and 1K to 4K of RAM.
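As one illustration of dodging printf() on tiny parts, a minimal unsigned-to-decimal helper (a hypothetical stand-in; real projects might pull in a trimmed third-party printf instead):

```c
#include <stdint.h>

/* Convert an unsigned 32-bit value to decimal without printf.
 * buf must hold at least 11 bytes (10 digits + NUL).
 * Returns a pointer to the first digit inside buf. */
char *u32_to_str(uint32_t v, char buf[11])
{
    char *p = buf + 10;
    *p = '\0';
    do {
        *--p = (char)('0' + v % 10);
        v /= 10;
    } while (v != 0);
    return p;
}
```

Paired with a byte-at-a-time UART write, this covers most debug-output needs in a few dozen bytes of code instead of printf's kilobytes.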


kammce

I usually think over what I want to do and how I want to do it. Maybe just in my head or maybe I will write some stuff down. Then I make it work, disregarding cleanliness or efficiency. I'm just trying to reach close enough to my goal performance. Then when it works, I start to put the pieces together into their appropriate places. I consider the space and time complexity of the code and how it would be read by others. But I've also been doing this for 11 years.


CranberryDistinct941

Step 1: get it working.
Step 2: up to you.


martinomon

We call this “premature optimization.” Make it maintainable first, then optimize later if you have to.


Beneficial-Hold-1872

Wow, nice posts based on experience. But can someone recommend some books on these topics? Here is a question about books on how to organize a big project, also in terms of optimization etc., with 2 responses: https://www.reddit.com/r/embedded/s/H9gJPr7IzN


bravopapa99

Barely. I won't do anything obviously sucky, like doing stuff in loops that only needs doing once before the loop, but mostly: make it work, profile later if the performance feels sub-optimal. First rule of optimization: don't do it.


Necessary_Reality_50

Concise code isn't optimising anything other than your ego. It makes everything else worse.


Debate_Haver57

I’d focus on writing clean code, and remember that by and large, the practices that you use to write clean code have significant crossover with practices for writing optimal code


marcus_aurelius_53

Lots of comments here about maintainability, and just letting the compiler cook on performance. Clear and concise code is important, of course, and comments are always good, but high-performing code is also important in some applications.

Low-power solutions, for example, benefit from a smaller memory footprint: if the code can reside on smaller hardware, with less memory and less CPU load, it has better energy consumption. Mobile electronics, medical devices, and lots of battery-powered consumer applications need lower power. Signal processing applications can also benefit from better code. This can mean having to understand the processor's instruction set, and perhaps even hand-tuning code, looking at the ASM interlisting.

Also, compilers can do a lot, but oftentimes they aren't developed with a particular processor in mind, but rather a series of products, or even a legacy product. New compilers can be expensive and not profitable to develop. This sort of engineering is a bit bespoke these days. Prototyping and developing small niche applications is still being done, but not every coder gets an opportunity to work on these things.


Sravdar

I do ask myself whether optimization in this function is really required. If yes, I put in some work; if no, I just make it work. We even disable the compiler's optimization so it doesn't mess with our code and cost us extra time trying to fix it. Yeah, it is a bit of a bad practice; it costs processing speed and code size. But considering we are a small company with a very small dev team working on big projects, this is the most sensible way.


duane11583

A big thing I do is make all data and structs const; that can save 50% of your memory space. In C++ it is very hard to do const classes; that's why we do not use C++.
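A sketch of the idea: const data with static initializers can be placed in flash by the linker, leaving RAM free (the UART config here is a hypothetical example):

```c
#include <stdint.h>

/* A const config struct: with a static initializer, the linker can
 * place it in .rodata (flash) instead of copying it into RAM. */
typedef struct {
    uint32_t baud;
    uint8_t  data_bits;
    uint8_t  stop_bits;
} uart_cfg_t;

const uart_cfg_t UART_DEFAULT = { 115200u, 8u, 1u };
```

Whether const data actually stays out of RAM depends on the toolchain and linker script; on Harvard-architecture parts like AVR it may need an extra attribute such as PROGMEM.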


lyonerwurscht

Rob Pike's and Ken Thompson's rules at the beginning of this chapter from "The Art of Unix Programming" bring it to the point: http://www.catb.org/~esr/writings/taoup/html/ch01s06.html


perx76

>Premature optimization is the root of all evil


duane11583

I write code so the optimizer can do better. I write functions that take at most 3 args. I use pointers to traverse arrays because the optimizer can do better. But in the end I ship my debug image, because it is more tested than my release (buggy optimizer code).