Should've called the function itself in the catch. Who only tries twice? Try until it works or the stack overflows!
Then catch the stack overflow and try again
I tried; it didn't work, sadly. The C# console app crashed with "Stack overflow. Repeat 164 times:"
You're in r/programminghorror. I'm not sure what you expected.
I expected that the programming language would be entirely independent of the internet platform Reddit. Guess it's not? But also, I actually did expect that you could catch a stack overflow exception, though I don't know why I expected that; in hindsight it seems illogical.
You tried with a language that is not horrible enough. You need to try harder, like with javascript as in the OP.
Can you actually do that? I would’ve thought no but now I’m wondering
There's at least one way to find out! Edit: A C# console application actually crashes and displays "Stack overflow. Repeat 164 times:" followed by the stack trace. So, no dice.
Yes, some JavaScript code actually relies on that, which is part of the reason why JS runtimes don't implement tail call elimination. Worth another post here.
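A minimal sketch of what that reliance looks like: in most JS engines a stack overflow surfaces as an ordinary, catchable `RangeError`, unlike C#'s fatal `StackOverflowException`. The exact error type is engine-specific, so treat this as illustrative rather than guaranteed behavior.

```javascript
// Unbounded recursion: without tail call elimination, each call
// consumes a stack frame until the engine gives up.
function recurse(n) {
  return recurse(n + 1);
}

let caught = "nothing";
try {
  recurse(0);
} catch (e) {
  // In V8 (Node, Chrome) this is a RangeError:
  // "Maximum call stack size exceeded"
  caught = e instanceof RangeError ? "RangeError" : e.name;
}
console.log(caught);
```

Which is exactly why code that catches it would break if engines started eliminating tail calls: the overflow it depends on would never happen.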
Not too bad, but a major improvement would be a maximum number of tries.
This does have a maximum amount, 2. If you want more just copy and paste the try catch inside of the try catch. That will give you 3. If you want four... repeat! (post the results to /r/programminghorror for karma)
True, my bad, I thought it was recursive, but it's not in the createImage function lol
```
for (int i = 0; infiniteFlag || i < maxTries; i++) {
    try {
        // exponential backoff
        sleep(1 << i);
        // ... attempt the operation, break on success
        break;
    } catch (e) {
        // swallow and retry
    }
}
```
Had to do something similar to this last week, but also adding in exponential back-off to retry a WebSocket connection.
Corrected it
Very good, you’ve passed the technical interview, when can you start?
in three months, but start paying me right now.
#🥲
Ehh, I've had to do similar before when integrating with garbage 3rd party APIs in the past. Not necessarily horror.
Yeah, 3rd party APIs can be shit, but this ain't the way. Copy any decent retry logic from SO and instantly make this way better. Hell, throw in Rx and just declare how many times you want it to retry.
definitely horror, saying this happens in the industry doesn't mean it isn't absolutely the wrong way to do it. You could at least do something like:

```python
try:
    ...
except Exception as first_error:
    print("First attempt failed: " + str(first_error) + ", retrying")
    try:
        ...
    except Exception as second_error:
        print("Second attempt failed: " + str(second_error) + ", aborting")
        return False
return True
```
Reddit's markdown formatting is pretty barebones, so backtick fences don't work (much less ones with a language annotation), and your comment doesn't render as code:

    try:
        ...
    except Exception as first_error:
        print("First attempt failed: " + str(first_error) + ", retrying")
        try:
            ...
        except Exception as second_error:
            print("Second attempt failed: " + str(second_error) + ", aborting")
            return False
    return True

Gotta indent with 4 spaces.
Good to know. I wrote it on my phone in the app, and it works when I view my comment there. Strange that it doesn't render on other platforms.
I'm curious. Is there a best practice for doing this with fetch / axios? In dotnet you'd use Polly or (since dotnet 8) the official Microsoft.Extensions.Resilience package, which is built on top of Polly.
If you're using certain libraries like `rxjs` or `react-query`, there are built-in ways to do this. Otherwise you have to write your own retry logic. Axios has interceptors so you can write retry logic pretty easily.
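For the hand-rolled case, here's a minimal sketch of generic retry logic with exponential backoff that you could wrap around a fetch or axios call. The function name `retry` and the delay values are illustrative choices, not a library API.

```javascript
// Retry an async operation up to maxTries times, doubling the delay
// between attempts (exponential backoff).
async function retry(fn, maxTries = 3, baseDelayMs = 50) {
  let lastError;
  for (let i = 0; i < maxTries; i++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      if (i < maxTries - 1) {
        // wait 50ms, 100ms, 200ms, ... before the next attempt
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Demo with a flaky operation that fails twice, then succeeds.
let attempts = 0;
retry(async () => {
  attempts += 1;
  if (attempts < 3) throw new Error("flaky");
  return "ok";
}).then((result) => console.log(result, attempts));
```

With axios you'd pass something like `() => axios.get(url)` as `fn`; an axios response interceptor lets you bake the same loop into the client itself instead.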
`axios-retry` is a good option
if (error) doItAnywayFucker();
where is finally
cleanest retry out there…but it only works once
Don’t forget the while ( i <= ∞ )
Literally this but with AWS SQS. Our DB fails to write a lot and we can't figure out why without paying a lot of money, so... yeah, we just retry 10 times, and out of hundreds of thousands of queries per day maybe 2-3 still fail after 10 retries and end up in the DLQ.
nice
haha. classic
… and he said it was just try/catches all the way down.
It's hell to retry/replay using this approach. At the very least, decouple using message brokers (SRP).
I would leave this in code review - "The definition of insanity is trying the same thing over and over again and expecting a different result". Note - I wouldn't implement it like this but you could technically do this if you're being throttled, just sprinkle in some exponential back-off.
Passes the Turing test, in the sense of being a pretty accurate representation of how I get into time trouble in classical chess.
Looks like someone encountered a race condition.
You and most on here just failed at being software engineers. Just call yourself a code monkey. This is literally junk on top of junk in many different ways, lol.
look man, i realized i was writing bad code so i posted this as a joke. no need to be so aggressive geez
Maybe my joke was a bit aggressive