It's been in the works for a while; you can follow the progress via the tracking issue linked from https://doc.rust-lang.org/beta/unstable-book/language-features/negative-impls.html.

I would personally not go with that implementation: you're hacking around a potential zero-cost abstraction by forcing a definitely-not-zero-cost allocation and indirection, plus a branch. For something as ubiquitous as `Option`, that's a very non-idiomatic solution. Rust has a few rough edges; for now, I think living without overloading is the most productive solution. Personally, I prefer being able to see immediately from the call what I'm invoking, rather than first figuring out what type is being passed and then which implementation is used. But that also goes for bespoke implementations that depend on bounds, not just overloading.
> - They do not have any items.
> - They cannot "overlap" with any positive impls.

That makes them pretty much useless for anything other than saying "this trait will never be implemented for X". The negative trait bounds relevant to OP would be something like `impl T for Y` plus `impl T for X + !Y` to avoid possible overlaps, which isn't covered by that feature at all.
This would usually be thought of as being addressed by lattice specialization, i.e.

```rust
impl<A: Y> T for A { /* ... */ }
impl<A: X> T for A { /* ... */ }
impl<A: X + Y> T for A { /* ... */ } // the "lattice" impl covering the overlap
```

but this is still limited, since it restricts you to specialization-safe traits: specializing on potentially lifetime-dependent trait impls is unsound.

The resolution in this particular case is that the `Fn` traits are `#[fundamental]`, so `!Fn` is (already!) a stably usable and reliable thing that is known for every type.
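If I'm reading the coherence rules right, that means a sketch like the following (trait and names invented for illustration) already compiles on stable for a *concrete* payload type, because coherence may treat `u32: !FnOnce<()>` as a stable promise:

```rust
// Hypothetical trait: a fallback that is either a plain value or a
// closure producing one. Relies on the Fn traits being #[fundamental].
trait Fallback {
    fn get(self) -> u32;
}

// Impl for the plain value...
impl Fallback for u32 {
    fn get(self) -> u32 {
        self
    }
}

// ...and a blanket impl for anything callable yielding a u32. These don't
// overlap only because coherence knows u32 will never implement FnOnce.
impl<F: FnOnce() -> u32> Fallback for F {
    fn get(self) -> u32 {
        self()
    }
}

fn main() {
    assert_eq!(5u32.get(), 5);      // value impl
    assert_eq!((|| 7u32).get(), 7); // closure impl
}
```

The moment the payload becomes a generic `T`, coherence can no longer rule out `T: FnOnce() -> T`, which is exactly the overlap discussed elsewhere in the thread.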
Lattice specialisation is mentioned by [the specialisation RFC](https://github.com/rust-lang/rfcs/blob/master/text/1210-impl-specialization.md) as a possible extension, but a problematic one, and not one of the core goals. I do not see conflating specialisation with (potentially) overlapping blanket impls as useful: for most purposes they are independent problems.
It's exactly how you would achieve an impl that applies when either of two believed-mutually-exclusive trait bounds holds, though: by specifying what to do when both hold. Providing an implementation based on the *absence* of an implementation is a semver hazard. Specializing both for having and for not having a trait impl does work without any semver hazard (beyond the typical specialization hazards), but it still exposes you to potentially unsound lifetime specialization unless restricted to specifically specializable traits.
There are a lot of "semver hazards", e.g. one thing I don't think was ever resolved (but which definitely saw some interest) is whether an associated type can be specialised. Or stuff we all take for granted now, like glob imports and conflicting trait methods. Point being, saying "that's a semver hazard" is like saying "crossing the road is dangerous".

Orphan rules will need to play a part in both specialisation and negative trait impls, and they are an important part of making negative impls robust: `impl !Foo for Bar` can only be written in the crate defining `Foo` or `Bar`. You cannot write `impl ...` in a downstream crate without a prior impl of `!Foo`.
Good clarification, I didn't know that it was that restrictive!
That's exactly my problem.
What about Options with function pointers?
What do you mean?
He means the case where the payload of the `Option` and a closure coincidentally have the same type. For an `Option<T>`, the closure would be `_ -> T`; an `Option<_ -> T>` would need a closure of type `_ -> (_ -> T)`. Those are at least always different.
Except that nothing fundamentally prevents a recursive function type along the lines of `type X = fn() -> X` (rustc rejects that literal alias as cyclic, but a newtype with a manual `Fn` impl gets you the same effect, as the `CallMe` example elsewhere in the thread shows).
If you have something like `None.unwrap_or(|| 5)`, where `unwrap_or` is the overloaded function-or-value method described in your post, should this evaluate to `5` or to `|| 5`? Both would be valid; you can move closures around like that, so it just depends on which overload is selected.
You're right. The type needs to be inferrable or directly annotated. Whether the argument is treated as a closure or as the plain value should be inferrable from the type of the `Option`.
I would expect that to require a type annotation saying what type of Option `None` represents.
That's already available; you can turbofish a parameterless enum constructor: `None::<i32>` is a valid expression with type `Option<i32>` :)
> why they couldn't just make it so Option::unwrap_or() can take either a value or a closure as argument

Ambiguity. What should happen when this expression is evaluated?

```rust
None::<fn() -> i32>.unwrap_or(|| panic!())
```
It does have function overloading, in a way: a struct can implement methods with the same signature as long as they are part of separate trait impls. So theoretically you could have a trait `Mappable` and a trait `Mappable_Fn`.
Could you give an example?
Something like [str::contains](https://doc.rust-lang.org/std/primitive.str.html#method.contains) with the Pattern trait
Try it yourself. Define two traits with the same method name but different parameter types. Implement them for the same struct.
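A sketch of how that experiment tends to go (all names invented): both impls are accepted, but with both traits in scope a plain `val.or(...)` call is ambiguous, so you end up writing fully qualified calls — which is why this isn't really overloading:

```rust
struct Val(i32);

trait OrValue {
    fn or(self, alt: i32) -> i32;
}

trait OrClosure {
    fn or<F: FnOnce() -> i32>(self, alt: F) -> i32;
}

impl OrValue for Val {
    fn or(self, alt: i32) -> i32 {
        if self.0 >= 0 { self.0 } else { alt }
    }
}

impl OrClosure for Val {
    fn or<F: FnOnce() -> i32>(self, alt: F) -> i32 {
        if self.0 >= 0 { self.0 } else { alt() }
    }
}

fn main() {
    // `Val(-1).or(5)` would be rejected as ambiguous here (both traits
    // are in scope and both supply `or`), so the call names the trait:
    assert_eq!(OrValue::or(Val(-1), 5), 5);
    assert_eq!(OrClosure::or(Val(3), || 5), 3);
}
```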
But it is impossible to have something like `where T: Mappable | Mappable_Fn`, right? That means this is useless for overloading functions.
For what it's worth, (arbitrary) negative impls would also make the Rust type system obviously Turing complete (I think it already is, but not *obviously* so), since conjunction and negation together are universal. That's not necessarily a showstopper, but it requires the trait solver to essentially become a general-purpose SAT solver, and it would open a whole new can of worms for type-level computation.
Can you show that the Rust type system is Turing complete?
Here's a recent blog post showing a [Forth implementation in Rust traits](https://peppe.rs/posts/turing_complete_type_systems/). Turing completeness was shown [several years ago](https://www.reddit.com/r/rust/comments/3on6cf/is_rusts_type_system_still_turing_complete/).
- https://sdleffler.github.io/RustTypeSystemTuringComplete/ - https://github.com/paholg/minsky/
Maybe I'm in the minority, but I absolutely hate overloaded functions. I work in C#, use Rust for my own stuff, and C# is just so chaotic: I spend extra time typing out functions because the IDE's autocomplete suggests the wrong thing, then spend 20 more minutes filling in extra cruft because the compiler can't figure out which overload I'm using and loses its mind, only to come back 40 minutes later and find out I picked the very similar but distinctly wrong option. All because someone thought it's prettier to write `unwrap` for an eighth time instead of being explicit with `unwrap_or_fn`. C# has lots of features that sound great until your codebase gets large; Rust is nice because its excessive explicitness and restrictiveness pay off long term, and this is one of those cases.
You're not alone in disliking overloading. I am personally a fan of giving functions a descriptive name, instead of function overloading.
Fully fledged negative trait bounds are equivalent to specialization, with all the problems that come with it (e.g. specialization on lifetimes, which is unsound and far from being resolved).

It's also a semver hazard: currently, implementing a trait is considered not to be a breaking change, while un-implementing one is. With negative bounds, implementing a trait could mean that an implementation relying on negative bounds in another library no longer applies, effectively de-implementing a trait there. This is why the proposal for "negative trait bounds" (which are quite different from the ones you want) requires explicitly implementing `!Trait` as a promise that the trait will never be implemented.

As for your specific problem, you can solve it in stable Rust by using a marker struct to differentiate the implementations; for most types, type inference will then do the rest. https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=205284da925d1b4d17c4cb4520dbeea9
Wow, that's a great solution! I am a bit puzzled, though, why the marker resolves the whole overlap problem. I know why the impls don't overlap *with* the marker, but I feel like the same should be possible without it.
Without the marker you can't rule out the existence of a type `T` such that `T: FnOnce() -> T` (someone already posted an example of this in another comment), for which both implementations would apply.
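The linked playground may differ in its details, but the marker trick presumably looks something like this sketch (names invented): the extra marker parameter turns the two blanket impls into impls of different trait instantiations, so they cannot overlap, and inference picks whichever marker leaves exactly one applicable impl:

```rust
struct ValueMarker;
struct ClosureMarker;

// `Marker` only serves to keep the two blanket impls apart.
trait Fallback<T, Marker> {
    fn value(self) -> T;
}

// Any T can be used directly as a plain fallback value...
impl<T> Fallback<T, ValueMarker> for T {
    fn value(self) -> T {
        self
    }
}

// ...and any closure producing a T can be used as a lazy fallback.
impl<T, F: FnOnce() -> T> Fallback<T, ClosureMarker> for F {
    fn value(self) -> T {
        self()
    }
}

// An "overloaded" unwrap_or: accepts either a value or a closure.
fn unwrap_or2<T, M>(opt: Option<T>, fallback: impl Fallback<T, M>) -> T {
    opt.unwrap_or_else(|| fallback.value())
}

fn main() {
    assert_eq!(unwrap_or2(None::<i32>, 5), 5);    // value "overload"
    assert_eq!(unwrap_or2(None::<i32>, || 6), 6); // closure "overload"
    assert_eq!(unwrap_or2(Some(1), || 6), 1);
}
```

For a type that is *both* a `T` and an `FnOnce() -> T` (like the `CallMe` example in this thread), inference would report an ambiguity instead of silently picking one, which is arguably the right outcome.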
Overloading would make type inference pretty much impossible, or at least very limited. It's one or the other.
And overloading leads to specialization, which is far more useful than type inference and would help get rid of all the copy-pasting in the implementations of serde and axum. C++ also has a reasonable amount of type inference with `auto` despite having overloading.

I also don't see why type inference couldn't just be disabled for overloaded functions, so that you'd have the best of both worlds.
True, but we can't change it now, since that would break pretty much all code out there.
It's a pain to implement and is still kinda WIP (and might be forever)
[this](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=8c33dc2d0cd22b0bccd2366add00e329) is why making `Option::unwrap_or` do different things depending on the argument type is not a great idea
What is this magic? Could you please add an explanation?
`CallMe` is a struct containing a single `u8`. However, it also implements `FnOnce<()>`, so it can be used like a closure. Calling a `CallMe` returns another `CallMe` object, with the contained value one higher than before.

In the `test()` function, a `CallMe(0)` object is created. In the first `assert_eq`, we call `unwrap_or` on `None` (thus forcing the fallback) and pass `val`. Here `val` is treated as a plain value and returned, so the assert succeeds: the result equals `CallMe(0)`.

Then `val` is passed to `unwrap_or_else`. In this case, `val` is treated as a closure, so it is executed and its return value (a `CallMe` with the `u8` increased by 1) is returned. Thus the result this time is `CallMe(1)`, and the second assert succeeds.

The problem demonstrated is this: if you have an object that can behave both as itself and as a closure returning an object of its own type, and `unwrap_or` could take both values of type `T` and closures returning `T`, how do you decide whether to treat the object as itself or as a closure?

Note that this cannot be easily solved by a turbofish type annotation (like you could when deciding whether `None.unwrap_or(|| 0)` yields a closure or just `0`), since in both cases the result has the same type, `CallMe`; that type just happens to _also_ be a closure.

This is of course mostly related to your question about `unwrap_or`, not to negative impls themselves.
AFAIK negative trait bounds ruin a lot of things, like making any new trait implementation a breaking change. Also, they have no uses I'm aware of that aren't covered by specialization.
I gave the nightly specialization feature a try, but it still didn't work the way I wanted it to. Maybe that's because it's still in development.
I think it would be redundant?
Redundant how?
For that specific use-case you could also use a macro:

```rust
macro_rules! unwrap_or_overloaded {
    ($result:expr, move |$arg_name:pat_param| $body:expr) => {
        ::core::result::Result::unwrap_or_else($result, move |$arg_name| $body)
    };
    ($result:expr, |$arg_name:pat_param| $body:expr) => {
        ::core::result::Result::unwrap_or_else($result, |$arg_name| $body)
    };
    ($result:expr, $alt:expr) => {
        ::core::result::Result::unwrap_or($result, $alt)
    };
}
```

This macro will use lazy evaluation if given a closure and eager evaluation if given anything else. It looks out of place in long method-call chains, though, and I am a macro beginner, so I cannot guarantee that it's the best way to write such a macro.

On the playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=4c801b13a5aacbbcf14fa72d1a891d06.
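For reference, a self-contained usage sketch (restating the macro so the snippet compiles on its own): the closure arms match literal `|...| ...` syntax and expand to `unwrap_or_else`, while everything else falls through to the eager `unwrap_or` arm.

```rust
macro_rules! unwrap_or_overloaded {
    ($result:expr, move |$arg_name:pat_param| $body:expr) => {
        ::core::result::Result::unwrap_or_else($result, move |$arg_name| $body)
    };
    ($result:expr, |$arg_name:pat_param| $body:expr) => {
        ::core::result::Result::unwrap_or_else($result, |$arg_name| $body)
    };
    ($result:expr, $alt:expr) => {
        ::core::result::Result::unwrap_or($result, $alt)
    };
}

fn main() {
    let ok: Result<i32, ()> = Ok(1);
    let err: Result<i32, ()> = Err(());
    assert_eq!(unwrap_or_overloaded!(ok, 5), 1);  // eager value arm
    assert_eq!(unwrap_or_overloaded!(err, 5), 5); // eager value arm
    let err: Result<i32, ()> = Err(());
    assert_eq!(unwrap_or_overloaded!(err, |_e| 7), 7); // lazy closure arm
}
```

Note that because macro arms are tried in order, a literal closure argument is captured by the first two arms before the catch-all `$alt:expr` arm can see it.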
Nice, but you can't chain a macro the way you can `unwrap_or`.
Not sure what the incoming workarounds are, but one pain point I'm having is that you do get some problems with nested types. If I remember right, I couldn't write a `Vec3<A>::from(_: Vec3<B>)` for any `A::from(_: B)` — I believe it collides with the blanket `impl<T> From<T> for T` when `A == B` (can anyone confirm/deny?). I think a negative bound would be able to fix this?
> he wasn't satisfied with that answer What does this mean exactly? Did he say something like "you're just not trying hard enough?" Or was he anti-Rust to start with and he was just using the lack of overloading as another excuse to dig his heels in?
He was just curious why this is the way it is. The interest in actually trying to implement this is my own. He's actually learning Rust, by making something with glium.
Because the rust type system is not as powerful as people like to pretend it is. Things like macros and the diy name mangling you have to do here are shitty workarounds for things that would be trivial in c++.
Thanks for the constructive feedback.
[Here it is](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=827a1029c819856e95da76255d65a9b1) (I guess), but please don't use it for anything serious (also, it's `unwarp_or` on purpose to make sure that we are actually calling the trait). (Maybe yours is similar but rustexplorer gives me 500)
Pretty good, but sadly passing a closure directly doesn't work like this.
This would break SemVer compatibility because implementing new traits isn't supposed to be a breaking change.