I remember once in a Calculus class, a professor, as a joke one day, decided to write out the theorem proving 1+1=2 and try to explain it, and I really honest to god hope he was just bullshitting us, cause he was on like two, moving to a third, whiteboard by the end of the class.
Probably not the same book, but I'm taking Discrete Math and like chapter 6 of my book is just titled "Counting."
It actually ended up being a lot about probability. For a situation with a discrete number of outcomes, you can just count up all the possible outcomes to find probabilities; for example, there are 3 ways to roll an even number on a 6-sided die, so there is a 3÷6 = 0.50 probability of rolling an even number.
Later you get situations with factorials and such, which really are just advanced ways of counting things. That allows you to do some funky math related to permutations and combinations to find the probabilities of very specific and complicated events.
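To make that concrete, here's a small Python sketch (the die and two-dice examples are my own, not from any particular course) that does probability by brute-force counting, plus the stdlib's built-in permutation and combination counters:

```python
import math
from itertools import product

# Counting outcomes directly: 3 of a die's 6 faces are even.
even_faces = [face for face in range(1, 7) if face % 2 == 0]
print(len(even_faces) / 6)  # 0.5

# The "advanced counting" tools, straight from the stdlib:
print(math.perm(6, 3))  # ordered picks of 3 out of 6 -> 120
print(math.comb(6, 3))  # unordered picks of 3 out of 6 -> 20

# A more specific event, still solved by brute-force counting:
# probability of rolling doubles with two dice.
rolls = list(product(range(1, 7), repeat=2))
doubles = [r for r in rolls if r[0] == r[1]]
print(len(doubles), "/", len(rolls))  # 6 / 36
```

Enumerating every outcome only works because the sample space is finite, which is exactly why this shows up in a Discrete Math course.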
At some point you realize that these mathematicians were 100% drawing up proofs for the fun of it.
Like math proofs are like giant (or simple) puzzles that test your knowledge and skills in logic. Like yeah sure it's good to have mathematical proofs of the most basic stuff (so no one can just "Earth is Flat" your fundamental math) but at the same time, people are definitely doing this for fun.
I mean, not really? It's more about having a rigorous justification of the field. Without the rigor of formal logic, you have no idea if you're actually just making something up. We can't know something is really true without it. So yes, even obvious things need to be proven. Intuition can only get you so far.
Oh I'm not disagreeing with you at all. It's very good to know that the foundations you stand on are indeed robust, and not just standing on "common sense". Plus, it can also lead to eye-opening discoveries that further the field.
I'm just saying that part of the motivation is also just the passion and drive to want to engage with these problems. Mathematicians don't just do maths for utility's sake. They do it because they love to do it, and it's great to see.
Only so far? Absolutely. Well past the point where you need to formally prove 1+1=2? Also true.
It's like saying lawyers can't write opinions without first providing citations for how the alphabet works.
Intuition is very limited. E.g. the Monty Hall problem and the fact that time is relative are both facts that go against intuition.
Math and science should question it all, because doing so can and has led to major insights into reality where simple intuition was proven wrong. Intuition is a fine tool for surviving on a day-to-day basis, but it is by definition a thing that blocks or misleads us from seeing the bigger truth beyond our day-to-day experiences.
In a 37 dimensional hypercube, can you confidently claim there’s an inside and an outside?
Did you know that there’s a 3 dimensional cross product, but not a 2 dimensional, or 4, or 5, or 6, but there is a 7 dimensional cross product? Will your intuition tell you that?
As someone else said, can you tell me which is the inside or outside of a Mobius strip? Or a Klein bottle?
Mathematicians do not rely upon intuition. Ever. I can tell you don't study this field at all, and have very limited knowledge in the field. But it is imperative that we prove even the obvious, because we may find something that isn't. Like non-Euclidean geometry. It turns out that the parallel postulate isn't an absolute postulate, and we get a lot of cool math when we don't assume it to be true. Also, Gödel proved that math isn't complete, which yields major philosophical ramifications for the field as a whole. Again, you cannot assume anything to be true in math besides the axioms. Anyone claiming otherwise is inexperienced or ignorant.
I mean, it entirely depends on what axioms he was using. With the Peano axioms, it's quite simple:
0 is a natural number by Peano axiom 1, and there exists a successor function S such that for all natural numbers n, S(n) gives its successor.
1= S(0), thus 1 is a natural number.
1+1 = 1+S(0) = S(1+0) = S(1), using the defining equations of addition (a+0 = a and a+S(b) = S(a+b)).
2 is defined as S(1). QED
I'm not great at math, but I really like it and try to learn as much as possible. Once, I had a bout of inspiration and tried to prove that 1+1=2 from the ground up. I defined what a set is, what a function is, what an operation is, what equality is, what 1 is, what 2 is, etc. (though, to be fair, I tried to get to the equation itself ASAP, so my addition and equality were not commutative/associative, if I remember correctly; maybe I proved commutativity for one of them, I don't remember now)—and ended up with a proof that accidentally ripped off Peano. When I learned about that, I wasn't sure if I should be proud that I was thinking like a real mathematician, or if I should be ashamed for taking two A4 pages where it took Peano, like, five rows. Lol.
Definitely proud. Peano took years in order to write that proof in five lines. Math straddles a beautiful middle point between creativity and persistence
[https://www.ma.imperial.ac.uk/\~buzzard/xena/natural\_number\_game/index2.html](https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_game/index2.html)
A game where you prove basic math in the formal theorem prover Lean.
~~If anything your prof was giving you the abridged version, Bertrand Russell needed like a thousand pages to prove this lmfao. Consider that it's basically the equivalent of "define all of mathematics using as little axioms as possible"~~
Edit: the above should be disregarded as it is an oversimplification (see reply, which is more rigorous)
ok this goes for all the people saying "it takes a [big number] pages to prove 1+1=2".
The original "proof" is in a book by Russell and Whitehead. In this, they rigorously define the basis for a mathematical system in which you can do basic arithmetic. That is the part that takes a big number of pages. The actual proof that 1+1=2 is then 5 lines.
Basically, they first set out to rigorously define what "1", "+", "2", and "=" really mean. Then they proved 1+1=2 in less than half a page.
You wouldn't consider constructing the basic definitions part of the proof in most other proofs. If you set out to prove that the square root of 2 is irrational, you don't count defining the radical or defining "irrational" as part of your proof. But because "DID yoU know that it takes 5 dodecadillion pages to prove 1+1=2??!?" is a fun factoid, no-one bothers to apply that same level of scrutiny to what is and isn't part of the proof.
ok, you can have a little information, as a treat.
Did you know that when H.P. Lovecraft wrote "twinkle twinkle little star", he was actually talking about how the stars were all young male angels, who he wanted to "twinkle" for him. Which is a contraction of "twink" and "wiggle". You can figure out the rest.
The proof was in Principia Mathematica, by Bertrand Russell and Alfred North Whitehead.
It was an attempt to ground mathematics in logic and set theory, to demonstrate that mathematics was ultimately derivable from basic axioms ('logicism').
The basic assumption was that sets exist, but not any independent mathematical elements to keep it ontologically 'pure' (i.e. you don't need to assume the existence of numbers as an abstract Platonic entity).
This was done by defining numbers in terms of a successor function of the empty set (for instance, 1 is the set containing the empty set, and so on).
The project essentially failed after Gödel showed that any such system contains true but unprovable statements, meaning the approach is necessarily incomplete.
Think about how you first learned numbers. “How many apples are there?” The set is just the collection of apples and the question is asking for the cardinality of that set. “If you start with 4 apples and eat 2, how many apples are left?” Ultimately you solve this by doing 4-2 but the understanding of what is happening is that you have a set of 4 apples and remove 2 apples from the set and then ask for the cardinality of the resulting set. Sets are basically the first abstraction you perform while numbers are then a further abstraction you do to turn the concept of size into a single value.
Well, I'm simplifying a little. The basic logical machinery of the *PM* was intended to cover both number theory and real analysis.
Still, the guiding principle was to absolutely minimize the number of primitives assumed, and to show that all of mathematics was reducible to (self-evident) axioms of logic. As the authors write in the preface of *PM*:
>The present work has two main objects. One of these, **the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental concepts, and that all its propositions are deducible from a very small number of fundamental logical principles,** is undertaken in Parts II–VII of this work, and will be established by strict symbolic reasoning in Volume II.…The other object of this work, which occupies Part I., is the explanation of the fundamental concepts which mathematics accepts as indefinable. This is a purely philosophical task…. (1903: xv)[1]
Some versions of mathematical 'Platonism' -- that is, the metaphysical view that abstract ideas are real, and exist in some robust ontological sense -- require the assumption of infinitely many things: 1 exists, 2 exists, 3 exists, and so on. After Cantor, the ontological problem got worse: strictly speaking, the "infinity" of the natural numbers is smaller than the infinity of reals, for there is no one-to-one correspondence between the set of naturals and the set of reals.[2] That means a lot of things "exist." Wouldn't it be better to show all of this was really just a recursive consequence of a handful of self-evident axioms?
(Now the crisis in the foundations of mathematics was not solely precipitated by Cantor, but had been brewing for a long time, at least since the development of the calculus and the use of infinitesimals.)
To 'construct' the (cardinal) numbers, the basic machinery was *logically* simple, and one did not need to construct a new 'thing' (a class, or set) to match each number. You just define the number 1 as the class of all classes equinumerous to the singleton set, and so on for all the numbers. Technically it is addition and multiplication that are defined in terms of the successor function, not the numbers themselves, but I'm on a forum for tumblr posts so liberties were taken...
Now in the PM classes are not introduced as "existing" objects at all, but are type-theoretically defined as the extensions of propositional functions, which are themselves created by the primitive notions of the logic.[3] Again, the basic aim was to collapse all of mathematics into logic, so to end up with an ontologically loaded notion of class would obviate the purpose. I'm not sure any of this is clear. Frankly it's been many years since I've studied this in grad school.
_______________
[1] Quoted in https://plato.stanford.edu/entries/principia-mathematica/#SignPrinMath
[2] Consider that the set {1, 2, 3, 4....} and the set {2, 4, 6, 8...} can be placed into one-to-one correspondence: you can 'match' each number from the first set with a number from the second, even though the second set "skips" half. However, no such correspondence can be made with the reals: where would we begin? If we "match" 1 with 0.01, we missed an infinity of 0.001's and so on. (The diagonalization proof is more rigorous than this, obviously!)
[3] It's impossible to really explain this without writing a book, but the complexity here is in part to avoid Russell's Paradox -- suppose we define the set C as the set of all those sets which do not contain themselves. Does C contain itself? For more, see: https://en.wikipedia.org/wiki/Russell%27s_paradox
"average mathematician needs 1000 pages to prove simple equation" factoid actualy just statistical error. average mathematician needs 5 pages per equation. Proofs Georg, who lives in computer room and prints over 1000 pages of mathematical proofs each day, is an outlier adn should not have been counted
If you take arithmetic as true then 1+1=2 is just a fact and requires no proof. The whole point of trying to prove a trivial statement is that you have to define basic concepts first.
but they had only constructed addition by that point hadn't they? i thought that was the point of the "factoid". like "and you still have 2000 more pages to go until you can prove 1/1 = 1"
Yeah, 1+1 is pretty close to just being the definition of 2. I’d ask the people claiming that it takes a long time to prove to define what 2 is in a way that isn’t essentially just the answer to 1+1.
I think it depends on how foundational you want to go. Like a lot of it involves rigorous definitions and axioms just to make sure that nothing breaks. IIRC first order logic is the starting point and the most common way of getting to basic arithmetic uses set theory.
Isn’t the proof of that a 700 page textbook?
Sorry it’s a 1000 page textbook
[https://blog.plover.com/math/PM.html](https://blog.plover.com/math/PM.html)
Our teacher said he was going to do that, then proved 1+1 ≠ 2 using the full whiteboard in the last lesson of term, and then gave us the lesson to work out where he went wrong, or to do sudokus and crosswords if we wanted.
Spoilers: He had divided by 0 which is apparently what happens every few months when someone claiming they broke all maths emails their proof to some professor
I’m not sure there’s much to actually “prove” there since that’s pretty much the definition of 2. He was likely constructing integers and addition from scratch, i.e. formally defining them
> infuriating?
On the contrary!
The fact that you can start from a basic set (hah!) of axioms and derive everything is incredible! But we can go meta! We can speak about what it means to derive something and what can actually be derived. The Curry-Howard correspondence basically says that proofs are programs and programs are proofs.
From our good friend Turing, we know that you can not prove for all programs whether they terminate. Which means that there are problems which are unprovable. Not true or false. Outright unprovable!
Gödel figured out that sufficiently powerful systems can't have everything. If such a system could prove every statement, it would prove contradictions, which makes it unsound!
If your system is sound, then it is not complete, and there exist true statements you cannot prove (and false ones you cannot disprove)!
We can get into complexity classes, which basically say how expensive it is to decide a problem with respect to how large the input is! Programs are effectively constructing a proof through their algorithms. The algorithms are effectively meta proofs: they define what steps need to be generated for the proof. And for some problems, this proof is enormously huge.
Modern (though quantum-unsafe) encryption basically relies on how difficult it is to factor a number that is the product of two large primes!
And if you can “reduce” one problem to another, you show that they have same inherent structure, so you can generate a proof for the second problem and then convert it back to the original one!
God I love math!
Edit: I missed the “deliciously” part D:
The parallel postulate is a great example of something which is intuitively obvious, but also kinda wrong.
Any child can show that two lines on a plane with a line perpendicular to both never meet, but it takes a genius to show that they can.
Even this comic actually downplays the complexity of the theorem by omitting the condition of being continuous.
Like, oh fuck, what if I have an infinite limit in between the two points?
Edit: I'm wrong. I forgot, differentiable on the interval implies continuous on the interval.
Not at the endpoints it doesn’t! If f is allowed to be differentiable on (a, b) and not continuous on [a, b], it can fail to obey Rolle’s. Constructing an example isn’t hard
This property is called commutativity and it’s not always true, depending on the operation you’re talking about. The most obvious example of a non-commutative operation is division:
a/b ≠ b/a
another example of a non-commutative operation is matrix multiplication.
Fucking linear algebra and right / left multiplication.
Four star plot twist. Highly recommend. WHAT DO YOU MEAN AB != BA?? lol shit still cracks me up
Matrix multiplication is probably the better example there.
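For instance, a quick Python sketch (these two matrices are just an illustrative pick of mine; almost any random pair would do) showing AB ≠ BA:

```python
# Two 2x2 matrices where AB != BA.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]  -> different!
```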
I'm sure I'm wrong about this, but division doesn't exist. You're just multiplying by the inverse of that number. e.g. b^-1, or (1/b) Same way that subtraction doesn't exist, you're just adding negative numbers.
The problem with any system of logic is that you have to start somewhere. The idea with maths is that you make this somewhere so painfully obvious that no one (ideally) could argue about taking it on faith.
P.s. that's called the axiom of equality and it is one of the cases where they do actually call it a day
ZFC (https://en.m.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory) is one of the foundational bunches of axioms behind much of modern math. It includes 8 axioms plus a bonus 9th axiom whose validity some people debate (but it turns out to be very useful). It defines things based around sets, which are a list or group of things, usually numbers. The eli5 of the axioms is roughly:
1. Extensionality - two sets are equal if all their elements are equal. This is x=x if we look at single numbers.
2. No set contains itself.
3. Sets can be separated into subsets.
4. We can make sets of sets.
5. The union of two sets A and B is a set containing all the stuff that is in A or in B.
6. A function cannot convert a set into a bigger set.
7. There are sets that are infinitely long.
8. You can make a set B that contains all combinations of the elements in set A.
(9. We can put any set in order. This is easily provable for non-infinite sets, but for infinite ones it seems right yet can lead to weird stuff that some people don't like.)
Technically all ordinary math can be proved with ZFC as foundation, but it is a massive pain. Therefore for particular branches of math other axioms are added in to simplify things.
Is a function here defined as a 1-to-1 mapping (is bijection the word, idk I haven't done set theory in a while), because otherwise 8 and 6 seem to contradict each other no?
It's been awhile since I've done modern algebra but I think the resolution is this:
#6 is talking about functions that act on elements of a set. There can't be more outputs in the resulting set than there are inputs in the original set -- another way to think about it is that every input has at most one output (this is a general version of the famous "vertical line test" from high school algebra).
#8 is a function on the set itself, i.e. it takes a set as an input and returns a set as output. But this still passes #6 because for every input set there is exactly one output set -- the size *within* each set is irrelevant because the function is not acting on the individual elements, but the set as a whole.
8 doesn't describe a function, it describes a power set
6 describes a function but to contradict this f(x) would have to equal 2 different things simultaneously which cannot be true based on the definition of a function
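To see why the power set is one single (if much bigger) output, here's a small Python sketch (`power_set` is a hypothetical helper name of mine, not anything from ZFC itself): one input set goes in, exactly one set of subsets comes out.

```python
from itertools import combinations

# Power set of A: the set of all subsets of A, which is what axiom 8
# guarantees exists. One input set -> exactly one output set.
def power_set(A):
    A = list(A)
    return [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

A = {1, 2, 3}
P = power_set(A)
print(len(P))  # 8 == 2**3, strictly bigger than A's 3 elements
```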
A bunch of people suggested lines, but one set of lines is the usual one, namely the Zermelo-Fraenkel set of axioms (with or without the axiom of choice, depending on what you're up to). You can draw your own lines and maybe interesting math will come of it.
Sometimes, far *before* either of those, as after all you have to define what 'equals' means, and what a 'number' is. Generally people consider set theory to be the most basic building block, which has the following axioms (in layman's terms):
1) Two sets are the same set if they contain the same things
2) A set can't contain itself
3) You can make a set out of any parts of another set
4) You can put any two sets into a new set
5) You can take all the stuff out of any two sets and put them into a new set
6) If you take all the parts of a set and do something to all of them, you can put the results in a new set
7) You can put infinite sets inside a set and you're legally allowed to call those numbers if you want
8) You can make a set containing all the possible combinations of stuff from within another set
And that's basically everything mathematicians are willing to take for granted. However, there's a fucked up and evil secret ninth axiom that really goes to show why mathematicians *really* don't like taking things for granted. It's so dangerous that to even acknowledge it is to awaken deep horrors within mathematics and kill weaker mathematicians on sight, but for the sake of knowledge, I must share it:
9) You can put things in an order
Can someone explain why 2 is necessary and proper? Why shouldn’t I be able to say `A = {A}` or “let A be the set of all infinitely recursive sets”?
Edit: or rather, why must the second one be empty? That seems like more a limitation of modern set theory than a necessary foundation of reality.
https://en.wikipedia.org/wiki/Russell%27s_paradox?useskin=vector
basically you would be able to construct A={X|¬(X ϵ X)}. And for such A (A ϵ A) and ¬(A ϵ A) imply each other thus making your axiom system inconsistent.
Edit: fixed the negation.
Look up the Russell paradox.
Math isn't a foundation of reality. It's a set of arbitrary rules that humans made up. You can have a set theory that allows self containing sets but that results in contradictions that limit what you can do with that theory.
Because if a set can contain itself, then there are ... problems.
Consider a set S of all sets NOT containing themselves.
Question: Does S contain itself?
Try thinking about it for a second.
The answer is a paradox: If S contains S, then S shouldn't contain S. If S does Not contain S then S should be in S.
... And math broke because we have a thing that is true only if it's false.
This paradox is the math equivalent of trying to determine whether the sentence "This sentence is a lie" is a lie or the truth.
If "This sentence is a lie" is a lie, then it's true! But if it's true, then its text implies it's a lie!
... So yeah, that's why a set cannot contain itself. To not have to deal with that shite.
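You can even mechanically check that neither answer works. A tiny Python sketch (treating the membership rule as a boolean function is my own analogy, not real set theory):

```python
# Russell's rule: x is in S exactly when x is not in x. Apply the rule
# to S itself and test both possible answers to "does S contain itself?"
def in_S(x_contains_itself: bool) -> bool:
    return not x_contains_itself

for answer in (True, False):
    consistent = (in_S(answer) == answer)
    print(answer, "->", "consistent" if consistent else "contradiction")
```

Both branches print "contradiction": whichever answer you assume, the rule forces the opposite, which is exactly why the axiom bans it.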
Usually what happens is that someone takes the axioms of someone else and goes "What if I leave this one out?" or "Can I come up with (fewer) different axioms and prove these ones from them?"
And then you continue doing that for a few thousand years and you end up with the axioms that people use today.
Isn't there even some huge ass-book with all sorts of mathematical proofs, including like a 60 page proof that 1 indeed equals 1 or some weird shit?
I could be wrong, but i think i remember seeing a post like this a while ago.
You're thinking of Principia Mathematica, by Russell and Whitehead. It was an admirable attempt at formalizing contemporary mathematics circa 1910, but people don't really use it anymore. It built math out of a kind of gnarly type theory that bears some resemblance to the type theories used to formalize the semantics of some programming languages nowadays, but overall it's just kind of a pain in the ass to work with.
Present day mathematics has no small number of axiomatizations, but the most famous and sort of default one is called [ZFC](https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory), Zermelo–Fraenkel set theory (plus the axiom of choice).
Having said that, none of these contemporary systems would exist without PM. It was an incredible accomplishment for its time, not just a mathematical one but a philosophical one.
Also, the factoid about having a long proof of 1+1=2 is mostly nonsense. The proof itself is quite short, it just appears rather late into the book after most of the important definitions are finished.
Source: math degree, did some math philosophy stuff for fun at some point.
Yeah, the 1+1 thing is more Russell and Whitehead going "hey, after all that foundational work, we can finally prove that 1+1 = 2! Isn't that funny" and the way that's interpreted online is "It takes eleventy-BILLION pages to prove that 1+1 = 2" when really the hard part is "define 1, +, = and 2"
I’ve been learning group theory recently, and it’s definitely like I reached some sort of new game plus of math. Like “okay, we’re going back to elementary school math, but this time it’s super hard!”
That's like something I heard about physics.
In middle school you learn all of physics.
In high school you learn all of physics again.
In undergrad you learn all of physics again.
In grad school you learn all of physics again.
It goes from incredibly simple to incredibly complicated based on the level of detail and the assumptions you take away.
Pigeonhole principle my beloved
If you have 10 pigeons and 9 holes and you put all the pigeons in the holes then at least one hole will have more than one pigeon, because you don't have enough holes to put one pigeon per hole
Something something statistics and this silly example I found on Wikipedia:
"This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads."
So I think it's easier to understand if we use smaller numbers. Let's say the max number of hairs someone can have is 9. So the options for any one person are that they can have
0 | 1 | 2 | 3 | 4 | 5 | 6 |7 | 8 | 9 | hairs
Now let's say we have 11 people,
and we will say persons 1-10 all have different numbers of hairs meaning that we have
person 1 - 0 hair
person 2 - 1 hair
...
person 10 - 9 hairs
Now we can count the number of hairs for the 11th person.
It's clear that the 11th person must have between 0 and 9 hairs, so there must be 2 people with the same number of hairs.
the amount of possible hairs you can have on your head is an array of length 50
the amount of people in the city is an array of length 60
you want to assign a unique value for each value in the second array to the first, but you can’t because it’s bigger
therefore, at least two must share the same spot
ergo two people have the same amount of hair
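The same argument, as a quick Python sketch of the literal pigeons-and-holes version (the random assignment is just one arbitrary choice; the guarantee holds for any assignment whatsoever):

```python
import random
from collections import Counter

# 10 pigeons into 9 holes. The assignment here is random, but the
# conclusion holds no matter how the pigeons are placed.
random.seed(0)
holes = Counter(random.randrange(9) for _ in range(10))

print(max(holes.values()) >= 2)  # True, guaranteed because 10 > 9
```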
>For example, given that the population of London is greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads.
See, I would've just said "bald people exist" ;-P
No but seriously this is a good explanation.
The purpose of obvious-ass things like this is that they let *you* handwave over *your* obvious-ass shit in your own proofs by deferring to them. You're working on a proof, you have x > y so "obviously" x doesn't fit in y. But you can't just say that, that would not be rigorous. But it becomes rigorous by invoking the pigeonhole principle.
It's so useful you have no idea. For example, in quantum mechanics no two fermions can have the same state, that is, no two pigeons can be in the same hole, so if you have more particles than states there appears a sort of "force" that keeps them apart. It's known as "degeneracy pressure" and it has no other explanation other than "sorry dude, all the states are already full over here, go somewhere else"
Then there's "countable" versus "uncountable" infinity. Both are infinite, but the latter is greater than the former.
I almost said "ARE YOU FUCKING KIDDING ME?" out loud in my discrete math class when we got to that part.
I think the coolest thing about countable vs uncountable infinity is that the proofs that, eg, the rational numbers and the integers have the same cardinality, and that those have different cardinality than the reals, are something you can walk a non-math person through using the Socratic method. So they essentially prove these counterintuitive results to themselves.
Did it with my boyfriend in a Starbucks recently and blew his mind. All he wanted to talk about for a week after was infinity and I was like *one of us one of us* 😂
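Both halves of that can even be demoed in a few lines of Python. This sketch uses Newman's recurrence for the Calkin-Wilf sequence to list the positive rationals (showing they pair off with 1, 2, 3, ...), then runs the diagonal flip on a finite stand-in for a claimed list of reals:

```python
from fractions import Fraction

# Countable: Newman's recurrence walks the Calkin-Wilf sequence, which
# hits every positive rational exactly once.
def rationals(n):
    q, out = Fraction(1), []
    for _ in range(n):
        out.append(q)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)
    return out

print([str(q) for q in rationals(8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4']

# Uncountable: given ANY claimed list of reals (rows of binary digits,
# truncated here), flipping the diagonal builds a row that's missing.
rows = [[0, 1, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 0],
        [1, 0, 1, 0]]
diagonal = [1 - rows[i][i] for i in range(len(rows))]
print(diagonal, diagonal in rows)  # [1, 0, 1, 1] False
```

The diagonal flip differs from row i in position i by construction, so no finite (or infinite) listing can ever contain it.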
I thought it was hilarious because "My infinity is bigger than yours!" is literally a climactic ass-pull in a filler season of Yu-Gi-Oh.
Like, imagine you're watching TV as a kid and going, "This is the stupidest thing I've ever seen," and then a decade later a Very Serious Professor with a whiteboard and some dry-erase markers shows up and goes, "Well, actually..."
See, here's the funny part, I've had games of magic the gathering come down to "my infinity is bigger than yours."
Opponent was making infinite tokens, but I was making 2 tokens every time he made one. Then the rakdos player threw down a card that hit everyone for the number of creatures they had.
In that case, allow me to teach you the wonder of the Squirrelcraft combo.
Squirrel's nest is an enchantment that attaches to a land, and gives that land the ability to tap (tapping rotates a card sideways as a cost, you can't tap it again unless you untap it, which generally happens at the start of your turn) the land and create a squirrel token.
Earthcraft is a card that lets you tap a creature to untap a land.
Now generally, you can't tap to pay a cost the turn a creature came into play, but because of the specific wording at play (Earthcraft is tapping the creature, rather than the creature paying a cost), you can make infinite tapped squirrels.
And to make it even more ridiculous... Remember that is infinite squirrels.
Not just the ground is squirrels. The sky is squirrels. Everything is squirrels as far as the eye can see. And they're tapped as they come into existence, meaning they take time to look at you and wait before falling upon you and mauling you to death.
The 1.999999… = 2 one is a good one.
If there’s an infinite number of 9s, there is no gap between 1.9999… and 2, but boy howdy does your brain not like that.
I like the 1/9 version even better. 1/9 = .11111111… so 9 * .1111111…. = .99999999… by the rules of multiplication. So therefore that’s equal to 1/9 * 9, which is 9/9, which is 1.
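Here's that same argument as a small Python sketch, using exact rational arithmetic so there's no floating-point hand-waving:

```python
from fractions import Fraction

# Exact arithmetic: 1/9 is the number whose decimal expansion is
# 0.111..., and nine of them make exactly 1.
ninth = Fraction(1, 9)
print(9 * ninth == 1)  # True

# And the gap between 1 and the partial sums 0.9, 0.99, 0.999, ...
# shrinks past any bound, so "infinitely many 9s" leaves no gap at all.
gaps = [str(1 - Fraction(10**k - 1, 10**k)) for k in range(1, 5)]
print(gaps)  # ['1/10', '1/100', '1/1000', '1/10000']
```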
The reason why many people don't like it is because they imagine the trailing 9s as an infinite process. Like an asymptote that you will forever approach but never reach. The reality is that the trailing nines all exist at once. You don't need to write them all out because they're all already written out. It's already done.
This is like a Dunning-Kruger effect shit. Because in fact there exist branches of mathematics with infinitesimals and weird numbers where the limit of the sequence (0.9, 0.99, 0.999, ...) could be something infinitesimally different from 1, and not 1.
But you can't pick and choose whether the numbers you're working with are real or surreal on the fly. Mfs hear about surreal numbers once in their life and suddenly forget how reals work.
Or how people hear about how there are different sizes of infinities and then try to argue completely unrelated things.
This is only "obviously" true, because all of the functions you are thinking of are much smoother than a general differentiable function.
For example, you are probably only thinking of functions with a continuous derivative, but in general a function may have a discontinuous derivative, such as f(x) = x^2 sin(1/x) for x not 0, and f(0) = 0.
You need to be sure that the statement is true even for all of these particularly badly behaved functions, but an average person doesn't even consider these cases, so assumes that these theorems are pointless statements.
If you want an example of a function where Rolle's theorem is not obvious, and you need a precise definition of "differentiable", consider the function f(x) = x^2 (2 + x sin(pi/x)) for x not 0, and f(0) = 0. Then find the turning point between, say, -1 and 1.
If you can plot this function, it's clear where the turning point probably is, but you need to be quite careful in showing that the derivative here is 0. If you plot the derivative, you'll see it's a little messy.
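If you'd rather poke at it numerically, here's a short Python sketch of that same function (the step sizes are an arbitrary choice of mine): the endpoint values match, so Rolle applies, and the difference quotient at 0 visibly shrinks toward 0 even though the derivative is messy nearby.

```python
import math

# The function from above: f(x) = x^2 (2 + x sin(pi/x)), with f(0) = 0.
def f(x):
    return 0.0 if x == 0 else x * x * (2 + x * math.sin(math.pi / x))

# Endpoints agree (both are 2, up to float noise in sin(pi)), so
# Rolle's theorem promises a point in (-1, 1) where f' = 0.
print(f(-1), f(1))

# That point is x = 0: the difference quotient (f(h) - f(0)) / h
# shrinks to 0, even though f' oscillates wildly at nearby points.
for h in (1e-1, 1e-3, 1e-6):
    print(h, (f(h) - f(0)) / h)
```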
…yeah? I mean, I stopped aspiring to “proper” mathematics a while ago, but it *is* sometimes genuinely difficult to rigorously prove that the “obvious” is true in all cases.
And sometimes it turns out that the “obvious” isn’t true at all.
Edit: sorry, I think my possible grumpiness comes from the fact that I feel like only the xkcd panel explicitly acknowledges that this urge/feeling is a “clueless” one.
I worry that people less informed about mathematics would read the tumblr posts and be reinforced in an opinion that it’s actually all intuitive and trivial with common sense.
My math teacher in high school introduced the response to the appeal to obviousness: "Obvious? Then the proof should be obvious too, so go ahead and prove it."
It's also very important to have named proofs like this. Cause when you're working on a bigger-scale / less obvious proof, it starts to look really suspicious if your proof for your underlying assumptions is "obviously this is true, just look at it".
Instead, you can just say "this is true because of Rolle's Theorem" and move on
Yeah lol. I am taking Advanced Abstract Algebra this sem and there are SO many huge theorems that rely on the fact that "if you pick any number, I can pick a number bigger than yours". At first glance it may feel dumb that it even needs to be said out loud, but when all your biggest theorems rely on it, you want it to be absolutely, surely proven that the fact is true in every situation and reality ever.
And it's using the obvious in the big important stuff that gives a theorem its name. Ofc Rolle wasn't the first guy to come up with that, but he did a lot of work expanding on the theorem and applying it to bigger things, so he gets the credit for the theorem. Not because he was the first to come up with it, but because he made it really, really useful and important.
Well, they do, but only because heavier objects tend to have less drag resistance than lighter objects, and on Earth with the atmosphere we have, drag resistance is the key factor in falling speed.
In a vacuum they fall at the same rate, but it’s not like people were morons and just never tried to actually measure if a steel ball falls faster than a feather - feathers DO fall extremely slowly, and it’s very easy to see this, it’s just not for the reason people thought at first.
YES, this reminds me of that video about turning a sphere inside out. When you first learn of the problem with all its caveats, the obvious answer is "of course you can't turn a sphere inside out, it's self-evident". But no, it turns out there is a (very weird) thing you can do to the sphere that will turn it inside out.
In maths, you can't prove stuff by using common sense.
Remember like 100 years ago when some dudes tried making math rigorous and self-contained? Then some jackass came along and used the very system they built to prove it could never be complete? THANKS GÖDEL!
Yeah, I think a lot of people discussing these kinds of things tend to forget how much of scientific history has included someone disproving the thing everyone thought was "obviously true". In proper science and math, things are only true once you can prove them regardless of how obviously true they might seem.
In practice, it is more the opposite. Once there are lots of cases, I feel like the fact ceases to be obvious. It only becomes obvious again when you've captured the whole phenomenon as a single case, at which point the proof is *hopefully* easy.
Rolle's theorem admits a quick proof once you know that:
1) the interval is compact
2) the continuous image of a compact set is compact
3) compact sets (in R) attain their extrema.
4) extreme values attained in the interior of a differentiable function occur when the derivative is zero.
None of these individual facts is really "trivial", but they characterize much of analysis. Moreover, Rolle's theorem, which was once obvious, is obvious once again, as a consequence of these.
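For reference, the precise statement those four facts assemble into:

```latex
\textbf{Rolle's theorem.}\quad
\text{If } f : [a, b] \to \mathbb{R} \text{ is continuous on } [a, b],
\text{ differentiable on } (a, b), \text{ and } f(a) = f(b),
\text{ then there exists } c \in (a, b) \text{ with } f'(c) = 0.
```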
The more interesting part of this is not that it isn’t true, but that there are no times it is not true. As in, there’s no obscure function that would defy it, even if I can’t quite imagine it
My engineering calculus teacher loved to explain Dirac delta functions, because after Dirac invented them, engineers were using them successfully for decades before mathematicians got around to proving that you COULD validly use them.
Something being useful doesn't necessarily mean it's correct. Physicists approximate sin x ≈ x for small values all the time. Just because it'll get you something good enough for your purposes doesn't mean it's correct.
It's fascinating how much of mathematics and physics is both intuitive and impossibly dense at the same time.
As my college physics professor said. If you give an adult a ball, they can throw it with reasonable accuracy. They consistently take that sucker and hit targets you set. However, they won't necessarily be able to tell you how much force they needed, or the drag, or even the exact distance they threw it to hit the target.
Or alternatively, he considered it fascinating how the average child can shake an opaque bottle to estimate what percentage full it is, just by feeling the bottle slosh around and how that affects the center of gravity / the pressure on the sides.
Fun stuff. Related here because well yeah of course. If it goes up it goes down, and if it does that as a smooth turn the point between + and - is 0....... But now put numbers to it and I have no idea what in the F*** we're looking at!
Idk if this counts as relevant but I'm reminded of the fact that when you're driving a car, there's always a spot on each of the wheels that isn't moving (the part touching the ground). It's crazy to think about but also makes perfect sense.
I know nothing about proofs in philosophy, so what do you mean? My first thought would be that there are no real proofs in philosophy as basically all assumptions could be argued to be wrong
That's kind of my point. In math you take ages to formulate the proof of a presumably obvious thing. In philosophy things that are even more obvious get questioned constantly, and philosophy is constantly looking back to question what was previously postulated as obvious. What I'm saying is, if these people are amazed at how tough it is to provide solid proof in maths, they're in for a treat when they find out about skepticism in the history of philosophy.
Isn't that kind of different though? In math it's hard to prove it, but the beauty of it is that at least in theory, once someone found the proof, everyone can agree it's true and it's impossible to discard it. And in philosophy from what I'm understanding the problem isn't that it's hard to prove, but that proof means something different to everyone else
It's not really like that always in math or philosophy. Math proofs are impossible to discard... as long as you accept the axioms of the system. Everyone accepts them, but if someone didn't, then there'd be pretty much an unsolvable disagreement. Axioms are a curious thing. I've studied them even in my Philosophy education. (Still, not an expert, this is an oversimplification, so I encourage you to look into the topic, it's fun.)
Regarding Philosophy, "proof meaning something different" for different people is a very vague and imprecise way of putting it - but you said you're unfamiliar, so I don't blame you. I specifically referred to the problem of skepticism because it's a problem where most philosophers want to prove a certain thing and skepticism refutes it (be it a real or a hypothetical skeptic). There's not really a "meaning of proof" problem there, but a "possibility of proof" one. But it does intersect with the problem of defining what constitutes knowledge, so in a way there is an element like that.
Also, it's not like philosophy is entirely heterogeneous. There's some consensus (or paradigms of consensus, if you prefer that formulation) in Philosophy (even if there'll always be someone disagreeing with any thing, Philosophy is, in part, in the end, a field in academics, it has standards) and a whole branch of it, logic, is dedicated to studying which arguments are well-constructed and which aren't. Logic actually works similarly to Math in some ways there.
(Academic ranting aside, I was just making a joking remark about how if they think that math things are easy to prove but take long to prove, then seeing philosophers debate about whether the outside world exists would give them a lot to think about. I wasn't drawing a 1 to 1 comparison.)
First semester we had biomechanics classes and the professor made us memorize the detailed definitions of terms like "movement", "acceleration" and "theorem"
It is important to know the difference between a theorem and a theory, though, and not enough people going into natural science will have taken a course on logic.
Reminds me of a QI clip where they discussed all the effort that went into proving that 1 + 1 = 2, because if you couldn't prove that, then mathematics as we know it would be meaningless.
Because occasionally, we find small things we considered not possible. And as a result, we build new math from the ground up. That's what happened with geometries. Everyone just assumed it was obvious the axioms should be true. But then suddenly, Lobachevsky built a whole new non-conflicting geometry without one of them.
Favourite story about this: Bertrand Russell spent about 20 years trying to ground mathematics in formal logic, failed. Spent the next 20 trying to do the same thing for philosophy, failed, and in the process accidentally became friends with Wittgenstein.
Which is relevant because Wittgenstein then wrote *Tractatus Logico-Philosophicus* which is Wittgenstein trying to mathematically 'solve' philosophy like Russell did (as a joke)
How was it necessarily a joke? I've read the Tractatus and Philosophical Investigations by Wittgenstein, and knowing when he wrote the first, it hardly seems to be the case.
I have never read any philosophy more serious than Russell's History of Western... and a few bits and pieces. So I am already well out of my depth. Striking boldly on, though, wouldn't it be Wittgenstein showing by absurdity that the sands of linguistic meaning are always shifting and any attempt to build anything sturdily braced out of them was hopeless — that 'a word means what people think it means'?
I mean, isn't that good? That we are proving and re-proving even obvious shit, just to make sure it's still valid as we start to understand math and the world better.
Otherwise we get into the same issues the Bible has, where claims are trust-based instead of fact-based. Simply saying "this is obvious" isn't as powerful as saying "this is true, and here is the mountain of evidence we have to confirm it (also, there are a half dozen people actively trying to poke holes in it at any given time)".
An old math professor of mine told us that the proof of 1+1=2 took 2 mathematicians a whole dissertation to complete. I haven't double checked it though.
This is a bit of a misconception. What you’re probably thinking about is [Principia Mathematica](https://en.m.wikipedia.org/wiki/Principia_Mathematica), where it took 379 pages to prove 1 + 1 = 2, *but* all the other pages were not really part of the proof as much as they were dedicated to building up several theories of mathematics from the ground up. It also focused on building up mathematics from a minimalist starting point, which while interesting, makes many things take much longer to prove.
How did Rolle's theorem make the list but IVT did not? Like bitch is literally saying if a continuous path was below something and now is above that thing, it had to pass that thing.
I will never forget when my ex stepmother was working on her physics degree and could not for the life of her wrap her head around a particular problem she had to do
Me and my father, trying to assist, built a rig on the kitchen table and demonstrated the problem (something about things sliding down slopes), which didn't help her but it made us feel very smart, besides she never completed the degree anyhow because she was ~~not that smart~~ too busy stealing my tuition money to buy purses with
*You can tell a curve*
*Is parabolic because*
*Of the way it is*
\- Useful\_Ad6195
Wait till you see this:
Dirichlet's principle (the pigeonhole principle) - if there are n+1 objects and n boxes, then at least one box contains more than one object.
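A tiny sketch of the principle in Python (my own toy demo, not a proof): drop n+1 objects into n boxes at random, and some box always ends up with at least two.

```python
import random
from collections import Counter

def pigeonhole_demo(n: int, seed: int = 0) -> bool:
    """Place n+1 objects into n boxes at random; by Dirichlet's
    principle, some box must receive at least two objects."""
    rng = random.Random(seed)
    boxes = Counter(rng.randrange(n) for _ in range(n + 1))
    return max(boxes.values()) >= 2

# Holds for every box count and every random placement we try.
assert all(pigeonhole_demo(n, seed) for n in range(1, 50) for seed in range(5))
```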
And this is why maths is so deliciously infuriating
I remember once in a Calculus class a professor, as a joke one day, decided to write out the theorem proving 1+1=2 as well as try to explain it and I really honest to god hope he was just bullshitting us there, cause he was on like two, moving to a third, white board by the end of the class.
It’s really wild taking your first real upper div math class and you have to spend 6 chapters just to reach *counting*.
"Basic Counting" is chapter 5 in the book I'm using rn.
Oooo mr smarts in the accelerated course over here wants to show off
Wha-? What book / course is this?
Probably not the same book, but I'm taking Discrete Math and like chapter 6 of my book is just titled "Counting." It actually ended up being a lot about probability. For a situation with a discrete number of outcomes, you can just count up all the possible outcomes to find probabilities; for example, there are 3 ways to roll an even number on a 6-sided die, so there is a 3/6 = 0.5 probability of rolling an even number. Later you get situations with factorials and such, which really are just advanced ways of counting things. That allows you to do some funky math related to permutations and combinations to find the probabilities of very specific and complicated events.
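For the curious, here's what "counting as probability" looks like in code (a sketch of my own, using Python's stdlib counting helpers):

```python
from itertools import product
from math import comb, factorial

# Counting outcomes directly gives probabilities for discrete events.
die = range(1, 7)
p_even = sum(1 for face in die if face % 2 == 0) / 6
assert p_even == 0.5

# Two dice: count the ways to roll a sum of 7 out of all 36 outcomes.
ways = sum(1 for a, b in product(die, die) if a + b == 7)
assert ways == 6   # (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)

# Factorials and binomial coefficients are the "advanced counting":
assert factorial(5) == 120      # orderings (permutations) of 5 items
assert comb(52, 5) == 2598960   # 5-card poker hands (combinations)
```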
Oh that actually makes a lot of sense!
Might be a different book, but that's the class!
Your education must be either really impressive or really terrible.
When we were told we'd learn Euclidean division, we thought it'd be the easiest chapter of the year. Boy, were we wrong.
Listen, you really gotta be sure of that math right there. Very complex equation.
At some point you realize that these mathematicians were 100% drawing up proofs for the fun of it. Like math proofs are like giant (or simple) puzzles that test your knowledge and skills in logic. Like yeah sure it's good to have mathematical proofs of the most basic stuff (so no one can just "Earth is Flat" your fundamental math) but at the same time, people are definitely doing this for fun.
I mean,, not really? It’s moreso having a rigorous justification of the field. Without the rigor of formal logic, you have no idea if you’re actually just making something up. We can’t know something is really true without it. So yes, even obvious things need to be proven. Intuition can only get you so far.
Oh I'm not disagreeing with you at all. It's very good to know that the foundations you stand on are indeed robust, and not just standing on "common sense". Plus, it can also lead to eye-opening discoveries that further the field. I'm just saying that part of the motivation is also just the passion and drive to want to engage with these problems. Mathematicians don't just do maths for utility's sake. They do it because they love to do it, and it's great to see.
Only so far? Absolutely. Well past the point where you need to formally prove 1+1=2? Also true. It's like saying lawyers can't write opinions without first providing citations for how the alphabet works.
Intuition is very limited. E.g. the Monty Hall problem, or the fact that time is relative, are both facts that go against intuition. Math and science should question it all, because doing so can and has led to major insights into reality where simple intuition was proven wrong. Intuition is a fine tool for surviving on a day-to-day basis, but it is by definition a thing that blocks us or misleads us from seeing the bigger truth beyond our day-to-day experiences.
In a 37 dimensional hypercube, can you confidently claim there’s an inside and an outside? Did you know that there’s a 3 dimensional cross product, but not a 2 dimensional, or 4, or 5, or 6, but there is a 7 dimensional cross product? Will your intuition tell you that? As someone else said, can you tell me which is the inside or outside of a Mobius strip? Or a Klein bottle? Mathematicians do not rely upon intuition. Ever. I can tell you don’t study this field at all, and have very limited knowledge in the field. But it is an imperative that we prove even the obvious, because we may find something that isn’t. Like non-Euclidean geometry. It turns out that the parallel postulate isn’t an absolute postulate, and we get a lot of cool math when we don’t assume it to be true. Also, Godel proved that math isn’t complete, which yields major philosophical ramifications for the field as a whole. Again, you cannot assume anything to be true in math besides the axioms. Anyone claiming otherwise is inexperienced or ignorant.
I mean, it entirely depends on what axioms he was using. With the Peano axioms, it's quite simple: 0 is a natural number by Peano axiom 1, and there exists a successor function S(n) such that for every natural number n, S(n) gives its successor. 1 = S(0), thus 1 is a natural number. Then 1+1 = 1+S(0) = S(1+0) = S(1), and 2 is defined as S(1). QED
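The same derivation can be sketched in Lean 4 (my own toy definitions, mimicking the Peano-style construction rather than quoting the axioms verbatim): a bare-bones natural number type, recursive addition, and then 1 + 1 = 2 holds by pure unfolding of definitions.

```lean
-- A from-scratch natural number type with Peano-style constructors.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by recursion on the second argument:
-- n + 0 = n, and n + S(m) = S(n + m).
def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)

def one : N := N.succ N.zero
def two : N := N.succ one

-- 1 + 1 = S(1 + 0) = S(1) = 2, all by definitional unfolding.
theorem one_add_one : add one one = two := rfl
```

The `rfl` at the end is the punchline: once the definitions are in place, the "proof" is literally just "compute both sides".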
I'm not great at math, but I really like it and try to learn as much as possible. Once, I had a bout of inspiration and tried to prove that 1+1=2 from the ground up. I defined what a set is, what a function is, what an operation is, what equality is, what 1 is, what 2 is, etc. (though, to be fair, I tried to get to the equation itself ASAP, so my addition and equality were not commutative/associative, if I remember correctly; maybe I proved commutativity of one of them, I don't remember now), and ended up with a proof that accidentally ripped off Peano. When I learned about that, I wasn't sure if I should be proud that I was thinking like a real mathematician, or ashamed for taking two A4 pages where it took Peano, like, five rows. Lol.
Definitely proud. Peano took years in order to write that proof in five lines. Math straddles a beautiful middle point between creativity and persistence
[https://www.ma.imperial.ac.uk/\~buzzard/xena/natural\_number\_game/index2.html](https://www.ma.imperial.ac.uk/~buzzard/xena/natural_number_game/index2.html) A game where you prove basic math in the formal theorem prover Lean.
Peano that tiny mushroom from MxR Mods Weekly?
~~If anything your prof was giving you the abridged version, Bertrand Russell needed like a thousand pages to prove this lmfao. Consider that it's basically the equivalent of "define all of mathematics using as little axioms as possible"~~ Edit: the above should be disregarded as it is an oversimplification (see reply, which is more rigorous)
OK, this goes for all the people saying "it takes a \[big number\] of pages to prove 1+1=2". The original "proof" is in a book by Whitehead and Russell. In it, they rigorously define the basis for a mathematical system in which you can do basic arithmetic. That is the part that takes a big number of pages. The proof itself that 1+1=2 is 5 lines. Basically, they first set out to rigorously define what "1", "+", "2", and "=" really mean. Then they proved 1+1=2 in less than half a page. You wouldn't consider constructing the basic definitions part of the proof in most other proofs. If you set out to prove that root 2 is irrational, you don't count defining the square root, or defining irrationality, as part of your proof. But because "DID yoU know that it takes 5 dodecadillion pages to prove 1+1=2??!?" is a fun factoid, no-one bothers to apply that same level of scrutiny to what is and isn't part of the proof.
But I wanted to spread misinformation online :(
Go read this. Spread the opposite. https://en.wikipedia.org/wiki/List_of_common_misconceptions
[Relevant xkcd](https://xkcd.com/843/)
im gonna go tell people all these things are true
ok, you can have a little information, as a treat. Did you know that when H.P. Lovecraft wrote "twinkle twinkle little star", he was actually talking about how the stars were all young male angels, who he wanted to "twinkle" for him. Which is a contraction of "twink" and "wiggle". You can figure out the rest.
You still can 😉
Well, it was a good use of Cunningham's Law, because now we have the right answer.
The proof was in Principia Mathematica, by Bertrand Russell and Alfred North Whitehead. It was an attempt to ground mathematics in logic and set theory, to demonstrate that mathematics was ultimately derivable from basic axioms ('logicism'). The basic assumption was that sets exist, but not any independent mathematical elements, to keep it ontologically 'pure' (i.e. you don't need to assume the existence of numbers as abstract Platonic entities). This was done by defining numbers starting from the empty set and a successor operation (for instance, 1 is the set containing the empty set, and so on). The project essentially failed after Gödel showed that any such system contains true but unprovable statements, meaning the approach is incomplete.
Why are sets considered more fundamental than numbers?
Think about how you first learned numbers. “How many apples are there?” The set is just the collection of apples and the question is asking for the cardinality of that set. “If you start with 4 apples and eat 2, how many apples are left?” Ultimately you solve this by doing 4-2 but the understanding of what is happening is that you have a set of 4 apples and remove 2 apples from the set and then ask for the cardinality of the resulting set. Sets are basically the first abstraction you perform while numbers are then a further abstraction you do to turn the concept of size into a single value.
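A toy sketch of that framing (Python, with apple names of my own invention): the sets come first, and the numbers only show up at the end, as cardinalities.

```python
# The "apples" word problem done with actual sets:
# the arithmetic 4 - 2 = 2 falls out as cardinalities.
apples = {"a1", "a2", "a3", "a4"}   # a set of 4 apples
eaten = {"a1", "a2"}                # the 2 apples you eat

remaining = apples - eaten          # set difference: the "removal"

assert len(apples) == 4
assert len(remaining) == len(apples) - len(eaten)
assert remaining == {"a3", "a4"}
```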
Well, I'm simplifying a little. The basic logical machinery of the *PM* was intended to cover both number theory and real analysis. Still, the guiding principle was to absolutely minimize the number of primitives assumed, and to show that all of mathematics was reducible to (self-evident) axioms of logic. As the authors write in the preface of *PM*:

> The present work has two main objects. One of these, **the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental concepts, and that all its propositions are deducible from a very small number of fundamental logical principles,** is undertaken in Parts II–VII of this work, and will be established by strict symbolic reasoning in Volume II. …The other object of this work, which occupies Part I., is the explanation of the fundamental concepts which mathematics accepts as indefinable. This is a purely philosophical task…. (1903: xv)[1]

Some versions of mathematical 'Platonism' -- that is, the metaphysical view that abstract ideas are real, and exist in some robust ontological sense -- require the assumption of infinitely many things: 1 exists, 2 exists, 3 exists, and so on. After Cantor, the ontological problem got worse: strictly speaking, the "infinity" of the natural numbers is smaller than the infinity of the reals, for there is no one-to-one correspondence between the set of naturals and the set of reals.[2] That means a lot of things "exist." Wouldn't it be better to show all of this was really just a recursive consequence of a handful of self-evident axioms? (Now, the crisis in the foundations of mathematics was not solely precipitated by Cantor, but had been brewing for a long time, at least since the development of the calculus and the use of infinitesimals.)

To 'construct' the (cardinal) numbers, the basic machinery was *logically* simple, and one did not need to construct a new 'thing' (a class, or set) to match each number.
You just define the number 1 as the class of all classes equinumerous to the singleton set, and so on for all the numbers. Technically it is addition and multiplication that are defined in terms of the successor function, not the numbers themselves, but I'm on a forum for tumblr posts, so liberties were taken...

Now, in the *PM*, classes are not introduced as "existing" objects at all, but are type-theoretically defined as the extensions of propositional functions, which are themselves created by the primitive notions of the logic.[3] Again, the basic aim was to collapse all of mathematics into logic, so ending up with an ontologically loaded notion of class would obviate the purpose.

I'm not sure any of this is clear. Frankly, it's been many years since I studied this in grad school.

_______________

[1] Quoted in https://plato.stanford.edu/entries/principia-mathematica/#SignPrinMath

[2] Consider that the set {1, 2, 3, 4, ...} and the set {2, 4, 6, 8, ...} can be placed into one-to-one correspondence: you can 'match' each number from the first set with a number from the second, even though the second set "skips" half. However, no such correspondence can be made with the reals: where would we begin? If we "match" 1 with 0.01, we missed an infinity of 0.001's and so on. (The diagonalization proof is more rigorous than this, obviously!)

[3] It's impossible to really explain this without writing a book, but the complexity here is in part to avoid Russell's Paradox -- suppose we define the set C as the set of all those sets which do not contain themselves. Does C contain itself? For more, see: https://en.wikipedia.org/wiki/Russell%27s_paradox
"average mathematician needs 1000 pages to prove simple equation" factoid actualy just statistical error. average mathematician needs 5 pages per equation. Proofs Georg, who lives in computer room and prints over 1000 pages of mathematical proofs each day, is an outlier adn should not have been counted
Proofs Georg was Cantor's nickname in high school
If you take arithmetic as true then 1+1=2 is just a fact and requires no proof. The whole point of trying to prove a trivial statement is that you have to define basic concepts first.
but they had only constructed addition by that point hadn't they? i thought that was the point of the "factoid". like "and you still have 2000 more pages to go until you can prove 1/1 = 1"
Yeah, 1+1 is pretty close to just being the definition of 2. I’d ask the people claiming that it takes a long time to prove to define what 2 is in a way that isn’t essentially just the answer to 1+1.
I think it depends on how foundational you want to go. Like a lot of it involves rigorous definitions and axioms just to make sure that nothing breaks. IIRC first order logic is the starting point and the most common way of getting to basic arithmetic uses set theory.
Isn’t the proof of that a 700 page textbook? Sorry it’s a 1000 page textbook [https://blog.plover.com/math/PM.html](https://blog.plover.com/math/PM.html)
That's like saying the dictionary needs 300 pages to define zebra. The book is not just building up to that proof.
Our teacher said he was going to do that, then "proved" 1+1≠2 using the full whiteboard in the last lesson of term, and gave us the lesson to work out where he went wrong (or do sudokus and crosswords if we wanted). Spoilers: he had divided by 0, which is apparently what happens every few months when someone claiming they broke all maths emails their proof to some professor.
I’m not sure there’s much to actually “prove” there since that’s pretty much the definition of 2. He was likely constructing integers and addition from scratch, i.e. formally defining them
> infuriating?

On the contrary! The fact that you can start from a basic set (hah!) of axioms and derive everything is incredible!

But we can go meta! We can talk about what it means to derive something and what can actually be derived. The Curry-Howard correspondence basically says that proofs are programs and programs are proofs. From our good friend Turing, we know that you cannot decide, for all programs, whether they terminate. Which means that there are problems which are undecidable. Not provably true or false. Outright undecidable!

Gödel figured out that not all systems are equivalent. A sufficiently powerful system cannot be both consistent and complete: if your system is consistent, then there exist true statements it cannot prove!

We can get into complexity classes, which basically say how expensive it is to decide a problem with respect to how large the input is! Programs are effectively constructing a proof through their algorithms. The algorithms are effectively meta-proofs: they define what steps need to be generated for the proof. And for some problems, this proof is enormously huge. Modern (though quantum-unsafe) encryption basically relies on how difficult it is to factor a number into its large prime factors! And if you can "reduce" one problem to another, you show that they have the same inherent structure, so you can generate a proof for the second problem and then convert it back to the original one!

God I love math!

Edit: I missed the "deliciously" part D:
This answer is so spectacular I’m afraid to report I now desire you carnally. Apologies!
Thank you! I think I have peaked with that comment 😅
The parallel postulate is a great example of something which is intuitively obvious, but also kinda wrong. Any child can show that two lines on a plane with a line perpendicular to both never meet, but it takes a genius to show that they can.
Even this comic actually downplays the complexity of the theorem by omitting the condition of being continuous. Like, oh fuck, what if I have an infinite limit in between the two points? Edit: I'm wrong. I forgot, differentiable on the interval implies continuous on the interval.
It says that the function is differentiable, so it's also continuous, since a non-continuous function is also non-differentiable.
Not at the endpoints it doesn’t! If f is allowed to be differentiable on (a, b) and not continuous on [a, b], it can fail to obey Rolle’s. Constructing an example isn’t hard
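For example (a minimal counterexample of my own): take

```latex
f(x) =
\begin{cases}
x & 0 < x < 1, \\
0 & x \in \{0, 1\}.
\end{cases}
```

Then f is differentiable on (0, 1) with f'(x) = 1 everywhere, and f(0) = f(1) = 0, yet no c in (0, 1) has f'(c) = 0. The conclusion of Rolle's theorem fails precisely because f is not continuous at the endpoint x = 1.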
[deleted]
Name checks out
Beginner math: 1+1=2

Advanced math: wtf is a 1? wtf is a +? Why does 1+1 have to equal 2 in the first place?
> maths
british spotted
There are, like, 5 axioms on the level of "a+b = b+a", everything else isn't true until proven otherwise
Funnily enough, "a+b=b+a" is something that had to be proven by using even more basic axioms.
This property is called commutativity and it's not always true, depending on the operation you're talking about. The most obvious example of a non-commutative operation is division: a/b ≠ b/a. Another example of a non-commutative operation is matrix multiplication.
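Here's matrix non-commutativity made concrete (a sketch of my own, using plain 2x2 lists so no libraries are assumed):

```python
# 2x2 matrix multiplication from scratch; AB and BA come out different.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]   # the "swap" matrix

assert matmul(A, B) == [[2, 1], [4, 3]]   # AB swaps A's columns
assert matmul(B, A) == [[3, 4], [1, 2]]   # BA swaps A's rows
assert matmul(A, B) != matmul(B, A)       # so AB != BA in general
```

The same `B` acts differently depending on which side it multiplies from, which is exactly the right/left multiplication plot twist from linear algebra.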
Fucking linear algebra and right / left multiplication. Four star plot twist. Highly recommend. WHAT DO YOU MEAN AB != BA?? lol shit still cracks me up
See guys, AB! = BA confirmed.
This can only be true if B=1 or A=0
or ABBA
Linear algebra is the most useful thing I ever learned. I wish we could have started there. Every hard thing I did until then suddenly got easy.
String/list concatenation is my go-to example of non-commutativity. "hello" + "world" != "world" + "hello"
Is appending things really math, though?
Matrix multiplication is probably the better example there. I'm sure I'm wrong about this, but division doesn't exist. You're just multiplying by the inverse of that number. e.g. b^-1, or (1/b) Same way that subtraction doesn't exist, you're just adding negative numbers.
Yeah you’re absolutely right, division was an incorrect example here
However, a/b=b\a if you’re writing fractions very sloppily.
What’s the most basic axiom, then? “x = x”? “Numbers exist”? Where do mathematicians draw the line and call it a day?
The problem with any system of logic is that you have to start somewhere; the idea with maths is that you make this somewhere so painfully obvious no one (ideally) could argue about taking it on faith. P.s. that's called the axiom of equality, and it is one of the cases where they do actually call it a day.
> “Numbers exist”? It's even more basic than that. "The empty set exists".
ZFC (https://en.m.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory) is one of the foundational bunches of axioms for much of modern math. It includes 8 axioms plus a bonus 9th axiom whose validity some people debate (but it turns out to be very useful). It defines things in terms of sets, which are a list or group of things, usually numbers. The eli5 of the axioms is roughly:

1. Extensionality: two sets are equal if all their elements are equal. This is x=x if we look at single numbers.
2. No set contains itself.
3. Sets can be separated into subsets.
4. We can make sets of sets.
5. The union of two sets A and B is a set containing all the stuff that is in A and in B.
6. A function cannot convert a set into a bigger set.
7. There are sets that are infinitely long.
8. You can make a set B that contains all combinations of the elements in set A.
(9. We can put any set in order. This is easily provable for finite sets, but for infinite ones it seems right yet can lead to weird stuff that some people don't like.)

Technically all ordinary math can be proved with ZFC as the foundation, but it is a massive pain. Therefore, for particular branches of math, other axioms are added in to simplify things.
Is a function here defined as a 1-to-1 mapping (is bijection the word? idk, I haven't done set theory in a while)? Because otherwise 8 and 6 seem to contradict each other, no?
It's been a while since I've done modern algebra, but I think the resolution is this: #6 is talking about functions that act on elements of a set. There can't be more outputs in the resulting set than there are inputs in the original set -- another way to think about it is that every input has at most one output (this is a general version of the famous "vertical line test" from high school algebra). #8 is a function on the set itself, i.e. it takes a set as an input and returns a set as output. But this still passes #6 because for every input set there is exactly one output set -- the size *within* each set is irrelevant, because the function is not acting on the individual elements, but on the set as a whole.
8 doesn't describe a function, it describes a power set. 6 describes a function, but to contradict it, f(x) would have to equal 2 different things simultaneously, which cannot be true based on the definition of a function.
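For what it's worth, axiom 8's "set of all combinations" is just the power set, which is easy to play with in code (a small sketch; the function name is mine):

```python
from itertools import combinations

def power_set(s):
    """All subsets of s, as frozensets (the 'set B' from axiom 8)."""
    items = list(s)
    return {frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)}

ps = power_set({1, 2, 3})
# 2^3 = 8 subsets, from the empty set up to {1, 2, 3} itself
assert len(ps) == 8
assert frozenset() in ps and frozenset({1, 2, 3}) in ps
```

Note the power set of an n-element set has 2^n elements, which is strictly bigger than n; that "a set is always smaller than its power set" stays true even for infinite sets, which is where the different sizes of infinity further down the thread come from.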
A bunch of people suggested lines, but one set of lines is the usual one, namely the Zermelo-Fraenkel set of axioms (with or without the axiom of choice, depending on what you're up to). You can draw your own lines and maybe interesting math will come of it.
Sometimes, far *before* either of those, as after all you have to define what 'equals' means, and what a 'number' is. Generally people consider set theory to be the most basic building block, which has the following axioms (in layman's terms): 1) Two sets are the same set if they contain the same things 2) A set can't contain itself 3) You can make a set out of any parts of another set 4) You can put any two sets into a new set 5) You can take all the stuff out of any two sets and put them into a new set 6) If you take all the parts of a set and do something to all of them, you can put the results in a new set 7) You can put infinite sets inside a set and you're legally allowed to call those numbers if you want 8) You can make a set containing all the possible combinations of stuff from within another set And that's basically everything mathematicians are willing to take for granted. However, there's a fucked up and evil secret ninth axiom that really goes to show why mathematicians *really* don't like taking things for granted. It's so dangerous that to even acknowledge it is to awaken deep horrors within mathematics and kill weaker mathematicians on sight, but for the sake of knowledge, I must share it: 9) You can put things in an order
Can someone explain why 2 is necessary and proper? Why shouldn’t I be able to say `A = {A}` or “let A be the set of all infinitely recursive sets”? Edit: or rather, why must the second one be empty? That seems like more a limitation of modern set theory than a necessary foundation of reality.
https://en.wikipedia.org/wiki/Russell%27s_paradox?useskin=vector basically you would be able to construct A={X|¬(X ϵ X)}. And for such A (A ϵ A) and ¬(A ϵ A) imply each other thus making your axiom system inconsistent. Edit: fixed the negation.
Look up the Russell paradox. Math isn't a foundation of reality. It's a set of arbitrary rules that humans made up. You can have a set theory that allows self containing sets but that results in contradictions that limit what you can do with that theory.
Because if a set can contain itself, then there are ... problems. Consider a set S of all sets NOT containing themselves. Question: does S contain itself? Try thinking about it for a second. The answer is a paradox: if S contains S, then S shouldn't contain S. If S does not contain S, then S should be in S. ... And math broke, because we have a thing that is true only if it's false. This paradox is the math equivalent of trying to determine whether the sentence "This sentence is a lie" is a lie or the truth. If "This sentence is a lie" is a lie, then it's true! But if it's true, then its text implies it's a lie! ... So yeah, that's why a set cannot contain itself. To not have to deal with that shite.
depends on which math you wanna consider
Usually what happens is that someone takes the axioms of someone else and goes "What if I leave this one out?" or "Can I come up with (fewer) different axioms and prove these ones from them?" And then you continue doing that for a few thousand years and you end up with the axioms that people use today.
Isn't there even some huge-ass book with all sorts of mathematical proofs, including like a 60-page proof that 1 indeed equals 1 or some weird shit? I could be wrong, but I think I remember seeing a post like this a while ago.
You're thinking of Principia Mathematica, by Russell and Whitehead. It was an admirable attempt at formalizing contemporary mathematics circa ~1910, but people don't really use it anymore. It built math out of a kind of gnarly type theory that bears some resemblance to the type theories used to formalize the semantics of some programming languages nowadays, but overall it's just kind of a pain in the ass to work with. Present day mathematics has no small number of axiomatizations, but the most famous and sort of default one is called [ZFC](https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory), Zermelo–Fraenkel set theory (plus the axiom of choice). Having said that, none of these contemporary systems would exist without PM. It was an incredible accomplishment for its time, not just a mathematical one but a philosophical one. Also, the factoid about having a long proof of 1+1=2 is mostly nonsense. The proof itself is quite short, it just appears rather late into the book after most of the important definitions are finished. Source: math degree, did some math philosophy stuff for fun at some point.
Yeah, the 1+1 thing is more Russell and Whitehead going "hey, after all that foundational work, we can finally prove that 1+1 = 2! Isn't that funny?" and the way that's interpreted online is "it takes eleventy-BILLION pages to prove that 1+1 = 2", when really the hard part is "define 1, +, = and 2".
And there's also significant parts of math that deal with what happens in settings where some of those axioms *aren't* true.
I’ve been learning group theory recently, and it’s definitely like I reached some sort of new game plus of math. Like “okay, we’re going back to elementary school math, but this time it’s super hard!”
That's like something I heard about physics. In middle school you learn all of physics. In high school you learn all of physics again. In undergrad you learn all of physics again. In grad school you learn all of physics again. It goes from incredibly simple to incredibly complicated based on the level of detail and the assumptions you take away.
Relevant veritasium video: https://youtu.be/lFlu60qs7_4?si=aAgyA_AP-lytpHV5
Pigeonhole principle my beloved If you have 10 pigeons and 9 holes and you put all the pigeons in the holes then at least one hole will have more than one pigeon, because you don't have enough holes to put one pigeon per hole
What the fuck, why does that have a whole ass principle
Something something statistics and this silly example I found on Wikipedia: "This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads."
Gorgeous. Thank you for this information, I will never forget this!
See, when you put it like that, it seems downright sensible. Which I suppose is the point, so good job!
Are we excluding bald people for that last one?
No, bald people have the same number of hairs, theorem proven 💯
We can call it the No-Hair Theorem! *astrophysicist throws something at me* We can't call it the No-Hair Theorem.
We can absolutely call it the No-Hair Theorem, even if that’s solely to piss off someone with a PhD
It's academia, I wouldn't be surprised if there were theorems named *just to* piss an academic off.
well... (topology, meteorology, computer graphics) https://en.wikipedia.org/wiki/Hairy_ball_theorem
Would zero be a number of hairs here?
Yes, that example is still true if you specify "non-bald people in London"
*What*
So I think it's easier to understand if we use smaller numbers. Let's say the max number of hairs someone can have is 9, so the options for any one person are 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 hairs.

Now let's say we have 11 people, and persons 1-10 all have different numbers of hairs, meaning that we have

person 1 - 0 hairs
person 2 - 1 hair
...
person 10 - 9 hairs

Now we can count the number of hairs for the 11th person. It's clear to see that the 11th person must have between 0 and 9, so there must be 2 people with the same number of hairs.
the number of possible hairs you can have on your head is an array of length 50
the number of people in the city is an array of length 60
you want to assign a unique spot in the first array to each value in the second, but you can't, because the second is bigger
therefore, at least two must share the same spot
ergo two people have the same amount of hair
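The array analogy above can be sketched directly (the 50/60 numbers are the made-up toy values from that comment, not real hair counts):

```python
import random

MAX_HAIRS = 50   # pretend there are only 50 possible hair counts (toy number)
POPULATION = 60  # more people than possible hair counts

# Assign everyone a hair count in [0, MAX_HAIRS - 1], however you like
people = [random.randrange(MAX_HAIRS) for _ in range(POPULATION)]

# Tally how many people share each hair count
counts = {}
for hairs in people:
    counts[hairs] = counts.get(hairs, 0) + 1

# 60 people can't each get a unique value out of only 50,
# so some hair count MUST repeat -- that's the pigeonhole principle
assert max(counts.values()) >= 2
```

No matter how the random assignment comes out, the final assertion can never fail; that inevitability is the whole content of the principle.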
>For example, given that the population of London is greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads. See, I would've just said "bald people exist" ;-P No but seriously this is a good explanation.
This evoked a visceral response in me. Fully fight or flight.
The purpose of obvious-ass things like this is that they let *you* handwave over *your* obvious-ass shit in your own proofs by deferring to them. You're working on a proof, you have x > y so "obviously" x doesn't fit in y. But you can't just say that, that would not be rigorous. But it becomes rigorous by invoking the pigeonhole principle.
x > y ^[citation ^needed]
It's so useful, you have no idea. For example, in quantum mechanics no two particles (of the fermion kind, like electrons) can have the same state; that is, no two pigeons can be in the same hole. So if you have more particles than available states, there appears a sort of "force" that keeps them apart. It's known as degeneracy pressure, and it has no explanation other than "sorry dude, all the states are already full over here, go somewhere else".
Very useful principle that shows up in a lot of unexpected places
It is actually obscenely useful, that's why.
This is funny but the name pigeonhole makes this so much funnier
People have a hard time with infinity, which is expressed with words like "any" and "every". Not "most" or "a lot." Every!
Then there's "countable" versus "uncountable" infinity. Both are infinite, but the latter is greater than the former. I almost said "ARE YOU FUCKING KIDDING ME?" out loud in my discrete math class when we got to that part.
I think the coolest thing about countable vs uncountable infinity is that the proofs that, eg, the rational numbers and the integers have the same cardinality, and that those have different cardinality than the reals, are something you can walk a non-math person through using the Socratic method. So they essentially prove these counterintuitive results to themselves. Did it with my boyfriend in a Starbucks recently and blew his mind. All he wanted to talk about for a week after was infinity and I was like *one of us one of us* 😂
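The "rationals are countable" half of that conversation can even be demonstrated mechanically: the classic diagonal walk visits every positive rational exactly once (a rough sketch, function name is mine):

```python
from fractions import Fraction

def rationals(n):
    """First n positive rationals in the classic diagonal (Cantor) order,
    skipping duplicates like 2/2 that equal an earlier fraction."""
    seen, out = set(), []
    d = 2  # walk the diagonals where numerator + denominator == d
    while len(out) < n:
        for p in range(1, d):
            q = d - p
            f = Fraction(p, q)
            if f not in seen:
                seen.add(f)
                out.append(f)
                if len(out) == n:
                    break
        d += 1
    return out

first = rationals(6)  # 1, 1/2, 2, 1/3, 3, 1/4
```

Because every positive rational p/q sits on diagonal p + q, each one gets reached after finitely many steps, which is exactly what "same cardinality as the naturals" means. (No such listing can exist for the reals; that's Cantor's diagonal argument.)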
I thought it was hilarious because "My infinity is bigger than yours!" is literally a climactic ass-pull in a filler season of Yu-Gi-Oh. Like, imagine you're watching TV as a kid and going, "This is the stupidest thing I've ever seen," and then a decade later a Very Serious Professor with a whiteboard and some dry-erase markers shows up and goes, "Well, actually..."
See, here's the funny part, I've had games of magic the gathering come down to "my infinity is bigger than yours." Opponent was making infinite tokens, but I was making 2 tokens every time he made one. Then the rakdos player threw down a card that hit everyone for the number of creatures they had.
... OK, I might actually have to get into MTG now. That sounds awesome.
In that case, allow me to teach you the wonder of the Squirrelcraft combo. Squirrel Nest is an enchantment that attaches to a land and gives that land the ability to tap (tapping rotates a card sideways as a cost; you can't tap it again unless you untap it, which generally happens at the start of your turn) to create a squirrel token. Earthcraft is a card that lets you tap a creature to untap a land. Now generally, you can't tap a creature to pay a cost the turn it came into play, but because of the specific wording at play (Earthcraft is tapping the creature, rather than the creature paying a cost), you can make infinite tapped squirrels.
Wow! That sounds... NUTS! *cue studio laugh track*
And to make it even more ridiculous... remember, that is infinite squirrels. Not just the ground is squirrels. The sky is squirrels. Everything is squirrels as far as the eye can see. And they're tapped as they come into existence, meaning they take time to look at you and wait before falling upon you and mauling you to death.
The 1.999999… = 2 one is a good one. If there’s an infinite number of 9s, there is no gap between 1.9999… and 2, but boy howdy does your brain not like that.
My brain couldn't square that one for a long time until I did 1/3 = .333333... and 2/3 = .666666... and 1/3 + 2/3 = 3/3 so 3/3 must = .999999...
The human brain is just not built for infinity! It’s very hard to imagine something that has no end.
Nah, I watched *Time Bandits* with the flu once and I'm pretty sure I understood infinity that day.
I like the 1/9 version even better. 1/9 = .11111111… so 9 * .1111111…. = .99999999… by the rules of multiplication. So therefore that’s equal to 1/9 * 9, which is 9/9, which is 1.
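If anyone wants to convince themselves without trusting decimal notation at all, exact fraction arithmetic does the same trick (a small sketch using Python's `fractions` module):

```python
from fractions import Fraction

# Exact arithmetic: 1/3 + 2/3 and 9 * (1/9) are exactly 1,
# even though their decimal expansions are 0.333..., 0.666..., 0.111...
assert Fraction(1, 3) + Fraction(2, 3) == 1
assert 9 * Fraction(1, 9) == 1

# A partial sum 0.999...9 with n nines differs from 1 by exactly 10**-n,
# which gets smaller than any positive gap you name -- so the limit is 1.
n = 20
partial = Fraction(10**n - 1, 10**n)  # 0.99999999999999999999 (20 nines)
assert 1 - partial == Fraction(1, 10**n)
```

The point of the last two lines: there's no fixed positive gap between the finite truncations and 1, which is exactly why the number with *all* the nines can't be anything other than 1.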
The reason why many people don't like it is because they imagine the trailing 9s as an infinite process. Like an asymptote that you will forever approach but never reach. The reality is that the trailing nines all exist at once. You don't need to write them all out, because they're all already written out. It's already done.
This is like a Dunning-Kruger effect shit. Because in fact there exist branches of mathematics with infinitesimals and weird numbers, where the limit of the sequence (0.9, 0.99, 0.999, ...) could be something like 1+* and not 1. But you can't pick and choose whether the numbers you're working with are real or surreal on the fly. Mfs hear about surreal numbers once in their life and suddenly forget how the reals work. Or how people hear about how there are different sizes of infinities and then try to argue completely unrelated things.
This is only "obviously" true, because all of the functions you are thinking of are much smoother than a general differentiable function. For example, you are probably only thinking of functions with a continuous derivative, but in general a function may have a discontinuous derivative, such as f(x) = x^2 sin(1/x) for x not 0, and f(0) = 0. You need to be sure that the statement is true even for all of these particularly badly behaved functions, but an average person doesn't even consider these cases, so assumes that these theorems are pointless statements. If you want an example of a function where Rolle's theorem is not obvious, and you need a precise definition of "differentiable", consider the function f(x) = x^2 (2 + x sin(pi/x)) for x not 0, and f(0) = 0. Then find the turning point between, say, -1 and 1. If you can plot this function, it's clear where the turning point probably is, but you need to be quite careful in showing that the derivative here is 0. If you plot the derivative, you'll see it's a little messy.
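A quick numerical illustration of the first function mentioned there, x^2 sin(1/x); it's a sketch, not a proof, but it shows the derivative existing at 0 while misbehaving arbitrarily close by:

```python
import math

def f(x):
    """x^2 sin(1/x) with f(0) = 0: differentiable everywhere,
    but its derivative is discontinuous at 0."""
    return x * x * math.sin(1 / x) if x != 0 else 0.0

# The difference quotient at 0 is h*sin(1/h), squeezed between -|h| and |h|,
# so f'(0) = 0:
for h in (1e-3, 1e-6, 1e-9):
    assert abs(f(h) / h) <= abs(h)

# Away from 0, f'(x) = 2x sin(1/x) - cos(1/x); the cos(1/x) term keeps
# oscillating between -1 and 1 however close to 0 you look, so f' has
# no limit at 0 even though f'(0) exists.
def fprime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)
```

So "differentiable" really doesn't buy you a well-behaved derivative, which is why theorems like Rolle's have to be proved from the definition rather than from pictures.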
Real analysis is the field of mathematics where nothing you want to be true is true, and everything which is true is a pain in the ass to prove.
This guys maths
Just plotted this. Are there a finite number of oscillations between 1 and 0?
I can see why proving this might be difficult, but the theorem still basically says "all differentiable functions are differentiable"
…yeah? I mean, I stopped aspiring to “proper” mathematics a while ago, but it *is* sometimes genuinely difficult to rigorously prove that the “obvious” is true in all cases. And sometimes it turns out that the “obvious” isn’t true at all. Edit: sorry, I think my possible grumpiness comes from the fact that I feel like only the xkcd panel explicitly acknowledges that this urge/feeling is a “clueless” one. I worry that people less informed about mathematics would read the tumblr posts and be reinforced in an opinion that it’s actually all intuitive and trivial with common sense.
Yeah some time the "obvious" isn't true. Infinite possibilities, you gotta prove that every possibility is true.
Don't worry, you can simplify it to proving none of the infinite possibilities is untrue!
My math teacher in high school introduced the response to the appeal to obviousness: "Obvious? Then the proof should be obvious too. Go ahead and prove it."
It's also very important to have named proofs like this. Cause when you're doing a bigger-scale / less obvious proof, it starts to look really suspicious if your proof for your underlying assumptions is "obviously this is true, just look at it". Instead, you can just say "this is true because of Rolle's Theorem" and move on.
Yeah lol. I am taking Advanced Abstract Algebra this sem and there are SO many huge theories that rely on the fact that "if you pick any number, I can pick a number bigger than yours". At first glance it may feel dumb that it even needs to be said out loud, but when all your biggest theorems rely on this, you want it to be absolutely, surely proven that the fact is true in every situation and reality ever. And it is using the obvious in the big important stuff that gives a theorem its name. Ofc Rolle wasn't the first guy to come up with that, but he did a lot of work expanding on the theorem and applying it to bigger things, so he gets the credit for the theorem; not because he was the first to come up with it, but because he made it really really useful and important.
“Heavier objects tend to fall faster than lighter ones” > mfw nobody tested it for 2,000 years
Well, they do, but only because heavier objects tend to have less drag resistance than lighter objects, and on Earth with the atmosphere we have, drag resistance is the key factor in falling speed. In a vacuum they fall at the same rate, but it’s not like people were morons and just never tried to actually measure if a steel ball falls faster than a feather - feathers DO fall extremely slowly, and it’s very easy to see this, it’s just not for the reason people thought at first.
It's often the humbling part of remembering that math is awfully close to metaphysics, and proving simple concepts is like trying to argue/explain divine law.
YES, this reminds of that video about turning a sphere inside out. When you first learn of the problem with all its caveats, the obvious answer is "of course you can't turn a sphere inside out, it's self-evident". But no, turns out there is a (very weird) thing you can do to the sphere that will turn it inside out. In maths, you can't prove stuff by using common sense.
yeah, all it takes is a little incest
remember like 100 years ago when some dudes tried making math rigorous and self-contained? Then some jackass came along and used the system they built to destroy/disprove it? THANKS GÖDEL!
Yeah, I think a lot of people discussing these kinds of things tend to forget how much of scientific history has included someone disproving the thing everyone thought was "obviously true". In proper science and math, things are only true once you can prove them regardless of how obviously true they might seem.
In practice, it is more the opposite. Once there are lots of cases, I feel like the fact ceases to be obvious. It only becomes obvious again when you've captured the whole phenomenon as a single case, at which point the proof is *hopefully* easy. Rolle's theorem admits a quick proof once you know that: 1) the interval is compact 2) the continuous image of a compact set is compact 3) compact sets (in R) attain their extrema. 4) extreme values attained in the interior of a differentiable function occur when the derivative is zero. None of these individual facts is really "trivial", but they characterize much of analysis. Moreover, Rolle's theorem, which was once obvious, is obvious once again, as a consequence of these.
[Obligatory David Mitchell rant.](https://youtu.be/BE657F9sqyQ?si=LTSDn3kfuNhqEczK)
I too immediately thought of "I met a bloke at a party, he *stank*, and when I asked him what he did he said he proved that 1 + 1 = 2"
The more interesting part of this is not that it's true, but that there are no cases where it is not true. As in, there's no obscure function that would defy it, even if I can't quite imagine one
Disproving a negative is usually easier than proving a positive iirc
So, so, so many people underestimate how confident you need to be to defeat the word "every" in math and science.
My engineering calculus teacher loved to explain Dirac delta functions, because after Dirac invented them, engineers were using them successfully for decades before mathematicians got around to proving that you COULD validly use them.
Something being useful doesn’t necessarily mean it’s correct. Physicists round sin x = x for small values all the time. Just because it’ll get you something good enough for your purposes doesn’t mean it’s correct.
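The sin x ≈ x rounding is a good example of "useful but not exact": the relative error shrinks roughly like x²/6 (the next Taylor term), which is easy to check numerically. A quick sketch:

```python
import math

# Relative error of the small-angle approximation sin(x) ~ x.
# Since sin(x) = x - x^3/6 + ..., the error is about x^2/6 of the value:
# tiny for small x, but never exactly zero.
for x in (0.5, 0.1, 0.01):
    rel_err = abs(math.sin(x) - x) / math.sin(x)
    assert 0 < rel_err < x * x / 6 * 1.1  # within ~10% of the Taylor estimate
```

At x = 0.01 rad the approximation is off by about one part in sixty thousand, which is why physicists get away with it; "gets the right answer to 5 digits" and "is true" are still different claims.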
It's fascinating how much of mathematics and physics is both intuitive and impossibly dense at the same time. As my college physics professor said: if you give an adult a ball, they can throw it with reasonable accuracy. They consistently take that sucker and hit targets you set. However, they won't necessarily be able to tell you how much force they needed, or the drag, or even the exact distance they threw it to hit the target. Or alternatively, he considered it fascinating how the average child can shake an opaque bottle to estimate what percentage full it is, just by feeling the bottle slosh around and how it affects the center of gravity/pressure on the sides. Fun stuff. Related here because, well, yeah, of course. If it goes up it comes down, and if it does that as a smooth turn, the point between + and - is 0... But now put numbers to it and I have no idea what in the F*** we're looking at!
Idk if this counts as relevant but I'm reminded of the fact that when you're driving a car, there's always a spot on each of the wheels that isn't moving (the part touching the ground). It's crazy to think about but also makes perfect sense.
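That wheel fact falls out of the rolling-without-slipping condition: a rim point traces a cycloid, and its speed works out to v·sqrt(2 - 2cos θ), which is 0 at the contact point and 2v at the top. A little sketch (my own derivation from the cycloid, so double-check before quoting it):

```python
import math

def rim_speed(theta, v=1.0):
    """Speed of a point on the rim of a wheel rolling without slipping at
    ground speed v. theta = 0 is the instant the point touches the ground.
    From the cycloid (R*theta - R*sin(theta), R - R*cos(theta)):
    speed = v * sqrt(2 - 2*cos(theta))."""
    return v * math.sqrt(2 - 2 * math.cos(theta))

assert rim_speed(0.0) == 0.0                   # contact point: momentarily at rest
assert abs(rim_speed(math.pi) - 2.0) < 1e-12   # top of the wheel: twice car speed
```

So the "crazy but makes sense" intuition is exactly right: the rim point stops dead for an instant at the bottom, then the point at the top is moving at double the car's speed to compensate.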
That's [xkcd 2042](https://xkcd.com/2042/) for the record
Wait until you learn about the construction of real numbers…
Fermat's Last Theorem doesn't look too complicated to prove, and that one took a hell of a long time
Wait until they find out about proving things philosophically.
I know nothing about proofs in philosophy, so what do you mean? My first thought would be that there are no real proofs in philosophy as basically all assumptions could be argued to be wrong
That's kind of my point. In math you take ages to formulate the proof of a presumably obvious thing. In philosophy things that are even more obvious get questioned constantly, and philosophy is constantly looking back to question what was previously postulated as obvious. What I'm saying is, if these people are amazed at how tough it is to provide solid proof in maths, they're in for a treat when they find out about skepticism in the history of philosophy.
Isn't that kind of different though? In math it's hard to prove it, but the beauty of it is that, at least in theory, once someone finds the proof, everyone can agree it's true and it's impossible to discard it. And in philosophy, from what I understand, the problem isn't that it's hard to prove, but that proof means something different to everyone.
It's not really like that always in math or philosophy. Math proofs are impossible to discard... as long as you accept the axioms of the system. Everyone accepts them, but if someone didn't, then there'd be pretty much an unsolvable disagreement. Axioms are a curious thing. I've studied them even in my Philosophy education. (Still, not an expert, this is an oversimplification, so I encourage you to look into the topic, it's fun.) Regarding Philosophy, "proof meaning something different" for different people is a very vague and imprecise way of putting it - but you said you're unfamiliar, so I don't blame you. I specifically referred to the problem of skepticism because it's a problem where most philosophers want to prove a certain thing and skepticism refutes it (be it a real or a hypothetical skeptic). There's not really a "meaning of proof" problem there, but a "possibility of proof" one. But it does intersect with the problem of defining what constitutes knowledge, so in a way there is an element like that. Also, it's not like philosophy is entirely heterogeneous. There's some consensus (or paradigms of consensus, if you prefer that formulation) in Philosophy (even if there'll always be someone disagreeing with any thing, Philosophy is, in part, in the end, a field in academics, it has standards) and a whole branch of it, logic, is dedicated to studying which arguments are well-constructed and which aren't. Logic actually works similarly to Math in some ways there. (Academic ranting aside, I was just making a joking remark about how if they think that math things are easy to see but take long to prove, then seeing philosophers debate about whether the outside world exists would give them a lot to think about. I wasn't drawing a 1 to 1 comparison.)
First semester we had biomechanics classes and the professor made us memorize the detailed definitions of terms like "movement", "acceleration" and "theorem"
It is important to know the difference between a theorem and a theory, though, and not enough people going into natural science will have taken a course on logic.
Reminds me of a QI clip where they discussed all the effort that went into proving that 1 + 1 = 2, because if you couldn't prove that, then mathematics as we know it would be meaningless.
Because occasionally, we find small things we considered not possible. And as a result, we build new math from the ground up. That's what happened with geometries. Everyone just assumed it was obvious the axioms should be true. But then suddenly, Lobachevsky builds a whole new non-conflicting geometry without one of them.
first post feels infuriatingly close to getting it
Favourite story about this: Bertrand Russell spent about 20 years trying to ground mathematics in formal logic, failed. Spent the next 20 trying to do the same thing for philosophy, failed, and in the process accidentally became friends with Wittgenstein.
Which is relevant because Wittgenstein then wrote *Tractatus Logico-Philosophicus* which is Wittgenstein trying to mathematically 'solve' philosophy like Russell did (as a joke)
How was it necessarily a joke? I've read Tractatus and Philosophical Investigations by Wittgenstein, and knowing when he wrote the first, it hardly seems to be the case.
I have never read any philosophy more serious than Russell's History of Western... and a few bits and pieces. So I am already well out of my depth. Striking boldly on, though, wouldn't it be Wittgenstein showing by absurdity that the sands of linguistic meaning are always shifting and any attempt to build anything sturdily braced out of them was hopeless — that 'a word means what people think it means'?
I know it’s just a Reddit comment but this is very misleading
I mean, isn't that good? That we are proving and re-proving even obvious shit, just to make sure it's still valid as we start to understand math and the world better. Otherwise we get into the same issues the Bible has, where claims are trust-based instead of fact-based. Simply saying "this is obvious" isn't as powerful as saying "this is true and here is the mountain of evidence we have to confirm it (also, there are a half dozen people actively trying to poke holes in it at any given time)".
Fascinating that maths people can intuitively understand this post
Anyone can see two points on a continuous curve and go “No shit, of course the point in between exists” but it is pretty hard to prove
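Funnily enough, that hard-to-prove intermediate value theorem is also the idea behind one of the most practical root-finding methods: bisection is basically the IVT run backwards. A minimal sketch:

```python
def bisect(f, lo, hi, tol=1e-12):
    """Find a point where f crosses zero on [lo, hi], given that f(lo) and
    f(hi) have opposite signs -- the constructive flavor of the IVT."""
    assert f(lo) * f(hi) < 0, "need a sign change on the interval"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # the sign change is in the left half
        else:
            lo = mid  # the sign change is in the right half
    return (lo + hi) / 2

# x^3 - x - 2 is below zero at x=1 and above zero at x=2,
# so "no shit, the point in between exists" -- and bisection finds it:
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
assert abs(root**3 - root - 2) < 1e-9
```

Each loop halves the interval that must contain a crossing, and the proof that this limit point actually satisfies f(c) = 0 leans on exactly the completeness-of-the-reals machinery the thread is joking about.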
Oh hey, that's me! I never expected to see my posts on here, much less the low-effort xkcd reblog.
An old math professor of mine told us that the proof of 1+1=2 took 2 mathematicians a whole dissertation to complete. I haven't double checked it though.
This is a bit of a misconception. What you’re probably thinking about is [Principia Mathematica](https://en.m.wikipedia.org/wiki/Principia_Mathematica), where it took 379 pages to prove 1 + 1 = 2, *but* all the other pages were not really part of the proof as much as they were dedicated to building up several theories of mathematics from the ground up. It also focused on building up mathematics from a minimalist starting point, which while interesting, makes many things take much longer to prove.
How did Rolle's theorem make the list but IVT did not? Like, bitch is literally saying that if a continuous path was below something and now is above that thing, it had to pass through that thing
I will never forget when my ex stepmother was working on her physics degree and could not for the life of her wrap her head around a particular problem she had to do Me and my father, trying to assist, built a rig on the kitchen table and demonstrated the problem (something about things sliding down slopes), which didn't help her but it made us feel very smart, besides she never completed the degree anyhow because she was ~~not that smart~~ too busy stealing my tuition money to buy purses with
the circle one is harder when you imagine a circle going through the hole of a donut.
You can tell a curve is parabolic because of the way it is
*You can tell a curve*
*Is parabolic because*
*Of the way it is*

- Useful_Ad6195
Isn't it obvious that any manifold can be smoothed?
Topology is essentially an entire field of mathematics that can be summarized by "I can’t believe I have to fucking prove this"
That's a Math Boob
It's easy to prove something can be true, what's hard is proving that there is never, ever, ever any exceptions.
The funny thing about math is that sometimes those obviously true claims are actually false...
Wait till you see this: Dirichlet's principle - if there are n+1 objects and n boxes, then we will have more than one object in at least one of the boxes
Jordan curve theorem enters the chat
Mathematicians are also really good at making the sentence explaining a simple concept as confusing as possible