I have a degree in math and a degree in cs. I fucking love nonsense.

  • 0 Posts
  • 33 Comments
Joined 3 years ago
Cake day: June 14th, 2023

  • but it’s not really useful to describe reality.

    This is just not true.

    What topology does for people practically, is it allows them to do a rough kind of geometric reasoning in a wide variety of cases. Further, the geometric notions defined via topology subsume many of the more intuitive notions you might already know of from the number line or the plane.

    For example, continuity of functions, convergence of sequences, interiors and boundaries of sets, connectedness and many other things are inherently topological notions that any person who has taken a typical calculus sequence should have some intuitive idea of.

    One of the biggest differences between actual pure topology and analysis is that analysis is done in the context of really nice topological spaces called metric spaces, in which notions of distance are available.

    Any time people are using results of calculus in the sciences, under the hood they are using details about topology on R^n.


  • I can say “sin(1/x) is a continuous function on (0,1] but its graph is not path connected”, which is more formal, but likely would not mean anything to most readers. In that sense, I guess I have also lied :)

    It’s also false. Take any pair of points on the graph of sin(1/x) using the domain (0,1] that you just gave. Then we can write these points in the form (a,sin(1/a)), (b,sin(1/b)) such that 0 < a < b without loss of generality. The map f(t)=(t,sin(1/t)) on [a,b] is a path connecting these two points. This shows the graph of sin(1/x) on (0,1] is path connected.
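    If you want to sanity-check this numerically, here is a quick Python sketch of the path (the endpoints a, b are arbitrary choices for illustration):

```python
import math

def f(t):
    """The path t -> (t, sin(1/t)) along the graph of sin(1/x)."""
    return (t, math.sin(1 / t))

a, b = 0.01, 1.0  # any pair with 0 < a < b works
start, end = f(a), f(b)

# Every sampled point of the path lies exactly on the graph of sin(1/x):
for k in range(1000):
    t = a + (b - a) * k / 999
    x, y = f(t)
    assert abs(y - math.sin(1 / x)) < 1e-12
```

    Since f is continuous on [a,b] (it is built from continuous pieces and 1/t is fine away from 0), it is a path connecting the two chosen points on the graph.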

    This same trick will work if you apply it to the graph of ANY continuous map from a connected subset of R into R. This is what my graph example was getting at.

    The “topologist's sine curve” example you see in point-set topology as an example of a connected but not path connected space involves taking the graph you just gave and including points from its closure as well.

    Think about the closure of your sin(1/x) graph here. As you travel towards the origin along the topologist's sine curve you get arbitrarily close to each point along the y-axis between -1 and 1 infinitely often. Why? Take a horizontal line through any such point and look at the intersections between your horizontal line and your y=sin(1/x) curve. You can make a limit point argument from this fact that the closure of sin(1/x)'s graph is the graph of sin(1/x) unioned with the portion of the y-axis from -1 to 1 (inclusive).
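    You can see those intersections concretely: for any c in [-1,1] the equation sin(1/x) = c has the solutions x_k = 1/(arcsin(c) + 2*pi*k), and these x-values shrink toward 0. A quick Python sketch (c = 0.5 is an arbitrary choice):

```python
import math

c = 0.5  # any value in [-1, 1] works here
xs = []
for k in range(1, 6):
    x_k = 1 / (math.asin(c) + 2 * math.pi * k)
    # (x_k, c) lies on the curve y = sin(1/x):
    assert abs(math.sin(1 / x_k) - c) < 1e-12
    xs.append(x_k)

# The solutions accumulate at the y-axis:
assert xs == sorted(xs, reverse=True) and xs[-1] < xs[0]
```

    So every point (0, c) with c in [-1,1] is a limit point of the graph, which is the limit point argument mentioned above.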

    Path connectedness fails because there is no path from any one of the closure points you just added to the rest of the curve (for example between the origin and the far right endpoint of the curve).

    A better explanation of the details here is in the connectedness/compactness chapter of Munkres' Topology textbook; it is Example 7 in Ch. 3, Sec. 24 (p. 157 in my copy).

    However, I like to push back on the assumption that, in the context of teaching continuous functions, the underlying space needs to be bounded: one of the first continuous functions a student would encounter is the identity function on the reals, which has both an infinite domain and range.

    This is fine. I stated boundedness as an additional assumption one might require for pragmatic reasons. It’s not mandatory. But it’s easy to imagine somebody trying to be clever and pointing out that if we allow the domain or range to be unbounded we still have problems. For example you literally cannot draw the identity function in full. The identity map extends infinitely along y=x in both directions. You don’t have the paper, drawing utensils or lifespan required to actually draw this.


  • More impressively, you can have a function that is continuous, but you cannot find a connected path on it (i.e. it is not path connected). In plain words, if anyone told you “a function is continuous when you can draw it without lifting your pen”, they have lied to you.

    You are misrepresenting an analogy as a lie. Besides that, in the context where the claim is typically made, the analogy is still pretty reasonable and your example is just plain wrong.

    People are talking about continuous maps from subsets of R into R with this analogy basically always (i.e., during a typical calc 1 or precalc class). The only real issue is the domain requirements in such a context. You need connectedness in the domain or else you're just always forced into lifting your pen.

    There are a couple other requirements you could add as well. You might also want to avoid unbounded domains since you can’t physically draw an infinitely long curve. Likewise you might want to avoid open endpoints or else things like 1/x on (0,1] become a similar kind of problem. But this is all trivial to avoid by saying “on a closed and bounded interval” and the analogy is still fairly reasonable without them so long as you keep the connectedness requirement.
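    The 1/x problem is easy to see numerically; a tiny sketch of why the open endpoint is an issue:

```python
# 1/x is continuous on (0,1], but its values blow up near the open
# endpoint 0, so no finite drawing can capture the whole graph.
values = [1 / x for x in (1e-3, 1e-6, 1e-9)]
assert values[2] > values[1] > values[0] > 100  # grows without bound as x -> 0+
```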

    For why your example is just wrong in such a context, say we’re only dealing with continuous maps on a connected subset of R into R. Recall the connected sets in R are just intervals. Recall the graph of a function f with domain X is the set {(x,f(x)) : x is in X}. Do you see why the graph of such a function is always path connected? Hint: Pick any pair of points on this graph. Do you see what path connects those two points?

    Once you want to talk about continuous maps between more general topological spaces, things become more complicated. But that is not within the context in which this analogy is made.







  • If you subscribe to classical logic (i.e., propositional or first order logic) this is not true. Proof by contradiction is one of the more common classical logic inference rules that lets you prove negated statements and more specifically can be used to prove nonexistence statements in the first order case. People go so far as to call the proof by contradiction rule “not-introduction” because it allows you to prove negated things.

    Here’s a wiki page that also disagrees and talks more specifically about this “principle”: source (note the seven separate sources on various logicians/philosophers rejecting this “principle” as well).

    If you’re talking about some other system of logic or some particular existential claim (e.g. existence of god or something else), then I’ve got no clue. But this is definitely not a rule of classical logic.
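    As a toy illustration of not-introduction, here is a sketch in Lean 4 syntax (assuming the core lemma `Nat.not_lt_zero`): to prove a nonexistence claim, assume a witness exists and derive a contradiction.

```lean
-- To prove ¬∃ n, n < 0 over the naturals, assume a witness ⟨n, h⟩
-- exists and derive a contradiction from h : n < 0.
example : ¬ ∃ n : Nat, n < 0 :=
  fun ⟨n, h⟩ => Nat.not_lt_zero n h
```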








  • Operating System Concepts by Silberschatz, Galvin and Gagne is a classic OS textbook. Andrew Tanenbaum has some OS books too. I really liked his OS Design and Implementation book but I’m pretty sure that one is super outdated by now. I have not read his newer one but it is called Modern Operating Systems iirc.


  • myslsl@lemmy.world to Science Memes@mander.xyz · I just cited myself. · edited 2 years ago

    i has a nice real-world analogue in the form of rotation by pi/2 about the origin (though this depends a little bit on what you mean by “real-world analogue”).

    Since i = exp(i pi/2), if you take any complex number z and write it in polar form z = r exp(it), then multiplication by i yields a rotation of z by pi/2 about the origin, because zi = r exp(it) exp(i pi/2) = r exp(i(t + pi/2)) by the rules of exponents for complex numbers.

    More generally, since any pair of complex numbers z, w can be written in polar form z = r exp(it), w = u exp(iv), we have wz = (ru) exp(i(t+v)). This shows multiplication of a complex number z by any other complex number w can be thought of in terms of rotating z by the angle that w makes with the x-axis (i.e. the angle v) and then scaling the resulting number by the magnitude of w (i.e. the number u).
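    You can check both facts directly with Python's built-in complex numbers (z and w below are arbitrary choices for illustration):

```python
import cmath

z = 3 + 4j
# Multiplying by i = exp(i*pi/2) rotates z by pi/2 about the origin:
rotated = z * 1j
assert rotated == -4 + 3j
assert abs(rotated - z * cmath.exp(1j * cmath.pi / 2)) < 1e-12

# More generally, multiplying by w = u*exp(iv) rotates z by v and scales by u:
w = 2 * cmath.exp(1j * cmath.pi / 3)  # u = 2, v = pi/3
product = w * z
assert abs(abs(product) - abs(w) * abs(z)) < 1e-12          # magnitudes multiply
assert abs(cmath.phase(product)
           - (cmath.phase(w) + cmath.phase(z))) < 1e-12     # angles add
```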

    Alternatively you can get similar conclusions by De Moivre's theorem if you do not like complex exponentials.


  • myslsl@lemmy.world to Science Memes@mander.xyz · I just cited myself. · edited 2 years ago

    They don’t eventually become 1. Their limit is 1 but none of the terms themselves are 1.

    A sequence, its terms and its limit (if it has one) are all different things. The notation 0.999… represents a limit of a particular sequence, not the sequence itself nor the individual terms of the sequence.

    For example the sequence 1, 1/2, 1/3, 1/4, … has terms that get closer and closer to 0, but no term of this sequence is 0 itself.

    If you plot the sequence I just mentioned above and connect each dot, you will get the graph of y = 1/x (ignoring the portion to the left of x = 1).

    As you go further and further out along this graph in the positive x direction, the curve that is shown gets closer and closer to the x-axis (where y=0). In a sense the curve is approaching the value y=0. For this curve we could certainly use wordings like “the value the curve approaches” and it would be pretty clear to me and you that we don’t mean the values of the curve itself. This is the kind of intuition that we are trying to formalize when we talk about limits (though this example is with a curve rather than a sequence).

    Our sequence 0.9, 0.99, 0.999, … is increasing towards 1 in a similar manner. The notation 0.999… essentially represents the (limit) value this sequence is increasing towards, rather than any individual term of the sequence.
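    You can verify the "increasing towards 1 but never reaching it" claim exactly using Python's fractions module (exact rational arithmetic, so no floating point fuzz):

```python
from fractions import Fraction

# The terms 0.9, 0.99, 0.999, ... each fall short of 1,
# but the gap to 1 is exactly 1/10^n and shrinks to 0.
term = Fraction(0)
for n in range(1, 21):
    term += Fraction(9, 10**n)
    assert term < 1                          # no term ever equals 1
    assert 1 - term == Fraction(1, 10**n)    # the gap after n digits
```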

    I have been trying to dodge the actual formal definition of the limit of a sequence this whole time since it’s sort of technical. If you want you can check it out here though (note that implicitly in this link the sequence terms and limit values should all be real numbers).



  • My degree is in mathematics. This is not how these notations are usually defined rigorously.

    The most common way to do it starts from sequences of real numbers, then limits of sequences, then sequences of partial sums, then finally these notations turn out to just represent a special kind of limit of a sequence of partial sums.

    If you want a bunch of details on this read further:

    A sequence of real numbers can be thought of as an ordered nonterminating list of real numbers. For example: 1, 2, 3, … or 1/2, 1/3, 1/4, … or pi, 2, sqrt(2), 1000, 543212345, … or -1, 1, -1, 1, … Formally a sequence of real numbers is a function from the natural numbers to the real numbers.

    A sequence of partial sums is just a sequence whose terms are defined via finite sums. For example: 1, 1+2, 1+2+3, … or 1/2, 1/2 + 1/4, 1/2 + 1/4 + 1/8, … or 1, 1 + 1/2, 1 + 1/2 + 1/3, … (do you see the pattern for each of these?)

    The notion of a limit is sort of technical and can be found rigorously in any calculus book (such as Stewart’s Calculus) or any real analysis book (such as Rudin’s Principles of Mathematical Analysis) or many places online (such as Paul’s Online Math Notes). The main idea though is that sometimes sequences approximate certain values arbitrarily well. For example the sequence 1, 1/2, 1/3, 1/4, … gets as close to 0 as you like. Notice that no term of this sequence is actually 0. As another example notice the terms of the sequence 9/10, 9/10 + 9/100, 9/10 + 9/100 + 9/1000, … approximate the value 1 (try it on a calculator).
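    A quick Python version of the "try it on a calculator" check for both examples above:

```python
# The terms 1/n get as close to 0 as you like, but no term is 0 itself:
terms = [1 / n for n in range(1, 10001)]
assert all(t != 0 for t in terms)
assert min(terms) < 1e-3  # within 0.001 of the limit 0 by n = 10000

# And the partial sums 9/10, 9/10 + 9/100, ... approximate the value 1:
s = 0.0
for n in range(1, 16):
    s += 9 / 10**n
assert abs(s - 1) < 1e-12
```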

    I want to stop here to make an important distinction. None of the above sequences are real numbers themselves because lists of numbers (or more formally functions from N to R) are not the same thing as individual real numbers.

    Continuing with the discussion of sequences approximating numbers, when a sequence, call it A, approximates some number L, we say “A converges”. If we want to also specify the particular number that A converges to we say “A converges to L”. We give the number L a special name called “the limit of the sequence A”.

    Notice in particular L is just some special real number. L may or may not be a term of A. We have several examples of sequences above with limits that are not themselves terms of the sequence. The sequence 0, 0, 0, … has as its limit the number 0 and every term of this sequence is also 0. The sequence 0, 1, 0, 0, … where only the second term is 1, has limit 0 and some but not all of its terms are 0.

    Suppose we define a sequence a1, a2, a3, … where each of the an numbers is one of the numbers from 0, 1, 2, 3, 4, 5, 6, 7, 8 or 9. It can be shown that any sequence of the form a1/10, a1/10 + a2/100, a1/10 + a2/100 + a3/1000, … converges (it is too technical for me to show this here but this is explained briefly in Rudin ch 1 or Hrbacek/Jech’s Introduction To Set Theory).

    As an example, if each of the an values is 1, our sequence of partial sums above simplifies to 0.1, 0.11, 0.111, … If the an sequence is 0, 2, 0, 2, …, our sequence of partial sums is 0.0, 0.02, 0.020, 0.0202, …
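    Here is a small sketch that builds these sequences of partial sums exactly (the helper `partial_sums` is just for illustration):

```python
from fractions import Fraction

def partial_sums(digits):
    """Partial sums a1/10, a1/10 + a2/100, ... for digits a1, a2, ..."""
    total = Fraction(0)
    out = []
    for n, a in enumerate(digits, start=1):
        total += Fraction(a, 10**n)
        out.append(total)
    return out

# The digit sequence 0, 2, 0, 2 gives 0.0, 0.02, 0.020, 0.0202:
sums = partial_sums([0, 2, 0, 2])
assert sums == [Fraction(0), Fraction(2, 100),
                Fraction(2, 100), Fraction(202, 10000)]
```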

    We define the notation 0 . a1 a2 a3 … to be the limit of the sequence of partial sums a1/10, a1/10 + a2/100, a1/10 + a2/100 + a3/1000, … where the an values are all chosen as mentioned above. This limit always exists as specified above also.

    In particular 0 . a1 a2 a3 … is just some number and it may or may not be distinct from any term in the sequence of sums we used to define it.

    When each of the an values is the same number it is possible to compute this sum explicitly. See here (where a=an, r=1/10 and subtract 1 if necessary to account for the given series having 1 as its first term).

    So by definition the particular case where each an is 9 gives us our definition for 0.999…
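    The closed form from the linked geometric series fact can be checked exactly in Python (a = 9, r = 1/10, summing a*r^n for n >= 1):

```python
from fractions import Fraction

a, r = Fraction(9), Fraction(1, 10)
# Geometric series: sum over n >= 1 of a*r^n equals a*r/(1-r) when |r| < 1.
limit = a * r / (1 - r)
assert limit == 1  # so 0.999... = 1
```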

    To recap: the value of 0.999… is essentially just whatever value the (simplified) sequence of partial sums 0.9, 0.99, 0.999, … converges to. This is not necessarily the value of any one particular term of the sequence. It is the value (informally) that the sequence is approximating. The value that the sequence 0.9, 0.99, 0.999, … is approximating can be proved to be 1. So 0.999… = 1, essentially by definition.