Sunday, February 12, 2006

A Mathematician's Scratchpad

I seem to be undergoing a bit of a blog diaspora at the moment.

The latest instance of this is that, thanks to the generosity of my friends David Jao and Mikael Johansson, A Mathematician's Scratchpad can now be found here, where I'll be using WordPress 2.0 and - most importantly - LaTeXRender. This way the formulae in my posts can actually be readable.

I've got a copy of the pressing down lemma post up at the moment, but it still needs a bit more editing. Converting this crappy ascii mathematics into LaTeX turns out to be a lot of work.

Saturday, February 11, 2006

The pressing down lemma

The pressing down lemma is a ridiculously useful technical lemma from combinatorial set theory. It's amazing how often you can just set up some machinery, apply the pressing down lemma, and poof. A solution pops out. I'm going to give a simplified version of the lemma and show two nice applications of it to general topology. There are various almost immediate generalisations of the version I'm going to give, but they require me to introduce more new ideas than I want to, and this version will suffice.

Let omega_1 be the first uncountable ordinal. A function f : omega_1 -> omega_1 is said to be regressive if for every x > 0 we have f(x) < x.

The pressing down lemma: Every regressive f : omega_1 -> omega_1 is constant on some unbounded set.

Proof:

Suppose for every t < omega_1 we had that f^(-1)( {t} ) was bounded. Let g(t) = sup f^(-1) ( {t} ). Note that if x != 0 and x in f^(-1)( {t} ) then t = f(x) < x <= g(t) (and g(0) >= 1, because f(1) = 0, so we have t < g(t) for every t).

We define a sequence x_n as follows. Let x_0 = 0. Let x_{n+1} = sup { g(t) : t <= x_n } (there are only countably many t <= x_n, so this set is bounded above).

Now, in particular we have x_{n+1} >= g(x_n) > x_n.

Now, let x = sup x_n. Since x >= x_1 >= g(0) >= 1 we have x > 0, so f(x) < x, and so f(x) < x_k for some k. But x <= g(f(x)) (as x is in f^(-1)( {f(x)} )), and f(x) <= x_k, so x_{k+1} >= g(f(x)) >= x, and thus x_{k+2} > x_{k+1} >= x, contradicting that x = sup x_n.

QED

Note that there's something special going on here. If we define f : omega -> omega by f(0) = 0 and f(n) = n - 1 for n > 0 then this is a regressive function, but every fibre f^(-1)( {t} ) has at most two elements, so f is not constant on any unbounded set. This is a genuinely new property of the uncountable case.
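A quick sketch in Python of why this f has no unbounded constant set (purely illustrative - we can of course only inspect an initial segment of omega):

```python
from collections import Counter

# The regressive function on omega: f(0) = 0, f(n) = n - 1 otherwise.
def f(n):
    return max(n - 1, 0)

N = 10_000  # inspect an initial segment of omega

# f is regressive: f(n) < n for every n > 0.
assert all(f(n) < n for n in range(1, N))

# Every fibre f^(-1)({t}) has at most two elements (the largest is
# f^(-1)({0}) = {0, 1}), so f is not constant on any infinite set.
fibres = Counter(f(n) for n in range(N))
assert max(fibres.values()) == 2
```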

Now, let's give an application of this:

Theorem: Give omega_1 the order topology and let f : omega_1 -> R be continuous. Then f is eventually constant (i.e. there is some t in omega_1 such that f is constant on [t, inf) ).

Proof:

Fix n. Since f is continuous, for every t > 0 there is some k < t such that whenever k < s <= t we have |f(s) - f(t)| < 2^(-(n+1)). Let g(t) be the least such k (and g(0) = 0). Now, g is regressive, so it's constant on some unbounded set.

i.e. we can find k < omega_1 and an unbounded set A such that for every x in A and every s with k < s <= x we have |f(s) - f(x)| < 2^(-(n+1)). Let k' = k + 1. For any s > k, because A is unbounded we can find x in A with x >= s and x >= k', and then |f(s) - f(k')| <= |f(s) - f(x)| + |f(k') - f(x)| < 2^(-(n+1)) + 2^(-(n+1)) = 2^(-n).

So, for every n we can find t_n such that for any s >= t_n we have |f(s) - f(t_n)| <= 2^(-n). Let t = sup t_n.

Now, we have from the above that f( [t, inf) ) has diameter <= 2^(-n+1) for every n, and so has diameter 0. i.e. it consists of a single point. Hence f is eventually constant.

QED

Neat, huh? There was essentially no work in there other than trying to figure out how to apply the pressing down lemma.

The next bit is a tiny bit more work, and won't apply the pressing down lemma directly. Instead we'll proceed via another cool and important lemma in combinatorial set theory.

A collection of sets F is said to be a delta system with root I if for every distinct A, B in F we have A cap B = I. Note that I is allowed to be empty.
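For finite families the definition is easy to check mechanically. A small illustrative Python helper (the name delta_system_root is my own, just for this sketch):

```python
from itertools import combinations

def delta_system_root(family):
    """Return the root if `family` is a delta system, else None."""
    family = [frozenset(s) for s in family]
    pairs = list(combinations(family, 2))
    if not pairs:
        return frozenset()  # degenerate: fewer than two sets
    roots = {a & b for a, b in pairs}
    return roots.pop() if len(roots) == 1 else None

# Every two distinct members meet in exactly {1}: a delta system.
assert delta_system_root([{1, 2}, {1, 3}, {1, 4}, {1, 5}]) == {1}
# Intersections differ ({1} vs {2} vs {3}): not a delta system.
assert delta_system_root([{1, 2}, {1, 3}, {2, 3}]) is None
# The root is allowed to be empty: pairwise disjoint sets qualify.
assert delta_system_root([{1}, {2}, {3}]) == frozenset()
```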

The delta system lemma:

Let F be an uncountable collection of finite sets. F contains an uncountable delta system.

Proof:

By passing to a subset we may assume that |F| = aleph_1. Further we then have that |Union F| = aleph_1, so by taking an appropriate bijection we may assume that Union F <= omega_1 and contains no finite ordinals.

Now, enumerate F as { U_t : t < omega_1 }

For each t < omega_1 define f(t) = max ( {0} union (U_t cap t ) ) (this set is finite, so has a maximum). Then f is regressive: every element of U_t cap t is an ordinal strictly less than t, and f(t) = 0 < t when the intersection is empty. So, by the pressing down lemma, f is constant on some unbounded set.

So, we have an ordinal s and an unbounded set A such that for every t in A we have max ( {0} union (U_t cap t) ) = s. In particular, for t in A, every element of U_t cap t is <= s, i.e. U_t cap t <= s + 1.

Now, s is countable, and for every t in A we have that U_t cap t is a finite subset of s + 1. There are only countably many finite subsets of s + 1, and A is uncountable, so for some finite R <= s + 1 we must have that U_t cap t = R for uncountably many t in A. Fix some such R and let A' be the set of t in A such that U_t cap t = R.

Now, by transfinite induction we pick an omega_1-sequence (x_t) such that x_t is in A' and x_t > sup Union_{s < t} ( U_{x_s} union {x_s} ) (this union is countable, so its sup is below omega_1, and A' is unbounded, so such an x_t exists). Let D be the image of this sequence. Now for s, t in D, if s < t we have that max U_s < t.

Now, let s, t in D. Say s < t. Then U_s cap U_t = (U_s cap t) cap U_t, because U_s <= t. Thus U_s cap U_t <= U_s cap (U_t cap t) = U_s cap R <= R. But we know that R <= U_s cap U_t, because R = U_s cap s <= U_s and R = U_t cap t <= U_t. Hence U_s cap U_t = R.

Hence { U_t : t in D } is our desired delta-system, and has root R.

QED

Phew. I have to admit, I don't think this proof is substantially easier than the proof without the pressing down lemma. However it's not any harder, and the use of the pressing down lemma clears away a lot of the ugly details.

So. Why do we care about these things? A number of reasons actually, but I don't know many of them. I'm going to use it to prove the following topological result.

First a definition. A topological space is said to be ccc (countable chain condition) if every pairwise disjoint collection of non-empty open sets is countable.

Why do we care about this? Well, it has important connections to the theory of Boolean algebras, and it's often a good way to test if something is separable - any separable space is ccc, and it's often easy to see if a space is not ccc. Seeing if it *is* ccc can sometimes be a bit more subtle, and that's something this theorem will help with. Additionally, it is just a special case of one of the many cardinal invariants we can use to study topological spaces, and it's one which is remarkably well behaved.

Theorem: Let { A_t : t in I } be a collection of topological spaces such that every finite product of them is ccc. Then Prod A_t is ccc. In particular a product of separable spaces is ccc.

Proof:

This is almost an immediate consequence of the delta system lemma. Suppose F were an uncountable collection of pairwise disjoint non-empty open subsets of A = Prod A_t. Then we can replace each element of F with some basis element contained in it. Each of these basis elements is a product of open sets U_t such that for all but finitely many t we have U_t = A_t. Let G be the collection of supports { t : U_t != A_t } for Prod U_t in F. (If uncountably many elements of F share the same support S, then restricting to the coordinates in S already gives an uncountable pairwise disjoint family in a finite product, and we're done; so we may assume G is an uncountable collection of finite sets.) So G contains a delta system. Say D <= G is an uncountable delta system with root R.

Now, R is some finite subset of I. Consider the restricted products prod_{t in R} U_t, for those Prod U_t in F whose support lies in D. Take two of them, coming from disjoint basis elements Prod U_t and Prod V_t. Since Prod U_t cap Prod V_t = Prod (U_t cap V_t) is empty, some coordinate U_t cap V_t must be empty. But for t not in R at least one of U_t, V_t is equal to A_t (the supports only meet inside R), so there U_t cap V_t is the other one, which is non-empty. Hence the empty coordinate lies in R, and the restricted products are pairwise disjoint. So we've got an uncountable collection of pairwise disjoint non-empty open subsets of prod_{t in R} A_t, contradicting the hypothesis that all finite products of the A_t are ccc.

The 'in particular' then follows from the fact that separable spaces are ccc, and finite products of separable spaces are separable.

QED
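The coordinate bookkeeping in this proof can be made concrete for finite supports. A small illustrative Python sketch (the encoding - a dict from coordinates to proper open subsets of a toy factor {0, 1, 2, 3} - is my own, just for this example):

```python
# Basis elements of a product, encoded as coord -> subset of that factor.
# A missing coordinate means "the whole factor" (here every factor is FACTOR).
FACTOR = frozenset(range(4))

def coord(u, t):
    return frozenset(u.get(t, FACTOR))

def support(u):
    return set(u)  # coordinates where u is a proper subset

def empty_coords(u, v):
    """Coordinates at which the intersection of two basis elements is empty."""
    return {t for t in support(u) | support(v) if not (coord(u, t) & coord(v, t))}

# Two disjoint basis elements whose supports meet in the root R = {0}.
u = {0: {1}, 1: {2, 3}}   # support {0, 1}
v = {0: {2}, 2: {0, 1}}   # support {0, 2}
R = support(u) & support(v)

witnesses = empty_coords(u, v)
assert witnesses           # u and v really are disjoint
# Outside the root one of the two coordinates is the whole factor, so
# the empty coordinate is forced into R -- the heart of the argument.
assert witnesses <= R
```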

Neat, huh?

As a quick application of this result, recall that every second countable compact Hausdorff space is the continuous image of {0, 1}^N - the Cantor set. This shows that we can't directly generalise that fact to larger weights, because the theorem proves that for any A we have that {0, 1}^A is ccc. Easy check: the continuous image of a ccc space is ccc. Thus, for example, omega_1 + 1 with the order topology is not the continuous image of {0, 1}^A for any A, because it is not ccc (this is easy to check: the singletons {t + 1} for t < omega_1 are pairwise disjoint open sets).

There *is* a generalisation of this result about the Cantor set, but that's a story for another day.

Sunday, February 05, 2006

Silly proofs 2

I swear this was supposed to be Silly proofs three, but obviously my memories of having done two silly proofs are misleading.

This proof isn't actually that silly. It's a proof of the L^2 version of the Fourier inversion theorem.

We start by noting the following important result:

Int_{-inf}^{inf} e^{itx} e^{-x^2/2} dx = sqrt{2 pi} e^{-t^2/2}

Thus if we let h_0 = e^{-x^2/2} then we have h_0^ = h_0 (where ^ denotes the Fourier transform, normalised by a factor of 1/sqrt{2 pi})

Let h_n(x) = (-1)^n e^{x^2/2} d^n/dx^n (e^{-x^2})

This satisfies:

h_n' - xh_n = -h_{n+1}

So taking the Fourier transform we get

ix h_n^ - i (h_n^)' = -h_{n+1}^

So, h_n and i^n h_n^ satisfy the same recurrence relation. Further h_0^ = h_0.

Hence we have that h_n^ = (-i)^n h_n
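The eigenvalue computation can be sanity-checked numerically. An illustrative sketch in Python/numpy, assuming the transform is normalised as f^(t) = (1/sqrt(2 pi)) Int e^{-itx} f(x) dx (the eigenvalues depend on this sign convention); a crude Riemann sum suffices because everything decays like a Gaussian:

```python
import numpy as np

# Grids: x for the integration, t for evaluating the transform.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
t = np.linspace(-3, 3, 121)

# The first few Hermite functions h_n = (-1)^n e^{x^2/2} d^n/dx^n e^{-x^2}.
h = [np.exp(-x**2 / 2),
     2 * x * np.exp(-x**2 / 2),
     (4 * x**2 - 2) * np.exp(-x**2 / 2)]
h_at_t = [np.exp(-t**2 / 2),
          2 * t * np.exp(-t**2 / 2),
          (4 * t**2 - 2) * np.exp(-t**2 / 2)]

# Normalised Fourier transform by Riemann sum:
# f^(t) = (1/sqrt(2 pi)) Int e^{-itx} f(x) dx.
kernel = np.exp(-1j * np.outer(t, x))
for n in range(3):
    transform = kernel @ h[n] * dx / np.sqrt(2 * np.pi)
    # Eigenfunction property: h_n^ = (-i)^n h_n.
    assert np.max(np.abs(transform - (-1j)**n * h_at_t[n])) < 1e-8
```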

Now, the functions h_n are orthogonal members of L^2, and so after normalisation they form an orthonormal basis for the closure of their span.

On this span we have that the map h -> h^ is a linear map with each h_n an eigenvector, with eigenvalue of modulus 1. Further h_n^^ = (-1)^n h_n. Thus the Fourier transform is a linear isometry from this space to itself.

Now, h_n is odd iff n is odd and even iff n is even, i.e. h_n(-x) = (-1)^n h_n(x)

Thus h_n^^(x) = (-1)^n h_n(x) = h_n(-x)

And hence h^^(x) = h(-x) for any h in the span. As both sides are continuous in h, it will thus suffice to show that the span of the h_n is dense.

Exercise: The span of the h_n is precisely the set of functions of the form p(x) e^{-x^2 / 2}

It will thus suffice to prove the following: Suppose f is in L^2 and Int x^n e^{-x^2 / 2} f(x) dx = 0 for every n. Then f = 0.

But this is just an easy application of the density of the polynomial functions in L^2[a, b] (pick a big enough interval so that the integral of |f(x)|^2 over that interval is within epsilon^2 of ||f||^2, and this shows that the integral of |f(x)|^2 over that interval is 0. Thus ||f||_2 < epsilon, which was arbitrary, hence ||f||_2 = 0).

I've dodged numerous details here, like how the L^2 Fourier transform is actually defined, but this really can be turned into a fully rigorous proof - nothing in this is wrong, just a little fudged. The problem as I see it is that - while the L^2 Fourier theory is very pretty and cool - this doesn't really convert well to a proof of the L^1 case, which is the more important one.

Friday, January 13, 2006

Stone duality rocks

I'm sorry for the title. Please don't hurt me. I have a condition.

Stone duality is a result that says (in a certain sense) that Boolean algebras are the same as zero-dimensional compact Hausdorff spaces. (It's not an equivalence of categories as I had previously claimed, as the resulting functor is contravariant, but it is an equivalence of categories if you replace one of them with its dual).

I'll assume you know a little bit of topology (enough to decipher 'compact Hausdorff' anyway), but I'll give a quick refresher on Boolean algebras.

One way to define them is purely ring theoretic. A Boolean algebra is a ring with 1 such that for every x we have x^2 = x.

These have various nice properties. In particular every element of a Boolean algebra is its own additive inverse, as 4x = 4x^2 = (2x)^2 = 2x, so 2x = 0. Also, multiplication is commutative as x + y = (x + y)^2 = x^2 + xy + yx + y^2 = x + xy + yx + y. So xy + yx = 0 and thus by the previous observation xy = yx.

Two important notes: Firstly, the only Boolean algebra which is a field is {0, 1} with the obvious operations. Secondly, the homomorphic images of Boolean algebras are Boolean algebras. So if B is a Boolean algebra and I is a maximal ideal then B/I is a field, and so {0, 1}. Thus for any x in B we have either x in I or 1 + x is in I.

The classic example of a Boolean algebra is P(X) for some set X, with the symmetric difference as + and intersection as multiplication. 0 is the empty set and 1 is X. In the finite case, every Boolean algebra is isomorphic to one of these. In the infinite case, something more interesting happens.
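In the finite case the ring axioms above (x^2 = x, 2x = 0, commutativity) can be checked exhaustively. An illustrative brute-force sketch in Python for X = {0, 1, 2}:

```python
# The Boolean algebra P(X): + is symmetric difference, * is intersection.
from itertools import combinations

X = frozenset({0, 1, 2})
elements = [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(sorted(X), r)]

add = lambda a, b: a ^ b   # symmetric difference
mul = lambda a, b: a & b   # intersection
zero = frozenset()

for a in elements:
    assert mul(a, a) == a         # x^2 = x: every element is idempotent
    assert add(a, a) == zero      # 2x = 0: every element is its own inverse
    for b in elements:
        assert mul(a, b) == mul(b, a)  # multiplication is commutative
```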

But first, a note. We can turn any Boolean algebra into a lattice (a partially ordered set in which every finite set has a sup and inf, or equivalently a certain type of algebraic structure) of a very special kind:

Define x <= y if xy = x. Then x^2 = x, so x <= x. If x <= y and y <= x then x = xy = yx = y, because the algebra is commutative. If x <= y and y <= z then xz = (xy)z = x(yz) = xy = x. So, it's a partial order. Note that 1 is the largest element of the order and 0 is the smallest.

I'm going to leave the rest of the details to check without proof:

sup{x, y} = x + y + xy
inf{x, y} = xy

We write sup{x, y} = x \/ y and inf{x, y} = x /\ y

These two operations are associative, commutative and idempotent. Further they distribute over each other.

Finally we have that for every x there is a unique y such that x \/ y = 1 and x /\ y = 0. (Specifically y = 1 + x). We write this x^c. Further, De Morgan's laws hold in that (x \/ y)^c = x^c /\ y^c and (x /\ y)^c = x^c \/ y^c. Further x <= y iff y^c <= x^c.

The first part is just saying that this forms a lattice. The second part is saying that this is a distributive lattice, and the third is that it is complemented.

In the power set case we have that \/ is union and <= is subset containment as one might expect.

Given any complemented distributive lattice we can get a Boolean algebra structure back out of it, by taking x + y = (x /\ y^c) \/ (y /\ x^c), the symmetric difference.
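The round trip from the lattice back to the ring can also be checked exhaustively in the power set case. An illustrative Python sketch for X = {0, 1, 2, 3}, also confirming the sup{x, y} = x + y + xy claim above:

```python
# In P(X): \/ is union, /\ is intersection, complement is relative to X.
from itertools import combinations

X = frozenset({0, 1, 2, 3})
elements = [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(sorted(X), r)]

comp = lambda a: X - a

for x in elements:
    for y in elements:
        # Recover ring addition as (x /\ y^c) \/ (y /\ x^c) ...
        recovered = (x & comp(y)) | (y & comp(x))
        assert recovered == x ^ y          # ... the symmetric difference
        # ... and check the lattice identity sup{x,y} = x + y + xy:
        assert (x | y) == (x ^ y) ^ (x & y)
```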

So, Boolean algebras are the same thing as complemented distributive lattices. This is nice, but it's not the main point here.

It does however give us one useful way of recharacterising things.

Theorem: I <= B is an ideal of B iff it contains 0, is closed under \/ and whenever y in I and x <= y we have x in I.

I'll leave this as an exercise, but it's not hard. Applying the complement operator to it gives us a dual definition:

F <= B is a filter iff it contains 1, is closed under /\ and whenever y in F and y <= x we have x in F.

Filters turn out to be a very useful equivalent way of looking at ideals.

Boolean algebras come up an awful lot in their own right. A lot of natural things in logic and set theory can be interpreted as Boolean algebras, and there are rich classes of examples of them. Most often they come up as lattices rather than algebras, but because these two are the same thing we can cheerfully switch between them as we wish.

The major example (which will lead us to Stone duality) is the following:

Let X be a topological space and let A be the set of clopen subsets of X. Then A is closed under union, intersection and complement and so forms a Boolean subalgebra of P(X). We call this the clopen algebra of X. In general this may not tell us an awful lot about X, but in the case where X is zero-dimensional (there is a basis of clopen sets) it will.

Stone duality is going to say the following:

1) For every Boolean algebra B there is a topological space which we will call the Stone space of B such that B is isomorphic to its clopen algebra.

2) Given a zero dimensional compact Hausdorff space X, X is homeomorphic to the Stone space of its clopen algebra.


So for every zero dimensional compact Hausdorff space there is a unique Boolean algebra, and vice versa. Further, both constructions are (contravariant) functorial.

I'm not going to prove this, but I will explain how it works and why it gives a zero dimensional compact Hausdorff space.

The Stone space of B, S(B), is the set of all homomorphisms from B into {0, 1}, given the subspace topology from {0, 1}^B. Equivalently it's the Zariski topology on the set of maximal ideals of B. This is compact because it's a closed subspace of the compact space {0, 1}^B, and zero-dimensional and Hausdorff because {0, 1}^B is, and both properties are preserved under restriction. The fact that it's functorial is obvious.
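For a finite Boolean algebra the Stone space can be computed outright. An illustrative brute-force sketch in Python for B = P({0, 1, 2}): the homomorphisms into {0, 1} turn out to be exactly the evaluations at points of X, so the Stone space of P(X) is just X with the discrete topology.

```python
# Enumerate all ring homomorphisms B -> {0, 1} (preserving +, *, and 1)
# for the finite Boolean algebra B = P(X), by brute force.
from itertools import combinations, product

X = frozenset({0, 1, 2})
B = [frozenset(c) for r in range(len(X) + 1)
     for c in combinations(sorted(X), r)]

homs = []
for values in product([0, 1], repeat=len(B)):
    h = dict(zip(B, values))
    if h[X] != 1:  # must send 1 to 1
        continue
    if all(h[a ^ b] == (h[a] + h[b]) % 2 and h[a & b] == h[a] * h[b]
           for a in B for b in B):
        homs.append(h)

# Exactly one homomorphism per point of X: h_x(U) = 1 iff x in U.
assert len(homs) == len(X)
points = {next(iter(U)) for h in homs
          for U in B if len(U) == 1 and h[U] == 1}
assert points == set(X)
```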

Sketch proof of why X is homeomorphic to the Stone space of its clopen algebra: let x in X. Then { U in clop(X) : x in U } is an ultrafilter on clop(X), so gives rise to a maximal ideal and hence a homomorphism into {0, 1}. You can check that the map sending x to this homomorphism is a homeomorphism.

The fact that it's a contravariant functor actually makes it more interesting, not less.

Here's why:

The Stone space of P(N) is the set of all maximal ideals of P(N). By complementing we may equivalently regard it as the set of all maximal filters on P(N). i.e. the set of all ultrafilters, which form the Stone-Cech compactification of N, beta N. It's an easy check to see that this gives the right topology as well. The Stone-Cech compactification is a very major area of study in topology, and Stone duality gives some very powerful tools for studying it.

Of more interest is the remainder, beta N \ N. Now, under Stone duality this is not a subalgebra of P(N) but a quotient of it. Specifically it is the quotient P(N) / Fin, where Fin is the ideal of finite subsets of P(N).

This Boolean algebra has some nice properties. Firstly, it is atomless. An atom in a Boolean algebra is an element a such that x <= a implies that x = a or x = 0. P(N) has lots of atoms (all the singletons), but we've quotiented out by the ideal generated by the atoms (this doesn't always succeed in getting rid of atoms, as it can introduce new ones, but in this case it does). A Boolean algebra being atomless is equivalent to its Stone space having no isolated points.

Most importantly, it has something called the strong countable separation property. If A, B are countable subsets of P(N)/Fin such that for any finite F, G with F <= A, G <= B we have sup F < inf G then there is an interpolating element x, i.e. one such that for a in A, b in B we have a < x < b. This is incredibly useful, because it lets us prove the following:

Let A, B be Boolean algebras such that B has the strong countable separation property. Let C <= A be a countable subalgebra. Let f : C -> B be an embedding, a in A, and define C' to be the algebra generated by C and a. Then we can extend f to an embedding f' : C' -> B.

(The proofs of both of these are a bit technical and a pain to write up in ascii, so I don't intend to include them here. The proofs for all of this will get written up properly at some later date).

Now, this lets us prove Cool Things.

Theorem:

Let A, B be Boolean algebras such that |A| = aleph_1 and B has the strong countable separation property. Then A embeds into B.

Proof:

You can probably do this via Zorn, but I find transfinite induction more natural here.

Enumerate A as {a_t : t < aleph_1}. We'll define a sequence of subalgebras A_t and embeddings f_t : A_t -> B such that if t < s then A_t <= A_s and f_s restricted to A_t is f_t, and Union A_t = A. Then the patching of these functions to A will give an embedding of A into B.

But doing this is easy. We take A_t to be the algebra generated by Union_{s < t} A_s union {a_t}. We have an embedding from the union of the previous subalgebras, so the extension lemma gives us an embedding of A_t. We pick one such embedding and call it f_t, continuing the induction.

QED

In particular this works with B = P(N)/Fin

This probably doesn't sound especially exciting at first glance. But lets plug it into Stone duality and see what we get:

Let X be a zero dimensional compact Hausdorff space of weight <= aleph_1 (the weight of a topological space is the smallest cardinality of a basis), then X is a continuous image of beta N \ N.

Ok, this is still interesting. But zero dimensional spaces are a bit special - there are certainly interesting examples of them, but a lot of the spaces which we want to study aren't zero dimensional. Fortunately, there's a useful theorem of point set topology we can apply.

Every compact Hausdorff space of weight k embeds as a subspace of [0, 1]^k, as a consequence of Urysohn's lemma. Recall that [0, 1] is a continuous image of {0, 1}^aleph_0. Hence, for infinite k, [0, 1]^k is a continuous image of {0, 1}^k. Thus any compact Hausdorff space of weight k is a continuous image of a closed subspace of {0, 1}^k, and so the continuous image of a zero dimensional compact Hausdorff space of weight k.

Ok, now it's interesting. We've suddenly got a huge wealth of continuous maps from beta N \ N. This is Parovichenko's first theorem:

Every compact Hausdorff space of weight <= aleph_1 is the continuous image of beta N \ N.

It gets better. We can use almost exactly the same proof to give something which will (combined with the continuum hypothesis) give a very powerful result:

Theorem: Let A, B be Boolean algebras with the strong countable separation property and |A| = |B| = aleph_1. Then A and B are isomorphic.

Proof:

The place where we use the continuum hypothesis is in applying this to P(N)/Fin: |P(N)/Fin| = 2^aleph_0 = aleph_1 by CH.

We need to adapt the previous argument. We'll use a trick called a back and forth argument.

We'll define increasing sequences (A_t), (B_t) of countable subalgebras of A and B respectively, with Union A_t = A and Union B_t = B. Enumerate A as {a_t : t < aleph_1} and B as {b_t : t < aleph_1}.

Further there will be embeddings as follows:

f_t : A_t -> B_t
g_t : B_t -> A_{t+1}

Such that g_t f_t = id and f_{t+1} g_t = id.

Then these will paste together to give maps f : A -> B, g : B -> A which are homomorphisms and mutual inverses, proving the result.

Now that we've got this setup, it's really just a case of applying the countable extension lemma.

Suppose we've done this for s < t. Let C be the union of g_s(B_s) over s < t. Then this is a countable Boolean subalgebra, and we have an embedding h : C -> B given by patching together the inverses of the g_s. By the construction we have Union_{s < t} A_s <= C, and h|_{A_s} = f_s. Now, extend this map h to the algebra generated by C and a_t. Call this algebra A_t and the extension f_t : A_t -> B.

Now, we have that the image of f_t contains B_s for s < t. Define B_t to be the algebra generated by the image of f_t and b_t and g_t : B_t -> A to be an extension of the inverse of f_t to this algebra.

The induction continues.

Phew. I think I mangled that explanation slightly. Hopefully the idea that I'm trying to convey is reasonably clear.

Anyway, the point is that under the assumption of the continuum hypothesis this becomes "P(N)/Fin is the unique Boolean algebra of cardinality aleph_1 with the strong countable separation property."

This is nice, because it's a simple short characterisation of something important. Having these will very often give you interesting or useful results.

Now, hit it with Stone duality...

The question is, what does the statement 'has the strong countable seperation property' become under Stone duality?

It turns out to be something slightly awkward:

Definition: X is a Parovichenko space if it is a compact zero dimensional Hausdorff space with no isolated points, such that every pair of disjoint open F_sigma sets have disjoint closures and every non-empty G_delta set has non-empty interior.

Theorem: B has the strong countable separation property iff its Stone space is Parovichenko.

I'm not going to prove this.

So, our uniqueness result becomes: beta N \ N is the unique Parovichenko space of weight aleph_1.

This is actually extremely useful, because there are a wealth of Parovichenko spaces of weight aleph_1 (under CH), and this theorem tells us they're all homeomorphic - so by studying one, we study all of them.

For example, (beta N \ N)^2 is Parovichenko, so is homeomorphic to beta N \ N.

Seems trivial? It's not. You can't prove this result in ZFC alone.

This gives rise to some very rich theory in modern set theoretic topology - the study of beta N under the continuum hypothesis is extremely nice, and this leads on to a lot of interesting topological and combinatorial results...

...which I know almost nothing about. So I'm going to stop here.

Wednesday, January 11, 2006

Silly proofs 1

My general style of proof in mathematics is to apply deep and powerful theorems in clever ways. I don't like going back to basics, and thus will often apply a big theorem in lieu of making explicit estimates, direct calculations, etc. Sometimes this gives you a better proof, other times it just gives you a shorter and more mysterious proof.

This, combined with my general liking of severe overkill, is probably what leads to me enjoying proving things in silly ways. I'm going to use this blog to share some of my favourite silly proofs.

This one came up recently:

Let M_n be a sequence of real numbers with M_n -> inf. Show that there exists a_n > 0 such that sum a_n converges and sum M_n a_n diverges.

Now, if you want to be boring you can construct such a_n explicitly. But my way is much more fun. "Ah ha!" I thought, "If it didn't then you could define a discontinuous linear functional on l_1, and we know that's impossible!". What follows is basically just a sketch of the most elementary version of this argument I could use:

Suppose not. Then for every (a_n) in l_1 we have that sum M_n a_n converges (it converges absolutely, by applying the assumption to the sequence (|a_n|)). Define f : l_1 -> R by f(x) = sum M_n x_n. Then f is linear and discontinuous - because ||f|| >= |f(e_n)| = |M_n| -> inf, where the e_n are the unit vectors.

Now, let f_N(x) = sum_{n <= N} M_n x_n. Each f_N is continuous, and for all x we have f_N(x) -> f(x). So f is a pointwise limit of continuous linear functionals. But the continuous linear functionals are closed under pointwise limits (e.g. because they are Borel measurable, because of the uniform boundedness theorem, because the stars are in the correct alignment, etc), contradicting the fact that f is discontinuous!

I leave the elementary proof as an exercise for the interested reader.
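For the curious, here is one way (of many) the elementary construction can go, sketched in Python with the concrete choice M_n = sqrt(n); the blocking scheme below is my own illustration, not necessarily the intended solution:

```python
import math

# Take M_n = sqrt(n) as a concrete unbounded sequence. On block k,
# where 4^k <= n < 4^(k+1) (so M_n >= 2^k), spread total mass 2^(-k)
# evenly. Then sum a_n <= sum 2^(-k) = 2, while each block contributes
# at least 2^k * 2^(-k) = 1 to sum M_n a_n -- so it diverges as the
# number of blocks grows. (This is a finite truncation to K blocks.)
K = 8
a = [0.0] * 4**K
for k in range(K):
    block = range(4**k, 4**(k + 1))
    for n in block:
        a[n] = 2.0**-k / len(block)

s_a = sum(a)                                             # bounded partial sum
s_Ma = sum(math.sqrt(n) * a[n] for n in range(len(a)))   # grows with K
assert s_a < 2.0
assert s_Ma >= K  # at least 1 per block, so unbounded as K -> inf
```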

Tuesday, January 10, 2006

Chains of null sets

Noam Elkies made a cute observation on his site, which is that there are chains of null subsets of [0, 1] whose union is not null. Proof: Take a maximal chain. The union of this can't be null or it would contradict the maximality of the chain.

He then went on to ask how 'large' their union can be: Under the continuum hypothesis it can be all of [0, 1]. Do you need the continuum hypothesis for this? Without it how large can the union be?

I tinkered around with applying some cardinal invariants to the problem and came up with a nearly complete solution. I'm going to be posting a pdf of it at some point, but here are the highlights:

The solution is for all intents and purposes purely combinatorial. I used essentially no measure theory in proving it except for one lemma which has a comparably easy category analogue.

It basically depends on three cardinal invariants. add, cov and non. The additivity is the smallest cardinal k such that there are k many null sets whose union is not null. cov is the smallest cardinality of a covering of [0, 1] by null sets. non is the smallest cardinality of a non-null set.

We have aleph_1 <= add <= non, cov <= 2^aleph_0. add is regular, the other two don't have to be, and no inequalities between non and cov are provable in ZFC. (I screwed up and claimed in my email to Elkies that in fact non <= cov and they were both regular. This is false, but it only broke a minor part of my conclusions).

It's relatively easy to see that if non = 2^aleph_0 or add = cov then you can get a chain whose union is [0, 1]. What's slightly harder is that if such a chain exists then you can find some regular cardinal k (the cofinality of the chain) with cov <= k <= non. So, because you can have non < cov, you can't always find such a chain.

So, in the absence of one with union [0, 1], how big can it be? Can it be measurable? (If it is measurable then of course it has measure > 0). Turns out not, and this is where the tiny bit of measure theory comes in. If A is measurable and mu(A) > 0 then mu(A + Q) = 1 (addition is mod 1). So, if we have our chain L_t of null sets whose union is measurable and of positive measure, then replacing each L_t with L_t + Q gives a chain of null sets whose union is of full measure. Then adding in the complement we get it to be all of [0, 1].

The only question I have remaining which I'm not sure about is whether or not the existence of such a regular cardinal is equivalent to the existence of such a chain. I doubt it, but I'm not really sure and probably lack the knowledge of forcing needed to understand a proof either way.

Wednesday, October 26, 2005

Announcement

I'm going to stop writing the super-real fields series for now. It's still interesting me, but there are some other subjects which have been bumped up my priorities list recently that I want to learn first. I may well make some posts about them later.