Play Time

Following the theme of my last journal entry, we will focus on two simple stories that lead to some deep mathematics.

Simple Story 1 – A Strange Quirk

I have a strange quirk: once I have started a streak, I tend to be very good about maintaining it over time. One such streak occurred during the pandemic, when I noticed that an app on my phone counted the number of consecutive days I had read.

Since I only had Steven Strogatz’s book The Joy of X, I read it multiple times over, and each time I continued to absorb new information. One concept that hit home was the absurdity of infinite series, and the struggles mathematicians had with this idea before advances were made in making sense of infinity, and infinite sums in particular.

Most people recognize the terms of the famous (or infamous, depending on your point of view) harmonic series as unit fractions being summed together.

\sum_{n=1}^{\infty} \frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots

As the number of terms grows, the terms tend to zero while the sum tends to infinity; i.e., the sum does not converge to some finite number. This is one of the reasons the harmonic series is so famous: it diverges even though its terms converge to zero.
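We can watch this slow march to infinity numerically. Here is a short Python sketch (my own, not from the book; the cutoffs are arbitrary) comparing the partial sums to the well-known approximation \ln n + \gamma, where \gamma \approx 0.5772 is the Euler–Mascheroni constant:

```python
from math import log

def harmonic_partial_sum(n):
    """Sum of 1/k for k = 1..n."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums grow without bound, but only logarithmically:
# H_n is approximately ln(n) + 0.5772.
for n in (10, 1_000, 100_000):
    print(n, harmonic_partial_sum(n), log(n) + 0.5772)
```

Even after 100,000 terms the sum has barely crawled past 12, yet it will eventually pass any number you name.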

The Alternating Harmonic Series

\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots

The Alternating Harmonic Series converges to \ln 2, a proof of which one can see here.
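We can check this convergence numerically with a quick Python sketch (truncating the infinite series; the term count is an arbitrary choice of mine):

```python
from math import log

def alternating_harmonic(n):
    """Partial sum of the Alternating Harmonic Series: 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The partial sums close in on ln 2 as n grows.
print(alternating_harmonic(100_000), log(2))
```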

The surprise from Strogatz’s book is that playing around with the order in which we add or subtract terms changes the resulting sum. For example,

1+\frac{1}{3}-\frac{1}{2}+\frac{1}{5}+\frac{1}{7}-\frac{1}{4}+\cdots = \frac{3}{2} \ln 2

Perhaps the easiest way to see this is to start with the value we know the Alternating Harmonic Series converges to

1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots= \ln 2

Multiply both sides by one-half

\frac{1}{2} \left( 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots \right) = \frac{1}{2} \ln 2

Distribute the half through

\frac{1}{2} -\frac{1}{4}+\frac{1}{6}-\frac{1}{8}+\cdots = \frac{1}{2} \ln 2

Now add the above sum to the original series

\frac{1}{2} -\frac{1}{4}+\frac{1}{6}-\frac{1}{8}+\cdots +1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots = \frac{1}{2} \ln 2 + \ln 2

Canceling and combining terms, we arrive back at our rearrangement

1+\frac{1}{3}-\frac{1}{2} +\frac{1}{5} +\frac{1}{7}-\frac{1}{4} + \cdots = \frac{3}{2}\ln 2

The astute observer will notice that the left-hand side is just a rearrangement of the Alternating Harmonic Series, and yet it is now 1.5 times the original amount.
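A quick numerical check of this rearrangement (a sketch of my own): the pattern is two positive odd-denominator terms followed by one negative even-denominator term, so the k-th group is \frac{1}{4k-3}+\frac{1}{4k-1}-\frac{1}{2k}.

```python
from math import log

def rearranged_partial(groups):
    """Sum the rearrangement 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    grouped three terms at a time."""
    total = 0.0
    for k in range(1, groups + 1):
        total += 1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
    return total

# Same terms as the Alternating Harmonic Series, different order,
# and the sum heads to (3/2) ln 2 instead of ln 2.
print(rearranged_partial(100_000), 1.5 * log(2))
```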

Now we have arrived at the point Strogatz was making: the commutative property of addition does not always hold for convergent infinite sums…which is just so weird.

Bernhard Riemann outlined exactly when a convergent infinite sum disobeys the commutative property of addition in a result known as Riemann’s Rearrangement Theorem: any conditionally convergent series can be rearranged to converge to any value whatsoever, or even to diverge.

Playing with rearrangements to see what numbers we may arrive at is so much fun, and Riemann’s theorem tells us we can reach any value we like. I am currently using Excel and 1,000 terms (truncating the infinite series) to attempt to converge to my favorite number, e.
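For those who prefer code to a spreadsheet, here is a sketch of one standard greedy approach (my own Python version of the game; the term count is arbitrary): add positive terms while the running total is below the target, and negative terms while it is above.

```python
from math import e

def greedy_rearrangement(target, n_terms):
    """Greedily rearrange the Alternating Harmonic Series toward `target`:
    add positive terms 1, 1/3, 1/5, ... while the running total is below
    the target, and negative terms -1/2, -1/4, ... while it is above."""
    pos, neg = 1, 2        # next odd (positive) and even (negative) denominators
    total = 0.0
    for _ in range(n_terms):
        if total < target:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
    return total

# Riemann's theorem in action: the same terms, reordered, homing in on e.
print(greedy_rearrangement(e, 100_000), e)
```

Because the positive terms alone diverge (and so do the negative ones), this greedy scheme can always climb back up or down, which is the heart of Riemann’s proof.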

The best example of this oddity that Strogatz outlines in the book comes from rearranging the series as follows

\left(1-\frac{1}{2}-\frac{1}{4} \right) + \left( \frac{1}{3}-\frac{1}{6} -\frac{1}{8}\right) + \left( \frac{1}{5}- \frac{1}{10}-\frac{1}{12}\right) + \cdots

Simplifying the first two terms inside each set of parentheses (note 1-\frac{1}{2}=\frac{1}{2}, \frac{1}{3}-\frac{1}{6}=\frac{1}{6}, and so on), the expression collapses to

\left( \frac{1}{2} - \frac{1}{4}\right) + \left( \frac{1}{6}-\frac{1}{8}\right) + \left(\frac{1}{10}- \frac{1}{12}\right) +\cdots

Factoring out a half from the above expression, we see our old friend again

\frac{1}{2} \left( 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots \right) = \frac{1}{2} \ln 2
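We can verify this grouping numerically as well (my own sketch): the k-th group of Strogatz’s rearrangement is \frac{1}{2k-1}-\frac{1}{4k-2}-\frac{1}{4k}.

```python
from math import log

def grouped_rearrangement(groups):
    """Sum Strogatz's grouping (1/(2k-1) - 1/(4k-2) - 1/(4k)) for k = 1..groups."""
    return sum(1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
               for k in range(1, groups + 1))

# Every term of the Alternating Harmonic Series appears exactly once,
# yet the sum is half of ln 2.
print(grouped_rearrangement(100_000), 0.5 * log(2))
```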

Mind blown. I know.

Simply re-reading a book to keep a digital count going led to a deep inquiry into the commutative property of infinite sums and uncovered some very strange behavior. Moreover, it reminded me that the simple operation of addition, looked at through this lens, holds unexpected behavior, and that even a fully fleshed-out concept can still have a little more to offer.

Simple Story 2 – Neural Networks


During our senior year at university, one of my best friends shared that he was going to write a neural network that would learn how to play chess, which was my first interaction with neural networks. As we discussed the project, my ideas about what a neural network was and how it “learned” were vague notions, and they remained clouded until a few weeks ago.

My colleagues and I were facilitating a lesson highlighting computer science, and we used neural networks as the vehicle for it. The lesson sparked a curiosity in me: as in that old conversation with my college friend, I was left unclear about what pieces comprise a neural network and how it works.

After diving into neural networks, I was pleasantly surprised to find that the core concepts are quite simple, yet have a ton of depth and many, many fantastic features. Neural networks are one of those rare gems that are both complex and complicated, so let me define those two terms to illustrate the point.

Complex vs. Complicated

When I refer to an object that is complex, I mean it is something that is often amorphous, ambiguous and requires concerted thought to make sense of it. My favorite example of a familiar complex task is raising a child, a process that is amorphous, ambiguous, and requires lots of mental work to be successful.

In contrast, a complicated task is concrete and fairly clear to understand, in the sense that it is clear when and how the task is completed; i.e., it is made of discrete pieces. Complicated tasks tend to be skill-based and specific, and can be stacked on top of each other to create more complication. My favorite example of a familiar complicated task comes from my high school social studies class, when we had to memorize all 50 states and their corresponding capitals. The task of memorizing one state-capital pair was complicated, and stacking that task 50 times made the complicated task hard, but the task was never complex.

With that distinction clear, a neural network is complex in the sense that it is adaptable and capable of learning. Moreover, applying a neural network to solve a problem is a complex task: figuring out what to measure, and which variables are important for the learning and which are not, involves many ambiguous features, so this part is very complex. The understanding and application of backpropagation is also complex, with complicated components.

Complexity aside, a neural network becomes complicated when we consider the interactions within the network itself, whose structure and function we will play with momentarily. In a very basic model with no hidden layers, the activations from the input, the weights assigned to the input, the synapses, the neuron, and the outputs are each a series of single finite steps with a straightforward path; i.e., they are complicated. That is not to say networks stay simple: neural networks become very, very complicated very, very quickly, as we will see when we discuss structure and function next.

Neural Network: Structure and Function

The following two sources were very helpful in getting my own head wrapped around Neural Networks.

  1. Simple Neural Network in Python From Scratch (from YouTuber Polycode)
  2. Neural Networks Playlist (from YouTuber 3Blue1Brown Channel)

The first resource is just enough to build a solid basic understanding, while the playlist in the second does a fantastic job of expanding on the original idea and highlighting both the complex and the complicated nature a neural network can take on.

The main idea to walk away with is that a neural network has a series of inputs that connect, like synapses, to a neuron. The neuron combines the inputs to make a decision, which is then mapped by a function to an output. Each input has an associated weight and each neuron has an associated bias; the network is trained by adjusting these weights and biases to balance the system, through a process called backpropagation. The image below is my own attempt to connect all these pieces.

Neural Network Schematic
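To make the pieces concrete, here is a minimal single-neuron network in Python, loosely in the spirit of the first resource above. This is my own sketch: the training data, random seed, and iteration count are arbitrary choices, and I omit the bias term for brevity.

```python
import numpy as np

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(s):
    """Slope of the sigmoid, written in terms of its output s."""
    return s * (1 - s)

# Four training examples; the hidden rule is: output = first input.
inputs = np.array([[0, 0, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [0, 1, 1]])
outputs = np.array([[0, 1, 1, 0]]).T

rng = np.random.default_rng(1)
weights = 2 * rng.random((3, 1)) - 1   # start with random weights in [-1, 1)

for _ in range(10_000):
    # Forward pass: weighted sum of the inputs through the sigmoid.
    predicted = sigmoid(inputs @ weights)
    # Backpropagation: push the error, scaled by the sigmoid's slope,
    # back onto the weights.
    error = outputs - predicted
    weights += inputs.T @ (error * sigmoid_derivative(predicted))

# A pattern the network has never seen, with the first input set to 1.
print(sigmoid(np.array([1, 0, 0]) @ weights)[0])
```

After training, the network has effectively learned that the output mirrors the first input, so a new pattern like [1, 0, 0] should map to a value near 1.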

So…you might be wondering how this is related to play time.

Aside from the utterly fascinating world of beginning to understand neural networks, the idea of play came to mind in the “training” of the network. That is, a network only learns by iterating through large numbers of trials to gain the insight we need it to have. Training has a sense of play to me, and learning should always incorporate as many elements of play as possible.

There is much more I want to say about neural networks, but I will save it for another entry.


While one person’s version of play may look very different from another’s, the idea of learning as a fun undertaking is something we tend to forget, especially in a school setting. I cannot express the joy these two examples of learning have brought me, and I am curious: what bit of play have you enjoyed in your learning?

What does play look like for you?

Published by mathkaveli

I'm a math geek.
