Blog: Ryan's Blog

Description: Founder of CGP's personal Blog

Created by ryan on Mon 25 of May, 2009 23:51 EDT

Last post Sun 30 of May, 2010 15:25 EDT

(13 Posts | 55011 Visits | Activity=2.00)


I've taken the Question Generator code from my previous post and translated it into PHP so that it can be accessed as a web app. There's still a lot of room for improvement, and it is probably obvious from the source that it was originally written in C++, but this allows you to try out the software without having to download and compile it from source.

Try it here: Question Generator Online

Or download the source here: qgen-20100530.zip

This source code is released under the GNU Affero General Public License v3.0.


This is a pet project of mine to randomly generate typical algebra word problems. The program generates Moodle XML files containing randomly generated multiple choice questions. It's currently hard-coded to generate 3 files: one containing arithmetic questions, one containing linear equation questions, and one containing "Mystery Number" word problems. There is still a lot of work to be done, and this is more of a proof of concept than anything else.
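The actual generator is written in C++ (and later PHP), but the shape of the output is easy to sketch. Below is a minimal Python illustration of the idea (mine, not part of qgen) that emits Moodle XML multichoice questions using the standard `<question type="multichoice">` / `<answer fraction="...">` elements from the Moodle XML format; the question content and distractor scheme are made up for the example.

```python
import random
import xml.etree.ElementTree as ET

def arithmetic_question(rng):
    """Build one random arithmetic multiple-choice question as Moodle XML."""
    a, b = rng.randint(2, 12), rng.randint(2, 12)
    correct = a * b
    # Distractors: plausible wrong answers near the correct product.
    wrong = {a * b + a, a * b - b, a + b}
    wrong.discard(correct)

    q = ET.Element("question", type="multichoice")
    name = ET.SubElement(ET.SubElement(q, "name"), "text")
    name.text = f"Multiply {a} by {b}"
    qt = ET.SubElement(q, "questiontext", format="html")
    ET.SubElement(qt, "text").text = f"What is {a} times {b}?"
    for value, fraction in [(correct, "100")] + [(w, "0") for w in wrong]:
        ans = ET.SubElement(q, "answer", fraction=fraction)
        ET.SubElement(ans, "text").text = str(value)
    return q

quiz = ET.Element("quiz")
rng = random.Random(42)  # fixed seed so the output is reproducible
for _ in range(3):
    quiz.append(arithmetic_question(rng))
xml = ET.tostring(quiz, encoding="unicode")
print(xml[:120])
```

The resulting `<quiz>` document can be imported directly through Moodle's question bank import page.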

This software was written by Ryan Ruff (ryan at classroomgaming dot org) (C) 2009-2010 and is hereby released to the public under the terms and conditions of the GPLv3. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Download Source .ZIP

I've included both the Xcode project and a Makefile for building from the command line. It has only been tested under OS X, so it may require some modifications on other platforms.

What makes this different from other question generators is that it is not just replacing numbers in a fixed English language schema. Instead, the English text is generated from a tree structure containing a randomly generated equation. Ideally it would be possible to generate a greater variety of questions in this way, but making these problems sound natural may be harder than I expected. Please contact me if you have any comments or would like to contribute.
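To illustrate the idea (this is my own Python sketch, not the qgen internals; the node kinds and English templates are invented for the example), an equation can be stored as a tree and walked to produce English text:

```python
import random

# A tiny expression tree: leaves are numbers or the unknown ("a number");
# internal nodes are operations with English templates. The node and
# template names here are illustrative, not taken from the qgen source.
TEMPLATES = {
    "add": "{left} plus {right}",
    "mul": "the product of {left} and {right}",
}

def random_tree(rng, depth=2):
    """Randomly generate a small expression tree."""
    if depth == 0:
        return ("num", rng.randint(-30, 30)) if rng.random() < 0.5 else ("var",)
    op = rng.choice(["add", "mul"])
    return (op, random_tree(rng, depth - 1), random_tree(rng, depth - 1))

def to_english(node):
    """Recursively render a tree as an English phrase."""
    kind = node[0]
    if kind == "num":
        return str(node[1])
    if kind == "var":
        return "a number"
    return TEMPLATES[kind].format(left=to_english(node[1]),
                                  right=to_english(node[2]))

# A hand-built tree reproducing the first sample question below:
tree = ("add", ("mul", ("num", -11), ("var",)), ("num", -27))
print(to_english(tree) + " is -27. Find the number.")
# -> "the product of -11 and a number plus -27 is -27. Find the number."

rng = random.Random(1)
print("Random expression:", to_english(random_tree(rng)))
```

Making the rendered phrases sound natural (operator precedence, pluralization, avoiding ambiguous nesting) is where most of the remaining work lies.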

Sample "Mystery Number" Questions produced by this program:


The product of -11 and a number plus -27 is -27. Find the number.

-29 times a number and 49 more is 19. Find the number.

Multiplying -23 and a number less 25 is the quotient of the number and 38 less 36. Find the number.
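The first two samples translate directly into linear equations of the form ax + b = c, which a few lines of Python can check (a sketch of mine, not part of the generator):

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c for x, returning an exact Fraction."""
    return Fraction(c - b, a)

# "The product of -11 and a number plus -27 is -27."  ->  -11x + (-27) = -27
print(solve_linear(-11, -27, -27))   # the mystery number is 0

# "-29 times a number and 49 more is 19."  ->  -29x + 49 = 19
print(solve_linear(-29, 49, 19))     # 30/29: not a whole number, so a real
# question set would need to constrain the random coefficients
```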


I've recently added a new section to the Wiki which introduces some of the graphing and solving functionality of the TI-84+ calculator. You can find this here: TI84Plus. This may be expanded in the future.

Lately, I've spent a lot of time contemplating some of the ideas presented in Chaitin's conclusion to Meta-Math. Something just didn't feel right after reading it, and I proceeded to read it again in disbelief. A number of interesting conjectures are included in the conclusion, but some seem wildly out of context given the direction taken in the rest of the book. Up until the conclusion, the book is structured in a very logical fashion. In the conclusion, Chaitin shifts to generalizing the claims of the book in what seems more like an *appeal to emotion* than a *logical argument*. Imagery of God, intuition, creativity, and love-making forms a basis for appealing to mankind's desire to believe that there is something special about the human condition that cannot be expressed by computing machines.

Before delving into this further, I'd like to play out a little thought experiment with Borel's "know-it-all" real number. In this number, all possible English phrases are ordered and each one is assigned a position which corresponds to a digit in the expansion of this real number. A value of 0 indicates bad English, 1 indicates that the sentence is not a yes-or-no question, 2 indicates that the question is unanswerable, 3 indicates yes, and 4 indicates no. The whole purpose of this thought experiment is to show just how absurd this *real* number really is. My curiosity about this number has to do with an equally absurd question within the English language: "Does God exist?". How can the answer to this question be codified within these options? It doesn't appear to be bad English, unless the word "God" is considered to be undefined. The sentence has the form of a yes-or-no question, but one might also consider it a probabilistic question, in which case it has no clear yes or no answer. Some individuals might argue that it is an unanswerable question, while others may promote a resounding yes or no. Any one of these five options is an arguable position. It seems that any attempt at a correct categorization of this question inherently contains some predisposition. While this may merely add to the absurdity of an already absurd question, I think the observation that different individuals have different predispositions towards the nature of truth speaks volumes about the subject of mathematics. Specifically, I would argue that mathematicians need to consider their own personal biases when producing new mathematics.

Chaitin begins his conclusion by talking about the "transcendental" nature of mathematics in relation to God. He also suggests that this reasoning led to Cantor's development of theories on the nature of infinity. What bothers me about this scenario is that it shows signs of *experimenter's bias*. If Cantor had a predisposition to believe that the real numbers were uncountable, it's conceivable that he subconsciously chose his definitions and axioms in such a way that they would lead to this theorem as a result. The real genius of Cantor was in his ability to convince other mathematicians that what he said was true. There is a powerful social influence on what is considered "correct" in the field of mathematics. The notion of proof in mathematics is inherently a social construct.

One of my other concerns about this section is the way Chaitin jumps from the topic of "God" from the perspective of Einstein and Spinoza to the topic of monotheism versus polytheism. The problem with this transition is that belief in the "God" described by Einstein and Spinoza would not be considered either monotheism or polytheism, but rather pantheism or monism. In this context, the word "God" does not refer to a supreme being in the usual sense; rather, the word is treated synonymously with "Nature". This belief is not all that different from atheism. Under such a belief system, questions about the nature of "God" are best answered through physics rather than mathematics. In order to obtain an accurate picture of "God", it is necessary to set one's individual beliefs aside and look at what kind of picture is painted by the scientific evidence. Richard Feynman shares a similar perspective in this video.

I think the problem with Chaitin's conception of mathematics in the conclusion is that it suggests that revelations in mathematics can only come from divine inspiration, and this isn't necessarily the case. Often an important mathematical concept will reveal itself to a mathematician in a flash of insight, a "Eureka!" moment. However, these kinds of ideas do not appear out of nowhere, and become clear only when the individual has spent an adequate amount of time examining the problem at hand. A completely intuitionist view of mathematics trivializes the scientific processes that lead to that moment of insight. A strictly intuitionist view of mathematics also has strong ramifications for education. If a teacher believes that some students will be "divinely inspired" to do math, and others will not, then what would be the point of providing any mathematical instruction? In the context of education, it's important for teachers to believe that **any** student can learn to do mathematics.

I'm reminded of a web comic from XKCD, in which various scientific disciplines are "arranged by purity". What I find interesting about this model is that the topic of meta-mathematics seems to form a loop back to sociology. Chaitin argues that "the problem with the current metamath is that it deals only with – it refutes – static FAS's." This statement forms a kind of meta-meta-mathematics, since it is really an observation about the behavior of mathematicians in the field of meta-mathematics. Perhaps mathematicians need to incorporate experimental data on the behavior of mathematicians into the study of meta-mathematics in order to further advance the field.


Given my distrust of Cantor's diagonal proof, I think it's also appropriate to approach Gödel's Incompleteness Theorem and Turing's Halting Problem with some skepticism, because they use the same method of proof. Chaitin suggests that the Incompleteness Theorem can be thought of as a direct result of the cardinality of the reals being larger than that of the natural numbers. But what if Cantor was wrong, and the reals really are countable? Would the results found by Turing and Gödel still be true?

Rather than try to address these grandiose questions directly, I'd like to turn to Chaitin's Omega. I really like the spin Chaitin puts on the Halting Problem with Omega. First, he defines Omega as "The Halting Probability". What a great way to set up this problem! Instead of asking whether or not a specific program halts, he asks what the probability is that a program chosen at random will halt. In one fell swoop Chaitin has managed to break out of the system, much like Gödel's formation of a meta-mathematical statement. Let me explain what I mean by breaking out of the system.

Chaitin's Halting Probability is a hypothetical question about universal computers. However, for the purposes of this argument I'm going to use a specific universal computer called a Turing machine, which for all intents and purposes is equivalent. A Turing machine is an abstract computer composed of the following parts: a **tape** containing cells holding some information, a **head** that can move left and right along the tape and read and write information in the active cell, a **state register**, and a **table** that tells the head how to behave based on the information in the current cell and the current value in the state register, and when to change the current state. The classical halting problem asks: *given a program and a finite input, decide whether or not the program will ever stop*. I would argue that by asking "what is the probability of a program halting" instead of asking for a yes/no answer, we've essentially broken our model of a computer.
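A machine of this kind is easy to sketch in code. The following toy Python simulator is my own simplification (the table here only writes, moves, and switches state, and a missing table entry means "halt"); it runs a tiny machine that writes three 1s and stops:

```python
def run_turing(table, tape, state="A", steps=1000):
    """Simulate a one-tape Turing machine.

    table maps (state, symbol) -> (write, move, next_state); a missing
    entry means the machine halts. Returns (halted?, tape contents)."""
    tape = dict(enumerate(tape))   # sparse tape: cell index -> symbol
    head = 0
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in table:
            return True, tape      # no rule for this situation: halt
        write, move, state = table[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return False, tape             # step budget exhausted; did not (yet) halt

# A machine that writes three 1s and halts: states A -> B -> C -> HALT.
table = {
    ("A", 0): (1, "R", "B"),
    ("B", 0): (1, "R", "C"),
    ("C", 0): (1, "R", "HALT"),    # no rules for HALT, so it stops
}
halted, tape = run_turing(table, [0, 0, 0])
print(halted, sorted(tape.items()))
```

Note that even this simulator has to impose a step budget to be usable, which already hints at the issues discussed below.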

If my experience in software development has taught me one thing, it's that computers don't always behave as they should 'in theory'. Chaitin notes this too, saying "experimentation is the only way to 'prove' that software is correct." I've seen programs that should not halt, halt for unexpected reasons. I've also seen programs that should halt fail to halt for unexpected reasons. Often hardware limitations or failures are the cause of these problems, but these do not exist in the world of abstract Turing machines! Turing machines always behave perfectly, and never run out of tape. Real computers do face these kinds of problems, though, and that's exactly my point. The probability of a program on a real computer halting is never exactly 0 or 1, but always somewhere in between. In other words, there's always some probability that the hardware will malfunction in such a way that it provides you with a false result.

To address the question of what the probability of a program halting is, we need to expand our model of a computer. We need to address questions like: "what is the probability that the head fails to move left when instructed?", "what is the probability that the head moves left two places instead of just one?", "what is the probability of this machine running out of tape?", or "what is the probability that we attempt to write 1 to our state register but get 0 instead?". With an abstract Turing machine we assume that the answer to all of these questions is 0, but are we right to do so? In other words, by asking the probability of a program halting we are really asking a question that depends on variables outside of our model – a question about the validity of the model itself. This seems to make the undecidability of the halting problem obvious. Of course we can't create a program that determines the halting probability: this number depends on factors that are not part of our model!
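Questions like these can at least be explored numerically. The following Python sketch (entirely my own toy fault model, nothing from the book) runs a program that should always halt, marching the head across 20 cells, while letting each head movement fail with some probability, and tallies how often it finishes within a fixed step budget:

```python
import random

def runs_to_completion(fault_p, rng, cells=20, budget=500):
    """A program that should always halt: march the head right across
    `cells` cells. With probability fault_p the head fails to move on a
    given step (a toy hardware-fault model, invented for illustration)."""
    head, steps = 0, 0
    while head < cells:
        steps += 1
        if steps > budget:
            return False          # stuck too long: call it non-halting
        if rng.random() >= fault_p:
            head += 1
    return True

rng = random.Random(0)            # fixed seed for reproducibility
results = {}
for p in (0.0, 0.5, 0.99):
    results[p] = sum(runs_to_completion(p, rng) for _ in range(200))
    print(f"fault probability {p}: halted {results[p]}/200")
```

With no faults every run halts; with a high enough fault rate almost none do within the budget, so the observed "halting probability" sits strictly between 0 and 1 once hardware is allowed to misbehave.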

The other point I'd like to address is the uncomputability of Omega – or more specifically, of the Nth approximation of Omega. Part of me agrees with Chaitin's description of Omega as "maximally uncomputable", but for a rather different reason. Looking at the description of the Nth approximation of Omega, the first step is to "run each program up to N bits in size for N seconds". To me, this step directly implies uncomputability. The problem is that the definition of a Turing machine has no sense of time. Our model of a universal computer includes no way to monitor the passage of time. Doing so requires us to know information that is specific to the hardware the computer runs on, such as the amount of time it takes for the head to move from one cell to another or to change the current value in the state register. In other words, the Nth approximation of Omega may vary depending on how fast your computer is. This observation makes the randomness of Omega fairly obvious. One cannot know what the output will be until the program is run, because its output depends on factors that are not known until it **is** run! This is a marvelous example of "thinking outside the box". Chaitin has managed to reach out of the system and query the value of a physically random variable to produce a completely random number. This is almost analogous to a machine that physically tosses a coin over and over to produce a random binary sequence.
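Chaitin's recipe – run every program of up to N bits for a budget of N, crediting each halter 2^-(program length) – can be mimicked on a toy machine. In the Python sketch below the "semantics" are invented for illustration (a program p simply runs for int(p, 2) steps and then halts), and because the program set is not prefix-free the sum can exceed 1, which a real construction of Omega avoids:

```python
from itertools import product
from fractions import Fraction

def halts_within(bits, budget):
    """Toy semantics (my own, for illustration): program p runs for
    int(p, 2) steps and then halts. It halts within the budget iff
    that step count fits."""
    value = int("".join(map(str, bits)), 2) if bits else 0
    return value <= budget

def omega_approx(n):
    """Chaitin-style Nth approximation: run every program of up to n
    bits with a budget of n steps, crediting 2**-len(p) per halter."""
    total = Fraction(0)
    for length in range(1, n + 1):
        for bits in product((0, 1), repeat=length):
            if halts_within(bits, n):
                total += Fraction(1, 2 ** length)
    return total

for n in (2, 4, 6):
    val = omega_approx(n)
    print(n, val, float(val))
```

The approximations grow monotonically as N increases, just as in Chaitin's construction, but the step budget here is counted in abstract machine steps, which is exactly the point at issue: "N seconds" has no meaning inside the model.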

I agree with Chaitin that mathematics cannot be confined to a single static formal axiomatic system. In addition to this system needing new axioms to more precisely identify truth, we also need to examine the theorems of the system and learn to identify when a particular axiom is 'bad'. Chaitin describes math as a progression, from axioms through a computer to theorems. What bothers me about this model is that I think of mathematics as a recursive process that loops from the theorems back to the axioms. The important idea that drives me as a mathematician is to always question my assumptions. In the context of Omega, I think it's necessary to ask whether or not our model of a computer is even adequate for asking a question like the halting probability. More generally, I would suggest that the implications of "computation theory" have diverged so far from the behavior of "actual computers" that their results are no longer of interest. From an engineering standpoint, it is of little relevance whether or not an abstract program halts on an imaginary computer; what is important is whether or not a specific program halts on a specific computer in a specific amount of time. However, the current model of a universal computer is simply not designed to answer this type of question.

My interpretation of Gödel's Incompleteness Theorem is that mathematics needs to be very careful with regard to what truths it declares as axioms. I guess part of this shows the scientist in me, in that I feel a need to constantly challenge my own assumptions. I feel a certain necessity to doubt results like the undecidability of the halting problem, because accepting them becomes a self-fulfilling prophecy. If I decide that the problem is undecidable, then I stop searching for a solution and will consequently never find one!


I just finished reading Gregory Chaitin's Meta-Math! The Quest for Omega and wanted to share my thoughts on it because I think some of these ideas are relevant to this site. First, I want to say that despite my following criticisms it was certainly an enjoyable read. Chaitin has a knack for concisely summarizing the ideas behind many of the important developments in computational mathematics. One of my favorite lines was "a number with infinite precision, a so-called real number, is actually rather unreal!" Where I seem to differ from Chaitin is that I don't necessarily think of formal axiomatic systems as a failure. One of my music instructors once told me that 'you need to know the rules to know when you can break them', and I think the same goes for mathematics. What I got out of Gödel's Incompleteness Theorem were two major ideas. The first was that mathematics could be used as a tool for analyzing itself – so-called meta-mathematics. The second was that formal axiomatic systems are chaotic – they are very sensitive to changes in their initial conditions. However, my real beef is not with Chaitin or Gödel, but rather with Cantor and his diagonal argument.

Cantor's argument that there are more real numbers than natural numbers goes as follows: Suppose the real numbers in the interval [0,1) are countable. Then we can arrange them in a list:

d0.**7**589162786187...

d0.6**5**72873842435...

d0.42**3**8974289365...

d0.535**2**765387282...

If so, then we can construct a real number from the digits along the diagonal by changing each one; for example, we subtract 1 from each digit along the diagonal to get d0.6421..., which differs from each listed number in at least one digit. Cantor's argument is that this number cannot be found in the list. Since our assumption was that the list includes all real numbers in the interval, our assumption must be false and the real numbers are not countable.
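The construction can be checked mechanically; subtracting 1 from each diagonal digit of the four listed expansions does produce 0.6421...:

```python
# The four listed decimal expansions, digits after the decimal point.
rows = [
    "7589162786187",  # 0.7589162786187...
    "6572873842435",  # 0.6572873842435...
    "4238974289365",  # 0.4238974289365...
    "5352765387282",  # 0.5352765387282...
]

# Cantor's rule as described above: take the k-th digit of the k-th
# number and subtract 1 (wrapping 0 to 9 so the digit always changes),
# so the result differs from every listed number in at least one place.
digits = [(int(row[k]) - 1) % 10 for k, row in enumerate(rows)]
print("0." + "".join(map(str, digits)))  # -> 0.6421
```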

My problem with this argument is that I don't like how it handles the following binary sequence, which is based on the binary expansions of the natural numbers:

b0.**0**00000000...

b0.1**0**0000000...

b0.01**0**000000...

b0.110**0**00000...

b0.0010**0**0000...

Clearly, the elements along the diagonal are always 0, so Cantor's diagonal number would just be b0.1111111111... So who's to say that this number is not in the sequence? b0.1 is the second number in the sequence, b0.11 is the fourth, b0.111 is the eighth, and b0.1111 is the sixteenth. For an arbitrary prefix containing n 1s, this will be the 2^n-th number in the sequence. From this standpoint, it appears that the sequence converges on Cantor's diagonal number.
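Reading the sequence as "entry k is the binary expansion of k−1 with its digits reversed" (my reconstruction of the list above; trailing zeros are dropped below, since they don't change the value), a few lines of Python confirm the claimed positions:

```python
def entry(k):
    """k-th element of the sequence (1-indexed): the binary expansion of
    k-1 with its digits reversed, read as a binary fraction. This is my
    reading of the construction above; trailing zeros are omitted."""
    return bin(k - 1)[2:][::-1] if k > 1 else ""

# b0.1 is the 2nd entry, b0.11 the 4th, b0.111 the 8th, b0.1111 the 16th:
for n in (1, 2, 3, 4):
    print(n * "1", "-> entry", 2 ** n, repr(entry(2 ** n)))
```

This works because 2^n − 1 is written as n ones in binary, and reversing n ones gives back n ones.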

The main argument against this rebuttal is that decimal expansions are not unique, since d0.9999... = 1, and therefore this is not a 1-1 correspondence with the real numbers. This is also why I was careful to choose the interval [0,1) instead of [0,1] in the context of this argument. Who's to say that the number 0.9999... even exists? Cantor's argument rests on the fact that it does, but if we also accept that 0.9999... = 1, then it's not even one of the numbers I was intending to describe!

Earlier in this section I suggested that formal axiomatic systems are chaotic. I think this is a really good example of that. If we have two formal axiomatic systems, one that has 0.999999... = 1 as an axiom or theorem and one that doesn't, then we're going to get wildly different theorems as a result. When I look at the diagonal argument applied to the binary expansions above, I get the feeling that Cantor's argument rests on 0.999999... = 1 and 0.999999... != 1 both being true. Surely, if we only have a finite number of digits in the representation of a number, we would want to treat 0.999999 and 1.000000 as different numbers, right? Why should this behavior change as the number of digits in the representation approaches infinity? Again, like Chaitin, I sometimes get the feeling that "infinitesimally small distances do not exist", and this feeling makes me cast doubt on Cantor's proof. The result may or may not be true, but I think his method of proof is invalid. It's almost as if the cardinality of the real numbers is defined to be more than that of the natural numbers, rather than being a theorem of the system.

The problem with having these doubts about Cantor's diagonal argument is that the same method was used in Gödel's proof of the Incompleteness Theorem and in Turing's proof that the halting problem is unsolvable. Thus, I rather enjoyed Chaitin's alternative approach to these results. I think his probabilistic approach to the halting problem was rather insightful. I'll try to post my thoughts on the halting problem and Omega soon.

Cantor's argument that there are more real numbers goes as follows: Suppose the real numbers in the interval [0,1) are countable. Then we can arrange them in a list:

d0.

**7**589162786187...d0.6

**5**72873842435...d0.42

**3**8974289365...d0.535

**2**765387282...If so, then we can construct a real number from the digits along the diagonal by changing each one, for example we subtract 1 from each digit along the diagonal to d0.6421..., so differs from each other number by at least 1 digit. Cantor's argument is that this number cannot be found in the list. Since our assumption was that the list includes all real number in the interval, our assumption must be false and the real numbers are not countable.

My problem with this argument is that I don't like how it handles the following binary sequence, which is based on the binary expansions of the natural numbers:

b0.

**0**00000000...b0.1

**0**0000000...b0.01

**0**000000...b0.110

**0**00000...b0.0010

**0**0000...Clearly, the elements along the diagonal are always 0, so Cantor's diagonal number would just be b0.1111111111... so who's to say that this number is not in the sequence? b0.1 is the second number in the sequence, b0.11 is the fourth, b0.111 is the eighth, b0.1111 is the sixteenth. For an arbitrary sequence containing n 1s, this will be the exp(2,n)th number in the sequence. From this standpoint, it appears that the sequence converges on Cantor's diagonal number.

The main argument against this rebuttal is that decimal expansions are not unique, since d0.9999... = 1, and therefore this is not a 1-1 correspondence with the real numbers. This is also why I was careful to choose the interval [0,1) instead of [0,1] in the context of this argument. Who's to say that the number 0.9999... even exists? Cantor's argument rests on the fact that it does, but if we also accept that 0.9999... = 1 then it's not even one of the numbers I was intending to describe!
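For reference, the standard case for d0.9999... = 1 is that the partial sums get arbitrarily close to 1. A quick check with exact fractions (my own illustration, added for context):

```python
from fractions import Fraction

def partial_sum(n):
    # Partial sum 9/10 + 9/100 + ... + 9/10**n, computed exactly
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The gap below 1 is exactly 10**-n, which shrinks toward 0.
# Whether the completed limit 0.9999... is a number distinct from 1
# is precisely the point under dispute above.
for n in (1, 2, 3, 10):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
```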

Earlier in this section I suggested that formal axiomatic systems were chaotic. I think this is a really good example of that. If we have two formal axiomatic systems, one that takes 0.999999... = 1 as an axiom or theorem and one that doesn't, then we're going to get wildly different theorems as a result. When I look at the diagonal argument applied to the binary expansions above, I get the feeling that Cantor's argument rests on 0.999999... = 1 and 0.999999... != 1 both being true. Surely, if we only have a finite number of digits in the representation of a number, we would want to treat 0.999999 and 1.000000 as different numbers, right? Why should this behavior change as the number of digits in the representation approaches infinity? Again, like Chaitin, I sometimes get the feeling that "infinitesimally small distances do not exist", and this feeling makes me cast doubt on Cantor's proof. The result may or may not be true, but I think his method of proof is invalid. It's almost as if the cardinality of the real numbers is defined to be more than that of the natural numbers, rather than being a theorem of the system.

The problem with having these doubts about Cantor's diagonal argument is that the same method was used in Gödel's proof of the Incompleteness Theorem and in Turing's proof that the halting problem is unsolvable. Thus, I rather enjoyed Chaitin's alternative approach to these results. I think his probabilistic approach to the halting problem was rather insightful. I'll try to post my thoughts on the halting problem and omega soon.

The first draft of BASE Chapter 11, Polynomials, has been posted. I'm not quite as satisfied with this as some of the other chapters, but I think I covered most of the major ideas that will be needed for the later chapters. If you have any feedback or suggestions, feel free to add a comment to the wiki page and I'll try to address it.

Next up is Chapter 13, where the quadratic equation will be introduced. Stay tuned!

The first draft of BASE Chapter 12, Numeric Solutions, is now available. I inserted this in between Polynomials and Quadratic Equations because I wanted to encourage the reader to think about solving general functions before focusing on specific types of equations. There's still some clean-up that I think can be done in this section to make it more readable, but I like the hands-on approach that it takes, and I think some things should be intentionally left incomplete so that the reader has room to improve them on their own. I also think that adding this section helped give me a better idea of where I wanted to go with chapters 11 and 13.

An article on Slashdot on Friday drew my attention to Lockhart's Lament, a very passionate indictment of the current math education system. This was a very interesting read that highlights some of the problems I hope this project will eventually be able to address. It's good to know that I'm not the only one who thinks games could be good for the mathematics curriculum!

I really liked Lockhart's definition of mathematics as "the art of explanation". It encapsulates many of the core principles of mathematics and worked well with his argument. I agree that there is a real creative aspect to mathematics, that this is under-emphasized in the typical curriculum and that rote memorization is not an effective means of developing the critical thinking skills that are the heart of mathematical thinking. However, I personally like to think that mathematics is also "the science of explanation". It's not enough for a mathematician just to discover a pattern, but one must also explain that pattern to someone else, assess the effectiveness of that explanation, and refine it as needed. Although Lockhart seemed to try to distance mathematics from science, I felt that there was still a scientific process going on in his instructional examples.

I'm not sure if I agree with Lockhart's stance on mathematics as a language. I'm glad that I learned how to "read math", just as I'm glad I learned how to "read music". I even enjoyed learning about music theory! What's more, I think that I'm a better musician because of that instruction. Of course, I couldn't imagine not *playing music* in music class, but rather I feel that understanding the "language of music" was an essential part of my development as a musician. Likewise, learning the "language of mathematics" allowed me to accumulate knowledge that I might not have otherwise discovered. I agree that mathematics is about "notions, not notations", and that's part of the reason I chose to integrate Scheme into BASE. I think that the notation system of mathematics certainly has its problems, but the use of this notation has become something of an international standard and is consequently worth knowing. Part of me wishes I could exclusively use Scheme, but yet another part of me feels obligated to teach the mathematical notation because students are likely to encounter it on standardized assessments as a result of its popularity. Perhaps the assessments themselves are the problem?

I think Bill Kerr also made some excellent criticisms of this essay. Kerr points out that learners may differ in how they respond to creative instruction and that other subjects may be similarly "butchered" by schools. Kerr also identifies Papert's use of LOGO programming as one possible way to solve some of these problems.

The benefit of programming instruction is that it moves the experimentation out of the mathematician's head and into the computer, while at the same time eliminating the need for the student to perform tedious calculations. This would allow the math course to focus more on how the student communicates their ideas, which is really the important content.

Now that I have a few sample chapters, I'm starting to develop some question banks for the content so that I can try out the Moodle games from Vasilis Daloukas' game module. I've started creating a glossary and added some questions for the Pre-Algebra chapter. You can play some games now at http://www.classroomgaming.com!

I ran into an interesting dilemma with these questions. Moodle supports a "Calculated" question type, where the correct answer can be specified as a formula over some specified variables. However, the "Calculated" type requires that the user type the answer, within some specified precision, into a text field. For game use, multiple choice questions are more desirable since they're easier to grade. I wanted to create a large number of multiple-choice questions for the games so that there would be little to no repetition for the learner.

After playing with the "Calculated" questions for a while, I realized that what the type was really doing was generating a static data set of questions. This was nice in that I could review the questions for potential errors, such as a division by 0, and correct them. However, it also gave me an idea. I made a single multiple choice question and exported it to a Moodle XML file, just to see how the data was formatted, then wrote a quick C++ program that would produce an equivalent file with 150 randomly generated arithmetic questions. I uploaded this file to Moodle, and now I have a nice number of test problems! You can download the source to my little experiment here: http://www.classroomgaming.org/questiongen.zip
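The Moodle XML format for a multiple choice question boils down to a `<question type="multichoice">` element with a question text and a list of answers carrying credit fractions. The sketch below is a pared-down Python approximation of the generator idea (the actual program was C++, and only the essential Moodle fields are shown here):

```python
import random
import xml.etree.ElementTree as ET

def multichoice(a, b):
    # One arithmetic question in simplified Moodle XML form;
    # fraction="100" marks the correct answer, "0" the distractors.
    q = ET.Element("question", type="multichoice")
    ET.SubElement(ET.SubElement(q, "name"), "text").text = f"add-{a}-{b}"
    qt = ET.SubElement(q, "questiontext", format="html")
    ET.SubElement(qt, "text").text = f"What is {a} + {b}?"
    for value, fraction in [(a + b, "100"), (a - b, "0"),
                            (a + b + 1, "0"), (a + b - 1, "0")]:
        ans = ET.SubElement(q, "answer", fraction=fraction)
        ET.SubElement(ans, "text").text = str(value)
    return q

# Build a bank of 150 randomly generated questions, as in the post
quiz = ET.Element("quiz")
for _ in range(150):
    quiz.append(multichoice(random.randint(1, 20), random.randint(1, 20)))
xml = ET.tostring(quiz, encoding="unicode")
```

The resulting string can be written to a file and imported through Moodle's question bank.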

Generating a question database in advance seems like an ugly hack for this problem. It would be really nice if I could set it up so that the question, answer, and distractors are randomly generated on the fly. In creating this question generator, I realized that this could be quite a complicated process. Choosing random numbers for distractors is easy, but the randomness can make the wrong answers too obvious. I worked around this by first specifying common mistakes as formulas, then double-checking them against the answer and each other for uniqueness. Duplicate distractors are replaced with a random number until they are unique. This process seemed so convoluted, and since I was only working with arithmetic, I couldn't help but wonder how I could implement reasonable distractors for the topics in later chapters.
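The workaround described above might look something like this sketch (names are hypothetical; the common-mistake formulas shown are for a simple addition question a + b):

```python
import random

def make_distractors(a, b, answer, n=3):
    # Seed distractors with formulas for common mistakes, then
    # replace duplicates with random values until all are unique.
    common_mistakes = [a - b, b - a, a + b + 1, a + b - 1]
    pool = []
    for wrong in common_mistakes:
        # Keep only values distinct from the answer and each other
        if wrong != answer and wrong not in pool:
            pool.append(wrong)
        if len(pool) == n:
            break
    while len(pool) < n:
        # Fall back on random fill when the formulas collide
        r = random.randint(answer - 10, answer + 10)
        if r != answer and r not in pool:
            pool.append(r)
    return pool

a, b = 7, 5
print(make_distractors(a, b, a + b))
```

Plausible-but-wrong seeds keep the correct answer from standing out, while the uniqueness pass handles the collisions the post mentions.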

This got me thinking about order of operations problems, and how I could produce good distractors for them. I was reminded of a genetic programming paradigm that takes a pool of Scheme programs and performs random permutations on them until one is discovered that matches the desired behavior. One such permutation is crossover, in which a node in one program tree is replaced with a node from another. I thought I might be able to take a similar approach to order of operations problems: start with the base expression and modify it by swapping nodes around or replacing operations with their opposites. This might even work for a wide variety of algebraic problems.
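A tree-mutation approach along those lines could be sketched as follows (a hypothetical illustration in Python, not the author's implementation): represent the expression as a tree, then flip one operation to its opposite to produce a distractor's value.

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}
OPPOSITE = {"+": "-", "-": "+", "*": "/", "/": "*"}

def evaluate(node):
    # A node is either a number or a tuple (op, left, right)
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left), evaluate(right))
    return node

def mutate(node):
    # Replace one operation with its opposite, chosen at random;
    # may return the tree unchanged if no flip is selected.
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    if random.random() < 0.5:
        return (OPPOSITE[op], left, right)
    return (op, mutate(left), mutate(right))

expr = ("+", 2, ("*", 3, 4))   # 2 + 3 * 4
print(evaluate(expr))          # -> 14
print(evaluate(mutate(expr)))  # a plausible "common mistake" value
```

Evaluating a few mutated trees yields distractors that mirror real operator-precedence errors instead of arbitrary random numbers.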
