Mathematics is and is not formalistic

2 December, 2009 at 8:33 pm (mathematics, philosophy)

Some philosophy of mathematics. And of philosophy.

The question I will cover is: Is mathematics purely formal manipulation of strings of symbols?

First, the trivial answers: Yes, since all results, definitions and theorems could in principle be reduced to such. No, because doing mathematics takes imagination, creativity, and whatever other positive-sounding things I would like to mention. (It also requires manipulation of formal systems, occasionally known as abstract nonsense or computation or calculation.)

There was this philosopher named Immanuel Kant. He proved (by very nonrigorous standards and with many hidden assumptions) a number of seemingly contradictory results called antinomies (of pure reason), such as that the world is both infinite and finite. I stated something similar above, but it is fairly easy to see that I cheated, since my two statements are actually about different things.

Mathematics is formalistic in that the mathematical results – theorems, definitions and proofs – can be stated formalistically. Mathematics as in the activity that mathematicians do is not so, as it involves building and understanding (often complicated) arguments, seeing analogies, seeing past analogies, and other quite non-formalistic activities.

There is more to the question. Namely, is mathematics something discovered by or created by people? (One could ask this of natural sciences, I think.) Mathematics is discovered in that all the theorems and results are fixed, given a particular set of axioms (which means assumptions): Given proposition is true or false (or undeterminable due to Gödel’s incompleteness or even meaningless in the same sense that not every string of letters is a word or has meaning) based on the axioms in use.

Mathematics is created in that the particular results achieved, axiomatics considered, and so on, are determined by the questions mathematicians ask and answer. Physics, statistics, and various other fields of inquiry motivate plenty of mathematics, for example.

What is our conclusion? It is that the answer one finds is not important, but what philosophy does is highlight concepts that are hard to separate otherwise. This is what I find useful in philosophy. This is what I wish more philosophers recognised.

(Post inspired by this one.)

Permalink 3 Comments

Edition and playstyle wars

6 June, 2009 at 11:24 am (linkedin, rpg theory)

Mostly inspired by Donny the DM’s posts, namely this and this, the first of which was shared by Jonathan Jacobs of forthcoming Nevermet press on Google Reader.

Donny somewhat mischaracterises the extremes of sandbox play, misuses GNS, and makes a number of assumptions, but I thought it would be nice to engage his actual point, too.

I hope I am not misrepresenting Donny too severely. By my understanding Donny’s point is, to steal a term from another field, ecumenical. Donny wants to say that old school and 4e play are not that different after all. Donny’s argument is that since ridiculously extreme sandbox play and ridiculously extreme railroading don’t really work, everyone must actually play in the middle ground and hence in a pretty similar way.

There are a number of weaknesses in the argument, in addition to misrepresenting railroading. Donny is pretty focused on D&D and it shows. D&D assumes lots of combat; Donny’s argument also assumes lots of combat. Further, not all ways of playing map meaningfully to the railroading-sandbox axis. My normal style of game mastering is story-focused, but I don’t plan ahead and hence can’t railroad; there is no point in mapping this to the railroad-sandbox axis. This is not a big problem, as one can fabricate a ridiculously extreme version of my style, too, and use an argument similar to Donny’s. I will assume that this applies to all possible ways of playing.

The key claim remains: Since all extremes are implausible, all styles of play must be pretty close to each other and fundamentally similar. My perspective is that the claim is too ecumenical, but still has a kernel of truth hidden in it.

First the true part: Certainly, all of roleplaying shares many similarities. Certainly different play traditions have much to learn from each other. I mix and match techniques from old school play and indie games. Philippe, a 4e aficionado if there ever was one, experiments with random encounters. 4e, with its focus on encounters, has something to teach if one is willing to look carefully, but 4e players really ought to read and play some indie games so as to get a handle on skill challenges, which are a pretty blunt instrument. More importantly: It is possible to enjoy playing in styles that are not one’s favourite, as long as one is willing to approach them with an open mind. (Also, having fewer edition wars would be nice.)

Nevertheless, people play in different ways. I hear some even like railroading and pre-plotted adventures! Hard to accept, but true. The differences are real. Some styles of play demand a very different perspective for them to be enjoyed. Donny himself illustrates this with the following comments:

As to gathering information. <snip> You either railroad them (just have someone spill their guts as to where you want them to go), or you sandbox them (roll on the random rumor table and they go in the direction the dice tell them to – stomping off blindly indeed :)

No, you do neither of those. You give them the information that they could gather, maybe influenced by dice rolls. Maybe it guides them to some interesting adventurous location that you have designed and placed somewhere, not because you want the player characters to go there, but because you want to present going there as an option. When designing the sandbox, you place a bunch of interesting locations there and create a bunch of interesting random encounters, because you want to know what the players will do with them. In play you don’t guide them around; their characters are an adventurous bunch, or so involved in the situation that they will certainly undertake some interesting project or stumble upon something interesting.

That is: instead of a director who has a story to tell or encounters to guide the players through, the GM thinks of himself (or herself) as an arbitrator who can’t wait to see what the players do with his sandbox. A different frame of mind. Certainly one can mix and match, for example by creating a sandbox with a very strong theme or by creating an adventure with many genuine choices that take it in different directions. Regardless, the extreme but playable cases are pretty far from each other.

As a conclusion I say that those weirdos over there do play in a genuinely different way, but once you accept that the difference exists, you just might be able to enjoy their activity, too. Or maybe not. But at the very least you would be likely to learn a bit and get a new experience. Celebrate the difference.

Permalink 20 Comments

Basics of dice probabilities

26 May, 2009 at 7:32 pm (game design, linkedin, mathematics)

I’ll write a post or a few about probabilities that involve rolling dice. Those who know mathematics might be more interested in probability theory.

I will assume that all probability distributions are discrete and integer-valued. Trying to apply what I say here to continuous distributions will cause problems or require thinking.

Probability

Probability measures, or indicates, how certain it is that some event will happen (or has happened, in case of imperfect knowledge). Probability of 1 means that something is certain, while 0 means impossibility. Probability 1/2, or \frac{1}{2}, or 50%, or 0.5 or in Finnish notation 0,5 means that something happens half the time (if the event is repeated).

I very much prefer working with fractions, as they are exact and, in my opinion, more intuitive, but many people like percents. To convert a fraction into percents, simply multiply it by hundred and add the %-sign.
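To make the conversion concrete, here is a tiny Python sketch (using the standard library's `fractions` module; the variable names are my own):

```python
from fractions import Fraction

p = Fraction(1, 2)           # a probability as an exact fraction
percent = float(p) * 100     # multiply by hundred for percents
print(f"{percent}%")         # prints 50.0%
```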

An important axiom of probability is that something always happens. The sum of probabilities of all the specific outcomes is 1. By this I mean that if, say, a die is rolled, then it gives one and exactly one result. It doesn’t land sideways. It is not hit by a meteor or eaten by a dog.

Symmetry

Especially when playing around with dice, symmetry plays an important role. Symmetric events have the same probability.

I will assume that all dice are fair; in practice they are not and it doesn’t matter. An n-sided die has n symmetric results. All of them hence have the same probability. Something must always happen, so the sum of the probabilities is 1. It follows that for an n-sided die the probability of getting any result from the set \{ 1 , 2 , \ldots , n-1 , n \} is 1/n, while the probability of getting any other integer is zero.
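The uniform distribution of a fair n-sided die can be sketched in Python (the function name is my own invention):

```python
from fractions import Fraction

def die_distribution(n):
    """Probability of each result on a fair n-sided die."""
    return {face: Fraction(1, n) for face in range(1, n + 1)}

d8 = die_distribution(8)
assert sum(d8.values()) == 1         # something always happens
print(d8[7])                          # 1/8
print(d8.get(-4, Fraction(0)))        # results outside 1..8 have probability 0
```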

Notation

Since writing probability all the time gets boring, I’ll use a shorthand: P( \text{event} ) = p, which means the probability that event happens is p. For example: P(\text{d}8=7 ) = 1/8 and P(\text{d}8=-4 ) = 0.

or, and, not

Some rules for performing calculations with probabilities are in order. First, a definition: Events are independent when knowing something about one of them gives no knowledge about the others. Dice rolls are, as far as this post is concerned, independent: I roll a d12 and get a 1. This tells me nothing about what the next result will be when I roll that d12.

Take two independent events A and B. Now P(A \text{ and } B ) = P(A)P(B). For example: The probability of rolling 1, 2, 3, 4 or 5 (that is: not 6) with a six-sider is 5/6. If we roll two d6s, what is the probability of both of them giving a result less than six? Since separate rolls are independent events, this probability is 5/6 times 5/6, which equals 25/36. This rule applies to any finite number of rolls: as long as they are independent, and means multiplication. The independence is not there for show only: Suppose I roll a single d4. What is the probability of that die giving a result of both 1 and 4 at the same time? Obviously, since a given die only gives one result per roll, the event is impossible and hence has probability zero. Careless use of the “and is multiplication” rule would give 1/4 times 1/4 equals 1/16, which would be wrong.
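The and-rule for independent dice can be verified by brute-force enumeration of all outcomes; a small Python sketch (all names are my own):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of two d6s.
outcomes = list(product(range(1, 7), repeat=2))
both_under_six = [o for o in outcomes if o[0] < 6 and o[1] < 6]
p = Fraction(len(both_under_six), len(outcomes))

# The enumerated probability matches the and-rule product.
assert p == Fraction(5, 6) * Fraction(5, 6) == Fraction(25, 36)
```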

Multiplying fractions, in case it is not clear: Supposing a, b, c and d are real numbers, and b and d are not zero, then \frac{a}{b}\text{ times } \frac{c}{d} = \frac{ac}{bd}.

Take any event. Now P( \text{not event} ) = 1 - P( \text{event} ). This is a direct consequence of something always happening. Example: The probability of rolling a 6 with a d6 is 1/6, from which it follows that the probability of not rolling a 6, which is the probability of rolling something other than 6, is 1 - 1/6 = 5/6.

Now we have the tools for solving a problem with some history: Roll 4d6. Should you bet on rolling at least one 6? The goal here is to determine P( \text{at least one is 6}). Using the law of not, this problem is the same as determining the probability of none of the dice showing 6, which is the same as all of them giving a result from the set \{ 1 , 2 , 3, 4, 5 \}. We already know this probability for a single die: It is 5/6. Since separate rolls are made, the events are independent, and hence by the law of and we can simply multiply 5/6 by itself four times, which means raising it to the fourth power: (5/6)^4 = (5^4)/(6^4) = 625/1296, which is slightly less than half. By the principle of not we get that the probability of getting at least one 6 is slightly more than half, so it is worth betting on. In symbols the calculation goes as follows: P(\text{at least one die gives a six}) = 1-P(\text{none of the dice give a six}) = 1-P(\text{first die is not a six and } \dots \text{ and fourth die is not a six}) = 1- (P(\text{first die is not a six}) \times \dots \times P(\text{fourth die is not a six})) = 1 - 625/1296 = 1296/1296 - 625/1296 = (1296-625)/1296 = 671/1296, which is greater than 648/1296 = 1/2.
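The 4d6 question can be checked by enumerating all 1296 equally likely outcomes; a Python sketch (nothing beyond the standard library, names my own):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=4))   # 6**4 = 1296 outcomes
at_least_one_six = [o for o in outcomes if 6 in o]
p = Fraction(len(at_least_one_six), len(outcomes))

assert p == 1 - Fraction(5, 6) ** 4 == Fraction(671, 1296)
assert p > Fraction(1, 2)    # so the bet is favourable
```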

Take two events A and B. The probability of at least one of them happening, by which I mean P(A or B), equals the sum of their probabilities minus the probability of A and B both happening; otherwise the overlap would be counted twice. So, for any events A and B, P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B). An important special case: A single d12 is rolled. What is P(\text{d}12=7 \text{ or } 9 )? Since rolling a 7 and rolling a 9 are mutually exclusive events, the probability of both happening with a single die roll is 0 (they never happen at the same time). Hence P(\text{d}12=7 \text{ or } 9 ) = P(\text{d}12=7) + P(\text{d}12=9) - 0 = 1/12 + 1/12 = 1/6. Another useful application: Roll 2d6. What is the probability that at least one of them shows a 6? This can be formulated in another way: What is the probability of the first die showing a 6 or the second die showing a 6? Here the events are independent since two dice are cast. Hence, P(\text{at least one 6}) = P(\text{first d}6=6 \text{ or second d}6=6) = P(\text{first d}6=6) + P(\text{second d}6=6) - P(\text{first and second d}6=6) = 1/6 + 1/6 - P(\text{d}6=6)P(\text{d}6=6) = 2/6 - 1/36 = 11/36.
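The or-rule example with 2d6 can likewise be checked by enumeration; a Python sketch with names of my own choosing:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 outcomes of 2d6
first = Fraction(sum(1 for a, b in outcomes if a == 6), 36)
second = Fraction(sum(1 for a, b in outcomes if b == 6), 36)
both = Fraction(sum(1 for a, b in outcomes if a == 6 and b == 6), 36)
either = Fraction(sum(1 for a, b in outcomes if a == 6 or b == 6), 36)

# P(A or B) = P(A) + P(B) - P(A and B)
assert either == first + second - both == Fraction(11, 36)
```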

More to come?

If someone finds this useful, please say so. I do not know how good I am at expository text like this and I really don’t know the skill level of my audience, if any. A topic I might handle in the future, if anyone is interested, is how to calculate the distribution of a sum of two arbitrary distributions.

I managed to land a quite demanding job, so frequent updates are somewhat unlikely, at least for some time. I’ll need to do some adjusting.

Permalink 4 Comments

Go and buy the Open game table

23 March, 2009 at 6:32 pm (linkedin, roleplaying)

You can read about it on the tireless Jonathan Jacobs’ blog (with extra exclamation marks and everything): http://www.thecoremechanic.com/2009/03/open-game-table-now-on-sale.html

There’s even one post of mine that managed to slip past the reviewers into the anthology of blog entries. I also acted as a reviewer and reviewed almost all posts not too tightly coupled with 4e rules. I further have the draft on this very computer and there is much art of significant quality therein. The content is good, too. The strictly 4e material does not overwhelm the useful stuff.

Congratulations, Jonathan, for pulling this off. Let there be many sales.

Permalink 2 Comments

Intensional philosophy

16 September, 2008 at 9:49 pm (philosophy)

This is a pretty significant paradigm shift for me. It may or may not be of personal interest to you.

Of intensionality and extensionality

First, an example from the field of set theory. If I take some set, like the set of people who read this blog, I can mean at least two things with it. The first meaning is the set of people who currently are reading this blog, or are currently readers of this blog in some other sense. I could in theory name them and list them, like so: {Tommi, ksym, Phil, Fred, …}. This is the extension of the set. On the other hand, I can mean any people who are readers now or who might be readers some day or who could potentially be readers. In this case the actual list would be largely irrelevant; the way the set is defined is what matters, not the contents it may or may not have.

A more philosophically interesting example: I have two tables in my room/apartment. Assume, for a moment, some ontological theory that ascribes existence to single material objects only, not classes of them (like “tables”). Now, consider, can I meaningfully refer to the two tables as, well, two tables, as I have been doing here? For a mathematician, the answer is “of course”. If I have two entities, I can certainly define a set to which those two belong (assuming the entities are not built so as to resist this). That is: The extension of the set is what matters; the intension is irrelevant and in this case not even distinct. But a philosopher would think about the intension: By what means can I refer to the two tables, if they are ontologically only arbitrary physical objects? Certainly not as I have been doing, because given the ontology here, “table” is not very meaningful (at least obviously).

So, in closing: In math, intensionality is a means to an end or a red herring; the extensional is what matters. In philosophy, the intensional is the interesting part, and the extensional may or may not matter. Philosophy is about the why of the world, or segments thereof, while math (and physics and other hard sciences) is about the what and how.

As a disclaimer, the parts about philosophy apply most to ontology and metaphysics and ethics by Kant, and maybe to other stuff I am less familiar with. Also: Almost all distinctions are fuzzy around the edges.

What this means to me

This realisation has been fundamental in that now I no longer see significant parts of philosophy as trivial, which is useful for being motivated to actually study it.

Furthermore, I will need to re-evaluate my interest in philosophy. It requires learning a new way to think, which is generally fun and useful. Can I learn to think philosophically? Maybe. Do I want to? Maybe.

In case I am utterly wrong

In case I am misrepresenting large portions of philosophy or mathematics, please do inform me. I will presumably argue against such a claim, which will force me to sharpen my thoughts on the subject, which is useful. Or I may admit to being wrong.

In case you know something about the subject matter

I would be interested in any literature concerning this subject. Very interested. So, if you know any, I would much appreciate you sharing the information.

Permalink 10 Comments

Mathematical proofs

21 April, 2008 at 7:44 pm (mathematics)

In which I shall relate a few ways of building mathematical proofs. Should be useful for several kinds of problem solving.

Basic structure

The basic structure of mathematical problem solving is simple: There are certain assumptions (or axioms or definitions) and a certain desired outcome. The assumptions are used to get the desired outcome.

For example (disregard if not interested; this is a toy example, and the substance continues after it), Bolzano’s theorem states that if there is a continuous function f such that f(a) < 0 and f(b) > 0 (f is a mapping from the real line to the real line, a < b), then there exists c between a and b such that f(c) = 0.

So, assume that one has a continuous function, such as g(x) = x^2, that has g(0) = 0 and g(2) = 4. Can Bolzano’s theorem be used to prove that there exists z between 0 and 2 such that g(z) = 1? As Bolzano’s theorem only says that if the function gets positive and negative values, it also gets zero, it can’t be directly applied. The trick is to define h(x) = g(x) - 1. Now h(0) = -1, h(2) = 3, and hence there is z between 0 and 2 such that h(z) = 0, and hence g(z) = 1. Tacitly I assumed that a continuous function minus a constant is still continuous, which would also have to be proven.
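The shifting trick pairs naturally with the bisection method, which is essentially Bolzano's theorem made algorithmic. A minimal Python sketch (the function names are my own; an illustration, not part of the original argument):

```python
def bisect_root(h, a, b, tol=1e-9):
    """Find c in [a, b] with h(c) close to 0, given h(a) < 0 < h(b) (Bolzano)."""
    while b - a > tol:
        mid = (a + b) / 2
        if h(mid) < 0:
            a = mid       # root lies in the right half
        else:
            b = mid       # root lies in the left half
    return (a + b) / 2

g = lambda x: x ** 2
h = lambda x: g(x) - 1    # shift so the target value 1 becomes a zero
z = bisect_root(h, 0.0, 2.0)
print(z)                  # approximately 1.0, so g(z) is approximately 1
```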

Indirect proof a.k.a. proof by contradiction

Also known as reductio ad absurdum, the idea of an indirect proof is that one assumes that what is being proven is actually false and from that follows a contradiction. The point of indirect proofs is that they give a free assumption to play with, essentially. That is: If one assumes A and B and must prove C, an indirect proof would mean assuming A, B, and “not C”, and find any contradiction.

It is easy to make a mistake in formulating the antithesis (“not C”): Take the definition of a continuous function, which says that for all x and for all positive e there exists a positive d so that if the absolute value of z-x is less than d, then the absolute value of f(z)-f(x) is less than e. If a function is not continuous, it means that there exists an x and a positive e such that for all positive d there exists a z with |z-x| < d but |f(z)-f(x)| at least as large as e. This is a fairly simple concept (a continuous function on the real line), which just happens to look scary.
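Written out in quantifier notation (a sketch; the symbols follow the standard epsilon-delta convention rather than anything specific to this post), the definition and its negation read:

```latex
% f is continuous (at every x):
\forall x \;\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall z :\;
  |z - x| < \delta \implies |f(z) - f(x)| < \varepsilon

% f is not continuous (negate, flipping each quantifier):
\exists x \;\exists \varepsilon > 0 \;\forall \delta > 0 \;\exists z :\;
  |z - x| < \delta \;\text{ and }\; |f(z) - f(x)| \ge \varepsilon
```

The negation moves through the quantifiers, flipping each one, and turns the implication into a conjunction.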

An indirect proof can use any other tool in the box; it gives a free assumption and is never actually harmful, though often useless.

Constructive proof

A proof can be said to be constructive when there is an item that must be shown to exist and this is done by actually constructing the item in question. Constructive proofs are often cumbersome (if rigorous) and longer than unconstructive ones. The reason for appreciating constructive proofs is that they are concrete: Something is actually built. This makes understanding the proof a lot easier.

For example, a function is (Riemann-)integrable if one can put rectangles below and above it such that as leaner rectangles are used, the approximations grow arbitrarily accurate. The problem is that if the function is not a very simple one, building the rectangles is difficult. Hence people instead learn a bunch of rules and tables so that they don’t need to and can instead do easy calculus. Or, in case of people who study physics, handwave it all away with an infinite number of infinitesimals. (One can treat them rigorously, but…)

One can also name unconstructive proofs, if one wants to. For example, everything that uses the axiom of choice is unconstructive (I know of no exceptions and have a hard time figuring out how to create one, but someone will hopefully come and tell me I am wrong). Some hardcore mathematicians only accept constructive proofs; they are consequently known as constructivists, and are a rare breed. The scope of what they can prove is greatly limited.

Proof by exhaustion

Proof by exhaustion means dealing with every special case in order, one by one. Proof by exhaustion is often long, ponderous, boring, and avoided at all costs by most mathematicians. The most famous example is the four colour theorem, which essentially says that given any map, one can colour the nations so that nations sharing a border get different colours, using only four colours in the process (the nations must actually have a small bit of common border; a single point is not sufficient). It was proven by a computer that essentially went through all the interesting situations. The five- and six-colour theorems can be proven in the conventional way with relative ease.

Mathematical induction

Normal induction works as follows: The sun has risen every morning that I remember. Hence, it will probably rise tomorrow morning, too. Pretty sensible, though it often goes wrong in nasty ways (every black-skinned person I have met thus far…).

Mathematical induction is used to prove things that apply to all natural numbers (1, 2, 3, …; 0 may or may not be included), or to anything that can be numbered by them, such as the propositions of propositional logic.

For example, the sum 1 + 2 + 3 + … + n = (n+1)*n/2. (E.g. 1 + 2 + 3 + 4 + 5 = 6*5/2 = 3*5 = 15.)

The first step is to check that the equation works when n = 1. This is often trivial. Particularly: 1 = 2*1/2 is indeed true.

The second step is to assume that the equation works for some natural number k, which is not specified: 1 + 2 + … + k = k*(k+1)/2. This step is not particularly strenuous; one simply assumes the equation holds for n = k.

The third step, which is the actual substance of the proof, is to see that the equation then holds for n = k+1. In this specific example, it must be shown that 1 + 2 + … + k + (k+1) = (k+1)*(k+2)/2. The assumption made in step two will be useful here: (k+1)*(k+2)/2 = (k+1)*k/2 + (k+1)*2/2. By the assumption in the second step the first term equals 1 + 2 + … + k; that is, the formula looks like 1 + 2 + 3 + … + k + (k+1)*2/2, where the last term is simply k+1. Hence, 1 + … + k + (k+1) = (k+1)*(k+2)/2.

The first step established that the equation holds for n = 1. The second and third established that if it works for n = k, it also holds for n = k+1. That is: Because it works in the first case, it must also work in the second case, and hence also in the third case, and so forth. Hence it works for all n, as it well should.
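A closed form like this can also be sanity-checked by brute force for small cases; a Python sketch (this checking is not a proof, only the induction covers all n):

```python
# Check the closed form 1 + 2 + ... + n = n*(n+1)/2 for n up to 100.
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("closed form agrees for n = 1..100")
```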

Proof by handwaving

The most powerful tool wielded in a seminar or lecture, proof by handwaving involves drawing pretty pictures, writing down some equations, and appealing to the common sense of whoever is subjected to the explanation and the vigorous hand movements. Phrases such as “Clearly”, “One can easily show that”, “It can be proven that”, “Gauss has proven that”, “this is left as an exercise for the reader/student” are often used to great effect. Proof by handwaving can even be used to prove false statements, which makes it the strongest method catalogued herein, even if not the most rigorous.

For real life examples, see such achievements in accurate film-craft as everything by Michael Moore and the “documentaries” titled “The great global warming swindle” and “Expelled! No intelligence allowed” (or something to that effect). Even if they are correct in some parts, such facts are established by vigorous handwaving and propaganda, and hence can’t really be trusted. (Disclaimer: I have seen part of the global warming propaganda and a film or two by Moore.)

Permalink Leave a Comment

Some probability theory

4 April, 2008 at 9:56 pm (dice, mathematics)

I’ll explain what I see as the point behind some elementary concepts of probability theory. The primary source is Introduction to Probability Theory by Christel and Stefan Geiss.

The basis

The idea behind probability theory is to treat uncertainty with mathematical rigour. The first necessary component is a (non-empty) set of events; for example, if rolling a d4, this would be {1, 2, 3, 4}. When arriving at traffic lights, {red, yellow, green} is a passable set of events (at least as the traffic lights are around here). The lifetime of an electronic device could have the set {1, 2, 3, 4, …}; that is, the set of natural numbers, which indicates how many days the device functions. If there is news on the radio every half an hour, the time one has to wait for the next news after turning on the radio gives the real interval between 0 and 30 minutes; in math, [0, 30[ (0 is included, 30 is not).

Sigma-algebra

A sigma-algebra is defined by a certain set of properties, which I’ll list a bit later. The idea behind a sigma-algebra is to list the collections of events one might want to measure the probability of.

Taking the d4 as an example, the largest possible sigma-algebra is the set of subsets, or power set, of {1, 2, 3, 4}, which means the set that contains {} (the empty set), {1}, {2}, {3}, {4}, {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4} and {1, 2, 3, 4}, for a total of 16 sets (16 = 2^4, which is not a coincidence, and it is also the reason for using a d4; a d6 would have involved 64 subsets).
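The power set is easy to generate programmatically; a Python sketch using `itertools` (the function name is my own):

```python
from itertools import combinations

def power_set(events):
    """All subsets of a finite event set: the largest sigma-algebra on it."""
    s = list(events)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

subsets = power_set({1, 2, 3, 4})
assert len(subsets) == 2 ** 4   # 16 subsets, as counted above
```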

What if one is only interested in the end result being even or odd? The following sets also form a sigma-algebra: {}, {1, 3}, {2, 4}, {1, 2, 3, 4}.

The properties of a sigma-algebra

Let E be a non-empty set. A sigma-algebra on E, here written sigma(E), always contains the empty set and E. The idea is that the probability of nothing happening and that of something happening are always known. In addition, if any subset A of E is part of sigma(E), the complement of A (negation of A) is also part of sigma(E). The idea is that if the probability of A is measurable, the probability of “not A” must also be measurable. Further, if any (finite or countable) group of subsets of E is part of sigma(E), so is their union, which means that for any group of measurable events, one can measure the chance of at least one of them happening.

From these follow a lot of things; see the PDF for more detail on the process and results.

Probability measure

A probability measure intuitively assigns a weight to every set that lives in the sigma-algebra. To take the d4 again, the weight assigned to every outcome is 1/4 (0.25 or 0,25 or 25% for those who fear fractions) if the die is fair (and is rolled fairly). If the die is weighted, the probability of {4} could be 1/2 (50% aka 0,5 aka 0.5), while that of every other number could be 1/6 (about 17% or 0,17 or 0.17).

The rules that probability measures must conform to in order to be called probability measures are as follows: The probability that something happens is 1, which is written P(E) = 1. If A and B are disjoint subsets of E (they are entirely distinct; for example, even and odd numbers, or the number 1 and cats), the probability that something out of A or B happens equals the probability that something out of A happens plus the probability that something out of B happens. In symbols, P(A or B) = P(A) + P(B) for disjoint A, B. This applies to all countable groups of subsets. The third rule is that the probability that something out of A does not happen equals one minus the probability that something out of A does happen, which is equivalent to P(not A) = 1 – P(A). It follows that the probability of nothing happening is zero.
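These rules can be checked mechanically for a finite measure; a Python sketch using the loaded d4 from above (all names are my own):

```python
from fractions import Fraction

E = {1, 2, 3, 4}
# The loaded d4: 4 comes up half the time, the other faces 1/6 each.
P = {1: Fraction(1, 6), 2: Fraction(1, 6), 3: Fraction(1, 6), 4: Fraction(1, 2)}

def prob(A):
    """Probability of an event A, summing the weights of its outcomes."""
    return sum(P[x] for x in A)

assert prob(E) == 1                       # P(E) = 1
A, B = {1, 3}, {2}                        # disjoint subsets of E
assert prob(A | B) == prob(A) + prob(B)   # additivity for disjoint sets
assert prob(E - A) == 1 - prob(A)         # P(not A) = 1 - P(A)
```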

The connection to sigma-algebra

In addition to every probability measure requiring a sigma-algebra to even be defined, there is another connection between the rules that define them. Every sigma-algebra of E always includes the empty set and E; likewise, the probabilities of both of these are always defined. Likewise, if A is part of sigma(E), “not A” also lives there. Contrast this with the fact that if the probability of A is known, so is that of “not A”. The final part of the connection is that summing up probabilities and taking a union of subsets work in a similar way (there is an easy way of making any countable group of sets disjoint: take the first set as is, take the second but remove any overlap with the first, take the third and remove any parts that overlap with the first or the second, and so forth).
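The disjointification trick in the parenthesis can be written out as a short Python sketch (the function name is mine):

```python
def make_disjoint(sets):
    """Replace A1, A2, ... by disjoint sets with the same union."""
    seen = set()
    result = []
    for A in sets:
        result.append(A - seen)   # drop any overlap with earlier sets
        seen |= A
    return result

parts = make_disjoint([{1, 2}, {2, 3}, {3, 4}])
assert parts == [{1, 2}, {3}, {4}]        # pairwise disjoint now
assert set().union(*parts) == {1, 2, 3, 4}  # the union is unchanged
```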

The existence of this connection is obvious; there is no sense in building the definitions in a way that does not produce these connections. Still, they are useful guidelines for remembering the definitions, since it is sufficient to only remember one of them and the other can be deduced from it.

Elaboration on d4

I won’t build any more theory, since it is well presented in the lecture notes and this is long enough as is. Go read those if you are really interested and already know some mathematics. The notation that follows is occasionally a bit clumsy, but there are reasons for it. Anything in curly brackets indicates a set.

The measurable space ({1, 2, 3, 4}, {{}, {1, 3}, {2, 4}, {1, 2, 3, 4}}) can be used to determine the probabilities of getting an even or odd number with a d4. First, assuming a fair die, the relevant probability measure is defined by P({1, 3}) = 1/2 (it follows that P({2, 4}) = 1/2). The probability of rolling, for example, a 3 is not well-defined, because {3} is not part of the sigma-algebra in use. One can think of this as a friend telling you that the result was even or odd, but not what the exact number rolled was. Using the loaded die introduced earlier, the relevant probability measure would be characterised by P({1, 3}) = 2/6 = 1/3, from which it follows that P({2, 4}) = 4/6 = 2/3.
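The even/odd example can be written down directly; a Python sketch (the representations, frozensets as measurable events, are my own choice):

```python
from fractions import Fraction

# The coarse sigma-algebra on a d4: only "even" and "odd" are measurable.
sigma = [frozenset(), frozenset({1, 3}), frozenset({2, 4}),
         frozenset({1, 2, 3, 4})]

fair = {frozenset({1, 3}): Fraction(1, 2),
        frozenset({2, 4}): Fraction(1, 2)}
loaded = {frozenset({1, 3}): Fraction(1, 3),
          frozenset({2, 4}): Fraction(2, 3)}

assert fair[frozenset({1, 3})] + fair[frozenset({2, 4})] == 1
assert loaded[frozenset({1, 3})] + loaded[frozenset({2, 4})] == 1
assert frozenset({3}) not in sigma   # P({3}) is not defined here
```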

Note that with the measurable space given one could as well flip a coin; it would have two options, though they would be heads and tails, not numbers, but they could be mapped to the real line to give numeric results.

Permalink 3 Comments

Teaching is like game mastering

25 February, 2008 at 10:45 am (game mastering, mathematics, rpg theory)

I spent this weekend prepping a person for the mathematics part of the matriculation examination. The student was a pretty smart guy, but he did not have much routine in twisting numbers and letters around.

I could not help noticing some parallels between teaching and running games.

The matter of preparation

When running a game, I have some possible events that I can confront the player characters with. Extensive planning is just work and encourages railroading (when done by me). I also learn the system well enough so that I don’t need to check the books that often, if at all.

When teaching, I had some ideas about what subjects to handle and possibly some specific problems to solve or tricks to show. I did not build a script for the teaching session. I did go through the MAOL formula book (it contains most of the formulae used in the matriculation examination and can even be used in said examination). I still have an excellent handle on almost all of the material, differential equations excluded. Maybe I’ll ask Thalin to give me a quick summary, or alternatively check out some literature. Anyway: I familiarised myself with the “rules”. The parallels are clear, I hope.

The flow of the session

When running a game, I usually have prepared enough to get the thing going and then follow up with improvisation that springs from player choices and the dice. This leads to quite dynamic gameplay, but does have some drawbacks, too. One relevant strength is that I can change the direction or emphasis of the game based on player input, verbal or nonverbal. This would work a lot better if I actually noticed the nonverbal cues, as opposed to what I do now.

When teaching, I had something to start with (nested functions, understanding derivatives as the rate of change or slope of a graph). I explained things from different angles until Samuli, the student, understood what I was talking about. If a difficult problem came up, I constructed a series of problems: it started very simple and basic, every problem was a bit more complicated than the previous one, and the difficult situation was the culmination of the series. The longest series was probably three or four problems, so nothing terribly elaborate. Likewise, if something was easy, we could skip past it and get on to the harder stuff.

The skills required

Roleplaying and game mastering are both skills. They can be learned, improved, or get rusty. Ditto with teaching.

I think the following skills are all central to both running a game and teaching, if interpreted sufficiently broadly:

  • Gauging interest: Are people yawning or eager and focused?
  • Building suitable obstacles/problems that are not trivial, yet are not too difficult.
  • Clear communication: Explaining/describing things so that shared understanding of the subject matter/fiction is built. See anchors by Bruce (Tumac).
  • Leaving room for the student/players: Teacher/GM knows more about the problem/obstacle than the student/players does/do. Yet, the student is the one who should solve the problem, and players the ones who deal with an obstacle. No use setting up a problem or obstacle and then solving it by yourself or having a GMPC solve all problems.
  • Judgement: When a solution or mode of action is suggested, the teacher/GM is the one who judges how well it works. Sometimes a simple mistake is made, sometimes the solution is flawless. The GM does have the luxury of letting dice arbitrate some events, but even then the difficulty or modifiers of those rolls are up to the GM. (There are some games where this is not true. E.g. Primetime adventures, Beast hunters, Agon. This all IIRC; I have never played any of those.)
  • Quick thinking. This one is obvious and general enough to not be worth extensive commentary.

Other styles

I had the luxury of only teaching a single person. This is good. It is also very exhausting. I’d say that teaching a group demands a slower pace and is probably more conducive to preparation. Reasoning: if everyone must understand a subject, more angles of presentation are useful. It is often hard to improvise multiple ways of doing the same thing (at least for me it is). Hence, preparing them ahead of time is something between viable and necessary.

Does this apply to roleplaying? Does a larger number of players imply easier or more useful preparation?

In my experience (I have never run a game with many players; four is probably the upper limit), solo games move fast. It takes huge amounts of prep not to have to improvise in a solo game. When there are many players, OOG chatter is more prominent, the characters interact with each other, and generally everything takes more time. This means that less content is used in the same time, and hence preparation is more feasible.

On the other hand, multiple player characters mean more complex plans can be formulated and practically achieved. I’d say that the time such planning takes means that adjudicating it is not a huge burden, as opposed to the rapid-fire approach a single player is likely to take.

Also, when there are several PCs, the material can be more generic, because nobody assumes that every moment of the game is relevant to every character in a very personal and intimate way. I hope.

Conclusions and a warning

There are clear parallels between teaching and game mastering. The different styles, prep-wise, are quite similar. (Sandbox play would be roughly equivalent to having a huge menu of problems for the student to choose among; there is a similarity between a textbook and a published setting.)

The warning part: don’t use roleplaying to teach a lesson. Like, “greed is teh evilness”: punish every character that does a greedy thing, reward every generous action. Players will see it and resent it. Instead, be open about such an undertone and add it as a setting or rules premise, like a setting where the generous are held in esteem and greed is seen as a sign of possession by evil spirits. Let players challenge the notions and don’t fiat a conclusion you like. Provide a playground, not a brainwashing session.

This applies to teaching ethics, too. Teach something and people will resist it just because. Give something that people can interact with and they will form their own opinions about it and actually learn something.

Disclaimer

I have no training in pedagogy. Take my opinions with a grain of salt. They are opinions, not facts.


Defining omnipotence

27 November, 2007 at 6:49 pm (definition, philosophy) (, , )

This is a sufficient definition and from human POV; that is, if something like what is described existed, we would call it omnipotent.

Let U be a closed universe, or something very near closed. Closed means that the things inside it can’t get to or sense the outside, and hence are unlikely to know anything about it. Let G be an undefined entity (you can read it as God if you really want to).

G is omnipotent with regards to U if G can shape U into whatever native form it could encompass. So, for example, in our universe an omnipotent G could create and remove physical objects at will, but it would not be necessary for G to be able to create things fundamentally beyond our understanding (I have a few problems with trying to create examples for certain reasons), because they are not part of our universe as is.

From this basis, a theorem: G must be outside the conception of time (or entropy or another measure of change, with apologies to everyone who knows physics for probably misusing “time” and “entropy”) that exists in U.

If this were not the case, G could first (within the dominant measure of change) create an indestructible wall, then create a cannonball that destroys everything it touches, and third make them touch, which results in an impossible outcome, and hence cannot be true. This does not happen when G is outside the conception of change as it applies in U: then G would both create something and cancel it at the same time, which amounts to not creating the thing to start with. This leads to no paradox, because G didn’t actually create one of the conflicting absolutes after all.

The definition also assumes G is omniscient, but when talking about omnipotence, that is kinda trivial.


A diversion: On square circles

25 November, 2007 at 9:18 am (mathematics) (, , , )

This is a semiformal proof. A formal one would be hard to understand without pictures and would require me checking out the English translations of a number of words, which I am not inclined to do.

A square is defined as an ordered set of four points, no three of which are on the same line, such that the angles thus created are all right angles (there is a bit more to it, but it is not too relevant). A circle is defined as the set of points at a given distance (the radius) from a given point. Because a square does not exist in hyperbolic geometry, it is sufficient to consider Euclidean geometry.

Assume that there is a square circle. Let A and B be two points of the circle that are part of the same edge (i.e. that edge does not go through the circle’s center). Let C be a point between A and B. The distance from the center of the circle to C is less than the distance to A or B, so it is also less than the radius. According to one axiom I am a bit too lazy to check, it is possible to find a point D such that D is behind C when observed from the circle’s center and D’s distance from the center equals the circle’s radius. Hence D is part of the circle. But D is not any of the original points that defined the square: if it were one of the unnamed ones, the definition of the square would get messy, and if it were A or B, then C would also be A or B, which contradicts the choice of C. Hence, there are no square circles. Quod erat demonstrandum.
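The key step can be checked numerically (my own illustration, not part of the proof): for a square centered at the origin, the midpoint of an edge is strictly closer to the center than the corners are, so the corners and the edge midpoint cannot all lie on one circle around that center.

```python
# Numeric sanity check: in the unit square with corners at (+-1, +-1),
# the midpoint C of an edge is closer to the center than the corners A, B.
import math

center = (0, 0)
a, b = (1, 1), (1, -1)          # two corners on the same edge
c = (1, 0)                      # midpoint C between A and B

r_corner = math.dist(center, a)  # sqrt(2), the would-be radius
r_mid = math.dist(center, c)     # 1, strictly smaller

print(r_mid < r_corner)  # True: C lies inside any circle through the corners
```

This matches the proof: since C is closer to the center than the radius, the point D of the circle lying behind C is not a point of the square.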

This diversion due to another discussion.

