This is just a thought I was toying with the other day, based on Joshua Foer's mnemonic ideas in Moonwalking with Einstein.
One of the reasons we don't cope well with maths seems to be the level of abstraction - a string of seemingly random digits like 3.14159265 is basically meaningless.
How would engaging in mathematical thought feel different if we learned it using the Roman alphabet again - not arbitrarily, as with I = 1, V = 5 etc., but simply assigning each letter a value based on its position in the alphabet, as we often do for other purposes?
Potentially we could stop at I = 9 and keep base 10, but we might prefer a base-26 or base-27 system to exploit our full writing system (the Babylonian cuneiform number system was base 60, so base 10 doesn't seem to be a hard human limit).
Now every number would automatically have a sound associated with it, albeit often an obscure one. Occasionally it would even entail or contain a word or phrase, or something expressive enough to evoke one. Pi (if I'm translating correctly (ETA: which I'm fairly sure I'm not, but you get the idea), and depending on where or whether 0 sat in the alphabet - it could instead be something evocative but perhaps less confusing than A = 0, like a question mark or semicolon) would be C.NOIZE - 'sea noise' is far easier for an Anglophone to remember than the Arabic-numeral gibberish above (and in fact, I think I've accidentally just memorised an extra three digits of pi).
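To make the translation concrete, here's a minimal Python sketch of the conversion, assuming the simplest zero convention (A = 0, B = 1, ..., Z = 25) - which is only one of the options mooted above, so the letters it produces won't necessarily match my example:

```python
import math
import string

def to_letter_base(x, digits=string.ascii_uppercase, frac_places=10):
    """Render a non-negative number using an arbitrary string of digit symbols.

    With the default digits (A-Z) this is plain base 26 with A = 0 ... Z = 25,
    which is just one of the zero conventions discussed above.
    """
    base = len(digits)
    integer, frac = int(x), x - int(x)

    # Integer part: repeated division by the base.
    int_part = ""
    while True:
        integer, d = divmod(integer, base)
        int_part = digits[d] + int_part
        if integer == 0:
            break

    # Fractional part: repeated multiplication by the base.
    frac_part = ""
    for _ in range(frac_places):
        frac *= base
        d = int(frac)
        frac_part += digits[d]
        frac -= d

    return f"{int_part}.{frac_part}"

print(to_letter_base(math.pi))  # prints D.DRSQLOLYRT under this A = 0 convention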
Alternatively, we might empirically investigate human capacities to deal with different numeric bases (between 2 and 26, at least), and then assign letters strategically: if we settled on base 16 (for the sake of argument), we might deliberately include all five vowels but exclude pairs that can sound alike, such as C and K, rather than simply taking A to P.
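A hypothetical digit set along those lines (purely illustrative - the whole point would be to pick it empirically) can be dropped straight into the sketch above:

```python
# A purely illustrative base-16 digit set: all five vowels kept, K dropped in
# favour of C. Reuses to_letter_base (and math) from the previous sketch.
STRATEGIC_DIGITS = "ABCDEFGHILMNOPSU"

print(to_letter_base(math.pi, STRATEGIC_DIGITS))
```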
I realise that maths isn't really about remembering number strings, but it seems like they'd be much easier to mentally manipulate if they just meant more to you in the first place.
Drawbacks (of either base 26 or a more strategic approach)? Maybe the loss of our current algebraic notation, since letters like x and y would now denote numbers - but variables can't help being abstractions anyway, so inventing or adopting new symbols for them wouldn't be too difficult.
Also, in writing that included numbers it would be slightly harder to tell letters and numbers apart (especially if you also had acronyms), but again I'd imagine we could work around that easily enough - perhaps by a convention like italicising capitals (as above), or by developing a set of characters that looked similar enough to the letters to still be evocative, but distinct enough that you wouldn't get confused (like the dakuten marks Japanese script uses to distinguish e.g. 'ka' from 'ga').
Another issue is that homophones (and word associations) might confuse: 'C.NOIZE' would be easy to mix up with 'C.NOISE', for example. But surely that would still be a huge improvement over trying to remember whether the digits were 3.14159265 or 3.14159195, and often a quick sanity check would resolve such ambiguity. Obviously a strategic selection of which characters to include, if we were to use a base below 26, would help avoid this.
More serious problems seem likely to be subtler - perhaps cognitive biases introduced by numbers that look *almost* like the word you want them to be. But I don't know how we'd predict such a consequence, and we have quite a bit to gain by trying.