Tag Archives: mathematics


The worst poem ever

How does it feel to write a story and then, just like that, have everyone read it as well as be interested in reading it?

How would it feel to not have to hope quasi-desperately that a story does well after having spent hours – if not days – on it?

How would it feel to not slog and slog, telling yourself that you just need to be proud of covering a beat few others have chosen to?

“Good journalism can only emerge from being a good citizen” – but is there a way to tell what kind of citizenship is valuable and what kind not?

Of course, I’m also asking myself questions about why it is that I chose to be a journalist and then a science journalist.

The first one doesn’t have a short answer and it’s probably also too personal to be discussing on my blog. So let’s leave that for another day, or another forum.

Why science journalist? Because it’s like Kip Thorne has said: it was the pleasure of doing “something in which there was less competition and more opportunity to do something unique.”

When I tell people I’m a science journalist, a common response goes like this: “I’ve distanced myself from science and math since school”. And it goes with a smile. I smile, too.

Except I'm not amused. This mental block that many people have, I've found, is the Indian science journalist's greatest enemy – at least it's mine.

What makes it so great is that, to most people, it’s a class- and era-specific ‘survival skill’ they’ve adopted that has likely made their lives more enjoyable.

And we all know how hard it is to give fucks about the wonders that unknown unknowns can hold. It's impossible almost by definition.

Then there are also all the fucks we're expected to give to the human condition.

But Ed Yong’s tweet I will never forget, though I do wish I’d faved it: there’s so much more to science than what applies to being human.

Of course, there’s the other, much simpler reason I’m thinking all this, and so likelier to be true: I’m just a lousy science journalist, writing the worst poem ever.

Featured image credit: Pixel-mixer/pixabay.

 

The literature of metaphysics (or, 'Losing your marbles')

For a while now, I’ve been intent on explaining stuff from particle physics.

A lot of it is intuitive if you go beyond the mathematics and are ready to look at packets of energy as extremely small marbles. And then, you’ll find out some marbles have some charge, some the opposite charge, and some have no charge at all, and so forth. And then, it’s just a matter of time before you figure out how these properties work with each other (“Like charges repel, unlike charges attract”, etc).

These things are easy to explain. In fact, they're relatively easy to demonstrate, too, and that's why there aren't a lot of people out there who want to read and understand this kind of stuff. They already get it.

Where particle physics gets really messed up is in the math. Why the math, you might ask, and I wouldn’t say that’s a good question. Given how particle physics is studied experimentally – by smashing together those little marbles at almost the speed of light and then furtively looking for exotic fallout from the resulting debris – math is necessary to explain a lot of what happens the way it does.

This is because the marbles, a.k.a. the particles, also differ in ways that cannot be physically perceived in many circumstances but whose consequences are physical enough. These unobservable differences are pretty neatly encapsulated by mathematics.

It’s like a magician’s sleight of hand. He’ll stick a coin into a pocket in his pants and then pull the same coin out from his mouth. If you’re sitting right there, you’re going to wonder “How did he do that?!” Until you figure it out, it’s magic to you.

Theoretical particle physics, which deals with a lot of particulate math, is like that. Weird particles are going to show up in the experiments. The experimental physicists are going to be at a loss to explain why. The theoretician, in the meantime, is going to work out how the “observable” coin that went into the pocket came out of the mouth.

The math just makes this process easy because it helps put down on paper information about something that may or may not exist. And if it really doesn't exist, then the math's going to come up awry.

Math is good… if you get it. There’s definitely going to be a problem learning math the way it’s generally taught in schools: as a subject. We’re brought up to study math, not really to use it to solve problems. There’s not much to study once you go beyond the basic laws, some set theory, geometry, and the fundamentals of calculus. After that, math becomes a tool and a very powerful one at that.

Math becomes a globally recognised way to put down the most abstract of your thoughts, fiddle around with them, see if they make sense logically, and then “learn” them back into your mind whence they came. When you can use math like this, you’ll be ready to tackle complex equations, too, because you’ll know they’re not complex at all. They’re just somebody else’s thoughts in this alpha-numerical language that’s being reinvented continuously.

Consider, for instance, the quantum chromodynamic (QCD) factorisation theorem from theoretical particle physics:
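In one common textbook form (the notation varies from reference to reference, so treat this as a representative sketch rather than the exact expression), it reads:

$$F_2(x,\mu) \;=\; \sum_i \int_x^1 \frac{d\xi}{\xi}\, f_i(\xi,\mu)\, \hat{\sigma}_i\!\left(\frac{x}{\xi},\mu\right)$$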

This hulking beast of an equation implies that *deep breath*, at a given scale (µ) and a value of the Bjorken scaling variable (x), the nucleonic structure function is given by the overlap between the function describing the probability of finding a parton inside a nucleon (f(x, µ)) and the summa (Σ) of all functions describing the probabilities of all partons within the nucleon *phew*.

In other words, it only describes how a fast incoming particle collides with a target particle based on how probable certain outcomes are!

The way I see it, math is the literature of metaphysics.

For instance, when we're tackling particle physics and the many unobservables that come with it, there's going to be a lot of creativity, imagination and thinking involved. There's no way we'd have had as much order as we do in the "zoo of particles" today without some ingenious ideas from some great physicists – or, the way I see it, great philosophers.

For instance, the American philosopher Murray Gell-Mann and the Israeli philosopher Yuval Ne'eman independently observed in the 1960s that their peers were overlooking an inherent symmetry among particles. Gell-Mann's solution, called the Eightfold Way, demonstrated how different kinds of mesons, a type of particle, were related to each other in simple ways if you laid them out in a hexagon.

Gell-Mann and Ne'eman did away with a complex mechanism of interaction and substituted it with one that brought simpler relationships to light, all through a little bit of creativity and some geometry. The meson octet is well-known today because it revealed a natural symmetry in the universe. Looking at the hexagon, we can see it's symmetrical across the three diagonals that connect directly opposite vertices.

The study of these symmetries, and of the physics that could lie behind them, gave birth to the quark model and won Gell-Mann the 1969 Nobel Prize in physics.

What we perceive as philosophy, mathematics and science today were simply all subsumed under natural philosophy earlier. Before the advent of instruments with which to interact with the world, it was easier, and much more logical, for humans to observe what was happening around them and find patterns. This involved the use of our senses, and this school of philosophy is called empiricism.

At the time, as it is today, the best way to tell if one process was related to another was by finding common patterns. As more natural phenomena were observed and more patterns came to light, classifications became more organised. As they grew in size and variations, too, something had to be done for philosophers to communicate their observations easily.

And so, numbers and shapes were used first – they’re the simplest level of abstraction; let’s call it “0”. Then, where they knew numbers were involved but not what their values were, variables were brought in: “1”. When many variables were involved, and some relationships between variables came to light, equations were used: “2”. When a group of equations was observed to be able to explain many different phenomena, they became classifiable into fields: “3”. When a larger field could be broken down into smaller, simpler ones, derivatives were born: “4”. When a lot of smaller fields could be grouped in such a way that they could work together, we got systems: “5”. And so on…

Today, we know that there are multitudes of systems – an ecosystem of systems! The construction of a building is a system, the working of a telescope is a system, the breaking of a chair is a system, and the constipation of bowels is a system. All of them are governed by a unifying natural philosophy, what we facilely know today as the laws of nature.

Because of the immense diversification born as a result of centuries of study along the same principles, different philosophers like to focus on different systems so that, in one lifetime, they can learn it, then work with it, and then use it to craft contributions. This trend of specialising gave birth to mathematicians, physicists, chemists, engineers, etc.*

But the logical framework we use to think about our chosen field, the set of tools we use to communicate our thoughts to others within and without the field, is one: mathematics. And as the body of all that thought-literature expands, we get different mathematical tools to work with.

Seen this way – and I do see it this way – I'm not reluctant to use equations in what I write. There is no surer way than math to explain what someone was really thinking when they came up with something. Looking at an equation, you can tell which fields it addresses, and by extension "where the author is coming from".

Unfortunately, the more popular perception of equations is way uglier, leading many a reader to simply shut the browser-tab if it’s thrown up an equation as part of an answer. Didn’t Hawking, after all, famously conclude that each equation in a book halved the book’s sales?

That belief has to change, and I’m going to do my bit one equation at a time… It could take a while.

(*Here, an instigatory statement by philosopher Paul Feyerabend comes to mind:

The withdrawal of philosophy into a “professional” shell of its own has had disastrous consequences. The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrodinger, Boltzmann, Mach and so on. But they are uncivilized savages, they lack in philosophical depth — and this is the fault of the very same idea of professionalism which you are now defending.”)

(This blog post first appeared at The Copernican on December 27, 2013.)

“God is a mathematician.”

The more advanced the topics I deal with in physics, the starker I find the divergence between philosophy and mathematics to be. While one seems to drill right down to the bedrock of all things existential, the other assumes disturbingly abstract overtones, often requiring multiple interpretations to seem to possess any semblance of meaningfulness.

This is where the strength of the mind is tested: an ability to make sense of fundamental concepts in various contexts and to recall all of them at will so that complex associations don’t remain complex but instead break down under the gaze of the mind’s eye to numerous simple associations.

Computation theory would have us hold that a reasonable measure of any computing mechanism's strength is the number of calculations it can perform per second. When it comes to high-energy physics, though, the strength lies in the quickness with which new associations are established where old ones existed. In other words, where unlearning is just as important as learning, we require adaptation and readjustment more than faster calculation.

In fact, the mathematics is such: at the fringe, unstable, flitting between virtuality and a reality that may or may not be this one.

One could contend that the definition of mathematics in its simplest form – number theory, fundamental theories of algebra, etc. – is antithetical to the kind of universe we seem to be unraveling. If we consider the example of physics, and the divergence of philosophy from theoretical physics, then my argument is unfortunately true.

However, at the same time, it seems to be outside the reach of human intelligence to conceive a new mathematical system that becomes simpler as we move closer to the truth and is ridiculously more complex as one strays from it toward simpler logic – not to mention outside the reach of reasoning! How would we then educate our children?

However, it is still unfortunate that only “greater” minds can comprehend the nature of the truth – what it comprises, what it necessitates, what it subsumes.

With this in mind, we also face the risk of submitting to broader and broader terms of explanation in order to make things simpler and simpler; we throw important aspects of the nature of reality out of our textbooks because people may not understand them, or may be disturbed by such clarity, with the result that the search somehow seems less relevant to daily life. That is an outcome we must keep any activity in the name of, and for the sake of, science from precipitating.

On Monday, I attended a short lecture by the eminent Indian particle physicist Dr. G. Rajasekaran, or Rajaji as he is referred to by his colleagues, on the Standard Model of high-energy physics and its future in the context of the CERN announcement of July 4, 2012. While his talk itself straightened a few important creases in my superficial understanding of the subject, two of its sections continue to nag at me.

The first was his attitude toward string theory, which was laudatory to say the least and stifling to say the most. When asked by a colleague of his from the Institute of Mathematical Sciences about constraints placed on string theory by theoretical physics, Rajaji dismissed it as a political "move" to discredit something as exotic as the mathematical framework that string theory introduced.

After a few short, stunted sniggers rippled through the audience, there was silence as everyone realised Rajaji was serious in his allegation: he had dismissed the question as some political comment! Upon some prodding by the questioner, Rajaji proceeded to answer in deliberately uncertain terms about the reasons for the supertheory’s existence and its hypotheses.

Now, I must mention that earlier in his lecture, he had mentioned that researchers, especially of high-energy/particle physics, tended to dismiss new findings just as quickly as they were ready to defend their own propositions because the subject they worked with was such: a faceless foe, constantly shifting form, one moment yielding to one whim, one serendipity, and the next moment, to the other (ref: Kuhn’s thesis). And here he was, living his words!

The second section was his conviction that the future of all kinds of physics lay in the hands of accelerator physics. That experimental proof is the sole arbiter of all things physical, he summarised in a memorable statement:

God is a mathematician, but even he/she/it will wait for experimental proof before being right.

This observation arose when Rajaji decided to speculate aloud on the future of experimental particle physics, especially considering an observable proof of the existence of string theory.

He finished by ruing that accelerator physics was an oft-ignored subject in many research centres and universities; now that we had sufficiently explored the limits and capabilities of SM-physics, the physics to follow (SUSY, GUT, string theory, etc.) necessitated collision energies of the order of 10^19 GeV (the "upgraded" run of the LHC from early 2012 to July 2012 delivered a collision energy of 8,000 GeV).

These are energies well outside the ambit of current human capability. It may well be admitted at this point that an ultimate explanation of the universe and all it contains is not going to be simple, and definitely not elegant. Every step of the way, we seem to encounter two kinds of problems: one cardinal (particle-kinds and their properties) and one metaphysical (why three families of particles and not two or four?).

While the mathematics is “reconfigured” to include such new findings, the philosophy acquires a rupture, a break in derivability, and implications become apparent ex post facto.

The fallacy of attaining infinity

I made a lot of trips to The Hindu office on Mount Road this past week, and I made all of them using the suburban railways. When standing still inside a train that’s moving at around 80 km/hr, I’m also moving at the same speed in the same direction. When walking ahead inside the train, I’m moving faster than the train inside the train. When walking toward the back of the train, I’m moving slower than the train is. All this is boring relative-motion stuff. How about when I’m moving sideways inside the train?

When I'm moving sideways at a speed of, say, 2 m/s, the train will have moved forward a distance of 22.22 m in that same second. If there were an imaginary path that I was inscribing on the ground, then my sideways one won't be perpendicular to the train's path: the two will be adjacent, separated by an angle (like in the diagram shown below).

In the diagram, b is 22.22 m long, a is 2 m long, C is 90°, and A is tan⁻¹(a/b) ≈ 5.14°. Now, the time taken by the train to traverse 22.22 m is 1 s. Let's keep that fixed; instead, in that same second, let's move faster and faster from point A to point B (i.e., my sideways motion). If I move 3 m instead of 2, the angle A becomes about 7.7°. If I move 5 m, its value climbs to about 12.7°. Now flip it around: the faster the train goes, the more the value of A has to shrink toward 0°, and c has to close in on b. In other words, if I move really fast along the breadth of the train and the train has also sped up to a great velocity, I can get from one side of the train to the other as if I simply vanished at this point and materialized at that.

For that to happen, let's make some hypothetical modifications to the train: let the breadth be 2 km instead of a few metres, and let it be accelerating toward around 2,000 km/hr. Assuming that at some point of time the train has stopped accelerating and attained a constant velocity of 2,000 km/hr (i.e., 555.56 m/s), if I move 2 m sideways in 1 s, the value of A stands at about 0.21° and c at about 555.559 m. To bring c even closer to b, let's say the train has sped up to 2,100 km/hr (583.33 m/s) and I move at 1 m/s. This makes A about 0.098° and c, 583.3309 m. If I move so much as 0.1 m, A becomes about 0.0098° and c, 583.33001 m. At this stage, A is as good as 0° and c almost equal to b.
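If you'd like to play with these numbers yourself, here's a quick Python sketch (the speeds and sideways steps are the ones used above):

import math

train_speeds_kmph = [80, 2000, 2100]    # forward speed of the train
sideways_steps_m = [2, 1, 0.1]          # how far I step across in one second

for v in train_speeds_kmph:
    b = v * 1000 / 3600                  # forward distance covered in 1 s, in metres
    for a in sideways_steps_m:
        A = math.degrees(math.atan(a / b))   # angle between my path and the train's
        c = math.hypot(a, b)                 # length of my path over the ground
        print(f"train {v} km/hr: a = {a} m -> A = {A:.4f} deg, c = {c:.4f} m")

As the train gets faster, A collapses toward 0° and c toward b – which is the whole point.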

For someone watching me move from inside the train, I will have moved sideways at 0.1 m/s. However, for someone on the ground looking at the imaginary path (i.e., from a reference frame in relative motion), it will be as good as non-existent! This is because I will have moved from A to B in a train so fast that my path lies almost entirely along the train's own direction – as if there were two parallel lines (one through A and the other through B) and I moved from one to the other along a path that is parallel to both lines. This situation is a mathematical impossibility, and thus must correspond to a wrong assumption about the real world. What is it?

The simplest wrong assumptions are always associated with almosts and nearlies. Saying 583.33 m is almost equal to 583.33001 m is different from saying 583.33 m is equal to 583.33001 m. In the real world, for as long as we don't hit relativistic velocities (i.e., those close to that of light), there will always be these extremely small but furiously persistent differences – they might seem negligible for mathematical and practical approximations, but they will always translate into very real differences.

The cause-effect paradigm

Some people find differential calculus very easy. Others find vector algebra very easy. However, given that our education system is firmly unidirectional for many justifiable reasons, the calculus-folk would have had to suffer vectors before they came across what they liked. This happens to most students. Unfortunately, the process is so rigorous that such students may be driven to lose focus or interest in the subject as a whole. There could be no other way to do it, but that doesn’t mean there’s no better way to teach such subjects inside classrooms.

From time to time, students and teachers alike need to be reminded that each topic in a subject is weak by itself, and only with the assistance of other topics is anything achieved. Instead of going from specifics to the larger picture, why not come from the larger picture to the specifics? After all, and this is just a (convenient) example, mathematics is a powerful but singular set of tools used to solve problems in the real world: every problem is application-driven, including in string theory and loop quantum gravity, where, without the verification of their hypotheses by experiments, each remains just a strongly defended opinion.

The tools of multilateral thinking can be used within classrooms as well to improve efficiency and productivity.

I must concede that some problems are better solved using some tools than others, but keeping in mind why the problem is being solved like that is important. Even if calculus provides a circuitous route to a solution, what’s wrong with its being adopted by the calculus-lovers to get there? When they get there, the relationship between the problem and the solution becomes clearer: there is a better cause-effect relationship established than when a student struggles through vectors and is exhausted by the end, reluctant to take it up again.

As far as laying the groundwork is concerned, teaching students everything is the way to go: at some point later, then, they will be better equipped to make a choice – between what they think they ought to stick with and what they think they can afford to avoid. However, in this order of things, the problems solved using tool-set A and tool-set B, even if in different terms, could be the same, or related in some way so that even what seems difficult could be better understood in terms of what seems easy.

These are only musings concerned with the different ways through which students can convert information into knowledge. The point is: as long as we’re here to solve problems, let’s have fun doing it.

Visualization calibration

What would a musical vector look like? Vectors have magnitude and direction; music possesses an amplitude (volume) and a frequency (pitch). If the direction of the vector is substituted with the frequency of some noise and the magnitude of the vector with the amplitude, and if the origin of the vector is held fixed, then it would move around that pivot, pointing in a certain direction for a given frequency and stretching in that direction according to the amplitude.

The next step is to model the direction according to the frequency: given that the noise playing could be at any frequency between 20 Hz and 20,000 Hz, it would be quite a mundane exercise to manually calibrate a scale and have the vector point at the appropriate positions. Instead, it would be more interesting to ditch the Cartesian coordinates normally taught in classrooms at the middle-school level and take up the circular coordinate system. Here, instead of the X and Y axes, there are the radial distance and the angular position: if I stand at a particular point, instead of being so much to the left and so much toward the front, I will be some distance from an origin and inclined at some angle against a baseline.

A circular, or polar, coordinate system

Now, let's fix the frequency conversion first. 20 Hz to 20,000 Hz is a range of 19,980 Hz. Dividing that value by 360 degrees, we get 55.5 Hz per degree: this means that, starting at 0 degrees, each subsequent degree represents an increment of 55.5 Hz, as in 0 Hz, 55.5 Hz, 111 Hz, 166.5 Hz, and so on. Therefore, as the noise plays out, the vector will point in the corresponding direction. In order to make it more visually captivating, the timestep can be set to 0.5 seconds. In other words, the vector will correspond to the frequency only once every half-second instead of corresponding continuously. With suitable fade-in and fade-out effects, a smooth flashing motion can be visualized.
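A minimal sketch of that mapping, in Python (the sample frequencies and amplitudes below are made up; the 55.5 Hz-per-degree figure is the one worked out above):

import math

def frequency_to_angle(freq_hz):
    """Map an audible frequency onto 0-360 degrees, at roughly 55.5 Hz per degree."""
    return freq_hz / 55.5

def vector_tip(freq_hz, amplitude):
    """Tip of the vector: direction from the frequency, length from the amplitude."""
    theta = math.radians(frequency_to_angle(freq_hz))
    return amplitude * math.cos(theta), amplitude * math.sin(theta)

# sample the noise once every half-second, as described above
for freq, amp in [(440, 0.8), (1000, 0.3), (15000, 0.6)]:
    x, y = vector_tip(freq, amp)
    print(f"{freq} Hz -> {frequency_to_angle(freq):.1f} deg, tip at ({x:.2f}, {y:.2f})")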

Before fixing the amplitude conversion, let’s look at the following wave representation of some noise.

Demarcating it into three sections,

If the red line were to be held as the baseline, then the net displacement from it of each point of the green curve (with a timestep of 0.5 seconds) can be computed and a standard deviation (SD) arrived at. The value of the SD is going to be different for different sections, for reasons that are evident. Instead of computing the deviations separately, section after section, it can be done continuously: since the SD summarises the spread of all the measured deviations in a section, the section under consideration can be slid along with a timestep of 0.5 seconds and a range of 5 seconds.

For example, let's assume that the range of A is 5 seconds. This is the original section. Now, as the noise begins to play, we wait for the first 5 seconds to transpire. At 5.5 seconds, we move the head of the section we're considering to coincide with the position at which the noise is playing – like a slider along a rail – while we bring up the rear, constantly ensuring that the range remains 5 seconds. In this moving range, we continuously compute the SD and use this changing value as the radius of the circle we're using to visualize the vector in.
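A rough sketch of that moving-window computation, in Python (the window of 10 samples corresponds to the 5-second range at the 0.5-second timestep; the toy signal is mine):

import statistics

def sliding_sd(displacements, window=10):
    """SD of the displacements from the baseline, over a window that slides along the signal."""
    radii = []
    for end in range(window, len(displacements) + 1):
        radii.append(statistics.pstdev(displacements[end - window:end]))
    return radii   # each value becomes the circle's radius at that instant

# a toy signal: quiet at first, then louder
signal = [0.1, -0.1, 0.2, -0.2, 0.1, -0.1, 0.2, -0.2, 0.1, -0.1,
          1.0, -1.0, 1.2, -1.1, 0.9, -1.0, 1.1, -1.2, 1.0, -0.9]
print([round(r, 2) for r in sliding_sd(signal)])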

If the noise playing is a continuous and uniformly pitched beep, the vector is going to point in one direction all the time and the radius of the circle is going to be constant throughout. If a sine wave is playing out, then the radius of the circle will rise and fall according to the frequency of the wave and the vector will oscillate between two points on the perimeter of the circle. Here again, a latency can be effected by introducing a lag component to the vector's movement, ensuring that it moves, say, 0.25 seconds after the sound itself. The final step in calibrating a visualizer is the graphic effects: since we've assumed a circular coordinate system, the equation of the Archimedean spiral can be employed to assign each point, or pixel, within the circle a particular colour.

r = a + b·θ

‘a’ is the gradient of the coloring; ‘b’, the number of pixels on the radius of the circle; and ‘r’, the coloring function that has been employed. The total number of pixels in the circle will be πr², which will also then be the number of colors to be assigned overall. Using a loop counter to increment the hex colors (and assigning them to the value of ‘r’), the moving vector can be colorized depending on where it points to and to what distance within the circle (while θ is increased from 0-360 degrees). Since the radius of the circle, ‘b’, is going to keep changing, it would be better to colorize the entire canvas, superimpose the image of the circle on it, mask the colors, and then use the vector to unmask the colors on its “skin”.
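And a toy version of the spiral-based colour assignment, in Python (the gradient step and the grey-scale packing are placeholder choices of mine):

import math

def spiral_colour(theta_deg, a=0.0, b=100):
    """Archimedean spiral r = a + b*theta: map an angle to a radius and a colour."""
    r = a + b * math.radians(theta_deg)   # distance from the centre along the spiral
    shade = int(r) % 256                  # placeholder gradient: recycle 0-255
    return r, (shade, shade, shade)       # grey-scale stand-in for the real palette

for angle in range(0, 361, 60):
    print(angle, "deg ->", spiral_colour(angle))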

A still of a visualization on Windows Media Player, achieved by using more subtle gradients, fading effects, and multiple layers of images.

Exploring a clustering technique using mathematical group theory

Step I: Identification of ICM

A scatter plot

The x-axis, identified by the base marker, represents numerical values from 1 to 100 (integers). The y-axis, identified by the decimal marker, represents numerical values from 0 to 1 (decimal, 2 significant digits).

From all the data, two points are selected at random, marked in the image by a black rectangle around them. They are labelled as the initial cluster markers (ICM).

Step II: Geometric tagging

The Euclidean distance between each point in the scatter plot and the ICM is calculated. In this case, there are 48 points (after excluding the ICM): x1, x2, . . ., x48. Therefore, the distances between x1 and ICM1 and ICM2 are calculated; between x2 and ICM1 and ICM2, and so on until x48 and ICM1 and ICM2.

If the distance between some xi and ICMa is lower than that between xi and ICMb, then xi is grouped with ICMa and tagged as xia. This gives rise to a clear demarcation: the data set is now split between the xia and the xjb, and the two resulting sections are identified as S(xia) and S(xjb).

Step III: Averaging

Before any average is computed, it must be determined as to which value is to be considered. Three cases are presented below.

  1. x-value: if the x-value of each data-point is to be averaged in each cluster, then the computed average will lie along a straight line xi = µi
  2. y-value: if the y-value of each data-point is to be averaged in each cluster, then the computed average will lie along a straight line yi = µi
  3. Alien value: consider the following table.

Base marker (1-50) | Decimal marker (0-1) | Anonymous marker (1-100)
4 | 0.71 | 20
13 | 0.61 | 38
14 | 0.98 | 7
18 | 0.03 | 4
11 | 0.68 | 55
37 | 0.26 | 67
30 | 0.04 | 46
21 | 0.22 | 57
37 | 0.48 | 90
47 | 0.94 | 43

The x- and y-axes represent the values in the first two columns while the third column shows a set of values not considered in the construction of the scatter plot. Since each row in this table is denoted as a point in the plot, each such point is also associated with a certain “anonymous value” as shown in the table.

These can be considered in the averaging process, whereby all the “anonymous” values of all the points in each cluster are averaged separately:

S(xia): A(xia)

S(xjb): A(xjb)

These new points, Aa and Ab, are plotted in their respective clusters.

Step IV: Convergence

Aa and Ab become the new ICM, and steps II and III are repeated.

The iterations stop when the new Aa and Ab converge with the Aa and Ab from the previous iteration.

Result

After the values have converged, the two sections S(xia) and S(xjb) now presented by the machine are two distinct groups of data, the emergence of which also signals that the machine has “learnt”. If a very large database is supplied as an input to the machine, there is an advantage as well as a disadvantage.

  • Advantage: the machine learns better
  • Disadvantage: data may not converge at all due to improper choice of clusters

In order to prevent the second possibility, a suitable number of clusters has to be determined for n, the number of data-points. As a rule of thumb,

k = (n/2)^(1/2)

Here, ‘k’ is the number of clusters.

If a dataset of 50 points is input, then k = 5 clusters are suggested. Once the iterations have been performed and convergence has been attained in five separate clusters, the centroids of each cluster (or, the average value of the data-points in each cluster) can now be used to arrive at 2 super-clusters.
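For anyone who'd rather read the whole procedure as code, here is a compact sketch of steps I to IV in Python (my own toy version: it averages the plotted coordinates, i.e., the simplest of the three cases listed in step III, and runs on made-up random data):

import math
import random

def cluster(points, k=2, max_iter=100):
    """Pick k initial cluster markers at random (step I), then alternate geometric
    tagging (step II) and averaging (step III) until the markers converge (step IV)."""
    markers = random.sample(points, k)
    for _ in range(max_iter):
        groups = [[] for _ in range(k)]
        for p in points:
            distances = [math.dist(p, m) for m in markers]
            groups[distances.index(min(distances))].append(p)
        new_markers = [
            tuple(sum(coords) / len(g) for coords in zip(*g)) if g else m
            for g, m in zip(groups, markers)
        ]
        if new_markers == markers:          # convergence: the markers stopped moving
            break
        markers = new_markers
    return markers, groups

# rule of thumb from above: k = sqrt(n / 2)
data = [(random.randint(1, 100), round(random.random(), 2)) for _ in range(50)]
k = round(math.sqrt(len(data) / 2))         # 50 points -> 5 clusters
centres, clusters = cluster(data, k)
print(k, "clusters; centres:", centres)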

Plays of the day

The “Club 27” apocrypha

Do rockstars who die at the age of 27 change the way we look at rock n’ roll?

*

Necker cubing

Some time ago, during a certain event, it so befell that I had to get up on stage and speak over a mic; I say that because what I said isn’t important. As I started to speak, I became aware of two voices: my voice pre-amplification (pre-A) and my voice post-amplification (post-A). I had to be aware of the pre-A so I wouldn’t raise my voice unnecessarily, and I had to be aware of the post-A so I could listen to what I was saying.

Over the course of the next few minutes, I could often be caught trying to listen to my pre-A and check for the loudness of my voice using the post-A, which didn’t work at all, leading to a constantly varying amplitude of the output – more often than not at increasing volumes. Then again, I let my audience laugh at me: I’ve found that distracts people enough to let me carry on with my work. Anyway, the experience was like trying to drive a motorcycle precisely over the center of the road at all times.

Consider the following schema.

Here, S stands for the source, A for the amplifier, V for the volume (or quantity) and I for information (or quality).

Are there any hormonal systems or neural networks that function on this principle? Because this reminds me of the McGurk effect in multisensory perception.

*

Communist journalism

Does constantly asking “How is the common man being wronged?” foster a Communist proclivity?

*

The calculus affair

My textbook of differential equations and their applications was finally delivered by Flipkart (and then to me by D.). When I first went looking for the book, I chanced upon a textbook of algebraic topology, which would've been perfect for the Conway's Game of Life problems I've been looking at. While browsing through the first few pages on Amazon's preview, I had a shock when I realized I'd lost touch with my calculus. Of course I bought the book on differentiation immediately!

I was never that good at solving the problems I was asked to solve inside classrooms, but when it came to differential calculus, I could solve the toughest problem in a jiffy. What a dejection it was, then, when I took more than 10 minutes to figure out that the derivative of e^x is e^x. When I was in Dubai doing engineering, I wanted so much to study journalism. Now, at ACJ, my fingers itch every day for a challenge in calculus.

*

Symmetrical itching

I was sitting with a couple of friends outside on the lawn when one of them, A, itched two sides of her face at the same time. It was curious because she said it happened often, i.e., symmetrical itching, sometimes on her shoulders, sometimes on her hands. I quickly made a note of it, much to the amusement of my friends who thought I was being curious about nothing.

Now, I find that there's nothing concrete to explain symmetrical itching (even though the itch itself as a cause of concern has been widely debated) and most answers on the Web are centered on the "wiring" of the CNS and a possible eczema affliction. This shalt be pursued.

Machines and meaning

Old post, now archived thus.

The problem with machines is subjectivism. A machine’s ability to feel, to perceive and to draw subjective conclusions is compromised by its creation itself: if we were to put random bits of metal together and suddenly witness life, we wouldn’t bother with any of the intricate mechanisms that seem to make the device life-like. Essentially, we’re only imitating life, we’re not creating it.

In the absence of mechanical subjectivism, the most favourable other recourse at our disposal is to imitate the processes through which we humans achieve subjectivism. The most important of these processes is the feedback loop, and together with such things as logic gates and Turing machines, the loop manages to recreate various scenarios. Recreation, however, is not our ultimate goal but must still be suffered so that we may find what we seek.

As an example, I’ll use the ‘intent’ and ‘mechanism’ modules to highlight the problems associated with making a machine think for itself. And this is not a simple computation process that a Universal Turing Machine (UTM) could solve – think of it as a UTM with a very large rules table, an infinite and variable input tape and an output that must fit certain descriptions.

Intent: To reprimand a man who’s made a mistake

Resources: Corpus of words, grammatical rules, semantics, alpha-numeric index

Mechanisms:

  1. UTM.access accesses elements from the ‘Resources’ set
  2. UTM.flip substitutes one element with another element
  3. UTM.eval.typo is a function that evaluates the typology of a given phrase and returns a predefined structure index value
  4. UTM.eval.sem is a function that evaluates the meaning of a given word and returns a predefined semantics index value
  5. UTM.build finalizes the order in which a length of words in a two-dimensional array are set
  6. UTM.insert inserts a word into the array
  7. UTM.remove removes a word from its place in the array

Now, in terms of the UTM: the input tape is made finite and constituted by the words in the corpus, the rules table inherits its logicality from the properties of the ‘Resources’ set, and the output must match the intent.

Corpus of words (closed concept)

  • Finite in number
  • Categorized as adjectives, nouns, verbs, determiners and prepositions
  • Spelling

Grammatical rules (closed concept)

  • Syntactic rules (placement of commas after certain words, etc.)
  • Typologies (OVS and SVO)
  • Placement rules for verb phrase (VP), noun phrase (NP), prepositional phrase (PP), and adjectival phrase (AP)
  • Categorization rules for phrasal grammar

(Example: My neighbour, whose dog was barking all night, parked his car in the garage and ran into his backyard.

Phrasal form: {My neighbour, {whose dog was barking all night}, {parked {his {car}} {in {the garage}}} and {ran {into {his backyard.}}}})

Alpha-numeric index

  • A mapping from one closed-concept finite set to a closed-concept infinite set
  • Providing inter alia “communication” between different functions and procedures
  • Definitions can be modified by user(s)

There we have it. Now, using these tools, and a corpus of say 5,000 words, script the algorithm for a Turing machine to generate as many sentences as possible that are all synonymous to: “He is so foolish, so much more foolish than my neighbours.”

The simpler sections of the algorithm are invariably those that have been deconstructed into discrete encapsulations of different concepts, and are therefore easily deployed as part of a process. The more difficult sections are those that employ the semantic aspects of words, because the problem statement is: how do you make a machine understand meaning?

(Remember that the corpus does not contain a synonymic categorization rule.)
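To show how far the purely mechanical parts get you, here's a toy Python sketch of the flip/build machinery (the tiny corpus, the template and the choice of words are all mine – and note that the list of interchangeable adjectives is exactly the synonymic rule the problem statement denies you, which is the point):

import itertools

# a toy corpus; the real problem assumes ~5,000 words and no synonym rule
corpus = {
    "adjectives": ["foolish", "silly", "unwise"],
    "intensifiers": ["so", "very"],
}

# the 'build' step: a fixed phrasal skeleton with slots to flip words in and out of
template = "He is {i1} {adj}, {i2} much more {adj} than my neighbours."

def build_sentences():
    for adj, i1, i2 in itertools.product(corpus["adjectives"],
                                         corpus["intensifiers"],
                                         corpus["intensifiers"]):
        yield template.format(adj=adj, i1=i1, i2=i2)

for sentence in build_sentences():
    print(sentence)

# The hard part is absent: nothing here *knows* that "silly" means roughly the same
# thing as "foolish" – that knowledge was smuggled in by hand.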

To be continued…

Experiments with first-order logic (colour-coded)

(Algorithms are syntactically approximated; T is a tape, or a row, consisting of n blocks)

Coloured rule 90 garble

*invoke graph and colour libraries
bin k;
int h,i;
int r=1,g=1,b=1;
*create m x n matrix
n=ln(T);
m=count(T);
for h=1 to m
{
for i=1 to n
{
*first row is seeded at random; every later cell is the XOR of its two upper neighbours
if h=1: val(kh,i)=rand(0,1);
else:
{
if i=1: val(kh,i)=val(kh-1,i+1);
if i<n: val(kh,i)=XOR(val(kh-1,i-1), val(kh-1,i+1));
if i=n: val(kh,i)=val(kh-1,i-1);
}
}
next i;
}
next h;
*represent the modified m x n matrix on a graph
for h=1 to m
{
for i=1 to n
{
putcolour(kh,i)=RGB(r,g,b)
r=r+1; g=g+1; b=b+1;
}
next i;
}
next h;

Coloured Sierpinski triangle

*invoke graph and colour libraries
bin k;
int h,i;
int r=1,g=1,b=1;
*create m x n matrix
n=ln(T);
m=count(T);
*seed a single 1 in the middle of the first row
val(k1,n/2)=1;
for h=2 to m
{
for i=2 to n
{
if i<n: val(kh,i)=XOR(val(kh-1,i-1), val(kh-1,i+1));
if i=n: val(kh,i)=val(kh-1,i-1);
}
next i;
}
next h;
*represent the modified m x n matrix on a graph
for h=1 to m
{
for i=1 to n
{
putcolour(kh,i)=RGB(r,g,b)
r=r+1; g=g+1; b=b+1;
}
next i;
}
next h;

XORsweeper

*invoke graph and colour libraries
bin k;
int h,i;
int a,j;
int r=1,g=1,b=1;
*d tracks the sweep direction (1 = forward, 0 = reverse)
bin d=1;
*create m x n matrix
n=ln(T);
m=count(T);
if d=1: forwardsweep(h,i);
if d=0: reversesweep(h,i);
def forwardsweep(h,i):
{
for h=1 to m
{
for i=1 to n
{
*first row is seeded at random; later rows follow the XOR rule
if h=1: val(kh,i)=rand(0,1);
else:
{
if i=1: val(kh,i)=val(kh-1,i+1);
if i<n: val(kh,i)=XOR(val(kh-1,i-1), val(kh-1,i+1));
if i=n: val(kh,i)=val(kh-1,i-1);
}
}
next i;
colour(h);
}
next h;
return d=0;
}
def reversesweep(h,i):
{
for h=m to 1, h–
{
for i=n to 1, i–
{
*in the reverse sweep the last row is the seed and every cell reads from the row below it
if h=m: val(kh,i)=rand(0,1);
else:
{
if i=1: val(kh,i)=val(kh+1,i+1);
if i<n: val(kh,i)=XOR(val(kh+1,i-1), val(kh+1,i+1));
if i=n: val(kh,i)=val(kh+1,i-1);
}
}
next i;
colour(h);
}
next h;
return d=0;
}
*represent the modified m x n matrix on a graph
def colour(a):
{
for j=1 to n
{
putcolour(ka,j)=RGB(r,g,b)
r=r+1; g=g+1; b=b+1;
}
next j;
return φ;
}

Kurt the penguin & Gödel’s incompleteness theorem

Kurt the Penguin

Kurt was a talking penguin aware of his special existence.

Rules of the Clock

#1: When the clock behind Kurt struck 12, he’d yell, “I’m a penguin!”

#2: When the clock didn’t strike 1, he’d not yell, “I’m a penguin!”

#3: When the clock struck 6, he’d yell “I’m a penguin!” twice.

#4: When the clock didn’t strike 7, he’d not yell, “I’m a penguin!” twice.

The clock strikes!

Once, on a fine Tuesday, the clock struck 7.

Either Kurt didn’t yell “I’m a penguin!” twice or Kurt yelled “I’m a penguin!” twice.

If Kurt yelled “I’m a penguin!” twice, then Kurt has broken none of the Rules of the Clock.

If Kurt didn’t yell “I’m a penguin!” twice, then Kurt has broken one of the Rules of the Clock.

What’s up with Kurt?

Therefore, if Kurt always speaks the truth, there are some truths that Kurt doesn’t speak.

And Kurt sometimes speaks the untruth.

Kurtlogic

#1: [T12, KP] → t

#2: [NT1, NKP] → t

#3: [T6, {KP, KP}] → t

#4: [NT7, N{KP, KP}] → t

If Kurt yelled “I’m a penguin!” twice: [T7, {KP, KP} → Nf]

If Kurt didn’t yell “I’m a penguin!” twice: [T7, N{KP, KP} → f]

Where,

[T7, {KP, KP} → Nf] ≡ “if Kurt always speaks the truth, there are some truths that Kurt doesn’t speak”

[T7, N{KP, KP} → f] ≡ “Kurt sometimes speaks the untruth”

*

Gödel’s first incompleteness theorem

Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.

Gödel’s second incompleteness theorem

For any formal effectively generated theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.

Machine-rules of grammar

The sciences of numerical analysis and operations research reveal that, in order to use the lowest number of denominations to cover the widest range of values (via combinations), the mint mints/prints the Re. 1, Rs. 2, Rs. 5, Rs. 10, Rs. 20, Rs. 50, Rs. 100, Rs. 500 and Rs. 1000 denominations. Similarly – and I don't think it's too much of a stretch to assume so – in order to facilitate the most complex of logical constructs, a comparatively small number of givens should prove sufficient.

For example, a half-adder in a computer is composed of an AND gate and an XOR gate; the XOR gate, in turn, looks like this:

An XOR gate composed solely of MOSFETs
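For concreteness, here's the half-adder written out in terms of those two gates (a sketch in Python rather than in MOSFETs, obviously):

def half_adder(a, b):
    """Sum comes from the XOR gate, carry from the AND gate."""
    return a ^ b, a & b   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")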

Therefore, from the diagram, a finite number of MOSFETs (each embodying a finite amount of logical information) can be seen to be employed to produce the output logic. From this example, it is also deducible that such an approach to construction becomes invalidated when there is a (direct or indirect) violation of the third law of thermodynamics.

To illustrate the relation, consider the ADR (adiabatic demagnetization refrigerator), wherein a paramagnetic material is repeatedly isothermally magnetized and isentropically demagnetized. Such a machine cannot be used to reduce the temperature of the system to absolute zero because an isothermal process is involved in the refrigeration cycle – a condition mandated by both the Third Law and Nernst's heat theorem.

Similarly, if an inclusion of the component, C, available to us cannot result in the logical output, L, that we require, then the stated approach becomes useless. In other words, if the employment of a capacitor in even numbers presents a logical restriction in building a certain gate, then that particular component should be broken down further before it is re-included into the project: if each capacitor consists of 2 Xs, then the overall logic shouldn't be defined as N-times-1.5C but N-times-3X.

To be continued…