Category Archives: Uncategorized

SC’s cancer-due-to-cell-tower verdict

From Times of India:

Last year, Harish Chand Tiwari, who works at the residence of Prakash Sharma in the Dal Bazar area of Gwalior, moved the SC through advocate Nivedita Sharma, complaining that a BSNL tower illegally installed on a neighbour’s rooftop in 2002 had exposed him to harmful radiation 24×7 for the last 14 years. Radiation from the BSNL tower, less than 50 metres from the house where he worked, afflicted him with Hodgkin’s lymphoma caused by continuous and prolonged exposure to radiation, Tiwari complained. In a recent order, a bench of Justices Ranjan Gogoi and Navin Sinha said, “We direct that the particular mobile tower shall be deactivated by BSNL within seven days from today.” The tower will be the first to be closed on an individual’s petition alleging harmful radiation.

Unbelievable. If the radiation received and transmitted by base station towers really causes cancer, where’s the explosion of cancer rates in urban centres around the world? In fact, data from the US suggests that cancer incidence is actually on the decline (or at least not exploding if you account for population growth) – except for cancers of the lung/bronchus (due to smoking)…

… whereas the number of cell sites has been surging.

Even if we are to give Harish Chand Tiwari the benefit of the doubt, taking a cell site down because one man in its vicinity had cancer seems quite excessive. Moreover, I don’t think Tiwari has a way to prove it was the cell site alone, and not anything else, that gave him lymphoma. For that matter, how does any study purport to be able to show cancer being caused by one agent exclusively? We speak only in terms of risk and comorbidity even with smoking, the single largest risk factor in modern times. And none of this has forced us to distance ourselves from the horde of other factors – including the pesticides in our food and excessive air pollution – in our daily lives. But through all these stochasticities and probabilities, the SC seems to be imposing a measure of certainty that we’ll never find. And its judgment has set a precedent that will only make it harder to beat down the pseudoscience that stalks irrational fears.

Featured image credit: Unsplash/pixabay.

If Rajinikanth regrets some of the roles he played, and other questions

Featured image: An illustration of actor Rajinikanth. Credit: ssoosay/Flickr, CC BY 2.0.

Read this about the Dileep-Kavya wedding and the crazy thing the groom said about the bride and why he was marrying her (protecting her honour, apparently). Reminded me of the widespread misogyny in Tamil cinema – as well as the loads of interviews I daydream about conducting with the people who both participate in and create one of my favourite enterprises in India: ‘Kollywood’. So many people have so much to answer for: fat jokes, moral policing, stalking, the so-called “amma sentiment” (nothing to do with JJ), love, superstitions, punch-dialogues, etc.

(What follows is by no means exhaustive but does IMO address the major problems and the most well-known films associated with them. Feel free to pile on.)

Fat jokes – What do actors like Nalini, Aarthi Ravi and Bava Lakshmanan feel about elephant-trumpets playing in the background when they or their dialogues have their moment on screen? Or when actors like Vivek, Soori and Santhanam make fun of the physical appearances of actors like Yogi Babu, Madhumitha and ‘Naan Kadavul’ Rajendran for some supposedly comedic effect? Or when actors like Vadivelu and Goundamani make fun of dark-skinned women?

Moral policing – Applies to a lot of actors but I’m interested in one in particular: Rajinikanth. Through films like Baasha (1995), Padayappa (1999), Baba (2002), Chandramukhi (2005) and Kuselan (2008), Rajini has delivered a host of dialogues about how women should or shouldn’t behave, dialogues that just won’t come unstuck from Tamil pop culture. His roles in these films, among many others, have glorified his stance and shown it to reap results, often to the point where to emulate the ‘Superstar’ is effectively to embody these attitudes (which are all on the conservative, more misogynistic side of things). I’d like to ask him if he regrets playing these roles and delivering the lines that came with them. I’d be surprised if he were completely unconcerned. He’s an actor who’s fully aware of the weight he pulls – recall his confrontation with the politician S. Ramadoss in 2002, over the film Baba showing him smoking and drinking in many scenes, from which he emerged smarting.

(Oh, and women can’t drink or smoke.)

Misogyny – Much has been written about this but I think a recent spate of G.V. Prakash movies deserves special mention. What the fuck is he thinking? Especially with a movie like Trisha Illana Nayanthara (2015)? Granted, he might not even have had much of a say in the story, production values, etc., but he has to know he’s the face, the most prominent name, of the shitty movies he acts in. And I expect him to speak up about it. Also, Siva Karthikeyan and his ‘self-centred hero’ roles, where at the beginning of the plot he’s a jerkbag and we’ve to spend the next 100 minutes awaiting his glorious and exceptionally inane reformation even as the background score strongly suggests we sympathise with him. Over and over and over. What about the heroine’s feelings? Oh, fuck her feelings, especially with lines like, “It’s every woman’s full-time job to make men cry.” Right. So that’s why you spent the last 99 minutes lusting after her. Got it. Example: Remo (2016).

Stalking – This is unbelievably never-endingly gloriously crap. And it’s crappier when some newer films continue to use it as a major and rewarding plot-device, often completely disregarding the female character’s discomfort on the way.

Respect for mothers – I hate this one for two reasons. In Kollywood pop culture, this trope is referred to as “amma sentiment” (‘amma’ is Tamil for ‘mother’). It plays out in Tamil films in the form of the protagonist, usually the male, revering his mother and/or mothers all over the place for being quasi-divine manifestations of divine divinity. It began with Kamal Haasan’s Kalathur Kannamma in 1960 (though I’m not going to hold that against him, he was 6 y.o. at the time) and received a big boost with Rajinikanth’s Mannan (1992). But what this does is install motherhood as the highest possible aspiration for women, excising them of their choice to be someone/something else. What this reverence also does is portray all mothers as good people. This delegitimises the many legitimate issues of those who’ve had fraught relationships with their mothers.

The Moment When Love ‘Arrives’ – Stalking-based movies have this moment when Love Arrives. Check out the cult classic Ullathai Allitha (1996), when Karthik Muthuraman forces Rambha to tell him she loves him. And then when she does, she actually fucking does. The Turn is just brutal: to the intelligence of the female character, to the ego of the male character (which deserves only to be deflated). But thanks: at least you’re admitting there’s no other way that emotional inflection point is going to come about, right?

Endorsement of religious rituals/superstitions/astrology – Sometimes it’s frightening how casually many of these films assume these things are based in fact, or even in the realm of plausibility. Example: DeMonte Colony (2015), Aambala (2015), Aranmanai (2014), Sivaji (2007), Veerappu (2007), Anniyan (2005), etc.

Punch dialogues – Yeah, some actors like Vijay, Dhanush, Ajith, even Siva Karthikeyan and *cough* M. Sasikumar of late, deliver punch dialogues on screen to please their more-hardcore fans. But the more these dialogues continue to be developed and delivered, aren’t the actors and their producers also perpetuating their demand for mind-numbing levels of depersonalisation from the audience?

Obsession with fair skin – Apart from the older fair-and-lovely criticisms, etc., some movies also take time out to point out that an actress in the film is particularly fair-skinned and deserves to be noticed for just that reason. Example: Poojai (2014), Maan Karate (2014), Kappal (2014), Goa (2010), Ainthaam Padai (2009), Kadhala Kadhala (1998), etc.

Circlejerking – The film awards instituted by the South Indian film industries are like those awards given to airports: a dime a dozen, no standardised evaluation criteria and a great excuse to dress up and show off. On many occasions, I’ve felt like some of the awardings might’ve better served the institutions that created them if they weren’t given out in a particular year. Another form of this circlejerk is for a mediocre or bad film to have multiple throwbacks to its male protagonist’s previous films and roles.

Miscellaneous WTFs

Manadhai Thirudivittai (2001) – For completely rejecting the idea that a woman has feelings or opinions about something that affects her

Endrendrum Punnagai (2013) – For a male protagonist who never feels the need to apologise for his boneheadedness and its emotional impact on other people

Kaththi (2014) – For portraying a female lead prepared to be part of a strike that cripples an entire state but is okay being slapped by random people

The actor Santhanam – I’ve always found that Tamil cinema’s comedians and comediennes are among the industry’s best actors, and Santhanam is no exception. He’s been extremely successful in the last five years, and it’s been evident of late that he now wants to make it big as a hero. Good luck! Except what hurts is that he’s trying to be the painful-to-watch hero: engaging in stalking, delivering punch-dialogues, telling women what they should or shouldn’t do, etc.

It is as the art critic John Berger wrote in Ways of Seeing (1972) – with the following prefix: “In most of Tamil cinema…”

… men act and women appear. Men look at women. Women watch themselves being looked at. This determines not only most relations between men and women but also the relation of women to themselves. The surveyor of woman in herself is male: the surveyed female. Thus she turns herself into an object – and most particularly an object of vision: a sight.

Workflow: Making a mailer

Some of you might remember that, well before Infinite in All Directions, a friend and I used to send out a science newsletter called Curious Bends. After quickly raking in a few hundred subscribers, both of us lost interest in sending the newsletter even as we continued to believe that doing so would be a valuable service. In hindsight, one likely reason it stopped is set-shifting. From next week’s newsletter:

… the newsletter didn’t take a hit for lack of time as much as for the cost of switching between tasks that require different aptitudes. Psychologists have a name for this phenomenon: set-shifting. Research has shown that when a person switches from one task to another, there are two kinds of costs. One is the cost of readjusting one’s mental settings to be ready for the second task after the first. The other is the erosion of our efficiency at performing the second task due to ‘leftover’ settings from the first task. And these costs are exacerbated when the tasks get more complex. In effect, I skipped the newsletter because the second kind of cost was just getting too high for me.

Now, I don’t want that to happen with Infinite in All Directions because when I do compile and send it out, I have a gala time as do many of its subscribers (based on the feedback I’ve received – but feel free to tell me I’m wrong). And this is now making me think harder about mitigating the costs, or even prevalence, of set-shifting.

One way out, for example, is for me to reduce the time it takes to create the newsletter. Right now, I send it out through MailChimp, which has its own editing and formatting tools/area. I didn’t choose MailChimp as much as I chose the email newsletter as a medium through which to deliver information. And my workflow goes like this: See a link I like → Save it on Evernote → Make some points on Evernote → Port them at the end of the week to MailChimp → Format the newsletter → Send → Copy the email and reformat → Publish on The Wire (WordPress).

Now what if I could use one tool – like iA Writer (and its amazing transclusion feature) – instead of two (Evernote + MailChimp) so I can publish what I compile via the same platform, while you – the subscriber – receive an auto-compiled list of posts once a week via MailChimp? I.e.: Ulysses/iA → WordPress → MailChimp. It sounds quite appealing to me but if you think I’m missing something, please let me know.

Featured image: A few server racks with disks and switches. Caption & credit: Alex/Flickr, CC BY 2.0.

A small update


I moved to Delhi on March 27 (that’s the view outside my bedroom). I’ll be working out of here for the next year – maybe longer, but I don’t have to decide that until much later. This completes one of my five big tasks for the year:

  • Move to Delhi
  • Access new Planck and LHC data
  • Purchase and devour Fall of Light, book #2 of the Kharkhanas Trilogy
  • Visit a world-renowned particle accelerator lab
  • Continue contributing to The Wire

If you’re in Delhi, let’s get a drink!

The Borexino neutrino detector on the inside, during construction in 2001. The photomultiplier tubes are visible. Credit: Borexino

Physicists could have to wait 66,000 yottayears to see an electron decay

The longest coherently described span of time I’ve encountered is from Hindu cosmology. It concerns the age of Brahma, one of Hinduism’s principal deities, who is described as being 51 years old (with 49 more to go). But these are no simple years. Each day in Brahma’s life lasts for a period called the kalpa: 4.32 billion Earth-years. In 51 years, he will actually have lived for almost 80 trillion Earth-years. By the time he turns 100, he will have lived 157 trillion Earth-years.

157,000,000,000,000. That’s stupidly huge. Forget astronomy – I doubt even economic crises have use for such numbers.

On December 3, scientists announced that something we’ve all known about will live for even longer: the electron.

Yup, the same tiny lepton that zips around inside atoms with gay abandon, that’s swimming through the power lines in your home, has been found to be stable for at least 66,000 yottayears – yotta- being the largest available metric prefix.

In stupidly huge terms, that’s 66,000,000,000,000,000,000,000,000,000 (66,000 trillion trillion) years. Brahma just slipped to second place among the mortals.
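These conversions are easy to check. A quick sketch, assuming 360 kalpa-days per Brahma-year and counting only the days (not Brahma’s equally long nights):

```python
# Back-of-the-envelope check of the timescales above.
# Assumption: 360 days per Brahma-year, one kalpa (4.32 billion
# Earth-years) per day; nights are not counted.
KALPA = 4.32e9                    # Earth-years per Brahma-day
BRAHMA_YEAR = 360 * KALPA         # Earth-years per Brahma-year

print(f"51 Brahma-years  ~ {51 * BRAHMA_YEAR:.2e} Earth-years")   # ~7.9e13, i.e. ~80 trillion
print(f"100 Brahma-years ~ {100 * BRAHMA_YEAR:.2e} Earth-years")  # ~1.6e14 Earth-years

# The electron's measured lower limit on its lifetime, for scale:
electron_limit = 66_000 * 1e24    # 66,000 yottayears, in Earth-years
lifetimes = electron_limit / (100 * BRAHMA_YEAR)
print(f"Electron limit ~ {lifetimes:.2e} full Brahma lifetimes")  # ~4.2e14
```

Under these assumptions the 100-year total comes out a shade under the 157 trillion quoted above, but the orders of magnitude hold.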

But why were scientists making this measurement in the first place?

Because they’re desperately trying to disprove a prevailing theory in physics. Called the Standard Model, it describes how fundamental particles interact with each other. Though it was meticulously studied and built over a period of more than 30 years to explain a variety of phenomena, the Standard Model hasn’t been able to answer a few of the more important questions. For example, why is gravity so much weaker than the other three fundamental forces? Or why is there more matter than antimatter in the universe? Or why does the Higgs boson not weigh more than it does? Or what is dark matter?


The electron belongs to a class of particles called leptons, which in turn is well described by the Standard Model. So if physicists were able to find that an electron is less stable than the model predicts, it’d be a breakthrough. But despite multiple attempts to spot such a freak event, physicists haven’t succeeded – not even with the LHC (though hopeful rumours are doing the rounds that that could change soon).

The measurement of 66,000 yottayears was published in the journal Physical Review Letters on December 3 (a preprint copy is available on the arXiv server dated November 11). It was made at the Borexino neutrino experiment buried under the Gran Sasso mountain in Italy. The value itself hinges on a simple idea: the conservation of charge.

If an electron becomes unstable and has to break down, it’ll break down into a photon and a neutrino. There are almost no other options because the electron is the lightest charged particle and whatever it breaks down into has to be even lighter. However, neither the photon nor the neutrino has an electric charge so the breaking-down would violate a fundamental law of nature – and definitely overturn the Standard Model.

The Borexino experiment is actually a solar neutrino detector, using 300 tonnes of a petroleum-based liquid to detect and study neutrinos streaming in from the Sun. When a neutrino strikes the liquid, it knocks out an electron in a tiny flash of energy. Some 2,210 photomultiplier tubes surrounding the tank amplify this flash for examination. The energy released is about 256 keV (by the mass-energy equivalence, corresponding to about a 4,000th the mass of a proton).
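That 256 keV figure has a neat origin worth sketching: in the hypothetical decay of an electron at rest into a photon and a (nearly massless) neutrino, energy-momentum conservation hands the photon half the electron’s rest energy. The rest energies below are standard textbook values, not numbers from the Borexino paper:

```python
# Hypothetical decay e- -> nu + gamma, with the electron at rest.
# Conserving energy and momentum, the photon gets half the electron's
# rest energy; the neutrino carries off the other half.
M_ELECTRON_KEV = 511.0        # electron rest energy, keV
M_PROTON_KEV = 938_272.0      # proton rest energy, keV

photon_energy = M_ELECTRON_KEV / 2
print(f"Expected photon energy: {photon_energy:.1f} keV")  # ~255.5, i.e. the ~256 keV flash

ratio = M_PROTON_KEV / photon_energy
print(f"That's about 1/{ratio:.0f} of a proton's mass")    # ~1/3700, 'about a 4,000th'
```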

However, the innards of the mountain where the detector is located also produce photons thanks to the radioactive decay of bismuth and polonium in it. So the team making the measurement used a simulator to calculate how often photons of 256 keV are logged by the detector against the ‘background’ of all the photons striking the detector. Kinda like a filter. They used data logged over 408 days (January 2012 to May 2013).

The answer: once every 66,000 yotta-years (that’s about 420 trillion of Brahma’s full 100-year lifetimes).

Physics World reports that if photons from the ‘background’ radiation could be eliminated further, the lower limit on the electron’s lifetime could probably be pushed up a thousandfold. But there’s historical precedent that to some extent encourages stronger probes of the humble electron’s properties.

In 2006, another experiment situated under the Gran Sasso mountain tried to measure the rate at which electrons violated a defining rule in particle physics called Pauli’s exclusion principle. All electrons can be described by four distinct attributes called their quantum numbers, and the principle holds that no two electrons can have the same four numbers at any given time.

The experiment was called DEAR (DAΦNE Exotic Atom Research). It energised electrons and then measured how much energy was released when the particles returned to a lower-energy state. After three years of data-taking, its team announced in 2009 that the principle was being violated once every 570 trillion trillion measurements (another stupidly large number).

That’s a violation 0.0000000000000000000000001% of the time – but it’s still something. And it could amount to more when compared to the Borexino measurement of an electron’s stability. In March 2013, the team that worked on DEAR submitted a proposal for building an instrument that would improve the measurement a hundredfold, and in May 2015, reported that such an instrument was under construction.
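The percentage is simple to reproduce; a two-line check, taking “once every 570 trillion trillion measurements” at face value:

```python
# One Pauli-principle violation per 570 trillion trillion (5.7e26) measurements.
violation_rate = 1 / 570e24
percent = violation_rate * 100
print(f"Violation rate: {percent:.1e} %")  # ~1.8e-25 %, the order of magnitude quoted
```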

Here’s hoping they don’t find what they were looking for?

Is the universe as we know it stable?

The anthropic principle has been a cornerstone of fundamental physics, being used by some physicists to console themselves about why the universe is the way it is: tightly sandwiched between two dangerous states. If the laws and equations that define it had slipped during its formation just one way or the other in their properties, humans wouldn’t have existed to be able to observe the universe, and conceive the anthropic principle. At least, this is the weak anthropic principle – that we’re talking about the anthropic principle because the universe allowed humans to exist, or we wouldn’t be here. The strong anthropic principle thinks the universe is duty-bound to conceive life, and if another universe was created along the same lines that ours was, it would conceive intelligent life, too, give or take a few billion years.

The principle has been repeatedly resorted to because physicists are at that juncture in history where they’re not able to tell why some things are the way they are and – worse – why some things aren’t the way they should be. The latest significant addition to this list, and an illustrative example, is the Higgs boson, whose discovery was announced on July 4, 2012, at the CERN supercollider LHC. The Higgs boson’s existence was predicted by three independently working groups of physicists in 1964. In the intervening decades, from hypothesis to discovery, physicists spent a long time trying to find its mass. The now-shut American particle accelerator Tevatron helped speed up this process, using repeated measurements to steadily narrow down the range of masses in which the boson could lie. It was eventually found at the LHC at 125.6 GeV (a proton weighs about 0.94 GeV).

It was a great moment, the discovery of a particle that completed the Standard Model group of theories and equations that governs the behaviour of fundamental particles. It was also a problematic moment for some, who had expected the Higgs boson to weigh much, much more. The mass of the Higgs boson is connected to the energy of the universe (because the Higgs field that generates the boson pervades throughout the universe), so by some calculations 125.6 GeV implied that the universe should be the size of a football. Clearly, it isn’t, so physicists got the sense something was missing from the Standard Model that would’ve been able to explain the discrepancy. (In another example, physicists have used the discovery of the Higgs boson to explain why there is more matter than antimatter in the universe though both were created in equal amounts.)

The energy of the Higgs field also contributes to the scalar potential of the universe. A good analogy lies with the electrons in an atom. Sometimes, an energised electron sees fit to lose some extra energy it has in the form of a photon and jump to a lower-energy state. At others, a lower-energy electron can gain some energy to jump to a higher state, a phenomenon commonly observed in metals (where the higher-energy electrons contribute to conducting electricity). Like the electrons can have different energies, the scalar potential defines a sort of energy that the universe can have. It’s calculated based on the properties of all the fundamental forces of nature: strong nuclear, weak nuclear, electromagnetic, gravitational and Higgs.

For the last 13.8 billion years, the universe has existed in a particular way that’s been unchanged, so we know that it is at a scalar-potential minimum. The apt image is of a mountain-range, like so:


The point is to figure out if the universe is lying at the deepest point of the potential – the global minimum – or at a point that’s the deepest in a given range but not the deepest overall – the local minimum. This is important for two reasons. First: the universe will always, always try to get to the lowest energy state. Second: quantum mechanics. With the principles of classical mechanics, if the universe were to get to the global minimum from the local minimum, its energy will first have to be increased so it can surmount the intervening peaks. But with the principles of quantum mechanics, the universe can tunnel through the intervening peaks to sink into the global minimum. And such tunnelling could occur only if the universe is currently in a local minimum.

To find out, physicists try to calculate the shape of the scalar potential in its entirety. This is an intensely complicated mathematical process and takes lots of computing power to tackle, but that’s beside the point. The biggest problem is that we don’t know enough about the fundamental forces, and we don’t know anything about what else could be out there at higher energies. For example, it took an accelerator capable of boosting particles to 3,500 GeV each and smashing them head-on to discover a particle weighing 125 GeV. Discovering anything heavier – i.e. more energetic – would take ever more powerful colliders costing many billions of dollars to build.

Almost sadistically, theoretical physicists have predicted that there exists an energy level at which the gravitational force unifies with the strong/weak nuclear and electromagnetic forces to become one indistinct force: the Planck scale, 12,200,000,000,000,000,000 GeV. We don’t know the mechanism of this unification, and its rules are among the most sought-after in high-energy physics. Last week, Chinese physicists announced that they were planning to build a supercollider bigger than the LHC, called the Circular Electron-Positron Collider (CEPC), starting 2020. The CEPC is slated to collide particles at 100,000 GeV, more than 7x the energy at which the LHC collides particles now, in a ring 54.7 km long. Given the way we’re building our most powerful particle accelerators, one able to smash particles together at the Planck scale would have to be as large as the Milky Way.

(Note: 12,200,000,000,000,000,000 GeV is the energy produced when 57.2 litres of gasoline are burnt, which is not a lot of energy at all. The trick is to contain so much energy in a particle as big as the proton, whose diameter is 0.000000000000001 m. That is, the energy density is about 10⁶⁴ GeV/m³.)
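The gasoline comparison checks out; here is a sketch of the arithmetic, assuming an energy content of about 34.2 MJ per litre of gasoline:

```python
import math

GEV_TO_JOULE = 1.602176634e-10   # 1 GeV in joules
PLANCK_GEV = 1.22e19             # Planck energy, GeV
GASOLINE_J_PER_L = 34.2e6        # assumed energy content of gasoline, J/L

planck_joules = PLANCK_GEV * GEV_TO_JOULE
litres = planck_joules / GASOLINE_J_PER_L
print(f"Planck energy ~ {planck_joules:.2e} J, or ~{litres:.1f} L of gasoline")  # ~57.2 L

# Packed into a proton-sized sphere (diameter ~1e-15 m):
radius = 0.5e-15
volume = (4 / 3) * math.pi * radius ** 3
density = PLANCK_GEV / volume
print(f"Energy density ~ {density:.1e} GeV/m^3")  # ~2.3e64
```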

We also don’t know how the Standard Model scales from the energy levels it currently inhabits up to the Planck scale. If it changes significantly as it scales up, then the forces’ contributions to the scalar potential will change also. Physicists think that if any new bosons, essentially new forces, appear along the way, then the equations defining the scalar potential – our picture of the peaks and valleys – will have to be changed themselves. This is why physicists want to arrive at more precise values of, say, the mass of the Higgs boson.

Or the mass of the top quark. While force-carrying particles are called bosons, matter-forming particles are called fermions. Quarks are a type of fermion; together with force-carriers called gluons, they make up protons and neutrons. There are six kinds, or flavours, of quarks, and the heaviest is called the top quark. In fact, the top quark is the heaviest known fundamental particle. The top quark’s mass is particularly important. All fundamental particles get their mass from interacting with the Higgs field – the more the level of interaction, the higher the mass generated. So a precise measurement of the top quark’s mass indicates the Higgs field’s strongest level of interaction, or “loudest conversation”, with a fundamental particle, which in turn contributes to the scalar potential.

On November 9, a group of physicists from Russia published the results of an advanced scalar-potential calculation to find where the universe really lay: in a local minimum or in a stable global minimum. They found that the universe was in a local minimum. The calculations were “advanced” because they used the best estimates available for the properties of the various fundamental forces, as well as of the Higgs boson and the top quark, to arrive at their results, but they’re still not final because the estimates could still vary. Hearteningly enough, the physicists also found that if the real values in the universe shifted by just 1.3 standard deviations from our best estimates of them, our universe would enter the global minimum and become truly stable. In other words, the universe is situated in a shallow valley on one side of a peak of the scalar potential, and right on the other side lies the deepest valley of all that it could sit in for ever.

If the Russian group’s calculations are right (though there’s no quick way for us to know if they aren’t), then there could be a distant future – in human terms – where the universe tunnels through from the local to the global minimum and enters a new state. If we’ve assumed that the laws and forces of nature haven’t changed in the last 13.8 billion years, then we can also assume that in the fully stable state, these laws and forces could change in ways we can’t predict now. The changes would sweep over from one part of the universe into others at the speed of light, like a shockwave, redefining all the laws that let us exist. One moment we’d be around and gone the next. For all we know, that breadth of 1.3 standard deviations between our measurements of particles’ and forces’ properties and their true values could be the breath of our lives.

The Wire
November 11, 2015

ASTROSAT's instruments being tested at ISRO's Satellite Centre in Bangalore. Source: ISRO

The pitfalls of thinking that ASTROSAT will be ‘India’s Hubble’

The Hubble Space Telescope needs no introduction. It’s become well known for its stunning images of nebulae and star-fields, and it wouldn’t be amiss to say the telescope has even become synonymous with images of strange beauty often from distant cosmic shores. No doubt saying something is like the Hubble Space Telescope simplifies the task of communicating that object’s potential and significance, especially in astronomy, and also places the object in stellar company and effortlessly elevates its public perception.

It’s for the latter reason that the comparison shouldn’t be made lightly. Not all telescopes are or can be like the Hubble Space Telescope, which sports some of the more cutting-edge engineering at play in modern telescopy, undoubtedly necessary to produce some of the images it produces (here’s a list of stunners). The telescope also highlighted the role of aestheticism in science: humans may be how the universe realises itself but the scope of that realisation has been expanded by the Hubble Space Telescope. At the same time, it has become so famous for its discoveries that we often pay no heed to the sophisticated physics at play in its photographic capabilities, in return for images so improbable that the photography has become irrelevant to our realisation of their truth.

ASTROSAT, on the other hand, is an orbiting telescope whose launch on September 28 will place India in the small cohort of countries that have a space-borne observatory. That’s enough to make ASTROSAT India’s debut on the road toward developing “Hubble-class” telescopes – but not enough to claim it will be akin to the Hubble. ASTROSAT’s primary science objectives are:

  • Understand high-energy processes in binary systems
  • Search for black hole sources in the galaxy
  • Measure magnetic fields of neutron stars
  • Study high-energy processes in extra-galactic systems
  • Detect new transient X-ray sources
  • Perform limited high angular-resolution deep field survey in UV

The repeated mentions of high energy point to the parts of the electromagnetic spectrum ASTROSAT will study – X-ray and ultraviolet emissions have higher frequencies and thus higher energies. In fact, its LAXPC (Large Area X-ray Proportional Counter) instrument will be superior to NASA’s NuSTAR X-ray telescope: both will log X-ray emissions in the 6-79 keV energy range but LAXPC’s collecting area will be almost 10x that of NuSTAR. Similarly, ASTROSAT’s UV instrument, the Ultraviolet Imaging Telescope (UVIT), studies wavelengths from 130 nm to 320 nm, much like the Cosmic Origins Spectrograph (COS) on board the Hubble, which spans 115-320 nm. COS has better angular and spectral resolution, but UVIT, as well as the Scanning Sky Monitor that looks for transient X-ray sources, has a wider field of view. The UVIT and LAXPC double up as visible-wavelength detectors as well.

In contrast, the Hubble makes observations in the infrared, visible and UV parts of the spectrum. Its defining feature is a 2.4-m wide hyperbolic mirror that serves to ‘collect’ photons from a wide field of view onto a secondary hyperbolic mirror, which in turn focuses them into the various instruments (the Ritchey-Chrétien design). ASTROSAT also has a primary collecting mirror; it is 30 cm wide.

Design of a Ritchey–Chrétien telescope. Credit: HHahn/Wikimedia Commons, CC BY-SA 3.0


But it’s quite wrong to think ASTROSAT could be like Hubble when you consider two kinds of gaps between the instruments. The first is the technical-maturity gap. Calling ASTROSAT “India’s Hubble” will imply that ISRO has reached that level of engineering capability when it has not. And making that reference repeatedly (here, here, here and here) will only foster complacency about defining the scale and scope of future missions. One of ISRO’s principal limitations is payload mass: the PSLV rocket has been the more reliable launch vehicle at our disposal and it can lift 3,250 kg to low-Earth orbit. The GSLV rocket can lift 5,000 kg to low-Earth orbit (10,000 kg if an upper cryogenic stage is used) but is less reliable, although promising. So ASTROSAT weighs 1,500 kg while the Hubble weighs 11,110 kg – the heaviest scientific satellite launched till date.

A major consequence of such a limitation is that the technology gets to define which satellite is launched when, instead of astronomers laying out what they want to find out and technology setting out to achieve it – which could be a useful impetus for innovation. These are still early days for ISRO, but it’s useful to keep in mind even this component of the Hubble’s Hubbleness. In 1974, NASA and ESA began collaborating to build the Hubble. But before it was launched in 1990, planning for the James Webb Space Telescope (JWST) – conceived from the beginning as the Hubble’s successor – had already begun in the 1980s. In 1986, an engineer named Pierre Bely published a paper outlining how the successor would have to have a 10-m primary mirror (more than 4x the width of the Hubble’s) and be placed in geostationary orbit so Earth doesn’t occlude its view of space, as it does for the Hubble. But even four years later, NASA didn’t have a launch vehicle that could heft the 6,500-kg JWST to the geostationary transfer orbit. In 2018, Europe’s Ariane 5 (ECA) will be doing the honours.

The other is the public-outreach gap. As historian Patrick McCray has repeatedly noted, telescopes are astronomers’ central research tools and the quality of astronomy research is a reflection of how good the telescopes are. This doesn’t just mean large reflecting mirrors, powerful lenses and – as it happens – heavy-lift launch vehicles but also the publication of raw data in an accessible and searchable format, regular public engagement and, most importantly, effective communication of discoveries and their significance. There was a hint of ISRO pulling off good public outreach before the Mars Orbiter Mission launched in November 2013 but that evaporated soon after. Such communication is important to secure public support, political consensus and priority funding for future missions that can expand an existing telescope’s work. For the perfect example of what a lack of public support can do, look no further than the India-based Neutrino Observatory. NASA, on the other hand, has been celebrated for its social media efforts.

As a result, NASA’s missions are more readily recognisable than ISRO’s, at least among people who haven’t been following ISRO’s launches closely since the 1960s. Not only that: while NASA’s scientists found it easier to keep the JWST project from being cancelled through multiple cost overruns – thanks to how much its ‘predecessor’ the Hubble had redefined the images of modern astronomy since the late 1990s – the Hubble’s infamous spherical-aberration fault in its first years actually delayed the approval of the JWST. McCray writes in a 2009 essay titled ‘Early Development of the Next Generation Space Telescope’ (the name of the JWST before it was changed in 2002):

Years before the Hubble Space Telescope was launched in 1990 a number of astronomers and engineers in the US and Europe were thinking hard about a possible successor to the HST as well as working to engage a broad community of researchers in the design of such a new observatory. That the launch of any such successor was likely to be many years away was also widely accepted. However, the fiasco of Hubble’s spherical aberration had a serious effect on the pace at which plans were advancing for the Next Generation Space Telescope. Thus crucially for the dynamics of building the “Next Big Machine,” the fate of the offspring was intimately tied to that of the parent. In fact, … it was only when in the mid-1990s that the NGST planning was remade by the incorporation of a series of technology developments in infrared astronomy that NASA threw its institutional weight and money behind the development of a Next Generation Space Telescope.

But for all the aestheticism at play, ISRO can’t be said to have launched instruments capable of transcending their technical specifications either: most of them have been weather- and resource-monitoring probes, crafted not to uncover elegance so much as to keep an eye out. But that doesn’t mean, say, the technical specifications of the ASTROSAT payload shouldn’t be readily available; that there shouldn’t be a single page on which one can find all the information on ISRO missions (segregated by type: telecom, weather-monitoring, meteorology, resource-monitoring, astronomy, commercial); that there shouldn’t be a channel through which to access the raw data from its science missions**; or that ISRO should continue to languish in its misguided conflation of autonomy and opacity. It enjoys a relative abundance of the former, and does not have to fight for resources to actualise missions it designs based on internal priorities. On the other hand, it’s also on the cusp of making a habit of celebrating frugality***, which could in principle provide the political administration with an excuse to deny increased funding in the future, and surely makes for a bad idea in an industry like spaceflight that mandates thoroughness to the point of redundancy. So, the day ought to come when the bright minds of ISRO are forced to fight for their missions, and missions are chosen through a contentious process.

There are multiple ways to claim to be the Hubble – but ASTROSAT is definitely not “India’s Hubble”. ISRO could in fact banish this impression by advertising ASTROSAT’s raw specs instead of letting people settle for inadequate metaphors: an amazing UV imager, a top-notch X-ray detector, a first-class optical observer. A comparison with the Hubble also diminishes ASTROSAT, first by exposing it to be not much like the Hubble at all, and then by excluding from the conversation the dozens of other space-borne observatories it has already bested. It is more exciting to think that with ASTROSAT, ISRO is just getting started, not finished.

*LAXPC will actually be logging in the range 3-79 keV.

**There appears to be one under construction.

***How long before someone compares ASTROSAT’s Rs.178 crore to the Hubble’s $2.5 billion?

Curious Bends – tumour twin, ethical non-vegetarians, fixing Indian science, and more

Apologies for the unplanned summer holiday, but we’re back!

1. Was the tumour inside her brain her twin? (Audio)

She moved from Hyderabad to do her PhD at Indiana University and began experiencing headaches and suffering from sleep disorders. Co-workers and friends would speak to her, only for the sentences to get all garbled. She was in excruciating pain. What was this tumour that was growing inside her brain? Why was it wreaking havoc in her life? What if what was growing inside her head had a life of its own? (, 13 min listen)

2. An India-born Nobel laureate’s solutions for fixing science in India 

“Venkatraman Ramakrishnan is a biologist—even though he won the Nobel Prize in chemistry in 2009—and an Indian at heart, even though he has spent most of his life in the US and the UK where his work led to the prize. His career has been unusual, just as his achievements. In December, he is going to take his new position as the president of the Royal Society, the world’s oldest and most esteemed scientific society. He will be the first non-white president in its 350-year history, and he has already made plans to invigorate scientific ties between India and the UK.” (, 7 min read)

3. The only ethical way to eat meat: become scavengers

“The first and less realistic way is to replace hunting with scavenging. Scavenging for wild animals is a non-exploitative method of obtaining animal flesh. A more achievable and safer option would be to do something closer to agriculture as we now know it: domesticate the scavenger hunt. That is, raise animals—preferably ruminants—on limited pasture with the utmost attention to their welfare, allow them a life free of human exploitation, feed them natural diets in appropriate habitats, allow them to die a natural death, and then, and only then, consume them.” (, 7 min read)

4. The woman who could stop climate change

“I asked what would happen if the emissions line did not, in fact, start to head down soon. Tears welled up in her eyes and, for a moment, Christiana Figueres, the head of United Nations Framework Convention on Climate Change, couldn’t speak. “Ask all the islands,” she said finally. “Ask Bangladesh. We just can’t let that happen. Do we have the right to deprive people of their homes just because I want to own three SUVs? It just doesn’t make any sense. And it’s not how we think of ourselves. We don’t think of ourselves as being egotistical, immoral individuals. And we’re not. Fundamentally, we all have a morality bedrock. Every single human being has that.”” (, 25 min read)

5. Although patents were designed to promote innovation, they don’t

“The public-good position on patents is simple enough: in return for registering and publishing your idea, which must be new, useful and non-obvious, you get a temporary monopoly—nowadays usually 20 years—on using it. This provides an incentive to innovate because it assures the innovator of some material gain if the innovation finds favour. It also provides the tools whereby others can innovate, because the publication of good ideas increases the speed of technological advance as one innovation builds upon another. But a growing amount of research in recent years suggests that, with a few exceptions such as medicines, society as a whole might even be better off with no patents than with the mess that is today’s system.” (, 15 min read)

Chart of the week

“By analysing global migration trends among professionals, the social network found India ended 2014 with 0.23% fewer workers than the beginning of the year. This represents the biggest loss seen in any country it tracked, according to LinkedIn.” (, 2 min read)

Countries to which Indian professionals are migrating. Source: Quartz

Source: Quartz

Climatic fates in the ooze

While governments scramble to provide the laziest climate-change commitments ahead of the UN conference in Paris later this year, the world is being primed to confront how life on land will change as the atmosphere and surface heat up. But for another world – one that has often shown up its terran counterpart in sheer complexity – scientists are far from understanding how things will change over the next 85 years.

Climatologists and oceanographers were only recently able to provide a rounded explanation for why the rate of global warming slowed in the late 1990s – and into the 2010s: because the Pacific Ocean was absorbing heat from the lower atmosphere, and then palming it off to the Indian Ocean. But soon after the announcement of that discovery, another team from the US armed with NASA data said that the rate of warming hadn’t slowed at all and that it seemed that way thanks to some statistical anomalies.

Irrespective of which side is right, the bottom line is that the oceans’ impact on climate change remains poorly understood. And it hasn’t been for want of trying: a new study in the journal Geology presents the world’s first digital map of what rests on the oceans’ floors – the first comprehensive update of our picture of the seafloor since the 1970s.

The ocean floor is in effect a graveyard of all the undersea creatures that have ever lived, but the study’s significance for tracking climate change lies with the smallest of those creatures: the tiny plankton, inhabitants of the bottommost rungs of the oceanic food chain. Their population in the surface and pelagic zones of the oceans increases with the abundance of silica and carbon, and when they die, or the animals that eat them die, their remains sink into the abyss – taking a bit of carbon along with them. This deceptively simple mechanism, called the biological pump, is what allows the world’s larger waterbodies to absorb carbon dioxide from Earth’s atmosphere.

Digital map of major lithologies of seafloor sediments in world’s ocean basins. Source: doi: 10.1130/G36883.1


The new map, made by scientists from the University of Sydney and the Australian Technology Park, shows that contrary to popular belief, the ocean basins are not blanketed by broad bands of sediments so much as dotted with pockets of them, varying in size and abundance with surface characteristics and the availability of certain minerals.

A photomontage of plankton. Credit: Kils/Wikimedia Commons, CC BY-SA


For example, diatom ooze – not watery eidolons of muck sticking to the underside of your shoe but crystalline formations composed of minerals and the silica-based remains of plankton called diatoms – is visible in widespread patches (of light green in the map) throughout the Southern Ocean, between 60º S and 70º S.

The ooze typically forms at temperatures of 0.8-8º C and depths of 3.3-4.8 km, and is abundant in the new map where temperatures range from 0.9º to 5.7º C. Before this map came along, oceanographers – as well as climatologists – had assumed these deposits lay in continuous belts, like large undersea continents. Together with the uncertainty over the pace and quantum of warming, this had scientists grappling with a shifting picture of climate change’s effects on the oceans.

The locations of diatom ooze also speak to a longstanding debate about whether the ooze settles directly below the largest diatom populations. According to the Australian study’s authors, “Diatom ooze is most common below waters with very low diatom chlorophyll concentration, forming prominent zones between 50° S and 60° S in the Australian-Antarctic and the Bellinghausen basins”. The debate’s origins lie in the common use of diatoms to adjudicate water quality: some species proliferate only in clean water, some in polluted water, and there are many other species differentiated by their preferred environments – saline, acidic, warm, etc.

The relative abundance of one species of plankton over another could, for example, become a reliable indicator of another property of the water that scientists have had trouble measuring: acidity. The dropping pH levels of the oceans are – or could be – a result of dissolving carbon dioxide. While some may view the oceans as great benefactors for offsetting the pace of warming by just a little bit, the net effect for Earth has continued to be negative: acidic waters dissolve the shells of molluscs faster and could drive populations of fishes away from where humans have set up fisheries.

Ocean acidification’s overall effect on the global economy could be a loss of $1 trillion per year by 2100, a UN report has estimated – even as a report in the ICES Journal of Marine Science found that 465 studies published between 1993 and 2014 sported a variety of methodological failures that compromised their findings of precise levels of acidity. The bottom line, as with scientists’ estimates of the rate of pelagic warming, is that we know the oceans are acidifying but are unsure by how much.

The new map thus proves useful for assessing how different kinds of ooze got where they are and what they imply about how the world around them is changing. For example, as the paper states, “diatom oozes are absent below high diatom chlorophyll areas near continents”, where sediments derived from the erosion of rocks provide a lot of nutrients to the oceans’ surfaces – in effect describing how a warming Earth posits a continuum of implications for contiguous biospheres.

The Wire
August 13, 2015

Welcome to The Shire.

The neuroscience of how you enter your fantasy-realms

If you grew up reading Harry Potter (or The Lord of the Rings, as the case may be), chances are you’d have liked to move to the world of Hogwarts (or Middle-earth), and you spent time play-acting scenes in your head as if you were in them. This way of enjoying fiction isn’t uncommon. On the contrary, the potentially intimidating levels of detail that works of fantasy offer often let us move in with the characters we enjoy reading about. As a result, these books have a not inconsiderable influence on our personal development. It isn’t for nothing that storytelling is a large part of most, if not all, cultures.

That being the case, it was only a matter of time before someone took a probe to our brains and tried to understand what really was going on as we read a great book. Those someones are Annabel Nijhof and Roel Willems, both neuroscientists affiliated with Radboud University in the Netherlands. They used functional magnetic resonance imaging, a technique that employs a scanner to identify the brain’s activity by measuring blood flow within it, “to investigate how individuals differently employ neural networks important for understanding others’ beliefs and intentions, and for sensori-motor simulation while listening to excerpts from literary novels”.

If you’re interested in their methods, their paper published in PLOS One on February 11 discusses them in detail. And as much as I’d like to lay them out here, I’m also in a hurry to move on to the findings.

Nijhof and Willems found that there were two major modes in which listeners’ brains reacted to the prompts, summed up as mentalizing and activating. A mentalizing listener focused on the “thoughts and beliefs” depicted in the prompt, while an activating listener paid more attention to descriptions of actions, replaying them in their head. And while some listeners did both, the scientists found that the majority either predominantly mentalized or predominantly activated.

This study references another from 2012 that describes how the neural system associated with mentalizing kicks in when people are asked to understand motivations, and that associated with activating kicks in when they’re asked to understand actions. So an extrapolation of results between both studies yields a way for neuroscientists to better understand the neurocognitive mechanisms associated with assimilating stories, especially fiction.

At this point, a caveat from the paper is pertinent:

It should be noted that the correlation we observed between Mentalizing and Action networks, only holds for one of the Mentalizing regions, namely the anterior medial prefrontal cortex. It is tempting to conclude that this region plays a privileged role during fiction comprehension, in comparison to the other parts of the mentalizing network

… while of course this isn’t the case, so more investigation – as well as further review of extant literature – is necessary.

The age-range of participants in the Nijhof-Willems study was 18-27 years, with an average age of 22.2 years. Consequent prompt: a similar study but with children as subjects could be useful in determining how a younger brain assimilates stories, and checking if there exist any predilections toward mentalizing or activating – or both or altogether something else – which then change as the kids grow up. (I must add that such a study would be especially useful to me because I recently joined a start-up that produces supplementary science-learning content for 10-15-year-olds in India.)

So… are you a mentalizing reader or an activating reader?

Caste, healthcare and statistics

In late November 2014, the esteemed British medical journal The Lancet published an editorial calling for the end of casteism in India to mitigate the deteriorating health of the millions of rural poor, if nothing else. The central argument was that caste was hampering access to healthcare services. Caste has been blamed for hampering many things. As Amartya Sen and Jean Dreze write in An Uncertain Glory (2014), “… caste continues to be an important instrument of power in Indian society, even where the caste system has lost some of its earlier barbarity and brutality”.

To append healthcare to that list wasn’t a big leap because casteism in India has had a tendency to grade access even to fundamental rights. The editorial cites a lecture the social activist Arundhati Roy gave last year, during which she mentioned the example of a doctor who wouldn’t treat a patient because the latter was of a lower caste. At the same time, the appending was also a controversial leap because it implies that those responsible for the ineffectual provision of healthcare services could in some way be ignoring – or even abetting – casteist practices.

Anyway, three responses to the editorial (whose links are available on the same page) provide some clarity on how caste contributes directly and indirectly to the country’s distinct health problems by interfering in unique ways with our class divisions, economic conditions and social inequalities. They can be broadly grouped as age, inheritance and wealth.

1. Age

The first letter argues that the health effects of caste are best diagnosed among older people, who have been exposed to poverty and the effects of caste for a lifetime. Citing this study (PDF), the correspondents write:

The study reported that several health measures, including self-rated overall general health, disability, and presence of a chronic disorder, are similar between scheduled tribes, scheduled castes, Brahmins, Kshatriyas, Vaishyas, and Shudras in people aged 18–49 years. However, people aged 50 years and older in scheduled tribes and castes were reported as having poorer self-rated health and generally higher levels of disability than those in less impoverished groups, which suggests that the longer the exposure to poverty, the greater the effect on the ageing process.

However, there is an obvious problem in assessing older people and attributing health concerns unique to their age to a single agent. Hindus, who comprise the religious majority in India, traditionally revere their elders. The young are openly expected to ensure that their elders’ economic security and social dignity are not significantly diminished once they retire from full-time employment. Such norms, on the other hand, are not as prevalent in other religious groups. To be sure, the idea that “longer exposure to poverty leads to more health drawbacks” is not entirely flawed, but the intensity of its effects may be confounded by traditional values.

2. Inheritance

A paragraph from the second letter reads,

People should only marry within their caste, which can lead to consanguinity. This antiquated tradition has resulted in an unusually high prevalence of specific autosomal recessive diseases in specific community or caste populations, such as diabetes, hypertension, ischaemic heart disease, mental impairments, mental illness, spinocerebellar ataxia, thalassaemia, and sickle-cell diseases.

While increasing literacy rates, especially among younger age groups, are likely to reduce caste gaps in literacy over this decade, caste seems to have left some population groups with an unenviable inheritance: the effects of detrimental biological practices. One of the studies the letter’s authors cite reports a p-value of 0.01 for consanguinity being a determinant of diabetic retinopathy (that’s strong evidence). And intra-caste marriages remain a prominent feature of caste-based social groups.

3. Wealth

The author of the third piece of correspondence is disappointed that The Lancet saw fit to think dismal healthcare has anything to do with caste, and then adds that the principal determinant across all castes is economic status (on the basis of a 2010 IIPS study). In doing so, two aspects of the caste-healthcare association are thrown up. First, that casteism’s effects are most pronounced on the economic statuses of those victimized by its practice, and that is one way of understanding its effects on access to reliable healthcare. Second, that the statistical knife cuts the other way, too: how do you attribute an effect to caste when it could just as well be due to a failure of some other system?

Spam alert

If you had subscribed to receive email alerts from my blog, you might’ve received a few hundred emails about 20 minutes ago. That’s entirely my fault. I was trying to restore some old posts that I’d deleted by mistake when I accidentally restored all my deleted posts – 850+ of them – in one go. I’m really sorry this happened, and I hope you won’t unsubscribe from my blog as a result. I would never intentionally spam my followers, and that’s always been the case. – VM.