
Archive for the ‘Research’ Category

I realise I’ve been neglecting the blog a bit over the last few months, with only a few posts that link to other things, but that’s because I’ve been busy solving a nasty problem in my research, and I think it’s now time to update what’s going on with a ‘real’ post.

For the past year or so, we’ve been working on modelling a galaxy using a set of models developed by my supervisor, Larry Widrow, and his collaborator, John Dubinski, who is at Toronto. The models assume a standard exponential disc for spiral galaxies, but the curious thing is that, because spiral galaxies appear to contain a luminous bulge at the centre (which is generally considered to be a dynamically distinct entity), there is a degeneracy in how surface brightness profiles can be broken down. Typically, we break them into a part due to the bulge (often assumed to follow a de Vaucouleurs-type r^1/4 profile) and a part due to the disc, which is assumed to be exponential; near the centre, both the bulge and the disc contribute to the total light seen in a galaxy. There is some theory behind assuming that discs form an exponential density distribution, but in principle other functional forms are allowed for discs. One of these is called a Kormendy disc (named after John Kormendy, who proposed it in 1977).
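To make the decomposition concrete, here is a minimal C++ sketch of the two standard components; the function names and units are mine, purely for illustration.

```cpp
#include <cmath>

// de Vaucouleurs r^1/4 bulge: Ie is the surface brightness at the effective
// radius re; 7.669 is the usual constant that makes re enclose half the light.
double bulge(double r, double Ie, double re) {
    return Ie * std::exp(-7.669 * (std::pow(r / re, 0.25) - 1.0));
}

// Standard exponential disc with central surface brightness I0 and scale length h.
double expDisc(double r, double I0, double h) {
    return I0 * std::exp(-r / h);
}

// The observed profile is fit as the sum of the two; the degeneracy is in
// how the central light gets divided between bulge and disc.
double totalProfile(double r, double Ie, double re, double I0, double h) {
    return bulge(r, Ie, re) + expDisc(r, I0, h);
}
```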

A Kormendy disc is basically an exponential disc with a hole in the centre: the centre of the disc emits no light, because there are no stars there. This sounds weird, but there is nothing preventing us from attributing all the central light to the bulge and letting the disc take over as the bulge falls off – we can find fits using the Kormendy functional form as well as the standard exponential one. In fact, a significant minority of galaxies appear to be better fit by a bulge + Kormendy disc than by a bulge + exponential disc.
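The post doesn’t spell out the exact functional form our models use, so here is one common parameterization of an inner-truncated (‘Kormendy’) disc, building on the sketch above: the exponential disc is simply multiplied by a cutoff that kills the light inside a hole radius. The cutoff index n = 3 is an assumption for illustration.

```cpp
#include <cmath>

// Exponential disc multiplied by an inner cutoff exp(-(rHole/r)^n):
// the profile goes to zero at the centre and approaches the plain
// exponential well outside rHole.  (n = 3 is assumed, not from the post.)
double kormendyDisc(double r, double I0, double h, double rHole, double n = 3.0) {
    if (r <= 0.0) return 0.0;   // no light at the very centre
    return I0 * std::exp(-r / h - std::pow(rHole / r, n));
}
```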

The presence of a hole is highly counterintuitive, and you may wonder whether ‘seeing’ a hole is merely a measurement artifact, or a real measurement caused by some other effect. Neither appears to be the case: measurement artifacts have been ruled out, and effects such as central dust (which could obscure disc light) are unlikely to be the cause. Instead, there are good reasons to think the holes reflect a real physical effect: many of the galaxies that display holes are known to be barred (i.e., they have a central bar of stars out of which spiral arms typically emanate), and bars could be responsible for the effect, though even some galaxies that don’t have bars are well fit by holes.

So what has this got to do with my research? Well, the galaxy we’re looking at might in fact be better fit by a disc with a central hole than by the exponential disc in our models. Originally we accounted for this by pretending that the real mass distribution was exponential while the light distribution had a Kormendy hole. This is reasonable, because the galaxy appears to have some dust at the centre, although, as I mentioned above, holes are not likely to be due to dust. So we decided to fundamentally change the structure of the models to generate a Kormendy disc instead, and that’s what I’ve been doing for the past two months… and it’s a pain, because the code is written in Fortran 77 (the F-word again), and Fortran 77 is just plain annoying.

Fortunately, the code is structured in such a way that the radial profile can be changed separately without altering anything else, so in principle it was just a matter of changing the density wherever it was called in the code. Unfortunately, the code itself has been changed several times over the past 15 years, and the syntax and variable names used across different subroutines are inconsistent. Moreover, just changing the density profile did not work at first, partly because there is a mathematical (though not a physical) singularity at the centre; to rectify this, I just redefined the central density (and its first two derivatives) to be zero.

Debugging was annoying, because outputting important quantities in subroutines ran into Fortran’s absurd recursive I/O error, which really seems to be a bug that has just never been fixed (as evidenced by the fact that it would pop up sometimes and not at other times when I had made no changes to the code). The code also has a bothersome habit of giving different results for the same input when compiled on different machines… I’ve quietly decided to ignore that, since it seems to work on my department machine (but not on my home desktop or my laptop).

Regardless, after two months of fighting through oceans of NaNs and INFs, outputting dozens of different quantities at all points of the program, and banging my head against the wall (not literally, but it wasn’t going to stay not literal for long), I tried an arbitrary but well-behaved radial profile and found that it worked perfectly, which indicated that the main problem was likely a coding bug or something to do with the centre of the Kormendy disc. I finally traced the problem to the calculation of certain spline quantities at the centre… where the density was zero. Instead of fighting my way through tedious, possibly buggy numerical subroutines, I changed the central density (and its first two derivatives) to a very small nonzero quantity… and presto! We have self-consistent Kormendy discs.
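The actual models are Fortran 77, so the following is only a schematic C++ rendering of the final fix: floor the central density (and, in the real code, its first two derivatives) at a tiny positive value so the spline setup at the central grid point never sees an exact zero. The floor value is an assumption; the post only says ‘a very small nonzero quantity’.

```cpp
#include <algorithm>
#include <cmath>

const double DENSITY_FLOOR = 1.0e-30;   // assumed; stands in for the "very small nonzero quantity"

// Kormendy-type density with an inner hole (cutoff index 3 assumed, as before).
double kormendyDensity(double r, double rho0, double h, double rHole) {
    if (r <= 0.0) return DENSITY_FLOOR;  // instead of exactly zero at the centre
    double rho = rho0 * std::exp(-r / h - std::pow(rHole / r, 3.0));
    return std::max(rho, DENSITY_FLOOR); // keeps the spline quantities finite near r = 0
}
```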

So now we have real Kormendy discs, and we can model galaxies with either a Kormendy profile or an exponential profile, and the sun is shining, and life is grand (well, sort of, but that’s another story :P).



Sean Carroll over at Cosmic Variance has succinctly explained why it is reasonable to believe in dark matter, in spite of the spate of recent modified-gravity-inspired publications that purport to explain the Bullet Cluster result, and the emergence of a relativistic version of modified gravity over the past few years. Quoth the article:

The dark matter hypothesis provides a simple and elegant fit to the Bullet Cluster, and for that matter fits a huge variety of other data. That doesn’t mean that it’s been proven within metaphysical certainty; but it does mean that there is a tremendous presumption that it is on the right track. The Bullet Cluster (and for that matter the microwave background) behave just as they should if there is dark matter, and not at all as you would expect if gravity were modified. Any theory of modified gravity must have the feature that essentially all of its predictions are exactly what dark matter would predict. So if you want to convince anyone to read your long and complicated paper arguing in favor of modified gravity, you have a barrier to overcome. These folks aren’t crackpots, but they still face the challenge laid out in the alternative science respectability checklist: “Understand, and make a good-faith effort to confront, the fundamental objections to your claims within established science.” Tell me right up front exactly how your theory explains how a force can point somewhere other than in the direction of its source, and why your theory miraculously reproduces all of the predictions of the dark matter idea (which is, at heart, extraordinarily simple: there is some collisionless non-relativistic particle with a certain density).

And people just don’t do that. They want to believe in modified gravity, and are willing to jump through all sorts of hoops and bend into uncomfortable contortions to make it work. You might say that more mainstream people want to believe in dark matter, and are therefore just as prejudiced. But you’d be laboring under the handicap of being incorrect. Any of us would love to discover a modification of Einstein’s equations, and we talk about it all the time. As a personal preference, I think it would be immeasurably more interesting if cosmological dynamics could be explained by modifying gravity rather than inventing some dumb old particle.

But the data say otherwise….

The basic problem is that, while modified gravity proponents tend to argue that dark matter is a kludge designed to fit the data even though there is no indication of what it might actually be composed of (and until recently it hadn’t even been directly detected), modified gravity itself proposes an ad hoc modification to the law of gravity without any theoretical motivation. Why, then, should modified gravity be considered a more realistic hypothesis than dark matter?


Update to my previous post: Kayll points out that the equation of motion for the stars probably leads to orbits that are unstable. Also, the three authors of the paper have, between them, two publications. If that means anything. I still want to know how they make the field theory description for the stars work.


This week I will be giving a talk on a new paper that showed up on astro-ph positing that flat galactic rotation curves can be explained by string theory (!). Galactic rotation curves – that is, graphs of the speed at which stars and gas rotate about the centre of the galaxy – are found to remain flat or even rise far outside where the visible galaxy ends, and we can’t account for that using only Newtonian dynamics if the visible matter is the only matter there: the speeds should eventually fall off gradually rather than staying high. So astronomers postulate the existence of dark matter, which adds extra acceleration to the matter orbiting the galaxy centre and makes up for the apparent difference. We’ve never directly detected the components of dark matter.* Other theories exist to explain this, such as an ad hoc, theoretically unmotivated modification of the Newtonian force law (modified Newtonian dynamics, or MOND).
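As a toy illustration of the argument (nothing here is fit to a real galaxy; units are chosen so G = 1 and the parameter values are invented): with only the visible mass, the circular speed falls off in a Keplerian way, v ~ r^(-1/2), whereas adding a dark halo whose enclosed mass grows roughly linearly with radius keeps v roughly flat.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double Mvis = 1.0;       // visible mass, treated as central once we're well outside it (assumed)
    const double haloRate = 0.02;  // dark halo mass per unit radius, i.e. M_halo(r) = haloRate * r (assumed)

    for (double r = 1.0; r <= 50.0; r += 1.0) {
        double vVisible  = std::sqrt(Mvis / r);                   // Keplerian falloff
        double vWithHalo = std::sqrt((Mvis + haloRate * r) / r);  // flattens towards sqrt(haloRate)
        std::printf("r = %4.1f   v_visible = %.3f   v_with_halo = %.3f\n", r, vVisible, vWithHalo);
    }
    return 0;
}
```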

The very intriguing string theory argument, meanwhile, seems to be that in the Nappi-Witten model of IIB string theory, where the string theory equations can be solved exactly in the case of a plane polarized gravitational field background, it can be shown that the gauge potential couples to the worldsheet of the strings via the gravimagnetic field (a gravimagnetic field is a field produced by a moving mass in general relativity, in the same way that a moving electric charge produces a magnetic field in electromagnetism). Therefore strings interacting under this field will follow Landau orbits, much like a charged particle moving in a magnetic field.

So this means (and here is the great logical leap of the paper) that stars themselves will follow the same trajectories (the authors don’t provide a description of how this happens) – and this adds a term linear in the circular speed to the force equation, which compensates for the drop in speed far from the galaxy centre! C’est un miracle! And it looks like it sort of works in real galaxies, too. Except sometimes. Probably when the galaxy doesn’t benefit from string background rotation, since the effect needs a component of the gravimagnetic field perpendicular to the galactic plane to act. But maybe there is something to this idea.
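(Schematically, and this is just my own back-of-the-envelope reading of how a linear-in-v term can flatten a curve, not the paper’s actual derivation: if the extra gravimagnetic acceleration goes like κv/r for some constant κ, the circular-speed balance becomes v^2/r = GM(r)/r^2 + κv/r, i.e. v = [κ + sqrt(κ^2 + 4GM(r)/r)]/2, which tends to the constant κ once GM(r)/r becomes small at large radii, instead of falling off in the Keplerian way.)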

Well, it’s fun to speculate…

*We have never directly detected the components of dark matter experimentally, but there are a couple of results that corroborate it: first, the dynamics of satellite galaxies suggest a density drop-off of r^(-3) far from the centre, which is predicted by cosmological simulations of structure formation. Second, the ‘Bullet Cluster’ result, in which dark matter was detected in a colliding galaxy cluster, is the first direct detection of a dark matter halo.


I haven’t posted in a long time, and the reason is that I’ve been trying to get a production run going for the research on HPCVL (a big computing cluster), and it’s always a pain trying to port your code to another machine with a different compiler. So here’s what happened.

Three weeks ago we decided to get a big MCMC run going on HPCVL, which required me to compile code on a Solaris machine using the Sun compiler. My programs use automake, which means I had lots of fun figuring out how to configure and install them into my home directory on HPCVL (aha! ‘prefix’ flag required!), and had my memory refreshed on changing environment variables/linking libraries many times. So after running configure, ‘make’ choked in several different places, not all of which seemed logical:

  • Problem: CC can’t figure out what ‘sqrt’, ‘sin’, etc. mean. Fine, I need a #include <cmath> statement in a header file. Makes sense, but it’s still not clear to me why it compiles on my computer at Stirling; possibly something to do with gcc 3.2, because it won’t compile without the aforementioned directive on my home computer with gcc 4. In any case, that’s an easy fix (see the header sketch after this list).
  • Problem: There’s some kind of scope problem involving a member defined inside the class definition with an extraneous scope operator. I think. I still have no idea what the error message meant, but removing the extraneous scope operator made it go away. Good enough.
  • Problem: I can’t compile the cosmology header file as it is, because the compiler won’t accept initializing const static int members (the Hubble constant, among others) in the ‘protected:’ section of a class definition. Since this pushes the limits of my knowledge of C++, I try to initialize them in random places elsewhere in the class file, notably under where it says ‘public:’, using a constructor, keeping the original declarations in place. This works, in the sense that the original compiler error message is gone and I now get a different one. This time the error occurred at the linking stage, with one of those horrific error messages that looks like a long string of random letters and numbers and says absolutely nothing helpful, as opposed to the cryptic compilation error message I got earlier, which at least told me what line the error was on. It’s complaining about doubly declared variables or something like that, and I notice that at the end of the long string of meaningless characters, the names of the rogue const static ints are appended. This tells me that the problem remains with the initialization of these members, but nothing else. After multiple ham-handed attempts at working around the problem using all sorts of syntactical gymnastics, I finally declared them as global variables. Problem solved (the more portable pattern is sketched after this list).
  • After finally getting everything to compile correctly, I tried a few things and noticed problems with the calculation of the rotation curve and density profiles of sample systems. The rotation curve problem is easy, because somehow the wrong calculation for the tangential velocity was included in the original source (long since fixed on my computer here). The density is slightly more difficult because, for no clear reason, it calculates the density profile correctly and then spits out the wrong value. More specifically, it gets the correct density values while in the for loop that loops through each bin (which I checked by outputting the density in every bin within the for loop), but outputs the wrong values immediately on exiting the for loop (which I checked by outputting the density values in a separate for loop). I still haven’t figured that one out. I could just output the density within the original for loop, but I would like to know why it chokes after leaving the original loop.
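For what it’s worth, here is a minimal sketch of the first and third fixes above (hypothetical names and values; the real cosmology header is more involved, and the value 70 for the Hubble constant is assumed). Stricter compilers want the <cmath> include spelled out, and the portable way to handle a static const member is to declare it in the class and define it once outside:

```cpp
// cosmology.h  (hypothetical, illustrative only -- not the real header)
#ifndef COSMOLOGY_H
#define COSMOLOGY_H

#include <cmath>   // spell the include out; gcc 3.2 tolerated its absence, Sun CC and gcc 4 do not

class Cosmology {
public:
    double comovingFactor(double z) const { return std::sqrt(1.0 + z); }  // toy use of <cmath>
protected:
    static const int H0;   // declaration only; leave the value out of the class body
};

#endif

// cosmology.cpp (shown here in the same listing for brevity)
const int Cosmology::H0 = 70;   // one out-of-class definition keeps both the compiler and linker happy
```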

So the analysis programs (more or less) work, or seem to. Compiling the N-body code itself goes smoothly and seems to work fine as well. Now to compile the galaxy-building programs.

  • There is no excuse in this day and age to be forced to submit to a 72-character-per-line limit (really 66 characters, because you can’t use the first six columns except for special cases) like Fortran 77 requires. So I have to change a flag in the makefile. Fine. Except the Sun compiler’s flag can only handle up to 132 characters per line. I shouldn’t be making lines that long anyway; fair enough.
  • After compilation, I compare two of the same models generated on HPCVL and on my computer here at Stirling. Slight discrepancy in the central potential. Hmm. Larry suggests doing some detective work to uncover the cause. After multiple write(*,*) statements, I discover that tiny roundoff errors on each machine can lead to big differences when the quantities involved are really close to zero and get used as arguments to logs (a toy illustration follows this list). Shouldn’t be too big a deal.
  • I then try building a galaxy. Everything goes (more or less) well. Analyse the model; the density still doesn’t work. Phooey. But I’ll manage. Everything else seems okay. So after getting a trial MCMC run started, I did a little more inspecting of the resulting models. I look at one model, generate the disc, bulge and halo, and work out the rotation curve. I notice a slight anomaly: the halo rotation is not being calculated correctly; it’s too low. Output the mass as a function of radius. Only then do I discover, to my horror, that the halo has a giant hole in the centre. Everything else is correct – in particular, the total mass and tidal radius are correct. It’s just not distributing the mass correctly: nothing ends up in the centre. Nada. I try changing one halo parameter. This time, everything is fine. I try changing the whole parameter set. Everything is fine. Okay. So what’s going wrong? Larry thinks it’s a bug in the interpolation routines that would take a month to find. Since we can’t control where an MCMC chain goes in parameter space, there’s no telling if it’ll find one of these regions where the bug manifests itself. What a mess. I’m toying with the idea of spending my Christmas holidays rewriting all this code in C. But for now we’ll do the MCMC run on my regular computer and use HPCVL for the simulations.
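Here is a toy C++ illustration of the log-near-zero problem (not the actual Fortran code): the same sum, associated two different ways, differs by a single rounding, which is harmless until the near-zero result is fed to a log.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // The same mathematical sum, associated differently; the two results
    // differ by one ulp (~1e-16), which is normally harmless.
    double a = (0.1 + 0.2) + 0.3;   // 0.6000000000000001 in IEEE doubles
    double b = 0.1 + (0.2 + 0.3);   // exactly the double nearest 0.6

    // Subtract 0.6 and take a log, and the two "identical" calculations
    // are suddenly wildly different:
    std::printf("log(a - 0.6) = %g\n", std::log(a - 0.6));  // about -36.7
    std::printf("log(b - 0.6) = %g\n", std::log(b - 0.6));  // -inf
    return 0;
}
```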

Now I’m trying to figure out how to account for the asymmetric drift in the galaxy we’re trying to model. Which, you may notice, is a problem that has actual physics involved. It’s nice to have one of those once in a while – cuz, ya know, I am technically an astrophysicist and not a programmer.


Here are some interesting things that I should be blogging about:

  • This is probably not the best way to go about doing research, but after returning from my vacation in Montreal last week I learned that we’ve changed the way we analyse our galaxy parameter space for the second time in less than a month (see my university web page for more info). We started with a downhill simplex method, which sort of worked but didn’t lend itself to a systematic application; then we moved on to a gradient method, which was more systematic but gave occasionally weird results; and now we’re working on Markov chain Monte Carlo methods, to which we will subsequently apply Bayes’ theorem (a bare-bones sketch of the idea appears after this list). This is actually the most promising approach, but I remain slightly put off by the seemingly regular changes in strategy. That being said, I’m thinking of changing strategies again, to a Hamiltonian Monte Carlo approach, to speed up the process of modelling the chi-square space. (At least this way I get to write my own code in a language that isn’t Fortran instead of using my supervisor’s.)
  • Anything to bring down the price of textbooks and make them more accessible (financially) is a good thing. The Global Text Project (globaltext.org, posting from old browser at school again) will use a wiki-style approach to create textbooks – i.e., a collaborative approach where many people can contribute. In this case, the process is overseen by academics and experts in the field, distinguishing it from most wikis. Besides making such knowledge available at a much lower price to students, it should allow such knowledge to flow more freely among students in developing nations, who are less likely to be able to afford the exorbitant prices charged by textbook publishers. This will hopefully force textbook publishers to rethink their business model; with quality free texts widely available, they should be less able to charge high prices for textbooks.
  • Somehow, Hewlett Packard’s chairperson thinks it’s okay to spy on HP’s executives using the unethical and very likely illegal method of ‘pretexting’ to find out who leaked some info to the press, while leaking info to the press is apparently a serious breach of personal integrity. It doesn’t seem consistent to believe that the former is fine while the latter is not, but that’s exactly the impression that HP’s own statement on the matter gives (see techdirt). You can’t claim that you uphold your own employees’ (as well as customers’) privacy, say that you expect them to hold high standards of personal integrity, and then break both of those tenets to find out who leaked the information (which does not appear to have been particularly interesting). It sounds like what Dunn did was far worse than what Keyworth did; making matters worse, most of the board, with the exception of Tom Perkins, does not seem to care (or at least sided with Dunn on the issue). Makes you wonder what types of people run the company. This may seem a fairly irrelevant matter to most people, but it reflects very poorly on the company when the board implicitly sanctions such actions from the chairperson. There’s something so repulsively hypocritical about this that it makes me question whether or not I should consider buying from HP at all in the future. What type of attitude do you suppose they take to their own customers’ privacy?
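As promised above, here is a bare-bones Metropolis-style MCMC sketch in C++. The chi-square function is a made-up two-parameter bowl standing in for the real galaxy fit, and the proposal width, chain length and starting point are arbitrary; with flat priors, the accept/reject rule makes the chain sample the Bayesian posterior proportional to exp(-chi^2/2).

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical stand-in for the real chi-square of a galaxy model:
// a simple two-parameter quadratic bowl, so the sketch is self-contained.
double chiSquare(const std::vector<double>& p) {
    return (p[0] - 1.0) * (p[0] - 1.0) + 10.0 * (p[1] + 2.0) * (p[1] + 2.0);
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> step(0.0, 0.1);        // proposal width (tuning parameter)
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    std::vector<double> p = {0.0, 0.0};                     // starting point in parameter space
    double chi2 = chiSquare(p);

    for (int i = 0; i < 100000; ++i) {
        // Propose a random step in parameter space.
        std::vector<double> q = {p[0] + step(rng), p[1] + step(rng)};
        double chi2q = chiSquare(q);

        // Metropolis rule: always accept downhill moves; accept uphill moves
        // with probability exp(-delta(chi^2)/2), i.e. the likelihood ratio.
        if (chi2q < chi2 || unif(rng) < std::exp(-0.5 * (chi2q - chi2))) {
            p = q;
            chi2 = chi2q;
        }
        if (i % 10000 == 0)
            std::printf("%d  %g  %g  chi2 = %g\n", i, p[0], p[1], chi2);
    }
    return 0;
}
```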
