Archive for November, 2006

Hmm…

I am nerdier than 96% of all people. Are you nerdier? Click here to find out!

Been wondering for a month if I should brag about this…um, being a nerd/geek/dork is cool these days, right?

Read Full Post »

Right now there’s a man vs. machine chess match between world champion (the real world champion) Vladimir Kramnik and Fritz 10. It is well-established at this point that computers can play better chess than humans, and I don’t think any competition pitting people against computers has been won by the human side in the past few years. Garry Kasparov’s defeat by Deep Blue in 1997 was well-publicized, but Kasparov has historically tended to underperform against computers because his dynamic, sacrificial style is poor strategy against the silicon monsters. Since computers calculate tactics dreadfully well, a more strategic style is usually called for.

Currently, the strongest chess computer is generally considered to be Hydra, which runs on specialized hardware (Fritz is probably very close) and last year beat the English GM Michael Adams 5.5-0.5 in a six-game match. Adams, a defensive, positional player, was probably a better-suited adversary for a strong computer than a player like Kasparov. So there’s pretty much no hope that humans will ever regain their supremacy in chess. Still, Kramnik is one of the deepest positional players today, and probably has a better understanding of chess than anyone else (including the people who actually program the computers), so if anyone can defend humanity’s honour, Kramnik is the man.

Then he went and did this.

Read Full Post »

Today a number of large Canadian ISPs announced that they are partnering with Project Cleanfeed, meaning they will now actively block access to web sites that (supposedly) contain child pornography. This is accomplished using an actual list of sites that have been investigated by cybertip.ca and found to contain such content.

This is a bad idea, for several reasons.

Banning or restricting access to any type of content on the internet, no matter how objectionable, is dangerous because the practice invites abuse, and that is a very real possibility. It is not difficult to extend the list to other sites that someone else deems objectionable: hate speech, hacking sites, gambling, legal pornography, sites that may infringe copyright, and so on. Nor is this hypothetical: this comment shows what happens when such a list is abused. The IFPI got a Danish court to extend that country’s child pornography blacklist to include Allofmp3.com.

Some countries set the age cutoff for child pornography lower than Canada does. Will this blacklist cut off access to sites that are legal in those countries? While it is still illegal in Canada to access such sites, there is no question that, morally, they do not fall into the same category as ‘real’ (i.e. prepubescent) child pornography. The same goes for, e.g., written child pornography, which is legal in countries such as the United States. This falls too close to having a third party legislate morality for my comfort.

Without extensive oversight to determine that the list is being properly administered, there is a serious risk that legitimate web sites will be blacklisted. The list is not available to the public, and it is not likely to be made public, because doing so would essentially hand a list of sites to anyone who wants to access child pornography. But without oversight, the only way to find out whether an inaccessible web site has been blocked is to go through an appeal process. What about sites such as NAMBLA or Boychat? Though legal, they could easily get caught up in such an ill-considered scheme. Somehow I suspect that the appeal process is not going to be optimized in favour of assuming that sites have been blacklisted in error. Cory Doctorow points out the sheer extent of the problems that could arise here:

The idea is fundamentally broken. First of all, it seems to me that keeping a secret list of “evil” content is inherently subject to abuse. This is certainly something we’ve seen in every single other instance of secret blacklisting: axe-grinding, personal vendettas, and ass-covering are the inevitable outcome of a system in which there is absolute authority, no due process, and no accountability.

The appeals process is likewise flawed. If the self-appointed censors opt to block, for example, material produced by and for gay teens about their sexuality (a common “edge-case” in child porn debates), then teens will have to out themselves as gay to avail themselves of the appeals process.

Notwithstanding this, it’s hard to imagine how an appeals process would unfold. How could someone who wanted a site unblocked marshal a cogent argument for his case unless he could see the content and determine whether it was being inappropriately blocked?

Likewise, there is no imaginable way in which such a system could possibly be comprehensive in blocking child porn. It will certainly miss material that is genuinely child pornography. The Internet is too big for such a list to be compiled, and the censorship problems are compounded as the lists grow.

If, for example, Canada were to import Australia’s secret list of bad sites, then Canadians would then be subject to the potential abuses of unscrupulous (or unintelligent) censors in Australia, as well as in Canada. You’d have to trust the Canadian censor-selector process, and the Australian one. The longer lists that would emerge from the merger process would be harder to audit — the haystacks of real porn larger, the needles of censorship smaller.

Worst of all is the problem of site-level blocking for user-created content sites like Blogger, Typepad, Geocities, YouTube, etc. These sites inevitably contain child porn and other objectionable material, because new, anonymous accounts can be created there by people engaged in bad speech. However, these sites are also the primary vehicle by which users express their own feelings and beliefs and are frequently posted to anonymously by whistle-blowers, rape victims, dissidents in totalitarian states and others who have good reason to hide their identities.

Furthermore, this is not going to work. Anyone actively trafficking in child pornography is presumably familiar with anonymizing techniques such as anonymous proxies, TOR, darknets, etc. Pushing them towards these techniques will only make them harder to track, which does nothing to help law enforcement or to make children safer, and it also puts pressure on services such as TOR, which is supposed to be used for, e.g., political speech in repressive regimes.

It is, moreover, not clear to me that this will serve any useful purpose. Is there any proof that preventing access to child pornographic web sites makes children safer? That it makes pedophiles less likely to offend? One could argue that accessing child pornography acts as a release for many pedophiles rather than an incitement, and therefore makes them less likely to offend. Moral objections alone are a spurious justification for this action; the only reasonable justification is that it actually makes children safer. I worry that it may make children less safe by making it more difficult to track suspected pedophiles. If anyone is aware of research that explicitly demonstrates that this helps children, please post in the comments below.

The only good thing about this is that it is a voluntary action taken by the ISPs, which means it is not as yet mandated by government regulation. In addition, as Michael Geist points out, the difference between child pornography and other objectionable content that I outlined above is that merely accessing child pornography is a criminal offence in this country. Geist argues that this will provide a natural barrier to extending the blacklist to other sites; for the reasons I outline above, I am skeptical of this view.

As a matter of principle, no ISP should be taking action to restrict access to any website. The reason the internet is such a powerful medium is that, historically, it has allowed full, free access to anything that anyone cares to publish online. I have no interest in visiting child pornographic web sites, but I do not want my websurfing to be hampered by potentially arbitrary network-level blocking. This action risks creating a precedent that allows ISPs to arbitrarily restrict access to online content, which in turn will compromise the internet’s effectiveness as a tool to promote free speech both locally and globally. We should always err on the side of protecting free speech.

Read Full Post »

Dawkins says that he hates fundamentalist religion because it is “hell-bent on ruining the scientific education of countless eager minds.”

Quote: ‘Fundamentalist religion is hell-bent on ruining the scientific education of countless thousands of innocent, well-meaning, eager young minds. Non-fundamentalist, “sensible” religion may not be doing that. But it is making the world safe for fundamentalism by teaching children, from their earliest years, that unquestioning faith is a virtue.’


Read Full Post »

A couple of weeks ago Microsoft and Novell signed a patent cross-licencing deal that would indemnify Novell customers against patent lawsuits. Ars Technica speculates that the deal is really just about providing virtualization support for customers who also use Linux for some purposes, and that it’s not clear that it’s actually detrimental to Linux or to open-source software in general – but let’s pretend, for the time being, that MS has got something nasty up its sleeve. I’ve read a little about this on news sites and poked around the Slashdot forum; here’s what I’ve gleaned and what I think this is about.

Earlier this week Steve Ballmer said that Linux infringed on MS’ intellectual property. How, he did not say, leading me to think that this is mainly chest-thumping, just some good ol’ FUD. MS filing suit against Linux users or distributors would be heavy-handed and would attract the attention of both IBM and government regulators (not to mention invite intense scrutiny of Microsoft’s own patent portfolio and past IP infringements on their part). So what would Ballmer do in this situation? Perhaps find a way to undermine Linux while taking precautions to make it look like MS is going out of its way to help customers who also want to use Linux. Coupled with statements of the type Ballmer just made, the deal does not give Linux the kind of credibility MS would not want it to have: although MS is essentially acknowledging Linux as a viable competitor, the deal implies that a licence is needed because of possible IP issues. In other words, it’s fine to use Linux provided that it happens under a licence MS approves of. This is one way to compromise the perception of Linux as an alternative to Windows in the minds of many IT managers and CTOs.

Viewed in this light, MS may simply be acknowledging that Linux is not going away and will now try to keep it on the server – and, in so doing, keep their dominance on the desktop and in corporate environments. So the new strategy may be to control how customers perceive they can use Linux.

Let’s take this one step further. If you were a giant software company frightened to death (pretending, remember) of a competing product that is not backed by a single company, is given away for free, and is in many ways better than your own, what strategy would you pursue to try to siphon market share away from it? You can’t just steamroll over it like you did with many other competitors in the 80s and 90s. You can’t start infringement suits willy-nilly because you’d get hit right back. You could try to compete on the open market, but, realistically, that is difficult against a competitor that has more programmers than you to call upon, many of them willing to work on a voluntary basis, and that can respond to such challenges by turning on a dime (which, after all, is one of the major benefits of OSS). A much more attractive option is to get a foothold in the development process so that you can directly (not just through FUD) control what the product does and how it can be used. By signing with a major Linux provider, you gain influence over what type of functionality goes into subsequent development on GNU/Linux systems. If Linux developers do not take kindly to the deal, the result is fragmentation of the Linux space – that is, forking the project. A fragmented Linux is, of course, less able to compete, because developers will be drawn to one or the other kernel stream. If MS were to sign deals with other providers – which seems unlikely, given the backlash to this deal – it would further exacerbate the fragmentation problem.

So the Microsoft strategy may be to divide and conquer. They may even try to include proprietary components in the licenced Novell version of Linux (I don’t know how, but maybe their lawyers will figure something out). If the kernel is forked, and no other Linux distributor accedes to MS’ overtures to licence ‘their’ IP in Linux, Novell will likely wither without the resources of the open source community at large to draw upon. If Novell remains successful, however, it will pressure other vendors to sign licencing deals with MS, which will further fragment the market and could lead to more forking. This accomplishes two things for MS: first, it prevents Linux from growing market share on the desktop, because the most commonly used desktop distros are nonprofit and are unlikely to obtain licencing deals from MS, which protects MS’ monopoly there; second, it weakens the ability of the community at large to compete, which will allow MS to slowly pick off Linux vendors. This is a long-term project for MS; there’s no way for them to defeat Linux quickly, so they will pursue a strategy of slowly squeezing Linux vendors and gradually weakening the entire project.

Divide and conquer. This is Microsoft’s strategy for dealing with Linux.

Read Full Post »

A few interesting things:

  • Techdirt has a series of posts discussing the economics of scarcity in the context of digital goods. The essential point is that digital goods do not suffer from scarcity, which naturally drives down the cost of the content – something that will force (is already forcing?) content providers to recognize a dramatic shift in market dynamics. Link.
  • Now I know what my people are called: Star Wars virgins. According to Slashdot. (By which, let me clarify, is meant people who have never seen any of the Star Wars movies, and not Star Wars geeks who happen to be virgins.) I will not watch any Star Wars flick out of principle. I don’t know what principle yet, but I’m not watching them. Quoth the article:

The Challenge was simple: Lose my virginity. More specifically, my Star Wars virginity. This was something I had held for so long that I had developed a sort of pride about it. It made me unique in this vast world of passionate and eccentric fans. Was now the time? Would I even be ready?

  • Quote of the day: A bit late for this, but it’s from either Christopher Hitchens or Andrew Sullivan on CNN just before last Tuesday’s congressional elections: “This isn’t an election, it’s an intervention.” Heh.

Read Full Post »

I haven’t posted in a long time, and the reason is that I’ve been trying to get a production run going for the research on HPCVL (a big computing cluster), and it’s always an adventure trying to port your code to another machine with a different compiler. So here’s what happened.

Three weeks ago we decided to get a big MCMC run going on HPCVL, which required me to compile code on a Solaris machine using the Sun compiler. My programs use automake, which means I had lots of fun figuring out how to configure and install them into my home directory on HPCVL (aha! ‘prefix’ flag required!), and had my memory refreshed on changing environment variables/linking libraries many times. So after running configure, ‘make’ choked in several different places, not all of which seemed logical:

  • Problem: CC can’t figure out what ‘sqrt’, ‘sin’, etc. mean. Fine, I need a #include <cmath> statement in a header file. Makes sense, but it’s still not clear to me why it compiles on my computer at Stirling. Possibly something to do with gcc 3.2, because it won’t compile without the aforementioned directive on my home computer with gcc 4. In any case, that’s an easy fix (there’s a sketch of it after this list).
  • Problem: There’s some kind of scope problem involving a member defined inside the class definition with an extraneous scope operator in front of it. I think. I still have no idea what the error message meant, but removing the extraneous scope operator made it go away. Good enough.
  • Problem: I can’t compile the cosmology header file as it is, because the compiler won’t accept initializing const static int members (such as the Hubble constant) in the ‘protected:’ section of a class definition. Since this pushes the limits of my knowledge of C++, I try to initialize them in random places elsewhere in the class file, notably under where it says ‘public:’, using a constructor, keeping the original declarations in place. This works, in the sense that the original compiler error message is gone and I now get a different one. This time the error occurs at the linking stage, with one of those horrific error messages that look like a long string of random letters and numbers and say absolutely nothing helpful, as opposed to the cryptic compilation error I got earlier, which at least told me what line the error was on. It’s complaining about doubly declared variables or something like that, and I notice that at the end of the long string of meaningless characters, the names of the rogue const static ints are appended. This tells me that the problem remains with the initialization of these members, but nothing else. After multiple ham-handed attempts at working around the problem using all sorts of syntactical gymnastics, I finally declared them as global variables. Problem solved (though see the sketch of a cleaner approach after this list).
  • After finally getting everything to compile correctly, I tried a few things and noticed problems with the calculation of the rotation curve and density profiles of sample systems. The rotation curve problem is easy, because somehow the wrong calculation for the tangential velocity was included in the original source (long since fixed on my computer here). The density is slightly more difficult because, for no clear reason, it calculates the density profile correctly and then spits out the wrong value. More specifically, it gets the correct density values while in the for loop that loops through each bin (which I checked by outputting the density in every bin within the for loop), but outputs the wrong values immediately on exiting the for loop (which I checked by outputting the density values in a separate for loop). I still haven’t figured that one out. I could just output the density within the original for loop, but I would like to know why it chokes after leaving the original loop.
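
For the curious, here’s roughly what that <cmath> fix amounts to. This is a toy example, not my actual analysis code, and the explanation is my best guess (it matches the symptoms): older gcc releases like 3.2 apparently let the math declarations leak in through other standard headers, while Sun CC and gcc 4 want the include spelled out.

    // Toy illustration, not the real code: on gcc 3.2 the math declarations
    // tended to leak in through other standard headers, so code like this
    // could compile by accident.  Sun CC and gcc 4 are stricter.
    #include <iostream>
    #include <cmath>   // the fix: without this, 'sqrt' is undeclared on the
                       // stricter compilers

    int main() {
        std::cout << std::sqrt(2.0) << std::endl;
        return 0;
    }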
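
As for the const static int business: the global-variable workaround does the job, but for the record, the portable pattern that both compilers should accept (sketched below with an invented class and member names – the real cosmology header obviously differs) is to declare the member inside the class and put the single definition, with its value, in one source file.

    // Sketch of the portable pattern for const static integral members;
    // the names here are made up for illustration.
    #include <cstdio>

    // --- what would live in the header ---
    class Cosmology {
    protected:
        static const int H0;      // declared here with no initializer, which
                                  // even older compilers such as Sun CC accept
    public:
        double littleH() const { return H0 / 100.0; }
    };

    // --- what would live in exactly one .cpp file ---
    const int Cosmology::H0 = 70;  // the single definition with the value;
                                   // a second definition elsewhere is what
                                   // produces 'doubly declared' link errors

    int main() {
        Cosmology cosmo;
        std::printf("h = %.2f\n", cosmo.littleH());
        return 0;
    }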

So the analysis programs (more or less) work, or seem to. Compiling the N-body code itself goes smoothly and seems to work fine as well. Now to compile the galaxy-building programs.

  • There is no excuse in this day and age to be forced to submit to a 72-character-per-line limit (really 66 characters, because you can’t use the first six columns except in special cases) like Fortran 77 requires. So I have to change a flag in the makefile. Fine. Except the Sun compiler’s flag can only handle up to 132 characters per line. I shouldn’t be making lines that long anyway; fair enough.
  • After compilation, I compare two of the same models generated on HPCVL and on my computer here at Stirling. Slight discrepancy in the central potential. Hmm. Larry suggests doing some detective work to uncover the cause. After multiple write(*,*) statements, I discover that tiny roundoff errors on each machine can lead to big differences when the quantities involved are really close to zero and get used as arguments to logs (a toy sketch of the mechanism follows this list). Shouldn’t be too big a deal.
  • I then try building a galaxy. Everything goes (more or less) well. Analyse the model; the density still doesn’t work. Phooey. But I’ll manage. Everything else seems okay. So after getting a trial MCMC run started, I did a little more inspecting of the resulting models. I look at one model, generate the disk, bulge and halo, and work out the rotation curve. I notice a slight anomaly: the halo rotation is not being calculated correctly – it’s too low. Output the mass as a function of radius. Only then do I discover, to my horror, that the halo has a giant hole in the centre. Everything else is correct – in particular, the total mass and tidal radius are correct. It’s just not distributing the mass correctly: nothing ends up in the centre. Nada. I try changing one halo parameter. This time, everything is fine. I try changing the whole parameter set. Everything is fine. Okay. So what’s going wrong? Larry thinks it’s a bug in the interpolation routines that would take a month to find. Since we can’t control where an MCMC chain goes in parameter space, there’s no telling whether it’ll find one of these regions where the bug manifests itself. What a mess. I’m toying with the idea of spending my Christmas holidays rewriting all this code in C. But for now we’ll do the MCMC run on my regular computer and use HPCVL for the simulations.
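
Regarding the central-potential discrepancy above, here is a toy C++ illustration of the roundoff mechanism (not the actual Fortran code, just the idea): a quantity computed as the difference of two nearly equal numbers carries an absolute error of only a few ULPs, but that is a large relative error when the result sits near zero, and a log then turns it into a difference you can actually see.

    // Toy illustration of roundoff near zero being amplified by a log.
    #include <cmath>
    #include <cstdio>

    int main() {
        double big  = 1.0e8;
        double tiny = 1.0e-7;

        double exact   = tiny;                // what the difference "should" be
        double rounded = (big + tiny) - big;  // what a machine may actually get:
                                              // the sum rounds to the nearest
                                              // representable double near 1e8

        std::printf("exact   = %.17g\nrounded = %.17g\n", exact, rounded);

        // The absolute error is harmless on its own, but it amounts to a few
        // percent relative to a number this close to zero, and the log turns
        // that into a visible shift.
        std::printf("log(exact)   = %.10f\nlog(rounded) = %.10f\n",
                    std::log(exact), std::log(rounded));
        return 0;
    }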

Now I’m trying to figure out how to account for the asymmetric drift in the galaxy we’re trying to model. Which, you may notice, is a problem that has actual physics involved. It’s nice to have one of those once in a while – cuz, ya know, I am technically an astrophysicist and not a programmer.

Read Full Post »
