As seen on Slashdot:

Adobe uses a proprietary encrypted communications system between their Flash player and their Media Server product. This is intended to ensure that only people who pay for Flash Media Server can stream Flash movies, and only official clients can access them.

In other words, it’s a copy protection (DRM) scam. It’s completely antithetical to the goals of running a free software desktop or serving content to users using free software. However, despite Adobe’s claims, it doesn’t actually provide any security except through the obscurity of the protocol and some short secret keys.

lkcl claims to have created an open source, clean-room implementation of this protocol (RTMPE), and published it on SourceForge. Despite promising in January to open RTMP, Adobe wants to protect their revenue stream, so they sent a DMCA takedown notice to SourceForge, who complied by censoring the project.

If you value your freedom to publish and receive Flash videos using free software, help us fight Adobe and embarrass SourceForge by nominating rtmpdump for “Best Project for Multimedia” in the SourceForge Community Choice awards.

If you just want to download it, here are some handy links now that it’s been censored by SourceForge: LKCL sehe.nl megashare.com mininova.org sumotorrent.com fulldls.com btjunkie.org mybittorrent.com demonoid.com mininova/TOR.


Fouad Bajwa writes of an unusual deal between the Pakistani government and Microsoft, on the s-asia-it mailing list:

To all members of the IT Industry & Technical Community,

Everyone is well aware that [the] global financial recession has hit even the Tech Giants, where companies like Microsoft and Intel have [been] saying goodbye to thousands of their employees. The situation doesn’t seem to be getting better, but interestingly our Pakistani National ICT R&D Fund is thinking about helping Microsoft in Pakistan, and we from the industry feel that it is sad that instead of supporting local Hi-Tech Start-ups and struggling IT Entrepreneurs [they are] funding the usual “Non-Useful” activities like conferences [and] so-called accelerator programs for Pakistan…

To be fair, they have funded a number of open source projects, and funding for conferences and other networking activities is always in short supply for those without a significant marketing budget.

I have come to know through my friends in the IT Industry that the National ICT R&D Fund has signed an MoU with Microsoft to fund the Microsoft Developers Conference and something called an “Innovators Accelerator Program”. The funds haven’t been disbursed yet, but it definitely annoys me and many of my friends in the IT industry that our government should fund initiatives of Microsoft, which is already a global giant. I have heard that around 5 million rupees [about USD 60,000] or thereabouts [has been allocated] for the innovation accelerator program, which will involve Microsoft training, entrepreneurship training and connecting with Microsoft partners, and similar amounts [for] related [activities].

I also find it strange that Pakistan would choose to invest money in Microsoft at this time, despite the country’s obvious experience and competence with open source. Others come to the Fund’s defence, saying:

ICT R&D Fund is one of the few institutions in the country that are doing an excellent job… [it] is the role of a funding agency to encourage collaborations for promoting research cultures and provide help in bringing the best minds closer.

But nobody has denied that the Fund has signed an MoU with Microsoft, or argued for its benefit to Pakistan. Fouad also writes:

When will our national institutions support its people, the vulnerable, not the already empowered? Why doesn’t it support the local entrepreneurs, the ones that don’t have large companies or university backings? Why does it have liabilities to include universities whereas it knows what the state of R&D in universities has been except for a few handful? Why doesn’t it include this money for Social Enterprise and created a NATIONAL INCUBATION AND ACCELERATION CENTRE where people like me or you or anyone can walk in and build their ideas and companies?

Ashiq Anjum replies that “No funding agency can build incubators for industry, probably this is outside of their scope.” But the Fund’s stated goal is “To transform Pakistan’s economy into a knowledge based economy by promoting efficient, sustainable and effective ICT initiatives through synergic development of industrial and academic resources.”

It sounds entirely reasonable on this basis for the Fund to assist university graduates both in gaining skills that are useful in the knowledge industry and in setting up their own companies in that industry. Indeed, another stated goal is to “make Pakistan an attractive destination for service oriented and research and development related outsourced jobs.”

We can establish centres like http://www.socialinnovation.ca/ and help local entrepreneurs in business development and social innovation with the same amount of money[.] That helps and benefits our people and companies directly as well as innovate for local and international markets.

I agree that all countries should support local development, training and entrepreneurship as much as possible.

ZDNet reports that “In an effort to improve Web users’ compatibility experience, Microsoft added a new, user-selectable Compatibility List to the Release Candidate test version of IE 8 that the company released in January… Microsoft describes the list – Version 1.0 of which includes 2,400 sites that don’t render properly in IE 8 (in other words, an ‘incompatibility list’) – as a tool designed to ‘make sure IE8 customers have a great experience with highly trafficked sites that have not yet fully accommodated IE8’s better implementation of web standards.’”

(read more from the horse’s business end at http://blogs.msdn.com/ie/archive/2009/02/16/just-the-facts-recap-of-compatibility-view.aspx)

I think this is interesting. On the one hand Microsoft has finally (finally!) decided to bite the bullet and fix some of the bugs in IE that cause web developers so much pain. In my experience, supporting IE’s buggy CSS takes about as much effort as developing the CSS for Firefox in the first place.

Microsoft’s excuse has always been that users would view sites that rendered badly in a new, standards-compliant IE and blame IE for the problems. This is an understandable, if self-serving, excuse. Perhaps with IE’s market share below 70%, they feel that they can no longer get away with it on the basis of user base alone.

On the other hand, the list has some very interesting entries, apart from nearly every Chinese website in existence:

  • amazon.com
  • blogger.com
  • ebay.com
  • facebook.com
  • google.com
  • live.com
  • microsoft.com
  • msn.com
  • myspace.com
  • wikipedia.org
  • yahoo.com
  • youtube.com

I can’t think of a high-profile site that’s not on the list. I think Microsoft has asked a million monkeys to beta-test IE8 and they’re hitting the error report button randomly.

Otherwise, I can only assume that IE8 doesn’t support any websites at all. Perhaps this is the EU-competition-commission version of IE8 that they were testing?

(thanks to PC The Great at lugm.org for the heads-up)

Open source in Government

February 17, 2009

The Register has an interesting article about various open source vendors’ latest attempt to legislate their way into the healthcare system, and why it’s doomed to fail.

I found it well-written and convincing right up to the last paragraph but one:

If open source is going to make any real headway in the government, there needs to be an incentive to choose it, not a rule. Time and again, this is where the open source community falls short: Quality code isn’t enough of an incentive. You can put the best engineering in the world into your product, but if you don’t know how to market, your project will rot in the source repository.

Uhh, non sequitur? Needs to be an incentive to choose it => needs better marketing? Where’s the incentive in marketing? Surely the incentive should be that it’s a better product or that it saves money or time, not that it has flashing lights all over it?

Backup Mail Exchangers

January 28, 2009

On Monday night, the power supply unit (PSU) in the server that hosts our mail server failed at around 2200 GMT. We don’t have physical access to the server out of hours, so I wasn’t able to replace it until about 1045 the next day; our main email server was therefore down for nearly 13 hours.

We didn’t have a backup MX because:

  • It usually can’t check whether recipients are valid or not, and therefore must accept mail that it can’t deliver;
  • It usually doesn’t have as good antispam checks as the primary, because it’s a hassle to keep it updated;
  • Spammers usually abuse backup MXes to send more spam, including Joe Jobs.

I thought that this was OK because people who send us mail also have mail servers with queues, which should hold the mail until our server comes back up. It’s normal for mail servers to go down sometimes and this should not cause mail to be lost or returned.

However, we had a report that one of our users did not receive a mail addressed to them, and was told by the sender that it had bounced. I saw the bounce message and suspected Exchange, so I decided to check how long Exchange holds messages before bouncing them. It turns out to be only five hours by default. Most mail servers hold mail for far longer, for example five days, sending a warning message back to the sender after one day.
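For comparison, a stock Exim installation keeps retrying for about four days before giving up: the retry rule shipped in Exim’s default configuration looks like this, and its delay_warning option defaults to warning the sender after 24 hours.

# Exim's default retry rule: every 15 minutes for the first 2 hours,
# then at intervals starting at 1 hour and growing by a factor of 1.5
# until the message is 16 hours old, then every 6 hours until it is
# 4 days old, after which it is bounced.
*   *   F,2h,15m; G,16h,1h,1.5; F,4d,6h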

Bounced messages make us look bad. So, apart from making our main mail server more reliable 🙂, we need a backup MX to accept mail when the master is down.

However, I still want to minimise the spam problem that this will cause, so I configured our backup MX to accept mail only when the master is down. Otherwise it defers the message, which tells the sending server to try the master again.

How did I achieve this magic? With a little Exim configuration that took me a day to work out and that I’m quite proud of. I set up a new virtual machine which has just Exim on it, nothing else, and configured it as an Internet host that relays for our most important domains (the Debian settings for that are sketched a little further down). Then I created /etc/exim4/exim4.conf.localmacros with the following contents:

CHECK_RCPT_LOCAL_ACL_FILE=/etc/exim4/exim4.acl.conf
callout_positive_expire = 5m

This allows us to create a file called /etc/exim4/exim4.acl.conf which contains additional ACL (access control list) conditions. The other change, callout_positive_expire, I’ll describe in a minute.
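For completeness, the “Internet host that relays for our domains” part mentioned above is just Debian’s standard exim4 setup. In /etc/exim4/update-exim4.conf.conf it amounts to something like the following sketch (the domain names are placeholders for our real ones):

# /etc/exim4/update-exim4.conf.conf (excerpt)
dc_eximconfig_configtype='internet'
# listen on all interfaces, not just localhost
dc_local_interfaces=''
# no domains are delivered locally on this box
dc_other_hostnames=''
# the domains we act as backup MX for (placeholders)
dc_relay_domains='example.com;example.org'

After editing that file, run update-exim4.conf and restart Exim.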

I created /etc/exim4/exim4.acl.conf with the following contents:

# if we know that the primary MX rejects this address, we should too
deny
        ! verify = recipient/callout=30s,defer_ok
        message = Rejected by primary MX

# detect whether the callout is failing, without causing it to
# defer the message. only a warn verb can do this.
warn
        set acl_m_callout_deferred = true
        verify = recipient/callout=30s
        set acl_m_callout_deferred = false

# if the callout did not fail, and the primary mail server is not
# refusing  mail for this address, then it's accepting it, so tell
# our client to try again later
defer
        ! condition = $acl_m_callout_deferred
        message = The primary MX is working, please use it

# callout is failing, main server must be failing,
# accept everything
accept
        message = Accepting mail on behalf of primary MX

The first clause, which has a deny verb, does a callout to the recipient. A callout is an Exim feature which makes a test SMTP connection and starts the process of sending a mail, checking that the recipient would be accepted. This is designed to catch and block emails that the main server would reject. Our backup server has no idea what addresses are valid in our domains; only the primary knows that.

The callout response is cached: for the default two hours if it returns a negative result (the recipient does not exist on the master), or for five minutes (see callout_positive_expire above) if the address does exist. We use the defer_ok option on the callout so that if we fail to contact the master, we don’t defer the mail immediately, but instead assume that the address is OK and therefore continue to the next clause.

The second clause of the ACL, which has a warn verb, is what took me so long to work out. Normally, if a condition in a statement returns a result of defer, which means that it failed, the server will defer the whole message (tell the sender to come back later). In almost all cases this is the right thing to do, but it’s the exact opposite of what we want here. We want to accept mail if the callout is failing, not defer it, otherwise our backup MX is useless (it stops accepting mail if the primary goes down).

Because this is such an unusual thing to do, there is no configurable option for it in Exim. The only workaround that I found is that there is exactly one way to avoid a deferring condition causing the message to be deferred: a warn verb. The documentation for the warn verb says:

If any condition on a warn statement cannot be completed (that is, there is some sort of defer), the log line specified by log_message is not written… After a defer, no further conditions or modifiers in the warn statement are processed. The incident is logged, and the ACL continues to be processed, from the next statement onwards.

So what we do is:

  1. Set the local variable acl_m_callout_deferred to true;
  2. Try the callout. If it defers (cannot contact the primary server) then we stop processing the rest of the conditions in the warn statement, as described above;
  3. If we get to this point, we know that the callout did not defer, so we set acl_m_callout_deferred to false.

The third clause of the ACL, which has a defer verb, simply checks the variable that we set above. If we get this far, then the primary server is not rejecting this address; and if it’s not deferring either, then it must be accepting mail for the address. In that case, we defer the message, telling our SMTP client to try again later, at which point it will hopefully succeed in delivering directly to the primary.

Callout result caching becomes a problem here. If a previous callout had verified that a particular address existed, and that result was cached for the default 24 hours, then the backup MX would keep deferring mail to that address for up to 24 hours, even though the master had since gone down. This is why we reduced the positive callout cache time to five minutes earlier.
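If you want to poke at the callout cache while debugging this sort of thing, Exim ships a couple of small hint-database utilities. On Debian the spool directory is /var/spool/exim4; adjust to suit your installation:

# dump the cached callout results (addresses and verdicts)
exim_dumpdb /var/spool/exim4 callout

# expire stale records from the callout database
exim_tidydb /var/spool/exim4 callout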

The fourth clause of the ACL, which has an accept verb, is even simpler. It accepts everything that was not denied or deferred earlier. We can only get this far if the master is neither accepting nor rejecting mail for that address, which means it must be unreachable.

So far the configuration appears to work fine and has blocked 14 spam attempts (abusing the backup MX) in 14 hours.
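Finally, a note on testing: you don’t need to send real mail to exercise an ACL like this. Exim’s fake-SMTP test mode lets you type an SMTP dialogue by hand and see how the ACL responds; the -bhc variant (rather than plain -bh) also performs the verification callouts for real. A rough example, with placeholder addresses:

# on Debian the binary is exim4; elsewhere it is usually just exim
exim4 -bhc 192.0.2.1

# then type the SMTP commands yourself, for example:
EHLO client.example.org
MAIL FROM:<sender@example.org>
RCPT TO:<someone@example.com>
QUIT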

Offline Wikipedia

November 21, 2008

I’m working on making Wikipedia, the (in)famous free encyclopaedia, available offline, for a project in a school in rural Zambia where Internet access will be slow, expensive and unreliable.

What I’m looking for is:

  • Completely offline operation
  • Runs on Linux
  • Reasonable selection of content from English Wikipedia, preferably with some images
  • Looks and feels like the Wikipedia website (e.g. accessed through a browser)
  • Keyword search like the Wikipedia website

Tools that have built-in search engines usually require that you download a pages and articles dump file from Wikipedia (about 3 GB download) and then generate a search index, which can take from half an hour to five days.

For an open source project that seems ideally suited to offline use, and considering the amount of interest, there are surprisingly few ready-made options. They also took me a long time to find, so I’m collating the information here in the hope that it will help others. Here are my impressions of the solutions I’ve tried so far, gathered from various sources including makeuseof.com.

The One True Wikipedia, for comparison

MediaWiki (the Wikipedia wiki software) can be downloaded and installed on a computer configured as an AMP server (Apache, MySQL, PHP). You can then import a Wikipedia database dump and use the wiki offline. This is quite a complex process, and importing takes a long time: about 4 hours for the articles themselves (on a 3 GHz P4), and apparently days to build the search index (I’m testing this at the moment). This method does not include any images, as the image dump is apparently 75 GB and no longer appears to be available, and it displays some odd template codes in the text (shown in red below) which may confuse users.

MediaWiki local installation
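For anyone who wants to try the MediaWiki route, the process is roughly as follows. This is a sketch from memory: the dump URL and file name change over time, and the maintenance scripts may differ between MediaWiki versions, so check the ones in your installation’s maintenance/ directory.

# fetch the current 'pages and articles' dump (about 3 GB)
wget http://download.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2

# import it into the wiki database (the step that took about 4 hours on a 3 GHz P4)
bzcat enwiki-latest-pages-articles.xml.bz2 | php maintenance/importDump.php

# rebuild the ancillary tables and the text search index (the slow part)
php maintenance/rebuildrecentchanges.php
php maintenance/rebuildtextindex.php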

Wikipedia Selection for Schools is a static website, created by Wikimedia and SOS Children’s Villages, with a hand-picked and checked selection of articles and images from the main Wikipedia that fits on a DVD or in 3 GB of disk space. It’s available for free download using BitTorrent, which is rather slow. Because it’s a static website it’s easy to install, and it looks like Wikipedia, but it has no search feature. It also has only 5,500 articles, compared to the roughly 2 million in Wikipedia itself (about 0.25%). Another review is on the Speed of Creativity Blog. Older versions are available here. (thanks BBC)

Wikipedia Selection for Schools

Zipedia is a Firefox plugin which loads and indexes a Wikipedia dump file. It requires a different dump file, containing the latest metadata (8 GB) instead of the usual one (3 GB). You can then access Wikipedia offline in your browser by going to a URL such as wikipedia://wiki. It does not support images, and the search feature only searches article titles, not their contents. You can pass the indexed data between users as a Zip file to save time and bandwidth, and you may be able to share this file between multiple users on a computer or a network. (thanks Ghacks.net)

WikiTaxi is a free Windows application which also loads and indexes Wikipedia dump files. It has its own user interface, which displays Wikipedia formatting properly (e.g. tables). It looks very nice, but it’s a shame that it doesn’t run on Linux.

WikiTaxi screenshot (wikitaxi.org)

Moulin Wiki is a project to develop open source offline distributions of Wikipedia content, based on the Kiwix browser. They claim that their 150 MB Arabic version contains an impressive 70,000 articles, and that their 1.5 GB French version contains the entire French Wikipedia, more than 700,000 articles. Unfortunately they have not yet released an English version.

Kiwix itself can be used to read a downloaded dump file, giving access to the whole English Wikipedia via the 3 GB download. It runs on Linux only (as far as I know), and the user interface is a customised version of the Firefox browser. Unfortunately I could not get it to build on Ubuntu Hardy due to an incompatible change in Xulrunner. (The Kiwix developers told me that a new version would be released before the end of November 2008, but I haven’t been able to test it yet.)

Kiwix (and probably MoulinWiki)

Wikipedia Dump Reader is a KDE application which browses Wikipedia dump files. It generates an index on the first run, which took 5 hours on a 3 GHz P4, and you can’t use it until it’s finished. It doesn’t require extracting or uncompressing the dump file, so it’s efficient on disk space, and you can copy or share the index between computers. The display is in plain text, so it looks nothing like Wikipedia, and it includes some odd system codes in the output which could confuse users.

Wikipedia Dump Reader

Thanassis Tsiodras has created a set of scripts that extract Wikipedia article titles from the compressed dump, index them, and parse and display them with a search engine. It’s a clever hack, but the user interface is quite rough, it doesn’t always work, it requires about twice the dump file size in additional data, and it was a pain to figure out how to use and get working. It looks nothing like Wikipedia, though it is better than the Dump Reader above.

Thanassis Tsiodras' Fast Wiki with Search

Pocket Wikipedia is designed for PDAs, but apparently runs on Linux and Windows as well. The interface looks a bit rough, and I haven’t tested the keyword search yet. It doesn’t say exactly how many articles it contains, but my guess is that it’s about 3% of Wikipedia. Unfortunately it’s closed source, and as it comes from Romania, I don’t trust it enough to run it. (thanks makeuseof.com)

Pocket Wikipedia on Linux (makeuseof.com)

Wikislice allows users to download part of Wikipedia and view it using the free Webaroo client. Unfortunately this client appears only to work on Windows. (thanks makeuseof.com)

WikiSlice (makeuseof.com)

Encyclopodia is an open source project that puts Wikipedia on an iPod, but I want to use it on Linux.

Encyclopodia

It appears that if you need search and Linux compatibility, then running a real Wikipedia (MediaWiki) server is probably the best option, despite the time taken.

I apologise for not writing another article sooner. I’d like to respond to some of the comments on my previous article, IT in the field, by some of the people I mentioned, Jeff Allen and Jon Thompson. I’ll include their comments and my responses inline.

Jon writes:

1) Almost no one uses PDA’s. Mobiles – yes. (See my post on Nokia Data Gathering.)

I would argue that I’m coming from a different position: not what’s currently in the field and being used, but what could be. I think that Jon is referring to a similar kind of future possibility in his follow-on article. But I’m also thinking in terms of what an organisation (NGO or government) can do to avoid these problems in the first place, by changing the way that they deploy technology.

I accept completely that PDAs are not commonly used in the field at the moment. However, I’d suggest that organisations take a much closer look at deploying PDAs (and mobile phones) instead of general-purpose PCs in current and future projects, as it would avoid most of the problems that Jeff reports and that we are discussing here.

2) Is anyone buying and deploying Inveneo? Who is Aleutia? What is their penetration. Odds are Jeff’s guy in Congo will never know about either.

I’m not intending to put the onus on Jeff’s guy in Congo, or any other end user, to select appropriate technology for their circumstances. I don’t think they have the technical skills or buying power to do so. The choice was already made for them by the organisation that supplied a PC running Windows without antivirus updates. I believe that this was a bad choice, driven by the factors that I gave in my previous article, and that different choices could be made in future to avoid such problems.

3) Webmail will stay the same until the cows come home. Sure, someone can write tricky code but will the guy in Congo ever see it? Probably not.

Actually, I have seen comments asking why Google doesn’t use their own Gears library for their own webmail service, particularly in the context of recent Gmail service outages. If they did decide to do so, the benefits should translate immediately to users with low-bandwidth or intermittent connections, in the Congo as elsewhere.

4) Mac OSX does not exist in the rest of the world and barely in the Balkans, some of which are even slated for EU membership.

I don’t see that as a strong argument against deploying alternatives such as Mac OS X in new systems. We have a problem: poor maintainability of traditional IT systems in the field. We have to find solutions to that problem, which may involve deploying different technology. The same argument applies to Ubuntu.

5) So you do support Ubuntu adoption as an alternative? I think you are getting my point. You could just install Ubuntu and forget about the AVG argument.

But the end user will not deploy Ubuntu. The providers of the equipment will have to do it for them. That’s my point, that the organisations have to change, not the end users.

Vertical programs that distribute hardware almost never follow-up so anything they put in the field is usually toast within a few months.

That’s precisely my point in the last article, even if I rambled a bit in the process of getting there. But I think it would be great to have hard numbers that we could present as evidence to the organisations to get them to recognise the problems in what they’re doing, and to change their ways.

For example, let us just say that something causes the OS to hiccup so the well meaning local IT guy steps in and recommends reinstalling the OS. Of course the owner doesn’t have one so the IT guy offers one of his own. Bootleg install. Story over.

That is certainly true for general-purpose computers, but even a minimal amount of BIOS lock-down can prevent the reinstallation of the software, or at least make it significantly harder. There’s also the question of why the man in the Congo is turning to his “local IT guy” for technical support in the first place. Why isn’t the organisation that provided the computer also providing support for it?

9) Forget about NGO’s as they are not the problem. Remember, this guy worked for the Congolese Gov’t so he gets whatever trickles down (a few odd machines from international agencies)

These “international agencies” are precisely who I meant by NGOs. Perhaps I should have included IOs as well, but all this jargon is going to get very confusing to anyone who’s not an expert in the field. Is there any reason not to lump NGOs, IOs and government programmes all into the same basket as “organisations”?

His support team? The local DVD vendor and the well meaning IT tech. Therefore, educate the local health official, the DVD vendor and the local IT tech…

And why are they his only (or primary) means of support? MSF sent Jeff out to deal with a problem that wasn’t even to do with any equipment that they provided (as far as we know), but it was stopping the man in the Congo from interacting with MSF.

Had MSF not sent Jeff to the Congo, you, me and Jeff would be none the wiser.

Indeed, but had the original organisation not sent out the wrong equipment (a Windows computer with no support and no antivirus updates), MSF would not have had to waste a lot of valuable time, money and resources on sending Jeff out in the first place. Equally, had MSF provided their own, more suitable equipment for the man in the Congo to use, Jeff would not have had to make an on-site visit either.

I’d like to respond to Jeff’s comments as well, but this article is already getting rather long and ranty, so I’ll leave that for another day.