Russian crackers throw GPU power at passwords

Russian password cracking (sorry, "password recovery") company Elcomsoft hasn't really been in the news since 2003, when Adobe helped make "Free Dmitry" the new "Free Kevin" by having one of the company's programmers, Dmitry Sklyarov, arrested for cracking its eBook Reader software. But Elcomsoft has remedied the lack of press attention this week with its announcement that it has pressed the GPU into the service of password cracking.

With NVIDIA and AMD/ATI working overtime to raise the GPU's profile as a math coprocessor for computationally intensive, data-parallel computing problems, it was inevitable that someone would make an announcement that they had succeeded in using the GPU to speed up the password-cracking process. Notice that I said "make an announcement," because I'm sure various government entities, domestic and foreign, have been working on this from the moment AMD made its "close-to-metal" (CTM) package available for download. The Elcomsoft guys didn't use CTM, though. They opted to go with NVIDIA's higher-level CUDA interface, a move that no doubt cut their development time significantly.

Elcomsoft's new password cracker mounts a brute-force attack on the NTLM hashes that Windows uses to store passwords. The company claims that its GPU-powered attack cuts the time needed to crack a Vista password from two months to a little over three days.
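
For the curious, the core of such an attack is conceptually simple: an NTLM hash is just the MD4 digest of the password encoded as UTF-16LE, so cracking one means hashing candidate strings until something matches. The sketch below is a minimal, CPU-only illustration of that loop in Python, not Elcomsoft's code, and it assumes your hashlib build still exposes MD4 (newer OpenSSL releases tuck it away in the legacy provider). A GPU implementation does exactly the same work, just across thousands of hardware threads at once.

```python
# A minimal, CPU-only sketch of the brute-force idea, assuming hashlib's OpenSSL
# backend still exposes MD4. An NTLM hash is simply the MD4 digest of the
# UTF-16LE-encoded password.
import hashlib
import itertools
import string

def ntlm_hash(password: str) -> str:
    return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

def brute_force(target_hex: str, charset: str = string.ascii_lowercase, max_len: int = 4):
    # Hash every candidate up to max_len characters and compare against the target.
    # A GPU does exactly the same work, just millions of candidates in parallel.
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if ntlm_hash(candidate) == target_hex:
                return candidate
    return None

if __name__ == "__main__":
    target = ntlm_hash("abc")      # stand-in for a hash pulled from the SAM database
    print(brute_force(target))     # -> abc
```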

Elcomsoft says it has filed for a US patent on this approach, but it's not clear what exactly it is attempting to patent. A search of the USPTO's patent database turned up nothing, but that could be because the filing hasn't made it into the database yet.

Ultimately, using GPUs to crack passwords is kid's stuff. The world's best password cracker is probably the Storm Worm, assuming that its owners are using it for this. As many as ten million networked Windows boxes—now that's parallelism.

Climate change mega-post

This week there seems to be a lot of climate news around, some good, some bad, and some that is just ugly. Rather than putting up a plethora of posts and getting accused of being Ars Climactica, we thought we would combine them into a single mega post for your consumption.

The first paper, published in Science, looks at the prospects for narrowing the range of estimates for the future climate. In doing so, the authors note that the climate is a system consisting of many physical processes that are coupled together nonlinearly. This has led climate modelers to focus on physical mechanisms and the fundamentals of nonlinear dynamics to understand and improve their models. Notably, the explicit inclusion of many physical mechanisms has not led to a significant decrease in the range of climate predictions. Most of the blame for this has fallen on the nature of nonlinear systems: to obtain a small increase in predictive ability, one needs a very large increase in the accuracy of the initial conditions. We are stuck because we can't improve the accuracy of our ancestors' weather stations, and other methods, such as ice core samples, will only ever yield averages. But as our earlier coverage on the nature of climate modeling explains, this isn't really the heart of the issue. Climate models use a range of initial conditions and measure the probability of certain climatic conditions occurring based on those modeling results.

Instead of focusing on the physics of the climate or the dynamical system, Roe and Baker look at the behavior of a simple linear equilibrium system with positive feedback. All the physics is replaced with a single gain parameter, which describes how an increase in average temperature leads to a further increase in temperature. Although this does not describe the underlying physics, it does encompass what we measure, so the model is valid for their purposes. They then explore how uncertainty in the gain parameter changes the predicted temperature increase. The positive feedback has the effect of amplifying the uncertainties (just as a nonlinear system would), meaning that it is practically impossible to tighten climate estimates. This uncertainty is not really derived from the initial conditions (e.g., the starting climatic conditions) but rather from the natural uncertainty in the physical mechanisms themselves, which are a major focus of current modeling efforts and include such things as cloud cover. Basically, the amplification of these uncertainties, together with the timescales involved, means that the smallest uncertainties blow out to give the large range of temperatures predicted by climate researchers.
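
A toy numerical sketch makes the point (the parameter values below are invented for illustration, not taken from the paper): if the equilibrium warming is dT = dT0 / (1 - f) for a reference response dT0 and feedback factor f, then even a modest, symmetric uncertainty in f gets stretched into a long upper tail of possible warming.

```python
# Toy illustration of feedback amplification (made-up numbers, not the paper's):
# equilibrium warming dT = dT0 / (1 - f) for a feedback factor 0 < f < 1.
import random

def warming(dT0, f):
    return dT0 / (1.0 - f)

dT0 = 1.2                            # assumed no-feedback warming, in degrees C
samples = []
for _ in range(100_000):
    f = random.gauss(0.65, 0.13)     # symmetric, Gaussian uncertainty in the gain
    if 0.0 < f < 1.0:                # discard unstable draws
        samples.append(warming(dT0, f))

samples.sort()

def pct(p):
    return round(samples[int(p * len(samples))], 2)

print("5th, 50th, 95th percentile warming:", pct(0.05), pct(0.50), pct(0.95))
# The 95th percentile sits far above the median while the 5th sits only a little
# below it; tightening the uncertainty in f barely narrows that upper tail.
```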

This news will not call off the search for the parts of the environment that influence our climate because, if we are to mitigate global warming, we must know which bits of the environment are the best to change. This obviously includes human behavior, but that covers a whole gamut from urban lifestyles through to farming practices. Part of this picture is soil erosion, which removes carbon from the soil and deposits it elsewhere. The question isn't so much where, but what happens to that carbon en route and once it arrives. It was thought that perhaps soil erosion contributed carbon dioxide to the atmosphere by opening up new mechanisms for the decomposition of organic matter. Alternatively, it has been argued that soil erosion deposits organic carbon in places—like the bottom of the sea, for instance—where it is effectively stored. However, testing these hypotheses has been problematic.

Nevertheless, problematic is what a good scientist looks for, so, with fortitude and dedication to the cause, scientists from the EU and US have collaborated to measure the uptake and removal of carbon across ten sites. They report in Science this week that, like normal land, eroding land also acts as a carbon sink. They do note that in eroding landscapes the carbon is more likely to move laterally, but it is no more likely to enter the atmosphere as carbon dioxide than on healthy pastureland. Of course, the amount of carbon stored is slightly less, so these soils are not quite as efficient as normal soils at storing carbon. More research is needed to determine if there are differences in the long-term fate of carbon between normal pasture and eroding soils; until that research is done, however, we can tentatively cross soil erosion off the list of things to worry about in terms of global warming.

On the bad news front, rapid industrialization in the developing world and a lack of action in the developed world are now measurably increasing the rate at which we deposit carbon dioxide into the atmosphere. This is the conclusion of a paper to be published in the Proceedings of the National Academy of Sciences. Essentially, the authors looked at estimates of anthropogenic carbon dioxide emissions, compared them to the measured concentration in the atmosphere, and determined from the time series that the natural carbon sinks are either already saturated or nearing saturation. The conclusion is that the concentration of carbon dioxide in the atmosphere is likely to increase faster than predicted in most scenarios. This is especially true since most scenarios assume that we will take some action to keep the rate of increase in atmospheric carbon dioxide (as a percentage) below the rate of economic growth (also as a percentage). Not the best news.

Electronic Arts to undergo empire-wide restructuring, layoffs

When you're on top, the only place to go is down. In the face of stiff competition, EA's profits have begun to drop. Destructoid is reporting that job cuts and branch restructuring have already begun taking place, with extensive changes being made to many different studios under EA's umbrella, including Mythic.

Word of these changes came from an internal EA e-mail. CEO John Riccitiello has begun taking steps to ensure that the company's current state of affairs doesn't continue. This follows a previous restructuring meant to rebalance staff across the many branches of the company. To quote the e-mail:

Given this, John Riccitiello, our CEO, has tasked the company to get its costs in line with revenues… Every studio, group and division of the company has been tasked to review its overall headcount and adjust its organization to meet the needs of the business moving forward.

The changes to Mythic appear to be only the first in what will be a long line of changes. Certain teams, such as the Ultima Online group, will be relocated. Competitive employment strategies will also be enforced to keep employees working hard if they want to keep their jobs: "attrition, performance management, stricter hiring guidelines, and layoffs" will purportedly keep workers in check.

Given the state of EA's multiplatform competitors, including Activision, which is set to release one of the assured hits of the winter in Call of Duty 4, and long-time rival Ubisoft, which is sitting on Assassin's Creed, the company will be pressed to start taking more risks like skate if it hopes to stay fresh in this increasingly competitive development scene.

Verizon to pay $1 million over deceptive “unlimited” EVDO plans

The case of the limited "unlimited" EVDO has been settled: Verizon has agreed to pay out $1 million to customers that it has terminated for overuse of its high-speed data service. New York Attorney General Andrew Cuomo made the announcement today, saying that Verizon's decision came after a nine-month investigation into the company's services and marketing practices. The attorney general accused Verizon of producing misleading materials and deceptive marketing when it claimed that its data plans were unlimited.

The issue came to light last year, when some customers found their accounts on the chopping block after downloading too much data over Verizon's wireless broadband service. The wireless provider prominently advertised its EVDO service as "unlimited," but the fine print indicated that it was only unlimited for certain things, such as e-mail and web access. Video, music, and other media did not fall into that category, and Verizon began enforcing an undisclosed bandwidth cap on users the company decided had downloaded too much.

Verizon eventually cut off some 13,000 customers for excessive use of its "unlimited" service, leaving those customers out in the cold with equipment that they could no longer use. But the Attorney General said that the service's limitations were not clearly and conspicuously disclosed, and that they "directly contradicted the promise of 'unlimited' service."

Verizon, which has voluntarily cooperated with the investigation since it began in April, has now agreed to reimburse the terminated customers for the costs associated with their now-useless equipment. Verizon estimates that the reimbursements will come to about $1 million, with an additional $150,000 in fines paid to the state of New York.

"This settlement sends a message to companies large and small answering the growing consumer demand for wireless services. When consumers are promised an 'unlimited' service, they do not expect the promise to be broken by hidden limitations," said Cuomo in a statement. "Consumers must be treated fairly and honestly. Delivering a product is simply not enough—the promises must be delivered as well."

The company has also agreed to revise its marketing of the wireless plans, in addition to allowing common uses of the broadband connection (such as video downloads). Verizon has already updated some of its marketing material.

Let your geek flag fly: a review of Eye of Judgment

A cold winter morning, in the time before the light

Eye of Judgment
Publisher: Sony Computer Entertainment
Developer: SCE Studios Japan
Platform: PlayStation 3
Price: $69.99
Rating: Teen

A few days ago I had an experience of such intense geekiness that I felt like I was back in high school, in a garage somewhere, listening to a Led Zeppelin tape and playing Dungeons and Dragons. What caused this intense and oddly freeing embrace of the best parts of being a geek? I sat down with a good friend, set up Eye of Judgment, and played for hours upon hours. The cheap pot may have been traded in for a nice bourbon, but the overall feeling was the same.

Eye of Judgment is a game that doesn't just require you to be a geek, it demands that you give yourself over to the sheer insanity of the experience. You need a PlayStation 3, a nice television, the PlayStation Eye, a bunch of cards, a table near the PS3 so you can hook up the camera and lay down the mat, and, in an optimal situation, a friend sitting across from you.

When you're holding your deck of cards, staring into the eyes of your opponent before laying down that Biolith God that will lay waste to his entire force, you'll know not only that you're playing something that embraces many clichés of games you've played before while doing something completely new, but also that you may have just reconfigured your entire room to do it.

What the players see on the left, what the television displays on the right

I lay my card down on the mat. The camera scans it, and on the television runes explode from the card's face before the deity emerges and destroys two of the four creatures my friend has laid down. He removes the "dead" cards from the mat, places them in his graveyard where the vanquished rot, and takes a sip of his drink. Only then do I realize I just blew all my mana, and he's been banking his for an equally impressive attack. He lays his card down, and our eyes turn to the screen to see what carnage his attacks will cause. Heavy metal music is playing in the background.

This is awesome. This is Eye of Judgment.

My Judgment. Let me show you it

Eye of Judgment is a little bit Magic: The Gathering and a little bit of the chess game from Star Wars. The new webcam from Sony looks down on the game mat, which features a 3×3 grid of squares. You lay down cards on the squares, summoning creatures to allow you possession of each slot. When you own five spots, you win. Sounds easy, doesn't it?

While you lay down cards on the mat, the actual play area may not look like much, but on your television you'll see a creature on top of each card, along with information on the hit points each creature has left, its attack strength, and which direction it's facing. Each time you turn a creature, attack, or summon a new monster, you use mana. You gain two mana points each round, and there are cards that allow you to gain more mana points by discarding creatures in your hand. Management of your mana points is a major piece of strategy: do you summon a bunch of weak creatures early in the game or save your points to lay down the serious ordnance?

There is a spell card that costs no mana to cast and simply spins a character around. Since creatures can only attack in certain directions and spinning this particular creature cost six mana for every 90-degree turn, I effectively wiped out my opponent's mana pool in one swipe. The strategy is deep; you'll be learning new tricks and getting ideas for how to play your cards for a long time.

Give your living room to Eye of Judgment

Learning how to play and what the numbers and information on the cards mean takes a bit of a time investment, although the printed manual and video-based tutorials make the process easier. Just a warning: the video tutorials aren't interactive, and they're incredibly boring. They actually explain that when someone brings cards over, they are the "owner" of those cards. I'm surprised they didn't explain that the cards are made of "paper," which is made from "pulped trees."

The game itself is not overly complicated. You can explain things and play a quick game in about an hour, and after that, players catch on rather quickly to the subtleties of strategy. Eye of Judgment could have been a complicated mess, but I'm happy to report the rules make sense and are easy to comprehend while not limiting play. Each battle is a series of questions. Do you use mana now or later? Each square matches with an element, so if you play a water-based card on a water square you get a bump. Do you play a card now or wait for a better elemental slot to become available?

These are questions that will haunt you at night as you listen to Dragonforce and plan your next deck.

I've got a BA in mana management

How well the card game plays is a worthless thing to try to judge if the hardware doesn't work; if the camera has issues with the cards, then the whole setup is a waste. It can take a minute or two to calibrate the camera and to make sure the mat is lined up correctly, but the tools built into the game do a great job of making this painless. You'll also want to make sure the room you play in has plenty of light. Once these two things are done, the camera does a fantastic job of seeing the cards and reacting quickly to your play.

This is made even more impressive by the fact that cards in play aren't the only things the camera has to keep track of. To attack or end your turn, you have to show the camera "action" cards to tell the game what you'd like to do, and this process is quick and easy. Sometimes the camera does lose track of the card, but when this happens you get a clear warning, and a slight adjustment is usually all that is necessary to make things right again.

We did bump into troubles once or twice with cards that the camera refused to recognize, but this was a very rare occurrence. Overall, the PlayStation Eye camera was more than up to the task of keeping play quick and fun.

My one complaint—and this will only become an issue in future games—is that even with good lighting the actual image the PlayStation Eye picks up is pretty muddy and indistinct. This won’t bother you in Eye of Judgment, where the screen is covered with overlays and you will only see your hand on the screen when you lay cards, but it doesn’t bode well for later games that will feature more video interaction. The picture is better than the last-generation Eye Toy, but the jump isn’t as large as I had been hoping for.

The camera stand breaks down quickly and easily for storage, and setup takes only a minute or so. You may want to flatten the mat when you first unpack it, as the little ridges from being packed are slightly annoying when laying down cards.

Comcast shooting itself in the foot with traffic shaping “explanations”

As the evidence that Comcast is doing something untoward with BitTorrent and other traffic on its network has mounted, the cable company has tried clumsily to fend off accusations of wrongdoing. The latest developments come in the wake of several conference calls held by the ISP in which it attempted to make a case for its practice of sending forged TCP reset packets to interfere with some P2P traffic.

Timothy B. Lee, who is a regular contributor to the Tech Liberation Front blog as well as Ars Technica, was invited to sit in on one of yesterday's conference calls, along with folks from a handful of think tanks. According to Tim, the Comcast engineer on the call said that the Lotus Notes problems were a known side effect of Comcast's traffic shaping practices, one the company was trying to fix. The engineer also "seemed to implicitly" concede that the accounts about the forged packet resets were accurate.

Delaying as a blocking tactic

The company still claims that it isn't blocking BitTorrent and other P2P traffic, just "delaying it." In a statement given to Ars earlier today, a Comcast spokesperson denied that the company blocks traffic. "Comcast does not block access to any Web sites or online applications, including peer-to-peer activity like BitTorrent," the spokesperson told Ars. "Our customers use the Internet for downloading and uploading files, watching movies and videos, streaming music, sharing digital photos, accessing numerous peer-to-peer sites, VOIP applications like Vonage, and thousands of other applications online. We have a responsibility to provide all of our customers with a good Internet experience and we use the latest technologies to manage our network so that they can continue to enjoy these applications."

Comcast VP of operations and technical support Mitch Bowling put it this way. "We use the latest technologies to manage our network so that our customers continue to enjoy these applications. We do this because we feel it's our responsibility to provide all of our customers with a good Internet experience."

Another Comcast executive told the New York Times that the company "occasionally" delays P2P traffic, "postponing" it in some cases. His rather clumsy analogy was that of getting a busy signal when making a phone call and eventually getting through after several attempts. "It will get there eventually," is the takeaway message.

That's a distinction without any meaning. If someone is preventing my calls from going through and giving me a busy signal, the effect is the same. At the time I am trying to make the call, it's being actively blocked; calling it "delayed" is merely an exercise in semantics. Comcast is, in effect, impersonating the busy signal and preventing the phone at the other end from ringing by issuing TCP reset packets to both ends of a connection.
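
To make the mechanics concrete, here is a simplified sketch, written with the Scapy packet library, of what a forged reset involves. It illustrates the general TCP trick only; it is not a reconstruction of Comcast's actual equipment, and the addresses and ports are placeholders.

```python
# Sketch of a forged TCP reset: a third party that can observe a connection copies
# its addresses, ports, and an in-window sequence number, then sends a packet with
# the RST flag set while pretending to be one of the endpoints. Both ends conclude
# that the other hung up. Requires Scapy and root privileges to send raw packets.
from scapy.all import IP, TCP, send

def forge_reset(src_ip, dst_ip, src_port, dst_port, seq):
    rst = IP(src=src_ip, dst=dst_ip) / TCP(
        sport=src_port,
        dport=dst_port,
        flags="R",   # the reset flag: "this connection no longer exists"
        seq=seq,     # must land inside the receiver's window to be honored
    )
    send(rst, verbose=False)

# Placeholder values; a real interception box fills these in from traffic it sees.
# forge_reset("10.0.0.2", "203.0.113.7", 51413, 6881, seq=123456789)
```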

What's particularly troublesome is that Comcast's FAQ leaves customers with the impression that all content will flow unfettered through its network. One entry states that Comcast engages in "no discrimination based on the type of content," saying that the ISP offers "unfettered access to all the content, services, and applications" on the Internet. Another FAQ entry informs customers that Comcast does not "block access to any Web site or applications, including BitTorrent."

What did I do wrong?

Comcast's attempts to clarify its traffic shaping practices are having the opposite effect of what the company intends. As is the case with its nebulous bandwidth caps, customers can find themselves running afoul of what appears to be an arbitrary limitation imposed by the ISP. As a result, Comcast's customers don't really know what they're paying for, aside from a fast connection that may or may not give them access to the web sites and applications they want. The company's public comments on the traffic shaping issue are intended to leave the impression that, like the bandwidth cap issue, this only affects a handful of bandwidth hogs. But judging by the comments we've seen from our readers and on other sites, there are either a lot more bandwidth hogs than Comcast would lead us to believe, or the company's traffic shaping practices extend further than is being disclosed. Without some transparency from the ISP, we're left to guess.

Comcast has a handful of options to choose from. The company can own up to what it's doing and tell customers how to avoid running afoul of its BitTorrent regulations. Comcast could also continue on its current course, keeping its opaque traffic management practices in place. The cable giant's best option may be dropping the practice of sending false TCP reset packets altogether.

There are a couple of reasons that the third option may be the best choice for Comcast. First, it may be against the law. An Indiana University PhD student and Cnet contributor believes that the illicit reset packets may violate state laws in Alabama, Connecticut, and New York against impersonating another person "with intent to obtain a benefit or to injure or defraud another" (language from the New York law). In sending out the spoofed packets, Comcast is impersonating the parties at either end of the connection.

When the market can't sort things out

Legal concerns aside, Comcast is providing net neutrality advocates with plenty of ammunition. Comcast is not running a neutral network right now, and its traffic shaping choices are degrading the broadband service of many a Comcast customer.

In a perfect free market, customers would be free to pack up and leave Comcast for greener and more open broadband pastures, but the competitive landscape in the US doesn't always provide that kind of choice. More than a few Comcast customers are faced with a choice between Comcast and dial-up, leaving them to either hope their data packets can evade Comcast's traffic shaping police or go without broadband service at all.

KDE development platform appears

This Wednesday in KDE land sees the start of the Release Freeze for KDE 4.0, anticipating a final release in December. By then, KDE 4 will have been in development for two years and five months, counting from the aKademy conference in Malaga, Spain, in July 2005.

This freeze is significant for KDE development as evidence of an increasing professionalization in the KDE release process. Where past major releases like KDE 2 (2000) and KDE 3 (2002) involved a relatively close-knit group of insiders releasing a set of source tarballs for Linux distributions to package, the KDE landscape in 2007 is broader and much more diverse. Seventeen modules made up KDE 3.0, and that contained pretty much the entire desktop and all the software that most users would run on it.

During the lifetime of KDE 3, development has become decentralized, and some of the most popular applications, such as Amarok, Kopete, and KDevelop, emerged. These applications formed their own developer communities, which may have little overlap with the core group of KDE library and platform hackers.

In addition, groups of businesses such as the Kolab Konsortium (we kid you not, those Ks are authentic German) have sprung up to serve the needs of companies using KDE. Their ‘Enterprise Branch’ flavor of the Kontact PIM suite, which adds enterprise features and QA work, is released on its own schedule, independently of the core KDE modules. Completing the scene, a swarm of smaller projects and individual developers take advantage of the KDE libraries, producing thousands of applications, as seen at www.kde-apps.org.

This expansion has caused major changes in the KDE release process. The era of the lone release manager (represented in the past by David Faure and Stephan Kulow) bearing responsibility for getting every module ready just in time for distributions’ release dates has ended, and instead a Release Team representing KDE’s various stakeholders oversees the readiness of all its components.

Also, the release itself has become staggered. To give all the projects and developers who rely on the KDE libraries time to adapt to the changes of a major release, these core libraries and runtime components are being frozen ahead of the rest of the desktop and its applications.

On October 30, the KDE Development Platform version 4.0.0 will be released. This forms a stable set of packages including the libraries, the PIM libraries, and runtime components of the base desktop such as the help browser, IO slaves, and the notification daemon that third party developers require to port their apps to KDE 4. They will be able to download this platform as binary packages for several distributions, easing the porting process.

KDE is expanding the scope of its ambition with this release, by targeting developers of scripting languages as well as its traditional C++ constituency. By shipping bindings for Python and Ruby as part of the Development Platform, the KDE project hopes to signal that these languages are fully accepted for KDE application development. Bindings for C# are due to follow shortly afterwards. And for the first time, KDE will be released for Mac OS X and Windows.

Targeting two new operating systems poses a challenge to the community’s resources. A brief poll of developers showed that although ‘most things work’, the KDE Development Platform version won’t be ready for these platforms at the same time as the Linux flavor. It is hoped that in the long term, this investment will pay off by bringing more participants to Free Software.

The significance of this milestone to KDE fans and users is that the rest of the applications making up the core KDE Desktop are now soft-frozen. While third party developers are just getting to grips with KDE 4, the applications that form the central KDE experience need to get in shape for the KDE 4.0 release. With a lot of issues facing KDE hackers before 4.0 is a usable desktop, all work on new features and UI is stopped, and efforts focus on fixing the inevitable, long list of bugs. The state of the user desktop will be the topic of an upcoming article.

Hands on with Google’s OCRopus open-source scanning software

The first official alpha version of Google's OCRopus scanning software for Linux was released yesterday. OCRopus is built on top of HP's venerable open-source Tesseract optical character recognition (OCR) engine and is distributed under the Apache License 2.0.

OCRopus uses Tesseract for character recognition but has its own layout analysis system that is optimized for accuracy. The OpenFST library is used for language modeling, but it still has some performance issues. OCRopus is designed to be modular, so that other character recognition and language modeling components can be used to eventually add support for non-Latin languages. An embedded Lua interpreter is used for scripting and configuration. The developers chose Lua rather than Python because Lua is slimmer and easier to embed. This release also includes some new image cleanup and de-skewing code.

We tested OCRopus with several kinds of content, including scanned documents and screenshots of text. The accuracy of the output varies considerably depending on the quality of the input data, but OCRopus was able to provide readable output in about half of our tests. Several of our test documents caused segmentation faults; this seems to occur when there is extra page noise or the letters aren't consistent.

We observed several common errors. For instance, the letter "e" is often interpreted as a "c" and the letter "o" is often interpreted as a "0" in scanned documents. OCRopus provides better results when scanning text that is printed with a sans serif font, and the size of the font also has a significant effect on accuracy.
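
Errors this regular can, at least in principle, be cleaned up after the fact. The snippet below is a hypothetical post-processing pass of our own devising, not something OCRopus ships: for tokens a dictionary rejects, it tries the confusion pairs noted above and keeps any variant that turns into a real word.

```python
# Hypothetical cleanup pass for the confusion pairs noted above ("c" read for "e",
# "0" read for "o"); the WORDS set stands in for a real dictionary such as aspell's.
from itertools import product

CONFUSIONS = {"c": ["c", "e"], "0": ["0", "o"]}
WORDS = {"weak", "heroic", "equal", "temper"}

def repair(token):
    if token.lower() in WORDS:
        return token
    # Try every combination of substitutions and return the first valid word.
    options = [CONFUSIONS.get(ch, [ch]) for ch in token.lower()]
    for candidate in map("".join, product(*options)):
        if candidate in WORDS:
            return candidate
    return token  # give up and keep the raw OCR output

print(repair("wcak"), repair("her0ic"))   # -> weak heroic
```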

The following examples show the typical output quality of OCRopus; the first block is the raw OCRopus output, and the second is the original text:

Tpo' much is takgn, much abjdegi qngi tlpugh we arg not pow Wat strength whipl} in old days Moved earth and heaven; that which we are, We are; QpeAequal_tgmper of hqoic hgarts, E/[ade Qeak by Eirpe ang fqte, lgut strong will To strive, to Seek, to hnd, and not to y{eld.

Tho' much is taken, much abides; and though We are not now that strength which in old days Moved earth and heaven; that which we are, we are; One equal temper of heroic hearts, Made weak by time and fate, but strong in will To strive, to seek, to find, and not to yield

The installation process is relatively straightforward for experienced Linux users, but there were a few bumps. I tested OCRopus on my Ubuntu 7.10 desktop computer. The only dependency that isn't in the Ubuntu repositories is Tesseract, which I had to download separately and compile. Since I already had many development packages installed, the only dependencies I needed (apart from Tesseract) were libtiff-dev and libaspell-dev. If you don't have the aspell dependency installed, OCRopus will still build, but it will trigger an error saying that the word list is unreadable.

I also ran into a snag with Tesseract, which apparently only comes with placeholders for the language data files, causing OCRopus to emit an error saying that the unicharset file is unreadable. In order to resolve that problem, I had to download the English language data files and decompress them into /usr/local/share/tessdata.

According to the release announcement, a beta release is planned for the first quarter of 2008. The focus for the beta will be on better performance and output quality, whereas the focus of this release was functionality.

Google's involvement in the project is motivated by the company's interest in digitizing printed documents. Open-source OCR technology could be valuable in many other contexts as well. Government agencies that want to digitize paper records, for instance, could one day benefit from OCRopus. Although OCRopus is weak in many areas, it has some real potential.

Minireview: Getting things done with TaskPaper 1.0

Long has Apple been driven by development that places simplicity over functionality, the iPod being an excellent example. The iPod doesn't do everything–sometimes not even functions one might associate strongly with a music player, like a built-in FM radio–but it does music playing right. At Hog Bay Software, Jesse Grosjean takes that simplicity concept to a new level. His previous work includes Mori (now owned by Apokalypse Software), a note-taking application, and WriteRoom, a full-screen text editor. His latest application, TaskPaper, a GTD (Getting Things Done) task list, similarly concentrates on doing one thing well. Before you say "OmniFocus," Jesse Grosjean did, saying that if "you are looking for a larger more structured application then check out OmniFocus." So why use TaskPaper?

TaskPaper UI

TaskPaper is fast–fast, as in enabling the thought process for creating lists. It's designed with the keyboard in mind. In the text editor interface, you hit 'Return' and go:

Create a new project (list) by ending a line with ':'
Create a task (to-do item) by starting a line with '- '
Assign a tag (category) by typing '@' and the tag in the task's line
Command-D marks a task done

That could pretty much be the user manual. Using TaskPaper, I started doing my grocery list. While such a project is hardly complex, I was surprised at how pleasant it was using TaskPaper. First, I wrote out some lists, all of which appear in the 'Home' list; then I clicked on a list, which appeared in its own tab. From there it was simply a matter of adding items and tagging them with the stores where I buy the items.
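
Because a TaskPaper document really is just text that follows those three rules, a few lines of code are enough to make sense of one. The sketch below is hypothetical (it is not Hog Bay's implementation) and parses an invented grocery list into projects, tasks, and tags:

```python
# Hypothetical parser for the TaskPaper conventions described above: projects end
# with ":", tasks start with "- ", and words beginning with "@" on a task are tags.
SAMPLE = """\
Groceries:
- olive oil @wholefoods
- coffee @wholefoods @done
- paper towels @costco
"""

def parse(text):
    project, items = None, []
    for line in text.splitlines():
        if line.endswith(":"):                 # a project header
            project = line[:-1]
        elif line.startswith("- "):            # a task line
            words = line[2:].split()
            tags = [w[1:] for w in words if w.startswith("@")]
            task = " ".join(w for w in words if not w.startswith("@"))
            items.append((project, task, tags))
    return items

for project, task, tags in parse(SAMPLE):
    print(project, "|", task, "|", tags)
# Groceries | olive oil | ['wholefoods']
# Groceries | coffee | ['wholefoods', 'done']
# Groceries | paper towels | ['costco']
```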

TaskPaper Search UI

If I want a list of all items I purchase at Whole Foods Market, I just click on the tag. Nice. TaskPaper is a database that looks like a text file because it is a text file, which means maximum compatibility. Completed items can be archived; they are moved to a list by that name when archiving is invoked. If TaskPaper sounds neat, it is, but as someone who uses to-do lists I found some basic functionality missing:

Sorting
Collapsible lists
Show/hide done tasks
Export options

Your options pretty much come down to printing your lists (or not) and using TaskPaper on your Mac. Hopefully that will change someday.

Jade: TaskPaper would be awesome on the iPhone, like it's almost designed for the iPhone already. TaskPaper 1.0 is only 2.4MB, so have you given any thought to a port once the iPhone SDK becomes available?

Jesse Grosjean: I have certainly thought about it, but not very deeply. I don't have an iPhone (or cell of any sort) and so the platform didn't interest me much until the SDK announcement. Now it's a lot more interesting, but I'm going to need to wait and see the SDK, and maybe get an iPod touch to play with before I make a real decision.

My feelings are that TaskPaper does pretty well at everything it does, which, by design, is not a lot. It is the pencil and paper of task list software, with all that metaphorically implies, both good and bad. If that's what you are looking for, you should download the trial version. It's $18.95 to buy—let's not hear any crap about price from people who pay the Apple Tax—and I intend to buy it in the hope that come next year I will be checking off groceries on my iPhone using TaskPaper.

The future is bright: Mozilla revenues up 26 percent, Google deal is gold

Mozilla published financial statements earlier this week showing that the organization made $66.8 million in revenue for 2006, a 26 percent increase from 2005. That's some strong growth, and it shows that Mozilla has the potential for long-term fiscal sustainability.

Most of that money (about 85 percent) comes from the company's search partnership with Google, but some of it also comes from the Mozilla store and other sources. Mozilla's expenses totaled $19.7 million. According to Mozilla CEO Mitchell Baker, 70 percent of those costs are associated with human labor, and much of the rest is used to fund the bandwidth and technical infrastructure that Mozilla uses to distribute Firefox—2.1 terabytes of data transfer, 600,000 Firefox downloads, and 25 million update requests per day. Baker expects the expenses to be much higher for 2007, because the organization is significantly increasing employment.

Mozilla also uses its resources to provide grants to other organizations. Approximately $300,000 was contributed to various organizations by Mozilla in 2006, and much more is being doled out in 2007—including grants to the Participatory Culture Foundation, which makes the Miro video player.

"Our financial status allows us to build on sustainability to do ever more. More as an open source project, and more to move the Internet overall increasingly towards openness and participation," said Baker in a blog entry. "[W]e're able to hire more people, build more products, help other projects, and bring more possibilities for participation in the Internet to millions of people. The Mozilla project is growing in almost every way—size, scale, types of activities, new communities, and in reach."

Mozilla's lucrative deal with Google was initially scheduled to expire in November 2006, but it was renewed and extended to 2008. It appears likely that Mozilla will depend on Google for a considerable portion of its revenue going forward.

Although revenue of $66.8 million makes Mozilla seem like a for-profit endeavor, the organization still remains committed to serving the public good by contributing to projects that make the Internet more accessible and open. Mozilla also has plans for many new initiatives—like the new Mozilla mobile project—that will likely consume some of the excess resources.

Lost in translation: hands-on with Google’s new stats-based translator

Automated translation systems, such as Alta Vista's Babelfish, have relied on a set of human-defined rules that attempt to encapsulate the underlying grammar and vocabulary used to construct a language. Although Google has been using that approach to power much of its translation service, it's not really in keeping with the company's philosophy of using some clever code and a massive data set. So it should be no surprise that the company has started developing its own statistical machine translation service. According to some Google-watchers, Google's homegrown translation process is now being used for all languages available through the service.
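
The difference between the two approaches is easy to caricature: a rule-based system encodes grammar and vocabulary by hand, while a statistical system picks whichever candidate translation scores best against counts gathered from an enormous parallel corpus. The toy sketch below uses an invented "phrase table" and made-up probabilities purely to illustrate the scoring idea; it bears no resemblance to Google's production system.

```python
# Toy statistical translation: score each candidate by (phrase probability) times
# (how plausible it looks next to its context), then keep the highest scorer.
# All numbers here are invented for illustration.
PHRASE_TABLE = {
    "celo": {"zeal": 0.6, "heat": 0.3, "mating season": 0.1},
}
LANGUAGE_MODEL = {                   # plausibility of the phrase in English context
    ("bears", "zeal"): 0.01,
    ("bears", "heat"): 0.4,
    ("bears", "mating season"): 0.5,
}

def translate(word, context):
    # Words absent from the phrase table (like "osera") fall back to themselves.
    candidates = PHRASE_TABLE.get(word, {word: 1.0})
    return max(candidates,
               key=lambda t: candidates[t] * LANGUAGE_MODEL.get((context, t), 0.05))

print(translate("celo", context="bears"))    # -> "heat" with this toy data
print(translate("osera", context="bears"))   # -> "osera", left untranslated
```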

We took the new service for a spin. Five years of Spanish in high school and college, as well as countless years of exposure to the language through ads on the subway and watching the World Cup on Univision, have left me borderline-literate in the language. I chose a web page that was inspired by my contributions to Urs Technica: a description of the native bear population of the Iberian peninsula. The page contains a mix of some basic descriptive language, along with more detailed discussions of ursine biology. A second translation using Babelfish was performed at the same time.

Overall, it was difficult to discern a difference in quality between the two. Each service had some difficulty with Spanish's sentence structure, which places adjectives after the nouns they modify. For example, instead of "Discover Bear Country," Google suggested that a link was inviting people to "Discover the Country Bears." Maybe Disney paid for that one.

Both also ran into a number of words they didn't know what to do with; for example, Spanish has a specific word for "bear den"—osera—that neither service recognized and so left untranslated. Neither correctly figured out the proper context for the use of "celo". This is a term that didn't come up during my years of Spanish, but it apparently can be used to describe the annual period of female fertility. Both services went literal when faced with "celo", with Babelfish choosing "fervor" and Google picking "zeal" as its translation. This caused Google to suggest that female bears "can be mounted by several different males over the same zeal."

There were also what might be termed Spanish 101 level errors. The verb "molesta" is generally used to mean "bother" or "harass." Yet Google made a novice-level mistake and did a literal translation to "molest." Neither service demonstrated a human's ability to recognize when they were producing gibberish. Google, for example, described a group of bears gathering around a rich food source as "They can also occur by coincidence, rallies temporary copies in a few places with abundant food."

There was one case where Google's statistical method seemed to lead it astray. Both services went Spanish 101 on the term "crudo," which was used to describe the harshest or roughest part of winter, when bears hibernate. Google apparently applied undue statistical weight to the word "crude." In one case, this trashed the entire sentence that contained "crudo"—a photo of a cold winter scene was captioned: "The period of winter as crude bears spend winter." In a second instance, the more typical context of "crudo" was applied, with hilarious results: "The life of a bear begins as crude oil during winter."

To test a language that is more distant from English, I located a press release in both Japanese and English: the one announcing the 2002 Nobel Prize in Physics, which went to researchers running parallel experiments in the US and Japan. The release in Japanese was available only as a PDF, so I copied and pasted the text into the translation box. The results, which seem to have preserved the line breaks from the PDF, were practically poetic:

I do so without interaction, thus detected is extremely
Difficult for. For example, the trillions of pieces of New
Torino is our second body to penetrate, but I
We are absolutely not aware. Raymond Davis Jr.
Coal giant tank is placed 600 tons of liquid meets applicable
The construction of a completely new detection equipment. He was 30 years…

That bears a slight resemblance to Japanese Zen poetry, which is supposed to startle its readers out of their normal perception of reality, allowing them to reach a Buddhist enlightenment.

This may sound like I'm being excessively harsh regarding Google's new translation method, so I'll reemphasize that it appears to produce translations that are roughly equal in quality to those provided by other services. Where it really shines, however, is its interface. On a translated web page, you can hover the mouse over any translated sentence, and the untranslated version will appear. This is a tremendous aid for those that have a partial command of the language, as the immediate comparison between the texts can help eliminate any confusion caused by mistranslation.

This same feature may ultimately help Google move beyond the quality of other services. Each of these popups comes with a link that offers you the opportunity to suggest a better translation. If people are willing to spend the time suggesting fixes for mistranslations (and vandalism doesn't become a problem), Google may ultimately have a dataset that allows their service to provide an exceptional degree of accuracy.

Danish record labels float flat ISP fee idea for unlimited P2P music

Ah, ha! Come, some music! come, the recorders!
For if the king like not the comedy,
Why then, belike, he likes it not, perdy.
Come, some music!

Hamlet the Dane famously called for music more than 400 years ago, but he had no idea that it would one day come streaming down the tubes and onto computers in his home country for a flat monthly fee. The Danish music business is now proposing a plan to offer unlimited music downloads for around 100 kroner a month (about $19), and although questions remain, it could represent a real step forward.

Hamlet, with all his dithering and delayed revenge, is actually a good model for the music business. Despite knowing for years that bold action was needed to provide users a legal alternative to P2P downloads, the worldwide music industry has been slow to react. When it finally did so, the results involved things like DRM, low bitrates, and high prices for merchandise with little distribution cost: all measures guaranteed to keep legal music from rivaling illicit downloads in popularity. And of course, there were the lawsuits. (Unlike Hamlet, though, the music labels have yet to stab an overweight councilor hiding behind the arras.)

But if life imitates art, we're now moving into Act IV of the play, the part where Hamlet returns to Elsinore with some new ideas about how to deal with his problem. Andy Oram over at O'Reilly Radar noted the recent moves in Denmark to create a system where every ISP user might pay a monthly fee in order to access unlimited P2P music legally.

The proposal has drawn positive feedback from an unlikely source—the local "Piratgruppen."

"It's good that they admit that they cannot solve the problem of falling CD sales by suing their own fans," said Sebastian Gjerding from the Piratgruppen. "It looks like they have understood that they should offer something that is competitive compared to other, free music sources. It is an entirely new admission that hasn't spread internationally yet. IFPI Denmark is on the forefront in this matter. But it is annoying that no action has been taken so far to save many teenagers million-krone fines."

Certainly the idea is interesting, and the industry deserves real credit when it makes bold decisions to embrace such new ideas. The proposal isn't without problems, though. Among them: the fee would apparently be mandatory for all ISP users. Those who don't listen to much music or who don't want to pay $19 a month to do so won't be thrilled. Will it only apply to select ISPs (thus allowing those who don't want the deal to choose another provider), or will the IFPI try to make it mandatory at the national level?

Another issue: will the fee cover worldwide music? If it only covers Danish bands, it may also be of limited utility. But making payments to artists all over the world could be a logistical nightmare.

Finally, would the deal cover indie music or only that from major labels? If it only covers major labels, consumer confusion about what is legal to download and what is not will be widespread, and could certainly irritate bands that don't want their music distributed this way.

Similar ideas about compulsory blanket music licenses have been floating around for a while, but appear to be in no imminent danger of being floated in the US.

So questions remain, but the idea is intriguing. Let's hope that the end of this story looks less like the bloody end of Hamlet, however, and more like the conclusion of As You Like It, complete with music and dancing.

As Leopard Day nears, third-party devs request patience

With Leopard arriving in less than three days, intrepid Infinite Loop readers have probably embarked on their journeys of preparation by cleaning out hard drives, meditating, and making up non-geeky excuses for why they can't hit that party Friday night (it's OK: be one with The Geek inside you). Still, we thought it would be prudent to warn that, even if you're prepared for Leopard, your favorite third-party applications might not be.

Unfortunately, third-party developers are unable to get their applications 100 percent ready for the public version of Leopard. Why? Because they can't get their final copy any earlier than we do. Sure, Apple was seeding plenty of near-release versions to developers just before shipping, but the company still makes changes to the final build that it doesn't share before going to the printers. Sometimes Apple's last-minute changes result in third-party apps experiencing mere quirks that aren't too hard on our workflow, but for some apps these changes could be major game-stoppers that screw up files or prevent an app from even opening.

The moral of the story is that, before upgrading to Leopard, a prudent move would be to check in on announcements from the developers of apps you simply can't live without in the brave new Leopard world. Case in point: Cabel Sasser of Panic, maker of such fine Mac OS X apps as Transmit, Coda, Unison and CandyBar, has provided us with a Leopard status update. For now it sounds like most of Panic's apps work pretty well in Leopard, though Transmit apparently has a quirk or two, but CandyBar 2 won't even open. For those who have just gotta have their custom system icons though, CandyBar 3 is on schedule for November and will include a bonus for users who are less than impressed with Leopard's new 3D Dock: the ability to replace Leopard's Dock. It is worth noting, however, that CandyBar 2 will effectively be discontinued once Leopard arrives, as CandyBar 3 echoes a growing trend among Mac OS X developers of taking apps Leopard-only. Anyone interested in CandyBar but sticking with Tiger for a while should download the latest version before it disappears from Panic's site.

Instead of cluttering up your RSS feeds with every Leopard-ready announcement, we'll do our best to provide round-ups of apps that make the leap in order to help you pin down exactly when it's safe for your third-party apps to play with Apple's new kitty.