Russian crackers throw GPU power at passwords

Russian "password recovery" (read: password-cracking) company Elcomsoft hasn't really been in the news since 2003, when Adobe helped make "Free Dmitry" the new "Free Kevin" by having one of the company's programmers, Dmitry Sklyarov, arrested for cracking its eBook Reader software. But Elcomsoft has remedied the lack of press attention this week with its announcement that it has pressed the GPU into the service of password cracking.

With NVIDIA and AMD/ATI working overtime to raise the GPU's profile as a math coprocessor for computationally intensive, data-parallel computing problems, it was inevitable that someone would make an announcement that they had succeeded in using the GPU to speed up the password-cracking process. Notice that I said "make an announcement," because I'm sure various government entities, domestic and foreign, have been working on this from the moment AMD made its "close-to-metal" (CTM) package available for download. The Elcomsoft guys didn't use CTM, though. They opted to go with NVIDIA's higher-level CUDA interface, a move that no doubt cut their development time significantly.

Elcomsoft's new password cracker mounts a brute-force attack on the NTLM hashes that Windows uses to store passwords. The company claims that its GPU-powered attack cuts the time needed to crack a Vista password from two months to a little over three days.
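
For context, an NTLM hash is just the MD4 digest of the password encoded as UTF-16LE, which is part of why brute force is feasible at all. The sketch below shows the general shape of such an attack; it is purely illustrative (a real cracker would run massively parallel kernels on the GPU, not a Python loop), and it assumes your hashlib build still exposes MD4.

    import hashlib
    from itertools import product

    def ntlm_hash(password):
        # NTLM is simply MD4 over the UTF-16LE encoding of the password.
        # MD4 availability in hashlib depends on the underlying OpenSSL build.
        return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

    def brute_force(target_hash, alphabet, max_len):
        # Exhaustively try every candidate string up to max_len characters.
        for length in range(1, max_len + 1):
            for combo in product(alphabet, repeat=length):
                candidate = "".join(combo)
                if ntlm_hash(candidate) == target_hash:
                    return candidate
        return None

    # Toy demonstration: recover a known three-letter "password."
    target = ntlm_hash("abc")
    print(brute_force(target, "abcdefghijklmnopqrstuvwxyz", 3))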

Elcomsoft says it has filed for a US patent on this approach, but it's not clear what exactly it is attempting to patent. A search of the USPTO's patent database turned up nothing, but that could be because the filing hasn't made it into the database yet.

Ultimately, using GPUs to crack passwords is kid's stuff. The world's best password cracker is probably the Storm Worm, assuming that its owners are using it for this. As many as ten million networked Windows boxes—now that's parallelism.

Climate change mega-post

This week there seems to be a lot of climate news around, some good, some bad, and some that is just ugly. Rather than putting up a plethora of posts and getting accused of being Ars Climactica, we thought we would combine them into a single mega post for your consumption.

The first paper, published in Science, looks at the prospects for narrowing the range of estimates for the future climate. In doing so, the authors note that the climate is a system made up of many physical processes that are coupled together nonlinearly. This has led climate modelers to focus on physical mechanisms and the fundamentals of nonlinear dynamics to understand and improve their models. Notably, the explicit inclusion of many physical mechanisms has not led to a significant decrease in the range of climate predictions. Most of the blame for this has fallen on the nature of nonlinear systems: to obtain a small increase in predictive ability, one needs a very large increase in the accuracy of the initial conditions. We are stuck because we can't improve the accuracy of our ancestors' weather stations, and other methods, such as ice core samples, will only ever yield averages. But as our earlier coverage on the nature of climate modeling explains, this isn't really the heart of the issue. Climate models use a range of initial conditions and measure the probability of certain climatic conditions occurring based on those modeling results.

Instead of focusing on the physics of the climate or the dynamical system, Roe and Baker look at the behavior of a simple linear equilibrium system with positive feedback. All the physics is replaced with a single gain parameter, which describes how an increase in average temperature leads to a further increase in temperature. Although this does not describe the physics, it does encompass what we measure, so the model is valid for their purposes. They then explore how uncertainty in the gain parameter changes the predicted temperature increase. The positive feedback has the effect of amplifying the uncertainties (just as a nonlinear system would), meaning that it is practically impossible to sharpen climate estimates. This result is not really driven by the initial conditions (e.g., the starting climatic conditions) but by the natural uncertainty in the physical mechanisms themselves—things like cloud cover, which are a major focus of current modeling efforts. Basically, the amplification of those uncertainties, combined with the timescales involved, means that even the smallest uncertainties blow out into the large range of temperatures predicted by climate researchers.
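
To see why the amplification happens, it helps to write the feedback down explicitly. The relation below is the standard textbook form for a linear system with feedback gain f—not necessarily the paper's exact notation—linking the equilibrium warming ΔT to the no-feedback warming ΔT₀:

    \Delta T = \frac{\Delta T_0}{1 - f},
    \qquad
    \frac{\partial(\Delta T)}{\partial f} = \frac{\Delta T_0}{(1 - f)^2}

As f gets close to 1, the derivative blows up, so even a small, symmetric uncertainty in f is stretched into a large, skewed spread in ΔT—which, in this picture, is where the stubbornly wide range of warming estimates comes from.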

This news will not call off the search for the parts of the environment that influence our climate because, if we are to mitigate global warming, we must know which parts are the most effective to change. That obviously includes human behavior, which covers a whole gamut from urban lifestyles through to farming practices. Part of this picture is soil erosion, which removes carbon from the soil and deposits it elsewhere. The question isn't so much where it ends up as what happens to that carbon en route and once it arrives. It was thought that perhaps soil erosion contributed carbon dioxide to the atmosphere by opening up new mechanisms for the decomposition of organic matter. Alternatively, it has been argued that soil erosion deposits organic carbon in places—like the bottom of the sea, for instance—where it is effectively stored. However, testing these hypotheses has been problematic.

Nevertheless, problematic is what a good scientist looks for, so, with fortitude and dedication to the cause, scientists from the EU and US collaborated to measure the uptake and removal of carbon across ten sites. They report in Science this week that, like normal land, eroding land also acts as a carbon sink. They do note that in eroding landscapes the carbon is more likely to move around laterally, but it is no more likely to enter the atmosphere as carbon dioxide than on healthy pastureland. Of course, the amount of carbon stored is slightly less, so these soils are perhaps not quite as efficient as carbon sinks as normal soils. Further research is needed to determine whether there are differences in the long-term fate of carbon between normal pasture and eroding soils—but until that research is done, we can cross soil erosion off the list of things to worry about in terms of global warming.

On the bad news front, rapid industrialization in the developing world and the lack of action in the developed world are now measurably increasing the rate at which we deposit carbon dioxide into the atmosphere. This is the conclusion of a paper to be published in the Proceedings of the National Academy of Sciences. Essentially, the authors took estimates of anthropogenic carbon dioxide emissions, compared them to the measured concentration in the atmosphere, and determined from the time series that the natural carbon sinks are either already saturated or nearing saturation. The conclusion is that the concentration of carbon dioxide in the atmosphere is likely to increase faster than predicted in most scenarios. This is especially true since most scenarios assume that we will take some action to keep the rate of increase in atmospheric carbon dioxide (as a percentage) below the rate of economic growth (also as a percentage). Not the best news.

Electronic Arts to undergo empire-wide restructuring, layoffs

When you're on top, the only place to go is down. In the face of stiff competition, EA's profits have begun to drop. Destructoid is reporting that job cuts and branch restructuring have already begun taking place, with extensive changes being made to many different studios under EA's umbrella, including Mythic.

Word of these changes came from an internal EA e-mail. CEO John Riccitiello has begun taking precautions to ensure that the current state of affairs of his company doesn't continue. This follows a previous restructuring meant to rebalance staff across the many branches of the company. To quote the e-mail:

Given this, John Riccitiello, our CEO, has tasked the company to get its costs in line with revenues… Every studio, group and division of the company has been tasked to review its overall headcount and adjust its organization to meet the needs of the business moving forward.

The changes to Mythic appear to be only the first in what will be a long line of changes. Certain teams, such as the Ultima Online group, will be relocated. Competitive employment strategies will also be enforced to keep employees working hard if they want to keep their jobs: "attrition, performance management, stricter hiring guidelines, and layoffs" will purportedly keep workers in check.

Given the state of EA's multiplatform competitors, including Activision, which is set to release one of the assured hits of the winter in Call of Duty 4, and long-time rival Ubisoft, which is sitting on Assassin's Creed, the company will be pressed to start taking more risks like skate if it hopes to stay fresh in this increasingly competitive development scene.

Teachers’ lack of fair use education hinders learning, sets bad example

Here's how bad it is: not a single teacher interviewed for a recent study on copyright reported receiving any training on fair use.

Copyright confusion is running rampant in American schools, and not just among the students. The teachers don't know what the hell is going on, either, and media literacy is now being "compromised by unnecessary copyright restrictions and lack of understanding about copyright law."

That's the conclusion of a new report from the Center for Social Media at American University. Researchers wanted to know if confusion over using copyrighted material in the classroom was affecting teachers' attempts to train students to be critical of media. The answer was an unequivocal "yes."

One teacher, for example, has his students create mashups that mix pop music and news clips to comment on the world around them. Unfortunately for the students, the school "doesn't show them on the school's closed-circuit TV system" because "it might be a copyright violation."

One big problem is that few teachers understand copyright law; they follow guidelines drawn up by school media departments or district lawyers, or they rely on books that attempt to lay down principles appropriate for an educational setting. As the report notes, though, this advice is generally of the most conservative kind, while long-established principles of fair use may afford far more rights—especially in a face-to-face educational setting.

Researchers found that teachers may not understand the law (or may understand it to be unduly restrictive), but that they deal with their confusion in three different ways. Teachers can "see no evil" by refusing to even educate themselves about copyright, on the thinking that it can't be wrong if they don't know it's wrong. Others simply "close the door" and do whatever they want within the classroom, while a third group attempts to "hyper-comply" with the law (or what they perceive the law to be).

The results can be less-effective teaching tools. One teacher profiled in the survey wanted to promote literacy among kids who might not be enthused about it, and he thought that using lyrics from the Beatles and Kanye West might be a good way to do it. The license holders wanted $3,000. The report's authors claim that a robust understanding of fair use would give educators far more confidence about using such materials in the classroom.

Because teachers aren't confident in the rules and have no training in fair use, many rely on rules of thumb with no real basis in the law. One teacher, for instance, told her students, "If you have to pay to use or see it, you shouldn't use it," even though uses of such works for commentary, criticism, and parody are explicitly permitted by US copyright law. The result is students who are even less informed about copyright law than their teachers.

Creating a new "code of practice" for educators could go some way toward fixing the situation, especially if such a code were blessed by major library and teachers' associations.

But the basic issue is the fear of lawsuits that could cost a school district tens of thousands of dollars. Because the four fair use factors are intentionally left vague (so that they can cover a huge variety of situations), those in charge of local copyright guidelines tend to issue rules far more stringent than anything the law obviously requires. The new report hopes to show educators that by learning a bit more about copyright, they can have confidence in crafting a broad array of teaching tools and classroom assignments, even when those involve bits of copyrighted work.

Examining the security improvements in Leopard

There have been several articles on Leopard's new security features popping up on various Mac websites but, so far, they've all been little more than rewrites of the security section in Apple's list of 300 new Leopard features. However, Rich Mogull's How Leopard Will Improve Your Security on TidBITS goes much further.

Interestingly, Rich starts by touting Time Machine as a big security win. A good way to keep your data from prying eyes is to delete it—don't forget to "erase free space" with the appropriate security options in Disk Utility, though—but that also kind of defeats the purpose of having data in the first place. Time Machine makes sure you get to keep your data to secure it another day.

The next improvement that Rich points out in Leopard is "stopping buffer overflows." Well, that's not actually what Leopard does. Even in Leopard, writers of applications, libraries, and operating system components can still write code that fails to restrict input data, allowing it to be written beyond the memory buffer set aside for it. Therefore, buffer overflows are still possible. But the whole point of a buffer overflow exploit is to get the system to execute code sitting in that excess data—"arbitrary code" that can do something on behalf of the attacker. What Leopard does is randomize the location of various libraries in memory. This means that the attacker can't simply make the program jump to a known library location as the next step in its attack. Library randomization isn't foolproof—an attacker can still get lucky or be very persistent—but it certainly derails the vast majority of buffer overflow attacks.
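
One way to observe library randomization on any system that has it—this is a generic demonstration, not anything Leopard-specific—is to print the address at which a system library function was loaded and compare it across runs:

    import ctypes
    import ctypes.util

    # Load the C library and print the address where one of its functions
    # ended up. On a system with library randomization, the address changes
    # from run to run; without it, the address is the same every time.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    print(hex(ctypes.cast(libc.printf, ctypes.c_void_p).value))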

The article goes on to talk about "identifying and defanging evil apps" in the form of tagging downloads, explains how vulnerable system components run in a "sandbox," and more. Personally, I'm very interested to see what the firewalling improvements amount to. Applications can be firewalled individually in Leopard, but it's unclear at this time how fine-grained that control is.

Using antennas to see really small stuff

A lot of the recent developments in microscopy have centered on visible light (400-650nm) or near-infrared light (700-2500nm). This is because detectors are most sensitive to visible and near-infrared light, and most commercial lasers operate in this wavelength range. The problem is that nothing interesting happens in this wavelength range. Most objects are reasonably transparent to visible and near-infrared light, so images are generally created by labeling a region of interest with fluorescent materials, which then glow in the presence of the laser light. Another problem with microscopy is the diffraction limit, which sets the smallest feature that can be resolved. In most cases, this is something like the wavelength of the light (around 400nm), and that is too big to resolve individual proteins or DNA molecules. Microscopy using mid-infrared light and optical antennas to beat the diffraction limit may enable high-resolution imaging that can also identify the chemical it is imaging. Here, we report on some recent progress in developing the tightly focused light source required for such a microscope.
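
For reference, the usual back-of-the-envelope version of that limit is the Abbe expression—a standard textbook formula, not something taken from the paper discussed below—which ties the smallest resolvable feature d to the wavelength λ and the numerical aperture NA of the optics:

    d \approx \frac{\lambda}{2\,\mathrm{NA}}

With a numerical aperture near 1, resolution bottoms out at roughly half the wavelength: a couple of hundred nanometers for visible light, and a few micrometers once you move into the mid-infrared.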

As we have discussed in other articles, there are methods for defeating the diffraction limit. For example, light can be guided in some structure that is tapered to a tip whose dimension is much smaller than the wavelength of light (say 10nm). If the outside of the tip is conductive, the light excites the electrons, causing them to collectively vibrate down the guiding structure. At the end of the structure, the electrons release the energy as light, as if it had been conducted down the structure. However, the light is emitted in every direction and is only very intense right at the end of the tip. The intensity of the scattered light can be used to map a surface with a resolution about the same as the tip diameter.

Using this and similar techniques, scientists could, in principle, resolve an individual protein molecule. The difficulty is that the protein is transparent to the light used, and if we use a fluorescent label, we are imaging the label, not the protein. In other words, labels are very useful when looking at populations of proteins (or other molecules) but are of more limited use when studying individual molecules.

Enter the quantum cascade laser, a class of laser that emits in the mid-infrared (3-5 micrometers). These lasers use a very finely structured semiconductor to weakly confine electrons in very small boxes. The boxes give the electrons a set of well-defined energy levels to occupy. When a voltage is applied, the electrons travel from box to box in such a way that they must transition down an energy level with each move. For every transition, they release a photon of light, and the presence of photons can stimulate electrons to make the transition; hence, a laser is born. The difference is that the wavelength of these lasers is limited only by the physical dimensions of the boxes, meaning that we are no longer stuck with the laser light colors given to us by nature. Quantum cascade lasers have found their niche in the mid-infrared and infrared (3-15 micrometers), where they make a lovely, reliable source for people wanting to do spectroscopy.

The thing that makes this interesting is that almost every molecule in existence absorbs somewhere in the mid-infrared, making mid-infrared spectroscopy a key tool for identifying and understanding molecules. The problem is that the diffraction limit means that you can only resolve objects around three micrometers big. In principle, the quantum cascade laser could be used to detect the absorption from a single protein molecule, but it can only tell you where that molecule is to within three micrometers.

Now a group of researchers from Harvard, with support from Agilent, have combined the ideas used for high resolution imaging with quantum cascade lasers. To do this, they deposited a couple of metallic strips on the emitting face of the quantum cascade laser, forming an antenna. This metallic layer absorbed a lot of the light from the laser, causing the electrons to oscillate coherently. The light emitted from the gap between the strips is very intense because it gets most of the energy from the antenna. However, it also radiates in every direction, so the intensity is only very high near the gap. Imaging with such a laser will reveal features on the order of the gap size, which is about 100nm. This is still too big to reveal single proteins, but is certainly much smaller than most microscopes operating in the mid-infrared.

Now, there is a downside to this. Unlike normal laser diodes, quantum cascade lasers aren’t really that tunable. If you ask for a quantum cascade laser with a wavelength of five micrometers, that is what you will get. Unfortunately, spectroscopy really requires accessing a broad range of colors, all in the mid-infrared. This means that the light source will have to be different if this is to be employed as a generalized microscopy tool. However, there are plenty of applications where the ability to image the locations of a few key chemicals would be required to obtain useful information. There is certainly room for a specialized instrument utilizing this technique.

Applied Physics Letters, 2007, DOI: 10.1063/1.2801551

ICANN probing “insider trading” allegations with domain name registrations

The Internet Corporation for Assigned Names and Numbers (ICANN) has begun an investigation (PDF) into accusations that some insiders may be using inside information to collect data and purchase unregistered domain names that get a lot of DNS lookup requests—nonexistent domains that surfers frequently try to access. ICANN refers to the practice as "domain name front running," adding that it—along with several registrars and intellectual property attorneys—has received a number of complaints from registrants suggesting that such a thing has occurred. While the organization has no solid evidence on the matter as of yet, it feels that an investigation is warranted in order to nip in the bud any perception that the domain name industry is involved in unethical activity.

ICANN's Security and Stability Advisory Committee (SSAC) likens the practice to stock and commodity front running, which occurs when a broker makes a personal stock purchase based on inside information before fulfilling a client's order. An insider at one of the popular domain registrars can see which domain names are popular with visitors, even if they are not yet registered. That person can then register the domain—knowing, before the general public does, how much traffic it could get—with the intent to resell it at a profit later.

While the practice is illegal when it comes to stocks and commodities, the situation is much cloudier when it comes to domain names. ICANN recognizes the lack of regulation covering this area and makes it clear that a stronger set of standards needs to be established. "ICANN's Registrar Accreditation Agreement and Registry Agreements do not expressly prohibit registrars and registries from monitoring and collecting WHOIS query of domain name availability query data and either selling this information or using it directly," writes the SSAC. "In the absence of an explicit prohibition, registrars might conclude that monitoring availability checks is appropriate behavior."

The SSAC report comes just a day after news leaked that Verisign, a major root name server operator, was considering selling access to select DNS server lookup data. DomainNameNews first broke the story, saying that sources had indicated the company would provide "lookup traffic" reports on specific domains. The sources also said that pricing for the service was not known, but that it could cost up to $1 million per request.

The SSAC is now calling for public discussion of the situation in hopes of gathering more data and coming up with standardized practices for managing it. The committee suggests that those involved with domain name registrations "examine the existing rules to determine if the practice of domain name front running is consistent with the core values of the community." If registrants continue to find what they consider to be evidence of the practice, SSAC requests that users submit incidents to [email protected] with as much information as possible, including specific details of domain name checks and copies of any correspondence with the party believed to be engaged in domain name front running.

Microsoft antes up $240 million for a piece of the Facebook action

All of the recent flirting between Facebook and Microsoft has turned into hot equity action, as the two companies have announced that Microsoft will make a $240 million investment in the social networking site. In addition, Microsoft will begin selling ads for Facebook outside of the US and will become the site's exclusive ad provider in the US.

Facebook's value is not in the software itself—which could be duplicated relatively easily by a small group of programmers—but in the vast social networks the site has gathered, networks that contain information about people's interests and desires that would be invaluable for any marketing company.

Launched in early 2004, Facebook was originally targeted to college students, limiting registrations to those with a .edu e-mail address. The company opened the registration doors to all comers in September 2006, and the move appears to have paid off: the site is drawing an average of 250,000 new registered users every day, according to Facebook. Facebook now has over 49 million active users, according to VP of operations Owen Van Natta.

Just a couple of weeks before removing the college-students-only registration limitation, Facebook and Microsoft inked an advertising pact that made Microsoft the exclusive banner ad provider. The companies extended that agreement through 2011 earlier this year.

Google had also been rumored to be courting Facebook, but Microsoft appeared determined to close the deal. Google already has an exclusive $900 million pact with MySpace to provide that site—and other Fox Interactive Media properties—with contextual ads and search services. Yahoo has also courted Facebook in the past, but the $750 million to $1 billion offers were apparently not enough to scratch Facebook's financial itch.

Microsoft's $240 million investment is part of a new round of financing for Facebook, one that places a $15 billion valuation on the company.

PodSleuth to bring better iPod support to Linux

Banshee developer Aaron Bockover announced the PodSleuth project earlier this week; it is designed to expose iPod metadata through the Linux Hardware Abstraction Layer (HAL). PodSleuth replaces the old libipoddevice and is designed to be more adaptable and future-proof.

PodSleuth metadata will be merged into the iPod’s HAL device representation as properties so that the information can easily be accessed by any application that can interact with HAL. PodSleuth uses information extracted directly from plists on the devices and only relies on its model table to ascertain “cosmetic” distinctions, so devices that aren’t registered in the model table will still be supported. PodSleuth will provide an icon metadata property through HAL for devices that are listed in the model table, enabling the proper icon for known iPod devices to be displayed in Banshee and Nautilus.
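
As a rough illustration of what "exposed through HAL" means in practice, the sketch below queries HAL over D-Bus for portable audio players and reads properties from them. It is a minimal sketch under stated assumptions: the ipod.* property key is hypothetical, chosen only to show how merged PodSleuth metadata would be read, and the actual keys PodSleuth exports may be named differently.

    import dbus

    # Connect to HAL on the system bus and look up devices that advertise
    # the portable_audio_player capability.
    bus = dbus.SystemBus()
    manager = dbus.Interface(
        bus.get_object("org.freedesktop.Hal", "/org/freedesktop/Hal/Manager"),
        "org.freedesktop.Hal.Manager",
    )

    for udi in manager.FindDeviceByCapability("portable_audio_player"):
        device = dbus.Interface(
            bus.get_object("org.freedesktop.Hal", udi),
            "org.freedesktop.Hal.Device",
        )
        # "info.product" is a standard HAL key; the ipod.* key below is a
        # made-up example of where merged PodSleuth metadata could appear.
        print(udi, device.GetProperty("info.product"))
        if device.PropertyExists("ipod.model.shell_color"):
            print("  shell color:", device.GetProperty("ipod.model.shell_color"))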

PodSleuth is currently available from the GNOME version control system, but is still under heavy development. An initial release is expected to take place next week along with a new version of ipod-sharp and Banshee 0.13.2. These releases will bring support for the new iPods to Banshee.

In a blog entry, Bockover also addresses criticisms of his choice to use C# as the programming language for PodSleuth. He points out that PodSleuth is a HAL service and not a library, which means that other programs don’t have to be written in C# to use the functionality. PodSleuth also only uses the ECMA approved portions of Mono, which means that it doesn’t rely on any patent-encumbered code.

Apple’s attempts to lock iPod users into iTunes have been unsuccessful, and impressive open source software solutions continue to provide strong alternative music management options for current iPod owners. Despite the availability of iPod support on Linux, the need to constantly reverse engineer and hack around Apple’s lock-in mechanisms makes the iPod a poor choice for Linux users, and there is no guarantee that Apple’s anti-hacker efforts will be so easily overcome in future firmware revisions. Linux users should still consider buying alternative products that support open standards.

Seagate customers eligible for manufacturer refunds, free software

Back in 2005, a woman named Sara Cho sued Seagate, alleging that the company's use of decimal rather than binary units when reporting hard drive sizes constituted false advertising. If you're not familiar with the difference between how the hard drive industry measures a gigabyte and how everyone else measures one, it boils down to this: HDD manufacturers (including Seagate, Western Digital, Samsung, and Hitachi) define one gigabyte as one billion bytes, while operating systems and most other software treat a gigabyte as 2^30 bytes, or about 1.074 billion—a difference of 7.4 percent. The gap between the two measurements grows along with hard drive capacities; at the one-terabyte level, it increases to roughly 10 percent.
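
The arithmetic behind those percentages is easy to verify; here's a quick sketch:

    # Decimal (manufacturer) vs. binary (OS-reported) units, and the gap between them.
    units = {"gigabyte": (10**9, 2**30), "terabyte": (10**12, 2**40)}

    for name, (decimal, binary) in units.items():
        gap = (binary - decimal) / decimal * 100
        print(f"1 {name}: decimal = {decimal:,} bytes, binary = {binary:,} bytes, "
              f"gap = {gap:.1f}%")

    # Prints a gap of about 7.4% at the gigabyte level and about 10.0% at the
    # terabyte level, matching the figures above.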

According to details posted at the settlement website, Seagate has agreed to issue a refund equal to five percent of a drive's original purchase price, provided the hard drive was bought between March 22, 2001 and September 26, 2007. Alternatively, customers can request a free set of Seagate's backup and recovery products (valued at $40). Seagate has agreed to this settlement despite denying any liability (and all of Cho's claims). The settlement must still be approved by the presiding judge and no ruling regarding the merits of the case has been given.

In order to submit a claim, buyers must fill out either an online claim form (for the free software) or a mail-in claim form (for the five percent refund). Drive serial numbers, merchant identification, and the month, date, and year of the purchase are all required for either form, so if you've already tossed the drive, or don't remember when you bought it or who you bought it from, you're unfortunately out of luck.

As for the merits of Cho's case, I can see her point—but her failure to win any real concessions from Seagate regarding product labeling means that the problem will continue. What might have seemed trivial at one megabyte becomes a notable loss at one terabyte, though I have to admit that I don't plan on taking to the streets over the issue. It's quite possible, however, that Cho's settlement (if approved) will open the door for similar actions against other major hard drive manufacturers.

Simple Turing machine shown capable of solving any computational problem

A proof made public today illustrates that Stephen Wolfram's 2,3 Turing machine number 596440 is a universal Turing machine, and it has netted a University of Birmingham undergraduate $25,000. In 1936, mathematician Alan Turing proposed a machine that was the original idealized computer. A small subset of these Turing machines, known as universal Turing machines, are capable of solving any computational problem. In May, Wolfram put forth a challenge to amateur and professional mathematicians alike to determine whether one of the Turing machines listed in his book, "A New Kind of Science," was indeed universal.

Turing machines are simple logic devices that can be made to simulate the logic of any standard computer that could be constructed. They consist of an infinite number of cells on a tape (the memory) and an active cell that is referred to as the "head." Each cell can be one of a set number of colors, and the head can be in one of a fixed number of states. A set of rules determines how the combination of cell color and head state dictates what color should be written to the tape, what state the head should be placed in, and what direction it should move (left or right).
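
A minimal simulator makes those mechanics concrete. The sketch below runs an arbitrary 2-state, 3-color rule table made up purely for illustration—it is not Wolfram's machine 596440—just to show how the (state, color) lookup drives the tape:

    from collections import defaultdict

    def run_turing_machine(rules, steps, start_state="A"):
        # rules maps (state, color) -> (new_color, move, new_state),
        # where move is +1 (right) or -1 (left).
        tape = defaultdict(int)   # infinite tape; blank cells read as color 0
        head, state = 0, start_state
        for _ in range(steps):
            color = tape[head]
            new_color, move, state = rules[(state, color)]
            tape[head] = new_color
            head += move
        return tape

    # An arbitrary 2-state, 3-color rule table (NOT machine 596440).
    example_rules = {
        ("A", 0): (1, +1, "B"), ("A", 1): (2, -1, "A"), ("A", 2): (1, -1, "A"),
        ("B", 0): (2, -1, "A"), ("B", 1): (2, +1, "B"), ("B", 2): (0, +1, "A"),
    }

    tape = run_turing_machine(example_rules, steps=50)
    print("".join(str(tape[i]) for i in range(min(tape), max(tape) + 1)))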

[Figure: the 2,3 Turing machine, number 596440 in Wolfram's numbering scheme]

On the fifth anniversary of the publication of "A New Kind of Science," Wolfram issued a challenge, namely, "how simple can the rules for a universal Turing machine be?" To spur interest, he offered a $25,000 prize to anyone who could prove, or disprove, that the 2-state, 3-color Turing machine pictured above is universal. Finding small universal Turing machines is not a major problem in modern computer science or mathematics. According to MIT computer scientist Scott Aaronson, "Most theoretical computer scientists don't particularly care about finding the smallest universal Turing machines. They see it as a recreational pursuit that interested people in the 60s and 70s but is now sort of 'retro.'" Still, the people on Wolfram's prize committee hoped the challenge would spur new work.

While not the $1,000,000 prize attached to the Clay Millennium Prize problems, it spurred interest in at least one person. Twenty-year-old Alex Smith, an electronic and computer engineering student at the University of Birmingham in the UK, has solved the problem and will receive the award money.

Smith said he first heard about the problem in an Internet chat room, decided the problem was interesting, and attempted to tackle it. His proof was not direct; he demonstrated that the 2,3 Turing machine was computationally equivalent to a tag machine—something that is already known to be universal. In addition to his proof—available for free (PDF)—he developed a "compiler" that would generate 2,3 Turing machine code that is capable of solving any computational problem. According to Smith, he has no big plans for the prize money. "I'm just going to put it in the bank," he said.

Small plans: NVIDIA and the future of smartphones

It’s the last two decades all over again

The past two decades of PC history have been about desktops, servers, and laptops, but the "personal computer" of the coming decade is a small, pocket- or purse-sized device with a brightly lit screen, wireless networking and I/O, a sizable chunk of storage, and plenty of CPU and GPU horsepower on board. In short, you might say that the iPhone is the Macintosh 128K of the post-PC era, the 2008 lineup of Intel-based mobile products are the IBM PC XT, and we're all about to relive the 80s and 90s (complete with a brand new RISC versus CISC faceoff) but on a much smaller scale and in a more compressed timeframe.

Over the past few weeks, I've told you a bit about Intel's plans for this coming wave of pocket-sized personal computers: Silverthorne/Poulsbo will bring high-powered x86 hardware down into the ultramobile PC (UMPC) form factor in 2008, followed by the even smaller 32nm Moorestown chip that will be Intel's first full-fledged x86 media SoC and which could possibly be the future brains of Apple's iPhone. But I haven't yet told you about Intel's competition.

NVIDIA, AMD/ATI, ARM, and other powerhouses in the PC and embedded spaces aren't sitting idly by while Intel takes direct aim at what will be one of the hottest new battlegrounds of the post-PC era: your pocket. In the coming days, I'll tell you what each of these companies is up to, starting with NVIDIA.

"It is ultimately a computer that ends up in your pocket"

I recently had a series of exchanges with NVIDIA, including a free-ranging chat with Mike Rayfield, the general manager of NVIDIA's mobile group, about NVIDIA's plans for handheld devices. Like the rest of the technology industry, NVIDIA has been closely watching the smartphone space in general and the iPhone launch in particular, and the company has learned a few things both from Apple and from their own experience with the GoForce line of media SoCs.

The first lesson of the emerging mobile market is this: desktop PCs are about applications and performance, but handheld devices are about functionality and features. And on the list of important handheld features, the ability to make a voice call has gone from the top to somewhere near the bottom in the post-iPhone era.

"Historically, the handset market has been all about making a phone call," said Rayfield. "When you see advertisements for every phone but the iPhone, it's all about showing the form factor of the phone, and what color it is, or what size it is. It's basically an industrial design advertisement, or an advertisement by the network saying that your calls won't get dropped."

"The iPhone was the first one where, when you see the ad, you're actually looking at the phone doing something. The last thing they show you on the advertisement is making a phone call. So we believe that's reflecting what's happening in the industry, that these handheld devices are ultimately becoming your most personal computer. It is ultimately a computer that ends up in your pocket."

Repair service dubs Apple most reliable, Lenovo takes second

In a recent satisfaction survey, Apple scored highly in the reliability, tech support, and repair categories. That was an opinion survey, though, and not necessarily a scientific measure. As a more quantitative measure of manufacturer reliability, a third-party repair company called RESCUECOM has released its yearly Computer Reliability Report, which also puts Apple in the top spot for reliability.

The report's methodology computes a reliability index for each manufacturer by comparing the percentage of repair calls RESCUECOM receives for that manufacturer's products against the manufacturer's average Q2 market share. Apple finished first with a score of 357, meaning that its share of repair calls was less than a third of its estimated market share of 5 percent. Lenovo dropped down to second place—its score was over 100 points lower than Apple's.
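
Assuming the index is simply market share divided by call share, scaled by 100—that's my reading of the description above, not RESCUECOM's published formula—Apple's 357 works out like this:

    \text{index} = 100 \times \frac{\text{market share}}{\text{share of repair calls}}
    \quad\Longrightarrow\quad
    \text{share of repair calls} \approx 100 \times \frac{5\%}{357} \approx 1.4\%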

Although this looks like a mark in the 'win' category for Apple, I'm not sure a report from a third-party repair service is the best indicator of reliability. My biggest issue with this report is the fact that calls to RESCUECOM may not be indicative of overall reliability—I would imagine that if anyone has an issue with a new or warrantied computer, that person would call AppleCare or the other manufacturer first, leaving RESCUECOM out of the picture. Even if something isn't under warranty, knowledgeable friends or local computer stores might get tapped for the repairs.

So, yes, the survey results could mean Apple has great overall reliability. But then again, the report only tells us about repair calls made to RESCUECOM, which could be a fairly small subset of the overall reliability picture. I think the numbers are still good for Apple, but we'd advise you to take the results with a few grains of salt.