Apple considered iPhone with physical keyboard? Wait, what?

One little decision can provoke so much.

It seems that, in those days when everyone believed that BlackBerrys were the most extraordinary machines on the planet, Apple was still cogitating over its little iPod-phone thingy.

And apparently one of the options the company considered was to have a physical keyboard. Yes, like the BlackBerry. With real physical buttons.

Tony Fadell — who left Apple to create the learning-thermostat company Nest — said that three designs were under consideration, one involving a hardware keyboard.

One imagines this might have involved the keyboard sliding out of the phone. You know, like, well, all those wonderful phones that still have that design.

Some might muse that it’s something of a relief that Apple committed itself to touch-screen technology, something that makes using a smartphone peculiarly pleasant. However, what would have happened if Apple had gone with a physical keyboard?

Would everyone else have decided that because Apple is doing it, that must mean it’s cool? Or would some other enterprising company have been the first to go with its instincts and created the first touch-screen smartphone?

Which company might that have been? Microsoft, surely.

Dell says XPS 13 ultrabook exceeds sales expectations

The XPS 13 ultrabook is selling well above expectations, a Dell executive told CNET this week, offering some hope for the new class of skinny laptops.

“A little bit less than 3X the expected demand,” said the executive, Sam Burd. He declined to be more specific, saying Dell “never” discloses numbers.

Still, an upbeat statement about sales — however nonspecific — is good news. Industry observers are watching the category closely to see if it can succeed and take some of the wind out of the sails of the MacBook Air and the iPad. The latter is selling at a blistering pace of more than 10 million units a quarter.

“I’m optimistic in the long run about ultrabooks,” said Stephen Baker, an analyst at the NPD Group.

He says PC makers and retailers need to get off the “$399 treadmill” by cutting back on the number of models and making more money off the ones that remain. “Look at the iPad. People are willing to pay $600 or $700 for something that gives them a great experience. Something that looks good and makes them feel comfortable and confident,” he said.

The XPS 13 passes the good-looks test. And it’s thin and light (0.71 inch thick, 3 pounds).

But it’s not cheap, starting at $999. So, why is it selling so well? “Half the sales of the XPS 13 are coming from enterprise [large corporate] customers. That’s a lot of its success,” Burd said.

And that’s one of the bigger challenges for Dell — to straddle the consumer and corporate markets with a single design. For those who haven’t noticed, Dell is becoming more of a corporate enterprise-centric company and less of a consumer outfit. So, designs like the XPS 13 that appeal to both sets of customers are an imperative.

This trend is sometimes referred to as the “consumerization” of IT: employees bringing their personal devices — like iPads — to work.

Burd says the XPS 13 inherits some of the traits that make the iPad and smartphone so popular. “We took the things that an iPad or smartphone does well, in terms of booting up quickly, being highly mobile…and then took that even further. You can do productivity and not lose anything,” he said, referring to common business tasks like word processing and spreadsheets.

But it’s still corporate-capable. “We can load a company’s image on the system, we can put custom BIOS settings on the system, an asset tag so they can track it,” he said.

This is a different tack than the one the company took with its original ultrathin laptop, the Adamo. That aluminum-clad, 0.65-inch-thick design — announced back in early 2009 — was the first thoughtful response to the MacBook Air from a first-tier PC maker. But, unlike the XPS 13, it was not also marketed as a corporate workhorse.

“The [Adamo] design was cutting edge [and] ended up being great looking but an expensive system with less power. It was run off ULV [ultra-low-voltage] processors that at that time were a lot slower,” he said. The XPS 13 — designed in Austin by Dell — uses much faster Sandy Bridge processors today.

What’s next for Dell? “We think touch becomes a pretty interesting option for products that have Windows 8 loaded on them,” Burd said. But that won’t happen automatically. “Touch adds cost…part of it becoming standard is that people need to see the value of that. It’s still a pretty significant added cost, adding capacitive touch,” he said.

And expect more XPS and Inspiron (Dell’s consumer brand) models later. “We’ll have sister, brother products to the XPS 13 that will build out that portfolio and we’ll have a new design language for the Inspiron too,” he said.

Comparing apples and oranges

Every year, U.S. supermarkets lose roughly 10 percent of their fruits and vegetables to spoilage, according to the Department of Agriculture. To help combat those losses, MIT chemistry professor Timothy Swager and his students have built a new sensor that could help grocers and food distributors better monitor their produce. 

The new sensors can detect tiny amounts of ethylene, a gas that promotes ripening in plants. Swager envisions the inexpensive sensors attached to cardboard boxes of produce and scanned with a handheld device that would reveal the contents’ ripeness. That way, grocers would know when to put certain items on sale to move them before they get too ripe.

“If we can create equipment that will help grocery stores manage things more precisely, and maybe lower their losses by 30 percent, that would be huge,” says Swager, the John D. MacArthur Professor of Chemistry.

Detecting gases to monitor the food supply is a new area of interest for Swager, whose previous research has focused on sensors to detect explosives or chemical and biological warfare agents.

“Food is something that is really important to create sensors around, and we’re going after food in a broad sense,” Swager says. He is also pursuing monitors that could detect when food becomes moldy or develops bacterial growth, but as his first target, he chose ethylene, a plant hormone that controls ripening.

Plants secrete varying amounts of ethylene throughout their maturation process. For example, bananas will stay green until they release enough ethylene to start the ripening process. Once ripening begins, more ethylene is produced, and the ripening accelerates. If that perfect yellow banana is not eaten at peak ripeness, ethylene will turn it brown and mushy.

Fruit distributors try to slow this process by keeping ethylene levels very low in their warehouses. Such warehouses employ monitors that use gas chromatography or mass spectrometry, which can separate gases and analyze their composition. Those systems cost around $1,200 each.

“Right now, the only time people monitor ethylene is in these huge facilities, because the equipment’s very expensive,” Swager says.

Detecting ripeness

Funded by the U.S. Army Office of Research through MIT’s Institute for Soldier Nanotechnologies, the MIT team built a sensor consisting of an array of tens of thousands of carbon nanotubes: sheets of carbon atoms rolled into cylinders that act as “superhighways” for electron flow.

To modify the tubes to detect ethylene gas, the researchers added copper atoms, which serve as “speed bumps” to slow the flowing electrons. “Anytime you put something on these nanotubes, you’re making speed bumps, because you’re taking this perfect, pristine system and you’re putting something on it,” Swager says.

Copper atoms slow the electrons a little bit, but when ethylene is present, it binds to the copper atoms and slows the electrons even more. By measuring how much the electrons slow down — a property also known as resistance — the researchers can determine how much ethylene is present. 

To make the device even more sensitive, the researchers added tiny beads of polystyrene, which absorbs ethylene and concentrates it near the carbon nanotubes. With their latest version, the researchers can detect concentrations of ethylene as low as 0.5 parts per million. The concentration required for fruit ripening is usually between 0.1 and one part per million.  
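The measurement step is a simple inference from a change in resistance. The toy calibration below is purely hypothetical — the baseline resistance and sensitivity are invented for illustration, and only the 0.5 parts-per-million figure comes from the article:

```python
# Hypothetical linear calibration for a resistive ethylene sensor.
# Resistance rises as ethylene binds the copper "speed bumps"; the
# baseline and per-ppm values here are invented for illustration.
BASELINE_OHMS = 1000.0   # assumed resistance with no ethylene present
OHMS_PER_PPM = 40.0      # assumed rise in resistance per ppm of ethylene

def ethylene_ppm(measured_ohms: float) -> float:
    """Infer ethylene concentration from the rise over baseline resistance."""
    return (measured_ohms - BASELINE_OHMS) / OHMS_PER_PPM

print(ethylene_ppm(1020.0))  # 0.5 — the detection limit cited in the article
```

A real sensor's response would be calibrated empirically and is unlikely to be exactly linear; the sketch only shows the direction of the inference.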

The researchers tested their sensors on several types of fruit — banana, avocado, apple, pear and orange — and were able to accurately measure their ripeness by detecting how much ethylene the fruits secreted.

Birgit Esser, a postdoc in Swager’s lab, is lead author of the paper describing the sensors; grad student Jan Schnorr is also an author.

John Saffell, the technical director at Alphasense, a company that develops sensors, describes the MIT team’s approach as rigorous and focused. “This sensor, if designed and implemented correctly, could significantly reduce the level of fruit spoilage during shipping,” he says.

“At any given time, there are thousands of cargo containers on the seas, transporting fruit and hoping that they arrive at their destination with the correct degree of ripeness,” adds Saffell, who was not involved in this research. “Expensive analytical systems can monitor ethylene generation, but in the cost-sensitive shipping business, they are not economically viable for most of shipped fruit.”

Swager has filed for a patent on the technology and hopes to start a company to commercialize the sensors. In future work, he plans to add a radio-frequency identification (RFID) chip to the sensor so it can communicate wirelessly with a handheld device that would display ethylene levels. The system would be extremely cheap — about 25 cents for the carbon nanotube sensor plus another 75 cents for the RFID chip, Swager estimates.

Physicists Benchmark Quantum Simulator With Hundreds of Qubits

Many important problems in physics — especially low-temperature physics — remain poorly understood because the underlying quantum mechanics is vastly complex. Conventional computers — even supercomputers — are inadequate for simulating quantum systems with as few as 30 particles. Better computational tools are needed to understand and rationally design materials, such as high-temperature superconductors, whose properties are believed to depend on the collective quantum behavior of hundreds of particles.

The NIST simulator consists of a tiny, single-plane crystal of hundreds of beryllium ions, less than 1 millimeter in diameter, hovering inside a device called a Penning trap. The outermost electron of each ion acts as a tiny quantum magnet and is used as a qubit — the quantum equivalent of a “1” or a “0” in a conventional computer.

In the benchmarking experiment, physicists used laser beams to cool the ions to near absolute zero. Carefully timed microwave and laser pulses then caused the qubits to interact, mimicking the quantum behavior of materials otherwise very difficult to study in the laboratory. Although the two systems may outwardly appear dissimilar, their behavior is engineered to be mathematically identical. In this way, simulators allow researchers to vary parameters that couldn’t be changed in natural solids, such as atomic lattice spacing and geometry.

In the NIST benchmarking experiments, the strength of the interactions was intentionally weak so that the simulation remained simple enough to be confirmed by a classical computer. Ongoing research uses much stronger interactions.

Simulators exploit a property of quantum mechanics called superposition, wherein a quantum particle is made to be in two distinct states at the same time — for example, aligned and anti-aligned with an external magnetic field. The number of states simultaneously available to 3 qubits, for example, is 8, and this number grows exponentially with the number of qubits: 2^N states for N qubits.

Crucially, the NIST simulator also can engineer a second quantum property called entanglement between the qubits, so that even physically well separated particles may be made tightly interconnected.

Recent years have seen tremendous interest in quantum simulation; scientists worldwide are striving to build small-scale demonstrations. However, these experiments have yet to involve more than about 30 quantum particles, the threshold at which calculations become impossible on conventional computers. In contrast, the NIST simulator has extensive control over hundreds of qubits. This order-of-magnitude increase in qubit number increases the simulator’s quantum state space exponentially. Just writing down on paper a state of a 350-qubit quantum simulator is impossible — it would require more than a googol of digits (a googol is 10 to the power of 100).
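The exponential growth is easy to check numerically. A quick sketch confirming both figures in the text (8 states for 3 qubits, and a 350-qubit state space exceeding a googol):

```python
# State-space size for N qubits grows as 2**N.
def state_count(n_qubits: int) -> int:
    return 2 ** n_qubits

GOOGOL = 10 ** 100

print(state_count(3))             # 8
print(state_count(350) > GOOGOL)  # True: 2**350 is roughly 2.3 x 10**105
```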

Over the past decade, the same NIST research group has conducted record-setting experiments in quantum computing, atomic clocks and, now, quantum simulation. In contrast with quantum computers, which are universal devices that someday may solve a wide variety of computational problems, simulators are “special purpose” devices designed to provide insight about specific problems.

This work was supported in part by the Defense Advanced Research Projects Agency. Co-authors from Georgetown University, North Carolina State University and institutions in South Africa and Australia contributed to the research.

Researcher misinterprets Oracle advisory, discloses unpatched database vulnerability

Instructions on how to exploit an unpatched Oracle Database Server vulnerability in order to intercept the information exchanged between clients and databases were published by a security researcher who erroneously thought that the company had patched the flaw.

Oracle’s April 2012 Critical Patch Update (CPU) advisory, published on April 17, credited security researcher Joxean Koret for a vulnerability he reported through cyber intelligence firm iSight Partners.

In an email sent to the Full Disclosure mailing list on April 18, Koret revealed that the vulnerability is located in the Oracle TNS Listener, a component that routes connections from clients to Oracle database servers depending on which database they are trying to reach.

TNS Listener has a default feature, introduced in 1999, that allows clients to register a database service or database instance remotely without authentication, Koret said.

The client sends a remote registration request to the TNS Listener and defines a new service name, its IP address, the database instances under it, and other settings. The TNS Listener then starts routing all client requests that include that service name or database instance.

However, TNS Listener also allows the remote registration of a database instance or service name that is already registered, Koret said. “The TNS listener will consider this newer registered instance name a cluster instance (Oracle RAC, Real Application Clusters) or a fail over instance (Oracle Fail over),” he said.

In this case, the TNS Listener performs load balancing between the two instances by sending the first client to the most recently registered one and the second client to the original one. This allows an attacker to route between 50 and 75 percent of clients to a database server that he controls, Koret said.
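A toy round-robin model (hypothetical Python, not Oracle's actual scheduler) illustrates why at least half of incoming connections land on the attacker's instance once the rogue registration is accepted:

```python
from itertools import cycle

# Two instances registered under the same service name; the listener
# alternates between them, starting with the most recently registered.
instances = ["attacker", "legitimate"]  # the attacker registered last
router = cycle(instances)

routed = [next(router) for _ in range(1000)]
attacker_share = routed.count("attacker") / len(routed)
print(attacker_share)  # 0.5 — the lower bound of the 50-75 percent cited above
```

The higher end of the range would come from scheduling behavior more favorable to the newest registration than this strict alternation.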

The attacker can then use the TNS Listener on the server he controls to route the client requests back to the legitimate database instance, effectively establishing a TNS proxy that allows him to intercept all data exchanged between clients and the targeted database.

However, this is not the only attack scenario that this vulnerability allows. By being in a man-in-the-middle situation, the attacker can also inject rogue commands in the SQL queries sent by clients or completely hijack their sessions to execute arbitrary queries, Koret said.

The researcher said he hadn’t tested whether Oracle’s patch for the vulnerability, which he believed to be included in the April 2012 CPU, actually addressed all attack vectors.

However, after a few follow-up emails with Oracle, he realized that the company hadn’t actually patched the flaw in currently supported versions of the database server, but had instead addressed it in a yet-to-be-released version.

MIT to host 2013 American Nuclear Society Student Conference

The MIT American Nuclear Society Student Section has won the bid to host the 2013 ANS Student Conference in Spring 2013. A team of more than 30 undergraduate and graduate students from the Department of Nuclear Science and Engineering at MIT crafted the successful proposal to bring the conference back to MIT. The conference, which has grown significantly in attendance, visibility and stature over the last decade, has been hosted at MIT three times previously, but not since 1994.

The proposed conference theme, “Public Image of the Nuclear Engineer,” is aimed at developing awareness of political challenges and inspiring young nuclear engineers to engage with society in ways that reflect positively on nuclear technology. It is derived, in part, from the Department of Nuclear Science and Engineering’s educational hallmark: Science-Systems-Society.

MIT-ANS will film and launch a video outreach project at the 2013 conference titled “I’m a Nuke.” The goal is to break the old stereotype of nuclear engineers and introduce the public to today’s nuclear scientists and engineers — young, diverse, and as Steve Jobs would say, “insanely great.”

Algorithmic incentives

In their groundbreaking 1985 paper on the topic, Goldwasser, Micali and the University of Toronto’s Charles Rackoff ’72, SM ’72, PhD ’74 proposed a particular kind of interactive proof, called a zero-knowledge proof, in which a player can establish that he or she knows some secret information without actually revealing it. Today, zero-knowledge proofs are used to secure transactions between financial institutions, and several startups have been founded to commercialize them.

At the Association for Computing Machinery’s Symposium on Theory of Computing in May, Micali, the Ford Professor of Engineering at MIT, and graduate student Pablo Azar will present a new type of mathematical game that they’re calling a rational proof; it varies interactive proofs by giving them an economic component. Like interactive proofs, rational proofs may have implications for cryptography, but they could also suggest new ways to structure incentives in contracts.

“What this work is about is asymmetry of information,” Micali adds. “In computer science, we think that valuable information is the output of a long computation, a computation I cannot do myself.” But economists, Micali says, model knowledge as a probability distribution that accurately describes a state of nature. “It was very clear to me that both things had to converge,” he says.

A classical interactive proof involves two players, sometimes designated Arthur and Merlin. Arthur has a complex problem he needs to solve, but his computational resources are limited; Merlin, on the other hand, has unlimited computational resources but is not trustworthy. An interactive proof is a procedure whereby Arthur asks Merlin a series of questions. At the end, even though Arthur can’t solve his problem himself, he can tell whether the solution Merlin has given him is valid.

In a rational proof, Merlin is still untrustworthy, but he’s a rational actor in the economic sense: When faced with a decision, he will always choose the option that maximizes his economic reward. “In the classical interactive proof, if you cheat, you get caught,” Azar explains. “In this model, if you cheat, you get less money.”

Complexity connection

Research on both interactive proofs and rational proofs falls under the rubric of computational-complexity theory, which classifies computational problems according to how hard they are to solve. The two best-known complexity classes are P and NP. Roughly speaking, P is a set of relatively easy problems, while NP contains some problems that, as far as anyone can tell, are very, very hard.

Problems in NP include the factoring of large numbers, the selection of an optimal route for a traveling salesman, and so-called satisfiability problems, in which one must find conditions that satisfy sets of logical restrictions. For instance, is it possible to contrive an attendance list for a party that satisfies the logical expression (Alice OR Bob AND Carol) AND (David AND Ernie AND NOT Alice)? (Yes: Bob, Carol, David and Ernie go to the party, but Alice doesn’t.) In fact, the vast majority of the hard problems in NP can be recast as satisfiability problems.
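The party-list expression can be checked by brute force over all 2^5 = 32 attendance lists. The sketch below reads AND as binding tighter than OR, which matches the answer given in the text:

```python
from itertools import product

# (Alice OR (Bob AND Carol)) AND (David AND Ernie AND NOT Alice)
def satisfies(alice, bob, carol, david, ernie):
    return (alice or (bob and carol)) and (david and ernie and not alice)

# Enumerate every possible attendance list and keep the satisfying ones.
solutions = [combo for combo in product([False, True], repeat=5)
             if satisfies(*combo)]
print(solutions)  # [(False, True, True, True, True)] — everyone but Alice attends
```

Brute force is fine for 5 invitees; the point of the article is that it stops being fine very quickly as the guest list grows.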

To get a sense of how rational proofs work, consider the question of how many solutions a satisfiability problem has — an even harder problem than finding a single solution. Suppose that the satisfiability problem is a more complicated version of the party-list problem, one involving 20 invitees. With 20 invitees, there are 1,048,576 possibilities for the final composition of the party. How many of those satisfy the logical expression? Arthur doesn’t have nearly enough time to test them all.

But what if Arthur instead auctions off a ticket in a lottery? He’ll write down one perfectly random list of party attendees — Alice yes, Bob no, Carol yes and so on — and if it satisfies the expression, he’ll give the ticketholder $1,048,576. How much will Merlin bid for the ticket?

Suppose that Merlin knows that there are exactly 300 solutions to the satisfiability problem. The chances that Arthur’s party list is one of them are thus 300 in 1,048,576. By a standard expected-value calculation, a 300-in-1,048,576 shot at $1,048,576 is worth exactly $300. So if Merlin is a rational actor, he’ll bid $300 for the ticket. From that bid, Arthur can deduce the number of solutions.
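Arthur's deduction is a one-line expected-value calculation; a sketch with the numbers from the example:

```python
# Numbers from the lottery example in the text.
n_assignments = 2 ** 20      # 1,048,576 possible party lists for 20 invitees
n_solutions = 300            # known to Merlin, unknown to Arthur
prize = n_assignments        # Arthur pays $1,048,576 for a satisfying list

# A rational Merlin bids the ticket's expected value:
bid = (n_solutions / n_assignments) * prize  # 300.0 dollars

# From the bid alone, Arthur recovers the number of solutions:
deduced = round(bid * n_assignments / prize)
print(deduced)  # 300
```

Because the prize is chosen equal to the number of possible assignments, a rational bid equals the solution count directly, which is what lets a single round of bidding answer the counting question.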

First-round knockout

The details are more complicated than that, and of course, with very few exceptions, no one in the real world wants to be on the hook for a million dollars in order to learn the answer to a math problem. But the upshot of the researchers’ paper is that with rational proofs, they can establish in one round of questioning — “What do you bid?” — what might require millions of rounds using classical interactive proofs. “Interaction, in practice, is costly,” Azar says. “It’s costly to send messages over a network. Reducing the interaction from a million rounds to one provides a significant savings in time.”

“I think it’s yet another case where we think we understand what’s a proof, and there is a twist, and we get some unexpected results,” says Moni Naor, the Judith Kleeman Professorial Chair in the Department of Computer Science and Applied Mathematics at Israel’s Weizmann Institute of Science. “We’ve seen it in the past with interactive proofs, which turned out to be pretty powerful, much more powerful than you normally think of proofs that you write down and verify as being.” With rational proofs, Naor says, “we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn’t think of in the past.”

Naor cautions that the work is “just at the beginning,” and that it’s hard to say when it will yield practical results, and what they might be. But “clearly, it’s worth looking into,” he says. “In general, the merging of the research in complexity, cryptography and game theory is a promising one.”

Micali agrees. “I think of this as a good basis for further explorations,” he says. “Right now, we’ve developed it for problems that are very, very hard. But how about problems that are very, very simple?” Rational-proof systems that describe simple interactions could have an application in crowdsourcing, a technique whereby computational tasks that are easy for humans but hard for computers are farmed out over the Internet to armies of volunteers who receive small financial rewards for each task they complete. Micali imagines that they might even be used to characterize biological systems, in which individual organisms — or even cells — can be thought of as producers and consumers.

