Thursday, June 18, 2009

Mozilla Shows Microsoft Where $10,000 Is Buried

Yesterday, we poked fun at Microsoft’s tacky $10,000 online treasure hunt to get people to use IE8, at the domain TenGrandIsBuriedHere.com. We were hardly the only ones. Today, a developer at Mozilla, makers of IE rival Firefox, weighed in with his own way of mocking Microsoft: TenGrandIsBuriedThere.com.

The site is simply a Google Map zoomed out to a certain point. If you zoom in enough, you’ll find a surprise. The developer took exception to Microsoft calling Firefox “old” on its site, which is a bit odd, since IE is much older than Firefox.

by MG Siegler on June 18, 2009 From: techcrunch.com

Tuesday, June 16, 2009

Layar, world's first mobile Augmented Reality browser

The first mobile Augmented Reality browser premieres in the Netherlands
www.layar.eu

Five Dutch content providers to participate in the world's first AR browser

AMSTERDAM, Tuesday June 16th, 2009. Mobile innovation company SPRXmobile launches Layar, the world's first mobile Augmented Reality browser, which displays real-time digital information on top of reality in the camera screen of the mobile phone. While looking through the phone's camera lens, a user can see houses for sale, popular bars and shops, jobs, healthcare providers and ATMs. The first country to launch Layar is the Netherlands. Launch partners are local market leaders ING (bank), funda (realty website), Hyves (social network), Tempo-team (temp agency) and Zekur.nl (healthcare provider).

How it works
Layar is derived from location-based services and works on mobile phones that include a camera, GPS and a compass. Layar is first available for handsets with the Android operating system (the G1 and HTC Magic). It works as follows: starting the Layar application automatically activates the camera, the embedded GPS determines the location of the phone, and the compass determines the direction the phone is facing. Each partner provides a set of location coordinates with relevant information, which together form a digital layer. By tapping the side of the screen, the user easily switches between layers. This makes Layar a new type of browser, one that combines the digital and the real to offer an augmented view of the world.
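As a rough illustration of the geometry involved (hypothetical Python, not Layar's actual code): the GPS fix gives the phone's position, the compass gives its heading, and a POI is drawn where its bearing from the phone falls within the camera's field of view. The coordinates, field of view and screen width below are made-up example values.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the phone to the POI, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov=60.0, width=480):
    """Horizontal pixel position of the POI marker, or None if off-screen."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle from screen centre
    if abs(offset) > fov / 2:
        return None
    return int(width / 2 + offset / (fov / 2) * (width / 2))

# Phone in central Amsterdam facing due north; a POI a few streets north-east.
b = bearing_deg(52.3702, 4.8952, 52.3760, 4.8970)
print(screen_x(b, heading=0.0))  # ~325: the marker is drawn right of centre
```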

Dutch launch
The premiere launch is for the Dutch market. Launch content partners are ING (ATMs), funda (houses for sale), Hyves (social network hot spots), Tempo-team (jobs) and Zekur.nl (healthcare providers). Layar will be launched country by country with local content partners in order to guarantee relevant results for the end user. SPRXmobile is planning further roll-outs, together with local partners, in Germany, the UK and the United States this year. SPRXmobile will continue with regular releases of new layers after each local launch. The Layar application will be available via the Android Market. Versions for other handsets and operating systems are in development, with a prime focus on the iPhone 3G S.

SPRXmobile
Layar is developed by SPRXmobile, a mobile innovation company.

"Eventually, the physical and the virtual worlds will become one. Many visions of Augmented Reality have already been developed, but we are proud to be able to bring this one step closer to reality," says Raimo van der Klein, co-founder of SPRXmobile.

More information:
http://layar.eu, http://www.sprxmobile.com.


Kiwis give Microsoft the finger

One of the money milking machines in Microsoft's stable has just gone dry. It's a little teat but significant in that it shows the way for other, bigger teats to be pulled out of the Redmond suction pump.

New Zealand's government-wide deal to purchase Microsoft products has fallen apart. The fact that the country has won awards for its work towards adopting open standards and open source means that it has the resources to look at other options for its software.

Instead of the three-year deals that Microsoft had concluded in the past, New Zealand has now retained the option to obtain "recommended retail price certainty for agencies as a basis for their individual negotiations".

The New Zealand State Services Commission says it will support agencies "to explore how they can maximise their ICT investment and achieve greater value for money."

That phrase should be enough to bring Steve Ballmer to the boil. The kool-aid that Microsoft has been selling to governments worldwide has been that it provides the best value for money.

A small country of four million has now chosen to contradict that fallacy, and it is one with the expertise to prove its point.

In a media release, the SSC said: "It became apparent during discussions that a formal agreement with Microsoft is no longer appropriate.

"Microsoft have agreed to provide recommended retail price certainty for agencies as a basis for their individual negotiations, and the State Services Commission will be supporting agencies to explore how they can maximise their ICT investment and achieve greater value for money."

"Since 2000 the government has negotiated a series of three-year agreements with Microsoft, enabling public sector agencies to purchase Microsoft products on an opt-in basis."

"In late 2008 the State Services Commission commenced leading the re-negotiation of the G2006 Microsoft agreement on behalf of government agencies, and established an advisory steering committee comprised of senior executives from the largest IT purchasers in the public sector."

Reaction from the country's IT sector has been, predictably, upbeat.

Don Christie, president of the New Zealand Open Source Society, said in a radio interview that even though the idea of a whole-of-government deal was to obtain big discounts, the bulk discount for the 2006 deal had probably amounted to about $NZ3 million a year, small change in the context of the total amount, which probably ran to hundreds of millions.

He said the deal was commercial-in-confidence and that exact figures could not be known.

Christie said the government had had no option but to back out of a deal this time because it was offered the same recommended retail price as any other customer, despite the volume of licences it would buy.

He said replacements would now have to be sought and it would be a long, hard haul as the Government had got used to the Microsoft applications.

"Essentially, Microsoft software is like a virus; once it's in your system it's very difficult to get rid of it," Christie said.

David Lane, director of egressive, a company dealing in open source, said: "I'm excited about the possibility that free/open source solutions are no longer excluded from government procurement... That and the increasing grassroots understanding of FOSS within business and government is causing a subtle but profound shift in the mindset throughout NZ. Microsoft is now seen as the frivolously expensive 'closed' choice, which it is.

"Now it's just up to the FOSS vendors in NZ to seize the opportunity and rise to the challenge of filling the gaps as they form. Rest assured that we will!"

He said it would not be long before the NZ Ministry of Education tie-in with Microsoft received similar scrutiny. "The latest agreement is being negotiated now, with results announced in a month - that agreement reportedly covers twice as many licenses as the G2009 agreement would have had."

"When we demonstrate the ability for Kiwi FOSS vendors to make the grade, I don't think Microsoft will have much of a future in pan-industry/government agreements here in NZ (or anywhere else, given NZ's bellwether status for this sort of thing). The first domino has fallen."

Andrew McMillan, one of the founders of New Zealand's biggest open source IT firm, Catalyst, from which he retired last year, said: "I think that this result is really a reflection of the times, as the incoming government looks for ways to reduce bureaucracy and pare back the government spending which has grown somewhat during Labour's period in the hot seat... (and also due to) current economic conditions.

"My guess is that not signing the deal means that these software costs will now become more significant and more frequent operational decisions in the day to day of each individual government department.

"Obviously it's good news for the alternatives, and I expect that Microsoft will ramp up their sales and marketing in the government sector, but in reality the effects will be subtle in the short term.

"One likely effect is that this will increase Microsoft's profits for at least the coming year, while they reap benefits of fee increases, but in the longer term it certainly levels the playing field, giving greater attention to the alternatives."

McMillan, who is also a senior Debian developer, added: "I do think that the non-deal certainly does indicate a thorough awareness within the State Services Commission, and within the government sector in general, that there are now realistic alternatives available."

Zane Gilmore, development and web infrastructure team leader for the New Zealand Institute for Plant and Food Research, said his understanding of the affair was that the SSC (State Services Commission) tried to get its usual agreement and was knocked back by Microsoft.

"Microsoft claims that that was their plan all along so that they can form relationships with the bureaucrats. SSC claims that they were just trying to do more of the same," said the pony-tailed Gilmore.

"With some luck we can use this as leverage to get more open source software into government but I can't see Microsoft allowing that for long. The agreement allowed for cheap or free use of Microsoft software for government organisations. It made it very difficult for OSS vendors to have any look-in as they could use Microsoft Office etc free.

Gilmore said schools used MS Office for free and, as a result, school children gained familiarity only with Microsoft software. Teachers could just say, 'we get MS Office free, so why should we use Open Office?'

"I can't imagine that Microsoft will allow schools to start using anything else for long but we may get a temporary toe in the door," he added. "Hopefully it will make CIOs (or their equivalents) think before they just automatically use Microsoft software but I'm not holding my breath."

The New Zealand government move comes a few weeks after a leading British academic questioned the way in which the Gates Foundation allocates money to various causes.

Dr David McCoy, who is with the Centre for International Health and Development at University College London, raised several questions about the foundation's level of accountability and transparency.

by Sam Varghese Wednesday, 27 May 2009
From: itwire.com

The Living Robot

Researchers have developed a robot capable of learning and interacting with the world using a biological brain.

Kevin Warwick’s new robot behaves like a child. “Sometimes it does what you want it to, and sometimes it doesn’t,” he says. And while it may seem strange for a professor of cybernetics to be concerning himself with such an unreliable machine, Warwick’s creation has something that even today’s most sophisticated robots lack: a living brain.

Life for Warwick’s robot began when his team at the University of Reading spread rat neurons onto an array of electrodes. After about 20 minutes, the neurons began to form connections with one another. “It’s an innate response of the neurons,” says Warwick. “They try to link up and start communicating.”

For the next week the team fed the developing brain a liquid containing nutrients and minerals. And once the neurons established a network sufficiently capable of responding to electrical inputs from the electrode array, they connected the newly formed brain to a simple robot body consisting of two wheels and a sonar sensor. 


Credit: Kevin Warwick


A relay of signals between the sensor, motors, and brain dictate the robot’s behavior. When it approaches an object, the number of electrical pulses sent from the sonar device to the brain increases. This heightened electrical stimulation causes certain neurons in the robot’s brain to fire. When the electrodes on which the firing neurons rest detect this activity, they signal the robot’s wheels to change direction. The end result is a robot that can avoid obstacles in its path. 
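As a toy version of that relay (illustrative Python only, not the Reading team's control code), the loop below raises the sonar pulse rate as an obstacle nears, lets the simulated "neurons" fire once stimulation crosses a threshold, and turns the wheels when they do. The threshold and mappings are assumptions for the example.

```python
import random

def sonar_pulse_rate(distance_cm):
    """More pulses per second the closer the obstacle (assumed mapping)."""
    return 100.0 / max(distance_cm, 1.0)

def neurons_fire(pulse_rate, threshold=5.0):
    """Crude stand-in for the cultured brain: fire above a stimulation threshold."""
    return pulse_rate > threshold

distance = 60.0
for step in range(8):
    if neurons_fire(sonar_pulse_rate(distance)):
        print(f"step {step}: neurons fired -> turn wheels")
        distance = random.uniform(40.0, 80.0)  # turned away; obstacle now farther
    else:
        print(f"step {step}: drive straight")
        distance -= 10.0                       # closing in on the obstacle
```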

At first, the young robot spent a lot of time crashing into things. But after a few weeks of practice, its performance began to improve as the connections between the active neurons in its brain strengthened. “This is a specific type of learning, called Hebbian learning,” says Warwick, “where, by doing something habitually, you get better at doing it.”
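Hebb's rule itself fits in a few lines. In this sketch (hypothetical numbers, not measurements from the culture), the weight of a connection grows every time the neurons at both ends are active together, which is why the avoidance response strengthens with repetition:

```python
eta = 0.1   # learning rate
w = 0.2     # initial synaptic weight between two neurons

for trial in range(5):
    pre, post = 1.0, 1.0    # both neurons active during an avoidance manoeuvre
    w += eta * pre * post   # Hebb's rule: delta-w = eta * pre * post
    print(f"trial {trial}: w = {w:.2f}")  # 0.30, 0.40, ... the pathway strengthens
```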

The robot now gets around well enough. “But it has a biological brain, and not a computer,” says Warwick, and so it must navigate based solely on the very limited amount of information it receives from a single sensory device. If the number of sensory devices connected to its brain increases, it will gain a better understanding of its surroundings. “I have another student now who has started to work on an audio input, so in some way we can start communicating with it,” he says.

But it would be a bit shortsighted to say that adding sensory input devices to the robot would make it more human, as theoretically there is no limit to how many sensory devices a robot equipped with a biological brain could have. “We are looking to increase the range of sensory input potentially with infrared and other signals,” says Warwick.

A robot that experiences its environment through devices like sonar detectors and infrared sensors would perceive the world quite differently from a person. Imagine having a Geiger counter plugged into your brain — or perhaps better yet, an X-ray detector. For future generations of Warwick’s robot, this isn’t just a thought experiment. 

But Warwick isn’t interested only in building a robot with a wide range of sensory inputs. “It’s fun just looking at it as a robot life form, but I think it may also contribute to a better understanding of how our brain works,” he says. Studying the ways in which his robot learns and stores memories in its brain may provide new insights into neurological disorders like Alzheimer’s disease.

Warwick’s robot is dependent upon biological cells, so it won’t live forever. After a few months, the neurons in its brain will grow sluggish and less responsive, learning will become more difficult, and the robot will near the end of its mortal coil. A sad thought perhaps — but such is life.

From: seedmagazine.com  New Ideas / by Joe Kloc / March 26, 2009


Opera Unite: Reinventing the web, and liberating the tenants of the Internet

by Robin Wauters on June 16, 2009 From: techcrunch.com

We told you last week that browser maker Opera was generating quite a buzz by being secretive about its plans to ‘reinvent the web’. Well, this morning the company unveiled what it was referring to: technology that essentially turns every computer running the Opera browser into a full-fledged Web server. Behold Opera Unite.

You can use Opera Unite to share documents, music, photos and videos, or to run websites and even chat rooms, with no third party required. The company has extended the collaborative technology into a platform that comes with a set of APIs, encouraging developers to create their own applications (known as Opera Unite services) on top of it, directly linking people’s personal computers together, no matter which OS they run and without the need to download additional software. The company recognizes that the current services are fairly basic, but says this is just the tip of the iceberg.
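To get a feel for the underlying idea, your own machine acting as a first-class host, here it is reduced to a few lines of Python's standard library. This is an analogy only; real Opera Unite services are written against Opera's own JavaScript APIs, not this.

```python
import http.server
import socketserver

PORT = 8000  # any free port

# Serve the current directory: anyone who can reach this machine can now
# fetch the shared files directly, with no third-party host in between.
with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Sharing this folder at http://localhost:{PORT}/")
    httpd.serve_forever()
```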


We’ll take a deeper dive into Opera Unite soon, but I’m impressed with what it looks like on the surface. This is a really good idea at its core, and I encourage you to read Opera product analyst Lawrence Eng’s blog post on the subject for more background and an idea of where Opera is heading with the concept. A small excerpt:


“Currently, most of us contribute content to the Web (for example by putting our personal information on social networking sites, uploading photos to Flickr, or maybe publishing blog posts), but we don’t contribute to its fabric — the underlying infrastructure that defines the online landscape that we inhabit.

Our computers are only dumb terminals connected to other computers (meaning servers) owned by other people — such as large corporations — who we depend upon to host our words, thoughts, and images. We depend on them to do it well and with our best interests at heart. We place our trust in these third parties, and we hope for the best, but as long as our own computers are not first class citizens on the Web, we are merely tenants, and hosting companies are the landlords of the Internet.”



Monday, June 15, 2009

Immortal Information

A new nanoscale storage device could preserve all the digital information you want, for as long as you want—and longer.

New Technology / by Lee Billings / June 15, 2009
From seedmagazine.com

For centuries, archivists have noted a curious relationship between “quantity” and “quality” of items in their collections. That is, typically a storage medium’s durability is inversely proportional to the amount of information it can hold. For instance, Sumerian scribes could perhaps only fit a dozen lines of cuneiform onto a typical clay slab, but some of their inscriptions can still be read on surviving tablets six millennia later. Even something as fragile as printed words on paper can endure for hundreds, sometimes thousands of years if properly preserved.

Modern electronic storage media like CDs, DVDs, and computer hard drives can store vastly greater amounts of information, but typically don’t last more than decades at best. Environmental disturbances like fluctuating electromagnetic fields or changing temperature and humidity can corrupt and destroy digitally stored data very quickly. Furthermore, the fast pace of technological progress quickly renders electronic media formats obsolete, leaving users with few options to retrieve data stored on defunct media types.

Perversely, our culture’s explosive production of information may in time wipe out almost all records of our accumulated knowledge and achievements.

To solve that problem, a team led by Alex Zettl, a physics professor at the University of California, Berkeley, has devised a robust nanoscale system that could store massive amounts of digital information for very long periods of time. Any products that eventually emerge from this work could conceivably be the last archival storage devices we would ever need.

The system consists of a minuscule particle of iron encased in a carbon nanotube and represents information in binary notation—the zeroes and ones of “bits.” Using an electric current, information can be written into the system by shuttling the iron particle back and forth inside the nanotube like a bead on an abacus—the left half of the nanotube corresponds to zero, the right half corresponds to one. The encoded information can then be read by measuring the nanotube’s electrical resistance, which changes according to the iron particle’s position.
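The mechanism amounts to a very small state machine. Here it is as toy Python (the resistance numbers are invented for illustration; the real read-out is an analogue measurement):

```python
class NanotubeCell:
    """One bit: an iron particle shuttled along a carbon nanotube."""

    def __init__(self):
        self.position = 0.25              # particle in the left half => bit 0

    def write(self, bit):
        """A current shuttles the particle left (0) or right (1)."""
        self.position = 0.75 if bit else 0.25

    def read(self):
        """Infer the bit from resistance, which varies with particle position."""
        resistance = 10.0 + 5.0 * self.position   # arbitrary toy mapping
        return 1 if resistance > 12.5 else 0

cell = NanotubeCell()
cell.write(1)
print(cell.read())  # -> 1
```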

Because of their very small size, a square-inch array of these nanotube memory systems could store at least one terabit—a trillion bits—of information, approximately five times more than can be packed into a square inch of a state-of-the-art magnetic hard drive. But Zettl believes the technology could be pushed to much higher information densities.

“We can manipulate this particle and read out its position so accurately, we could divide the nanotube’s length into 10 or even 100 units instead of just two,” Zettl says. “Whether this is worthwhile to implement right away, I’m not sure, because it adds complexity, but it could immediately give us 10 or 100 times the information density with the same device.”

As promising as the technology is, much work remains to be done before it could result in a product competitive with other established storage options.

“What you’d want to do is make arrays of these things to get very high capacity, high density storage. Right now we only have high density since we’ve only made a few of these systems,” Zettl says. “You need something that can be scaled up, that’s easy to manufacture, with a low price and high reliability.”

The system certainly seems reliable—Zettl and his team have estimated that information stored within it would be essentially impervious to degradation.

“The key is to make sure that the particle doesn’t slide too easily by itself at room temperature, because if it did, you’d eventually lose the memory,” Zettl says. Other memory-degrading processes include the random kinetic jiggling of atoms and the rusting, or oxidization, of a device’s components. But since the chemical bonds between a nanotube’s carbon atoms are so strong, Zettl says, it forms a hermetically sealed system that protects the iron particle from a wide range of environmental contamination.

The team determined the lifetime of a bit stored in their system by finding the threshold energy required to jostle the iron particle so that its information was lost, then modeling the particle’s motion and stability at room temperature. Their result showed that the iron particle—and thus the bit—should be stable within the nanotube for more than a billion years.
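The flavour of that estimate can be reproduced with the standard attempt-frequency (Arrhenius) model, t = (1/f0)·exp(E_b/kT). The barrier energy and attempt frequency below are assumed round numbers for illustration; the team's published model may differ.

```python
import math

k_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # room temperature, K
f0 = 1e12        # attempt frequency, Hz (typical assumed value)
E_b = 1.7        # energy barrier holding the particle in place, eV (assumed)

t_seconds = (1.0 / f0) * math.exp(E_b / (k_B * T))
t_years = t_seconds / (3600 * 24 * 365.25)
print(f"estimated retention: {t_years:.1e} years")  # ~1e9 years for E_b near 1.7 eV
```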

Zettl hastens to point out that this system is only the core element of a potential commercial storage device, and that additional necessary components could have shorter lifetimes, lowering the total longevity. But, he says, “Whether it’s stable for 1 million years or 1 billion years, this very small, very high-density component still has an excellent archival timescale associated with it.” Indeed, if the system does store information for a billion years, it seems unlikely we will be present to confirm it: Scientists estimate complex plant and animal life on Earth may only persist for another 500 million years or so. After that, an aging Sun and plummeting levels of atmospheric CO2 should transform our blue-green planet into a dismal brown orb—though that’s probably cold comfort for anyone whose every Facebook and Flickr foible could be immortalized within some descendant of Zettl’s system.

Sunday, June 14, 2009

Why Windows is not yet ready for the Desktop

by R. McDougall

I don't spend my time telling other people which OS should or shouldn't suit their way of working. But it seems there are people who do, and like to get blog hits for it.

The problem with these "critiques" is always that the author carries around the self-serving assumption that their preferred OS embodies the only real way to organize a software ecosystem, and that all others have inferior value. Moreover, since they are only looking for a way to justify a foregone conclusion, they are often sadly misinformed about most of their "complaints", half of which are either entirely subjective or just flat-out wrong.

And it is thus that I find myself moved enough to mock their contribution to the state of public discourse as follows (public service announcement: this is tongue-in-cheek parody):

Preface:

In this document we discuss only Windows deficiencies, while everyone should keep in mind that there are areas where Windows excels over other OSes.

A primary target of this comparison is Linux OS.

Windows' major shortcomings and problems:



0. Premise: free and open software is here to stay. Full stop. You may argue eternally, but free software is the ultimate disruptive technology, moving up from the low ground and replacing complicated, ill-fitting proprietary alternatives at every turn: web browsers, e-mail clients, video players, office software and so on. These all cost money at one point, but now most people find they can no longer justify paying for an upgrade to get more "Clippy the Happy Assistant". Proprietary software will only stay relevant by seeking out ever more niche applications, or by massive research expenditure on high-end applications, where a brief window of profitability will remain until the ideas and algorithms filter down to the greater community. Software patents are nothing but a destructive force that retards innovation, and with more and more of the technology and legal communities realizing this basic fact, software patents are about to go away forever.

1. Security

1.1 History's greatest playground for malicious software. With unpatched machines on the internet taking only minutes to become infested with viruses or turned into slave bots for massive illegal spamming operations, Windows is a blight on the Internet's infrastructure.

1.2 Countless applications are released every year with obvious security holes. The programmers that make Windows applications are clearly some of the worst.

1.3 Microsoft has countless times avoided appropriate steps to secure the OS and limit the potential damage a compromised binary could cause. It has consistently either added half-measures or outright refused to take the steps necessary to ensure a safer computing environment for all users, for fear of making "Auntie Jo" 10% more confused about the "1.3GHz hard drive" on her desk.

1.4 Every Windows application I've ever installed messes with the Registry, scatters files across my hard drive that it never cleans up, installs icons, or, worse, surreptitiously installs spy- or ad-ware.

1.5 Any OS that regularly requires a wipe and reinstall to fix is beyond the tolerance of any sane person.

1.6 Bugs galore across all applications. Just look at Vista, or call Microsoft tech support, pay exorbitant support fees, and then wonder why some bugs are now ten years old, with several dozen duplicates, and no one working on them.

2. User Interface

2.1 No consistent API. Win32? MFC? WinForms? WPF?

2.2 No scripting bindings for UI programming. No Python, Perl, Ruby, Java, etc.

2.3 Theming and skinning support is laughable. The widget toolkit, display, rendering, input and window managers are all joined in a rigid, monolithic blob, opaque to outside developers. Non-trivial changes to the look and behaviour of the UI require either proprietary add-ons or third-party hacks, and even then most of your choices are hard-coded by Microsoft designers.

2.4 Lack of CLI (command-line interface) error output for user applications (see clause 4). All GUI applications should be able to present their errors on the command line. Why on earth would you flash some crazy warning message at the poor unsuspecting end-user when you could be logging it to a file for a skilled technician to view instead?
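(For the record, what that complaint asks for takes only a few lines; a sketch in Python, purely for illustration: errors go to a log file a technician can read later instead of a dialog flashed at the user.)

```python
import logging

logging.basicConfig(
    filename="app-errors.log",      # technician-readable log, not a pop-up
    level=logging.ERROR,
    format="%(asctime)s %(levelname)s %(message)s",
)

try:
    result = 1 / 0                  # stand-in for any failing operation
except ZeroDivisionError:
    logging.exception("operation failed; full traceback recorded")
    # the GUI can carry on, or show a calm one-line notice to the user
```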

3. Interoperability

3.1 Windows has NO interoperability with non-Windows OSes. Installing Windows arrogantly destroys any previous OS boot-loader you may have had. It is totally unable to read anything other than FAT or NTFS partitions.

3.2 Windows ships with no runtime environments other than .NET, and Microsoft has actively tried to disable or cripple competing platforms such as Netscape and Java.

3.3 Microsoft is in regular legal trouble for monopolistic and anti-competitive practices, which tells me that, as a consumer of non-Microsoft products, Microsoft considers me an enemy. Why own an OS that is constantly out to defeat you, from a vendor that requires massive anti-trust lawsuits to force it to simply not behave in an underhanded manner?

3.4 It should be possible to configure everything from the command line. Why should I give myself a workplace injury, clicking everywhere with the mouse like a tweaking junkie, in order to make a change that could be described succinctly in a line or two of text?

4. Drivers

4.1 Windows driver support is so abysmal that each individual device manufacturer must ship drivers with the device itself. If you have to reinstall Windows, none of your devices will work until you individually download and install the latest versions from each vendor's website, potentially consuming many long, frustrating hours.

4.2 Drivers often need to be installed, tweaked, or configured before they can even be used as intended. They often don't work "out of the box". Moreover, they never seem to be *just* drivers; there is always some application installed without your consent that provides questionable value yet consumes resources and slows your computer down.

4.3 Drivers are one of the main sources of system instability (likely just behind viruses/malware). Poor-quality drivers make the Windows experience painful.

4.4 Windows has no means to reliably update drivers when critical updates have been made available for them.

4.5 A lot of Linux-specific embedded devices have no Windows support at all. The argument that embedded device developers should make their devices Windows-compatible is silly, since that way Windows won't ever gain any traction among people who need source-level access to the OS. Why should I install an OS on which my own hardware doesn't work?

5. Installing Applications

5.1 Very few Windows applications, by volume, are free or open source, which means you are totally beholden to the application developer in ways that would never be allowed by law for makers of physical products. Happen to have your business-critical data in a proprietary format when your license runs out? Lost your dongle just before the big presentation? Had to transfer your application to another computer because your laptop was stolen? Sorry to hear you just went out of business.

5.2 Windows has no regular time-based release cycle. You paid good money for a few features and a lot of bugs. It may be a few years before you can expect those bugs truly fixed, but you can't count on it. And you'll have to pay again.

5.3 Windows has no central means of downloading new software, its dependencies, or upgrades. Each new application must be purchased from a physical store or from each individual vendor's website. There is no dependency tracking (or, worse, no library sharing!), and updating for security, bug fixes, or features is ad hoc and entirely dependent on the whim of the vendor. Likely the vendor will use remote-updating features to unethically sneak updates onto your computer without your knowledge.
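(The dependency tracking being contrasted here reduces to resolving a graph before installing anything; a toy sketch in Python, with made-up package names:)

```python
DEPS = {
    "photo-editor": ["image-lib", "ui-toolkit"],
    "image-lib": ["compression-lib"],
    "ui-toolkit": [],
    "compression-lib": [],
}

def install_order(package, seen=None):
    """Depth-first resolve: every dependency lands before whatever needs it."""
    if seen is None:
        seen = []
    for dep in DEPS.get(package, []):
        install_order(dep, seen)
    if package not in seen:
        seen.append(package)
    return seen

print(install_order("photo-editor"))
# ['compression-lib', 'image-lib', 'ui-toolkit', 'photo-editor']
```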

5.4 Windows comes almost barren on a fresh install. To get your machine back to a usable state, you must spend hours remembering what applications you had installed, and manually downloading and installing each one individually. With a reboot in between each install.

5.5 Windows needs a reboot almost any time a new application or library is installed. 1991 called. They want their loading technology back. I hear DLL hell isn't a problem any more, though.

5.6 Microsoft enforces a great many intra-Windows compatibility constraints to minimize the ever-present costs of portability, but this comes at the price of inconsistent behaviour, buggy programs, and internal complexity that is slowly rotting Windows itself from the inside out.

5.7 Lack of hard-core Linux programs like grep/awk/GDB/valgrind/SystemTap/SELinux. Programmers just won't bother installing Windows until they can work for real.

6. Problems stemming from the fact that Windows isn't Linux

6.1 Ok I am officially tired of this game.

To be clear, I don't necessarily believe all of the above; unlike most people, I realize the world is full of complications and subtlety. I'm just tired of hearing the same arguments coming in the opposite direction, and I had to vent lest my head explode from idiocy-overload.


Original Article