Thursday, November 12, 2009

Windows 7 Gets a Bye on Latest Patch Tuesday

As usual, Microsoft pushed out a set of bug fixes for Windows on the second Tuesday of the month, but none of the security fixes were aimed at Windows 7, its newest operating system. That may not last for long -- "Attackers will take more time to figure out ways of breaking into Windows 7," according to Symantec's Ben Greenbaum.

Microsoft's newest computer operating system has survived its first few weeks on the market without needing any security fixes.

Microsoft plugged several security holes Tuesday, but none are aimed at Windows 7, which was released Oct. 22.

Give It Time

That's to be expected, said Ben Greenbaum, a researcher at the antivirus software company Symantec (Nasdaq: SYMC). "Attackers will take more time to figure out ways of breaking into Windows 7," he said.
Computer users can get the patches through Microsoft's automatic-update service, or by visiting microsoft.com/security.

One of the fixes Microsoft marked "critical," its highest severity rating, stops an attacker from infecting all the PCs on a local network after gaining access to just one.

In other words, even if most people in the office are good at avoiding clicking on unknown links or opening mysterious documents, if one person's computer is compromised the attacker could take over the rest.

Locking Down Attackers

The software maker also fixed flaws in its Excel and Word software that would give an attacker control of a PC if its owner opened a tainted spreadsheet or document.

It also patched problems in several older versions of Windows, including XP and Vista, that would give an attacker who already has control of a computer access to more of its functions.

Thursday, September 24, 2009

Tips for Buying Digital Cameras

Choices
Digital cameras come in many sizes, shapes, and price ranges. Since you will be living with your decision to purchase a particular make and model for many years, it is a good idea to carefully weigh the various options available before buying a camera. Break the decision down into a checklist of factors to help determine which camera is best for you. Consider image quality, performance, ergonomics, features, and price. Also consider whether you will just be taking family snapshots or something more elaborate.

First decide how much you are willing to pay for a digital camera. You may need to adjust this figure up if you want and need a lot of manual control and features. Decide on camera body size, and features that you may actually use. Then look for a camera in your price range.

Digital cameras can be grouped into five types:
Compact, Ultra compact, Super zoom, Enthusiast, and digital SLR, or D-SLR.

Compact
This is by far the most popular camera category. It represents the best value for the average user. Cameras in this group take reasonable-quality pictures and have a good set of features. However, these cameras don't offer as many advanced features or perform as well as more expensive cameras. Unless you need a higher-end or smaller camera, you should consider this type of camera first.

Ultra compacts
These cameras are small enough to fit into tiny bags or pockets, but performance is usually sacrificed in the interest of style and size. Not a good choice if you want decent pictures.

Enthusiast
These full-sized models offer more precise controls, better lenses, and more features. They produce better images, suitable for larger prints. They often include zoom lenses, faster performance, exposure bracketing, high resolution, and manual controls for shutter speed, f-stop, and white balance. This category is a good choice if you don't want to spend a lot of money on a high-end camera but still want a fair number of advanced features.

Super zooms
These cameras are the same as enthusiast models except that they include at least a 10X optical zoom lens. Some can correct for camera shake using image stabilization, a great feature for long zooms.

D-SLR
D-SLRs are the high end of digital cameras, with true reflex through-the-lens viewfinders, interchangeable lenses, good control over exposure and color, and lots of accessories.

The many professional features and functions these cameras possess almost match those of conventional 35-mm film cameras. They also produce the best images of any digital camera type.

Avid amateurs and pros are the typical users for this class of camera. However, unless you plan to use the many manual features, you will not get your money's worth. Choose a camera in this class if you want the best print quality at 8x10 or larger.

Features

Megapixels
You would think that a 6MP camera would produce better images than a 5MP one, but that is not always the case.

Megapixels are a measure of quantity (the amount of data captured), not quality. A digital camera's image quality is not based solely on megapixels, but on an entire system. More important than the number of pixels is the actual pixel size: the bigger the pixels, the better they can record detail in the shadows and highlights. Larger sensors generally produce greater dynamic range, higher sensitivity, and a better signal-to-noise ratio, mostly because they have room for bigger, more light-sensitive pixels.

So instead of just going for the highest megapixel count when comparing cameras, ask about other important factors such as image quality.

Image quality
Image quality is a more useful measure than the number of megapixels. Most digital cameras will produce good images, with color, sharpness, and dynamic range that will satisfy most people.

If all you want to do is e-mail your photos or make small or low-quality prints at home, then any digital camera will do. However, if you want good-quality, professional-looking 8x10 prints or larger, then the camera's image quality is very important. Make sure you tell the salesperson what you intend to do with the images so that you will be shown the most appropriate camera for your needs. A good guide is to go for at least 3MP and ask about image quality.

Digital cameras are slow
Most digital cameras take time to start up and be ready to shoot. They also have a recycle time between each shot. This can be annoying if you need to react fast and shoot a number of frames quickly. Make sure you can live with a camera's picture-taking speed by testing it out in the store before buying.

Is the feature set right for your needs?
Taking digital pictures can be as simple as pointing the camera and pressing the shutter button. But digital camera models are available that provide as much control over exposure, color, dynamic range, and so on as you choose to use. It is recommended that you choose a camera that has the key features you might actually use and takes better images, rather than one that has a ton of features but takes poorer-quality images.

Ergonomics and style
Take time to practice holding and using a camera in the store to get a feel for how easy and comfortable it is to use. How does it feel to hold? How about the size and weight? Does it feel sturdy or flimsy? Are the controls easy to reach and understand? Are menus easy to understand and navigate? And finally, how stylish does the camera look?

Top 10 tips for buying a digital camera

Deciding which digital camera to buy can be difficult because of the vast array of features available. Here are some tips that should help you find a camera that meets your needs, budget, and level of photo-taking experience.


  1. Select a digital camera recommended for the largest print size you're likely to make. If you want 8x10-inch prints, choose a 4-megapixel model, though a 3MP camera will do a fair job. If you need up to 16x20-inch prints, you will need an 8MP camera. If all you want is to send images by e-mail or post them on the Web, even a 2MP camera will do. Remember, megapixels correspond only to image size, not quality.
  2. Make sure the camera has the right features for your needs, such as an optical zoom lens and a certain amount of useful manual control. If you wear glasses but prefer to take pictures without them, make sure your camera has an adjustable dioptre. This will allow you to adjust the focus of the viewfinder so that you can see your subject clearly.
  3. Choose a camera with a bright LCD. This will allow you to better see the LCD image in bright sunlight. Having a large LCD screen will help you compose and review your images on the camera.
  4. When comparing costs, be sure to calculate extras that may or may not be included, such as rechargeable batteries and charger, and a large enough memory card that can hold all your pictures until you can download them to a computer.
  5. Most digital cameras come with a USB interface to transfer digital photos from camera to computer. If you will be transferring large high quality photo files, try to get USB 2.0 to speed things up.
  6. When considering digital cameras with a zoom lens, what’s important is the optical zoom distance and not the digital zoom distance. Digital zoom uses software to crop and magnify an image, resulting in a loss of image quality.
  7. If you don't know a lot about cameras, a digital camera with lots of modes and manual settings will be overkill. Don’t buy a camera that is higher in price and more difficult to use if all you really want to do is point-and-shoot.
  8. A good option, if available, is a pocket-sized instruction manual instead of one on CD. You can take it with you when you're out shooting.
  9. If you have difficulty using your hands, look for a camera with a limited number of large buttons that are easy to reach and press.
  10. Test how fast the camera performs. Look for a camera that takes 4 seconds or less to get ready to shoot and 6 seconds or less between shots.
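
The print-size guidance in tip 1 can be sanity-checked with simple arithmetic: a sharp photo print needs on the order of 200-300 pixels per inch, so the pixels required are just (width x dpi) times (height x dpi). Here is a minimal sketch; the 200 dpi figure is an assumed, low-but-acceptable print density, not a number from this article:

```python
def required_megapixels(width_in, height_in, dpi=200):
    """Pixels needed to print at the given size and density,
    expressed in megapixels."""
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels / 1_000_000

# 8x10-inch print at a modest 200 dpi:
print(round(required_megapixels(8, 10, 200), 1))   # 3.2
# 16x20-inch print at the same density:
print(round(required_megapixels(16, 20, 200), 1))  # 12.8
```

At 200 dpi a 16x20 print wants about 12.8MP; the 8MP figure in tip 1 works out to roughly 160 dpi, which is still acceptable at the longer viewing distances typical for a print that large.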

Glossary of terms
Aperture
An adjustable opening through which light enters the camera's lens. The larger the aperture, the more light reaches the image sensor. A smaller aperture, however, gives greater depth of field to a picture. The aperture setting is called the f-stop. A small aperture has a relatively high f-number, such as f8 or f11, and a larger aperture has a smaller f-number, such as f2.8. The aperture setting must be balanced against the shutter speed: the faster the shutter speed, the larger the aperture must be, and vice versa, to admit the right amount of light to the image sensor for proper exposure. These adjustments are made automatically by the camera or manually by the photographer.
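
The aperture/shutter trade-off described above can be made concrete: the light admitted is proportional to the aperture's area, which varies as 1 over the f-number squared. A quick illustrative sketch (the specific numbers are examples, not from this glossary):

```python
def relative_light(f_number):
    """Light admitted through the lens is proportional to the
    aperture area, i.e. 1 / f_number**2 (up to a constant)."""
    return 1.0 / f_number ** 2

# Stepping from f2.8 to f4 is one full stop -- about half the light:
print(relative_light(2.8) / relative_light(4.0))  # ~2.04

def equivalent_shutter(base_shutter, base_f, new_f):
    """Shutter time (seconds) giving the same exposure at a new f-stop."""
    return base_shutter * (new_f / base_f) ** 2

# 1/250 s at f2.8 needs a shutter four times longer at f5.6:
print(equivalent_shutter(1 / 250, 2.8, 5.6))  # 0.016 (i.e. ~1/60 s)
```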

Compression
A process that reduces the amount of data representing an image so that the file takes up less space in your camera, memory card, or computer. Smaller files are quicker to use for e-mail and on the Web. When a file is too compressed, however, image quality can seriously suffer.

Depth of field
Indicates how much of a scene will be sharp and in focus. A greater depth of field means a larger range of distances, from foreground to background, is in focus at once. A narrow depth of field concentrates the area of sharp focus within a small range, based on the main subject's distance from the camera.

For instance, if your subject is standing in an open field, using a narrow depth of field will make most of the scene in front of and behind the subject look blurry; only the main subject will be in focus. This effect is achieved with a long zoom lens. Using a wide-angle lens will produce a greater depth of field, keeping the whole scene in focus.
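
The lens-length effect above can be approximated with the standard hyperfocal-distance formula, H ≈ f²/(N·c) + f, where f is focal length, N the f-number, and c the circle of confusion. The sketch below is an illustration of the principle rather than anything stated in this glossary; the 0.03 mm circle of confusion is the conventional 35-mm-format assumption:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Approximate hyperfocal distance in mm. Focusing at this
    distance keeps everything from half of it to infinity
    acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A 28 mm wide-angle at f8 vs. a 200 mm telephoto at f8 (in metres):
print(round(hyperfocal_mm(28, 8) / 1000, 1))   # 3.3  -- nearly everything sharp
print(round(hyperfocal_mm(200, 8) / 1000, 1))  # 166.9 -- isolates the subject
```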

Image sensor
The semiconductor chip, or image sensor, is what captures the photographic image. It collects the light of a scene or subject and turns it into the digital data that we see as a photo in the camera or on the computer. There are two main types of image sensors: CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor). The CCD is the most popular; CMOS is used in very low-end and high-end cameras.

Interpolation
A process that increases image file size and can take place either in the camera or by computer software. Interpolation is used to magnify a picture but does not improve image quality and in fact it can decrease sharpness.

LCD viewfinder
A small screen on the back of a camera that displays what the lens sees. It is used to compose the picture, choose settings, focus, and frame an image in macro mode. It is also used to view photos stored on the memory card.

Megapixel
A measure of a digital camera's resolution. A three-megapixel rating means that the camera can capture up to 3 million pixels, or points of data.
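
The definition is just a pixel count: multiply the image width by its height. A quick check, assuming the typical 2048x1536 resolution of a 3MP camera (the resolution figure is an assumption, not from this glossary):

```python
def megapixels(width_px, height_px):
    """Total sensor pixels, in millions."""
    return width_px * height_px / 1_000_000

# A typical "3-megapixel" camera records 2048x1536 images:
print(round(megapixels(2048, 1536), 1))  # 3.1
```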

Memory card
A removable storage device that holds the images a digital camera captures. It is a good idea to have an extra one on hand so that when one card is full it can be swapped for another allowing you to continue shooting.

Pixel
A point of data in a digital image; the word is short for picture element. A digital camera's resolution is a measure of the number of pixels it can capture on its image sensor.

Shutter speed
A measure of how long a camera allows light to fall on the image sensor (expressed as a fraction of a second). Though some digital cameras have both electronic and mechanical shutters, inexpensive models use an electronic shutter that simply turns off the photosensitivity of the image sensor.

Tuesday, September 22, 2009

Microsoft's Latest Threat: VMware

VMware will make a big push into the desktop and notebook market, with technology to allow users to do work even when not connected to a network. In data centers, VMware wants to demonstrate that the next frontier beyond hardware savings is the reduced operating costs that result from increasing the number of servers that are "virtualized."

Microsoft's No. 1 rival is a household name, Google. But a strong candidate for No. 2 is a company scarcely known outside the technology industry: VMware.
"VMware is definitely a threat," said Gary Chen, an analyst at IDC, a research firm. "After Google, it is the company Microsoft fears most."

Google and VMware, which is based in Palo Alto, California, pose a broadly similar challenge to Microsoft, by potentially undermining the dominance of its most lucrative desktop software and operating systems. Google represents the attack from above, while VMware is the assault from beneath.

Google, the search giant, is offering free and advertising-supported software for e-mail, word processing, calendars and spreadsheets online as alternatives to Microsoft's popular Office products. For Web-based programs like these, it is the browser -- not an operating system like Windows -- that is the vital layer of software on the computer.

VMware is the leader in so-called virtual machine software, which allows a computer to run two or more operating systems at once. Its software resides on top of the hardware and beneath the operating system.

But as VMware's technology becomes more powerful and adds more features to its products, it is starting to supplant the operating system from below -- just as the browser can from above.

VMware's leadership adds an edge to its challenge. A year ago, Paul Maritz, a former senior executive at Microsoft, took over as chief executive. In the late 1990s, he was regarded as Microsoft's third-ranked executive, the person with the most responsibility and authority after Bill Gates and Steven A. Ballmer.

Mr. Maritz walked away from Microsoft in 2000 a very wealthy man, and he focused mainly on philanthropic work like microfinance, conservation and rural development, especially in Africa (he was born and raised in Zimbabwe). In 2003, he founded a small Web start-up company, but his business interests were a far cry from the mainstream of corporate combat.

The lure at VMware, Mr. Maritz explained, was the chance to lead a company riding a wave of disruptive, game-changing technology. "It's a rare opportunity to be part of a paradigm shift," he said. "That's what attracted me."

In January, Mr. Maritz was joined by Tod Nielsen, another former Microsoft executive, who became VMware's chief operating officer.

As 11,000 business partners, developers and customers gather in San Francisco for the start of the company's VMworld conference on Monday, the strategy under Mr. Maritz is clearly taking shape. This month, the company said it planned to pay $420 million to acquire SpringSource, a maker of open-source software development tools, some of which analyze and tweak the performance of software applications. Adding such features could allow VMware's technology to essentially sidestep an operating system like Windows.

So far, virtualization technology has been used mainly to achieve cost savings in data centers, where it lets companies handle computing chores with fewer machines, using less energy and floor space. Now, companies are increasingly using virtualization software to manage desktop software delivered to their workers on PCs across the corporate network.

VMware plans to make a big push into the desktop and notebook market, introducing technology next year to better handle high-end graphics and allow users to do work even when not connected to a network.

In data centers, VMware wants to demonstrate that the next frontier beyond hardware savings is the reduced operating costs that result from increasing the number of servers that are "virtualized."

Today, VMware says companies typically have one human administrator for every 50 server computers, while data centers with more than half of their machines virtualized can fairly quickly increase that to one for every 200.

"We have to go beyond capital costs to speak to doing more for our customers by using virtualization to reduce operating costs and operational complexity," Mr. Maritz said. "We are entering a significant turn in this market."

And, he observed, "We do have the footsteps of Microsoft behind us."

Indeed, Microsoft is coming. And its game plan is a rerun of the strategy it used in the Web browser market -- bundle its free virtual machine software into its operating system. Last July, Microsoft introduced its Hyper-V virtual machine software in Windows Server 2008. New features that help it catch up to VMware will be introduced in October.

"Our strategy is to integrate virtualization into our product line in Windows, with our management software and the familiar Microsoft developer tools," said Mike Neil, a general manager in the Windows server division.

Microsoft has a long way to go. At the end of last year, more than 80 percent of virtualized computing workloads ran on VMware, analysts estimate, with the remainder shared by Microsoft, Citrix Systems' Xen, Virtual Iron and others. But only 15 percent of servers have been virtualized, and with that percentage likely to at least double over the next five years, there is still plenty of opportunity in the market.

There is considerable interest in Microsoft's bundled offering, analysts say. A recent report by Gartner projected that Microsoft's share of installed virtual machine software would increase to 29 percent by the end of 2012 from 8 percent at the end of last year.

"Microsoft is going to be very formidable in this space," said Stephen F. Shuckenbrock, president of Dell's large enterprise division, which is a partner of both VMware and Microsoft. "Many customers, at the very least, are intrigued by the free virtualization software bundled by Microsoft."

Adobe Hosts Platform Services To Distribute Flash Apps

Adobe Systems is hosting Flash Platform Services to distribute Flash-based applications to social networks, desktops and mobile devices. Adobe announced the services at an Interactive Advertising Bureau conference. Adobe is partnering with Gigya for social distribution and will likely use analytics from its recent acquisition of Omniture.

Adobe Systems announced Monday new services that will allow advertisers and content publishers to "promote, measure and monetize" Flash-based applications over "social networks, desktops and mobile devices."
Called Flash Platform Services, the hosted set of offerings is intended to provide advertisers, game makers, publishers and others with a distribution solution and a management tool for measuring, distributing and creating revenue streams from Flash applications and games. In particular, the services will make it easier to share, track, and monetize the Flash-created content through social media, including Facebook, MySpace, Twitter and others.

Partnering with Gigya

The new service was announced at the Interactive Advertising Bureau MiXX Conference and Expo, currently taking place in New York. Adobe is partnering with Gigya, a social media-distribution platform, to provide the services.

The growth of application use and distribution through social networks is a key driver. Adobe said it will release its Social service later this year; it will allow developers to write a single application, and users will be given a choice of which social network to access through the app.

Using the platform service, apps can be distributed to multiple mobile platforms. Users wanting to install an app can click on a link in an SMS message, and Adobe's distribution service can determine which device is making the request and provide the right version of the application for that device.

The platform also offers various analytical tools for measuring customer usage and distribution of a given application or mini-application. And cross-promotion of other Web applications is provided, so downloading one application could lead to another being offered.

The services include ad hosting for shared applications, using either ads available through other providers or through Adobe. Distribution, tracking, creating campaigns, and enabling ad hosting can all be managed through the service's Distribution Manager.

To use the service, applications will be built in Flash Professional or Flex Builder. Dreamweaver can be used to place the Share menu adjacent to the application on a Web page.

Why Not 'Get Into This Game?'

Jeffrey Hammond, an analyst with industry research firm Forrester, said Flash Platform Services makes sense for Adobe, especially in light of its recent acquisition of Web analytics provider Omniture.

He noted that, as the platform allows companies and developers to track and monetize their Flash widgets, Adobe is essentially getting into a viral market that its technology has helped to create.

Adobe had to be asking themselves, Hammond said, "'Why don't we get into this game ourselves?'" Some of the services, he noted, have been offered by companies like Clearspring.

One company working with the Adobe platform is The Wall Street Journal. The Journal's WSJ Radio Network will provide a widget with Web tools for its 370 radio station affiliates to use on their own sites. The widget will offer content from WSJ Radio and the Journal, including podcasts and live audio business updates, and users can share the widget through social-networking sites and through blogs.

Microsoft's New Zune Tries to Catch Up

Of course, there's an elephant in this particular room, and it's called Apple's App Store. Oh, the Zune has an app store, all right. As of today, there are exactly nine programs in the Zune App Store. A calculator. Weather. A Space Invaders game. Microsoft says that more are coming. It promises, furthermore, that they will all be free.

Over the years, mention of the word "Microsoft" has triggered a variety of emotions. Some consider how Microsoft achieved its success and feel anger. Some consider how Microsoft borrows other companies' ideas and feel indignant. Some recall a recent battle with Windows and feel frustration.
But when you try out Microsoft's new Zune HD music/video player, you may feel a whole new emotion that most people don't associate with Microsoft: sympathy.

Why? We'll get to that.

The Zune, which replaces the old models, is Microsoft's version of the iPod Touch -- a gorgeous multi-touch screen dominates the front. Its handsome, beveled metal case weighs next to nothing yet still feels expensive and solid in the hand. It is nearly buttonless: You operate it as you do the iPod Touch -- you navigate by tapping things on the screen, magnify photos or Web pages by spreading two fingers apart, rotate images by turning the player 90 degrees, and so on. The software design is fluid, beautiful and incredibly responsive.

The new Zune has an incredibly bright, sharp and colorful OLED screen (organic light-emitting diode, not that that helps). Finger-grease streaks are an ugly problem, at least when the screen is off.

The Zune HD is narrower and shorter than the Touch, and a hair thicker. It's available in black or silver; online, you can order a Zune HD with any of several fancy artist-designed back panels. The 16-gig model is $220; the 32-gig model is $290. The "HD" means two things. First, like its predecessors, this Zune can tune into FM radio, but now it can tune into HD radio stations, too.

The Zune HD's name also refers to the hi-def (720p) movies that you can buy on Microsoft's online store. The store is a big new push for Microsoft; the same music, television shows and movies will eventually be available for Xbox, Zune and even Windows Mobile cell phones. Buy a movie on one gadget, watch it on another. Alas, for now, the selection is relatively puny. The store offers a choice of six million songs, 10,000 television shows and 500 movies.

The Zune's own screen isn't fine enough to show you hi-def video. But when you set the player into the $90 Zune Dock, you can play your hi-def Zune movies in hi-def on your television. The Dock can also play your photos, music and radio stations through your home-entertainment system. All of it looks and sounds great, and is effortless to control with the included remote.

Music is still at the Zune's heart, especially if you sign up for Microsoft's $15-a-month, all-you-can-download music-store plan. Now, you could argue that those subscriptions are something of a rip-off; the day you stop paying that monthly fee, you lose your entire music collection.

The Zune Pass, though, eases the sting: You get to keep 10 songs a month forever (90 percent of Microsoft's songs are not copy-protected). Better yet, you can listen to your infinite playlist by logging into Zune.net from any Mac or PC, anywhere you go.

The Wi-Fi Web browser, and its accompanying iPhone-style on-screen keyboard, is new to the Zune. When you are in an Internet hot spot, you can call up Web sites, zoom in to magnify text and so on, just as on the iPod Touch or iPhone. It generally works well, though it is basic: You can open only one page at a time, and it can't play YouTube videos, Flash animations or Pandora radio stations. There is no e-mail program on the Zune, either.

Of course, there's an elephant in this particular room, and it's called Apple's App Store. Oh, the Zune has an app store, all right. As of today, there are exactly nine programs in the Zune App Store. A calculator. Weather. A Space Invaders game. Microsoft says that more are coming. It promises, furthermore, that they will all be free, which is nice. Unfortunately, for now, Microsoft intends to write all of these programs itself -- it isn't inviting the world's programmers to participate -- so the Zune app store will remain relatively tiny.

There are other minor disappointments. For example, adjusting the volume requires a step too many: You have to press a side button to bring up on-screen controls. There are 1.0-style bugs and glitches, as when my PC wouldn't see the Zune until after a couple of restarts. When you're playing a movie, there's no Rewind to Start button. And there is no speaker at all, not even a feeble one. But overall, Microsoft has done a truly beautiful job with this player and its software.

All right, then: So why sympathy? Because, after three years, hundreds of millions of dollars in advertising, and, yes, a lot of real innovation, the Zune has managed to claim a measly 1.1 percent of the music-player market.

The problem is partly the iPod's head start -- its catalogs of music, movies, apps and accessories are ridiculously superior to the Zune's -- and partly the Zune's reputation as the player for weirdos and losers. Among the under-25 set, "Zune" is a punch line.

It's an outdated joke. The Zune HD player itself is every bit as joyful, polished and satisfying as its rival. The question is whether Microsoft will stick it out long enough to close the catalog gap, the ecosystem gap and the image gap.

Motion-Detecting Earphones Offered by Sony Ericsson

Motion-detecting MH907 earphones from Sony Ericsson let mobile-phone users listen to music and take calls by removing an earbud. Sony Ericsson's MH907 earphones turn on music when both earbuds are inserted. The MH907 earphones use Sony Ericsson's SenseMe technology and only work with newer Sony Ericsson phones.

Sony Ericsson is giving consumers a way to talk and bop along to their favorite songs. The handset maker has developed motion-detecting earphones that allow mobile-phone users to listen to music and take calls simply by inserting and removing earbuds.
The London-based company has created marketing materials to promote its MH907 earphones, including a cartoon step-by-step demonstration of a barefoot boy sitting on a bus listening to music, receiving and ending a call, then popping the earbuds in again to resume listening to music.

The earphones turn on music once the user inserts both earbuds. The user can pause the music by removing one of the earbuds.

What happens when a user is listening to music and the phone rings? The user needs only to remove one earbud to answer the call, according to Sony.

"With the MH907, consumers can pocket their phone but still stay connected all day, every day by simply plugging in or removing their earbuds -- there is no need for a remote control or any buttons," said Jacob Sten, senior vice president of Sony Ericsson Mobile Communications. "At Sony Ericsson we believe it is important to listen to what our customers need, and introducing the world's first-ever motion-activated headphones highlights our commitment to offer our customers a complete communications experience."

Motion Detection

The company uses its SenseMe technology to detect when an earbud has been placed in the user's ear. Because the earphones are activated only by body contact, a user cannot accidentally answer a call when an earbud is sitting in a pocket or purse.

SenseMe is the same technology used in Sony Ericsson's W910i device, which feels its owner's mood and suggests music.

Overall, blog posters are happy with Sony Ericsson's innovation. Some, however, have pointed out that keeping both earbuds in creates safety issues for runners or walkers who may not hear beeping horns or other warning sounds.

Others have pointed out that the device is only compatible with newer Sony Ericsson phones, which leaves out users with older phones.

Fun Interaction

The new device fits with Sony Ericsson's realigned brand strategy to build fun and inclusive interaction for customers, according to Sten.

Color choices for the earphones are yellow/white and titan chrome. They can only be used with Sony Ericsson phones that have a fast port connector.

Sony Ericsson has not released any details on when the device will be available or how much it will cost, but some reports place the cost at about $55.

"As of this time a carrier has not been announced and we can't comment on price," said Lauren Haralson, a Sony Ericsson spokesperson.

Netflix Launches Second $1 Million Search Contest

Just after announcing the winner of its $1 million Netflix Prize, Netflix announced a second challenge. The first Netflix Prize went to the team that most improved the Netflix movie-recommendation system. Netflix Prize 2 focuses on a tougher challenge: predicting movie enjoyment by members who don't rate movies often.


Netflix on Monday announced the winner of its $1 million recommendation contest. Just moments later, it launched a new million-dollar challenge to encourage the engineering, computer-science, and machine-learning communities to keep working on improvements.

After three years and submissions by more than 40,000 teams from 186 countries, Netflix awarded the $1 million prize to the team that most improved the Netflix movie-recommendation system. Specifically, the teams set out to improve upon the company's ability to accurately predict Netflix members' movie tastes by 10 percent -- a hurdle Netflix scientists were not able to overcome on their own over the last decade.

Netflix cofounder and CEO Reed Hastings said it was a bona fide race to the end, with teams that had previously battled it out independently joining forces to surpass the 10 percent barrier. "New submissions arrived fast and furious in the closing hours," Hastings said, "and the competition had more twists and turns than The Crying Game, The Usual Suspects, and all the Bourne movies wrapped into one."

Improving Netflix

When Netflix launched the Netflix Prize in October 2006, it made 100 million anonymous movie ratings -- ranging from one star to five stars -- available to contestants. All personal information that could identify individual Netflix members was removed from the prize data. The data contained movie titles, star ratings, and dates, but no text reviews.
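For a sense of the underlying task, the kind of prediction the contest scored can be sketched as a tiny matrix-factorization model trained by stochastic gradient descent and judged by root-mean-square error, the contest's accuracy metric. The ratings and parameters below are invented for illustration; this is not Netflix's Cinematch algorithm or any winning entry:

```python
import math
import random

random.seed(0)

# Toy (user, movie, stars) triples standing in for the contest's
# anonymized ratings; the real data set held roughly 100 million.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1),
           (2, 1, 2), (2, 2, 5), (3, 0, 4), (3, 2, 2)]
n_users, n_movies, k = 4, 3, 2  # k latent factors per user/movie

# Small random latent-factor vectors for each user and each movie.
P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_movies)]
mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean rating

def predict(u, m):
    return mu + sum(P[u][f] * Q[m][f] for f in range(k))

def rmse():
    return math.sqrt(sum((r - predict(u, m)) ** 2
                         for u, m, r in ratings) / len(ratings))

before = rmse()
lr, reg = 0.05, 0.02  # learning rate, regularization strength
for _ in range(200):  # epochs of stochastic gradient descent
    for u, m, r in ratings:
        err = r - predict(u, m)
        for f in range(k):
            pu, qm = P[u][f], Q[m][f]
            P[u][f] += lr * (err * qm - reg * pu)
            Q[m][f] += lr * (err * pu - reg * qm)
after = rmse()

print(f"training RMSE fell from {before:.3f} to {after:.3f}")
```

The winning entries famously blended many models of this general family; the sketch above shows only the most basic ingredient, scored on its own training data rather than on held-out ratings as the contest required.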

Accurately predicting the movies Netflix members will love is a key component of the Netflix service. Neil Hunt, Netflix chief product officer, said this extreme level of personalization is "like entering a video store with 100,000 titles and having those that are most interesting to you fly off the shelves and line up in front of you."

Netflix Prize 2 focuses on a much tougher problem: predicting movie enjoyment for members who rate movies rarely, or not at all, by taking advantage of demographic and behavioral data that carry signals about those individuals' taste profiles.

Netflix's Five-Star Move

Unlike the first challenge, the new contest has no specific accuracy target. That's because Netflix and contest judges have little idea how far experts can push the data to drive useful predictions. For this reason, $500,000 will be awarded to the team judged to be leading after six months, and an additional $500,000 will be given to the team in the lead at the 18-month mark, when the contest is wrapped up.

Greg Sterling, principal analyst at Sterling Market Intelligence, called the Netflix contest a great move all the way around.

"Crowdsourcing the Netflix algorithm, getting a better user experience as a result, and all the positive PR from the contest. It's an example of Web 2.0 best practices, although that term is now passé," Sterling said. "One million dollars is nothing to Netflix, and it's a big enough prize to get some top-notch folks involved. It could potentially even result in some engineering hires down the line."

Windows 7: Will Its Features Impress You?

One completely new feature you'll be confronted with in Windows 7 is Libraries. In essence, libraries are folders that can point to files or resources in a number of separate physical locations. You can create a library called "my documents," for instance, that aggregates files from your local C drive as well as from external or network drives.


Now less than two months from worldwide release, Microsoft's Windows 7 is bound to make the legions of existing Windows users wonder whether they should upgrade. And it will make those who have put off hardware purchases in anticipation of the new operating system wonder whether the time is right to hand over their hard-earned cash. Windows 7 has already won over the majority of those who have been beta testing the product for over a year now. So there's really just one question that remains: Will it impress you? To give you a head start on deciding, here's an overview of what are likely to be the most talked-about new features of Windows 7.
Speed and Stability

Promises of greater speed and stability are likely to interest the majority of Windows users. But it's helpful to remember that every major release of Windows in memory has arrived with similar assurances. And each time, the reality of the operating system, once unleashed onto the millions of computers around the world, has fallen short of expectations.

Will Windows 7 actually be different? In terms of performance, most benchmarks put the release-to-manufacturing version of Windows 7 roughly on a par with both Windows XP and the first service pack release of Windows Vista. But Windows 7 is noticeably speedier in areas that matter a lot to most users: startup is faster, as is Windows shutdown, and most disk-intensive tasks are at least on a par with the speed of Windows XP.

In terms of stability, only time will tell. Anecdotal reports of Windows 7 running nonstop for a month or more without requiring a reboot are rampant around the Web, which is good. But there are also plenty of reported incidences of Explorer crashes and other glitches occurring in the new operating system, just as with Windows Vista. Do not, in short, expect miracles out of Windows 7 in either speed or stability. The good news is that in neither of these areas does the new operating system appear to be worse than the ones it will replace.

Installation

Installation is much improved in Windows 7 over any previous version of Windows. If you purchase an upgrade or a full retail version of the new operating system, you'll likely be delighted at how seldom the operating system interrupts the installation process for input from you. And you'll be pleasantly surprised at how many components of your PC Windows 7 recognizes automatically, finding and installing the correct device drivers in the process.


Compatibility

Microsoft took backward compatibility seriously with Windows 7, and the result is an operating system that will be compatible with the majority of existing Windows applications, regardless of which version of Windows they were originally designed to run under.

One of the secrets to Windows 7's impressive compatibility is the new Windows XP compatibility mode. With this feature, you can run any application that works under Windows XP within Windows 7 by using what amounts to a virtual Windows XP machine within the confines of Windows 7 itself.

Simplicity

Windows 7 has done away with much of the most annoying clutter and intrusiveness of the Vista interface. Gone are the gadget sidebar, the Welcome screen, and many of the most objectionable aspects of User Account Control (UAC), which in Vista impeded users at almost every turn. It's also very easy to turn off UAC altogether in Windows 7.

The taskbar has been impressively improved in Windows 7. Right away you'll see that Windows 7 groups multiple windows of the same application by default on the taskbar, so your taskbar won't be cluttered by, say, eight instances of Internet Explorer. Another nice touch of the automatic grouping: hovering your mouse cursor over a taskbar icon pops up thumbnails of the open windows, so you can move to the one you want without any guessing.

Some taskbar icons are also graced with a built-in status indicator in those cases where minimized applications are performing some time-consuming functions. For instance, the taskbar icon for a minimized Windows Explorer that is busy copying files will get progressively greener as the file copying proceeds.

The new operating system is by no means as uncluttered and workmanlike as the Windows XP interface. But Windows 7 does a decent job of merging Windows XP's utilitarian bent with the conveniences that emerged along with the interface changes unveiled first in Windows Vista.

Libraries

One completely new feature you'll be confronted with in Windows 7 is Libraries. In essence, libraries are folders that can point to files or resources in a number of separate physical locations. You can create a library called "my documents," for instance, that aggregates files from your local C drive as well as from external or network drives. You tell the library container where to look, and it does the job of assembling the files. You can access libraries directly from Windows Explorer. Merely double-clicking a library name accesses the underlying files.
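The aggregation a library performs can be sketched in a few lines of Python: given several physical folders, walk each one and present the union as a single virtual listing. This is purely illustrative, not how Windows 7 actually implements the feature:

```python
import os
import tempfile

def library_listing(locations):
    """Aggregate files from several physical folders into one view,
    recording where each file actually lives (illustrative only)."""
    listing = []
    for root in locations:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                listing.append((name, os.path.join(dirpath, name)))
    return sorted(listing)

# Demo: two separate folders standing in for a local drive and a
# network share, merged into one "Documents"-style library view.
with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
    open(os.path.join(a, "report.docx"), "w").close()
    open(os.path.join(b, "budget.xlsx"), "w").close()
    for name, path in library_listing([a, b]):
        print(name, "->", path)
```

The point of the sketch is the separation of concerns: the library view stores only the list of source locations, while the files themselves stay wherever they physically live.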


Improved Search

Searching for existing data is every bit as important on the desktop as it is on the Internet. And Windows 7 improves on the strides made by Windows Vista in the search arena, thanks to a feature dubbed Federated Search. In essence, with Federated Search, the near real-time search capability unveiled in Windows Vista now extends to network drives and other remote storage repositories in Windows 7. Search in Windows 7 is also quite customizable. With so-called search connectors, for example, you can even perform searches on Web sites such as Twitter right from your Windows 7 desktop.

There are plenty of other pleasant surprises hidden within Windows 7, but these headline features are likely to be what tempts you to give Microsoft's new operating system a try. Just remember that a good deal of hype always precedes the release of a new Microsoft operating system. While Windows 7 makes some significant strides in important areas, there's no law against sticking with Windows Vista or even XP if those operating systems are serving you well.

10 Critical Trends for Cybersecurity

The Internet, private networks, VPNs, and a host of other technologies are quickly weaving the planet into a single, massively complex "infosphere." These connections cannot be severed without overwhelming damage to companies and even economies. Yet, they represent unprecedented vulnerabilities to espionage and covert attack.


"Cybersecurity is the soft underbelly of this country," outgoing U.S. National Intelligence Director Mike McConnell declared in a valedictory address to reporters in mid-January. He rated this problem equal in significance to the potential development of atomic weapons by Iran.
McConnell does not worry so much that hackers or spies will steal classified information from computers owned by government or the military, or by contractors working for them on secret projects. He is afraid they will erase it and thereby deprive the United States of critical data. "It could have a debilitating effect on the country," he said.

With this concern in mind, Forecasting International undertook a study of factors likely to influence the future development of information warfare.

Real-world attacks over the Internet also are possible. In March 2007, the Department of Energy's Idaho National Laboratory conducted an experiment to determine whether a power plant could be compromised by hacking alone. The result was a diesel generator smoking and on fire due to some malicious data that could easily have been sent to it over the Internet from anywhere in the world. In January 2008, a CIA analyst told American utilities that hackers had infiltrated electric companies in several locations outside the United States. In at least one case, they had managed to shut off power to multiple cities.

We conclude that information warfare will be a significant component in most future conflicts. This position is in line with both U.S. military doctrine and white papers published by the Chinese People's Army. One study affirms that as many as 120 governments already are pursuing information warfare programs.

Repeated reports that Chinese computer specialists have hacked into government networks in Germany, the United States, and other countries show that the threat is not limited to relatively unsophisticated lands. A 2007 estimate suggested that hackers sponsored by the Chinese government had downloaded more than 3.5 terabytes of information from NIPRNet, a U.S. government network that handles mostly unclassified material. More disturbingly, The Joint Operating Environment 2008: Challenges and Implications for the Future Joint Force (the JOE) comments that "our adversaries have often taken advantage of computer networks and the power of information technology not only to directly influence the perceptions and will of the United States, its decision-makers, and population, but also to plan and execute savage acts of terrorism."

Many factors guarantee that the role of information warfare in military planning and operations will expand greatly in the next two to three decades. These include the spread of new information technologies such as Internet telephony, wireless broadband, and radio-frequency identification (RFID); the cost and negative publicity of real-world warfare; and the possibility that many information operations can be carried out in secret, allowing successful hackers to stage repeated intrusions into adversaries' computer networks.


10 Critical Trends for Cyberwar

Forecasting International [rates] the following as the 10 most significant trends that will shape the future of information warfare. This ranking is based largely on the responses of our expert panelists, but also on our own judgment, developed over 50 years of trend analysis and extrapolation in military and national-security contexts. In nearly all cases, these two inputs agreed.

1. Technology Increasingly Dominates Both the Economy and Society

New technologies are surpassing the previous state of the art in all fields. Laptop computers and Internet-equipped cell phones provide 24/7 access to e-mail and Web sites.

New materials are bringing stronger, lighter structures that can monitor their own wear. By 2015, artificial intelligence (AI), data mining, and virtual reality will help most organizations to assimilate data and solve problems beyond the range of today's computers. The promise of nanotechnology is just beginning to emerge.

Ultimately, speculations may prove correct that we are approaching the "Singularity's event horizon." At that time, our artifacts will be so intelligent that they can design themselves, and we will not understand how they work. Humanity will be largely a passenger in its own evolution as a technological species.

Implications for Information Warfare and Operations: The growing domination of technology is the ultimate foundation for cyberwar. Complex, often delicate technologies make the world a richer, more-efficient place. However, they also make it relatively fragile, as it becomes difficult to keep industries and support systems functioning when something disrupts computer controls and monitors, and the opportunities for disruption proliferate rapidly.

A frequently overlooked scenario is the use of infotech by organized crime, according to consulting futurist Joseph F. Coates. "It is 2015, and the Mafia electronically wipes out the records of a modest-sized bank in Texas or Nebraska, and then quietly visits a small group of large financial services organizations with a simple message: 'We did it -- you could be next. This is what we want, to protect you.'"

Futures-studies scholar Stephen F. Steele notes, "Cyber systems are not simply information, but cyber cultures. Coordinated cyberattacks at multiple levels will be capable of knocking out the macro (national defense systems), meso (local power grids), and micro (starting an automobile) simultaneously."

2. Advanced Communications Technologies Are Changing the Way We Work and Live

Telecommuting is growing rapidly, thanks largely to e-mail and other high-tech forms of communication. However, the millennial generation has already abandoned e-mail for most purposes, preferring to use instant messaging and social-networking Web sites to communicate with their peers. These and other new technologies are building communities nearly as complex and involved as those existing wholly in the real world.

Implications for Information Warfare and Operations: This is one of the two or three critical trends that give information warfare and operations their significance.

As our institutions integrate their operations, their connectivity makes them more vulnerable to unauthorized access. As they redesign their operations to take advantage of the efficiencies that computers offer, they also open them to disruption by technologically sophisticated adversaries.

Disruption may not be overt or easily detected. With manufacturing systems increasingly open to direct input from customers, it might be possible to reprogram computer-controlled machine tools to deliver parts that were subtly out of spec -- and to rework the specifications themselves so that the discrepancies would never be noticed. If the tampering were carried out with sufficient imagination and care on well-selected targets, the products might conceivably pass inspection, yet fail in the field. This could have significant military implications.

"The Internet is a mess, open to all kinds of uses, misuses, antisocial material, irksome intrusions from ads, identity theft, international swindles, and on and on," observes Coates. "For these reasons, as well as the potential for national-security interventions and general hell raising, it is time to plan, design, and execute over the next five to seven years a replacement for the Internet."

Infotech and business management consultant Lawrence W. Vogel calls attention to the impacts of cloud computing (third-party data hosting and service-oriented computing) and Web 2.0 applications (social networking and interactivity). "The cybersecurity implications associated with cloud computing, whether a public or private cloud, are significant," he says. "As more companies and the government adopt cloud computing, they become more vulnerable to disruption and cyberattacks. This could result in disruption in services and the ability to rapidly access critical software applications. And with the widespread use of Facebook, blogs, and other social-networking applications in our personal lives, government organizations are seeking similar capabilities for communicating and interacting with their stakeholders. Once the government permits interactive, two-way communications over government networks, the chance for cyberattacks dramatically increases."


3. The Global Economy Is Growing More Integrated

Critical factors here include the rise of multinational corporations, the relaxation of national distinctions (e.g., within the European Union), the growth of the Internet, and computerized outsourcing of jobs to low-wage countries.

Implications for Information Warfare and Operations: The Internet, private networks, virtual private networks, and a host of other technologies are quickly weaving the planet into a single, massively complex "infosphere." These nearly infinite connections cannot be severed without overwhelming damage to companies and even to national economies. Yet, they represent unprecedented vulnerabilities to espionage and covert attack. This is another major trend for information warfare and operations.

"Another thing to think about here [is that] the sheer volume of information racing through the 'infosphere' enhances the opportunity for cyberwar operators to embed encrypted information within routine data flows," says law enforcement strategic planner John Kapinos. "This could take the form of system-disabling viruses, or secret message traffic concealed within an ocean of regularly transmitted, legitimate data. Sophisticated data-monitoring programs designed to detect unusual patterns would be needed to counteract such a scheme."
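As a toy illustration of Kapinos' point (not any deployed system), such a monitoring program could flag transmissions whose size deviates sharply from the typical traffic pattern, here using a robust median-based score so that one extreme value cannot mask itself:

```python
import statistics

def flag_anomalies(sizes, threshold=3.5):
    """Flag message sizes whose deviation from the median exceeds
    `threshold` times the median absolute deviation (a toy detector)."""
    med = statistics.median(sizes)
    mad = statistics.median(abs(s - med) for s in sizes) or 1.0
    return [s for s in sizes if abs(s - med) / mad > threshold]

# Routine traffic clusters near 1 KB; one oversized transfer stands out.
traffic = [1024, 980, 1100, 1010, 995, 1050, 50_000]
print(flag_anomalies(traffic))
```

Real traffic analysis looks at far more than size (timing, destinations, entropy of payloads), but the shape of the problem is the same: model the routine flow, then surface what does not fit it.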

Futurologist Ian D. Pearson adds that, as interactions become more complex, "it will be harder to spot points of vulnerability. Fraud and cyberterrorism will increase."

The actors in cyberwarfare now include non-state entities, points out strategic-planning consultant Frank Sowa of the Xavier Group Ltd. "Corporations in the twenty-first century are borderless and are not geopolitical," he argues. "The key to actively thwarting cyberwarfare is to recognize corporations and organized religions on the same -- or even higher -- protocol than geopolitical governments and borderless, non-geopolitical terror and extremist operations."

4. Research and Development Play a Growing Role in the World Economy

Total U.S. outlays on R&D have grown steadily in the past three decades. Similar trends are seen in China, Japan, the European Union, and Russia.

Implications for Information Warfare and Operations: This trend is responsible for the accelerating technological advances seen in recent decades. It is another critical factor in the development of information warfare.


The chief product of R&D is not clever new merchandise or technologies, but information. Even the most sensitive output from research results is routinely stored in computers, shipped through company intranets, and usually transmitted over the Internet. This accessibility makes it a prime target for espionage, whether industrial or military. This problem has been growing nearly as quickly as the mass of information available to prying. It will be a still greater concern for security specialists in the years ahead.

Many R&D programs promote the dissemination of research results, observes public-policy specialist Mark Callanan of the Institute for Public Administration in Dublin. "While this is of course entirely sensible for the vast majority of research, the emphasis on getting as much information out there [as possible] may pose additional security dilemmas in terms of cybercrime," he argues.

Pearson adds that "the downside is that R&D also occurs in weapons tech, so there is always a background arms race. High-capability technologies will present enormous threats to mankind in the second half of this century."

5. The Pace of Technological Change Accelerates with Each New Generation of Discoveries and Applications

In fast-moving engineering disciplines, half of the cutting-edge knowledge learned by college students in their freshman year is obsolete by the time they graduate. The design and marketing cycle -- idea, invention, innovation, imitation -- is shrinking steadily. As late as the 1940s, the product cycle stretched to 30 or 40 years. Today, it seldom lasts 30 or 40 weeks.

The reason is simple: Some 80% of the scientists, engineers, technicians, and physicians who ever lived are alive today -- and exchanging ideas in real time on the Internet.

Implications for Information Warfare and Operations: As new technologies arrive, industry will be forced to hire more technology specialists and to train other employees to cope with new demands. Some support functions may be moved offshore, where technically knowledgeable adversaries might have greater access to them, opening the way to disruption.

"It is important in the discussion not to neglect the large amount of information technology now obsolescent or obsolete, but [still] in place," observes Joe Coates.

The advance of machine intelligence will also have confounding implications for cybersecurity. According to knowledge theorist and futurist Bruce LaDuke, "Knowledge creation is a repeatable process that is performed by humans and could be performed by machines exclusively or in systems built to interact with humans ('man-in-the-loop' systems). Artificial knowledge creation will usher in [the] Singularity, not artificial intelligence or artificial general intelligence (or technology advancing itself). Artificial intelligence has already been achieved by any computer, because intelligence is appropriately defined as knowledge stored that can be retrieved (by human or computer). The first arriver to [artificial knowledge creation] technology will drive the entire paradigm shift."


6. The United States Is Ceding Its Scientific and Technical Leadership to Other Countries

In June 2009, a U.S. National Security Agency-backed "hacking" competition pitted 4,200 programmers from all over the world against one another in algorithm coding and other contests; of the finalists, 20 were from China, 10 were from Russia, and only two were from the United States, reports Computerworld. "We do the same thing with athletics here that they do with mathematics and science there," says Rob Hughes, president of TopCoder, the software development company that operates the annual competition. Hughes argues that the United States needs to put more emphasis -- and earlier -- on math and science education.

"The scientific and technical building blocks of our economic leadership are eroding at a time when many other nations are gathering strength," the National Academy of Sciences warns. "Although many people assume that the United States will always be a world leader in science and technology, this may not continue to be the case inasmuch as great minds and ideas exist throughout the world."

R&D spending is growing in raw-dollar terms, but, when measured as a percentage of the total federal budget or as a fraction of the U.S. GDP, research funding has been shrinking for the last 15 years. Only half of U.S. patents are granted to Americans, a proportion that has been declining for decades.

More than half of U.S. scientists and engineers are nearing retirement. At the rate that U.S. students are entering these fields, the retirees cannot be replaced except by recruiting foreign scientists.

Implications for Information Warfare and Operations:

To whatever extent the United States loses its leadership in science and technology, it falls behind other countries in the intellectual and personnel base required for information warfare and operations. If this trend is not reversed, the United States could find itself at a significant disadvantage in this strategically and tactically important area.

"The strength of the United States is in knowledge creation under the auspices of innovation and invention that has been applied in all kinds of technologies," argues LaDuke. "Ceding existing technology as technology converges and rises exponentially is not as significant as not creating the knowledge that is empowering future advances in technology."

Pearson adds, "The increased power of smart individuals is more of a problem, especially in NBIC [nanotech, biotech, infotech, and cognitive science] areas. Unabomber-style activity from inconspicuous people within a community is more of a danger than hostile states or terrorist groups."

Steele warns that, "not only is the United States ceding the 'left brain' sciences, but the continuation of a linear, industrial model for education has [it] ceding a growing need for 'right brain' -- creative and synergistic -- thinking."

7. Technology Is Creating a Knowledge-Dependent Global Society

More and more businesses -- and entire industries -- are based on the production and exchange of information and ideas rather than exclusively on manufactured goods or other tangible products. At the same time, manufacturers and sellers of physical products are able to capture and analyze much more information about buyers' needs and preferences, making the selling process more efficient and effective.

Implications for Information Warfare and Operations: Increasing dependence on technology effectively translates to growing fragility. Disrupt essential information or communications systems, and a company, government agency, or military unit could be dead in the water, or at least cut off from oversight and coordination with its partners. Telecommuting systems, for example, offer several obvious opportunities to disrupt the operations of the company or agency that depends on them.

"The 'bunker-buster' ammunition that could be brought to bear within the context of cyberwar has not yet been deployed (or at least apparently not yet in a manner that has worked well)," says Cynthia E. Ayers, a security specialist and visiting professor at the U.S. Army War College's Center for Strategic Leadership. "How knowledge-dependent populations react -- or how 'new media' societies are capable of reacting -- when such weapons are deployed may ultimately determine their fate. The chaos that could be caused either under a limited (homemade) EMP [electromagnetic pulse] scenario or as a result of one or more high-altitude nuclear blasts would be devastating to a Western population in many ways. The losses incurred would make the current economic downturn seem like a mere irritant."

Lt. Col. Kevin Gary Rowlatt of the Australian Army observes that "countermeasures to cyberthreats developed by us will impede our ability to work effectively, let alone efficiently. Firewalls, authentication, and encryption programs have the potential to slow the flow of information. An enemy would love to slow down some decision cycles. This approach would allow them to achieve the aim simply by presenting a threat, be it credible or virtual. We become distrustful of information contained or processed within cyber networks."


8. Militant Islam Continues to Spread and Gain Power

It has been clear for years that the Muslim lands face severe problems with religious extremists dedicated to advancing their political, social, and doctrinal views by any means necessary. The overthrow of Saddam Hussein and the American occupation of Iraq have inspired a new generation of jihadists, who have been trained and battle-hardened in the growing insurgency.

Implications for Information Warfare and Operations: Information systems are another avenue of attack that Muslim radicals could exploit against their chosen enemies in the West. One likely source of such an attack would be India, a land with a substantial Muslim minority (about 150 million people) and strong computer and communications industries.

As Ayers observes, "It has long been noted that radical Islamists have been using the Internet to preach, recruit, glorify suicide-bombers, and perform training on a global basis. The 'e-possibilities' for Islamic militants are obviously limited only [by] the imagination, just as they are for more harmonious or legitimate activities. The cyberworld offers a wealth of opportunity to engage in the spread of Islam, followed by -- or in conjunction with -- a cyberwar that would be seen as just in the Islamic tradition."

9. International Exposure Includes A Growing Risk of Terrorist Attack

Terrorism has continued to grow around the world as the wars in Iraq and Afghanistan proceed, even as the rate of violence in Iraq itself has declined. Nothing will prevent small, local political organizations and special-interest groups from using terror to promote their causes.

On balance, the amount of terrorist activity in the world will continue to rise, not decline, in the next 10 years. In fact, terrorist attacks have risen sharply since the invasion of Iraq, both in number and in severity.

Implications for Information Warfare and Operations: Until the terrorist problem is brought under control -- which will probably not happen for at least a generation -- we will face a growing threat that Muslim extremists will master computer and Internet technologies and use their skills to disrupt essential communications and data. The impact will be seen in U.S. corporations, research laboratories, universities, utilities companies, and manufacturing. Cyber operations will be at best second choices for many terrorists, who prefer the newsworthy gore of attacks with bombs and firearms. However, their potential for maximum economic impact with minimum risk eventually will make them irresistible to forward-looking extremists.

"National security needs to address the freedom that big business has in moving its IT services off shore," says Rowlatt. "If a business is a major contributor to a nation's GDP, then what right does it have to expose its 'cyber underbelly' to a foreign power, which in turn, exposes the nation to unnecessary cyber risks? Look at how terrorists targeted Mumbai, the cyber center for India, which serviced many international organizations' IT needs."

10. The World's Population Will Grow To 9.2 Billion by 2050

The greatest fertility rate is found in those countries least able to support their existing people -- the Palestinian Territories, Yemen, Angola, the Democratic Republic of Congo, and Uganda. In contrast, populations in most developed countries are stable or declining. The United States is a prominent exception.

Implications for Information Warfare and Operations: The world population's growth in itself is less significant than where that growth is concentrated. "India already has the largest supply of English-speaking [people]," observes Francis G. Hoffman, research fellow at the Marine Corps Center for Threats and Opportunities. "The educational systems in the latter will not support the advancement of knowledge workers to any degree, and could be swamped by poor governance, lack of services, and chronic disorder. Many places in Asia will experience some of the same downsides of large population growth without adequate governance, services, and education."

Steele adds that the disparities in population growth will widen the gap between developed and developing worlds, producing "environments of anomie and alienation as a breeding ground for terrorist ideology." Moreover, increased education and technological sophistication in the developing world could compound these problems. Steele argues, "A growing proportion of the world's population (including the developing world) is gaining primary and secondary-school-equivalent education. The diffusion of cyber systems in the developing world increases opportunity for global cyberwar."

Conclusion: Lessons for Avoiding Cyberwar

Our major concern is no longer weapons of mass destruction, but weapons of mass disruption. The cost of "going nuclear" is simply too high for atomic weapons to be used by any but a rogue state unconcerned with its own survival. Cyberweapons may kill fewer people, but they can have enormous economic impact. A particularly clever opponent might even carry out a devastating attack without ever being identified or facing retribution. Information has become the battlefield of choice. It will remain so well into the future.


Lesson number one: As the world becomes more dependent on information technology, it becomes more fragile. It is possible to make any specific site or network more secure, but not the "system" as a whole. As network connections proliferate, electronic controls -- for example, of petroleum refineries, chemical plants, or electrical grids -- become more complex and interlinked, and the number of users grows, the opportunities to interfere with their operations expand exponentially. There is a growing possibility that even accidental missteps could cause significant harm. This damage would not necessarily be limited to data but could strike at real-world infrastructure, with potentially devastating effects. Economic losses could be severe, and loss of life is possible.

Lesson number two: Cybercrime could be as significant as cyberwar. Four members of our panel cited profit-motive information crimes as a problem of potential importance. An information "protection racket" aimed at financial institutions could entail serious economic risks, and perhaps security risks as well. These crimes might use many of the same techniques as information warfare and could be difficult to distinguish from it. Indeed, in a world where rogue governments have supported themselves in part through counterfeiting major currencies, there may be no useful distinction. However, it is not clear that cyberwar and cybercrime will be amenable to the same countermeasures.

Lesson number three: The rise of artificial intelligence will change the nature of cyberwar. As computer systems "learn" to imitate human reasoning and skills, the nature of cyberwar will change. Instead of relying on human hackers to carry out their attacks, antagonists will automate their information warfare, relying on AI systems to probe opposing defenses, carry out attacks, and defend against enemy AI. This competition will quickly outstrip human control, or even monitoring. This is one aspect of the hypothetical "Singularity," the time when artificial intelligence exceeds our own and it becomes impossible even in theory to predict what will happen in the further future.

Lesson number four: The United States is losing its leadership in critical technologies. As other countries build up their technological capacity, the United States is allowing its own to deteriorate. As China and India turn out more scientists, engineers, doctors, and technicians, the United States has been producing fewer. As other lands spend more on research and development, the United States has been spending less. And as other countries devote more of their research budgets to fundamental science, where breakthroughs happen, the United States has focused increasingly on short-term applications. All this may put America at a serious disadvantage in future cyberwars.


In the spring of 2009, the U.S. government undertook new steps to meet this threat. The Pentagon is in the process of creating a Cyber Command center with the aim of protecting the Department of Defense's 17,000 networks and 7 million computers from attack. President Obama also announced a new "cyber czar" position within the administration. Scott Charney, former head of security at Microsoft, is said to be at the top of the shortlist. The question of how effective any one cabinet official can be against a cyberattack remains unanswered.

"They're still trying to fight this problem within the traditional command and control structure," says Patrick Tucker, senior editor of THE FUTURIST. "How does a czar take down an international, unaffiliated network of anonymous attackers? It's like using a hammer against killer bees."

Many questions about information warfare remain to be answered. What are the most likely targets? How would they be attacked? What are the probability and potential impact of each attack? What would the consequences be in terms of human lives, economic cost, and continuing disruption? How could we tell such an attack was coming? And most importantly, what could we do to stop it? These future-critical questions urgently need further study.

Password Managers: Your Key to Safe Surfing

Passwords that are at least eight characters long and are a combination of letters, numerals, and symbols are the best. One common tip these days for creating secure passwords is to think of a sentence you're unlikely to forget -- such as "I was born in 1945" -- and then create a password consisting of the first letter of each word.


Take all of the antivirus, anti-spyware, and anti-phishing software in the world. None of it can protect you if you surf the Internet with weak or weakly protected passwords.
Just imagine the consequences if hackers were able to obtain one or more of your passwords. Would they be able to access your bank accounts, online shopping accounts, credit cards, and more? Even one compromised password could be big trouble.

Most people know this. Yet recent reports suggest that many continue to use the same password for most, if not all, of their online accounts.

A recent study by U.S.-based communications firm @www found that over 60 percent of Internet users employ the same password for all of their online accounts. Other recent studies resulted in similar findings. So what's the solution?

Password Managers

Password managers can be a great solution to the problem of trying to create and remember passwords. There are dozens on the market, but two stand out on most people's lists as best-of-breed: RoboForm and Lastpass.

RoboForm (http://www.roboform.com) has been around for many years. It has evolved from a first-class form-filling application -- with a free version as well as a commercial "pro" version -- into a combined password manager and form filler that integrates into your browser by means of a toolbar. It's fast, easy to use, and contains no annoying pop-ups or adware.

The main knock against RoboForm has been that the process of synchronizing your passwords on one machine with those on another is less than elegant. An add-on product, RoboForm2GO, is required to take your password and form-filling data with you to another machine. And yet another associated product, GoodSync, can help to keep passwords, form data, and other common application data in sync automatically, assuming the computers are connected or that you carry around a flash drive with the latest updates.

What's really missing, though, is the ease of use that would come with being able to synchronize passwords, form data, and other data over the Internet. Such a system uses the Internet as an intermediate storage location. That way, when you log on with your second or third computer, you can quickly and easily synchronize your passwords by accessing the synchronization file online.

The programmers at SiberSystems, makers of RoboForm, are addressing this shortcoming with the introduction of RoboForm Online (https://online.roboform.com), currently in beta. RoboForm Online works its magic by allowing you to store your passwords and data on servers supplied by SiberSystems. That way, no matter where you are or which computer you're using, RoboForm Online will automatically keep your passwords up to date by fetching the latest passwords and other data from the server online.


RoboForm Online is no doubt a welcome enhancement for veteran RoboForm users. But RoboForm Online is actually playing catch-up to relative newcomer Lastpass (https://lastpass.com), also available for free.

Lastpass was built from the ground up with easy synchronization in mind. Essentially a Web-based application, Lastpass stores an encrypted copy of your passwords and other Internet data in your online Lastpass account. Go to a new computer, and all you have to do is log in to your Lastpass account to get your passwords installed on the new machine.

Lastpass, like RoboForm, attempts to do much more than just store passwords. It's also a form filler, allowing you to create multiple identities for different types of form-filling activity. RoboForm's form-filling capabilities are a bit more robust than those of Lastpass. For example, RoboForm allows the creation of unlimited custom fields that the program will automatically recognize and fill. But Lastpass's form-filling features are enough for most, and the program's ease of use and elegant synchronization method stand out.

Security Issues

While online password synchronization is clearly an important feature -- and the direction in which password managers are going -- many might justifiably be concerned about how safe their password data is on someone else's server.

Both Lastpass and RoboForm state that no unencrypted password or personal information is ever sent over the Internet through their applications or stored on their computers. The only way for a third party to see your data is to have the decryption key, which is something you create and is never transmitted along with your encrypted data. Now, if you don't want to be one of those who tests the veracity of these companies' claims, then you may want to stick with the less portable RoboForm -- or even create and manage your own passwords.
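The "only you hold the key" claim rests on deriving the encryption key locally from a master password that never leaves your machine; the server sees only ciphertext and a salt. Here is a minimal sketch of that idea in Python -- the algorithm and iteration count are illustrative assumptions, not the actual scheme either product uses:

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Derive an encryption key locally from a master password.

    The key exists only on your machine; a sync server would store
    just the salt and the data encrypted with this key.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)

salt = os.urandom(16)          # random salt, stored alongside the ciphertext
key = derive_key("my master password", salt)

# The same password and salt always re-derive the same key, so any
# machine you log in from can decrypt -- without the server ever
# seeing the password or the key.
assert key == derive_key("my master password", salt)
```

A different master password (or salt) yields a completely different key, which is why losing the master password with a scheme like this means losing the data.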

Do It Yourself

If the idea of a password manager doesn't appeal to you, you can create secure passwords that are tough to crack. But you need to follow some guidelines.

First, avoid creating passwords that are years (as in your year of birth) or words that can be found in the dictionary. Also avoid names -- especially the name of your spouse, your kids, or your pet.

Passwords that are at least eight characters long and are a combination of letters, numerals, and symbols are the best. One common tip these days for creating secure passwords is to think of a sentence you're unlikely to forget -- such as "I was born in 1945" -- and then create a password consisting of the first letter of each word, and include any numbers. So for the example above, your password would be "iwbi1945." Experts suggest mixing numbers or symbols in between letters for extra security.
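The sentence-to-password tip above can be sketched as a short script. This is a toy illustration of the mnemonic only -- not a random password generator, and no substitute for a password manager:

```python
def sentence_to_password(sentence: str) -> str:
    """Build a password from the first letter of each word,
    keeping whole numbers (like a year) intact."""
    parts = []
    for word in sentence.split():
        cleaned = word.strip('".,!?')
        if cleaned.isdigit():
            parts.append(cleaned)            # keep "1945" whole
        elif cleaned:
            parts.append(cleaned[0].lower()) # first letter of each word
    return "".join(parts)

print(sentence_to_password("I was born in 1945"))  # iwbi1945
```

Running it on the article's example sentence reproduces "iwbi1945"; mixing symbols in between the letters afterward, as the experts suggest, strengthens the result further.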

Once you have a secure password, use it for one site -- and one site only. Remember that if you tend to use the same password for everything, a skillful hacker could get into all of your online accounts by guessing just one password. You'll want to avoid that at all costs.

Finally, don't write your passwords down. You'd be surprised at just how many people live with passwords written on sticky notes that are close to their computer -- there for anyone to uncover.

But if creating and remembering multiple, secure passwords seems to you to be a daunting task, that's because it is. These days, a password management add-on is really a necessity.

Microsoft Opens Office Web Apps for Selective Testing

Microsoft will allow a select group to test its Office Web Apps online. Invitees to the Office Web Apps Technical Preview program will be able to access lightweight versions of Microsoft Word, Excel and PowerPoint. Microsoft said Office Web Apps will be integrated with Office 2010 to deliver productivity across PCs, mobile devices, and browsers.
Microsoft took the cover off its Web-based versions of the Microsoft Office suite on Thursday, at least partially. The software giant offered what it calls the Office Web Apps Technical Preview program that will allow a select group to give the software a test drive before the official beta rolls out later this year.
Invitees will receive access to a lightweight version of Microsoft Word, Excel and PowerPoint on the Web through Windows Live. Microsoft also announced the formal name for the Web-based applications: Office Web Apps. The suite includes Word Web App, Excel Web App, PowerPoint Web App, and OneNote Web App.

Microsoft's aim with Office Web Apps is to allow people to access, share and work on Office documents from virtually anywhere with an Internet connection.

"Our mission with the upcoming release of Microsoft Office 2010 is to deliver a great productivity experience, improving upon what customers depend upon today, and innovating on what they'll expect tomorrow. Office Web Apps are a key part of our vision for Office 2010," said Michael Schultz, director of marketing for Microsoft Office Services.

Anywhere Productivity
Schultz said the Office 2010 release is designed to deliver a productivity experience across PCs, mobile devices, and browsers. Office Web Apps will be integrated with Office to give users the ability to save open documents on the Web directly from Microsoft Office 2010.

Microsoft is offering Office Web Apps through Windows Live because the company sees it as a strategic hub for people to store and share information such as photos, contacts, calendars and documents on Windows Live SkyDrive.

"The latest statistics tell us two-thirds of the worldwide population is online at least once a month, and in the United States, 89 percent of the top 100 companies offer telecommuting," Schultz said. "That means people are on the move and need to stay productive with access to their information, no matter where they are. Office Web Apps empower people to access information and edit and share documents in a familiar environment from practically anywhere, on virtually any device."

The full feature set for Office Web Apps will be available in the first half of 2010, and offered in three ways. Windows Live customers will have access to Office Web Apps on Windows Live SkyDrive. Office Web Apps will be available to Office 2010 volume-licensing business customers, hosted with a Microsoft SharePoint Server on-premises. Businesses will also have access to Office Web Apps through Microsoft Online Services.

Better Than Google Apps?
Matt Rosoff, an analyst at Directions on Microsoft, said Office Web Apps appears to give end users an almost identical experience to what they will get with Office 2010. And he points to a job well done with document fidelity.

"There's round-tripping, so if you open a document online and you also decide to open it in your local version of Office and make changes, those changes will be saved back to the online version," Rosoff said. "It really made it pretty transparent to the end user."

The big question: How does it compare to Google Apps? Rosoff said Office Web Apps offers more functionality than the free version of Google Apps.

"Google Apps has a 500k maximum upload size. So if you have any images or if you have a long text document with formatting and images, that's not going to work in Google Apps," Rosoff said. "With these Office Web Apps, you get 25 gigabytes of storage per user. It's pretty generous for a free service."

Twitter's Legal Challenges: Lessons for Startups

Not every startup can achieve success to match the speed and scale of Twitter's ascent, but every startup can learn some important lessons from Twitter's experience. For example, it's never too early to protect your intellectual property. Taking proactive steps is better than waiting until a lawsuit rears its head.

On March 21, 2006, Twitter cofounder Jack Dorsey sent out the first tweet: "Just setting up my twittr."
A mere three years later, in February 2009, Twitter had approximately 4 million visitors. At that point, Twitter set an audacious goal of one day reaching 1 billion users and becoming "the pulse of the planet."
A mere six months later, Twitter's user base had grown exponentially from 4 million to 20 million in the United States, and to nearly double that -- 37 million -- worldwide.
With such rapid growth, reaching a billion people one day all of a sudden seems less an impossible task and more a reachable target. However, as Twitter cements its status as the Internet's next big thing, the company faces the inevitable legal challenges that come with its increased popularity.

Long Litigation Road Ahead
One legal battle emerging companies such as Twitter often face involves patent litigation. For Twitter, the first of what surely will be many such lawsuits was filed on August 4, 2009. A Texas company, TechRadium, sued Twitter in the Southern District Court of Texas, alleging infringement of three of its patents, which relate to a technology that sends out mass notifications via telephones, faxes and wireless systems.
These types of "we did it first" lawsuits will only escalate with Twitter's increasing prominence and expanding user base. Given its enormous popularity, Twitter no doubt has been anticipating these types of lawsuits.
In fact, Twitter recognized in its internal meetings that it would likely be sued for patent infringement repeatedly and often.
Twitter is probably more ready to defend itself in these lawsuits than some other startup companies that may have been unaware of such threats. Above and beyond being prepared for these types of lawsuits, Twitter is considering taking a more aggressive step to hire patent attorneys to go after patents proactively.

'Tweet' Trademark Tussle
In addition to patent protection, another proactive step startup companies should consider taking as early as possible is obtaining trademark protection. Twitter provides an object lesson in the need for prompt action: the company may have waited too long to seek trademark registration.
Twitter filed for a trademark on "tweet" on April 16, 2009. But by that time, three other companies -- TweetMarks, Cotweet, and Tweetphoto -- had already applied for trademarks containing "tweet." On July 1, 2009, the United States Patent and Trademark Office preliminarily denied Twitter's trademark application, citing those three pending trademark applications, all of which were filed prior to Twitter's application.
While obviously not affecting Twitter's common law trademark rights, Twitter's delay in seeking trademark registration may present an obstacle in its own enforcement activities. On the morning of July 1, 2009, Twitter made public an email sent to a developer, asking that person to find a new name for his application.
Twitter also reassured the Internet community that it does not plan to go after the use of "tweet" when associated with the Twitter brand, but will do so to protect the brand if the use of "tweet" is confusing or damaging. Until the Patent and Trademark Office grants Twitter's application, however, Twitter does not hold the registered trademark to "tweet" and will be limited in its remedies when enforcing any alleged violation.

Winning the Verification Game
In addition to those patent and trademark issues, Twitter also faced a legal challenge involving the issue of right of publicity, a person's right to control the use of his name and likeness. A lawsuit was filed against Twitter in May of 2009 by St. Louis Cardinals manager, Tony La Russa. In his complaint, La Russa alleged that someone pretending to be him created a fake profile and that the unauthorized page damaged his reputation and caused him emotional distress.
The fake La Russa tweets made light of the deaths of Cardinals pitcher Darryl Kile and Cardinals reliever Josh Hancock. Though the fake account was shut down, La Russa sought damages for the misappropriation of his name and likeness, trademark infringement and trademark dilution, among other charges.
In early June of 2009, the case took a strange turn when La Russa announced that Twitter settled with him and agreed to pay his legal fees as well as make a donation to his charity. Twitter, on the other hand, blogged that it was "not playing ball," and stated that "Twitter has not settled, nor do we plan to settle or pay." On June 26, 2009, the case was finally voluntarily dismissed by La Russa, with prejudice.
It appears that Twitter may have won that round. The dismissal clearly states that "no payment was made by Twitter to La Russa in exchange for this dismissal."
Twitter, however, is taking this opportunity to improve the Twitter user experience through its Verified Accounts concept. Instead of simply removing the fake accounts once alerted to them, Twitter apparently is going one step further to verify user accounts. Twitter has begun beta testing the Verified Accounts. The first group of user accounts that may receive a verified account badge will be "well-known accounts that have had problems with impersonation or identity confusion" (e.g., famous artists, athletes, actors, public officials, public agencies, etc.). For instance, President Obama's Twitter account is verified.
Twitter's response to the La Russa lawsuit indicates its willingness to change and adapt its technology to an evolving legal landscape. To succeed, emerging companies need to be similarly reactive.
While it is difficult to predict the next Twitter, one thing is certain: To succeed, startup companies need to protect their intellectual property with patents and trademarks, anticipate and be prepared for potential litigation over their intellectual property rights, and react quickly to those challenges on both the legal and technical fronts.

5 Keys for Full Recovery in the Cloud

The cloud is a natural solution for disaster recovery, but careful consideration must be given before entrusting your data to a sky-high backup repository. Can you recover workloads from the cloud? How well does it scale? What's the nature of its billing system? Is its infrastructure secure? And will it offer complete protection?

While cloud computing is a familiar term, its definitions can vary greatly. So when it comes to online backup, the cloud is an important feature that can play a large role in securing and protecting your data during a disaster, an approach I like to refer to as "cloud recovery."
In order to be worthy of this cloud recovery title, a solution should have the following five features, which I have outlined below.

1. Recover Workloads in the Cloud
There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an online account) to restore. You might need a replacement server, new storage, and maybe even a new data center, depending on what went wrong.
The traditional solutions to this need are to either keep spare servers in a disaster recovery data center or suffer the downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud -- you want the ability to actually start up applications and use them, no matter what went wrong in your environment.

2. Unlimited Scalability
If you were buying disaster recovery servers for yourself, you would have to buy one for each of your critical production servers. The whole point of recovering to the cloud is that cloud providers already have plenty of servers.
The ideal cloud recovery solution won't charge you for those servers up front but is sure to have as much capacity as you need, when you need it. Under this model, your costs are much lower than building it yourself, because you get the benefit of duplicating your environment without the cost.

3. Pay-Per-Use Billing
I love pay-as-you-go business models because they force the vendor to have a good product. Plus, this makes the buying decision much easier -- just sign up for a month or two (or six), and see how it goes.
Removing the up-front price and long-term commitment shifts the risk away from the customer and onto the vendor. The vendor just has to keep the quality up to keep customers loyal.
We also know that data centers are more cost-efficient at larger scale, especially in the management effort, and they require constant improvement. In your own data center, you might have some custom configurations, but in the disaster recovery data center, you just need racks, stacks of servers, power and cooling. You are much better off paying a monthly fee to someone who specializes.

4. Secure and Reliable Infrastructure
Lots of people like to bash cloud providers for security and reliability, but I think they hold the providers to the wrong standard. It is fine, in the abstract, to point out all the places where cloud providers don't achieve perfection in security and reliability, but as a customer evaluating a cloud vendor, you are better served comparing the provider's capabilities to your own.
I believe that most of the major cloud providers' infrastructures are more secure and more reliable than those of most private data centers. The point is that security and reliability are hard, but they are easier at scale. Having control over your own data center isn't enough -- you also have to spend the money to buy the necessary equipment, software, and expertise. For most companies, infrastructure is a necessary evil. Companies like Amazon (Nasdaq: AMZN) and Rackspace do infrastructure for a living, and they do it at huge scale. Sure, Amazon's outages get reported in the news, but do you think you can outperform them over the next couple of years?

5. Complete Protection
Remember the "preparing to restore" line? For me, it really comes home in this idea of complete protection. If your backup product asks you what you want to protect, I am already suspicious. My vote is, "get it all." I see lots of online products offering 20GB plans, and to me, they look like an accident waiting to happen. I don't want to know which files I need to protect -- I want to click "start" and know that any time I want, I can click "recover", and there won't be any "please insert your original disk" issues.
The places people normally get bitten by this are databases (do you have the right agent?), configuration changes (patched your server, or added a new directory of files?), and weird applications (the one that a consultant set up, and you don't really understand how it works). Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.