Open Source

Open Source Software As An Alternative For Small Businesses

Is it safe to use? How secure is it? Is there quality software?

These are some of the common questions small-scale business owners ask when faced with the decision to adopt open source solutions that could make their day-to-day operations more efficient. For many entrepreneurs without an IT background (but with a sharp business sense), these are relevant questions, and answering them may ease the apprehension of a significant number of small-scale business owners. My interactions have shown that many of these businesses are looking to grow, enhance their productivity, and, most importantly, save costs.

A significant number of businesses in the developing world are mom-and-pop operations. Based on my recent interactions with their owners, I see widespread misconceptions about open source software. For small-scale businesses to adopt open source solutions, it is vital to address these misconceptions.

Is Open Source Software Really Safe?

The question arises from the basic process by which open source code is written. If any hacker can read your code, why can't they use that knowledge for their personal benefit? Most such malicious attempts fail because there are a lot of committed people looking over the source code, finding problems, and fixing them. More eyes tame bugs quickly, and security by obscurity is no security at all. What strikes me here are the words of security expert Bruce Schneier: “Public security is always more secure than proprietary security. For us, open source isn’t just a business model; it’s smart engineering practice.”

Developing code in an open source fashion is an expression of a technique. Software, in our world, should be treated as a service which can be customized based on the specific needs of a user, rather than merely as a product.

I know a lot of people involved at different levels of open source projects. All of them are driven by their commitment to reach technical and professional excellence, and to add to the existing body of technology knowledge. The entire ecosystem of open source is built on that commitment. The Linux operating system, for example, with its proven track record of stability and security, forms the backbone of complex infrastructures and data centers world over. The same benefits that help Linux and other open source tools succeed at the enterprise level can be reaped by small businesses, too.

A couple of months back, I read Thomas Friedman’s The World is Flat. In an otherwise helpful and insightful book, the author seems to hold the view that open source is contrary to developers’ right to make a profit. Many people who think that way do not see the forest for the trees. They see free software, they see Linux, but they miss the multi-billion-dollar ecosystem that surrounds open source. It’s true that Brian Behlendorf, the person who orchestrated the Apache web server, did not make a dime off it, but the immense value this server has added to the economy, and the legions of small and medium-sized businesses that run on it, is an important contribution. Free software developed by a community is not tantamount to insecurity.

Are there quality Open Source alternatives available?

Gone are the days when open source was produced only by the engineers, for the engineers. From word processing to calendar applications to servers and to setting up telephone communication networks, small businesses can benefit hugely from open source solutions. Let us take the example of word processing, an activity that almost all small businesses, irrespective of their field, carry out.

Microsoft Word is the dominant software in this area, but it is cluttered with features that many small businesses will never use; the bloat has cost Word its simplicity. There are easy-to-use, simple, free, and open source word processors available out there. A few that I have been using (and suggesting to small businesses) as alternatives to Microsoft Word are:

Apache OpenOffice: This suite consists of six tools for managing office tasks: Writer, a word processor; Calc, a spreadsheet tool; Impress, for multimedia presentations; Draw, for diagrams and 3D graphics; Base, a database tool; and Math, for creating mathematical equations.

AbiWord: Started in 1998 and built on GTK+, this open source word processor offers everything from simple word processing features to sophisticated ones like multiple views, page columns, and grammar checking.

LibreOffice: This is my favorite and always at the top of my recommendation list for anyone looking for a free and efficient word processing suite. Although its features are similar to those of Apache OpenOffice, LibreOffice is better when it comes to community support.

There are dozens of other excellent alternative solutions to proprietary software and thousands of open source projects that can serve small businesses. It can sometimes be difficult to select the software which best matches specific needs, but there are plenty of people globally willing to help you make those decisions and help take small businesses down the path to an open and productive future.



Entertainment Gadgets

Xbox Music – Free Unlimited Music Downloads

One of my favourite apps on Windows 8.1 is Xbox Music. The service was introduced at the launch of Windows 8 in 2012 but, for some reason, was not available to Nigerian card holders. However, the gates have now been flung open to just about anyone who cares.

The business model for this music service from Microsoft can be summarized as follows:

  • Unlimited Music Streaming
  • Unlimited Music Downloads
  • Stream with your Android and iOS devices
  • Access to over 30 million songs. Believe me, even very old school Nigerian tracks from Cloud 7, Bongos Ikwue & Sonny Okosuns are available.
  • All for a monthly fee of US$9.99 (N1,650), which can be paid using your regular Naira-denominated Visa or MasterCard.
  • You even get the service free for a whole month; you are charged only from the second month.

Too good to be true? Okay, there are caveats:

  • Unless you make an outright purchase of a track, usually in MP3 format for US$0.99, songs downloaded under your subscription (the Xbox Music Pass) are in DRM-encrypted WMA format. The tracks are playable only on PCs you are signed into with your Xbox Music credentials.
  • You cannot download music using the Xbox Music Android and iOS apps.
  • The Android and iOS apps are still not available in Nigeria but there are workarounds.

Even with these limitations, I believe the service is still worth the fees being charged. What's more, I am actively working on ways to circumvent the restrictions Microsoft has put in place. I have some ideas already…


Gadgets Mobile

How Free Apps Can Make More Money Than Paid Apps

While building apps for Apple and Android app stores can be highly lucrative ventures for developers, one of the hardest decisions an app developer has to make is how to get the app to pay for itself. Often the “monetization strategy” — shorthand for “how will this app make money?” — is left for last.

It’s hard enough to get discovered by consumers among the millions of existing apps, let alone convince people to pay. People increasingly prefer free, ad-supported apps for their tablets and smartphones, yet many developers still aren’t sure how to tackle the free-vs-paid issue. Deciding when to charge for your app, and when to try an ad-supported model, is one of the toughest calls a developer must make.

Four Monetization Strategies for Apps

Developers have several monetization options available, each with its own requirements and pitfalls.

Before moving forward with a strategy, though, there are a few questions an app developer should explore in order to answer the ultimate question: “how can I monetize my app?”

  • Is my app engaging enough for people to use it often?
  • How willing are people to pay an up-front fee for my app?
  • How do competitors in my space monetize their apps, and how successful are their strategies?

As app markets across platforms explode, developers are talking to each other to determine the best type of monetization model to use. Most will tell you it’s a choice among four major options:

  • Selling your app in the app store
  • Offering a free, subscription-supported app
  • Offering a free app, with in-app purchases
  • Offering a free, ad-supported app

But the choice really boils down to two strategies: getting paid by users or getting paid by advertisers.

Who Pays More for Apps: Users or Advertisers? (AKA Which Monetization Strategy Makes More Money: Free or Paid?)

When it comes to users, the overwhelming majority of Android and iOS users resist paying — whether for apps, subscriptions, or add-ons — so smartphone and tablet developers are particularly interested in experimenting with monetization through ads. Advertisers, on the other hand, are much more willing to pay developers than users are. Just like developers, advertisers need to market their products.

How Much Money Can an App Make With Advertising?

Ad spending on apps of all kinds – both mobile and desktop — is growing. Most industry analysts choose to measure only mobile app spending though, as most apps are created for the mobile and smartphone space. Mobile advertising revenue increased nearly 1.5X in 2011, to top out at $1.6 billion for the year.

The future of app monetization clearly lies in the ad-supported model. A recent study by Cambridge University computer scientists found that 73% of apps in the Android marketplace were free, and of those, 80% relied on advertising as their main business model. Free apps are also far more popular in terms of downloads, the researchers said: just 20% of paid apps are downloaded more than 100 times, and only 0.2% of paid apps are downloaded more than 10,000 times. On the flip side, 20% of free apps get 10,000 or more downloads.

The Best Part Is: You Can Deploy Multiple Monetization Strategies

While free apps reach the majority of users who tend to be price-sensitive and almost never buy apps, there is a subset of users who prefer to avoid advertising and seek paid (sometimes called “pro”) versions of their favorite apps.

Developers can cater to both types of users with a two-pronged approach: create both a free version and a paid version. This approach is popular with big players in the publishing industry. For example, The Guardian (UK) is testing two apps: a free, ad-supported Android app and a paid-only iPhone app. Echofon is another good example of an app that caters to both kinds of users: it offers free and paid versions of its Twitter client on every platform it supports.

In short, before moving forward, investigate all ad-supported revenue options when you launch a new app. There are many ad networks available – shop around and see what CPMs (cost per thousand impressions) or CPCs (cost per click) they offer and how much work is involved integrating ads into your app.
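
To put rough numbers on that comparison, here is a back-of-the-envelope sketch. Every figure in it (50,000 monthly impressions, a $2.00 CPM) is hypothetical, chosen purely for illustration:

```shell
# Back-of-the-envelope ad revenue: revenue = impressions / 1000 * CPM.
# All figures below are hypothetical, for illustration only.
impressions=50000   # ad impressions served per month
cpm=2.00            # dollars earned per thousand impressions
revenue=$(awk -v i="$impressions" -v c="$cpm" 'BEGIN { printf "%.2f", i / 1000 * c }')
echo "Estimated monthly ad revenue: \$${revenue}"
```

At these made-up rates, a free app would need roughly half a million impressions a month to match the gross revenue of 1,000 sales of a $0.99 app, which is why audience volume matters so much in the ad-supported model.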

And more than anything, consider how the user experience will change once you introduce ads. After all, you want ads that integrate well into the user experience, rather than ads that drive users away in annoyance.



The Death Of The Smartphone

Smartphones and tablets might be the current hot technology, but history says it’s all just another fad. Twenty years from now, almost nobody will own either device. Seems unbelievable, but the same technology that makes them hot today will make them not tomorrow. If this sounds ridiculous, consider what happened to another “must-have” technology that almost nobody uses any more: the fax machine.

Back in 1991, the Baby Bells were predicting an explosion of landlines and a corresponding shortage of phone numbers because “everyone will need a fax machine.” Phone companies offered to lease fax machines for “only (US)$60 a month on a three-year contract.” (Sound familiar?) Newspapers were offering early faxes of their main stories to subscribers for a buck a day. Every office supply store had shelf after shelf of fax machines for home and office use.

All those dreams got trashed by the Internet and cheap computers. Email attachments killed the fax machine boom. Today a fax “machine” is a $1 chip in a laptop, and like the modem chip, nobody even bothers to configure it. Faxing the newspaper? Newspapers are dropping like old news, and paywalls are mostly money-losers. Even those cries of “mom, we need a second line for the Internet” are just a dim memory. Instead of two, three or four landlines, many homes now have none. Indeed, many existing “landlines” are actually VoIP phones.

Holding On for Dear Life

The problem facing the telcos is that they’re in the phone business, not the “find the best way for people to communicate and give it to them at a competitive price” business. Their product is access to the telephone network. Worse, their entire business model hinges on an archaism — the 10-digit phone number monopoly. People increasingly don’t use phone numbers to contact each other, and the telcos are at risk of becoming just another data pipe for when you’re not near a WiFi connection.

Fax machines are just one of many examples of the future not turning out the way the telcos envisioned it. “Sure-thing” premium services like video calling never saw more than limited use: too expensive, since people were not willing to shell out $600 for a videophone plus the extra monthly charges, and there was almost nobody to talk to on the phone network anyway. Now it’s too late. You can have your “videophone of the future” experience via Skype, Google Talk or Google+ Hangouts at no extra charge.

Also dying is the business model of locking customers into long-term contracts by financing expensive mobile phones. Unlocked Android smartphones are going for less than $200 with no contract, and LG makes a nice $60 flip-phone.

Rise of the Smart Network

Today the same technology that lets phone companies move voice calls cheaply over the Internet also directly competes with them. What keeps phone subscribers on the hook are inertia (the “phone number” habit), lower prices, and increasing services — all of which explain why I’m paying less for a phone with unlimited calling across the country today than I was for local service 20 years ago.

The clock is ticking … and IPv6 will be the second-to-last step in our journey to a phone-free future, where every device has its own unique “phone number” and the network has enough smarts to locate you wherever you are, routing all communications to the nearest device, whether it’s a TV, car, public security camera, or the active display on the shopping cart at the mall.

Smartphones and Tablets in 2031?

Let’s go 20 years in the future. Pretty much every electronic device can interact with your video SPEKZ, which can be anything from a pair of plain-jane NokiaSofts to the latest cool shades from Apple. Cars, streetlight surveillance cams, water meters, televisions, and even your clock radio are all talking to each other — and your SPEKZ are piggybacking on their data streams. There’s not a single laptop, desktop, smartphone or tablet computer in sight.

It’s an amazingly seamless experience. The tiny twin cams on your SPEKZ let you share what you see with your friends and stream a copy to your home server. Your watch and charm bracelet contain sensors to detect your wrist movements and the muscles and tendons of your fingers flexing, all descended from Nintendo WiiMote technology.

Of course, since most men would be about as likely to wear a charm bracelet as they would a pink shirt (some things haven’t changed), they can also sub-vocalize emails and use eye-tracking technology to make selections “just like a fighter pilot!” You type on your SPEKZ virtual keypad and pick from menus and icons floating in 3D before your eyes.

Passwords? “What’s a password, mom?” Instead, your watchface contains a small camera that does both facial and fingerprint identification as well as other biometrics, and your SPEKZ do retinal, iris and voice ID.

It’s a safer, more polite world. The latest Amber Alert system allows people to opt in to automatically search the last few minutes of their SPEKZ data stream against a possible match. Road rage is also much less frequent, and not only because most cars are driving themselves. People even stoop and scoop because other fed-up dog owners forward SPEKZ videos of the culprits caught in the act to the city and post them on the Net.

SPEKZ systems are also saving lives. Before SPEKZ, 20 percent of all heart attacks went undetected. Now, biometric watchbands and ubiquitous WiFi detect heart attacks, heat strokes and hypothermia earlier, and your SPEKZ alert medical services even when you can’t.

How Do We Get From Here to There?

The telcos and ISPs will continue to try to oppose ubiquitous free WiFi mesh networks, just like they’re dragging their feet on implementing IPv6, but competition and public safety concerns will trump their increasingly weakened lobby.

With both phones and their phone network monopolies long gone, carriers will have to settle for being sellers of wireless bandwidth in areas without regular WiFi coverage, and operators of commodity infrastructure.

Source – Linux Insider



The Sins Of Ubuntu

Canonical Ltd., the company behind Ubuntu Linux, estimates that the product has over 12 million users worldwide. And why not? Ubuntu is free and it runs more than ten thousand applications. It has a vibrant user community, websites covering everything you might ever need to know, good tutorials, a paid support option, and more. Yet I often hear friends and co-workers casually criticize Ubuntu. Perhaps this is the price of success. Or is it? In this article I’ll analyze common criticisms and try to sort fact from fiction.

I should mention that I’m a big Ubuntu fan and have used it for five years. Even so, it pains me to see the obvious ways it could improve. As I’ll explain, I believe Canonical’s business model holds Ubuntu back from fulfilling its potential.

Why It Matters

One obvious response to anyone who criticizes Ubuntu is to say to them: why don’t you just run another operating system? There are so many competing Linux and BSD distros out there.

True. But there is a larger issue here. Ubuntu’s great popularity means that it represents Linux to many people. It’s the distro vendors pre-install. It’s the distro the mainstream media always review. It’s the one distro everybody’s tried. It’s been ranked #1 in DistroWatch‘s yearly popularity ratings for the past six years (1).

Fair or not, Ubuntu reflects on the Linux community as a whole. How well Ubuntu meets criticisms matters even to Linux users who don’t use it.

So what are common Ubuntu criticisms? Here are those I often hear…

It’s Bloated

To say that Ubuntu is bloated only makes sense if comparing it to some alternative.  So let’s do that.

Is Ubuntu bloated compared to Windows?

This chart compares Ubuntu’s system requirements to the last three Windows releases:

Resource             Windows XP      Vista           Windows 7       Ubuntu 10 and 11
Processor            P-III           P-IV            P-IV            P-III
Memory               128 M / 512 M   1 G / 2 G       1 G / 4 G       512 M / 1 G
Disk                 5 G             40 G            20 G            5 G
Cost                 $199 – 299      $239 – 399      $199 – 319      $0
Locks to hardware    Yes             Yes             Yes             No

Sources: websites for Microsoft and Ubuntu, plus web articles and personal experience. The chart is simplified and details have been omitted for clarity. Microsoft offers many Windows editions; this chart addresses the most common. Microsoft prices are for full versions. In the Memory row, the first number for each system is generally considered the minimal realistic memory, while the second is the memory recommended for best performance.

By any measure, Ubuntu is not bloated compared to Windows. I’m writing this article with Ubuntu 10.10 running on a seven-year-old Pentium IV with a single-core 2.4 GHz processor and 768 M of DDR-1 memory. This computer wouldn’t even boot Vista or Windows 7. It runs Windows XP fine, but that’s not current software: XP is two Windows releases back.

Is Ubuntu bloated compared to prior releases?

Ubuntu’s system requirements indicate the product’s resource requirements have crept upwards over the years. Here are its memory requirements:

Ubuntu Desktop Version   6.06    7.04    8.04    9.04    10.04 / 10.10   11.04
Memory                   256 M   256 M   256 M   384 M   512 M / 1 G     512 M / 1 G

Sources: Ubuntu’s official system requirements and various websites on efficient product use. Note that some sites report slightly different memory requirements. 1 G is the recommended RAM for 10.04 and above.

These RAM requirements and the recommended minimum 1 GHz processor mean that nearly any computer sold in the past seven to ten years can run Ubuntu. I’ve run 10.x on P-IVs and even P-IIIs. By this measure, one could hardly label Ubuntu “bloated.”

Is Ubuntu bloated compared to other Linux distributions?

Linux distros divide into full-size, mid-size, and lightweight. Ubuntu is full-size.

Most full-size distros come in multiple versions. Their standard product usually requires a P-IV or better with at least 512 M to 1 G of memory. You may be able to get by with lesser hardware, but it’s not recommended.

Mid-size distributions like the standard editions of Zenwalk and VectorLinux go a bit lower than the full-size distros. They’ll run fine on a P-III with 256 M. Lightweight distros like Puppy or VectorLinux Light Edition will run down to 128 M or less if properly configured.

To compete with this, full-size distros usually offer pared-down versions for those with lesser hardware. For example, Ubuntu offers Lubuntu; PCLinuxOS has PCLinuxOS LXDE and other variants; Mint can run with lightweight GUIs like LXDE, XFCE, Fluxbox; and so on.

Compared to other full-size Linux distros, Ubuntu is not bloated. For something lighter, try Lubuntu: it requires half of Ubuntu’s memory and only a third to a half of its disk footprint, and it’s lighter on the processor as well. Read my detailed review of Lubuntu here.

It Lacks Enterprise Integration

This complaint is that Ubuntu lacks the enterprise-wide integration and manageability critical to large organizations.

System administrators require a single control point for automated administration and monitoring of remote Ubuntu desktops. Landscape, Canonical’s product for enterprise-wide management, fulfills this need. But it is too narrow to address the larger integration issue. What about a single sign-on for login, email, and web access? What about directory services? How about Kerberos network authentication and LDAP (Lightweight Directory Access Protocol) support? How about coordinated information management across client and server products?

Microsoft is the competitor in this space, and its client and server products integrate seamlessly. The server products include Active Directory, Exchange Server, and SharePoint Server; client products like the Windows desktop, the Outlook email client, and the Office suite tie directly into that server software.

There are two ways Canonical can challenge Microsoft’s client-server headlock on the enterprise. It can either:

  • Directly compete with a full range of directory, mail, and information management services
  • Better integrate the Ubuntu desktop into the Microsoft ecosystem already in place at most companies

The second option is in progress at Edubuntu but not complete. It leverages standards like Kerberos and LDAP to facilitate integration.

One system administrator summarizes the situation this way: “… Microsoft continues to win on the desktop. Not because an individual PC running Windows is easier for most people to use, but because it’s easier to set up Active Directory to work with Outlook and Exchange than it is to roll your own directory service with the tools available out of the box on Ubuntu.”

Here’s a management consultant whose clients manage between 50 and 150,000 desktops: “Until there is a true competitor to Active Directory, Exchange, Outlook, and the MANAGEMENT of the machines, Ubuntu will not succeed in the Enterprise.”

Too bad Canonical let Attachmate Corp. buy Novell when the company was up for grabs late last year. Novell products like eDirectory and GroupWise could have synergized with Ubuntu. Canonical’s Linux dominance plus Novell’s directory services and deep experience integrating into the Microsoft ecosystem might have been very competitive.

Perhaps cloud computing will ameliorate the integration issue. Organizations may shift their integration focus from internal servers to cloud services. This is the premise underlying Google’s Chromebook.

In any case, Canonical needs to recognize this key source of corporate resistance to Ubuntu and make explicit their plan to overcome it. Then they need to promote the plan in the IT community. Thus far they have failed on both counts.

It Doesn’t Install Complete

Here’s a complaint with which we’re all familiar. Ubuntu bundles a ton of great software but leaves out some essentials. Codecs, Adobe Flash Player, multimedia players, and proprietary hardware drivers are examples. You can easily install the missing programs, but you have to:

  1. Know what is missing
  2. Know how to install it
  3. Make the effort to install it

The underlying cause of this problem is the distinction between free and non-free software. Linux partisans have strong beliefs about how to handle this conundrum. Canonical is caught in the middle. They try to provide a complete user experience while also respecting intellectual property rights. This task is complicated by the fact that IP rights are interpreted differently in the many countries in which Ubuntu is used.

Canonical addresses this criticism in several ways. They segregate non-free software into its own Multiverse Repository, so that it can easily be identified and installed. Medibuntu (Multimedia, Entertainment & Distractions In Ubuntu) is “a repository of packages that cannot be included into the Ubuntu distribution for legal reasons (copyright, license, patent, etc).” Users can check for proprietary hardware drivers through the Startup Applications panel or the Administration -> Hardware Drivers option.
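
For the common cases, a couple of terminal commands cover most of the gap. This is a typical sequence on Ubuntu 10.x (package names as of that era; the metapackage lives in the multiverse repository, which must be enabled in Software Sources first):

```shell
# Refresh the package lists, then install the "restricted extras"
# metapackage, which pulls in Adobe Flash, common audio/video codecs,
# and the Microsoft core fonts in one step.
sudo apt-get update
sudo apt-get install ubuntu-restricted-extras
```

After this, most everyday multimedia "just works"; only a handful of items, such as encrypted-DVD playback, still require extra steps.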

Good documentation and How To’s help Ubuntu users. But navigating these can be difficult for the inexperienced. Not all docs are dated or identify the release(s) to which they refer. In the worst case, the user googles and retrieves conflicting instructions for a simple task they want to perform.

Some distros build on top of Ubuntu to give a more complete user experience. Linux Mint, for example, states its first goal as: “It works out of the box, with full multimedia support and is extremely easy to use.” PCLinuxOS is another competitor that emphasizes it is “a full multimedia operating system.”

I feel the “completeness criticism” is but a nit for experienced users. They can easily install the few apps or plugins Ubuntu doesn’t initially provide. For newbies, though, this is a hurdle. End users don’t know and don’t care about the debate in the Linux community over “free versus non-free.” They just want software that does everything they want with as little effort as possible.

Here’s how Canonical could address this problem: add an install panel that lets users select what goes into their installation, with a checklist of installable products, each denoted as free or proprietary. Users could choose software conforming to the IP laws of their country. With the customer accepting the licensing conditions, Canonical would be absolved of legal responsibility, and users would get the most complete system permitted in their jurisdiction from a simple install-panel checklist.

It Doesn’t Install Secured

Comparative studies and vendors alike confirm that Linux has a superior track record as a secure operating system. Ubuntu upholds this great tradition. You’d be hard-pressed to find evidence of malware infections in the Ubuntu community.

But does Ubuntu install as secure as it could, right out of the box? Surprisingly, no.

Take the default firewall as an example. In version 10.x, the Uncomplicated Firewall, or UFW, installs as Disabled. You’d think such a fundamental security tool as a firewall would default to Enabled. Or failing that, that the installation panels would give you a checkbox for enabling it.

UFW’s front-end management interface, Gufw, doesn’t install by default. You get the firewall without the GUI to manage it! The user must know about Gufw and install it separately.
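
Closing both gaps takes two commands on Ubuntu 10.x (ufw itself ships with the system; gufw is in the universe repository):

```shell
# Turn on the installed-but-disabled firewall, then add its GUI front end.
sudo ufw enable            # activates UFW with its default policies
sudo apt-get install gufw  # graphical front end for UFW
```

But a new user has no way to know this without reading the documentation first, which is exactly the problem.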

How about configuring the firewall? Windows products like ZoneAlarm help you “train” the firewall: they intercept each program the first time it communicates over the Internet and ask you to Allow or Deny the communication, then automatically generate the proper firewall rule from your decision. They also provide a checklist of installed programs; you simply check Yes or No for each one, indicating whether it has incoming and/or outgoing Internet communication privileges.

In contrast, UFW expects the user to write its rules with its barren, minimalist GUI. This is neither state-of-the-art nor competitive. It’s certainly not user-friendly. As a friend complained to me: “I don’t want to manage ports, I want to manage programs!”
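
To see what my friend meant, here is roughly what “allow SSH and web traffic” looks like in UFW terms. The user must already know which ports and protocols each program uses before a single rule can be written:

```shell
# UFW rules are written against ports and protocols, not programs.
sudo ufw default deny incoming    # block unsolicited inbound traffic
sudo ufw default allow outgoing   # let local programs reach the Internet
sudo ufw allow 22/tcp             # permit incoming SSH
sudo ufw allow 80/tcp             # permit incoming HTTP (only if you run a web server)
sudo ufw status verbose           # review the resulting rule set
```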

To anyone who claims that Ubuntu “doesn’t install secured,” I’d say the product’s outstanding track record argues otherwise. This is a highly secure system. Yet ease of configuration is missing. This isn’t the only area where Ubuntu’s ease of use falls short…

Its File Manager Isn’t User Friendly

Ever taught a class of new Ubuntu users and watched them run into Nautilus? They always ask how to create a sub-folder instead of a top-level folder in a filesystem. They ask how to copy folders to their USB drive or backup disk.

Nautilus doesn’t always show that a copy worked as expected, and if you’re overwriting an existing file, it doesn’t display timestamps so you know which copy is more recent. It doesn’t always display error messages, either. For example, try to delete a directory for which you don’t have the needed permission, or copy into that directory: you won’t get an error message! Users need feedback. The old Unix dictum “no news is good news” is completely inappropriate for products that target end users.

There’s an easy fix. The huge Ubuntu software repositories contain more than a dozen competing file managers. Ubuntu’s superior install tools — the Ubuntu Software Center and the Synaptic Package Manager — make it easy to download them. If you don’t like Nautilus, just click the mouse a couple times and install another product.
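
For example, two lightweight Nautilus alternatives, PCManFM and Thunar, can each be pulled from the standard repositories with a single command:

```shell
# Install alternative file managers from the Ubuntu repositories.
sudo apt-get install pcmanfm   # LXDE's lightweight file manager
sudo apt-get install thunar    # Xfce's file manager
```

Once installed, they appear in the Applications menu alongside Nautilus, so users can simply start using whichever one they prefer.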

The mystery is why Ubuntu bundles Nautilus as its default. File managers are one of the most frequently used tools in any operating system. Consumers expect to use the default file manager without having to replace it. Fixing or replacing Nautilus should be a no-brainer.

It Won’t Run Windows Software

Those who make this accusation either aren’t familiar with Wine, or they haven’t used it lately. The Wine database lists over 16,000 Windows programs that it runs on Linux. I’m constantly surprised that even big, complex applications run under Ubuntu with Wine. Examples include web site generators like Adobe Dreamweaver and NetObjects Fusion, and office products like Microsoft Office and Adobe InDesign.

Wine works like you’d expect. After installing it, you run Windows programs in the exact same manner you would under Windows.
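
A minimal session looks like this; “setup.exe” and the install path are hypothetical stand-ins for whatever Windows program you actually have:

```shell
# Install Wine from the standard repositories, then use it to run a
# Windows installer and the program it installs (paths are examples only).
sudo apt-get install wine
wine setup.exe
wine "$HOME/.wine/drive_c/Program Files/SomeApp/someapp.exe"
```

Installed programs typically also get entries under the Applications menu, so after the initial setup you rarely need the terminal at all.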

Another compatibility option is DOSBox, an emulator designed for old DOS software. I have a number of simple Windows 3.1 games, such as Ringo, Ludo, and Boule (free download here). The games run fine under either Wine or DOSBox. They don’t run natively under Vista or Windows 7 — even with the new Program Compatibility panel. Compare Ubuntu with Wine and DOSBox to native Vista and Windows 7, and you’ll often find that Linux is more compatible with old Windows programs than Windows is!

I’ve found an analogous relationship between Microsoft Office and OpenOffice. Microsoft releases a new version of Office every three years or so: Office 95, Office 97, Office 2000, Office 2003, Office 2007, Office 2010. (This excludes Macintosh versions.) As far as I can determine, the company only regression-tests back one version. The result, in my experience, is that OpenOffice is often more compatible with older versions of Microsoft Office than Office itself is.

When critics complain that Ubuntu is not compatible with Microsoft software, I sympathize. In spite of all that I’ve pointed out, gaps persist. But when one considers Microsoft’s own software — rooted in a business model of continuous releases based on planned obsolescence — it becomes apparent that compatibility is not an issue only for Ubuntu. Depending on your compatibility needs, you may get a better deal from Ubuntu than from Microsoft.

It’s Buggy

Several academic studies and papers conclude that Linux and open source software have fewer bugs than commercial products. Ubuntu has bug-tracking identification and resolution procedures equal to those of any large, well-run software project.

From years of participating in the Ubuntu forums, I’ve encountered consistent anecdotal evidence. I read very few posts where a user abandons the product due to a bug. This is a huge vote of confidence in Ubuntu. (You can’t say this about every Linux distro.)

However, it’s not unusual to see posts from first-timers who abandon Ubuntu due to install issues. Examples are things like Ubuntu not recognizing a sound card, or being unable to get wireless networking going, or a display problem of some sort. While these may not be bugs, they are cases where Ubuntu doesn’t work for the prospective user. If I were to recommend one area for the Ubuntu team to target for a better user experience, device recognition and configuration would be it.

A related issue is that Ubuntu actually removes hardware detection capabilities as new versions come out. So a machine that worked fine with an older release of the product suddenly fails when you move to a newer release!

I’ve maintained Ubuntu instances for five years, since release 6.06, and have repeatedly run into this problem. In several cases video worked fine on one release and then failed under a newer one. Right now I’m trying to fix wireless networking on a laptop that worked fine in 8.04 and fails under 10.04. It doesn’t work whether I do an upgrade or a fresh 10.04 install. (Wireless works fine on this laptop with Puppy Linux and Windows XP.)

Admittedly, device recognition and configuration is a Sisyphean task. When you try any Linux distribution for the first time, you just hold your breath and hope that the product recognizes all your devices. This remains Linux’s biggest challenge.

From the user perspective, though, to have a product that works fine under one release break under a newer release… that really doesn’t look good. If there is a single issue that tarnishes Ubuntu’s reputation, comprehensive, consistent device recognition and configuration is it.

It Changes Quickly But Doesn’t Protect Its Users

Ubuntu improves rapidly. In the last two years, the product has moved from the GRUB boot loader to GRUB 2, continually changed its network management tools, eliminated the xorg.conf configuration file in favor of RandR for video, switched the user interface from GNOME to Unity, and replaced OpenOffice with LibreOffice. I’ve read about replacing GDM with LightDM, moving to more frequent updates, replacing the X.org display server with Wayland, and more.

Ubuntu’s aggressive improvements are among its greatest strengths. But this benefit causes work for the existing user base.

The Ubuntu team could easily shield their customers from the impacts of these changes. Often they don’t.

Here’s an example. With GRUB 2 you no longer configure the boot menu of OS options by editing the menu.lst file. Instead, you edit bash scripts. That’s fine for me, but an unreasonable expectation for end users. How about a simple GUI front end for editing the boot-time menu?
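For reference, the manual process looks roughly like this (file locations per GRUB 2’s standard layout):

```shell
# The menu is generated from this config file plus the scripts in /etc/grub.d/
sudo nano /etc/default/grub

# Typical settings inside /etc/default/grub:
#   GRUB_DEFAULT=0     # which menu entry boots by default
#   GRUB_TIMEOUT=10    # how long the menu is displayed, in seconds

# Regenerate /boot/grub/grub.cfg from the config and scripts
sudo update-grub
```

None of this is hard for an administrator, but it is exactly the kind of sequence a GUI front end could hide from end users.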

Another example: new releases take away the xorg.conf video display file that generations of Linux support personnel are accustomed to editing. You can generate this file and then edit it if you look up the commands to create it. But why should you have to? Why doesn’t the System -> Administration menu have a button to generate an xorg.conf file for you? And automatically plop you into editing it?
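For anyone searching for those commands, the usual sequence is something like the following (the display manager name assumes a GNOME-era Ubuntu running GDM):

```shell
# Stop the display manager so no X server is running
sudo service gdm stop

# Have the X server probe the hardware and write a skeleton config
# (it lands in the current directory as xorg.conf.new)
sudo Xorg -configure

# Review and edit the file, then install it where X looks for it
sudo cp xorg.conf.new /etc/X11/xorg.conf
```

Again, nothing here is beyond a support technician, and that is precisely the point: it shouldn’t be required of a consumer.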

A final example. Right now I’m researching how to install the Java browser plugin under Ubuntu 10.04. Websites provide conflicting answers. This was trivial in earlier releases, but no longer. Apparently Ubuntu switched from Sun’s Java packages to OpenJDK. Beyond inadequate details in the Release Notes, no one bothered to insulate users from this change. Why is the customer left to manage it?
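For what it’s worth, the commonly suggested route (assuming the OpenJDK packages in the 10.04 repositories) is:

```shell
# Install the OpenJDK-based browser plugin (IcedTea)
sudo apt-get install icedtea6-plugin

# Then restart the browser; in Firefox, the about:plugins page
# should now list an IcedTea/Java entry
```

That it takes research to arrive at one apt-get line is the problem in miniature.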

The Ubuntu team does a superior job in adding new features. They need to protect their users from the disruption these changes cause. This should be a top priority because it deeply impacts the product’s ease of use.

To the average consumer, little GUI “transitional aids” like those I’ve mentioned would help tremendously. They would be trivial to program. Why doesn’t Canonical include them? Is it simply a lack of focus on ease of use? Here’s my theory…

Fix the Business Model

Of the above criticisms, those I feel have the greatest merit focus on whether Ubuntu is as easy to use as it could be. You see this in:

  • Device recognition
  • Configuration
  • Upgrades
  • Default file manager
  • Security configuration

One underlying explanation ties all this together. Canonical embraces the same philosophy of product development as Microsoft. The emphasis is on introducing new features. New features trump massaging the product to improve its user-friendliness. They trump intra-release compatibility and disruption to the existing user base. They trump device recognition and easier configuration.

Consider Microsoft’s business model. The company makes 27% of its total sales revenue from Windows and another 27% from Office (2). That’s over half of Microsoft’s revenue. Without it, the company as we know it would cease to exist. Microsoft can’t afford to stick with a product and polish it until it shines. Its business model forces it to constantly update, replace, and repackage existing code into new products.

No Windows version achieves its full potential because Microsoft must abandon it to introduce revenue-generating new products. New features are critical because they justify the new version to the consumer public. The GUI is often the focus of “improvement” because it is the part most visible to customers.

The history of Windows releases verifies this continual forced march to new product:

[Chart: the history of Windows releases. Courtesy of the Wikipedia article.]

Canonical implicitly accepts Microsoft’s disruptive business model as the terrain for their competition. Ubuntu directly challenges Windows in the new features competition. And it succeeds. But other design goals get pushed to lower priority.

Here’s an example. Canonical and Microsoft sell to both consumers and corporate customers. They drive product change from the consumer side. This conflicts with the expectations of their corporate customers. Corporate customers value stability, compatibility, minimal bugs, and ease of upgrades over the headlong rush to new features.

Canonical tries to bridge this gap through differentiated policy, support, and pricing. For example, they distinguish between Desktop and Server products, and between regular and Long Term Support (LTS) releases. They offer corporate customers comprehensive support options and contracts.

Readers with long memories might recall that Red Hat also got caught in the conflict between consumer and corporate expectations. The company flip-flopped several times over their support for desktops versus servers. Ultimately Red Hat solved the conflict by spinning off desktop Linux to the Fedora project in 2003, while it went forward with Red Hat Enterprise Linux for servers.

I believe Canonical would be better served by protecting those who find that rapid change causes them work — its user base. Polish existing code to improve ease of use. Concentrate on easy upgrades, great device recognition and intelligent automated configuration. Minimize bugs. Abandon the pell-mell rush to new features. Improve the product at a measured pace. Nurture and organically grow the base. New users will come naturally if the product provides solid long-term value. You needn’t hype an “all new” interface to attract them. That’s Microsoft’s game.

The best way to compete with Windows isn’t to mimic Microsoft’s business model. You win by presenting an alternative vision grounded in a unique competitive model.

And the Consensus Is?

Ubuntu’s popularity means that it represents Linux to many people. How well the product answers these criticisms matters even to Linux users who don’t use it.

I’ve presented my views to stimulate your thinking. But here’s a better idea: why don’t we see if we can come up with a community consensus? Add your comments to this article to address:

  1. What is Ubuntu’s greatest strength?
  2. Are any of the criticisms listed here valid?
  3. If you could ask the Ubuntu team to fix one thing or improve one area, what would it be?

Thanks for participating.


Source – OSNews


Happy 20th Anniversary, Linux

Around this time twenty years ago, Linus Torvalds, a computer science student, started developing an operating system to run on his newly acquired computer. His major motivation was that the OS the machine came with greatly underutilized its capabilities. After working on it for a while, he realized that his pet project could actually be useful to others. At that point, he released the source code for his operating system online, and what is now known as Linux was born.

Over the last twenty years, there has been an exponential increase in the usage of Linux. From the early exclusive reserve of geeks and computer scientists, it can now be found on electronic devices ranging from servers to smartphones, wristwatches, and even toasters. The Linux kernel has revolutionized the definition of an operating system by being scalable and capable of running on almost anything with a processor.

The major strength of Linux is the fact that it is Free Software. Released under version 2 of the GNU General Public License (GPLv2), Linux is available for anyone interested to use, modify, and redistribute as they like. This spurred the rapid growth of the OS and caused it to evolve in ways that were previously unimagined. Some might think that such a model leaves no room for commercial benefit, but as the past 20 years have shown, it only takes a bit of creativity to develop a successful business model around Linux. Success stories abound: Android, Red Hat, Canonical, and more.

Owing to its similarities to UNIX, the major strength of Linux early on was in the server and supercomputer fields. Today, Linux runs on over 50% of all active servers in the world and on 95% of all supercomputers. Android is fast becoming the most popular smartphone operating system, showcasing Linux’s dominance in that field too.

One area that has always eluded Linux has been the desktop. Pundits and fanatics alike have repeatedly declared one year or another the “Year of Desktop Linux”; however, the market share is still abysmal. There is cause for hope, however, especially with the new desktop innovations being pursued by Ubuntu (Unity) and GNOME 3.

The Linux Foundation has decided to mark the 20th anniversary of this revolutionary OS with a video showcasing its creation and evolution. To view the video, check the link below.

Looking forward, it is clear that Linux will remain relevant over the next twenty years. This is evident in the fact that Linux is at the centre of nearly every new and emerging technology, from smartphones and tablets to cloud computing.

To participate in the festivities, and for more details, check out the Linux Foundation’s official 20th anniversary page

Useful Links

1. The Story of Linux: Commemorating 20 Years of the Linux Operating System (video)
2. Celebrating 20 years of Linux with us (Linux Foundation)


How Safe is your Business?

Just how safe is that business model of yours? For how long will your business remain a going concern? This question is especially pertinent in the technology world. The threat of the big names stifling smaller companies out of business is becoming a very clear and present danger. They seem to be involved in almost everything in technology!

You already have your business on track, running smoothly, the cash rolling into your bank account. Then, out of the blue, the very foundation of your business is bulldozed away. One example of this kind of situation is the decision by Facebook, Google, and similar behemoths to implement their own URL shorteners. If Twitter also joins this gang, well, you get the picture! With this single move, any company that bases its income directly or indirectly on offering a URL shortening service should be worried and start thinking fast. A list of existing URL shorteners whose businesses may be threatened can be found here.

Yet another example is the fairly new service from Google that lets you send SMS messages from your computer – for free. So, what will be the fate of web-to-SMS service companies?

Again, say you are a manufacturer of digital cameras. Good! That is your business area. You have invested good money and time in research and development. Suddenly, a phone manufacturer like Nokia decides to incorporate a class-leading camera in the Nokia N8. And you can be sure other manufacturers will follow suit.

Nowadays, cameras, clocks, FM radios, GPS units, altimeters, and portable television sets all fall within the domain of phone handset manufacturers. What happens to companies whose business models are based exclusively on the manufacture of just one of these items? In the era of digital convergence, single devices increasingly combine numerous functions.

When we talk of storage today, hard disk manufacturers had also better reinvent their business strategies: flash technology will eventually replace the conventional hard disk as we know it today.

The list is endless. One thing common to all of these cases, however, is that in today’s world of rapid-fire technology innovation, it is becoming increasingly difficult for smaller companies with expertise in narrow fields to stay relevant.

It is worrisome!