Why can't they stop computer viruses?

It's no secret that the Internet is flooded with viruses and spam. A recent report says that 95% of all email is spam. In fact, spam and viruses go together. The spam contains attachments or links that infect your machine with a virus, and one of the main jobs of a virus is to spread itself by sending out more spam. Call them viruses, worms, Trojan horses, adware, spyware or bots: it's all malicious software ("malware"), and it's a problem.

Originally, viruses were just pranks written by amateurs. Then, noticing how widely they spread, hackers produced the notorious "adware" viruses that pop up advertising windows all over your desktop. Unscrupulous companies were willing to pay for advertising in this new medium. More recently, criminal organizations have created a demand for viruses that scan your machine for things like bank account, credit card and social security numbers. And finally, a more general-purpose virus has been created, called a "bot" (short for "robot"). These are programs that don't do much of anything on their own, but sit and wait for commands from a remote machine. Those commands can then mobilize a huge number of machines -- millions, in some cases -- to do whatever the authors require. The collection of infected machines is called a "botnet."

Since everyone in the industry knows this stuff is out there, you might reasonably ask why no one has put a stop to it. In fact, there is a whole industry dedicated to dealing with viruses. Companies like Symantec write the anti-virus, firewall and spam filter applications you can install on your machine. So a better question is, why don't they work?

There are two answers usually given for this. First, the industry blames users for clicking on links to dodgy websites, or opening email attachments from people they don't know. Users are also blamed for poor system hygiene -- we don't install all the security updates to our operating systems, and we don't buy the firewall, antivirus and spam filtering software written by the security industry!

The second answer to complaints is that viruses are a moving target. There's an arms race going on between the virus programmers and the security companies. The security companies learn how to spot a class of virus programs, but then the virus programmers learn to write a new type of virus that cannot be spotted. The virus programmers even directly attack the security software on your machine. Unfortunately, the virus programmers are winning the race. In a recent piece for Wired, security expert Bruce Schneier writes about the Storm botnet:

Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can't imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn't work in any case: Storm's creators could easily design another worm -- and we know that users can't keep themselves from clicking on enticing attachments and links.

He describes this botnet as "the future of malware." Blaming lazy, foolish users, or evil virus writers is the easy answer to the problem. Unfortunately, the real problem is that the security architecture of our PC operating systems is fundamentally flawed. Viruses will not be eliminated until operating systems are changed.

A Change in the Threat

In the beginning, there was no security on computers, because there was no operating system on computers. The computer was a blank slate. You would load a single program into memory, run it from address zero until it stopped, then clear memory and run another program. The program would be loaded from punched cards. For you youngsters, think of a stack of playing cards with holes in them to indicate the bits and bytes of the program. There was no screen, so the results would be printed out, or the program would punch a new stack of cards with the result. There was no file system (no hard disks!), so nothing the program did really mattered once the program ended. The worst you could do was an infinite loop that punched too many cards or printed too much output on the printer.

Later, computers had peripherals like tape drives (for years, any movie that featured a computer would just show reels of tape spinning). Operating systems were created to manage these devices. You didn't want to include the code for reading and writing the tape in every program you wrote. Instead, the operating system had all that code. It ran all the time, and when you loaded your stack of punched cards, your program would call into the operating system for things like reading or writing files to the tape.

These early operating systems still didn't have any security. After all, it was your computer, and the purpose of computers is to run programs. Sure, the operating system could complain if you tried something impossible, like writing a file bigger than the tape, but other than these purely technical errors, it wasn't going to tell your program what it could and couldn't do. If your program erased the entire tape, the answer of the industry was "well, don't run that!"

Computers got bigger and more expensive, and so programmers needed to get more out of them. They wrote "timesharing" systems which ran many programs at once. The idea wasn't to run a word processor and an email program at the same time (neither had been invented). It was to let multiple programmers each run their one program at a time. These programmers would drop off their deck of punched cards and tapes at the front desk, and a system operator would run as many programs as they could fit in memory at once on the machine. You'd get your printed results or new stack of punched cards, or a rewritten tape back later that day (if you were lucky).

Now the operating system needed a bit more security. It had to keep those several programs that were running at once from messing with one another. There were still no files stored on the machine itself, though, and no concept of a user. Security was minimal, and mostly concerned with defending the operating system against badly written programs. But if worst came to worst, the operating system would crash, everything would be reset, and the operators would write a nasty note on your deck of cards or printout.

More time passed and hard disks and terminals were connected to computers. The first terminals were teletypes, a combination of keyboard and printer. You typed in your commands and the computer answered with printed output. Suddenly you had multiple users "logging in" (for the first time!) and running programs outside the supervision of the system operators. Again, for you youngsters, the computer was a big machine that filled a room. The terminals were just dumb printers with no processors at all. The central computer was shared by dozens (later hundreds) of people at a time. It was slower than any computer you can buy now. Imagine an iPod processor shared by 500 people. This is not an exaggeration!

And so computer security as we know it was born. The purpose of security was to defend the system against threats. There were a few simple classes of threat. First, make sure no one logged in as a user without permission. This was solved with passwords. Second, make sure one user cannot read or alter another user's files. This was solved by tagging files with owners (actually, many complicated systems of permissions were implemented). And lastly, programs had to be isolated enough to keep from crashing the system or messing with the peripherals (writing the disk directly instead of going through the file system, for example). But the basic concepts were unchanged. The purpose of a computer is to run programs. If you ran a program that erased all your files, the industry answer was still "well, don't run that!"
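
To make that model concrete, here is a toy sketch of the two checks just described -- a password gate at login, and owner tags on files. The names are invented for illustration; this is no particular system's code:

    # Toy sketch of timesharing-era security: passwords gate logins,
    # and every file is tagged with its owner. Invented names, not any
    # real system's API.

    users = {"alice": "secret1", "bob": "hunter2"}       # name -> password
    files = {"report.txt": "alice", "notes.txt": "bob"}  # file -> owner

    def login(username, password):
        """First threat: logging in without permission."""
        return users.get(username) == password

    def can_access(username, filename):
        """Second threat: touching another user's files."""
        return files.get(filename) == username

    if login("alice", "secret1"):
        print(can_access("alice", "report.txt"))  # True: her own file
        print(can_access("alice", "notes.txt"))   # False: bob's file

Notice what is missing: nothing restricts what a program may do with the files of the user who ran it. That gap is the story of everything that follows.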

Later on, computers were connected together in networks and everything changed. But not right away. Even when the first email was implemented, it was difficult to send programs. There were many different types of computer, and they all had different processors with different instruction sets. Again, this seems odd to modern readers. Imagine that instead of Mac vs. PC, there were a hundred different, completely incompatible machines, each running their own flavor of operating system. Sending a program binary from one computer to another would have been pointless unless the two machines were the same type, running the same OS. You would probably send the source code for the program (the text written by programmers.) The person who received it would need to have programming tools, and would compile the program for his particular machine, then run it. He would only go to this trouble for something he had actually asked for. Many users would not have had any programming tools, and would have been unable to run a program they received even if they had wanted to.

But there were some virus-like things happening. Scripting languages had been invented that would take text commands directly and execute them without a separate compiler. So you could get a script from someone and run it without too much trouble. I remember an early one on IBM's corporate network. It was a "Christmas Card" program that would do a long, non-interruptible animation on your terminal (text-only screens by that point.) People thought it was a laugh and would send it around to one another as a prank. It wasn't a virus, since it couldn't spread itself, but it was a taste of things to come. For example, it was perfectly possible for a script to erase all your files. But since you'd know exactly where you got it (emailed from a friend), a malicious script was unlikely. Again, if you were annoyed by whatever the script did when you ran it, the industry answer was "well, don't run that!"

From the mid-1970s, there was an academic internet with email, but it was tiny by today's standards and connected a huge variety of computers. Most of the users were students, professors and, more generally, programmers. They wouldn't think of spreading a virus -- it would be like burning your own house down for fun! And a virus wouldn't spread well anyway, because of the variety of computers and operating systems. And so operating systems did not adapt to this new threat environment. Security was still about defending users from one another on timesharing systems. There was a new threat of someone using a network connection to attack your operating system, but it was a very manageable threat.

By the time the Web was created in the 1990s, things had changed. Most of the machines on the internet were PCs, and there were only a few types. As more and more of them were Intel processors running Microsoft Windows, there was a bigger and bigger opportunity for viruses. Since so many computers were running the same operating system, and would run the same programs (without needing to be compiled), viruses could spread widely. What's more, the internet was for general public use now. A teenager halfway around the world could write a virus that would run on your computer. It was only a matter of time.

The problem was that the security model hadn't changed along with the threats. The thinking was still "the purpose of computers is to run programs" and "programs can do whatever they want" and the responsibility was still on the user to avoid running malicious programs. In the old days, there were only a few ways to get a program -- you bought it from a company, you asked for it from another user, or you wrote it yourself. And once you had a program, you had to ask for it to be run. Programs did not just appear over the network, and they did not run automatically. So there was no need to defend against that threat.

Unfortunately, the popular operating systems -- Windows, MacOS and Linux -- were all using the same security architecture as the operating systems developed in the 1960s and 70s. In fact, they had less security than the systems that came before, since personal computers only had one user. The Unix-based operating systems (Linux and MacOS) would still take a password to authenticate a user, which was slightly useful. They would defend the operating system files against user programs, which was useful. And they would defend your files against other users on the same machine, which was pointless, since there were no other users.

Windows in its early versions had no security at all. There was only a single user, so no login password, and a program could alter any file on the machine, even the operating system. It was wide open! When security was added later, it was along the lines of Unix and the other old timesharing systems. And so none of the popular operating systems were prepared for viruses.

With Web browsers and "smart" email programs, everything was different. An email program would support attachments. If an attachment were a program, it would be run. If a web page contained a script, the script would be run. An ActiveX control is a binary program for Windows, and can do anything it wants to your machine. The web browser would run it, as part of displaying a web page. So users were no longer in control of the programs they ran and had only a limited idea of where they were coming from. Yet the operating system still let programs do anything they wanted. The operating system was still designed with the old threats in mind. It was obsolete.

Many people think that Mac and Linux systems are more secure because of better technology. This is not true. They have the same security model as Windows. They get viruses less often because they are less popular. Since there are fewer Mac or Linux machines out there, virus writers don't target them as often. But in fact, they are equally vulnerable.

Why Anti-Virus Programs Fail

When you click on email attachments or browse the wrong web pages, you are running programs. Continuing the traditional approach, the operating system designers don't see their job as protecting you from programs you choose to run. To fill the gap, the security companies have written programs that try to add this protection.

An anti-virus program has a very demanding task in front of it. It has to check all your incoming email and all the web pages you visit and try to spot malicious programs. It then warns you that the program is suspect, so that you will not run it. If it misses a piece of malware (or you approve it by mistake), it may also catch some of the effects -- attempts to communicate with the internet, or modify your operating system, for example. It can then warn you again, or offer to remove the offending malware. Finally, it can scan all the files and system settings of an existing system, trying to find malware to remove.

The first problem is to determine what constitutes a virus or other bit of malware. The anti-virus program cannot analyze a suspect program on its own. There's no simple rule that says what a program should or shouldn't be able to do. A useful program might want to read files to manage your bank account (Quicken, for example). A malicious program might want to scan those same files to steal your credit card information. To the operating system (and the anti-virus program), they are both programs that read files. The difference is in the intent, and no computer is smart enough to determine intent.

So anti-virus programs all rely on the company that makes them to list all known malicious programs. The program downloads this list (and updates it regularly), and checks your system for any of these programs. This is a "blacklist" of known malware. One common security program, "Spybot Search and Destroy", has a blacklist of 86,018 different pieces of malware.
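
As a rough illustration, blacklist scanning reduces to something like the sketch below. This is hypothetical, not any vendor's engine; real products match more elaborate signatures than a plain file hash, but the principle is the same:

    # Hypothetical sketch of blacklist scanning: compare each file's
    # hash against a downloaded list of known-bad hashes.

    import hashlib
    from pathlib import Path

    # Downloaded from the security company and updated regularly.
    # These entries are made up for the example.
    blacklist = {
        "5d41402abc4b2a76b9719d911017c592",
        "7d793037a0760186574b0282f2f435e7",
    }

    def scan(paths):
        for path in paths:
            digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
            if digest in blacklist:
                print(f"ALERT: {path} matches a known virus signature")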

The first problem with a blacklist is that a virus may appear on your machine before the security company knows about it. In fact, they only know about it because people complain of viruses, or they see them on their own machines. They can't visit all suspect websites, or receive all the email you receive to check for attachments. So they are inevitably behind the curve. New viruses are being written all the time and any that haven't been seen by the security company can get into your machine and past the anti-virus program without setting off any alarms.

Second, there's the problem of identifying the virus. It used to be that viruses were simple programs that were identical everywhere they occurred: on each infected machine, Virus X would look the same. So when the security company saw Virus X for the first time, it added a description of it to the blacklist, and your anti-virus program could use that description to catch Virus X on your machine.

Something similar used to be true of spam email. A spammer would send out a million copies of the same message. Now, you'll notice that spam varies a bit. The same spam may arrive with different subject headers, and it will include lots of misspellings so that simple rules (no "Viagra" messages allowed) will fail, since the key words never appear. By slightly randomizing the message, so that no two copies are exactly alike, the spammer gets past the spam filter.

The same thing is now happening with viruses. Instead of sending the same program to each machine that it infects, the virus varies itself a bit, sending slightly randomized copies to each new machine. And just as randomized spam gets through spam filters, randomized viruses can get through anti-virus programs. The security company can't add every possible random variation to their blacklists, and the anti-virus program can't analyze the malware on its own.
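
A small demonstration of why this works, under the simplifying assumption that the signature is a file hash: changing even a few bytes of the payload gives the variant a completely different hash, so the blacklist entry for the original never matches it:

    # Illustration only: a self-randomizing virus defeats exact-match
    # signatures, because each copy hashes differently.

    import hashlib
    import os

    original = b"...malicious payload..."
    variant = original + os.urandom(8)  # each copy appends random junk

    print(hashlib.md5(original).hexdigest())  # on the blacklist
    print(hashlib.md5(variant).hexdigest())   # brand new, unlisted hash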

Finally, clever viruses can attack the anti-virus program itself. If the anti-virus program is disabled, it will never alert you about the virus, no matter how many descriptions are added to the security company blacklist. On Windows, the virus can dig itself so deeply into the operating system that even when the anti-virus program removes the virus, it returns again quickly.

As viruses become more sophisticated, security companies are losing the battle against them. They will see the virus, but be unable to write a description of it, since the virus changes too quickly. When the anti-virus program removes it, it will reappear. Or the virus will disable the anti-virus software. Using other techniques, the virus writers can disguise the internet sites they report to, making it impossible to track them back to their source. And with millions of infected botnet machines at their command, the virus writers could probably attack and shut down any of the large security companies. They simply have more resources.

What Needs to be Done

Several types of solution have been considered up to this point, but none are really promising. Here's a quick rundown:

Well, don't run that!

You could avoid viruses by simply not running any programs you receive off the internet. This is harder than it sounds. First, you'd avoid all email attachments, even ones from friends. Second, you'd turn off preview panes in your email program, since the preview automatically displays images and some attachments (including programs.) Then you'd turn off all scripting in your web browser. This will cause many of the web pages you look at to display incorrectly, or just fail to display at all. You'd have a much more boring internet, but a much safer one.

Unfortunately this isn't quite enough either. Some images, movies, music, etc. also contain viruses. The virus authors have discovered ways to exploit bugs in the programs that display this content, so that when a particular item is displayed, the display program itself fails and is tricked into running a bit of virus code. And so your machine is infected anyway, despite all your precautions.

The only way to be safe at this point is to restrict your machine to displaying only text. So far as I know, virus programmers haven't been able to make simple text carry a virus. Given how complex browsers and web pages have gotten, there's probably a way to do it though!

There ought to be a law!

Some people claim that if Microsoft and the other OS companies were legally responsible, they would solve this problem. This is unlikely. An operating system and its application programs are millions of lines of software. Finding every possible way that the software could fail and let in a virus is a practical impossibility. Instead, if companies really were liable, they would simply refuse to run or display any content from the internet without your permission. In other words, they would set the defaults the way I just described. If you tried to change the defaults to display web pages or email the way you'd like, a big legal warning would pop up, forcing you to take responsibility for any viruses that you contract due to your decision. This is not a solution.

True, the Windows operating system could be more resistant to tampering by viruses. Changes along these lines have been made in Windows Vista, with the result that whenever you run any program that might alter the OS, a warning is displayed. This is the infamous User Account Control. There are two problems with this approach. First, users ignore the warnings after a while and say "yes" to all the prompts. The virus is home free when you do this. Second, these warnings only attempt to protect the operating system. Protecting your own files from programs you run is still not an option.

Only allow good programs

Instead of blacklisting all bad programs, how about making a list of all the good programs and only allowing those? This is called "whitelisting", and if the industry fails to come up with a better solution, this is probably what we will get. It has its own problems though.

It's true that the average user only runs the software preinstalled on his machine plus a few other programs (word processing, games, etc.) A complete list would be much smaller than the tens of thousands of viruses now on the blacklist. It's also true that unlike viruses, good software is not continually randomizing itself and trying to avoid detection. So the problem of identifying the software is gone.
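
In sketch form, whitelisting is just the blacklist check turned inside out: everything is refused unless it appears on the approved list. As before, this is an illustration with invented entries, not a real product:

    # Hypothetical whitelist check: default-deny program launching.

    import hashlib
    from pathlib import Path

    whitelist = {
        "b1946ac92492d2347c6235b4d2611184",  # approved word processor build
        "591785b794601e212b260e25925636fd",  # approved game build
    }

    def may_run(path):
        digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
        return digest in whitelist  # anything unlisted is refused

Note that every new version of every program has a new hash, which is where the approval burden discussed below comes from.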

But we have to consider the side effects of this approach. First, people will still want to run odd little things they find on the net -- games, movies, etc. They will get a nasty pop-up warning when they do this ("This program is not on the whitelist!"), but many will do it anyway and get infected.

Second, virus writers will work around this system. All they have to do is get on the whitelist or make it appear they are on the list. A virus could imitate a legal program on the list. A virus writer could attack a legitimate software company and get his virus built into whitelisted software. The virus writer could form a company, write some legal-looking application, get it put on the list, and deliberately build a virus into it. And they will still be able to exploit programming errors, using bad data to crash whitelisted programs and get them to run viruses.

Finally, even whitelisted software has to be able to change so that new versions can be written. Each new version will have to be approved by whoever keeps the whitelist (all the different security software companies.) This will be a tedious, expensive process. It will shut many small software companies (including individuals) out of the industry. In the long term, it would lead to complete stagnation.

Rethink the Problem

From here on out, you're reading my opinion as a programmer. Although many would agree with my description of the history of security and the problems with various approaches, there's no agreement on what to do about it.

Operating systems were written by programmers for programmers. Their goal is to run any piece of software a programmer can think of. Security was only added grudgingly, first to protect against badly written software, then to keep those clumsy users from messing with one another on timesharing systems. It was never the intention of system security to prevent users from running programs they wanted to run. The industry persists in treating the user of a computer as if they were a system administrator from the 1970s, who only ran the programs they trusted. Now that users run programs inadvertently all the time, this approach has failed.

What has to change is our model of what a program is and what it can do. The programmer is reluctant to impose any restrictions on programs in general, since that limits what he can get the computer to do. What he wants is to be able to run any program, even ones that make (useful) changes to the operating system. What the user gets from that is chaos, because programs are too powerful and cannot be trusted.

Instead, we need to define roles for programs, from programs that run in a browser to present content, to programs that edit files, such as a word processor or finance package, to system utilities that can do the full range of changes. Each of these roles needs to be strictly defined by the operating system, and programs limited to the functions defined by that role. By limiting program capabilities, we'll be able to trust them more, and run the huge variety of programs out on the internet without fear of viruses.
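
To make the idea concrete, here is one way such roles might look. This is purely my illustration -- no operating system implements this table -- and the role and capability names are invented:

    # Sketch of role-based program capabilities, enforced by the OS.

    ROLE_CAPABILITIES = {
        "content_viewer":  {"draw_to_window"},
        "document_editor": {"draw_to_window", "edit_user_documents"},
        "system_utility":  {"draw_to_window", "edit_user_documents",
                            "modify_system"},
    }

    def os_allows(role, capability):
        """The OS vetoes any action outside the program's role."""
        return capability in ROLE_CAPABILITIES.get(role, set())

    print(os_allows("content_viewer", "draw_to_window"))  # True
    print(os_allows("content_viewer", "modify_system"))   # False

Under this scheme, a script on a web page runs as a "content_viewer" and simply has no way to touch your files or your system, no matter what its author intended.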

This is not a fantasy. Some attempts have already been made along these lines. When the Java programming language was introduced in the mid-1990s, the idea was that "applets" would run in the web browser only. They would have strictly limited capabilities, such as drawing animations on a piece of the page. The applet could do nothing other than this limited set of things, and so you could trust it. The nastiest programmer in the world couldn't make it do anything dangerous. You would be free to visit any web page you liked.

The key idea is to restrict the program to some kind of "virtual machine", a simulated computer where the program can run, but can't do any damage to the underlying real computer. This idea has reappeared in various forms many times in the industry. The problem with it is that it's difficult to get right. First, it really has to be impossible for a program to reach outside the virtual machine. This means implementing that machine with excruciating attention to detail. Second, it all has to be efficient, or else programmers will not use it. Programmers are not willing to give up 50% of the speed of a machine, just so their programs will be secure (after all, they aren't the ones writing viruses!) Third, the virtual machine has to have enough connections to the real machine to do the tasks it needs to do.
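
Here is the virtual machine idea at toy scale: untrusted code is written in a tiny instruction set that has no operation for touching files or the network, so no program expressed in it can do either. (Making this airtight and fast at real scale is exactly the hard part just described.)

    # Toy "virtual machine": the interpreter understands only harmless
    # drawing commands, so untrusted programs cannot reach the real system.

    SAFE_OPS = {
        "move": lambda state, x, y: state.update(pos=(x, y)),
        "line": lambda state, x, y: print(f"line from {state['pos']} to {(x, y)}"),
    }

    def run_applet(instructions):
        state = {"pos": (0, 0)}
        for op, *args in instructions:
            if op not in SAFE_OPS:
                raise ValueError(f"instruction {op!r} not allowed")
            SAFE_OPS[op](state, *args)

    run_applet([("move", 10, 10), ("line", 50, 50)])   # runs fine
    # run_applet([("read_file", "/etc/passwd")])       # rejected: unknown op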

Java failed on all three counts. It was possible (though hard) to break out of the virtual machine and mess with the real machine. The first versions of Java were too slow, and they couldn't use all the OS capabilities that programmers wanted to use. If programmers restricted themselves to Java, their pages didn't look as good as pages built with other, less secure technologies. Java did get better, but by then, the industry had moved on.

As far as I know, the industry has never tried to take a virtual machine approach to desktop applications that edit files. We haven't found the right definition for the application "role", so that an application can do what it needs to do, but not be able to mess with the entire system. Instead, desktop applications are free to do whatever they like, which opens the door to viruses.

More recently, the idea of running the entire operating system in a virtual machine has become more popular. If a virus attacks the virtual machine, it can just be destroyed and recreated, rather than left infected. This approach can also more thoroughly isolate the operating system from applications. However, it doesn't help users much. Applications can still read and write user files, open connections to the internet, etc. If applications can do these things, then so can a virus.

Finally, since protecting the operating system is an obvious precaution, the industry has made some progress in that direction. Unfortunately, the industry still wants to support customization and add-ons of both the operating system and of key applications like the web browser. If you allow programs to customize or attach themselves to these key components, you've provided the perfect way for a virus to gain control of your system.

Where will a solution come from?

The software industry today is intensely competitive, with very short development cycles for products. Ordinary application companies don't have the time to develop new models of security, and would have no way to get them accepted if they did. An innovation probably will not come from the commercial software developers.

Microsoft has a near monopoly on PC operating systems, but Windows is tied down in a million places by the need to be compatible with previous versions and with the rest of the industry. Microsoft would need a solution that did not seriously disrupt its own products or all the products that run on Windows. I doubt this is possible.

The most likely source for any radical rethinking of security is the open source community -- the mostly volunteer community that wrote the Linux operating system, and that continues to add applications to it. Unfortunately, they are the least affected by the problem, since Linux isn't popular enough to attract viruses. And being programmers, they are reluctant to restrict all programs in the way I think needs to be done.

To solve this problem, we need to rethink the infrastructure and get it right this time. Currently, we are just extending the legacy of the 1970s, with little original thinking in this area. It will probably take some kind of disaster before the industry wakes up and realizes that security is not just "important", but essential.

by Michael Goodfellow.
For more, see Free The Memes!
