Your computer does something suspicious. You discover that the modification dates on your system software have changed. It appears that an attacker has broken in, or that some kind of virus is spreading. So what do you do? You save your files to backup tapes, format your hard disks, and reinstall your computer's operating system and programs from the original distribution media.
Is this really the right plan? You can never know. Perhaps your problems were the result of a break-in. But sometimes, the worst is brought to you by the people who sold you your hardware and software in the first place.
In 1994, the public learned that Intel Pentium processors had a floating-point flaw that infrequently caused a significant loss of precision in some division operations. Not only had Intel officials known about the problem, but they had apparently decided not to tell their customers until after there was a significant negative public reaction.
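The flaw itself was easy to demonstrate once its existence was known. The following short C program is a minimal sketch of the classic check, built around the widely circulated pair of test values; it is an illustration, not Intel's official diagnostic. On an affected processor, the computed remainder is not zero:

    /* fdiv.c -- a minimal sketch of the classic test for the Pentium
     * floating-point division flaw, using the widely circulated test
     * values.  The variables are declared volatile so that the division
     * is actually performed by the FPU at run time instead of being
     * folded away by the compiler. */
    #include <stdio.h>

    int main(void)
    {
        volatile double x = 4195835.0;
        volatile double y = 3145727.0;
        double r = x - (x / y) * y;     /* exactly 0.0 on a correct FPU */

        if (r != 0.0)
            printf("Division flaw detected: remainder = %g\n", r);
        else
            printf("Division appears to be correct.\n");
        return 0;
    }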
Several vendors of disk drives have had problems with their products failing suddenly and catastrophically, sometimes within days of being placed in use. Other disk drives failed when they were used with UNIX, but not with the vendor's own proprietary operating system. The reason: UNIX did not run the necessary command to map out bad blocks on the media. Yet, these drives were widely bought for use with the UNIX operating system.
Furthermore, there are many cases of effective self-destruct sequences in various kinds of terminals and computers. For example, Digital's original VT100 terminal had an escape sequence that switched the terminal from a 60Hz refresh rate to a 50Hz refresh rate, and another escape sequence that switched it back. By repeatedly sending the two escape sequences to a VT100 terminal, a malicious programmer could cause the terminal's flyback transformer to burn out - sometimes spectacularly!
A similar sequence of instructions could be used to break the monochrome monitor on the original IBM PC video display.
A few years ago, there was a presumption in the field of computer security that manufacturers who distributed computer software exercised due diligence to ensure that their computer programs, if not free of bugs and defects, were at least free of computer viruses and glaring computer security holes. Users were warned not to run shareware and not to download programs from bulletin board systems, because such programs were likely to contain viruses or Trojan horses. Indeed, at least one company, which manufactured a shareware virus scanning program, made a small fortune telling the world that everybody else's shareware programs were potentially unsafe.
Time and experience have taught us otherwise.
In recent years, a few viruses have been distributed with shareware, but we have also seen many viruses distributed in shrink-wrapped programs. The viruses come from small companies, and from the makers of major computer systems. Even Microsoft distributed a CD-ROM with a virus hidden inside a macro for Microsoft Word. The Bureau of the Census distributed a CD-ROM with a virus on it. One of the problems posed by viruses on distribution disks is that many installation procedures require that the user disable any antiviral software that is running.
The mass-market software industry has also seen a problem with logic bombs and Trojan horses. For example, in 1994, Adobe distributed a version of Photoshop 3.0 for the Macintosh with a "time bomb" designed to make the program stop working at some point in the future; the time bomb had inadvertently been left in the program from the beta-testing cycle. Because commercial software is not distributed in source code form, you cannot inspect a program and tell whether this kind of intentional bug is present.
Like shrink-wrapped programs, shareware is also a mixed bag. Some shareware sites have system administrators who are very conscientious, and who go to great pains to scan their software libraries with viral scanners before making them available for download. Other sites have no controls, and allow users to place files directly in the download libraries. In the spring of 1995, a program called PKZIP30.EXE made its way around a variety of FTP sites on the Internet and through America Online. This program appeared to be the 3.0 beta release of PKZIP, a popular DOS compression utility. But when the program was run, it erased the user's hard disk.
Consider the following, rather typical, disclaimer on a piece of distributed software:
NO WARRANTY OF PERFORMANCE. THE PROGRAM AND ITS ASSOCIATED DOCUMENTATION ARE LICENSED "AS IS" WITHOUT WARRANTY AS TO THEIR PERFORMANCE, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE RESULTS AND PERFORMANCE OF THE PROGRAM IS ASSUMED BY YOU AND YOUR DISTRIBUTEES. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU AND YOUR DISTRIBUTEES (AND NOT THE VENDOR) ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING, REPAIR, OR CORRECTION.
Software sometimes has bugs. You install it on your disk, and under certain circumstances, it damages your files or returns incorrect results. The examples are legion. You may think that the software is infected with a virus - it is certainly behaving as if it is infected with a virus - but the problem is merely the result of poor programming.
If the creators and vendors of the software don't have confidence in their own software, why should you? If the vendors disclaim "...warranty as to [its] performance, merchantability, or fitness for any particular purpose," then why are you paying them money and using their software as a base for your business?
Unfortunately, quality is not a priority for most software vendors. In most cases, they license the software to you with a broad disclaimer of warranty (similar to the above) so there is little incentive for them to be sure that every bug has been eradicated before they go to market. The attitude is often one of "We'll fix it in the next release, after the customers have found all the bugs." Then they introduce new features with new bugs. Yet people wait in line at midnight to be the first to buy software that is full of bugs and may erase their disks when they try to install it.
Other bugs abound. Recall that the first study by Professor Barton Miller, cited in Chapter 23, Writing Secure SUID and Network Programs, found that more than one-third of common programs supplied by several UNIX vendors crashed or hung when they were tested with a trivial program that generated random input. Five years later, he reran the tests. The results? Although most vendors had improved to where "only" one-fourth of the programs crashed, one vendor's software exhibited a 46% failure rate! This failure rate was despite wide circulation and publication of the report, and despite the fact that Miller's team made the test code available for free to vendors.
Most frightening, the testing performed by Miller's group is one of the simplest, least-effective forms of testing that can be performed (random, black-box testing). Do vendors do any reasonable testing at all?
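To see just how simple this kind of testing is, here is a toy random-input generator in the spirit of the fuzz program described above (a sketch of the idea only; Miller's actual tool has many more options). Its output is simply piped into the utility being tested; a utility that dumps core or hangs on such input has a bug, and quite possibly an exploitable one.

    /* fuzz.c -- a toy random-input generator: write a burst of random
     * bytes to standard output, to be piped into the utility under test.
     * Example:   ./fuzz 100000 | some_utility */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char *argv[])
    {
        long nbytes = (argc > 1) ? atol(argv[1]) : 100000L;
        long i;

        srand((unsigned) time(NULL));
        for (i = 0; i < nbytes; i++)
            putchar(rand() & 0xff);     /* one random byte at a time */

        return 0;
    }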
Consider the case of a software engineer from a major PC software vendor who came to Purdue to recruit in 1995. During his presentation, students reported that he stated that two of the top 10 reasons to work for his company were "You don't need to bother with that software engineering stuff - you simply need to love to code" and "You'd rather write assembly code than test software." As you might expect, the company has developed a reputation for quality problems. What is surprising is that they continue to be a market leader, year after year, and that people continue to buy their software.[3]
[3] The same company introduced a product that responded to a wrong password being typed three times in a row by prompting the user with something to the effect of, "You appear to have set your password to something too difficult to remember. Would you like to set it to something simpler?" Analysis of this approach is left as an exercise for the reader.
What's your vendor's policy about testing and good software engineering practices?
Or, consider the case of someone who implements security features without really understanding the "big picture." As we noted in "Picking a Random Seed" in Chapter 23, a sophisticated encryption algorithm was built into Netscape Navigator to protect credit card numbers in transit on the network. Unfortunately, the implementation initialized the "random number" generator used to produce the session key with weak, guessable values. The result? Someone with an account on a client machine could easily obtain enough information to crack the key in a matter of seconds, using only a small program.
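The underlying mistake is easy to illustrate. The sketch below is ours and is deliberately simplified - Netscape's actual key generation was more involved - but it shows the same pattern: when the only inputs to the generator are values such as the time of day and a process ID, the "secret" key is completely determined by quantities an attacker can estimate or observe.

    /* weakseed.c -- a simplified illustration (not Netscape's actual
     * code) of why seeding a generator with guessable values is fatal.
     * The "session key" below is completely determined by the seed, and
     * the seed is built from values an attacker can estimate or observe. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned seed = (unsigned) time(NULL) ^ (unsigned) getpid();
        unsigned char key[16];
        int i;

        srand(seed);                    /* the seed is the only secret */
        for (i = 0; i < 16; i++)
            key[i] = rand() & 0xff;     /* the "random" session key */

        printf("seed = %u, key begins %02x %02x %02x %02x ...\n",
               seed, key[0], key[1], key[2], key[3]);
        return 0;
    }

An attacker who can bound the time of the connection and the range of likely process IDs faces only a modest space of candidate seeds to search, which is why the key could be recovered in a matter of seconds.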
Over the past decade, several vendors have claimed that their systems are secure because the systems have withstood public "hacker challenges." Usually, these challenges involve some vendor putting its system on the Internet and inviting all comers to take a whack in return for some token prize. Then, after a few weeks or months, the vendor shuts down the site, proclaims its product invulnerable, and advertises the results as if they were a badge of honor. But consider the following:
Few such "challenges" are conducted using established testing techniques. They are ad hoc, random tests.
That no problems are found does not mean that no problems exist. The testers might not have exposed them yet. Or, the testers might not have recognized them. (Consider how often software is released with bugs, even after careful scrutiny.) Furthermore, how do you know that the testers will report what they find? In some cases, the information may be more valuable to the hackers later on, after the product has been sold to many customers - because at that time, they'll have more profitable targets to pursue.
Simply because the vendor does not report a successful penetration does not mean that one did not occur - the vendor may choose not to report it because it would reflect poorly on the product. Or, the vendor may not have recognized the penetration.
Challenges give potential miscreants some period to practice breaking the system without penalty. Challenges also give miscreants an excuse if they are caught trying to break into the system later (e.g., "We thought the contest was still going on.")
Seldom do the really good experts, on either side of the fence, participate in such exercises. Thus, anything done is usually done by amateurs. (The "honor" of having won the challenge is not sufficient to lure the good ones into the challenge. Think about it - good consultants can command fees of several thousand dollars per day in some cases - why should they effectively donate their time and names for free advertising?)
Furthermore, the whole process sends the wrong messages - that we should build things and then try to break them (rather than building them right in the first place), or that there is some prestige or glory in breaking systems. We don't test the strength of bridges by driving over them with a variety of cars and trucks to see if they fail, and then pronounce them safe if no collapse occurs during the test.
Some software designers could learn a lot from civil engineers. So might the rest of us: in ancient times, if a house fell or a bridge collapsed and injured someone, the engineer who designed it was crushed to death in the rubble as punishment!
Next time you see an advertiser using a challenge to sell a product, you should ask if the challenge is really giving you more confidence in the product...or convincing you that the vendor doesn't have a clue as to how to really design and test security.
If you think that a security challenge builds the right kind of trust, then get in touch with us. We have these magic pendants. No one wearing one has ever had a system broken into, despite challenges to all the computer users who happened to be around when the systems were developed. Thus, the pendants must be effective at keeping out hackers. We'll be happy to sell some to you. After all, we employ the same rigorous testing methodology as your security software vendors, so our product must be reliable, right?
There is also the question of legitimate software distributed by computer manufacturers that contains glaring security holes. More than a year after the release of sendmail Version 8, nearly every major UNIX vendor was still distributing its computers equipped with sendmail Version 5. (Versions 6 and 7 were interim versions that were never publicly released.) While Version 8 had many improvements over Version 5, it also had many critical security patches. Was the unwillingness of UNIX vendors to adopt Version 8 negligence - a demonstration of their laissez-faire attitude towards computer security - or merely a reflection of pressing market conditions?[4] Are the two really different?
[4] Or was the new, "improved" program simply too hard to configure? At least one vendor told us that it was.
How about the case in which many vendors still release versions of TFTP that, by default, allow remote users to obtain copies of the password file? What about versions of RPC that allow users to spoof NFS by using proxy calls through the RPC system? What about systems that ship with a world-writable utmp file, which enables a user to overwrite arbitrary system files? Each of these cases is a well-known security flaw. In each case, the vendors did not provide fixes for years - even now, they may not be fixed.
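The TFTP problem, at least, is trivial to test for on your own hosts. The following rough sketch (ours, not a vendor-supplied diagnostic) sends a single TFTP read request for /etc/passwd to a server and reports whether the server answers with file data or with an error; run it only against machines you are responsible for.

    /* tftpcheck.c -- ask a TFTP server for /etc/passwd and report
     * whether it answers with file data (opcode 3) or an error
     * (opcode 5).  A rough sketch for checking your own machines. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(int argc, char *argv[])
    {
        int s, len, n;
        unsigned char req[64], reply[600];
        struct sockaddr_in srv;
        struct timeval tv;

        if (argc != 2) {
            fprintf(stderr, "usage: %s server-ip-address\n", argv[0]);
            exit(1);
        }

        s = socket(AF_INET, SOCK_DGRAM, 0);
        tv.tv_sec = 10;                 /* give up after ten seconds */
        tv.tv_usec = 0;
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port = htons(69);       /* the TFTP port */
        srv.sin_addr.s_addr = inet_addr(argv[1]);

        /* A TFTP read request: opcode 1, filename, NUL, mode, NUL */
        len = 0;
        req[len++] = 0;
        req[len++] = 1;
        strcpy((char *) req + len, "/etc/passwd");
        len += strlen("/etc/passwd") + 1;
        strcpy((char *) req + len, "octet");
        len += strlen("octet") + 1;

        sendto(s, req, len, 0, (struct sockaddr *) &srv, sizeof(srv));

        n = recv(s, reply, sizeof(reply), 0);
        if (n >= 4 && reply[1] == 3)
            printf("The server handed over file data; it is unrestricted.\n");
        else if (n >= 4 && reply[1] == 5)
            printf("The server refused the request.\n");
        else
            printf("No useful reply; the server may not be running TFTP.\n");

        return 0;
    }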
Many vendors say that computer security is not a high priority, because they are not convinced that spending more money on computer security will pay off for them. Computer companies are rightly concerned with the amount of money that they spend on computer security. Developing a more secure computer is an expensive proposition that not every customer may be willing to pay for. The same level of computer security may not be necessary for a server on the Internet as for a server behind a corporate firewall, or on a disconnected network. Furthermore, increased computer security will not automatically increase sales: firms that want security generally hire staff who are responsible for keeping systems secure; users who do not want (or do not understand) security are usually unwilling to pay for it at any price, and frequently disable security when it is provided.
On the other hand, a computer company is far better equipped to safeguard the security of its operating system than is an individual user. One reason is that a computer company has access to the system's source code. A second reason is that most large companies can easily devote two or three people to assuring the security of their operating system, whereas most businesses are hard-pressed to devote even a single full-time employee to the job of computer security.
We believe that computer users are beginning to see system security and software quality as distinguishing features, much in the way that they see usability, performance, and new functionality as features. When a person breaks into a computer, over the Internet or otherwise, the act reflects poorly on the maker of the software. We hope that computer companies will soon make software quality at least as important as new features.
Network providers pose special challenges for businesses and individuals. By their nature, network providers have computers that connect directly to your computer network, placing the provider (or perhaps a rogue employee at the providing company) in an ideal position to launch an attack against your installation. In the case of consumers, providers are usually in possession of confidential billing information about their users. Some providers even have the ability to directly make charges to a user's credit card or to deduct funds from a user's bank account.
Dan Geer, a Cambridge-based computer security professional, tells an interesting story about an investment brokerage firm that set up a series of direct IP connections between its clients' computers and the computers at the brokerage firm. The purpose of the links was to allow the clients to trade directly on the brokerage firm's computer system. But as the client firms were also competitors, the brokerage house equipped the link with a variety of sophisticated firewall systems.
It turns out, says Geer, that although the firm had protected itself from its clients, it did not invest the time or money to protect the clients from each other. One of the firm's clients proceeded to use the direct connection to break into the system operated by another client. A significant amount of proprietary information was stolen before the intrusion was discovered.
In another case, a series of articles appearing in The New York Times during the first few months of 1995 revealed how hacker Kevin Mitnick allegedly broke into a computer system operated by Netcom Communications. One of the things that Mitnick is alleged to have stolen was a complete copy of Netcom's client database, including the credit card numbers for more than 30,000 of Netcom's customers. Certainly, Netcom needed the credit card numbers to bill its customers for service. But why were they placed on a computer system that could be reached from the Internet? Why were they not encrypted?
Think about all those services that are sprouting up on the World Wide Web. They claim to use all kinds of super encryption protocols to safeguard your credit card number as it is sent across the network. But remember - you can reach their machines via the Internet to make the transaction. What kinds of safeguards do they have in place at their sites to protect all the card numbers after they're collected? If you saw an armored car transferring your bank's receipts to a "vault" housed in a cardboard box on a park bench, would the strength of the armored car cause you to trust the safety of the funds?