We all know that in day-to-day life there are various kinds of trust. There are people you would lend a thousand dollars to but not tell a secret to; people you would ask to babysit but not lend a book to; people you love dearly but don't let touch the good china because they break things. The same is true in a computer context. Trusting your employees not to steal data and sell it is not the same thing as trusting them not to give it out by accident. Trusting your software vendor not to sell you software designed to destroy your computer is not at all the same thing as trusting the same vendor not to let other people destroy your computer.
You don't need to believe that the world is full of horrible, malicious people who are trying to attack you. You do need to believe that the world has some horrible, malicious people who are trying to attack you, and is full of really nice people who don't always pay attention to what they're doing.
When you give somebody private information, you're trusting them in two ways. First, you're trusting them not to do anything bad with it; second, you're trusting them not to let anybody else steal it. Most of the time, most people worry about the first problem. In a computer context, you need to remember to think explicitly about the second problem. If you give somebody a credit card number on paper, you have a good idea what procedures are used to protect it, and you can influence them. If carbon sheets are used to make copies, you can destroy them. If you give somebody a credit card number electronically, you are trusting not only their honesty but also their skill at computer security. It's perfectly reasonable to worry about the latter even if the former is impeccable.
If the people who use your computers and who write your software are all trustworthy computer security experts, great; but if they're not, decide whether you trust their expertise separately from deciding whether you trust their honesty.