Please, Mr. Postman

One of my friends has the following “signature” attached to all his outgoing emails:

The materials in this message are private and may contain Protected Healthcare Information. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.

I’ve seen messages like this many, many times, and I feel like they really highlight a major problem with email. This message is like someone putting a sign outside of a window that says “Warning: You are not allowed to look in this window, and if you do, forget what you see immediately.” Does that message do any good at all? It almost makes you want to look in the window more, doesn’t it?

The following is the first post in a series entitled, “Securing your Email.” I’ll start the series by highlighting the dangers of insecure email and why this topic is important. Once I’ve got you convinced that every email you’ve ever read is a fraud, I’ll use the rest of the series to outline many different options you have to address the problems.

In case you are unaware, email is an inherently insecure form of communication. If you don’t know much about how email and the Internet work behind the scenes, that might not be terribly intuitive, so let’s start off with a pretty good analogy. Let’s pretend there’s no such thing as email. You work from home, and you need to send your boss a fairly important message with some sensitive information in it. It’s brief, so you just grab a pencil, jot your thoughts down on the back of a postcard, and stick it out in your mailbox by the curb for the postal carrier to pick up. Pretty soon afterwards the message gets picked up, and it makes its way through various post offices and into your boss’s mailbox a while later.

Ignoring how slow that would be, it still probably sounds like a pretty dumb way to send an important message. Unfortunately, it’s a pretty good description of how email works. To get a better picture of why it’s dumb, let’s examine some of the security flaws present in a system like the one described above. First, you left the message in your mailbox unattended. Anyone walking down the street could just open up the box and read what you wrote. Considering the message contains sensitive information, you probably don’t want just anyone to read it. More importantly, if that person had a pencil, they could easily erase what you wrote and replace it with something else.

You also place an inherent trust in the postal system. Every postal worker who handles your postcard along the way can read it, copy it, or even change it. If the message you’re sending truly contains sensitive information, you need to put it in a form that only the intended recipient can read. Then, even if someone steals your letter, at least you won’t have to worry about private information getting out.

Finally, once your boss receives the message, there is no way for him or her to verify that you actually wrote it. Did the person who opened your mailbox change what you wrote? Did someone from a competing company send your boss false information under your name? The information is highly suspect unless your boss can verify that you were the original author and that the message hasn’t been altered since you sent it.

If you’ve made it this far, you may be asking yourself whether any of this is even relevant to you. Maybe you don’t make a habit of emailing people sensitive information, but I’d bet you send more of it than you think. A lot of seemingly harmless information in the wrong hands could be used to do a lot of damage. Plus, what if a criminal tried to impersonate you for their own gain? Don’t your friends and colleagues deserve to know that you actually wrote the words they’re reading? The same applies to emails you receive from your friends and colleagues. Maybe you don’t think that anyone would ever waste their time reading, copying, or altering your private emails. There are plenty of good reasons why criminals would want to do exactly that, however. The most obvious would be to make money at your expense. Some of these security flaws could also be used to get you into a lot of trouble at work. Is that something you want to risk?

The rest of the posts in this series will outline both simple and more complicated steps you can take to secure your email. Since my original reason for these posts was to address email security for health care providers, I’ll include a post that demonstrates how the health care community could begin implementing more secure email today. Hopefully this analogy has taught you something about the emails you read every day. Check out the rest of the entries in the days to come, and leave comments if you have questions or think I’m wrong about something!

Fixing the Holes

The following is the second post in a series entitled, “Securing your Email.” Throughout the post, I am going to be referencing an analogy about mailing a letter that I described in the first post of the series. If you’re not familiar with it, you may want to take a minute to read it. I’ll wait…

There are a number of ways you can make email a more secure form of communication. One of the easiest ways to start patching holes in your current system is to look for major lapses in security and take care of those first. In my analogy, multiple security threats could be effectively eliminated by handing your postcard directly to the mail carrier and having your boss’ mail carrier hand it directly to him or her. That way, the postcard is never left sitting in an insecure location. There’s no chance for a random person walking down the street to read, copy, or change your message. If you really trust your postal workers and your message doesn’t contain too much sensitive information, this may be all the protection you need.

The equivalent in the email world is making sure you do all of your communication to and from your email provider over a secure connection. This is actually really easy to do, as long as your email provider supports it. Since GMail is fairly ubiquitous these days, I’ll use them as an example.

You want to make sure you’re viewing, sending, and receiving email over a secure connection. If you’re using a web browser to access GMail, you just need to make sure you log in using https://gmail.com (note the https). GMail also intelligently offers an option to “Always connect using https,” which makes sure you never forget and leave a postcard sitting out by the curb. I highly recommend enabling that option if you haven’t already. If you’re using a desktop mail client like Outlook or Thunderbird to access your email, make sure you specify a TLS or SSL connection when you’re setting up your account. By making sure the connection to and from your email service is secure, you’ve eliminated a major lapse in email security. Since many email services offer (or even require) it, it’s a good idea to get into the habit of verifying that you’re communicating over a secure connection whenever you check your email. It’s also imperative that you’re using a secure connection if you’re emailing on public networks, especially insecure wireless networks.
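
If you’re ever curious whether a mail server really supports secure connections, you can check for yourself from a command line using the openssl tool. Here’s a rough sketch; the Gmail server names and ports below are just the commonly published ones, so substitute whatever your own provider documents:

    # Test an SSL/TLS-protected IMAP connection (port 993 is the standard "IMAPS" port).
    openssl s_client -connect imap.gmail.com:993

    # Test an outgoing mail server that upgrades to TLS via STARTTLS (port 587).
    openssl s_client -starttls smtp -connect smtp.gmail.com:587

    # In both cases you should see the server's certificate details and a "Verify return code".
    # If no certificate shows up, assume your message is a postcard sitting out by the curb.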

But what if your email service doesn’t offer secure connections? Or what if that’s not enough? Personally, I don’t think it is! If you look back to my analogy, you’ll see that there are still a number of prominent security threats. For example, what if some nefarious character at the post office tampers with your message?

The best way to deal with the remaining threats is to use public-key cryptography. This is a fairly complicated topic, and I’ll only be scratching the surface of it in my posts. For now, let’s say that all you need is a file on your computer called an encryption key. In future posts, I’ll briefly cover how to obtain and use an encryption key and some of the basic theory behind public-key cryptography. First, though, let’s understand some of the problems that public-key cryptography aims to solve.

Flickr: Zappowbang’s “Seal” (CC-BY)

In my analogy, I pointed out that upon receiving your message, your boss has no way to verify whether you were the original author. In the real world, a person uses their written signature to symbolize a document’s authenticity. Unfortunately, forged signatures are sometimes difficult to spot, except by experts. To provide an extra layer of verification, important documents that have been authenticated usually bear a seal. In the old days, many important people sealed their letters with wax impressed with a unique design. This assured the recipient that the document had not been tampered with en route. Today, an official document whose signature was verified by a notary public also bears a seal. A digital signature is similar to a seal. It asserts that all (or part) of an electronic document was verified by the signer and that it has not been tampered with en route. Since the documents are electronic, a digital signature is also much more portable than a paper document bearing a seal. Unlike a wax-sealed message, a digital signature does not prevent anyone who comes across the message from reading or copying it. It is simply an indicator that the message is authentic and really came from its author.

Flickr: timg_vancouver’s Enigma Machine (CC-BY-SA)

If you need to send a message that contains extremely private information, which no one but the intended recipient should see, the message needs to be encrypted. As you can probably guess, this means that even if someone were to examine the message text, they would not be able to read it. For example, the Enigma machine was used by the Germans in World War II to encrypt messages between their armies. An encrypted message contains a long string of seemingly random letters and numbers that have been disguised using a code. Messages are encoded so that only a person who has the proper key can unlock the code and reveal the original message. This is starting to sound a bit like a Dan Brown novel, isn’t it? Aside from that, encryption is kind of boring. It does its job really well, and as long as you pick an adequate passphrase for your encryption key, your information is fairly secure. Just make sure the Allies don’t capture your Enigma machine, or you might be sunk. 🙂

My purpose here is not to convince everyone to start using public-key cryptography to digitally sign or encrypt all of their emails. My purpose is to help you understand how insecure your email is, why this extra security exists, and why it’s ultimately necessary in a lot of situations. Many of us have come to rely heavily on email as a form of communication. If you are one of those people, it may be time to reconsider how much trust you place in that system. The methods described in this post should help restore most of the trust you may have taken for granted in the past. Still, if I can get a handful of you to start digitally signing your emails when I’m through, I will consider that a victory.

Now that I’ve spent almost this entire post talking about strange things like digital signatures and encryption keys, you might be fairly confused. Before things get any more technical, I wanted to help you understand why we use these things and how they work on a basic level. In my next installment in this series, I’ll discuss some of the theory behind public-key cryptography and how it’s used to create an extra layer of security on top of email by establishing your true electronic identity. In the future, I’ll also cover how to obtain or create an encryption key and how to use it. Then you’ll truly be on the road to secure messaging!

Who do you trust?

The following is the third post in a series entitled, “Securing your Email.” If you’re just tuning in and you’re not very familiar with words like “digital signature” and “public-key cryptography,” you may want to take a few minutes to read the first two posts in this series.

After reading my earlier posts, hopefully you’ve gotten a better understanding of what digital signatures and encrypted emails are and why they can be necessary. Before getting into the technical details of how to obtain or create an encryption key, you should gain a better understanding of how keys are authenticated. After all, this entire process is about learning to trust a system more than a plain old email. So why should you trust an email with a digital signature more than a normal email? How do you know whether that digital signature is even valid? Your recipients need a way to verify that your digital signature actually belongs to you. There are two competing theories (or “schemes”) about how this should be done, each with advantages and disadvantages. Both involve “verifying” that someone’s key matches their true identity in real life. A key is considered “verified” when a person or company signs it. By signing a key, the signer is asserting two things: that the person is who they say they are, and that this key really belongs to them.

In the public key infrastructure (PKI) scheme, a certificate authority is responsible for this verification. You’ve probably heard of Verisign, the most popular certificate authority. Certificate authorities are companies who take responsibility for verifying that your key matches your identity (as determined by a driver’s license or passport) in exchange for a fee. For example, an individual “Digital ID” from Verisign costs $20/year. Thus, for a fairly nominal fee you get a verified encryption key that can be used to digitally sign or even encrypt emails. In order to do this, you’ll need to use a desktop email client, such as Microsoft Outlook Express or Mozilla Thunderbird, or you’ll need to use a browser-based plug-in for web email, such as the GMail S/MIME extension for Firefox. It’s important to realize that your recipients also need to use one of these applications to be able to check your digital signature or decrypt your email messages.

While the PKI scheme costs an annual fee, nearly all of the work and upkeep is done by the certificate authority. The user simply has to purchase the key and start using it. Cost aside, however, many people in the computer industry do not trust keys that are signed by certificate authorities simply because the companies can be easily subverted. In other words, they do not trust that the certificate authorities are doing their jobs thoroughly. People claim that for your $20, you can get a certificate identifying you as basically whomever you want. If true, this destroys the integrity of the entire system. Once they have your $20, does the company have a significant interest in whether you actually are who you say you are? You also have to wonder how much you can trust a lesser known certificate authority. For example, if you got a message from Joe, whose key was verified by Jimbo’s Certificates, would you be more likely to trust Joe’s signature? Less likely? Also, what happens if a certificate authority goes bankrupt?

The web-of-trust (WOT) scheme is the alternative to PKI. Instead of a central company (or set of companies) being in charge of certifying people’s identities, groups of users validate each other’s identities. You first need to create an encryption key on your computer, which I will cover in my next post. Your key will have a “digital ID number.” You then need to meet in person and exchange “digital ID numbers” with your friends and colleagues. Once you’re back at a computer, if you’re satisfied with someone’s identity, you can use this number to locate their key on the web and sign it. You’re essentially saying, “I met this person, and I can confirm their identity and that this is their true key.”

Using this scheme reduces cost (it’s free), and thus eliminates the bias of companies that are more concerned with profit than actual validity. Plus, multiple validations for a given key should greatly increase your ability to trust it. If 15 of your close friends have verified Joe’s key, would you be more likely to trust Joe’s signature or less likely? A disadvantage of the WOT scheme is that without a backing company, it is usually a lot more work for the individuals. You can’t just pay someone $20/year to take care of it for you. It also usually involves meeting face-to-face with people in order to verify their identity or have them verify yours, which can be difficult for some people depending on where they live. Fortunately, with the advent of free software tools like key servers, it is really easy to assess the trust level of a person’s key.

As with PKI, you need software tools to be able to digitally sign or encrypt an email. You’ll also need this software to be able to verify someone’s signature and decrypt an email. Some examples of these tools are the Enigmail plug-in for Mozilla Thunderbird or the FireGPG Firefox extension for GMail. I will go through how to use these two applications in more detail in my next post.

Now that you’ve got a good background on how key validation works, you’re ready to get started with your own key. If you’d like to purchase a key from a certificate authority, go for it. Unfortunately, I can’t describe how to use it in detail since I’ve never purchased one, but some of the information I’ll provide may still be helpful. In my next post in this series, I’ll show you how to use free software to create your own encryption key right on your computer for free. I’ll also discuss how to use it and how to get started in the web-of-trust.

Safety Dance

The following is the fourth post in a series entitled, “Securing your Email.” So far in this series, I’ve done a lot of talking about the theory of secure email and why you might want to make your email more secure. If you’re not familiar with these concepts, I strongly urge you to go back and read the rest of this series. In this post, I’m going to highlight how you can get started using public-key encryption today, using only free software.

Disclaimer: It’s important that you read this entire post before attempting anything. It is perfectly acceptable to create a “test” key using a fake name and email address. Just make sure you label it as “test” and that you delete it (or revoke it) when you’re done. Also, creating a key is (somewhat) serious business. Only create one if you plan on using it and keeping track of it.

Before we begin, we need to get a few terms out of the way. First, I’ve oversimplified one point up until now: I’ve let the term “encryption key” have multiple meanings. An “encryption key” is actually an “encryption key pair.” Each pair consists of a “public key” and a “private key.” As you can probably guess, a “public key” is meant to be shared with anyone and everyone. For example, here’s mine. It is meant to be your public electronic identity. If anyone wants to check whether I truly wrote an email that I’ve digitally signed, they can use my public key to verify the authenticity of the message. A “private key” is never meant to leave your possession. Never, ever give someone access to your private key. It is the lifeline of your electronic identity. It’s used to digitally sign outgoing messages and to decrypt any private mail sent to you. Your private key is also passphrase-protected, so as long as you’ve chosen a strong passphrase, your key is relatively safe even in the wrong hands. This is why it’s important to use an extremely strong passphrase when creating your key: a mix of uppercase and lowercase letters, numbers, and symbols that is at least 8 characters long. It is almost certain that your passphrase will be the weakest link in the security of your key. Check out this article by Microsoft to learn more about creating a truly secure passphrase.

Let’s take a minute to understand how public and private keys work together. Basically, you are the only one who uses your private key, and everyone else uses your public key. For example, if I want to digitally sign an email, my private key is used to take a snapshot of the outgoing email and attach a small digital signature to the message (attachment: signature.asc). The signature is tied to the exact text of my email. If the contents of the message are altered en route, the signature will not validate when it reaches the recipient. When you receive my email, FireGPG or Enigmail will see the signature.asc attachment and analyze it using my public key, which will be retrieved from the Internet if it’s not on your computer already. If my message has not been altered, the signature shows up as “valid.” No matter what, the body of the email will be visible, and if the recipient does not use any GPG-capable software, the email will look normal (except for a simple signature.asc file attached to it).
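
If you’re curious what this looks like under the hood, here’s a minimal sketch using the gpg command line tool; Enigmail and FireGPG do the equivalent of this for you automatically, and message.txt is just a stand-in for the text of your email:

    # Create a detached, ASCII-armored signature of the message using your private key.
    # This produces message.txt.asc, the same sort of signature.asc file described above.
    gpg --armor --detach-sign message.txt

    # The recipient checks the signature against the message using your public key.
    # If the message was altered in transit, gpg reports a BAD signature.
    gpg --verify message.txt.asc message.txt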

Encryption and decryption work differently. If I encrypt a message to you, I want you to be the only one able to read it. So, I encrypt the email using your public key. The text of the email is encoded, and the only way to read the original message is for you to decode it using your private key. Nobody else can read the message (unless they’ve somehow stolen your private key and cracked the passphrase). Not to get too technical, but you can actually use digital signatures and encryption at the same time. For example, I want to send you a message that is meant only for you (encryption), and I want you to verify that I actually sent it (digital signature).
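
Again, the email plug-ins handle all of this behind the scenes, but a bare-bones sketch with the gpg command line tool looks something like this (the recipient’s address is just a made-up example):

    # Encrypt the message with the recipient's public key; only their private key can unlock it.
    # --armor produces a text-friendly message.txt.asc that can be pasted into an email.
    gpg --armor --encrypt --recipient friend@example.com message.txt

    # Sign and encrypt in one step, so the recipient can read it AND verify I sent it.
    gpg --armor --sign --encrypt --recipient friend@example.com message.txt

    # The recipient decrypts it with their private key (gpg prompts for their passphrase).
    gpg --decrypt message.txt.asc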

Ok, enough talk. To get started creating your key pair, you’re going to need to install a piece of software called GNU Privacy Guard (GPG or GnuPG for short) on your computer. Somewhat confusingly, GPG is a free software implementation of the OpenPGP standard. PGP stands for “Pretty Good Privacy,” an ironic acronym considering that it’s extraordinarily secure. Windows users can download and install Gpg4Win, which should contain everything you need to get started, including the WinPT keyring manager. I’ll discuss more about what a “keyring manager” is in a moment. It looks like a new version of Gpg4Win is due out in a few weeks if you’d rather wait, but you shouldn’t need to. Mac OS X users can install MacGPG, which should likewise contain everything you need to get started. Linux users almost definitely have GPG installed on their system by default along with a keyring manager (like Seahorse for Gnome or KGpg for KDE).

GPG by itself uses a command-line interface. Since most users want a more robust interface, they install additional applications that harness the power of GPG and make it available via a more user-friendly point-and-click interface. The “keyring managers” that I mentioned earlier provide this interface. A keyring manager is an application that keeps track of other people’s public keys so that you can verify their digital signatures and encrypt emails to them. It keeps track of the trust level of those keys, and it can be used to sign someone’s public key if you want to confirm their identity. A keyring manager can also be used to generate a new key pair for yourself and to keep public keys in sync with the many online public key servers in the world. As I mentioned in a previous post, if you want to use GPG to sign and encrypt emails, you need to use a browser extension or a desktop email client. Gpg4Win comes bundled with an extension for Microsoft Outlook. Enigmail is a fantastic extension for Mozilla Thunderbird that can serve as a keyring manager and desktop email interface for GPG. FireGPG is a great Firefox extension that can serve as a keyring manager and a GMail interface for GPG. As you can see, software developers, many of whom need to send secure emails every day, have developed a great set of tools that make getting started with secure email fairly easy.
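
If you ever want to peek at what a keyring manager is actually keeping track of, the gpg command line tool will show you the same information. These commands only read your keyring, so they’re safe to try:

    # List the public keys on your keyring (yours and any you've imported from other people).
    gpg --list-keys

    # List your own key pairs (the ones that include a private key).
    gpg --list-secret-keys

    # Show the signatures attached to each key, i.e. who has vouched for it.
    gpg --list-sigs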

Unfortunately, I can’t outline all the possible ways to use this software. I’m going to go through a few examples here and provide some links to relevant documentation. In general, to create a new key for yourself you need to enter your “key manager,” for example by going to “OpenPGP -> Key Management” in Thunderbird (with Enigmail installed) or “Tools -> FireGPG -> Key Manager” in Firefox (with FireGPG installed). Then you want to generate a new key pair by going to “Generate -> New Key Pair” in Enigmail or clicking “New Key” in FireGPG. If you’re using Gpg4Win without Enigmail or FireGPG, you can follow this guide (although it looks a little dated, and you may want to use the WinPT key manager over GPA). Make sure you enter your full name and the email address you plan on mailing from. The “comment” field can be left blank, unless you’re creating a “test” key. It’s up to you whether you want to give your key an expiration date. (Most people don’t.) If you’re feeling adventurous, check out the advanced options. While most key managers should use these advanced options by default, you want your key to have the following options: “Key size/length: 2048” and “Key Type: DSA & Elgamal.” It will probably take a minute or two to generate your key. Make sure to move your mouse around and do something like browse the Internet while you wait, since that helps your computer gather the randomness it needs to create a strong key. Once your key pair has been created, you’ll be able to add additional email addresses to it if you plan on using more than one.
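
If you’d rather skip the graphical key managers entirely (or yours is misbehaving), key generation can also be done directly with the gpg command line tool. This is just a sketch of the interactive process; the exact prompts vary a little between gpg versions:

    # Start the interactive key generation wizard. It will ask for the key type
    # (e.g. DSA & Elgamal), key size, expiration date, your name and email address,
    # and finally a passphrase. Pick a strong one!
    gpg --gen-key

    # When it finishes, confirm the new key pair is on your keyring.
    gpg --list-secret-keys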

If you are asked whether you’d like to create a back-up of your key pair and/or create a revocation certificate, do it. If you are not given the option to back up your key pair and you cannot figure out how with the software you’re using (either “back up” or “export” your public and private keys), you may need to use an alternative key manager or the gpg command line tool. For example, FireGPG does not allow you to export a private key (only a public key, which is not an adequate back-up). If you are not given the option to create a revocation key/certificate and you cannot figure out how with the software you’re using, you should create one using an alternative key manager or the gpg command line tool. You should back up all three of these files (your public key, your private key, and your revocation certificate) on a USB flash drive, external hard drive, or blank CD and store it in a safe place. This is especially true if you created a key pair without an expiration date. These backups are your fall-back so that your key never ends up completely unusable or out of your control.
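
For reference, here’s roughly how those three files can be produced with the gpg command line tool; replace the example address with the email address on your key:

    # Export your public key in ASCII-armored form (this one is safe to share anywhere).
    gpg --armor --export you@example.com > public-key.asc

    # Export your private key. Guard this file carefully; it belongs on the backup drive, not in email.
    gpg --armor --export-secret-keys you@example.com > private-key.asc

    # Generate a revocation certificate now, while you still can, and store it with the backups.
    gpg --armor --output revoke.asc --gen-revoke you@example.com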

I should take a minute to explain the revocation key (or revocation certificate). It can be used to “revoke” your key pair if it is ever lost or stolen. It will be used to notify public key servers that your key is no longer valid, and it will tell your friends and colleagues to no longer use this key. This is obviously something you want to keep private as well, since someone could easily revoke your key if they got ahold of it.

Now that you’ve hopefully got a functional key pair created, you’ll want to take some steps to enter into a web-of-trust. First, you have to publish your key to one (or many) of the free pgp/gpg key servers. There are likely a few built in to your key manager, like hkp://pgp.mit.edu and hkp://pool.sks-keyservers.net. You can also manually upload your public key to the PGP Global Directory (make sure it’s not your private key!). Now your friends and colleagues can use these key servers to keep track of your public key.
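
If your key manager doesn’t have an upload button, the same key servers can be reached directly with the gpg command line tool. I’m using my own key ID here (more on finding yours in a moment); swap in your own:

    # Publish your public key to a public key server; other servers will eventually sync it.
    gpg --keyserver hkp://pool.sks-keyservers.net --send-keys D4188BF6

    # Friends and colleagues can then search for your key by name or email address.
    gpg --keyserver hkp://pool.sks-keyservers.net --search-keys you@example.com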

To get your own web-of-trust started among your friends and colleagues, you will need to exchange two numbers with them (usually in person). Your “Key ID” should be fairly apparent in your key manager. Mine is D4188BF6. Your “Key Fingerprint” should also be available through your key manager, but you may need to do a little bit of digging into the key via “Properties.” Mine is C14A 6784 162F 9C42 E404 D37F E5B6 FD50 D418 8BF6. You might notice that the last 8 digits of my key fingerprint match my key ID. These numbers allow you to positively identify my key on a public key server. Thus, if I hand you that fingerprint in person, you can be sure that the key you find on the server really is mine and that I hold the corresponding private key. If you can positively identify me, you should sign my public key and synchronize that signed key to the key server. Once I do the same for you, and more of our friends and colleagues do the same, a web-of-trust will begin to form. Using it, I can reasonably trust people I’ve never met if their keys are signed by my trusted friends and colleagues.
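
Here’s what that exchange looks like with the gpg command line tool, using my key as the example:

    # Fetch my public key from a key server using the key ID I handed you in person.
    gpg --keyserver hkp://pgp.mit.edu --recv-keys D4188BF6

    # Display the full fingerprint and compare it, character by character, with the one I gave you.
    gpg --fingerprint D4188BF6

    # If it matches (and you're satisfied I am who I say I am), sign my key...
    gpg --sign-key D4188BF6

    # ...and push the signed copy back to the key server so others can see that you vouch for it.
    gpg --keyserver hkp://pgp.mit.edu --send-keys D4188BF6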

I realize there was a lot of terminology thrown around in this post that you probably weren’t familiar with. I apologize for that. I also apologize for not giving step-by-step instructions for individual key managers and software. I thought it would be better if I gave more general instructions that should be applicable to any key manager. There should be guides, documentation and forums available to give specifics for most of the popular key managers. If you’re having trouble with that (or anything in this entire series), leave me a note and I’ll see if I can clarify things for you or point you toward an answer.

I’m basically done with “new educational material” for this series. In my final post, I’m going to outline some ways that the medical community could use the tools described here to change the way health care providers communicate. Since that was my original intent for this series, I hope you’ll tune in for the final installment. It may have a lot of “what ifs,” but it should be interesting to think about.

Medicine in an electronic age

The following is the fifth (and probably final) post in a series entitled, “Securing your Email.” I’ve spent the majority of the series talking about practical things like why secure email is important and how to get started with public-key cryptography. If you look back at my first post, you’ll see that the reason I went out and learned all of this (and wrote about it ad nauseam) is that I feel like it’s an incredibly interesting and important topic where medicine meets technology.

Communication throughout the world is becoming more and more electronic, and things are changing rapidly. Five years ago in the field of medicine, most institutions (including very large hospitals) were still using paper records. In fact, even today a number of institutions still do. Doctors communicated by telephone and pager, and records from other facilities were carried in by hand or faxed. With the technological advances of the last 10 years, today a physician halfway around the world could easily be consulted with a simple email, and a copy of an X-ray or CT scan could be sent electronically. These changes in the way health care is administered present a new set of problems to the industry.

This electronic age spawned a strong concern about health care privacy in the United States, which was addressed by HIPAA. The health care industry dedicates an incredible amount of time and resources to preserving people’s privacy. Institutions spend millions and millions of dollars on “enterprise level solutions” to make sure that they can work online safely. These are not always dollars well-spent, but that’s a topic for another day. Unfortunately, these solutions often end up restricting health care professionals in ways that reduce the utility of the system. As an example, I’m going to talk about email (as you might have guessed).

As I pointed out in my first post, I’ve been thinking about this for a while. How in the world can health care institutions, which are so concerned about privacy and the protection of their patients’ data, not be doing more to provide secure email solutions? I think I’m in an appropriate position to answer that question. I’m part of a committee that has been charged with selecting a new email provider for the hospital. We’re currently looking into a number of different vendors, and a question that consistently comes up is about “email security.” Our committee includes people from IS, the legal department, and human resources, as well as physicians, nurses, and students. Their “email security” questions have the best intentions. They want to make sure that the solution we choose is going to keep our patients’ data safe.

At the same time, however, I feel like there is a gap in what they actually know about email security. I feel like most (if not all) of the people involved just want someone to say “your email is super-duper secure with our system.” One vendor took it a step further and started talking specifics about the cool stuff their system can do to prevent, for example, someone from emailing Protected Healthcare Information, or PHI, to someone outside of Rush. The problem I (and some members of the legal department) have is that sometimes this information needs to be sent out, for example to a lawyer’s office. From a patient’s perspective, if I request that my physician contact me via email with my lab results as opposed to over the phone, should that be discouraged? But it is, and that’s because some of the people in the IS departments across the land realize how insecure email is. So we need to make it more secure, and in order to do that, we have to understand where its security flaws lie.

The problem is that most institutions don’t look at the problem like that. They don’t get an unbiased assessment of email security. Instead they get a vendor to sell them an “email security solution” in which the vendor defines what secure email is and how their solution fits the bill. I’m not saying that all companies are giving a false sense of security, but it’s definitely a concern. It’s exactly why you have to understand the problem before you go looking for an answer. Things would be significantly different if a group of people like the “free software community” assessed a health care institution’s email security needs. In fact, the purpose of my post is to propose the following: the health care community should embrace the free software community’s model of email security.

Health care institutions have all the right resources already in place; they simply need to put them to use. It would be fairly easy to create a public key server for a health care institution. When house staff and physicians begin their tenure, they could easily be required to create a key pair during new employee orientation. Key pairs could be distributed on cheap flash drives for safekeeping and stored on a private server for easy access while on campus. Alternatively, keys could be distributed on smart cards. Since the institution has verified who each employee is, its internal web-of-trust would form easily. As long as someone’s public key has been signed by the institution’s IS department, it can be trusted. These key servers could be made to exchange keys with those of other institutions or even external key servers, such as one set up by the NIH or the Department of Health and Human Services. Physicians also travel to conferences often, where “key signing parties” or booths could be set up to create a more full-fledged web of trust.
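
To make this a little more concrete, here’s a sketch of what the day-to-day commands might look like against a hypothetical internal key server (keys.myhospital.example is a made-up hostname; an institution could run one with free software like the SKS key server):

    # A new employee publishes their public key to the institution's internal key server.
    gpg --keyserver hkp://keys.myhospital.example --send-keys YOUR_KEY_ID

    # A physician looks up a colleague's key before emailing them something sensitive...
    gpg --keyserver hkp://keys.myhospital.example --search-keys colleague@myhospital.example

    # ...and checks that it carries the IS department's signature before trusting it.
    gpg --list-sigs colleague@myhospital.example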

Having public keys freely available would make it easy for physicians to communicate more securely with one another. They’d be able to trust an email from a colleague. Plus, they’d be able to encrypt emails and attachments containing PHI. Physicians would also be able to communicate with their patients via email more freely. Patients could be given instructions on how to acquire their physician’s public key and how to use it. Even better, the process could be simplified by just emailing the patient a link so that an encrypted message could be viewed directly on the institution’s website. Patients wouldn’t need to worry about having the proper GPG client software installed, since they’d just click a link and the web page would decrypt the message for them.

Unfortunately, there are many in the health care IS industry who would rather none of this communication go on via email. They are probably smart to take a firm stance that no PHI should be communicated via email at this point, since their email systems are probably very insecure. The problem with that plan is that PHI is being sent via email right now, and it’s probably not going to stop unless individual institutions put some serious consequences in place.

I have to wonder, though. If the email system were actually set up securely and properly, why couldn’t PHI be sent via email? Why shouldn’t I be able to request my test results in electronic format from my doctor? Unfortunately, these aren’t questions that are going to be addressed by any single institution, and that presents a very big problem in the near future. A number of other industries are currently caught in a downward spiral because they chose not to adapt to the Internet era. Does a similar fate await a health care industry that wants to deny physicians and consumers electronic access to PHI under the guise of HIPAA and “we know what’s best” for protecting patients’ rights? Doing so is just going to drive the process further underground, giving institutions less control over the situation in the future. They’d be better off embracing the idea now and preparing for the future of medicine in an electronic age.