Balancing Privacy and Trust
With A Smart Card Based National Identity Card
Jeffrey Jonas
New Jersey Institute of Technology
jeffj@panix.com
There is much recent speculation in the press about a National Identity card: what form it would take, what data it would hold and how it would be used. This paper argues that a smart card based ID, with proper access controls designed in from the start, can balance the cardholder's privacy with the legitimate need for mutual trust.
Keywords: National Identity, smart card, credentials, mutual trust
The NJIT paper "Public Attitudes towards a National Identity Smart Card" [i] explores what data may be on such a card: date of birth, photograph, fingerprint or other things to verify that the card belongs to the bearer. What if accessing the data were not "all or nothing", but done per item? That way, buying beer would only require verifying that the person's age meets the local law, without revealing anything else, since there is no need to know[1].
The paper also discusses many facets of privacy versus the need to share verified information about one's identity and entitlements. Many of the sources are biased: Larry Ellison naturally favors solutions using his Oracle database product, and many manufacturers are quick to sell their own solutions. Since the government is the "customer", I fear the concerns of the citizenry get too little attention, since we are often excluded from the selection process.
Machine readable ID is not new. Driver's licenses, passports and government issued documents often have the data in human-readable form (text, photo) as well as machine readable (barcode or magnetic stripe). Companies and transit systems trust machine readable ID cards enough to grant access to their facilities solely based on using the card.
Until now, our saving grace has been the separation of personal data across different companies and government departments, so consolidating the information is difficult or impossible. The unification of all that data is the new and scary element of a National ID. But what if, instead of revealing a unique identifier that enables such tracking, or depending on a central database, the ID system revealed only the personal information appropriate for the transaction, with proper access controls that were assured, guaranteed and securely implemented from the beginning?
The U.S. Social Security Number was intended only for correlating your contributions to your Social Security account even if you change employers. Since it was associated with one's pay, it was extended to refer to all of one's finances: income, taxes, banking, etc. Driver's licenses are used as proof of age or as photo ID for transactions totally unrelated to driving. The logic is that the safeguards and checks required to get a driver's license are sufficient for others to trust it.
A major problem with existing ID cards is that the card often reveals more information than is required for the transaction, or the information might be used in inappropriate ways. For example, using my driver's license to prove I'm over 18 also reveals my address and perhaps my Social Security Number. Web sites often ask for a credit card number to indirectly prove the person is over 18, but there's currently no way to prevent that number from being used to place charges on the account. Asking for one's driver's license may be more useful, but perhaps the web site cannot query each state's agency to verify that the card belongs to someone over 18 without also risking exposure of one's driving record and other inappropriate data.
If we admit that there are many valid uses for a nationally issued and validated ID, then proper privacy safeguards ought to be integrated from the start.
How many times have you tried to perform some transaction only to be turned away with the excuse "the system's down"? That's the common failure mode of ANY system that depends on a constant link to a central database. Any system that is not self-contained is vulnerable to such problems, so a machine readable ID number alone is not a complete solution.
I believe that smartcards can achieve mutual trust and privacy using a system where:
1. Information is stored in plain text but is electronically signed by the issuer [ESIG]. That way I can clearly see what is being said about me, but I cannot alter it [LOCKBOX]. (A code sketch of points 1 and 4 follows this list.)
2. The smartcard shares public information without any action by the cardholder. This should be limited to minimal items such as name and photograph, since it is already reasonable and customary to use those to establish identity, and most people are comfortable revealing them.
3. The smartcard requires active consent on the part of the cardholder for accessing non-public data (unless mutual trust was already established). Just as existing cash cards and debit cards require a user-entered PIN to work, I foresee a mechanism where the cardholder sees what non-public data is being requested and must agree before the card releases that data. Furthermore, permission may be "just this once" or "always allow this person" (similar to the way web browsers allow scrutiny and management of "cookies"). Yet there are cases where accessing non-public information is essential without the cardholder's consent, such as a medical emergency or after death. That can be part of the design too, preferably with an audit trail (described below). For example: always allow my primary physician to read and update my medical records, but a hospital may only read the history and append to it.
4. The smartcard maintains an internal audit trail, logging the signature/ID of those making the queries so the cardholder may look them up later and independently verify their identities. Only the cardholder may purge the log; it is append-only to all others. It's only fair: if I'm being asked to prove who I am, I may verify who's asking and why.
5. Lost or stolen cards are detected [STOLEN].
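To make points 1 and 4 concrete, here is a minimal sketch, assuming an Ed25519 signature and the Python "cryptography" package as stand-ins for whatever scheme the issuer would really use; the field names, function names and log format are illustrative, not a proposed card layout.

# Sketch of points 1 and 4: issuer-signed plaintext items plus an
# append-only audit log.  All names here are illustrative.
from datetime import datetime, timezone
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

trent_key = Ed25519PrivateKey.generate()       # Trent, the issuer
TRENT_PUBLIC = trent_key.public_key()          # distributed widely, rarely changes

def issue_item(name: str, value: str) -> dict:
    """Trent writes a plaintext item onto the card, with his signature."""
    return {"name": name, "value": value,
            "signature": trent_key.sign(f"{name}={value}".encode())}

def verify_item(item: dict, issuer_public: Ed25519PublicKey) -> bool:
    """Anyone (Bob, or Alice herself) can check an item offline."""
    try:
        issuer_public.verify(item["signature"],
                             f"{item['name']}={item['value']}".encode())
        return True
    except InvalidSignature:
        return False

audit_log: list[dict] = []                     # append-only to everyone but Alice

def log_query(requester: str, fields: list[str]) -> None:
    """Point 4: the card records who asked for what, and when."""
    audit_log.append({"when": datetime.now(timezone.utc).isoformat(),
                      "who": requester, "fields": fields})

item = issue_item("is over 21", "YES")
log_query("Bob the bouncer", ["is over 21"])
print(verify_item(item, TRENT_PUBLIC))         # True; any tampering gives False

Verification needs nothing but Trent's public key, so it works offline, and the log lives on the card itself, where only the cardholder may purge it.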
Contact-based smart cards are preferred to wireless because:
· People are familiar with the concept that handing the card to someone is consenting to read the public data (similar to showing one's ID card or driver's license).
· The cards are less expensive and can be very small (such as cell phone SIM cards)
· Readers are inexpensive (American Express gives them away to encourage home use of its smart cards), which is important for people to exercise their right to review the data on their own cards
· Wireless cards usually have no "OFF" switch (for instance, the EZ-Pass RFID toll tag has no off switch, leading to aftermarket holders and bags to shield it). While RFID-type ID cards would be ideal for unobtrusive surveillance, there are technical issues (anyone can monitor the wireless communications; jamming and interference) and privacy issues (cards may be read without consent).
To borrow from Bruce Schneier[2], it's easiest to describe protocols in terms of people. Alice is the cardholder. Bob is the other party in the transaction, using the smartcard to verify Alice's identity and credentials. Trent is the trusted agency that issued the card.
In object-oriented programming languages such as C++, access specifiers control who may access data within an object. For this discussion the definitions are modified slightly.
The smartcard itself is the object. The internal microprocessor implements the protocol for all the interfaces.
Public data on the smartcard may be read without special permission. Alice handing Bob the card implies permission to view this data.
Protected data may be read only with Alice's permission. Alice is informed what data is being requested and may agree to all, some or none (usually in some active manner such as entering a PIN or password). The code in the smartcard enforces these rules, not the external reader.
Private data is embedded in the smartcard and is NEVER directly revealed. That's the magic of the smartcard and why it cannot be cloned. The microprocessor in the card executes code that uses the PRIVATE information as part of a cryptographic process so it's never directly revealed, but the results are transmitted on the user's behalf.
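These three levels can be sketched as a small class, with Python standing in for the card's firmware; the class name, method names, PIN check and sample fields are assumptions of mine, not a real card interface. The essential point is that the card's own code, not the external reader, decides what leaves the card.

# Illustrative model of the three access levels, enforced by the card itself.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class SmartCard:
    def __init__(self, public: dict, protected: dict, pin: str):
        self._public = public          # readable by anyone holding the card
        self._protected = protected    # released only with Alice's consent
        self._pin = pin
        self._key = Ed25519PrivateKey.generate()   # PRIVATE: never leaves the card

    def read_public(self) -> dict:
        """Handing over the card implies consent to read this much."""
        return dict(self._public)

    def read_protected(self, fields: list, entered_pin: str) -> dict:
        """Alice sees which fields are requested and consents with her PIN."""
        if entered_pin != self._pin:
            raise PermissionError("cardholder did not consent")
        return {f: self._protected[f] for f in fields if f in self._protected}

    def sign_challenge(self, challenge: bytes) -> bytes:
        """Private data is only *used*: the key signs, it is never revealed."""
        return self._key.sign(challenge)

card = SmartCard(public={"name": "Alice Smith", "is over 21": "YES"},
                 protected={"address": "1 Main St", "SSN": "000-00-0000"},
                 pin="1234")
print(card.read_public()["is over 21"])                      # no consent needed
print(card.read_protected(["address"], entered_pin="1234"))  # consent via PIN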
The following scenarios demonstrate how the smartcard facilitates Alice proving her ID and necessary information in a secure manner that assures her privacy too.
Alice gets her smartcard from Trent (a trusted agency that issues the cards: a government agency or private company). Trent verifies Alice's identity via traditional means (birth certificate, photo ID, passport, etc.) and transfers that to the smartcard.
Inside Alice's smartcard is:
PUBLIC:
· name: Alice Smith
· gender: F
· is over 18: YES
· is over 21: YES
· is US citizen: YES
· has valid driver's license: YES
· her photo
· 5'6"/168 cm, 150 lbs/68 kg, brown hair, blue eyes
· MedicAlert (perhaps with stickers on the card too)
· other medical ID such as a Divers Alert Network tag [ii]
· fingerprint or other common biometric
PROTECTED:
· address
· date of birth
· Social Security Number
· driver's license number, restrictions
· Passport number, visas, immigrant status
· record of vaccinations
· medical records
· DNA
· criminal record
· military ID, status
· organ donor
· emergency contacts
· will/estate intentions
PRIVATE:
· her super-secret National ID number
· private encryption keys
Alice goes to a bar and hands the card to Bob the bouncer. Using only the public data, Bob verifies that Alice is over 21 (the state's legal age), matches her to her photo and lets her in. Bob knows that the data is trustworthy since Trent's signature is valid (checked using Trent's permanent public key). Bob also verifies that Alice's smartcard is not stolen [STOLEN]. Bob has no need to ask Alice for her driver's license or anything else, since he trusts that Trent did that already when issuing Alice's smartcard. Unlike the current practice of using a driver's license, Alice didn't reveal her address, her exact age or anything else Bob has no need to know.
Alice sees an "R" rated movie. Bob the usher uses Alice's smartcard to verify that she's over 18 by checking the public boolean "is over 18".
Alice browses a web board that discusses adult topics. Her smartcard securely communicates directly with the web server and shares the public boolean "is over 18". Perhaps it's used to log her into the system too by verifying her name, or perhaps the board ignores that and allows her to communicate anonymously using the nickname of her choice. The smartcard only needed to share one bit of information: that the person at the keyboard is over 18.
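One plausible shape for such a one-bit exchange is a simple challenge-response: the server sends a fresh nonce, and the card answers with the issuer-signed "is over 18" item plus its own signature over the nonce so the answer cannot be replayed. This is an assumption of mine, not a specified protocol, and the message layout is illustrative.

# Hypothetical one-bit age check for the web board scenario.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

trent_key = Ed25519PrivateKey.generate()   # issuer
card_key = Ed25519PrivateKey.generate()    # lives only inside Alice's card

signed_item = {"name": "is over 18", "value": "YES",
               "signature": trent_key.sign(b"is over 18=YES")}

# 1. The server issues a fresh challenge.
nonce = os.urandom(16)

# 2. The card answers: the signed item, plus proof it holds the card's key.
answer = {"item": signed_item,
          "proof": card_key.sign(nonce + b"is over 18=YES")}

# 3. The server verifies both signatures offline (verify raises on failure)
#    and learns exactly one bit about Alice -- nothing else is transmitted.
trent_key.public_key().verify(answer["item"]["signature"], b"is over 18=YES")
card_key.public_key().verify(answer["proof"], nonce + b"is over 18=YES")
print("age verified, nothing else revealed")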
Alice goes to a casino. Bob the security officer verifies that Alice is of legal age, but also needs to check her criminal record (since certain criminals are not allowed to gamble). Alice sees that request on the verifier's display and agrees to share her protected data by entering her PIN (or she might decline and not proceed). Alice hits the jackpot. Bob now requests Alice's authorization to read her Social Security Number from the card to report the winnings to the IRS. Bob could have asked for Alice's SSN upon entry, but he had no demonstrated need to know at that point, and many people might protest by refusing to comply and taking their business to casinos without that entry requirement.
Alice goes to an interview. Bob the interviewer uses the public data on Alice's smartcard to verify that Alice is a US citizen, old enough to work, and able to drive, all of which relate to the job requirements. Bob can NOT access Alice's age or other information without her consent, because he's not allowed to know that during the interview process, to prevent discrimination [GOV-PRIV]. Alice is hired and accepts the position, and THEN allows Bob to access her Social Security Number, address and other data for which the employer has a demonstrated need to know. If Bob tries to read the "emergency contacts" from Alice's card, she might refuse, either because it reveals information she wants to keep confidential (maybe she's ashamed of her family?) or because she wants to prevent the employer's copy from going out of sync should the data on her card be updated.
Alice goes on vacation and heads to the airport. Bob the immigration agent needs to verify Alice's citizenship status, her passport, the proper visa for the country she is visiting, and the required vaccinations. Since this is protected data, a display shows Alice what information is being requested. Alice agrees by entering a PIN/password to release the information for this time only. The exit visa is then recorded on Alice's smartcard.
Alice has a serious accident. Bob the first-aider sees her MedicAlert bracelet, or stickers on the smartcard, or reads the smartcard's public data, and acts accordingly for a diabetic who's allergic to penicillin. The attack is serious and she is admitted to the hospital. The staff don't overlook her special needs, since the smartcard shares them electronically along with her other medical information. Since Alice is not conscious, the emergency room uses the proper codes and the smartcard releases her protected medical information. But the smartcard also records a timestamped audit trail of who accessed the protected information, so she can review it later.
Whether issued by the federal government, state governments or private companies, there are genuine needs for people to prove things about themselves. People are punished for giving alcohol, cigarettes or "adult" material to minors, yet there's no universal way for them to protect themselves by verifying the other person's age, particularly if it's not done in person (over the phone, by mail order or via the Internet). Several ad-hoc solutions are currently used but not all are trustworthy (particularly with fake IDs, forged passports, etc.). Technology may thwart forgery, but there's still the question of what is on the ID and how much is revealed to whom, when and why. Smartcards are a very useful technology since they don't necessarily depend on a central database and, if used in a respectful manner, allow the cardholder to determine what information is shared, with proper audit trails to assure mutual trust.
Perhaps I'm being rather idealistic about the total responsibility and accountability of this system, particularly in light of recent legislation mandating DRM and other faulty systems. But it's not the technology that's faulty. It COULD be done "right": right now, with existing technology, on time and on budget, if only the right people were empowered to see it through.
[1]"need to know" has special meaning to military security. When something is on a "need to know basis", only those with the proper clearance and a demonstrated need to access the information may get it. The conversation goes like this "tell me about xyz" "why" "my need to know is ..."
[2] Bruce Schneier is an expert in cryptography and the author of many authoritative texts and the Crypto-Gram newsletter: http://www.counterpane.com/crypto-gram.html
[i] Hiltz S.R., Han H., Briller V., 2003. "Public Attitudes towards a National Identity 'Smart Card': Privacy and Security Concerns." Proceedings of the Hawaii International Conference on System Sciences (HICSS), 2003.
[ESIG] What is an electronic signature?
Pretty Good Privacy (PGP) is commonly used for email. Quoting http://www.pgp.net/pgp-faq/:
PGP can also be used to apply a digital signature to a message without encrypting it. This is normally used in public postings where you don't want to hide what you are saying, but rather want to allow others to confirm that the message actually came from you. Once a digital signature is created, it is impossible for anyone to modify either the message or the signature without the modification being detected by PGP.
The data on the smartcard can be verified offline by knowing the issuer's public key (which ought to rarely change).
Verifying if the ID is lost or stolen is described in [STOLEN].
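The same idea can be shown outside of PGP; below, an Ed25519 key stands in for the issuer's key (an assumption made for brevity, since PGP/GnuPG clear-signing behaves the same way): the message stays readable, and any modification is detected.

# Signing without encrypting: the message remains plaintext, but any change
# to it (or to the signature) is caught when verifying.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer = Ed25519PrivateKey.generate()
message = b"name=Alice Smith; is over 21=YES"   # open for review
signature = issuer.sign(message)

issuer_public = issuer.public_key()             # published once, rarely changes
issuer_public.verify(signature, message)        # passes: the data is genuine

try:
    issuer_public.verify(signature, b"name=Alice Smith; is over 21=NO")
except InvalidSignature:
    print("tampering detected")                 # any modification is caught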
To learn more about secure protocols, see:
http://www.mathematik.uni-marburg.de/~gasi/Doc/Div/kerberos-dialogue.html
http://www.faqs.org/faqs/kerberos-faq/general/
http://www.contrib.andrew.cmu.edu/usr/shadow/kerberos.html
http://www.isi.edu/~brian/security/kerberos.html
[LOCKBOX] While I would prefer that the data on the card be in plaintext so it's open for review and verification, perhaps there are times the data must be secret, even from the citizen. There are good models for achieving that while retaining access control.
Let's say a car maker wants only authorized mechanics to work on your car.
· If you get the key to your hood, then you can open it yourself, or hand it to unauthorized mechanics.
· If all authorized mechanics have the key to open ALL hoods of the cars, then they could open your car without your permission.
· The hood could require BOTH your key and the mechanic's key to open (similar to safety deposit boxes), but that requires 2 locks.
WHAT IF: you had a box with your engine compartment's key in it, but the box is locked and only the authorized mechanic has the key to open the box to get your car's key. You hand the box to the mechanic only when needed, and it's handed back to you with the key inside when the car is returned. That way you can't use the engine compartment key, but you retain control over it. Similar systems are used by car dealers (each car's keys are in a box that all salesmen can access, but the box's lock can be changed to thwart theft by previous employees, or perhaps it's time-locked too). Visiting nurses have similar boxes on people's doorknobs: they need a combination to open the box to get the key. The house key NEVER leaves the premises.
Following the locked-box analogy, a smartcard may hold information about me in a form that I cannot read, but I can still participate in granting WHO may read that data!
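The locked-box model maps naturally onto key wrapping: seal the record under a data key, then encrypt that data key to the authorized reader's public key, so the cardholder carries everything but can open nothing. The sketch below is one way that could look, assuming the Python "cryptography" package; the parties and record contents are illustrative.

# Locked-box idea with cryptography in place of physical keys.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The authorized reader (say, Alice's physician) holds a key pair.
physician_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Trent seals a record that Alice herself may not read.
data_key = Fernet.generate_key()
sealed_record = Fernet(data_key).encrypt(b"restricted medical note")
wrapped_key = physician_key.public_key().encrypt(data_key, OAEP)

# Alice's card holds (sealed_record, wrapped_key).  She cannot decrypt either,
# yet nothing is revealed until she chooses to present them to the physician.
recovered_key = physician_key.decrypt(wrapped_key, OAEP)
print(Fernet(recovered_key).decrypt(sealed_record))    # b'restricted medical note'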
[STOLEN] What if the smartcard is lost or stolen?
I fear a central database of invalid cards is inevitable, since you can't depend on a stolen card to say "I'm stolen!". That opens Pandora's box: will the National ID number be public data then? I'd prefer some way to verify the card is not on the stolen list without revealing the National ID number. I'm unsure how to accomplish that.
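One possible pattern, offered only as a sketch and borrowed from how certificate revocation lists work: publish fingerprints (hashes) of revoked card public keys, which a verifier already sees when checking signatures, so the National ID number itself never appears in the hotlist. This is an assumption on my part, not a settled answer, and the fingerprint remains a correlatable identifier; it only keeps the National ID out of the list.

# Hedged sketch: a hotlist of revoked card-key fingerprints, no National ID.
import hashlib

def fingerprint(card_public_key_bytes: bytes) -> str:
    return hashlib.sha256(card_public_key_bytes).hexdigest()

# Published by Trent as cards are reported lost or stolen.
revoked = {fingerprint(b"stolen-card-public-key")}

def card_is_revoked(card_public_key_bytes: bytes) -> bool:
    return fingerprint(card_public_key_bytes) in revoked

print(card_is_revoked(b"alice-card-public-key"))    # False: not on the hotlist
print(card_is_revoked(b"stolen-card-public-key"))   # True: reject this card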
If there's no central database of the contents, wouldn't the replacement card lose all data that was acquired since being issued?
The data's validity is from the electronic signature regardless of the medium, so it should be perfectly valid for the cardholder to back up all public and protected data on their own (preferably in some encrypted form as well as locking up the backup medium). Perhaps banks will offer virtual safety-deposit boxes to safeguard the card contents, protected by cryptographic keys and under extreme security similar to today's safety deposit boxes and vaults.
When a replacement card is issued, Alice gives Trent her backup to restore onto the card. Trent validates all the data (via electronic signatures and other means to assure it's still true) and writes it to the smartcard, but with new signatures all bearing the current timestamp. Similarly, when Alice needs to change her data, Trent must participate by validating her claims and placing new signatures on all the data so the signatures are coherent.
What's to stop Alice from changing her address at will? Alice might have changed her address several times, each time saving the complete record with a valid signature from Trent. That could be detected by a hierarchy of signatures: a signature per data item, and a signature for the entire card. A reinstated old address may carry a valid signature for that piece of data, but the overall signature will fail. Or perhaps the timestamp of the address's signature cannot differ from the overall card signature. This might require some private data that only Trent can set, so anyone reading the address then uses a validation function in the card (e.g. verifyTimeStamp("address") returns OK if the timestamp of the address signature matches Trent's private timestamp for card validation, without revealing the timestamps). There are many existing schemes to address that problem.
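A sketch of that signature hierarchy, assuming one issuer signature per field plus one card-level signature over all the field signatures and an issue timestamp (the field names and layout are illustrative): an old address record that is valid on its own still breaks the card-level check.

# Per-field signatures plus one whole-card signature over them all.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

trent = Ed25519PrivateKey.generate()

def sign_field(name: str, value: str) -> dict:
    return {"value": value, "sig": trent.sign(f"{name}={value}".encode())}

def card_digest(fields: dict, issued: str) -> bytes:
    return issued.encode() + b"".join(fields[n]["sig"] for n in sorted(fields))

def card_is_coherent(fields: dict, issued: str, card_sig: bytes) -> bool:
    try:
        trent.public_key().verify(card_sig, card_digest(fields, issued))
        return True
    except InvalidSignature:
        return False

old_address = sign_field("address", "1 Old Road")          # valid on its own
fields = {"name": sign_field("name", "Alice Smith"),
          "address": sign_field("address", "9 New Street")}
card_sig = trent.sign(card_digest(fields, issued="2004-01-01"))

print(card_is_coherent(fields, "2004-01-01", card_sig))     # True
fields["address"] = old_address                             # swap in old record
print(card_is_coherent(fields, "2004-01-01", card_sig))     # False: detected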
[ii] http://www.diversalertnetwork.org/
People in high-risk occupations or sports often make their medical records readily available. There are several companies and organizations addressing that need with technologies such as microfiche-on-a-card, or an ID and number for medics to call.
[GOV-PRIV] While government agencies are usually bound by law not to ask for personal information unless there's a demonstrated "need to know" (because age, religion, gender, etc., could be used for bias or discrimination), private companies are usually under no such obligation. When a company gets too nosy and invades one's privacy, the consumer's only option is to give up and take their business elsewhere (if possible). I fear that such abuses will continue regardless of the technology, safeguards and disclosures until laws are extended to protect one's privacy. I also fear that continued "outsourcing" of government business to private companies is a deliberate action to avoid such scrutiny and accountability.