Tags: access control, technology implementing law, privacy technology, technology for lawyers, accountability, knowledge discovery for litigation, information management, data protection, digital evidence, technology for business managers, global outsourcing, information security, digital rights, privacy, eDiscovery, forensics
Discovery, at its simplest, is the concept that one party to a lawsuit can learn what the opposing party knows that is relevant to the resolution of the case. In the US, this had long been accomplished through gamesmanship and strategy (think, hide-and-seek meets go-fish) while, for example, the UK had moved on to affirmative disclosure, the idea that each side needs to identify the truly relevant and provide it. In either case, the parties have needed to decide what data to preserve and how to search it. For a variety of reasons, corporations are adding and deleting data all the time -- doing things like updating client or supplier addresses, changing prices, adding sales, marking deliveries. So, typically, one needs to select a moment in time that's relevant to the issues in a lawsuit and look at all data from that time or up until that time. This is no easy task, as the challenges of selecting the moment, deciding how to save the data, and which tools will provide the best search result are all subject to debate.
Handling a case that involves data in multiple countries compounds the challenge. The EU has had detailed and tightly controlling rules about the handling of information about people by commercial entities for nearly thirty years. By comparison, the US historically has shown limited concern about the privacy of people whose identities appear in commercial files. For example, in many cases EU rules prohibit making the sort of "moment in time" copy of entire systems described in the last paragraph, and, as a practical matter, they prohibit sending data about people out of the country. Recently, these rules have come into head-on conflict with courts in the US requiring that certain information be turned over in discovery. The decision not to violate the EU rules has resulted in some significant financial penalties being imposed by US judges, while the decision to violate the EU rules and provide the data in the US has resulted in some equally significant financial penalties being imposed by European judges, leaving litigators between a rock and a hard place.
Much discussion is ongoing about ways to resolve this problem. For example, governmental, public policy, and commercial bodies are discussing possible changes to their rules. New forms of insurance may be offered to indemnify parties caught in the current situation. At the same time, there is a quiet march forward of new technologies which may resolve some of the issues. For example, systems that track each data transaction at a very granular level and account for their compliance with rules, called "accountable systems", are in development. Such systems would make it possible to understand the data in the system at a particular moment in time without requiring a "copy" to be made. And, they would be able to recognize competing data rules and apply the correct ones, wherever the resolution of a rules conflict is possible. In theory, this technology might also make it possible to transfer the substantive portions of the information without the personal information, so that the parties could define very small subsets that are relevant and actually required to be disclosed, thus limiting the release of personal information to subsets so small that requirements, like notice to the individuals in the data, could reasonably be met.
While this new type of technology offers promise for resolving some of the cross-border eDiscovery challenges without requiring any jurisdiction to change its rules, it has drawn relatively little attention in this context to date. Perhaps this is because the technology needs to be refined and then implemented in the day-to-day digital business practices of organizations before it can be capitalized upon to address this issue. How long it will be before this occurs will be driven by how quickly people recognize the problems this technology can solve.
Tags: access control, identity management, technology implementing law, privacy technology, technology for business managers, law about technology, public policy, technology b2b customer service, information security
It's not news that our society is divided into technological haves and have-nots. Much has been written about the advantages lost or gained - educational, professional, and social - based upon the primacy and recency of one's technology. Recently, I've become increasingly attuned to another place where technological caste matters -- legal standards.
It's been clear to me for quite some time that the lawyer who resonates with technology can do more successful and faster legal research; propound vastly superior discovery requests; and produce substantially more incisive disclosures. It's now becoming increasingly clear to me that the law itself is being skewed by those of us who live to keep up with the next big thing in technology. Debates among lawyers rage in my email inbox about the differences in things like encryption technologies and metadata standards, with lots of cool techie references to things like ISO, NIST, Diffie, OASIS, and XACML.
In the meantime, I was on the Social Security Administration website the other day and they wanted me to use an eight-character alphanumeric password (case insensitive, no special characters) to upload W2 and other sensitive tax information. My bank's brokerage affiliate is using the same outdated and readily hackable password technology. I still see commercial and bar association websites seeking personal and financial information without indicating that they're using SSL or some other baseline method of securing the information. I still get requests from security professionals to email my Social Security Number. If you're not particularly technical, trust me, none of these are good things.
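For the technically inclined, a rough back-of-the-envelope sketch shows why that password policy is so weak. The one-billion-guesses-per-second attacker speed below is an illustrative assumption, not a measured figure:

```python
# Compare the brute-force search space of a case-insensitive
# alphanumeric password with one drawn from the full printable
# character set, at an assumed attacker speed.

def search_space(charset_size: int, length: int) -> int:
    """Total number of possible passwords of the given length."""
    return charset_size ** length

# Case-insensitive letters + digits = 26 + 10 = 36 characters.
weak = search_space(36, 8)

# Mixed case + digits + ~32 special characters = 94 characters.
strong = search_space(94, 8)

GUESSES_PER_SECOND = 1_000_000_000  # assumed attacker speed

print(f"case-insensitive alphanumeric: {weak:,} passwords")
print(f"  ~{weak / GUESSES_PER_SECOND / 3600:.1f} hours to exhaust")
print(f"full printable character set:  {strong:,} passwords")
print(f"  ~{strong / GUESSES_PER_SECOND / 86400:.0f} days to exhaust")
```

Under these assumptions, the case-insensitive password falls in under an hour of dedicated guessing, while the richer character set holds out for months - a difference of more than three orders of magnitude from the same eight characters.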
The distance between these two realities has me thinking about all the places where these two technological castes will compete to set legal standards. For example, does a "time is of the essence" clause apply the perception of time of a BlackBerry owner or that of a person without a laptop?
As the new administration provides the first coordinated national focus on technology, I'd like to add this to the list. Perhaps the new national CTO (yet to be appointed) could work with the American Bar Association and other leaders to identify a rational strategy for standards setting in such a technologically bifurcated society.
Computer hacks were the topic of tech news on the day after Senator Obama's historic election. On Wednesday, Newsweek reported that the Obama and McCain campaigns were the subject of computer hacks during the campaign. The Obama campaign reported a possible email phishing attack this past summer. They were ultimately told by federal authorities that both the Obama and McCain campaign computers had been compromised. Reports are circulating that the attacks came from a "foreign entity" and lifted significant amounts of data from both campaigns.
Also on Wednesday, malware creators took advantage of the tremendous interest in the election and began sending emails with "Obama" somewhere in the subject line. The most common subject lines promised video of a speech, additional election coverage, or new interviews. One security company alone reported that it had filtered more than 10 million emails in less than 6 hours on Wednesday morning. Apparently, hundreds of thousands of people sought to open them and were instead infecting their computers with malware.
These two events highlight the importance of email security. This is the first major election heavily conducted, financed, covered, and influenced on the web. It reflects the transition to technology for ever-increasing numbers of the population. And, it reflects our ready acceptance of the transition.
Too many people assume that their spam filter, anti-virus software, etc., will protect them. Yet, any technology professional will tell you that firewalls and software alone are not enough to protect a computer from data theft or destruction. They'll also tell you that email is the easiest means of attacking computers because people still act before they think. A huge percentage of hacks rely on "social engineering" - convincing a person to do something that works to the hacker's benefit.
Education is still a significant tool in the computer security arsenal. Users must learn to stop and ask themselves whether the email is likely to be what it seems. First the easy questions: How likely is it that some stranger will really send you millions of dollars? Is your US bank really going to send you any request from an email address that doesn't contain the company name? And, if your friend really did lose a wallet on a spur-of-the-moment vacation how likely is it that she'd email you for a credit card number instead of calling her husband, the consulate, or American Express for help?
Is it possible to go the next step and teach users a little technology? They should always check to see if the attachment they're about to open like a present on Christmas morning ends with ".exe" (a file that will execute some program). If it does, they should beware and seek tech support. Or can we teach them to look at the "properties" of the link they're about to click, see the web address ("URL"), and recognize that the source is the wrong country? A quick look at the domain registry will make it pretty obvious that something that purports to come from around the corner has a two-letter code that means it's really coming from a country around the world.
With so much hacking going on, the problem is no longer just a technical one. More laws are creating responsibility to take reasonable care to protect other people's information and liability for failing to do so. It is important to remember that with these changes, the standard of care is expected to improve, and what was reasonable yesterday may be unreasonable today.
Train wreck caused by text messaging? Multiple news reports have raised the possibility that the conductor of a Los Angeles train was sending text messages just before the train crashed and many were killed. The questions under investigation are whether this is true and whether the conductor was distracted by it when he should have seen red light signals indicating the hazard ahead.
This is the saddest outcome of an issue I, and others, have been raising for years. The use of technology for non-work activities has pervaded the work environment to the extent that it is impacting work performance. The obvious problem is lost revenue and reduced profits to the employer, but sometimes it correlates to increased liability. If true in this case, it means lost lives.
If the shop clerk with an MP3 player or cellphone in the ear is too distracted to answer questions accurately or make correct change, what makes me think my car mechanic, stock broker, or doctor's lab technician isn't? In 2006, eDiscovery companies were estimating that one quarter to one third of all emails flowing through a corporation were personal email. At the time, I wrote about the thousands of football and fantasy football gambling emails that had passed through Enron. I also wrote about the dirty jokes, hook-ups, and other sex emails there.
It's getting technically easier to discover that people aren't really working when they claim to be. This summer, before lecturing at a state bar convention, I stood in the back of the large hall and observed what people were doing. I explained the ways I could prove that they had been using their laptops, BlackBerrys, and iPhones to shop on the web, play video poker, and text friends and family. I explained how, in the not-too-distant future, these activities will probably void the professional certification credit they thought they were earning by being present but not paying attention.
This week's train wreck brings more attention to the debate about just how much people's attention is diverted and what the consequences can be. At a New York panel discussion last fall, a group of senior financial industry compliance managers uniformly said they weren't concerned about personal web, email, and phone use at work. Perhaps they ought to be.
Google's mantra is "organizing the world's information." If you're organizing information in your corporation or organization, that might not be a viable option. URIs present the opportunity for everyone in a web environment to make a step in that direction.
One of the major challenges for large organizations is that different people, departments, etc. use the same words to mean different things. Every business and subset of business has "terms of art", often common words or phrases that mean something special to that group.
To a programmer, the word "beta" means the test of software before it's released for general use. To a stock broker, "beta" is a number that shows whether a stock is more or less volatile than the market. They're in different industries so, talking face-to-face, it's pretty easy to tell that they're talking about different things.
There are plenty of examples, though, where the same word in the same industry means different things. In the financial industry, "wealth" is used to define the threshold for accepting clients for certain services. Every institution picks its own number and they can be the same or different (e.g., over $1 million in net worth; over $1 million in liquid funds invested; over $1 million in assets other than personally-used real estate). When those institutions merge, the inconsistent definitions become an impediment to merging their data.
In computer systems, there historically weren't good ways to know which meaning someone had in mind when they put a particular word in a file or database. The problem was the same for the names of fields or columns. Now, we have metadata -- data that lets us provide information about data. So, we can stick tags on data in a file that tell us things like where it came from, what day it was collected, or what size it's supposed to be.
A URI (Uniform Resource Identifier) can store the definition you have in mind. So Citi/define/wealth can have a different meaning from UBS/define/wealth. And, your system can point to the appropriate one whenever "wealth" appears in your data. This makes it possible to merge data and retain different meanings or to compute across disparate meanings.
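As a sketch of how this might look in practice - the URIs and records below are invented for illustration; real deployments would use each institution's own namespace:

```python
# Attach a URI to each use of an ambiguous term so that merged
# data retains a pointer to the definition it was created under.

# Two institutions define "wealth" differently.
DEFINITIONS = {
    "http://citi.example.com/define/wealth":
        "net worth over $1 million",
    "http://ubs.example.com/define/wealth":
        "liquid invested funds over $1 million",
}

# Each record tags its "wealth" flag with the definition in use.
clients = [
    {"name": "A. Client", "wealth": True,
     "wealth_def": "http://citi.example.com/define/wealth"},
    {"name": "B. Client", "wealth": True,
     "wealth_def": "http://ubs.example.com/define/wealth"},
]

# After a merger, the combined data can still distinguish meanings.
for c in clients:
    print(c["name"], "- wealthy per:", DEFINITIONS[c["wealth_def"]])
```

The design choice here is the important part: the tag travels with the data, so the two records can sit in the same merged table without silently collapsing two different meanings of "wealth" into one.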
Recently, I was invited to facilitate a workshop to learn about customer data uses, flows, and needs. It was an interesting idea, so I agreed.
"Know your customer" has become a hackneyed phrase in fairly short order. One of the post-9/11 bundle of laws, intended to gain anti-terrorism assistance from the public, was a "know your customer" mandate requiring financial institutions to better understand who their customers are and where their money comes from. Like many things we do in this automated life, it seems to have quickly lost its meaning in favor of a single massive data collection effort...like when my bank of many years -- which has seen my entire transition from debt to net worth through both my business accounts and the deposit of every paycheck -- asks me for ID.
The workshop was intended to provide an opportunity for a fairly large group of data architects to hear a group of customers talk about their business day and tasks; how they interact with each other; and what they want. It was my job to draw them out over the course of two days, to find slices of life to talk about and elicit tremendous detail. It was expected that we would have an accelerated opportunity to gather needed data elements and identify system access requirements.
With facilitation, the customers opened up about their work lives. They described a tremendous amount of human interaction to obtain information. They described phoning folks in other parts of the organization to find out information they wanted. We, the folks with strong information technology orientation, thought we were making a break-through, identifying systems to which these customers could or should get access.
What happened next was unexpected. When we sought to validate these system access requirements, the customers repeatedly and politely told us we misunderstood. They explained that they liked to get information in this unautomated fashion. They liked the opportunity conversation gave them to get context -- group meaning of terms, background for the way information is gathered, information that's inappropriate for permanent records, and other related information.
Since then, I've been thinking about what it really means to know your customer. As the provider of services, it's not enough to learn your customer's business. And, it's not enough to spend time in their space and observe them at work. You need to do those things but, in the end, if you really want to give them what they want, sometimes you just need to ask.
What do software development and the Titanic have in common? They both hit icebergs! It sounds like a bad joke, but there's an important kernel of truth here.
The software development process, unfortunately, has a predictable pattern. You, as the business leader, meet with software developers and reach agreement on "system requirements." The programmers toil and arrive with the new software, and both sides are immediately unhappy. Developers think you keep changing your mind. You think developers don't listen.
What really happens is what I call "the iceberg" phenomenon. Both sides believe they have a meeting of the minds and don't realize that their agreement rests upon a tremendous number of assumptions. You and the developers each understand the words, phrases, and concepts of any requirements document in the context of your own experience and environment. Like an iceberg, the words that are used are the 10% that both sides can see; under the surface lies the 90% that defines their differences and creates the many risks that increase time and cost.
For example, programmers don't know if a particular design will have legal implications or will cause problems for someone in the supply chain. Business professionals are facing time pressures that keep them from providing the tiny details of date formats (12/31/2007 vs 31/12/2007) or country codes which may be critical to the business. On the other hand, I once discovered business professionals phrasing a requirement in a way that was about to cause ten weeks of programming, when the right question reduced the issue to a ten minute text change. That's an iceberg!
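The date-format detail is easy to demonstrate: the very same string yields two different dates depending on which convention the parser assumes. A minimal sketch using Python's standard library:

```python
# One ambiguous date string, two readings: exactly the kind of
# unstated assumption that hides below the waterline of a
# requirements document.

from datetime import datetime

raw = "03/04/2007"

us_reading = datetime.strptime(raw, "%m/%d/%Y")  # US: month/day/year
eu_reading = datetime.strptime(raw, "%d/%m/%Y")  # EU: day/month/year

print(us_reading.date())  # 2007-03-04  (March 4th)
print(eu_reading.date())  # 2007-04-03  (April 3rd)
```

Neither side's reading is "wrong" - which is precisely why the difference never surfaces in a requirements meeting and only appears when real data flows through the system.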
So, what's the "sonar" for this problem? Here are four alternatives:
1) Find a translator. If you can, find someone who has worked in both worlds and can serve as the "translator" for both sides.
2) Make everyone a translator. Assign someone to create a "lexicon" - a glossary of terms that are unknown to one side or the other. To avoid definitions filled with new inscrutable terms, ask contributors to check with a twelve year-old to see if the explanation is intelligible.
3) Create ambassadors. When possible, insist that a designer or programmer spend time at the side of the person(s) who will use the system. It's amazing how much the developers can learn through watching the workflow, overhearing an occasional conversation, or a chat at the coffee machine. If they are inalterably offsite, consider collaboration tools, giving people the ability to see and hear as much of the user's current business process as possible.
4) Require an "open" development environment. Remember that the vendor's work is not a surprise Christmas present. Consider the unorthodox approach of keeping the vendor's progress accessible at all times. Rather than waiting for benchmarks, assign someone to regularly view the developer's work. Developers using best practices will have wire diagrams, screen mock-ups, and functioning modules that will allow for course correction long before the code is finished.
I know that everyone is facing business pressure to be somewhere else, doing something else -- usually something that seems more directly relevant to the bottom line. But, I guarantee that time spent with developers while they work will save vastly greater amounts of time and money later.
When building or modifying a web business, consider two broad topics while deciding how to address consumer privacy: volition and culture. "Volition" addresses the voluntariness of the release of information. "Culture" addresses the general perceptions of information in a community. These are considerations separate and apart from legal requirements or liability potential.
In our culture, there are norms about what is intended to be private from whom. As a general rule, you can think of information disseminated in concentric rings -- the inner ring is typically a spouse, the next is typically immediate family and closest friends, then business colleagues, acquaintances, and strangers. For example, in the case of pregnancy or serious illness, we tell those closest to us first, then the folks at work, and likely never discuss it with strangers. We do the same thing with our home address or phone number.
At the other end of the spectrum, there is information we'll give to anyone who asks -- "how tall are you?" "is that your bag?" "what's your favorite color?" Generally, this is information which we believe can't be used in any negative way, that won't reduce our competitive advantage in business or social settings.
The exception to the concentric rings is when we trade a bit of privacy in return for something we want or need. While we wouldn't usually detail our income and debts to a stranger, we'll give that information to a mortgage broker in order to get a home loan. We're conditioned to respond to the most intimate questions of our life to almost any doctor in order to get treatment. To get a job, we may give up information about others who don't even know we've done it -- for example, the home addresses and phone numbers of family members and references.
When your business is in possession of information about individuals, considering culture and volition will help guide your decisions about what to make public through your website. And, pay attention to how your customer demographic is changing.
Sixteen year-olds and twenty-five year olds may want to publicly list their ages on MySpace because they don't want to socialize with each other, but Barbara Walters is one of a very small number of 78 year-old American women willing to publicly declare her age there. In another country, where age is revered, this might be different.
What someone self-publishes -- on MySpace, a blog, or a professional site -- can be treated as publishable or shareable in almost all contexts. This is quite different from the care to be taken with information people gave in order to get their bills paid or insurance underwritten, even if they were induced to provide it through the Web. The latter is generally the information that creates the most controversy.
If you're not certain how your customers will react to something you're considering, imagine it outside a web context. If you want to post information, imagine how your family member would react if the same information about him was posted on a bulletin board at work. If you're considering selling customer information, imagine your reaction opening a letter from a company you didn't know that said this same information about you had been sold to them.
I'm not suggesting that this is your only consideration. You are in business to make money. Remember, though, that goodwill is so real that it can be given an asset value and any negative impact to it should be balanced against the potential revenue stream.
A friend just sent me a blog post which is a bit of a rant about some comments on privacy, or the lack thereof. It provides a good basis to discuss some concepts and misconceptions about privacy and technology.
What does privacy mean?
Donald Kerr, a Deputy Director of National Intelligence, said that our culture equates privacy and anonymity. Like the blog author, James Harper -- of the Cato Institute and other esteemed institutions -- I disagree that the terms are equivalent in the eyes of the general public. Webster's dictionary describes being anonymous as being unknown or not identified, while defining privacy as keeping oneself apart or free from intrusion. In our culture, volition appears to be a key differentiator. When I close the blinds, I'm choosing privacy. When no one notices me in a crowd, I'm anonymous.
Is it unrealistic to expect privacy?
Kerr asserts that privacy doesn't exist and cites the availability of personal information through MySpace, FaceBook and Google. From a volition standpoint, Kerr's statement is a mixed metaphor. MySpace and FaceBook are entirely voluntary, people deciding to post things about themselves for their friends or the world to see. Google, making great strides at "organizing the world's information", aggregates personal information that may not have been intended or expected to be shared. I recently showed a friend that in five minutes on Google I could find more than his professional profile -- I produced his home address, his parents, his religion, his political leanings, and something about his finances. This undercuts Harper's contrary assertion that people have retained the ability to provide their identifiers to some "without giving up this information to the world".
Can individuals control privacy?
Kerr and Harper are talking about when, whether, and how the federal government should have access to individual information, but the question extends farther. Anyone signing up for access to a newspaper or making a purchase on the web is giving bits of himself away. Most typically, the information is gathered in "cookies", established by the websites and stored on the individual's computer. This summer, one study concluded that 85% of users were aware of cookies, but only about 28% were able to successfully delete them.
The public's misunderstanding about their control over personal information in cookies extends past their technical inabilities. The misunderstanding is exacerbated by a little legal wordplay. Nearly every "privacy statement" I've ever read on an e-commerce website says that the information may be shared with "affiliates" but then doesn't define that term. Each of these companies could call anyone, any company, or any government agency an "affiliate" and give them access to cookies or sell them the information in the cookies.
[Stay tuned for Part II, where I'll talk about what business leaders and system designers can do to offer more privacy and still meet their business goals.]