Beaten Up by Big Data Analytics

Posted by K Krasnow Waterman on Thu, Jan 16, 2014 @ 10:01 AM

Tags: privacy technology, accountability, data mining, big data, b2b customer service technology, privacy, analytics, risk management, forensics

Big data analytics and I have had our first personal run-in. Last year, I wrote and spoke about the risks of big data analytics errors and the impact on individuals' privacy and their lives. Recently, I watched it happen in real time... to me! One of Lexis's identity verification products confused me with someone else, and the bank trying to verify me didn't believe I am me. I was asked three questions, supposedly about myself; because the system had confused me with someone else, I didn't know the answers to any of them; and the bank concluded I was a fraud impersonating myself!

Here's how it played out. I was on the phone with a bank, arranging to transfer the last of my mother's funds to her checking account at another bank. The customer service representative had been very helpful in explaining what needed to be done and how. I have power of attorney, so I have the right to give instructions on my mom's behalf. However, before executing my instructions, the representative wanted to verify that I am who I claimed to be.

In the past, companies confirmed identities by asking me to state facts that were in their own records - facts I had provided, like my address, my date of birth, and the last four digits of my Social Security Number. Instead, the representative informed me, he would be verifying my identity by asking me questions about myself drawn from facts readily available on the internet.

The good: That's a pretty reasonable concept. The typical personal identifiers have been used so much that it's getting progressively easier for other people - bad guys - to get access to them. In a 2013 Pew survey, 50% of respondents said their birthdate is available online. The newer concept is that there are so many random facts about a person online that an imposter couldn't search their way to the answers at the pace of a normal question-and-answer exchange.
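To make the mechanics concrete, here is a toy sketch of how such knowledge-based verification might work. Everything in it is invented for illustration - the profile fields, the matching logic, and the all-or-nothing pass rule are my assumptions, not a description of Lexis's actual product:

```python
import random

# Facts the verifier has aggregated about (supposedly) this person.
# If the aggregator merged in someone else's records, every "fact" is wrong.
profile = {
    "a past street address": "7735 State Route 40",
    "a known associate": "Rebecca Grimes",
    "a former employer": "Consolidated Widget Holdings LLC",
}

def quiz(profile, n=3):
    """Ask n questions drawn at random from the aggregated profile."""
    correct = 0
    for field in random.sample(list(profile), n):
        answer = input(f"What is {field} of yours? ")
        if answer.strip().lower() == profile[field].lower():
            correct += 1
    # All-or-nothing: miss the questions and you are flagged as a fraud,
    # even when the profile -- not the caller -- is what's wrong.
    return correct == n
```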

The bad: The implementation doesn't always match the concept. The customer service representative asked me, "What does the address 7735 State Route 40 mean to you?" Nothing. I later Googled the address to find out where it is; I don't even know the town.

"Who is Rebecca Grimes to you?" To me, no one, I don't know anyone by that name.  "Which of the following three companies have you worked for?"  I had never worked for any of the three companies with very long names.  I explained that I lead a relatively public life, that he could Google me and see that I'd worked at IBM, JPMorgan, etc. That might have been my savior, because next he patched in someone from security to whom I could give my bona fides.  With my credentials in this arena, a google search, and answering the old fashioned questions, the security staffer told the customer service rep he was authorized to proceed.

The ugly: The rest of the population is not so lucky. They can't all talk their way past customer service or play one-up with the information security staff. And big data has some pretty big problems. In 2005, a small study (looking at 17 reports from the data aggregators ChoicePoint and Acxiom and fewer than 300 data elements) found that more than two-thirds of the reports had at least one error in a biographical fact about the person. That same year, Adam Shostack, a well-regarded information risk professional, pointed out that ChoicePoint had defined away its error rate by counting only errors in the transmission between the collector and ChoicePoint, thus asserting an error rate of 0.0008%.

Fast forward: ChoicePoint is gone, acquired by LexisNexis in 2008. My particular problem, the bank InfoSec guy told me, was coming from a Lexis identity service. In 2012, LexisNexis claimed a 99.8% accuracy rate (a 0.2% error rate), but I was skeptical given the many ways accuracy and error can be defined.
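The definitional sleight matters because even tiny percentages are enormous at aggregator scale. A purely illustrative back-of-the-envelope, assuming a hypothetical round number of consumer files (the file count is mine, not a vendor's figure):

```python
files = 250_000_000  # hypothetical number of consumer files, for scale only

for label, error_rate in [
    ("0.2% error (LexisNexis's claimed 99.8% accuracy)", 0.002),
    ("0.0008% error (ChoicePoint's transmission-only definition)", 0.000008),
]:
    print(f"{label}: ~{files * error_rate:,.0f} files affected")

# 0.2% error (LexisNexis's claimed 99.8% accuracy): ~500,000 files affected
# 0.0008% error (ChoicePoint's transmission-only definition): ~2,000 files affected
```

Same data, same errors; the choice of denominator swings the reported harm by two orders of magnitude.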

The problem, though, is larger. At the end of 2012, the Federal Trade Commission did a larger study (1,001 people, nearly 3,000 reports) of credit reporting, another form of data aggregation and one that typically feeds into the larger personal data aggregators. That study found that 26% of the participants identified at least one "material" error, a mistake of fact that could affect their credit report or score. In other words, one in four people found a credit-related error. The FTC did not count other factual errors, but this gives a sense of the scale of error still being seen today.

[Table from the FTC credit reporting study report]

In the FTC study, approximately 20% of the participants sought a correction to their report and 80% of those got a report change in response.  About 10% of the overall participants saw a change in their credit score. Appropriate to today's blog topic, the report table above shows that data vendors agreed with more than 50% of the complaints that they'd mixed in someone else's data.
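Laying those percentages out as a funnel makes the scale easier to see. A rough reconstruction from the figures quoted above (the head counts are rounded illustrations, not numbers from the report):

```python
participants = 1001

disputed = participants * 0.20        # ~20% sought a correction
report_changed = disputed * 0.80      # ~80% of disputers got a report change
score_changed = participants * 0.10   # ~10% of everyone saw a score change

print(f"Disputed a report:      ~{disputed:.0f} people")
print(f"Got a report change:    ~{report_changed:.0f} people")
print(f"Saw their score change: ~{score_changed:.0f} people")
```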

The individual today has a choice between regularly chasing after big data analytics errors and suffering the consequences of mistaken beliefs about themselves. Some very prominent folks in the privacy policy sphere have told me this isn't a privacy issue. I think they're wrong. The Fair Information Practices, which have been in use since the 1970s and form the basis for much of the privacy law and policy around the world, include the requirement that entities handling personal information ensure that it is accurate. How much sense would it make if you had a privacy right to keep people from using accurate data in harmful ways, but no privacy right to keep them from using inaccurate data in the same harmful ways?

Risks in Big Data Predictions

Posted by K Krasnow Waterman on Tue, Jun 11, 2013 @ 09:06 AM

Tags: technology for lawyers, accountability, data mining, big data, technology for business managers, public policy, privacy, knowledge discovery, analytics, risk management

The Centre for Information Policy Leadership is holding its annual membership retreat in Washington this week. The topic is Innovation, Risk, and Big Data. I'll be kicking off the program with a discussion of the risks of getting big data analysis wrong and some risk management questions responsible managers should ask. As someone who has often discussed law and policy with other lawyers or technology with other techies, I always enjoy the opportunity to bridge the gap. I'll be taking a fast walk through the steps in the knowledge discovery process and providing examples and statistics of some of the likely errors at each step. This is in concert with an article I'm co-authoring on the topic; both the talk and the article will be posted here soon.

The Cross-Border eDiscovery Challenge & The Possible Accountable Systems Solution

Posted by K Krasnow Waterman on Thu, Jun 18, 2009 @ 01:06 PM

Tags: access control, technology implementing law, privacy technology, technology for lawyers, accountability, knowledge discovery for litigation, information management, data protection, digital evidence, technology for business managers, global outsourcing, information security, digital rights, privacy, eDiscovery, forensics

Cross-border eDiscovery is a hot topic this year. The decreased cost of storage has resulted in nearly everyone retaining massively greater quantities of information. Email and the Web have driven a shift to less formal, less structured records and files. And globalization of business has caused the relevant information for an increasing number of lawsuits to be spread among multiple countries. Courts have instituted new rules for how parties will engage in discovery related to this digital evidence. And these new rules are putting some lawyers in the cross-hairs of other governmental digital control activities. Lawyers, by and large, are not technologists, and the challenges arising from handling this mass of distributed data are proving daunting. Technology vendors are offering significant assistance, but still more is required.

Discovery, at its simplest, is the concept that one party to a lawsuit can learn what the opposing party knows that is relevant to the resolution of the case. In the US, this had long been accomplished through gamesmanship and strategy (think hide-and-seek meets go-fish) while, for example, the UK had moved on to affirmative disclosure, the idea that each side needs to identify the truly relevant material and provide it. In either case, the parties have needed to decide what data to preserve and how to search it. For a variety of reasons, corporations are adding and deleting data all the time - doing things like updating client or supplier addresses, changing prices, adding sales, marking deliveries. So, typically, one needs to select a moment in time that's relevant to the issues in a lawsuit and look at all data from that time or up until that time. This is no easy task, as the challenges of selecting the moment, deciding how to save the data, and choosing which tools will provide the best search results are all subject to debate.
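To illustrate the "moment in time" idea, here is a minimal sketch of reconstructing a snapshot from timestamped change records. The schema and field names are hypothetical; real eDiscovery and database tools do this with far more machinery:

```python
from datetime import datetime

# Hypothetical change log: every edit to a record is kept with its timestamp.
changes = [
    {"key": ("supplier-17", "address"), "value": "12 Elm St", "at": datetime(2008, 3, 1)},
    {"key": ("supplier-17", "address"), "value": "9 Oak Ave", "at": datetime(2009, 2, 15)},
]

def snapshot_as_of(changes, moment):
    """Return the latest value of each field at or before `moment`."""
    state = {}
    for change in sorted(changes, key=lambda c: c["at"]):
        if change["at"] <= moment:
            state[change["key"]] = change["value"]
    return state

# What did the company's records say when the disputed delivery happened?
print(snapshot_as_of(changes, datetime(2008, 12, 31)))
# {('supplier-17', 'address'): '12 Elm St'}
```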

Handling a case that involves data in multiple countries compounds the challenge. The EU has had detailed and tightly controlling rules about the handling of information about people by commercial entities for nearly thirty years. By comparison, the US historically has had a comparatively limited concern about the privacy of people whose identities appear in commercial files. For example, in many cases EU rules prohibit making the sort of "moment in time" copy of entire systems described in the last paragraph and have rules that as a practical matter prohibit sending data about people out of the country. Recently, these rules have come into head-on conflict with courts in the US requiring that certain information be turned over in discovery. The decision not to violate the EU rules has resulted in some significant financial penalties being imposed by US judges, while the decision to violate the EU rules and provide the data in the US has resulted in some equally significant financial penalties being imposed by European judges, leaving litigators between a rock and a hard place.

Much discussion is ongoing about ways to resolve this problem. For example, governmental, public policy, and commercial bodies are discussing possible changes to their rules. New forms of insurance may be offered to indemnify parties caught in the current situation. At the same time, there is a quiet march forward of new technologies which may resolve some of the issues. For example, systems that track each data transaction at a very granular level and account for its compliance with rules, called "accountable systems", are in development. Such systems would make it possible to understand the data in the system at a particular moment in time without requiring a "copy" to be made. And they would be able to recognize competing data rules and apply the correct ones, wherever the resolution of a rules conflict is possible. In theory, this technology might also make it possible to transfer the substantive portions of the information without the personal information. The parties could then define the very small subsets that are relevant and actually required to be disclosed, limiting the release of personal information to sets so small that requirements, like notice to the individuals in the data, could reasonably be met.
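As a thought experiment, an accountable system's decision point might look something like the sketch below. The two jurisdiction rules are crude stand-ins I've made up - real EU and US transfer law is vastly more nuanced - but they show the core idea: every transfer decision applies the applicable rule and leaves an audit trail:

```python
from datetime import datetime, timezone

audit_log = []  # every decision, allow or deny, is recorded

# Crude stand-in rules, one per jurisdiction of origin (hypothetical).
RULES = {
    "EU": lambda rec: not rec["contains_personal_data"],
    "US": lambda rec: True,
}

def may_transfer(record, destination):
    """Apply the origin jurisdiction's rule and log the decision."""
    allowed = RULES[record["origin"]](record)
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "record_id": record["id"],
        "destination": destination,
        "rule_applied": record["origin"],
        "allowed": allowed,
    })
    return allowed

record = {"id": 42, "origin": "EU", "contains_personal_data": True}
print(may_transfer(record, "US"))  # False -- and the denial itself is logged
```

Because every transaction is logged against the rule that governed it, an auditor (or a court) can later ask what the system knew and did at any moment, which is what makes the no-copy snapshot plausible.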

While this new type of technology offers promise for resolving some of the cross-border eDiscovery challenges without requiring any jurisdiction to change its rules, it has drawn relatively little attention in this context to date.  Perhaps this is because the technology needs to be refined and then implemented in the day-to-day digital business practices of organizations before it can be capitalized upon to address this issue.  How long it will be before this occurs will be driven by how quickly people recognize the problems this technology can solve.

Privacy on the Web - Part I

Posted by K Krasnow Waterman on Thu, Nov 22, 2007 @ 10:11 AM

Tags: privacy technology, technology for lawyers, technology for business managers, technology, privacy

A friend just sent me a blog post that is a bit of a rant about some comments on privacy, or the lack thereof. It provides a good basis for discussing some concepts and misconceptions about privacy and technology.

What does privacy mean?

Donald Kerr, a Deputy Director of National Intelligence, said that our culture equates privacy and anonymity. Like the blog author, James Harper -- of the Cato Institute and other esteemed institutions -- I disagree that the terms are equivalent in the eyes of the general public. Webster's dictionary describes being anonymous as being unknown or not identified, while defining privacy as keeping oneself apart or free from intrusion. In our culture, volition appears to be a key differentiator. When I close the blinds, I'm choosing privacy. When no one notices me in a crowd, I'm anonymous.

Is it unrealistic to expect privacy?

Kerr asserts that privacy doesn't exist and cites the availability of personal information through MySpace, Facebook, and Google. From a volition standpoint, Kerr's examples mix two very different things. MySpace and Facebook are entirely voluntary: people decide to post things about themselves for their friends or the world to see. Google, making great strides at "organizing the world's information", aggregates personal information that may not have been intended or expected to be shared. I recently showed a friend that in five minutes on Google I could find more than his professional profile -- I produced his home address, his parents, his religion, his political leanings, and something about his finances. This undercuts Harper's contrary assertion that people have retained the ability to provide their identifiers to some "without giving up this information to the world".

Can individuals control privacy?

Kerr and Harper are talking about when, whether, and how the federal government should have access to individual information, but the question extends further. Anyone signing up for access to a newspaper or making a purchase on the web is giving bits of himself away. Most typically, the information is gathered in "cookies", established by the websites and stored on the individual's computer. This summer, one study concluded that 85% of users were aware of cookies, but only about 28% were able to successfully delete them.
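For the curious, here is roughly what "establishing" a cookie looks like on the wire, using only Python's standard library; the cookie name and values are made up for illustration:

```python
from http.cookies import SimpleCookie

# What a website sends in its HTTP response to plant a cookie:
cookie = SimpleCookie()
cookie["visitor_id"] = "a1b2c3d4"                  # hypothetical tracking ID
cookie["visitor_id"]["domain"] = ".example.com"
cookie["visitor_id"]["expires"] = "Thu, 22 Nov 2012 10:00:00 GMT"
print(cookie.output())  # emits a "Set-Cookie: visitor_id=a1b2c3d4; ..." header

# The browser files this on the individual's own computer and silently sends
# it back ("Cookie: visitor_id=a1b2c3d4") with every later request to the site.
```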

The public's misunderstanding about their control over personal information in cookies extends past these technical hurdles. The misunderstanding is exacerbated by a little legal wordplay. Nearly every "privacy statement" I've ever read on an e-commerce website says that the information may be shared with "affiliates" but then doesn't define that term. Each of these companies could call any person, company, or government agency an "affiliate" and give them access to the cookies or sell them the information in the cookies.

[Stay tuned for Part II, where I'll talk about what business leaders and system designers can do to offer more privacy and still meet their business goals.]
