In 1950, Alan Turing, cryptanalyst and computer scientist, wrote the seminal paper “Computing Machinery and Intelligence.”1 The article examined artificial intelligence, questioning whether a computer could think. Turing proposed a test for computer thinking, which he called the imitation game and which is colloquially known as “the Turing test.” The test is passed if a person communicating via text cannot tell the difference between a computer and a human.
Various tests have been proposed for whether we have achieved artificial intelligence: Can the computer play chess? Can it win a game of poker?2 Each time a computer passes a test, we create a new obstacle, deciding that the old test was not good enough and that we have not yet achieved the goal of artificial intelligence. As many critics have noted, the Turing test is likewise not a complete examination of intelligence, but rather an exercise in trickery that focuses solely on conversation. So it is inevitable that if a computer ever passes the test, we will create a new challenge. Even so, the Turing test has thus far proved an effective method of testing a computer’s capability to think.
Human conversation is diverse and complex, making us difficult for a computer to fool. No artificial intelligence has reliably passed the Turing test; many have claimed to pass, but only by limiting the test in some way. Still, advances in technology bring us ever closer to breaking this barrier.
Which brings us to Watson,3 a cognitive computing system. Cognitive computing involves a self-learning system that reviews data, recognizes patterns and processes language, mimicking the way the human brain works. Watson was built and programmed to answer questions posed in “natural language,” the ordinary language used in everyday communication. Using Watson, ROSS, the “Super Intelligent Attorney,” was created.4

It is inarguable that the commercial uses of cognitive computers are limited only by our imagination. Such a computer can review the equivalent of one million books in a second. Already, you can see this type of technology at work in our industry.
Westlaw began using natural language search tools to replace Boolean connectors. Insurance companies use claims-adjustment programs such as Colossus to value claims. E-discovery companies use computers to find privileged information and perform redactions.
A modern-day supercomputer with the appropriate programming could be taught to analyze law, draft briefs and form legal opinions. This would require no technological advancements. Such a computer could well be more successful than a human at determining case outcomes. Because the computer could process millions of pages of documents per second, it could read every article, case and book on a subject, forming a far more in-depth opinion than is possible for a human.

One significant barrier prevents this from occurring: our books, laws and articles are simply not readily accessible. The information is heavily protected from the universal access a computer of this type would require. To truly succeed, a computer would need regular access to the entirety of the law, including Internet research databases, libraries, and state and federal court records.
The more information a computer had access to, the more accurate it would become. A lawyer-bot-type supercomputer with these capabilities could be developed only in close partnership with online legal research services, publishers and the courts.
An additional issue is that in-depth computer analysis of the data may lead a computer to emphasize statistical anomalies that seem illogical. If the computer determines, based on its data set, that the most important factor in a case is something apparently disconnected from it, its result would reflect that finding. No attorney or judge would assign value to such a statistical anomaly, and so the computer would not be trusted, no matter how accurate its predictions.
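For readers curious how such a correction might look in practice, here is a purely illustrative sketch of an attorney down-weighting a spurious factor in a case-outcome model. The feature names, weights and penalty value are invented for the example, not drawn from any real system.

```python
# Hypothetical sketch: an attorney flags a spurious factor and the
# model reduces its influence. All names and numbers are illustrative.

# Importance the model has (hypothetically) assigned to each case factor.
feature_weights = {
    "controlling_precedent": 0.40,
    "judge_history": 0.30,
    "filing_day_of_week": 0.25,  # statistical anomaly: likely spurious
    "brief_length": 0.05,
}

def apply_feedback(weights, feature, penalty):
    """Reduce a flagged feature's weight, then renormalize so
    the remaining weights still sum to one."""
    weights = dict(weights)                # copy; leave the original intact
    weights[feature] *= (1 - penalty)      # shrink the flagged factor
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# The attorney applies an 80% penalty to the anomalous factor.
adjusted = apply_feedback(feature_weights, "filing_day_of_week", 0.8)
```

After the adjustment, the anomalous factor carries far less weight relative to legally meaningful factors such as controlling precedent, while the weights remain a valid distribution.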
However, an attorney could apply this information to his or her practice and teach the computer to place less value on certain anomalies over time. This type of learning would give the computer an ever-increasing likelihood of success in future cases. The computer would be trained to see red flags in cases and to notice slight anomalies, undetectable by humans, that contribute to a case’s failure or success, giving its users an edge over offices that lack this technology.

This sets the scene for the Altman-Weil survey, the results of which are summarized in 124 pages.5 The survey asked questions about the state of the legal industry, covering areas from equity partnership to non-legal technology. The questions that raised the most interest address which timekeepers could soon be replaced by supercomputers.
The question asks: