Törölt nick Creative Commons License 2005.04.06 0 0 11

The software is developed abroad.

Different environment, different philosophy, different social roots, etc.

I have already had occasion to give my opinion on the domestic unusability of software developed abroad like this, software that worked excellently in its home country. There is nothing wrong with the software itself, but you cannot adopt something like this one-to-one. And the adaptation is not entrusted to the people who actually practise the given field here at home.

 

 

In reply to: dolphin (10)
dolphin Creative Commons License 2005.04.05 0 0 10
Cross my heart, I don't understand how this is relevant here... :-(
In reply to: Törölt nick (9)
Törölt nick Creative Commons License 2005.04.05 0 0 9

So should I now draw your attention to differences in such important environmental variables as the health-care system, and the fundamental differences in outlook and method between the respective justice systems?

I won't even mention the differences in the culture of society.

 

In reply to: dolphin (8)
dolphin Creative Commons License 2005.04.04 0 0 8
But these programs are not being developed in Hungary at all, nor on politicians' orders; in fact, in Australia one of the "robot lawyers" is being developed under the guardianship of the local bar association.
In reply to: Törölt nick (7)
Törölt nick Creative Commons License 2005.03.22 0 0 7
I think our politicians thoroughly misunderstand how information on the Internet works.

Something similar was introduced with great fanfare in health care last year, under the name DrInfó. The justification was eerily similar too: a modern health-information (robot?!) system where people can find information on their simpler ailments, medicine for their simpler ailments, and find their way among the available health-care institutions and services.
Medgyessy then promptly embarrassed himself at the launch ceremony by asking where he could go for a chest X-ray screening?!

Of course, where a Kovács Kálmán can be minister of informatics and can present "poop-proof" computers as his life's main achievement, there is room for the robot lawyer, the robot doctor and the robot pharmacist too.
:-(
mpd Creative Commons License 2005.03.22 0 0 6

In The Cyberiad, of course, which is the foundational work, the bible, of Polfizika.

If I remember correctly, King Zsenialon's three storytelling machines...???

In reply to: sierra (5)
sierra Creative Commons License 2005.03.22 0 0 5

:o)

And what did Klapanciusz have to say about that? (Which book was it in?)

In reply to: mpd (4)
mpd Creative Commons License 2005.03.22 0 0 4

The Father of Polfizika also recounts an experiment with such a machine lawyer.

An engineer built himself one. The robo-lawyer's first move was to demand a fee and secure privileges for itself. It then announced that in the case at hand there was no solution favourable to its client (its creator and master).

I'm afraid the engineer (Trurl) chose an undemocratic solution: he took it apart.

In reply to: Tamás (1)
mpd Creative Commons License 2005.03.22 0 0 3
When King Ferdinand sent colonists to the Indies, he very wisely ordered that not a single lawyer be taken along, lest lawsuits multiply in the New World, since this science is, by its very nature, the mother of quarrels and strife.
In reply to: Tamás (1)
Tamás Creative Commons License 2005.03.22 0 0 2
Yes, but a juror is not a lawyer...
In reply to: Konrad (0)
Tamás Creative Commons License 2005.03.22 0 0 1

Listen up, I'll tell you how this is going to go. I'm a lawyer, so I know it well.

 

So.

 

There will be a robot lawyer. It will be good. But then everyone will be fed up with it, there won't be enough money, so the robot lawyer will have to perish. Except by then the robot lawyer will have rights of its own, plus human lawyers of its own who will treacherously work for it, so the Luddite methods go down the drain.

 

But it is not by accident that we lawyers are creative: we will find the exceptions after all to the formulaic adjudication that was supposedly purged of exceptions, and soon a new legal profession will be born whose essence is that there is always some exception/individual consideration/etc. because of which the robot lawyer won't do after all.

 

Got it?

 

:-D

In reply to: dolphin (-)
Konrad Creative Commons License 2005.03.14 0 0 0
Read the dark-future sci-fi Dr. Gladiátor; there is a machine juror in it...
In reply to: dolphin (-)
dolphin Creative Commons License 2005.03.13 0 0 topic starter
And now the two articles:

MIT's Technology Review
Logging On to Your Lawyer
Duncan Graham-Rowe - February 2005

Manufacturing, finance, and the communications industry have in the last decade all come to rely upon artificial intelligence. But there's one industry that continues to put up resistance: the legal profession. The idea of a machine making legal decisions was long considered by opponents to be dangerous and ethically untenable. That's about to change, says John Zeleznikow, a computer scientist at Victoria University in Melbourne, Australia. Zeleznikow believes AI is about to improve people's access to justice and massively reduce the costs of running legal services.

Joining forces with Andrew Stranieri at the University of Ballarat, also in Victoria, Zeleznikow launched startup JustSys to develop AI-based online legal systems that don't overstep the ethical line. Judges already use one program to assist them in the complicated and arcane process of sentencing criminals. Divorce lawyers and mediators are using another, called SplitUp, to help couples settle property disputes without resorting to the courts.

But by far the most widely used program is GetAid, which assesses applicants' entitlement to legal aid. Historically, assessment has consumed a significant portion of Victoria Legal Aid's operational budget. Using something like GetAid "frees lawyers and paralegals from the task so they can spend their time actually representing people," says Domenico Calabro, a Victoria Legal Aid lawyer. The system is due for commercial launch in the next month or so, and according to Calabro, the Australian authorities are considering whether to roll it out nationally.

Of course, lawyers and judges have been using specialized software tools for years. But what sets these AI-based programs apart is their ability to draw inferences from past cases and predict how the courts are likely to interpret new cases.

It's not the first time researchers have tried to develop such tools. In the 1980s, an AI-based program was developed at Imperial College London to interpret an immigration law called the British Nationality Act. Critics worried that the system, which was never implemented, could be used to entirely bypass lawyers, and the protections they afford. "Parliament produces statutes, but these are reinterpreted by the legal profession," says Blay Whitby, an AI expert at the University of Sussex in England. Taking lawyers out of the loop could allow a government too much control over the interpretation of laws, he says.

Zeleznikow believes his software overcomes these kinds of problems by preserving human participation and by limiting the system's authority. GetAid, for example, cannot reject applicants; it can only approve them. All other applications are referred to human officers for reassessment. Similarly, a judge can use the Sentencing Information System to help examine trends in other judges' decisions, but ultimately, the sentence has to come from the judge. "They are still making the decision, but their decision is far more visible," says Stranieri. This, in turn, encourages greater consistency in sentencing, he says.
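
To make the approval-only safeguard concrete, here is a minimal Python sketch of the kind of gate described above: the software may approve an application, but anything it cannot confidently approve is referred to a human officer instead of being rejected. The function and field names and the confidence threshold are hypothetical; the article does not describe JustSys's actual implementation.

```python
# Minimal sketch of an approval-only safety check, in the spirit of GetAid.
# All names (Application, predict_eligibility, assess_application) and the
# threshold are hypothetical illustrations, not the real JustSys code.

from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    features: dict  # e.g. income, case type, merit indicators

def predict_eligibility(app: Application) -> float:
    """Stand-in for the model's estimated probability that the applicant
    qualifies for legal aid (a stub here, purely for illustration)."""
    return 0.95 if app.features.get("income", 0) < 20000 else 0.4

def assess_application(app: Application, threshold: float = 0.9) -> str:
    """The system may only approve; anything it cannot confidently approve
    is referred to a human officer, never rejected outright."""
    if predict_eligibility(app) >= threshold:
        return "APPROVED"
    return "REFERRED_TO_HUMAN_OFFICER"

if __name__ == "__main__":
    print(assess_application(Application("A-001", {"income": 12000})))  # APPROVED
    print(assess_application(Application("A-002", {"income": 55000})))  # REFERRED_TO_HUMAN_OFFICER
```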

Sussex's Whitby is cynical about the legal profession's opposition to AI. Loss of income and an irrational suspicion of technology are at least partly responsible, he says. "They might also have to sharpen up their arguments and practices to deal with machine-advised clients," he says. With a new generation of techno-savvy lawyers, of course, attitudes may change within the legal profession.

In fact, this already appears to be happening, says Stuart Forsyth, a consultant for the American Bar Association's Futures Committee. Forsyth sees a willingness, at least among U.S. lawmakers, to embrace technology. The reason, he says, is likely the growing trend toward self-representation in U.S. courts. "In domestic dispute cases it's well over 50 percent, and in others it's as high as 80 percent," he says. This is worrying, he says, because if people are going into court with no legal skills, they may be getting short shrift. The bottom line: artificial justice may be better than no justice at all.


The Economist - Technology Quarterly
AI am the law
Mar 10th 2005
From The Economist print edition

Computing: Software that gives legal advice could shake up the legal profession by dispensing faster and fairer justice

GIVEN the choice, who would you rather trust to safeguard your future: a bloodsucking lawyer or a cold, calculating computer? Granted, it's not much of a choice, since neither lawyers nor computers are renowned for their compassion. But it is a choice that you may well encounter in the not-too-distant future, as software based on “artificial intelligence” (AI) starts to dispense legal advice. Instead of paying a lawyer by the hour, you will have the option of consulting intelligent legal services via the web. While this might sound outlandish, experts believe that the advent of smart software capable of giving good, solid legal advice could revolutionise the legal profession.

What is arguably one of the most conservative of all professions has already been quietly undergoing a technological revolution: many lawyers now use automated document-retrieval systems to store, sort and search through mountains of documents. But the introduction of smarter programs, capable of not just assisting lawyers but actually performing some of their functions, could turn the profession on its head. Such software could both improve access to justice and massively reduce legal costs, both for the client and the courts.

That is not to say that laptops will soon be representing people in court. But when a civil case goes to court it is usually a good indication that all other options have failed. Technology has the potential to preclude this last resort. “You move from a culture of dispute resolution to dispute avoidance,” says Richard Susskind, a law professor who is technology adviser to Britain's Lord Chief Justice. Making legal advice more accessible, he says, means people are more likely to seek advice before getting themselves into trouble.

Some such programs already exist online and are currently being used by lawyers, says John Zeleznikow, a computer scientist at the University of Melbourne in Australia and one of the orchestrators of this transformation. Although current programs are designed to help lawyers give advice, this is just the beginning. The trend, he says, is to make such services available to the masses. One service is designed to help resolve property disputes between divorcing couples. Aptly named SplitUp, the system can examine a client's case and, by scrutinising previous rulings, predict what the likely outcome would be if it went to court. The system, developed and now operating in Australia, is proving to be very helpful in getting couples to settle their disputes without having to go to court, says Andrew Stranieri, an AI expert at the University of Ballarat, in the Australian state of Victoria.

Dr Zeleznikow and Dr Stranieri have teamed up and launched a company, called JustSys, to develop AI-based legal systems. GetAid, another of their creations, is being used in Australia by Victoria Legal Aid (VLA) to assess applicants for legal aid. This is a complicated process that normally consumes about 60% of the authority's operational budget, because it involves assessing both the client's financial status and the likelihood that his or her case will succeed.

Although both these systems are only available for use by lawyers and mediators, it is the clients who benefit, says Dr Zeleznikow. With SplitUp, a client can avoid going to court with a claim that will surely lose and is instead given a chance to find a more realistic solution. With GetAid, although it may appear to be the legal professionals who are directly benefiting, there is a real knock-on effect for the client, says Domenico Calabro, a lawyer with VLA. Automating the application process frees up lawyers and paralegals so they can spend more of their time actually representing people rather than processing applications, he says.

Anatomy of an artificial lawyer

What makes both these programs so smart is that they do more than just follow legal rules. Both tasks involve looking back through past cases and drawing inferences from them about how the courts are likely to view a new case. To do this, the programs use a combination of two common AI techniques: expert systems and machine learning.

Expert systems are computer-based distillations of the rules of thumb used by experts in a particular field. SplitUp, for example, uses an expert “knowledge base” of 94 different variables, which are the factors identified by legal experts as most important to judges dealing with domestic-property disputes. Because no two cases are ever the same, and because judges use different degrees of discretion, it is not enough simply to apply a set of rules to these variables, however.

Hence the need for machine learning, a technique in which a decision-making system is "tuned" using historical examples, with the model adjusted until it produces the correct answers. The system is trained using a sample of previous cases to learn how these variables have been combined by judges in the past. All of this builds an accurate model of the decision-making process a judge might use, and allows it to be applied to new cases, says Dr Zeleznikow. GetAid also makes inferences, but instead of working out what the courts will award the client, its intelligence lies in its ability to predict whether the client has a winnable case.
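
As a rough illustration of the two ingredients described above, an expert-defined set of case variables plus a model fitted to past rulings, here is a small Python sketch. The variable names, the toy data and the choice of logistic regression are assumptions made purely for illustration; SplitUp's actual 94-variable model is not reproduced here.

```python
# Toy illustration of "expert knowledge base + machine learning on past cases".
# Variables, data and the logistic-regression choice are all assumptions.

from sklearn.linear_model import LogisticRegression

# Stand-in for the expert knowledge base: factors experts say judges weigh.
EXPERT_VARIABLES = ["marriage_length_years", "contribution_pct", "future_needs_score"]

# Hypothetical past cases: one feature vector per case, plus a simplified
# outcome (1 = majority of assets awarded to the claimant, 0 = not).
past_cases = [
    ([22, 60, 0.8], 1),
    ([3, 40, 0.2], 0),
    ([15, 55, 0.6], 1),
    ([5, 30, 0.3], 0),
]

X = [features for features, _ in past_cases]
y = [outcome for _, outcome in past_cases]

# "Tuning" the model on historical examples so it reproduces past decisions.
model = LogisticRegression().fit(X, y)

new_case = [[10, 50, 0.5]]
print("Predicted likelihood of a majority award:", model.predict_proba(new_case)[0][1])
```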

Both systems are incredibly accurate, says Mr Calabro. Tests of GetAid, carried out by VLA, showed that when 500 past applications were fed into the system it gave the same result as the actual outcome 98% of the time. The remaining 2% were then re-examined and found to be borderline cases. All 14 of VLA's offices now use GetAid, and the Australian authorities are considering rolling it out in the country's other seven states.
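
The VLA test described above amounts to replaying historical applications through the system and measuring how often it agrees with the recorded decisions. A back-of-the-envelope sketch of that calculation, with made-up numbers that reproduce the quoted 98% figure:

```python
# Hypothetical replay of past applications to measure agreement with
# recorded outcomes; the data here is invented to match the quoted 98%.

def agreement_rate(recorded, system):
    """Fraction of cases where the system matched the recorded decision."""
    matches = sum(1 for a, b in zip(recorded, system) if a == b)
    return matches / len(recorded)

recorded_outcomes = [1] * 490 + [0] * 10   # 500 past applications
system_outcomes = [1] * 500                # system's replayed decisions (toy data)

print(agreement_rate(recorded_outcomes, system_outcomes))  # 0.98
```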

Some may regard all this as too impersonal, but those people can probably continue to afford a human lawyer, says Dr Susskind. Most of the people on the receiving end of this technology are not getting any legal advice at all at the moment. Stuart Forsyth, a consultant for the American Bar Association's Futures Committee, points to a growing trend in America of people representing themselves in court. This happens in more than half of all domestic disputes and an even larger proportion of some other types of case. This is worrying, says Mr Forsyth, because these people are probably not doing a very good job for themselves.

Internet-based legal-advice software could not only create a more level playing field but in doing so could also dramatically alter the nature of legal guidance, says Dr Susskind. Instead of being a one-to-one advisory service, it could become a one-to-many information service. Lawyers, of course, might not regard this as such a good thing. So it is not surprising that AI has traditionally been frowned upon within the legal profession.

Lawyer v computer

In the 1980s, a program designed to help lawyers interpret immigration law laid down by the British Nationality Act caused consternation among academics and lawyers alike. Shockingly, it could be used by lawyers and non-lawyers alike. Critics were worried that bypassing lawyers might pose a threat to democracy, because of the important role lawyers play in re-interpreting statutes laid down by Parliament, says Blay Whitby, an AI expert at the University of Sussex. “Any change to the status quo should be the subject of proper, informed democratic debate,” he says.

Such concerns still linger, but attitudes seem to be shifting, says Mr Forsyth, as a new generation of more technology-savvy lawyers emerges. In 1999, a Texas court banned a basic self-help software package, Quicken Family Lawyer, on the grounds that the software was, in effect, practising law without a licence. Yet within 90 days this decision was overturned. This indicates a willingness among judges, at least, to tolerate the technology. Americans may like lawsuits, but they like technology even more.

One reason for optimism, suggests Dr Zeleznikow, is the way in which the programs are designed to be used. To have a machine making legal decisions about a person's welfare would be morally untenable in many situations, he says. So these days, programs are designed to have built-in safety checks to prevent them from overstepping this ethical line. For example, GetAid cannot reject applicants, but can only approve them: the rest are referred to a legal officer for reconsideration. Another example concerns the systems used by judges to help them in the complex and arcane process of sentencing. There is a real drive for sentencing to become more transparent and consistent, says Mr Forsyth. “People have great difficulty rationalising why one person gets one punishment, while someone else ends up with a lesser sentence,” he says.

Some judges are already using software tools to address this issue, but these are mainly statistical packages which give judges nothing more than a sense of how similar convictions have been sentenced in the past, says Uri Schild, a computer scientist at Bar-Ilan University in Israel. However, these programs are now becoming more sophisticated. Dr Schild has developed a system that attempts to go one stage further, by considering not just the nature of the crime, but also the offender's previous conduct.

Magistrates and judges are often under considerable time constraints when working out sentences, and are unable to give detailed consideration to the offender's previous convictions. So Dr Schild's system evaluates an offender's record and creates a brief overview for the judge to peruse, including the number of previous offences, how serious they are, their frequency, and so on. For each category the program indicates how significant it is to the case in hand. Another program, from JustSys, appears to push things even further. The Sentencing Information System helps judges construct and record their arguments for deciding upon a sentence. The decisions still come from the judges, says Dr Zeleznikow, but the system helps them justify their decisions by mapping out their reasons.
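
For illustration, here is a minimal sketch of the kind of record overview described for Dr Schild's system: prior offences condensed into counts, seriousness and frequency for the judge to peruse. The field names and the seriousness scale are assumptions, not the real system's schema.

```python
# Hypothetical sketch of a prior-record summary: counts, seriousness and
# frequency of previous offences condensed into a brief overview.

from dataclasses import dataclass
from datetime import date

@dataclass
class PriorOffence:
    committed: date
    category: str
    seriousness: int  # assumed scale: 1 (minor) to 5 (grave)

def record_overview(offences: list[PriorOffence]) -> dict:
    if not offences:
        return {"count": 0}
    span_years = max(1, (max(o.committed for o in offences)
                         - min(o.committed for o in offences)).days // 365)
    return {
        "count": len(offences),
        "max_seriousness": max(o.seriousness for o in offences),
        "offences_per_year": round(len(offences) / span_years, 2),
        "categories": sorted({o.category for o in offences}),
    }

print(record_overview([
    PriorOffence(date(2001, 5, 1), "theft", 2),
    PriorOffence(date(2003, 8, 12), "assault", 4),
]))
```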

People have to be kept in the loop because of accountability, says Dr Whitby. But the technology itself need not be feared as a new entity. On the contrary, the same AI techniques have been helping engineers and businesses for years, in fields from marketing to oil-drilling—and they would not have been so widely adopted if they did not work. The real issue is one of acceptance, he says.

None of these systems threatens to put lawyers and judges out of a job, nor is that the intention. They do things that people do at the moment, says Dr Zeleznikow, “but they could be quicker and cheaper”. What the systems still lack is the ability to exercise discretion, and that is not likely to change for the foreseeable future—so humans need not worry about losing their jobs to an army of robo-lawyers. But smart software has the potential to make legal advice more readily available, unnecessary court battles less frequent, and rulings more consistent. Surely not even a lawyer could argue with that.
