Michael Cohen, a former lawyer for Donald Trump, recently admitted in court that a legal filing submitted on his behalf had cited bogus court cases generated by an artificial intelligence (AI) program. The incident has raised concerns about the misuse and manipulation of AI in the legal field, and has also dealt another blow to Trump's political image.
Cohen was Trump's personal lawyer for years and handled a number of sensitive and contentious matters for him, including hush-money payments to Trump's former ** and negotiations with the Russian side over Trump's plan to build a tower in Moscow. In 2018, Cohen was sentenced to three years in prison on charges of tax evasion, financial fraud, and campaign finance violations, and he reached a plea agreement with federal prosecutors in New York, agreeing to cooperate with Special Counsel Robert Mueller's investigation into ties between Trump and Russia.
As part of that cooperation, Cohen testified to Congress that talks with the Russian side had ended in January 2016 and that Trump did not know the details of the plan. In fact, the negotiations lasted until June 2016, by which time Trump had already secured the Republican nomination. Cohen later admitted that he had perjured himself before Congress to protect Trump's political image and interests.
More recently, in a motion asking a judge to end his court supervision early, Cohen cited three sham court cases generated by an AI program in an attempt to show that his cooperation had been genuine and valuable. The program was Google's Bard, a chatbot originally designed to generate ** and poetry, which Cohen mistakenly took for a "super search engine" that could provide any information he wanted. In the motion, Cohen wrote that he had used Bard to search for how federal judges treat cooperating witnesses and had received three cases in his favor. These cases, however, were generated by Bard from information and text found on the web, and they simply do not exist.
When the judge discovered the citations, he asked Cohen's attorneys to explain why the bogus cases had been used and whether Cohen had been involved in drafting the motion. Cohen's lawyers responded that it was an inadvertent mistake: they had not realized that Bard was an AI program and had not verified the authenticity of the cases it provided. They also said the cases were not central to the motion and did not affect the substance of Cohen's cooperation. The judge has not yet ruled on the matter.
Trump told reporters that Cohen was a "weak man" who did not hesitate to say whatever prosecutors wanted to hear in order to reduce his sentence, and who had even used AI to fabricate evidence. He insisted that he had never had any improper dealings with Russia and had not violated any laws during the election campaign. He maintained that his Moscow real-estate plan was a legitimate business venture, that he saw no problem in continuing to pursue it during the campaign, and that nothing about it was improper.
The incident has once again raised public concern about the use and regulation of AI in the legal field. Some experts argue that AI can help lawyers and judges work more efficiently and accurately, for example through automated text analysis and case search. But AI can also be abused or manipulated, for example by generating false evidence, testimony, or precedents that mislead courts or the public and undermine the fairness and credibility of the judiciary. They therefore call for norms and mechanisms to ensure that AI systems are transparent, reliable, and accountable, so that AI does not become a "black box" or a "Trojan horse" in the legal field.