Survey: More than half of British college students use AI to complete their studies

According to a study conducted by the Higher Education Policy Institute, more than half of U.K. college students use AI. The study asked more than 1,000 college students whether they use AI tools such as ChatGPT to complete their studies. The results showed that 53% of students admitted to using the technology, and 5% of participants said they simply copied and pasted AI-generated text and submitted it as their academic work.


Source note: The image was generated by AI and is licensed via Midjourney.

Andres Guadamuz, a reader in intellectual property law at the University of Sussex, said: “My biggest concern is that many students do not understand the potential risks of ‘hallucination’ and inaccuracy in AI. I think as educators we have a responsibility to address this issue head on.”

Meanwhile, Italy's data protection authority has accused OpenAI of violating Europe's GDPR and has given the startup an opportunity to respond to the allegations. Last year, Italian regulators temporarily blocked access to ChatGPT in the country on the grounds that OpenAI may have collected Italians' personal information from the internet to train its models. Investigators are concerned that the chatbot may have memorized people's phone numbers, email addresses, and other personal details, which could then be repeated or extracted by querying the model. Regulators now believe the company is violating data privacy law, and OpenAI may face a fine of up to 20 million euros or 4% of its annual revenue.

Additionally, another New York lawyer is in trouble for citing a case invented by ChatGPT in a lawsuit. Attorney Jae Lee was reportedly referred to the court's attorney grievance panel for citing a “non-existent state court decision” in a filing. She admitted to relying on the software to “find precedent that might support her view” without bothering to “read or confirm the validity of the decision she cited.” Unfortunately, her mistake meant that her client's medical malpractice lawsuit was dismissed. This is not the first time a lawyer has gotten into trouble by relying on ChatGPT at work, yet many lawyers continue to use the tool, which is particularly risky in legal applications.

A report from the House of Lords said the UK could miss out on the "AI gold rush" if it focuses too much on "distant and unlikely risks". The report said the government's approach to regulating large language models could stifle domestic innovation in the emerging industry, and it warned of a "real and growing" risk of regulatory capture amid a "multi-billion-pound race to capture the market". The report recommended that the government prioritize open competition and transparency to prevent a small number of technology companies from quickly consolidating control of "key markets" and stifling opportunities for new players.
