ChatGPT is inseparable from PyTorch, and LeCun's remarks have sparked heated discussion

Mondo Social Updated on 2024-01-30

Reported by the Heart of the Machine.

Editor: Du Wei, **Chicken

In fact, both open source and closed source have their reasons; the key lies in how you choose.
Over the past two days, the topic of open source has become hot again. Some argue that without open source, AI would be nothing, and call for keeping AI open. This view has been echoed by many, including Yann LeCun, Turing Award winner and Chief AI Scientist at Meta:
Imagine what the AI industry would look like today if industry AI research labs had stayed closed, had not open-sourced their code, and had patented and enforced everything they do?

Imagine a world without PyTorch, in which Transformers, ResNet, Mask R-CNN, FPN, SAM, DINO, Seq2Seq, Word2Vec, memory-augmented networks, BatchNorm, LayerNorm, Adam, denoising autoencoders, joint embedding architectures, and a plethora of SSL methods were all patented. What would the AI industry look like?

The view resonated with many more people. Some commented that if Google had not open-sourced the Transformer, OpenAI would not even have been able to invent GPT: "What a fake OpenAI."

Others added: "Don't forget to mention that ChatGPT also could not have been built without PyTorch."

This raises the question: why are companies like OpenAI and Anthropic reluctant to open-source their large model weights? VentureBeat published an in-depth article on the question, interviewing several executives and analyzing the reasons. In machine learning, and especially in deep neural networks, model weights are considered crucial: they are the parameters through which a neural network learns and makes predictions, and their final values after training determine the model's performance. A study by the non-profit RAND Corporation also pointed out that while weights are not the only component of a large model that needs protection, they embody the enormous amount of compute, collected and processed training data, and algorithmic optimization that went into the model. Obtaining the weights allows malicious actors to exploit the full model at a tiny fraction of the training cost.
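To make concrete why the weights file itself is treated as the crown jewels, here is a minimal PyTorch sketch (an illustration, not taken from the article): everything a network learns during training lives in its parameter tensors, and anyone who obtains the serialized file can reload a fully functional copy without repeating the training.

```python
import torch
import torch.nn as nn

# A toy model; a frontier LLM differs in scale, not in kind.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# After training, all of the model's acquired knowledge lives in its
# state_dict: a mapping from parameter names to weight tensors.
state = model.state_dict()
print({name: tuple(t.shape) for name, t in state.items()})

# Serializing that mapping produces the kind of weights file the article
# discusses (for Claude-scale models it runs to terabytes, per Clinton).
torch.save(state, "weights.pt")

# Anyone holding the file can rebuild an identical, fully functional model
# without paying any of the original training cost.
clone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
clone.load_state_dict(torch.load("weights.pt"))
```

This is why the sources quoted below focus on protecting the serialized weights file rather than the surrounding code, which is usually public or easy to reproduce.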

Report address:

Large model companies pay more attention to weight security

Jason Clinton is the Chief Information Security Officer at Anthropic, and his primary mission is to keep the terabyte-scale weight file of the company's model, Claude, out of other people's hands. "I probably spend half of my time protecting that weights file. It's what we focus on and prioritize the most, and it's where we invest the most resources," he said in an interview with VentureBeat.

Model weights can't fall into the wrong hands

Clinton emphasized that companies care about model weights partly because they represent extremely valuable intellectual property, but Anthropic's more important consideration is preventing such powerful technology from falling into the wrong hands, which could have immeasurable negative consequences. Clinton is far from alone in his deep concern about who gets access to foundation model weights. The White House's recent executive order on the safe, secure, and trustworthy development and use of artificial intelligence requires foundation model companies to provide the federal government with documentation on the ownership, possession, and protective measures taken to secure model weights. OpenAI has expressed a similar position: in an October 2023 blog post, it said it continues to invest in cybersecurity and insider-threat protection measures to protect proprietary and unreleased model weights.

40 attack vectors are already being executed

RAND's report, "Securing Artificial Intelligence Model Weights," was co-authored by Sella Nevo and Dan Lahav. It highlights the security threats to, and future risks of, AI model weights. In an interview with VentureBeat, Nevo said that the biggest concern at the moment is not what these models can do now but what might happen in the future, especially for national security, for example the possibility of the weights being used to help develop bioweapons. One purpose of the report is to map out the attack methods that malicious actors may employ, including unauthorized physical access, compromising existing credentials, and supply chain attacks. The report identifies 40 distinct attack vectors and emphasizes that they are not theoretical: there is already evidence that they are being executed, and in some cases even deployed widely.

Risks of the open foundation model

It is important to note that not all experts agree on how serious the risk of leaked AI model weights is, or how far restrictions should go, especially when it comes to open-source AI. This once again underscores the complexity and many challenges of governance in the field of artificial intelligence. The Stanford HAI policy brief "Considerations for Governing Open Foundation Models" notes that while open foundation models (i.e., models with widely available weights) can counter market concentration, promote innovation, and increase transparency, their marginal risk relative to closed models or prior technologies remains unclear.

Link to the brief:

This factual approach, which avoids deliberately stoking fear, was praised by Kevin Bankston, Senior Advisor on AI Governance.

The brief cites Meta's Llama 2 as an example: a model released in July with widely available weights, enabling downstream modification and scrutiny. Although Meta had promised to keep the unreleased weights of its original LLaMA model secure and to limit who could access them, those weights leaked in March 2023, and the incident left a deep impression.

Heather Frase, a senior researcher working on AI assessment at Georgetown University, noted that open-source software and code have historically been quite stable and secure because they can rely on a large community, and that until powerful generative AI models emerged, the potential for harm from ordinary open-source technology was limited. She added that, unlike traditional open-source technology, the risk of open model weights is that the people most likely to be harmed are not the users themselves but those deliberately targeted for harm, such as the victims of deepfakes.

A sense of security usually comes from openness

Others, however, take the opposite view. In an interview with VentureBeat, Nicolas Patry, a machine learning engineer at Hugging Face, emphasized that the risks inherent in running any program also apply to model weights, but that does not mean they should be closed. When it comes to open-source models, the idea is to open up to as many people as possible, as Mistral did with its recently open-sourced large model. Patry argues that security usually comes from openness: transparency means more security, because anyone can inspect what you are doing, whereas keeping things closed leaves it unclear what you are doing.

VentureBeat also spoke with William Falcon, CEO of Lightning AI, the company behind the open-source framework PyTorch Lightning. He believes that if a company is only now starting to worry about model leaks, it is already too late: the speed at which the open-source community catches up is hard to imagine, and open research has produced the various tools that AI cybersecurity relies on today. In his view, the more open the models, the more the capabilities are democratized, and the better the tools that can be developed to combat cybersecurity threats. As for Anthropic, the company seeks to support research in the field on the one hand while keeping its model weights secure on the other, for example by hiring strong security engineers.

Original link:
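As a closing illustration of what "widely available weights" means in practice, here is a minimal sketch (not taken from the article; the repository name is only an example): openly released checkpoints such as Mistral's can be listed and fetched by anyone from the Hugging Face Hub.

```python
# Illustration only. Some repositories (e.g. Llama 2) are gated and require
# accepting a license or supplying an access token before download.
from huggingface_hub import HfApi, snapshot_download

repo_id = "mistralai/Mistral-7B-v0.1"  # example of an openly released model

# Inspect which weight shards the repository exposes publicly.
files = HfApi().list_repo_files(repo_id)
print([f for f in files if f.endswith(".safetensors")])

# Downloading the shards (tens of gigabytes) yields a local, fully usable
# copy of the model -- the crux of the open-weights debate described above.
# local_dir = snapshot_download(repo_id, allow_patterns=["*.json", "*.safetensors"])
```

Once the shards are on disk, the holder has the full model, which is precisely the trade-off the experts quoted above disagree about.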
