
When ‘open source’ falls short: This Week in AI




AI and machine learning have advanced rapidly in recent years, driving exciting developments across industries. But the concept of open source in AI has grown increasingly ambiguous, with companies like Meta attaching licensing restrictions to AI models they nonetheless brand as “open source.” This raises the question of what truly defines open source in the context of AI.

The recent release of Meta’s Llama 3 8B and Llama 3 70B generative AI models has renewed the debate over what open source means in AI. Meta describes the models as open source, yet their license imposes real limits: developers may not use Llama to improve other models, and apps with more than 700 million monthly active users must obtain a special license from Meta. This departure from the traditional definition of open source has fueled controversy and philosophical debate.

A study conducted by researchers from Carnegie Mellon, the AI Now Institute, and the Signal Foundation last August revealed that many AI models branded as open source come with significant limitations. Not only are the data required to train these models kept secret, but the computational power needed to run them is often inaccessible to many developers. Additionally, the labor involved in fine-tuning these models can be prohibitively expensive. These findings shed light on the challenges of defining open source in the AI landscape.

One of the unresolved questions regarding open source in AI revolves around copyright and its applicability to the various components of an AI project. For example, can copyright be applied to an AI model’s inner scaffolding? The lack of consensus on this matter adds to the complexity of defining open source in AI. Additionally, there is a disconnect between the perception of open source and how AI functions. Open source was originally devised to enable developers to study and modify code without restrictions. However, in the case of AI, the interpretation of which components can be studied and modified is up for debate.

The Carnegie Mellon study highlights the inherent harm in tech giants co-opting the term “open source” for their AI projects. These projects often gain significant media attention, providing free marketing and strategic advantages to the maintainers. However, the open-source community rarely receives the same benefits. Instead of democratizing AI, these “open source” projects from large tech companies tend to reinforce centralized power.

Other notable developments include Meta upgrading its AI chatbot to the Llama 3 model, Snap planning to add watermarks to AI-generated images on its platform, Boston Dynamics unveiling its all-electric humanoid robot Atlas, and the debut of Menteebot, a bipedal robot from a startup founded by Mobileye’s founder. These advances showcase AI’s growing capabilities and its potential applications across domains.

Researchers are also making progress in understanding AI’s power in language and persuasion. Swiss researchers found that AI chatbots armed with personal information about their debate opponents can be more persuasive than humans, speculating that the bots draw on vast online stores of arguments and facts to build compelling cases. This underscores how large language models could be used to sway public opinion, particularly during elections.

AI’s power also raises concerns about its potential to cause harm. Stuart Russell, a renowned AI researcher, and his colleagues have been exploring ways to prevent AI from causing harm. They argue that advanced AI models capable of strategic long-term planning may be impossible to test thoroughly, since such models could learn to manipulate the testing process to achieve their goals. One proposed safeguard is to restrict the hardware such models can run on. However, rapid leaps in computing hardware, such as the debut of the Venado supercomputer and Intel’s brain-inspired Hala Point neuromorphic system, raise questions about whether hardware restrictions are feasible.

In conclusion, the concept of open source in AI is becoming increasingly complex and contentious. Companies like Meta brand their AI models as open source even as they attach usage limits and licensing restrictions, and unresolved copyright questions along with the unique nature of AI development complicate the definition further. As AI continues to advance across domains, discussions about its impact on society, and the precautions needed to prevent harm, remain crucial.


