Follow the money to understand the risks of AI

The fallibility of experts in predicting the direction of innovation is a recurring theme throughout history. From Einstein’s skepticism about nuclear energy to today’s debates about artificial general intelligence (AGI), it is clear that even the brightest minds can be mistaken about the future of technology. This raises an important question: if scientists and technologists struggle to accurately predict technological evolution, how can policymakers effectively regulate emerging risks from artificial intelligence (AI)?

David Collingridge’s influential thesis warns against the folly of trying to predict the risks posed by new technologies. He argues that technology evolves in uncertain ways, making it difficult to anticipate its consequences. However, there is one class of AI risk that is generally knowable in advance – risks stemming from misalignment between a company’s economic incentives and society’s interests in how AI models should be monetized and deployed.

To overlook such misalignment is to focus exclusively on technical questions about AI models while ignoring their socio-economic implications. This narrow focus not only leaves economic risks unaddressed but also prevents the value generated by AI from being widely shared. It is essential that the economic environment fostering innovation does not reward companies for pursuing profit or market dominance while disregarding hard-to-predict technological risks.

OpenAI, for example, has already become a dominant player in the AI industry, with billions in annual sales and millions of users. To maintain a viable and dispersed ecosystem of innovation, it is crucial that OpenAI’s economic incentives align with the interests of those who create and use its AI models. By examining the economic incentives underlying innovation and how technologies are monetized, we can gain a better understanding of the risks, both economic and technological, posed by different market structures.

A notable example of the impact of economic incentives on technology can be seen in the case of aggregator platforms like Amazon, Google, and Facebook. Initially designed to benefit users, these platforms eventually shifted their focus towards increasing profitability. The problems arising from social media and search algorithms were not solely engineering issues but rather a result of financial incentives that prioritized profit growth over the safe and equitable deployment of algorithms.

For instance, Amazon’s advertising business highlights how economic incentives can drive unethical behavior. Users tend to click on paid placements at the top of product search results, even when those placements are not the best match for their query. Amazon exploits users’ trust in its algorithms to steer attention and clicks towards sponsored products, which are often lower in quality and higher in price. This lets Amazon profit immensely while disadvantaging both users and product suppliers.
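To make the mechanism concrete, here is a minimal sketch, with made-up relevance scores and bids, of how blending a paid boost into a ranking function can push a weaker sponsored listing above the best organic match. The scoring rule and numbers are illustrative assumptions, not Amazon’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float   # hypothetical organic match quality, 0..1
    ad_bid: float      # hypothetical sponsored bid in dollars (0 = organic only)

def ranking_score(item: Listing, ad_weight: float) -> float:
    """Blend organic relevance with a paid boost; ad_weight is set by the platform."""
    return item.relevance + ad_weight * item.ad_bid

listings = [
    Listing("best organic match", relevance=0.95, ad_bid=0.0),
    Listing("sponsored, weaker match", relevance=0.60, ad_bid=1.50),
]

# With ad_weight = 0 the best match ranks first; raising the weight lets the
# sponsored listing displace the top organic result.
for ad_weight in (0.0, 0.5):
    ranked = sorted(listings, key=lambda x: ranking_score(x, ad_weight), reverse=True)
    print(f"ad_weight={ad_weight}: {[x.name for x in ranked]}")
```

The extractive shift described above amounts to the platform quietly turning up `ad_weight`: users still see a familiar results page, but the ordering increasingly reflects bids rather than quality.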

Amazon’s shift towards an extractive business model exemplifies the dangers of economic misalignment in technology. This pattern is not unique to Amazon but is common among major online aggregators, including Google and Meta. These platforms initially prioritize user value but gradually prioritize their economic interests, leading to the concentration of profits and market dominance that hinders innovation by other companies.

While some rents received by firms from innovation can be beneficial for society, the misalignment of economic incentives can lead to rent extraction and concentration of profit. Bad platform behavior, such as displacing top-ranked organic product results with advertising placements, becomes a means of extracting rent. This behavior can be detrimental to users, product suppliers, and overall market competitiveness.

To address these issues, regulators can shape market structures through technological mandates such as interoperability and open standards. Ensuring that digital systems work together seamlessly, and allowing apps from sources other than a platform’s official store, increases user mobility between markets and reduces the ability of dominant entities to exploit their users and ecosystems. Interoperability and open source software have played crucial roles in keeping markets competitive and inclusive in the past, and their application to the AI industry could have similar effects; one facet, data portability, is sketched below.
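As an illustration of the data-portability side of such mandates, here is a minimal sketch assuming a hypothetical open `portable-profile/v1` schema. With a shared, documented format, a competing service can import a user’s record without any bespoke integration with the incumbent platform.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PortableProfile:
    """Hypothetical record that a mandated export API might emit."""
    user_id: str
    contacts: list[str]
    posts: list[str]

def export_profile(profile: PortableProfile) -> str:
    # Serializing to a shared, documented schema means any service
    # that speaks the standard can consume the record.
    return json.dumps({"schema": "portable-profile/v1", **asdict(profile)})

def import_profile(payload: str) -> PortableProfile:
    data = json.loads(payload)
    if data.pop("schema") != "portable-profile/v1":
        raise ValueError("unsupported schema version")
    return PortableProfile(**data)

# A user exports from one platform and imports into a competitor.
exported = export_profile(PortableProfile("u123", ["alice", "bob"], ["hello world"]))
print(import_profile(exported))
```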

Disclosure is another powerful tool for shaping markets. Requiring technology companies to provide transparent information about their products and monetization strategies can deter exploitative behavior. For example, mandatory disclosure of ad load and operating metrics could have made it harder for Facebook to prioritize ad revenue at the expense of user privacy. Similarly, requiring AI model providers like OpenAI to disclose their training datasets can support fairness and help detect copyright infringement.
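To suggest what such a mandate could look like in practice, here is a minimal sketch of a machine-readable disclosure filing. The field names and validation rules are illustrative assumptions, not an existing regulatory standard.

```python
from dataclasses import dataclass, field

@dataclass
class PlatformDisclosure:
    """Hypothetical disclosure record a regulator might require companies to file."""
    company: str
    ad_load: float                              # share of impressions that are paid placements
    training_data_sources: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # Regulator-side checks before a filing is accepted.
        if not 0.0 <= self.ad_load <= 1.0:
            raise ValueError("ad_load must be a fraction between 0 and 1")
        if not self.training_data_sources:
            raise ValueError("at least one training data source must be declared")

report = PlatformDisclosure(
    company="ExampleAI",
    ad_load=0.35,
    training_data_sources=["licensed news archive", "public-domain books"],
)
report.validate()
```

The point of a standardized, machine-readable format is that researchers and regulators can track metrics like ad load over time, making a quiet drift towards extraction visible before it becomes entrenched.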

It is important to recognize that risks arising from economic misalignment are not unknowable, as Collingridge suggests, but rather predictable economic risks. By recalibrating economic incentives towards open and accountable AI algorithms, we can avoid the mistakes of the past and ensure more equitable and responsible technological development. The limitations placed on algorithms and AI models will play a crucial role in shaping economic activity and human behavior, making it essential to address economic risks alongside technological risks as we navigate the future of AI.


