Troubles With AI Governance In Big Tech

July 10, 2024

The use of artificial intelligence (AI) is growing quickly, and governments are weighing rules for it. But AI is complex and changes fast, and the rules meant to govern it keep shifting in response.

One kind of artificial intelligence, generative AI (gen-AI) built on large language models (LLMs), is a powerful technology that brings both potential and risks. Yet even though it is developing and spreading rapidly, even the experts do not seem to fully understand AI or how to control it. Even so, policymakers must set rules for advanced technologies; otherwise we face threats such as ethical dilemmas, lack of transparency, phishing attacks, bias, and widening economic gaps.

Suffice it to say that large language models are now a major part of our society, and we can all expect them to have a significant impact on our future. The question is whether we can control this technology’s growth or agree on rules that can guide where it goes safely.

The Profit-Driven Reality of Big Tech’s Dominance in Generative AI

Gen-AI is mostly under the control of large companies, particularly Big Tech. Consequently, there is growing concern about the relationship between Big Tech and AI, exemplified by the partnership between Microsoft and OpenAI. Many people worry that the pursuit of profit may hinder the responsible use of AI technology.

OpenAI began in 2015 as a non-profit AI research group but underwent significant changes in 2019, when Microsoft invested 1 billion USD in the organization. The investment raised worries that commercial pressure might change what OpenAI was supposed to do, and it made clear how hard it is to build and govern large-scale gen-AI.

Governing organizations like OpenAI can be complicated, as the conflicts between company leadership and the board of directors have shown. Those conflicts resulted in one of the co-founders being removed, revealing the challenges of creating and overseeing AI while adhering to ethical standards.

Microsoft’s Evolving Role in OpenAI

In a second investment stage in 2023, Microsoft expanded its funding for OpenAI. Though the exact amount wasn’t disclosed, the total is estimated at over 13 billion USD. Since then, concerns have been raised about Microsoft’s potentially negative influence on OpenAI, originally a non-profit organization.

In May 2024, Microsoft published a Responsible AI Transparency Report, emphasizing its commitment to a principled and human-centered approach to AI investments. This followed a report criticizing Microsoft’s security culture, in response to which Microsoft accepted responsibility for every finding. Around the same time, Microsoft faced criticism for a new AI feature called Recall, over concerns about its security implications.

At the same time, Microsoft announced the retirement of the GPT Builder feature in Copilot, signaling a change in strategy. These events highlight the potential for AI to generate profits for Big Tech, as well as the associated challenges and risks. Regulation is viewed as a key approach to addressing these risks.

Ensuring Fairness in AI

The need to regulate AI is widely agreed upon. Democratic societies must prioritize the well-being of their citizens by regulating the use of AI to ensure it is fit for purpose and to minimize bias and errors.

The OpenAI situation shows how governance and incentives can become misaligned even among intelligent people. It also underscores the potential problems with self-regulation by tech companies, highlighting the need for external rules and regulations.

Specific examples highlighting the need for regulation include the use of AI in loan approvals, parole decisions, and real estate transactions. It’s crucial for this technology to function without bias, and if discrimination occurs, individuals should have ways to seek redress. Many people are already significantly affected by algorithmic systems and AI.
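To make the idea of "functioning without bias" concrete, here is a minimal sketch of one common fairness check: the demographic parity gap, the difference in approval rates between groups of applicants. The data, group names, and threshold below are hypothetical and purely illustrative; real audits rely on richer metrics and on applicable legal standards.

```python
# Illustrative sketch of a demographic-parity check on loan decisions.
# All data, group names, and the 0.10 threshold are hypothetical examples.

def approval_rate(decisions):
    """Fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Approval-rate gap: {gap:.2f}")

# An auditor or regulator might flag the system if the gap exceeds a chosen threshold.
if gap > 0.10:  # illustrative threshold, not a legal standard
    print("Potential disparate impact: decisions warrant review and a path to redress.")
```

Even a simple check like this shows why regulation matters: someone has to decide what counts as an unacceptable gap, and what recourse an affected applicant has.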

The main goal of regulation is to give individuals recourse if they are negatively affected by the technology. The need for AI regulation is clear; the open question is whether regulation can effectively achieve its intended purpose.

Monolithic vs. Patchwork Regulation

There are two main approaches to rule-making: monolithic (horizontal) and patchwork (vertical). The monolithic approach aims to create one comprehensive set of rules covering a topic for every organization in a jurisdiction; the European Union’s General Data Protection Regulation (GDPR) is an example.

The patchwork approach, used by US federal agencies, lets different agencies create separate rules for their own sectors, producing rules that fit specific kinds of organizations better. Agencies such as the Federal Communications Commission (FCC), the Securities and Exchange Commission (SEC), and the Federal Trade Commission (FTC) create and enforce regulations that ensure the safety and fairness of national and global communications, financial markets, and consumer protection, respectively.

Both approaches have strengths and weaknesses, and both have run into problems. GDPR, in particular, has been criticized for failing to do what it was meant to do: protect people’s privacy from misuse by big tech companies.

Keeping Up With AI Regulation Amid Rapid Tech Progress 

Policymakers struggle to keep up with the rapid advances in artificial intelligence, which makes it hard for regulations to stay relevant. Rather than responding to past harms, new rules are based on predictions about AI’s future.

Some experts argue that major AI companies obtaining data from the internet without proper consent or compensation amounts to a major form of theft. This data is then used to train AI models, making the original sources difficult to trace.

The European Union’s AI Act aims to promote responsible use of AI, minimize bias, and respect copyright. But enforcing this with a low margin of error is close to impossible. The Act does not directly address the legality of existing AI models, which complicates its own guidelines. Some argue that AI companies are simply applying a common business strategy: gain an advantage in the market as quickly as possible.

Creating effective regulations is challenging because the process must weigh the needs of the public, protection of the economy, and encouragement of innovation. Some people suggest involving users and practitioners in the process, but we are concerned about the influence of specialized lobbyists and scientists.

It’s important to create regulations that can respond to current issues in the AI landscape. As AI is always changing, we need to make sure that regulations put users’ interests first and the budgets of tech giants second.

Closing Thoughts: Uncertainties in AI Regulation

Regulating AI is a complex issue. The EU AI Act aims to govern AI by preventing harm, but it offers only limited redress for individuals harmed by AI. The Act defines four risk levels for AI systems, yet defining harm remains challenging because of the subjective nature of the task and the fast development of AI capabilities.

The dynamic nature of AI development and the possibility of a market bubble bursting create further uncertainty for regulators. It’s clear that regulating AI is essential, but doing so requires a flexible, adaptable approach that can keep pace with AI’s evolution.

In considering regulatory approaches, it’s essential to balance the protection of individuals, maintenance of the economy, and promotion of innovation. Finding a solution that addresses these competing priorities is challenging but crucial for effective AI regulation.
