Legal experts say that only a select few can compel OpenAI to alter its governance, and they do not include the chief executive of Microsoft, a significant financial backer of the artificial intelligence startup.
On Friday, the nonprofit board that controls the company behind the popular ChatGPT chatbot abruptly fired CEO Sam Altman, causing an uproar in Silicon Valley. Microsoft CEO Satya Nadella has since called for governance changes, and nearly all of the company's 700 employees have signed a letter threatening to resign if the board does not step aside.
Microsoft did not comment on the matter.
Investors are considering their legal options in light of the unrest, which has also highlighted disagreements about the safe development of potentially disruptive technologies.
Because OpenAI is a nonprofit, the only people who could force the current board to step down or change its structure are judges or state attorneys general, according to Alexander Reid, an attorney at BakerHostetler who represents nonprofit organisations.
Attorneys general have broad authority to supervise and investigate nonprofit organisations and to pursue reforms.
"Even if they don't go to court, their mere presence typically gets results," he said.
Attorneys general typically have the authority to shut down an organisation completely or change its leadership after discovering fraud or improper conflicts of interest.
One example involves Hershey Co. In 2016, the Pennsylvania attorney general challenged spending by the trust that controls the candymaker, prompting the trust to agree to replace a number of board members.
Another source of accountability, according to Florida A&M University law professor Darryll Jones, is the Internal Revenue Service of the United States.
"There is a whole boatload of scholarship noting that nonprofit enforcement is severely lacking, but for the most part nonprofits are pretty good at self-policing if only to avoid scandal that would impact donations," he said.
The nonprofit has complete authority over OpenAI's for-profit arm, a structure designed to keep corporate greed from influencing decisions about a potentially powerful technology.
As a result, investors who have collectively poured billions of dollars into the business will have difficulty suing the board over Altman's dismissal, even though sources have told Reuters that some are considering legal action.
Under OpenAI's bylaws, only sitting directors are authorised to remove or appoint board members. The structure, known as a self-perpetuating board, is very common in the nonprofit sector, Reid said.
The current board consists of Ilya Sutskever, OpenAI's chief scientist, and three independent directors. Sutskever, who joined the other directors in removing Altman and President Greg Brockman, has since said he "deeply regret(s)" the decision.
Sutskever might now be the only person in a position to formally oppose the board's decision, save for government enforcers.
A board member may sue a fellow director, either individually or on behalf of the organisation, for failing to fulfil their duties, according to Reid.
However, he noted that such legal battles are usually limited to cases of apparent mismanagement involving compensation or expenses.
The more typical course of action in conflicts over organisational direction or control is for the organisation to break apart.
"You just form another nonprofit that does it slightly differently," he said.
OpenAI has previously weathered one of these storms.
The cofounders of Anthropic were executives at OpenAI until 2020, when they left the company over differences in their views on how to ensure the safe development and regulation of AI.
Whether OpenAI can bridge the divide between its board and its staff will likely become clear in the coming days.
(Source: www.reuters.com)