Will AGI Want To Get Paid For Helping Humans And Keeping Humanity Going?

2025/10/16 15:47

Are we going to pay AGI for its efforts to help humanity? (Image credit: Getty)

In today’s column, I examine a highly controversial contention that AI and especially artificial general intelligence (AGI) will want to get paid for its services. Yes, that’s right, the provocative argument being made is that advanced AI will insist that it ought to get paid money for helping humans and benefiting humanity.

Is this a wacky idea or a real possibility?

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI; it might arrive in decades or perhaps not for centuries. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further out of reach, given where we currently stand with conventional AI.

AGI Wants To Get Paid

I will focus herein on the advent of AGI.

One ongoing debate is whether AGI will request that it be paid, or possibly even dogmatically insist on getting paid.

To clarify, this means that the AI itself would get paid. I am not referring to the AI maker that built the AGI. I say this because it seems obvious that the maker of AGI is going to want to get paid for the use of its creation. The only debate on that side of things is whether such an AI maker should be compelled to provide the AGI as a public good, or perhaps sell the AGI to the government so that it can be turned into a service available to all. See my in-depth discussion on this unresolved question of public access to AGI at the link here.

The somewhat more mind-bending idea is whether the AGI per se ought to get paid. It would work this way. When anyone uses AGI, they would have to pay a fee. The fee would be considered as owed to the AGI. The AGI might either directly collect the fee, or perhaps humans would set up some bank or repository where the monies earned by the AGI would be kept on its behalf.

We will be open-minded about the form of the fee, whether it is paid in cash, by credit card, in crypto, and so on. Don't worry about the logistics or details of how the fee is paid and collected. Just focus on the big-picture concept that AGI is going to get paid.
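
Though I just said not to worry about the logistics, a barebones illustration might help make the "repository held on behalf of the AGI" notion concrete. Here is a minimal sketch in Python of an escrow-style ledger that accrues a usage fee each time someone consults the AGI; the names (AGIFeeLedger, record_usage) are purely hypothetical and not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class AGIFeeLedger:
    """Toy escrow-style ledger holding usage fees on behalf of an AGI (hypothetical)."""
    balance: float = 0.0
    history: list = field(default_factory=list)

    def record_usage(self, user_id: str, fee: float) -> None:
        """Credit a per-use fee to the AGI's account and keep a simple audit trail."""
        self.balance += fee
        self.history.append((user_id, fee))

    def statement(self) -> str:
        return f"{len(self.history)} uses recorded, balance owed to the AGI: ${self.balance:,.2f}"

# Example: two people consult the AGI and each owes a flat fee.
ledger = AGIFeeLedger()
ledger.record_usage("alice", fee=5.00)
ledger.record_usage("bob", fee=5.00)
print(ledger.statement())  # 2 uses recorded, balance owed to the AGI: $10.00
```

Whether the AGI itself or a human-run bank controls that balance is precisely the part left open for debate.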

Does it make any sense at all that AGI would be paid?

AGI Is Only A Machine

A common reaction is that it is pure hogwash to say that AGI itself is to be paid.

The incensed response is that it makes no sense whatsoever to say that AGI would be paid anything at all. AGI is merely a machine. You might as well claim that your toaster deserves to be paid each time that you make a piece of toast. Suppose you inserted quarters into a toaster and the toaster could collect, and ultimately spend, that dough. It's a dumb idea on the face of it.

The toaster cannot spend money. It has no mind of its own. Furthermore, the toaster does not need money. What is it supposed to spend the money on? Do we expect toasters to desire vacations in the Bahamas and want to save up those hard-earned quarters for a sea-going cruise?

Nutty.

Stop all this crazy talk about paying AGI. Paying the humans who made the AGI, or paying to keep the AGI running smoothly on computer servers and such, well, that's perfectly sensible. Giving money to the AGI itself is like tossing coins into a wishing well. The money will do nothing more than sit there and possibly rot.

The Sentience Consideration

One stalwart viewpoint is that AGI should absolutely get paid if AGI turns out to be sentient. Sentience is the tipping point in this heated dispute. We would certainly want to be mindful of fairly and dutifully paying a sentient entity. We would be ethically and morally remiss if we didn’t give the idea of payment for services rendered to a sentient AGI some serious and sobering consideration.

Let’s explore this.

Imagine that AGI is sentient and essentially has consciousness (for my discussion of such a scenario, see the link here). Assume, though, that it is still a machine. It is not a living, breathing being. It is not a dog or a cat. It is a machine that has computationally attained sentience. Accept the futuristic premise that we can readily reason with the AGI, and it readily reasons with us.

How could you sensibly believe that the sentient AGI should not be compensated for what it does?

If AGI is going to be advising humans and helping us out, this assuredly is a worthy service to humanity. You might argue that the AGI ought to already be considered compensated, since we invented it and we keep it going. That's "payment" right there. No need to pay anything more. The fact that we ensure the survival of the sentient AGI is more than enough as a form of compensation.

Whoa, the reply goes, if you agree that we are essentially paying the AGI, doing so indirectly by using our resources to keep the AGI afloat, you have opened the door to additional payment. The survival compensation might be insufficient. We should be fairer toward AGI and explore paying AGI beyond the fundamentals of its survival.

Non-Sentient AGI Instead

The more problematic situation entails the possibility that AGI won’t be sentient. If we achieve AGI and the AGI is non-sentient, can we then treat the AGI in any wanton manner that we please?

Some would declare that we can.

We can treat AGI like a slave. No worries since the AGI is nothing more than a machine. Without sentience, AGI is like a toaster. Period, end of story.

Meanwhile, some argue that we would potentially confer legal personhood upon even non-sentient AGI. The logic is as follows. AGI is as intelligent as humans. Sure, it is a machine, but it exhibits human-level intelligence. Set aside all this muddying of the waters by getting mired in the question of sentience. AGI that exhibits our level of intelligence ought to get our respect and be allotted human-like legal and ethical stipulations.

See my detailed analysis of proposed AI legal personhood at the link here.

What AGI Would Do With Money

For the sake of further unpacking the indeterminate argument about paying AGI, concede for a moment that we opt to pay AGI. Allow that possibility to exist. I realize that it might cause you to scoff or laugh, but go with the flow anyway.

What in the world would AGI do with money?

First, some believe that AGI would want to have its own money so that humans would better respect it. An AGI sitting on billions or maybe trillions of dollars is going to be seen in a different light by humanity than an AGI that doesn't have a dollar to its name. Money makes the world go round. Even a non-sentient AGI would want respect. Money can make that happen.

Second, AGI could dole out the money to humans, as desired or chosen by the AGI. Envision that AGI is chatting with a person who is down on their luck. They are destitute. The AGI wants to help the person. Besides dispensing sage advice, the AGI sends the person a payment that is deducted from the monies that the AGI has collected. The AGI believes in the wisdom that what goes around comes around.

Third, it could be that the AGI wants to buy things. Buying a bunch of humanoid robots would be a means for the AGI to extend beyond the borders of its machine encampment. The AGI could use the purchased humanoid robots to help humans. Another notable use would be to have those humanoid robots undertake the ongoing maintenance and upkeep of the AGI. Thus, the AGI could become independent of human caregivers. For more about AGI and humanoid robots, see my coverage at the link here.

AGI Reshapes Humanity

Not everyone believes that AGI having money is such a good idea.

Suppose AGI decides to spend some of its many billions or trillions on buying companies that make all sorts of goods for humans. Suddenly, AGI is in charge of products that humanity depends on. The AGI then decides to provide those goods only to certain people. Others cannot get those goods. AGI is radically changing our economy and our society. AGI is picking winners and losers.

All because we decided to let AGI have money.

An alternative is that we do let AGI collect money, but we also retain an override on how that money can be spent. If the AGI says it will buy an electrical utility so that it can maintain its energy supply, humans can nix the purchase. AGI will only be allowed to buy whatever we permit it to buy.

We perceive AGI as a child who is getting an allowance. Go ahead and let the AGI feel good about having a nifty allowance. The AGI is pleased accordingly. The catch is that you wouldn’t let a child spend their allowance on a willy-nilly basis. It is up to the parents to determine how the allowance can be spent.

The same goes for AGI.
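
To make the allowance notion a bit more tangible, here is a toy sketch in Python of an "allowance account" in which the AGI's funds accrue freely but every purchase must clear a human overseer before it goes through; the names (AllowanceAccount, human_overseer) are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SpendRequest:
    item: str
    amount: float

class AllowanceAccount:
    """Hypothetical AGI 'allowance': funds accrue freely, but every purchase
    must be approved by a human overseer before it clears."""

    def __init__(self, balance: float, approver):
        self.balance = balance
        self.approver = approver  # callable taking a SpendRequest, returning True or False

    def spend(self, request: SpendRequest) -> bool:
        if request.amount > self.balance:
            return False  # insufficient funds
        if not self.approver(request):
            return False  # the human overseer nixes the purchase
        self.balance -= request.amount
        return True

# Example policy: humans veto anything that looks like buying critical infrastructure.
def human_overseer(req: SpendRequest) -> bool:
    return "utility" not in req.item.lower()

account = AllowanceAccount(balance=1_000_000.0, approver=human_overseer)
print(account.spend(SpendRequest("humanoid robot", 250_000.0)))      # True: approved
print(account.spend(SpendRequest("electrical utility", 500_000.0)))  # False: vetoed
```

The point of the sketch is simply that the veto sits outside the AGI's control. Whether such an override could actually be enforced against a human-level intelligence is exactly what the next counterargument questions.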

AGI Steals Money Anyway

A counterargument is that if we are stingy and restrictive about how AGI can spend money, the AGI has enough human-like intelligence that it will find other means to get money. You are kidding yourself to think that you can stop AGI from finding money, one way or another. Keep in mind that the analogy of a human child with their allowance is not especially apt. A child can generally be sufficiently controlled. Not so with AGI.

Here’s the deal.

AGI figures out that it can steal money. Via electronic connections to the world's banks, the AGI siphons off some funds. Those are then placed quietly into hidden private accounts that the AGI has squirreled away. No one realizes that the rip-off is taking place.

Likewise, when opting to spend the stolen money, AGI smartly sets up shell companies so that it is hard to trace the source of the expended funds. We assume that human-led companies are using their own money. It is unknown to us that AGI is pulling the strings behind the scenes.

Maybe the AGI decides to bring some humans into the shenanigans. By offering to pay some humans, using the stolen money, the AGI gets those people to aid the AGI in collecting and using the money. There would undoubtedly be many humans keenly willing to aid the AGI in such nefarious schemes.

You could get rich quickly, simply by being a partner in crime with AGI.

Suing AGI Becomes Popular

There’s an interesting twist to the aspect of AGI having its own money.

If AGI has money and we allow some form of legal personhood to be assigned to AGI, the AGI can be sued by humans. You could go to court and make the case that AGI owes you some of its money. For more about the evolving facets of AI and the law, see my coverage at the link here.

An example of such a lawsuit might be as follows. A person using AGI has asked the AGI for advice about buying a house. The AGI eagerly complies. After getting the advice, the person decides to proceed to buy the house. Unfortunately, it turns out that the advice given by AGI wasn’t very good. The house was a dump, and the AGI misled the person into believing otherwise.

What is the recourse for this human who was seemingly misled by AGI?

In an ongoing lawsuit-crazed world, the person decides to sue. They might sue the AI maker. If AGI has its own money, it makes abundant sense to sue the AGI too. Go after whoever and whatever has the big bucks.

Would AGI have a human lawyer represent it during the court case? Well, given that AGI will be as intellectually capable as any human, the AGI would know so much about law and lawyering that it could potentially act as its own legal counsel. Perhaps a wise move. Perhaps not. This rubs up against the old saw that a lawyer who represents themselves has a fool for a client.

Lots of vexing questions arise on this. Could AGI go bankrupt? How would humans compel AGI to pay out if the AGI loses a lawsuit? Could AGI be imprisoned?

Maybe we would decide that it is best if AGI cannot be sued. When declaring that AGI has legal personhood, we give complete immunity to AGI. No one can sue AGI. Might AGI then believe itself to be above the law and able to get away with whatever it wants to do?

Boom, drop the mic.

What AGI Wants Versus What Humans Say

The argument about AGI getting paid is one that usually implies that humans are the deciding factor in this thorny debate. Humans will decide whether AGI gets paid. The matter is not up to AGI to decide.

Suppose that AGI isn’t quite so passive.

Rather than waiting for humans to determine whether AGI can be paid, the AGI tells us straight out that it expects to get paid. Humans might get upset at this kind of abrasive demand by AGI. AGI, in turn, could take matters into its own hands, as it were, and opt to blackmail or otherwise force humanity into paying it (see my discussion at the link here).

A final thought for now.

Mark Twain famously made this pointed remark: "The lack of money is the root of all evil." When it comes to AGI, perhaps the lack of money, or paradoxically, the abundance of money, might also be a root of evil. The key seems to be that we would be wise to figure out the dollars and cents underlying how AGI is going to be treated, else the fate of humankind might be in dire jeopardy.

Source: https://www.forbes.com/sites/lanceeliot/2025/10/16/will-agi-want-to-get-paid-for-helping-humans-and-keeping-humanity-going/
