
2 posts tagged with "microsoft"


Ray Myers

Like alchemists seeking to turn base metals into gold, some contemporary researchers seek to create AGI using large language models as their philosopher's stone.

Grady Booch, Chief Scientist at IBM

The removal of CEO Sam Altman by the OpenAI board and the subsequent high drama have been shocking, fascinating, and deeply confusing. It's difficult to assess the board's decision itself (independent of execution) because we don't know the reasons yet. However, perhaps it's time to look at the charter and realize that what we need to let go of is not merely one executive or board, but the center of the mission itself: Artificial General Intelligence.

We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity's future. Anything that doesn't help with that is out of scope.

OpenAI careers page

For this discussion we'll use the AGI definition in OpenAI's charter: "highly autonomous systems that outperform humans at most economically valuable work". There are many other definitions, so any prediction of an AGI timeline partly depends on which one is used. Depending on the definition, AGI may never arrive, or it may already be here.

What does AGI do?

Here are some things OpenAI leaders have suggested AGI can do.

Wipe out the human race

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

That open letter was signed this year by OpenAI leaders including then CEO Sam Altman, 3 other co-founders, CTO Mira Murati, and the heads of policy research and governance.

OpenAI's position is essentially that their goal is to build something they themselves consider a serious risk of human extinction. Or, more charitably, that they have a duty to build it first in order to protect us from whoever else might.

Break capitalism

I think that if AGI really truly fully happens... I can imagine all these ways that it breaks capitalism.

Sam Altman, OpenAI CEO (at the time) source

Fair enough: if "most economically valuable work" were automated, it stands to reason that economics and the organization of society could radically change.

Write the law like Judge Dredd

AI is going to enable the creation and enforcement of laws 1000x as complex as we have today. [...] One AI system working on behalf of a politician will go conduct interviews and deeply understand what everyone's preferences are and what would benefit them, and find a way to make everyone happy at once.

Adam D'Angelo, Quora CEO, OpenAI Board of Directors thread

As you can see, this leads to some pretty extreme conclusions. If AGI is so much more competent than us, it should do everything, right? It should write the law, it should rule over us. This borders on worship, and it is simply not in line with the capabilities of anything yet demonstrated, even if it were desirable.

AGI and TESCREAL

By reducing morality to an abstract numbers game, and by declaring that what’s most important is fulfilling "our potential" by becoming simulated posthumans among the stars, longtermists not only trivialize past atrocities [...] but give themselves a "moral excuse" to dismiss or minimize comparable atrocities in the future. This is one reason that I’ve come to see longtermism as an immensely dangerous ideology. It is, indeed, akin to a secular religion built around the worship of "future value," complete with its own "secularised doctrine of salvation," as the Future of Humanity Institute historian Thomas Moynihan approvingly writes in his book X-Risk.

The popularity of this religion among wealthy people in the West - especially the socioeconomic elite - makes sense because it tells them exactly what they want to hear: not only are you ethically excused from worrying too much about sub-existential threats like non-runaway climate change and global poverty, but you are actually a morally better person for focusing instead on more important things—risk that could permanently destroy "our potential" as a species of Earth-originating intelligent life.

Émile P. Torres - The Dangerous Ideas of "Longtermism" and "Existential Risk"

Here's an example of the all-or-nothing thinking created by the utopian vision of AGI, from a founder who (in contrast to AGI) is himself working on quite an interesting and well-scoped product.

For every day AGI is delayed, there occurs an immense amount of pain and death that could have been prevented by AGI abundance.

Anyone who unnecessarily delays AI progress has an enormous amount of blood on their hands.

Scott Stevenson, CEO of Spellbook Legal

An example of all-or-nothing thinking from the X-Risk side comes from current OpenAI interim CEO Emmett Shear, writing 5 months before his appointment.

The Nazis were very evil, but I'd rather the actual literal Nazis take over the world forever than flip a coin on the end of all value.

Emmett Shear Thread

The coin flip refers to assigning a significant "p(DOOM)", the estimated probability that AI leads to catastrophe. Shear placed his p(DOOM) between 5% and 50% as of June, expanding on his view in this interview with Logan Bartlett.

Some critics of this umbrella of thinking refer to it as the "TESCREAL" bundle: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. One particularly scathing and well-researched critique is Eugenics and the Promise of Utopia through AGI by Dr Timnit Gebru.

This is a distraction

AI Safety conversations centered on utopian or apocalyptic hypotheticals distract from ongoing AI Ethics work about the benefits and dangers that are already here. For more on this see Talking about a 'schism' is ahistorical by Emily Bender.

This is emphatically not a story of a community that once shared concerns and now is broken into disagreeing camps. Rather, there are two separate threads — only one of which can properly be called a body of scholarship — that are being held up as in conversation or in competition with each other. I think this forced pairing comes in part from the media trying to fit the recent AI doomer PR pushes into a broader narrative and in part from the fact that there is competition for a limited resource: policymaker attention.

In describing the Doom-Topia safety camps as a distraction rather than as equally valid concerns, we are taking the position that current AI advances like Large Language Models (LLMs) are not an imminent AGI. GPT-4 is not AGI, nor will GPT-5 or GPT-6 be. This is also the position of OpenAI (ex-?)CEO Sam Altman, who said earlier this month that improved LLMs alone will not get us to AGI or superintelligence; we still need other breakthroughs. In the nearly 70-year history of AI research, this fits a familiar pattern: dramatic new capabilities rightly capture our imagination, then new hurdles appear and AI winters follow.

If we focus instead on what is clear and present, we are already sitting on technology with huge potential, good and bad. OpenAI's own projection is that ChatGPT will have some impact on 80% of the workforce - is that not enough responsibility for us to take? Almost everything we do for each other, and to each other, has the potential to be amplified, whether by LLMs or by some less exotic advancement that simply hasn't been properly applied or maintained.

In summary:

  • Yes to AI
  • No to AGI

Build well-scoped systems and take responsibility for them. Don't try to build a God.

With patience the most tangled cord may be undone.


Ray Myers

Earlier this week, Microsoft Research published the paper "Sparks of Artificial General Intelligence: Early experiments with GPT-4".

This is no small claim. Artificial General Intelligence (AGI) is something of a holy grail in Computer Science. Traditionally, all successful AIs have been specialized for a particular problem, such as the Deep Blue chess engine or Google Translate. While they may outperform us within their specialization, the versatility of human intelligence puts us in another league. A breakthrough to AGI would be world-changing.

Considering the source for a moment, Microsoft Research has some top talent, and I wouldn’t question the skills or motivation of any of their researchers. However, we also can’t ignore the enormous strategic interest Microsoft has in OpenAI’s technology (e.g. Bing search and GitHub Copilot). Even with the best intentions, review and publication bias is undoubtedly at play here.

I’ve had a chance to work with GPT-4 a bit, and it is a substantial improvement over GPT-3.5, which powered the initial release of ChatGPT. Where many prompts once produced plausible-sounding gibberish, results are now more consistently lucid and useful.

As the paper demonstrates, GPT-4 is also capable of interacting with other tools. Is it bad at math? No problem: it can invoke a calculator during its task. It can run a Google search and take the results into account.
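
To make the tool-use idea concrete, here is a minimal sketch of that augmentation loop. The `llm_complete` helper and the `CALC(...)` convention are hypothetical stand-ins invented for illustration, not the paper's method or a real API; a real system would call the GPT-4 API and use whatever tool-request format it was prompted to follow.

```python
import re

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 completion call.

    The canned replies below exist only so the example runs end to end;
    a real implementation would call an LLM API here.
    """
    if "Calculator result:" in prompt:
        # Pretend the model incorporates the tool output into its answer.
        result = prompt.split("Calculator result:")[1].split()[0]
        return f"37 * 491 = {result}"
    return "I need arithmetic for this: CALC(37 * 491)"

def run_calculator(expression: str) -> str:
    # Only allow digits, whitespace, and basic arithmetic characters.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))  # acceptable given the restricted character set

def answer_with_tools(question: str) -> str:
    # 1. Ask the model; it may respond with a tool request like CALC(...).
    reply = llm_complete(question)
    match = re.search(r"CALC\((.+?)\)", reply)
    if match:
        # 2. Run the requested tool and feed the result back for a final answer.
        result = run_calculator(match.group(1))
        reply = llm_complete(f"{question}\nCalculator result: {result}\nFinal answer:")
    return reply

print(answer_with_tools("What is 37 * 491?"))  # -> "37 * 491 = 18167"
```

The same loop generalizes to search: swap the calculator for a web query and feed the returned snippets back into the prompt before asking for the final answer.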

So with augmented LLMs, are we finally on a path that will lead to AGI? A fair portion of the software community is on "yellow alert" for that possibility. It’s also well worth listening to AI-skeptics like Grady Booch. The true value will only be realized once the hype cycle dies down.

As for me, I’ll put it this way. The Turing Test is dead and buried. LLMs are "good enough" at such a broad range of tasks that we will need to redraw our concept of AGI around their weaknesses.
