OpenAI

Unexpected responses from ChatGPT
Affected components: ChatGPT
Updates

Write-up published

Resolved

On February 20, 2024, an optimization to the user experience introduced a bug in how the model processes language.

LLMs generate responses by randomly sampling words based in part on probabilities. Their “language” consists of numbers that map to tokens.
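
As an illustration only (not OpenAI's inference code), the toy Python sketch below shows the idea: each token maps to an integer id, and the next token is drawn at random according to the model's probabilities. The vocabulary, ids, and probabilities here are invented for the example.

    # Toy sketch of token sampling; vocabulary, ids, and probabilities are invented.
    import random

    # Each token (word piece) maps to a number: its token id.
    vocab = {0: "The", 1: " sky", 2: " is", 3: " blue", 4: " green"}

    def sample_next_token(probs):
        """Pick the next token id by sampling from the model's probabilities."""
        ids = list(probs.keys())
        weights = list(probs.values())
        return random.choices(ids, weights=weights, k=1)[0]

    # Hypothetical probabilities for the token after "The sky is".
    next_probs = {3: 0.9, 4: 0.1}
    print(vocab[sample_next_token(next_probs)])  # usually " blue", sometimes " green"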

In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations.
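
A rough way to picture the failure, again with invented numbers rather than the real kernels: if the scores used to choose the next token id come back slightly wrong, the selection step can land on a different id, and the decoded text stops making sense.

    # Toy illustration of how slightly wrong numbers at the token-selection
    # step can produce nonsense; the vocabulary and noise model are invented.
    import random

    vocab = ["The", " cat", " sat", " on", " the", " mat", " Zylkor", " florp", " quen"]

    def pick_token(scores):
        # Greedy selection for simplicity: take the highest-scoring token id.
        return max(range(len(scores)), key=lambda i: scores[i])

    def corrupted(scores):
        # Simulate a faulty kernel: every score is perturbed, so the ranking
        # of token ids is no longer reliable.
        return [s + random.uniform(-3.0, 3.0) for s in scores]

    # Hypothetical scores that should select " mat" (id 5) as the next token.
    scores = [0.1, 0.2, 0.1, 0.3, 0.2, 3.0, 0.5, 0.4, 0.6]
    print("correct:  ", vocab[pick_token(scores)])             # " mat"
    print("corrupted:", vocab[pick_token(corrupted(scores))])  # may be a nonsense token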

Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved.

Thu, Feb 22, 2024, 01:02 AM

Resolved

ChatGPT is operating normally.

Wed, Feb 21, 2024, 07:14 AM

Monitoring

We're continuing to monitor the situation.

Wed, Feb 21, 2024, 12:59 AM

Identified

The issue has been identified and is being remediated now.

Tue, Feb 20, 2024, 11:47 PM

Investigating

We are investigating reports of unexpected responses from ChatGPT.

Tue, Feb 20, 2024, 11:40 PM