Partial outage for some engines
Incident Report for OpenAI
All models are operational. Thank you for your patience.
Posted Jun 24, 2022 - 08:25 PDT
Babbage is now stable. We're investigating 1-2 remaining issues with our fine-tuned curie models and a few lesser-used engines.
Posted Jun 24, 2022 - 08:11 PDT
At this time davinci fine-tuned models should be back to normal. We're investigating an issue with our babbage engine.
Posted Jun 24, 2022 - 07:30 PDT
We have brought our original cluster back online and are restoring traffic to it. As of this post, davinci fine-tuned models should be normalizing in latency and error rates.
Posted Jun 24, 2022 - 06:46 PDT
Davinci fine-tuned models are coming back up but are seeing increased latency. We are continuing to work to resolve this outage.
Posted Jun 24, 2022 - 05:24 PDT
We do not have a resolution on this incident but we are working with our upstream partners for support. Users of davinci fine-tuned models are still advised to use text-davinci-002 for the time being.
Posted Jun 24, 2022 - 04:14 PDT
Fine-tuned Davinci model inference is still degraded. We are exploring alternate theories as to what is causing very high latency on these models. Given the set of root causes that have already been ruled out, this unfortunately indicates that a much more extensive investigation will be needed to fully remediate fine-tuned Davinci model performance.

We suggest using the text-davinci-002 model as a temporary backup while we work to restore fine-tuned Davinci. The text-davinci-002 model is fully operational and can approach the capability of fine-tuned Davinci models for many applications.
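For reference, a minimal sketch of the swap, assuming the 2022-era openai Python library; the fine-tuned model name below is a hypothetical placeholder, not a real deployment:

```python
# Sketch: temporarily route completion requests to text-davinci-002
# while fine-tuned davinci inference is degraded.
import openai

openai.api_key = "sk-..."  # your API key

FINE_TUNED_MODEL = "davinci:ft-your-org-2022-06-01"  # hypothetical fine-tuned model
FALLBACK_MODEL = "text-davinci-002"

def complete(prompt, use_fallback=True):
    # Flip use_fallback back to False once fine-tuned inference recovers.
    model = FALLBACK_MODEL if use_fallback else FINE_TUNED_MODEL
    return openai.Completion.create(model=model, prompt=prompt, max_tokens=64)
```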

All other public production models are operating nominally, and we have restored the original cluster that had the outage.
Posted Jun 24, 2022 - 02:38 PDT
Fine-tuned curie model inference has returned to normal.

Fine-tuned davinci model inference is still in a degraded state.
Posted Jun 24, 2022 - 01:10 PDT
We are seeing error rates drop on curie fine-tuned models as well as davinci fine-tuned models. We're actively monitoring the situation.
Posted Jun 24, 2022 - 00:26 PDT
We are continuing to address health issues with fine-tuned curie and fine-tuned davinci models.

In addition to the aforementioned model-loading issues, we are experiencing capacity limits while we restore the cluster that went down.

All other models are operational.
Posted Jun 23, 2022 - 23:45 PDT
We believe we have found a stable arrangement of our infrastructure. All models are responding to requests; however, fine-tuned davinci and fine-tuned curie have elevated rates of 429s and 499s.

The fine-tuned davinci and fine-tuned curie model errors are due to customer model weights taking a long time to load. Normally these weights are heavily cached; however, due to these cluster rearrangements, those caches need to be restored. The sudden influx of requests to restore those caches is causing slowdowns upstream from our storage accounts. We expect error rates to steadily decline, but recovery may take longer than normal due to these bottlenecks.
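While caches refill, a client-side retry with exponential backoff can help ride out the 429s. This is a minimal sketch assuming the openai Python library, not part of the incident guidance above:

```python
# Sketch: retry completions on 429 (rate limit) responses with exponential
# backoff while error rates decline.
import time
import openai

def complete_with_backoff(prompt, model="text-davinci-002", max_retries=5):
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(model=model, prompt=prompt, max_tokens=64)
        except openai.error.RateLimitError:  # HTTP 429 from the API
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # back off before the next attempt
```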
Posted Jun 23, 2022 - 23:03 PDT
We are continuing to move infrastructure around in our operational clusters to ensure all models are performing optimally with the resources we have. We are much closer to a stable configuration, but are still re-allocating resources to better bring down error rates.

Some fine-tuned curie models are the most heavily affected right now as we continue to move resources around.
Posted Jun 23, 2022 - 22:15 PDT
We have now moved all models from the broken cluster to new clusters; however, we are still suffering from some warmup and capacity issues.

Fine-tuned davinci and curie models are warming up. Their performance should improve over time and the rates of 429s and 499s should steadily decrease.

We're also experiencing capacity issues with the Codex davinci and cushman engines. We are actively working to fix these; until then, those engines will have degraded performance.
Posted Jun 23, 2022 - 21:39 PDT
One of our clusters has suffered a major communication outage within Kubernetes. This has affected the models that are hosted in that cluster.

This includes the following models:
- Inference for fine-tuned davinci and curie models
- Codex: code-davinci-001 and code-cushman-001
- Legacy curie, babbage, and ada
- Embeddings models

We are actively working to migrate most of these models to a functioning cluster. Affected models should be coming online as this happens.

Due to capacity constraints, we unfortunately expect to see some temporary performance and latency degradations in other models as we move infrastructure around.
Posted Jun 23, 2022 - 21:14 PDT
We are currently in a state of degraded performance for most engines. We are still working to recover.
Posted Jun 23, 2022 - 20:43 PDT
We know the source of the outage and are working to mitigate it.
Posted Jun 23, 2022 - 20:10 PDT
One of our clusters has had an outage affecting some engines. We are investigating.
Posted Jun 23, 2022 - 19:02 PDT
This incident affected: API.